
11 Psych. Inj. & L. 1 (2018)

Psychological Injury and Law (2018) 11:1-8
https://doi.org/10.1007/s12207-017-9309-3
The Boston Naming Test as a Measure of Performance Validity
Laszlo A. Erdodi · Alexa G. Dunn · Kristian R. Seke · Carly Charron · Abigail McDermott · Anca Enache ·
Charlotte Maytham · Jessica L. Hurtubise
Received: 30 September 2017 / Accepted: 20 November 2017 / Published online: 12 January 2018
© Springer Science+Business Media, LLC, part of Springer Nature 2017
Abstract
This study was designed to evaluate the potential of the Boston Naming Test (BNT) as a performance validity test (PVT). The
classification accuracy of the BNT was examined against several criterion PVTs in a mixed clinical sample of 214 adult
outpatients physician-referred for neuropsychological assessment. Mean age was 46.7 years (SD = 12.5); mean education was
13.5 years (SD = 2.5). All participants were native speakers of English. A BNT raw score < 50 produced high specificity (.87-.95), but low
and variable sensitivity (.15-.41). Similarly, a T score < 37 was specific (.87-.95), but not very sensitive (.15-.35) to psychometrically
defined non-credible responding. Ipsative analyses (i.e., case-by-case review of individual PVT profiles) suggest that
failing these cutoffs was associated with zero false positives when all available PVTs were taken into account. Results are
consistent with previous reports that the validity cutoffs on the BNT have high positive predictive power, but low negative
predictive power. As such, they are useful in ruling in invalid performance, but they cannot be used to rule it out.
Keywords Boston Naming Test · Performance validity · Embedded validity indicators
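The abstract's contrast between positive and negative predictive power can be made concrete with a short numerical sketch. The confusion-matrix counts below are hypothetical, chosen only so that the resulting specificity and sensitivity fall within the ranges reported above; they are not the study's actual cell counts.

```python
# Hypothetical counts for a BNT validity cutoff in a sample of 214 examinees.
# Chosen to mimic the reported specificity (.87-.95) and sensitivity (.15-.41);
# NOT the study's real data.
true_pos = 20    # non-credible examinees who fail the cutoff
false_neg = 60   # non-credible examinees who pass the cutoff
true_neg = 125   # credible examinees who pass the cutoff
false_pos = 9    # credible examinees who fail the cutoff

sensitivity = true_pos / (true_pos + false_neg)   # P(fail | non-credible)
specificity = true_neg / (true_neg + false_pos)   # P(pass | credible)
ppv = true_pos / (true_pos + false_pos)           # P(non-credible | fail)
npv = true_neg / (true_neg + false_neg)           # P(credible | pass)

print(f"sensitivity = {sensitivity:.2f}")          # 0.25
print(f"specificity = {specificity:.2f}")          # 0.93
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")         # PPV = 0.69, NPV = 0.68
```

With these illustrative numbers, failing the cutoff nearly doubles the probability of non-credible responding relative to its base rate (80/214 ≈ .37 → PPV ≈ .69), whereas passing the cutoff barely improves on the base rate of credible responding (134/214 ≈ .63 → NPV ≈ .68). This is the arithmetic behind the abstract's conclusion: the cutoffs can rule invalid performance in, but not out.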

Introduction
The validity of neuropsychological assessment rests on the
assumption that examinees' performance on cognitive testing
is representative of their true ability level (Lezak, Howieson,
Bigler, & Tranel, 2012). During the early decades of neuropsychology, performance validity was considered to be self-evident,
with the implication that insufficient test taking effort or overt malingering could be detected through informal behavioral
observation. After the landmark study by Heaton, Smith, Lehman, and Vogt (1978), however, it became apparent that
non-credible responding evades the trained eye of the expert clinician. Therefore, empirical methods of assessing the
credibility of test scores were developed.
Stand-alone performance validity tests (PVTs) were specifically designed to identify response sets that are likely to
underestimate the examinee's true ability. Long considered the gold
standard instruments, stand-alone PVTs have accumulated a
strong empirical evidence base in differentiating valid and invalid neurocognitive profiles (Boone, 2013; Larrabee, 2012).
✉ Laszlo A. Erdodi, lerdodi@gmail.com
Department of Psychology, University of Windsor, 168 Chrysler Hall South, 401 Sunset Ave, Windsor, ON N9B 3P4, Canada

However, over time, their disadvantages also became apparent.
First, they require significant commitment in terms of test material and administration time, while placing additional
restrictions on test sequence, especially the complex ones involving repeated trials and multiple time delays. Second, many of them
have multiple independent trials, with built-in delays that place
restrictions on the administration sequence of ability tests.
Third, they only measure performance validity, failing to provide information on diagnostic considerations, arguably the
main goal of a neuropsychological assessment (Erdodi, Abeare, et al., 2017; Erdodi, Seke, et al., 2017).
Embedded validity indicators (EVIs) provide a viable alternative to stand-alone PVTs. They utilize data already collected
for clinical purposes, while they simultaneously evaluate the veracity of test performance. As such, they do not require
additional test material or clinician time to administer and score the instrument. In the era of managed care, when
clinicians operate under increasing volume pressures, EVIs provide a cost-effective solution to the dilemma between
measuring cognitive ability or performance validity.
EVIs are also instrumental in fulfilling the mandate of monitoring test taking effort throughout the assessment and
across cognitive domains (Boone, 2009). While most stand-alone PVTs are nested in the forced-choice recognition paradigm,
EVIs cover a wide range of cognitive functions, from attention

