Results 1 - 20 of 54
1.
Alzheimer Dis Assoc Disord; 38(1): 98-100, 2024.
Article in English | MEDLINE | ID: mdl-38300875

ABSTRACT

The Mini-Mental State Examination (MMSE) is a commonly used screening tool for cognitive impairment. Lenient scoring of spatial orientation errors (SOEs) on the MMSE is common and negatively affects its diagnostic utility. We examined the effect of lenient SOE scoring on MMSE classification accuracy in a consecutive case series of 103 older adults (age 60 or above) clinically referred for neuropsychological evaluation. Lenient scoring of SOEs on the MMSE occurred in 53 (51.4%) patients and lowered sensitivity to psychometrically operationalized cognitive impairment by 7% to 18%, with variable gains in specificity (0% to 11%). Results are consistent with previous reports that lenient scoring is widespread and attenuates the sensitivity of the MMSE. Given that correctly detecting early cognitive decline is a higher clinical priority than specificity, a warning against lenient scoring of SOEs (on the MMSE and other screening tools) during medical education and in clinical practice is warranted.
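Sensitivity and specificity, the metrics traded off throughout these abstracts, reduce to simple ratios over a confusion matrix. A minimal sketch with invented counts (not the study's data) of how lenient scoring can shift borderline cases across a cutoff, lowering sensitivity while nudging specificity upward:

```python
# Illustrative only: all counts below are hypothetical, not the study's data.
def sensitivity(tp, fn):
    """Proportion of truly impaired patients the screen flags."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of cognitively intact patients the screen clears."""
    return tn / (tn + fp)

# Strict scoring of spatial orientation errors: more patients fall below cutoff.
strict_sens = sensitivity(tp=40, fn=10)    # 0.80
strict_spec = specificity(tn=45, fp=8)     # ~0.849

# Lenient scoring credits some errors, moving borderline cases above the cutoff:
# some true positives become false negatives (sensitivity drops), while a few
# false positives become true negatives (specificity rises).
lenient_sens = sensitivity(tp=33, fn=17)   # 0.66
lenient_spec = specificity(tn=48, fp=5)    # ~0.906
```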


Subject(s)
Cognitive Dysfunction , Orientation, Spatial , Humans , Aged , Middle Aged , Sensitivity and Specificity , Empathy , Cognitive Dysfunction/diagnosis , Neuropsychological Tests
2.
Neuropsychology; 38(3): 281-292, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37917434

ABSTRACT

OBJECTIVE: This study was designed to replicate previous research on the clinical utility of the Verbal Paired Associates (VPA) and Visual Reproduction (VR) subtests of the WMS-IV as embedded performance validity tests (PVTs) and perform a critical item (CR) analysis within the VPA recognition trial. METHOD: Archival data were collected from a mixed clinical sample of 119 adults (mean age = 42.5, mean education = 13.9). Classification accuracy was computed against psychometrically defined criterion groups based on the outcome of various free-standing and embedded PVTs. RESULTS: Age-corrected scaled scores ≤ 6 were specific (.89-.98) but had variable sensitivity (.36-.64). A VPA recognition cutoff of ≤ 34 produced a good combination of sensitivity (.46-.56) and specificity (.92-.93), as did a VR recognition cutoff of ≤ 4 (.48-.53 sensitivity at .86-.94 specificity). Critical item analysis expanded the VPA's sensitivity by 3.5%-7.0% and specificity by 5%-8%. Negative learning curves (declining output on subsequent encoding trials) were rare but highly specific (.99-1.00) to noncredible responding. CONCLUSIONS: Results largely support previous reports on the clinical utility of the VPA and VR as embedded PVTs. Sample-specific fluctuations in their classification accuracy warrant further research into the generalizability of the findings. Critical item analysis offers a cost-effective method for increasing confidence in the interpretation of the VPA recognition trial as a PVT. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subject(s)
Recognition, Psychology , Adult , Humans , Neuropsychological Tests , Reproducibility of Results
3.
J Int Neuropsychol Soc; 29(10): 972-983, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37246143

ABSTRACT

OBJECTIVE: This study was designed to evaluate the effect of limited English proficiency (LEP) on neurocognitive profiles. METHOD: Romanian (LEP-RO; n = 59) and Arabic (LEP-AR; n = 30) native speakers were compared to Canadian native speakers of English (NSE; n = 24) on a strategically selected battery of neuropsychological tests. RESULTS: As predicted, participants with LEP demonstrated significantly lower performance on tests with high verbal mediation relative to US norms and the NSE sample (large effects). In contrast, several tests with low verbal mediation were robust to LEP. However, clinically relevant deviations from this general pattern were observed. The level of English proficiency varied significantly within the LEP-RO and was associated with a predictable performance pattern on tests with high verbal mediation. CONCLUSIONS: The heterogeneity in cognitive profiles among individuals with LEP challenges the notion that LEP status is a unitary construct. The level of verbal mediation is an imperfect predictor of the performance of LEP examinees during neuropsychological testing. Several commonly used measures were identified that are robust to the deleterious effects of LEP. Administering tests in the examinee's native language may not be the optimal solution to contain the confounding effect of LEP in cognitive evaluations.


Subject(s)
Limited English Proficiency , Humans , Cross-Cultural Comparison , Canada , Language , Cognition
4.
Appl Neuropsychol Adult; 1-10, 2023 Mar 07.
Article in English | MEDLINE | ID: mdl-36881969

ABSTRACT

OBJECTIVE: This study was designed to evaluate the potential of the recognition trials of the Logical Memory (LM), Visual Reproduction (VR), and Verbal Paired Associates (VPA) subtests of the Wechsler Memory Scale-Fourth Edition (WMS-IV) to serve as embedded performance validity tests (PVTs). METHOD: The classification accuracy of the three WMS-IV subtests was computed against three different criterion PVTs in a sample of 103 adults with traumatic brain injury (TBI). RESULTS: The optimal cutoffs (LM ≤ 20, VR ≤ 3, VPA ≤ 36) produced good combinations of sensitivity (.33-.87) and specificity (.92-.98). An age-corrected scaled score of ≤5 on either of the free recall trials of the VPA was specific (.91-.92) and relatively sensitive (.48-.57) to psychometrically defined invalid performance. A VR I ≤ 5 or VR II ≤ 4 had comparable specificity, but lower sensitivity (.25-.42). There was no difference in failure rate as a function of TBI severity. CONCLUSIONS: In addition to the LM, the VR and VPA can also function as embedded PVTs. Failing validity cutoffs on these subtests signals an increased risk of non-credible presentation and is robust to genuine neurocognitive impairment. However, they should not be used in isolation to determine the validity of an overall neurocognitive profile.

5.
Behav Sci Law; 41(5): 445-462, 2023.
Article in English | MEDLINE | ID: mdl-36893020

ABSTRACT

This study was designed to empirically evaluate the classification accuracy of various definitions of invalid performance in two forced-choice recognition performance validity tests (PVTs): the forced-choice recognition trial of the CVLT-II (FCR-CVLT-II) and the Test of Memory Malingering (TOMM-2). The proportions of at- and below-chance responding (defined by binomial theory) and of making any errors were computed across two mixed clinical samples from the United States and Canada (N = 470) and two sets of criterion PVTs. There was virtually no overlap between the binomial and empirical distributions. Over 95% of patients who passed all PVTs obtained a perfect score. At-chance responding was limited to patients who failed ≥2 PVTs (91% of them failed 3 PVTs). No one scored below chance on the FCR-CVLT-II or TOMM-2. All 40 patients with dementia scored above chance. Although at- or below-chance performance provides very strong evidence of non-credible responding, scores above chance have no negative predictive value. Even at-chance scores on PVTs provide compelling evidence of non-credible presentation. A single error on the FCR-CVLT-II or TOMM-2 is highly specific (0.95) to psychometrically defined invalid performance. Defining non-credible responding as below-chance scores is an unnecessarily restrictive threshold that gives most examinees with invalid profiles a Pass.
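The binomial logic behind at- and below-chance responding on a forced-choice trial can be made concrete. A sketch assuming a 50-item, two-alternative format (the exact item counts of the criterion measures are not given here):

```python
import math

def p_at_or_below(k, n=50, p=0.5):
    """Binomial probability of k or fewer correct on n two-choice items
    under purely random responding."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Chance level on a 50-item two-alternative trial is 25/50, and scoring at or
# below chance is not rare under random responding:
p_chance = p_at_or_below(25)    # roughly .56

# "Significantly below chance" at alpha = .05: the largest score k whose
# cumulative probability under random responding stays below .05.
cutoff = max(k for k in range(51) if p_at_or_below(k) < 0.05)    # 18 of 50
```

This is why below-chance scores are so compelling (random responding rarely dips that low) yet so insensitive: an examinee can respond non-credibly while staying well above 18 correct.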


Subject(s)
Memory and Learning Tests , Humans , Recognition, Psychology , Reproducibility of Results
6.
Assessment; 30(8): 2476-2490, 2023 Dec.
Article in English | MEDLINE | ID: mdl-36752050

ABSTRACT

This study was designed to expand on a recent meta-analysis that identified ≤42 as the optimal cutoff on the Word Choice Test (WCT). We examined the base rate of failure and the classification accuracy of various WCT cutoffs in four independent clinical samples (N = 252) against various psychometrically defined criterion groups. WCT ≤ 47 achieved acceptable combinations of specificity (.86-.89) at .49 to .54 sensitivity. Lowering the cutoff to ≤45 improved specificity (.91-.98) at a reasonable cost to sensitivity (.39-.50). Making the cutoff even more conservative (≤42) disproportionately sacrificed sensitivity (.30-.38) for specificity (.98-1.00), while still classifying 26.7% of patients with genuine and severe deficits as non-credible. Critical item (.23-.45 sensitivity at .89-1.00 specificity) and time-to-completion cutoffs (.48-.71 sensitivity at .87-.96 specificity) were effective alternative/complementary detection methods. Although WCT ≤ 45 produced the best overall classification accuracy, scores in the 43 to 47 range provide comparable objective psychometric evidence of non-credible responding. Results question the need for designating a single cutoff as "optimal," given the heterogeneity of signal detection environments in which individual assessors operate. As meta-analyses often fail to replicate, ongoing research is needed on the classification accuracy of various WCT cutoffs.


Subject(s)
Neuropsychological Tests , Humans , Sensitivity and Specificity , Psychometrics , Reproducibility of Results
7.
Appl Neuropsychol Child; 12(2): 97-103, 2023.
Article in English | MEDLINE | ID: mdl-35148226

ABSTRACT

This study was designed to examine the effect of limited English proficiency (LEP) on the Hopkins Verbal Learning Test-Revised (HVLT-R). The HVLT-R was administered to 28 undergraduate student volunteers. Half were native speakers of English (NSE), half had LEP. The LEP sample performed significantly below NSE on individual acquisition trials and delayed free recall (large effects). In addition, participants with LEP scored 1.5-2 SDs below the normative mean. There was no difference in performance during recognition testing. LEP status was associated with a clinically significant deficit on the HVLT-R in a sample of cognitively healthy university students. Results suggest that low scores on auditory verbal learning tests in individuals with LEP should not be automatically interpreted as evidence of memory impairment or learning disability. LEP should be considered as grounds for academic accommodations. The generalizability of the findings is constrained by the small sample size.


Subject(s)
Limited English Proficiency , Humans , Young Adult , Neuropsychological Tests , Educational Status , Memory Disorders , Verbal Learning
8.
Assessment; 30(5): 1467-1485, 2023 Jul.
Article in English | MEDLINE | ID: mdl-35757996

ABSTRACT

This study was designed to examine the classification accuracy of the Erdodi Index (EI-5), a novel method for aggregating validity indicators that takes into account both the number and extent of performance validity test (PVT) failures. Archival data were collected from a mixed clinical/forensic sample of 452 adults referred for neuropsychological assessment. The classification accuracy of the EI-5 was evaluated against established free-standing PVTs. The EI-5 achieved a good combination of sensitivity (.65) and specificity (.97), correctly classifying 92% of the sample. Its classification accuracy was comparable with that of another free-standing PVT. An indeterminate range between Pass and Fail emerged as a legitimate third outcome of performance validity assessment, indicating that the underlying construct is an inherently continuous variable. Results support the use of the EI model as a practical and psychometrically sound method of aggregating multiple embedded PVTs into a single-number summary of performance validity. Combining free-standing PVTs with the EI-5 resulted in a better separation between credible and non-credible profiles, demonstrating incremental validity. Findings are consistent with recent endorsements of a three-way outcome for PVTs (Pass, Borderline, and Fail).
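The aggregation idea behind the EI model can be illustrated in a few lines: each embedded indicator is graded on an ordinal 0-4 scale rather than dichotomized as Pass/Fail, and the grades are summed, so the composite reflects both the number and the extent of failures. All indicator names, cutpoints, and scores below are invented for illustration; they are not the published EI-5 parameters.

```python
def grade(score, cutpoints):
    """0 = clear pass; 1-4 = number of descending cutpoints the score
    falls at or below (higher = more extreme failure)."""
    return sum(score <= c for c in cutpoints)

# Five hypothetical embedded indicators, each with four descending cutpoints
# (most liberal first). Invented for illustration only.
battery = {
    "recognition_hits": [7, 6, 5, 4],
    "fluency_T":        [33, 31, 29, 27],
    "stroop_ss":        [6, 5, 4, 3],
    "digit_span_ss":    [6, 5, 4, 3],
    "naming_T":         [35, 33, 31, 29],
}

def ei5(scores):
    """Sum of per-indicator grades: one extreme failure and several
    borderline scores both raise the composite."""
    return sum(grade(scores[name], cuts) for name, cuts in battery.items())

profile = {"recognition_hits": 5, "fluency_T": 36, "stroop_ss": 6,
           "digit_span_ss": 8, "naming_T": 30}
# per-indicator grades: 3 + 0 + 1 + 0 + 3 = 7
```

A continuous composite like this naturally supports a three-way outcome: low totals read as Pass, high totals as Fail, and a middle band as the indeterminate range described above.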


Subject(s)
Neuropsychological Tests , Adult , Humans , Reproducibility of Results
9.
J Atten Disord; 27(1): 80-88, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36113024

ABSTRACT

OBJECTIVE: The purpose of the present study was to further investigate the clinical utility of individual and composite indicators within the CPT-3 as embedded validity indicators (EVIs), given the discrepant findings of previous investigations. METHODS: A total of 201 adults undergoing psychoeducational evaluation for ADHD and/or Specific Learning Disorder (SLD) were divided into credible (n = 159) and non-credible (n = 42) groups based on five criterion measures. RESULTS: Receiver operating characteristic (ROC) curve analysis revealed that 5/9 individual indicators and 2/4 composite indicators met the minimally acceptable classification accuracy of ≥0.70 (AUC = 0.43-0.78). Individual (0.16-0.45) and composite indicators (0.23-0.35) demonstrated low sensitivity when using cutoffs that maintained specificity ≥90%. CONCLUSION: Given the lack of stability across studies, further research is needed before any specific cutoff can be recommended for clinical practice with individuals seeking psychoeducational assessment.
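The calibration logic behind "cutoffs that maintained specificity ≥90%" can be sketched directly: scan cutoffs from liberal to conservative, keep the first one whose specificity in the credible group clears .90, and report the sensitivity it achieves. All scores below are invented for illustration:

```python
# Hypothetical error counts on an embedded indicator; higher = worse.
credible    = [2, 3, 3, 4, 5, 5, 6, 7, 8, 12]
noncredible = [5, 8, 9, 10, 11, 14, 15, 18]

def spec_sens(cutoff, credible, noncredible):
    """Failure = score at or above cutoff."""
    spec = sum(s < cutoff for s in credible) / len(credible)
    sens = sum(s >= cutoff for s in noncredible) / len(noncredible)
    return spec, sens

# Scan from liberal to conservative; keep the first cutoff with spec >= .90.
for cut in range(min(noncredible), max(noncredible) + 1):
    spec, sens = spec_sens(cut, credible, noncredible)
    if spec >= 0.90:
        break
# Here the scan settles on cut = 9 (spec = .90, sens = .75); with realistic
# overlapping distributions, the sensitivity left over is often far lower,
# as in the abstract above.
```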


Subject(s)
Attention Deficit Disorder with Hyperactivity , Specific Learning Disorder , Adult , Humans , Neuropsychological Tests , Attention Deficit Disorder with Hyperactivity/diagnosis , Reproducibility of Results , ROC Curve
10.
J Pers Assess; 105(4): 520-530, 2023.
Article in English | MEDLINE | ID: mdl-36041087

ABSTRACT

This study was designed to compare the validity of the Inventory of Problems (IOP-29) and its newly developed memory module (IOP-M) in 150 patients clinically referred for neuropsychological assessment. Criterion groups were psychometrically derived based on established performance and symptom validity tests (PVTs and SVTs). The criterion-related validity of the IOP-29 was compared to that of the Negative Impression Management scale of the Personality Assessment Inventory (NIM-PAI), and the criterion-related validity of the IOP-M was compared to that of Trial 1 of the Test of Memory Malingering (TOMM-1). The IOP-29 correlated significantly more strongly (z = 2.50, p = .01) with criterion PVTs than the NIM-PAI did (r(IOP-29) = .34; r(NIM-PAI) = .06), generating similar overall correct classification values (OCC(IOP-29): 79-81%; OCC(NIM-PAI): 71-79%). Similarly, the IOP-M correlated significantly more strongly (z = 2.26, p = .02) with criterion PVTs than the TOMM-1 did (r(IOP-M) = .79; r(TOMM-1) = .59), generating similar overall correct classification values (OCC(IOP-M): 89-91%; OCC(TOMM-1): 84-86%). Findings converge with the cumulative evidence that the IOP-29 and IOP-M are valuable additions to comprehensive neuropsychological batteries. Results also confirm that symptom and performance validity are distinct clinical constructs, and that domain specificity should be considered when calibrating instruments.


Subject(s)
Memory and Learning Tests , Personality Assessment , Humans , Reproducibility of Results , Neuropsychological Tests , Malingering/diagnosis , Malingering/psychology
11.
Clin Neuropsychol; 37(3): 617-649, 2023 Apr.
Article in English | MEDLINE | ID: mdl-35946813

ABSTRACT

OBJECTIVE: The study was designed to expand on the results of previous investigations of the D-KEFS Stroop as a performance validity test (PVT), which produced diverging conclusions. METHOD: The classification accuracy of previously proposed validity cutoffs on the D-KEFS Stroop was computed against four different criterion PVTs in two independent samples: patients with uncomplicated mild TBI (n = 68) and disability benefit applicants (n = 49). RESULTS: Age-corrected scaled scores (ACSSs) ≤6 on individual subtests often fell short of specificity standards. Making the cutoffs more conservative improved specificity, but at a significant cost to sensitivity. In contrast, multivariate models (≥3 failures at ACSS ≤6 or ≥2 failures at ACSS ≤5 on the four subtests) produced good combinations of sensitivity (.39-.79) and specificity (.85-1.00), correctly classifying 74.6-90.6% of the sample. A novel validity scale, the D-KEFS Stroop Index, correctly classified between 78.7% and 93.3% of the sample. CONCLUSIONS: A multivariate approach to performance validity assessment provides a methodological safeguard against sample- and instrument-specific fluctuations in classification accuracy, strikes a reasonable balance between sensitivity and specificity, and mitigates the "invalid before impaired" paradox.


Subject(s)
Patients , Humans , Neuropsychological Tests , Psychometrics , Reproducibility of Results , Sensitivity and Specificity
12.
Dev Neuropsychol; 47(6): 273-294, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35984309

ABSTRACT

Base rates of failure (BRFail) on performance validity tests (PVTs) were examined in university students with limited English proficiency (LEP). BRFail was calculated for several free-standing and embedded PVTs. All free-standing PVTs and certain embedded indicators were robust to LEP. However, LEP was associated with unacceptably high BRFail (20-50%) on several embedded PVTs with high levels of verbal mediation; even multivariate models of PVTs could not contain BRFail. In conclusion, failing free-standing/dedicated PVTs cannot be attributed to LEP. However, the elevated BRFail on several embedded PVTs in university students suggests an unacceptably high overall risk of false positives associated with LEP.


Subject(s)
Limited English Proficiency , Humans , Neuropsychological Tests , Cross-Cultural Comparison , Reproducibility of Results
13.
Neuropsychology; 36(7): 683-694, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35849361

ABSTRACT

OBJECTIVE: This study was designed to replicate previous research on critical item analysis within the Word Choice Test (WCT). METHOD: Archival data were collected from a mixed clinical sample of 119 consecutively referred adults (mean age = 51.7, mean education = 14.7). The classification accuracy of the WCT was calculated against psychometrically defined criterion groups. RESULTS: Critical item analysis identified an additional 2%-5% of the sample that passed traditional cutoffs as noncredible. Passing critical items after failing traditional cutoffs was associated with weaker independent evidence of invalid performance, alerting the assessor to the elevated risk of false positives. Failing critical items in addition to failing select traditional cutoffs increased overall specificity. Non-White patients were 2.5 to 3.5 times more likely to fail traditional WCT cutoffs, but select critical item cutoffs limited this ratio to 1.5-2. CONCLUSIONS: Results confirmed the clinical utility of critical item analysis. Although the improvement in sensitivity was modest, critical items were effective at containing false positive errors in general, and especially in racially diverse patients. Critical item analysis appears to be a cost-effective and equitable method to improve an instrument's classification accuracy. (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subject(s)
Neuropsychological Tests , Adult , Humans , Middle Aged , Psychometrics , Reproducibility of Results
14.
Arch Clin Neuropsychol; 37(7): 1579-1600, 2022 Oct 19.
Article in English | MEDLINE | ID: mdl-35694764

ABSTRACT

OBJECTIVE: The objective of the present study was to examine the neurocognitive profiles associated with limited English proficiency (LEP). METHOD: A brief neuropsychological battery including measures with high (HVM) and low verbal mediation (LVM) was administered to 80 university students: 40 native speakers of English (NSEs) and 40 with LEP. RESULTS: Consistent with previous research, individuals with LEP performed more poorly on HVM measures and equivalent to NSEs on LVM measures-with some notable exceptions. CONCLUSIONS: Low scores on HVM tests should not be interpreted as evidence of acquired cognitive impairment in individuals with LEP, because these measures may systematically underestimate cognitive ability in this population. These findings have important clinical and educational implications.


Subject(s)
Limited English Proficiency , Adult , Humans , Communication Barriers , Language , Neuropsychological Tests , Educational Status
15.
Cogn Behav Neurol; 35(3): 155-168, 2022 Sep 01.
Article in English | MEDLINE | ID: mdl-35507449

ABSTRACT

BACKGROUND: Abbreviated neurocognitive tests offer a practical alternative to full-length versions but often lack clear interpretive guidelines, thereby limiting their clinical utility. OBJECTIVE: To replicate validity cutoffs for the Boston Naming Test-Short Form (BNT-15) and to introduce a clinical classification system for the BNT-15 as a measure of object-naming skills. METHOD: We collected data from 43 university students and 46 clinical patients. Classification accuracy was computed against psychometrically defined criterion groups. Clinical classification ranges were developed using a z-score transformation. RESULTS: Previously suggested validity cutoffs (≤11 and ≤12) produced comparable classification accuracy among the university students. However, a more conservative cutoff (≤10) was needed with the clinical patients to contain the false-positive rate (0.20-0.38 sensitivity at 0.92-0.96 specificity). As a measure of cognitive ability, a perfect BNT-15 score suggests above-average performance; ≤11 suggests clinically significant deficits. Demographically adjusted prorated BNT-15 T-scores correlated strongly (0.86) with the newly developed z-scores. CONCLUSION: Given its brevity (<5 minutes) and ease of administration and scoring, the BNT-15 can function as a useful and cost-effective screening measure for both object-naming/English proficiency and performance validity. The proposed clinical classification ranges provide useful guidelines for practitioners.
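The z-score and T-score scalings referenced above follow standard psychometric conventions: z expresses distance from the normative mean in SD units, and T rescales z to a mean of 50 and SD of 10 (T = 50 + 10z). A minimal sketch with invented normative values (not actual BNT-15 norms):

```python
def z_score(raw, norm_mean, norm_sd):
    """Distance from the normative mean in standard deviation units."""
    return (raw - norm_mean) / norm_sd

def t_score(z):
    """Rescale z to the T metric: mean 50, SD 10."""
    return 50 + 10 * z

# Hypothetical normative mean/SD for illustration only.
z = z_score(raw=11, norm_mean=13.5, norm_sd=1.25)   # -2.0
t = t_score(z)                                      # 30.0, i.e., 2 SD below average
```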


Subject(s)
Neuropsychological Tests , Humans , Language Tests
16.
Dev Neuropsychol; 47(1): 17-31, 2022.
Article in English | MEDLINE | ID: mdl-35157548

ABSTRACT

This study was designed to examine alternative validity cutoffs on the Boston Naming Test (BNT). Archival data were collected from 206 adults assessed in a medicolegal setting following a motor vehicle collision. Classification accuracy was evaluated against three criterion PVTs. The first cutoff to achieve minimum specificity (.87-.88) was T ≤ 35, at .33-.45 sensitivity. T ≤ 33 improved specificity (.92-.93) at .24-.34 sensitivity. BNT validity cutoffs correctly classified 67-85% of the sample. Failing the BNT was unrelated to self-reported emotional distress. Although constrained by its low sensitivity, the BNT remains a useful embedded PVT.


Subject(s)
Emotions , Adult , Humans , Language Tests , Neuropsychological Tests , Reproducibility of Results , Self Report
17.
Appl Neuropsychol Adult; 29(6): 1425-1439, 2022.
Article in English | MEDLINE | ID: mdl-33631077

ABSTRACT

OBJECTIVE: This study was designed to evaluate the classification accuracy of the recently introduced forced-choice recognition trial of the Hopkins Verbal Learning Test - Revised (FCR-HVLT-R) as a performance validity test (PVT) in a clinical sample. Time-to-completion (T2C) for the FCR-HVLT-R was also examined. METHOD: Forty-three students were assigned to either the control or the experimental malingering (expMAL) condition. Archival data were collected from 52 adults clinically referred for neuropsychological assessment. Invalid performance was defined using expMAL status, two free-standing PVTs, and two validity composites. RESULTS: Among students, FCR-HVLT-R ≤11 or T2C ≥45 seconds was specific (0.86-0.93) to invalid performance. Among patients, FCR-HVLT-R ≤11 was specific (0.94-1.00) but relatively insensitive (0.38-0.60) to non-credible responding. T2C ≥35 seconds produced notably higher sensitivity (0.71-0.89), but variable specificity (0.83-0.96). The T2C achieved superior overall correct classification (81-86%) compared to the accuracy score (68-77%). The FCR-HVLT-R provided incremental utility in performance validity assessment compared to previously introduced validity cutoffs on Recognition Discrimination. CONCLUSIONS: Combined with T2C, the FCR-HVLT-R has the potential to function as a quick, inexpensive, and effective embedded PVT. The time cutoff effectively attenuated the low ceiling of the accuracy score, increasing sensitivity by 19%. Replication in larger and more geographically and demographically diverse samples is needed before the FCR-HVLT-R can be endorsed for routine clinical application.


Subject(s)
Malingering , Recognition, Psychology , Adult , Humans , Malingering/diagnosis , Neuropsychological Tests , Psychometrics , Reproducibility of Results , Verbal Learning
18.
Appl Neuropsychol Adult; 29(3): 351-363, 2022.
Article in English | MEDLINE | ID: mdl-32449371

ABSTRACT

This study was designed to replicate earlier reports of the utility of the Boston Naming Test - Short Form (BNT-15) as an index of limited English proficiency (LEP). Twenty-eight English-Arabic bilingual student volunteers were administered the BNT-15 as part of a brief battery of cognitive tests. The majority (23) were women, and half had LEP. Mean age was 21.1 years. The BNT-15 was an excellent psychometric marker of LEP status (area under the curve: .990-.995). Participants with LEP underperformed on several cognitive measures (verbal comprehension, visuomotor processing speed, single word reading, and performance validity tests). Although no participant with LEP failed the accuracy cutoff on the Word Choice Test, 35.7% of them failed the time cutoff. Overall, LEP was associated with an increased risk of failing performance validity tests. Previously published BNT-15 validity cutoffs had unacceptably low specificity (.33-.52) among participants with LEP. The BNT-15 has the potential to serve as a quick and effective objective measure of LEP. Students with LEP may need academic accommodations to compensate for slower test completion time. Likewise, LEP status should be considered a mitigating factor when interpreting performance validity test failures, to protect against false positive errors.


Subject(s)
Cross-Cultural Comparison , Limited English Proficiency , Female , Humans , Male , Young Adult , Language Tests , Neuropsychological Tests , Psychometrics , Multilingualism
19.
Appl Neuropsychol Adult; 29(5): 1221-1230, 2022.
Article in English | MEDLINE | ID: mdl-33403885

ABSTRACT

We investigated the classification accuracy of the Inventory of Problems - 29 (IOP-29), its newly developed memory module (IOP-M), and the Fifteen Item Test (FIT) in an Australian community sample (N = 275). One third of the participants (n = 93) were asked to respond honestly; two thirds were instructed to feign mild TBI. Half of the feigners (n = 90) were coached to avoid detection by not exaggerating; half were not (n = 92). All measures successfully discriminated between honest responders and feigners, with large effect sizes (d ≥ 1.96). The effect size for the IOP-29 (d ≥ 4.90), however, was about two to three times larger than those produced by the IOP-M and FIT. Also noteworthy, the IOP-29 and IOP-M showed excellent sensitivity (>90% for the former, >80% for the latter) in both the coached and uncoached feigning conditions, at perfect specificity. In contrast, the sensitivity of the FIT was 71.7% within the uncoached simulator group and 53.3% within the coached simulator group, at a nearly perfect specificity of 98.9%. These findings suggest that the validity of the IOP-29 and IOP-M should generalize to Australian examinees and that the IOP-29 and IOP-M likely outperform the FIT in the detection of feigned mTBI.


Subject(s)
Malingering , Australia , Humans , Malingering/diagnosis , Reproducibility of Results
20.
Appl Neuropsychol Child; 11(4): 713-724, 2022.
Article in English | MEDLINE | ID: mdl-34424798

ABSTRACT

OBJECTIVE: This project was designed to cross-validate existing performance validity cutoffs embedded within measures of verbal fluency (FAS and animals) and develop new ones for the Emotion Word Fluency Test (EWFT), a novel measure of category fluency. METHOD: The classification accuracy of the verbal fluency tests was examined in two samples (70 cognitively healthy university students and 52 clinical patients) against psychometrically defined criterion measures. RESULTS: A demographically adjusted T-score of ≤31 on the FAS was specific (.88-.97) to noncredible responding in both samples. Animals T ≤ 29 achieved high specificity (.90-.93) among students at .27-.38 sensitivity. A more conservative cutoff (T ≤ 27) was needed in the patient sample for a similar combination of sensitivity (.24-.45) and specificity (.87-.93). An EWFT raw score ≤5 was highly specific (.94-.97) but insensitive (.10-.18) to invalid performance. Failing multiple cutoffs improved specificity (.90-1.00) at variable sensitivity (.19-.45). CONCLUSIONS: Results help resolve the inconsistency in previous reports, and confirm the overall utility of existing verbal fluency tests as embedded validity indicators. Multivariate models of performance validity assessment are superior to single indicators. The clinical utility and limitations of the EWFT as a novel measure are discussed.


Subject(s)
Emotions , Humans , Neuropsychological Tests , Reproducibility of Results