Results 1 - 20 of 66
1.
Clin Neuropsychol ; : 1-20, 2024 Feb 13.
Article in English | MEDLINE | ID: mdl-38351710

ABSTRACT

Objectives: This study investigated the Wechsler Adult Intelligence Scale-Fourth Edition Letter-Number Sequencing (LNS) subtest as an embedded performance validity indicator among adults undergoing an attention-deficit/hyperactivity disorder (ADHD) evaluation, and its potential incremental value over Reliable Digit Span (RDS). Method: This cross-sectional study comprised 543 adults who underwent neuropsychological evaluation for ADHD. Patients were divided into valid (n = 480) and invalid (n = 63) groups based on multiple criterion performance validity tests. Results: LNS total raw scores, age-corrected scaled scores, and age- and education-corrected T-scores demonstrated excellent classification accuracy (areas under the curve of .84, .83, and .82, respectively). The optimal cutoffs for the LNS raw score (≤16), age-corrected scaled score (≤7), and age- and education-corrected T-score (≤36) each yielded .51 sensitivity and .94 specificity. Slightly lower sensitivity (.40) and higher specificity (.98) were associated with a more conservative T-score cutoff of ≤33. Multivariate models incorporating both LNS and RDS improved classification accuracy (area under the curve of .86), and LNS scores explained a significant but modest proportion of variance in validity status above and beyond RDS. Chaining the LNS T-score cutoff of ≤33 with an RDS cutoff of ≤7 increased sensitivity to .69 while maintaining ≥.90 specificity. Conclusions: Findings provide preliminary evidence for the criterion and construct validity of the LNS as an embedded validity indicator in ADHD evaluations. Practitioners are encouraged to use an LNS T-score cutoff of ≤33 or ≤36 to assess the validity of obtained test data. Employing either of these LNS cutoffs alongside RDS may enhance the detection of invalid performance.
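A minimal sketch of the chaining logic reported above: flag a record when either indicator falls at or below its cutoff, then compute sensitivity and specificity. The cutoffs (LNS T ≤33, RDS ≤7) and group sizes follow the abstract; the score distributions are invented for illustration and will not reproduce the published .69/.90 figures exactly.

```python
# Hypothetical illustration of chaining two embedded validity cutoffs.
import numpy as np

rng = np.random.default_rng(0)

# Simulated score distributions (assumption, not the study's data).
lns_valid = rng.normal(45, 8, 480)    # LNS T-scores, valid group
lns_invalid = rng.normal(32, 8, 63)   # LNS T-scores, invalid group
rds_valid = rng.normal(10, 2, 480)    # Reliable Digit Span scores
rds_invalid = rng.normal(7, 2, 63)

def chained_flag(lns, rds):
    """Flag as invalid if EITHER score falls at/below its cutoff."""
    return (lns <= 33) | (rds <= 7)

sensitivity = chained_flag(lns_invalid, rds_invalid).mean()
specificity = 1 - chained_flag(lns_valid, rds_valid).mean()
print(f"chained rule: sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```

The trade-off is generic: an either/or rule can only raise sensitivity and lower specificity relative to each component cutoff alone, which is why each component cutoff must itself be conservative.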

2.
Arch Clin Neuropsychol ; 39(4): 454-463, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38102764

ABSTRACT

OBJECTIVE: To examine the normal frequency of obtaining one or more scores considered potentially problematic based on normative comparisons when completing the NIH Toolbox Emotion Battery (NIHTB-EB). METHOD: Participants (N = 753; ages 18-85, 62.4% women, 66.4% non-Hispanic White) from the NIHTB norming study completed 17 scales of emotional functioning fitting into three subdomains (i.e., Negative Affect, Psychological Well-being, Social Satisfaction). Scores were considered potentially problematic if they were 1 SD above or below the mean, depending on the orientation of the scale; cutoffs at 1.5 and 2 SD were also included for reference. Multivariate base rates quantified the rate at which participants obtained one or more potentially problematic scale or subdomain scores. RESULTS: The proportion of participants obtaining one or more potentially problematic scores on the NIHTB-EB scales and subdomains was 61.2% and 23.2%, respectively. Participants who were younger (i.e., 18-49) or had less education had higher rates of potentially problematic scores within specific subdomains. There were no significant differences by sex or race/ethnicity. CONCLUSIONS: Elevated scores on the NIHTB-EB were common in the normative sample and related to education and age. The multivariate base rates provided here indicate that obtaining one or more potentially problematic scores on the NIHTB-EB is broadly normal among adults, which may guard against overinterpreting a single score as clinically significant. These base rates should be considered in the context of other assessment findings, such as interviews, medical history, or informant reports, to ensure that true emotional problems are not dismissed and normal variation in emotional functioning is not pathologized.
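The multivariate-base-rate logic is easy to demonstrate by simulation: with 17 scales, at least one score beyond 1 SD is expected even in a healthy normative sample. The scale count and SD thresholds come from the abstract; the equicorrelation among scales (r = .3) is an assumption chosen only to make the point.

```python
# Simulated multivariate base rates for 17 correlated scales.
import numpy as np

rng = np.random.default_rng(1)
n_scales, n_people, r = 17, 50_000, 0.3

# Equicorrelated standard-normal scores (diagonal 1, off-diagonal r).
cov = np.full((n_scales, n_scales), r) + (1 - r) * np.eye(n_scales)
scores = rng.multivariate_normal(np.zeros(n_scales), cov, size=n_people)

# One-tailed flags: each scale is "problematic" in one direction only,
# folded here into the positive direction by sign convention.
for sd in (1.0, 1.5, 2.0):
    base_rate = (scores >= sd).any(axis=1).mean()
    print(f">=1 scale beyond {sd} SD: {base_rate:.1%}")
```

With fully independent scales the 1 SD figure would approach 1 - .84^17 ≈ 95%; inter-scale correlation pulls it down toward the 61% observed in the norming sample.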


Subject(s)
Emotions , National Institutes of Health (U.S.) , Humans , Female , Male , Middle Aged , Aged , Adult , Adolescent , United States , Young Adult , Aged, 80 and over , Emotions/physiology , Neuropsychological Tests/statistics & numerical data , Neuropsychological Tests/standards , Reference Values , Multivariate Analysis
3.
Appl Neuropsychol Adult ; : 1-14, 2023 Sep 13.
Article in English | MEDLINE | ID: mdl-37703401

ABSTRACT

This study investigated the individual and combined utility of 10 embedded validity indicators (EVIs) within executive functioning, attention/working memory, and processing speed measures in 585 adults referred for an attention-deficit/hyperactivity disorder (ADHD) evaluation. Participants were categorized into invalid and valid performance groups as determined by scores from empirical performance validity indicators. Analyses revealed that all of the EVIs could meaningfully discriminate invalid from valid performers (AUCs = .69-.78), with high specificity (≥90%) but low sensitivity (19%-51%). However, none of them explained more than 20% of the variance in validity status. Combining any of these 10 EVIs into a multivariate model significantly improved classification accuracy, explaining up to 36% of the variance in validity status. Integrating six EVIs from the Stroop Color and Word Test, Trail Making Test, Verbal Fluency Test, and Wechsler Adult Intelligence Scale-Fourth Edition was as efficacious (AUC = .86) as using all 10 EVIs together. Failing any two of these six EVIs or any three of the 10 EVIs yielded clinically acceptable specificity (≥90%) with moderate sensitivity (60%). Findings support the use of multivariate models to improve the identification of performance invalidity in ADHD evaluations, but chaining multiple EVIs may only be helpful to an extent.
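A compact sketch of the "fail any k of n" aggregation rule used above. The pass/fail matrices are simulated under assumed per-indicator failure rates, not the study's data; only the decision rule itself is the point.

```python
# "Fail at least k of n embedded validity indicators" decision rule.
import numpy as np

rng = np.random.default_rng(2)

def flag_at_least_k(fail_matrix: np.ndarray, k: int) -> np.ndarray:
    """True for rows (examinees) failing at least k indicators."""
    return fail_matrix.sum(axis=1) >= k

# Assumed per-EVI failure probabilities: ~10% in valid examinees,
# ~45% in invalid examinees, across six indicators.
valid = rng.random((500, 6)) < 0.10
invalid = rng.random((85, 6)) < 0.45

for k in (1, 2, 3):
    sens = flag_at_least_k(invalid, k).mean()
    spec = 1 - flag_at_least_k(valid, k).mean()
    print(f"fail >= {k} of 6: sensitivity={sens:.2f}, specificity={spec:.2f}")
```

Raising k trades sensitivity for specificity, which is why the abstract's two-of-six and three-of-ten rules both land near the conventional ≥90% specificity benchmark.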

4.
Arch Clin Neuropsychol ; 38(6): 929-943, 2023 Aug 24.
Article in English | MEDLINE | ID: mdl-36702773

ABSTRACT

OBJECTIVE: The purpose of this study was to explore racial/ethnic differences in neurobehavioral symptom reporting and symptom validity testing among military veterans with a history of traumatic brain injury (TBI). METHOD: Participants of this observational cross-sectional study (N = 9,646) were post-deployed Iraq-/Afghanistan-era veterans enrolled in the VA's Million Veteran Program with a clinician-confirmed history of TBI on the Comprehensive TBI Evaluation (CTBIE). Racial/ethnic groups included White, Black, Hispanic, Asian, Multiracial, Another Race, American Indian or Alaska Native, and Native Hawaiian or Other Pacific Islander. Dependent variables included neurobehavioral symptom domains and symptom validity assessed via the Neurobehavioral Symptom Inventory (NSI) and Validity-10, respectively. RESULTS: Chi-square analyses showed significant racial/ethnic group differences for vestibular, somatic/sensory, and affective symptoms as well as for all Validity-10 cutoff scores examined (≥33, ≥27, ≥26, >22, ≥22, ≥13, and ≥7). Follow-up analyses compared all racial/ethnic groups to one another, adjusting for sociodemographic- and injury-related characteristics. These analyses revealed that the affective symptom domain and the Validity-10 cutoff of ≥13 revealed the greatest number of racial/ethnic differences. CONCLUSIONS: Results showed significant racial/ethnic group differences on neurobehavioral symptom domains and symptom validity testing among veterans who completed the CTBIE. An enhanced understanding of how symptoms vary by race/ethnicity is vital so that clinical care can be appropriately tailored to the unique needs of all veterans. Results highlight the importance of establishing measurement invariance of the NSI across race/ethnicity and underscore the need for ongoing research to determine the most appropriate Validity-10 cutoff score(s) to use across racially/ethnically diverse veterans.


Subject(s)
Brain Injuries, Traumatic , Veterans , Humans , Veterans/psychology , Neuropsychological Tests , Brain Injuries, Traumatic/complications , Ethnicity , Hispanic or Latino
5.
Appl Neuropsychol Adult ; 30(3): 315-329, 2023.
Article in English | MEDLINE | ID: mdl-34261385

ABSTRACT

Using archival data from 2463 psychoeducational assessments of postsecondary students, we investigated whether failure on either symptom or performance validity tests (SVTs or PVTs) was associated with score differences on various cognitive, achievement, or executive functioning performance measures or on symptom report measures related to mental health or attention complaints. In total, 14.6% of students failed one or more PVTs, 33.6% failed one or more SVTs, and 41.6% failed at least one validity test. Individuals who failed SVTs tended to have the highest levels of self-reported symptoms relative to other groups but did not score worse on performance-based psychological tests. Those who failed PVTs scored worse on performance-based tests relative to other groups. Failure on at least one PVT and one SVT resulted in both performance and self-reported symptoms suggestive of greater impairment compared with those who passed all validity measures. Findings also highlight the need for domain-specific SVTs: failing ADHD SVTs was associated only with extreme reports of ADHD and executive functioning symptoms, whereas failing mental health SVTs was related only to extreme reports of mental health complaints. Results support using at least one PVT and one SVT in psychoeducational assessments to aid diagnostic certainty, given the frequency of non-credible presentation in this population of postsecondary students.


Subject(s)
Attention , Disability Evaluation , Humans , Neuropsychological Tests , Self Report , Reproducibility of Results
6.
Arch Clin Neuropsychol ; 38(5): 772-781, 2023 Jul 25.
Article in English | MEDLINE | ID: mdl-36578198

ABSTRACT

OBJECTIVE: This study explored the specificity of four embedded performance validity tests (PVTs) derived from common neuropsychological tasks in a sample of older veterans with verified cognitive decline whose performance was deemed valid by licensed psychologists. METHOD: Participants were 180 veterans who underwent comprehensive neuropsychological evaluation, were determined to have valid performance following profile analysis/conceptualization, and were diagnosed with mild neurocognitive disorder (i.e., MCI; n = 64) or major neurocognitive disorder (i.e., dementia; n = 116). All participants completed at least one of four embedded PVTs: Reliable Digit Span (RDS), California Verbal Learning Test-2nd ed. Short Form (CVLT-II SF) Forced Choice, Trails B:A, and Delis-Kaplan Executive Function System (DKEFS) Letter and Category Fluency. RESULTS: Adequate specificity (i.e., ≥90%) was achieved at modified cut-scores for all embedded PVTs across the MCI and dementia groups. Trails B:A demonstrated near-perfect specificity at its traditional cut-score (Trails B:A < 1.5). RDS ≤5 and CVLT-II SF Forced Choice ≤7 led to <10% false-positive classification errors across the MCI and dementia groups. DKEFS Letter and Category Fluency achieved 90% specificity only at extremely low normative cut-scores. CONCLUSIONS: RDS, Trails B:A, and CVLT-II SF Forced Choice are promising embedded PVTs in the context of dementia evaluations. DKEFS Letter and Category Fluency appear too sensitive to genuine neurocognitive decline and are therefore inappropriate PVTs in adults with MCI or dementia. Additional research into embedded PVT sensitivity (via known-groups or analogue designs) in MCI and dementia is needed.


Subject(s)
Cognitive Dysfunction , Dementia , Veterans , Adult , Humans , Aged , Neuropsychological Tests , Veterans/psychology , Dementia/diagnosis , Cognitive Dysfunction/diagnosis , Cognitive Dysfunction/psychology , Memory and Learning Tests , Reproducibility of Results
7.
Appl Neuropsychol Adult ; : 1-5, 2022 Aug 09.
Article in English | MEDLINE | ID: mdl-35944507

ABSTRACT

Questionnaire-based symptom validity tests (SVTs) are an indispensable diagnostic tool for evaluating the credibility of patients' claimed symptomatology, both in forensic and in clinical assessment contexts. In 2019, the comprehensive professional manual of a new SVT, the Self-Report Symptom Inventory (SRSI), was published in German. Its English-language version was first tested in the UK. This experimental analogue study investigated 20 adults simulating minor head injury symptoms and 21 honestly responding participants. The effect sizes of differences between the two groups were large, with the simulating group endorsing a higher number of pseudosymptoms, both on the SRSI and the Structured Inventory of Malingered Symptomatology, and scoring lower on the Reliable Digit Span than the control group. The results are similar to those obtained in previous research of different SRSI language versions, supporting the effort to validate the English-language SRSI version.

8.
Arch Clin Neuropsychol ; 37(8): 1765-1771, 2022 Nov 21.
Article in English | MEDLINE | ID: mdl-35780310

ABSTRACT

The Automated Neuropsychological Assessment Metrics (ANAM) is one of the most widely used and validated neuropsychological instruments for assessing cognition. The ANAM Test System includes a reporting tool, the ANAM Validity Indicator Report, which generates scores for the embedded effort measure, the ANAM Performance Validity Index (APVI). The current study sought to develop a proxy for the APVI using raw subtest summary scores. This would be useful in situations where the APVI score is unavailable (e.g., the validity report was not generated at the time of the assessment) or where the item-level data needed to generate this score are inaccessible. ANAM scores from a large data set of 1,000,000+ observations were used for this retrospective analysis. Results of linear regression analysis suggest that the APVI can be reasonably estimated from the raw subtest summary scores presented on the ANAM Performance Report. Clinically, this means that an important step in the interpretation process, checking the validity of test data, can still be performed even when the APVI is not available.
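A hedged sketch of the proxy idea: regress the validity index on raw subtest summary scores, then use the fitted model when the official report is unavailable. The feature layout, weights, and data below are invented stand-ins, not the actual ANAM fields or coefficients.

```python
# Estimating a validity index from subtest summary scores via linear regression.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 5_000

# Hypothetical raw subtest summary scores (one column per subtest).
X = rng.normal(50, 10, size=(n, 4))
# Pretend the true index is a weighted sum of subtests plus noise.
apvi = X @ np.array([0.4, 0.3, 0.2, 0.1]) + rng.normal(0, 3, n)

model = LinearRegression().fit(X, apvi)
print("proxy R^2:", round(model.score(X, apvi), 3))

# Estimate the index for a new session where only subtest scores survive.
new_session = np.array([[52.0, 47.5, 55.1, 49.0]])
print("estimated APVI:", model.predict(new_session))
```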


Subject(s)
Cognition Disorders , Humans , Neuropsychological Tests , Retrospective Studies , Cognition Disorders/psychology , Cognition , Reproducibility of Results
9.
Appl Neuropsychol Adult ; : 1-10, 2022 May 30.
Article in English | MEDLINE | ID: mdl-35635794

ABSTRACT

Performance validity tests are susceptible to false positives arising from genuine cognitive impairment (e.g., dementia); this has not been explored with the short form of the California Verbal Learning Test II (CVLT-II-SF). In a memory clinic sample, we examined whether CVLT-II-SF Forced Choice Recognition (FCR) scores differed across diagnostic groups and how the severity of impairment [Clinical Dementia Rating Sum of Boxes (CDR-SOB) or Mini-Mental State Examination (MMSE)] modulated test performance. Three diagnostic groups were identified: subjective cognitive impairment (SCI; n = 85), amnestic mild cognitive impairment (a-MCI; n = 17), and dementia due to Alzheimer's disease (AD; n = 50). Significant group differences in FCR were observed using one-way ANOVA; post-hoc analysis indicated the AD group performed significantly worse than the other groups. Using multiple regression, FCR performance was modeled as a function of diagnostic group, severity (MMSE or CDR-SOB), and their interaction. Results yielded significant main effects for MMSE and diagnostic group, with a significant interaction; CDR-SOB analyses were non-significant. Increases in impairment disproportionately affected FCR performance for persons with AD, so caution is warranted when applying research-based performance validity cutoffs in dementia populations. Future research should examine whether the CVLT-II-SF FCR is appropriately specific for best-practice testing batteries for dementia.
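A simulation-based sketch of the moderation model described here: FCR regressed on diagnostic group, severity (MMSE), and their interaction. Group sizes follow the abstract; the score-generating assumptions (group MMSE means, an MMSE slope only in the AD group) are invented to illustrate what a significant interaction looks like.

```python
# OLS with a group x severity interaction, mirroring the abstract's model form.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
groups = np.repeat(["SCI", "aMCI", "AD"], [85, 17, 50])
mmse_mean = {"SCI": 28.5, "aMCI": 26.5, "AD": 21.0}   # assumed group means
mmse = np.array([rng.normal(mmse_mean[g], 2.0) for g in groups])

# CVLT-II-SF FCR is scored out of 9; assume near-ceiling scores except in AD,
# where lower MMSE drags performance down (the hypothesized interaction).
fcr = np.where(groups == "AD", 5.0 + 0.3 * (mmse - 21.0), 8.7)
fcr = np.clip(fcr + rng.normal(0, 0.4, groups.size), 0, 9)

df = pd.DataFrame({"fcr": fcr, "group": groups, "mmse": mmse})
fit = smf.ols("fcr ~ C(group) * mmse", data=df).fit()
print(fit.summary().tables[1])  # main effects plus group:mmse interaction
```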

10.
Arch Clin Neuropsychol ; 37(6): 1199-1207, 2022 Aug 23.
Article in English | MEDLINE | ID: mdl-35435228

ABSTRACT

OBJECTIVE: Individuals with early-onset dysexecutive Alzheimer's disease (dAD) have high rates of failed performance validity testing (PVT), which can lead to symptom misinterpretation and misdiagnosis. METHOD: The aim of this retrospective study was to evaluate rates of failure on a common PVT, the Test of Memory Malingering (TOMM), in a sample of clinical patients with biomarker-confirmed early-onset dAD who completed neuropsychological testing. RESULTS: We identified seventeen patients with an average age of symptom onset of 52.25 years. Nearly fifty percent of patients performed below recommended cut-offs on Trials 1 and 2 of the TOMM. Four of six patients who completed outside neuropsychological testing were misdiagnosed with alternative etiologies to explain their symptomatology, with two of these patients' performances deemed unreliable based on the TOMM. CONCLUSIONS: Low scores on the TOMM should be interpreted in light of contextual and, optimally, biological information, and do not necessarily rule out a neurodegenerative etiology.


Subject(s)
Alzheimer Disease , Malingering , Alzheimer Disease/complications , Alzheimer Disease/diagnosis , Diagnostic Errors , Humans , Malingering/diagnosis , Malingering/psychology , Memory Disorders/diagnosis , Memory Disorders/etiology , Memory and Learning Tests , Middle Aged , Neuropsychological Tests , Reproducibility of Results , Retrospective Studies
11.
Front Psychol ; 13: 789762, 2022.
Article in English | MEDLINE | ID: mdl-35369141

ABSTRACT

Feigning (i.e., grossly exaggerating or fabricating) symptoms distorts diagnostic evaluations. Therefore, dedicated tools known as symptom validity tests (SVTs) have been developed to help clinicians differentiate feigned from genuine symptom presentations. While a deviant SVT score is an indicator of a feigned symptom presentation, a non-deviant score provides support for the hypothesis that the symptom presentation is valid. Ideally, non-deviant SVT scores should temper suspicion of feigning even in cases where the patient fits the DSM's stereotypical yet faulty profile of the "antisocial" feigner. Across three studies, we tested whether non-deviant SVT scores do, indeed, have this corrective effect. We gave psychology students (Study 1, N = 55) and clinical experts (Study 2, N = 42; Study 3, N = 93) a case alluding to the DSM profile of feigning. In successive steps, they received information about the case, including non-deviant SVT outcomes. After each step, participants rated how strongly they suspected feigning and how confident they were about their judgment. Both students and experts showed suspicion rates around the midpoint of the scale (i.e., 50) and did not respond to non-deviant SVT outcomes with lowered suspicion rates. In Study 4, we educated participants (i.e., psychology students, N = 92) about the shortcomings of the DSM's antisocial typology of feigning and the importance of the negative predictive power of SVTs, after which they processed the case information. Judgments remained roughly similar to those in Studies 1-3. Taken together, our findings suggest that students and experts alike have difficulty understanding that non-deviant scores on SVTs reduce the probability of feigning as a correct differential diagnosis.
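The paper's point about negative predictive power reduces to Bayes' rule, and a worked example makes the expected correction concrete. The prior (the 50-point scale midpoint participants gravitated to, read as a .50 probability) and the SVT's operating characteristics (.70 sensitivity, .90 specificity) are illustrative assumptions, not values from the studies.

```python
# How much should a non-deviant (passing) SVT score lower suspicion of feigning?
def posterior_feigning_given_pass(prior, sensitivity, specificity):
    """P(feigning | non-deviant SVT score) via Bayes' rule."""
    p_pass_given_feign = 1 - sensitivity   # feigner slips past the SVT
    p_pass_given_genuine = specificity     # genuine patient passes
    numerator = p_pass_given_feign * prior
    return numerator / (numerator + p_pass_given_genuine * (1 - prior))

# With a 50% prior and a .70/.90 SVT, suspicion should drop to about 25%.
print(posterior_feigning_given_pass(0.50, 0.70, 0.90))
```

The participants' flat suspicion ratings amount to skipping this update entirely.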

12.
Arch Clin Neuropsychol ; 37(4): 814-825, 2022 May 16.
Article in English | MEDLINE | ID: mdl-35060601

ABSTRACT

OBJECTIVE: Strict competency frameworks exist for training in, and provision of, clinical neuropsychological assessment. However, as in all disciplines, daily clinical practice may drift from gold-standard practice without routine monitoring and audit. A simple-to-use but thorough, evidence-based audit tool has been developed to facilitate the tracking, maintenance, and discussion of best practice over time. METHOD: A literature search and liaison with experienced neuropsychology colleagues did not unearth any pre-existing audit standards. Therefore, 39 new standards, guided by the best-practice literature and discussions with clinical neuropsychology colleagues, were generated to form the proposed self-assessment audit tool. Because of the diverse nature of services, both core and supplementary standards are proposed, enabling the audit to be tailored to individual services' needs. RESULTS: During its development, the tool has so far been trialed on three occasions in two U.K. National Health Service clinical services in different localities, with a total patient population of N = 78, in order to refine the standards and generate practice recommendations. CONCLUSIONS: This audit tool is presented for services to self-assess their neuropsychological assessment practice. The authors plan to take this work forward with the British Psychological Society's Division of Neuropsychology as a policy document for self-assessment and peer review. Other potential developments include contributing to clinical neuropsychology training tools and refining the audit standards for wider use, such as in pediatric services or internationally with diverse populations.


Subject(s)
Self-Assessment , State Medicine , Child , Humans , Neuropsychological Tests , Neuropsychology
13.
Appl Neuropsychol Adult ; 29(1): 10-22, 2022.
Article in English | MEDLINE | ID: mdl-31852281

ABSTRACT

It is now widely understood that ADHD can be feigned easily and convincingly. Despite this, almost no methods exist to assist clinicians in identifying when such behavior occurs. Recently, new validity indicators specific to feigned ADHD were reported for the Personality Assessment Inventory (PAI). Derived from a logistic regression, these algorithms are said to have excellent specificity and good sensitivity in identifying feigned ADHD. However, the authors compared those with genuine ADHD only to nonclinical undergraduate students (asked to respond honestly or to simulate ADHD); no criterion group of definite malingerers was included. We therefore investigated these new validity indicators with 331 postsecondary students who underwent assessment for possible ADHD, comparing the scores of those who were eventually diagnosed with ADHD (n = 111) to those who were not [clinical controls (n = 66), definite malingerers (n = 36), no diagnosis (n = 117)]. The two proposed PAI algorithms were found to have poor positive predictive value (.19 and .17). Self-report validity measures from the Conners' Adult ADHD Rating Scales and the Negative Impression Management scale on the PAI returned more positive results. Overall, more research is needed to better identify non-credible ADHD presentation, as the PAI-based methods proposed by Aita et al. appear inadequate as symptom validity measures.


Subject(s)
Attention Deficit Disorder with Hyperactivity , Adult , Attention Deficit Disorder with Hyperactivity/diagnosis , Humans , Malingering/diagnosis , Personality Assessment , Personality Inventory , Reproducibility of Results , Students
14.
Appl Neuropsychol Adult ; 29(6): 1344-1351, 2022.
Article in English | MEDLINE | ID: mdl-33662216

ABSTRACT

The current study examined characteristics of the Structured Inventory of Malingered Symptomatology (SIMS) in a sample of 110 patients at an adult neuropsychology clinic. Subjects with especially high or low suspicion of invalid reporting were identified based on clinician-completed questions. SIMS elevation rates were examined at different cutoffs, compared between these groups, and correlated with other indicators of validity. High rates of SIMS elevations were found at the standard cutoff (>14) for the total sample (45.5%), low-suspicion cases (24.4%), and high-suspicion cases (95.7%). Other indicators of invalidity were low (secondary gain = 8.5%, clinical suspicion of exaggeration in interview M = 2.37/5, medical records concerning for invalidity = 2.4%, mixed/poor performance validity = 6.1%). Elevations correlated with clinician concern for over-reporting in interview, subject-reported cognitive concern (r = -.610), and psychological measures (BDI-II r = -.602, PROMIS r = -.409), but not with neuropsychological memory tests or performance validity measures (all p > .23). The SIMS should be interpreted with caution, as elevations appeared largely related to cognitive concern and psychiatric distress rather than true malingering. A cutoff of >16 could be used in neuropsychological populations, although this still yields only modest specificity.


Subject(s)
Malingering , Adult , Humans , Malingering/diagnosis , Malingering/psychology , Neuropsychological Tests , Reproducibility of Results
15.
Arch Clin Neuropsychol ; 37(1): 50-62, 2022 Jan 17.
Article in English | MEDLINE | ID: mdl-34050354

ABSTRACT

OBJECTIVE: This study examined the degree to which verbal and visuospatial memory abilities influence performance validity test (PVT) performance in a mixed clinical pediatric sample. METHOD: Data from 252 consecutive clinical pediatric cases (mean age = 11.23 years, SD = 4.02; 61.9% male) seen for outpatient neuropsychological assessment were collected. Measures of learning and memory (e.g., the California Verbal Learning Test-Children's Version; Child and Adolescent Memory Profile [ChAMP]), performance validity (Test of Memory Malingering Trial 1 [TOMM T1]; Wechsler Intelligence Scale for Children-Fifth Edition [WISC-V] or Wechsler Adult Intelligence Scale-Fourth Edition Digit Span indices; ChAMP Overall Validity Index), and intellectual abilities (e.g., WISC-V) were included. RESULTS: Learning/memory abilities were not significantly correlated with TOMM T1 and accounted for relatively little variance in overall TOMM T1 performance (i.e., ≤6%). Conversely, ChAMP Validity Index scores were significantly correlated with verbal and visual learning/memory abilities, and learning/memory accounted for significant variance in PVT performance (12%-26%). Verbal learning/memory performance accounted for 5%-16% of the variance across the Digit Span PVTs. No significant differences in TOMM T1 and Digit Span PVT scores emerged between the verbal/visual learning/memory impairment groups. ChAMP validity scores were lower for the visual learning/memory impairment group relative to the non-impaired group. CONCLUSIONS: Findings highlight the utility of including PVTs as standard practice in pediatric populations, particularly when memory is a concern. Consistent with the adult literature, TOMM T1 outperformed the other PVTs in its utility, even in this diverse clinical sample with and without learning/memory impairment. In contrast, Digit Span indices appear best suited to contexts involving visuospatial (but not verbal) learning/memory concerns. Finally, the ChAMP's embedded validity measure was most strongly affected by learning/memory performance.


Subject(s)
Malingering , Memory Disorders , Adolescent , Adult , Child , Female , Humans , Male , Neuropsychological Tests , Reproducibility of Results , Verbal Learning
16.
Psychol Inj Law ; 14(1): 1, 2021.
Article in English | MEDLINE | ID: mdl-33758641
17.
Arch Clin Neuropsychol ; 36(7): 1326-1340, 2021 Oct 13.
Article in English | MEDLINE | ID: mdl-33388765

ABSTRACT

OBJECTIVE: Performance validity tests (PVTs) are an integral component of neuropsychological assessment. There is a need for the development of more PVTs, especially those employing covert determinations. The aim of the present study was to provide initial validation of a new computerized PVT, the Perceptual Assessment of Memory (PASSOM). METHOD: Participants were 58 undergraduate students randomly assigned to a simulator (SIM) or control (CON) group. All participants were provided written instructions for their role prior to testing and were administered the PASSOM as part of a brief battery of neurocognitive tests. Indices of interest included response accuracy for Trials 1 and 2 and total errors across trials, as well as response time (RT) for Trials 1 and 2 and total RT across both trials. RESULTS: The SIM group produced significantly more errors than the CON group on Trials 1 and 2 and committed more total errors across trials. Significantly longer response latencies were found for the SIM group compared to the CON group for all RT indices examined. Linear regression modeling indicated excellent group classification for all indices studied, with areas under the curve ranging from 0.92 to 0.95. Sensitivity and specificity rates were good for several cut scores across all of the accuracy and RT indices, and sensitivity improved greatly when RT cut scores were combined with the more traditional accuracy cut scores. CONCLUSION: Findings demonstrate the ability of the PASSOM to distinguish individuals instructed to feign cognitive impairment from those told to perform to the best of their ability.


Subject(s)
Cognitive Dysfunction , Malingering , Humans , Neuropsychological Tests , Reproducibility of Results , Sensitivity and Specificity
18.
Article in English | MEDLINE | ID: mdl-32998629

ABSTRACT

The results of neuropsychological tests may be distorted by patients who exaggerate cognitive deficits. Eighty-three patients with cognitive deficits [amnestic mild cognitive impairment (aMCI), n = 53; Alzheimer's disease (AD) dementia, n = 30], 44 healthy older adults (HA), and 30 simulators of AD (s-AD) underwent comprehensive neuropsychological assessment. Receiver operating characteristic (ROC) analysis revealed high specificity but low sensitivity of the Delayed Matching to Sample Task (DMS48) in differentiating s-AD from AD dementia (87% and 53%, respectively) and from aMCI (96% and 57%). Sensitivity was considerably increased by using the DMS48/Rey Auditory Verbal Learning Test (RAVLT) ratio (specificity and sensitivity of 93% and 93% for AD dementia and 96% and 80% for aMCI). The DMS48 differentiates s-AD from both aMCI and AD dementia with high specificity but low sensitivity; its predictive value greatly increases when it is evaluated together with the RAVLT.


Subject(s)
Alzheimer Disease , Cognition Disorders , Cognitive Dysfunction , Aged , Alzheimer Disease/diagnosis , Cognitive Dysfunction/diagnosis , Humans , Malingering/diagnosis , Neuropsychological Tests
19.
Arch Clin Neuropsychol ; 36(3): 403-413, 2021 Apr 21.
Article in English | MEDLINE | ID: mdl-31740920

ABSTRACT

OBJECTIVE: Performance validity research has emphasized the need for briefer measures and, more recently, abbreviated versions of established free-standing tests to minimize the cost and time burden of neuropsychological evaluation. This study examined the accuracy of multiple abbreviated versions of the Dot Counting Test ("quick" DCT) for detecting invalid performance, in isolation and in combination with the Test of Memory Malingering Trial 1 (TOMM T1). METHOD: Data from a mixed clinical sample of 107 veterans (80 valid/27 invalid per independent validity measures and structured criteria) were included in this cross-sectional study; 47% of valid participants were cognitively impaired. Sensitivities/specificities of various 6- and 4-card DCT combinations were calculated and compared to the full, 12-card DCT. Combined models comprising the most accurate 6- and 4-card combinations and TOMM T1 were then examined. RESULTS: Receiver operating characteristic curve analyses were significant for all 6- and 4-card DCT combinations, with areas under the curve of .868-.897. The best 6-card combination (cards 1-3-5-8-11-12) had 56% sensitivity/90% specificity (E-score cut-off ≥14.5), and the best 4-card combination (cards 3-4-8-11) had 63% sensitivity/94% specificity (cut-off ≥16.75). The full DCT had 70% sensitivity/90% specificity (cut-off ≥16.00). Logistic regression revealed 95% classification accuracy when the 6-card or 4-card "quick" combinations were combined with TOMM T1, with the DCT combinations and TOMM T1 both emerging as significant predictors. CONCLUSIONS: Abbreviated DCT versions utilizing 6- and 4-card combinations yielded sensitivity/specificity comparable to the full DCT. When these "quick" DCT combinations were further combined with an abbreviated memory-based performance validity test (i.e., TOMM T1), overall classification accuracy for identifying invalid performance was 95%.
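A generic sketch of the ROC workflow reported above: compute the AUC for a validity index and read off the most sensitive cut-off that keeps specificity at or above 90%. The simulated E-score distributions loosely mirror the abstract's 80/27 group sizes but are otherwise invented, so the printed values will not match the published ones.

```python
# ROC analysis and cut-off selection for a hypothetical validity index.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(5)
# 1 = invalid performance, 0 = valid; higher E-scores suggest invalidity.
y = np.r_[np.zeros(80), np.ones(27)]
e_scores = np.r_[rng.normal(12.0, 3.0, 80), rng.normal(19.0, 4.0, 27)]

print("AUC:", round(roc_auc_score(y, e_scores), 3))

# Most sensitive threshold with specificity >= 90% (i.e., FPR <= 10%).
fpr, tpr, thresholds = roc_curve(y, e_scores)
keep = fpr <= 0.10
best = np.argmax(tpr[keep])
print(f"cut-off >= {thresholds[keep][best]:.2f}: "
      f"sensitivity={tpr[keep][best]:.2f}, specificity={1 - fpr[keep][best]:.2f}")
```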


Subject(s)
Memory and Learning Tests , Memory , Cross-Sectional Studies , Humans , Malingering , Neuropsychological Tests , Reproducibility of Results
20.
Appl Neuropsychol Adult ; 28(4): 486-496, 2021.
Article in English | MEDLINE | ID: mdl-31519112

ABSTRACT

Given the prevalence of compensation-seeking patients who exaggerate or fabricate their symptoms, assessing performance and symptom validity throughout testing is vital in neuropsychological evaluations. Two of the most commonly utilized performance validity tests (PVTs) are the Word Memory Test (WMT) and the Test of Memory Malingering (TOMM). While both have proven successful in detecting invalid performance, some studies suggest greater sensitivity for the WMT relative to the TOMM. To improve upon previous research, this study compared performance in individuals who completed both the WMT and the TOMM during a neuropsychological evaluation. Participants included 268 cases from a clinical private practice consisting primarily of disability claimants. A one-way multivariate analysis of variance (MANOVA) compared the neuropsychological performance of participants who passed both PVTs (n = 198) versus those who failed the WMT but passed the TOMM (n = 70). Participants who failed the WMT but passed the TOMM showed global suppression of neuropsychological scores and reported more psychiatric symptoms on questionnaires relative to those who passed both PVTs. These findings suggest that those passing the TOMM but failing the WMT demonstrated performance invalidity, illustrating the WMT's enhanced sensitivity.
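For readers unfamiliar with the analysis, here is a minimal sketch of a one-way MANOVA comparing the two validity groups on several test scores, using simulated data with the abstract's group sizes. The dependent-variable names and the degree of score suppression are assumptions.

```python
# One-way MANOVA: pass-both vs. fail-WMT/pass-TOMM groups on three scores.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(6)
n_pass, n_fail = 198, 70

df = pd.DataFrame({
    "group": ["pass_both"] * n_pass + ["fail_wmt"] * n_fail,
    # Assumed global suppression (~0.7 SD) in the WMT-failure group.
    "memory": np.r_[rng.normal(100, 15, n_pass), rng.normal(89, 15, n_fail)],
    "attention": np.r_[rng.normal(100, 15, n_pass), rng.normal(90, 15, n_fail)],
    "speed": np.r_[rng.normal(100, 15, n_pass), rng.normal(89, 15, n_fail)],
})

fit = MANOVA.from_formula("memory + attention + speed ~ group", data=df)
print(fit.mv_test())  # Wilks' lambda, Pillai's trace, etc.
```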


Subject(s)
Malingering , Memory Disorders , Humans , Malingering/diagnosis , Memory Disorders/diagnosis , Memory Disorders/etiology , Memory and Learning Tests , Neuropsychological Tests , Reproducibility of Results