Results 1 - 5 of 5
1.
Appl Neuropsychol Adult; 28(1): 24-34, 2021.
Article in English | MEDLINE | ID: mdl-30987451

ABSTRACT

Use of multiple performance validity tests (PVTs) may best identify invalid performance, though few studies have examined the utility and accuracy of combining PVTs. This study examined the following PVTs from the Advanced Clinical Solutions (ACS) package to determine their utility alone and in concert: the Word Choice Test (WCT), Reliable Digit Span (RDS), and Logical Memory Recognition (LMR). Ninety-three veterans participated in clinical neuropsychological evaluations to determine the presence of cognitive impairment; 25% of the performances were deemed invalid via criterion PVTs. Classification accuracy of the ACS measures was assessed via receiver operating characteristic curves, while logistic regressions determined the utility of combining these PVTs. The WCT demonstrated superior classification accuracy compared with the two embedded ACS measures, even in veterans with cognitive impairment. The two embedded measures (even when used in concert) exhibited inadequate classification accuracy. A combined model with all three ACS PVTs likewise demonstrated little benefit of the embedded indicators over the WCT alone. Results suggest the ACS WCT has utility for detecting invalid performance in a clinical sample with likely cognitive impairment, whereas the embedded ACS measures (RDS and LMR) may have limited incremental utility, particularly in individuals with cognitive impairment.
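The analytic approach described here (per-measure ROC curves plus a logistic regression combining the three PVTs) can be illustrated with a brief sketch. The score distributions, sample split, and direction of the cut scores below are simulated assumptions, not the study's data or code:

```python
# Illustrative sketch (not the study's code): evaluating individual PVTs via
# ROC curves and combining them with logistic regression. Data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 93
invalid = rng.random(n) < 0.25                    # ~25% invalid, per the abstract

# Hypothetical PVT scores: lower scores for the invalid group
wct = rng.normal(np.where(invalid, 40, 48), 3)    # Word Choice Test raw score
rds = rng.normal(np.where(invalid, 6, 9), 2)      # Reliable Digit Span
lmr = rng.normal(np.where(invalid, 18, 24), 3)    # Logical Memory Recognition

# Classification accuracy of each PVT alone (area under the ROC curve)
for name, score in [("WCT", wct), ("RDS", rds), ("LMR", lmr)]:
    print(name, "AUC =", round(roc_auc_score(invalid, -score), 3))  # negate: low = invalid

# Combined model: do RDS and LMR add anything beyond the WCT?
X = np.column_stack([wct, rds, lmr])
model = LogisticRegression().fit(X, invalid)
combined_auc = roc_auc_score(invalid, model.predict_proba(X)[:, 1])
print("WCT + RDS + LMR AUC =", round(combined_auc, 3))
```

If the combined model's AUC barely exceeds the AUC of the WCT alone, the embedded indicators add little incremental validity, which is the comparison the abstract describes.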


Subject(s)
Cognitive Dysfunction/diagnosis, Malingering/diagnosis, Neuropsychological Tests/standards, Psychometrics/standards, Psychomotor Performance, Adult, Aged, Female, Humans, Male, Middle Aged, Reproducibility of Results, Veterans
2.
Appl Neuropsychol Adult; 28(6): 727-736, 2021.
Article in English | MEDLINE | ID: mdl-31835915

ABSTRACT

The Test of Memory Malingering (TOMM) and the Word Memory Test (WMT) are among the best-known performance validity tests (PVTs) and are regarded as gold-standard measures. Because many factors influence PVT selection, it is imperative that clinicians make informed decisions about additional or alternative PVTs that demonstrate classification accuracy similar to these well-validated measures. The present archival study evaluated the agreement and classification accuracy of a large battery of freestanding and embedded PVTs in a mixed clinical sample of 126 veterans. We examined failure rates for all freestanding and embedded PVTs using established cut scores and calculated pass/fail agreement rates and diagnostic odds ratios for various combinations of PVTs, with the TOMM and WMT as criterion measures. The TOMM and WMT demonstrated the best agreement, followed by the Word Choice Test (WCT). The Rey Fifteen Item Test produced an excessive number of false-negative errors and reduced classification accuracy. Among the embedded measures, the Digit Span age-corrected scaled score (DS-ACSS) had the highest agreement. Findings lend further support to combining embedded and freestanding PVTs to identify suboptimal performance, and the results provide data to enhance clinical decision making for neuropsychologists who implement combinations of PVTs within a larger clinical battery.
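As a rough sketch of the agreement statistics mentioned above, the snippet below computes a pass/fail agreement rate and a diagnostic odds ratio for a candidate PVT scored against a criterion PVT (e.g., TOMM or WMT). The pass/fail vectors are invented for illustration, and the 0.5 cell correction is an assumption for handling empty cells, not necessarily the study's choice:

```python
# Minimal sketch: percent pass/fail agreement and a diagnostic odds ratio for
# a candidate PVT against a criterion PVT. Data below are made up.
import numpy as np

criterion_fail = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 0], dtype=bool)  # TOMM/WMT
candidate_fail = np.array([1, 1, 0, 0, 0, 0, 1, 0, 1, 0], dtype=bool)  # e.g., WCT

tp = np.sum(candidate_fail & criterion_fail)     # both fail
tn = np.sum(~candidate_fail & ~criterion_fail)   # both pass
fp = np.sum(candidate_fail & ~criterion_fail)    # candidate fails, criterion passes
fn = np.sum(~candidate_fail & criterion_fail)    # candidate passes, criterion fails

agreement = (tp + tn) / len(criterion_fail)
# 0.5 added to each cell (Haldane correction) so the ratio is defined with zero cells
dor = ((tp + 0.5) * (tn + 0.5)) / ((fp + 0.5) * (fn + 0.5))

print(f"Agreement = {agreement:.2f}, diagnostic odds ratio = {dor:.1f}")
```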


Subject(s)
Malingering, Memory and Learning Tests, Humans, Malingering/diagnosis, Memory, Neuropsychological Tests, Reproducibility of Results
3.
Clin Neuropsychol; 34(2): 406-422, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31282285

ABSTRACT

Objective: The Boston Naming Test, Second Edition (BNT-2) and the Neuropsychological Assessment Battery (NAB) Naming Test are common measures of visual confrontation naming ability. The comparatively newer NAB Naming Test is a potential alternative to the BNT-2, given the latter's history of criticism. A recent psychometric investigation of the NAB Naming Test demonstrated sufficient reliability and validity in a large clinical sample; however, that study was limited by a lack of ethnic, racial, and language diversity, all of which can affect scores on naming tests. Method: The present study examined the convergent and discriminant validity and internal consistency of the NAB Naming Test in a diverse clinical sample of 225 veterans (87.6% men, 51.1% White/Caucasian, 29.3% bilingual, 64.0% with cognitive impairment). All but three participants identified as White/Caucasian, Hispanic/Latino, or Black/African American. These psychometric properties were examined for the overall sample and for monolingual (English) and bilingual (English/Spanish) participants separately. Results: As expected, the NAB Naming Test demonstrated sufficient internal consistency and a negatively skewed distribution for the overall sample and for both monolingual and bilingual participants. Evidence of adequate convergent and discriminant validity was also established for monolingual and bilingual participants separately. Conclusion: In a diverse clinical sample with differing self-reported language status, the NAB Naming Test demonstrated adequate psychometric properties. Although it represents a viable option in neuropsychological practice, continued awareness of patient-specific factors that could affect performance is recommended.
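For readers unfamiliar with the psychometric terms, the sketch below shows one way to compute internal consistency (Cronbach's alpha) and a convergent validity correlation for a naming test, using simulated data; the item count and BNT-2 score scale are illustrative assumptions, not values from the study:

```python
# Simulated illustration of the psychometrics described: Cronbach's alpha for
# internal consistency and a Pearson correlation with BNT-2 totals as
# convergent validity evidence.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_people, n_items = 225, 30          # 30 items is an illustrative count, not the NAB's

ability = rng.normal(0, 1, n_people)
# Dichotomous item scores that share variance through a common 'ability' factor
items = (ability[:, None] + rng.normal(0, 1.2, (n_people, n_items)) > -0.8).astype(int)

def cronbach_alpha(item_matrix):
    """Internal consistency: k/(k-1) * (1 - sum of item variances / total-score variance)."""
    k = item_matrix.shape[1]
    item_var = item_matrix.var(axis=0, ddof=1).sum()
    total_var = item_matrix.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

nab_total = items.sum(axis=1)                                # NAB Naming total (simulated)
bnt2_total = 45 + 5 * ability + rng.normal(0, 3, n_people)   # BNT-2 total (simulated)

r, p = pearsonr(nab_total, bnt2_total)                       # convergent validity estimate
print(f"alpha = {cronbach_alpha(items):.2f}, convergent r = {r:.2f} (p = {p:.3g})")
```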


Subject(s)
Language Tests/standards, Neuropsychological Tests/standards, Psychometrics/methods, Cross-Sectional Studies, Female, Humans, Language, Male, Middle Aged, Multilingualism, Reproducibility of Results, Retrospective Studies
4.
Neuropsychology; 34(1): 43-52, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31414828

ABSTRACT

OBJECTIVE: Premorbid estimates of intellectual functioning are key to neuropsychological assessment. This study compared the accuracy of three common measures: the Test of Premorbid Functioning (TOPF), the Oklahoma Premorbid Intelligence Estimate (OPIE-3), and what is commonly referred to as the Barona equation. We also sought to provide an appropriate adjustment for the Flynn effect. METHOD: The sample consisted of a cross-section of 189 outpatient veterans who completed a neuropsychological assessment that included the TOPF and the Wechsler Adult Intelligence Scale, 4th ed. (WAIS-IV). Paired-sample t tests assessed differences between the IQ models. Correlations between each model and actual WAIS-IV Full Scale IQ (FSIQ) were computed to establish which model best predicted variance in current IQ, and mean differences were evaluated to establish how closely the models approximated WAIS-IV FSIQ. RESULTS: The Barona equation estimated higher premorbid IQ than the TOPF Simple Demographics model; however, differences between the models were nonsignificant after a Flynn effect correction was applied to the Barona equation (0.23 IQ points per year). The OPIE-3 correlated with FSIQ but overestimated it, demonstrating the Flynn effect. TOPF performance models (which include word reading) characterized the variance in IQ scores best, but the Flynn-adjusted Barona equation had the smallest mean difference from actual WAIS-IV FSIQ of any prediction model. CONCLUSION: Demographic models of premorbid IQ accurately estimate IQ in adult populations when normed on the test used to measure IQ or when adjusted for the Flynn effect. A Flynn-corrected Barona score provided a more accurate estimate of WAIS-IV FSIQ than the TOPF or the OPIE-3. (PsycINFO Database Record (c) 2020 APA, all rights reserved).
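The Flynn adjustment and comparison logic can be sketched as follows. The 0.23-points-per-year rate comes from the abstract; the simulated scores and the assumed number of years since norming are hypothetical, so the output does not reproduce the study's results:

```python
# Sketch of the Flynn-effect adjustment and the paired comparisons described
# above, with simulated scores.
import numpy as np
from scipy.stats import ttest_rel, pearsonr

rng = np.random.default_rng(2)
n = 189
fsiq = rng.normal(100, 15, n)                        # observed WAIS-IV FSIQ
barona = fsiq + rng.normal(6, 8, n)                  # Barona tends to run high
topf = fsiq + rng.normal(1, 7, n)                    # TOPF simple-demographics estimate

# Flynn correction: subtract 0.23 IQ points per year since the norming of the
# equation's reference test (years_since_norming is an assumed value here).
years_since_norming = 25
barona_flynn = barona - 0.23 * years_since_norming

for name, est in [("Barona", barona), ("Barona (Flynn-corrected)", barona_flynn),
                  ("TOPF", topf)]:
    t, p = ttest_rel(est, fsiq)                      # paired comparison with FSIQ
    r, _ = pearsonr(est, fsiq)                       # shared variance with FSIQ
    print(f"{name}: mean diff = {np.mean(est - fsiq):+.1f}, r = {r:.2f}, p = {p:.3f}")
```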


Subject(s)
Algorithms, Intelligence Tests, Psychological Models, Adult, Aged, Cross-Sectional Studies, Demography, Ethnicity, Female, Humans, Male, Middle Aged, Neuropsychological Tests, Predictive Value of Tests, Reproducibility of Results, Veterans, Wechsler Scales
5.
Clin Neuropsychol; 33(6): 1083-1101, 2019 Aug.
Article in English | MEDLINE | ID: mdl-30475095

ABSTRACT

Objective: Performance validity tests (PVTs) are essential in neuropsychological evaluations; however, it has been questioned how PVTs function in the context of cognitive impairment and whether cognitive impairment alone is sufficient to cause PVT failure. Further, there is concern that some clinicians will disregard failed PVTs because they perceive the failures as false-positive errors secondary to cognitive impairment. This study examined patterns associated with cognitively impaired versus noncredible performance across a battery of PVTs and neuropsychological tests. Additionally, the impact of VA service-connection and disability-seeking status on test validity was investigated. Method: A mixed clinical sample of 103 veterans was administered six PVTs and a battery of neuropsychological tests. Performance was compared across three groups: valid-cognitively unimpaired, valid-cognitively impaired, and noncredible. Results: Significant differences in PVT scores and failure rates emerged across the three groups, with nonsignificant to small differences between the valid-unimpaired and valid-impaired groups and large differences between the impaired and noncredible groups. In contrast, differences on neuropsychological tests between the valid-impaired and noncredible groups were nonsignificant to small, indicating that the valid-impaired group performed significantly better on PVTs than the noncredible group despite comparable neurocognitive test scores. Service-connection rating itself was not associated with PVT failure, but an active disability claim to establish and/or increase service connection was associated with worse PVT performance. Conclusion: This study supports the use of multiple PVTs during evaluations of patients with varied cognitive abilities. Results indicated an increased risk of PVT failure in patients seeking to initiate or increase service-connected payments and showed that cognitive impairment alone does not cause PVT failure.
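A minimal sketch of the three-group comparison of PVT failures described above, using simulated failure counts rather than the study's data; the group sizes, per-group failure rates, and the two-or-more-failures criterion are assumptions for illustration:

```python
# Toy example of the group-comparison logic: PVT failure counts (out of six
# PVTs) compared across valid-unimpaired, valid-impaired, and noncredible
# groups. All values below are simulated to show the analysis shape only.
import numpy as np
from scipy.stats import kruskal, chi2_contingency

rng = np.random.default_rng(3)
valid_unimpaired = rng.binomial(6, 0.03, 45)   # PVT failures out of 6 (assumed n = 45)
valid_impaired   = rng.binomial(6, 0.08, 35)   # assumed n = 35
noncredible      = rng.binomial(6, 0.55, 23)   # assumed n = 23

# Omnibus test of PVT failure counts across the three groups
h, p = kruskal(valid_unimpaired, valid_impaired, noncredible)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")

# Proportion in each group failing two or more PVTs (a common multi-PVT criterion)
table = np.array([[np.sum(g >= 2), np.sum(g < 2)]
                  for g in (valid_unimpaired, valid_impaired, noncredible)])
chi2, p, dof, _ = chi2_contingency(table)
print("Failing >= 2 PVTs:", table[:, 0], f"chi2 = {chi2:.2f}, p = {p:.4f}")
```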


Subject(s)
Cognitive Dysfunction/psychology, Neuropsychological Tests/standards, Veterans/psychology, Cross-Sectional Studies, Female, Humans, Male, Middle Aged, Reproducibility of Results