Results 1 - 20 of 207
1.
J Int Neuropsychol Soc ; : 1-10, 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39291402

ABSTRACT

OBJECTIVES: This study investigated the relationship between various intrapersonal factors and the discrepancy between subjective and objective cognitive difficulties in adults with attention-deficit hyperactivity disorder (ADHD). The first aim was to examine these associations in patients with valid cognitive symptom reporting. The next aim was to investigate the same associations in patients with invalid scores on tests of cognitive symptom overreporting. METHOD: The sample comprised 154 adults who underwent a neuropsychological evaluation for ADHD. Patients were divided into groups based on whether they had valid cognitive symptom reporting and valid test performance (n = 117) or invalid cognitive symptom overreporting but valid test performance (n = 37). Scores from multiple symptom and performance validity tests were used to group patients. Using patients' scores from a cognitive concerns self-report measure and composite index of objective performance tests, we created a subjective-objective discrepancy index to quantify the extent of cognitive concerns that exceeded difficulties on objective testing. Various measures were used to assess intrapersonal factors thought to influence the subjective-objective cognitive discrepancy, including demographics, estimated premorbid intellectual ability, internalizing symptoms, somatic symptoms, and perceived social support. RESULTS: Patients reported greater cognitive difficulties on subjective measures than observed on objective testing. The discrepancy between subjective and objective scores was most strongly associated with internalizing and somatic symptoms. These associations were observed in both validity groups. CONCLUSIONS: Subjective cognitive concerns may be more indicative of the extent of internalizing and somatic symptoms than actual cognitive impairment in adults with ADHD, regardless of whether they have valid scores on tests of cognitive symptom overreporting.
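
Editor's note: a subjective-objective discrepancy index of the kind described above is typically built by putting both measures on a common standardized metric and taking the difference. The sketch below is a minimal illustration under that assumption; all data, variable names, and the 0.5 weighting are hypothetical and are not taken from the study.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical data for illustration only (not the study's actual measures).
n = 154
internalizing = rng.normal(55, 10, n)                 # self-reported internalizing symptoms
subjective_concerns = rng.normal(60, 10, n) + 0.5 * (internalizing - 55)  # more symptoms -> more concerns
objective_tests = rng.normal(50, 10, (n, 5))          # five performance-test T-scores

def zscore(x, axis=0):
    return (x - x.mean(axis=axis)) / x.std(axis=axis, ddof=1)

# Put both on a common "difficulty" metric: higher = more cognitive difficulty.
subjective_difficulty = zscore(subjective_concerns)
objective_difficulty = -zscore(zscore(objective_tests).mean(axis=1))

# Discrepancy index: concerns in excess of objectively measured difficulty.
discrepancy = subjective_difficulty - objective_difficulty

r, p = pearsonr(discrepancy, internalizing)
print(f"discrepancy vs. internalizing: r = {r:.2f}, p = {p:.4f}")
```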

2.
Appl Neuropsychol Adult ; : 1-16, 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39264233

ABSTRACT

The Self-Report Symptom Inventory (SRSI) is a novel tool designed to detect symptom overreporting and other forms of noncredible responding. Unlike existing scales, the SRSI includes genuine and pseudosymptoms scales covering cognitive, affective, motor, pain, and post-traumatic stress disorder domains. The present study aims to investigate the psychometric properties of the Italian Version of the SRSI (SRSI-It), in particular, its factor structure, reliability, convergent and discriminant validity, and diagnostic accuracy. Data from 1180 healthy participants showed a hierarchical structure with higher-order constructs for genuine symptoms and pseudosymptoms, each comprising five subscales. The SRSI-It showed a strong convergent validity with the Structured Inventory of Malingered Symptomatology and discriminant validity through low correlations with the Psychopathic Personality Inventory-Revised. Receiver operating characteristic analysis determined cut scores of 6 (95% specificity) and 9 (98% specificity) for pseudosymptoms, with a Ratio Index score of 0.289 (82% specificity). In summary, the SRSI-It appears to be a promising tool for identifying symptom exaggeration in clinical and forensic contexts, ultimately enhancing the quality and reliability of evaluations in these settings.
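
Editor's note: cut scores anchored to a target specificity, as reported above, are usually found by scanning candidate thresholds in the credible-responder group. A minimal sketch of that logic follows; the score distributions are simulated and the numbers are not the SRSI-It data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data only: pseudosymptom raw scores for credible responders
# (low scores expected) and over-reporters (higher scores expected).
credible = rng.poisson(2, 500)
overreporters = rng.poisson(12, 200)

def specificity(cut, credible_scores):
    # Proportion of credible responders scoring below the cut (not flagged).
    return np.mean(credible_scores < cut)

def sensitivity(cut, overreport_scores):
    # Proportion of over-reporters at or above the cut (flagged).
    return np.mean(overreport_scores >= cut)

for target in (0.95, 0.98):
    # Smallest cut score that keeps specificity at or above the target.
    cut = next(c for c in range(0, 60) if specificity(c, credible) >= target)
    print(f"target spec {target:.2f}: cut >= {cut}, "
          f"spec = {specificity(cut, credible):.2f}, "
          f"sens = {sensitivity(cut, overreporters):.2f}")
```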

3.
Clin Neuropsychol ; : 1-13, 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39138860

ABSTRACT

Objective: This study examined the performance validity test (PVT) pass/fail rate in a sample of presurgical epilepsy candidates; assessed whether performance validity was associated with reduced performance across cognitive domains; investigated the relationship between performance validity and self-report mood questionnaires; and assessed whether PVT performance was associated with demographic or clinical factors (i.e. sex, race/ethnicity, age, years of education, reported history of special education, seizure longevity, and number of anti-seizure medications). Methods: One hundred and eighty-three presurgical epilepsy candidates were examined. Each patient's assessment battery included a stand-alone performance validity measure and two embedded validity measures. Results: PVT failure, observed in 10% of this sample, was associated with reduced performance on all neurocognitive measures: Full Scale IQ (FSIQ; r = -0.26), CVLT-II Total Learning (r = -0.36) and Long Delay Free Recall (LDFR; r = -0.38), BVMT-R Delayed Recall (r = -0.28), and Wisconsin Card Sorting Test (Categories Completed; r = -0.32). In addition, PVT failure was associated with elevated scores on the Beck Anxiety Inventory (r = .22) but not on the Beck Depression Inventory (BDI-II; r = .14). Correlations that were significant at the α = 0.05 level maintained significance following post hoc Bonferroni correction. The valid and invalid groups did not differ significantly in sex, race/ethnicity, age, years of education, reported history of special education, seizure longevity, and number of anti-seizure medications. Conclusions: Results from this study suggest that PVT performance was not impacted by demographic or clinical factors and therefore may be a reliable indicator of performance validity in a presurgical epilepsy sample.
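
Editor's note: the correlations above are point-biserial (pass/fail coded as a binary variable) with a Bonferroni adjustment across measures. A small sketch of that analysis follows; the data and effect sizes are simulated, not the study's results.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)

# Illustrative data only: 0 = PVT pass, 1 = PVT fail (~10% failure), plus a
# few cognitive scores that run somewhat lower in the failing group.
n = 183
pvt_fail = rng.binomial(1, 0.10, n)
measures = {
    "FSIQ": rng.normal(100, 15, n) - 8 * pvt_fail,
    "CVLT_Total": rng.normal(50, 10, n) - 7 * pvt_fail,
    "BVMT_Delay": rng.normal(50, 10, n) - 6 * pvt_fail,
}

alpha = 0.05
k = len(measures)  # number of comparisons for the Bonferroni correction
for name, scores in measures.items():
    r, p = pearsonr(pvt_fail, scores)  # point-biserial = Pearson with a binary variable
    print(f"{name}: r = {r:.2f}, uncorrected p = {p:.4f}, "
          f"significant after Bonferroni: {p < alpha / k}")
```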

4.
J Clin Exp Neuropsychol ; 46(6): 535-556, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39120111

ABSTRACT

INTRODUCTION: Intraindividual variability across a battery of neuropsychological tests (IIV-dispersion) can reflect normal variation in scores or arise from cognitive impairment. An alternate interpretation is IIV-dispersion reflects reduced engagement/invalid test data, although extant research addressing this interpretation is significantly limited. METHOD: We used a sample of 97 older adult veterans (mean age: 69.92), predominantly White (57%) or Black/African American (34%) and predominantly cis-gender male (87%). Examinees completed a comprehensive neuropsychological battery, including measures of reduced engagement/invalid test data (a symptom validity test [SVT], multiple performance validity tests [PVTs]), as part of a clinical evaluation. IIV-dispersion was indexed using the coefficient of variance (CoV). We tested 1) the relationships of raw scores and "failures" on SVT/PVTs with IIV-dispersion, 2) the relationship between IIV-dispersion and validity/neurocognitive disorder status, and 3) whether IIV-dispersion discriminated the validity/neurocognitive disorder groups using receiver operating characteristic (ROC) curves. RESULTS: IIV-dispersion was significantly and independently associated with a selection of PVTs, with small to very large effect sizes. Participants with invalid profiles and cognitively impaired participants with valid profiles exhibited medium to large (d = .55-1.09) elevations in IIV-dispersion compared to cognitively unimpaired participants with valid profiles. A non-significant but small to medium (d = .35-.60) elevation in IIV-dispersion was observed for participants with invalid profiles compared to those with a neurocognitive disorder. IIV-dispersion was largely accurate at differentiating participants without a neurocognitive disorder from invalid participants and those with a neurocognitive disorder (areas under the curve [AUCs] = .69-.83), while accuracy was low for differentiating invalid participants from those with a neurocognitive disorder (AUCs = .58-.65). CONCLUSIONS: These preliminary data suggest IIV-dispersion may be sensitive to both neurocognitive disorders and compromised engagement. Clinicians and researchers should exercise due diligence and consider test validity (e.g., PVTs, behavioral signs of engagement) as an alternate explanation prior to interpreting intraindividual variability as an indicator of cognitive impairment.


Subjects
Neuropsychological Tests, Veterans, Humans, Male, Neuropsychological Tests/standards, Female, Aged, Middle Aged, Reproducibility of Results, Cognitive Dysfunction/diagnosis, Cognitive Dysfunction/physiopathology, Cognitive Dysfunction/etiology, Aged, 80 and over, Cognition Disorders/diagnosis, Cognition Disorders/etiology
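
Editor's note: the coefficient of variance used above to index IIV-dispersion is simply each examinee's within-battery standard deviation divided by their within-battery mean. The sketch below computes it and an ROC AUC on simulated data; the group sizes, effect, and scores are hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

# Hypothetical data: group 1 (invalid or impaired) is given more within-person
# scatter across a 10-test T-score battery than group 0 (valid and unimpaired).
n_people, n_tests = 97, 10
group = rng.binomial(1, 0.4, n_people)
spread = np.where(group == 1, 13.0, 8.0)
scores = rng.normal(48, spread[:, None], (n_people, n_tests))

# IIV-dispersion indexed by the coefficient of variance (CoV):
# within-person SD across the battery divided by the within-person mean.
cov = scores.std(axis=1, ddof=1) / scores.mean(axis=1)

# How well does IIV-dispersion separate the two groups?
print(f"AUC = {roc_auc_score(group, cov):.2f}")
```
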
5.
Child Neuropsychol ; : 1-17, 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-39072667

ABSTRACT

This study aimed to validate a novel parent-report measure of ADHD symptom inflation, the Parent-Reported ADHD Symptom Infrequency Scale (PRASIS), in a clinical sample. The PRASIS is composed of an Infrequency subscale and an ADHD subscale. Online participants were assigned to one of three groups: mothers of children with diagnosed ADHD (n = 110), mothers of children with diagnosed ODD and/or anxiety (n = 116), and mothers of children without ADHD, ODD, or anxiety. The third group was then randomized to either receive instructions to complete the questionnaire honestly (controls, n = 164) or to complete the questionnaire as if they were trying to convince a provider that their child has ADHD (simulators, n = 141). Results indicated good to excellent internal consistency (INF α = .83, ADHD Total α = .93); strong convergent validity of the PRASIS ADHD scale with the ADHD Rating Scale-5 (r(529) = .85, p < .001); excellent group discrimination of the PRASIS Infrequency scale and the PRASIS ADHD scale (η2 = 0.38-0.42); and specificity of 86.7%, sensitivity of 67.4%, and an AUC of .86 for the Infrequency scale. Overall, these outcomes supported the utility of the PRASIS in samples including mothers of children with psychiatric diagnoses of ODD and/or anxiety.

6.
Appl Neuropsychol Adult ; : 1-8, 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-39073594

ABSTRACT

Recent reports indicate that the Memory Integrated Language Test (MIL) and Making Change Test Abbreviated Index (MCT-AI), two web-based performance validity tests (PVTs), have good sensitivity and specificity when used independently. This study investigated whether using these PVTs together could improve the detection of invalid performance in a mixed neuropsychiatric sample. Participants were 129 adult outpatients who underwent a neuropsychological evaluation and were classified into valid (n = 104) or invalid (n = 25) performance groups based on several commonly used PVTs. Using cut scores of ≤41 on the MIL and ≥1.05 on the MCT-AI together enhanced classification accuracy, yielding an area under the curve of .84 (95% CI: .75, .93). As compared to using the MIL and MCT-AI independently, the combined use increased the sensitivity from .10-.31 to .70 while maintaining ≥.90 specificity. Findings also indicated that failing either the MIL or MCT-AI was associated with somewhat lower cognitive test scores, but failing both was associated with markedly lower scores. Overall, using the MIL and MCT-AI together may be an effective way to identify invalid test performance during a neuropsychological evaluation. Furthermore, pairing these tests is consistent with current practice guidelines to include multiple PVTs in a neuropsychological test battery.
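
Editor's note: the abstract does not specify how the two cut scores were combined, so the sketch below simply flags failure on either test, one common combination rule, and checks sensitivity and specificity against the criterion grouping. Scores are simulated; only the cut scores mirror those reported above.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative data only; the cut scores (MIL <= 41, MCT-AI >= 1.05) mirror the
# abstract, but the score distributions are simulated.
n_valid, n_invalid = 104, 25
mil = np.r_[rng.normal(48, 4, n_valid), rng.normal(40, 5, n_invalid)]
mct = np.r_[rng.normal(0.85, 0.10, n_valid), rng.normal(1.10, 0.15, n_invalid)]
truth_invalid = np.r_[np.zeros(n_valid, bool), np.ones(n_invalid, bool)]

fail_mil = mil <= 41
fail_mct = mct >= 1.05

def sens_spec(flagged):
    sens = flagged[truth_invalid].mean()        # flagged among truly invalid
    spec = (~flagged[~truth_invalid]).mean()    # not flagged among truly valid
    return sens, spec

for label, flagged in [("MIL alone", fail_mil),
                       ("MCT-AI alone", fail_mct),
                       ("either test failed", fail_mil | fail_mct)]:
    s, sp = sens_spec(flagged)
    print(f"{label:>20}: sensitivity = {s:.2f}, specificity = {sp:.2f}")
```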

7.
Neurooncol Pract ; 11(3): 319-327, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38737617

ABSTRACT

Background: Performance validity tests (PVTs) and symptom validity tests (SVTs) are essential to neuropsychological evaluations, helping ensure findings reflect true abilities or concerns. It is unclear how PVTs and SVTs perform in children who received radiotherapy for brain tumors. Accordingly, we investigated the rate of noncredible performance on validity indicators as well as associations with fatigue and lower intellectual functioning. Methods: Embedded PVTs and SVTs were investigated in 98 patients with pediatric craniopharyngioma undergoing proton radiotherapy (PRT). The contribution of fatigue, sleepiness, and lower intellectual functioning to embedded PVT performance was examined. Further, we investigated PVTs and SVTs in relation to cognitive performance at pre-PRT baseline and change over time. Results: SVTs on parent measures were not an area of concern. PVTs identified 0-31% of the cohort as demonstrating possible noncredible performance at baseline, with stable findings 1 year following PRT. Reliable Digit Span (RDS) showed the highest PVT failure rate; RDS has been criticized for false positives in pediatric populations, especially children with neurological impairment. Objective sleepiness was strongly associated with PVT failure, stressing the need to consider arousal level when interpreting cognitive performance in children with craniopharyngioma. Lower intellectual functioning also needs to be considered when interpreting task engagement indices, as it was strongly associated with PVT failure. Conclusions: Embedded PVTs should be used with caution in pediatric craniopharyngioma patients who have received PRT. Future research should investigate different cut-off scores and validity indicator combinations to best differentiate noncredible performance due to task engagement versus variable arousal and/or lower intellectual functioning.
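
Editor's note: Reliable Digit Span, the embedded indicator singled out above, is commonly computed as the longest forward span with both trials correct plus the longest backward span with both trials correct. The helper below illustrates that common definition on a hypothetical protocol; it is not code from the study.

```python
def reliable_digit_span(forward_trials, backward_trials):
    """Reliable Digit Span (RDS) as commonly defined: the longest forward span
    with BOTH trials correct plus the longest backward span with BOTH trials
    correct. Each argument maps span length -> (trial1_correct, trial2_correct)."""
    def longest_both_correct(trials):
        lengths = [span for span, (t1, t2) in trials.items() if t1 and t2]
        return max(lengths, default=0)
    return longest_both_correct(forward_trials) + longest_both_correct(backward_trials)

# Hypothetical protocol: both 5-digit forward trials passed and both 3-digit
# backward trials passed -> RDS = 5 + 3 = 8.
forward = {3: (True, True), 4: (True, True), 5: (True, True), 6: (True, False)}
backward = {2: (True, True), 3: (True, True), 4: (False, False)}
print(reliable_digit_span(forward, backward))  # prints 8
```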

8.
Clin Neuropsychol ; : 1-14, 2024 May 22.
Article in English | MEDLINE | ID: mdl-38775455

ABSTRACT

OBJECTIVE: The Making Change Test (MCT) is a brief, digitized freestanding performance validity test (PVT) designed for tele-neuropsychology (TeleNP). The objective of this study was to report the initial validation of the MCT in a mixed neuropsychiatric sample referred for neuropsychological evaluation using a known-groups design. METHOD: The sample consisted of 136 adult outpatients who underwent a neuropsychological evaluation. Patients were classified as valid (n = 115) or invalid (n = 21) based on several established PVTs. Two validity indicators were calculated and assessed, including an Accuracy Response-Score and an Abbreviated Index. The Accuracy Response-Score incorporated both response time and errors. The Abbreviated Index aggregated response time and errors across the most sensitive test items in terms of predicting performance validity status. RESULTS: Correlational analyses indicated that the MCT Accuracy Response-Score and Abbreviated Index were more similar to non-memory-based PVTs than memory-based PVTs. Both the MCT Accuracy Response-Score and Abbreviated Index indicated acceptable classification accuracy (area under the curve of .77). The optimal cut score for the MCT Accuracy Response-Score (≥24) yielded a sensitivity of .38 and specificity of .90. The optimal cut score associated with the Abbreviated Index yielded slightly better operating characteristics, with a sensitivity of .50 and specificity of .90. CONCLUSIONS: Initial findings provide support for the criterion and construct validity of the MCT and suggest that this performance validity tool holds promise for TeleNP. However, additional support is necessary before the MCT can be used clinically.
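
Editor's note: the abstract says the Accuracy Response-Score combines response time and errors but does not give the formula. The sketch below shows one generic way such a composite could be formed (summing the two after standardization); it is an assumption for illustration, not the MCT's actual scoring.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative data only: per-examinee average response time and error count.
n = 136
response_time_s = rng.gamma(shape=4, scale=2.0, size=n)   # seconds per item
errors = rng.poisson(1.5, n)

def zscore(x):
    return (x - x.mean()) / x.std(ddof=1)

# Hypothetical composite: higher values = slower and more error-prone responding.
accuracy_response_score = zscore(response_time_s) + zscore(errors)

# Flag cases above an arbitrary, illustrative cut.
cut = 1.5
print(f"flagged: {(accuracy_response_score >= cut).sum()} of {n}")
```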

9.
J Clin Exp Neuropsychol ; 46(2): 95-110, 2024 03.
Article in English | MEDLINE | ID: mdl-38726688

ABSTRACT

Overreporting is a common problem that complicates psychological evaluations. A challenge facing the effective detection of overreporting is that many of the identified strategies (e.g., symptom severity approaches; see Rogers & Bender, 2020) are not incorporated into broadband measures of personality and psychopathology (e.g., Minnesota Multiphasic Personality Inventory family of instruments). While recent efforts have worked to incorporate some of these newer strategies, no such work has been conducted on the MMPI-3. For instance, recent symptom severity approaches have been used to identify patterns of multivariate base rate "skyline" elevations on the BASC, and similar strategies have been adopted into the PAI to measure psychopathology (Multi-Feigning Index; Gaines et al., 2013) and cognitive symptoms (Cognitive Bias Scale of Scales; Boress et al., 2022b). This study used data from a simulation study (n = 318) and an Active-Duty (AD) clinical sample (n = 290) to develop and cross-validate such a scale on the MMPI-2-RF and MMPI-3. Results suggest that the MMPI SOS (Scale of Scales) scores perform comparably to existing measures of overreporting on the MMPI-2-RF and MMPI-3 and incrementally predict a PVT-classified "known-group" of Active Duty service members. Effects were generally large in magnitude. Classification accuracy achieved desired specificity (.90) and approximated expected sensitivity (.30). Implications of these findings are discussed, which emphasize how alternative overreporting detection strategies may be useful to consider for the MMPI. These alternative strategies have room for expansion and refinement.


Subjects
MMPI, Psychometrics, Humans, MMPI/standards, Female, Male, Adult, Middle Aged, Psychometrics/standards, Psychometrics/methods, Psychometrics/instrumentation, Malingering/diagnosis, Reproducibility of Results, Young Adult
10.
Appl Neuropsychol Adult ; : 1-8, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38557276

ABSTRACT

The current study examined whether the Memory Similarities Extended Test (M-SET), a memory test based on the Similarities subtest of the Wechsler Abbreviated Scale of Intelligence, Second Edition (WASI-II), has value in neuropsychological testing. The relationship of M-SET measures of cued recall (CR) and recognition memory (REC) to brain injury severity and memory scores from the Wechsler Memory Scale, Fourth Edition (WMS-IV) was analyzed in examinees with traumatic brain injuries ranging from mild to severe. Examinees who passed standard validity tests were divided into groups with intracranial injury (CT+ve, n = 18) and without intracranial injury (CT-ve, n = 50). In CT+ve only, CR was significantly correlated with Logical Memory I (LMI: rs = .62) and Logical Memory II (LMII: rs = .65). In both groups, there were smaller correlations with delayed visual memory (VRII: rs = .38; rs = .44) and psychomotor speed (Coding: rs = .29; rs = .29). The REC score was neither an indicator of memory ability nor an internal indicator of performance validity. There were no differences in M-SET or WMS-IV scores between the CT-ve and CT+ve groups, and reasons for this are discussed. It is concluded that M-SET has utility as an incidental cued recall measure.

11.
Clin Neuropsychol ; 38(7): 1647-1666, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38351710

ABSTRACT

Objectives: This study investigated the Wechsler Adult Intelligence Scale-Fourth Edition Letter-Number Sequencing (LNS) subtest as an embedded performance validity indicator among adults undergoing an attention-deficit/hyperactivity disorder (ADHD) evaluation, and its potential incremental value over Reliable Digit Span (RDS). Method: This cross-sectional study comprised 543 adults who underwent neuropsychological evaluation for ADHD. Patients were divided into valid (n = 480) and invalid (n = 63) groups based on multiple criterion performance validity tests. Results: LNS total raw scores, age-corrected scaled scores, and age- and education-corrected T-scores demonstrated excellent classification accuracy (area under the curve of .84, .83, and .82, respectively). The optimal cutoff for LNS raw score (≤16), age-corrected scaled score (≤7), and age- and education-corrected T-score (≤36) yielded .51 sensitivity and .94 specificity. Slightly lower sensitivity (.40) and higher specificity (.98) were associated with a more conservative T-score cutoff of ≤33. Multivariate models incorporating both LNS and RDS improved classification accuracy (area under the curve of .86), and LNS scores explained a significant but modest proportion of variance in validity status above and beyond RDS. Chaining LNS T-score of ≤33 with RDS cutoff of ≤7 increased sensitivity to .69 while maintaining ≥.90 specificity. Conclusions: Findings provide preliminary evidence for the criterion and construct validity of LNS as an embedded validity indicator in ADHD evaluations. Practitioners are encouraged to use LNS T-score cutoff of ≤33 or ≤36 to assess the validity of obtained test data. Employing either of these LNS cutoffs with RDS may enhance the detection of invalid performance.


Subjects
Attention Deficit Disorder with Hyperactivity, Wechsler Scales, Humans, Attention Deficit Disorder with Hyperactivity/diagnosis, Male, Female, Adult, Wechsler Scales/standards, Cross-Sectional Studies, Middle Aged, Reproducibility of Results, Young Adult, Sensitivity and Specificity, Psychometrics/standards, Psychometrics/instrumentation, Adolescent
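
Editor's note: the "multivariate model" and "chaining" analyses described above amount to (a) adding LNS to a logistic model that already contains RDS and (b) flagging a case when either published cutoff is met. The sketch below illustrates both steps on simulated data; the scores, distributions, and resulting statistics are hypothetical, while the cutoffs mirror the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)

# Illustrative data only: simulated LNS T-scores and RDS values for valid
# (n = 480) and invalid (n = 63) performers, echoing the abstract's group sizes.
n_valid, n_invalid = 480, 63
lns_t = np.r_[rng.normal(50, 8, n_valid), rng.normal(35, 7, n_invalid)]
rds = np.r_[rng.normal(10, 2, n_valid), rng.normal(7, 2, n_invalid)]
invalid = np.r_[np.zeros(n_valid), np.ones(n_invalid)]

# AUC for RDS alone vs. a model adding LNS (a rough look at incremental value).
auc_rds = roc_auc_score(invalid, -rds)  # lower RDS -> more likely invalid
model = LogisticRegression(max_iter=1000).fit(np.c_[rds, lns_t], invalid)
auc_both = roc_auc_score(invalid, model.predict_proba(np.c_[rds, lns_t])[:, 1])
print(f"AUC (RDS alone) = {auc_rds:.2f}, AUC (RDS + LNS) = {auc_both:.2f}")

# "Chaining" the published cutoffs: flag if either indicator is at/below its cut.
flagged = (lns_t <= 33) | (rds <= 7)
sens = flagged[invalid == 1].mean()
spec = (~flagged)[invalid == 0].mean()
print(f"chained cutoffs: sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```
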
12.
J Clin Exp Neuropsychol ; 46(2): 152-161, 2024 03.
Article in English | MEDLINE | ID: mdl-38353609

ABSTRACT

INTRODUCTION: There are very few symptom validity indices directly examining overreported posttraumatic stress disorder (PTSD) symptomatology, and, until recently, there were no symptom validity indices embedded within the PTSD Checklist for the DSM-5 (PCL-5), which is one of the most commonly used PTSD measures. Given this, the current study sought to develop and cross-validate symptom validity indices for the PCL-5. METHOD: Multiple criterion groups composed of Veteran patients were utilized (N = 210). Patients were determined to be valid or invalid responders based on Personality Assessment Inventory symptom validity indices. Three PCL-5 symptom validity indices were then examined: the PCL-5 Symptom Severity scale (PSS), the PCL-5 Extreme Symptom scale (PES), and the PCL-5 Rare Items scale (PRI). RESULTS: Area under the curve statistics ranged from .78 to .85. The PSS and PES both met classification accuracy statistic goals, with the PES achieving the highest sensitivity rate (.39) when maintaining specificity at .90 or above across all criterion groups. When an ad hoc analysis was performed, which included only patients with exceptionally strong evidence of invalidity, sensitivity rates increased to .60 for the PES while maintaining specificity at .90. CONCLUSIONS: These findings provide preliminary support for new PTSD symptom validity indices embedded within one of the most frequently used PTSD measures.


Subjects
Malingering, Stress Disorders, Post-Traumatic, Veterans, Humans, Stress Disorders, Post-Traumatic/diagnosis, Male, Female, Middle Aged, Adult, Malingering/diagnosis, Reproducibility of Results, Psychiatric Status Rating Scales/standards, Aged, Psychometrics/standards
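
Editor's note: the abstract does not list the items that make up the PSS, PES, or PRI, so the sketch below only illustrates the generic detection strategies those names suggest: counting items endorsed at the maximum rating (extreme-symptom strategy) and counting endorsed items that a reference sample rarely endorses (rare-items strategy). All data and thresholds are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical PCL-5-style data: 20 items rated 0-4.
n_people, n_items = 210, 20
responses = rng.integers(0, 5, size=(n_people, n_items))

# Extreme-symptom strategy: how many items are endorsed at the maximum rating.
extreme_count = (responses == 4).sum(axis=1)

# Rare-items strategy: count endorsed items that a (simulated) reference sample
# rarely endorses at a severe level (base rate < 10%).
item_p = rng.uniform(0.02, 0.50, n_items)           # per-item severe-endorsement rate
reference_severe = rng.random((5000, n_items)) < item_p
rare_items = reference_severe.mean(axis=0) < 0.10

rare_endorsed = ((responses >= 3) & rare_items).sum(axis=1)

print("mean extreme-symptom count:", round(extreme_count.mean(), 2))
print("mean rare-item count:", round(rare_endorsed.mean(), 2))
```
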
13.
J Clin Exp Neuropsychol ; 46(2): 86-94, 2024 03.
Article in English | MEDLINE | ID: mdl-38375629

ABSTRACT

INTRODUCTION: Telehealth assessment (TA) is a quickly emerging practice, offered with increasing frequency across many different clinical contexts. TA is also well-received by most patients, and there are numerous guidelines and training opportunities which can support effective telehealth practice. Although there are extensive recommended practices, these guidelines have rarely been evaluated empirically, particularly on personality measures. While existing research is limited, it does generally support the idea that TA and in-person assessment (IA) produce fairly equitable test scores. The MMPI-3, a recently released and highly popular personality and psychopathology measure, has been the subject of several such experimental or student-based (non-client) studies; however, no study to date has evaluated these trends within a clinical sample. This study empirically tests for differences between TA and IA scores on the MMPI-3 validity scales when following recommended administration procedures. METHOD: Data were from a retrospective chart review. Veterans (n = 550) who underwent psychological assessment in a Veterans Affairs Medical Center ADHD evaluation clinic were contrasted between in-person and telehealth assessment modalities on the MMPI-2-RF and MMPI-3. Groups were compared using t tests, chi-square tests, and base rates. RESULTS: Results suggest that there were minimal differences in elevation rates or mean scores across modality, supporting the use of TA. CONCLUSIONS: This study's findings support the use of the MMPI via TA with ADHD evaluations, Veterans, and in neuro/psychological evaluation settings more generally. Observed elevation rates and mean scores of this study were notably different from those seen in other VA service clinics sampled nationally, which is an area of future investigation.


Subjects
MMPI, Telemedicine, Humans, Male, Telemedicine/standards, Telemedicine/methods, Female, Adult, Middle Aged, Reproducibility of Results, MMPI/standards, Retrospective Studies, Veterans, Attention Deficit Disorder with Hyperactivity/diagnosis
14.
Arch Clin Neuropsychol ; 39(6): 692-701, 2024 Aug 24.
Article in English | MEDLINE | ID: mdl-38366222

ABSTRACT

OBJECTIVE: Adverse childhood experiences (ACEs) are commonly reported in individuals presenting for attention-deficit hyperactivity disorder (ADHD) evaluation. Performance validity tests (PVTs) and symptom validity tests (SVTs) are essential to ADHD evaluations in young adults, but extant research suggests that those who report ACEs may be inaccurately classified as invalid on these measures. The current study aimed to assess the degree to which ACE exposure differentiated PVT and SVT performance and ADHD symptom reporting in a multi-racial sample of adults presenting for ADHD evaluation. METHOD: This study included 170 adults referred for outpatient neuropsychological ADHD evaluation who completed the ACE Checklist and a neurocognitive battery that included multiple PVTs and SVTs. Analysis of variance was used to examine differences in PVT and SVT performance among those with high (≥4) and low (≤3) reported ACEs. RESULTS: Main effects of ACE group were observed, such that the high ACE group demonstrated higher scores on SVTs assessing ADHD symptom over-reporting and infrequent psychiatric and somatic symptoms on the Minnesota Multiphasic Personality Inventory-2-Restructured Form. Conversely, no significant differences emerged in total PVT failures across ACE groups. CONCLUSIONS: Those with high ACE exposure were more likely to have higher scores on SVTs assessing over-reporting and infrequent responses. In contrast, ACE exposure did not affect PVT performance. Thus, ACE exposure should be considered specifically when evaluating SVT performance in the context of ADHD evaluations, and more work is needed to understand factors that contribute to different patterns of symptom reporting as a function of ACE exposure.


Subjects
Adverse Childhood Experiences, Attention Deficit Disorder with Hyperactivity, Neuropsychological Tests, Humans, Attention Deficit Disorder with Hyperactivity/diagnosis, Attention Deficit Disorder with Hyperactivity/ethnology, Male, Female, Adult, Adverse Childhood Experiences/statistics & numerical data, Neuropsychological Tests/standards, Neuropsychological Tests/statistics & numerical data, Young Adult, Middle Aged, Adolescent, Malingering/diagnosis, Reproducibility of Results
15.
Mil Psychol ; 36(2): 192-202, 2024.
Article in English | MEDLINE | ID: mdl-37651693

ABSTRACT

Following the development of the Cognitive Bias Scale (CBS), three other cognitive over-reporting indicators were created. This study cross-validates these new Cognitive Bias Scale of Scales (CB-SOS) measurements in a military sample and contrasts their performance with the CBS. We analyzed data from 288 active-duty soldiers who underwent neuropsychological evaluation. Groups were established based on performance validity testing (PVT) failure. Medium effects (d = .71 to .74) were observed between those passing and failing PVTs. The CB-SOS scales have high specificity (≥.90) but low sensitivity across the suggested cut scores. While all CB-SOS scales were able to achieve .90 specificity, lower cut scores were typically needed to do so. CBS demonstrated incremental validity beyond CB-SOS-1 and CB-SOS-3; only CB-SOS-2 was incremental beyond CBS. In a military sample, the CB-SOS scales have more limited sensitivity than in their original validation, indicating an area of limited utility despite easier calculation. The CBS performs comparably to, if not better than, the CB-SOS scales. CB-SOS-2's differences in performance between this study and its initial validation suggest that its psychometric properties may be sample dependent. Given their ease of calculation and relatively high specificity, our study supports interpreting elevated CB-SOS scores as indicating examinees who are likely to fail concurrent PVTs.


Subjects
Military Personnel, Humans, Military Personnel/psychology, Neuropsychological Tests, Personality, Personality Assessment, Cognition
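
Editor's note: the group contrasts above rest on Cohen's d (pooled-SD version) and on cut scores anchored to roughly .90 specificity in the PVT-pass group. The sketch below computes both on simulated data; the scale scores and group sizes are hypothetical, not the study's.

```python
import numpy as np

rng = np.random.default_rng(8)

# Illustrative data only: an over-reporting scale score for examinees who
# passed vs. failed performance validity testing.
pvt_pass = rng.normal(55, 10, 230)
pvt_fail = rng.normal(63, 10, 58)

def cohens_d(a, b):
    """Cohen's d with a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    return (b.mean() - a.mean()) / pooled_sd

print(f"d = {cohens_d(pvt_pass, pvt_fail):.2f}")

# Specificity-anchored cut: smallest score keeping ~.90 specificity among
# PVT passers, then check sensitivity in the failing group.
cut = np.quantile(pvt_pass, 0.90)
print(f"cut >= {cut:.1f}: specificity = {(pvt_pass < cut).mean():.2f}, "
      f"sensitivity = {(pvt_fail >= cut).mean():.2f}")
```
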
16.
Clin Neuropsychol ; 38(3): 738-762, 2024 04.
Article in English | MEDLINE | ID: mdl-37615421

ABSTRACT

Objective: The present study aims to evaluate the classification accuracy and resistance to coaching of the Inventory of Problems-29 (IOP-29) and the IOP-Memory (IOP-M) with a Spanish sample of patients diagnosed with mild traumatic brain injury (mTBI) and healthy participants instructed to feign. Method: Using a simulation design, 37 outpatients with mTBI (clinical control group) and 213 non-clinical instructed feigners under several coaching conditions completed the Spanish versions of the IOP-29, IOP-M, Structured Inventory of Malingered Symptomatology, and Rivermead Post Concussion Symptoms Questionnaire. Results: The IOP-29 discriminated well between clinical patients and instructed feigners, with an excellent classification accuracy for the recommended cutoff score (FDS ≥ .50; sensitivity = 87.10% for the coached group and 89.09% for the uncoached group; specificity = 95.12%). The IOP-M also showed an excellent classification accuracy (cutoff ≤ 29; sensitivity = 87.27% for the coached group and 93.55% for the uncoached group; specificity = 97.56%). Both instruments proved to be resistant to symptom information coaching and performance warnings. Conclusions: The results confirm that both of the IOP measures offer a similarly valid but different perspective compared with the SIMS when assessing the credibility of symptoms of mTBI. The encouraging findings indicate that both tests are a valuable addition to the symptom validity practices of forensic professionals. Additional research in multiple contexts and with diverse conditions is warranted.


Subjects
Brain Concussion, Mentoring, Humans, Brain Concussion/complications, Brain Concussion/diagnosis, Neuropsychological Tests, Sensitivity and Specificity, Malingering/diagnosis, Reproducibility of Results
17.
Arch Clin Neuropsychol ; 39(1): 35-50, 2024 Jan 19.
Article in English | MEDLINE | ID: mdl-37449530

ABSTRACT

OBJECTIVE: Marketed as a validity test that detects feigning of posttraumatic stress disorder (PTSD), the Morel Emotional Numbing Test for PTSD (MENT) instructs examinees that PTSD may negatively affect performance on the measure. This study explored the potential that MENT performance depends on inclusion of "PTSD" in its instructions and the nature of the MENT as a performance validity versus a symptom validity test (PVT/SVT). METHOD: A total of 358 participants completed the MENT as part of a clinical neuropsychological evaluation. Participants were either administered the MENT with the standard instructions (SIs) that referenced "PTSD" or revised instructions (RIs) that did not. Others were administered instructions that referenced "ADHD" rather than PTSD (AI). Comparisons were conducted on those who presented with concerns for potential traumatic stress-related symptoms (SI vs. RI-1) or attention deficit (AI vs. RI-2). RESULTS: Participants in either the SI or AI condition produced more MENT errors than those in their respective RI conditions. The relationship between MENT errors and other S/PVTs was significantly stronger in the SI vs. RI-1 comparison, such that errors correlated with self-reported trauma-related symptoms in the SI but not RI-1 condition. MENT failure also predicted PVT failure at nearly four times the rate of SVT failure. CONCLUSIONS: Findings suggest that the MENT relies on overt reference to PTSD in its instructions, which is linked to the growing body of literature on "diagnosis threat" effects. The MENT may be considered a measure of suggestibility. Ethical considerations are discussed, as are the construct(s) measured by PVTs versus SVTs.


Subjects
Malingering, Stress Disorders, Post-Traumatic, Humans, Neuropsychological Tests, Malingering/diagnosis, Malingering/psychology, Emotions, Stress Disorders, Post-Traumatic/diagnosis, Stress Disorders, Post-Traumatic/psychology
18.
J Int Neuropsychol Soc ; 30(4): 410-419, 2024 May.
Article in English | MEDLINE | ID: mdl-38014547

ABSTRACT

OBJECTIVE: Performance validity (PVTs) and symptom validity tests (SVTs) are necessary components of neuropsychological testing to identify suboptimal performances and response bias that may impact diagnosis and treatment. The current study examined the clinical and functional characteristics of veterans who failed PVTs and the relationship between PVT and SVT failures. METHOD: Five hundred and sixteen post-9/11 veterans participated in clinical interviews, neuropsychological testing, and several validity measures. RESULTS: Veterans who failed 2+ PVTs performed significantly worse than veterans who failed one PVT in verbal memory (Cohen's d = .60-.69), processing speed (Cohen's d = .68), working memory (Cohen's d = .98), and visual memory (Cohen's d = .88-1.10). Individuals with 2+ PVT failures had greater posttraumatic stress (PTS; ß = 0.16; p = .0002), and worse self-reported depression (ß = 0.17; p = .0001), anxiety (ß = 0.15; p = .0007), sleep (ß = 0.10; p = .0233), and functional outcomes (ß = 0.15; p = .0009) compared to veterans who passed PVTs. 7.8% of veterans failed the SVT (Validity-10; ≥19 cutoff); multiple PVT failures were significantly associated with Validity-10 failure at the ≥19 and ≥23 cutoffs (ps < .0012). The Validity-10 had moderate correspondence in predicting 2+ PVT failures (AUC = 0.83; 95% CI = 0.76, 0.91). CONCLUSION: PVT failures are associated with psychiatric factors, but not traumatic brain injury (TBI). PVT failures predict SVT failure and vice versa. Standard care should include SVTs and PVTs in all clinical assessments, not just neuropsychological assessments, particularly in clinically complex populations.


Subjects
Brain Injuries, Traumatic, Veterans, Humans, Veterans/psychology, Neuropsychological Tests, Anxiety/diagnosis, Anxiety/etiology, Memory, Short-Term, Reproducibility of Results, Malingering/diagnosis
19.
Arch Clin Neuropsychol ; 39(4): 454-463, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38102764

ABSTRACT

OBJECTIVE: To examine the normal frequency of obtaining one or more scores considered potentially problematic based on normative comparisons when completing the NIH Toolbox Emotion Battery (NIHTB-EB). METHOD: Participants (N = 753; ages 18-85, 62.4% women, 66.4% non-Hispanic White) from the NIHTB norming study completed 17 scales of emotional functioning fitting into three subdomains (i.e., Negative Affect, Psychological Well-being, Social Satisfaction). Scores were considered potentially problematic if they were 1 SD above/below the mean, depending on the orientation of the scale, and cutoffs for 1.5 and 2 SD were also included for reference. Multivariate base rates quantified the rate at which participants obtained one or more potentially problematic scale or subdomain scores. RESULTS: The proportion of participants obtaining one or more potentially problematic scores on the NIHTB-EB scales and subdomains was 61.2% and 23.2%, respectively. Participants who were younger (i.e., 18-49) or had less education had higher rates of potentially problematic scores within specific subdomains. There were no significant differences by sex or race/ethnicity. CONCLUSIONS: Elevated scores on the NIHTB-EB were common in the normative sample and related to education/age. The multivariate base rates provided indicate obtaining one or more potentially problematic scores on the NIHTB-EB is broadly normal among adults, which may guard against overinterpreting a single score as clinically significant. These base rates should be considered in the context of other assessment findings, such as interviews, medical history or informant reports, to ensure that true emotional problems are not dismissed, and normal variation in emotional functioning is not pathologized.


Subjects
Emotions, National Institutes of Health (U.S.), Humans, Female, Male, Middle Aged, Aged, Adult, Adolescent, United States, Young Adult, Aged, 80 and over, Emotions/physiology, Neuropsychological Tests/statistics & numerical data, Neuropsychological Tests/standards, Reference Values, Multivariate Analysis
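
Editor's note: a multivariate base rate, as used above, is simply the proportion of a normative sample with at least one score beyond a chosen cutoff across the whole set of scales. The sketch below illustrates the computation on simulated, independent T-scores; real scales are correlated, which is why the observed rate (61.2%) is lower than the independence-based figure this toy example produces.

```python
import numpy as np

rng = np.random.default_rng(9)

# Illustrative only: 17 emotional-functioning T-scores per person, treated as
# independent and, for simplicity, all scored so that higher = worse.
n_people, n_scales = 753, 17
t_scores = rng.normal(50, 10, (n_people, n_scales))

for sd_cut in (1.0, 1.5, 2.0):
    # "Potentially problematic" = at least sd_cut SDs in the problematic direction.
    flagged = t_scores >= 50 + sd_cut * 10
    base_rate = (flagged.sum(axis=1) >= 1).mean()
    print(f">= 1 scale beyond {sd_cut} SD: {base_rate:.1%} of the sample")
```
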
20.
Article in English | MEDLINE | ID: mdl-38073319

ABSTRACT

OBJECTIVE: The objective of this study was to determine base rates of response bias in veterans and service members (SM) referred specifically for attention-deficit/hyperactivity disorder (ADHD) evaluation. METHOD: Observational study of various performance validity tests (PVTs) and symptom validity tests (SVTs) in a sample of SMs (n = 94) and veterans (n = 504) referred for clinical evaluation of ADHD. RESULTS: SVT and PVT failure rates were similar between the samples, but they were lower than rates reported in previous Veterans Affairs (VA) and SM studies that were not exclusive to ADHD evaluations. Invalid reporting across all SVT scales on the Minnesota Multiphasic Personality Inventory and Personality Assessment Inventory was relatively uncommon, with rates of invalid scores falling below 7%. In both samples, free-standing PVTs were failed at a rate of about 22%. CONCLUSIONS: Although the base rates of PVT and SVT failures in ADHD-specific evaluations were lower than previously published data on non-ADHD-specific evaluations in veterans and SMs, the current study continues to support the inclusion of these measures.
