1.
Article in English | MEDLINE | ID: mdl-38085328

ABSTRACT

The use of Structured Diagnostic Assessments (SDAs) is a solution to unreliability in psychiatry and the gold standard for diagnosis. However, apart from studies conducted between the 1950s and 1970s, reliability without SDAs (NSDA) is seldom tested, especially outside Western, Educated, Industrialized, Rich, and Democratic (WEIRD) countries. We aimed to measure inter-examiner reliability of NSDA diagnoses of psychiatric disorders, comparing diagnostic agreement after a change of clinician in an outpatient academic setting. We computed inter-rater kappa across 8 diagnostic groups: Depression (DD: F32, F33), Anxiety-Related Disorders (ARD: F40-F49, F50-F59), Personality Disorders (PD: F60-F69), Bipolar Disorder (BD: F30, F31, F34.0, F38.1), Organic Mental Disorders (Org: F00-F09), Neurodevelopmental Disorders (ND: F70-F99), and Schizophrenia Spectrum Disorders (SSD: F20-F29). Cohen's kappa measured agreement between groups, and Bhapkar's test assessed whether any diagnostic group had a higher tendency to change after a new diagnostic assessment. We analyzed 739 reevaluation pairs from 99 subjects who attended IPUB's outpatient clinic. Overall inter-rater kappa was moderate, and no group showed a different tendency to change. NSDA evaluation was moderately reliable, but the absence of some prevalent diagnostic hypotheses within the pairs raises concerns about NSDA sensitivity to certain diagnoses. Diagnostic momentum bias (a tendency to keep the last diagnosis observed) may have inflated the observed agreement. This research was approved by IPUB's ethics committee, registered under CAAE 33603220.1.0000.5263 and UTN U1111-1260-1212.
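The inter-rater agreement statistic used above can be illustrated with a minimal sketch of Cohen's kappa: observed agreement corrected for the agreement expected by chance from each rater's marginal label frequencies. The diagnostic-group labels below are toy data for illustration, not values from the study.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items.

    kappa = (p_observed - p_expected) / (1 - p_expected), where
    p_expected comes from the product of the raters' marginal frequencies.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # proportion of items where the two raters agree
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement from marginal label frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_exp = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

# toy example: two clinicians assigning diagnostic-group labels
a = ["DD", "ARD", "DD", "SSD", "BD", "DD", "ARD", "SSD"]
b = ["DD", "ARD", "PD", "SSD", "BD", "DD", "DD", "SSD"]
print(round(cohens_kappa(a, b), 3))  # → 0.667
```

With 6/8 raw agreement and 0.25 chance agreement, kappa lands at 0.667, in the "substantial" band of the conventional Landis-Koch interpretation; the study's overall kappa was reported only as "moderate".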

2.
Front Psychiatry ; 13: 793743, 2022.
Article in English | MEDLINE | ID: mdl-35308869

ABSTRACT

Background and Objectives: The use of "operational criteria" is a solution to low reliability, in contrast to the prototypical classification used in clinics. We aimed to measure the reliability of prototypical and ICD-10 diagnoses. Methods: This is a retrospective study with a convenience sample of subjects treated in a university clinic. Residents reviewed their diagnoses using ICD-10 criteria, and Cohen's kappa statistic was computed between operational and prototype diagnoses. Results: Three out of 30 residents participated, reviewing 146 subjects under their care. Diagnoses were grouped into eight classes: organic disorders (F00-F09), substance disorders (F10-F19), schizophrenia spectrum disorders (F20-F29), bipolar affective disorder (F30, F31, F34.0, F38.1), depression (F32, F33), anxiety-related disorders (F40-F49), personality disorders (F60-F69), and neurodevelopmental disorders (F70-F99). Overall agreement was high [K = 0.77, 95% confidence interval (CI) = 0.69-0.85], with lower agreement for personality disorders (K = 0.58, 95% CI = 0.38-0.76) and higher agreement for schizophrenia spectrum disorders (K = 0.91, 95% CI = 0.82-0.99). Discussion: Use of ICD-10 criteria did not significantly increase the number of diagnoses and changed few of them, implying that operational criteria added little to clinical opinion. This suggests that reliability among interviewers depends more on information gathering than on diagnostic definitions. It also suggests that diagnostic criteria are incorporated during training and thereby become part of the clinician's prototypes. Residents should be trained in the use of diagnostic categories, but presence/absence checking is not needed to achieve operationally compatible diagnoses.
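The kappa confidence intervals reported above can be approximated nonparametrically. A minimal sketch, assuming a simple percentile bootstrap over rating pairs (the labels and sample below are hypothetical, not the study's data):

```python
import random
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    fa, fb = Counter(a), Counter(b)
    p_exp = sum(fa[c] * fb[c] for c in fa) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

def kappa_bootstrap_ci(a, b, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for kappa, resampling rating pairs."""
    rng = random.Random(seed)
    n = len(a)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        ra, rb = [a[i] for i in idx], [b[i] for i in idx]
        # skip degenerate resamples where kappa is undefined (p_exp == 1)
        if len(set(ra)) > 1 or len(set(rb)) > 1:
            stats.append(cohens_kappa(ra, rb))
    stats.sort()
    lo = stats[int(alpha / 2 * len(stats))]
    hi = stats[int((1 - alpha / 2) * len(stats)) - 1]
    return lo, hi

# toy usage: repeated hypothetical diagnostic labels
a = ["DD", "ARD", "DD", "SSD", "BD", "DD", "ARD", "SSD"] * 5
b = ["DD", "ARD", "PD", "SSD", "BD", "DD", "DD", "SSD"] * 5
print(cohens_kappa(a, b), kappa_bootstrap_ci(a, b))
```

The percentile bootstrap is only one of several options; the study does not state which interval method it used, and asymptotic formulas for the standard error of kappa would give similar results at these sample sizes.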
