1.
Ophthalmology. 2016 Nov;123(11):2338-2344.
Article in English | MEDLINE | ID: mdl-27591053

ABSTRACT

PURPOSE: To identify patterns of interexpert discrepancy in plus disease diagnosis in retinopathy of prematurity (ROP).

DESIGN: We developed 2 datasets of clinical images as part of the Imaging and Informatics in ROP study and determined a consensus reference standard diagnosis (RSD) for each image based on 3 independent image graders and the clinical examination results. We recruited 8 expert ROP clinicians to classify these images and compared the distribution of classifications between experts and the RSD.

PARTICIPANTS: Eight experts, each with more than 10 years of clinical ROP experience and more than 5 peer-reviewed ROP publications, who analyzed images obtained during routine ROP screening in neonatal intensive care units.

METHODS: Expert classification of images of plus disease in ROP.

MAIN OUTCOME MEASURES: Interexpert agreement (weighted κ statistic) and agreement and bias on ordinal classification between experts (analysis of variance [ANOVA]) and with the RSD (percent agreement).

RESULTS: Interexpert agreement on diagnostic classifications between the 8 experts and the RSD was variable (weighted κ, 0-0.75; mean, 0.30). Agreement with the RSD ranged from 80% to 94% for the 100-image dataset and from 29% to 79% for the 34-image dataset. However, when images were ranked in order of disease severity (by average expert classification), the pattern of expert classification revealed a consistent systematic bias for each expert, consistent with unique cut points for the diagnosis of plus disease and preplus disease. The 2-way ANOVA model suggested a highly significant effect of both image and user on the average score (dataset A: P < 0.05, adjusted R² = 0.82; dataset B: P < 0.05, adjusted R² = 0.6615).

CONCLUSIONS: There is wide variability in the classification of plus disease by ROP experts, which occurs because experts have different cut points for the amount of vascular abnormality required for the presence of plus and preplus disease. This has important implications for research, teaching, and patient care in ROP, and suggests that a continuous ROP plus disease severity score may more accurately reflect the behavior of expert ROP clinicians and may better standardize classification in the future.
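A note on the agreement statistic reported above: weighted κ extends Cohen's κ to ordinal scales such as normal / preplus / plus, penalizing distant disagreements (normal vs. plus) more than adjacent ones. The sketch below is a minimal illustration with hypothetical grades and quadratic weighting (the abstract does not state which weighting scheme the study used); it is not the study's actual analysis.

```python
# Minimal sketch: interexpert agreement via weighted kappa.
# The grades below are hypothetical; the i-ROP datasets are not reproduced here.
from sklearn.metrics import cohen_kappa_score

# Ordinal plus-disease classifications of the same 10 images by two experts:
# 0 = normal, 1 = preplus, 2 = plus.
expert_a = [0, 0, 1, 1, 2, 2, 1, 0, 2, 1]
expert_b = [0, 1, 1, 2, 2, 2, 0, 0, 1, 1]

# Quadratic weighting penalizes a normal-vs-plus disagreement more heavily
# than a disagreement between adjacent categories, which suits an ordinal scale.
kappa = cohen_kappa_score(expert_a, expert_b, weights="quadratic")
print(f"weighted kappa: {kappa:.2f}")
```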


Subject(s)
Neonatal Screening/methods; Retina/diagnostic imaging; Retinal Vessels/diagnostic imaging; Retinopathy of Prematurity/diagnosis; Diagnosis, Differential; Female; Humans; Infant, Newborn; Male; Photography; ROC Curve; Reproducibility of Results; Retinopathy of Prematurity/classification
2.
Ophthalmology. 2016 Nov;123(11):2345-2351.
Article in English | MEDLINE | ID: mdl-27566853

ABSTRACT

PURPOSE: To determine expert agreement on relative retinopathy of prematurity (ROP) disease severity, to determine whether computer-based image analysis can model relative disease severity, and to propose consideration of a more continuous severity score for ROP.

DESIGN: We developed 2 databases of clinical images of varying disease severity (100 images and 34 images) as part of the Imaging and Informatics in ROP (i-ROP) cohort study and recruited expert physician, nonexpert physician, and nonphysician graders to classify and perform pairwise comparisons on both databases.

PARTICIPANTS: Six expert ROP clinician-scientists, each with a minimum of 10 years of clinical ROP experience and 5 ROP publications, and 5 image graders (3 physicians and 2 nonphysicians) who analyzed images obtained during routine ROP screening in neonatal intensive care units.

METHODS: Images in both databases were ranked by average disease classification (classification ranking), by pairwise comparison using the Elo rating method (comparison ranking), and by correlation with the i-ROP computer-based image analysis system.

MAIN OUTCOME MEASURES: Interexpert agreement (weighted κ statistic) compared with the correlation coefficient (CC) between experts on pairwise comparisons, and correlation between expert rankings and computer-based image analysis modeling.

RESULTS: Interexpert agreement on diagnostic classification of disease (plus, preplus, or normal) among the 6 experts was variable (mean weighted κ, 0.27; range, 0.06-0.63), but correlation between experts on comparison ranking of disease severity was good (mean CC, 0.84; range, 0.74-0.93) on the set of 34 images. Comparison ranking provided a severity ranking that was in good agreement with the ranking obtained by classification ranking (CC, 0.92). Comparison ranking on the larger dataset by both expert and nonexpert graders demonstrated good correlation (mean CC, 0.97; range, 0.95-0.98). The i-ROP system was able to model this continuous severity with good correlation (CC, 0.86).

CONCLUSIONS: Experts diagnose plus disease on a continuum, with poor absolute agreement on classification but good relative agreement on disease severity. These results suggest that the use of pairwise rankings and a continuous severity score, such as that provided by the i-ROP system, may improve agreement on disease severity in the future.
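The Elo rating method referenced above converts pairwise "which image is more severe?" judgments into a continuous severity ranking. Below is a minimal sketch using standard chess-style parameters (base rating 1500, K = 32) and made-up comparisons; the abstract does not report the study's actual parameters or implementation.

```python
# Minimal sketch: Elo ratings from pairwise severity comparisons.
# Image names, comparisons, and parameters are hypothetical.
def elo_update(r_winner: float, r_loser: float, k: float = 32.0):
    """Update two Elo ratings after the 'winner' was judged more severe."""
    expected_win = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    r_winner += k * (1.0 - expected_win)
    r_loser -= k * (1.0 - expected_win)
    return r_winner, r_loser

# Each tuple means: the grader judged the first image more severe.
comparisons = [("img03", "img01"), ("img03", "img02"), ("img02", "img01")]

ratings = {img: 1500.0 for pair in comparisons for img in pair}
for more_severe, less_severe in comparisons:
    ratings[more_severe], ratings[less_severe] = elo_update(
        ratings[more_severe], ratings[less_severe]
    )

# Higher rating = judged more severe; the sorted ratings give a continuous ranking.
for img, rating in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(img, round(rating, 1))
```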


Subject(s)
Clinical Competence; Diagnostic Techniques, Ophthalmological/trends; Image Processing, Computer-Assisted/methods; Retina/diagnostic imaging; Retinopathy of Prematurity/diagnosis; Humans; Infant, Newborn; ROC Curve; Reproducibility of Results; Retinopathy of Prematurity/classification; Severity of Illness Index