Results 1 - 2 of 2
1.
J Low Genit Tract Dis; 28(3): 224-230, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38713522

ABSTRACT

OBJECTIVE: To develop a deep learning classifier that improves the accuracy of colposcopic impression.

METHODS: Colposcopy images taken 56 seconds after acetic acid application were processed by a cervix detection algorithm to identify the cervical region. We optimized models based on the SegFormer architecture to classify each cervix as high-grade or negative/low-grade. The data were split into histologically stratified, random training, validation, and test subsets (80%-10%-10%). We replicated a 10-fold experiment to align with a prior study in which expert reviewers analyzed the same images. To evaluate the model's robustness across cameras, we retrained it after dividing the dataset by camera type. We then retrained the model on a new, histologically stratified random data split and combined its outputs with patients' age and referral data to train a Gradient Boosted Tree model for final classification. Model accuracy was assessed against histology by the area under the receiver operating characteristic curve (AUC), Youden's index (YI), sensitivity, and specificity.

RESULTS: Of 5,485 colposcopy images, 4,946 with histology and a visible cervix were used. The model's average performance in the 10-fold experiment was AUC = 0.75 and YI = 0.37 (sensitivity = 63%, specificity = 74%), outperforming the experts' average YI of 0.16. Transfer across camera types was effective (AUC = 0.70, YI = 0.33). Integrating image-based predictions with referral data improved performance to AUC = 0.81 and YI = 0.46. Using model predictions alongside the original colposcopic impression boosted overall performance further.

CONCLUSIONS: Deep learning cervical image classification was robust and outperformed experts. Further improved by the inclusion of additional patient information, it shows potential for clinical utility as a complement to colposcopy.
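A minimal sketch of the fusion-and-evaluation step described above: combining an image model's output with age and referral data in a gradient boosted tree, then scoring with AUC and Youden's index. The paper's exact GBT implementation, feature encoding, and data are not specified here; scikit-learn's GradientBoostingClassifier and synthetic arrays (image_probs, ages, referral_codes, labels) serve as hypothetical stand-ins.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

def youden_summary(y_true, y_score):
    """AUC plus Youden's index (J = sensitivity + specificity - 1) at the
    ROC threshold that maximizes J, with the matching sensitivity/specificity."""
    auc = roc_auc_score(y_true, y_score)
    fpr, tpr, _ = roc_curve(y_true, y_score)
    j = tpr - fpr                                 # Youden's J at each threshold
    best = int(np.argmax(j))
    return auc, j[best], tpr[best], 1.0 - fpr[best]

# Synthetic stand-ins: per-image high-grade probability from the image model,
# patient age, and an integer-encoded referral indication.
rng = np.random.default_rng(0)
n = 500
labels = rng.integers(0, 2, n)
image_probs = np.clip(labels * 0.3 + rng.normal(0.4, 0.2, n), 0.0, 1.0)
ages = rng.integers(21, 70, n)
referral_codes = rng.integers(0, 3, n)

# Fuse the image score with the tabular features in a gradient boosted tree.
X = np.column_stack([image_probs, ages, referral_codes])
X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.2, stratify=labels, random_state=0)
gbt = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc, yi, sens, spec = youden_summary(y_te, gbt.predict_proba(X_te)[:, 1])
```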


Subject(s)
Cervix Uteri, Colposcopy, Deep Learning, Uterine Cervical Neoplasms, Humans, Female, Colposcopy/methods, Cervix Uteri/pathology, Uterine Cervical Neoplasms/diagnosis, Uterine Cervical Neoplasms/pathology, Uterine Cervical Neoplasms/classification, Adult, Middle Aged, Sensitivity and Specificity, Image Processing, Computer-Assisted/methods, Young Adult, Aged
2.
IEEE Access; 11: 21300-21312, 2023.
Article in English | MEDLINE | ID: mdl-37008654

ABSTRACT

Artificial Intelligence (AI)-based medical computer vision algorithms depend on annotations and labels for training and evaluation. However, variability between expert annotators introduces noise into the training data that can adversely impact the performance of AI algorithms. This study aims to assess, illustrate, and interpret inter-annotator agreement among multiple expert annotators segmenting the same lesion(s)/abnormalities on medical images. We propose three approaches for the qualitative and quantitative assessment of inter-annotator agreement: 1) a common agreement heatmap and a ranking agreement heatmap; 2) the extended Cohen's kappa and Fleiss' kappa coefficients for quantitative evaluation and interpretation of inter-annotator reliability; and 3) as a parallel step, the Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm to generate ground truth for training AI models, with Intersection over Union (IoU), sensitivity, and specificity computed to assess inter-annotator reliability and variability. Experiments were performed on two datasets, cervical colposcopy images from 30 patients and chest X-ray images from 336 tuberculosis (TB) patients, demonstrating the consistency of inter-annotator reliability assessment and the importance of combining different metrics to avoid biased assessment.
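A minimal sketch of pixel-wise agreement metrics on binary segmentation masks, under stated assumptions: the mask data is synthetic, and a simple majority vote stands in for the STAPLE consensus (STAPLE itself iteratively weights annotators by estimated performance; this is not that algorithm). Cohen's and Fleiss' kappa come from scikit-learn and statsmodels, respectively.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)
masks = [rng.integers(0, 2, (64, 64)) for _ in range(4)]  # 4 synthetic annotators

# Common agreement heatmap: per-pixel count of annotators marking "lesion".
heatmap = np.sum(np.stack(masks), axis=0)

# Mean pairwise Cohen's kappa over flattened masks.
flat = [m.ravel() for m in masks]
pairwise = [cohen_kappa_score(flat[i], flat[j])
            for i in range(len(flat)) for j in range(i + 1, len(flat))]
mean_cohen = np.mean(pairwise)

# Fleiss' kappa: treat each pixel as a subject rated by every annotator.
table, _ = aggregate_raters(np.stack(flat, axis=1))  # pixels x categories
fleiss = fleiss_kappa(table)

# Majority-vote consensus (stand-in for STAPLE), then per-annotator IoU,
# sensitivity, and specificity against that consensus.
consensus = (heatmap >= len(masks) / 2).astype(int)
for m in masks:
    tp = np.sum((m == 1) & (consensus == 1))
    fp = np.sum((m == 1) & (consensus == 0))
    fn = np.sum((m == 0) & (consensus == 1))
    tn = np.sum((m == 0) & (consensus == 0))
    iou = tp / (tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
```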
