1.
Eur Radiol; 33(9): 6582-6591, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37042979

ABSTRACT

OBJECTIVES: While fully supervised learning can yield high-performing segmentation models, the effort required to manually segment large training sets limits practical utility. We investigate whether data-mined line annotations can facilitate brain MRI tumor segmentation model development without requiring manually segmented training data.

METHODS: In this retrospective study, a tumor detection model trained using clinical line annotations mined from PACS was combined with unsupervised segmentation to generate pseudo-masks of enhancing tumors on T1-weighted post-contrast images (9911 image slices; 3449 adult patients). Baseline segmentation models were trained on these pseudo-masks and employed within a semi-supervised learning (SSL) framework to refine them. After each self-refinement cycle, a new model was trained and tested on a held-out set of 319 manually segmented image slices (93 adult patients), with SSL cycles continuing until the Dice similarity coefficient (DSC) peaked. DSCs were compared using bootstrap resampling. Using the best-performing models, two inference methods were compared: (1) conventional full-image segmentation, and (2) a hybrid method augmenting full-image segmentation with detection plus image-patch segmentation.

RESULTS: Baseline segmentation models achieved DSCs of 0.768 (U-Net), 0.831 (Mask R-CNN), and 0.838 (HRNet), improving with self-refinement to 0.798, 0.871, and 0.873 (each p < 0.001), respectively. Hybrid inference outperformed full-image segmentation alone: DSC 0.884 (Mask R-CNN, hybrid) vs. 0.873 (HRNet, full image), p < 0.001.

CONCLUSIONS: Line annotations mined from PACS can be harnessed within an automated pipeline to produce accurate brain MRI tumor segmentation models without manually segmented training data, providing a mechanism to rapidly establish tumor segmentation capabilities across radiology modalities.

KEY POINTS:
• A brain MRI tumor detection model trained using clinical line-measurement annotations mined from PACS was leveraged to automatically generate tumor segmentation pseudo-masks.
• An iterative self-refinement process automatically improved pseudo-mask quality, with the best-performing segmentation pipeline achieving a Dice score of 0.884 on a held-out test set.
• Tumor line-measurement annotations generated in routine clinical radiology practice can be harnessed to develop high-performing segmentation models without manually segmented training data, providing a mechanism to rapidly establish tumor segmentation capabilities across radiology modalities.
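The iterative self-refinement described above amounts to retraining a segmentation model on its own predictions and keeping the best checkpoint by held-out Dice. The sketch below is a hypothetical illustration of that loop, not the authors' code: train_fn, predict_fn, and all variable names are assumed placeholders for whatever training and inference routines are actually used.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

def self_refine(train_images, pseudo_masks, val_images, val_masks,
                train_fn, predict_fn, max_cycles: int = 10):
    """Retrain on refreshed pseudo-masks until the held-out mean DSC peaks.

    train_fn(images, masks) -> model            (hypothetical trainer)
    predict_fn(model, images) -> list of masks  (hypothetical inference)
    """
    best_model, best_dsc = None, -1.0
    for cycle in range(max_cycles):
        model = train_fn(train_images, pseudo_masks)
        val_pred = predict_fn(model, val_images)
        dsc = float(np.mean([dice(p, t) for p, t in zip(val_pred, val_masks)]))
        print(f"cycle {cycle}: held-out mean DSC = {dsc:.3f}")
        if dsc <= best_dsc:
            break  # DSC has peaked; keep the previous model
        best_model, best_dsc = model, dsc
        # Replace the training pseudo-masks with the model's own predictions
        pseudo_masks = predict_fn(model, train_images)
    return best_model, best_dsc
```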


Subject(s)
Brain Neoplasms; Image Processing, Computer-Assisted; Adult; Humans; Image Processing, Computer-Assisted/methods; Retrospective Studies; Magnetic Resonance Imaging/methods; Brain Neoplasms/diagnostic imaging; Brain/diagnostic imaging
2.
Radiology; 303(1): 80-89, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35040676

ABSTRACT

Background Artificial intelligence (AI) applications for cancer imaging conceptually begin with automated tumor detection, which can provide the foundation for downstream AI tasks. However, supervised training requires many image annotations, and performing dedicated post hoc image labeling is burdensome and costly.

Purpose To investigate whether clinically generated image annotations can be data mined from the picture archiving and communication system (PACS), automatically curated, and used for semisupervised training of a brain MRI tumor detection model.

Materials and Methods In this retrospective study, the cancer center PACS was mined for brain MRI scans acquired between January 2012 and December 2017, and all annotated axial T1 postcontrast images were included. Line annotations were converted to boxes, excluding boxes shorter than 1 cm or longer than 7 cm. The resulting boxes were used for supervised training of object detection models using RetinaNet and Mask region-based convolutional neural network (R-CNN) architectures. The best-performing model trained from the mined data set was used to detect unannotated tumors on the training images themselves (self-labeling), automatically correcting many of the missing labels. After self-labeling, new models were trained using this expanded data set. Models were scored for precision, recall, and F1 using a held-out test data set comprising 754 manually labeled images from 100 patients (403 intra-axial and 56 extra-axial enhancing tumors). Model F1 scores were compared using bootstrap resampling.

Results The PACS query extracted 31 150 line annotations, yielding 11 880 boxes that met inclusion criteria. This mined data set was used to train models, yielding F1 scores of 0.886 for RetinaNet and 0.908 for Mask R-CNN. Self-labeling added 18 562 training boxes, improving model F1 scores to 0.935 (P < .001) and 0.954 (P < .001), respectively.

Conclusion The application of semisupervised learning to mined image annotations significantly improved tumor detection performance, achieving an excellent F1 score of 0.954. This development pipeline can be extended for other imaging modalities, repurposing unused data silos to potentially enable automated tumor detection across radiologic modalities. © RSNA, 2022. Online supplemental material is available for this article.
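The curation step above converts each mined line measurement into a training box and discards measurements outside the 1-7 cm range. The following is a minimal, hypothetical sketch of one plausible conversion (a square box centred on the line's midpoint); the field names and the exact box geometry are assumptions, since the abstract does not specify them.

```python
from dataclasses import dataclass
from math import hypot
from typing import Optional, Tuple

@dataclass
class LineAnnotation:
    x1: float  # line endpoints, in pixel coordinates
    y1: float
    x2: float
    y2: float
    pixel_spacing_mm: float  # in-plane pixel spacing from the DICOM header

def line_to_box(ann: LineAnnotation,
                min_cm: float = 1.0,
                max_cm: float = 7.0) -> Optional[Tuple[float, float, float, float]]:
    """Return (xmin, ymin, xmax, ymax) in pixels, or None if the measured
    length falls outside the 1-7 cm inclusion range from the abstract."""
    length_px = hypot(ann.x2 - ann.x1, ann.y2 - ann.y1)
    length_cm = length_px * ann.pixel_spacing_mm / 10.0
    if not (min_cm <= length_cm <= max_cm):
        return None
    # One plausible geometry: a square box centred on the line's midpoint
    # with side length equal to the measured diameter.
    cx, cy = (ann.x1 + ann.x2) / 2.0, (ann.y1 + ann.y2) / 2.0
    half = length_px / 2.0
    return (cx - half, cy - half, cx + half, cy + half)

# Example: a ~3.5 cm diagonal measurement on a 1 mm/pixel image is kept,
# while a 0.8 cm measurement is excluded.
print(line_to_box(LineAnnotation(100, 100, 124.7, 124.7, 1.0)))
print(line_to_box(LineAnnotation(100, 100, 108, 100, 1.0)))
```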


Subject(s)
Artificial Intelligence; Neural Networks, Computer; Brain; Humans; Magnetic Resonance Imaging; Retrospective Studies
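Both abstracts report model comparisons via bootstrap resampling on the held-out test set. The sketch below is a generic, hypothetical paired bootstrap on per-image scores (e.g. per-slice Dice or per-image detection scores); it is not taken from either paper, and the synthetic numbers are illustrative only.

```python
import numpy as np

def paired_bootstrap_p(scores_a, scores_b, n_boot: int = 10_000, seed: int = 0) -> float:
    """Two-sided p-value for 'model A and model B perform equally', estimated
    by resampling per-image score differences with replacement."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(scores_a, dtype=float) - np.asarray(scores_b, dtype=float)
    n = len(diffs)
    # Resample images with replacement and record the mean difference each time.
    boot_means = np.array([diffs[rng.integers(0, n, size=n)].mean()
                           for _ in range(n_boot)])
    observed = diffs.mean()
    # Fraction of bootstrap means whose sign contradicts the observed effect.
    one_sided = np.mean(boot_means <= 0.0) if observed > 0 else np.mean(boot_means >= 0.0)
    return float(min(1.0, 2.0 * one_sided))

# Illustrative synthetic per-slice Dice scores for a refined vs. baseline model
rng = np.random.default_rng(1)
refined = np.clip(rng.normal(0.87, 0.05, 319), 0.0, 1.0)
baseline = np.clip(rng.normal(0.84, 0.05, 319), 0.0, 1.0)
print(paired_bootstrap_p(refined, baseline))
```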