Results 1 - 5 of 5
1.
Int J Comput Assist Radiol Surg; 18(11): 2083-2090, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37306856

ABSTRACT

PURPOSE: Neuroendocrine tumors (NETs) are a rare form of cancer that can occur anywhere in the body and commonly metastasizes. The large variance in tumor location and aggressiveness makes NETs a difficult cancer to treat. Assessment of whole-body tumor burden from patient images allows for better tracking of disease progression and informs better treatment decisions. Currently, radiologists rely on qualitative assessments of this metric, since manual segmentation is unfeasible within a typical busy clinical workflow. METHODS: We address these challenges by extending the nnU-Net pipeline to produce automatic NET segmentation models. We use 68Ga-DOTATATE PET/CT, the ideal imaging type for this task, to produce segmentation masks from which total tumor burden metrics are calculated. We provide a human-level baseline for the task and perform ablation experiments over model inputs, architectures, and loss functions. RESULTS: Our dataset comprises 915 PET/CT scans and is divided into a held-out test set (87 cases) and 5 training subsets for cross-validation. The proposed models achieve a test Dice score of 0.644, on par with our inter-annotator Dice score of 0.682 on a subset of 6 patients. When our modified Dice score is applied to the predictions, test performance reaches 0.80. CONCLUSION: In this paper, we demonstrate the ability to automatically generate accurate NET segmentation masks from PET images through supervised learning. We publish the model for extended use and to support treatment planning for this rare cancer.
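The following minimal Python sketch illustrates the standard Dice score used as the headline metric above; it is not the paper's code, the masks are synthetic, and the paper's modified Dice variant is not reproduced here.

    import numpy as np

    def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
        """Standard Dice coefficient between two binary masks."""
        pred = pred.astype(bool)
        truth = truth.astype(bool)
        intersection = np.logical_and(pred, truth).sum()
        return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

    # Toy example: a synthetic ground-truth mask and an over-segmented prediction.
    rng = np.random.default_rng(0)
    truth = rng.random((16, 64, 64)) > 0.9
    pred = np.logical_or(truth, rng.random((16, 64, 64)) > 0.97)
    print(f"Dice: {dice_score(pred, truth):.3f}")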


Subject(s)
Carcinoma, Neuroendocrine; Neuroendocrine Tumors; Radionuclide Imaging; Humans; Positron Emission Tomography Computed Tomography/methods; Positron-Emission Tomography/methods; Neuroendocrine Tumors/diagnostic imaging; Image Processing, Computer-Assisted
2.
Eur Radiol; 33(9): 6582-6591, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37042979

ABSTRACT

OBJECTIVES: While fully supervised learning can yield high-performing segmentation models, the effort required to manually segment large training sets limits practical utility. We investigate whether data-mined line annotations can facilitate brain MRI tumor segmentation model development without requiring manually segmented training data. METHODS: In this retrospective study, a tumor detection model trained using clinical line annotations mined from PACS was leveraged with unsupervised segmentation to generate pseudo-masks of enhancing tumors on T1-weighted post-contrast images (9911 image slices; 3449 adult patients). Baseline segmentation models were trained and employed within a semi-supervised learning (SSL) framework to refine the pseudo-masks. Following each self-refinement cycle, a new model was trained and tested on a held-out set of 319 manually segmented image slices (93 adult patients), with the SSL cycles continuing until the Dice similarity coefficient (DSC) peaked. DSCs were compared using bootstrap resampling. Utilizing the best-performing models, two inference methods were compared: (1) conventional full-image segmentation, and (2) a hybrid method augmenting full-image segmentation with detection plus image-patch segmentation. RESULTS: Baseline segmentation models achieved DSCs of 0.768 (U-Net), 0.831 (Mask R-CNN), and 0.838 (HRNet), improving with self-refinement to 0.798, 0.871, and 0.873 (each p < 0.001), respectively. Hybrid inference outperformed full-image segmentation alone: DSC 0.884 (Mask R-CNN) vs. 0.873 (HRNet), p < 0.001. CONCLUSIONS: Line annotations mined from PACS can be harnessed within an automated pipeline to produce accurate brain MRI tumor segmentation models without manually segmented training data, providing a mechanism to rapidly establish tumor segmentation capabilities across radiology modalities.
KEY POINTS:
• A brain MRI tumor detection model trained using clinical line measurement annotations mined from PACS was leveraged to automatically generate tumor segmentation pseudo-masks.
• An iterative self-refinement process automatically improved pseudo-mask quality, with the best-performing segmentation pipeline achieving a Dice score of 0.884 on a held-out test set.
• Tumor line measurement annotations generated in routine clinical radiology practice can be harnessed to develop high-performing segmentation models without manually segmented training data, providing a mechanism to rapidly establish tumor segmentation capabilities across radiology modalities.
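As an illustration of the iterative self-refinement idea described above, the following Python sketch runs a toy refinement loop on synthetic data; the "model" is a trivial intensity threshold standing in for the actual detection and segmentation networks, and all names and numbers are illustrative assumptions rather than the authors' pipeline.

    import numpy as np

    # Minimal, illustrative self-refinement loop (not the paper's pipeline):
    # pseudo-masks "train" a per-pixel intensity threshold, the trained model
    # re-predicts masks, and the cycle repeats while the Dice score on a small
    # held-out set keeps improving.

    def dice(a, b, eps=1e-7):
        a, b = a.astype(bool), b.astype(bool)
        return (2 * np.logical_and(a, b).sum() + eps) / (a.sum() + b.sum() + eps)

    def fit_threshold(images, masks):
        # "Training": pick a threshold between foreground and background means.
        fg, bg = images[masks], images[~masks]
        return (fg.mean() + bg.mean()) / 2

    def predict(images, thr):
        return images > thr

    rng = np.random.default_rng(1)
    truth = rng.random((20, 32, 32)) > 0.85                 # synthetic "tumors"
    images = truth * 1.0 + rng.normal(0, 0.3, truth.shape)  # synthetic intensities
    pseudo = images > 1.2                                   # noisy initial pseudo-masks
    train, held = slice(0, 15), slice(15, 20)

    best = -1.0
    for cycle in range(5):
        thr = fit_threshold(images[train], pseudo[train])
        preds = predict(images, thr)
        score = dice(preds[held], truth[held])
        print(f"cycle {cycle}: held-out Dice = {score:.3f}")
        if score <= best:
            break                                           # stop once the DSC peaks
        best, pseudo = score, preds                         # refined masks become labels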


Subject(s)
Brain Neoplasms; Image Processing, Computer-Assisted; Adult; Humans; Image Processing, Computer-Assisted/methods; Retrospective Studies; Magnetic Resonance Imaging/methods; Brain Neoplasms/diagnostic imaging; Brain/diagnostic imaging
3.
JCO Clin Cancer Inform; 6: e2200066, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36084275

ABSTRACT

PURPOSE: To evaluate whether a custom programmatic workflow manager reduces report turnaround times (TATs) in a body oncologic imaging workflow at a tertiary cancer center. METHODS: A custom software program was developed and implemented in the programming language R. Other aspects of the workflow were left unchanged. TATs were measured over a 12-month period (June-May). The corresponding prior 12-month period served as a historical control. Median TATs of magnetic resonance imaging (MRI) and computed tomography (CT) examinations were compared with a Wilcoxon test. A chi-square test was used to compare the numbers of examinations reported within 24 hours and after 72 hours, as well as the proportions of examinations assigned according to individual radiologist preferences. RESULTS: For all MRI and CT examinations (124,507 in 2019/2020 and 138,601 in 2020/2021), the median TAT decreased from 4 hours (interquartile range: 1-22 hours) to 3 hours (1-17 hours). Reports completed within 24 hours increased from 78% (124,127) to 89% (138,601). For MRI, TAT decreased from 22 hours (5-49 hours) to 8 hours (2-21 hours), and reports completed within 24 hours increased from 55% (14,211) to 80% (23,744). For CT, TAT decreased from 3 hours (1-19 hours) to 2 hours (1-13 hours), and reports completed within 24 hours increased from 84% (82,342) to 92% (99,922). Delayed reports (with a TAT > 72 hours) decreased from 17.0% (4,176) to 2.2% (649) for MRI and from 2.5% (2,500) to 0.7% (745) for CT. All differences were statistically significant (P < .001). CONCLUSION: The custom workflow management software program significantly decreased MRI and CT report TATs.
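The statistical comparisons reported above (a Wilcoxon test on TATs and a chi-square test on 24-hour completion rates) can be sketched as follows. The original work was implemented in R; this illustrative Python snippet uses synthetic TAT distributions and reads the Wilcoxon test as a rank-sum (Mann-Whitney) comparison of two independent periods, which is an assumption on my part.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    # Synthetic TATs in hours for two 12-month periods (illustrative only).
    tat_before = rng.lognormal(mean=1.6, sigma=1.0, size=5000)
    tat_after = rng.lognormal(mean=1.2, sigma=1.0, size=5000)

    # Wilcoxon rank-sum (Mann-Whitney U) comparison of TATs between periods.
    u_stat, p_tat = stats.mannwhitneyu(tat_before, tat_after, alternative="two-sided")
    print(f"median before {np.median(tat_before):.1f} h, "
          f"after {np.median(tat_after):.1f} h, p = {p_tat:.2e}")

    # Chi-square test on the proportion of reports completed within 24 hours.
    within_24 = np.array([[(tat_before <= 24).sum(), (tat_before > 24).sum()],
                          [(tat_after <= 24).sum(), (tat_after > 24).sum()]])
    chi2, p_24, _, _ = stats.chi2_contingency(within_24)
    print(f"within 24 h: {within_24[0, 0]} vs {within_24[1, 0]}, p = {p_24:.2e}")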


Subject(s)
Neoplasms; Tomography, X-Ray Computed; Humans; Magnetic Resonance Imaging; Medical Oncology; Neoplasms/diagnostic imaging; Research Report; Workflow
4.
Radiol Artif Intell; 4(1): e200231, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35146431

ABSTRACT

PURPOSE: To develop a deep network architecture that would achieve fully automated radiologist-level segmentation of cancers at breast MRI. MATERIALS AND METHODS: In this retrospective study, 38 229 examinations (composed of 64 063 individual breast scans from 14 475 patients) were performed in female patients (age range, 12-94 years; mean age, 52 years ± 10 [standard deviation]) who presented between 2002 and 2014 at a single clinical site. A total of 2555 breast cancers that had been segmented on two-dimensional (2D) images by radiologists were selected, as well as 60 108 benign breasts that served as examples of noncancerous tissue; all of these were used for model training. For testing, an additional 250 breast cancers were segmented independently on 2D images by four radiologists. The authors selected among several three-dimensional (3D) deep convolutional neural network architectures, input modalities, and harmonization methods. The outcome measure was the Dice score for 2D segmentation, which was compared between the network and radiologists by using the Wilcoxon signed rank test and the two one-sided test procedure. RESULTS: The highest-performing network on the training set was a 3D U-Net with dynamic contrast-enhanced MRI as input and with intensity normalized for each examination. In the test set, the median Dice score of this network was 0.77 (interquartile range, 0.26). The performance of the network was equivalent to that of the radiologists (two one-sided test procedures with radiologist performance of 0.69-0.84 as equivalence bounds, P < .001 for both; n = 250). CONCLUSION: When trained on a sufficiently large dataset, the developed 3D U-Net performed as well as fellowship-trained radiologists in detailed 2D segmentation of breast cancers at routine clinical MRI. Keywords: MRI, Breast, Segmentation, Supervised Learning, Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms. Published under a CC BY 4.0 license. Supplemental material is available for this article.
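A minimal sketch of a two one-sided test (TOST) equivalence check against the 0.69-0.84 radiologist bounds mentioned above; it uses synthetic per-case Dice scores and one-sample t-tests rather than the paper's exact Wilcoxon-based procedure, so it should be read as an illustration of the idea only.

    import numpy as np
    from scipy import stats

    # Synthetic per-case Dice scores; the bounds mirror the abstract's 0.69-0.84.
    lower, upper = 0.69, 0.84
    rng = np.random.default_rng(3)
    dice_scores = np.clip(rng.normal(loc=0.77, scale=0.15, size=250), 0.0, 1.0)

    # H0a: mean Dice <= lower  vs  H1a: mean Dice > lower
    _, p_lower = stats.ttest_1samp(dice_scores, lower, alternative="greater")
    # H0b: mean Dice >= upper  vs  H1b: mean Dice < upper
    _, p_upper = stats.ttest_1samp(dice_scores, upper, alternative="less")

    # Equivalence is concluded only if both one-sided tests reject.
    p_tost = max(p_lower, p_upper)
    print(f"median Dice {np.median(dice_scores):.2f}, TOST p = {p_tost:.3g}")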

5.
Radiol Artif Intell; 3(6): e210013, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34870216

ABSTRACT

Integration of artificial intelligence (AI) applications within clinical workflows is an important step for leveraging developed AI algorithms. In this report, generalizable components for deploying AI systems into clinical practice are described that were implemented in a clinical pilot study using lymphoscintigraphy examinations as a prospective use case (July 1, 2019-October 31, 2020). Deployment of the AI algorithm consisted of seven software components, as follows: (a) image delivery, (b) quality control, (c) a results database, (d) results processing, (e) results presentation and delivery, (f) error correction, and (g) a dashboard for performance monitoring. A total of 14 users (faculty radiologists and trainees) used the system and assessed their degree of satisfaction with the components and overall workflow. Analyses included assessment of the number of examinations processed, error rates, and corrections. The AI system processed 1748 lymphoscintigraphy examinations. The system enabled radiologists to correct 146 AI results, generating real-time corrections to the radiology report. All AI results and corrections were successfully stored in a database for downstream use by the various integration components. A dashboard allowed monitoring of the AI system performance in real time. All 14 survey respondents "somewhat agreed" or "strongly agreed" that the AI system was well integrated into the clinical workflow. In summary, a framework of processes and components for integrating AI algorithms into clinical workflows was developed. The implementation described could be helpful for assessing and monitoring AI performance in clinical practice. Keywords: PACS, Computer Applications-General (Informatics), Diagnosis. © RSNA, 2021.
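As a rough illustration of two of the components listed above, the results database (c) and error correction (f), the following Python sketch stores AI outputs and radiologist corrections in an in-memory SQLite table; the schema, identifiers, and report strings are hypothetical and not the authors' implementation.

    import sqlite3
    from datetime import datetime, timezone

    # Illustrative results store with correction logging; table and column names
    # are assumptions, not the authors' schema.
    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE ai_results (
        exam_id TEXT PRIMARY KEY,
        ai_output TEXT NOT NULL,
        corrected_output TEXT,
        corrected_by TEXT,
        updated_at TEXT NOT NULL)""")

    def store_ai_result(exam_id: str, ai_output: str) -> None:
        conn.execute("INSERT INTO ai_results VALUES (?, ?, NULL, NULL, ?)",
                     (exam_id, ai_output, datetime.now(timezone.utc).isoformat()))

    def correct_result(exam_id: str, corrected_output: str, user: str) -> None:
        # A radiologist's correction is stored alongside the original AI output.
        conn.execute("UPDATE ai_results SET corrected_output = ?, corrected_by = ?, "
                     "updated_at = ? WHERE exam_id = ?",
                     (corrected_output, user,
                      datetime.now(timezone.utc).isoformat(), exam_id))

    store_ai_result("LYMPH-0001", "Dermal backflow: absent")
    correct_result("LYMPH-0001", "Dermal backflow: present, left leg", user="radiologist_01")
    print(conn.execute("SELECT * FROM ai_results").fetchall())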
