Results 1 - 3 of 3
1.
Magn Reson Med; 91(5): 1803-1821, 2024 May.
Article in English | MEDLINE | ID: mdl-38115695

ABSTRACT

PURPOSE: $K^{\mathrm{trans}}$ has often been proposed as a quantitative imaging biomarker for diagnosis, prognosis, and treatment response assessment for various tumors. None of the many software tools for $K^{\mathrm{trans}}$ quantification are standardized. The ISMRM Open Science Initiative for Perfusion Imaging-Dynamic Contrast-Enhanced (OSIPI-DCE) challenge was designed to benchmark methods and thereby support efforts to standardize $K^{\mathrm{trans}}$ measurement.

METHODS: A framework was created to evaluate $K^{\mathrm{trans}}$ values produced by DCE-MRI analysis pipelines to enable benchmarking. The perfusion MRI community was invited to apply their pipelines for $K^{\mathrm{trans}}$ quantification in glioblastoma from clinical and synthetic patients. Submissions were required to include the entrants' $K^{\mathrm{trans}}$ values, the applied software, and a standard operating procedure. These were evaluated using the proposed $\mathrm{OSIPI}_{\mathrm{gold}}$ score, defined with accuracy, repeatability, and reproducibility components.

RESULTS: Across the 10 received submissions, the $\mathrm{OSIPI}_{\mathrm{gold}}$ score ranged from 28% to 78%, with a median of 59%. The accuracy, repeatability, and reproducibility scores ranged from 0.54 to 0.92, 0.64 to 0.86, and 0.65 to 1.00, respectively (0-1 = lowest-highest). Manual arterial input function selection markedly affected reproducibility and showed greater variability in $K^{\mathrm{trans}}$ analysis than automated methods. Furthermore, provision of a detailed standard operating procedure was critical for higher reproducibility.

CONCLUSIONS: This study reports results from the OSIPI-DCE challenge and highlights the high inter-software variability in $K^{\mathrm{trans}}$ estimation, providing a framework for ongoing benchmarking against the scores presented. Through this challenge, the participating teams were ranked on the performance of their software tools in the particular setting of this challenge; in a real-world clinical setting, or under a different benchmarking methodology, many of these tools may perform differently.
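For context on the quantity being benchmarked: $K^{\mathrm{trans}}$ is conventionally estimated by fitting a pharmacokinetic model, most commonly the standard Tofts model, to the measured tissue concentration curve given an arterial input function. The following is a minimal sketch of such a fit, not one of the challenge submissions; the function names, starting values, and bounds are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def tofts_ct(t, ktrans, kep, cp):
    """Standard Tofts model:
    Ct(t) = Ktrans * integral_0^t Cp(tau) * exp(-kep * (t - tau)) dtau,
    evaluated by trapezoidal quadrature on the sampled time grid.
    t: time points in minutes (float array); cp: arterial input function."""
    ct = np.zeros_like(t)
    for i in range(1, len(t)):
        tau = t[: i + 1]
        ct[i] = ktrans * np.trapz(cp[: i + 1] * np.exp(-kep * (t[i] - tau)), tau)
    return ct

def fit_ktrans(t, ct_measured, cp, p0=(0.1, 0.5)):
    """Fit Ktrans and kep (both in 1/min) to a measured tissue curve."""
    model = lambda tt, ktrans, kep: tofts_ct(tt, ktrans, kep, cp)
    (ktrans, kep), _ = curve_fit(model, t, ct_measured, p0=p0,
                                 bounds=([0, 0], [5, 10]))
    return ktrans, kep
```

Because the fit depends on the arterial input function passed in as `cp`, manual versus automated AIF selection directly changes the estimated $K^{\mathrm{trans}}$, which is consistent with the reproducibility effect the challenge observed.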


Subject(s)
Contrast Media, Magnetic Resonance Imaging, Humans, Reproducibility of Results, Magnetic Resonance Imaging/methods, Software, Algorithms
2.
Med Image Anal; 68: 101908, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33383334

ABSTRACT

Medical images differ from natural images in that they have significantly higher resolutions and smaller regions of interest. Because of these differences, neural network architectures that work well for natural images may not be applicable to medical image analysis. In this work, we propose a novel neural network model that addresses these unique properties of medical images. The model first uses a low-capacity, yet memory-efficient, network on the whole image to identify the most informative regions. It then applies another, higher-capacity network to collect details from the chosen regions. Finally, it employs a fusion module that aggregates global and local information to make a prediction. While existing methods often require lesion segmentation during training, our model is trained with only image-level labels and can generate pixel-level saliency maps indicating possible malignant findings. We apply the model to screening mammography interpretation: predicting the presence or absence of benign and malignant lesions. On the NYU Breast Cancer Screening Dataset, our model outperforms ResNet-34 and Faster R-CNN in classifying breasts with malignant findings (AUC = 0.93). On the CBIS-DDSM dataset, our model achieves performance on par with state-of-the-art approaches (AUC = 0.858). Compared to ResNet-34, our model is 4.1x faster for inference while using 78.4% less GPU memory. Furthermore, we demonstrate in a reader study that our model surpasses radiologist-level AUC by a margin of 0.11.
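The abstract describes the architecture only at a high level, so the following is a toy-sized sketch of the global-local idea it outlines: a cheap full-image network produces a saliency map, crops around the most salient locations are passed to a heavier network, and a fusion layer aggregates both streams. The layer sizes, crop logic, and class name are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GlobalLocalClassifier(nn.Module):
    """Minimal global-local sketch; expects grayscale images larger than `crop`."""

    def __init__(self, num_crops=3, crop=64):
        super().__init__()
        self.num_crops, self.crop = num_crops, crop
        self.global_net = nn.Sequential(          # low-capacity, full-image pass
            nn.Conv2d(1, 8, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, 2, 1), nn.ReLU(),
        )
        self.saliency_head = nn.Conv2d(16, 1, 1)  # pixel-level saliency map
        self.local_net = nn.Sequential(           # higher-capacity, per-crop pass
            nn.Conv2d(1, 32, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fusion = nn.Linear(16 + 64, 1)       # fuse global + local features

    def forward(self, x):                         # x: (B, 1, H, W)
        g = self.global_net(x)                    # (B, 16, H/4, W/4)
        sal = self.saliency_head(g)               # saliency map at 1/4 resolution
        g_feat = g.mean(dim=(2, 3))               # pooled global features
        B, _, h, w = sal.shape
        idx = sal.view(B, -1).topk(self.num_crops, dim=1).indices
        local_feats = []
        for b in range(B):                        # crop around top-k salient points
            feats = []
            for i in idx[b]:
                cy, cx = int(i) // w * 4, int(i) % w * 4
                y0 = max(0, min(cy - self.crop // 2, x.shape[2] - self.crop))
                x0 = max(0, min(cx - self.crop // 2, x.shape[3] - self.crop))
                patch = x[b:b + 1, :, y0:y0 + self.crop, x0:x0 + self.crop]
                feats.append(self.local_net(patch))
            local_feats.append(torch.cat(feats).mean(dim=0, keepdim=True))
        l_feat = torch.cat(local_feats)           # (B, 64) aggregated local features
        logit = self.fusion(torch.cat([g_feat, l_feat], dim=1))
        return torch.sigmoid(logit), sal          # prediction + saliency map
```

Note that only the image-level prediction needs a label during training; the saliency map is a by-product of the global stream, matching the paper's claim of pixel-level maps from image-level supervision.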


Subject(s)
Breast Neoplasms, Breast/diagnostic imaging, Breast Neoplasms/diagnostic imaging, Early Detection of Cancer, Female, Humans, Mammography, Neural Networks, Computer
3.
IEEE Trans Med Imaging; 39(4): 1184-1194, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31603772

ABSTRACT

We present a deep convolutional neural network for breast cancer screening exam classification, trained and evaluated on over 200,000 exams (over 1,000,000 images). Our network achieves an AUC of 0.895 in predicting the presence of cancer in the breast when tested on the screening population. We attribute the high accuracy to a few technical advances: 1) our network's novel two-stage architecture and training procedure, which allows us to use a high-capacity patch-level network to learn from pixel-level labels alongside a network learning from macroscopic breast-level labels; 2) a custom ResNet-based network used as a building block of our model, whose balance of depth and width is optimized for high-resolution medical images; 3) pretraining the network on screening BI-RADS classification, a related task with noisier labels; 4) combining multiple input views in an optimal way among a number of possible choices. To validate our model, we conducted a reader study with 14 readers, each reading 720 screening mammogram exams, and show that our model is as accurate as experienced radiologists when presented with the same data. We also show that a hybrid model, averaging the probability of malignancy predicted by a radiologist with a prediction of our neural network, is more accurate than either of the two separately. To further understand our results, we conduct a thorough analysis of our network's performance on different subpopulations of the screening population, the model's design, training procedure, errors, and properties of its internal representations. Our best models are publicly available at https://github.com/nyukat/breast_cancer_classifier.
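The hybrid model described above is a plain average of two probabilities. A minimal sketch follows; the function name is illustrative, and the `weight` parameter generalizing the paper's equal-weight average is an assumption.

```python
import numpy as np

def hybrid_malignancy(p_radiologist, p_network, weight=0.5):
    """Combine a radiologist's predicted probability of malignancy with the
    network's prediction; weight=0.5 reproduces the plain average described
    in the abstract."""
    p_r = np.asarray(p_radiologist, dtype=float)
    p_n = np.asarray(p_network, dtype=float)
    return weight * p_r + (1.0 - weight) * p_n

# Example: hybrid_malignancy([0.2, 0.7], [0.4, 0.9]) -> array([0.3, 0.8])
```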


Subject(s)
Breast Neoplasms/diagnostic imaging, Deep Learning, Early Detection of Cancer/methods, Image Interpretation, Computer-Assisted/methods, Mammography/methods, Breast/diagnostic imaging, Female, Humans, Radiologists