1.
Sci Rep; 14(1): 5383, 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38443410

ABSTRACT

Breast density, or the amount of fibroglandular tissue (FGT) relative to the overall breast volume, increases the risk of developing breast cancer. Although previous studies have utilized deep learning to assess breast density, the limited public availability of data and quantitative tools hinders the development of better assessment tools. Our objective was to (1) create and share a large dataset of pixel-wise annotations according to well-defined criteria, and (2) develop, evaluate, and share an automated segmentation method for breast, FGT, and blood vessels using convolutional neural networks. We used the Duke Breast Cancer MRI dataset to randomly select 100 MRI studies and manually annotated the breast, FGT, and blood vessels for each study. Model performance was evaluated using the Dice similarity coefficient (DSC). The model achieved DSC values of 0.92 for breast, 0.86 for FGT, and 0.65 for blood vessels on the test set. The correlation between breast density computed from our model's predictions and from the manually generated masks was 0.95. The correlation between the predicted breast density and qualitative radiologist assessment was 0.75. Our automated models can accurately segment breast, FGT, and blood vessels using pre-contrast breast MRI data. The data and the models were made publicly available.
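As a rough illustration of the evaluation described above, the snippet below computes the Dice similarity coefficient between two binary masks and a volumetric breast density estimate (FGT volume relative to overall breast volume). The function names and usage are illustrative assumptions; the publicly released code accompanying the dataset should be consulted for the exact implementation.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

def breast_density(fgt_mask: np.ndarray, breast_mask: np.ndarray) -> float:
    """Breast density as the fraction of breast voxels labeled as FGT
    (FGT volume relative to overall breast volume, per the abstract)."""
    return fgt_mask.astype(bool).sum() / max(breast_mask.astype(bool).sum(), 1)

# Hypothetical usage with 3D masks predicted by a segmentation model:
# dsc_breast = dice_coefficient(pred_breast, gt_breast)
# density = breast_density(pred_fgt, pred_breast)
```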


Subject(s)
Breast Neoplasms, Deep Learning, Humans, Female, Magnetic Resonance Imaging, Radiography, Breast Density, Breast Neoplasms/diagnostic imaging
2.
IEEE Trans Med Imaging; 42(12): 3860-3870, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37695965

ABSTRACT

Anomaly detection (AD) aims to determine whether an instance has properties different from those seen in normal cases. The success of this technique depends on how well a neural network learns from normal instances. We observe that the learning difficulty scales exponentially with the input resolution, making it infeasible to apply AD to high-resolution images. Resizing them to a lower resolution is a compromise and does not align with clinical practice, where the diagnosis can depend on fine image details. In this work, we propose to train the network and perform inference at the patch level, through the sliding window algorithm. This simple operation allows the network to receive high-resolution images but introduces additional training difficulties, including inconsistent image structure and higher variance. We address these concerns by setting the network's objective to learn augmentation-invariant features. We further study the augmentation function in the context of medical imaging. In particular, we observe that the resizing operation, a key augmentation in the general computer vision literature, is detrimental to detection accuracy, whereas the inverting operation can be beneficial. We also propose a new module that encourages the network to learn from adjacent patches to boost detection performance. Extensive experiments are conducted on breast tomosynthesis and chest X-ray datasets, and our method improves image-level classification AUC by 8.03% and 5.66%, respectively, over the current leading techniques. The experimental results demonstrate the effectiveness of our approach.
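A minimal sketch of the sliding-window idea described above: score overlapping patches of a high-resolution image and aggregate the patch scores into an image-level anomaly score. The `model`, patch size, stride, and max aggregation are assumptions for illustration; the paper's augmentation-invariant objective and adjacent-patch module are not reproduced here.

```python
import numpy as np
import torch

def sliding_window_anomaly_score(image: torch.Tensor, model: torch.nn.Module,
                                 patch: int = 256, stride: int = 128) -> float:
    """Score a high-resolution image (C, H, W) by scoring overlapping patches
    and taking the maximum patch score (aggregation rule is an assumption)."""
    _, H, W = image.shape
    scores = []
    with torch.no_grad():
        for y in range(0, max(H - patch, 0) + 1, stride):
            for x in range(0, max(W - patch, 0) + 1, stride):
                p = image[:, y:y + patch, x:x + patch].unsqueeze(0)
                scores.append(model(p).item())  # assumes a scalar anomaly score per patch
    return float(np.max(scores))
```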


Subject(s)
Algorithms, Neural Networks (Computer), Supervised Machine Learning
3.
Med Image Anal; 89: 102918, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37595404

ABSTRACT

Training segmentation models for medical images continues to be challenging due to the limited availability of data annotations. Segment Anything Model (SAM) is a foundation model trained on over 1 billion annotations, predominantly for natural images, that is intended to segment user-defined objects of interest in an interactive manner. While its performance on natural images is impressive, medical image domains pose their own set of challenges. Here, we perform an extensive evaluation of SAM's ability to segment medical images on a collection of 19 medical imaging datasets from various modalities and anatomies. In our experiments, we generated point and box prompts for SAM using a standard method that simulates interactive segmentation. We report the following findings: (1) SAM's performance based on single prompts varies widely depending on the dataset and the task, from IoU=0.1135 for spine MRI to IoU=0.8650 for hip X-ray. (2) Segmentation performance appears to be better for well-circumscribed objects with less ambiguous prompts, such as organ segmentation in computed tomography, and poorer in various other scenarios, such as brain tumor segmentation. (3) SAM performs notably better with box prompts than with point prompts. (4) SAM outperforms similar methods RITM, SimpleClick, and FocalClick in almost all single-point prompt settings. (5) When multiple point prompts are provided iteratively, SAM's performance generally improves only slightly, while other methods improve to a level that surpasses SAM's point-based performance. We also provide several illustrations of SAM's performance on all tested datasets, of iterative segmentation, and of SAM's behavior given prompt ambiguity. We conclude that SAM shows impressive zero-shot segmentation performance for certain medical imaging datasets, but moderate to poor performance for others. SAM has the potential to make a significant impact on automated medical image segmentation, but appropriate care needs to be taken when using it. Code for evaluating SAM is made publicly available at https://github.com/mazurowski-lab/segment-anything-medical-evaluation.
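A hedged sketch of single-prompt evaluation in the spirit of this study, using the public `segment_anything` API: the box prompt is taken as the tight bounding box of the ground-truth mask (one common way to simulate an interactive user), and performance is reported as IoU. The checkpoint path is a placeholder, and the exact prompt-simulation protocol used in the paper may differ; see the linked repository for the authors' evaluation code.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Placeholder checkpoint path; model variant "vit_h" is one of the released SAM checkpoints.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
predictor = SamPredictor(sam)

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    return np.logical_and(pred, target).sum() / union if union else 1.0

def evaluate_single_box_prompt(image_rgb: np.ndarray, gt_mask: np.ndarray) -> float:
    """Prompt SAM with the ground-truth bounding box and report IoU
    against the ground-truth mask (a simulated single-prompt setting)."""
    ys, xs = np.nonzero(gt_mask)
    box = np.array([xs.min(), ys.min(), xs.max(), ys.max()])  # XYXY format
    predictor.set_image(image_rgb)                            # HxWx3 uint8 RGB image
    masks, scores, _ = predictor.predict(box=box, multimask_output=False)
    return iou(masks[0], gt_mask)
```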


Subject(s)
Brain Neoplasms, Humans, S-Adenosylmethionine, X-Ray Computed Tomography
4.
IEEE Trans Med Imaging; 42(8): 2439-2450, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37028063

ABSTRACT

Near-infrared diffuse optical tomography (DOT) is a promising functional modality for breast cancer imaging; however, the clinical translation of DOT is hampered by technical limitations. Specifically, conventional finite element method (FEM)-based optical image reconstruction approaches are time-consuming and ineffective in recovering full lesion contrast. To address this, we developed a deep learning-based reconstruction model (FDU-Net) composed of a Fully connected subnet, followed by a convolutional encoder-Decoder subnet, and a U-Net, for fast, end-to-end 3D DOT image reconstruction. The FDU-Net was trained on digital phantoms that include randomly located single spherical inclusions of various sizes and contrasts. Reconstruction performance was evaluated in 400 simulated cases with realistic noise profiles for the FDU-Net and conventional FEM approaches. Our results show that the overall quality of images reconstructed by FDU-Net is significantly improved compared with FEM-based methods and a previously proposed deep learning network. Importantly, once trained, FDU-Net demonstrates a substantially better capability to recover true inclusion contrast and location without using any inclusion information during reconstruction. The model also generalizes to multi-focal and irregularly shaped inclusions unseen during training. Finally, FDU-Net, trained on simulated data, could successfully reconstruct a breast tumor from a real patient measurement. Overall, our deep learning-based approach demonstrates marked superiority over conventional DOT image reconstruction methods while also offering a more than four orders of magnitude acceleration in computation time. Once adapted to the clinical breast imaging workflow, FDU-Net has the potential to provide real-time, accurate lesion characterization by DOT to assist the clinical diagnosis and management of breast cancer.
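The abstract names only the three stages of FDU-Net (a fully connected subnet, a convolutional encoder-decoder, and a U-Net); the PyTorch skeleton below is a heavily simplified sketch of that pipeline for mapping boundary measurements to a 3D volume. All layer sizes, the measurement dimension, the output grid, and the toy U-Net are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Illustrative stand-in for the U-Net refinement stage (not the published one)."""
    def __init__(self, ch: int = 1):
        super().__init__()
        self.down = nn.Sequential(nn.Conv3d(ch, 16, 3, padding=1), nn.ReLU(),
                                  nn.Conv3d(16, 16, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.Sequential(nn.ConvTranspose3d(16, 16, 2, stride=2), nn.ReLU(),
                                nn.Conv3d(16, ch, 3, padding=1))
    def forward(self, x):
        return self.up(self.down(x)) + x  # residual refinement (an assumption)

class FDUNetSketch(nn.Module):
    """Fully connected subnet -> convolutional encoder-decoder -> U-Net,
    mapping DOT boundary measurements to a 3D volume. Sizes are placeholders."""
    def __init__(self, n_meas: int = 1024, grid=(16, 16, 16)):
        super().__init__()
        self.grid = grid
        self.fc = nn.Sequential(nn.Linear(n_meas, 2048), nn.ReLU(),
                                nn.Linear(2048, grid[0] * grid[1] * grid[2]))
        self.enc_dec = nn.Sequential(nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                                     nn.ConvTranspose3d(16, 1, 2, stride=2))
        self.unet = TinyUNet(1)
    def forward(self, measurements):               # (B, n_meas)
        x = self.fc(measurements).view(-1, 1, *self.grid)
        return self.unet(self.enc_dec(x))          # (B, 1, D, H, W)
```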


Subject(s)
Breast Neoplasms, Deep Learning, Humans, Female, Computer-Assisted Image Processing/methods, Three-Dimensional Imaging, Imaging Phantoms, Breast Neoplasms/diagnostic imaging, Algorithms
5.
JAMA Netw Open; 6(2): e230524, 2023 Feb 01.
Article in English | MEDLINE | ID: mdl-36821110

ABSTRACT

Importance: An accurate and robust artificial intelligence (AI) algorithm for detecting cancer in digital breast tomosynthesis (DBT) could significantly improve detection accuracy and reduce health care costs worldwide. Objectives: To make training and evaluation data for the development of AI algorithms for DBT analysis available, to develop well-defined benchmarks, and to create publicly available code for existing methods. Design, Setting, and Participants: This diagnostic study is based on a multi-institutional international grand challenge in which research teams developed algorithms to detect lesions in DBT. A data set of 22 032 reconstructed DBT volumes was made available to research teams. Phase 1, in which teams were provided 700 scans from the training set, 120 from the validation set, and 180 from the test set, took place from December 2020 to January 2021, and phase 2, in which teams were given the full data set, took place from May to July 2021. Main Outcomes and Measures: The overall performance was evaluated by mean sensitivity for biopsied lesions using only DBT volumes with biopsied lesions; ties were broken by including all DBT volumes. Results: A total of 8 teams participated in the challenge. The team with the highest mean sensitivity for biopsied lesions was the NYU B-Team, with 0.957 (95% CI, 0.924-0.984), and the second-place team, ZeDuS, had a mean sensitivity of 0.926 (95% CI, 0.881-0.964). When the results were aggregated, the mean sensitivity for all submitted algorithms was 0.879; for only those who participated in phase 2, it was 0.926. Conclusions and Relevance: In this diagnostic study, an international competition produced AI algorithms with high sensitivity for detecting lesions on DBT images. A standardized performance benchmark for the detection task using publicly available clinical imaging data was released, with detailed descriptions and analyses of submitted algorithms accompanied by a public release of their predictions and code for selected methods. These resources will serve as a foundation for future research on computer-assisted diagnosis methods for DBT, significantly lowering the barrier to entry for new researchers.
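A hedged sketch of a lesion-level sensitivity computation in the spirit of the challenge metric: a detection counts as a hit if it falls within a biopsied lesion's radius, and sensitivity is averaged over several allowed false-positive counts per volume. The matching rule and the false-positive operating points are assumptions, not the official challenge definition; the released challenge code defines the exact metric.

```python
import numpy as np

def lesion_hit(det_center, lesion_center, lesion_radius) -> bool:
    """A detection is a hit if its center lies within the lesion radius of a
    biopsied lesion center (matching rule is an assumption)."""
    return np.linalg.norm(np.asarray(det_center) - np.asarray(lesion_center)) <= lesion_radius

def sensitivity_at_fp(cases, max_fp_per_volume: int) -> float:
    """Fraction of biopsied lesions detected when each volume keeps its
    top-scoring detections until the false-positive budget is exhausted."""
    hits, total = 0, 0
    for case in cases:  # each case: {'lesions': [...], 'detections': [{'center', 'score'}, ...]}
        kept, fps = [], 0
        for d in sorted(case["detections"], key=lambda d: -d["score"]):
            matched = any(lesion_hit(d["center"], l["center"], l["radius"]) for l in case["lesions"])
            if not matched:
                fps += 1
                if fps > max_fp_per_volume:
                    break
            kept.append(d)
        for l in case["lesions"]:
            total += 1
            hits += any(lesion_hit(d["center"], l["center"], l["radius"]) for d in kept)
    return hits / max(total, 1)

# Mean sensitivity averaged over allowed false-positive rates
# (the operating points below are placeholders, not the official ones):
# mean_sens = np.mean([sensitivity_at_fp(cases, k) for k in (1, 2, 3, 4)])
```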


Subject(s)
Artificial Intelligence, Breast Neoplasms, Humans, Female, Benchmarking, Mammography/methods, Algorithms, Computer-Assisted Radiographic Image Interpretation/methods, Breast Neoplasms/diagnostic imaging