Results 1 - 3 of 3
1.
Neuroimage; 278: 120289, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37495197

ABSTRACT

Deep artificial neural networks (DNNs) have moved to the forefront of medical image analysis due to their success in classification, segmentation, and detection challenges. A principal challenge in the large-scale deployment of DNNs for neuroimage analysis is the potential for site-to-site shifts in signal-to-noise ratio, contrast, resolution, and artifacts arising from differences in scanners and acquisition protocols. DNNs are famously susceptible to such distribution shifts in computer vision. Currently, there are no benchmarking platforms or frameworks for assessing the robustness of new and existing models to specific distribution shifts in MRI, and accessible multi-site benchmarking datasets remain scarce or task-specific. To address these limitations, we propose ROOD-MRI: a novel platform for benchmarking the Robustness of DNNs to Out-Of-Distribution (OOD) data, corruptions, and artifacts in MRI. This flexible platform provides modules for generating benchmarking datasets using transforms that model distribution shifts in MRI, implementations of newly derived benchmarking metrics for image segmentation, and examples of applying the methodology to new models and tasks. We apply our methodology to hippocampus, ventricle, and white matter hyperintensity segmentation in several large studies, releasing the hippocampus dataset as a publicly available benchmark. By evaluating modern DNNs on these datasets, we demonstrate that they are highly susceptible to distribution shifts and corruptions in MRI. While data augmentation strategies can substantially improve robustness to OOD data for anatomical segmentation tasks, modern DNNs using augmentation still lack robustness on more challenging lesion-based segmentation tasks. Finally, we benchmark U-Nets and vision transformers, finding susceptibility to particular classes of transforms across both architectures.
The presented open-source platform enables generating new benchmarking datasets and comparing models in order to identify design choices that improve robustness to OOD data and corruptions in MRI.
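The benchmarking loop this abstract describes (apply corruption transforms to inputs at graded severity levels, then score segmentation overlap against clean-input performance) can be sketched minimally. This is an illustrative reimplementation, not ROOD-MRI's actual code: the function names (`dice`, `add_noise`, `benchmark`), the Gaussian-noise corruption, and the 1-5 severity scale are assumptions for the example.

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice overlap between two binary segmentation masks."""
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def add_noise(image, severity, rng):
    """One example corruption: additive Gaussian noise scaled by severity (1-5)."""
    sigma = 0.05 * severity * image.std()
    return image + rng.normal(0.0, sigma, size=image.shape)

def benchmark(model, image, target, severities=(1, 2, 3, 4, 5), seed=0):
    """Score a segmentation model on clean input, then at each corruption severity."""
    rng = np.random.default_rng(seed)
    clean = dice(model(image), target)
    scores = {s: dice(model(add_noise(image, s, rng)), target) for s in severities}
    return clean, scores
```

In practice `model` would be a trained DNN and the corruption would be one of several MRI-specific transforms (bias field, motion, resolution loss); plotting the per-severity Dice against the clean score exposes the robustness gap.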


Subject(s)
Algorithms , Deep Learning , Humans , Benchmarking , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Image Processing, Computer-Assisted/methods
2.
Int J Comput Assist Radiol Surg; 19(6): 1121-1128, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38598142

ABSTRACT

PURPOSE: The standard of care for prostate cancer (PCa) diagnosis is the histopathological analysis of tissue samples obtained via transrectal ultrasound (TRUS) guided biopsy. Models built with deep neural networks (DNNs) hold the potential for direct PCa detection from TRUS, enabling targeted biopsy and thereby improving outcomes. Yet, there are ongoing challenges with training robust models, stemming from issues such as noisy labels, out-of-distribution (OOD) data, and limited labeled data. METHODS: This study presents LensePro, a unified method that not only excels in label efficiency but also demonstrates robustness against label noise and OOD data. LensePro comprises two key stages: first, self-supervised learning to extract high-quality feature representations from abundant unlabeled TRUS data and, second, label noise-tolerant prototype-based learning to classify the extracted features. RESULTS: Using data from 124 patients who underwent systematic prostate biopsy, LensePro achieves an AUROC, sensitivity, and specificity of 77.9%, 85.9%, and 57.5%, respectively, for detecting PCa in ultrasound. Our model also proves effective at detecting OOD data at test time, which is critical for clinical deployment. Ablation studies demonstrate that each component of our method improves PCa detection by addressing one of the three challenges, reinforcing the benefits of a unified approach. CONCLUSION: Through comprehensive experiments, LensePro demonstrates state-of-the-art performance for TRUS-based PCa detection. Although further research is necessary to confirm its clinical applicability, LensePro marks a notable advancement in enhancing automated computer-aided systems for detecting prostate cancer in ultrasound.
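The second stage described above, classifying extracted features against class prototypes, with distance to the nearest prototype doubling as a test-time OOD signal, can be illustrated with a minimal nearest-prototype sketch. This is not LensePro's implementation: the mean-feature prototypes, Euclidean distance, and names (`fit_prototypes`, `predict`, `ood_score`) are assumptions chosen for clarity.

```python
import numpy as np

def fit_prototypes(features, labels):
    """One prototype per class: the mean of that class's feature vectors."""
    classes = np.unique(labels)
    protos = np.stack([features[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def predict(features, classes, protos):
    """Assign each feature vector to the class of its nearest prototype."""
    d = np.linalg.norm(features[:, None, :] - protos[None, :, :], axis=-1)
    return classes[np.argmin(d, axis=1)]

def ood_score(features, protos):
    """Distance to the nearest prototype; large values flag possible OOD inputs."""
    d = np.linalg.norm(features[:, None, :] - protos[None, :, :], axis=-1)
    return d.min(axis=1)
```

Because prototypes are averages over many feature vectors, individual mislabeled samples perturb them only slightly, which is the intuition behind prototype-based learning's tolerance to label noise.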


Subject(s)
Neural Networks, Computer , Prostatic Neoplasms , Humans , Male , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Prostatic Neoplasms/diagnosis , Image-Guided Biopsy/methods , Sensitivity and Specificity , Ultrasonography/methods , Deep Learning , Ultrasonography, Interventional/methods
3.
Med Image Anal; 97: 103291, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39121545

ABSTRACT

In positron emission tomography (PET) and X-ray computed tomography (CT), reducing radiation dose can cause significant degradation in image quality. For image quality enhancement in low-dose PET and CT, we propose a novel theoretical adversarial and variational deep neural network (DNN) framework relying on expectation maximization (EM) based learning, termed adversarial EM (AdvEM). AdvEM employs an encoder-decoder architecture with a multiscale latent space and generalized-Gaussian models, enabling datum-specific robust statistical modeling in both latent space and image space. Model robustness is further enhanced by including adversarial learning in the training protocol. Unlike typical variational-DNN learning, AdvEM samples the latent space from the posterior distribution using a Metropolis-Hastings scheme. Unlike existing schemes for PET or CT image enhancement, which train on pairs of low-dose images and their corresponding normal-dose versions, we propose a semi-supervised AdvEM (ssAdvEM) framework that enables learning from a small number of normal-dose images. Both AdvEM and ssAdvEM provide per-pixel uncertainty estimates for their outputs. Empirical analyses on real-world PET and CT data, involving many baselines, out-of-distribution data, and ablation studies, show the benefits of the proposed framework.
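The latent-space posterior sampling via Metropolis-Hastings, and the per-pixel uncertainty obtained by decoding multiple samples, can be sketched generically. This is not the AdvEM implementation: the random-walk Gaussian proposal, the step size, and the names (`metropolis_hastings`, `pixel_uncertainty`) are assumptions, and `log_post`/`decoder` stand in for the model's actual posterior and decoder networks.

```python
import numpy as np

def metropolis_hastings(log_post, z0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings over a latent vector z.

    log_post: unnormalized log posterior log p(z | x).
    Returns the chain of accepted (possibly repeated) samples.
    """
    rng = np.random.default_rng(seed)
    z = np.asarray(z0, dtype=float)
    lp = log_post(z)
    out = []
    for _ in range(n_samples):
        prop = z + rng.normal(0.0, step, size=z.shape)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # accept with prob min(1, ratio)
            z, lp = prop, lp_prop
        out.append(z.copy())
    return np.stack(out)

def pixel_uncertainty(decoder, samples):
    """Decode each posterior sample; per-pixel std across decodes is the uncertainty map."""
    imgs = np.stack([decoder(z) for z in samples])
    return imgs.mean(axis=0), imgs.std(axis=0)
```

Decoding a chain of posterior samples and reporting the per-pixel standard deviation is the generic route to the uncertainty maps the abstract mentions; a real system would discard an initial burn-in portion of the chain before decoding.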


Subject(s)
Deep Learning , Radiation Dosage , Humans , Tomography, X-Ray Computed/methods , Positron-Emission Tomography/methods , Image Enhancement/methods , Neural Networks, Computer , Supervised Machine Learning , Algorithms