1.
Sci Rep ; 14(1): 9380, 2024 04 23.
Article in English | MEDLINE | ID: mdl-38654066

ABSTRACT

Vision transformers (ViTs) have revolutionized computer vision by employing self-attention in place of convolution, and have demonstrated success owing to their ability to capture global dependencies and remove the spatial biases of locality. In medical imaging, where input data may differ in size and resolution, existing architectures require resampling or resizing during pre-processing, leading to potential loss of spatial resolution and information degradation. This study proposes a co-ordinate-based embedding that encodes the geometry of medical images, capturing physical co-ordinate and resolution information without the need for resampling or resizing. The effectiveness of the proposed embedding is demonstrated through experiments with UNETR and SwinUNETR models for infarct segmentation on an MRI dataset with AxTrace and AxADC contrasts. The dataset consists of 1142 training, 133 validation and 143 test subjects. With the addition of the co-ordinate-based positional embedding, the two models achieved substantial improvements in mean Dice score of 6.5% and 7.6%, respectively. The proposed embedding showed a statistically significant advantage (p < 0.0001) over alternative approaches. In conclusion, the proposed co-ordinate-based pixel-wise positional embedding method offers a promising solution for Transformer-based models in medical image analysis. It effectively leverages physical co-ordinate information to enhance performance without compromising spatial resolution and provides a foundation for future advancements in positional embedding techniques for medical applications.
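
For readers who want the gist in code, a minimal sketch of such a co-ordinate-based embedding — deriving each token's physical (x, y, z) position from the image's voxel-to-world affine and projecting it into the model's embedding dimension — might look like the following (PyTorch; class and argument names are invented here, not the authors' code):

```python
import torch
import torch.nn as nn

class CoordinatePositionalEmbedding(nn.Module):
    """Sketch: embed each token's physical (x, y, z) coordinate, taken
    from the scanner affine, instead of an index-based position.
    Hypothetical illustration of the idea, not the paper's implementation."""

    def __init__(self, embed_dim: int):
        super().__init__()
        self.proj = nn.Linear(3, embed_dim)  # (x, y, z) in mm -> embed_dim

    def forward(self, voxel_indices: torch.Tensor, affine: torch.Tensor) -> torch.Tensor:
        # voxel_indices: (N, 3) integer indices of patch centres
        # affine: (4, 4) voxel-to-world matrix carrying spacing and orientation
        ones = torch.ones(voxel_indices.shape[0], 1, dtype=torch.float32)
        homo = torch.cat([voxel_indices.float(), ones], dim=1)  # (N, 4)
        world_mm = (affine.float() @ homo.T).T[:, :3]  # physical coords in mm
        return self.proj(world_mm)  # (N, embed_dim), added to patch tokens
```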


Subject(s)
Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Image Processing, Computer-Assisted/methods , Algorithms , Neural Networks, Computer
2.
Acad Radiol ; 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38908922

ABSTRACT

RATIONALE AND OBJECTIVES: To assess a deep learning application (DLA) for acute ischemic stroke (AIS) detection on brain magnetic resonance imaging (MRI) in the emergency room (ER) and the effect of T2-weighted imaging (T2WI) on its performance. MATERIALS AND METHODS: We retrospectively analyzed brain MRIs taken through the ER from March to October 2021 that included diffusion-weighted imaging (DWI) and fluid-attenuated inversion recovery (FLAIR) sequences. MRIs were processed by the DLA, and sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUROC) were evaluated, with three neuroradiologists establishing the gold standard for detection performance. In addition, we examined the impact of axial T2WI, when available, on the accuracy and processing time of the DLA. RESULTS: The study included 947 individuals (mean age ± standard deviation, 64 years ± 16; 461 men, 486 women), 239 (25%) of whom were positive for AIS. The overall performance of the DLA was as follows: sensitivity, 90%; specificity, 89%; accuracy, 89%; and AUROC, 0.95. The average processing time was 24 s. In the subgroup with T2WI, the addition of T2WI did not significantly change the MRI assessments but did lengthen processing time (35 s without T2WI compared to 48 s with T2WI, p < 0.001). CONCLUSION: The DLA successfully identified AIS in the ER setting with an average processing time of 24 s. The absence of a performance gain with axial T2WI suggests that the DLA can diagnose AIS with only axial DWI and FLAIR sequences, potentially shortening exam duration in the ER.
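
For orientation, the reported case-level metrics can be reproduced from per-study labels and model scores in a few lines; a hedged sketch (scikit-learn; all variable names are illustrative):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def detection_metrics(y_true, y_score, threshold=0.5):
    """Sensitivity, specificity, accuracy, and AUROC for a binary
    detection task; y_true and y_score are 1-D arrays of equal length."""
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "auroc": roc_auc_score(y_true, y_score),
    }
```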

3.
Article in English | MEDLINE | ID: mdl-39059508

ABSTRACT

PURPOSE: The purpose of this study was to investigate an extended self-adapting nnU-Net framework for detecting and segmenting brain metastases (BM) on magnetic resonance imaging (MRI). METHODS AND MATERIALS: Six different nnU-Net systems with adaptive data sampling, adaptive Dice loss, or different patch/batch sizes were trained and tested for detecting and segmenting intraparenchymal BM with a size ≥2 mm on 3-dimensional (3D) post-Gd T1-weighted MRI volumes using 2092 patients from 7 institutions (1712, 195, and 185 patients for training, validation, and testing, respectively). Gross tumor volumes of BM delineated by physicians for stereotactic radiosurgery were collected retrospectively and curated at each institution. Additional centralized data curation was carried out by 2 radiologists, who created gross tumor volumes for previously uncontoured BM to improve the accuracy of the ground truth. The training data set was augmented with 1025 MRI volumes containing synthetic BM produced by a 3D generative pipeline. BM detection was evaluated by lesion-level sensitivity and false-positive (FP) rate. BM segmentation was assessed by lesion-level Dice similarity coefficient, 95th-percentile Hausdorff distance, and average Hausdorff distance (HD). Performance was assessed across different BM sizes. Additional testing was performed using a second data set of 206 patients. RESULTS: Of the 6 nnU-Net systems, the nnU-Net with adaptive Dice loss achieved the best detection and segmentation performance on the first testing data set. At an FP rate of 0.65 ± 1.17, overall sensitivity was 0.904 for all sizes of BM, 0.966 for BM ≥0.1 cm3, and 0.824 for BM <0.1 cm3. Mean values of the Dice similarity coefficient, 95th-percentile Hausdorff distance, and average HD of all detected BM were 0.758, 1.45 mm, and 0.23 mm, respectively. On the second testing data set, the system achieved a sensitivity of 0.907 at an FP rate of 0.57 ± 0.85 for all BM sizes and an average HD of 0.33 mm for all detected BM. CONCLUSIONS: Our proposed extension of the self-configuring nnU-Net framework substantially improved small-BM detection sensitivity while maintaining a controlled FP rate. The clinical utility of the extended nnU-Net model for assisting early BM detection and stereotactic radiosurgery planning will be investigated.
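
The abstract does not spell out the adaptive Dice loss; for context, the baseline soft Dice loss it presumably extends can be written as follows (PyTorch; a sketch, not the study's code):

```python
import torch

def soft_dice_loss(pred: torch.Tensor, target: torch.Tensor,
                   eps: float = 1e-5) -> torch.Tensor:
    """Soft Dice loss over a batch of probability maps; pred and target
    share shape (batch, ...). The adaptive variant in the study likely
    reweights or modifies this baseline in ways not described here."""
    dims = tuple(range(1, pred.ndim))                 # reduce spatial dims
    intersection = (pred * target).sum(dim=dims)
    denom = pred.sum(dim=dims) + target.sum(dim=dims)
    dice = (2 * intersection + eps) / (denom + eps)   # per-sample Dice
    return 1.0 - dice.mean()
```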

4.
Radiology ; 263(3): 856-64, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22474671

ABSTRACT

PURPOSE: To develop and evaluate a technique for the registration of in vivo prostate magnetic resonance (MR) images to digital histopathologic images by using image-guided specimen slicing based on strand-shaped fiducial markers that relate specimen imaging to histopathologic examination. MATERIALS AND METHODS: The study was approved by the institutional review board (the University of Western Ontario Health Sciences Research Ethics Board, London, Ontario, Canada), and written informed consent was obtained from all patients. The proposed technique uses the developed strand-shaped fiducial markers and real-time three-dimensional visualization to guide ex vivo prostate specimen slicing parallel to the MR imaging planes prior to digitization, simplifying the registration process. Means, standard deviations, root-mean-square errors, and 95% confidence intervals are reported for all evaluated measurements. RESULTS: The slicing error was within the 2.2-mm thickness of the diagnostic-quality MR imaging sections, with a tissue block thickness standard deviation of 0.2 mm. Rigid registration provided negligible postregistration overlap of the smallest clinically important tumors (0.2 cm(3)) at histologic examination and MR imaging, whereas the tested nonrigid registration method yielded a mean target registration error of 1.1 mm and provided useful coregistration of such tumors. CONCLUSION: This method for the registration of prostate digital histopathologic images to in vivo MR images acquired by using an endorectal receive coil was sufficiently accurate for coregistering the smallest clinically important lesions with 95% confidence.
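
The target registration error reported above is, in essence, the mean 3D distance between corresponding landmarks after registration; a minimal sketch (NumPy; the 4x4 transform convention is an assumption, not taken from the paper):

```python
import numpy as np

def target_registration_error(fixed_pts, moving_pts, transform):
    """Mean 3D distance between corresponding landmarks after applying
    `transform` (a 4x4 homogeneous matrix) to the moving points.
    Illustrative helper only; the study's evaluation is more involved."""
    moving = np.asarray(moving_pts, dtype=float)          # (N, 3)
    homo = np.c_[moving, np.ones(len(moving))]            # (N, 4)
    mapped = (transform @ homo.T).T[:, :3]                # registered points
    fixed = np.asarray(fixed_pts, dtype=float)
    return np.linalg.norm(mapped - fixed, axis=1).mean()  # TRE in mm
```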


Subject(s)
Magnetic Resonance Imaging/instrumentation , Prostate/pathology , Prostatic Neoplasms/pathology , Contrast Media , Fiducial Markers , Gadolinium DTPA , Humans , Image Interpretation, Computer-Assisted , Imaging, Three-Dimensional/instrumentation , Magnetic Resonance Imaging, Interventional , Male , Prostate/surgery , Prostatectomy , Prostatic Neoplasms/surgery
5.
J Magn Reson Imaging ; 36(6): 1402-12, 2012 Dec.
Article in English | MEDLINE | ID: mdl-22851455

ABSTRACT

PURPOSE: To present and evaluate a method for registration of whole-mount prostate digital histology images to ex vivo magnetic resonance (MR) images. MATERIALS AND METHODS: Nine radical prostatectomy specimens were marked with 10 strand-shaped fiducial markers per specimen, imaged with T1- and T2-weighted 3T MRI protocols, sliced at 4.4-mm intervals, and processed for whole-mount histology, and the resulting histological sections (3-5 per specimen, 34 in total) were digitized. The correspondence between fiducial markers on histology and MR images yielded an initial registration, which was refined by a local optimization technique, yielding the least-squares best-fit affine transformation between corresponding fiducial points on histology and MR images. Accuracy was quantified as the postregistration 3D distance between landmarks (3-7 per section, 184 in total) on histology and MR images and compared to a previous state-of-the-art registration method. RESULTS: The proposed method and the previous method had mean (SD) target registration errors of 0.71 (0.38) mm and 1.21 (0.74) mm, requiring 3 and 11 hours of processing time, respectively. CONCLUSION: The proposed method registers digital histology to prostate MR images, reducing processing time by 70% and achieving mean accuracy sufficient for 85% overlap between histology and ex vivo MR images for a 0.2-cc spherical tumor.
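
The least-squares best-fit affine step described above can be sketched directly with a linear solve (NumPy; function and variable names are invented for illustration):

```python
import numpy as np

def fit_affine(src_pts: np.ndarray, dst_pts: np.ndarray) -> np.ndarray:
    """Least-squares affine transform mapping src_pts to dst_pts (both
    (N, 3), N >= 4 non-coplanar fiducials). Returns a 4x4 matrix.
    A sketch of the fitting step, not the authors' implementation."""
    homo = np.c_[src_pts, np.ones(len(src_pts))]              # (N, 4)
    coeffs, *_ = np.linalg.lstsq(homo, dst_pts, rcond=None)   # (4, 3)
    affine = np.eye(4)
    affine[:3, :] = coeffs.T   # 3x4 block: linear part plus translation
    return affine
```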


Subject(s)
Biopsy/instrumentation , Fiducial Markers , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/instrumentation , Pattern Recognition, Automated/methods , Prostate/pathology , Subtraction Technique , Adolescent , Adult , Aged , Aged, 80 and over , Algorithms , Biopsy/methods , Equipment Design , Equipment Failure Analysis , Humans , Image Enhancement/methods , Magnetic Resonance Imaging/methods , Male , Middle Aged , Reproducibility of Results , Sensitivity and Specificity , Young Adult
6.
Radiol Artif Intell ; 4(3): e210115, 2022 May.
Article in English | MEDLINE | ID: mdl-35652116

ABSTRACT

Purpose: To present a method that automatically detects, subtypes, and locates acute or subacute intracranial hemorrhage (ICH) on noncontrast CT (NCCT) head scans; generates detection confidence scores to identify high-confidence data subsets with higher accuracy; and improves radiology worklist prioritization. Such scores may enable clinicians to better use artificial intelligence (AI) tools. Materials and Methods: This retrospective study included 46 057 studies from seven "internal" centers for development (training, architecture selection, hyperparameter tuning, and operating-point calibration; n = 25 946) and evaluation (n = 2947) and three "external" centers for calibration (n = 400) and evaluation (n = 16 764). Internal centers contributed developmental data, whereas external centers did not. Deep neural networks predicted the presence of ICH and its subtypes (intraparenchymal, intraventricular, subarachnoid, subdural, and/or epidural hemorrhage) and produced segmentations per case. Two ICH confidence scores are discussed: a calibrated classifier entropy score and a Dempster-Shafer score. Evaluation used receiver operating characteristic curve analysis and report turnaround time (RTAT) modeling on the evaluation set and on confidence score-defined subsets using bootstrapping. Results: The areas under the receiver operating characteristic curve for ICH were 0.97 (0.97, 0.98) and 0.95 (0.94, 0.95) on internal and external center data, respectively. On the 80% of the data stratified by calibrated classifier and Dempster-Shafer scores, the system improved the Youden indexes from 0.84 to 0.93 (calibrated classifier) and from 0.84 to 0.92 (Dempster-Shafer) for internal centers, and from 0.78 to 0.88 (calibrated classifier) and from 0.78 to 0.89 (Dempster-Shafer) for external centers (P < .001). Models estimated shorter RTAT for AI-prioritized worklists with confidence measures than for AI-prioritized worklists without them, shortening RTAT by 27% (calibrated classifier) and 27% (Dempster-Shafer) for internal centers and by 25% (calibrated classifier) and 27% (Dempster-Shafer) for external centers (P < .001). Conclusion: AI that provided statistical confidence measures for ICH detection on NCCT scans reliably detected and subtyped hemorrhages, identified high-confidence predictions, and improved worklist prioritization in simulation.
Keywords: CT, Head/Neck, Hemorrhage, Convolutional Neural Network (CNN). © RSNA, 2022.
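
Of the two confidence scores, the calibrated classifier entropy score admits a compact reading: the binary entropy of the calibrated ICH probability, inverted so that predictions near 0 or 1 score as confident. A hedged sketch of that reading (NumPy; the exact formulation in the paper may differ):

```python
import numpy as np

def entropy_confidence(p: np.ndarray) -> np.ndarray:
    """Confidence from the binary entropy of a calibrated probability p:
    low entropy (p near 0 or 1) maps to high confidence. One plausible
    form of a 'calibrated classifier entropy score', not the paper's."""
    p = np.clip(p, 1e-7, 1.0 - 1e-7)                  # avoid log(0)
    h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))  # entropy in [0, 1] bits
    return 1.0 - h                                     # 1 = most confident
```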

7.
IEEE Trans Med Imaging ; 40(1): 335-345, 2021 01.
Article in English | MEDLINE | ID: mdl-32966215

ABSTRACT

Detecting malignant pulmonary nodules at an early stage allows medical interventions that may increase the survival rate of lung cancer patients. Using computer vision techniques to detect nodules can improve the sensitivity and the speed of interpreting chest CT for lung cancer screening. Many studies have used CNNs to detect nodule candidates. Although such approaches outperform conventional image-processing-based methods in detection accuracy, CNNs are also known to generalize poorly to under-represented samples in the training set and to be vulnerable to imperceptible noise perturbations. Such limitations cannot be easily addressed by scaling up the dataset or the models. In this work, we propose to add adversarial synthetic nodules and adversarial attack samples to the training data to improve the generalization and the robustness of lung nodule detection systems. To generate hard examples of nodules from a differentiable nodule synthesizer, we use projected gradient descent (PGD) to search, within a bounded neighbourhood of the latent code, for nodules that decrease the detector response. To make the network more robust to unanticipated noise perturbations, we use PGD to search for noise patterns that can trigger the network into over-confident mistakes. Evaluating on two different benchmark datasets containing consensus annotations from three radiologists, we show that the proposed techniques improve detection performance on real CT data. To understand the limitations of both the conventional networks and the proposed augmented networks, we also perform stress tests on the false-positive reduction networks by feeding them different types of artificially produced patches. We show that the augmented networks are more robust both to under-represented nodules and to noise perturbations.
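
The latent-space PGD search can be sketched schematically: starting from a latent code, take signed-gradient steps that lower the detector's response, projecting back onto a bounded neighbourhood after each step. A minimal sketch (PyTorch; `synthesizer` and `detector` are assumed differentiable callables, and all hyperparameters are placeholders):

```python
import torch

def pgd_latent_attack(synthesizer, detector, z0: torch.Tensor,
                      eps: float = 0.1, step: float = 0.02,
                      n_steps: int = 10) -> torch.Tensor:
    """Search the eps-ball around latent code z0 for a code whose
    synthesized nodule minimizes the detector response. Schematic of the
    hard-example generation described above, not the paper's exact loss."""
    z = z0.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        response = detector(synthesizer(z)).mean()   # detector confidence
        grad, = torch.autograd.grad(response, z)
        with torch.no_grad():
            z -= step * grad.sign()                  # descend the response
            z.copy_(z0 + (z - z0).clamp(-eps, eps))  # project to eps-ball
    return z.detach()
```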


Subject(s)
Lung Neoplasms , Solitary Pulmonary Nodule , Early Detection of Cancer , Humans , Image Processing, Computer-Assisted , Lung , Lung Neoplasms/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted , Solitary Pulmonary Nodule/diagnostic imaging , Tomography, X-Ray Computed
8.
J Med Imaging (Bellingham) ; 8(3): 037001, 2021 May.
Article in English | MEDLINE | ID: mdl-34041305

ABSTRACT

Purpose: We investigate the impact of various deep-learning-based methods for detecting and segmenting metastases of different lesion volumes on 3D brain MR images. Approach: A 2.5D U-Net and a 3D U-Net were selected. We also evaluated weak-learner fusion of the prediction features generated by the 2.5D and 3D networks. A 3D fully convolutional one-stage (FCOS) detector was selected as a representative of bounding-box regression-based detection methods. A total of 422 3D post-contrast T1-weighted scans from patients with brain metastases were used. Performance was analyzed by lesion volume, total metastatic volume per patient, and number of lesions per patient. Results: For lesion volumes > 0.3 cm3, the 2.5D and 3D U-Net methods achieved recall > 0.83 and precision > 0.44, but performance deteriorated as metastasis size decreased below 0.3 cm3, to 0.58-0.74 in recall and 0.16-0.25 in precision. Comparing the two U-Nets' detection capability, the 2.5D network achieved higher precision and the 3D network higher recall for all lesion sizes. The weak-learner fusion achieved a balanced performance between the 2.5D and 3D U-Nets; in particular, it increased precision to 0.83 for lesion volumes of 0.1 to 0.3 cm3 but decreased recall to 0.59. The 3D FCOS detector did not outperform the U-Net methods in detecting either the small or the large metastases, presumably because of the limited data size. Conclusions: Our study reports the performance of four deep learning methods in relation to lesion size, total metastasis volume, and number of lesions per patient, providing insight into the further development of deep learning networks.
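
The size-stratified analysis above amounts to binning ground-truth lesions by volume and computing recall per bin; a small illustrative helper (NumPy; bin edges mirror the volume cut-offs quoted above):

```python
import numpy as np

def recall_by_volume(lesion_volumes_cm3, detected_flags,
                     bins=(0.0, 0.1, 0.3, np.inf)):
    """Per-lesion recall stratified by volume bin. Inputs are one volume
    and one detected/missed flag per ground-truth lesion. Illustrative
    only; the study's matching of detections to lesions is not shown."""
    vols = np.asarray(lesion_volumes_cm3, dtype=float)
    hits = np.asarray(detected_flags, dtype=bool)
    recall = {}
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (vols >= lo) & (vols < hi)
        recall[f"[{lo}, {hi}) cm3"] = (hits[in_bin].mean()
                                       if in_bin.any() else float("nan"))
    return recall
```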

9.
Ann Biomed Eng ; 49(2): 573-584, 2021 Feb.
Article in English | MEDLINE | ID: mdl-32779056

ABSTRACT

Prostate cancer (PCa) is a common, serious form of cancer in men that remains prevalent despite ongoing developments in diagnostic oncology. Current detection methods lead to high rates of inaccurate diagnosis. We present a method to directly model and exploit the temporal aspects of temporal enhanced ultrasound (TeUS) for tissue characterization, which improves malignancy prediction. We employ a probabilistic temporal framework, namely hidden Markov models (HMMs), for modeling TeUS data obtained from PCa patients. We distinguish malignant from benign tissue by comparing the respective log-likelihood estimates generated by the HMMs. We analyze 1100 TeUS signals acquired from 12 patients. Our results show improved malignancy identification compared to previous results, demonstrating over 85% accuracy and an AUC of 0.95. Incorporating temporal information directly into the models leads to improved tissue differentiation in PCa. We expect our method to generalize and be applied to other types of cancer in which temporal ultrasound can be recorded.
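
The two-model log-likelihood comparison described above has a direct analogue in standard HMM libraries: fit one Gaussian HMM per tissue class and label a new signal by the higher score. A toy sketch with synthetic stand-in data (hmmlearn; real TeUS signals and features would replace the random arrays):

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
benign_train = rng.normal(0.0, 1.0, (100, 1))     # 100 time steps, 1 feature
malignant_train = rng.normal(0.5, 1.5, (100, 1))  # synthetic stand-ins

# One HMM per class, as in the likelihood-comparison scheme above.
hmm_benign = hmm.GaussianHMM(n_components=3, random_state=0).fit(benign_train)
hmm_malignant = hmm.GaussianHMM(n_components=3, random_state=0).fit(malignant_train)

signal = rng.normal(0.5, 1.5, (100, 1))           # unseen TeUS-like signal
label = ("malignant"
         if hmm_malignant.score(signal) > hmm_benign.score(signal)
         else "benign")
print(label)  # class with the higher log-likelihood
```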


Subject(s)
Models, Theoretical , Prostate/diagnostic imaging , Prostatic Neoplasms/diagnosis , Humans , Male , Markov Chains , Ultrasonography
10.
Med Image Anal ; 68: 101855, 2021 02.
Article in English | MEDLINE | ID: mdl-33260116

ABSTRACT

The interpretation of medical images is a challenging task, often complicated by the presence of artifacts, occlusions, limited contrast, and more. Most notable is the case of chest radiography, where there is high inter-rater variability in the detection and classification of abnormalities, largely due to inconclusive evidence in the data or subjective definitions of disease appearance. An additional example is the classification of anatomical views from 2D ultrasound images, where the anatomical context captured in a frame is often insufficient to recognize the underlying anatomy. Current machine learning solutions for these problems are typically limited to providing probabilistic predictions, relying on the capacity of the underlying models to adapt to limited information and a high degree of label noise. In practice, however, this leads to overconfident systems with poor generalization on unseen data. To account for this, we propose a system that learns not only the probabilistic estimate for classification but also an explicit uncertainty measure that captures the confidence of the system in the predicted output. We argue that this approach is essential to account for the inherent ambiguity of medical images from different radiologic exams, including computed radiography, ultrasonography, and magnetic resonance imaging. In our experiments we demonstrate that sample rejection based on the predicted uncertainty can significantly improve the ROC-AUC for various tasks, e.g., by 8% to 0.91 with an expected rejection rate of under 25% for the classification of different abnormalities in chest radiographs. In addition, we show that by using uncertainty-driven bootstrapping to filter the training data, one can achieve a significant increase in robustness and accuracy. Finally, we present a multi-reader study showing that the predictive uncertainty is indicative of reader errors.
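
The uncertainty-based sample rejection evaluated above reduces, at its simplest, to dropping the least-confident fraction of cases before scoring; a hedged sketch (scikit-learn; inputs are hypothetical per-case arrays):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_after_rejection(y_true, y_prob, uncertainty, reject_rate=0.25):
    """ROC-AUC on the cases kept after rejecting the `reject_rate`
    fraction with the highest predicted uncertainty. Illustrative of the
    evaluation pattern above; assumes both classes survive rejection."""
    keep_n = int(len(y_true) * (1 - reject_rate))
    keep = np.argsort(uncertainty)[:keep_n]  # indices of most-confident cases
    return roc_auc_score(np.asarray(y_true)[keep], np.asarray(y_prob)[keep])
```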


Subject(s)
Artifacts , Magnetic Resonance Imaging , Humans , Machine Learning , Uncertainty