Results 1-20 of 40
1.
Article in English | MEDLINE | ID: mdl-39059508

ABSTRACT

PURPOSE: To investigate an extended self-adapting nnU-Net framework for detecting and segmenting brain metastases (BM) on MRI. APPROACH: Six different nnU-Net systems with adaptive data sampling, adaptive Dice loss (ADL), or different patch/batch sizes were trained and tested for detecting and segmenting intraparenchymal BM with a size ≥ 2 mm on 3D post-Gd T1-weighted MRI volumes using 2092 patients from seven institutions (1712, 195, and 185 patients for training, validation, and testing, respectively). Gross tumor volumes (GTVs) of BM delineated by physicians for stereotactic radiosurgery (SRS) were collected retrospectively and curated at each institution. Additional centralized curation was carried out by two radiologists, who created GTVs for uncontoured BM to improve the accuracy of the ground truth. The training dataset was augmented with synthetic BM in 1025 MRI volumes using a 3D generative pipeline. BM detection was evaluated by lesion-level sensitivity and false-positive (FP) rate. BM segmentation was assessed by lesion-level Dice similarity coefficient (DSC), 95th-percentile Hausdorff distance (HD95), and average HD. Performance was assessed across BM sizes. Additional testing was performed using a second dataset of 206 patients. RESULTS: Of the six nnU-Net systems, the nnU-Net with ADL achieved the best detection and segmentation performance on the first testing dataset. At an FP rate of 0.65±1.17, overall sensitivity was 0.904 for all BM sizes, 0.966 for BM ≥ 0.1 cm³, and 0.824 for BM < 0.1 cm³. Mean values of DSC, HD95, and average HD over all detected BM were 0.758, 1.45 mm, and 0.23 mm, respectively. On the second testing dataset, sensitivity was 0.907 at an FP rate of 0.57±0.85 for all BM sizes, with an average HD of 0.33 mm for all detected BM. CONCLUSIONS: Our proposed extension of the self-configuring nnU-Net framework substantially improved small-BM detection sensitivity while maintaining a controlled FP rate. The clinical utility of the extended nnU-Net model for assisting early BM detection and SRS planning will be investigated.
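The adaptive Dice loss (ADL) named above drove the best-performing system, but the abstract does not spell out its formulation. As a minimal sketch, the standard soft Dice loss that ADL presumably extends is shown below in PyTorch; the class name, tensor shapes, and the omission of the size-adaptive term are illustrative assumptions, not the authors' definition.

```python
import torch
import torch.nn as nn

class SoftDiceLoss(nn.Module):
    """Soft Dice loss over 3D volumes; the paper's ADL adapts this further
    (the exact adaptation is not given in the abstract, so it is omitted)."""

    def __init__(self, eps: float = 1e-6):
        super().__init__()
        self.eps = eps

    def forward(self, probs: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # probs, target: (N, 1, D, H, W); probs are sigmoid outputs in [0, 1]
        dims = (1, 2, 3, 4)
        inter = (probs * target).sum(dims)
        denom = probs.sum(dims) + target.sum(dims)
        dice = (2 * inter + self.eps) / (denom + self.eps)
        return 1.0 - dice.mean()

loss = SoftDiceLoss()(
    torch.rand(2, 1, 8, 16, 16),
    torch.randint(0, 2, (2, 1, 8, 16, 16)).float(),
)
```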

2.
Acad Radiol ; 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38908922

ABSTRACT

RATIONALE AND OBJECTIVES: To assess a deep learning application (DLA) for acute ischemic stroke (AIS) detection on brain magnetic resonance imaging (MRI) in the emergency room (ER), and the effect of T2-weighted imaging (T2WI) on its performance. MATERIALS AND METHODS: We retrospectively analyzed brain MRIs acquired through the ER from March to October 2021 that included diffusion-weighted imaging (DWI) and fluid-attenuated inversion recovery (FLAIR) sequences. MRIs were processed by the DLA, and sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUROC) were evaluated, with three neuroradiologists establishing the gold standard for detection performance. In addition, we examined the impact of axial T2WI, when available, on the accuracy and processing time of the DLA. RESULTS: The study included 947 individuals (mean age ± standard deviation, 64 years ± 16; 461 men, 486 women), 239 (25%) of whom were positive for AIS. The overall performance of the DLA was as follows: sensitivity, 90%; specificity, 89%; accuracy, 89%; and AUROC, 0.95. The average processing time was 24 s. In the subgroup with T2WI, T2WI did not significantly affect the DLA's assessments but did lengthen processing times (35 s without T2WI versus 48 s with T2WI, p < 0.001). CONCLUSION: The DLA successfully identified AIS in the ER setting with an average processing time of 24 s. The absence of a performance gain with axial T2WI suggests that the DLA can diagnose AIS from axial DWI and FLAIR sequences alone, potentially shortening exam duration in the ER.
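For orientation, the reported figures relate as in the short scikit-learn sketch below, which computes sensitivity, specificity, accuracy, and AUROC from made-up predictions; it is not the authors' evaluation code.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# y_true: 1 = AIS present, 0 = absent; y_score: model probability (illustrative)
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.92, 0.20, 0.75, 0.66, 0.35, 0.10, 0.58, 0.44])
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / len(y_true)
auroc = roc_auc_score(y_true, y_score)
print(f"sens={sensitivity:.2f} spec={specificity:.2f} "
      f"acc={accuracy:.2f} AUC={auroc:.2f}")
```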

3.
Sci Rep ; 14(1): 9380, 2024 04 23.
Article in English | MEDLINE | ID: mdl-38654066

ABSTRACT

Vision transformers (ViTs) have revolutionized computer vision by employing self-attention instead of convolution, demonstrating success through their ability to capture global dependencies and avoid the spatial locality biases of convolutional networks. In medical imaging, where input data may differ in size and resolution, existing architectures require resampling or resizing during pre-processing, leading to potential loss of spatial resolution and degradation of information. This study proposes a co-ordinate-based embedding that encodes the geometry of medical images, capturing physical co-ordinate and resolution information without the need for resampling or resizing. The effectiveness of the proposed embedding is demonstrated through experiments with UNETR and SwinUNETR models for infarct segmentation on an MRI dataset with AxTrace and AxADC contrasts. The dataset consists of 1142 training, 133 validation, and 143 test subjects. With the addition of the co-ordinate-based positional embedding, the two models achieved substantial improvements in mean Dice score of 6.5% and 7.6%, respectively. The proposed embedding showed a statistically significant advantage (p < 0.0001) over alternative approaches. In conclusion, the proposed co-ordinate-based pixel-wise positional embedding method offers a promising solution for Transformer-based models in medical image analysis. It effectively leverages physical co-ordinate information to enhance performance without compromising spatial resolution and provides a foundation for future advancements in positional embedding techniques for medical applications.
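The abstract does not give the embedding's exact form. One plausible minimal sketch, assuming the embedding is a learned projection of per-voxel physical coordinates (index × spacing + origin), follows in PyTorch; `CoordinateEmbedding` and all shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

def physical_coordinates(shape, spacing, origin):
    """Per-voxel physical (z, y, x) position in mm: index * spacing + origin."""
    grids = torch.meshgrid(
        *[torch.arange(s, dtype=torch.float32) for s in shape], indexing="ij"
    )
    idx = torch.stack(grids, dim=-1)                       # (D, H, W, 3)
    return idx * torch.tensor(spacing) + torch.tensor(origin)

class CoordinateEmbedding(nn.Module):
    """Illustrative pixel-wise positional embedding from physical coordinates."""
    def __init__(self, embed_dim: int):
        super().__init__()
        self.proj = nn.Linear(3, embed_dim)

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        return self.proj(coords)                           # (..., embed_dim)

coords = physical_coordinates((8, 16, 16), spacing=(5.0, 1.2, 1.2),
                              origin=(0.0, 0.0, 0.0))
emb = CoordinateEmbedding(embed_dim=48)(coords)            # add to patch tokens
```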


Subject(s)
Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Image Processing, Computer-Assisted/methods; Algorithms; Neural Networks, Computer
4.
Radiol Artif Intell ; 4(3): e210115, 2022 May.
Article in English | MEDLINE | ID: mdl-35652116

ABSTRACT

Purpose: To present a method that automatically detects, subtypes, and locates acute or subacute intracranial hemorrhage (ICH) on noncontrast CT (NCCT) head scans; generates detection confidence scores to identify high-confidence data subsets with higher accuracy; and improves radiology worklist prioritization. Such scores may enable clinicians to better use artificial intelligence (AI) tools. Materials and Methods: This retrospective study included 46,057 studies from seven "internal" centers for development (training, architecture selection, hyperparameter tuning, and operating-point calibration; n = 25,946) and evaluation (n = 2947) and three "external" centers for calibration (n = 400) and evaluation (n = 16,764). Internal centers contributed developmental data, whereas external centers did not. Deep neural networks predicted the presence of ICH and subtypes (intraparenchymal, intraventricular, subarachnoid, subdural, and/or epidural hemorrhage) and segmentations per case. Two ICH confidence scores are discussed: a calibrated classifier entropy score and a Dempster-Shafer score. Evaluation was performed using receiver operating characteristic curve analysis and report turnaround time (RTAT) modeling on the evaluation set and on confidence-score-defined subsets using bootstrapping. Results: The areas under the receiver operating characteristic curve for ICH were 0.97 (0.97, 0.98) and 0.95 (0.94, 0.95) on internal and external center data, respectively. On 80% of the data stratified by calibrated classifier and Dempster-Shafer scores, the system improved the Youden indexes, increasing them from 0.84 to 0.93 (calibrated classifier) and from 0.84 to 0.92 (Dempster-Shafer) for internal centers, and from 0.78 to 0.88 (calibrated classifier) and from 0.78 to 0.89 (Dempster-Shafer) for external centers (P < .001). Models estimated shorter RTAT for AI-prioritized worklists with confidence measures than for AI-prioritized worklists without them, shortening RTAT by 27% (calibrated classifier) and 27% (Dempster-Shafer) for internal centers, and by 25% (calibrated classifier) and 27% (Dempster-Shafer) for external centers (P < .001). Conclusion: AI that provided statistical confidence measures for ICH detection on NCCT scans reliably detected and subtyped hemorrhages, identified high-confidence predictions, and improved worklist prioritization in simulation. Keywords: CT, Head/Neck, Hemorrhage, Convolutional Neural Network (CNN). Supplemental material is available for this article. © RSNA, 2022.
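One plausible reading of the "calibrated classifier entropy score" is a confidence value derived from the binary entropy of a calibrated probability; the exact definition is not given in the abstract, so the sketch below is an assumption for illustration.

```python
import numpy as np

def binary_entropy(p: np.ndarray) -> np.ndarray:
    """Entropy (bits) of a Bernoulli probability, clipped for stability."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def entropy_confidence(p_calibrated: np.ndarray) -> np.ndarray:
    """Confidence in [0, 1]: 1 at p = 0 or 1, 0 at p = 0.5 (illustrative)."""
    return 1.0 - binary_entropy(p_calibrated)

p = np.array([0.02, 0.50, 0.97])            # calibrated ICH probabilities
conf = entropy_confidence(p)
keep = p[conf >= np.quantile(conf, 0.2)]    # e.g. retain top-80% confident cases
```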

5.
J Med Imaging (Bellingham) ; 8(3): 037001, 2021 May.
Article in English | MEDLINE | ID: mdl-34041305

ABSTRACT

Purpose: We investigate the impact of various deep-learning-based methods for detecting and segmenting metastases of different lesion volumes on 3D brain MR images. Approach: A 2.5D U-Net and a 3D U-Net were selected. We also evaluated weak-learner fusion of the prediction features generated by the 2.5D and the 3D networks. A 3D fully convolutional one-stage (FCOS) detector was selected as a representative of bounding-box regression-based detection methods. A total of 422 3D post-contrast T1-weighted scans from patients with brain metastases were used. Performance was analyzed by lesion volume, total metastatic volume per patient, and number of lesions per patient. Results: In detection, the 2.5D and 3D U-Net methods achieved recall > 0.83 and precision > 0.44 for lesion volumes > 0.3 cm³, but performance deteriorated for metastases smaller than 0.3 cm³, with recall falling to 0.58-0.74 and precision to 0.16-0.25. Comparing the two U-Nets' detection capability, the 2.5D network achieved higher precision and the 3D network higher recall across all lesion sizes. The weak-learner fusion achieved a balanced performance between the 2.5D and 3D U-Nets; in particular, it increased precision to 0.83 for lesion volumes of 0.1 to 0.3 cm³ but decreased recall to 0.59. The 3D FCOS detector did not outperform the U-Net methods in detecting either the small or the large metastases, presumably because of the limited data size. Conclusions: Our study reports the performance of four deep learning methods in relation to lesion size, total metastasis volume, and number of lesions per patient, providing insight into further development of deep learning networks.
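Size-stratified recall of the kind reported above can be computed as in this short sketch; the volumes, detection flags, and bin edges are illustrative stand-ins, not the study's data.

```python
import numpy as np

lesion_vol = np.array([0.05, 0.2, 0.5, 1.2, 0.08, 0.9])   # cm^3 (illustrative)
detected = np.array([0, 1, 1, 1, 1, 1])                   # 1 = found by the model

# Recall per lesion-volume bin, mirroring the size-stratified reporting
for lo, hi in [(0.0, 0.1), (0.1, 0.3), (0.3, np.inf)]:
    m = (lesion_vol >= lo) & (lesion_vol < hi)
    if m.any():
        print(f"[{lo}, {hi}) cm^3: recall = {detected[m].mean():.2f} (n={m.sum()})")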

6.
Sci Rep ; 11(1): 6876, 2021 03 25.
Article in English | MEDLINE | ID: mdl-33767226

ABSTRACT

With the rapid growth and increasing use of brain MRI, there is interest in automated image classification to aid human interpretation and improve workflow. We aimed to train a deep convolutional neural network and assess its performance in identifying abnormal brain MRIs and critical intracranial findings, including acute infarction, acute hemorrhage, and mass effect. A total of 13,215 clinical brain MRI studies were split into training (74%), validation (9%), internal testing (8%), and external testing (8%) datasets. Up to eight contrasts were included from each brain MRI, and each image volume was reformatted to a common resolution to accommodate differences between scanners. After reviewing the radiology reports, three neuroradiologists labeled each study as abnormal or normal and identified three critical findings: acute infarction, acute hemorrhage, and mass effect. A deep convolutional neural network was constructed from a combination of localization feature extraction (LFE) modules and global classifiers to identify the presence of four variables in brain MRIs: abnormal, acute infarction, acute hemorrhage, and mass effect. Training, validation, and testing sets were randomly defined on a patient basis. Training was performed on 9845 studies using balanced sampling to address class imbalance. Receiver operating characteristic (ROC) analysis was performed. The ROC analysis of our models on 1050 studies within our internal test data showed AUC/sensitivity/specificity of 0.91/83%/86% for normal versus abnormal brain MRI, 0.95/92%/88% for acute infarction, 0.90/89%/81% for acute hemorrhage, and 0.93/93%/85% for mass effect. On 1072 studies within our external test data, it showed AUC/sensitivity/specificity of 0.88/80%/80% for normal versus abnormal brain MRI, 0.97/90%/97% for acute infarction, 0.83/72%/88% for acute hemorrhage, and 0.87/79%/81% for mass effect. Our proposed deep convolutional network can accurately identify abnormal and critical intracranial findings on individual brain MRIs while accommodating the fact that some MR contrasts might not be available in individual studies.
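Balanced sampling of the kind mentioned above is commonly implemented by drawing training examples with inverse-class-frequency weights. A hedged PyTorch sketch (not the authors' code; labels and dataset are made up) follows.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# labels: 1 = abnormal study, 0 = normal (illustrative 90/10 imbalance)
labels = torch.tensor([0] * 90 + [1] * 10)
class_count = torch.bincount(labels).float()
weights = (1.0 / class_count)[labels]          # per-sample inverse frequency

sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)
dataset = TensorDataset(torch.randn(100, 8), labels)
loader = DataLoader(dataset, batch_size=16, sampler=sampler)

x, y = next(iter(loader))                      # batches are roughly class-balanced
```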


Subject(s)
Brain/anatomy & histology; Deep Learning; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Multiparametric Magnetic Resonance Imaging/methods; Neural Networks, Computer; Neuroimaging/methods; Humans; ROC Curve
7.
Ann Biomed Eng ; 49(2): 573-584, 2021 Feb.
Article in English | MEDLINE | ID: mdl-32779056

ABSTRACT

Prostate cancer (PCa) is a common, serious form of cancer in men that remains prevalent despite ongoing developments in diagnostic oncology. Current detection methods lead to high rates of inaccurate diagnosis. We present a method to directly model and exploit temporal aspects of temporal enhanced ultrasound (TeUS) for tissue characterization, which improves malignancy prediction. We employ a probabilistic-temporal framework, namely hidden Markov models (HMMs), for modeling TeUS data obtained from PCa patients. We distinguish malignant from benign tissue by comparing the respective log-likelihood estimates generated by the HMMs. We analyze 1100 TeUS signals acquired from 12 patients. Our results show improved malignancy identification compared to previous results, demonstrating over 85% accuracy and an AUC of 0.95. Incorporating temporal information directly into the models leads to improved tissue differentiation in PCa. We expect our method to generalize and be applicable to other types of cancer in which temporal ultrasound can be recorded.
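The classification rule described, comparing per-sequence log-likelihoods under class-specific HMMs, can be sketched with `hmmlearn` as below. The synthetic signals and three-state models are stand-ins, not the study's data or settings.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
# Illustrative stand-ins for TeUS time series (one sequence per core)
benign = [rng.normal(0.0, 1.0, size=(100, 1)) for _ in range(20)]
malignant = [rng.normal(0.5, 1.5, size=(100, 1)) for _ in range(20)]

def fit_hmm(seqs, n_states=3):
    X = np.vstack(seqs)
    lengths = [len(s) for s in seqs]
    return GaussianHMM(n_components=n_states, random_state=0).fit(X, lengths)

hmm_b, hmm_m = fit_hmm(benign), fit_hmm(malignant)

def predict(seq):
    # Classify by comparing log-likelihoods under the two fitted HMMs
    return "malignant" if hmm_m.score(seq) > hmm_b.score(seq) else "benign"

print(predict(rng.normal(0.5, 1.5, size=(100, 1))))
```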


Subject(s)
Models, Theoretical; Prostate/diagnostic imaging; Prostatic Neoplasms/diagnosis; Humans; Male; Markov Chains; Ultrasonography
8.
IEEE Trans Med Imaging ; 40(1): 335-345, 2021 01.
Article in English | MEDLINE | ID: mdl-32966215

ABSTRACT

Detecting malignant pulmonary nodules at an early stage allows medical interventions that may increase the survival rate of lung cancer patients. Using computer vision techniques to detect nodules can improve the sensitivity and the speed of interpreting chest CT for lung cancer screening. Many studies have used CNNs to detect nodule candidates. Although such approaches have been shown to outperform conventional image-processing-based methods in detection accuracy, CNNs are also known to generalize poorly to under-represented samples in the training set and to be vulnerable to imperceptible noise perturbations. Such limitations cannot be easily addressed by scaling up the dataset or the models. In this work, we propose adding adversarial synthetic nodules and adversarial attack samples to the training data to improve the generalization and robustness of lung nodule detection systems. To generate hard examples of nodules from a differentiable nodule synthesizer, we use projected gradient descent (PGD) to search, within a bounded neighbourhood of the latent code, for nodules that decrease the detector response. To make the network more robust to unanticipated noise perturbations, we use PGD to search for noise patterns that can trigger the network into over-confident mistakes. Evaluating on two benchmark datasets containing consensus annotations from three radiologists, we show that the proposed techniques improve detection performance on real CT data. To understand the limitations of both the conventional networks and the proposed augmented networks, we also perform stress tests on the false-positive reduction networks by feeding in different types of artificially produced patches. We show that the augmented networks are more robust to under-represented nodules and more resistant to noise perturbations.
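A generic form of the latent-space PGD search described above is sketched below in PyTorch. The step sizes, L-infinity projection, and stand-in `synthesizer`/`detector` modules are assumptions for illustration, not the paper's exact procedure.

```python
import torch

def pgd_latent_attack(synthesizer, detector, z0, eps=0.1, step=0.02, iters=10):
    """Search a bounded latent neighbourhood for a synthetic nodule that
    minimizes the detector response (a hard example); generic PGD sketch."""
    z = z0.clone().detach().requires_grad_(True)
    for _ in range(iters):
        response = detector(synthesizer(z)).mean()
        grad, = torch.autograd.grad(response, z)
        with torch.no_grad():
            z = z - step * grad.sign()              # descend the detector score
            z = z0 + (z - z0).clamp(-eps, eps)      # project onto L-inf ball
        z.requires_grad_(True)
    return z.detach()

# Stand-in modules, purely for demonstration:
synth = torch.nn.Linear(8, 8)                       # stands in for the synthesizer
det = torch.nn.Sequential(torch.nn.Linear(8, 1))    # stands in for the detector
z_hard = pgd_latent_attack(synth, det, torch.zeros(1, 8))
```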


Subject(s)
Lung Neoplasms; Solitary Pulmonary Nodule; Early Detection of Cancer; Humans; Image Processing, Computer-Assisted; Lung; Lung Neoplasms/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted; Solitary Pulmonary Nodule/diagnostic imaging; Tomography, X-Ray Computed
9.
Med Image Anal ; 68: 101855, 2021 02.
Article in English | MEDLINE | ID: mdl-33260116

ABSTRACT

The interpretation of medical images is a challenging task, often complicated by the presence of artifacts, occlusions, limited contrast, and more. Most notable is the case of chest radiography, where there is high inter-rater variability in the detection and classification of abnormalities, largely due to inconclusive evidence in the data or subjective definitions of disease appearance. An additional example is the classification of anatomical views based on 2D ultrasound images: often, the anatomical context captured in a frame is not sufficient to recognize the underlying anatomy. Current machine learning solutions for these problems are typically limited to providing probabilistic predictions, relying on the capacity of the underlying models to adapt to limited information and a high degree of label noise. In practice, however, this leads to overconfident systems with poor generalization on unseen data. To account for this, we propose a system that learns not only the probabilistic estimate for classification but also an explicit uncertainty measure that captures the confidence of the system in the predicted output. We argue that this approach is essential to account for the inherent ambiguity of medical images from different radiologic exams, including computed radiography, ultrasonography, and magnetic resonance imaging. In our experiments we demonstrate that sample rejection based on the predicted uncertainty can significantly improve the ROC-AUC for various tasks, e.g., by 8% to 0.91 with an expected rejection rate of under 25% for the classification of different abnormalities in chest radiographs. In addition, we show that, by using uncertainty-driven bootstrapping to filter the training data, one can achieve a significant increase in robustness and accuracy. Finally, we present a multi-reader study showing that the predictive uncertainty is indicative of reader errors.
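The rejection experiment can be mimicked in a few lines: rank cases by an uncertainty measure, drop the most uncertain fraction, and recompute AUC on the retained subset. The sketch below uses a distance-from-0.5 proxy for uncertainty on synthetic data; the paper learns an explicit uncertainty measure instead.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=1000)                          # ground-truth labels
p = np.clip(y * 0.6 + rng.normal(0.3, 0.25, size=1000), 0, 1)  # noisy scores

uncertainty = 1 - np.abs(2 * p - 1)       # proxy: highest near p = 0.5
keep = uncertainty <= np.quantile(uncertainty, 0.75)       # reject 25% most uncertain

print(f"AUC, all cases:      {roc_auc_score(y, p):.3f}")
print(f"AUC, retained cases: {roc_auc_score(y[keep], p[keep]):.3f}")
```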


Subject(s)
Artifacts; Magnetic Resonance Imaging; Humans; Machine Learning; Uncertainty
10.
Neuroimage Clin ; 29: 102522, 2021.
Article in English | MEDLINE | ID: mdl-33360973

ABSTRACT

INTRODUCTION: During the last decade, a multitude of novel quantitative and semiquantitative MRI techniques have provided new information about the pathophysiology of neurological diseases. Yet selection of the most relevant contrasts for a given pathology remains challenging. In this work, we developed and validated a method, Gated-Attention MEchanism Ranking of multi-contrast MRI in brain pathology (GAMER MRI), to rank the relative importance of MR measures in the classification of well-understood ischemic stroke lesions. We then applied the method to the classification of multiple sclerosis (MS) lesions, where the relative importance of MR measures is less understood. METHODS: GAMER MRI is based on the gated attention mechanism, which computes attention weights (AWs) as proxies for the importance of hidden features in the classification. In the first two experiments, we used Trace-weighted (Trace), apparent diffusion coefficient (ADC), fluid-attenuated inversion recovery (FLAIR), and T1-weighted (T1w) images acquired in 904 acute/subacute ischemic stroke patients and in 6230 healthy controls and patients with other brain pathologies to assess whether GAMER MRI could produce clinically meaningful importance orders in two different classification scenarios. In the first experiment, GAMER MRI with a pretrained convolutional neural network (CNN) was used in conjunction with Trace, ADC, and FLAIR to distinguish patients with ischemic stroke from those with other pathologies and healthy controls. In the second experiment, GAMER MRI with a patch-based CNN used Trace, ADC, and T1w to differentiate acute ischemic stroke lesions from healthy tissue. The last experiment explored the performance of the patch-based CNN with GAMER MRI in ranking the importance of quantitative MRI measures to distinguish two groups of lesions with different pathological characteristics and unknown quantitative MR features. Specifically, GAMER MRI was applied to assess the relative importance of the myelin water fraction (MWF), quantitative susceptibility mapping (QSM), quantitative T1 relaxometry (qT1), and neurite density index (NDI) in distinguishing 750 juxtacortical lesions from 242 periventricular lesions in 47 MS patients. Pairwise permutation t-tests were used to evaluate the differences between the AWs obtained for each quantitative measure. RESULTS: In the first experiment, we achieved a mean test AUC of 0.881; the AW of FLAIR and the sum of the AWs of Trace and ADC were 0.11 and 0.89, respectively, as expected from prior knowledge. In the second experiment, we achieved a mean test F1 score of 0.895 and mean AWs of 0.49 for Trace, 0.28 for ADC, and 0.23 for T1w, confirming the findings of the first experiment. In the third experiment, MS lesion classification achieved test balanced accuracy of 0.777, sensitivity of 0.739, and specificity of 0.814. The mean AWs of qT1, MWF, NDI, and QSM were 0.29, 0.26, 0.24, and 0.22, respectively (p < 0.001). CONCLUSIONS: This work demonstrates that the proposed GAMER MRI may be a useful method to assess the relative importance of MRI measures in neurological diseases with focal pathology. Moreover, the obtained AWs may help in choosing the best combination of MR contrasts for a specific classification problem.
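The gated attention mechanism that produces the AWs follows the general form of Ilse et al. (2018): a tanh branch gated by a sigmoid branch, followed by a softmax over inputs. A minimal PyTorch sketch, with illustrative dimensions, is shown below; it is a generic form, not the paper's full network.

```python
import torch
import torch.nn as nn

class GatedAttention(nn.Module):
    """Gated attention pooling: attention weights over per-contrast feature
    vectors serve as importance proxies (AWs)."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.V = nn.Linear(dim, hidden)
        self.U = nn.Linear(dim, hidden)
        self.w = nn.Linear(hidden, 1)

    def forward(self, h: torch.Tensor):
        # h: (n_contrasts, dim), one row of hidden features per MR measure
        gate = torch.tanh(self.V(h)) * torch.sigmoid(self.U(h))
        aw = torch.softmax(self.w(gate).squeeze(-1), dim=0)   # attention weights
        pooled = (aw.unsqueeze(-1) * h).sum(0)                # weighted combination
        return pooled, aw

pooled, aw = GatedAttention(dim=32)(torch.randn(4, 32))       # e.g. qT1/MWF/NDI/QSM
```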


Subject(s)
Brain Ischemia; Stroke; Brain/diagnostic imaging; Diffusion Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging; Stroke/diagnostic imaging
11.
Ann Biomed Eng ; 48(12): 3025, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32901381

ABSTRACT

The authors have noted an omission in the original acknowledgements. The correct acknowledgements are as follows: Acknowledgements: This work was partially supported by Grants from NSERC Discovery to Hagit Shatkay and Parvin Mousavi, NSERC and CIHR CHRP to Parvin Mousavi and NIH R01 LM012527, NIH U54 GM104941, NSF IIS EAGER #1650851 & NSF HDR #1940080 to Hagit Shatkay.

12.
Eur J Radiol ; 126: 108918, 2020 May.
Article in English | MEDLINE | ID: mdl-32171914

ABSTRACT

PURPOSE: To evaluate the performance of an artificial intelligence (AI)-based software solution for liver volumetric analysis and to compare the results with manual contour segmentation. MATERIALS AND METHODS: We retrospectively obtained 462 multiphasic CT datasets with six series per patient: three different contrast phases and two slice-thickness reconstructions (1.5/5 mm), totaling 2772 series. AI-based liver volumes were determined using multi-scale deep-reinforcement learning for 3D body-marker detection and 3D structure segmentation. The algorithm was trained for liver volumetry on approximately 5000 datasets. We computed the absolute error of each automatically and manually derived volume relative to the mean manual volume. The mean processing time per dataset was recorded for each method. Variations in liver volumes were compared using univariate generalized linear model analyses. A subgroup of 60 datasets was manually segmented by three radiologists, with a further subgroup of 20 segmented three times by each, to compare the automatically derived results with the ground truth. RESULTS: The mean absolute error of the automatically derived measurement was 44.3 mL (2.37% of the averaged liver volumes). Liver volume depended neither on the contrast phase (p = 0.697) nor on the slice thickness (p = 0.446). The mean processing time per dataset was 9.94 s with the algorithm, compared with 219.34 s for manual segmentation. We found excellent agreement between the two approaches, with an ICC of 0.996. CONCLUSION: Our results demonstrate that AI-powered, fully automated liver volumetric analysis can be performed with excellent accuracy, reproducibility, robustness, and speed, and in excellent agreement with manual segmentation.


Subject(s)
Algorithms; Image Interpretation, Computer-Assisted/methods; Liver Diseases/diagnostic imaging; Tomography, X-Ray Computed/methods; Artificial Intelligence; Deep Learning; Humans; Liver/diagnostic imaging; Reproducibility of Results; Retrospective Studies
13.
Med Image Anal ; 58: 101558, 2019 12.
Article in English | MEDLINE | ID: mdl-31526965

ABSTRACT

Convolutional neural networks (CNNs) have recently led to significant advances in automatic segmentation of anatomical structures in medical images, and a wide variety of network architectures are now available to the research community. For applications such as segmentation of the prostate in magnetic resonance images (MRI), the results of the PROMISE12 online algorithm evaluation platform have demonstrated differences between the best-performing segmentation algorithms in terms of numerical accuracy using standard metrics such as the Dice score and boundary distance. These small differences in the segmented regions/boundaries output by different algorithms may have an insubstantial impact on the results of downstream image analysis tasks, such as estimating organ volume and multimodal image registration, which inform clinical decisions. This impact has not previously been investigated. In this work, we quantified the accuracy of six different CNNs in segmenting the prostate in 3D patient T2-weighted MRI scans and compared the accuracy of organ volume estimation and MRI-ultrasound (US) registration errors using the prostate segmentations produced by the different networks. Networks were trained and tested using a set of 232 patient MRIs with labels provided by experienced clinicians. A statistically significant difference was found among the Dice scores and boundary distances produced by these networks in a non-parametric analysis of variance (p < 0.001 for both), and subsequent multiple-comparison tests revealed that the significant differences in segmentation errors were attributable to at least one tested network. Gland volume errors (GVEs) and target registration errors (TREs) were then estimated using the CNN-generated segmentations. Interestingly, no statistical difference was found in either GVEs or TREs among the networks (p = 0.34 and p = 0.26, respectively). This result provides a real-world example that networks with different segmentation performances may provide indistinguishably adequate registration accuracies to assist prostate cancer imaging applications. We conclude by recommending that, when selecting between network architectures, the accuracy of downstream image analysis tasks that consume automatic segmentations within a clinical pipeline be taken into account in addition to the segmentation accuracy itself.
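The abstract does not name the non-parametric analysis of variance used; a common choice for comparing several groups is the Kruskal-Wallis H-test, sketched below on made-up per-case Dice scores (an assumption, not necessarily the authors' exact test).

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(2)
# Illustrative per-case Dice scores for three segmentation networks
dice = [rng.normal(m, 0.04, size=40).clip(0, 1) for m in (0.86, 0.88, 0.89)]

stat, p = kruskal(*dice)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.4f}")  # p < 0.05: groups differ
```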


Subject(s)
Magnetic Resonance Imaging; Neural Networks, Computer; Pattern Recognition, Automated/methods; Prostatic Neoplasms/diagnostic imaging; Ultrasonography; Humans; Male; Tumor Burden
14.
J Med Imaging (Bellingham) ; 6(1): 011003, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30840715

ABSTRACT

Image guidance systems that register scans of the prostate obtained using transrectal ultrasound (TRUS) and magnetic resonance imaging are becoming increasingly popular as a means of enabling tumor-targeted prostate cancer biopsy and treatment. However, intraoperative segmentation of TRUS images to define the three-dimensional (3-D) geometry of the prostate remains a necessary task in existing guidance systems, one that often requires significant manual interaction and is subject to interoperator variability. Automating this step would therefore lead to more acceptable clinical workflows and greater standardization between different operators and hospitals. In this work, a convolutional neural network (CNN) for automatically segmenting the prostate in two-dimensional (2-D) TRUS slices of a 3-D TRUS volume was developed and tested. The network was designed to incorporate 3-D spatial information by taking one or more TRUS slices neighboring each slice to be segmented as input, in addition to these slices. The accuracy of the CNN was evaluated on data from a cohort of 109 patients who had undergone TRUS-guided targeted biopsy (a total of 4034 2-D slices). Segmentation accuracy was measured by calculating 2-D and 3-D Dice similarity coefficients on the 2-D images and corresponding 3-D volumes, respectively, as well as 2-D boundary distances, using a 10-fold patient-level cross-validation experiment. However, incorporating neighboring slices did not improve segmentation performance in five of six experiments, in which the number of neighboring slices on either side was varied from one to three. The up-sampling shortcuts reduced the overall training time of the network to 161 min, compared with 253 min without this architectural addition.
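The neighboring-slice input described above is typically built by stacking each slice with its k neighbors on each side as channels. A generic NumPy sketch (an illustration, not the paper's pipeline) follows.

```python
import numpy as np

def stack_neighbors(volume: np.ndarray, k: int = 1) -> np.ndarray:
    """Build per-slice inputs with k neighbouring slices on each side as
    channels; edge slices are edge-padded (generic 2.5-D input scheme)."""
    padded = np.pad(volume, ((k, k), (0, 0), (0, 0)), mode="edge")
    return np.stack(
        [padded[i : i + 2 * k + 1] for i in range(volume.shape[0])], axis=0
    )  # shape: (n_slices, 2k + 1, H, W)

vol = np.random.rand(40, 128, 128)   # illustrative TRUS volume
x = stack_neighbors(vol, k=1)        # each slice now has 3 input channels
```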

15.
Med Phys ; 45(10): 4607-4618, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30153334

ABSTRACT

PURPOSE: Multiparametric MRI (mpMRI) has shown promise in the detection and localization of prostate cancer foci. Although techniques have previously been introduced to delineate lesions from mpMRI, these techniques were evaluated on datasets with T2 maps available. Generation of a T2 map is not included in the clinical prostate mpMRI consensus guidelines; its acquisition requires repeated T2-weighted (T2W) scans and would significantly lengthen the scan time of the clinically recommended acquisition protocol, which includes T2W, diffusion-weighted (DW), and dynamic contrast-enhanced (DCE) imaging. The goal of this study is to develop and evaluate an algorithm that provides pixel-accurate lesion delineation from images acquired with the clinical protocol. METHODS: Twenty-five pixel-based features were extracted from the T2W, apparent diffusion coefficient (ADC), and DCE images. Pixel-wise classification was performed on the reduced space generated by locality alignment discriminant analysis (LADA), a version of linear discriminant analysis (LDA) localized to patches in the feature space. Postprocessing procedures, including the removal of isolated detected points and the filling of holes inside detected regions, were performed to improve delineation accuracy. The segmentation result was evaluated against lesions manually delineated by four expert observers according to the Prostate Imaging-Reporting and Data System (PI-RADS) detection guideline. RESULTS: The LADA-based classifier (60 ± 11%) achieved higher sensitivity than the LDA-based classifier (51 ± 10%), demonstrating for the first time that higher classification performance can be attained on the reduced space generated by LADA than by LDA. Further sensitivity improvement (75 ± 14%) was obtained after postprocessing, approaching the sensitivities attained by previous mpMRI lesion delineation studies in which nonclinical T2 maps were available. CONCLUSION: The proposed algorithm delineated lesions accurately and efficiently from images acquired with the clinical protocol. This framework may potentially accelerate the clinical use of mpMRI in prostate cancer diagnosis and treatment planning.
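LADA is not available in common libraries; for orientation, the baseline LDA pixel classification that LADA localizes can be sketched with scikit-learn, using random stand-in features (an assumption, not the study's data).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
X = rng.normal(size=(5000, 25))    # 25 pixel-wise mpMRI features (stand-ins)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000)) > 1  # lesion vs normal

lda = LinearDiscriminantAnalysis(n_components=1).fit(X, y)
z = lda.transform(X)               # reduced space (LADA localizes this step)
pred = lda.predict(X)              # pixel-wise lesion / non-lesion labels
```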


Subject(s)
Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging; Prostatic Neoplasms/diagnostic imaging; Algorithms; Discriminant Analysis; Humans; Linear Models; Male
16.
Med Image Anal ; 49: 1-13, 2018 10.
Article in English | MEDLINE | ID: mdl-30007253

ABSTRACT

One of the fundamental challenges in supervised learning for multimodal image registration is the lack of ground truth for voxel-level spatial correspondence. This work describes a method to infer voxel-level transformations from the higher-level correspondence information contained in anatomical labels. We argue that such labels are more reliable and practical to obtain for reference sets of image pairs than voxel-level correspondence. Typical anatomical labels of interest may include solid organs, vessels, ducts, structure boundaries, and other subject-specific ad hoc landmarks. The proposed end-to-end convolutional neural network approach aims to predict displacement fields that align multiple labelled corresponding structures for individual image pairs during training, while only unlabelled image pairs are used as network input for inference. We highlight the versatility of the proposed strategy, which utilises diverse types of anatomical labels for training; these labels need not be identifiable across all training image pairs. At inference, the resulting 3D deformable image registration algorithm runs in real time and is fully automated, requiring no anatomical labels or initialisation. Several network architecture variants are compared for registering T2-weighted magnetic resonance images and 3D transrectal ultrasound images from prostate cancer patients. A median target registration error of 3.6 mm on landmark centroids and a median Dice of 0.87 on prostate glands are achieved in cross-validation experiments, in which 108 pairs of multimodal images from 76 patients were tested with high-quality anatomical labels.
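Label-driven training of this kind needs two pieces: warping the moving labels by the predicted displacement field, and scoring overlap with the fixed labels. A minimal PyTorch sketch of both, assuming a dense displacement field in normalized grid coordinates, is shown below (not the authors' implementation).

```python
import torch
import torch.nn.functional as F

def warp(label: torch.Tensor, ddf: torch.Tensor) -> torch.Tensor:
    """Warp a (N,1,D,H,W) label volume by a displacement field ddf given in
    normalized [-1, 1] coordinates, shape (N,D,H,W,3)."""
    n, _, d, h, w = label.shape
    axes = [torch.linspace(-1, 1, s) for s in (d, h, w)]
    grid = torch.stack(torch.meshgrid(*axes, indexing="ij"), dim=-1)
    grid = grid.flip(-1)                     # grid_sample expects (x, y, z) order
    grid = grid.unsqueeze(0).expand(n, -1, -1, -1, -1)
    return F.grid_sample(label, grid + ddf, align_corners=True)

def label_dice(a: torch.Tensor, b: torch.Tensor, eps: float = 1e-6):
    inter = (a * b).sum()
    return (2 * inter + eps) / (a.sum() + b.sum() + eps)

# Training signal: maximize Dice between warped moving and fixed labels
moving = torch.rand(1, 1, 8, 16, 16)
ddf = torch.zeros(1, 8, 16, 16, 3)           # identity field for illustration
loss = 1 - label_dice(warp(moving, ddf), moving)
```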


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Neural Networks, Computer; Prostatic Neoplasms/diagnostic imaging; Ultrasonography; Anatomic Landmarks; Humans; Imaging, Three-Dimensional; Male; Prostate/anatomy & histology; Prostate/diagnostic imaging
17.
IEEE Trans Med Imaging ; 37(8): 1822-1834, 2018 08.
Article in English | MEDLINE | ID: mdl-29994628

ABSTRACT

Automatic segmentation of abdominal anatomy on computed tomography (CT) images can support diagnosis, treatment planning, and treatment delivery workflows. Segmentation methods using statistical models and multi-atlas label fusion (MALF) require inter-subject image registrations, which are challenging for abdominal images, but alternative methods without registration have not yet achieved higher accuracy for most abdominal organs. We present a registration-free deep-learning-based segmentation algorithm for eight organs relevant to navigation in endoscopic pancreatic and biliary procedures: the pancreas, the gastrointestinal tract (esophagus, stomach, and duodenum), and the surrounding organs (liver, spleen, left kidney, and gallbladder). We directly compared the segmentation accuracy of the proposed method with existing deep learning and MALF methods in a cross-validation on a multi-centre dataset of 90 subjects. The proposed method yielded significantly higher Dice scores for all organs and lower mean absolute distances for most organs, including Dice scores of 0.78 versus 0.71, 0.74, and 0.74 for the pancreas, 0.90 versus 0.85, 0.87, and 0.83 for the stomach, and 0.76 versus 0.68, 0.69, and 0.66 for the esophagus. We conclude that deep-learning-based segmentation represents a registration-free method for multi-organ abdominal CT segmentation whose accuracy can surpass current methods, potentially supporting image-guided navigation in gastrointestinal endoscopy procedures.


Subject(s)
Radiographic Image Interpretation, Computer-Assisted/methods; Radiography, Abdominal/methods; Tomography, X-Ray Computed/methods; Algorithms; Digestive System/diagnostic imaging; Humans; Kidney/diagnostic imaging; Spleen/diagnostic imaging
18.
IEEE Trans Biomed Eng ; 65(8): 1798-1809, 2018 08.
Article in English | MEDLINE | ID: mdl-29989922

ABSTRACT

OBJECTIVES: Temporal enhanced ultrasound (TeUS) is a new ultrasound-based imaging technique that provides tissue-specific information. Recent studies have shown the potential of TeUS for improving tissue characterization in prostate cancer diagnosis. We study two temporal properties of TeUS, temporal order and length, and present a new framework to assess their impact on tissue information. METHODS: We utilize a probabilistic modeling approach using hidden Markov models (HMMs) to capture the temporal signatures of malignant and benign tissues from TeUS signals of nine patients. We model signals of benign and malignant tissues (284 and 286 signals, respectively) in their original temporal order as well as under order permutations. We then compare the resulting models using the Kullback-Leibler divergence and assess their performance differences in characterization. Moreover, we train HMMs using TeUS signals of different durations and compare their performance in differentiating tissue types. RESULTS: Our findings demonstrate that models of order-preserved signals perform statistically significantly better (85% accuracy) in tissue characterization than models of order-altered signals (62% accuracy). The performance degrades as more changes in signal order are introduced. Additionally, models trained on shorter sequences perform as accurately as models of longer sequences. CONCLUSION: The work presented here strongly indicates that temporal order has a substantial impact on TeUS performance and thus plays a significant role in conveying tissue-specific information. Furthermore, shorter TeUS signals can relay sufficient information to accurately distinguish between tissue types. SIGNIFICANCE: Understanding the impact of these TeUS properties facilitates its adoption in diagnostic procedures and provides insights into improving its acquisition.
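The order-permutation comparison can be sketched end to end: fit one HMM on the original sequences and one on order-shuffled copies, then estimate the divergence between the models by the Monte Carlo log-likelihood gap on samples from the first. Everything below (random-walk stand-in signals, three states, sequence length 80) is illustrative, not the study's data.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(6)
seqs = [np.cumsum(rng.normal(size=(80, 1)), axis=0) for _ in range(15)]

def fit_hmm(seq_list):
    X = np.vstack(seq_list)
    return GaussianHMM(n_components=3, random_state=0).fit(
        X, [len(s) for s in seq_list]
    )

hmm_orig = fit_hmm(seqs)
hmm_perm = fit_hmm([s[rng.permutation(len(s))] for s in seqs])  # order-altered

# Monte Carlo KL(P||Q) estimate: mean log-likelihood gap on samples from P
gaps = []
for i in range(50):
    X, _ = hmm_orig.sample(80, random_state=i)
    gaps.append(hmm_orig.score(X) - hmm_perm.score(X))
print(f"log-likelihood gap per sequence (KL estimate): {np.mean(gaps):.1f} nats")
```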


Subject(s)
Image Interpretation, Computer-Assisted/methods; Prostate/diagnostic imaging; Prostatic Neoplasms/diagnostic imaging; Ultrasonography/methods; Humans; Male; Markov Chains; Sensitivity and Specificity; Stochastic Processes
19.
Comput Biol Med ; 96: 252-265, 2018 05 01.
Article in English | MEDLINE | ID: mdl-29653354

ABSTRACT

Multiparametric magnetic resonance imaging (mpMRI) has been established as the state-of-the-art examination for the detection and localization of prostate cancer lesions, and the Prostate Imaging-Reporting and Data System (PI-RADS) has been established as a scheme to standardize the reporting of mpMRI findings. Although lesion delineation and PI-RADS ratings can be performed manually, human delineation and ratings are subjective and time-consuming. In this article, we developed and validated a self-tuned graph-based model for PI-RADS rating prediction. Thirty-four features were obtained at the pixel level from T2-weighted (T2W), apparent diffusion coefficient (ADC), and dynamic contrast-enhanced (DCE) images, from which PI-RADS scores were predicted. Two major innovations are involved in this self-tuned graph-based model. First, graph-based approaches are sensitive to the choice of edge weight; the proposed model tunes the edge weights automatically based on the structure of the data, obviating empirical edge-weight selection. Second, the feature weights are tuned automatically to give heavier weights to features important for PI-RADS rating estimation. The proposed framework was evaluated for its lesion localization performance on mpMRI datasets of 12 patients. In the evaluation, the PI-RADS score distribution maps generated by the algorithm and from the observers' ratings were binarized at thresholds of 3 and 4. The sensitivity, specificity, and accuracy obtained in these two threshold settings ranged from 65 to 77%, 86 to 93%, and 85 to 88%, respectively, comparable to results obtained in previous studies in which nonclinical T2 maps were available. The proposed algorithm took 10 s to estimate the PI-RADS score distribution in an axial image. This efficiency suggests that the technique can be developed into a prostate MR analysis system suitable for clinical use after thorough validation involving more patients.
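Self-tuned edge weights of the kind described are often built by scaling each pairwise affinity with per-point bandwidths, as in self-tuning spectral clustering (Zelnik-Manor and Perona). The sketch below follows that generic rule and is not necessarily the paper's exact scheme.

```python
import numpy as np

def self_tuned_affinity(X: np.ndarray, k: int = 7) -> np.ndarray:
    """W_ij = exp(-||xi - xj||^2 / (sigma_i * sigma_j)), where sigma_i is the
    distance from point i to its k-th nearest neighbour (per-point scale)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    sigma = np.sqrt(np.sort(d2, axis=1)[:, k])       # k-th neighbour distance
    W = np.exp(-d2 / (sigma[:, None] * sigma[None, :] + 1e-12))
    np.fill_diagonal(W, 0.0)                         # no self-edges
    return W

W = self_tuned_affinity(np.random.rand(50, 34))      # 34 pixel-level features
```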


Subject(s)
Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Prostatic Neoplasms/diagnostic imaging; Algorithms; Humans; Male; Prostate/diagnostic imaging; Sensitivity and Specificity
20.
Int J Comput Assist Radiol Surg ; 13(6): 875-883, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29663274

ABSTRACT

PURPOSE: Navigation of endoscopic ultrasound (EUS)-guided procedures of the upper gastrointestinal (GI) system can be technically challenging due to the small fields of view of ultrasound and optical devices, as well as anatomical variability and the limited number of orienting landmarks during navigation. Co-registration of an EUS device and a pre-procedure 3D image can enhance the ability to navigate, but the fidelity of this contextual information depends on the accuracy of registration. The purpose of this study was to develop and test the feasibility of a simulation-based planning method for pre-selecting patient-specific, EUS-visible anatomical landmark locations that maximise the accuracy and robustness of a feature-based multimodality registration method. METHODS: A registration approach was adopted in which landmarks are registered to anatomical structures segmented from the pre-procedure volume. The predicted target registration errors (TREs) of EUS-CT registration were estimated using simulated visible anatomical landmarks and a Monte Carlo simulation of landmark localisation error. The optimal planes were selected based on the 90th percentile of TREs, which provides a robust and more accurate EUS-CT registration initialisation. The method was evaluated by comparing the accuracy and robustness of registrations initialised using optimised planes versus non-optimised planes, using manually segmented CT images and simulated ([Formula: see text]) or retrospective clinical ([Formula: see text]) EUS landmarks. RESULTS: The results show a lower 90th percentile TRE when registration is initialised using the optimised planes compared with a non-optimised initialisation approach (p value [Formula: see text]). CONCLUSIONS: The proposed simulation-based method for finding optimised EUS planes and landmarks for EUS-guided procedures may have the potential to improve registration accuracy. Further work will investigate applying the technique in a clinical setting.
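The Monte Carlo TRE estimate described above can be reproduced in outline: perturb candidate landmarks with localisation noise, re-fit a rigid registration, record the displacement of a target point, and read off the 90th percentile. The sketch below uses a standard Kabsch fit; the geometry and noise level are made up for illustration.

```python
import numpy as np

def kabsch(P: np.ndarray, Q: np.ndarray):
    """Least-squares rigid transform (R, t) aligning point set P to Q."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # avoid reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

rng = np.random.default_rng(4)
landmarks = rng.uniform(0, 80, size=(5, 3))   # candidate EUS-visible landmarks (mm)
target = np.array([40.0, 40.0, 40.0])         # navigation target point (mm)

tres = []
for _ in range(1000):                          # Monte Carlo localisation error
    noisy = landmarks + rng.normal(0, 2.0, size=landmarks.shape)
    R, t = kabsch(noisy, landmarks)
    tres.append(np.linalg.norm((R @ target + t) - target))
print(f"90th percentile TRE: {np.percentile(tres, 90):.2f} mm")
```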


Subject(s)
Endosonography/methods; Imaging, Three-Dimensional/methods; Pancreas/diagnostic imaging; Pancreatectomy/methods; Pancreatic Neoplasms/surgery; Surgery, Computer-Assisted/methods; Humans; Pancreas/surgery; Pancreatic Neoplasms/diagnostic imaging; Retrospective Studies; Tomography, X-Ray Computed/methods