Results 1 - 9 of 9
1.
Eur Radiol; 33(8): 5859-5870, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37150781

ABSTRACT

OBJECTIVES: An appropriate and fast clinical referral suggestion is important for intra-axial mass-like lesions (IMLLs) in the emergency setting. We aimed to apply an interpretable deep learning (DL) system to multiparametric MRI to obtain clinical referral suggestions for IMLLs, and to validate it in the setting of nontraumatic emergency neuroradiology. METHODS: A DL system that segments IMLLs, classifies tumourous conditions, and suggests a clinical referral pathway among surgery, systematic work-up, medical treatment, and conservative treatment was developed in 747 patients with IMLLs spanning 30 diseases who underwent pre- and post-contrast T1-weighted (T1CE), FLAIR, and diffusion-weighted imaging (DWI). The system was validated in an independent cohort of 130 emergency patients, and its performance in referral suggestion and tumour discrimination was compared with that of radiologists using receiver operating characteristic (ROC) curve analysis, precision-recall curve analysis, and confusion matrices. Multiparametric interpretable visualisation of high-relevance regions from layer-wise relevance propagation, overlaid on contrast-enhanced T1WI and DWI, was analysed. RESULTS: The DL system provided correct referral suggestions in 94 of 130 patients (72.3%) and performed comparably to radiologists (accuracy 72.6%; McNemar test, p = .942). For distinguishing tumours from non-tumourous conditions, the DL system (AUC 0.90, AUPRC 0.94) performed similarly to human readers (AUC 0.81-0.92, AUPRC 0.88-0.95). Solid portions of tumours showed high relevance overlap, whereas non-tumours did not (Dice coefficient 0.77 vs. 0.33, p < .001), supporting the basis of the DL system's decisions. CONCLUSIONS: Our DL system could appropriately triage patients using multiparametric MRI and provide interpretability through multiparametric heatmaps, and may thereby aid neuroradiologic diagnosis in emergency settings. CLINICAL RELEVANCE STATEMENT: Our AI triages patients with brain intra-axial mass-like lesions to clinical referral pathways from raw MRI images. We demonstrate that the decision is based on the relative relevance between contrast-enhanced T1-weighted and diffusion-weighted images, providing explainability across multiparametric MRI data. KEY POINTS: • A deep learning (DL) system using multiparametric MRI suggested clinical referrals for patients with intra-axial mass-like lesions (IMLLs) with accuracy similar to that of radiologists (72.3% vs. 72.6%). • In the differentiation of tumourous and non-tumourous conditions, the DL system (AUC 0.90) performed similarly to radiologists (AUC 0.81-0.92). • The DL system's decision basis for differentiating tumours from non-tumours can be quantified using multiparametric heatmaps obtained via the layer-wise relevance propagation method.
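
The relevance-overlap analysis above can be illustrated with a short sketch: threshold a layer-wise relevance propagation heatmap and compute its Dice overlap against a lesion mask. This is a minimal illustration with placeholder arrays (relevance_map, lesion_mask) and an assumed threshold of 0.5, not the authors' pipeline.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice overlap between two binary masks (True = foreground)."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    denominator = mask_a.sum() + mask_b.sum()
    return 2.0 * intersection / denominator if denominator > 0 else 1.0

# Hypothetical inputs: a relevance heatmap from layer-wise relevance
# propagation and a binary mask of the solid tumour portion.
relevance_map = np.random.rand(128, 128, 64)            # placeholder heatmap
lesion_mask = np.zeros((128, 128, 64), dtype=bool)
lesion_mask[40:70, 50:80, 20:40] = True                 # placeholder mask

# Binarise the heatmap at an assumed relevance threshold, then compute overlap.
relevance_mask = relevance_map > 0.5
print(f"Relevance-lesion Dice: {dice_coefficient(relevance_mask, lesion_mask):.2f}")
```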


Subject(s)
Deep Learning , Multiparametric Magnetic Resonance Imaging , Neoplasms , Humans , Multiparametric Magnetic Resonance Imaging/methods , Artificial Intelligence , Magnetic Resonance Imaging/methods , Neoplasms/diagnostic imaging , Retrospective Studies
2.
Eur Radiol; 33(9): 6124-6133, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37052658

ABSTRACT

OBJECTIVES: To establish a robust, interpretable multiparametric deep learning (DL) model for automatic noninvasive grading of meningiomas along with segmentation. METHODS: In total, 257 patients with pathologically confirmed meningiomas (162 low-grade, 95 high-grade) who underwent a preoperative brain MRI, including T2-weighted (T2) and contrast-enhanced T1-weighted (T1C) images, were included in the institutional training set. A two-stage DL grading model was constructed for segmentation and classification based on a multiparametric three-dimensional U-Net and ResNet. The models were validated in an external validation set consisting of 61 patients with meningiomas (46 low-grade, 15 high-grade). The relevance-weighted class activation mapping (RCAM) method was used to interpret the DL features contributing to the prediction of the DL grading model. RESULTS: On external validation, the combined T1C and T2 model showed a Dice coefficient of 0.910 in segmentation and the highest performance for meningioma grading compared with the T2-only and T1C-only models, with an area under the curve (AUC) of 0.770 (95% confidence interval: 0.644-0.895) and accuracy, sensitivity, and specificity of 72.1%, 73.3%, and 71.7%, respectively. The AUC and accuracy of the combined DL grading model were higher than those of the human readers (AUCs of 0.675-0.690 and accuracies of 65.6-68.9%, respectively). The RCAM of the DL grading model showed activation at the surface regions of meningiomas, indicating that the model recognized features at the tumor margin for grading. CONCLUSIONS: An interpretable multiparametric DL model combining T1C and T2 can enable fully automatic grading of meningiomas along with segmentation. KEY POINTS: • The multiparametric DL model showed robustness in grading and segmentation on external validation. • The diagnostic performance of the combined DL grading model was higher than that of the human readers. • RCAM showed that the DL grading model recognized meaningful features at the tumor margin for grading.
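
The two-stage design (segmentation followed by grading) can be sketched in PyTorch as below. The `segmenter` and `classifier` here are tiny toy stand-ins (the paper uses a multiparametric 3D U-Net and a ResNet), and the masking step is one plausible way to chain the stages, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class TwoStageGradingModel(nn.Module):
    """Sketch of a segment-then-classify pipeline with placeholder sub-networks."""
    def __init__(self, segmenter: nn.Module, classifier: nn.Module):
        super().__init__()
        self.segmenter = segmenter      # stage 1: 3D U-Net-like network
        self.classifier = classifier    # stage 2: 3D ResNet-like network

    def forward(self, t1c: torch.Tensor, t2: torch.Tensor) -> torch.Tensor:
        x = torch.cat([t1c, t2], dim=1)                  # multiparametric input
        mask = torch.sigmoid(self.segmenter(x))          # stage 1: tumour mask
        roi = x * (mask > 0.5).float()                   # keep tumour voxels only
        return self.classifier(roi)                      # stage 2: grade logits

# Toy stand-ins so the sketch runs end to end; real networks are far deeper.
segmenter = nn.Conv3d(2, 1, kernel_size=3, padding=1)
classifier = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(2, 2))
model = TwoStageGradingModel(segmenter, classifier)
t1c = torch.randn(1, 1, 32, 32, 32)
t2 = torch.randn(1, 1, 32, 32, 32)
print(model(t1c, t2).shape)   # torch.Size([1, 2]) -> low- vs high-grade logits
```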


Subject(s)
Deep Learning , Meningeal Neoplasms , Meningioma , Humans , Meningioma/diagnostic imaging , Meningioma/pathology , Magnetic Resonance Imaging/methods , Neuroimaging , Neoplasm Grading , Retrospective Studies , Meningeal Neoplasms/diagnostic imaging , Meningeal Neoplasms/pathology
3.
Med Image Anal; 83: 102628, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36283200

ABSTRACT

Domain Adaptation (DA) has recently been of strong interest in the medical imaging community. While a large variety of DA techniques have been proposed for image segmentation, most of these techniques have been validated either on private datasets or on small publicly available datasets. Moreover, these datasets mostly addressed single-class problems. To tackle these limitations, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). CrossMoDA is the first large and multi-class benchmark for unsupervised cross-modality Domain Adaptation. The goal of the challenge is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the VS and the cochleas. Currently, the diagnosis and surveillance of patients with VS are commonly performed using contrast-enhanced T1 (ceT1) MR imaging. However, there is growing interest in using non-contrast imaging sequences such as high-resolution T2 (hrT2) imaging. For this reason, we established an unsupervised cross-modality segmentation benchmark. The training dataset provides annotated ceT1 scans (N=105) and unpaired non-annotated hrT2 scans (N=105). The aim was to automatically perform unilateral VS and bilateral cochlea segmentation on hrT2 scans as provided in the testing set (N=137). This problem is particularly challenging given the large intensity distribution gap across the modalities and the small volume of the structures. A total of 55 teams from 16 countries submitted predictions to the validation leaderboard. Among them, 16 teams from 9 different countries submitted their algorithms for the evaluation phase. The level of performance reached by the top-performing teams is strikingly high (best median Dice score - VS: 88.4%; Cochleas: 85.7%) and close to full supervision (median Dice score - VS: 92.5%; Cochleas: 87.7%). All top-performing methods made use of an image-to-image translation approach to transform the source-domain images into pseudo-target-domain images. A segmentation network was then trained using these generated images and the manual annotations provided for the source images.
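
The recipe shared by the top-performing teams (translate annotated ceT1 scans into pseudo-hrT2 images, then train a supervised segmenter on them) can be sketched roughly as below. `generator`, `segmenter`, and `ce_t1_loader` are hypothetical placeholders; the label convention and optimiser settings are assumptions, not taken from any participant's code.

```python
import torch
import torch.nn as nn

def train_on_pseudo_target(generator: nn.Module, segmenter: nn.Module,
                           ce_t1_loader, epochs: int = 1, lr: float = 1e-4):
    """Sketch of the common crossMoDA recipe: ceT1 -> pseudo-hrT2 -> supervised
    segmentation. `generator` is assumed to be a frozen, pre-trained
    image-to-image translation network; loaders and networks are placeholders."""
    optimiser = torch.optim.Adam(segmenter.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()   # assumed labels: 0 background, 1 VS, 2 cochlea
    generator.eval()
    for _ in range(epochs):
        for ce_t1, labels in ce_t1_loader:
            with torch.no_grad():
                pseudo_hr_t2 = generator(ce_t1)      # translate to target style
            logits = segmenter(pseudo_hr_t2)         # segment pseudo-hrT2 images
            loss = loss_fn(logits, labels)           # supervised by ceT1 annotations
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
```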


Subject(s)
Neuroma, Acoustic , Humans , Neuroma, Acoustic/diagnostic imaging
4.
IEEE Trans Med Imaging; 40(9): 2306-2317, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33929957

ABSTRACT

Accelerating MRI scans is one of the principal outstanding problems in the MRI research community. Towards this goal, we hosted the second fastMRI competition targeted towards reconstructing MR images with subsampled k-space data. We provided participants with data from 7,299 clinical brain scans (de-identified via a HIPAA-compliant procedure by NYU Langone Health), holding back the fully-sampled data from 894 of these scans for challenge evaluation purposes. In contrast to the 2019 challenge, we focused our radiologist evaluations on pathological assessment in brain images. We also debuted a new Transfer track that required participants to submit models evaluated on MRI scanners from outside the training set. We received 19 submissions from eight different groups. Results showed one team scoring best in both SSIM scores and qualitative radiologist evaluations. We also performed analysis on alternative metrics to mitigate the effects of background noise and collected feedback from the participants to inform future challenges. Lastly, we identify common failure modes across the submissions, highlighting areas of need for future research in the MRI reconstruction community.
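
SSIM, the metric used for the quantitative ranking, can be computed with scikit-image; below is a minimal sketch on hypothetical reference/reconstruction arrays rather than actual fastMRI data.

```python
import numpy as np
from skimage.metrics import structural_similarity

# Hypothetical fully sampled reference and reconstructed magnitude images.
reference = np.random.rand(320, 320).astype(np.float32)
reconstruction = reference + 0.05 * np.random.randn(320, 320).astype(np.float32)

# SSIM as used for quantitative ranking; data_range spans the image intensities.
score = structural_similarity(reference, reconstruction,
                              data_range=float(reference.max() - reference.min()))
print(f"SSIM: {score:.3f}")
```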


Subject(s)
Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Brain/diagnostic imaging , Humans , Machine Learning , Neuroimaging
5.
Med Image Anal; 70: 102017, 2021 May.
Article in English | MEDLINE | ID: mdl-33721693

ABSTRACT

Quantitative tissue characteristics, which provide valuable diagnostic information, can be represented by magnetic resonance (MR) parameter maps using magnetic resonance imaging (MRI); however, a long scan time is necessary to acquire them, which prevents the application of quantitative MR parameter mapping to real clinical protocols. For fast MR parameter mapping, we propose a deep model-based MR parameter mapping network called DOPAMINE that combines a deep learning network with a model-based method to reconstruct MR parameter maps from undersampled multi-channel k-space data. DOPAMINE consists of two networks: 1) an MR parameter mapping network that uses a deep convolutional neural network (CNN) that estimates initial parameter maps from undersampled k-space data (CNN-based mapping), and 2) a reconstruction network that removes aliasing artifacts in the parameter maps with a deep CNN (CNN-based reconstruction) and an interleaved data consistency layer by an embedded MR model-based optimization procedure. We demonstrated the performance of DOPAMINE in brain T1 map reconstruction with a variable flip angle (VFA) model. To evaluate the performance of DOPAMINE, we compared it with conventional parallel imaging, low-rank based reconstruction, model-based reconstruction, and state-of-the-art deep-learning-based mapping methods for three different reduction factors (R = 3, 5, and 7) and two different sampling patterns (1D Cartesian and 2D Poisson-disk). Quantitative metrics indicated that DOPAMINE outperformed other methods in reconstructing T1 maps for all sampling patterns and reduction factors. DOPAMINE exhibited quantitatively and qualitatively superior performance to that of conventional methods in reconstructing MR parameter maps from undersampled multi-channel k-space data. The proposed method can thus reduce the scan time of quantitative MR parameter mapping that uses a VFA model.
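
For context, the variable flip angle (VFA) model referenced above relates the spoiled gradient-echo signal to T1; a minimal numpy sketch of the forward model and a linearised least-squares fit is shown below. The TR and flip-angle schedule are assumed example values, and this illustrates only the signal model, not the DOPAMINE networks.

```python
import numpy as np

def vfa_signal(m0, t1, tr, flip_angles_rad):
    """Spoiled gradient-echo variable flip angle (VFA) signal model."""
    e1 = np.exp(-tr / t1)
    return m0 * np.sin(flip_angles_rad) * (1 - e1) / (1 - e1 * np.cos(flip_angles_rad))

def fit_t1_linear(signals, flip_angles_rad, tr):
    """Linearised VFA fit: S/sin(a) = E1 * S/tan(a) + M0 * (1 - E1)."""
    y = signals / np.sin(flip_angles_rad)
    x = signals / np.tan(flip_angles_rad)
    slope, intercept = np.polyfit(x, y, 1)   # slope = E1
    t1 = -tr / np.log(slope)
    m0 = intercept / (1 - slope)
    return t1, m0

tr = 0.015                                   # 15 ms repetition time (assumed)
alphas = np.deg2rad([2, 5, 10, 15, 20])      # hypothetical flip angle schedule
signals = vfa_signal(m0=1.0, t1=1.2, tr=tr, flip_angles_rad=alphas)
print(fit_t1_linear(signals, alphas, tr))    # recovers approximately (1.2, 1.0)
```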


Subject(s)
Dopamine , Image Processing, Computer-Assisted , Algorithms , Brain/diagnostic imaging , Humans , Magnetic Resonance Imaging , Magnetic Resonance Spectroscopy , Neural Networks, Computer
6.
Med Image Anal; 63: 101689, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32299061

ABSTRACT

This study developed a domain-transform framework comprising domain-transform manifold learning with an initial analytic transform to accelerate Cartesian magnetic resonance imaging (DOTA-MRI). The proposed method directly transforms undersampled Cartesian k-space data into a reconstructed image. In Cartesian undersampling, each k-space line is either fully sampled or entirely zero along the data-acquisition direction (i.e., the frequency-encoding direction or the x-direction); therefore, a one-dimensional (1D) inverse Fourier transform (IFT) along the x-direction of the undersampled k-space does not induce any aliasing. To exploit this, the algorithm first applies an analytic x-direction 1D IFT to the undersampled Cartesian k-space input and subsequently transforms it into a reconstructed image using deep neural networks. The initial analytic transform (i.e., the 1D IFT) allows the fully connected layers of the neural network to learn a 1D global transform only in the phase-encoding direction (i.e., the y-direction) instead of a 2D transform. This drastically reduces the number of parameters to be learned from O(N²) to O(N) compared with the existing manifold learning algorithm (automated transform by manifold approximation, AUTOMAP). This enables DOTA-MRI to be applied to high-resolution MR datasets, which had previously proved difficult in AUTOMAP because of the enormous memory requirements involved. After the initial analytic transform, the manifold learning phase uses a symmetric network architecture comprising three types of layers: front-end convolutional layers, fully connected layers for the 1D global transform, and back-end convolutional layers. The front-end convolutional layers take the 1D IFT of the undersampled k-space (i.e., undersampled data in the intermediate domain, or the ky-x domain) as input and perform data-domain restoration. The following fully connected layers learn the 1D global transform between the ky-x domain and the image domain (i.e., the y-x domain). Finally, the back-end convolutional layers reconstruct the final image by denoising in the image domain. DOTA-MRI exhibited superior performance over nine other existing algorithms, including state-of-the-art deep learning-based algorithms. The generality of the algorithm was demonstrated by experiments conducted under various sampling ratios, datasets, and noise levels.
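
The initial analytic transform is simply a 1D inverse FFT along the fully sampled readout axis; a minimal numpy sketch on a hypothetical undersampled Cartesian k-space array is shown below (the undersampling pattern and matrix size are assumed for illustration).

```python
import numpy as np

# Hypothetical undersampled Cartesian k-space: phase-encoding (ky) lines are
# skipped, but each acquired line is fully sampled along the readout (kx) axis.
kspace = np.random.randn(256, 256) + 1j * np.random.randn(256, 256)
kspace[::2, :] = 0                      # placeholder 1D undersampling pattern

# Initial analytic transform: 1D inverse FFT along the x (readout) axis only.
# This yields the intermediate ky-x domain that the network then maps to y-x.
intermediate = np.fft.fftshift(
    np.fft.ifft(np.fft.ifftshift(kspace, axes=1), axis=1), axes=1)
print(intermediate.shape)   # (256, 256) complex ky-x data fed to the network
```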


Subject(s)
Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Algorithms , Fourier Analysis , Humans , Neural Networks, Computer
7.
Taehan Yongsang Uihakhoe Chi; 81(6): 1305-1333, 2020 Nov.
Article in Korean | MEDLINE | ID: mdl-36237722

ABSTRACT

Deep learning has recently achieved remarkable results in the field of medical imaging. However, as a deep learning network becomes deeper to improve its performance, it becomes more difficult to interpret the processes within. This can especially be a critical problem in medical fields where diagnostic decisions are directly related to a patient's survival. In order to solve this, explainable artificial intelligence techniques are being widely studied, and an attention mechanism was developed as part of this approach. In this paper, attention techniques are divided into two types: post hoc attention, which aims to analyze a network that has already been trained, and trainable attention, which further improves network performance. Detailed comparisons of each method, examples of applications in medical imaging, and future perspectives will be covered.
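
As a rough illustration of trainable attention, the sketch below re-weights CNN feature maps with a learned spatial mask; it is a generic, hypothetical module rather than any specific design discussed in the review. The learned mask can also be visualised afterwards, which is what links trainable attention to interpretability.

```python
import torch
import torch.nn as nn

class SoftAttentionGate(nn.Module):
    """Minimal trainable attention: learn a spatial mask and re-weight features."""
    def __init__(self, channels: int):
        super().__init__()
        self.mask_conv = nn.Conv2d(channels, 1, kernel_size=1)   # 1x1 conv -> mask

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        attention = torch.sigmoid(self.mask_conv(features))   # values in (0, 1)
        return features * attention                           # suppress irrelevant regions

gate = SoftAttentionGate(channels=16)
feature_map = torch.randn(1, 16, 64, 64)    # placeholder CNN feature map
print(gate(feature_map).shape)              # torch.Size([1, 16, 64, 64])
```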

8.
Magn Reson Med; 81(6): 3840-3853, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30666723

ABSTRACT

PURPOSE: To develop and evaluate a method of parallel imaging time-of-flight (TOF) MRA using deep multistream convolutional neural networks (CNNs). METHODS: A deep parallel imaging network ("DPI-net") was developed to reconstruct 3D multichannel MRA from undersampled data. It comprises 2 deep-learning networks: a network of multistream CNNs for extracting feature maps of multichannel images and a network of reconstruction CNNs for reconstructing images from the multistream network output feature maps. The images were evaluated using normalized root mean square error (NRMSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) values, and the visibility of blood vessels was assessed by measuring the vessel sharpness of middle and posterior cerebral arteries on axial maximum intensity projection (MIP) images. Vessel sharpness was compared using paired t tests, between DPI-net, 2 conventional parallel imaging methods (SAKE and ESPIRiT), and a deep-learning method (U-net). RESULTS: DPI-net showed superior performance in reconstructing vessel signals in both axial slices and MIP images for all reduction factors. This was supported by the quantitative metrics, with DPI-net showing the lowest NRMSE, the highest PSNR and SSIM (except R = 3.8 on sagittal MIP images, and R = 5.7 on axial slices and sagittal MIP images), and significantly higher vessel sharpness values than the other methods. CONCLUSION: DPI-net was effective in reconstructing 3D TOF MRA from highly undersampled multichannel MR data, achieving superior performance, both quantitatively and qualitatively, over conventional parallel imaging and other deep-learning methods.
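
The multistream idea (one feature-extraction stream per coil channel, followed by a shared reconstruction network) can be sketched as below; the layer sizes are toy stand-ins and the module is a hypothetical simplification, not the actual DPI-net architecture.

```python
import torch
import torch.nn as nn

class MultistreamReconNet(nn.Module):
    """Sketch: one small CNN stream per coil channel, then a shared reconstruction CNN."""
    def __init__(self, num_coils: int, features: int = 8):
        super().__init__()
        self.streams = nn.ModuleList(
            [nn.Conv2d(1, features, kernel_size=3, padding=1) for _ in range(num_coils)])
        self.reconstruct = nn.Conv2d(num_coils * features, 1, kernel_size=3, padding=1)

    def forward(self, coil_images: torch.Tensor) -> torch.Tensor:
        # coil_images: (batch, num_coils, H, W) zero-filled per-channel images
        feats = [stream(coil_images[:, i:i + 1]) for i, stream in enumerate(self.streams)]
        return self.reconstruct(torch.cat(feats, dim=1))   # combined reconstruction

net = MultistreamReconNet(num_coils=8)
undersampled = torch.randn(1, 8, 128, 128)   # placeholder multichannel input
print(net(undersampled).shape)               # torch.Size([1, 1, 128, 128])
```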


Subject(s)
Cerebral Angiography/methods , Deep Learning , Image Processing, Computer-Assisted/methods , Magnetic Resonance Angiography/methods , Algorithms , Brain/blood supply , Brain/diagnostic imaging , Humans
9.
Sci Rep; 8(1): 9450, 2018 Jun 21.
Article in English | MEDLINE | ID: mdl-29930257

ABSTRACT

Black-blood (BB) imaging is used to complement contrast-enhanced 3D gradient-echo (CE 3D-GRE) imaging for detecting brain metastases, but it requires additional scan time. In this study, we proposed deep-learned 3D BB imaging with an auto-labelling technique and 3D convolutional neural networks for brain metastases detection without an additional BB scan. Patients were randomly selected for training (29 sets) and testing (36 sets). Two neuroradiologists independently evaluated deep-learned and original BB images, assessing the degree of blood vessel suppression and lesion conspicuity. Vessel signals were effectively suppressed in all patients. The figures of merit, which indicate the diagnostic performance of the radiologists, were 0.9708 with deep-learned BB imaging and 0.9437 with original BB imaging, suggesting that deep-learned BB imaging is highly comparable to original BB imaging (the difference was not significant; p = 0.2142). In the per-patient analysis, sensitivities were 100% for both deep-learned and original BB imaging; however, original BB imaging produced false positive results for two patients. In the per-lesion analysis, sensitivities were 90.3% for deep-learned and 100% for original BB images. There were eight false positive lesions on original BB imaging but only one on deep-learned BB imaging. Deep-learned 3D BB imaging can be effective for brain metastases detection.
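
The per-lesion figures reduce to simple counts; a small sketch is shown below with hypothetical counts chosen only for illustration (the abstract reports percentages, not the underlying raw counts).

```python
def per_lesion_detection_stats(true_positives: int, false_negatives: int,
                               false_positives: int) -> dict:
    """Per-lesion sensitivity and false-positive count for one reading session."""
    sensitivity = true_positives / (true_positives + false_negatives)
    return {"sensitivity": sensitivity, "false_positives": false_positives}

# Hypothetical counts for illustration only (not the study's raw numbers).
print(per_lesion_detection_stats(true_positives=28, false_negatives=3,
                                 false_positives=1))
```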


Subject(s)
Brain Neoplasms/diagnostic imaging , Deep Learning , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging/methods , Aged , Blood Vessels/diagnostic imaging , Brain Neoplasms/secondary , Female , Humans , Imaging, Three-Dimensional/standards , Magnetic Resonance Imaging/standards , Male , Middle Aged , Sensitivity and Specificity