2.
IEEE Trans Med Imaging ; 42(9): 2577-2591, 2023 09.
Article in English | MEDLINE | ID: mdl-37030684

ABSTRACT

Multi-contrast magnetic resonance imaging (MRI) is widely used in clinical practice as each contrast provides complementary information. However, the availability of each imaging contrast may vary amongst patients, which poses challenges to radiologists and automated image analysis algorithms. A general approach for tackling this problem is missing data imputation, which aims to synthesize the missing contrasts from existing ones. While several convolutional neural networks (CNN) based algorithms have been proposed, they suffer from the fundamental limitations of CNN models, such as the requirement for fixed numbers of input and output channels, the inability to capture long-range dependencies, and the lack of interpretability. In this work, we formulate missing data imputation as a sequence-to-sequence learning problem and propose a multi-contrast multi-scale Transformer (MMT), which can take any subset of input contrasts and synthesize those that are missing. MMT consists of a multi-scale Transformer encoder that builds hierarchical representations of inputs combined with a multi-scale Transformer decoder that generates the outputs in a coarse-to-fine fashion. The proposed multi-contrast Swin Transformer blocks can efficiently capture intra- and inter-contrast dependencies for accurate image synthesis. Moreover, MMT is inherently interpretable as it allows us to understand the importance of each input contrast in different regions by analyzing the in-built attention maps of Transformer blocks in the decoder. Extensive experiments on two large-scale multi-contrast MRI datasets demonstrate that MMT outperforms the state-of-the-art methods quantitatively and qualitatively.
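The interpretability claim rests on the attention maps built into the Transformer decoder. A minimal numpy sketch of plain scaled dot-product attention (not the paper's multi-contrast Swin blocks; shapes and data here are hypothetical) shows where such maps come from:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Single-head attention; returns the output and the attention map."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                 # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ v, weights

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))   # 4 query tokens, feature dim 8
k = rng.standard_normal((6, 8))   # 6 key tokens
v = rng.standard_normal((6, 8))
out, attn = scaled_dot_product_attention(q, k, v)
print(out.shape, attn.shape)  # (4, 8) (4, 6)
```

Each row of `attn` is a probability distribution over the keys; inspecting such rows per output region is what makes the contrast-importance analysis possible.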


Subjects
Algorithms , Image Processing, Computer-Assisted , Humans , Neural Networks, Computer , Radiologists
3.
Radiol Artif Intell ; 4(2): e210059, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35391765

ABSTRACT

Artificial intelligence (AI)-based image enhancement has the potential to reduce scan times while improving signal-to-noise ratio (SNR) and maintaining spatial resolution. This study prospectively evaluated AI-based image enhancement in 32 consecutive patients undergoing clinical brain MRI. Standard-of-care (SOC) three-dimensional (3D) T1 precontrast, 3D T2 fluid-attenuated inversion recovery, and 3D T1 postcontrast sequences were performed along with 45% faster versions of these sequences using half the number of phase-encoding steps. Images from the faster sequences were processed by a Food and Drug Administration-cleared AI-based image enhancement software for resolution enhancement. Four board-certified neuroradiologists scored the SOC and AI-enhanced image series independently on a five-point Likert scale for image SNR, anatomic conspicuity, overall image quality, imaging artifacts, and diagnostic confidence. While interrater κ was low to fair, the AI-enhanced scans were noninferior for all metrics and actually demonstrated a qualitative SNR improvement. Quantitative analyses showed that the AI software restored the high spatial resolution of small structures, such as the septum pellucidum. In conclusion, AI-based software can achieve noninferior image quality for 3D brain MRI sequences with a 45% scan time reduction, potentially improving the patient experience and scanner efficiency without sacrificing diagnostic quality. Keywords: MR Imaging, CNS, Brain/Brain Stem, Reconstruction Algorithms © RSNA, 2022.
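Interrater agreement of Likert ratings like these is typically summarized with Cohen's kappa; a minimal sketch for two readers (the scores below are hypothetical, not the study's data):

```python
import numpy as np

def cohens_kappa(r1, r2, labels):
    """Cohen's kappa between two raters' categorical scores."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    p_o = np.mean(r1 == r2)  # observed agreement
    # chance agreement from each rater's marginal label frequencies
    p_e = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 5-point Likert scores from two readers
a = [3, 4, 4, 5, 2, 3, 4, 4]
b = [3, 4, 3, 5, 2, 4, 4, 4]
print(round(cohens_kappa(a, b, labels=range(1, 6)), 3))  # 0.619
```

Values near 0 indicate chance-level agreement, which is why "low to fair" kappa can coexist with noninferiority on the aggregate ratings.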

4.
Front Neurol ; 12: 685276, 2021.
Article in English | MEDLINE | ID: mdl-34646227

ABSTRACT

Background: Magnetic resonance (MR) scans are routine clinical procedures for monitoring people with multiple sclerosis (PwMS). Patient discomfort, timely scheduling, and financial burden motivate the need to accelerate MR scan time. We examined the clinical application of a deep learning (DL) model in restoring the image quality of accelerated routine clinical brain MR scans for PwMS. Methods: We acquired fast 3D T1w BRAVO and fast 3D T2w FLAIR MRI sequences (half the phase encodes and half the number of slices) in parallel to conventional parameters. Using a subset of the scans, we trained a DL model to generate images from fast scans with quality similar to the conventional scans and then applied the model to the remaining scans. We calculated clinically relevant T1w volumetrics (normalized whole brain, thalamic, gray matter, and white matter volume) for all scans and T2 lesion volume in a sub-analysis. We performed paired t-tests comparing conventional, fast, and fast with DL for these volumetrics, and fit repeated measures mixed-effects models to test for differences in correlations between volumetrics and clinically relevant patient-reported outcomes (PRO). Results: We found statistically significant but small differences between conventional and fast scans with DL for all T1w volumetrics. There was no difference in the extent to which the key T1w volumetrics correlated with clinically relevant PROs of MS symptom burden and neurological disability. Conclusion: A deep learning model that improves the image quality of the accelerated routine clinical brain MR scans has the potential to inform clinically relevant outcomes in MS.
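The paired t-tests on volumetrics reduce to a one-sample t on the per-patient differences; a minimal numpy sketch (the volume values below are hypothetical, not the study's data):

```python
import numpy as np

def paired_t(x, y):
    """Paired t statistic and degrees of freedom for matched measurements."""
    d = np.asarray(x, float) - np.asarray(y, float)
    n = len(d)
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1

# Hypothetical normalized whole-brain volume fractions
# (conventional scan vs. fast scan with DL, same patients)
conv = [0.82, 0.79, 0.85, 0.81]
fast = [0.81, 0.77, 0.82, 0.77]
t, dof = paired_t(conv, fast)
print(round(t, 3), dof)  # 3.873 3
```

The p-value then comes from the t distribution with `dof` degrees of freedom (e.g. `scipy.stats.t.sf`).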

6.
NPJ Digit Med ; 4(1): 127, 2021 Aug 23.
Article in English | MEDLINE | ID: mdl-34426629

ABSTRACT

More widespread use of positron emission tomography (PET) imaging is limited by its high cost and radiation dose. Reductions in PET scan time or radiotracer dosage typically degrade diagnostic image quality (DIQ). Deep-learning-based reconstruction may improve DIQ, but such methods have not been clinically evaluated in a realistic multicenter, multivendor environment. In this study, we evaluated the performance and generalizability of a deep-learning-based image-quality enhancement algorithm applied to fourfold reduced-count whole-body PET in a realistic clinical oncologic imaging environment with multiple blinded readers, institutions, and scanner types. We demonstrate that the low-count-enhanced scans were noninferior to the standard scans in DIQ (p < 0.05) and overall diagnostic confidence (p < 0.001) independent of the underlying PET scanner used. Lesion detection for the low-count-enhanced scans had a high patient-level sensitivity of 0.94 (0.83-0.99) and specificity of 0.98 (0.95-0.99). Interscan kappa agreement of 0.85 was comparable to intra-reader (0.88) and pairwise inter-reader agreements (maximum of 0.72). SUV quantification was comparable in the reference regions and lesions (lowest p-value = 0.59) and had high correlation (lowest CCC = 0.94). Thus, we demonstrated that deep learning can be used to restore diagnostic image quality and maintain SUV accuracy for fourfold reduced-count PET scans, with interscan variations in lesion depiction lower than intra- and inter-reader variations. This method generalized to an external validation set of clinical patients from multiple institutions and scanner types. Overall, this method may enable either dose or exam-duration reduction, increasing safety and lowering the cost of PET imaging.
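The CCC reported for SUV agreement is Lin's concordance correlation coefficient, which penalizes both poor correlation and systematic bias; a minimal sketch (the data below are illustrative, not the study's):

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient (population-variance convention)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    # perfect correlation with a constant offset is still penalized by (mx - my)^2
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

print(round(lin_ccc([1, 2, 3], [2, 3, 4]), 3))  # 0.571
```

Note that `lin_ccc([1, 2, 3], [2, 3, 4])` is below 1 even though the Pearson correlation is exactly 1, because the constant shift counts as disagreement.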

7.
Magn Reson Med ; 86(3): 1687-1700, 2021 09.
Article in English | MEDLINE | ID: mdl-33914965

ABSTRACT

PURPOSE: With rising safety concerns over the use of gadolinium-based contrast agents (GBCAs) in contrast-enhanced MRI, there is a need for dose reduction while maintaining diagnostic capability. This work proposes comprehensive technical solutions for a deep learning (DL) model that predicts contrast-enhanced images of the brain with approximately 10% of the standard dose, across different sites and scanners. METHODS: The proposed DL model consists of a set of methods that improve the model robustness and generalizability. The steps include multi-planar reconstruction, a 2.5D model, and enhancement-weighted L1, perceptual, and adversarial losses. The proposed model predicts contrast-enhanced images from corresponding pre-contrast and low-dose images. With IRB approval and informed consent, 640 heterogeneous patient scans (56 train, 13 validation, and 571 test) from 3 institutions consisting of 3D T1-weighted brain images were used. Quantitative metrics were computed and 50 randomly sampled test cases were evaluated by 2 board-certified radiologists. Quantitative tumor segmentation was performed on cases with abnormal enhancements. An ablation study was performed for systematic evaluation of the proposed technical solutions. RESULTS: The average peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) between full-dose and model prediction were 35.07±3.84 dB and 0.92±0.02, respectively. Radiologists found the same enhancing pattern in 45/50 (90%) cases; discrepancies were minor differences in contrast intensity and artifacts, with no effect on diagnosis. The average segmentation Dice score between full-dose and synthesized images was 0.88±0.06 (median = 0.91). CONCLUSIONS: We have proposed a DL model with technical solutions for low-dose contrast-enhanced brain MRI with potential generalizability under diverse clinical settings.
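The PSNR metric reported here follows directly from the mean squared error between prediction and reference; a minimal sketch (normalizing by the reference dynamic range is one common convention, assumed here):

```python
import numpy as np

def psnr(ref, test, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    if data_range is None:
        data_range = ref.max() - ref.min()  # assumed peak value convention
    mse = np.mean((ref - test) ** 2)
    return 20 * np.log10(data_range / np.sqrt(mse))

ref = np.array([0.0, 1.0])
print(round(psnr(ref, ref + 0.1), 3))  # 20.0
```

Every 20 dB corresponds to a tenfold reduction in root-mean-square error relative to the image's dynamic range.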


Subjects
Deep Learning , Gadolinium , Brain/diagnostic imaging , Humans , Magnetic Resonance Imaging , Signal-To-Noise Ratio
9.
IEEE Trans Med Imaging ; 39(10): 3089-3099, 2020 10.
Article in English | MEDLINE | ID: mdl-32286966

ABSTRACT

A multi-echo saturation recovery sequence can provide redundant information to synthesize multi-contrast magnetic resonance imaging. Traditional synthesis methods, such as GE's MAGiC platform, employ a model-fitting approach to generate parameter-weighted contrasts. However, model over-simplification, as well as imperfections in the acquisition, can lead to undesirable reconstruction artifacts, especially in T2-FLAIR contrast. To improve the image quality, in this study, a multi-task deep learning model is developed to synthesize multi-contrast neuroimaging jointly using both signal relaxation relationships and spatial information. Compared with previous deep learning-based synthesis, the correlation between different destination contrasts is utilized to enhance reconstruction quality. To improve model generalizability and evaluate clinical significance, the proposed model was trained and tested on a large multi-center dataset, including healthy subjects and patients with pathology. Results from both quantitative comparison and a clinical reader study demonstrate that the multi-task formulation leads to more efficient and accurate contrast synthesis than previous methods.


Subjects
Brain , Magnetic Resonance Imaging , Artifacts , Brain/diagnostic imaging , Humans , Image Processing, Computer-Assisted , Neuroimaging
10.
JAMA Netw Open ; 3(3): e200772, 2020 03 02.
Article in English | MEDLINE | ID: mdl-32163165

ABSTRACT

Importance: Predicting infarct size and location is important for decision-making and prognosis in patients with acute stroke. Objectives: To determine whether a deep learning model can predict final infarct lesions using magnetic resonance images (MRIs) acquired at initial presentation (baseline) and to compare the model with current clinical prediction methods. Design, Setting, and Participants: In this multicenter prognostic study, a specific type of neural network for image segmentation (U-net) was trained, validated, and tested using patients from the Imaging Collaterals in Acute Stroke (iCAS) study from April 14, 2014, to April 15, 2018, and the Diffusion Weighted Imaging Evaluation for Understanding Stroke Evolution Study-2 (DEFUSE-2) study from July 14, 2008, to September 17, 2011 (reported in October 2012). Patients underwent baseline perfusion-weighted and diffusion-weighted imaging and MRI at 3 to 7 days after baseline. Patients were grouped into unknown, minimal, partial, and major reperfusion status based on 24-hour imaging results. Baseline images acquired at presentation were inputs, and the final true infarct lesion at 3 to 7 days was considered the ground truth for the model. The model calculated the probability of infarction for every voxel, which can be thresholded to produce a prediction. Data were analyzed from July 1, 2018, to March 7, 2019. Main Outcomes and Measures: Area under the curve, Dice score coefficient (DSC) (a metric from 0-1 indicating the extent of overlap between the prediction and the ground truth; a DSC of ≥0.5 represents significant overlap), and volume error. Current clinical methods were compared with model performance in subgroups of patients with minimal or major reperfusion. 
Results: Among the 182 patients included in the model (97 women [53.3%]; mean [SD] age, 65 [16] years), the deep learning model achieved a median area under the curve of 0.92 (interquartile range [IQR], 0.87-0.96), DSC of 0.53 (IQR, 0.31-0.68), and volume error of 9 (IQR, -14 to 29) mL. In subgroups with minimal (DSC, 0.58 [IQR, 0.31-0.67] vs 0.55 [IQR, 0.40-0.65]; P = .37) or major (DSC, 0.48 [IQR, 0.29-0.65] vs 0.45 [IQR, 0.15-0.54]; P = .002) reperfusion for which comparison with existing clinical methods was possible, the deep learning model had comparable or better performance. Conclusions and Relevance: The deep learning model appears to have successfully predicted infarct lesions from baseline imaging without reperfusion information and achieved comparable performance to existing clinical methods. Predicting the subacute infarct lesion may help clinicians prepare for decompression treatment and aid in patient selection for neuroprotective clinical trials.
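The DSC metric thresholds the model's voxelwise infarct probabilities into a binary mask and measures overlap with the ground-truth lesion; a minimal sketch (the probabilities below are hypothetical):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2 * inter / denom if denom else 1.0  # both empty: perfect agreement

prob = np.array([0.9, 0.6, 0.4, 0.2])  # per-voxel infarct probabilities
truth = np.array([1, 1, 0, 0], bool)   # ground-truth lesion mask
print(dice(prob >= 0.5, truth))  # 1.0
```

Sweeping the threshold on `prob` and integrating sensitivity against 1 - specificity is what produces the reported area under the curve.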


Subjects
Brain Ischemia/diagnosis , Deep Learning/statistics & numerical data , Magnetic Resonance Imaging/methods , Patient Selection , Aged , Brain Ischemia/physiopathology , Female , Humans , Male , Middle Aged , Prognosis , Retrospective Studies
11.
Magn Reson Med ; 84(3): 1456-1469, 2020 09.
Article in English | MEDLINE | ID: mdl-32129529

ABSTRACT

PURPOSE: To improve the image quality of highly accelerated multi-channel MRI data by learning a joint variational network that reconstructs multiple clinical contrasts jointly. METHODS: Data from our multi-contrast acquisition were embedded into the variational network architecture where shared anatomical information is exchanged by mixing the input contrasts. Complementary k-space sampling across imaging contrasts and Bunch-Phase/Wave-Encoding were used for data acquisition to improve the reconstruction at high accelerations. At 3T, our joint variational network approach across T1w, T2w and T2-FLAIR-weighted brain scans was tested for retrospective under-sampling at R = 6 (2D) and R = 4 × 4 (3D) acceleration. Prospective acceleration was also performed for 3D data where the combined acquisition time for whole brain coverage at 1 mm isotropic resolution across three contrasts was less than 3 min. RESULTS: Across all test datasets, our joint multi-contrast network better preserved fine anatomical details with reduced image-blurring when compared to the corresponding single-contrast reconstructions. Improvement in image quality was also obtained through complementary k-space sampling and Bunch-Phase/Wave-Encoding where the synergistic combination yielded the overall best performance as evidenced by exemplary slices and quantitative error metrics. CONCLUSION: By leveraging shared anatomical structures across the jointly reconstructed scans, our joint multi-contrast approach learnt more efficient regularizers, which helped to retain natural image appearance and avoid over-smoothing. When synergistically combined with advanced encoding techniques, the performance was further improved, enabling up to R = 16-fold acceleration with good image quality. This should help pave the way to very rapid high-resolution brain exams.


Subjects
Image Processing, Computer-Assisted , Imaging, Three-Dimensional , Magnetic Resonance Imaging , Prospective Studies , Retrospective Studies
12.
J Cereb Blood Flow Metab ; 40(11): 2240-2253, 2020 11.
Article in English | MEDLINE | ID: mdl-31722599

ABSTRACT

To improve the quality of MRI-based cerebral blood flow (CBF) measurements, a deep convolutional neural network (dCNN) was trained to combine single- and multi-delay arterial spin labeling (ASL) and structural images to predict gold-standard 15O-water PET CBF images obtained on a simultaneous PET/MRI scanner. The dCNN was trained and tested on 64 scans in 16 healthy controls (HC) and 16 cerebrovascular disease patients (PT) with 4-fold cross-validation. Fidelity to the PET CBF images and the effects of bias due to training on different cohorts were examined. The dCNN significantly improved CBF image quality compared with ASL alone (mean ± standard deviation): structural similarity index (0.854 ± 0.036 vs. 0.743 ± 0.045 [single-delay] and 0.732 ± 0.041 [multi-delay], P < 0.0001); normalized root mean squared error (0.209 ± 0.039 vs. 0.326 ± 0.050 [single-delay] and 0.344 ± 0.055 [multi-delay], P < 0.0001). The dCNN also yielded mean CBF with reduced estimation error in both HC and PT (P < 0.001), and demonstrated better correlation with PET. The dCNN trained with the mixed HC and PT cohort performed the best. The results also suggested that models should be trained on cases representative of the target population.
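The normalized root mean squared error used to compare CBF maps can be sketched as the RMSE divided by the reference dynamic range; this is one common convention, and the paper's exact normalization is not stated in the abstract:

```python
import numpy as np

def nrmse(ref, test):
    """RMSE normalized by the reference dynamic range (assumed convention)."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    rmse = np.sqrt(np.mean((ref - test) ** 2))
    return rmse / (ref.max() - ref.min())

# Toy reference and test "maps": a uniform error of 1 over a range of 2
print(nrmse([0.0, 2.0], [1.0, 3.0]))  # 0.5
```

Normalizing makes error magnitudes comparable across subjects whose absolute CBF scales differ.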


Subjects
Brain/blood supply , Brain/diagnostic imaging , Cerebrovascular Circulation , Magnetic Resonance Imaging , Neural Networks, Computer , Oxygen Radioisotopes , Positron-Emission Tomography/methods , Water , Adolescent , Adult , Aged , Algorithms , Data Analysis , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging/methods , Magnetic Resonance Imaging/standards , Male , Middle Aged , Models, Biological , Positron-Emission Tomography/standards , Reproducibility of Results , Young Adult
13.
Med Phys ; 46(8): 3555-3564, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31131901

ABSTRACT

PURPOSE: Our goal was to use a generative adversarial network (GAN) with feature matching and task-specific perceptual loss to synthesize standard-dose amyloid positron emission tomography (PET) images of high quality, including accurate pathological features, from ultra-low-dose PET images only. METHODS: Forty PET datasets from 39 participants were acquired with a simultaneous PET/MRI scanner following injection of 330 ± 30 MBq of the amyloid radiotracer 18F-florbetaben. The raw list-mode PET data were reconstructed as the standard-dose ground truth and were randomly undersampled by a factor of 100 to reconstruct 1% low-dose PET scans. A 2D encoder-decoder network was implemented as the generator to synthesize a standard-dose image, and a discriminator was used to evaluate it. The two networks contested with each other to achieve high-visual-quality PET from the ultra-low-dose PET. Multi-slice inputs were used to reduce noise by providing the network with 2.5D information. Feature matching was applied to reduce hallucinated structures. Task-specific perceptual loss was designed to maintain the correct pathological features. The image quality was evaluated by peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and root mean square error (RMSE) metrics with and without each of these modules. Two expert radiologists were asked to score image quality on a 5-point scale and to identify amyloid status (positive or negative). RESULTS: With only low-dose PET as input, the proposed method significantly outperformed Chen et al.'s method (Chen et al. Radiology. 2018;290:649-656), which showed the best performance on this task with the same input (PET-only model), by 1.87 dB in PSNR, 2.04% in SSIM, and 24.75% in RMSE. It also achieved results comparable to those of Chen et al.'s method that used additional magnetic resonance imaging (MRI) inputs (PET-MR model). Experts' reading results showed that the proposed method could achieve better overall image quality and maintain better pathological features indicating amyloid status than both the PET-only and PET-MR models proposed by Chen et al. CONCLUSION: Standard-dose amyloid PET images can be synthesized from ultra-low-dose images using a GAN. Applying adversarial learning, feature matching, and task-specific perceptual loss is essential to ensure image quality and the preservation of pathological features.


Subjects
Image Processing, Computer-Assisted/methods , Machine Learning , Positron-Emission Tomography , Radiation Dosage , Signal-To-Noise Ratio
14.
IEEE Trans Med Imaging ; 38(1): 167-179, 2019 01.
Article in English | MEDLINE | ID: mdl-30040634

ABSTRACT

Undersampled magnetic resonance image (MRI) reconstruction is typically an ill-posed linear inverse task. The time and resource intensive computations require tradeoffs between accuracy and speed. In addition, state-of-the-art compressed sensing (CS) analytics are not cognizant of the image diagnostic quality. To address these challenges, we propose a novel CS framework that uses generative adversarial networks (GAN) to model the (low-dimensional) manifold of high-quality MR images. Leveraging a mixture of least-squares (LS) GANs and pixel-wise l1/l2 cost, a deep residual network with skip connections is trained as the generator that learns to remove the aliasing artifacts by projecting onto the image manifold. The LSGAN learns the texture details, while the l1/l2 cost suppresses high-frequency noise. A discriminator network, which is a multilayer convolutional neural network (CNN), plays the role of a perceptual cost that is then jointly trained based on high-quality MR images to score the quality of retrieved images. In the operational phase, an initial aliased estimate (e.g., simply obtained by zero-filling) is propagated into the trained generator to output the desired reconstruction. This demands a very low computational overhead. Extensive evaluations are performed on a large contrast-enhanced MR dataset of pediatric patients. Images rated by expert radiologists corroborate that GANCS retrieves higher quality images with improved fine texture details compared with conventional Wavelet-based and dictionary-learning-based CS schemes as well as with deep-learning-based schemes using pixel-wise training. In addition, it offers reconstruction times of under a few milliseconds, which are two orders of magnitude faster than the current state-of-the-art CS-MRI schemes.
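The pixel-wise l1/l2 cost paired with the LSGAN term can be sketched as a weighted sum of the two norms over the residual image; the weight `alpha` below is an illustrative choice, not the paper's:

```python
import numpy as np

def pixel_loss(pred, target, alpha=0.5):
    """Weighted mix of l1 and l2 pixel-wise costs (alpha is an assumed weight)."""
    diff = np.asarray(pred, float) - np.asarray(target, float)
    l1 = np.mean(np.abs(diff))   # robust term: preserves edges
    l2 = np.mean(diff ** 2)      # quadratic term: suppresses high-frequency noise
    return alpha * l1 + (1 - alpha) * l2

# Toy residual of 2 everywhere: 0.5 * 2 + 0.5 * 4
print(pixel_loss([2.0, 2.0], [0.0, 0.0]))  # 3.0
```

In training, this pixel cost is added to the adversarial (LSGAN) loss, so the generator balances fidelity to the ground truth against realistic texture.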


Subjects
Data Compression/methods , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Adrenal Glands/diagnostic imaging , Algorithms , Databases, Factual , Humans , Knee/diagnostic imaging , Phantoms, Imaging
15.
Radiology ; 290(3): 649-656, 2019 03.
Article in English | MEDLINE | ID: mdl-30526350

ABSTRACT

Purpose To reduce radiotracer requirements for amyloid PET/MRI without sacrificing diagnostic quality by using deep learning methods. Materials and Methods Forty data sets from 39 patients (mean age ± standard deviation [SD], 67 years ± 8), including 16 male patients and 23 female patients (mean age, 66 years ± 6 and 68 years ± 9, respectively), who underwent simultaneous amyloid (fluorine 18 [18F]-florbetaben) PET/MRI examinations were acquired from March 2016 through October 2017 and retrospectively analyzed. One hundredth of the raw list-mode PET data were randomly chosen to simulate a low-dose (1%) acquisition. Convolutional neural networks were implemented with low-dose PET and multiple MR images (PET-plus-MR model) or with low-dose PET alone (PET-only) as inputs to predict full-dose PET images. Quality of the synthesized images was evaluated while Bland-Altman plots assessed the agreement of regional standard uptake value ratios (SUVRs) between image types. Two readers scored image quality on a five-point scale (5 = excellent) and determined amyloid status (positive or negative). Statistical analyses were carried out to assess the difference of image quality metrics and reader agreement and to determine confidence intervals (CIs) for reading results. Results The synthesized images (especially from the PET-plus-MR model) showed marked improvement on all quality metrics compared with the low-dose image. All PET-plus-MR images scored 3 or higher, with proportions of images rated greater than 3 similar to those for the full-dose images (-10% difference [eight of 80 readings], 95% CI: -15%, -5%). Accuracy for amyloid status was high (71 of 80 readings [89%]) and similar to intrareader reproducibility of full-dose images (73 of 80 [91%]). The PET-plus-MR model also had the smallest mean and variance for SUVR difference to full-dose images. Conclusion Simultaneously acquired MRI and ultra-low-dose PET data can be used to synthesize full-dose-like amyloid PET images. 
© RSNA, 2018 Online supplemental material is available for this article. See also the editorial by Catana in this issue.


Subjects
Aniline Compounds/administration & dosage , Brain Diseases/diagnostic imaging , Deep Learning , Magnetic Resonance Imaging/methods , Positron-Emission Tomography/methods , Stilbenes/administration & dosage , Aged , Alzheimer Disease/diagnostic imaging , Amyloid/analysis , Cognitive Dysfunction/diagnostic imaging , Female , Humans , Lewy Body Disease/diagnostic imaging , Male , Middle Aged , Multimodal Imaging , Parkinsonian Disorders/diagnostic imaging , Retrospective Studies
16.
AJR Am J Roentgenol ; 212(1): 44-51, 2019 01.
Article in English | MEDLINE | ID: mdl-30354266

ABSTRACT

OBJECTIVE: When treatment decisions are being made for patients with acute ischemic stroke, timely and accurate outcome prediction plays an important role. The optimal rehabilitation strategy also relies on long-term outcome predictions. The decision-making process involves numerous biomarkers including imaging features and demographic information. The objective of this study was to integrate common stroke biomarkers using machine learning methods and predict patient recovery outcome at 90 days. MATERIALS AND METHODS: A total of 512 patients were enrolled in this retrospective study. Extreme gradient boosting (XGB) and gradient boosting machine (GBM) models were used to predict modified Rankin scale (mRS) scores at 90 days using biomarkers available at admission and 24 hours. Feature selections were performed using a greedy algorithm. Fivefold cross validation was applied to estimate model performance. RESULTS: For binary prediction of an mRS score of greater than 2 using biomarkers available at admission, XGB and GBM had an AUC of 0.746 and 0.748, respectively. Adding the National Institutes of Health Stroke Score at 24 hours and performing feature selection improved the AUC of XGB to 0.884 and the AUC of GBM to 0.877. With the addition of the recanalization outcome, XGB's AUC improved to 0.807 for nonrecanalized patients and dropped to 0.670 for recanalized patients. GBM's AUC improved to 0.781 for nonrecanalized patients and dropped to 0.655 for recanalized patients. CONCLUSION: Decision tree-based GBMs can predict the recovery outcome of stroke patients at admission with a high AUC. Breaking down the patient groups on the basis of recanalization and nonrecanalization can potentially help with the treatment decision process.
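The AUCs reported for the XGB and GBM models can be computed without plotting a ROC curve at all, via the rank-sum (Mann-Whitney) formulation; a minimal sketch (scores and labels below are hypothetical):

```python
import numpy as np

def roc_auc(scores, labels):
    """ROC AUC as the probability a positive outranks a negative (ties count half)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    gt = (pos[:, None] > neg[None, :]).sum()   # correctly ordered pairs
    eq = (pos[:, None] == neg[None, :]).sum()  # tied pairs
    return (gt + 0.5 * eq) / (len(pos) * len(neg))

# Hypothetical predicted probabilities of mRS > 2 and true labels
print(roc_auc([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1]))  # 0.75
```

This pairwise view also makes clear why AUC is insensitive to the classification threshold, unlike accuracy.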


Subjects
Brain Ischemia/diagnostic imaging , Brain Ischemia/therapy , Machine Learning , Stroke/diagnostic imaging , Stroke/therapy , Algorithms , Biomarkers/analysis , Computed Tomography Angiography , Decision Trees , Demography , Female , Humans , Male , Middle Aged , Predictive Value of Tests , Prognosis , Retrospective Studies , Tomography, X-Ray Computed
17.
Front Neurol ; 9: 679, 2018.
Article in English | MEDLINE | ID: mdl-30271370

ABSTRACT

Model performance depends strongly not only on the algorithm used but also on the data set to which it is applied. This makes the comparison of newly developed tools to previously published approaches difficult: either researchers need to implement others' algorithms first, to establish an adequate benchmark on their data, or a direct comparison of new and old techniques is infeasible. The Ischemic Stroke Lesion Segmentation (ISLES) challenge, which has now run for three consecutive years, aims to address this problem of comparability. ISLES 2016 and 2017 focused on lesion outcome prediction after ischemic stroke: by providing a uniformly pre-processed data set, researchers from all over the world could apply their algorithms directly. A total of nine teams participated in ISLES 2015, and 15 teams participated in ISLES 2016. Their performance was evaluated in a fair and transparent way to identify the state of the art among all submissions. Top-ranked teams almost always employed deep learning tools, predominantly convolutional neural networks (CNNs). Despite these great efforts, lesion outcome prediction remains challenging. The annotated data set remains publicly available, and new approaches can be compared directly via the online evaluation system, serving as a continuing benchmark (www.isles-challenge.org).

18.
Neuroimage ; 179: 199-206, 2018 10 01.
Article in English | MEDLINE | ID: mdl-29894829

ABSTRACT

Deep neural networks have demonstrated promising potential for the field of medical image reconstruction, successfully generating high-quality images for CT, PET, and MRI. In this work, a deep neural network has been developed for an MRI reconstruction technique referred to as quantitative susceptibility mapping (QSM), in order to perform dipole deconvolution, which restores the magnetic susceptibility source from an MRI field map. Previous approaches to QSM require multiple-orientation data (e.g., Calculation of Susceptibility through Multiple Orientation Sampling, or COSMOS) or regularization terms (e.g., Truncated K-space Division, or TKD; Morphology Enabled Dipole Inversion, or MEDI) to solve an ill-conditioned dipole deconvolution problem. Unfortunately, they either entail challenges in data acquisition (i.e., long scan time and multiple head orientations) or suffer from image artifacts. To overcome these shortcomings, a deep neural network, referred to as QSMnet, is constructed to generate a high-quality susceptibility source map from single-orientation data. The network has a modified U-net structure and is trained using COSMOS QSM maps, which are considered the gold standard. Five head-orientation datasets from five subjects were employed for patch-wise network training after doubling the training data using model-based data augmentation. Seven additional datasets of five head-orientation images (35 images in total) were used for validation (one dataset) and testing (six datasets). The QSMnet maps of the test dataset were compared with the maps from TKD and MEDI for image quality and consistency with respect to multiple head orientations. Quantitative and qualitative image quality comparisons demonstrate that the QSMnet results have superior image quality to those of TKD or MEDI and comparable image quality to those of COSMOS. Additionally, QSMnet maps reveal substantially better consistency across the multiple head-orientation data than those from TKD or MEDI. As a preliminary application, the network was further tested on three patients, one with microbleeds, another with multiple sclerosis lesions, and the third with hemorrhage. The QSMnet maps showed lesion contrast similar to that of MEDI, demonstrating potential for future applications.
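The ill conditioning of dipole deconvolution comes from the k-space dipole kernel D(k) = 1/3 - kz²/|k|² (B0 along z), which vanishes on a conical surface, so naive division by it blows up noise. A minimal numpy sketch of the kernel (setting D at k = 0 to zero is an assumption; unit voxel size is illustrative):

```python
import numpy as np

def dipole_kernel(shape, voxel_size=(1.0, 1.0, 1.0)):
    """k-space dipole kernel D(k) = 1/3 - kz^2 / |k|^2, B0 along z; D(0) set to 0."""
    axes = [np.fft.fftfreq(n, d=v) for n, v in zip(shape, voxel_size)]
    kx, ky, kz = np.meshgrid(*axes, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(invalid="ignore", divide="ignore"):
        D = 1 / 3 - kz**2 / k2   # 0/0 at the k-space origin -> nan
    D[k2 == 0] = 0.0             # assumed convention for the undefined DC term
    return D

D = dipole_kernel((4, 4, 4))
print(round(D[0, 0, 1], 3))  # -0.667 (pure kz direction: 1/3 - 1)
```

The field map is the susceptibility map multiplied by D in k-space; TKD truncates D away from zero before inverting, while QSMnet learns the inversion from COSMOS training pairs.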


Subjects
Algorithms , Brain Mapping/methods , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Adult , Aged , Brain/anatomy & histology , Female , Humans , Male , Middle Aged
19.
J Magn Reson Imaging ; 48(2): 330-340, 2018 08.
Article in English | MEDLINE | ID: mdl-29437269

ABSTRACT

BACKGROUND: There are concerns over gadolinium deposition from the administration of gadolinium-based contrast agents (GBCA). PURPOSE: To reduce gadolinium dose in contrast-enhanced brain MRI using a deep learning method. STUDY TYPE: Retrospective, crossover. POPULATION: Sixty patients receiving clinically indicated contrast-enhanced brain MRI. SEQUENCE: 3D T1-weighted inversion-recovery-prepared fast spoiled gradient-echo (IR-FSPGR) imaging was acquired at both 1.5T and 3T. In 60 brain MRI exams, the IR-FSPGR sequence was obtained under three conditions: precontrast, and postcontrast with a 10% low dose (0.01 mmol/kg) and a 100% full dose (0.1 mmol/kg) of gadobenate dimeglumine. We trained a deep learning model on the first 10 cases (with mixed indications) to approximate full-dose images from the precontrast and low-dose images. Synthesized full-dose images were created using the trained model in two test sets: 20 patients with mixed indications and 30 patients with glioma. ASSESSMENT: For both test sets, the low-dose, true full-dose, and synthesized full-dose postcontrast image sets were compared quantitatively using peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). For the test set of 20 patients with mixed indications, two neuroradiologists blindly and independently scored the three postcontrast image sets for image quality, motion-artifact suppression, and contrast enhancement relative to the precontrast images. STATISTICAL ANALYSIS: Results were assessed using paired t-tests and noninferiority tests. RESULTS: The proposed deep learning method yielded significant (n = 50, P < 0.001) improvements over the low-dose images (>5 dB PSNR gain and >11.0% SSIM gain). Ratings of image quality (n = 20, P = 0.003) and contrast enhancement (n = 20, P < 0.001) were significantly higher.
Compared to the true full-dose images, the synthesized full-dose images showed a slight but nonsignificant reduction in image quality (n = 20, P = 0.083) and contrast enhancement (n = 20, P = 0.068). Slightly better (n = 20, P = 0.039) motion-artifact suppression was noted in the synthesized images. The noninferiority test rejected inferiority of the synthesized images to the true full-dose images for image quality (95% CI: -14% to 9%), artifact suppression (95% CI: -5% to 20%), and contrast enhancement (95% CI: -13% to 6%). DATA CONCLUSION: With the proposed deep learning method, the gadolinium dose can be reduced 10-fold while preserving contrast information and avoiding significant degradation in image quality. LEVEL OF EVIDENCE: 3 Technical Efficacy: Stage 5 J. MAGN. RESON. IMAGING 2018;48:330-340.
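The PSNR and SSIM metrics used in the quantitative comparison can be sketched as follows. Note that the SSIM below is a simplified version computed over a single global window, whereas standard implementations (and most likely the study) use a small sliding window:

```python
import numpy as np

def psnr(ref, test, data_range=None):
    """Peak signal-to-noise ratio (dB) of `test` against reference `ref`."""
    ref, test = np.asarray(ref, dtype=float), np.asarray(test, dtype=float)
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(data_range**2 / mse)

def ssim_global(ref, test, data_range=None):
    """SSIM computed over one global window (simplified, not windowed)."""
    ref, test = np.asarray(ref, dtype=float), np.asarray(test, dtype=float)
    if data_range is None:
        data_range = ref.max() - ref.min()
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), test.mean()
    var_x, var_y = ref.var(), test.var()
    cov = np.mean((ref - mu_x) * (test - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2))
```

Here the true full-dose image would serve as `ref` and the low-dose or synthesized image as `test`.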


Subjects
Brain/diagnostic imaging; Contrast Media/chemistry; Deep Learning; Gadolinium/chemistry; Magnetic Resonance Imaging; Adult; Aged; Artifacts; Brain Neoplasms/diagnostic imaging; Female; Glioma/diagnostic imaging; Humans; Machine Learning; Male; Middle Aged; Motion (Physics)
20.
Magn Reson Med ; 73(2): 523-35, 2015 Feb.
Article in English | MEDLINE | ID: mdl-24604305

ABSTRACT

PURPOSE: A typical clinical MR examination includes multiple scans to acquire images with different contrasts for complementary diagnostic information. This multicontrast scheme requires a long scanning time. The combination of partially parallel imaging and compressed sensing (CS-PPI) has been used to reconstruct accelerated scans; however, several problems in existing methods remain unsolved. The goal of this work is to improve existing CS-PPI methods for multicontrast imaging, especially two-dimensional imaging. THEORY AND METHODS: If the same field of view is scanned in multicontrast imaging, there is a significant amount of sharable information. This study proposes using manifold sharable information among multicontrast images to enhance CS-PPI in a sequential way. Coil sensitivity information and structure-based adaptive regularization, extracted from previously reconstructed images, were applied to enhance subsequent reconstructions. The proposed method is called Parallel-imaging and compressed-sensing Reconstruction Of Multicontrast Imaging using SharablE information (PROMISE). RESULTS: Using L1-SPIRiT as a CS-PPI example, results on multicontrast brain and carotid scans demonstrated that a lower error level and better detail preservation can be achieved by exploiting manifold sharable information. Moreover, the advantage of PROMISE persists in the presence of interscan motion. CONCLUSION: Using the sharable information among multicontrast images can enhance CS-PPI with tolerance to motion.
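The compressed-sensing building block that PROMISE extends, alternating a sparsity-promoting shrinkage step with k-space data consistency, can be sketched as a minimal single-coil ISTA-style loop. The `weights` argument, standing in for a structure map derived from a previously reconstructed contrast, is an illustrative simplification of PROMISE's structure-based adaptive regularization, not the published algorithm (which builds on L1-SPIRiT with coil sensitivities):

```python
import numpy as np

def soft_threshold(x, t):
    """Complex soft-thresholding: shrink magnitudes by t, preserve phase."""
    mag = np.abs(x)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-12)) * x, 0)

def ista_recon(kspace, mask, weights=None, lam=0.01, n_iter=50):
    """Single-coil CS reconstruction: alternate an image-domain shrinkage
    (optionally weighted by a structure prior) with k-space data consistency."""
    if weights is None:
        weights = np.ones(kspace.shape)
    x = np.fft.ifft2(np.where(mask, kspace, 0))  # zero-filled initial guess
    for _ in range(n_iter):
        x = soft_threshold(x, lam * weights)     # sparsity / prior step
        k_est = np.fft.fft2(x)
        k_est[mask] = kspace[mask]               # keep the acquired samples
        x = np.fft.ifft2(k_est)
    return x
```

Lowering `weights` where a previous contrast shows edges penalizes those locations less, which is the intuition behind sharing structural information across contrasts.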


Subjects
Brain/anatomy & histology; Carotid Arteries/anatomy & histology; Data Compression/methods; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Algorithms; Humans; Reproducibility of Results; Sensitivity and Specificity