Results 1 - 20 of 64
1.
MAGMA ; 2024 Apr 13.
Article in English | MEDLINE | ID: mdl-38613715

ABSTRACT

PURPOSE: Use a conference challenge format to compare machine learning-based gamma-aminobutyric acid (GABA)-edited magnetic resonance spectroscopy (MRS) reconstruction models using one-quarter of the transients typically acquired during a complete scan. METHODS: There were three tracks: Track 1: simulated data, Track 2: identical acquisition parameters with in vivo data, and Track 3: different acquisition parameters with in vivo data. The mean squared error, signal-to-noise ratio, linewidth, and a proposed shape score metric were used to quantify model performance. Challenge organizers provided open access to a baseline model, simulated noise-free data, guides for adding synthetic noise, and in vivo data. RESULTS: Three submissions were compared. A covariance matrix convolutional neural network model was most successful for Track 1. A vision transformer model operating on a spectrogram data representation was most successful for Tracks 2 and 3. Deep learning (DL) reconstructions with 80 transients achieved equivalent or better SNR, linewidth, and fit error compared to conventional 320-transient reconstructions. However, some DL models optimized linewidth and SNR without actually improving overall spectral quality, indicating a need for more robust metrics. CONCLUSION: DL-based reconstruction pipelines show promise for reducing the number of transients required for GABA-edited MRS.
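The SNR and linewidth metrics used to rank submissions have standard spectral definitions: peak amplitude over the noise standard deviation, and full width at half maximum of the dominant peak. A minimal sketch on a toy Lorentzian peak (hypothetical code, not the challenge's scoring scripts; the peak position, noise level, and noise region are illustrative assumptions):

```python
import numpy as np

def snr(spectrum, noise_region):
    """Peak amplitude over the standard deviation of a signal-free region."""
    return float(spectrum.max() / spectrum[noise_region].std())

def fwhm(freqs, spectrum):
    """Full width at half maximum of the dominant peak (threshold crossing)."""
    half = spectrum.max() / 2.0
    above = np.where(spectrum >= half)[0]
    return float(freqs[above[-1]] - freqs[above[0]])

# Toy Lorentzian "GABA" peak at 3.0 ppm with half-width 0.05 ppm, plus noise
freqs = np.linspace(2.0, 4.0, 2001)
gamma = 0.05
rng = np.random.default_rng(0)
spectrum = gamma**2 / ((freqs - 3.0)**2 + gamma**2) \
    + 0.01 * rng.standard_normal(freqs.size)

snr_val = snr(spectrum, slice(0, 200))  # 2.0-2.2 ppm used as the noise region
fwhm_val = fwhm(freqs, spectrum)        # Lorentzian FWHM = 2 * gamma = 0.1 ppm
```

A shape-score metric, as the abstract notes, would be needed on top of these, since a model can narrow the linewidth and boost SNR without preserving the peak's true shape.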

2.
Magn Reson Med ; 89(3): 977-989, 2023 03.
Article in English | MEDLINE | ID: mdl-36346081

ABSTRACT

PURPOSE: To develop a free-breathing (FB) 2D radial balanced steady-state free precession cine cardiac MRI method with 100% respiratory gating efficiency using respiratory auto-calibrated motion correction (RAMCO) based on a motion-sensing camera. METHODS: The signal from a respiratory motion-sensing camera was recorded during an FB retrospectively electrocardiogram-triggered 2D radial balanced steady-state free precession acquisition using pseudo-tiny-golden-angle ordering. With RAMCO, for each acquisition the respiratory signal was retrospectively auto-calibrated by applying different linear translations, using the resulting in-plane image sharpness as a criterion. The auto-calibration determines the optimal magnitude of the linear translations for each of the in-plane directions to minimize motion blurring caused by bulk respiratory motion. Additionally, motion-weighted density compensation was applied during radial gridding to minimize through-plane and non-bulk motion blurring. Left ventricular functional parameters and sharpness scores of FB radial cine were compared with and without RAMCO, and additionally with conventional breath-hold Cartesian cine in 9 volunteers. RESULTS: FB radial cine with RAMCO had similar sharpness scores as conventional breath-hold Cartesian cine and the left ventricular functional parameters agreed. For FB radial cine, RAMCO reduced respiratory motion artifacts with a statistically significant difference in sharpness scores (P < 0.05) compared to reconstructions without motion correction. CONCLUSION: 2D radial cine imaging with RAMCO allows evaluation of left ventricular functional parameters in FB with 100% respiratory efficiency. It eliminates the need for breath-holds, which is especially valuable for patients with no or impaired breath-holding capacity. Validation of the proposed method on patients is warranted.
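The auto-calibration idea — try candidate translation magnitudes and keep the one that maximizes in-plane sharpness — can be sketched as below. This is a toy magnitude-image illustration with hypothetical names; the actual RAMCO pipeline applies the translations during radial gridding, not to reconstructed frames:

```python
import numpy as np

def sharpness(img):
    """Gradient-energy sharpness score: mean squared finite difference."""
    gy, gx = np.gradient(img)
    return float(np.mean(gx**2 + gy**2))

def auto_calibrate(frames, resp_signal, scales):
    """For each candidate scale, shift every frame by scale * respiratory
    displacement (integer pixels for simplicity), average the shifted frames,
    and keep the scale whose average image is sharpest."""
    best_scale, best_score = None, -np.inf
    for s in scales:
        shifted = [np.roll(f, int(round(s * r)), axis=0)
                   for f, r in zip(frames, resp_signal)]
        score = sharpness(np.mean(shifted, axis=0))
        if score > best_score:
            best_scale, best_score = s, score
    return best_scale

# Toy experiment: a sharp square displaced by a known "respiratory" motion
img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0
resp = np.array([0, 2, 4, 6, 8])            # displacement per frame, pixels
frames = [np.roll(img, -r, axis=0) for r in resp]  # true scale = 1.0
recovered = auto_calibrate(frames, resp, scales=[0.0, 0.5, 1.0, 1.5, 2.0])
```

With the correct scale the frames re-align and the averaged image is sharpest, so the search recovers the true translation magnitude.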


Subject(s)
Image Interpretation, Computer-Assisted , Magnetic Resonance Imaging, Cine , Ventricular Function, Left , Humans , Breath Holding , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging, Cine/methods , Respiration , Retrospective Studies , Ventricular Function, Left/physiology
3.
Magn Reson Med ; 90(4): 1253-1270, 2023 10.
Article in English | MEDLINE | ID: mdl-37402235

ABSTRACT

This literature review presents a comprehensive overview of machine learning (ML) applications in proton MR spectroscopy (MRS). As the use of ML techniques in MRS continues to grow, this review aims to provide the MRS community with a structured overview of the state-of-the-art methods. Specifically, we examine and summarize studies published between 2017 and 2023 from major journals in the MR field. We categorize these studies based on a typical MRS workflow, including data acquisition, processing, analysis, and artificial data generation. Our review reveals that ML in MRS is still in its early stages, with a primary focus on processing and analysis techniques, and less attention given to data acquisition. We also found that many studies use similar model architectures, with little comparison to alternative architectures. Additionally, the generation of artificial data is a crucial topic, with no consistent method for its generation. Furthermore, many studies demonstrate that artificial data suffers from generalization issues when tested on in vivo data. We also conclude that risks related to ML models should be addressed, particularly for clinical applications. Therefore, output uncertainty measures and model biases are critical to investigate. Nonetheless, the rapid development of ML in MRS and the promising results from the reviewed studies justify further research in this field.


Subject(s)
Machine Learning , Protons , Magnetic Resonance Spectroscopy/methods , Workflow , Proton Magnetic Resonance Spectroscopy
4.
Neuroimage ; 238: 118244, 2021 09.
Article in English | MEDLINE | ID: mdl-34116148

ABSTRACT

A variety of strategies are used to combine multi-echo functional magnetic resonance imaging (fMRI) data, yet recent literature lacks a systematic comparison of the available options. Here we compare six different approaches derived from multi-echo data and evaluate their influences on BOLD sensitivity for offline and in particular real-time use cases: a single-echo time series (based on Echo 2), the real-time T2*-mapped time series (T2*FIT) and four combined time series (T2*-weighted, tSNR-weighted, TE-weighted, and a new combination scheme termed T2*FIT-weighted). We compare the influences of these six multi-echo derived time series on BOLD sensitivity using a healthy participant dataset (N = 28) with four task-based fMRI runs and two resting state runs. We show that the T2*FIT-weighted combination yields the largest increase in temporal signal-to-noise ratio across task and resting state runs. We demonstrate additionally for all tasks that the T2*FIT time series consistently yields the largest offline effect size measures and real-time region-of-interest based functional contrasts and temporal contrast-to-noise ratios. These improvements show the promising utility of multi-echo fMRI for studies employing real-time paradigms, while further work is advised to mitigate the decreased tSNR of the T2*FIT time series. We recommend the use and continued exploration of T2*FIT for offline task-based and real-time region-based fMRI analysis. Supporting information includes: a data repository (https://dataverse.nl/dataverse/rt-me-fmri), an interactive web-based application to explore the data (https://rt-me-fmri.herokuapp.com/), and further materials and code for reproducibility (https://github.com/jsheunis/rt-me-fMRI).
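The T2*-weighted combination referenced above is conventionally computed with per-echo weights w_n ∝ TE_n·exp(−TE_n/T2*). A minimal sketch under the simplifying assumption of a scalar T2* (in practice a per-voxel T2* map broadcasts the same way; the echo times below are illustrative, not the study's):

```python
import numpy as np

def t2star_weighted_combine(echoes, tes, t2star):
    """Weighted sum of echoes with w_n proportional to TE_n * exp(-TE_n/T2*),
    normalized so the weights sum to 1 (echoes stacked along axis 0)."""
    tes = np.asarray(tes, dtype=float).reshape(-1, 1)
    w = tes * np.exp(-tes / t2star)
    w = w / w.sum(axis=0)
    return (w * echoes).sum(axis=0)

# Toy mono-exponential decay: 3 echoes at 14/28/42 ms, T2* = 30 ms, 4 voxels
tes = np.array([14.0, 28.0, 42.0])
echoes = 100.0 * np.exp(-tes / 30.0).reshape(-1, 1) * np.ones((3, 4))
combined = t2star_weighted_combine(echoes, tes, 30.0)
```

The T2*FIT variants studied in the paper instead re-fit T2* per volume in real time; the weighting mechanics stay the same, only the source of the T2* estimate changes.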


Subject(s)
Brain Mapping/methods , Brain/diagnostic imaging , Emotions/physiology , Humans , Magnetic Resonance Imaging , Neurofeedback , Reproducibility of Results
5.
Magn Reson Med ; 85(1): 68-77, 2021 01.
Article in English | MEDLINE | ID: mdl-32851711

ABSTRACT

PURPOSE: To develop a new 3D radial trajectory based on the natural spiral phyllotaxis (SP), with variable anisotropic FOV. THEORY & METHODS: A 3D radial trajectory based on the SP with favorable interleaving properties for cardiac imaging has been proposed by Piccini et al (Magn Reson Med. 2011;66:1049-1056), which supports a FOV with a fixed anisotropy. However, a fixed anisotropy can be inefficient when sampling objects with different anisotropic dimensions. We extend Larson's 3D radial method to provide variable anisotropic FOV for spiral phyllotaxis (VASP). Simulations were performed to measure distance between successive projections, analyze point spread functions, and compare aliasing artifacts for both VASP and conventional SP. VASP was fully implemented on a whole-body clinical MR scanner. Phantom and in vivo cardiac images were acquired at 1.5 tesla. RESULTS: Simulations, phantom, and in vivo experiments confirmed that VASP can achieve variable anisotropic FOV while maintaining the favorable interleaving properties of SP. For an anisotropic FOV with 100:100:35 ratio, VASP required ~65% fewer radial projections than the conventional SP to satisfy the Nyquist criterion. Alternatively, when the same number of radial projections was used as in conventional SP, VASP produced fewer aliasing artifacts for anisotropic objects within the excited imaging volumes. CONCLUSION: We have developed a new method (VASP), which enables variable anisotropic FOV for 3D radial trajectory with SP. For anisotropic objects within the excited imaging volumes, VASP can reduce scan times and/or reduce aliasing artifacts.
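The underlying spiral phyllotaxis spoke pattern can be sketched as follows: the azimuth advances by the golden angle while the polar angle sweeps a hemisphere as √n. This is a simplified illustration only — Piccini's interleaved ordering and VASP's anisotropic-FOV reshaping of the spoke density are not reproduced here:

```python
import numpy as np

GOLDEN_ANGLE = np.pi * (3.0 - np.sqrt(5.0))  # ≈ 137.51° in radians

def phyllotaxis_spokes(n_spokes):
    """Unit readout directions for 3D radial spokes on the upper hemisphere,
    following a spiral phyllotaxis pattern: azimuth steps by the golden
    angle, polar angle grows as sqrt(n) toward the equator."""
    n = np.arange(n_spokes)
    polar = 0.5 * np.pi * np.sqrt(n / n_spokes)
    azim = n * GOLDEN_ANGLE
    return np.stack([np.sin(polar) * np.cos(azim),
                     np.sin(polar) * np.sin(azim),
                     np.cos(polar)], axis=1)

dirs = phyllotaxis_spokes(100)
```

The √n polar schedule gives roughly uniform spoke density over the sphere; VASP's contribution is to warp that density so the supported FOV matches the object's anisotropy.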


Subject(s)
Image Enhancement , Image Processing, Computer-Assisted , Algorithms , Anisotropy , Artifacts , Imaging, Three-Dimensional , Magnetic Resonance Imaging , Phantoms, Imaging
6.
Hum Brain Mapp ; 41(12): 3439-3467, 2020 08 15.
Article in English | MEDLINE | ID: mdl-32333624

ABSTRACT

Neurofeedback training using real-time functional magnetic resonance imaging (rtfMRI-NF) allows subjects voluntary control of localised and distributed brain activity. It has sparked increased interest as a promising non-invasive treatment option in neuropsychiatric and neurocognitive disorders, although its efficacy and clinical significance are yet to be determined. In this work, we present the first extensive review of acquisition, processing and quality control methods available to improve the quality of the neurofeedback signal. Furthermore, we investigate the state of denoising and quality control practices in 128 recently published rtfMRI-NF studies. We found: (a) that less than a third of the studies reported implementing standard real-time fMRI denoising steps, (b) significant room for improvement with regard to methods reporting, and (c) the need for methodological studies quantifying and comparing the contribution of denoising steps to the neurofeedback signal quality. Advances in rtfMRI-NF research depend on reproducibility of methods and results. Notably, a systematic effort is needed to build up evidence that disentangles the various mechanisms influencing neurofeedback effects. To this end, we recommend that future rtfMRI-NF studies: (a) report implementation of a set of standard real-time fMRI denoising steps according to a proposed COBIDAS-style checklist (https://osf.io/kjwhf/), (b) ensure the quality of the neurofeedback signal by calculating and reporting community-informed quality metrics and applying offline control checks, and (c) strive to adopt transparent principles in the form of methods and data sharing and support of open-source rtfMRI-NF software. Code and data for reproducibility, as well as an interactive environment to explore the study data, can be accessed at https://github.com/jsheunis/quality-and-denoising-in-rtfmri-nf.


Subject(s)
Functional Neuroimaging , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Neurofeedback , Quality Control , Functional Neuroimaging/methods , Functional Neuroimaging/standards , Humans , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/standards , Magnetic Resonance Imaging/methods , Magnetic Resonance Imaging/standards , Neurofeedback/methods
7.
J Magn Reson Imaging ; 51(6): 1689-1696, 2020 06.
Article in English | MEDLINE | ID: mdl-31710769

ABSTRACT

BACKGROUND: Quantitative myocardial perfusion cardiac MRI can provide a fast and robust assessment of myocardial perfusion status for the noninvasive diagnosis of myocardial ischemia while being more objective than visual assessment. However, it currently has limited use in clinical practice due to the challenging postprocessing required, particularly the segmentation. PURPOSE: To evaluate the efficacy of an automated deep learning (DL) pipeline for image processing prior to quantitative analysis. STUDY TYPE: Retrospective. POPULATION: In all, 175 clinical patients (350 MRI scans; 1050 image series) under both rest and stress conditions (135/10/30 training/validation/test). FIELD STRENGTH/SEQUENCE: 3.0T/2D multislice saturation recovery T1-weighted gradient echo sequence. ASSESSMENT: Accuracy was assessed, as compared to the manual operator, through the mean square error of the distance between landmarks and the Dice similarity coefficient of the segmentation and bounding box detection. Quantitative perfusion maps obtained using the automated DL-based processing were compared to the results obtained with the manually processed images. STATISTICAL TESTS: Bland-Altman plots and intraclass correlation coefficient (ICC) were used to assess the myocardial blood flow (MBF) obtained using the automated DL pipeline, as compared to values obtained by a manual operator. RESULTS: The mean (SD) error in the detection of the time of peak signal enhancement in the left ventricle was 1.49 (1.4) timeframes. The mean (SD) Dice similarity coefficients for the bounding box and myocardial segmentation were 0.93 (0.03) and 0.80 (0.06), respectively. The mean (SD) error in the RV insertion point was 2.8 (1.8) mm. The Bland-Altman plots showed a bias of 2.6% of the mean MBF between the automated and manually processed MBF values on a per-myocardial segment basis. The ICC was 0.89, 95% confidence interval = [0.87, 0.90].
DATA CONCLUSION: We showed high accuracy, compared to manual processing, for the DL-based processing of myocardial perfusion data leading to quantitative values that are similar to those achieved with manual processing. LEVEL OF EVIDENCE: 3 Technical Efficacy Stage: 1 J. Magn. Reson. Imaging 2020;51:1689-1696.
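The Dice similarity coefficient used to score the segmentations and bounding boxes is simply twice the overlap divided by the summed mask sizes. A minimal sketch with toy masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = int(a.sum()) + int(b.sum())
    return 2.0 * int(np.logical_and(a, b).sum()) / denom if denom else 1.0

# Two 6x6 boxes offset by one pixel: |A| = |B| = 36, overlap = 25
a = np.zeros((10, 10), bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), bool); b[3:9, 3:9] = True
score = dice(a, b)  # 2*25 / (36+36)
```

A perfect match gives 1.0; the study's 0.80 for myocardial segmentation corresponds to substantial but imperfect overlap with the manual contours.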


Subject(s)
Deep Learning , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Perfusion , Retrospective Studies
8.
Magn Reson Med ; 73(4): 1623-31, 2015 Apr.
Article in English | MEDLINE | ID: mdl-24844947

ABSTRACT

PURPOSE: High-resolution myocardial perfusion analysis allows for preserving spatial information with excellent sensitivity for subendocardial ischemia detection. However, it suffers from low signal-to-noise ratio. Commonly, spatial averaging is used to increase signal-to-noise ratio. This bears the risk of losing information about the extent, localization, and transmurality of ischemia. This study investigates the effects of spatial averaging on the accuracy of perfusion estimates. METHODS: Perfusion data were obtained from patients and healthy volunteers. Spatial averaging was performed on voxel-based data in the transmural and angular directions to reduce resolution to 50, 20, and 10% of its original value. A fit quality assessment method was used to measure the fraction of modeled information and remaining unmodeled information in the residuals. RESULTS: The fraction of modeled information decreased in patients as resolution was reduced. This decrease was more evident for the Fermi and exponential models in the transmural direction. The Fermi and exponential models showed a significant difference at 50% resolution (Fermi P < 0.001, exponential P = 0.0014). No significant differences were observed for the autoregressive-moving-average model (P = 0.081). At full resolution, the autoregressive-moving-average model had the lowest fraction of residual information (0.3). Differences were observed when comparing the coefficient of variation of perfusion estimates in ischemic regions between the transmural and angular directions. CONCLUSION: Angular averaging preserves more information compared to transmural averaging. Reducing the resolution level below 50% in the transmural and 20% in the angular direction results in losing information about transmural perfusion differences. A maximum voxel size of 2 × 2 mm² is necessary to avoid loss of physiological information due to spatial averaging.


Subject(s)
Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Angiography/methods , Magnetic Resonance Imaging, Cine/methods , Myocardial Ischemia/diagnosis , Myocardial Ischemia/physiopathology , Myocardial Perfusion Imaging/methods , Algorithms , Humans , Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity , Spatio-Temporal Analysis
9.
Magn Reson Med ; 72(1): 188-201, 2014 Jul.
Article in English | MEDLINE | ID: mdl-23943090

ABSTRACT

PURPOSE: Atherosclerotic carotid plaques can be quantified in vivo by MRI. However, the accuracy in segmentation and quantification of components such as the thin fibrous cap (FC) and lipid-rich necrotic core (LRNC) remains unknown due to the lack of a submillimeter scale ground truth. METHODS: A novel approach was taken by numerically simulating in vivo carotid MRI providing a ground truth comparison. Upon evaluation of a simulated clinical protocol, MR readers segmented simulated images of cross-sectional plaque geometries derived from histological data of 12 patients. RESULTS: MR readers showed high correlation (R) and intraclass correlation (ICC) in measuring the luminal area (R = 0.996, ICC = 0.99), vessel wall area (R = 0.96, ICC = 0.94) and LRNC area (R = 0.95, ICC = 0.94). LRNC area was underestimated (mean error, -24%). Minimum FC thickness showed a mediocre correlation and intraclass correlation (R = 0.71, ICC = 0.69). CONCLUSION: Current clinical MRI can quantify carotid plaques but shows limitations for thin FC thickness quantification. These limitations could influence the reliability of carotid MRI for assessing plaque rupture risk associated with FC thickness. Overall, MRI simulations provide a feasible methodology for assessing segmentation and quantification accuracy, as well as for improving scan protocol design.


Subject(s)
Carotid Artery Diseases/diagnosis , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Angiography/methods , Plaque, Atherosclerotic/diagnosis , Computer Simulation , Contrast Media , Humans , Lipids/analysis , Necrosis , Organometallic Compounds , Signal-To-Noise Ratio
10.
J Magn Reson Imaging ; 39(4): 895-900, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24123369

ABSTRACT

PURPOSE: To determine sex-specific reference values for left ventricular (LV) volumes, mass, and ejection fraction (EF) in healthy adults using computer-aided analysis and to examine the effect of age on LV parameters. MATERIALS AND METHODS: We examined data from 1494 members of the Framingham Heart Study Offspring cohort, obtained using short-axis stack cine SSFP CMR, identified a healthy reference group (without cardiovascular disease, hypertension, or LV wall motion abnormality) and determined sex-specific upper 95th percentile thresholds for LV volumes and mass, and lower 5th percentile thresholds for EF using computer-assisted border detection. In secondary analyses, we stratified participants by age-decade and tested for linear trend across age groups. RESULTS: The reference group comprised 685 adults (423F; 61 ± 9 years). Men had greater LV volumes and mass, before and after indexation to common measures of body size (all P = 0.001). Women had greater EF (73 ± 6 versus 71 ± 6%; P = 0.0002). LV volumes decreased with greater age in both sexes, even after indexation. Indexed LV mass did not vary with age. LV EF and concentricity increased with greater age in both sexes. CONCLUSION: We present CMR-derived LV reference values. There are significant age and sex differences in LV volumes, EF, and geometry, whereas mass differs between sexes but not age groups.
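The reference thresholds described here are empirical percentiles of the healthy reference group. A sketch of the computation (the simulated EF distribution below is an illustrative assumption matching the reported women's mean ± SD, not the Framingham data):

```python
import numpy as np

def reference_limit(values, kind="upper"):
    """Upper 95th percentile (volumes, mass) or lower 5th percentile (EF)."""
    return float(np.percentile(values, 95 if kind == "upper" else 5))

# Illustrative: EF ~ Normal(73, 6), as reported for women in the reference group
rng = np.random.default_rng(1)
ef = rng.normal(73.0, 6.0, 100_000)
lower_ef = reference_limit(ef, kind="lower")  # ≈ 73 - 1.645*6 ≈ 63.1
```

Values below the lower 5th percentile (or above the upper 95th, for volumes and mass) would flag a measurement as outside the healthy reference range.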


Subject(s)
Aging/physiology , Heart Ventricles/anatomy & histology , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging, Cine/methods , Stroke Volume/physiology , Ventricular Function, Left/physiology , Adult , Aged , Aged, 80 and over , Algorithms , Female , Humans , Image Enhancement/methods , Male , Middle Aged , Organ Size/physiology , Pattern Recognition, Automated/methods , Reference Values , Reproducibility of Results , Sensitivity and Specificity
11.
J Biomech Eng ; 136(2): 021015, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24317274

ABSTRACT

Biomechanical finite element analysis (FEA) based on in vivo carotid magnetic resonance imaging (MRI) can be used to assess carotid plaque vulnerability noninvasively by computing peak cap stress. However, the accuracy of MRI plaque segmentation and the influence this has on FEA has remained unreported due to the lack of a reliable submillimeter ground truth. In this study, we quantify this influence using novel numerical simulations of carotid MRI. Histological sections from carotid plaques from 12 patients were used to create 33 ground truth plaque models. These models were subjected to numerical computer simulations of a currently used clinically applied 3.0 T T1-weighted black-blood carotid MRI protocol (in-plane acquisition voxel size of 0.62 × 0.62 mm²) to generate simulated in vivo MR images from a known underlying ground truth. The simulated images were manually segmented by three MRI readers. FEA models based on the MRI segmentations were compared with the FEA models based on the ground truth. MRI-based FEA model peak cap stress was consistently underestimated, but still correlated (R) moderately with the ground truth stress: R = 0.71, R = 0.47, and R = 0.76 for the three MRI readers, respectively (p < 0.01). Peak plaque stretch was underestimated as well. The peak cap stress in thick-cap, low stress plaques was substantially more accurately and precisely predicted (error of -12 ± 44 kPa) than the peak cap stress in plaques with caps thinner than the acquisition voxel size (error of -177 ± 168 kPa). For reliable MRI-based FEA to compute the peak cap stress of carotid plaques with thin caps, the current clinically used in-plane acquisition voxel size (∼0.6 mm) is inadequate. FEA plaque stress computations would be considerably more reliable if they would be used to identify thick-cap carotid plaques with low stresses instead.


Subject(s)
Carotid Arteries/physiopathology , Carotid Stenosis/pathology , Carotid Stenosis/physiopathology , Elasticity Imaging Techniques/methods , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Angiography/methods , Models, Cardiovascular , Aged , Blood Flow Velocity , Carotid Arteries/pathology , Computer Simulation , Elastic Modulus , Female , Humans , Male , Middle Aged , Reproducibility of Results , Sensitivity and Specificity , Shear Strength
12.
Eur Heart J Imaging Methods Pract ; 2(1): qyae001, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38283662

ABSTRACT

Aims: Quantitative stress perfusion cardiac magnetic resonance (CMR) is becoming more widely available, but it is still unclear how to integrate this information into clinical decision-making. Typically, pixel-wise perfusion maps are generated, but diagnostic and prognostic studies have summarized perfusion as just one value per patient or in 16 myocardial segments. In this study, the reporting of quantitative perfusion maps is extended from the standard 16 segments to a high-resolution bullseye. Cut-off thresholds are established for the high-resolution bullseye, and the identified perfusion defects are compared with visual assessment. Methods and results: Thirty-four patients with known or suspected coronary artery disease were retrospectively analysed. Visual perfusion defects were contoured on the CMR images and pixel-wise quantitative perfusion maps were generated. Cut-off values were established on the high-resolution bullseye consisting of 1800 points and compared with the per-segment, per-coronary, and per-patient resolution thresholds. Quantitative stress perfusion was significantly lower in visually abnormal pixels, 1.11 (0.75-1.57) vs. 2.35 (1.82-2.9) mL/min/g (Mann-Whitney U test P < 0.001), with an optimal cut-off of 1.72 mL/min/g. This was lower than the segment-wise optimal threshold of 1.92 mL/min/g. The Bland-Altman analysis showed that visual assessment underestimated large perfusion defects compared with the quantification with good agreement for smaller defect burdens. A Dice overlap of 0.68 (0.57-0.78) was found. Conclusion: This study introduces a high-resolution bullseye consisting of 1800 points, rather than 16, per patient for reporting quantitative stress perfusion, which may improve sensitivity. Using this representation, the threshold required to identify areas of reduced perfusion is lower than for segmental analysis.
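An optimal cut-off such as the reported 1.72 mL/min/g is typically chosen by maximizing Youden's J over candidate thresholds. A sketch on simulated per-pixel MBF values (the distributions below are illustrative, loosely matching the reported medians — not the study data):

```python
import numpy as np

def optimal_cutoff(values, labels):
    """Threshold maximizing Youden's J = sensitivity + specificity - 1,
    treating low perfusion (value below the cutoff) as abnormal."""
    best_t, best_j = None, -1.0
    for t in np.unique(values):
        pred = values < t                      # predicted abnormal
        tp = np.sum(pred & labels)
        tn = np.sum(~pred & ~labels)
        j = tp / labels.sum() + tn / (~labels).sum() - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t

# Illustrative per-pixel stress MBF values (mL/min/g)
rng = np.random.default_rng(2)
abnormal = rng.normal(1.11, 0.3, 500)          # visually abnormal pixels
normal = rng.normal(2.35, 0.4, 500)            # visually normal pixels
values = np.concatenate([abnormal, normal])
labels = np.concatenate([np.ones(500, bool), np.zeros(500, bool)])
cutoff = optimal_cutoff(values, labels)        # lands between the two groups
```

The same machinery applied per segment rather than per pixel yields a higher threshold, consistent with the study's observation that segmental averaging dilutes subsegmental defects.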

13.
Front Psychiatry ; 15: 1255370, 2024.
Article in English | MEDLINE | ID: mdl-38585483

ABSTRACT

Introduction: Approximately one in six people will experience an episode of major depressive disorder (MDD) in their lifetime. Effective treatment is hindered by subjective clinical decision-making and a lack of objective prognostic biomarkers. Functional MRI (fMRI) could provide such an objective measure but the majority of MDD studies have focused on static approaches, disregarding the rapidly changing nature of the brain. In this study, we aim to predict depression severity changes at 3 and 6 months using dynamic fMRI features. Methods: For our research, we acquired a longitudinal dataset of 32 MDD patients with fMRI scans acquired at baseline and clinical follow-ups 3 and 6 months later. Several measures were derived from an emotion face-matching fMRI dataset: activity in brain regions, static and dynamic functional connectivity between functional brain networks (FBNs) and two measures from a wavelet coherence analysis approach. All fMRI features were evaluated independently, with and without demographic and clinical parameters. Patients were divided into two classes based on changes in depression severity at both follow-ups. Results: The number of coherence clusters (nCC) between FBNs, reflecting the total number of interactions (either synchronous, anti-synchronous or causal), resulted in the highest predictive performance. The nCC-based classifier achieved 87.5% and 77.4% accuracy for the 3- and 6-month changes in severity, respectively. Furthermore, regression analyses supported the potential of nCC for predicting depression severity on a continuous scale. The posterior default mode network (DMN), dorsal attention network (DAN) and two visual networks were the most important networks in the optimal nCC models. Reduced nCC was associated with a poorer depression course, suggesting deficits in sustained attention to and coping with emotion-related faces. An ensemble of classifiers with demographic, clinical and lead coherence features, a measure of dynamic causality, resulted in a 3-month clinical outcome prediction accuracy of 81.2%. Discussion: The dynamic wavelet features demonstrated high accuracy in predicting individual depression severity change. Features describing brain dynamics could enhance understanding of depression and support clinical decision-making. Further studies are required to evaluate their robustness and replicability in larger cohorts.

14.
Comput Med Imaging Graph ; 112: 102332, 2024 03.
Article in English | MEDLINE | ID: mdl-38245925

ABSTRACT

Accurate brain tumor segmentation is critical for diagnosis and treatment planning, whereby multi-modal magnetic resonance imaging (MRI) is typically used for analysis. However, obtaining all required sequences and expertly labeled data for training is challenging and can result in decreased quality of segmentation models developed through automated algorithms. In this work, we examine the possibility of employing a conditional generative adversarial network (GAN) approach for synthesizing multi-modal images to train deep learning-based neural networks aimed at high-grade glioma (HGG) segmentation. The proposed GAN is conditioned on auxiliary brain tissue and tumor segmentation masks, allowing us to attain better accuracy and control of tissue appearance during synthesis. To reduce the domain shift between synthetic and real MR images, we additionally adapt the low-frequency Fourier space components of synthetic data, reflecting the style of the image, to those of real data. We demonstrate the impact of Fourier domain adaptation (FDA) on the training of 3D segmentation networks and attain significant improvements in both the segmentation performance and prediction confidence. Similar outcomes are seen when such data is used as a training augmentation alongside the available real images. In fact, experiments on the BraTS2020 dataset reveal that models trained solely with synthetic data exhibit an improvement of up to 4% in Dice score when using FDA, while training with both real and FDA-processed synthetic data through augmentation results in an improvement of up to 5% in Dice compared to using real data alone. This study highlights the importance of considering image frequency in generative approaches for medical image synthesis and offers a promising approach to address data scarcity in medical imaging segmentation.
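The Fourier domain adaptation step — swapping the low-frequency amplitude spectrum of a synthetic image for that of a real one while keeping the synthetic image's phase — can be sketched in 2D as follows (the 3D version used for MR volumes is analogous; the β value and image sizes are illustrative):

```python
import numpy as np

def fourier_domain_adapt(src, tgt, beta=0.05):
    """Replace the lowest-frequency amplitude components of `src` with those
    of `tgt`, keeping src's phase (Fourier domain adaptation, FDA).
    `beta` sets the half-width of the swapped square as a fraction of size."""
    fs = np.fft.fftshift(np.fft.fft2(src))
    ft = np.fft.fftshift(np.fft.fft2(tgt))
    amp_s, pha_s = np.abs(fs), np.angle(fs)
    amp_t = np.abs(ft)
    h, w = src.shape
    b = int(np.floor(min(h, w) * beta))
    cy, cx = h // 2, w // 2
    amp_s[cy - b:cy + b + 1, cx - b:cx + b + 1] = \
        amp_t[cy - b:cy + b + 1, cx - b:cx + b + 1]
    mixed = amp_s * np.exp(1j * pha_s)
    return np.real(np.fft.ifft2(np.fft.ifftshift(mixed)))

rng = np.random.default_rng(3)
src = rng.random((32, 32))                     # stand-in synthetic image
tgt = 2.0 * src                                # stand-in real image
identity = fourier_domain_adapt(src, src, beta=0.1)   # no-op case
adapted = fourier_domain_adapt(src, tgt, beta=0.1)    # inherits tgt's "style"
```

Because only the low-frequency amplitudes are exchanged, the anatomy (encoded largely in the phase and high frequencies) is preserved while global intensity characteristics shift toward the target domain.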


Subject(s)
Brain Neoplasms , Glioma , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Brain Neoplasms/diagnostic imaging , Algorithms , Magnetic Resonance Imaging/methods
15.
Comput Methods Programs Biomed ; 248: 108115, 2024 May.
Article in English | MEDLINE | ID: mdl-38503072

ABSTRACT

BACKGROUND AND OBJECTIVE: As large sets of annotated MRI data are needed for training and validating deep learning based medical image analysis algorithms, the lack of sufficient annotated data is a critical problem. A possible solution is the generation of artificial data by means of physics-based simulations. Existing brain simulation data is limited in terms of anatomical models, tissue classes, fixed tissue characteristics, MR sequences and overall realism. METHODS: We propose a realistic simulation framework by incorporating patient-specific phantoms and Bloch equations-based analytical solutions for fast and accurate MRI simulations. A large number of labels are derived from open-source high-resolution T1w MRI data using a fully automated brain classification tool. The brain labels are taken as ground truth (GT) on which MR images are simulated using our framework. Moreover, we demonstrate that the T1w MR images generated from our framework along with GT annotations can be utilized directly to train a 3D brain segmentation network. To evaluate our model further on a larger set of real multi-source MRI data without GT, we compared our model to existing brain segmentation tools, FSL-FAST and SynthSeg. RESULTS: Our framework generates 3D brain MRI for variable anatomy, sequence, contrast, SNR and resolution. The brain segmentation network for WM/GM/CSF trained only on T1w simulated data shows promising results on real MRI data from the MRBrainS18 challenge dataset, with Dice scores of 0.818/0.832/0.828. On OASIS data, our model performs close to FSL, both qualitatively and quantitatively, with Dice scores of 0.901/0.939/0.937. CONCLUSIONS: Our proposed simulation framework is the initial step towards achieving truly physics-based MRI image generation, providing flexibility to generate large sets of variable MRI data for desired anatomy, sequence, contrast, SNR, and resolution.
Furthermore, the generated images can effectively train 3D brain segmentation networks, mitigating the reliance on real 3D annotated data.
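The Bloch-equation analytical solution behind a spoiled T1-weighted gradient-echo simulation has a well-known closed form. A sketch (tissue T1s and sequence parameters below are illustrative assumptions, not the framework's actual values, and T2*/TE decay is omitted):

```python
import numpy as np

def spoiled_gre_signal(m0, t1_ms, tr_ms, flip_deg):
    """Steady-state signal of a perfectly spoiled gradient-echo sequence:
    S = M0 * sin(a) * (1 - E1) / (1 - cos(a) * E1), with E1 = exp(-TR/T1)."""
    a = np.deg2rad(flip_deg)
    e1 = np.exp(-tr_ms / t1_ms)
    return m0 * np.sin(a) * (1.0 - e1) / (1.0 - np.cos(a) * e1)

# Illustrative 3T-ish T1 values: white matter ~800 ms, gray matter ~1300 ms.
# Short TR and a moderate flip angle yield T1 weighting: WM brighter than GM.
wm = spoiled_gre_signal(1.0, 800.0, 20.0, 25.0)
gm = spoiled_gre_signal(1.0, 1300.0, 20.0, 25.0)
```

Evaluating such closed-form solutions per labeled voxel is what makes analytical simulation fast enough to generate large training sets with variable sequence parameters.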


Subject(s)
Deep Learning , Humans , Brain/diagnostic imaging , Brain/anatomy & histology , Magnetic Resonance Imaging/methods , Algorithms , Neuroimaging/methods , Image Processing, Computer-Assisted/methods
16.
Invest Radiol ; 2024 May 01.
Article in English | MEDLINE | ID: mdl-38687025

ABSTRACT

OBJECTIVES: Dark-blood late gadolinium enhancement (DB-LGE) cardiac magnetic resonance has been proposed as an alternative to standard white-blood LGE (WB-LGE) imaging protocols to enhance scar-to-blood contrast without compromising scar-to-myocardium contrast. In practice, both DB and WB contrasts may have clinical utility, but acquiring both has the drawback of additional acquisition time. The aim of this study was to develop and evaluate a deep learning method to generate synthetic WB-LGE images from DB-LGE, allowing the assessment of both contrasts without additional scan time. MATERIALS AND METHODS: DB-LGE and WB-LGE data from 215 patients were used to train 2 types of unpaired image-to-image translation deep learning models, cycle-consistent generative adversarial network (CycleGAN) and contrastive unpaired translation, with 5 different loss function hyperparameter settings each. Initially, the best hyperparameter setting was determined for each model type based on the Fréchet inception distance and the visual assessment of expert readers. Then, the CycleGAN and contrastive unpaired translation models with the optimal hyperparameters were directly compared. Finally, with the best model chosen, the quantification of scar based on the synthetic WB-LGE images was compared with the truly acquired WB-LGE. RESULTS: The CycleGAN architecture for unpaired image-to-image translation was found to provide the most realistic synthetic WB-LGE images from DB-LGE images. The results showed that it was difficult for visual readers to distinguish if an image was true or synthetic (55% correctly classified). In addition, scar burden quantification with the synthetic data was highly correlated with the analysis of the truly acquired images. Bland-Altman analysis found a mean bias in percentage scar burden between the quantification of the real WB and synthetic white-blood images of 0.44% with limits of agreement from -10.85% to 11.74%. 
The mean image quality of the real WB images (3.53/5) was scored higher than that of the synthetic WB images (3.03/5), P = 0.009. CONCLUSIONS: This study proposed a CycleGAN model to generate synthetic WB-LGE from DB-LGE images to allow assessment of both image contrasts without additional scan time. This work represents a clinically focused assessment of synthetic medical images generated by artificial intelligence, a topic with significant potential for a multitude of applications. However, further evaluation is warranted before clinical adoption.
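The Bland-Altman agreement analysis used above (bias with limits of agreement, LoA = bias ± 1.96 × SD of the paired differences) can be sketched in a few lines; the function name and toy scar-burden values are illustrative, not the study's data:

```python
from statistics import mean, stdev

def bland_altman(x, y):
    """Return (bias, lower LoA, upper LoA) for paired measurements x and y.
    Limits of agreement are bias +/- 1.96 * SD of the differences."""
    diffs = [a - b for a, b in zip(x, y)]
    bias = mean(diffs)
    spread = 1.96 * stdev(diffs)
    return bias, bias - spread, bias + spread

# Toy paired scar-burden percentages (real vs. synthetic-image quantification)
real      = [10.0, 22.5, 5.0, 31.0]
synthetic = [11.0, 21.0, 6.5, 30.0]
bias, lo, hi = bland_altman(real, synthetic)
```

A near-zero bias with narrow limits of agreement is what supports interchangeability of the two quantification routes.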

17.
Magn Reson Med ; 69(3): 698-707, 2013 Mar 01.
Article in English | MEDLINE | ID: mdl-22532435

ABSTRACT

The aim of this article is to describe a novel hardware perfusion phantom that simulates myocardial first-pass perfusion allowing comparisons between different MR techniques and validation of the results against a true gold standard. MR perfusion images were acquired at different myocardial perfusion rates and variable doses of gadolinium and cardiac output. The system proved to be sensitive to controlled variations of myocardial perfusion rate, contrast agent dose, and cardiac output. It produced distinct signal intensity curves for perfusion rates ranging from 1 to 10 mL/mL/min. Quantification of myocardial blood flow by signal deconvolution techniques provided accurate measurements of perfusion. The phantom also proved to be very reproducible between different sessions and different operators. This novel hardware perfusion phantom system allows reliable, reproducible, and efficient simulation of myocardial first-pass MR perfusion. Direct comparison between the results of image-based quantification and reference values of flow and myocardial perfusion will allow development and validation of accurate quantification methods.
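Quantification of myocardial blood flow by signal deconvolution, as mentioned above, treats the tissue curve as the convolution of the arterial input function (AIF) with a flow-scaled impulse response; perfusion is then estimated from the peak of the recovered response. A minimal sketch under simplifying assumptions (noise-free curves, nonzero AIF onset; the names and toy values are illustrative, and published methods regularize this inversion, e.g. with truncated SVD or Fermi-function modeling):

```python
def deconvolve(aif, tissue, dt):
    """Recover the impulse response h from
    tissue[i] = dt * sum_j aif[i-j] * h[j]  (discrete convolution)
    by forward substitution on the lower-triangular Toeplitz system.
    NOTE: this naive inversion is noise-sensitive and for illustration only."""
    n = len(tissue)
    h = [0.0] * n
    for i in range(n):
        acc = dt * sum(aif[i - j] * h[j] for j in range(i))
        h[i] = (tissue[i] - acc) / (aif[0] * dt)
    return h

# Toy curves: convolve a known impulse response with an AIF, then recover it
aif    = [1.0, 0.8, 0.5, 0.2]    # arterial input function samples
true_h = [2.0, 1.0, 0.5, 0.25]   # flow-scaled impulse response
dt = 1.0
tissue = [dt * sum(aif[i - j] * true_h[j] for j in range(i + 1)) for i in range(4)]
h_est = deconvolve(aif, tissue, dt)  # ≈ [2.0, 1.0, 0.5, 0.25]
mbf = max(h_est)                     # perfusion estimate = peak of h
```

With a hardware phantom providing known flow, such an estimate can be validated against the true perfusion rate rather than against another imaging method.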


Subject(s)
Magnetic Resonance Angiography/instrumentation , Magnetic Resonance Imaging, Cine/instrumentation , Myocardial Perfusion Imaging/instrumentation , Phantoms, Imaging , Equipment Design , Equipment Failure Analysis , Humans , Reproducibility of Results , Sensitivity and Specificity
18.
Med Image Anal ; 84: 102688, 2023 02.
Article in English | MEDLINE | ID: mdl-36493702

ABSTRACT

Deep learning-based segmentation methods provide an effective and automated way for assessing the structure and function of the heart in cardiac magnetic resonance (CMR) images. However, despite their state-of-the-art performance on images acquired from the same source (same scanner or scanner vendor) as images used during training, their performance degrades significantly on images coming from different domains. A straightforward approach to tackle this issue consists of acquiring large quantities of multi-site and multi-vendor data, which is practically infeasible. Generative adversarial networks (GANs) for image synthesis present a promising solution for tackling data limitations in medical imaging and addressing the generalization capability of segmentation models. In this work, we explore the usability of synthesized short-axis CMR images generated using a segmentation-informed conditional GAN, to improve the robustness of heart cavity segmentation models in a variety of different settings. The GAN is trained on paired real images and corresponding segmentation maps belonging to both the heart and the surrounding tissue, reinforcing the synthesis of semantically-consistent and realistic images. First, we evaluate the segmentation performance of a model trained solely with synthetic data and show that it only slightly underperforms compared to the baseline trained with real data. By further combining real with synthetic data during training, we observe a substantial improvement in segmentation performance (up to 4% and 40% in terms of Dice score and Hausdorff distance) across multiple datasets collected from various sites and scanners. This is additionally demonstrated across state-of-the-art 2D and 3D segmentation networks, whereby the obtained results demonstrate the potential of the proposed method in tackling the presence of the domain shift in medical data.
Finally, we thoroughly analyze the quality of synthetic data and its ability to replace real MR images during training, as well as provide an insight into important aspects of utilizing synthetic images for segmentation.
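The Hausdorff distance reported alongside Dice above measures the worst-case boundary error between two contours: the largest distance from any point of one contour to the nearest point of the other, symmetrized. A minimal pure-Python sketch for 2D point sets (the function name and toy contours are illustrative):

```python
from math import hypot

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2D point sets,
    e.g. contour points of a predicted vs. a reference segmentation."""
    def directed(p, q):
        # For each point of p, distance to its nearest neighbor in q; take the worst
        return max(min(hypot(x1 - x2, y1 - y2) for (x2, y2) in q) for (x1, y1) in p)
    return max(directed(a, b), directed(b, a))

pred_contour = [(0, 0), (1, 0), (1, 1)]
ref_contour  = [(0, 0), (1, 0), (1, 3)]
print(hausdorff(pred_contour, ref_contour))  # → 2.0
```

Unlike Dice, which rewards bulk overlap, the Hausdorff distance penalizes isolated outlier errors, which is why both metrics are typically reported together.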


Subject(s)
Deep Learning , Humans , Magnetic Resonance Imaging , Heart/diagnostic imaging , Tomography, X-Ray Computed , Image Processing, Computer-Assisted/methods
19.
IEEE Trans Med Imaging ; 42(3): 726-738, 2023 03.
Article in English | MEDLINE | ID: mdl-36260571

ABSTRACT

One of the limiting factors for the development and adoption of novel deep-learning (DL) based medical image analysis methods is the scarcity of labeled medical images. Medical image simulation and synthesis can provide solutions by generating ample training data with corresponding ground truth labels. Despite recent advances, generated images demonstrate limited realism and diversity. In this work, we develop a flexible framework for simulating cardiac magnetic resonance (MR) images with variable anatomical and imaging characteristics for the purpose of creating a diversified virtual population. We advance previous works on both cardiac MR image simulation and anatomical modeling to increase the realism in terms of both image appearance and underlying anatomy. To diversify the generated images, we define parameters: 1) to alter the anatomy, 2) to assign MR tissue properties to various tissue types, and 3) to manipulate the image contrast via acquisition parameters. The proposed framework is optimized to generate a substantial number of cardiac MR images with ground truth labels suitable for downstream supervised tasks. A database of virtual subjects is simulated and its usefulness for aiding a DL segmentation method is evaluated. Our experiments show that a model trained entirely with simulated images can perform comparably to a model trained with real images for heart cavity segmentation in mid-ventricular slices. Moreover, such data can be used in addition to classical augmentation for boosting the performance when training data is limited, particularly by increasing the contrast and anatomical variation, leading to better regularization and generalization. The database is publicly available at https://osf.io/bkzhm/ and the simulation code will be available at https://github.com/sinaamirrajab/CMRI.
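Manipulating image contrast via acquisition parameters, as described above, typically relies on closed-form steady-state solutions of the Bloch equations. As an illustrative example (not the authors' implementation; tissue values are rough literature-style assumptions), the steady-state spoiled gradient-echo signal combines the Ernst equation with T2* decay:

```python
from math import sin, cos, exp, radians

def spgr_signal(t1, t2s, tr, te, flip_deg, m0=1.0):
    """Steady-state spoiled gradient-echo signal:
    S = M0 * sin(a) * (1 - E1) / (1 - cos(a) * E1) * exp(-TE/T2*),
    with E1 = exp(-TR/T1). Times in ms, flip angle in degrees."""
    a = radians(flip_deg)
    e1 = exp(-tr / t1)
    return m0 * sin(a) * (1 - e1) / (1 - cos(a) * e1) * exp(-te / t2s)

# Assumed tissue values: white matter T1 shorter than gray matter,
# so WM appears brighter with T1-weighted parameters (short TR, short TE)
wm = spgr_signal(t1=800,  t2s=70, tr=20, te=5, flip_deg=30)
gm = spgr_signal(t1=1300, t2s=85, tr=20, te=5, flip_deg=30)
```

Sweeping TR, TE, and flip angle in such a model is what lets a simulator emit the same anatomy at many different contrasts, each with identical ground-truth labels.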


Subject(s)
Heart , Magnetic Resonance Imaging , Humans , Heart/diagnostic imaging , Computer Simulation
20.
Comput Biol Med ; 161: 106973, 2023 07.
Article in English | MEDLINE | ID: mdl-37209615

ABSTRACT

Cardiac magnetic resonance (CMR) image segmentation is an integral step in the analysis of cardiac function and diagnosis of heart related diseases. While recent deep learning-based approaches in automatic segmentation have shown great promise to alleviate the need for manual segmentation, most of these are not applicable to realistic clinical scenarios. This is largely due to training on mainly homogeneous datasets, without variation in acquisition, which typically occurs in multi-vendor and multi-site settings, as well as pathological data. Such approaches frequently exhibit a degradation in prediction performance, particularly on outlier cases commonly associated with difficult pathologies, artifacts and extensive changes in tissue shape and appearance. In this work, we present a model aimed at segmenting all three cardiac structures in a multi-center, multi-disease and multi-view scenario. We propose a pipeline, addressing different challenges with segmentation of such heterogeneous data, consisting of heart region detection, augmentation through image synthesis and a late-fusion segmentation approach. Extensive experiments and analysis demonstrate the ability of the proposed approach to tackle the presence of outlier cases during both training and testing, allowing for better adaptation to unseen and difficult examples. Overall, we show that the effective reduction of segmentation failures on outlier cases has a positive impact on not only the average segmentation performance, but also on the estimation of clinical parameters, leading to a better consistency in derived metrics.
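The late-fusion step in the pipeline above is described only at a high level; one common realization is to average per-pixel class probabilities from several models (or views) and then take the argmax. A minimal sketch under that assumption (the function name, data layout, and toy values are illustrative, not the authors' implementation):

```python
def late_fusion(prob_maps):
    """Fuse per-pixel class probabilities from several models by averaging,
    then assign each pixel the argmax class.
    prob_maps: list of [n_pixels][n_classes] probability lists."""
    n_models = len(prob_maps)
    n_pixels = len(prob_maps[0])
    n_classes = len(prob_maps[0][0])
    fused = []
    for p in range(n_pixels):
        avg = [sum(m[p][c] for m in prob_maps) / n_models for c in range(n_classes)]
        fused.append(max(range(n_classes), key=avg.__getitem__))
    return fused

# Two hypothetical models disagree on pixel 1; fusion resolves by confidence
model_a = [[0.9, 0.1], [0.4, 0.6]]
model_b = [[0.8, 0.2], [0.7, 0.3]]
print(late_fusion([model_a, model_b]))  # → [0, 0]
```

Averaging probabilities rather than hard labels lets a confident model outvote an uncertain one, which is one way such fusion can suppress failures on outlier cases.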


Subject(s)
Algorithms , Heart Diseases , Humans , Magnetic Resonance Imaging/methods , Heart/diagnostic imaging , Radiography , Image Processing, Computer-Assisted/methods