ABSTRACT
PURPOSE: Use a conference challenge format to compare machine learning-based gamma-aminobutyric acid (GABA)-edited magnetic resonance spectroscopy (MRS) reconstruction models using one-quarter of the transients typically acquired during a complete scan. METHODS: There were three tracks: Track 1, simulated data; Track 2, in vivo data with identical acquisition parameters; and Track 3, in vivo data with different acquisition parameters. The mean squared error, signal-to-noise ratio (SNR), linewidth, and a proposed shape score metric were used to quantify model performance. Challenge organizers provided open access to a baseline model, simulated noise-free data, guides for adding synthetic noise, and in vivo data. RESULTS: Three submissions were compared. A covariance matrix convolutional neural network model was most successful for Track 1. A vision transformer model operating on a spectrogram data representation was most successful for Tracks 2 and 3. Deep learning (DL) reconstructions with 80 transients achieved equivalent or better SNR, linewidth, and fit error compared with conventional 320-transient reconstructions. However, some DL models optimized linewidth and SNR without actually improving overall spectral quality, indicating a need for more robust metrics. CONCLUSION: DL-based reconstruction pipelines show promise for reducing the number of transients required for GABA-edited MRS.
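The comparison metrics named in the abstract can be illustrated with a minimal sketch. The `mse` and `snr` definitions below follow common MRS conventions (peak amplitude over the standard deviation of a signal-free region), not necessarily the exact challenge implementations; the Lorentzian "peak", noise level, and index choices are all hypothetical.

```python
import numpy as np

def mse(recon, ref):
    # Mean squared error between a reconstructed and a reference spectrum.
    return float(np.mean((recon - ref) ** 2))

def snr(spectrum, peak_idx, noise_slice):
    # SNR as peak amplitude over the standard deviation of a
    # signal-free noise region (one common MRS convention).
    noise_sd = float(np.std(spectrum[noise_slice]))
    return float(spectrum[peak_idx] / noise_sd)

# Toy example: a Lorentzian "GABA" peak plus noise, standing in for a
# low-transient reconstruction compared against the noise-free reference.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 512)
ref = 1.0 / (1.0 + (x / 0.05) ** 2)            # noise-free peak
recon = ref + 0.01 * rng.standard_normal(512)  # noisy reconstruction
```

A shape score of the kind proposed by the challenge would additionally compare the full peak profile (e.g., via a normalized correlation) rather than a single amplitude.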
Subjects
Deep Learning , Magnetic Resonance Spectroscopy , Signal-To-Noise Ratio , gamma-Aminobutyric Acid , gamma-Aminobutyric Acid/metabolism , Humans , Magnetic Resonance Spectroscopy/methods , Neural Networks, Computer , Algorithms , Brain/diagnostic imaging , Brain/metabolism , Machine Learning , Image Processing, Computer-Assisted/methods , Computer Simulation
ABSTRACT
PURPOSE: To develop a free-breathing (FB) 2D radial balanced steady-state free precession cine cardiac MRI method with 100% respiratory gating efficiency using respiratory auto-calibrated motion correction (RAMCO) based on a motion-sensing camera. METHODS: The signal from a respiratory motion-sensing camera was recorded during an FB, retrospectively electrocardiogram-triggered 2D radial balanced steady-state free precession acquisition using pseudo-tiny-golden-angle ordering. With RAMCO, the respiratory signal of each acquisition was retrospectively auto-calibrated by applying different linear translations, using the resulting in-plane image sharpness as a criterion. The auto-calibration determines the optimal magnitude of the linear translations in each of the in-plane directions to minimize motion blurring caused by bulk respiratory motion. Additionally, motion-weighted density compensation was applied during radial gridding to minimize through-plane and non-bulk motion blurring. Left ventricular functional parameters and sharpness scores of FB radial cine were compared with and without RAMCO, and additionally with conventional breath-hold Cartesian cine, in 9 volunteers. RESULTS: FB radial cine with RAMCO had sharpness scores similar to conventional breath-hold Cartesian cine, and the left ventricular functional parameters agreed. For FB radial cine, RAMCO reduced respiratory motion artifacts, with a statistically significant difference in sharpness scores (P < 0.05) compared to reconstructions without motion correction. CONCLUSION: 2D radial cine imaging with RAMCO allows evaluation of left ventricular functional parameters in FB with 100% respiratory efficiency. It eliminates the need for breath-holds, which is especially valuable for patients with no or impaired breath-holding capacity. Validation of the proposed method in patients is warranted.
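The auto-calibration idea (try candidate translation scales, keep the reconstruction with the highest in-plane sharpness) can be sketched on a toy example. Everything below is a stand-in: the phantom, the gradient-energy sharpness metric, and the roll-and-average "reconstruction" that emulates blur from uncorrected bulk translation; none of it is the RAMCO implementation itself.

```python
import numpy as np

def sharpness(img):
    # Gradient energy as an in-plane sharpness criterion (a common proxy).
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

# Toy phantom and respiratory trace; the "true" bulk-motion scale is
# 3 px per unit of respiratory signal (values are hypothetical).
base = np.zeros((64, 64))
base[28:36, 28:36] = 1.0
resp = 4.0 * np.sin(np.linspace(0, 4 * np.pi, 16))
true_scale = 3.0

def recon_with_correction(s):
    # Residual motion after correcting with candidate scale s; averaging
    # rolled copies emulates blurring from uncorrected translation.
    shifts = np.round((true_scale - s) * resp).astype(int)
    return np.mean([np.roll(base, k, axis=0) for k in shifts], axis=0)

# RAMCO-style auto-calibration: pick the scale maximizing sharpness.
candidates = np.linspace(0.0, 6.0, 13)
best_scale = max(candidates, key=lambda s: sharpness(recon_with_correction(s)))
```

In the actual method the search is per in-plane direction and the corrected data are regridded; here a single scale and a 1D shift suffice to show why sharpness works as the calibration criterion.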
Subjects
Image Interpretation, Computer-Assisted , Magnetic Resonance Imaging, Cine , Ventricular Function, Left , Humans , Breath Holding , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging, Cine/methods , Respiration , Retrospective Studies , Ventricular Function, Left/physiology
ABSTRACT
This literature review presents a comprehensive overview of machine learning (ML) applications in proton MR spectroscopy (MRS). As the use of ML techniques in MRS continues to grow, this review aims to provide the MRS community with a structured overview of the state-of-the-art methods. Specifically, we examine and summarize studies published between 2017 and 2023 from major journals in the MR field. We categorize these studies based on a typical MRS workflow, including data acquisition, processing, analysis, and artificial data generation. Our review reveals that ML in MRS is still in its early stages, with a primary focus on processing and analysis techniques, and less attention given to data acquisition. We also found that many studies use similar model architectures, with little comparison to alternative architectures. Additionally, the generation of artificial data is a crucial topic, with no consistent method established for generating it. Furthermore, many studies demonstrate that models trained on artificial data suffer from generalization issues when tested on in vivo data. We also conclude that risks related to ML models should be addressed, particularly for clinical applications. Therefore, output uncertainty measures and model biases are critical to investigate. Nonetheless, the rapid development of ML in MRS and the promising results from the reviewed studies justify further research in this field.
Subjects
Machine Learning , Protons , Magnetic Resonance Spectroscopy/methods , Workflow , Proton Magnetic Resonance Spectroscopy
ABSTRACT
A variety of strategies are used to combine multi-echo functional magnetic resonance imaging (fMRI) data, yet recent literature lacks a systematic comparison of the available options. Here we compare six different approaches derived from multi-echo data and evaluate their influences on BOLD sensitivity for offline and in particular real-time use cases: a single-echo time series (based on Echo 2), the real-time T2*-mapped time series (T2*FIT) and four combined time series (T2*-weighted, tSNR-weighted, TE-weighted, and a new combination scheme termed T2*FIT-weighted). We compare the influences of these six multi-echo derived time series on BOLD sensitivity using a healthy participant dataset (N = 28) with four task-based fMRI runs and two resting state runs. We show that the T2*FIT-weighted combination yields the largest increase in temporal signal-to-noise ratio across task and resting state runs. We demonstrate additionally for all tasks that the T2*FIT time series consistently yields the largest offline effect size measures and real-time region-of-interest based functional contrasts and temporal contrast-to-noise ratios. These improvements show the promising utility of multi-echo fMRI for studies employing real-time paradigms, while further work is advised to mitigate the decreased tSNR of the T2*FIT time series. We recommend the use and continued exploration of T2*FIT for offline task-based and real-time region-based fMRI analysis. Supporting information includes: a data repository (https://dataverse.nl/dataverse/rt-me-fmri), an interactive web-based application to explore the data (https://rt-me-fmri.herokuapp.com/), and further materials and code for reproducibility (https://github.com/jsheunis/rt-me-fMRI).
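One of the combination schemes compared above can be sketched concretely: the widely used T2*-weighted combination weights each echo by TE·exp(−TE/T2*) (the Posse-style scheme; whether the study's implementation matches this exactly is an assumption). The tSNR- and TE-weighted variants differ only in the weight formula. The echo times, T2* value, and mono-exponential toy signal below are illustrative, not the study's data.

```python
import numpy as np

def combine_echoes(echoes, tes, t2star):
    # T2*-weighted multi-echo combination: w_n ∝ TE_n · exp(-TE_n / T2*),
    # normalized over echoes. echoes: array (n_echoes, n_samples).
    tes = np.asarray(tes, dtype=float)
    w = tes * np.exp(-tes / t2star)
    w = w / w.sum()
    return np.tensordot(w, echoes, axes=1)

# Toy example: three echoes of a mono-exponential decay (S0=100, T2*=30 ms).
tes = [14.0, 28.0, 42.0]
s0, t2s = 100.0, 30.0
echoes = np.array([[s0 * np.exp(-te / t2s)] for te in tes])
combined = combine_echoes(echoes, tes, t2star=t2s)
```

The T2*FIT variants discussed in the text go further by re-estimating T2* per timepoint and using it directly (or in the weights), which is what introduces the tSNR penalty the authors note.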
Subjects
Brain Mapping/methods , Brain/diagnostic imaging , Emotions/physiology , Humans , Magnetic Resonance Imaging , Neurofeedback , Reproducibility of Results
ABSTRACT
PURPOSE: To develop a new 3D radial trajectory based on the natural spiral phyllotaxis (SP), with variable anisotropic FOV. THEORY & METHODS: A 3D radial trajectory based on the SP with favorable interleaving properties for cardiac imaging has been proposed by Piccini et al (Magn Reson Med. 2011;66:1049-1056), which supports a FOV with a fixed anisotropy. However, a fixed anisotropy can be inefficient when sampling objects with different anisotropic dimensions. We extend Larson's 3D radial method to provide a variable anisotropic FOV for spiral phyllotaxis (VASP). Simulations were performed to measure the distance between successive projections, analyze point spread functions, and compare aliasing artifacts for both VASP and conventional SP. VASP was fully implemented on a whole-body clinical MR scanner. Phantom and in vivo cardiac images were acquired at 1.5 tesla. RESULTS: Simulations, phantom, and in vivo experiments confirmed that VASP can achieve a variable anisotropic FOV while maintaining the favorable interleaving properties of SP. For an anisotropic FOV with a 100:100:35 ratio, VASP required ~65% fewer radial projections than conventional SP to satisfy the Nyquist criterion. Alternatively, when the same number of radial projections was used as in conventional SP, VASP produced fewer aliasing artifacts for anisotropic objects within the excited imaging volumes. CONCLUSION: We have developed a new method (VASP) that enables a variable anisotropic FOV for 3D radial trajectories with SP. For anisotropic objects within the excited imaging volumes, VASP can reduce scan times and/or reduce aliasing artifacts.
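The isotropic spiral phyllotaxis pattern underlying this family of trajectories can be sketched as a golden-angle point set on the hemisphere. This is a generic phyllotaxis construction, not the exact Piccini interleaving scheme nor the VASP anisotropy modification (which reshapes the angular density to match the target FOV).

```python
import numpy as np

def spiral_phyllotaxis_endpoints(n_proj):
    # Endpoints of 3D radial spokes distributed over the unit hemisphere
    # with a golden-angle azimuthal increment (generic phyllotaxis sketch).
    ga = np.pi * (3.0 - np.sqrt(5.0))     # golden angle, ~2.39996 rad
    n = np.arange(n_proj)
    z = 1.0 - (n + 0.5) / n_proj          # polar positions covering the hemisphere
    r = np.sqrt(1.0 - z ** 2)
    phi = n * ga
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

# Fibonacci numbers of projections give visually uniform interleaves.
pts = spiral_phyllotaxis_endpoints(377)
```

A variable anisotropic FOV, in Larson's sense, would then be obtained by warping the polar/azimuthal sampling density so that the k-space extent per direction matches the desired FOV ratio; that step is omitted here.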
Subjects
Image Enhancement , Image Processing, Computer-Assisted , Algorithms , Anisotropy , Artifacts , Imaging, Three-Dimensional , Magnetic Resonance Imaging , Phantoms, Imaging
ABSTRACT
Neurofeedback training using real-time functional magnetic resonance imaging (rtfMRI-NF) gives subjects voluntary control of localised and distributed brain activity. It has sparked increased interest as a promising non-invasive treatment option for neuropsychiatric and neurocognitive disorders, although its efficacy and clinical significance are yet to be determined. In this work, we present the first extensive review of acquisition, processing and quality control methods available to improve the quality of the neurofeedback signal. Furthermore, we investigate the state of denoising and quality control practices in 128 recently published rtfMRI-NF studies. We found: (a) that less than a third of the studies reported implementing standard real-time fMRI denoising steps, (b) significant room for improvement with regard to methods reporting and (c) the need for methodological studies quantifying and comparing the contribution of denoising steps to the neurofeedback signal quality. Advances in rtfMRI-NF research depend on the reproducibility of methods and results. Notably, a systematic effort is needed to build up evidence that disentangles the various mechanisms influencing neurofeedback effects. To this end, we recommend that future rtfMRI-NF studies: (a) report implementation of a set of standard real-time fMRI denoising steps according to a proposed COBIDAS-style checklist (https://osf.io/kjwhf/), (b) ensure the quality of the neurofeedback signal by calculating and reporting community-informed quality metrics and applying offline control checks and (c) strive to adopt transparent principles in the form of methods and data sharing and support of open-source rtfMRI-NF software. Code and data for reproducibility, as well as an interactive environment to explore the study data, can be accessed at https://github.com/jsheunis/quality-and-denoising-in-rtfmri-nf.
Subjects
Functional Neuroimaging , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Neurofeedback , Quality Control , Functional Neuroimaging/methods , Functional Neuroimaging/standards , Humans , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/standards , Magnetic Resonance Imaging/methods , Magnetic Resonance Imaging/standards , Neurofeedback/methods
ABSTRACT
BACKGROUND: Quantitative myocardial perfusion cardiac MRI can provide a fast and robust assessment of myocardial perfusion status for the noninvasive diagnosis of myocardial ischemia while being more objective than visual assessment. However, it currently has limited use in clinical practice due to the challenging postprocessing required, particularly the segmentation. PURPOSE: To evaluate the efficacy of an automated deep learning (DL) pipeline for image processing prior to quantitative analysis. STUDY TYPE: Retrospective. POPULATION: In all, 175 clinical patients (350 MRI scans; 1050 image series) under both rest and stress conditions (135/10/30 training/validation/test). FIELD STRENGTH/SEQUENCE: 3.0T/2D multislice saturation recovery T1-weighted gradient echo sequence. ASSESSMENT: Accuracy was assessed, as compared to the manual operator, through the mean square error of the distance between landmarks and the Dice similarity coefficient of the segmentation and bounding box detection. Quantitative perfusion maps obtained using the automated DL-based processing were compared to the results obtained with the manually processed images. STATISTICAL TESTS: Bland-Altman plots and the intraclass correlation coefficient (ICC) were used to assess the myocardial blood flow (MBF) obtained using the automated DL pipeline, as compared to values obtained by a manual operator. RESULTS: The mean (SD) error in the detection of the time of peak signal enhancement in the left ventricle was 1.49 (1.4) timeframes. The mean (SD) Dice similarity coefficients for the bounding box and myocardial segmentation were 0.93 (0.03) and 0.80 (0.06), respectively. The mean (SD) error in the right ventricular (RV) insertion point was 2.8 (1.8) mm. The Bland-Altman plots showed a bias of 2.6% of the mean MBF between the automated and manually processed MBF values on a per-myocardial-segment basis. The ICC was 0.89, 95% confidence interval = [0.87, 0.90].
DATA CONCLUSION: We showed high accuracy, compared to manual processing, for the DL-based processing of myocardial perfusion data leading to quantitative values that are similar to those achieved with manual processing. LEVEL OF EVIDENCE: 3 Technical Efficacy Stage: 1 J. Magn. Reson. Imaging 2020;51:1689-1696.
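The Dice similarity coefficient used in the assessment above takes only a few lines to compute; the two toy masks below are hypothetical, not study data.

```python
import numpy as np

def dice(a, b):
    # Dice similarity coefficient between two binary masks:
    # 2·|A ∩ B| / (|A| + |B|); defined as 1.0 when both masks are empty.
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy masks: two 4×4 squares offset by one pixel (overlap = 3×3 = 9 px).
auto = np.zeros((8, 8), dtype=bool);   auto[2:6, 2:6] = True
manual = np.zeros((8, 8), dtype=bool); manual[3:7, 3:7] = True
d = dice(auto, manual)  # 2*9 / (16+16) = 0.5625
```

The same function applies unchanged to the bounding-box masks, since a box is just another binary mask.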
Subjects
Deep Learning , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Perfusion , Retrospective Studies
ABSTRACT
PURPOSE: High-resolution myocardial perfusion analysis preserves spatial information with excellent sensitivity for subendocardial ischemia detection. However, it suffers from a low signal-to-noise ratio. Commonly, spatial averaging is used to increase the signal-to-noise ratio, but this bears the risk of losing information about the extent, localization, and transmurality of ischemia. This study investigates the effect of spatial averaging on the accuracy of perfusion estimates. METHODS: Perfusion data were obtained from patients and healthy volunteers. Spatial averaging was performed on voxel-based data in the transmural and angular directions to reduce resolution to 50, 20, and 10% of its original value. A fit quality assessment method was used to measure the fraction of modeled information and the remaining unmodeled information in the residuals. RESULTS: The fraction of modeled information decreased in patients as resolution was reduced. This decrease was more evident for the Fermi and exponential models in the transmural direction, which showed significant differences at 50% resolution (Fermi P < 0.001, exponential P = 0.0014). No significant differences were observed for the autoregressive-moving-average model (P = 0.081). At full resolution, the autoregressive-moving-average model had the lowest fraction of residual information (0.3). Differences were observed when comparing the coefficient of variation of perfusion estimates in ischemic regions between the transmural and angular directions. CONCLUSION: Angular averaging preserves more information than transmural averaging. Reducing the resolution below 50% in the transmural and 20% in the angular direction results in a loss of information about transmural perfusion differences. A maximum voxel size of 2 × 2 mm² is necessary to avoid loss of physiological information due to spatial averaging.
Subjects
Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Angiography/methods , Magnetic Resonance Imaging, Cine/methods , Myocardial Ischemia/diagnosis , Myocardial Ischemia/physiopathology , Myocardial Perfusion Imaging/methods , Algorithms , Humans , Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity , Spatio-Temporal Analysis
ABSTRACT
PURPOSE: Atherosclerotic carotid plaques can be quantified in vivo by MRI. However, the accuracy in segmentation and quantification of components such as the thin fibrous cap (FC) and lipid-rich necrotic core (LRNC) remains unknown due to the lack of a submillimeter scale ground truth. METHODS: A novel approach was taken by numerically simulating in vivo carotid MRI providing a ground truth comparison. Upon evaluation of a simulated clinical protocol, MR readers segmented simulated images of cross-sectional plaque geometries derived from histological data of 12 patients. RESULTS: MR readers showed high correlation (R) and intraclass correlation (ICC) in measuring the luminal area (R = 0.996, ICC = 0.99), vessel wall area (R = 0.96, ICC = 0.94) and LRNC area (R = 0.95, ICC = 0.94). LRNC area was underestimated (mean error, -24%). Minimum FC thickness showed only moderate correlation and intraclass correlation (R = 0.71, ICC = 0.69). CONCLUSION: Current clinical MRI can quantify carotid plaques but shows limitations for thin FC thickness quantification. These limitations could influence the reliability of carotid MRI for assessing plaque rupture risk associated with FC thickness. Overall, MRI simulations provide a feasible methodology for assessing segmentation and quantification accuracy, as well as for improving scan protocol design.
Subjects
Carotid Artery Diseases/diagnosis , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Angiography/methods , Plaque, Atherosclerotic/diagnosis , Computer Simulation , Contrast Media , Humans , Lipids/analysis , Necrosis , Organometallic Compounds , Signal-To-Noise Ratio
ABSTRACT
PURPOSE: To determine sex-specific reference values for left ventricular (LV) volumes, mass, and ejection fraction (EF) in healthy adults using computer-aided analysis and to examine the effect of age on LV parameters. MATERIALS AND METHODS: We examined data from 1494 members of the Framingham Heart Study Offspring cohort, obtained using short-axis stack cine SSFP CMR, identified a healthy reference group (without cardiovascular disease, hypertension, or LV wall motion abnormality) and determined sex-specific upper 95th percentile thresholds for LV volumes and mass, and lower 5th percentile thresholds for EF using computer-assisted border detection. In secondary analyses, we stratified participants by age-decade and tested for linear trend across age groups. RESULTS: The reference group comprised 685 adults (423F; 61 ± 9 years). Men had greater LV volumes and mass, before and after indexation to common measures of body size (all P = 0.001). Women had greater EF (73 ± 6 versus 71 ± 6%; P = 0.0002). LV volumes decreased with greater age in both sexes, even after indexation. Indexed LV mass did not vary with age. LV EF and concentricity increased with greater age in both sexes. CONCLUSION: We present CMR-derived LV reference values. There are significant age and sex differences in LV volumes, EF, and geometry, whereas mass differs between sexes but not age groups.
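The threshold construction described above (upper 95th percentile for volumes and mass, lower 5th percentile for EF) is a one-liner per parameter. The distribution parameters below are made up for illustration and are not the Framingham values.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical healthy reference group, n = 685 (values are illustrative).
edv = rng.normal(130.0, 25.0, size=685)   # end-diastolic volume, mL
ef = rng.normal(72.0, 6.0, size=685)      # ejection fraction, %

upper_normal_edv = np.percentile(edv, 95)  # upper limit of normal (volumes/mass)
lower_normal_ef = np.percentile(ef, 5)     # lower limit of normal (EF)
```

In the study these percentiles are computed separately per sex, and secondary analyses stratify by age decade; the sketch collapses that to a single group.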
Subjects
Aging/physiology , Heart Ventricles/anatomy & histology , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging, Cine/methods , Stroke Volume/physiology , Ventricular Function, Left/physiology , Adult , Aged , Aged, 80 and over , Algorithms , Female , Humans , Image Enhancement/methods , Male , Middle Aged , Organ Size/physiology , Pattern Recognition, Automated/methods , Reference Values , Reproducibility of Results , Sensitivity and Specificity
ABSTRACT
Biomechanical finite element analysis (FEA) based on in vivo carotid magnetic resonance imaging (MRI) can be used to assess carotid plaque vulnerability noninvasively by computing peak cap stress. However, the accuracy of MRI plaque segmentation and the influence this has on FEA has remained unreported due to the lack of a reliable submillimeter ground truth. In this study, we quantify this influence using novel numerical simulations of carotid MRI. Histological sections from carotid plaques from 12 patients were used to create 33 ground truth plaque models. These models were subjected to numerical computer simulations of a currently used clinically applied 3.0 T T1-weighted black-blood carotid MRI protocol (in-plane acquisition voxel size of 0.62 × 0.62 mm²) to generate simulated in vivo MR images from a known underlying ground truth. The simulated images were manually segmented by three MRI readers. FEA models based on the MRI segmentations were compared with the FEA models based on the ground truth. MRI-based FEA model peak cap stress was consistently underestimated, but still correlated (R) moderately with the ground truth stress: R = 0.71, R = 0.47, and R = 0.76 for the three MRI readers respectively (p < 0.01). Peak plaque stretch was underestimated as well. The peak cap stress in thick-cap, low stress plaques was substantially more accurately and precisely predicted (error of -12 ± 44 kPa) than the peak cap stress in plaques with caps thinner than the acquisition voxel size (error of -177 ± 168 kPa). For reliable MRI-based FEA to compute the peak cap stress of carotid plaques with thin caps, the current clinically used in-plane acquisition voxel size (∼0.6 mm) is inadequate. FEA plaque stress computations would be considerably more reliable if they would be used to identify thick-cap carotid plaques with low stresses instead.
Subjects
Carotid Arteries/physiopathology , Carotid Stenosis/pathology , Carotid Stenosis/physiopathology , Elasticity Imaging Techniques/methods , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Angiography/methods , Models, Cardiovascular , Aged , Blood Flow Velocity , Carotid Arteries/pathology , Computer Simulation , Elastic Modulus , Female , Humans , Male , Middle Aged , Reproducibility of Results , Sensitivity and Specificity , Shear Strength
ABSTRACT
Accurate brain tumor segmentation is critical for diagnosis and treatment planning, whereby multi-modal magnetic resonance imaging (MRI) is typically used for analysis. However, obtaining all required sequences and expertly labeled data for training is challenging and can result in decreased quality of segmentation models developed through automated algorithms. In this work, we examine the possibility of employing a conditional generative adversarial network (GAN) approach for synthesizing multi-modal images to train deep learning-based neural networks aimed at high-grade glioma (HGG) segmentation. The proposed GAN is conditioned on auxiliary brain tissue and tumor segmentation masks, allowing us to attain better accuracy and control of tissue appearance during synthesis. To reduce the domain shift between synthetic and real MR images, we additionally adapt the low-frequency Fourier space components of synthetic data, reflecting the style of the image, to those of real data. We demonstrate the impact of Fourier domain adaptation (FDA) on the training of 3D segmentation networks and attain significant improvements in both the segmentation performance and prediction confidence. Similar outcomes are seen when such data is used as a training augmentation alongside the available real images. In fact, experiments on the BraTS2020 dataset reveal that models trained solely with synthetic data exhibit an improvement of up to 4% in Dice score when using FDA, while training with both real and FDA-processed synthetic data through augmentation results in an improvement of up to 5% in Dice compared to using real data alone. This study highlights the importance of considering image frequency in generative approaches for medical image synthesis and offers a promising approach to address data scarcity in medical imaging segmentation.
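The Fourier domain adaptation step can be sketched directly: swap the low-frequency amplitude spectrum of a synthetic image for that of a real one while keeping the synthetic phase. The β band-width value and the 2D single-channel setting are simplifying assumptions; the paper applies the idea to multi-modal MR volumes.

```python
import numpy as np

def fourier_domain_adaptation(src, trg, beta=0.05):
    # Replace the low-frequency FFT amplitudes of a synthetic (source)
    # image with those of a real (target) image, keeping the source phase.
    # beta sets the half-width of the swapped low-frequency square.
    fs = np.fft.fftshift(np.fft.fft2(src))
    ft = np.fft.fftshift(np.fft.fft2(trg))
    amp_s, pha_s = np.abs(fs), np.angle(fs)
    amp_t = np.abs(ft)
    h, w = src.shape
    b = int(min(h, w) * beta)
    ch, cw = h // 2, w // 2
    amp_s[ch - b:ch + b + 1, cw - b:cw + b + 1] = \
        amp_t[ch - b:ch + b + 1, cw - b:cw + b + 1]
    adapted = np.fft.ifft2(np.fft.ifftshift(amp_s * np.exp(1j * pha_s)))
    return np.real(adapted)

rng = np.random.default_rng(0)
synthetic = rng.uniform(0.0, 1.0, (64, 64))
real = rng.uniform(0.0, 1.0, (64, 64)) + 2.0  # different "style"/intensity
adapted = fourier_domain_adaptation(synthetic, real, beta=0.05)
```

Because the DC term sits inside the swapped square, the adapted image inherits the target's overall intensity level while keeping the source's structure, which is exactly the "style transfer without changing anatomy" property the paper exploits.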
Subjects
Brain Neoplasms , Glioma , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Brain Neoplasms/diagnostic imaging , Algorithms , Magnetic Resonance Imaging/methods
ABSTRACT
BACKGROUND AND OBJECTIVE: As large sets of annotated MRI data are needed for training and validating deep learning based medical image analysis algorithms, the lack of sufficient annotated data is a critical problem. A possible solution is the generation of artificial data by means of physics-based simulations. Existing brain simulation data is limited in terms of anatomical models, tissue classes, fixed tissue characteristics, MR sequences and overall realism. METHODS: We propose a realistic simulation framework by incorporating patient-specific phantoms and Bloch equations-based analytical solutions for fast and accurate MRI simulations. A large number of labels are derived from open-source high-resolution T1w MRI data using a fully automated brain classification tool. The brain labels are taken as ground truth (GT) on which MR images are simulated using our framework. Moreover, we demonstrate that the T1w MR images generated from our framework along with GT annotations can be utilized directly to train a 3D brain segmentation network. To evaluate our model further on a larger set of real multi-source MRI data without GT, we compared our model to existing brain segmentation tools, FSL-FAST and SynthSeg. RESULTS: Our framework generates 3D brain MRI for variable anatomy, sequence, contrast, SNR and resolution. The brain segmentation network for WM/GM/CSF trained only on T1w simulated data shows promising results on real MRI data from the MRBrainS18 challenge dataset, with Dice scores of 0.818/0.832/0.828. On OASIS data, our model exhibits performance close to FSL, both qualitatively and quantitatively, with Dice scores of 0.901/0.939/0.937. CONCLUSIONS: Our proposed simulation framework is an initial step towards achieving truly physics-based MRI image generation, providing flexibility to generate large sets of variable MRI data for desired anatomy, sequence, contrast, SNR, and resolution.
Furthermore, the generated images can effectively train 3D brain segmentation networks, mitigating the reliance on real 3D annotated data.
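The "Bloch equations-based analytical solutions" mentioned above can be illustrated with the closed-form steady-state spoiled gradient-echo (T1w) signal. Whether the framework uses this exact expression is an assumption, and the tissue T1 values below are generic textbook-style numbers, not taken from the paper.

```python
import numpy as np

def spgr_signal(m0, t1, tr, alpha_deg, te=0.0, t2s=np.inf):
    # Analytical steady-state spoiled gradient-echo signal:
    # S = M0·sin(α)·(1 − E1) / (1 − cos(α)·E1) · exp(−TE/T2*),  E1 = exp(−TR/T1).
    # This is the kind of closed-form Bloch solution a simulator can use to
    # turn tissue labels (with assigned M0/T1) into image intensities.
    a = np.deg2rad(alpha_deg)
    e1 = np.exp(-tr / t1)
    return m0 * np.sin(a) * (1 - e1) / (1 - np.cos(a) * e1) * np.exp(-te / t2s)

# Hypothetical ~1.5T tissue T1s (ms): WM ~600, GM ~950, CSF ~4000.
wm = spgr_signal(1.0, 600.0, tr=20.0, alpha_deg=25.0)
gm = spgr_signal(1.0, 950.0, tr=20.0, alpha_deg=25.0)
csf = spgr_signal(1.0, 4000.0, tr=20.0, alpha_deg=25.0)
```

With these settings the short-T1 tissues recover more magnetization per TR, reproducing the familiar T1w ordering WM > GM > CSF, which is what makes such a label-to-intensity mapping useful as segmentation training data.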
Subjects
Deep Learning , Humans , Brain/diagnostic imaging , Brain/anatomy & histology , Magnetic Resonance Imaging/methods , Algorithms , Neuroimaging/methods , Image Processing, Computer-Assisted/methods
ABSTRACT
Aims: Quantitative stress perfusion cardiac magnetic resonance (CMR) is becoming more widely available, but it is still unclear how to integrate this information into clinical decision-making. Typically, pixel-wise perfusion maps are generated, but diagnostic and prognostic studies have summarized perfusion as just one value per patient or in 16 myocardial segments. In this study, the reporting of quantitative perfusion maps is extended from the standard 16 segments to a high-resolution bullseye. Cut-off thresholds are established for the high-resolution bullseye, and the identified perfusion defects are compared with visual assessment. Methods and results: Thirty-four patients with known or suspected coronary artery disease were retrospectively analysed. Visual perfusion defects were contoured on the CMR images and pixel-wise quantitative perfusion maps were generated. Cut-off values were established on the high-resolution bullseye consisting of 1800 points and compared with the per-segment, per-coronary, and per-patient resolution thresholds. Quantitative stress perfusion was significantly lower in visually abnormal pixels, 1.11 (0.75-1.57) vs. 2.35 (1.82-2.9) mL/min/g (Mann-Whitney U test P < 0.001), with an optimal cut-off of 1.72 mL/min/g. This was lower than the segment-wise optimal threshold of 1.92 mL/min/g. The Bland-Altman analysis showed that visual assessment underestimated large perfusion defects compared with the quantification with good agreement for smaller defect burdens. A Dice overlap of 0.68 (0.57-0.78) was found. Conclusion: This study introduces a high-resolution bullseye consisting of 1800 points, rather than 16, per patient for reporting quantitative stress perfusion, which may improve sensitivity. Using this representation, the threshold required to identify areas of reduced perfusion is lower than for segmental analysis.
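One common way to derive a cut-off like the 1.72 mL/min/g above is to scan candidate thresholds and maximize Youden's J (sensitivity + specificity − 1) against the visual labels; whether the study used this exact criterion is not stated. The two MBF distributions below are synthetic, loosely matching the reported medians.

```python
import numpy as np

def optimal_cutoff(values, abnormal):
    # Scan thresholds; flag pixels below the threshold as abnormal and
    # keep the cut-off maximizing Youden's J = sensitivity + specificity - 1.
    best_t, best_j = None, -1.0
    for t in np.unique(values):
        pred = values < t
        tp = np.sum(pred & abnormal)
        tn = np.sum(~pred & ~abnormal)
        sens = tp / abnormal.sum()
        spec = tn / (~abnormal).sum()
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, float(t)
    return best_t

# Synthetic per-pixel stress MBF (mL/min/g), loosely matching the medians
# reported above (2.35 visually normal vs. 1.11 visually abnormal).
rng = np.random.default_rng(3)
normal_mbf = rng.normal(2.35, 0.4, size=900)
abnormal_mbf = rng.normal(1.11, 0.3, size=900)
values = np.concatenate([normal_mbf, abnormal_mbf])
labels = np.concatenate([np.zeros(900, dtype=bool), np.ones(900, dtype=bool)])
cutoff = optimal_cutoff(values, labels)
```

On the 1800-point bullseye the same search runs per representation (per-pixel, per-segment, per-coronary, per-patient), which is how the pixel-wise and segment-wise thresholds can legitimately differ.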
ABSTRACT
Introduction: Approximately one in six people will experience an episode of major depressive disorder (MDD) in their lifetime. Effective treatment is hindered by subjective clinical decision-making and a lack of objective prognostic biomarkers. Functional MRI (fMRI) could provide such an objective measure, but the majority of MDD studies have focused on static approaches, disregarding the rapidly changing nature of the brain. In this study, we aim to predict depression severity changes at 3 and 6 months using dynamic fMRI features. Methods: We acquired a longitudinal dataset of 32 MDD patients with fMRI scans acquired at baseline and at clinical follow-ups 3 and 6 months later. Several measures were derived from an emotion face-matching fMRI dataset: activity in brain regions, static and dynamic functional connectivity between functional brain networks (FBNs) and two measures from a wavelet coherence analysis approach. All fMRI features were evaluated independently, with and without demographic and clinical parameters. Patients were divided into two classes based on changes in depression severity at both follow-ups. Results: The number of coherence clusters (nCC) between FBNs, reflecting the total number of interactions (either synchronous, anti-synchronous or causal), resulted in the highest predictive performance. The nCC-based classifier achieved 87.5% and 77.4% accuracy for the 3- and 6-month changes in severity, respectively. Furthermore, regression analyses supported the potential of nCC for predicting depression severity on a continuous scale. The posterior default mode network (DMN), dorsal attention network (DAN) and two visual networks were the most important networks in the optimal nCC models. Reduced nCC was associated with a poorer depression course, suggesting deficits in sustained attention to and coping with emotion-related faces.
An ensemble of classifiers with demographic, clinical and lead coherence features, a measure of dynamic causality, resulted in a 3-month clinical outcome prediction accuracy of 81.2%. Discussion: The dynamic wavelet features demonstrated high accuracy in predicting individual depression severity change. Features describing brain dynamics could enhance understanding of depression and support clinical decision-making. Further studies are required to evaluate their robustness and replicability in larger cohorts.
ABSTRACT
OBJECTIVES: Dark-blood late gadolinium enhancement (DB-LGE) cardiac magnetic resonance has been proposed as an alternative to standard white-blood LGE (WB-LGE) imaging protocols to enhance scar-to-blood contrast without compromising scar-to-myocardium contrast. In practice, both DB and WB contrasts may have clinical utility, but acquiring both has the drawback of additional acquisition time. The aim of this study was to develop and evaluate a deep learning method to generate synthetic WB-LGE images from DB-LGE, allowing the assessment of both contrasts without additional scan time. MATERIALS AND METHODS: DB-LGE and WB-LGE data from 215 patients were used to train 2 types of unpaired image-to-image translation deep learning models, cycle-consistent generative adversarial network (CycleGAN) and contrastive unpaired translation, with 5 different loss function hyperparameter settings each. Initially, the best hyperparameter setting was determined for each model type based on the Fréchet inception distance and the visual assessment of expert readers. Then, the CycleGAN and contrastive unpaired translation models with the optimal hyperparameters were directly compared. Finally, with the best model chosen, the quantification of scar based on the synthetic WB-LGE images was compared with the truly acquired WB-LGE. RESULTS: The CycleGAN architecture for unpaired image-to-image translation was found to provide the most realistic synthetic WB-LGE images from DB-LGE images. The results showed that it was difficult for visual readers to distinguish if an image was true or synthetic (55% correctly classified). In addition, scar burden quantification with the synthetic data was highly correlated with the analysis of the truly acquired images. Bland-Altman analysis found a mean bias in percentage scar burden between the quantification of the real WB and synthetic white-blood images of 0.44% with limits of agreement from -10.85% to 11.74%. 
The mean image quality of the real WB images (3.53/5) was scored higher than that of the synthetic WB images (3.03/5; P = 0.009). CONCLUSIONS: This study proposed a CycleGAN model to generate synthetic WB-LGE from DB-LGE images, allowing assessment of both image contrasts without additional scan time. This work represents a clinically focused assessment of synthetic medical images generated by artificial intelligence, a topic with significant potential for a multitude of applications. However, further evaluation is warranted before clinical adoption.
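The Bland-Altman figures quoted above (a mean bias with 95% limits of agreement) follow from a standard computation, sketched below. The `real`/`synth` arrays are hypothetical scar-burden percentages for illustration only, not the study's data.

```python
import numpy as np

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between paired measurements."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    diff = a - b                       # per-case difference (e.g., real - synthetic)
    bias = diff.mean()                 # mean bias
    half_loa = 1.96 * diff.std(ddof=1) # half-width of the 95% limits of agreement
    return bias, bias - half_loa, bias + half_loa

# Hypothetical scar-burden percentages from real vs. synthetic WB-LGE analyses
real  = [12.0, 8.5, 20.1, 15.3, 9.8]
synth = [11.2, 9.0, 19.5, 16.0, 9.1]
bias, lower, upper = bland_altman(real, synth)
```

The same routine, applied to the study's per-patient scar burdens, would yield the reported bias of 0.44% with limits from -10.85% to 11.74%.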
ABSTRACT
Quantification of myocardial scar from late gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) images can be facilitated by automated artificial intelligence (AI)-based analysis. However, AI models are susceptible to domain shifts, in which model performance degrades when applied to data with different characteristics than the original training data. In this study, CycleGAN models were trained to translate local hospital data to the appearance of a public LGE CMR dataset. After domain adaptation, an AI scar quantification pipeline including myocardium segmentation, scar segmentation, and computation of scar burden, previously developed on the public dataset, was evaluated on an external test set of 44 patients clinically assessed for ischemic scar. The mean ± standard deviation Dice similarity coefficients between the manual and AI-predicted segmentations were similar to those previously reported: 0.76 ± 0.05 for myocardium, 0.75 ± 0.32 for scar across all patients, and 0.41 ± 0.12 for scar in scans with pathological findings. Bland-Altman analysis showed a mean bias in scar burden percentage of -0.62% with limits of agreement from -8.4% to 7.17%. These results show the feasibility of deploying AI models, trained with public data, for LGE CMR quantification on local clinical data using unsupervised CycleGAN-based domain adaptation. RELEVANCE STATEMENT: Our study demonstrated that AI models trained on public databases can be applied to patient data acquired at a specific institution with different acquisition settings, without additional manual labor to obtain further training labels.
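The Dice similarity coefficient used above to compare manual and AI-predicted segmentations is the overlap of two binary masks normalized by their sizes. A minimal sketch on toy masks (illustrative only, not the study's segmentations):

```python
import numpy as np

def dice(pred, target):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()   # overlapping foreground voxels
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0  # both empty -> perfect agreement

# Toy 4x4 masks standing in for AI-predicted and manual scar segmentations
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
manual = np.array([[0, 1, 1, 0],
                   [0, 0, 1, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
score = dice(pred, manual)  # 2*3 / (4+3)
```

A score of 1.0 indicates identical masks; the large standard deviation reported for scar (± 0.32) reflects how sensitive this metric is when the structure is small or absent.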
Subjects
Cicatrix , Magnetic Resonance Imaging , Humans , Cicatrix/diagnostic imaging , Male , Female , Magnetic Resonance Imaging/methods , Middle Aged , Contrast Media , Aged , Image Interpretation, Computer-Assisted/methods , Artificial Intelligence
ABSTRACT
RESEARCH PURPOSE: The low treatment effectiveness in major depressive disorder (MDD) may be caused by the subjectivity of clinical examination and the lack of quantitative tests. Objective biomarkers derived from magnetic resonance imaging (MRI) may support clinical experts during decision-making. Numerous studies have attempted to identify such MRI-based biomarkers. However, most are uni-modal (based on a single MRI modality) and focus on either MDD diagnosis or outcome. Uncertainty remains regarding whether key features or classification models for diagnosis may also be used for outcome prediction. Therefore, we aim to find multi-modal predictors of both MDD diagnosis and outcome. By addressing these research questions using the same dataset, we eliminate between-study confounding factors. Various structural (T1-weighted, T2-weighted, diffusion tensor imaging (DTI)) and functional (resting-state and task-based functional MRI) scans were acquired from 32 MDD and 31 healthy control (HC) subjects during the first visit at the investigational site (baseline). Depression severity was assessed at baseline and 6 months later. Features were extracted from the baseline MRI images of each modality. Binary 6-month negative and positive outcome (NO; PO) classes were defined based on relative (to baseline) change in depression severity. Support vector machine models were employed to separate MDD from HC subjects (diagnosis) and NO from PO subjects (outcome). Classification was performed through a uni-modal (features from a single MRI modality) and a multi-modal (combination of features from different modalities) approach. PRINCIPAL RESULTS: Our results show that DTI features yielded the highest uni-modal performance for diagnosis and outcome prediction: mean diffusivity (AUC (area under the curve) = 0.701) and the sum of streamline weights (AUC = 0.860), respectively.
Multi-modal ensemble classifiers with T1-weighted, resting-state functional MRI and DTI features improved classification performance for both diagnosis and outcome (AUC = 0.746 and 0.932, respectively). Feature analyses revealed that the most important features were located in frontal, limbic and parietal areas; however, the modality and location of these features differed between the diagnostic and prognostic models. MAJOR CONCLUSIONS: Our findings suggest that combining features from different MRI modalities predicts MDD diagnosis and outcome with higher performance. Furthermore, we demonstrated that the most important features for MDD diagnosis differed from, and were located in different brain regions than, those for outcome prediction. This longitudinal study contributes to the identification of objective biomarkers of MDD and its outcome. Follow-up studies may further evaluate the generalizability of our models in larger or multi-center cohorts.
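The multi-modal approach above amounts to concatenating per-modality feature vectors before classification, and the reported AUC can be computed directly from classifier scores via the Mann-Whitney rank statistic. The sketch below uses hypothetical feature blocks and labels, with a trivial stand-in score in place of a trained support vector machine.

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    # Fraction of (positive, negative) pairs ranked correctly; ties count half
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Hypothetical uni-modal feature blocks for 4 subjects (one row each)
t1_feats  = np.array([[0.2], [0.8], [0.4], [0.9]])   # e.g., T1-weighted features
dti_feats = np.array([[0.1], [0.7], [0.3], [0.6]])   # e.g., DTI features
multi = np.concatenate([t1_feats, dti_feats], axis=1)  # multi-modal vector per subject

# Stand-in decision scores (a trained SVM would supply these) and diagnosis labels
scores = multi.mean(axis=1)
labels = np.array([0, 1, 0, 1])
performance = auc(scores, labels)
```

An AUC of 0.5 corresponds to chance-level ranking; the study's 0.932 for outcome prediction indicates that nearly all NO/PO subject pairs were ranked correctly.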
ABSTRACT
The aim of this article is to describe a novel hardware perfusion phantom that simulates myocardial first-pass perfusion allowing comparisons between different MR techniques and validation of the results against a true gold standard. MR perfusion images were acquired at different myocardial perfusion rates and variable doses of gadolinium and cardiac output. The system proved to be sensitive to controlled variations of myocardial perfusion rate, contrast agent dose, and cardiac output. It produced distinct signal intensity curves for perfusion rates ranging from 1 to 10 mL/mL/min. Quantification of myocardial blood flow by signal deconvolution techniques provided accurate measurements of perfusion. The phantom also proved to be very reproducible between different sessions and different operators. This novel hardware perfusion phantom system allows reliable, reproducible, and efficient simulation of myocardial first-pass MR perfusion. Direct comparison between the results of image-based quantification and reference values of flow and myocardial perfusion will allow development and validation of accurate quantification methods.
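The signal-deconvolution quantification mentioned above can be sketched as a regularized linear deconvolution: the tissue curve is modeled as the arterial input function (AIF) convolved with a tissue impulse response, whose peak gives the flow. The curves below are synthetic, and the exponential AIF is a simplifying assumption for illustration, not the phantom's actual output.

```python
import numpy as np

def quantify_mbf(aif, tissue, dt, lam=1e-6):
    """Estimate myocardial blood flow by Tikhonov-regularized deconvolution.

    Forward model: tissue(t) = (aif * h)(t); flow is the peak of the
    recovered tissue impulse response h.
    """
    n = len(aif)
    # Lower-triangular Toeplitz matrix implementing discrete convolution with the AIF
    A = np.zeros((n, n))
    for i in range(n):
        A[i, :i + 1] = aif[i::-1] * dt
    # Regularized least squares: (A^T A + lam I) h = A^T c
    h = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ tissue)
    return h.max()  # flow in mL/mL/min when curves share consistent units

# Synthetic noiseless example with a known flow of 2.0 mL/mL/min
dt = 1.0 / 60.0                    # 1-s sampling expressed in minutes
t = np.arange(40) * dt
aif = 5.0 * np.exp(-t / 0.1)       # simplified exponential arterial input (hypothetical)
h_true = 2.0 * np.exp(-t / 0.2)    # impulse response; peak value = flow
tissue = np.convolve(aif, h_true)[:len(t)] * dt   # forward model
mbf = quantify_mbf(aif, tissue, dt)
```

With measured (noisy) phantom curves, the regularization weight `lam` would need tuning, since it trades noise amplification against underestimation of the peak.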
Subjects
Magnetic Resonance Angiography/instrumentation , Magnetic Resonance Imaging, Cine/instrumentation , Myocardial Perfusion Imaging/instrumentation , Phantoms, Imaging , Equipment Design , Equipment Failure Analysis , Humans , Reproducibility of Results , Sensitivity and Specificity
ABSTRACT
Deep learning-based segmentation methods provide an effective and automated way for assessing the structure and function of the heart in cardiac magnetic resonance (CMR) images. However, despite their state-of-the-art performance on images acquired from the same source (same scanner or scanner vendor) as the images used during training, their performance degrades significantly on images coming from different domains. A straightforward approach to tackle this issue consists of acquiring large quantities of multi-site and multi-vendor data, which is practically infeasible. Generative adversarial networks (GANs) for image synthesis present a promising solution for tackling data limitations in medical imaging and addressing the generalization capability of segmentation models. In this work, we explore the usability of synthesized short-axis CMR images generated using a segmentation-informed conditional GAN to improve the robustness of heart cavity segmentation models in a variety of different settings. The GAN is trained on paired real images and corresponding segmentation maps of both the heart and the surrounding tissue, reinforcing the synthesis of semantically consistent and realistic images. First, we evaluate the segmentation performance of a model trained solely with synthetic data and show that it only slightly underperforms compared to the baseline trained with real data. By further combining real with synthetic data during training, we observe a substantial improvement in segmentation performance (up to 4% and 40% in terms of Dice score and Hausdorff distance, respectively) across multiple datasets collected from various sites and scanners. This is additionally demonstrated across state-of-the-art 2D and 3D segmentation networks, and the obtained results demonstrate the potential of the proposed method in tackling the domain shift present in medical data.
Finally, we thoroughly analyze the quality of the synthetic data and its ability to replace real MR images during training, and provide insight into important aspects of utilizing synthetic images for segmentation.
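Alongside the Dice score, the Hausdorff distance quoted above measures the worst-case boundary disagreement between two segmentations. A minimal sketch on toy contour points (illustrative, not the study's segmentations):

```python
import numpy as np

def hausdorff(a_pts, b_pts):
    """Symmetric Hausdorff distance between two point sets (e.g., mask contours)."""
    a = np.asarray(a_pts, dtype=float)
    b = np.asarray(b_pts, dtype=float)
    # Pairwise Euclidean distances, shape (len(a), len(b))
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Largest nearest-neighbor distance, taken in both directions
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy contours standing in for predicted vs. reference segmentation boundaries
pred = [(0, 0), (0, 1), (1, 0)]
ref  = [(0, 0), (0, 1), (3, 0)]
hd = hausdorff(pred, ref)  # dominated by the outlier point (3, 0)
```

Because it is driven by the single worst boundary point, the Hausdorff distance penalizes stray false-positive islands that the overlap-based Dice score can largely ignore, which is why the two metrics are reported together.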