Results 1 - 7 of 7
1.
MAGMA ; 37(3): 449-463, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38613715

ABSTRACT

PURPOSE: Use a conference challenge format to compare machine learning-based gamma-aminobutyric acid (GABA)-edited magnetic resonance spectroscopy (MRS) reconstruction models using one-quarter of the transients typically acquired during a complete scan. METHODS: There were three tracks: Track 1: simulated data, Track 2: identical acquisition parameters with in vivo data, and Track 3: different acquisition parameters with in vivo data. The mean squared error, signal-to-noise ratio (SNR), linewidth, and a proposed shape score metric were used to quantify model performance. Challenge organizers provided open access to a baseline model, simulated noise-free data, guides for adding synthetic noise, and in vivo data. RESULTS: Three submissions were compared. A covariance matrix convolutional neural network model was most successful for Track 1. A vision transformer model operating on a spectrogram data representation was most successful for Tracks 2 and 3. Deep learning (DL) reconstructions with 80 transients achieved equivalent or better SNR, linewidth, and fit error compared to conventional 320-transient reconstructions. However, some DL models optimized linewidth and SNR without actually improving overall spectral quality, indicating a need for more robust metrics. CONCLUSION: DL-based reconstruction pipelines show promise for reducing the number of transients required for GABA-edited MRS.
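The four-fold transient reduction matters because SNR from signal averaging scales with the square root of the number of transients: keeping 80 of 320 transients halves the raw SNR, and the reconstruction model must make up the difference. A minimal numerical sketch of this scaling (illustrative only, not the challenge pipeline; all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def averaged_snr(n_transients, signal=1.0, noise_sd=1.0, n_points=4096):
    """Empirical SNR after averaging n_transients noisy copies of a flat signal."""
    transients = signal + noise_sd * rng.standard_normal((n_transients, n_points))
    return signal / transients.mean(axis=0).std()

snr_80 = averaged_snr(80)
snr_320 = averaged_snr(320)
print(snr_320 / snr_80)  # ≈ 2, i.e. sqrt(320 / 80)
```

The factor of two is what a DL reconstruction operating on 80 transients has to recover to match a conventional 320-transient average.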


Subjects
Deep Learning , Magnetic Resonance Spectroscopy , Signal-To-Noise Ratio , gamma-Aminobutyric Acid , gamma-Aminobutyric Acid/metabolism , Humans , Magnetic Resonance Spectroscopy/methods , Neural Networks, Computer , Algorithms , Brain/diagnostic imaging , Brain/metabolism , Machine Learning , Image Processing, Computer-Assisted/methods , Computer Simulation
2.
Front Psychiatry ; 15: 1255370, 2024.
Article in English | MEDLINE | ID: mdl-38585483

ABSTRACT

Introduction: Approximately one in six people will experience an episode of major depressive disorder (MDD) in their lifetime. Effective treatment is hindered by subjective clinical decision-making and a lack of objective prognostic biomarkers. Functional MRI (fMRI) could provide such an objective measure, but the majority of MDD studies have focused on static approaches, disregarding the rapidly changing nature of the brain. In this study, we aim to predict depression severity changes at 3 and 6 months using dynamic fMRI features. Methods: We acquired a longitudinal dataset of 32 MDD patients with fMRI scans acquired at baseline and at clinical follow-ups 3 and 6 months later. Several measures were derived from an emotion face-matching fMRI dataset: activity in brain regions, static and dynamic functional connectivity between functional brain networks (FBNs), and two measures from a wavelet coherence analysis approach. All fMRI features were evaluated independently, with and without demographic and clinical parameters. Patients were divided into two classes based on changes in depression severity at both follow-ups. Results: The number of coherence clusters (nCC) between FBNs, reflecting the total number of interactions (synchronous, anti-synchronous, or causal), resulted in the highest predictive performance. The nCC-based classifier achieved 87.5% and 77.4% accuracy for the 3- and 6-month changes in severity, respectively. Furthermore, regression analyses supported the potential of nCC for predicting depression severity on a continuous scale. The posterior default mode network (DMN), dorsal attention network (DAN), and two visual networks were the most important networks in the optimal nCC models. Reduced nCC was associated with a poorer depression course, suggesting deficits in sustained attention to and coping with emotion-related faces. An ensemble of classifiers with demographic, clinical, and lead coherence features, a measure of dynamic causality, resulted in a 3-month clinical outcome prediction accuracy of 81.2%. Discussion: The dynamic wavelet features demonstrated high accuracy in predicting individual depression severity change. Features describing brain dynamics could enhance understanding of depression and support clinical decision-making. Further studies are required to evaluate their robustness and replicability in larger cohorts.
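As a rough illustration of the coherence idea behind nCC (quantifying how strongly two network time series oscillate together), the sketch below uses ordinary magnitude-squared coherence from SciPy on synthetic signals. This is a stand-in for the authors' wavelet coherence analysis, and the signals, sampling rate, and coupling strength are all made up:

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(1)
fs, n = 1.0, 1024  # hypothetical sampling rate (Hz) and series length

network_a = rng.standard_normal(n)
network_b = network_a + 0.3 * rng.standard_normal(n)  # coupled to network_a
network_c = rng.standard_normal(n)                    # independent of network_a

_, coh_ab = coherence(network_a, network_b, fs=fs, nperseg=128)
_, coh_ac = coherence(network_a, network_c, fs=fs, nperseg=128)
print(coh_ab.mean() > coh_ac.mean())  # coupled networks are far more coherent
```

Counting frequency-time regions where such coherence is significant is, loosely, what the nCC feature captures.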

3.
Comput Methods Programs Biomed ; 248: 108115, 2024 May.
Article in English | MEDLINE | ID: mdl-38503072

ABSTRACT

BACKGROUND AND OBJECTIVE: As large sets of annotated MRI data are needed for training and validating deep learning-based medical image analysis algorithms, the lack of sufficient annotated data is a critical problem. A possible solution is the generation of artificial data by means of physics-based simulations. Existing brain simulation data are limited in terms of anatomical models, tissue classes, fixed tissue characteristics, MR sequences, and overall realism. METHODS: We propose a realistic simulation framework that incorporates patient-specific phantoms and analytical solutions of the Bloch equations for fast and accurate MRI simulations. A large number of labels are derived from open-source high-resolution T1w MRI data using a fully automated brain classification tool. The brain labels are taken as ground truth (GT), on which MR images are simulated using our framework. Moreover, we demonstrate that the T1w MR images generated from our framework, along with GT annotations, can be used directly to train a 3D brain segmentation network. To evaluate our model further on a larger set of real multi-source MRI data without GT, we compared it to existing brain segmentation tools, FSL-FAST and SynthSeg. RESULTS: Our framework generates 3D brain MRI for variable anatomy, sequence, contrast, SNR, and resolution. The WM/GM/CSF brain segmentation network trained only on T1w simulated data shows promising results on real MRI data from the MRBrainS18 challenge dataset, with Dice scores of 0.818/0.832/0.828. On OASIS data, our model performs close to FSL, both qualitatively and quantitatively, with Dice scores of 0.901/0.939/0.937. CONCLUSIONS: Our proposed simulation framework is an initial step towards truly physics-based MR image generation, providing the flexibility to generate large sets of variable MRI data for the desired anatomy, sequence, contrast, SNR, and resolution. Furthermore, the generated images can effectively train 3D brain segmentation networks, mitigating the reliance on real 3D annotated data.
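The kind of analytical Bloch-equation solution such a framework relies on can be illustrated with the steady-state spoiled gradient-echo (SPGR) signal equation, which produces the familiar T1w contrast ordering WM > GM > CSF. The tissue parameters and sequence settings below are rough, hypothetical values chosen only for illustration, not those of the paper:

```python
import math

def spgr_signal(t1_ms, t2s_ms, tr_ms=20.0, te_ms=5.0, flip_deg=25.0, m0=1.0):
    """Steady-state spoiled gradient-echo signal (analytical Bloch solution)."""
    a = math.radians(flip_deg)
    e1 = math.exp(-tr_ms / t1_ms)
    return m0 * math.sin(a) * (1.0 - e1) / (1.0 - math.cos(a) * e1) * math.exp(-te_ms / t2s_ms)

# Illustrative (T1 ms, T2* ms) pairs -- not calibrated to any field strength
tissues = {"WM": (600.0, 70.0), "GM": (950.0, 85.0), "CSF": (4000.0, 200.0)}
for name, (t1, t2s) in tissues.items():
    print(name, round(spgr_signal(t1, t2s), 3))  # WM brightest, CSF darkest (T1w)
```

Evaluating a closed-form expression like this per voxel is what makes analytical simulation fast compared with numerically integrating the Bloch equations.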


Subjects
Deep Learning , Humans , Brain/diagnostic imaging , Brain/anatomy & histology , Magnetic Resonance Imaging/methods , Algorithms , Neuroimaging/methods , Image Processing, Computer-Assisted/methods
4.
Comput Med Imaging Graph ; 112: 102332, 2024 03.
Article in English | MEDLINE | ID: mdl-38245925

ABSTRACT

Accurate brain tumor segmentation is critical for diagnosis and treatment planning, for which multi-modal magnetic resonance imaging (MRI) is typically used. However, obtaining all required sequences and expertly labeled data for training is challenging and can result in decreased quality of segmentation models developed through automated algorithms. In this work, we examine the possibility of employing a conditional generative adversarial network (GAN) approach for synthesizing multi-modal images to train deep learning-based neural networks aimed at high-grade glioma (HGG) segmentation. The proposed GAN is conditioned on auxiliary brain tissue and tumor segmentation masks, allowing us to attain better accuracy and control of tissue appearance during synthesis. To reduce the domain shift between synthetic and real MR images, we additionally adapt the low-frequency Fourier space components of synthetic data, reflecting the style of the image, to those of real data. We demonstrate the impact of Fourier domain adaptation (FDA) on the training of 3D segmentation networks and attain significant improvements in both segmentation performance and prediction confidence. Similar outcomes are seen when such data are used as a training augmentation alongside the available real images. In fact, experiments on the BraTS2020 dataset reveal that models trained solely with synthetic data exhibit an improvement of up to 4% in Dice score when using FDA, while training with both real and FDA-processed synthetic data through augmentation results in an improvement of up to 5% in Dice compared to using real data alone. This study highlights the importance of considering image frequency in generative approaches for medical image synthesis and offers a promising approach to address data scarcity in medical imaging segmentation.
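The FDA step itself is compact: take the 2D FFT of a synthetic image, swap its centred low-frequency amplitude band (the image "style") for that of a real image, and invert using the synthetic image's phase (its "content"). A single-slice NumPy sketch under these assumptions; the paper works with MR volumes, and the band width `beta` and both arrays here are made-up illustrative values:

```python
import numpy as np

def fourier_domain_adaptation(src, tgt, beta=0.05):
    """Replace the low-frequency FFT amplitudes of src with those of tgt."""
    f_src = np.fft.fftshift(np.fft.fft2(src))
    f_tgt = np.fft.fftshift(np.fft.fft2(tgt))
    amp, phase = np.abs(f_src), np.angle(f_src)
    h, w = src.shape
    bh, bw = max(1, int(beta * h)), max(1, int(beta * w))
    ch, cw = h // 2, w // 2  # DC sits at the centre after fftshift
    amp[ch - bh:ch + bh, cw - bw:cw + bw] = \
        np.abs(f_tgt)[ch - bh:ch + bh, cw - bw:cw + bw]
    return np.real(np.fft.ifft2(np.fft.ifftshift(amp * np.exp(1j * phase))))

rng = np.random.default_rng(0)
synthetic = rng.random((64, 64))   # stand-in synthetic slice
real = synthetic + 5.0             # stand-in real slice with a brighter "style"
adapted = fourier_domain_adaptation(synthetic, real)
print(adapted.mean())  # ≈ real.mean(): the low-frequency style now matches
```

Because only low frequencies are swapped, edges and fine structure of the synthetic image are preserved while its global intensity statistics move toward the real domain.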


Subjects
Brain Neoplasms , Glioma , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Brain Neoplasms/diagnostic imaging , Algorithms , Magnetic Resonance Imaging/methods
5.
Eur Heart J Imaging Methods Pract ; 2(1): qyae001, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38283662

ABSTRACT

Aims: Quantitative stress perfusion cardiac magnetic resonance (CMR) is becoming more widely available, but it is still unclear how to integrate this information into clinical decision-making. Typically, pixel-wise perfusion maps are generated, but diagnostic and prognostic studies have summarized perfusion as just one value per patient or in 16 myocardial segments. In this study, the reporting of quantitative perfusion maps is extended from the standard 16 segments to a high-resolution bullseye. Cut-off thresholds are established for the high-resolution bullseye, and the identified perfusion defects are compared with visual assessment. Methods and results: Thirty-four patients with known or suspected coronary artery disease were retrospectively analysed. Visual perfusion defects were contoured on the CMR images and pixel-wise quantitative perfusion maps were generated. Cut-off values were established on the high-resolution bullseye consisting of 1800 points and compared with the per-segment, per-coronary, and per-patient resolution thresholds. Quantitative stress perfusion was significantly lower in visually abnormal pixels, 1.11 (0.75-1.57) vs. 2.35 (1.82-2.90) mL/min/g (Mann-Whitney U test P < 0.001), with an optimal cut-off of 1.72 mL/min/g. This was lower than the segment-wise optimal threshold of 1.92 mL/min/g. The Bland-Altman analysis showed that visual assessment underestimated large perfusion defects compared with the quantification, with good agreement for smaller defect burdens. A Dice overlap of 0.68 (0.57-0.78) was found. Conclusion: This study introduces a high-resolution bullseye consisting of 1800 points, rather than 16, per patient for reporting quantitative stress perfusion, which may improve sensitivity. Using this representation, the threshold required to identify areas of reduced perfusion is lower than for segmental analysis.
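The two key computations here, flagging pixels below the cut-off and scoring overlap with the visual contour via the Dice coefficient, are straightforward. The sketch below applies the paper's pixel-wise threshold of 1.72 mL/min/g to a made-up 3x3 perfusion map with a hypothetical visual defect mask:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice overlap between two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

perfusion = np.array([[0.9, 1.5, 2.4],   # hypothetical stress perfusion (mL/min/g)
                      [1.2, 2.0, 2.8],
                      [2.5, 2.6, 3.0]])
quant_defect = perfusion < 1.72          # pixel-wise quantitative threshold
visual_defect = np.array([[1, 1, 0],
                          [0, 0, 0],
                          [0, 0, 0]], dtype=bool)  # hypothetical visual contour
print(dice(quant_defect, visual_defect))  # → 0.8
```

The same Dice computation, applied over the 1800-point bullseye, yields the 0.68 overlap reported in the study.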

6.
Invest Radiol ; 2024 May 01.
Article in English | MEDLINE | ID: mdl-38687025

ABSTRACT

OBJECTIVES: Dark-blood late gadolinium enhancement (DB-LGE) cardiac magnetic resonance has been proposed as an alternative to standard white-blood LGE (WB-LGE) imaging protocols to enhance scar-to-blood contrast without compromising scar-to-myocardium contrast. In practice, both DB and WB contrasts may have clinical utility, but acquiring both has the drawback of additional acquisition time. The aim of this study was to develop and evaluate a deep learning method to generate synthetic WB-LGE images from DB-LGE, allowing the assessment of both contrasts without additional scan time. MATERIALS AND METHODS: DB-LGE and WB-LGE data from 215 patients were used to train 2 types of unpaired image-to-image translation deep learning models, cycle-consistent generative adversarial network (CycleGAN) and contrastive unpaired translation, each with 5 different loss function hyperparameter settings. Initially, the best hyperparameter setting was determined for each model type based on the Fréchet inception distance and the visual assessment of expert readers. Then, the CycleGAN and contrastive unpaired translation models with the optimal hyperparameters were directly compared. Finally, with the best model chosen, the quantification of scar based on the synthetic WB-LGE images was compared with the truly acquired WB-LGE. RESULTS: The CycleGAN architecture for unpaired image-to-image translation was found to provide the most realistic synthetic WB-LGE images from DB-LGE images. The results showed that it was difficult for visual readers to distinguish whether an image was true or synthetic (55% correctly classified). In addition, scar burden quantification with the synthetic data was highly correlated with the analysis of the truly acquired images. Bland-Altman analysis found a mean bias in percentage scar burden of 0.44% between the quantification of the real and synthetic WB-LGE images, with limits of agreement from -10.85% to 11.74%. The mean image quality score of the real WB-LGE images (3.53/5) was higher than that of the synthetic WB-LGE images (3.03), P = 0.009. CONCLUSIONS: This study proposed a CycleGAN model to generate synthetic WB-LGE from DB-LGE images to allow assessment of both image contrasts without additional scan time. This work represents a clinically focused assessment of synthetic medical images generated by artificial intelligence, a topic with significant potential for a multitude of applications. However, further evaluation is warranted before clinical adoption.
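The Bland-Altman quantities reported above (mean bias and limits of agreement) reduce to the mean and standard deviation of the paired differences. A sketch on made-up scar-burden percentages, not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between paired measurements."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical scar burden (%) measured on real vs. synthetic WB-LGE images
real_burden = [12.0, 8.5, 20.1, 5.0]
synthetic_burden = [11.0, 9.5, 19.1, 6.0]
bias, lo, hi = bland_altman(real_burden, synthetic_burden)
print(bias, lo < 0 < hi)  # zero bias here; the limits straddle it
```

Wide limits of agreement with a near-zero bias, as in the study, indicate no systematic error but appreciable per-patient scatter.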

7.
Eur Radiol Exp ; 8(1): 93, 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39143405

ABSTRACT

Quantification of myocardial scar from late gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) images can be facilitated by automated artificial intelligence (AI)-based analysis. However, AI models are susceptible to domain shift, in which model performance degrades when the model is applied to data with different characteristics than the original training data. In this study, CycleGAN models were trained to translate local hospital data to the appearance of a public LGE CMR dataset. After domain adaptation, an AI scar quantification pipeline including myocardium segmentation, scar segmentation, and computation of scar burden, previously developed on the public dataset, was evaluated on an external test set of 44 patients clinically assessed for ischemic scar. The mean ± standard deviation Dice similarity coefficients between the manual and AI-predicted segmentations across all patients were similar to those previously reported: 0.76 ± 0.05 for myocardium and 0.75 ± 0.32 for scar (0.41 ± 0.12 for scar in scans with pathological findings). Bland-Altman analysis showed a mean bias in scar burden percentage of -0.62%, with limits of agreement from -8.4% to 7.17%. These results show the feasibility of deploying AI models, trained with public data, for LGE CMR quantification on local clinical data using unsupervised CycleGAN-based domain adaptation. RELEVANCE STATEMENT: Our study demonstrated that AI models trained on public databases can be applied to patient data acquired at a specific institution with different acquisition settings, without additional manual labor to obtain further training labels.
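A CycleGAN is far too large for a short example, but the purpose of the domain-adaptation step, reshaping local-hospital image intensities to resemble the public dataset before running the fixed pipeline, can be illustrated with plain histogram matching. This is a crude, intensity-only stand-in for the learned translation, and every array below is made up:

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source intensities so their empirical CDF matches the reference's."""
    _, s_idx, s_counts = np.unique(source.ravel(),
                                   return_inverse=True, return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    matched = np.interp(s_cdf, r_cdf, r_vals)  # source quantile -> reference value
    return matched[s_idx].reshape(source.shape)

rng = np.random.default_rng(0)
local_scan = rng.random((32, 32))                  # hypothetical local intensities, [0, 1)
public_style = 10.0 + 10.0 * rng.random((32, 32))  # hypothetical public intensities, [10, 20)
adapted = match_histogram(local_scan, public_style)
print(adapted.min() >= 10.0, adapted.max() < 20.0)  # now in the public intensity range
```

A learned translation such as CycleGAN goes further by also adapting spatially varying appearance, which is why it is preferred for this task.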


Subjects
Cicatrix , Magnetic Resonance Imaging , Humans , Cicatrix/diagnostic imaging , Male , Female , Magnetic Resonance Imaging/methods , Middle Aged , Contrast Media , Aged , Image Interpretation, Computer-Assisted/methods , Artificial Intelligence