Results 1 - 5 of 5
1.
Article in English | MEDLINE | ID: mdl-39041007

ABSTRACT

The quality of brain MRI volumes is often compromised by motion artifacts arising from intricate respiratory patterns and involuntary head movements, manifesting as blurring and ghosting that markedly degrade imaging quality. In this study, we introduce a 3D deep learning framework to restore brain MR volumes afflicted by motion artifacts. The framework integrates a densely connected 3D U-net architecture with generative adversarial network (GAN)-informed training and a novel volumetric reconstruction loss function tailored to the 3D GAN to enhance volume quality. Our methodology is validated through comprehensive experiments on a diverse set of motion artifact-affected MR volumes. After motion correction, the generated high-quality MR volumes have volumetric signatures comparable to those of motion-free MR volumes. This underscores the potential of this 3D deep learning system to aid in the correction of motion artifacts in brain MR volumes, highlighting a promising avenue for advanced clinical applications.
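The framework described above combines a 3D U-net generator, an adversarial discriminator, and a volumetric reconstruction loss. The following is a minimal PyTorch sketch of that training pattern, not the authors' implementation: the tiny network shapes, the L1 stand-in for the volumetric reconstruction loss, and the weighting factor lambda_rec are illustrative assumptions.

```python
# Hedged sketch of GAN-informed 3D motion correction: adversarial term + voxel-wise
# reconstruction term. Toy shapes; L1 stands in for the paper's volumetric loss.
import torch
import torch.nn as nn

class TinyGenerator3D(nn.Module):
    """Stand-in for the densely connected 3D U-net generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator3D(nn.Module):
    """3D patch discriminator scoring volumes as real (motion-free) or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(16, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

gen, disc = TinyGenerator3D(), TinyDiscriminator3D()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
adv_loss, rec_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()
lambda_rec = 100.0  # assumed weighting of the reconstruction term

corrupted = torch.randn(1, 1, 32, 32, 32)  # motion-corrupted volume (toy size)
clean = torch.randn(1, 1, 32, 32, 32)      # paired motion-free volume

# Discriminator step: real volumes labeled 1, generated volumes labeled 0.
fake = gen(corrupted).detach()
d_real, d_fake = disc(clean), disc(fake)
loss_d = adv_loss(d_real, torch.ones_like(d_real)) + adv_loss(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: fool the discriminator while matching the motion-free volume.
fake = gen(corrupted)
d_fake = disc(fake)
loss_g = adv_loss(d_fake, torch.ones_like(d_fake)) + lambda_rec * rec_loss(fake, clean)
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```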

2.
Article in English | MEDLINE | ID: mdl-38715792

ABSTRACT

Data scarcity and data imbalance are two major challenges in training deep learning models on medical images, such as brain tumor MRI data. Recent advancements in generative artificial intelligence have opened new possibilities for synthetically generating MRI data, including brain tumor MRI scans, offering a potential solution to the data scarcity problem by enhancing training data availability. This work focused on adapting 2D latent diffusion models to generate 3D multi-contrast brain tumor MRI data with a tumor mask as the condition. The framework comprises two components: a 3D autoencoder model for perceptual compression and a conditional 3D Diffusion Probabilistic Model (DPM) for generating high-quality and diverse multi-contrast brain tumor MRI samples, guided by a conditional tumor mask. Unlike existing works that generate either 2D multi-contrast or 3D single-contrast MRI samples, our models generate multi-contrast 3D MRI samples. We also integrated a conditioning module within the UNet backbone of the DPM to capture the semantic, class-dependent data distribution driven by the provided tumor mask, so that samples are generated for a specific brain tumor mask. We trained our models on two brain tumor datasets: The Cancer Genome Atlas (TCGA) public dataset and an internal dataset from the University of Texas Southwestern Medical Center (UTSW). The models generated high-quality 3D multi-contrast brain tumor MRI samples with the tumor location aligned with the input condition mask. The quality of the generated images was evaluated using the Fréchet Inception Distance (FID) score. This work has the potential to mitigate the scarcity of brain tumor data and improve the performance of deep learning models involving brain tumor MRI data.
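As a rough illustration of the two-stage design described above (a 3D autoencoder for perceptual compression plus a mask-conditioned denoiser operating on the latent), here is a minimal PyTorch sketch. It is not the paper's implementation: the toy channel counts, the conditioning-by-concatenation scheme, and the DDPM noise schedule are illustrative assumptions.

```python
# Hedged sketch of conditional 3D latent diffusion: encode a multi-contrast volume,
# noise the latent (DDPM forward process), and train a mask-conditioned epsilon predictor.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Autoencoder3D(nn.Module):
    """Toy perceptual-compression stage: 4 MRI contrasts -> 4-channel latent at half resolution."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Conv3d(4, 4, 4, stride=2, padding=1)
        self.dec = nn.ConvTranspose3d(4, 4, 4, stride=2, padding=1)
    def encode(self, x):
        return self.enc(x)
    def decode(self, z):
        return self.dec(z)

class MaskConditionedDenoiser(nn.Module):
    """Toy stand-in for the conditional UNet: predicts noise from [latent, mask, timestep]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(4 + 1 + 1, 32, 3, padding=1), nn.SiLU(),
            nn.Conv3d(32, 4, 3, padding=1),
        )
    def forward(self, z_t, mask, t):
        t_map = t.view(-1, 1, 1, 1, 1).expand(-1, 1, *z_t.shape[2:])  # broadcast timestep
        return self.net(torch.cat([z_t, mask, t_map], dim=1))

ae, eps_model = Autoencoder3D(), MaskConditionedDenoiser()
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

x = torch.randn(1, 4, 64, 64, 64)                       # 4-contrast brain volume (toy size)
mask = (torch.rand(1, 1, 64, 64, 64) > 0.95).float()    # binary tumor mask condition

z0 = ae.encode(x)                                        # latent, shape (1, 4, 32, 32, 32)
mask_lat = F.interpolate(mask, size=z0.shape[2:], mode="nearest")  # mask at latent resolution

t = torch.randint(0, T, (1,))
noise = torch.randn_like(z0)
a_bar = alphas_cumprod[t].view(-1, 1, 1, 1, 1)
z_t = a_bar.sqrt() * z0 + (1 - a_bar).sqrt() * noise     # forward diffusion step

pred = eps_model(z_t, mask_lat, t.float() / T)           # predict the added noise
loss = F.mse_loss(pred, noise)                           # standard epsilon-prediction objective
```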

5.
Neurooncol Adv; 2(1): vdaa066, 2020.
Article in English | MEDLINE | ID: mdl-32705083

ABSTRACT

BACKGROUND: One of the most important recent discoveries in brain glioma biology has been the identification of the isocitrate dehydrogenase (IDH) mutation and 1p/19q co-deletion status as markers for therapy and prognosis. 1p/19q co-deletion is the defining genomic marker for oligodendrogliomas and confers a better prognosis and treatment response than gliomas without it. Our group has previously developed a highly accurate deep-learning network for determining IDH mutation status using T2-weighted (T2w) MRI only. The purpose of this study was to develop a similar 1p/19q deep-learning classification network. METHODS: Multiparametric brain MRI and corresponding genomic information were obtained for 368 subjects from The Cancer Imaging Archive and The Cancer Genome Atlas. 1p/19q co-deletions were present in 130 subjects; 238 subjects were non-co-deleted. A T2w image-only network (1p/19q-net) was developed to perform 1p/19q co-deletion status classification and simultaneous single-label tumor segmentation using 3D-Dense-UNets. Three-fold cross-validation was performed to assess the generalizability of network performance. Receiver operating characteristic analysis was also performed. Dice scores were computed to determine tumor segmentation accuracy. RESULTS: 1p/19q-net demonstrated a mean cross-validation accuracy of 93.46% across the 3 folds (93.4%, 94.35%, and 92.62%, SD = 0.8) in predicting 1p/19q co-deletion status, with a sensitivity and specificity of 0.90 ± 0.003 and 0.95 ± 0.01, respectively, and a mean area under the curve of 0.95 ± 0.01. The whole tumor segmentation mean Dice score was 0.80 ± 0.007. CONCLUSION: We demonstrate high 1p/19q co-deletion classification accuracy using only T2w MR images. This represents an important milestone toward using MRI to predict glioma histology, prognosis, and response to treatment.
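For readers unfamiliar with the simultaneous classification-and-segmentation setup described above, the sketch below shows the general pattern of one 3D backbone feeding both a voxel-wise segmentation head and a binary 1p/19q classification head, trained with a combined Dice and cross-entropy objective. It is a hedged, minimal PyTorch illustration, not the published 1p/19q-net; the architecture sizes and loss combination are assumptions.

```python
# Hedged sketch: shared 3D backbone with a segmentation head and a classification head.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHead3DNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.seg_head = nn.Conv3d(16, 1, 1)   # voxel-wise whole-tumor logits
        self.cls_head = nn.Linear(16, 2)      # co-deleted vs non-co-deleted
    def forward(self, x):
        feats = self.backbone(x)
        seg_logits = self.seg_head(feats)
        pooled = feats.mean(dim=(2, 3, 4))    # global average pooling over the volume
        cls_logits = self.cls_head(pooled)
        return seg_logits, cls_logits

def dice_loss(logits, target, eps=1e-6):
    """Soft Dice loss for the binary tumor mask."""
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum()
    return 1 - (2 * inter + eps) / (probs.sum() + target.sum() + eps)

net = TwoHead3DNet()
t2w = torch.randn(2, 1, 32, 32, 32)                          # batch of T2w volumes (toy size)
tumor_mask = (torch.rand(2, 1, 32, 32, 32) > 0.9).float()    # whole-tumor ground truth
codeletion = torch.tensor([1, 0])                            # 1 = 1p/19q co-deleted

seg_logits, cls_logits = net(t2w)
loss = dice_loss(seg_logits, tumor_mask) + F.cross_entropy(cls_logits, codeletion)
loss.backward()
```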
