1.
ArXiv; 2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38947917

ABSTRACT

Fiber orientation distributions (FODs) are a popular model for representing diffusion MRI (dMRI) data. However, imaging artifacts such as susceptibility-induced distortion in dMRI can cause signal loss and lead to corrupted reconstruction of FODs, which prevents successful fiber tracking and connectivity analysis in affected brain regions such as the brain stem. Generative models, such as diffusion models, have been successfully applied to various image restoration tasks. However, their application to FOD images poses unique challenges, since FODs are four-dimensional data represented by spherical harmonics (SPHARM), with the fourth dimension exhibiting order-related dependency. In this paper, we propose a novel diffusion model for FOD restoration that can recover the signal loss caused by distortion artifacts. We use volume-order encoding to enhance the ability of the diffusion model to generate individual FOD volumes at all SPHARM orders. Moreover, when generating each individual FOD volume, we add cross-attention features extracted across all SPHARM orders to capture the order-related dependency across FOD volumes. We also condition the diffusion model on low-distortion FODs surrounding high-distortion areas to maintain the geometric coherence of the generated FODs. We trained and tested our model using data from the UK Biobank (n = 1315). On a test set with ground truth (n = 43), we demonstrate the high accuracy of the generated FODs in terms of root mean square errors of FOD volumes and angular errors of FOD peaks. We also apply our method to a test set with large distortion in the brain stem area (n = 1172) and demonstrate its efficacy in restoring FOD integrity and, hence, greatly improving tractography performance in affected brain regions.
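One of the evaluation metrics named in this abstract is the angular error of FOD peaks. A minimal sketch of how such an error could be computed between two peak directions is shown below; the function name and the antipodal sign-invariance convention are illustrative assumptions, not the paper's actual code.

```python
import math

def angular_error_deg(u, v):
    """Angle in degrees between two FOD peak directions.

    FOD peaks have no intrinsic sign, so a peak and its antipode are
    treated as the same direction (hence the abs() on the dot product).
    """
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    # Clamp to guard against floating-point overshoot beyond [0, 1].
    c = min(1.0, abs(dot) / (norm_u * norm_v))
    return math.degrees(math.acos(c))
```

For example, two orthogonal peaks give 90 degrees, while a peak and its antipode give 0 degrees under the sign-invariance convention.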

2.
Med Image Anal; 97: 103276, 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39068830

ABSTRACT

Radiation therapy plays a crucial role in cancer treatment, necessitating the precise delivery of radiation to tumors while sparing healthy tissue over multiple treatment days. Computed tomography (CT) is integral to treatment planning, offering the electron density data crucial for accurate dose calculations. However, accurately representing patient anatomy is challenging, especially in adaptive radiotherapy, where CT is not acquired daily. Magnetic resonance imaging (MRI) provides superior soft-tissue contrast but lacks electron density information, while cone beam CT (CBCT) lacks direct electron density calibration and is mainly used for patient positioning. Adopting MRI-only or CBCT-based adaptive radiotherapy eliminates the need for CT planning but presents challenges. Synthetic CT (sCT) generation techniques aim to address these challenges by using image synthesis to bridge the gap between MRI, CBCT, and CT. The SynthRAD2023 challenge was organized to compare synthetic CT generation methods using multi-center ground truth data from 1080 patients, divided into two tasks: (1) MRI-to-CT and (2) CBCT-to-CT. The evaluation included image similarity and dose-based metrics from proton and photon plans. The challenge attracted significant participation, with 617 registrations and 22/17 valid submissions for tasks 1/2. Top-performing teams achieved high structural similarity indices (≥0.87/0.90) and gamma pass rates for photon (≥98.1%/99.0%) and proton (≥97.3%/97.0%) plans. However, no significant correlation was found between image similarity metrics and dose accuracy, emphasizing the need for dose evaluation when assessing the clinical applicability of sCT. SynthRAD2023 facilitated the investigation and benchmarking of sCT generation techniques, providing insights for developing MRI-only and CBCT-based adaptive radiotherapy. It showcased the growing capacity of deep learning to produce high-quality sCT, reducing reliance on conventional CT for treatment planning.
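The structural similarity index (SSIM) reported above is a standard image-similarity metric. As a toy illustration only, here is the global SSIM formula on flattened intensity lists; the challenge's actual evaluation (masking, windowing, implementation details) is not reproduced here, and the function name is an assumption.

```python
def ssim_global(x, y, data_range, k1=0.01, k2=0.03):
    """Global (single-window) SSIM between two equal-length intensity lists.

    Uses the standard stabilizing constants c1 = (k1*L)^2 and c2 = (k2*L)^2,
    where L is the dynamic range of the data.
    """
    c1 = (k1 * data_range) ** 2
    c2 = (k2 * data_range) ** 2
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))
```

An image compared with itself yields an SSIM of exactly 1.0; dissimilar images score lower, which is why the ≥0.87/0.90 values above indicate close sCT-to-CT agreement.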

3.
Comput Diffus MRI; 14328: 58-69, 2023.
Article in English | MEDLINE | ID: mdl-38500569

ABSTRACT

Susceptibility-induced distortion is a common artifact in diffusion MRI (dMRI), which deforms the dMRI data locally and poses significant challenges for connectivity analysis. While various methods have been proposed to correct the distortion, residual distortions often persist to varying degrees across brain regions and subjects. A voxel-level residual distortion severity map can thus be a valuable tool to better inform downstream connectivity analysis. To fill this gap in dMRI analysis, we propose a supervised deep-learning network to predict a severity map of residual distortion. The training process is supervised using the structural similarity index measure (SSIM) of the fiber orientation distribution (FOD) in two opposite phase encoding (PE) directions. Only b0 images and related outputs from the distortion correction methods are needed as inputs at test time. The proposed method is applicable to large-scale datasets such as the UK Biobank, the Adolescent Brain Cognitive Development (ABCD) study, and other emerging studies that have complete dMRI data in only one PE direction but acquire b0 images in both PE directions. In our experiments, we trained the proposed model on the Lifespan Human Connectome Project Aging (HCP-Aging) dataset (n = 662) and applied the trained model to data (n = 1330) from the UK Biobank. Our results show low training, validation, and test errors, and the severity map correlates strongly with an FOD integrity measure in both the HCP-Aging and UK Biobank data. The proposed method is also highly efficient and can generate the severity map in around 1 second per subject.
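The supervision target above is an SSIM computed between data from two opposite PE directions, turned into a voxel-level severity map. A 1-D toy sketch of that idea, defining severity as 1 minus a windowed local SSIM, is given below; the function name, window size, and 1-D simplification are all assumptions, not the paper's implementation (which operates on 3-D FOD data).

```python
def severity_map(a, b, win=7, data_range=1.0):
    """1 - local SSIM per position over a sliding window.

    Higher values mean less agreement between the two signals, i.e. more
    residual distortion at that location. 1-D stand-in for a voxel map.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    half = win // 2
    out = []
    for i in range(len(a)):
        lo, hi = max(0, i - half), min(len(a), i + half + 1)
        xa, xb = a[lo:hi], b[lo:hi]
        n = len(xa)
        ma, mb = sum(xa) / n, sum(xb) / n
        va = sum((t - ma) ** 2 for t in xa) / n
        vb = sum((t - mb) ** 2 for t in xb) / n
        cov = sum((p - ma) * (q - mb) for p, q in zip(xa, xb)) / n
        s = ((2 * ma * mb + c1) * (2 * cov + c2)) / \
            ((ma * ma + mb * mb + c1) * (va + vb + c2))
        out.append(1.0 - s)
    return out
```

Identical inputs yield a severity of 0 everywhere; regions where the two signals disagree produce severity values approaching 1.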
