Results 1 - 4 of 4
1.
Magn Reson Med; 88(2): 633-650, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35436357

ABSTRACT

PURPOSE: To rapidly obtain high-resolution T2, T2*, and quantitative susceptibility mapping (QSM) source separation maps with whole-brain coverage and high geometric fidelity. METHODS: We propose Blip Up-Down Acquisition for Spin And Gradient Echo imaging (BUDA-SAGE), an efficient EPI sequence for quantitative mapping. The acquisition includes multiple T2*-, T2'-, and T2-weighted contrasts. We alternate the phase-encoding polarities across the interleaved shots in this multi-shot, navigator-free acquisition. A field map estimated from interim reconstructions was incorporated into the joint multi-shot EPI reconstruction with a structured low-rank constraint to eliminate distortion. A self-supervised neural network (NN), MR-Self2Self (MR-S2S), was used to perform denoising to boost SNR. Slider encoding allowed us to reach 1 mm isotropic resolution by performing super-resolution reconstruction on volumes acquired with 2 mm slice thickness. Quantitative T2 (=1/R2) and T2* (=1/R2*) maps were obtained using Bloch dictionary matching on the reconstructed echoes. QSM was estimated using nonlinear dipole inversion on the gradient echoes. Starting from the estimated R2/R2* maps, R2' information was derived and used in source separation QSM reconstruction, which provided additional para- and dia-magnetic susceptibility maps. RESULTS: In vivo results demonstrate the ability of BUDA-SAGE to provide whole-brain, distortion-free, high-resolution, multi-contrast images and quantitative T2/T2* maps, as well as para- and dia-magnetic susceptibility maps. Estimated quantitative maps showed values comparable to conventional mapping methods in phantom and in vivo measurements. CONCLUSION: BUDA-SAGE acquisition with self-supervised denoising and Slider encoding enables rapid, distortion-free, whole-brain T2/T2* mapping at 1 mm isotropic resolution in under 90 s.
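The dictionary-matching step described in the abstract can be illustrated with a toy routine. This is a hedged sketch, not the paper's implementation: it assumes a simple mono-exponential T2 decay model (the actual method uses Bloch-simulated dictionaries over the full SAGE echo train), and the function names, echo times, and T2 grid are all illustrative.

```python
import numpy as np

# Toy dictionary matching for T2 estimation from multi-echo magnitudes,
# assuming mono-exponential decay S(TE) = exp(-TE / T2).

def build_dictionary(echo_times, t2_values):
    # One normalized decay curve (dictionary atom) per candidate T2.
    atoms = np.exp(-echo_times[None, :] / t2_values[:, None])
    return atoms / np.linalg.norm(atoms, axis=1, keepdims=True)

def match_t2(signal, echo_times, t2_values):
    # Return the candidate T2 whose atom best correlates with the signal.
    atoms = build_dictionary(echo_times, t2_values)
    s = np.asarray(signal, dtype=float)
    s = s / np.linalg.norm(s)
    return t2_values[np.argmax(atoms @ s)]

echo_times = np.array([10.0, 30.0, 50.0, 70.0])   # echo times in ms
t2_grid = np.linspace(20.0, 200.0, 181)           # candidate T2 values, 1 ms steps
signal = np.exp(-echo_times / 80.0)               # noiseless signal, true T2 = 80 ms
print(round(float(match_t2(signal, echo_times, t2_grid)), 3))  # 80.0
```

In practice the same argmax-over-correlations search runs per voxel, with atoms simulated from the full sequence rather than a closed-form exponential.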


Subjects
Brain Mapping , Magnetic Resonance Imaging , Brain/diagnostic imaging , Brain Mapping/methods , Image Processing, Computer-Assisted/methods , Magnetic Phenomena , Magnetic Resonance Imaging/methods , Phantoms, Imaging
2.
bioRxiv; 2024 May 16.
Article in English | MEDLINE | ID: mdl-38798473

ABSTRACT

Significance: Voltage imaging is a powerful tool for studying the dynamics of neuronal activities in the brain. However, voltage imaging data are fundamentally corrupted by severe Poisson noise in the low-photon regime, which hinders the accurate extraction of neuronal activities. Self-supervised deep learning denoising methods have shown great potential in addressing the challenges in low-photon voltage imaging without the need for ground truth, but usually suffer from a tradeoff between spatial and temporal performance. Aim: We present DeepVID v2, a novel self-supervised denoising framework with decoupled spatial and temporal enhancement capability to significantly augment low-photon voltage imaging. Approach: DeepVID v2 is built on our original DeepVID framework [1,2], which performs frame-based denoising by utilizing a sequence of frames around the central frame targeted for denoising to leverage temporal information and ensure consistency. The network further integrates multiple blind pixels in the central frame to enrich the learning of local spatial information. Additionally, DeepVID v2 introduces a new edge extraction branch to capture fine structural details and learn high-spatial-resolution information. Results: We demonstrate that DeepVID v2 overcomes the tradeoff between spatial and temporal performance, achieving superior denoising capability in resolving both high-resolution spatial structures and rapid temporal neuronal activities. We further show that DeepVID v2 generalizes to different imaging conditions, including time-series measurements with various signal-to-noise ratios (SNRs) and extreme low-photon conditions. Conclusions: Our results underscore DeepVID v2 as a promising tool for enhancing voltage imaging. This framework has the potential to generalize to other low-photon imaging modalities and greatly facilitate the study of neuronal activities in the brain.
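The blind-pixel idea in the central frame can be sketched as a masking step: a random subset of pixels is replaced by neighboring values, and the self-supervised loss is evaluated only at those locations, so the network cannot simply copy its input. The masking fraction, neighbor-replacement rule, and function name below are illustrative assumptions, not DeepVID v2's actual scheme.

```python
import numpy as np

def blind_pixel_mask(frame, fraction=0.02, seed=None):
    # Randomly select a fraction of pixels to "blind out", then replace
    # each selected pixel with one of its 4-neighbors. Training would
    # penalize the network's prediction only at the blinded locations.
    rng = np.random.default_rng(seed)
    h, w = frame.shape
    mask = rng.random((h, w)) < fraction
    masked = frame.copy()
    ys, xs = np.nonzero(mask)
    offsets = np.array([(-1, 0), (1, 0), (0, -1), (0, 1)])
    picks = offsets[rng.integers(0, 4, size=len(ys))]
    ny = np.clip(ys + picks[:, 0], 0, h - 1)
    nx = np.clip(xs + picks[:, 1], 0, w - 1)
    masked[ys, xs] = frame[ny, nx]
    return masked, mask

# Synthetic low-photon frame (Poisson noise dominates at ~5 photons/pixel).
frame = np.random.default_rng(0).poisson(5.0, size=(64, 64)).astype(float)
masked, mask = blind_pixel_mask(frame, fraction=0.05, seed=0)
# The self-supervised loss would be restricted to the blinded pixels, e.g.:
# loss = ((net(masked) - frame)[mask] ** 2).mean()
print(masked.shape, int(mask.sum()))
```

The surrounding temporal frames would be fed to the network unmasked, which is what decouples the temporal branch from this spatial blind-pixel constraint.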

3.
Radiol Phys Technol; 17(2): 367-374, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38413510

ABSTRACT

This study aimed to assess the subjective and objective image quality of low-dose computed tomography (CT) images processed using a self-supervised deep learning denoising algorithm. We trained the self-supervised denoising model on low-dose CT images of 40 patients and applied the model to CT images of another 30 patients. Image quality, in terms of noise and edge sharpness, was rated on a 5-point scale by two radiologists. The coefficient of variation, contrast-to-noise ratio (CNR), and signal-to-noise ratio (SNR) were calculated. The values for the self-supervised denoising model were compared with those for the original low-dose CT images and for CT images processed using conventional denoising algorithms (non-local means, block-matching and 3D filtering, and total variation minimization-based algorithms). The mean (standard deviation) scores of local and overall noise levels for the self-supervised denoising algorithm were 3.90 (0.40) and 3.93 (0.51), respectively, outperforming the original images and the other algorithms. Similarly, the mean scores of local and overall edge sharpness for the self-supervised denoising algorithm were 3.90 (0.40) and 3.75 (0.47), respectively, surpassing those of the original images and the other algorithms. The CNR and SNR for the self-supervised denoising algorithm were higher than those for the original images but slightly lower than those for the other algorithms. Our findings indicate the potential applicability of the self-supervised denoising algorithm to low-dose CT images in clinical settings.
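The objective metrics reported above can be computed with standard ROI-based definitions. This sketch assumes the common formulations (SNR as ROI mean over ROI standard deviation; CNR as the absolute mean difference between two ROIs over the background standard deviation); the study's exact ROI placement and definitions may differ, and the sample data here are synthetic.

```python
import numpy as np

# ROI-based image-quality metrics on synthetic data:
#   SNR = mean(signal ROI) / std(signal ROI)
#   CNR = |mean(signal ROI) - mean(background ROI)| / std(background ROI)

def snr(roi):
    return roi.mean() / roi.std()

def cnr(signal_roi, background_roi):
    return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()

rng = np.random.default_rng(1)
signal_roi = rng.normal(100.0, 5.0, size=1000)      # e.g., a tissue ROI
background_roi = rng.normal(20.0, 5.0, size=1000)   # e.g., a background ROI
print(round(snr(signal_roi), 1), round(cnr(signal_roi, background_roi), 1))
```

Under these definitions, a denoiser that smooths noise raises SNR and CNR, while one that blurs edges can raise them at the cost of the sharpness scores, which is the tradeoff the study's comparison probes.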


Subjects
Algorithms , Image Processing, Computer-Assisted , Radiation Dosage , Signal-To-Noise Ratio , Tomography, X-Ray Computed , Tomography, X-Ray Computed/methods , Humans , Image Processing, Computer-Assisted/methods , Female , Male , Middle Aged , Aged , Adult
4.
Phenomics; 1(6): 257-268, 2021 Dec.
Article in English | MEDLINE | ID: mdl-36939784

ABSTRACT

Lung nodule classification based on low-dose computed tomography (LDCT) images has attracted major attention thanks to the reduced radiation dose and the potential for early diagnosis of lung cancer through LDCT-based screening. However, LDCT images suffer from severe noise, which strongly degrades the performance of lung nodule classification. Current methods that combine denoising and classification tasks typically require the corresponding normal-dose CT (NDCT) images as supervision for the denoising task, which is impractical in the context of clinical diagnosis using LDCT. To jointly train these two tasks in a unified framework without NDCT images, this paper introduces a novel self-supervised method, termed strided Noise2Neighbors (SN2N), for blind medical image denoising and lung nodule classification, where the supervision is generated from the noisy input images themselves. More specifically, SN2N constructs supervision for LDCT denoising from neighboring pixels and therefore no longer requires NDCT images. SN2N enables joint training of the LDCT denoising and lung nodule classification tasks by using a self-supervised loss for denoising and a cross-entropy loss for classification. Extensive experimental results on the Mayo LDCT dataset demonstrate that SN2N achieves competitive performance compared with supervised learning methods that use paired NDCT images as supervision. Moreover, our results on the LIDC-IDRI dataset show that joint training of LDCT denoising and lung nodule classification significantly improves the performance of LDCT-based lung nodule classification.
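The neighbor-based self-supervision described above can be illustrated with a minimal subsampling sketch in the spirit of Noise2Neighbors-style methods: two sub-images are drawn from disjoint pixels of each 2x2 block of a single noisy image, and one serves as the training target for the other, since their noise realizations are independent while their underlying content is nearly identical. The fixed checkerboard sampling rule and function name are assumptions; the paper's strided sampler may differ.

```python
import numpy as np

def neighbor_pair(noisy):
    # Split one noisy image into two half-resolution sub-images drawn
    # from disjoint pixels of each 2x2 block.
    h, w = noisy.shape
    h2, w2 = h - h % 2, w - w % 2
    blocks = noisy[:h2, :w2].reshape(h2 // 2, 2, w2 // 2, 2)
    g1 = blocks[:, 0, :, 0]   # top-left pixel of each block -> network input
    g2 = blocks[:, 1, :, 1]   # bottom-right pixel -> self-supervised target
    return g1, g2

rng = np.random.default_rng(0)
clean = np.full((8, 8), 10.0)
noisy = rng.poisson(clean).astype(float)   # independent noise per pixel
g1, g2 = neighbor_pair(noisy)
# Training would minimize ||f(g1) - g2||^2 over many images, with the
# classification head trained jointly on the denoised output.
print(g1.shape, g2.shape)  # (4, 4) (4, 4)
```

Because the target is itself noisy but unbiased, minimizing this loss in expectation drives the network toward the clean signal without any NDCT supervision.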
