1.
Med Image Anal; 86: 102787, 2023 May.
Article in English | MEDLINE | ID: mdl-36933386

ABSTRACT

X-ray computed tomography (CT) and positron emission tomography (PET) are two of the most commonly used medical imaging technologies for the evaluation of many diseases. Full-dose imaging for CT and PET ensures image quality but raises concerns about the potential health risks of radiation exposure. The tension between reducing radiation exposure and maintaining diagnostic performance can be addressed effectively by reconstructing low-dose CT (L-CT) and low-dose PET (L-PET) images to the quality of their full-dose counterparts (F-CT and F-PET). In this paper, we propose an Attention-encoding Integrated Generative Adversarial Network (AIGAN) to achieve efficient and universal full-dose reconstruction for L-CT and L-PET images. AIGAN consists of three modules: a cascade generator, a dual-scale discriminator and a multi-scale spatial fusion module (MSFM). A sequence of consecutive L-CT (L-PET) slices is first fed into the cascade generator, which follows a generation-encoding-generation pipeline. The generator plays a zero-sum game with the dual-scale discriminator over two stages: a coarse stage and a fine stage. In both stages, the generator produces estimated F-CT (F-PET) images that are as close to the original F-CT (F-PET) images as possible. After the fine stage, the estimated fine full-dose images are fed into the MSFM, which exploits inter- and intra-slice structural information, to produce the final generated full-dose images. Experimental results show that the proposed AIGAN achieves state-of-the-art performance on commonly used metrics and satisfies clinical reconstruction requirements.
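
A minimal PyTorch sketch of the coarse-to-fine adversarial scheme the abstract describes (cascade generator with a generation-encoding-generation pipeline, dual-scale discriminator, and a combined adversarial plus reconstruction loss). All module layouts, channel sizes and loss weights below are illustrative assumptions, not the authors' released implementation, and the MSFM stage is omitted.

```python
# Sketch of an AIGAN-style coarse-to-fine adversarial reconstruction step.
# Layer counts, channel widths and the 10.0 loss weight are assumptions.
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.InstanceNorm2d(out_ch),
            nn.LeakyReLU(0.2, inplace=True),
        )
    def forward(self, x):
        return self.block(x)

class CascadeGenerator(nn.Module):
    """Generation-encoding-generation pipeline: a coarse generator, an encoder
    over the coarse estimate, and a fine generator that refines it."""
    def __init__(self, in_ch=3, feat=32):
        super().__init__()
        self.coarse = nn.Sequential(ConvBlock(in_ch, feat), nn.Conv2d(feat, 1, 1))
        self.encode = ConvBlock(1, feat)
        self.fine = nn.Sequential(ConvBlock(feat + in_ch, feat), nn.Conv2d(feat, 1, 1))
    def forward(self, slices):
        coarse = self.coarse(slices)                     # coarse full-dose estimate
        feats = self.encode(coarse)                      # encode the coarse result
        fine = self.fine(torch.cat([feats, slices], 1))  # refine with the input stack
        return coarse, fine

class DualScaleDiscriminator(nn.Module):
    """Scores an image at full resolution and at half resolution."""
    def __init__(self, feat=32):
        super().__init__()
        self.full = nn.Sequential(ConvBlock(1, feat), nn.Conv2d(feat, 1, 1))
        self.half = nn.Sequential(ConvBlock(1, feat), nn.Conv2d(feat, 1, 1))
        self.down = nn.AvgPool2d(2)
    def forward(self, img):
        return self.full(img), self.half(self.down(img))

# One illustrative training step on random tensors standing in for
# consecutive low-dose slices and the matching full-dose target slice.
G, D = CascadeGenerator(), DualScaleDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
adv, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

low_dose = torch.randn(2, 3, 64, 64)   # 3 consecutive L-CT / L-PET slices
full_dose = torch.randn(2, 1, 64, 64)  # full-dose target for the centre slice

# Discriminator update: real vs. generated, at both scales.
coarse, fine = G(low_dose)
d_loss = sum(adv(s, torch.ones_like(s)) for s in D(full_dose)) + \
         sum(adv(s, torch.zeros_like(s)) for s in D(fine.detach()))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator update: fool the discriminator, stay close to the full-dose target.
g_loss = sum(adv(s, torch.ones_like(s)) for s in D(fine)) + \
         10.0 * (l1(coarse, full_dose) + l1(fine, full_dose))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```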


Subjects
Image Processing, Computer-Assisted; Positron-Emission Tomography; Humans; Image Processing, Computer-Assisted/methods; Positron-Emission Tomography/methods; Tomography, X-Ray Computed/methods; Attention
2.
Med Image Anal; 78: 102389, 2022 May.
Article in English | MEDLINE | ID: mdl-35219940

ABSTRACT

Automatic segmentation of cardiac magnetic resonance imaging (MRI) facilitates efficient and accurate volume measurement in clinical applications. However, due to anisotropic resolution, ambiguous borders and complicated shapes, existing methods suffer degraded accuracy and robustness in cardiac MRI segmentation. In this paper, we propose an enhanced Deformable U-Net (DeU-Net) for 3D cardiac cine MRI segmentation, composed of three modules: a Temporal Deformable Aggregation Module (TDAM), an Enhanced Deformable Attention Network (EDAN), and a Probabilistic Noise Correction Module (PNCM). The TDAM first takes consecutive cardiac MR slices (a target slice and its neighboring reference slices) as input and extracts spatio-temporal information with an offset prediction network to generate fused features for the target slice. The fused features are then fed into the EDAN, which exploits several flexible deformable convolutional layers and produces clear borders for every segmentation map. A Multi-Scale Attention Module (MSAM) within the EDAN captures long-range dependencies between features at different scales. Meanwhile, the PNCM treats the fused features as a distribution to quantify uncertainty. Experimental results show that DeU-Net achieves state-of-the-art performance on commonly used evaluation metrics on the Extended ACDC dataset and competitive performance on two other datasets, validating its robustness and generalization.
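
A minimal PyTorch sketch of the building blocks the abstract describes: deformable temporal aggregation of neighbouring slices (TDAM-like), a multi-scale attention gate (MSAM-like), and a probabilistic head modelling the fused features as a Gaussian (PNCM-like). Channel sizes, slice counts and the exact layer layout are assumptions for illustration, not the paper's implementation.

```python
# Sketch of DeU-Net-style components; shapes and layouts are illustrative only.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class TemporalAggregation(nn.Module):
    """TDAM-like module: predict offsets from the stacked target + reference
    slices and deformably aggregate them into fused target-slice features."""
    def __init__(self, n_slices=3, feat=32):
        super().__init__()
        self.offsets = nn.Conv2d(n_slices, 2 * 3 * 3, 3, padding=1)  # 2 offsets per 3x3 tap
        self.deform = DeformConv2d(n_slices, feat, 3, padding=1)
    def forward(self, stack):                 # stack: (N, n_slices, H, W)
        return self.deform(stack, self.offsets(stack))

class MultiScaleAttention(nn.Module):
    """MSAM-like gate: mixes full- and half-resolution context into a
    per-pixel attention map applied to the input features."""
    def __init__(self, feat=32):
        super().__init__()
        self.local = nn.Conv2d(feat, feat, 3, padding=1)
        self.coarse = nn.Sequential(
            nn.AvgPool2d(2),
            nn.Conv2d(feat, feat, 3, padding=1),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
        )
        self.gate = nn.Sigmoid()
    def forward(self, x):
        return x * self.gate(self.local(x) + self.coarse(x))

class SegmentationHead(nn.Module):
    """EDAN-like decoder plus a PNCM-like head producing a mean and a
    log-variance over the fused features to quantify uncertainty."""
    def __init__(self, feat=32, n_classes=4):
        super().__init__()
        self.attn = MultiScaleAttention(feat)
        self.seg = nn.Conv2d(feat, n_classes, 1)
        self.mu = nn.Conv2d(feat, feat, 1)
        self.logvar = nn.Conv2d(feat, feat, 1)
    def forward(self, feats):
        feats = self.attn(feats)
        return self.seg(feats), self.mu(feats), self.logvar(feats)

# Illustrative forward pass: a target slice with its two neighbouring slices.
stack = torch.randn(1, 3, 128, 128)
feats = TemporalAggregation()(stack)
logits, mu, logvar = SegmentationHead()(feats)
print(logits.shape, mu.shape, logvar.shape)   # (1,4,128,128) (1,32,128,128) (1,32,128,128)
```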


Subjects
Image Processing, Computer-Assisted; Magnetic Resonance Imaging, Cine; Heart/diagnostic imaging; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Neural Networks, Computer