Results 1 - 3 of 3
1.
Magn Reson Imaging; 107: 69-79, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38237693

ABSTRACT

Current challenges in Magnetic Resonance Imaging (MRI) include long acquisition times and motion artifacts. To address these issues, under-sampled k-space acquisition has gained popularity as a fast imaging method. However, recovering fine details from under-sampled data remains challenging. In this study, we introduce a deep learning approach, DCT-Net, designed for dual-domain MRI reconstruction. DCT-Net integrates information from the image domain (IRM) and the frequency domain (FRM) through a novel Cross Attention Block (CAB) and Fusion Attention Block (FAB). These blocks enable precise feature extraction and adaptive fusion across both domains, substantially improving the quality of the reconstructed image. The adaptive interaction and fusion mechanisms of the CAB and FAB help the method capture distinctive features and optimize image reconstruction. Comprehensive ablation studies assess the contributions of these modules to reconstruction quality and accuracy. Experimental results on the FastMRI (2023) and Calgary-Campinas (2021) datasets demonstrate the superiority of our MRI reconstruction framework over other representative methods (most published in 2022 or 2023) in both qualitative and quantitative evaluations, on knee and brain datasets under 4× and 8× accelerated imaging.
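
To make the dual-domain idea concrete, here is a minimal PyTorch sketch of cross attention between image-domain and frequency-domain features of a zero-filled under-sampled input. The class names, layer sizes, and the residual fusion rule are illustrative assumptions for this sketch, not the authors' DCT-Net implementation.

```python
import torch
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    """Image-domain tokens attend to frequency-domain tokens (hypothetical CAB)."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_tokens, freq_tokens):
        fused, _ = self.attn(img_tokens, freq_tokens, freq_tokens)
        return self.norm(img_tokens + fused)               # residual fusion

class DualDomainNet(nn.Module):
    def __init__(self, dim: int = 32):
        super().__init__()
        self.img_enc = nn.Conv2d(1, dim, 3, padding=1)     # image-domain features
        self.freq_enc = nn.Conv2d(2, dim, 3, padding=1)    # real+imag k-space features
        self.cab = CrossAttentionBlock(dim)
        self.dec = nn.Conv2d(dim, 1, 3, padding=1)

    def forward(self, zero_filled):                        # (B, 1, H, W) magnitude
        k = torch.fft.fft2(zero_filled)                    # frequency-domain view
        freq = torch.cat([k.real, k.imag], dim=1)          # (B, 2, H, W)
        fi = self.img_enc(zero_filled)
        ff = self.freq_enc(freq)
        b, d, h, w = fi.shape
        tokens = self.cab(fi.flatten(2).transpose(1, 2),   # (B, H*W, dim) tokens
                          ff.flatten(2).transpose(1, 2))
        fused = tokens.transpose(1, 2).reshape(b, d, h, w)
        return zero_filled + self.dec(fused)               # residual reconstruction

net = DualDomainNet()
x = torch.randn(2, 1, 32, 32)   # toy zero-filled under-sampled inputs
print(net(x).shape)             # torch.Size([2, 1, 32, 32])
```

The residual output keeps the network focused on restoring the detail lost to under-sampling rather than re-synthesizing the whole image.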


Subjects
Artifacts; Magnetic Resonance Imaging; Brain/diagnostic imaging; Electric Power Supplies; Knee Joint; Image Processing, Computer-Assisted
2.
Comput Biol Med; 157: 106769, 2023 May.
Article in English | MEDLINE | ID: mdl-36947904

ABSTRACT

Image fusion techniques have been widely used for multi-modal medical image fusion tasks. Most existing methods aim to improve the overall quality of the fused image and do not focus on the more important textural details and inter-tissue contrast of lesions in the regions of interest (ROIs). This can distort important tumor ROI information and thus limits the applicability of the fused images in clinical practice. To improve fusion quality in the ROIs relevant to medical implications, we propose a multi-modal MRI fusion generative adversarial network (BTMF-GAN) for the task of multi-modal MRI fusion of brain tumors. Unlike existing deep learning approaches that focus on improving the global quality of the fused image, BTMF-GAN aims to balance tissue detail and structural contrast in brain tumors, the regions of interest crucial to many medical applications. Specifically, we employ a generator with a U-shaped nested structure and residual U-blocks (RSU) to enhance multi-scale feature extraction. To enhance and recalibrate the encoder features, the multi-perceptual-field adaptive transformer feature enhancement module (MRF-ATFE) is used between the encoder and the decoder in place of a skip connection. To increase contrast between tumor tissues in the fused image, a mask-part block is introduced to fragment the source and fused images, on which we define a novel salient loss function. Qualitative and quantitative analysis of the results on public and clinical datasets demonstrates the superiority of the proposed approach over many other commonly used fusion methods.
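
As a rough illustration of the mask-based salient loss idea, the sketch below weights the fusion error inside a (hypothetical) binary tumor mask more heavily than the background error, so tumor texture and contrast dominate the objective. The max-of-sources target, the L1 distance, and the roi_weight value are all assumptions made for the sketch, not the BTMF-GAN objective.

```python
import torch
import torch.nn.functional as F

def salient_fusion_loss(fused, src_a, src_b, mask, roi_weight=5.0):
    """fused, src_a, src_b: (B, 1, H, W) images; mask: (B, 1, H, W) in {0, 1},
    where 1 marks the tumor ROI. The mask fragments both the sources and the
    fused image, and the ROI term is weighted more heavily."""
    target = torch.maximum(src_a, src_b)    # keep the brighter source detail
    roi_err = F.l1_loss(fused * mask, target * mask)
    bg_err = F.l1_loss(fused * (1 - mask), target * (1 - mask))
    return roi_weight * roi_err + bg_err

# Toy check with random tensors standing in for two MRI modalities and a mask.
a, b = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
m = (torch.rand(2, 1, 64, 64) > 0.8).float()
print(salient_fusion_loss(torch.rand(2, 1, 64, 64), a, b, m))
```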


Subjects
Brain Neoplasms; Humans; Brain Neoplasms/diagnostic imaging; Magnetic Resonance Imaging; Image Processing, Computer-Assisted
3.
Med Image Anal; 76: 102327, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34923250

ABSTRACT

Skin lesion segmentation from dermoscopic images is essential for improving the quantitative analysis of melanoma. It remains a challenging task, however, because skin lesions vary widely in scale and have irregular shapes, and the blurred boundaries between lesions and the surrounding tissue further increase the probability of incorrect segmentation. Because traditional convolutional neural networks (CNNs) are inherently limited in capturing global context information, CNN-based methods usually cannot achieve satisfactory segmentation performance. In this paper, we propose a novel feature adaptive transformer network based on the classical encoder-decoder architecture, named FAT-Net, which integrates an extra transformer branch to effectively capture long-range dependencies and global context information. We also employ a memory-efficient decoder and a feature adaptation module that enhances fusion between adjacent-level features by activating the effective channels and restraining irrelevant background noise. Extensive experiments on four public skin lesion segmentation datasets (ISIC 2016, ISIC 2017, ISIC 2018, and PH2) verify the effectiveness of the proposed method. Ablation studies demonstrate the effectiveness of the feature adaptive transformer and the memory-efficient strategies, and comparisons with state-of-the-art methods confirm the superiority of FAT-Net in both accuracy and inference speed. The code is available at https://github.com/SZUcsh/FAT-Net.
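
The sketch below shows the general shape of such a design: a convolutional branch for local texture, a transformer branch for global context, and a squeeze-and-excitation style channel gate standing in for the feature adaptation module. All layer sizes and the fusion rule are assumptions made for illustration; the repository linked above contains the actual FAT-Net code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureAdaptation(nn.Module):
    """Channel gate that re-weights fused features, suppressing background noise."""
    def __init__(self, ch: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // 4, ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)          # per-channel attention weights

class DualBranchSegNet(nn.Module):
    def __init__(self, ch: int = 32, heads: int = 4):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.patch = nn.Conv2d(3, ch, 8, stride=8)        # 8x8 patch embedding
        self.transformer = nn.TransformerEncoderLayer(
            d_model=ch, nhead=heads, dim_feedforward=2 * ch, batch_first=True)
        self.fam = FeatureAdaptation(2 * ch)
        self.head = nn.Conv2d(2 * ch, 1, 1)

    def forward(self, x):                                  # (B, 3, H, W), H, W % 8 == 0
        local_f = self.cnn(x)                              # local texture features
        tok = self.patch(x)                                # (B, ch, H/8, W/8)
        b, c, h, w = tok.shape
        glob = self.transformer(tok.flatten(2).transpose(1, 2))  # long-range context
        glob = glob.transpose(1, 2).reshape(b, c, h, w)
        glob = F.interpolate(glob, size=local_f.shape[-2:],
                             mode="bilinear", align_corners=False)
        fused = self.fam(torch.cat([local_f, glob], dim=1))
        return self.head(fused)                            # per-pixel lesion logits

net = DualBranchSegNet()
print(net(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 1, 64, 64])
```

Running the transformer on coarse 8x8 patches and upsampling its output keeps the attention cost low while still injecting global context into every pixel of the CNN features.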


Subjects
Image Processing, Computer-Assisted; Skin Diseases; Humans; Neural Networks, Computer