Results: 1 - 3 of 3
1.
IEEE J Biomed Health Inform; 28(7): 4048-4061, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38709610

ABSTRACT

The Transformer has been used successfully in medical image segmentation thanks to its excellent long-range modeling capability. However, building a Transformer-based model requires partitioning the image into patches, a step that ignores the tissue structure within each patch and therefore loses shallow representation information. In this study, we propose a Heterogeneous Swin Transformer with Multi-Receptive Field (HST-MRF) model that fuses patch information from different receptive fields to counter this loss. The core module, the heterogeneous Swin Transformer (HST), uses heterogeneous attention to let patch information from multiple receptive fields interact and passes the result to the next stage for progressive learning, thereby restoring patch-level structural information. We also design a two-stage fusion module, multimodal bilinear pooling (MBP), which helps HST fuse multi-receptive-field information further and combines low-level and high-level semantic information for accurate localization of lesion regions. In addition, adaptive patch embedding (APE) and soft channel attention (SCA) modules retain more valuable information when computing patch embeddings and filtering channel features, respectively, improving segmentation quality. We evaluated HST-MRF on multiple datasets for polyp, skin lesion, and breast ultrasound segmentation. Experimental results show that the proposed method outperforms state-of-the-art models. Ablation experiments and qualitative analysis further verify the effectiveness of each module and the benefit of multi-receptive-field partitioning in reducing the loss of structural information.
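As a rough illustration of the multi-receptive-field idea, the PyTorch sketch below embeds the same image with two patch sizes and lets the coarse tokens attend to the fine ones. It is a minimal stand-in for the paper's heterogeneous attention, not the authors' implementation; the patch sizes (4 and 8), embedding width (96), and cross-attention layout are all our assumptions.

```python
# Minimal sketch of multi-receptive-field patch interaction, loosely
# following the HST-MRF abstract. All names and sizes are illustrative.
import torch
import torch.nn as nn

class MultiReceptiveFieldEmbed(nn.Module):
    """Embed one image at two patch sizes, then let the coarse tokens
    attend to the fine tokens (a stand-in for heterogeneous attention)."""
    def __init__(self, in_ch=3, embed_dim=96, fine=4, coarse=8, heads=4):
        super().__init__()
        # Non-overlapping patch embeddings at two receptive fields.
        self.fine_embed = nn.Conv2d(in_ch, embed_dim, kernel_size=fine, stride=fine)
        self.coarse_embed = nn.Conv2d(in_ch, embed_dim, kernel_size=coarse, stride=coarse)
        self.cross_attn = nn.MultiheadAttention(embed_dim, heads, batch_first=True)
        self.norm_q = nn.LayerNorm(embed_dim)
        self.norm_kv = nn.LayerNorm(embed_dim)

    def forward(self, x):                                         # x: (B, C, H, W)
        fine = self.fine_embed(x).flatten(2).transpose(1, 2)      # (B, Nf, D)
        coarse = self.coarse_embed(x).flatten(2).transpose(1, 2)  # (B, Nc, D)
        # Coarse queries gather intra-patch structure from fine tokens.
        q, kv = self.norm_q(coarse), self.norm_kv(fine)
        fused, _ = self.cross_attn(q, kv, kv)
        return coarse + fused                                     # (B, Nc, D)

tokens = MultiReceptiveFieldEmbed()(torch.randn(2, 3, 224, 224))
print(tokens.shape)  # torch.Size([2, 784, 96]); (224/8)^2 = 784 coarse tokens
```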


Subjects
Algorithms; Humans; Image Interpretation, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Deep Learning
2.
Comput Biol Med; 170: 108090, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38320341

ABSTRACT

The U-shaped convolutional neural network (CNN) has achieved remarkable results in skin lesion segmentation. However, because convolution is inherently local, this architecture cannot effectively capture long-range pixel dependencies or multiscale global context, and repeated convolution and downsampling readily discard fine-grained local detail. In this paper, we propose DBNet-SI, a U-shaped network equipped with a dual-branch module combining shift-window attention and inception structures (MSI) to better capture multiscale global context and long-range pixel dependencies. Within MSI, a cross-branch bidirectional interaction module lets the two branches complement each other along the channel and spatial dimensions, so MSI extracts discriminative, comprehensive features that accurately delineate skin lesion boundaries. We further devise a progressive feature enhancement and information compensation module (PFEIC), which progressively recovers fine-grained features through reconstructed skip connections and integrated global context attention modules. Experimental results show that DBNet-SI outperforms other deep learning models for skin lesion segmentation on the ISIC2017 and ISIC2018 datasets, and ablation studies confirm that the model extracts rich multiscale global context and compensates for the loss of local detail.
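To make the dual-branch idea concrete, here is a simplified PyTorch block in the spirit of the abstract: a global-attention branch (standing in for shift-window attention, which we do not reproduce here) runs in parallel with an inception-style convolution branch, and channel gates let each branch modulate the other as a crude form of bidirectional interaction. Channel counts, kernel sizes, and the gating scheme are our assumptions, not the paper's design.

```python
# Simplified dual-branch block inspired by the DBNet-SI abstract.
# The full-sequence attention below is a stand-in for shift-window attention.
import torch
import torch.nn as nn

class DualBranchBlock(nn.Module):
    def __init__(self, ch=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)
        # Inception-style branch: parallel 1x1 / 3x3 / 5x5 convolutions.
        self.inc = nn.ModuleList([
            nn.Conv2d(ch, ch // 4, 1),
            nn.Conv2d(ch, ch // 4, 3, padding=1),
            nn.Conv2d(ch, ch // 2, 5, padding=2),
        ])
        # Cross-branch channel gates (a crude bidirectional interaction).
        self.gate_a = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        self.gate_b = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch, 1), nn.Sigmoid())

    def forward(self, x):                                     # x: (B, C, H, W)
        B, C, H, W = x.shape
        seq = x.flatten(2).transpose(1, 2)                    # (B, HW, C)
        a, _ = self.attn(seq, seq, seq)
        a = a.transpose(1, 2).reshape(B, C, H, W)             # attention branch
        b = torch.cat([conv(x) for conv in self.inc], dim=1)  # conv branch
        # Each branch re-weights the other's channels, then fuse residually.
        return a * self.gate_b(b) + b * self.gate_a(a) + x

y = DualBranchBlock()(torch.randn(2, 64, 32, 32))
print(y.shape)  # torch.Size([2, 64, 32, 32])
```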


Subjects
Neural Networks, Computer; Skin Diseases; Humans; Skin Diseases/diagnostic imaging; Image Processing, Computer-Assisted
3.
IEEE Trans Med Imaging; 43(2): 832-845, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37812550

ABSTRACT

Research in medical visual question answering (MVQA) can contribute to the development of computer-aided diagnosis. MVQA aims to predict accurate, convincing answers from a given medical image and an associated natural-language question. This requires extracting knowledge-rich feature content and understanding it at a fine granularity, so an effective feature extraction and understanding scheme is key to modeling. Existing MVQA question-encoding schemes focus mainly on word-level information and ignore medical information in the text, such as medical concepts and domain-specific terms, while some visual and textual understanding schemes fail to capture the correlation between image regions and keywords needed for sound visual reasoning. In this study, we propose a dual-attention learning network with word and sentence embedding (DALNet-WSE). A transformer with sentence embedding (TSE) module extracts a double embedding representation of the question that carries both keywords and medical information. A dual-attention learning (DAL) module, consisting of self-attention and guided attention, models intensive intramodal and intermodal interactions; stacking multiple DAL modules (DALs) refines the visual-textual co-attention and improves visual reasoning. Experimental results on the ImageCLEF 2019 VQA-MED (VQA-MED 2019) and VQA-RAD datasets show that the proposed method outperforms previous state-of-the-art methods. Ablation studies and Grad-CAM maps indicate that DALNet-WSE extracts rich textual information and has strong visual reasoning ability.
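The sketch below shows one plausible reading of a dual-attention step: self-attention within each modality, followed by guided attention in which visual features are steered by the question tokens. It is a minimal sketch under our own assumptions (feature width 512, a 2-layer stack, image regions querying text), not the authors' DAL module.

```python
# Minimal dual-attention (self-attention + guided attention) sketch,
# loosely following the DALNet-WSE abstract. Dimensions are assumptions.
import torch
import torch.nn as nn

class DualAttentionLayer(nn.Module):
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.txt_self = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.img_self = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.guided = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_t = nn.LayerNorm(dim)
        self.norm_i = nn.LayerNorm(dim)
        self.norm_g = nn.LayerNorm(dim)

    def forward(self, img, txt):          # img: (B, Ni, D), txt: (B, Nt, D)
        txt = self.norm_t(txt + self.txt_self(txt, txt, txt)[0])  # intramodal
        img = self.norm_i(img + self.img_self(img, img, img)[0])  # intramodal
        # Intermodal: image regions query the question tokens.
        img = self.norm_g(img + self.guided(img, txt, txt)[0])
        return img, txt

# Stacking layers mimics using multiple DAL modules for finer co-attention.
layers = nn.ModuleList([DualAttentionLayer() for _ in range(2)])
img, txt = torch.randn(2, 49, 512), torch.randn(2, 20, 512)
for layer in layers:
    img, txt = layer(img, txt)
print(img.shape, txt.shape)  # torch.Size([2, 49, 512]) torch.Size([2, 20, 512])
```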


Subjects
Diagnosis, Computer-Assisted; Language