1.
Opt Express ; 32(3): 3835-3851, 2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38297596

ABSTRACT

Detecting weak targets under bright light is an important yet challenging task. In this paper, a method for effectively fusing intensity and polarization information is proposed to tackle this issue. Specifically, an attention-guided dual-discriminator generative adversarial network (GAN) is designed for fusing images from these two sources, so that the fusion results retain the rich background information of the intensity images while incorporating the target information from the polarization images. The framework consists of a generator and two discriminators, which together preserve as much of the texture and salient information from the source images as possible. Furthermore, an attention mechanism is introduced to focus on contextual semantic information and strengthen long-range dependencies. To preserve salient information, a suitable loss function is introduced to constrain the pixel-level distribution between the fusion result and the original images. Moreover, a real-scene dataset of weak targets under bright light is built, and the effects of fusing polarization and intensity information for different weak targets are investigated and discussed. The results demonstrate that the proposed method outperforms other methods in both subjective evaluations and objective indexes, proving its effectiveness for accurate detection of weak targets against bright backgrounds.
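The abstract mentions a loss that constrains the pixel-level distribution between the fusion result and the source images. A minimal sketch of such a constraint is an L1 term against each source; the function name, argument names, and the weight `lam` are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def fusion_pixel_loss(fused, intensity, polarization, lam=1.0):
    """Pixel-level L1 loss tying the fused image to both source images.

    Illustrative sketch only: the L1 form and the weight `lam` are
    assumptions; the paper's exact loss is not given in the abstract.
    """
    loss_intensity = np.mean(np.abs(fused - intensity))        # keep background texture
    loss_polarization = np.mean(np.abs(fused - polarization))  # keep salient target info
    return loss_intensity + lam * loss_polarization

# Toy 4x4 "images": bright intensity background, polarization highlighting a target
I = np.ones((4, 4))
P = np.zeros((4, 4))
F = 0.5 * np.ones((4, 4))  # candidate fusion result halfway between the sources

print(fusion_pixel_loss(F, I, P))  # 0.5 + 0.5 = 1.0
```

In a GAN setting this term would be added to the generator's adversarial loss, pulling the output toward both source distributions while the two discriminators judge realism against each source separately.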

2.
IEEE Trans Med Imaging ; 40(12): 3531-3542, 2021 12.
Article in English | MEDLINE | ID: mdl-34133275

ABSTRACT

Liver lesion segmentation is an essential process for assisting doctors in hepatocellular carcinoma diagnosis and treatment planning. Multi-modal positron emission tomography and computed tomography (PET-CT) scans are widely utilized for this purpose because of their complementary feature information. However, current methods ignore the interaction of information across the two modalities during feature extraction, omit the co-learning of feature maps at different resolutions, and do not ensure that shallow and deep features complement each other sufficiently. Our proposed model achieves feature interaction across multi-modal channels by sharing the down-sampling blocks between the two encoding branches, eliminating misleading features. Furthermore, we combine feature maps of different resolutions to derive spatially varying fusion maps and enhance lesion information. In addition, we introduce a similarity loss function as a consistency constraint, so that the predictions of the separate refactoring branches for the same regions do not diverge. We evaluate our model for liver tumor segmentation on a PET-CT scan dataset, compare it with baseline multi-modal techniques (multi-branch, multi-channel, and cascaded networks), and demonstrate that our method achieves significantly higher accuracy than the baseline models.
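The similarity loss described above penalizes disagreement between the two refactoring branches over the same regions. A minimal sketch, assuming a mean-squared-error form (the abstract does not specify the exact formulation):

```python
import numpy as np

def similarity_loss(pred_a, pred_b):
    """Consistency constraint between two branch predictions.

    Sketch under an assumed MSE form: the true loss in the paper may
    differ, but any symmetric distance between the two branches'
    per-pixel predictions serves the same purpose.
    """
    return np.mean((pred_a - pred_b) ** 2)

# Toy per-pixel lesion probabilities from the two branches for one 4x4 region
pet_branch = np.full((4, 4), 0.8)
ct_branch = np.full((4, 4), 0.6)

print(similarity_loss(pet_branch, ct_branch))  # ≈ 0.04
```

Minimizing this term alongside the segmentation loss pushes the two branches toward consistent predictions, so neither modality's decoder can drift toward features the other contradicts.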


Subjects
Liver Neoplasms , Positron Emission Tomography Computed Tomography , Humans , Image Processing, Computer-Assisted , Liver Neoplasms/diagnostic imaging