Referring Segmentation via Encoder-Fused Cross-Modal Attention Network.
IEEE Trans Pattern Anal Mach Intell ; 45(6): 7654-7667, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36367919
ABSTRACT
This paper focuses on referring segmentation, which aims to segment the visual region of an image (or video) that corresponds to a given referring expression. Existing methods usually model the interaction between multi-modal features at the decoding end of the network. Specifically, they fuse the visual features of each scale with the language features separately, thus ignoring the correlation between multi-scale features. In this work, we present an encoder fusion network (EFN), which transfers the multi-modal feature learning process from the decoding end to the encoding end and realizes gradual refinement of the multi-modal features by the language. In EFN, we also adopt a co-attention mechanism to promote the mutual alignment of language and visual information in the feature space. In the decoding stage, a boundary enhancement module (BEM) is proposed to strengthen the network's attention to the details of the target. For video data, we introduce an asymmetric cross-frame attention module (ACFM) that effectively captures temporal information by computing the relationship between each pixel of the current frame and each pooled sub-region of the reference frames. Extensive experiments on referring image/video segmentation datasets show that our method outperforms the state of the art.
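To make the ACFM idea concrete, below is a minimal PyTorch sketch of pixel-to-region cross-frame attention as described in the abstract: each pixel of the current frame attends to pooled sub-regions of the reference frames. The class name, the 1x1-convolution projections, the pooled grid size, and the residual connection are all assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AsymmetricCrossFrameAttention(nn.Module):
    """Sketch of the ACFM concept: current-frame pixels (queries) attend
    to pooled sub-regions of reference frames (keys/values). Names and
    hyperparameters are hypothetical."""

    def __init__(self, channels, pooled_size=8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.pooled_size = pooled_size  # assumed SxS sub-region grid

    def forward(self, current, references):
        # current:    (B, C, H, W) features of the current frame
        # references: (B, T, C, H, W) features of T reference frames
        B, C, H, W = current.shape
        T = references.size(1)

        # Queries: one per pixel of the current frame -> (B, H*W, C)
        q = self.query(current).flatten(2).transpose(1, 2)

        # Keys/values: pool each reference frame into an SxS grid of
        # sub-regions (the asymmetry: far fewer tokens than pixels)
        refs = references.flatten(0, 1)                 # (B*T, C, H, W)
        pooled = F.adaptive_avg_pool2d(refs, self.pooled_size)
        S2 = self.pooled_size ** 2
        k = self.key(pooled).flatten(2)                 # (B*T, C, S*S)
        v = self.value(pooled).flatten(2)
        k = k.view(B, T, C, S2).permute(0, 2, 1, 3).reshape(B, C, T * S2)
        v = v.view(B, T, C, S2).permute(0, 2, 1, 3).reshape(B, C, T * S2)

        # Pixel-to-region attention map: (B, H*W, T*S*S)
        attn = torch.softmax(q @ k / C ** 0.5, dim=-1)

        # Aggregate reference context and fuse with the current frame
        out = (attn @ v.transpose(1, 2)).transpose(1, 2).view(B, C, H, W)
        return current + out
```

The key cost saving in this formulation is that attention is computed over T * S * S pooled regions rather than T * H * W pixels, which keeps the attention matrix small even for long reference windows.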

Full text: 1 Database: MEDLINE Language: English Year of publication: 2023 Document type: Article