Results 1 - 4 of 4
1.
Article in English | MEDLINE | ID: mdl-38625774

ABSTRACT

Scene Graph Generation (SGG) aims to detect visual relationships in an image. However, due to long-tailed bias, SGG is far from practical. Most methods depend heavily on statistical co-occurrence priors to build a balanced dataset, so they are dataset-specific and easily affected by noise. The fundamental cause is that SGG is simplified into a classification task instead of a reasoning task, which limits the ability to capture fine-grained details and increases the difficulty of handling ambiguity. By imitating the dual-process model of cognitive psychology, a Visual-Textual Semantics Consistency Network (VTSCN) is proposed to model the SGG task as a reasoning process and significantly relieve the long-tailed bias. In VTSCN, as the rapid autonomous process (Type 1 process), we design a Hybrid Union Representation (HUR) module, which proceeds in two steps to model spatial awareness and working memory. In addition, as the higher-order reasoning process (Type 2 process), a Global Textual Semantics Modeling (GTS) module is designed to individually model textual contexts with the word embeddings of pairwise objects. As the final associative process of cognition, a Heterogeneous Semantics Consistency (HSC) module is designed to balance the Type 1 and Type 2 processes. Overall, VTSCN offers a new way to design SGG models that fully considers the human cognitive process. Experiments on the Visual Genome, GQA and PSG datasets show that our method is superior to state-of-the-art methods, and ablation studies validate the effectiveness of VTSCN. The source code is released on GitHub: https://github.com/Nora-Zhang98/VTSCN.
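
The abstract does not spell out how the visual (Type 1) and textual (Type 2) paths are combined; purely as an illustration, the PyTorch sketch below fuses a union-region feature with the word embeddings of a subject/object pair to score predicates. The module names, dimensions, and multiplicative fusion are assumptions made for exposition, not the released VTSCN code (see the linked GitHub repository for the actual implementation).

```python
import torch
import torch.nn as nn

class PairRelationSketch(nn.Module):
    """Illustrative only: fuse a visual union feature (rapid Type 1 path) with
    the word embeddings of a subject/object pair (textual Type 2 path) and
    classify the predicate. Dimensions and fusion are assumed, not from VTSCN."""
    def __init__(self, vis_dim=1024, txt_dim=300, hidden=512, num_predicates=51):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, hidden)       # stands in for the HUR output
        self.txt_proj = nn.Linear(2 * txt_dim, hidden)   # stands in for the GTS output
        self.classifier = nn.Linear(hidden, num_predicates)

    def forward(self, union_feat, subj_emb, obj_emb):
        v = torch.relu(self.vis_proj(union_feat))
        t = torch.relu(self.txt_proj(torch.cat([subj_emb, obj_emb], dim=-1)))
        fused = v * t            # a simple consistency-style gating of the two paths
        return self.classifier(fused)

# Usage: scores = PairRelationSketch()(torch.randn(8, 1024), torch.randn(8, 300), torch.randn(8, 300))
```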

2.
IEEE Trans Neural Netw Learn Syst ; 34(3): 1304-1318, 2023 Mar.
Article in English | MEDLINE | ID: mdl-34424850

ABSTRACT

The feature pyramid has been widely used in many visual tasks, such as fine-grained image classification, instance segmentation, and object detection, and has achieved promising performance. Although many algorithms exploit different-level features to construct the feature pyramid, they usually treat these features equally and do not investigate their inherent complementary advantages in depth. In this article, to learn pyramid features with robust representational ability for action recognition, we propose a novel collaborative and multilevel feature selection network (FSNet) that applies feature selection and aggregation to multilevel features according to the action context. Unlike previous works that learn the pattern of frame appearance by enhancing spatial encoding, the proposed network consists of a position selection module and a channel selection module that adaptively aggregate multilevel features into a new informative feature along both the position and channel dimensions. The position selection module integrates the vectors at the same spatial location across multilevel features with positionwise attention. Similarly, the channel selection module selectively aggregates the channel maps at the same channel location across multilevel features with channelwise attention. Positionwise features with different receptive fields and channelwise features with different pattern-specific responses are emphasized according to their correlation with the action and fused into a new informative feature for action recognition. The proposed FSNet can be flexibly inserted into different backbone networks, and extensive experiments are conducted on three benchmark action datasets: Kinetics, UCF101, and HMDB51. Experimental results show that FSNet is practical and can be collaboratively trained to boost the representational ability of existing networks. FSNet outperforms most top-tier models on Kinetics and all compared models on UCF101 and HMDB51.
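
As a rough sketch of the positionwise selection idea described above (not the FSNet implementation), the snippet below aggregates L same-resolution feature maps with a softmax weight computed per spatial position; the 1x1-convolution scoring head and all shapes are assumptions made for illustration. The channel selection module would be analogous, with weights computed per channel instead of per position.

```python
import torch
import torch.nn as nn

class PositionwiseSelectSketch(nn.Module):
    """Illustrative only: weight L multilevel feature maps (B, C, H, W) with a
    softmax over levels at every spatial position, then sum them."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)  # assumed scoring head

    def forward(self, feats):                       # feats: list of L tensors (B, C, H, W)
        stacked = torch.stack(feats, dim=1)         # (B, L, C, H, W)
        logits = torch.stack([self.score(f) for f in feats], dim=1)  # (B, L, 1, H, W)
        weights = torch.softmax(logits, dim=1)      # positionwise attention over levels
        return (weights * stacked).sum(dim=1)       # (B, C, H, W) fused feature

# Usage: fused = PositionwiseSelectSketch(256)([torch.randn(2, 256, 14, 14) for _ in range(3)])
```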

3.
IEEE Trans Neural Netw Learn Syst ; 32(1): 334-347, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32224465

ABSTRACT

Convolutional neural networks (CNNs) have proven an effective way to learn spatiotemporal representations for action recognition in videos. However, most traditional action recognition algorithms do not employ an attention mechanism to focus on the parts of video frames that are relevant to the action. In this article, we propose a novel global and local knowledge-aware attention network to address this challenge for action recognition. The proposed network incorporates two types of attention mechanism, statistic-based attention (SA) and learning-based attention (LA), to attach higher importance to the crucial elements in each video frame. Because global pooling (GP) models capture global information while attention models focus on significant details, our network adopts a three-stream architecture, consisting of two attention streams and a GP stream, to make full use of their implicit complementary advantages. Each attention stream employs a fusion layer to combine global and local information and produce composite features. Furthermore, global-attention (GA) regularization is proposed to guide the two attention streams to better model the dynamics of composite features with reference to the global information. Fusion at the softmax layer is adopted to further exploit the complementary advantages among the SA, LA, and GP streams and obtain the final comprehensive predictions. The proposed network is trained in an end-to-end fashion and learns efficient video-level features both spatially and temporally. Extensive experiments are conducted on three challenging benchmarks, Kinetics, HMDB51, and UCF101, and the results demonstrate that the proposed network outperforms most state-of-the-art methods.
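
The abstract does not give the stream weights or fusion details; purely as an illustration of softmax-level fusion across SA, LA, and GP streams, the sketch below averages the class probabilities of three streams with assumed weights.

```python
import torch

def fuse_streams_sketch(logits_sa, logits_la, logits_gp, weights=(1.0, 1.0, 0.5)):
    """Illustrative only: softmax-level fusion of the two attention streams and
    the global-pooling stream. The weights are assumptions, not paper values."""
    streams = (logits_sa, logits_la, logits_gp)
    probs = [w * torch.softmax(l, dim=-1) for w, l in zip(weights, streams)]
    return torch.stack(probs, dim=0).sum(dim=0) / sum(weights)

# Usage: final = fuse_streams_sketch(torch.randn(4, 101), torch.randn(4, 101), torch.randn(4, 101))
```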


Subjects
Movement; Neural Networks, Computer; Pattern Recognition, Automated/methods; Algorithms; Attention; Benchmarking; Computer Systems; Databases, Factual; Humans; Image Processing, Computer-Assisted; Knowledge; Machine Learning; Reproducibility of Results; Video Recording
4.
IEEE Trans Image Process ; 27(4): 1748-1762, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29346092

ABSTRACT

In this paper, we present a novel two-layer video representation for human action recognition that employs a hierarchical group sparse encoding technique and spatio-temporal structure. In the first layer, a new sparse encoding method named locally consistent group sparse coding (LCGSC) is proposed to make full use of the motion and appearance information of local features. The LCGSC method not only encodes the global layouts of features within the same video-level groups but also captures the local correlations between them, yielding expressive sparse representations of video sequences. Meanwhile, two efficient location estimation models, an absolute location model and a relative location model, are developed to incorporate spatio-temporal structure into the LCGSC representations. In the second layer, an action-level group is established, where a hierarchical LCGSC encoding scheme is applied to describe videos at different levels of abstraction. On the one hand, the new layer captures higher-order dependencies between video sequences; on the other hand, it takes label information into consideration to improve the discriminability of video representations. The superiority of our hierarchical framework is demonstrated on several challenging datasets.
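
The abstract does not give the LCGSC objective; as a minimal sketch of plain group sparse coding (the group-lasso-penalized encoding that LCGSC extends), the snippet below solves min_a 0.5*||x - D a||^2 + lambda * sum_g ||a_g||_2 by proximal gradient descent. It is an assumed baseline for illustration and omits the locally consistent term and the location models described in the paper.

```python
import numpy as np

def group_soft_threshold(a, thresh, groups):
    """Proximal operator of the group-lasso penalty: shrink each group's block
    of coefficients toward zero according to its l2 norm."""
    out = np.zeros_like(a)
    for g in groups:
        norm = np.linalg.norm(a[g])
        if norm > thresh:
            out[g] = (1.0 - thresh / norm) * a[g]
    return out

def group_sparse_code(x, D, groups, lam=0.1, n_iter=200):
    """Encode descriptor x over dictionary D (d x k) with a group-sparse penalty
    via proximal gradient descent (ISTA). Illustrative baseline, not LCGSC."""
    a = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D.T @ D, 2)   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)              # gradient of 0.5 * ||x - D a||^2
        a = group_soft_threshold(a - step * grad, step * lam, groups)
    return a

# Usage: codes = group_sparse_code(np.random.randn(64), np.random.randn(64, 128),
#                                  [range(i, i + 8) for i in range(0, 128, 8)])
```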
