Learning to Recognize Actions on Objects in Egocentric Video With Attention Dictionaries.
IEEE Trans Pattern Anal Mach Intell; 45(6): 6674-6687, 2023 Jun.
Article in English | MEDLINE | ID: mdl-33571086
We present EgoACO, a deep neural architecture for video action recognition that learns to pool action-context-object descriptors from frame-level features by leveraging the verb-noun structure of action labels in egocentric video datasets. The core component is class activation pooling (CAP), a differentiable pooling layer that combines ideas from bilinear pooling for fine-grained recognition and from feature learning for discriminative localization. CAP uses self-attention with a dictionary of learnable weights to pool from the most relevant feature regions. Through CAP, EgoACO learns to decode object and scene context descriptors from video frame features. For temporal modeling, we design a recurrent version of class activation pooling termed Long Short-Term Attention (LSTA). LSTA extends convolutional gated LSTM with built-in spatial attention and a re-designed output gate. Action, object, and context descriptors are fused by a multi-head prediction module that accounts for the inter-dependencies between the noun-verb-action structured labels in egocentric video datasets. EgoACO features built-in visual explanations, aiding both learning and the interpretation of discriminative information in video. Results on the two largest egocentric action recognition datasets currently available, EPIC-KITCHENS and EGTEA Gaze+, show that by decoding action-context-object descriptors, the model achieves state-of-the-art recognition performance.
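The core idea of CAP, pooling feature regions with attention maps generated from a dictionary of learnable weights, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function name, shapes, and the plain softmax-over-locations formulation are illustrative assumptions; the paper's CAP layer additionally draws on bilinear pooling and discriminative-localization techniques not reproduced here.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def class_activation_pool(features, dictionary):
    """Sketch of dictionary-based attention pooling (hypothetical API).

    features:   (C, H, W) frame-level feature map
    dictionary: (K, C) learnable weights, one atom per pooled descriptor
    returns:    (K, C) pooled descriptors and (K, H*W) spatial attention maps
    """
    C, H, W = features.shape
    X = features.reshape(C, H * W)       # flatten the spatial grid
    scores = dictionary @ X              # (K, H*W): relevance of each location
    attn = softmax(scores, axis=1)       # attention over spatial positions
    pooled = attn @ X.T                  # (K, C): attention-weighted average
    return pooled, attn

# Toy usage with random features standing in for a CNN backbone output.
rng = np.random.default_rng(0)
feats = rng.standard_normal((64, 7, 7))     # C=64 channels on a 7x7 grid
D = rng.standard_normal((3, 64))            # K=3 dictionary atoms
desc, attn = class_activation_pool(feats, D)
```

Because the attention maps are explicit per-atom distributions over spatial locations, reshaping a row of `attn` back to the 7x7 grid gives the kind of built-in visual explanation the abstract mentions.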
Collection: 01-internacional
Database: MEDLINE
Type of study: Prognostic_studies
Language: English
Journal: IEEE Trans Pattern Anal Mach Intell
Journal subject: Medical Informatics
Year: 2023
Document type: Article
Country of publication: United States