STAC: Spatial-Temporal Attention on Compensation Information for Activity Recognition in FPV.
Zhang, Yue; Sun, Shengli; Lei, Linjian; Liu, Huikai; Xie, Hui.
Affiliation
  • Zhang Y; Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China.
  • Sun S; School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China.
  • Lei L; Key Laboratory of Intelligent Infrared Perception, Chinese Academy of Sciences, Shanghai 200083, China.
  • Liu H; Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China.
  • Xie H; Key Laboratory of Intelligent Infrared Perception, Chinese Academy of Sciences, Shanghai 200083, China.
Sensors (Basel); 21(4), 2021 Feb 05.
Article in English | MEDLINE | ID: mdl-33562612
Egocentric activity recognition in first-person video (FPV) requires fine-grained matching of the camera wearer's actions with the objects being operated. Traditional methods for third-person action recognition do not suffice because of (1) the background ego-noise introduced by the unstructured movement of the wearable device as the body moves, and (2) the small, fine-grained objects that appear at a single scale in FPV. To augment the data, we perform size compensation, which generates a multi-scale set of regions containing multi-size objects and leads to superior performance. We also compensate the optical flow to eliminate camera noise in the motion. We developed a novel two-stream convolutional neural network-recurrent attention neural network (CNN-RAN) architecture, spatial-temporal attention on compensation information (STAC), which generates generic descriptors under weak supervision, localizes the activated objects, and captures effective motion. We encode the RGB features using a spatial location-aware attention mechanism to guide the representation of visual features. Similar location-aware channel attention is applied to the temporal stream, in the form of stacked optical flow, to implicitly select the relevant frames and attend to where the action occurs. The two streams are complementary: one is object-centric and the other focuses on motion. We conducted extensive ablation analysis to validate the complementarity and effectiveness of our STAC model both qualitatively and quantitatively. It achieved state-of-the-art performance on two egocentric datasets.
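The abstract describes two attention mechanisms: spatial location-aware attention on the RGB stream (attending to where the activated object is) and channel attention on the stacked optical-flow stream (implicitly selecting relevant frames). The paper's exact formulation is not given here, so the following is a minimal NumPy sketch of the general idea only; the function names and the mean-pooling score functions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def spatial_attention(features):
    """Illustrative spatial attention over a CNN feature map (C, H, W).

    A per-location score is obtained by pooling across channels (an
    assumed, simple choice), softmax-normalised over the H*W grid, and
    used to reweight every channel at each spatial location.
    """
    scores = features.mean(axis=0)            # (H, W) location scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax over locations
    return features * weights[None, :, :]     # broadcast over channels

def channel_attention(flow_stack):
    """Illustrative channel attention over stacked optical flow (C, H, W):
    each channel corresponds to one flow frame, so reweighting channels
    acts as an implicit frame selector.
    """
    pooled = flow_stack.mean(axis=(1, 2))     # (C,) per-frame scores
    weights = np.exp(pooled - pooled.max())
    weights /= weights.sum()                  # softmax over channels
    return flow_stack * weights[:, None, None]
```

In a two-stream setup such as STAC, the first operation would sit on the RGB (appearance) branch and the second on the temporal (flow) branch, with their outputs fused downstream; the fusion scheme is not specified in the abstract.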
Subjects
Keywords

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Image Processing, Computer-Assisted / Neural Networks, Computer Limits: Humans Language: English Journal: Sensors (Basel) Publication year: 2021 Document type: Article Country of affiliation: China