A Hierarchical Deep Fusion Framework for Egocentric Activity Recognition Using a Wearable Hybrid Sensor System.
Yu, Haibin; Pan, Guoxiong; Pan, Mian; Li, Chong; Jia, Wenyan; Zhang, Li; Sun, Mingui.
Affiliations
  • Yu H; College of Electronics and Information, Hangzhou Dianzi University, Hangzhou 310018, China. shoreyhb@hdu.edu.cn.
  • Pan G; College of Electronics and Information, Hangzhou Dianzi University, Hangzhou 310018, China. pgx@hdu.edu.cn.
  • Pan M; College of Electronics and Information, Hangzhou Dianzi University, Hangzhou 310018, China. ai@hdu.edu.cn.
  • Li C; College of Electronics and Information, Hangzhou Dianzi University, Hangzhou 310018, China. 172040046@hdu.edu.cn.
  • Jia W; Department of Electrical and Computer Engineering, University of Pittsburgh, PA 15261, USA. wej6@pitt.edu.
  • Zhang L; School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China. zhangli@hdu.edu.cn.
  • Sun M; Department of Neurological Surgery, University of Pittsburgh, PA 15213, USA. drsun@pitt.edu.
Sensors (Basel); 19(3), 2019 Jan 28.
Article in English | MEDLINE | ID: mdl-30696100
ABSTRACT
Recently, egocentric activity recognition has attracted considerable attention in the pattern recognition and artificial intelligence communities because of its wide applicability in medical care, smart homes, and security monitoring. In this study, we developed and implemented a deep-learning-based hierarchical fusion framework for the recognition of egocentric activities of daily living (ADLs) in a wearable hybrid sensor system comprising motion sensors and cameras. A long short-term memory (LSTM) network and a convolutional neural network (CNN) perform egocentric ADL recognition in different layers of the hierarchy, operating on the motion sensor data and the photo stream, respectively. The motion sensor data are used solely to classify activities by motion state, while the photo stream is used for further specific activity recognition within each motion-state group. Thus, both the motion sensor data and the photo stream work in their most suitable classification mode, significantly reducing the negative influence of sensor differences on the fusion results. Experimental results show that the proposed method is not only more accurate than the existing direct fusion method (by up to 6%) but also avoids that method's time-consuming optical-flow computation, making the proposed algorithm less complex and more suitable for practical application.
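
The abstract describes a two-stage hierarchy: an LSTM first assigns the motion-sensor sequence to a coarse motion-state group, and a CNN then recognizes the specific ADL within that group from the egocentric photo stream. The PyTorch sketch below illustrates only this routing idea; every name, layer size, and class count in it (MotionLSTM, PhotoCNN, NUM_MOTION_STATES, and so on) is an illustrative assumption, not the authors' implementation.

    import torch
    import torch.nn as nn

    NUM_MOTION_STATES = 3          # coarse motion-state groups (assumed count)
    CLASSES_PER_STATE = [5, 4, 3]  # fine-grained ADLs per group (assumed counts)

    class MotionLSTM(nn.Module):
        """Stage 1: classify the motion state from an IMU sequence."""
        def __init__(self, in_dim=6, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, NUM_MOTION_STATES)

        def forward(self, x):            # x: (batch, time, in_dim)
            _, (h, _) = self.lstm(x)     # h: (layers, batch, hidden)
            return self.head(h[-1])      # logits over motion states

    class PhotoCNN(nn.Module):
        """Stage 2: recognize a specific ADL within one motion-state group."""
        def __init__(self, num_classes):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, num_classes)

        def forward(self, img):          # img: (batch, 3, H, W)
            return self.head(self.features(img).flatten(1))

    @torch.no_grad()
    def hierarchical_predict(imu_seq, photo, lstm, cnns):
        """Route the photo to the CNN of the predicted motion-state group."""
        state = lstm(imu_seq).argmax(dim=1).item()          # coarse decision
        activity = cnns[state](photo).argmax(dim=1).item()  # fine decision
        return state, activity

    lstm = MotionLSTM()
    cnns = [PhotoCNN(n) for n in CLASSES_PER_STATE]
    state, activity = hierarchical_predict(
        torch.randn(1, 100, 6),       # 100 IMU samples x 6 channels (assumed)
        torch.randn(1, 3, 224, 224),  # one egocentric photo
        lstm, cnns,
    )
    print(f"motion state {state}, activity {activity}")

Because each CNN only discriminates among the activities plausible within its motion-state group, each modality is used where it is most informative, which is the division of labor the abstract attributes to the hierarchical design.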
Full text: 1 | Collections: 01-international | Database: MEDLINE | Language: English | Year of publication: 2019 | Document type: Article