Fall Detection Method for Infrared Videos Based on Spatial-Temporal Graph Convolutional Network.
Yang, Junkai; He, Yuqing; Zhu, Jingxuan; Lv, Zitao; Jin, Weiqi.
Affiliation
  • Yang J; MOE Key Laboratory of Optoelectronic Imaging Technology and System, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China.
  • He Y; MOE Key Laboratory of Optoelectronic Imaging Technology and System, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China.
  • Zhu J; MOE Key Laboratory of Optoelectronic Imaging Technology and System, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China.
  • Lv Z; MOE Key Laboratory of Optoelectronic Imaging Technology and System, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China.
  • Jin W; MOE Key Laboratory of Optoelectronic Imaging Technology and System, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China.
Sensors (Basel); 24(14), 2024 Jul 17.
Article in En | MEDLINE | ID: mdl-39066046
ABSTRACT
Timely detection of falls and prompt alerting of medical aid are critical for health monitoring of elderly individuals living alone. This paper addresses the poor adaptability, privacy infringement, and low recognition accuracy associated with traditional visual sensor-based fall detection. To tackle these challenges, we propose an infrared video-based fall detection method built on spatial-temporal graph convolutional networks (ST-GCNs). Our method uses a fine-tuned AlphaPose model to extract 2D human skeleton sequences from infrared videos. The skeleton data are then represented in both Cartesian and polar coordinates and processed by a two-stream ST-GCN to recognize fall behaviors promptly. To enhance the network's ability to recognize fall actions, we improve the adjacency matrix of the graph convolutional units and introduce multi-scale temporal graph convolution units. To facilitate practical deployment, we optimize the time window and network depth of the ST-GCN, balancing model accuracy against speed. Experimental results on a proprietary infrared human action recognition dataset show that the proposed algorithm identifies fall behaviors with an accuracy of up to 96%. Moreover, the algorithm performs robustly, identifying falls in both near-infrared and thermal-infrared videos.
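The two-stream input described above can be illustrated with a minimal sketch (not the authors' code): the same 2D skeleton sequence is expressed once in Cartesian coordinates and once in polar coordinates, with each representation fed to its own ST-GCN branch. The joint indices, the choice of the hip midpoint as the pole, and all names below are illustrative assumptions, not details taken from the paper.

import numpy as np

def to_two_streams(skeleton, left_hip=11, right_hip=12):
    """skeleton: array of shape (T, V, 2) holding (x, y) joint positions per frame,
    e.g. as produced by a 2D pose estimator such as AlphaPose (hypothetical layout)."""
    cartesian = skeleton.astype(np.float32)                      # stream 1: raw (x, y)
    origin = cartesian[:, [left_hip, right_hip], :].mean(axis=1, keepdims=True)
    rel = cartesian - origin                                     # center joints on the hip midpoint
    rho = np.linalg.norm(rel, axis=-1)                           # radial distance to the pole
    phi = np.arctan2(rel[..., 1], rel[..., 0])                   # angle in radians
    polar = np.stack([rho, phi], axis=-1)                        # stream 2: (rho, phi)
    return cartesian, polar                                      # one array per ST-GCN branch

# Example: a 64-frame clip with 17 COCO-style keypoints
cart, pol = to_two_streams(np.random.rand(64, 17, 2))
print(cart.shape, pol.shape)   # (64, 17, 2) (64, 17, 2)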
Full text: 1 Collection: 01-internacional Database: MEDLINE Main subject: Video Recording / Accidental Falls / Algorithms / Neural Networks, Computer / Infrared Rays Limits: Humans Language: En Journal: Sensors (Basel) Year: 2024 Document type: Article Affiliation country: China Country of publication: Switzerland