Intelligent ADL Recognition via IoT-Based Multimodal Deep Learning Framework.
Javeed, Madiha; Mudawi, Naif Al; Alazeb, Abdulwahab; Almakdi, Sultan; Alotaibi, Saud S; Chelloug, Samia Allaoua; Jalal, Ahmad.
Affiliations
  • Javeed M; Department of Computer Science, Air University, E-9, Islamabad 44000, Pakistan.
  • Mudawi NA; Department of Computer Science, College of Computer Science and Information System, Najran University, Najran 55461, Saudi Arabia.
  • Alazeb A; Department of Computer Science, College of Computer Science and Information System, Najran University, Najran 55461, Saudi Arabia.
  • Almakdi S; Department of Computer Science, College of Computer Science and Information System, Najran University, Najran 55461, Saudi Arabia.
  • Alotaibi SS; Information Systems Department, Umm Al-Qura University, Makkah 24382, Saudi Arabia.
  • Chelloug SA; Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia.
  • Jalal A; Department of Computer Science, Air University, E-9, Islamabad 44000, Pakistan.
Sensors (Basel); 23(18), 2023 Sep 16.
Article in En | MEDLINE | ID: mdl-37765984
ABSTRACT
Smart home monitoring systems based on the Internet of Things (IoT) are needed to care for elders at home, giving families and caregivers the flexibility to monitor them remotely. Activities of daily living (ADLs) are an effective basis for monitoring elderly people at home and patients at caregiving facilities, and recognizing such actions depends largely on IoT-based devices, whether wearable or installed at fixed locations. This paper proposes an effective and robust layered architecture that uses multisensory devices to recognize activities of daily living from anywhere. Multimodality here refers to sensory devices of multiple types working together toward the goal of remote monitoring; accordingly, the proposed multimodal approach fuses data from IoT devices such as wearable inertial sensors with videos recorded during daily routines. Data from these sensors pass through a pre-processing layer comprising several stages: data filtration, segmentation, landmark detection, and construction of a 2D stick model. In the next layer, feature processing, different features are extracted from the multimodal sensors, fused, and optimized. The final layer, classification, recognizes the activities of daily living using a deep learning technique, the convolutional neural network. The proposed IoT-based multimodal layered system achieves an acceptable mean accuracy of 84.14%.
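The layered pipeline described above (pre-processing, per-modality feature extraction, feature-level fusion, then classification) can be sketched in pure Python. This is a minimal illustrative sketch, not the authors' implementation: the smoothing filter, window sizes, and feature choices (mean, standard deviation, range) are assumptions, and the CNN classifier is left as a placeholder.

```python
# Hypothetical sketch of the paper's layered multimodal pipeline.
# Pre-processing layer: filtering and windowed segmentation.
# Feature-processing layer: per-window statistics per modality, then fusion.
# The fused vectors would feed the classification layer (a CNN) in the paper.
from statistics import mean, stdev

def moving_average(signal, k=3):
    """Pre-processing: simple smoothing filter for one sensor stream."""
    half = k // 2
    return [mean(signal[max(0, i - half):i + half + 1])
            for i in range(len(signal))]

def segment(signal, window=4):
    """Pre-processing: split the filtered stream into fixed-size windows."""
    return [signal[i:i + window]
            for i in range(0, len(signal) - window + 1, window)]

def window_features(window):
    """Feature processing: illustrative statistics for one modality window."""
    return [mean(window), stdev(window), max(window) - min(window)]

def fuse(inertial_feats, vision_feats):
    """Feature-level fusion: concatenate per-window vectors from both modalities."""
    return [a + b for a, b in zip(inertial_feats, vision_feats)]

# Toy streams standing in for an accelerometer axis and one
# video-derived 2D stick-model joint coordinate.
accel   = [0.1, 0.3, 0.2, 0.9, 1.1, 1.0, 0.2, 0.1]
joint_y = [5.0, 5.1, 5.0, 7.5, 7.9, 7.8, 5.2, 5.1]

inertial = [window_features(w) for w in segment(moving_average(accel))]
vision   = [window_features(w) for w in segment(moving_average(joint_y))]
fused    = fuse(inertial, vision)  # one 6-dim vector per window

print(len(fused), len(fused[0]))  # → 2 6
```

In the actual system the fused per-window vectors would be optimized and passed to a convolutional neural network for ADL classification; the concatenation step here stands in for that fusion stage.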
Keywords

Full text: 1 Collections: 01-international Database: MEDLINE Language: En Publication year: 2023 Document type: Article