ABSTRACT
Lower limb exoskeletons are a relevant tool for rehabilitating gait in patients with lower limb movement disorders. Partial-assistance exoskeletons adaptively provide the joint torque needed, on top of that produced by the patient, for a correct and stable gait, helping the patient recover autonomous walking. To do so, the device needs to identify the different phases of the gait cycle so that it can produce precisely timed commands for its joint motors. In this study, EMG signals were used for gait phase detection, exploiting the fact that EMG activations lead limb kinematics by at least 120 ms. We propose a deep learning model based on a bidirectional LSTM to identify stance and swing gait phases from EMG data. We built a dataset of EMG signals recorded at 1500 Hz from four muscles of the dominant leg in a population of 26 healthy subjects walking overground (WO) and walking on a treadmill (WT) while using a lower limb exoskeleton. The data were labeled with the corresponding stance or swing gait phase based on limb kinematics provided by inertial motion sensors. The model was studied in three scenarios to explore its generalization abilities and evaluate its applicability to online processing of EMG data. Training was always conducted on 500-sample sequences from the WO recordings of 23 subjects; testing always involved WO and WT sequences from the remaining three subjects. First, the model was trained and tested on 500 Hz EMG data, obtaining overall accuracies of 92.43% and 91.16% on the WO and WT test sets, respectively. A simulation of online operation required 127 ms to preprocess and classify one sequence. Second, the trained model was evaluated on a test set built from the 1500 Hz EMG data: accuracies were lower, but processing was 11 ms faster per sequence. Third, we partially retrained the model on a subset of the 1500 Hz training dataset, achieving 87.17% and 89.64% accuracy on the 1500 Hz WO and WT test sets, respectively. Overall, in terms of both accuracy and processing time, the proposed deep learning model appears to be a valuable candidate for inclusion in the control pipeline of a lower limb rehabilitation exoskeleton.
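As an illustration of the approach described above, the following PyTorch sketch shows a bidirectional LSTM that labels every sample of a 4-channel, 500-sample EMG window as stance or swing. It is a minimal sketch, not the authors' implementation: the two-layer depth, the hidden size, and the per-timestep output scheme are assumptions.

```python
import torch
import torch.nn as nn

class GaitPhaseBiLSTM(nn.Module):
    """Per-timestep stance/swing classifier for multi-channel EMG windows."""
    def __init__(self, n_channels=4, hidden=64, n_phases=2):
        super().__init__()
        # batch_first=True -> input shaped (batch, time, channels)
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        # 2 * hidden because forward and backward states are concatenated
        self.head = nn.Linear(2 * hidden, n_phases)

    def forward(self, x):
        out, _ = self.lstm(x)   # (batch, time, 2 * hidden)
        return self.head(out)   # per-timestep stance/swing logits

model = GaitPhaseBiLSTM()
emg = torch.randn(8, 500, 4)    # a batch of 500-sample, 4-muscle windows
logits = model(emg)             # (8, 500, 2)
phases = logits.argmax(dim=-1)  # 0 = stance, 1 = swing (illustrative convention)
```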
Subjects
Electromyography; Exoskeleton Device; Gait; Walking; Humans; Electromyography/methods; Gait/physiology; Walking/physiology; Male; Adult; Biomechanical Phenomena/physiology; Female; Neural Networks, Computer; Deep Learning; Young Adult; Signal Processing, Computer-Assisted; Lower Extremity/physiology
ABSTRACT
Ambient Assisted Living is a concept focused on using technology to support and enhance the quality of life and well-being of frail or elderly individuals in both indoor and outdoor environments. It aims to empower individuals to maintain their independence and autonomy while ensuring their safety and providing assistance when needed. Human Activity Recognition, i.e., the automatic detection and classification of the activities performed by individuals using sensor-based systems, is widely regarded as the most popular methodology within the field of Ambient Assisted Living. Researchers have employed various methodologies, using wearable and/or non-wearable sensors and algorithms ranging from simple threshold-based techniques to more advanced deep learning approaches. In this review, literature from the past decade is critically examined, specifically exploring the technological aspects of Human Activity Recognition in Ambient Assisted Living. An exhaustive analysis of the adopted methodologies is provided, highlighting their strengths and weaknesses. Finally, the challenges encountered in the field of Human Activity Recognition for Ambient Assisted Living are thoroughly discussed; they encompass issues related to data collection, model training, real-time performance, generalizability, and user acceptance. Miniaturization, unobtrusiveness, energy harvesting, and communication efficiency will be the crucial factors for new wearable solutions.
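As a concrete illustration of the "simple threshold-based techniques" end of the spectrum mentioned above, the sketch below flags candidate impact events when the acceleration magnitude from a wearable IMU exceeds a fixed threshold. The 2.5 g threshold, the function name, and the random stand-in data are hypothetical, not drawn from any specific study covered by the review.

```python
import numpy as np

def detect_impacts(acc_xyz, g=9.81, threshold_g=2.5):
    """acc_xyz: (n_samples, 3) accelerometer readings in m/s^2.
    Returns the sample indices where the magnitude exceeds the threshold."""
    magnitude = np.linalg.norm(acc_xyz, axis=1) / g  # magnitude in units of g
    return np.flatnonzero(magnitude > threshold_g)

# Stand-in data: noisy readings around 1 g of gravity on the z axis
acc = np.random.normal(0.0, 1.0, size=(1000, 3)) + np.array([0.0, 0.0, 9.81])
print(detect_impacts(acc))
```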
ABSTRACT
Ambient Assisted Living (AAL) systems are designed to provide unobtrusive, user-friendly support in daily life and can be used to monitor frail people through various types of sensors, including wearables and cameras. Although cameras can be perceived as intrusive in terms of privacy, low-cost RGB-D devices (e.g., the Kinect V2) that extract skeletal data can partially overcome these limitations. In addition, deep learning-based algorithms, such as Recurrent Neural Networks (RNNs), can be trained on skeletal tracking data to automatically identify different human postures in the AAL domain. In this study, we investigate the performance of two RNN models (2BLSTM and 3BGRU) in identifying daily living postures and potentially dangerous situations in a home monitoring system, based on 3D skeletal data acquired with the Kinect V2. We tested the RNN models with two different feature sets: one consisting of eight hand-crafted kinematic features selected by a genetic algorithm, and another consisting of the 52 ego-centric 3D coordinates of the considered skeleton joints, plus the subject's distance from the Kinect V2. To improve the generalization ability of the 3BGRU model, we also applied a data augmentation method to balance the training dataset. With this last solution, we reached an accuracy of 88%, the best achieved so far.
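A hedged PyTorch sketch of a 3BGRU-style classifier follows, assuming "3BGRU" denotes three bidirectional GRU layers. The input size of 53 reflects the 52 ego-centric joint coordinates plus the Kinect V2 distance described above; the hidden size, sequence length, and number of posture classes are illustrative guesses rather than the authors' settings.

```python
import torch
import torch.nn as nn

class PostureBGRU(nn.Module):
    """Posture classifier over short sequences of per-frame skeletal features."""
    def __init__(self, n_features=53, hidden=32, n_postures=4):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, num_layers=3,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_postures)

    def forward(self, x):                # x: (batch, frames, features)
        out, _ = self.gru(x)
        return self.head(out[:, -1, :])  # classify from the final frame's state

model = PostureBGRU()
frames = torch.randn(16, 30, 53)         # 30-frame skeletal sequences
print(model(frames).shape)                # torch.Size([16, 4])
```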
Subjects
Algorithms; Posture; Humans; Neural Networks, Computer; Skeleton; Monitoring, Physiologic
ABSTRACT
Human Action Recognition (HAR) is a rapidly evolving field impacting numerous domains, among which is Ambient Assisted Living (AAL). In such a context, the aim of HAR is to meet the needs of frail individuals, whether elderly and/or disabled, and to promote autonomous, safe, and secure living. To this end, we propose a monitoring system that detects dangerous situations by classifying human postures through Artificial Intelligence (AI) solutions. The developed algorithm works on a set of features computed from the skeleton data provided by four Kinect One systems simultaneously recording the scene from different angles, identifying the posture of the subject in an ecological context within each recorded frame. Here, we compare the recognition abilities of Multi-Layer Perceptron (MLP) and Long Short-Term Memory (LSTM) sequence networks. Starting from the set of previously selected features, we performed a further SVM-based feature selection to optimize the MLP network, and used a genetic algorithm to select the features for the LSTM sequence model. We then optimized the architecture and hyperparameters of both models before comparing their performances. The best MLP model (three hidden layers and a softmax output layer) achieved 78.4% accuracy, while the best LSTM (two bidirectional LSTM layers, two dropout layers, and a fully connected layer) reached 85.7%. The analysis of per-class performance highlights the better suitability of the LSTM approach.
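The two compared architectures could be sketched in PyTorch roughly as follows, based only on the description above: an MLP with three hidden layers and a softmax output, and a sequence model with two bidirectional LSTM layers, dropout, and a fully connected layer. Input dimensionality, layer widths, and dropout rates are assumptions; in PyTorch the softmax is typically folded into nn.CrossEntropyLoss during training.

```python
import torch
import torch.nn as nn

n_features, n_classes = 10, 4              # hypothetical dimensions

# Per-frame MLP classifier: three hidden layers, logits fed to a softmax loss
mlp = nn.Sequential(
    nn.Linear(n_features, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, n_classes),
)

class LSTMSequence(nn.Module):
    """Per-sequence classifier: two bidirectional LSTM layers + dropout + FC."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm1 = nn.LSTM(n_features, hidden, batch_first=True,
                             bidirectional=True)
        self.drop1 = nn.Dropout(0.3)
        self.lstm2 = nn.LSTM(2 * hidden, hidden, batch_first=True,
                             bidirectional=True)
        self.drop2 = nn.Dropout(0.3)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                  # x: (batch, frames, features)
        x, _ = self.lstm1(x)
        x = self.drop1(x)
        x, _ = self.lstm2(x)
        x = self.drop2(x)
        return self.fc(x[:, -1, :])        # logits from the final frame
```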
Subjects
Ambient Intelligence; Aged; Artificial Intelligence; Human Activities; Humans; Neural Networks, Computer; Posture
ABSTRACT
Continuous monitoring of frail individuals to detect dangerous situations during their daily living at home can be a powerful tool for their inclusion in society, allowing them to live independently yet safely. To this end, we developed a pose recognition system tailored to disabled students living in college dorms, based on skeleton tracking through four Kinect One devices that independently record the inhabitant from different viewpoints while preserving the individual's privacy. The system is intended to classify each data frame and provide the classification result to a further decision-making algorithm, which may trigger an alarm based on the classified pose and the location of the subject with respect to the furniture in the room. An extensive dataset was recorded on 12 individuals moving in a mockup room and assuming four poses to be recognized: standing, sitting, lying down, and "dangerous sitting." The latter consists of the subject slumped in a chair with his/her head lying forward or backward as if unconscious. Each skeleton frame was labeled and represented using 10 discriminative features: three skeletal joint vertical coordinates and seven relative and absolute angles describing articular joint positions and body segment orientations. To classify the pose of the subject in each skeleton frame, we built a multi-layer perceptron neural network with two hidden layers and a softmax output layer, which we trained on the data from 10 of the 12 subjects (495,728 frames), with the data from the two remaining subjects forming the test set (106,802 frames). The system achieved very promising results, with an average accuracy of 83.9% (ranging from 82.7% to 94.3% across the four classes). Our work proves the usefulness of machine learning-based human pose recognition in the field of safety monitoring in assisted living conditions.
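A minimal sketch of this frame classifier and one training step follows: a two-hidden-layer MLP over the 10 per-frame features with four output classes. Hidden-layer widths, the optimizer, and the learning rate are illustrative assumptions, and the data here are random stand-ins, not the recorded dataset.

```python
import torch
import torch.nn as nn

CLASSES = ["standing", "sitting", "lying_down", "dangerous_sitting"]

# Two hidden layers over the 10 skeletal features; the softmax is applied
# implicitly by nn.CrossEntropyLoss, so the network outputs raw logits.
model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, len(CLASSES)),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on random stand-in data
features = torch.randn(256, 10)            # a batch of labeled skeleton frames
labels = torch.randint(0, len(CLASSES), (256,))
loss = loss_fn(model(features), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```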