Results 1 - 20 of 745
1.
Int J Behav Nutr Phys Act ; 21(1): 77, 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39020353

ABSTRACT

BACKGROUND: The more accurately we can assess human physical behaviour in free-living conditions, the better we can understand its relationship with health and wellbeing. Thigh-worn accelerometry can be used to identify basic activity types as well as different postures with high accuracy. User-friendly software that requires no specialized programming may support the adoption of this method. This study aims to evaluate the classification accuracy of two novel no-code classification methods, namely SENS motion and ActiPASS. METHODS: A sample of 38 healthy adults (30.8 ± 9.6 years; 53% female) wore the SENS motion accelerometer (12.5 Hz; ±4 g) on their thigh during various physical activities. Participants completed standardized activities of varying intensities in the laboratory. Activities included walking, running, cycling, sitting, standing, and lying down. Subsequently, participants performed unrestricted free-living activities outside of the laboratory while being video-recorded with a chest-mounted camera. Videos were annotated using a predefined labelling scheme, and the annotations served as the reference for the free-living condition. Classification output from the SENS motion software and ActiPASS software was compared to the reference labels. RESULTS: A total of 63.6 h of activity data were analysed. We observed a high level of agreement between the two classification algorithms and their respective references in both conditions. In the free-living condition, Cohen's kappa coefficients were 0.86 for SENS and 0.92 for ActiPASS. The mean balanced accuracy ranged from 0.81 (cycling) to 0.99 (running) for SENS and from 0.92 (walking) to 0.99 (sedentary) for ActiPASS across all activity types. CONCLUSIONS: The study shows that the two available no-code classification methods can accurately identify basic physical activity types and postures. Our results highlight the accuracy of both methods despite the relatively low sampling frequency of the data. The classification methods differed in performance, with lower sensitivity observed for free-living cycling (SENS) and slow treadmill walking (ActiPASS). The two methods use different sets of activity classes with varying definitions, which may explain the observed differences. Our results support the use of the SENS motion system and both no-code classification methods.
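
As a minimal illustration of the agreement metrics reported above (Cohen's kappa and per-class balanced accuracy), the following sketch compares an illustrative predicted label sequence against a reference sequence. The label names and arrays are made up; this is not the SENS motion or ActiPASS software.

```python
# Minimal sketch (not the authors' code): comparing a classifier's activity labels
# against video-annotated reference labels, mirroring the validation design above.
import numpy as np
from sklearn.metrics import cohen_kappa_score, balanced_accuracy_score

reference = np.array(["sit", "sit", "stand", "walk", "walk", "cycle", "run", "lie"])
predicted = np.array(["sit", "stand", "stand", "walk", "walk", "walk", "run", "lie"])

# Overall chance-corrected agreement between algorithm and annotation.
kappa = cohen_kappa_score(reference, predicted)

# One-vs-rest balanced accuracy per activity type (mean of sensitivity and specificity).
per_class = {
    label: balanced_accuracy_score(reference == label, predicted == label)
    for label in np.unique(reference)
}
print(f"kappa={kappa:.2f}", per_class)
```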


Subject(s)
Accelerometry, Exercise, Thigh, Walking, Humans, Female, Male, Adult, Accelerometry/methods, Exercise/physiology, Walking/physiology, Young Adult, Algorithms, Software, Running/physiology, Bicycling/physiology, Posture
2.
Biomed Eng Online ; 23(1): 21, 2024 Feb 17.
Article in English | MEDLINE | ID: mdl-38368358

ABSTRACT

BACKGROUND: Human activity recognition (HAR) using smartphone sensors suffers from two major problems: sensor orientation and placement. The sensor orientation and sensor placement problems refer to the variation in the sensor signal for a particular activity caused by changes in the sensor's orientation and placement. Extracting orientation- and position-invariant features from raw sensor signals is a simple way to tackle these problems, and using a few heuristic features rather than numerous time-domain and frequency-domain features keeps the approach simple. Heuristic features are features that are only minimally affected by sensor orientation and placement. In this study, we evaluated the effectiveness of four simple heuristic features in solving the sensor orientation and placement problems using a 1D-CNN-LSTM model on a dataset of over 12 million samples. METHODS: We collected data from 42 participants for six common daily activities: lying, sitting, walking, and running at 3, 5, and 7 metabolic equivalents of task (METs), using a single accelerometer of a smartphone. We conducted our study for three smartphone positions: pocket, backpack, and hand. We extracted simple heuristic features from the accelerometer data and used them to train and test a 1D-CNN-LSTM model to evaluate their effectiveness in solving the sensor orientation and placement problems. RESULTS: We performed intra-position and inter-position evaluations. In the intra-position evaluation, we trained and tested the model using data from the same smartphone position, whereas in the inter-position evaluation, the training and test data came from different smartphone positions. For the intra-position evaluation, we obtained 70-73% accuracy; for the inter-position cases, accuracies ranged between 59% and 69%. We also performed participant-specific and activity-specific analyses. CONCLUSIONS: We found that the simple heuristic features are considerably effective in solving the orientation problem. With further development, such as fusing the heuristic features with other methods that eliminate placement issues, better results could also be achieved for the sensor placement problem. In addition, we found the heuristic features to be more effective at recognizing high-intensity activities.
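
The abstract does not spell out the four heuristic features, so the sketch below only illustrates the general idea with one widely used orientation-invariant quantity, the acceleration vector magnitude, windowed into fixed-length segments that a 1D-CNN-LSTM could consume. The window length and step are assumptions.

```python
# Illustrative sketch only: one common orientation-invariant quantity, the
# acceleration vector magnitude, windowed for a 1D sequence model.
import numpy as np

def magnitude_windows(acc_xyz: np.ndarray, win: int = 128, step: int = 64) -> np.ndarray:
    """acc_xyz: (n_samples, 3) raw accelerometer data -> (n_windows, win) magnitudes."""
    mag = np.linalg.norm(acc_xyz, axis=1)        # invariant to sensor rotation
    starts = range(0, len(mag) - win + 1, step)
    return np.stack([mag[s:s + win] for s in starts])

acc = np.random.randn(1000, 3)                   # placeholder signal
X = magnitude_windows(acc)                       # feed to a 1D-CNN-LSTM as (batch, length)
print(X.shape)
```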


Subject(s)
Heuristics, Smartphone, Humans, Human Activities, Walking, Accelerometry/methods
3.
Biomed Eng Online ; 23(1): 17, 2024 Feb 09.
Article in English | MEDLINE | ID: mdl-38336781

ABSTRACT

BACKGROUND: The research gap addressed in this study is the applicability of deep neural network (NN) models to wearable sensor data for recognizing activities performed by patients with Parkinson's disease (PwPD), and the generalizability of models trained on labeled healthy data to PwPD. METHODS: The experiments were carried out on three datasets containing wearable motion sensor readings for common activities of daily living. The readings came from two accelerometer sensors. PAMAP2 and MHEALTH are publicly available datasets collected from 10 and 9 healthy, young subjects, respectively. A private dataset of a similar nature, collected from 14 PwPD, was used as well. Deep NN models of varying complexity were implemented to investigate the impact of data augmentation, manual axis reorientation, model complexity, and domain adaptation on activity recognition performance. RESULTS: A moderately complex model trained on the augmented PAMAP2 dataset and adapted to the Parkinson domain using domain adaptation achieved the best activity recognition performance, with an accuracy of 73.02%, significantly higher than the 63% accuracy reported in previous studies. Its F1 score of 49.79% was also a marked improvement over the best cross-testing results of 33.66% with data augmentation alone and 2.88% without data augmentation or domain adaptation. CONCLUSION: These findings suggest that deep NN models trained on healthy data have the potential to accurately recognize activities performed by PwPD, and that data augmentation and domain adaptation can improve the generalizability of models in the healthy-to-PwPD transfer scenario. The simple to moderately complex architectures tested in this study generalized better to the PwPD domain when trained on a healthy dataset than the most complex architectures used. These findings could contribute to the development of accurate wearable-based activity monitoring solutions for PwPD, improving clinical decision-making and patient outcomes based on patient activity levels.
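
The exact augmentations are not listed in the abstract; the hedged sketch below shows two common choices for wearable accelerometer windows, a random 3-D rotation (a stand-in for axis reorientation) and Gaussian jitter.

```python
# Hedged sketch: two generic augmentations for accelerometer windows, not the
# study's specific augmentation pipeline.
import numpy as np
from scipy.spatial.transform import Rotation

def augment(window: np.ndarray, jitter_std: float = 0.05) -> np.ndarray:
    """window: (n_samples, 3) accelerometer window -> augmented copy."""
    rot = Rotation.random().as_matrix()          # random orientation change
    rotated = window @ rot.T
    return rotated + np.random.normal(0.0, jitter_std, size=window.shape)

w = np.random.randn(256, 3)                      # placeholder window
print(augment(w).shape)
```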


Subject(s)
Parkinson Disease, Wearable Electronic Devices, Humans, Parkinson Disease/diagnosis, Activities of Daily Living, Neural Networks (Computer), Motion (Physics)
4.
Sensors (Basel) ; 24(8)2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38676166

ABSTRACT

Shoe-based wearable sensor systems are a growing research area in health monitoring, disease diagnosis, rehabilitation, and sports training. These systems, equipped with one or more sensors of the same or different types, capture information related to foot movement or the pressure map beneath the foot. This captured information offers an overview of the subject's overall movement, known as the human gait. Beyond sensing, these systems also provide a platform for hosting ambient energy harvesters: they hold the potential to harvest energy from foot movements and to operate related low-power devices sustainably. This article proposes two strategies (Strategy 1 and Strategy 2) for an energy-autonomous shoe-based system. Strategy 1 uses an accelerometer as the sensor for gait acquisition, reflecting the classical choice. Strategy 2 uses a piezoelectric element for the same purpose, which opens up a new perspective on its implementation. In both strategies, piezoelectric elements are used to harvest energy from foot activities and operate the system. The article presents a fair comparison between the two strategies in terms of power consumption, accuracy, and the extent to which piezoelectric energy harvesters can contribute to overall power management. Moreover, Strategy 2, which uses piezoelectric elements for simultaneous sensing and energy harvesting, is the more power-optimized method for an energy-autonomous shoe system.

5.
Sensors (Basel) ; 24(11)2024 May 21.
Article in English | MEDLINE | ID: mdl-38894070

ABSTRACT

To provide diverse in-home services such as elderly care, versatile activity recognition technology is essential. Radio-based methods, including WiFi CSI, RFID, and backscatter communication, are preferred because of their minimal privacy intrusion, reduced physical burden, and low maintenance costs. However, these methods face challenges, including environmental dependence, proximity limitations between the device and the user, and untested accuracy amidst various radio obstacles such as furniture, appliances, walls, and other radio waves. In this paper, we propose a frequency-shift backscatter tag-based in-home activity recognition method and test its feasibility in a near-real residential setting. Consisting of simple components such as antennas and switches, these tags enable ultra-low power consumption and are robust against environmental noise, because the context corresponding to a tag can be obtained simply by observing frequency shifts. We implemented a sensing system consisting of SD-WiFi, a software-defined WiFi AP, and physical switches on backscatter tags tailored to detect the movements of daily objects. Our experiments demonstrate that frequency shifts produced by the tags can be detected within a 2 m range with 72% accuracy under line-of-sight (LoS) conditions, and that seven typical daily living activities can be recognized with a 96.0% F-score given an appropriate receiver/transmitter layout. Furthermore, in an additional experiment, we confirmed that increasing the number of overlaid packets enables frequency-shift detection even without LoS at distances of 3-5 m.
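
As a toy illustration of the frequency-shift idea (not the authors' SD-WiFi pipeline), the sketch below locates the dominant frequency offset in a simulated baseband capture and maps it to a tag. The tag frequencies, sample rate, and object names are invented.

```python
# Toy sketch: detect which backscatter tag is active by finding the dominant
# frequency offset in a received baseband signal. All values are illustrative.
import numpy as np

fs = 10_000                                                 # sample rate (Hz), assumed
tag_shifts = {50: "drawer", 120: "kettle", 300: "door"}     # Hz -> object, hypothetical

t = np.arange(0, 0.5, 1 / fs)
rx = np.cos(2 * np.pi * 120 * t) + 0.3 * np.random.randn(t.size)  # simulated capture

spectrum = np.abs(np.fft.rfft(rx))
freqs = np.fft.rfftfreq(rx.size, 1 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]                   # skip the DC bin
closest = min(tag_shifts, key=lambda f: abs(f - peak))
print(f"peak at {peak:.1f} Hz -> tag on the {tag_shifts[closest]}")
```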


Subject(s)
Activities of Daily Living, Wireless Technology, Humans, Radio Waves, Radio Frequency Identification Device/methods
6.
Sensors (Basel) ; 24(11)2024 May 24.
Article in English | MEDLINE | ID: mdl-38894164

ABSTRACT

Group-activity scene graph (GASG) generation is a challenging task in computer vision, aiming to anticipate and describe relationships between subjects and objects in video sequences. Traditional video scene graph generation (VidSGG) methods focus on retrospective analysis, limiting their predictive capabilities. To enrich scene-understanding capabilities, we introduce a GASG dataset extending the JRDB dataset with nuanced annotations covering appearance, interaction, position, relationship, and situation attributes. This work also introduces an innovative approach, a Hierarchical Attention-Flow (HAtt-Flow) mechanism, rooted in flow network theory, to enhance GASG performance. Flow-attention incorporates flow conservation principles, fostering competition for sources and allocation for sinks, effectively preventing the generation of trivial attention. Our proposed approach offers a unique perspective on attention mechanisms, in which conventional "values" and "keys" are transformed into sources and sinks, respectively, creating a novel framework for attention-based models. Through extensive experiments, we demonstrate the effectiveness of our HAtt-Flow model and the superiority of our proposed flow-attention mechanism. This work represents a significant advancement in predictive video scene understanding, providing valuable insights and techniques for applications that require real-time relationship prediction in video data.

7.
Sensors (Basel) ; 24(11)2024 May 24.
Article in English | MEDLINE | ID: mdl-38894162

ABSTRACT

Composite indoor human activity recognition is very important in elderly health monitoring and is more difficult than identifying individual human movements. This article proposes a sensor-based indoor human activity recognition method that integrates indoor positioning. Convolutional neural networks are used to extract spatial information contained in geomagnetic sensor and ambient light sensor data, while Transformer encoders are used to extract temporal motion features collected by gyroscopes and accelerometers. We established an indoor activity recognition model with a multimodal feature fusion structure. To explore the possibility of completing the above tasks using only a smartphone, we collected and established a multisensor indoor activity dataset. Extensive experiments verified the effectiveness of the proposed method. Compared with algorithms that do not consider location information, our method improves recognition accuracy by 13.65%.
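
A minimal sketch of the multimodal fusion idea described above, assuming a CNN branch for the positioning-related channels and a Transformer-encoder branch for the inertial channels, fused by concatenation; the channel counts, layer sizes, and class count are assumptions, not the published model.

```python
# Minimal fusion sketch, not the published architecture.
import torch
import torch.nn as nn

class FusionHAR(nn.Module):
    def __init__(self, n_classes: int = 8):
        super().__init__()
        self.spatial = nn.Sequential(              # geomagnetic + ambient light (4 ch, assumed)
            nn.Conv1d(4, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.temporal_proj = nn.Linear(6, 32)       # gyroscope + accelerometer (6 ch, assumed)
        enc_layer = nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Linear(32 + 32, n_classes)

    def forward(self, pos_x, imu_x):
        # pos_x: (batch, 4, T), imu_x: (batch, T, 6)
        s = self.spatial(pos_x)
        t = self.temporal(self.temporal_proj(imu_x)).mean(dim=1)
        return self.head(torch.cat([s, t], dim=1))

logits = FusionHAR()(torch.randn(2, 4, 128), torch.randn(2, 128, 6))
print(logits.shape)
```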


Subject(s)
Accelerometry, Algorithms, Human Activities, Neural Networks (Computer), Smartphone, Humans, Accelerometry/instrumentation, Accelerometry/methods, Physiological Monitoring/instrumentation, Physiological Monitoring/methods
8.
Sensors (Basel) ; 24(4)2024 Feb 14.
Article in English | MEDLINE | ID: mdl-38400378

ABSTRACT

Computer vision (CV)-based recognition approaches have accelerated the automation of safety and progress monitoring on construction sites. However, few studies have explored their application to process-based quality control of construction works, especially concealed work. In this study, a framework is developed to facilitate process-based quality control using Spatial-Temporal Graph Convolutional Networks (ST-GCNs). To test this model experimentally, we used an on-site video dataset of plastering work to recognize construction activities. An ST-GCN model was constructed to identify the four primary activities in plastering works, attaining 99.48% accuracy on the validation set. The ST-GCN model was then employed to recognize the activities in three additional videos, which represented, respectively, a process with the four activities in the correct order, a process without the fiberglass mesh covering activity, and a process with the four activities in the wrong order. The results indicated that the activity order could be clearly derived from the model's activity recognition output, making it straightforward to judge whether key activities were missing or performed in the wrong order. This study presents a promising framework with the potential to support active, real-time, process-based quality control at construction sites.
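
The order-checking step implied above can be as simple as collapsing frame-level predictions into an activity sequence and comparing it with the required order. The activity names below are illustrative, not the labels used in the study.

```python
# Simple post-hoc check: verify that required plastering steps are present and in order.
from itertools import groupby

REQUIRED_ORDER = ["base_coat", "mesh_covering", "second_coat", "finishing"]  # illustrative

def check_process(frame_labels: list[str]) -> str:
    sequence = [label for label, _ in groupby(frame_labels)]   # collapse repeated runs
    present = [a for a in sequence if a in REQUIRED_ORDER]
    if set(REQUIRED_ORDER) - set(present):
        return "missing key activity"
    return "order OK" if present == REQUIRED_ORDER else "wrong order"

print(check_process(["base_coat"] * 5 + ["second_coat"] * 4
                    + ["mesh_covering"] * 3 + ["finishing"] * 2))
```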

9.
Sensors (Basel) ; 24(4)2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38400393

ABSTRACT

Human activity recognition (HAR) in wearable and ubiquitous computing typically involves translating sensor readings into feature representations, either derived through dedicated pre-processing procedures or integrated into end-to-end learning approaches. Independent of their origin, for the vast majority of contemporary HAR methods and applications, those feature representations are continuous in nature. That has not always been the case. In the early days of HAR, discretization approaches were explored, primarily motivated by the desire to minimize the computational requirements of HAR, but also with a view to applications beyond mere activity classification, such as activity discovery, fingerprinting, or large-scale search. Those traditional discretization approaches, however, suffer from a substantial loss of precision and resolution in the resulting data representations, with detrimental effects on downstream analysis tasks. Times have changed, and in this paper, we propose a return to discretized representations. We adopt and apply recent advancements in vector quantization (VQ) to wearables applications, which enables us to directly learn a mapping between short spans of sensor data and a codebook of vectors, where the codebook index constitutes the discrete representation, resulting in recognition performance that is at least on par with, and often surpasses, that of contemporary continuous counterparts. This work therefore presents a proof of concept demonstrating how effective discrete representations can be derived, enabling applications beyond mere activity classification and opening up the field to advanced tools for the analysis of symbolic sequences, as known, for example, from domains such as natural language processing. Based on an extensive experimental evaluation on a suite of wearable-based benchmark HAR tasks, we demonstrate the potential of our learned discretization scheme and discuss how discretized sensor data analysis can lead to substantial changes in HAR.
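
A conceptual sketch of the discretization idea, assuming a k-means codebook rather than the paper's learned VQ module: each short sensor span is replaced by the index of its nearest codebook vector, yielding a symbolic sequence.

```python
# Conceptual sketch: map short sensor spans to nearest-codebook indices.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
spans = rng.normal(size=(500, 32))               # 500 short spans of sensor data (toy)

codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(spans)
symbols = codebook.predict(spans)                # one discrete token per span
print(symbols[:20])                              # a "sentence" of activity tokens
```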


Subject(s)
Human Activities, Wearable Electronic Devices, Humans, Machine Learning, Natural Language Processing
10.
Sensors (Basel) ; 24(3)2024 Feb 04.
Article in English | MEDLINE | ID: mdl-38339732

ABSTRACT

Traditional systems for indoor pressure sensing and human activity recognition (HAR) rely on costly, high-resolution mats and computationally intensive neural network-based (NN-based) models that are prone to noise. In contrast, we design a cost-effective and noise-resilient pressure mat system for HAR, leveraging Velostat for intelligent pressure sensing and a novel hyperdimensional computing (HDC) classifier that is lightweight and highly noise resilient. To measure the performance of our system, we collected two datasets, capturing the static and continuous nature of human movements. Our HDC-based classification algorithm shows an accuracy of 93.19%, improving the accuracy by 9.47% over state-of-the-art CNNs, along with an 85% reduction in energy consumption. We propose a new HDC noise-resilient algorithm and analyze the performance of our proposed method in the presence of three different kinds of noise, including memory and communication, input, and sensor noise. Our system is more resilient across all three noise types. Specifically, in the presence of Gaussian noise, we achieve an accuracy of 92.15% (97.51% for static data), representing a 13.19% (8.77%) improvement compared to state-of-the-art CNNs.
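
A minimal hyperdimensional-computing sketch, assuming a simple random-projection encoder rather than the paper's design: pressure-mat frames are mapped to bipolar hypervectors, bundled into one prototype per class, and classified by similarity. The dimensions and data are placeholders.

```python
# Minimal HDC sketch: random-projection encoding, class prototypes by bundling,
# classification by dot-product similarity. Not the published encoder.
import numpy as np

D, n_features = 10_000, 64                        # hypervector dimensionality, input size (assumed)
rng = np.random.default_rng(0)
projection = rng.choice([-1, 1], size=(n_features, D))

def encode(x: np.ndarray) -> np.ndarray:
    return np.sign(x @ projection)                # bipolar hypervector

def train(X: np.ndarray, y: np.ndarray) -> dict:
    return {c: np.sign(np.sum([encode(x) for x in X[y == c]], axis=0))
            for c in np.unique(y)}

def classify(x: np.ndarray, prototypes: dict):
    hv = encode(x)
    return max(prototypes, key=lambda c: hv @ prototypes[c])

X, y = rng.normal(size=(200, n_features)), rng.integers(0, 4, size=200)
prototypes = train(X, y)
print(classify(X[0], prototypes), y[0])
```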


Subject(s)
Algorithms, Neural Networks (Computer), Humans, Noise, Human Activities, Movement
11.
Sensors (Basel) ; 24(8)2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38676108

ABSTRACT

Egocentric activity recognition is a prominent computer vision task based on the use of wearable cameras. Since egocentric videos are captured from the perspective of the person wearing the camera, that person's body motions severely complicate the video content, imposing several challenges. In this work we propose a novel approach for domain-generalized egocentric human activity recognition. Typical approaches use a large amount of training data, aiming to cover all possible variants of each action, and several recent approaches attempt to handle discrepancies between domains with a variety of costly and mostly unsupervised domain adaptation methods. In our approach we show that, through simple manipulation of available source-domain data and with minor involvement of the target domain, we are able to produce robust models that adequately predict human activity in egocentric video sequences. To this end, we introduce a novel three-stream deep neural network architecture combining elements of vision transformers and residual neural networks, trained using multi-modal data. We evaluate the proposed approach on a challenging egocentric video dataset and demonstrate its superiority over recent state-of-the-art research works.


Subject(s)
Neural Networks (Computer), Video Recording, Humans, Video Recording/methods, Algorithms, Automated Pattern Recognition/methods, Computer-Assisted Image Processing/methods, Human Activities, Wearable Electronic Devices
12.
Sensors (Basel) ; 24(8)2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38676149

ABSTRACT

Activity recognition is one of the significant technologies accompanying the development of the Internet of Things (IoT). It can help record daily life activities or report emergencies, thus improving users' quality of life and safety and even easing the workload of caregivers. This study proposes a human activity recognition (HAR) system based on activity data obtained via the micro-Doppler effect, combining a two-stream one-dimensional convolutional neural network (1D-CNN) with a bidirectional gated recurrent unit (BiGRU). First, radar sensor data are used to generate time- and frequency-response information using the short-time Fourier transform (STFT). The magnitudes and phase values are then calculated and fed into the 1D-CNN and BiGRU models to extract spatial and temporal features for subsequent model training and activity recognition. Additionally, we propose a simple cross-channel operation (CCO) to facilitate the exchange of magnitude and phase features between parallel convolutional layers. An open radar dataset, Rad-HAR, is employed for model training and performance evaluation. Experimental results show that the proposed 1D-CNN+CCO-BiGRU model achieved superior performance, with an impressive accuracy of 98.2%, outperforming existing radar-based systems and underscoring the model's potential applicability in real-world scenarios, marking a significant advancement in the field of HAR within the IoT framework.
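
An illustrative pre-processing sketch of the STFT step described above, with an invented toy signal and assumed parameters: the magnitude and phase of the STFT form the two streams fed to the 1D-CNN and BiGRU branches.

```python
# Illustrative STFT pre-processing: split a radar return into magnitude and phase streams.
import numpy as np
from scipy.signal import stft

fs = 2_000                                        # sample rate (Hz), assumed
t = np.arange(0, 2, 1 / fs)
signal = np.cos(2 * np.pi * (50 + 20 * np.sin(2 * np.pi * 0.5 * t)) * t)  # toy micro-Doppler

f, times, Z = stft(signal, fs=fs, nperseg=256)
magnitude, phase = np.abs(Z), np.angle(Z)         # inputs to the 1D-CNN / BiGRU streams
print(magnitude.shape, phase.shape)
```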


Subject(s)
Deep Learning, Human Activities, Neural Networks (Computer), Radar, Humans, Algorithms, Internet of Things
13.
Sensors (Basel) ; 24(12)2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38931675

ABSTRACT

Human Activity Recognition (HAR) plays an important role in the automation of various tasks related to activity tracking in areas such as healthcare and eldercare (telerehabilitation, telemonitoring), security, ergonomics, entertainment (fitness, sports promotion, human-computer interaction, video games), and intelligent environments. This paper tackles the problem of real-time recognition and repetition counting of 12 types of exercises performed during athletic workouts. Our approach is based on a deep neural network model fed by the signal from a 9-axis motion sensor (IMU) placed on the chest. The model can run on mobile platforms (iOS, Android). We discuss design requirements for the system and their impact on data collection protocols. We present an architecture based on an encoder pretrained with contrastive learning. Compared to end-to-end training, the presented approach significantly improves the developed model's quality in terms of accuracy (F1 score, MAPE) and robustness (false-positive rate) during background activity. We make the AIDLAB-HAR dataset publicly available to encourage further research.
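
Repetition counting for periodic exercises can be sketched, under simplifying assumptions, as peak detection on a smoothed IMU channel; the thresholds and the toy signal below are illustrative and unrelated to the published model.

```python
# Simplified repetition-counting sketch: count peaks in a smoothed IMU channel.
import numpy as np
from scipy.signal import find_peaks

fs = 100                                          # Hz, assumed
t = np.arange(0, 12, 1 / fs)
accel_z = np.sin(2 * np.pi * 0.8 * t) + 0.1 * np.random.randn(t.size)  # ~0.8 reps/s, toy

smoothed = np.convolve(accel_z, np.ones(10) / 10, mode="same")
peaks, _ = find_peaks(smoothed, height=0.5, distance=fs // 2)   # thresholds need tuning
print(f"estimated repetitions: {len(peaks)}")
```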


Subject(s)
Human Activities, Neural Networks (Computer), Telemedicine, Humans, Exercise/physiology, Algorithms
14.
Sensors (Basel) ; 24(7)2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38610574

ABSTRACT

Significant strides have been made in the field of WiFi-based human activity recognition, yet recent wireless sensing methodologies still grapple with the reliance on copious amounts of data. When assessed in unfamiliar domains, the majority of models experience a decline in accuracy. To address this challenge, this study introduces Wi-CHAR, a novel few-shot learning-based cross-domain activity recognition system. Wi-CHAR is meticulously designed to tackle both the intricacies of specific sensing environments and pertinent data-related issues. Initially, Wi-CHAR employs a dynamic selection methodology for sensing devices, tailored to mitigate the diminished sensing capabilities observed in specific regions within a multi-WiFi sensor device ecosystem, thereby augmenting the fidelity of sensing data. Subsequent refinement involves the utilization of the MF-DBSCAN clustering algorithm iteratively, enabling the rectification of anomalies and enhancing the quality of subsequent behavior recognition processes. Furthermore, the Re-PN module is consistently engaged, dynamically adjusting feature prototype weights to facilitate cross-domain activity sensing in scenarios with limited sample data, effectively distinguishing between accurate and noisy data samples, thus streamlining the identification of new users and environments. The experimental results show that the average accuracy is more than 93% (five-shot) in various scenarios. Even in cases where the target domain has fewer data samples, better cross-domain results can be achieved. Notably, evaluation on publicly available datasets, WiAR and Widar 3.0, corroborates Wi-CHAR's robust performance, boasting accuracy rates of 89.7% and 92.5%, respectively. In summary, Wi-CHAR delivers recognition outcomes on par with state-of-the-art methodologies, meticulously tailored to accommodate specific sensing environments and data constraints.
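
The prototype-based matching behind few-shot recognition (here a generic prototypical-network sketch, not Wi-CHAR's Re-PN module) reduces to computing per-class mean embeddings and assigning a query to the nearest one; the identity embedding and data below are placeholders.

```python
# Generic prototypical-network sketch: mean support embeddings as class prototypes.
import numpy as np

def prototypes(support_emb: np.ndarray, support_y: np.ndarray) -> dict:
    return {c: support_emb[support_y == c].mean(axis=0) for c in np.unique(support_y)}

def predict(query_emb: np.ndarray, protos: dict):
    return min(protos, key=lambda c: np.linalg.norm(query_emb - protos[c]))

rng = np.random.default_rng(1)
support = rng.normal(size=(10, 16))               # five-shot, two activities (toy)
labels = np.array([0] * 5 + [1] * 5)
query = support[7] + 0.05 * rng.normal(size=16)
print(predict(query, prototypes(support, labels)))  # expected: 1
```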

15.
Sensors (Basel) ; 24(5)2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38475146

ABSTRACT

Various sensing modalities, including external and internal sensors, have been employed in research on human activity recognition (HAR). Among these, internal sensors, particularly wearable technologies, hold significant promise due to their lightweight nature and simplicity. Recently, HAR techniques leveraging wearable biometric signals, such as electrocardiography (ECG) and photoplethysmography (PPG), have been proposed using publicly available datasets. However, to facilitate broader practical applications, a more extensive analysis based on larger databases with cross-subject validation is required. In pursuit of this objective, we initially gathered PPG signals from 40 participants engaged in five common daily activities. Subsequently, we evaluated the feasibility of classifying these activities using a deep learning architecture. The model's performance was assessed in terms of accuracy, precision, recall, and F1 measure via cross-subject cross-validation (CV). The proposed method successfully distinguished the five activities considered, with an average test accuracy of 95.14%. Furthermore, we recommend an optimal window size based on a comprehensive evaluation of performance relative to the input signal length. These findings confirm the potential for practical HAR applications based on PPG and indicate its prospective extension to various domains, such as healthcare or fitness applications, by concurrently analyzing behavioral and health data through a single biometric signal.
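
A sketch of the cross-subject evaluation protocol, assuming a placeholder classifier and synthetic window features: leave-one-subject-out splits guarantee that data from the test participant never appear in training.

```python
# Cross-subject evaluation sketch with leave-one-subject-out splits.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 30))                    # PPG window features, synthetic
y = rng.integers(0, 5, size=400)                  # five daily activities
subjects = np.repeat(np.arange(40), 10)           # 40 participants

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))
print(f"mean cross-subject accuracy: {np.mean(scores):.2f}")
```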


Subject(s)
Neural Networks (Computer), Photoplethysmography, Humans, Photoplethysmography/methods, Prospective Studies, Electrocardiography/methods, Human Activities
16.
Sensors (Basel) ; 24(6)2024 Mar 16.
Article in English | MEDLINE | ID: mdl-38544172

ABSTRACT

Physical exercise affects many facets of life, including mental health, social interaction, physical fitness, and illness prevention, among many others. Accordingly, several AI-driven techniques have been developed in the literature to recognize human physical activities. However, these techniques fail to adequately learn the temporal and spatial features of the data patterns and cannot fully comprehend complex activity patterns over different periods, emphasizing the need for enhanced architectures that further increase accuracy by learning the spatiotemporal dependencies in the data separately. Therefore, in this work, we develop an attention-enhanced dual-stream network (PAR-Net) for physical activity recognition with the ability to extract both spatial and temporal features simultaneously. PAR-Net integrates convolutional neural networks (CNNs) and echo state networks (ESNs), followed by a self-attention mechanism for optimal feature selection. The dual-stream feature extraction mechanism enables PAR-Net to learn spatiotemporal dependencies from real data, and the incorporation of a self-attention mechanism contributes substantially by focusing attention on significant features, enhancing the identification of nuanced activity patterns. PAR-Net was evaluated on two benchmark physical activity recognition datasets and outperformed the baselines. Additionally, a thorough ablation study was conducted to determine the optimal model for human physical activity recognition.
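
To illustrate the echo-state-network component of the temporal stream, the sketch below runs a small random reservoir over an inertial sequence; the reservoir size, scaling, and input dimensionality are assumptions, not the PAR-Net configuration.

```python
# Minimal echo-state-network reservoir sketch (sizes and scaling are assumptions).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 6, 200                              # e.g. 6 inertial channels, assumed
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep spectral radius below 1

def reservoir_states(u_seq: np.ndarray) -> np.ndarray:
    x = np.zeros(n_res)
    states = []
    for u in u_seq:                               # u_seq: (T, n_in)
        x = np.tanh(W_in @ u + W @ x)
        states.append(x)
    return np.stack(states)                       # features for an attention/readout stage

print(reservoir_states(rng.normal(size=(128, n_in))).shape)
```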


Subject(s)
Machine Learning, Neural Networks (Computer), Humans, Human Activities, Recognition (Psychology), Exercise
17.
Sensors (Basel) ; 24(6)2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38544204

ABSTRACT

The advancement of deep learning in human activity recognition (HAR) using 3D skeleton data is critical for applications in healthcare, security, sports, and human-computer interaction. This paper tackles a well-known gap in the field: the lack of testing of the applicability and reliability of XAI evaluation metrics in the skeleton-based HAR domain. To address this problem, we tested established XAI metrics, namely faithfulness and stability, on Class Activation Mapping (CAM) and Gradient-weighted Class Activation Mapping (Grad-CAM). This study introduces a perturbation method that produces variations within the error tolerance of motion sensor tracking, ensuring that the resultant skeletal data points remain within the plausible output range of human movement as captured by the tracking device. We used the NTU RGB+D 60 dataset and the EfficientGCN architecture for HAR model training and testing. The evaluation involved systematically perturbing the 3D skeleton data by applying controlled displacements of different magnitudes to assess the impact on XAI metric performance across multiple action classes. Our findings reveal that faithfulness may not consistently serve as a reliable metric across all classes for the EfficientGCN model, indicating its limited applicability in certain contexts. In contrast, stability proves to be a more robust metric, showing dependability across different perturbation magnitudes. Additionally, CAM and Grad-CAM yielded almost identical explanations, leading to closely similar metric outcomes. This suggests a need to explore additional metrics and apply more diverse XAI methods to broaden the understanding and effectiveness of XAI in skeleton-based HAR.
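
A hedged sketch of the perturbation idea: displace each 3-D joint by a small bounded offset, mimicking tracker error, and compare explanation maps before and after. The tolerance value, the stand-in explanation function, and the correlation-based stability proxy are assumptions, not the paper's metrics.

```python
# Hedged sketch: bounded skeleton perturbation plus a simple explanation-similarity proxy.
import numpy as np

def perturb_skeleton(joints: np.ndarray, tol: float = 0.01) -> np.ndarray:
    """joints: (frames, n_joints, 3) in metres; displace each coordinate within +/- tol."""
    return joints + np.random.uniform(-tol, tol, size=joints.shape)

def explain(x: np.ndarray) -> np.ndarray:         # stand-in for a CAM/Grad-CAM map
    return np.abs(x).sum(axis=-1)

def explanation_similarity(expl_a: np.ndarray, expl_b: np.ndarray) -> float:
    return float(np.corrcoef(expl_a.ravel(), expl_b.ravel())[0, 1])

skeleton = np.random.randn(64, 25, 3)             # 25 joints, NTU RGB+D-style (toy data)
score = explanation_similarity(explain(skeleton), explain(perturb_skeleton(skeleton)))
print(f"stability proxy: {score:.3f}")
```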


Subject(s)
Musculoskeletal System, Humans, Reproducibility of Results, Movement, Skeleton, Human Activities
18.
Sensors (Basel) ; 24(6)2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38544198

ABSTRACT

Lower extremity exercises are considered a standard and necessary treatment for rehabilitation and a well-rounded fitness routine, which builds strength, flexibility, and balance. The efficacy of rehabilitation programs hinges on meticulous monitoring of both adherence to home exercise routines and the quality of performance. However, in a home environment, patients often tend to inaccurately report the number of exercises performed and overlook the correctness of their rehabilitation motions, lacking quantifiable and systematic standards, thus impeding the recovery process. To address these challenges, there is a crucial need for a lightweight, unbiased, cost-effective, and objective wearable motion capture (Mocap) system designed for monitoring and evaluating home-based rehabilitation/fitness programs. This paper focuses on the development of such a system to gather exercise data into usable metrics. Five radio frequency (RF) inertial measurement unit (IMU) devices (RF-IMUs) were developed and strategically placed on calves, thighs, and abdomens. A two-layer long short-term memory (LSTM) model was used for fitness activity recognition (FAR) with an average accuracy of 97.4%. An intelligent smartphone algorithm was developed to track motion, recognize activity, and calculate key exercise variables in real time for squat, high knees, and lunge exercises. Additionally, a 3D avatar on the smartphone App allows users to observe and track their progress in real time or by replaying their exercise motions. A dynamic time warping (DTW) algorithm was also integrated into the system for scoring the similarity in two motions. The system's adaptability shows promise for applications in medical rehabilitation and sports.
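
The DTW-based similarity scoring mentioned above can be sketched with the standard dynamic-programming recurrence; the 1-D toy traces below stand in for real joint-angle or IMU signals, and a production system would use an optimized library.

```python
# Simple dynamic-time-warping sketch for scoring motion similarity.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

reference = np.sin(np.linspace(0, 2 * np.pi, 100))         # ideal squat cycle, toy
performed = np.sin(np.linspace(0, 2 * np.pi, 120)) * 0.9   # slower, shallower attempt
print(f"DTW score: {dtw_distance(reference, performed):.2f}")
```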


Subject(s)
Exercise, Wearable Electronic Devices, Humans, Exercise Therapy, Leg, Thigh
19.
Sensors (Basel) ; 24(10)2024 May 11.
Article in English | MEDLINE | ID: mdl-38793899

ABSTRACT

Metabolic syndrome poses a significant health challenge worldwide, prompting the need for comprehensive strategies that integrate physical activity monitoring and energy expenditure. Wearable sensor devices have been used for both energy intake and energy expenditure (EE) estimation. Traditionally, sensors are attached to the hip or wrist. The primary aim of this research is to investigate the use of an eyeglass-mounted wearable energy intake sensor (Automatic Ingestion Monitor v2, AIM-2) for simultaneous physical activity recognition (PAR) and estimation of steady-state EE, compared with a traditional hip-worn device. Study data were collected from six participants performing six structured activities, with reference EE measured using indirect calorimetry (COSMED K5) and reported as metabolic equivalents of task (METs). Next, a novel deep convolutional neural network-based multitasking model (Multitasking-CNN) was developed for PAR and EE estimation. The Multitasking-CNN was trained with a two-step progressive training approach for higher accuracy: in the first step the model was trained for PAR, and in the second step it was fine-tuned for EE estimation. Finally, the performance of the Multitasking-CNN on the AIM-2 attached to eyeglasses was compared to the ActiGraph GT9X (AG) attached to the right hip. On the AIM-2 data, the Multitasking-CNN achieved a maximum of 95% testing accuracy in PAR, a minimum of 0.59 METs mean square error (MSE), and 11% mean absolute percentage error (MAPE) in EE estimation. Conversely, on the AG data, the Multitasking-CNN model achieved a maximum of 82% testing accuracy in PAR, a minimum of 0.73 METs MSE, and 13% MAPE in EE estimation. These results suggest the feasibility of using an eyeglass-mounted sensor for both PAR and EE estimation.
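
A conceptual sketch of a shared-backbone multitask model with an activity-classification head and a METs-regression head, mirroring the two-step idea described above; the layer sizes and class count are assumptions, not the published Multitasking-CNN.

```python
# Conceptual multitask sketch: shared 1-D CNN backbone, classification + regression heads.
import torch
import torch.nn as nn

class MultitaskHAR(nn.Module):
    def __init__(self, n_channels: int = 3, n_classes: int = 6):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.activity_head = nn.Linear(64, n_classes)   # step 1: train for activity recognition
        self.energy_head = nn.Linear(64, 1)             # step 2: fine-tune for METs regression

    def forward(self, x):                               # x: (batch, channels, time)
        z = self.backbone(x)
        return self.activity_head(z), self.energy_head(z).squeeze(-1)

logits, mets = MultitaskHAR()(torch.randn(4, 3, 256))
print(logits.shape, mets.shape)
```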


Subject(s)
Energy Metabolism, Exercise, Eyeglasses, Neural Networks (Computer), Wearable Electronic Devices, Humans, Energy Metabolism/physiology, Exercise/physiology, Adult, Male, Indirect Calorimetry/instrumentation, Indirect Calorimetry/methods, Female, Physiological Monitoring/instrumentation, Physiological Monitoring/methods
20.
Sensors (Basel) ; 24(4)2024 Feb 12.
Article in English | MEDLINE | ID: mdl-38400357

ABSTRACT

Parkinson's disease (PD) is the second most prevalent neurodegenerative disease in the world. Wearable technology has been useful in the computer-aided diagnosis and long-term monitoring of PD in recent years. The fundamental issue remains how to assess the severity of PD using wearable devices in an efficient and accurate manner. However, in the real-world free-living environment, there are two difficult issues, poor annotation and class imbalance, both of which could potentially impede the automatic assessment of PD. To address these challenges, we propose a novel framework for assessing the severity of PD patients in a free-living environment. Specifically, we use clustering methods to learn latent categories from the same activities, while latent Dirichlet allocation (LDA) topic models are utilized to capture latent features from multiple activities. Then, to mitigate the impact of data imbalance, we augment bag-level data while retaining key instance prototypes. To comprehensively demonstrate the efficacy of our proposed framework, we collected a dataset containing wearable-sensor signals from 83 individuals in real-life free-living conditions. The experimental results show that our framework achieves 73.48% accuracy in the fine-grained (normal, mild, moderate, severe) classification of PD severity based on hand movements. Overall, this study contributes to more accurate PD self-diagnosis in the wild, allowing doctors to provide remote drug intervention guidance.
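
As a loose illustration of the LDA step only (not the full clustering and bag-augmentation pipeline), the sketch below treats each monitoring session as a bag of discretized movement-pattern counts and extracts latent topic mixtures that could feed a severity classifier; the data are synthetic.

```python
# Loose illustration of topic-model features over discretized activity counts.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
# rows: sessions, columns: counts of 20 clustered hand-movement patterns (synthetic)
bag_of_activities = rng.poisson(2.0, size=(83, 20))

lda = LatentDirichletAllocation(n_components=5, random_state=0)
topic_features = lda.fit_transform(bag_of_activities)   # per-session topic mixture
print(topic_features.shape)                              # (83, 5) features for a classifier
```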


Subject(s)
Parkinson Disease, Wearable Electronic Devices, Humans, Parkinson Disease/diagnosis, Movement, Severity of Illness Index, Upper Extremity