Results 1 - 20 of 68
1.
Mov Disord ; 38(7): 1327-1335, 2023 07.
Article in English | MEDLINE | ID: mdl-37166278

ABSTRACT

BACKGROUND: Video-based tic detection and scoring is useful to independently and objectively assess tic frequency and severity in patients with Tourette syndrome. In trained raters, interrater reliability is good. However, video ratings are time-consuming and cumbersome, particularly in large-scale studies. Therefore, we developed two machine learning (ML) algorithms for automatic tic detection. OBJECTIVE: The aim of this study was to evaluate the performances of state-of-the-art ML approaches for automatic video-based tic detection in patients with Tourette syndrome. METHODS: We used 64 videos of n = 35 patients with Tourette syndrome. The data of six subjects (15 videos with ratings) were used as a validation set for hyperparameter optimization. For the binary classification task to distinguish between tic and no-tic segments, we established two different supervised learning approaches. First, we manually extracted features based on landmarks, which served as input for a Random Forest classifier (Random Forest). Second, a fully automated deep learning approach was used, where regions of interest in video snippets were input to a convolutional neural network (deep neural network). RESULTS: Tic detection F1 scores (and accuracy) were 82.0% (88.4%) in the Random Forest and 79.5% (88.5%) in the deep neural network approach. CONCLUSIONS: ML algorithms for automatic tic detection based on video recordings are feasible and reliable and could thus become a valuable assessment tool, for example, for objective tic measurements in clinical trials. ML algorithms might also be useful for the differential diagnosis of tics. © 2023 The Authors. Movement Disorders published by Wiley Periodicals LLC on behalf of International Parkinson and Movement Disorder Society.
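As a rough illustration of the metrics reported above, the F1 score and accuracy of a binary tic / no-tic classifier can be computed from true/false positive and negative counts; this is a minimal stdlib sketch with made-up labels, not the study's evaluation code.

```python
# Compute accuracy and F1 for a binary tic / no-tic classification.
# Labels below are synthetic, chosen only to exercise the arithmetic.
def binary_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, f1

acc, f1 = binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])
```

Because tic segments are much rarer than no-tic segments, F1 (which ignores true negatives) is the more informative of the two numbers.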


Subjects
Tic Disorders, Tics, Tourette Syndrome, Humans, Tics/diagnosis, Tourette Syndrome/diagnosis, Reproducibility of Results, Tic Disorders/diagnosis, Machine Learning
2.
Sensors (Basel) ; 23(19)2023 Oct 03.
Article in English | MEDLINE | ID: mdl-37837064

ABSTRACT

Machine learning with deep neural networks (DNNs) is widely used for human activity recognition (HAR) to automatically learn features, identify and analyze activities, and produce a consequential outcome in numerous applications. However, learning robust features requires an enormous amount of labeled data. Therefore, implementing a DNN requires either creating a large dataset or using models pre-trained on different datasets. Multitask learning (MTL) is a machine learning paradigm where a model is trained to perform multiple tasks simultaneously, with the idea that sharing information between tasks can lead to improved performance on each individual task. This paper presents a novel MTL approach that employs combined training for human activities with different temporal scales of atomic and composite activities. Atomic activities are basic, indivisible actions that are readily identifiable and classifiable. Composite activities are complex actions that comprise a sequence or combination of atomic activities. The proposed MTL approach can help address challenges related to recognizing and predicting both atomic and composite activities. It can also help mitigate the data scarcity problem by simultaneously learning multiple related tasks, so that knowledge from each task can be reused by the others. The proposed approach offers advantages such as improved data efficiency, reduced overfitting due to shared representations, and fast learning through the use of auxiliary information. It exploits the similarities and differences between multiple tasks so that these tasks can share the parameter structure, which improves model performance. The paper also investigates which tasks should be learned together and which should be learned separately. If the tasks are properly selected, the shared structure can help each task learn more from the others.
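The hard-parameter-sharing idea behind MTL can be sketched in a few lines: one shared representation is computed once and feeds separate heads for the atomic and composite tasks. The weights and input below are invented toy values, not the paper's architecture.

```python
# Toy sketch of hard parameter sharing in multitask learning: a shared
# encoder feeds two task-specific heads (atomic vs composite activities).
# All weights and inputs are made-up values for illustration only.
def shared_encoder(x, w_shared):
    # shared representation, reused by both tasks
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w_shared]

def head(h, w_task):
    # task-specific linear head on top of the shared representation
    return sum(wi * hi for wi, hi in zip(w_task, h))

x = [1.0, 2.0]                       # one sensor-feature vector (toy)
w_shared = [[0.5, 0.0], [0.0, 0.5]]  # shared layer weights
h = shared_encoder(x, w_shared)      # computed once
atomic_score = head(h, [1.0, 0.0])     # atomic-activity head
composite_score = head(h, [0.0, 1.0])  # composite-activity head
```

In training, gradients from both heads would update `w_shared`, which is how knowledge from one task benefits the other.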


Subjects
Deep Learning, Wearable Electronic Devices, Humans, Activities of Daily Living, Neural Networks, Computer, Machine Learning
3.
Sensors (Basel) ; 23(7)2023 Mar 25.
Article in English | MEDLINE | ID: mdl-37050506

ABSTRACT

The analysis of sleep stages for children plays an important role in early diagnosis and treatment. This paper introduces our sleep stage classification method addressing the following two challenges: the first is the data imbalance problem, i.e., the highly skewed class distribution with underrepresented minority classes. For this, a Gaussian Noise Data Augmentation (GNDA) algorithm was applied to polysomnography recordings to seek the balance of data sizes for different sleep stages. The second challenge is the difficulty in identifying a minority class of sleep stages, given their short sleep duration and similarities to other stages in terms of EEG characteristics. To overcome this, we developed a DeConvolution- and Self-Attention-based Model (DCSAM) which can inverse the feature map of a hidden layer to the input space to extract local features and extract the correlations between all possible pairs of features to distinguish sleep stages. The results on our dataset show that DCSAM based on GNDA obtains an accuracy of 90.26% and a macro F1-score of 86.51% which are higher than those of our previous method. We also tested DCSAM on a well-known public dataset-Sleep-EDFX-to prove whether it is applicable to sleep data from adults. It achieves a comparable performance to state-of-the-art methods, especially accuracies of 91.77%, 92.54%, 94.73%, and 95.30% for six-stage, five-stage, four-stage, and three-stage classification, respectively. These results imply that our DCSAM based on GNDA has a great potential to offer performance improvements in various medical domains by considering the data imbalance problems and correlations among features in time series data.
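The GNDA idea, balancing underrepresented sleep stages by duplicating minority-class windows with small Gaussian noise added, can be sketched as follows. The function name, parameters, and data are assumptions for illustration, not the paper's implementation.

```python
import random

# Sketch of Gaussian-noise data augmentation (GNDA): minority-class
# windows are duplicated with additive Gaussian noise until every class
# reaches the size of the largest one. Values here are synthetic.
def gnda_balance(windows_by_class, sigma=0.01, seed=0):
    rng = random.Random(seed)
    target = max(len(w) for w in windows_by_class.values())
    balanced = {}
    for label, windows in windows_by_class.items():
        augmented = list(windows)
        while len(augmented) < target:
            base = rng.choice(windows)
            augmented.append([x + rng.gauss(0.0, sigma) for x in base])
        balanced[label] = augmented
    return balanced

data = {"N1": [[0.1, 0.2]], "N2": [[0.3, 0.1], [0.2, 0.2], [0.4, 0.0]]}
balanced = gnda_balance(data)
```

A small `sigma` keeps the augmented windows plausible while preventing the classifier from seeing exact duplicates.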


Subjects
Electroencephalography, Sleep, Adult, Humans, Child, Electroencephalography/methods, Sleep Stages, Polysomnography/methods, Algorithms
4.
Sensors (Basel) ; 23(19)2023 Oct 03.
Article in English | MEDLINE | ID: mdl-37837061

ABSTRACT

Multiple attempts to quantify pain objectively using single measures of physiological body responses have been performed in the past, but the variability across participants reduces the usefulness of such methods. Therefore, this study aims to evaluate whether combining multiple autonomic parameters is more appropriate to quantify the perceived pain intensity of healthy subjects (HSs) and chronic back pain patients (CBPPs) during experimental heat pain stimulation. HS and CBPP received different heat pain stimuli adjusted for individual pain tolerance via a CE-certified thermode. Different sensors measured physiological responses. Machine learning models were trained to evaluate performance in distinguishing pain levels and identify key sensors and features for the classification task. The results show that distinguishing between no and severe pain is significantly easier than discriminating lower pain levels. Electrodermal activity is the best marker for distinguishing between low and high pain levels. However, recursive feature elimination showed that an optimal subset of features for all modalities includes characteristics retrieved from several modalities. Moreover, the study's findings indicate that differences in physiological responses to pain in HS and CBPP remain small.
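Recursive feature elimination, as used above to find the optimal multimodal subset, can be sketched generically: repeatedly drop the least important feature until the desired number remains. The feature names and importance scores below are invented; in the study the importances would come from a trained model.

```python
# Generic sketch of recursive feature elimination. The importance
# function stands in for scores from a trained model; names and values
# below are invented multimodal-feature placeholders.
def rfe(features, importance, n_keep):
    remaining = list(features)
    while len(remaining) > n_keep:
        scores = importance(remaining)
        worst = min(range(len(remaining)), key=lambda i: scores[i])
        del remaining[worst]   # eliminate the least important feature
    return remaining

modal_features = ["eda_mean", "eda_slope", "ecg_hr", "resp_rate", "emg_rms"]
toy_scores = {"eda_mean": 0.9, "eda_slope": 0.7, "ecg_hr": 0.4,
              "resp_rate": 0.2, "emg_rms": 0.3}
kept = rfe(modal_features, lambda fs: [toy_scores[f] for f in fs], 3)
```

In practice the importances are re-estimated after each elimination step, which is what makes the procedure recursive rather than a one-shot ranking.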


Subjects
Hot Temperature, Pain Threshold, Humans, Healthy Volunteers, Pain Threshold/physiology, Pain Perception/physiology, Back Pain
5.
Sensors (Basel) ; 24(1)2023 Dec 22.
Article in English | MEDLINE | ID: mdl-38202937

ABSTRACT

This paper addresses the problem of feature encoding for gait analysis using multimodal time series sensory data. In recent years, the dramatic increase in the use of numerous sensors, e.g., inertial measurement unit (IMU), in our daily wearable devices has gained the interest of the research community to collect kinematic and kinetic data to analyze the gait. The most crucial step for gait analysis is to find the set of appropriate features from continuous time series data to accurately represent human locomotion. This paper presents a systematic assessment of numerous feature extraction techniques. In particular, three different feature encoding techniques are presented to encode multimodal time series sensory data. In the first technique, we utilized eighteen different handcrafted features which are extracted directly from the raw sensory data. The second technique follows the Bag-of-Visual-Words model; the raw sensory data are encoded using a pre-computed codebook and a locality-constrained linear encoding (LLC)-based feature encoding technique. We evaluated two different machine learning algorithms to assess the effectiveness of the proposed features in the encoding of raw sensory data. In the third feature encoding technique, we proposed two end-to-end deep learning models to automatically extract the features from raw sensory data. A thorough experimental evaluation is conducted on four large sensory datasets and their outcomes are compared. A comparison of the recognition results with current state-of-the-art methods demonstrates the computational efficiency and high efficacy of the proposed feature encoding method. The robustness of the proposed feature encoding technique is also evaluated to recognize human daily activities. Additionally, this paper also presents a new dataset consisting of the gait patterns of 42 individuals, gathered using IMU sensors.
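A hedged sketch of the first (handcrafted) encoding technique: per-window statistics computed directly from raw sensory samples. The abstract does not list the eighteen features, so the ones below are common stand-ins, not the paper's exact set.

```python
import statistics

# Toy per-window handcrafted features from one channel of raw IMU
# samples. These five statistics are common examples only; the paper's
# actual eighteen features are not specified in the abstract.
def window_features(samples):
    return {
        "mean": statistics.fmean(samples),
        "std": statistics.pstdev(samples),
        "min": min(samples),
        "max": max(samples),
        "range": max(samples) - min(samples),
    }

feats = window_features([0.0, 0.5, 1.0, 0.5, 0.0])
```

Concatenating such vectors across channels and windows yields the fixed-length representation a classifier can consume.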


Subjects
Gait Analysis, Gait, Humans, Algorithms, Kinetics, Locomotion
6.
Sensors (Basel) ; 23(4)2023 Feb 09.
Article in English | MEDLINE | ID: mdl-36850556

ABSTRACT

Artificial intelligence and especially deep learning methods have achieved outstanding results for various applications in the past few years. Pain recognition is one of them, as various models have been proposed to replace the previous gold standard with an automated and objective assessment. While the accuracy of such models could be increased incrementally, the understandability and transparency of these systems have not been the main focus of the research community thus far. Thus, in this work, several outcomes and insights of explainable artificial intelligence applied to the electrodermal activity sensor data of the PainMonit and BioVid Heat Pain Database are presented. For this purpose, the importance of hand-crafted features is evaluated using recursive feature elimination based on impurity scores in Random Forest (RF) models. Additionally, Gradient-weighted class activation mapping is applied to highlight the most impactful features learned by deep learning models. Our studies highlight the following insights: (1) Very simple hand-crafted features can yield comparative performances to deep learning models for pain recognition, especially when properly selected with recursive feature elimination. Thus, the use of complex neural networks should be questioned in pain recognition, especially considering their computational costs; and (2) both traditional feature engineering and deep feature learning approaches rely on simple characteristics of the input time-series data to make their decision in the context of automated pain recognition.


Assuntos
Inteligência Artificial , Resposta Galvânica da Pele , Humanos , Redes Neurais de Computação , Pesquisa , Dor/diagnóstico
7.
Sensors (Basel) ; 23(12)2023 Jun 13.
Article in English | MEDLINE | ID: mdl-37420718

ABSTRACT

To drive safely, the driver must be aware of the surroundings, pay attention to the road traffic, and be ready to adapt to new circumstances. Most studies on driving safety focus on detecting anomalies in driver behavior and monitoring cognitive capabilities in drivers. In our study, we proposed a classifier for basic activities in driving a car, based on a similar approach that could be applied to the recognition of basic activities in daily life, that is, using electrooculographic (EOG) signals and a one-dimensional convolutional neural network (1D CNN). Our classifier achieved an accuracy of 80% for the 16 primary and secondary activities. The accuracy related to activities in driving, including crossroad, parking, roundabout, and secondary activities, was 97.9%, 96.8%, 97.4%, and 99.5%, respectively. The F1 score for secondary driving actions (0.99) was higher than for primary driving activities (0.93-0.94). Furthermore, using the same algorithm, it was possible to distinguish four activities related to activities of daily life that were secondary activities when driving a car.


Subjects
Automobile Driving, Automobile Driving/psychology, Accidents, Traffic/prevention & control, Automobiles, Neural Networks, Computer, Algorithms
9.
Sensors (Basel) ; 23(1)2022 Dec 22.
Article in English | MEDLINE | ID: mdl-36616678

ABSTRACT

This paper presents methods for floor assignment within an indoor localization system. We integrate the phone's barometer as an additional sensor to detect floor changes. In contrast to state-of-the-art methods, our statistical model uses a discrete state variable as floor information instead of a continuous one. Due to the inconsistency of barometric sensor data, our approach is based on relative pressure readings. All we need beforehand is the ceiling height including the ceiling's thickness. Further, we discuss several variations of our method depending on the deployment scenario. Since a barometer alone is not able to detect the position of a pedestrian, we additionally incorporate Wi-Fi, iBeacons, and Step and Turn Detection statistically in our experiments. This enables a realistic evaluation of our methods for floor assignment. The experimental results show that the usage of a barometer within 3D indoor localization systems can be highly recommended. In nearly all test cases, our approach improves the positioning accuracy while also keeping the update rates low.
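A sketch of the underlying idea: convert relative pressure readings to a height change and round it to a discrete number of floors, assuming a known floor-to-floor height. The barometric formula and reference pressure (1013.25 hPa) are standard, while the sample readings and the 3.5 m floor height are invented.

```python
# Sketch of discrete floor assignment from relative barometric pressure.
# The international barometric formula is standard; the sample pressures
# and the 3.5 m floor-to-floor height are invented for illustration.
def pressure_to_altitude(p_hpa, p0_hpa=1013.25):
    # international barometric formula, altitude in metres
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

def floor_change(p_start_hpa, p_now_hpa, floor_height_m=3.5):
    # relative height change since the reference reading
    dh = pressure_to_altitude(p_now_hpa) - pressure_to_altitude(p_start_hpa)
    return round(dh / floor_height_m)   # discrete floor state variable

change = floor_change(1000.0, 999.58)   # pressure drop of ~0.42 hPa
```

Working with differences between readings sidesteps the absolute-calibration drift that makes raw barometer values inconsistent across devices and weather.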


Subjects
Models, Statistical, Pedestrians, Humans
10.
Sensors (Basel) ; 22(10)2022 May 10.
Article in English | MEDLINE | ID: mdl-35632039

ABSTRACT

Identifying accident patterns is one of the most vital research foci of driving analysis. Environmental and safety applications and the growing area of fleet management all benefit from accident detection contributions by minimizing the risk vehicles and drivers are subject to, improving their service, and reducing overhead costs. Some solutions have been proposed in the past literature for automated accident detection that are mainly based on traffic data or external sensors. However, traffic data can be difficult to access, while external sensors can end up being difficult to set up and unreliable, depending on how they are used. Additionally, the scarcity of accident detection data has limited the type of approaches used in the past, leaving machine learning (ML) in particular relatively unexplored. Thus, in this paper, we propose an ML framework for automated car accident detection based on multimodal in-car sensors. Our work is a unique and innovative study on detecting real-world driving accidents by applying state-of-the-art feature extraction methods using basic sensors in cars. In total, five different feature extraction approaches, including techniques based on feature engineering and feature learning with deep learning, are evaluated on the strategic highway research program (SHRP2) naturalistic driving study (NDS) crash data set. The main observations of this study are as follows: (1) CNN features with an SVM classifier obtain very promising results, outperforming all other tested approaches. (2) The feature engineering and feature learning approaches found different best-performing features; our fusion experiment therefore indicates that these two feature sets can be efficiently combined. (3) Unsupervised feature extraction achieves a remarkably notable performance score.


Subjects
Automobile Driving, Automobiles, Accidents, Traffic/prevention & control, Machine Learning
11.
Sensors (Basel) ; 22(20)2022 Oct 11.
Article in English | MEDLINE | ID: mdl-36298061

ABSTRACT

The perception of hunger and satiety is of great importance to maintaining a healthy body weight and avoiding chronic diseases such as obesity, underweight, or deficiency syndromes due to malnutrition. There are a number of disease patterns, characterized by a chronic loss of this perception. To our best knowledge, hunger and satiety cannot be classified using non-invasive measurements. Aiming to develop an objective classification system, this paper presents a multimodal sensory system using associated signal processing and pattern recognition methods for hunger and satiety detection based on non-invasive monitoring. We used an Empatica E4 smartwatch, a RespiBan wearable device, and JINS MEME smart glasses to capture physiological signals from five healthy normal weight subjects inactively sitting on a chair in a state of hunger and satiety. After pre-processing the signals, we compared different feature extraction approaches, either based on manual feature engineering or deep feature learning. Comparative experiments were carried out to determine the most appropriate sensor channel, device, and classifier to reliably discriminate between hunger and satiety states. Our experiments showed that the most discriminative features come from three specific sensor modalities: Electrodermal Activity (EDA), infrared Thermopile (Tmp), and Blood Volume Pulse (BVP).


Subjects
Hunger, Wearable Electronic Devices, Humans, Hunger/physiology, Machine Learning, Obesity, Body Weight
12.
Sensors (Basel) ; 21(4)2021 02 05.
Article in English | MEDLINE | ID: mdl-33562548

ABSTRACT

Gait patterns are a result of the complex kinematics that enable human two-legged locomotion, and they can reveal a lot about a person's state and health. Analysing them is useful for researchers to get new insights into the course of diseases, and for physicians to track the progress after healing from injuries. When a person walks and is interfered with in any way, the resulting disturbance can show up and be found in the gait patterns. This paper describes an experimental setup for capturing gait patterns with a capacitive sensor floor, which can detect the time and position of foot contacts on the floor. With this setup, a dataset was recorded where 42 participants walked over a sensor floor in different modes, inter alia, normal pace, closed eyes, and dual-task. A recurrent neural network based on Long Short-Term Memory units was trained and evaluated for the classification task of recognising the walking mode solely from the floor sensor data. Furthermore, participants were asked to do the Unilateral Heel-Rise Test, and their gait was recorded before and after doing the test. Another neural network instance was trained to predict the number of repetitions participants were able to do on the test. As the results of the classification tasks turned out to be promising, the combination of this sensor floor and the recurrent neural network architecture seems like a good system for further investigation leading to applications in health and care.


Subjects
Gait, Neural Networks, Computer, Walking, Floors and Floorcoverings, Humans, Locomotion
13.
Sensors (Basel) ; 21(7)2021 Mar 29.
Article in English | MEDLINE | ID: mdl-33805368

ABSTRACT

Human activity recognition (HAR) aims to recognize the actions of the human body through a series of observations and environmental conditions. The analysis of human activities has drawn the attention of the research community in the last two decades due to its widespread applications, diverse nature of activities, and recording infrastructure. Lately, one of the most challenging applications in this framework is to recognize the human body actions using unobtrusive wearable motion sensors. Since the human activities of daily life (e.g., cooking, eating) comprises several repetitive and circumstantial short sequences of actions (e.g., moving arm), it is quite difficult to directly use the sensory data for recognition because the multiple sequences of the same activity data may have large diversity. However, a similarity can be observed in the temporal occurrence of the atomic actions. Therefore, this paper presents a two-level hierarchical method to recognize human activities using a set of wearable sensors. In the first step, the atomic activities are detected from the original sensory data, and their recognition scores are obtained. Secondly, the composite activities are recognized using the scores of atomic actions. We propose two different methods of feature extraction from atomic scores to recognize the composite activities, and they include handcrafted features and the features obtained using the subspace pooling technique. The proposed method is evaluated on the large publicly available CogAge dataset, which contains the instances of both atomic and composite activities. The data is recorded using three unobtrusive wearable devices: smartphone, smartwatch, and smart glasses. We also investigated the performance evaluation of different classification algorithms to recognize the composite activities. 
The proposed method achieved 79% and 62.8% average recognition accuracies using the handcrafted features and the features obtained using subspace pooling technique, respectively. The recognition results of the proposed technique and their comparison with the existing state-of-the-art techniques confirm its effectiveness.


Subjects
Human Activities, Smart Glasses, Algorithms, Humans, Recognition, Psychology, Smartphone
14.
Sensors (Basel) ; 21(14)2021 Jul 15.
Article in English | MEDLINE | ID: mdl-34300578

ABSTRACT

While even the most common definition of pain is under debate, pain assessment has remained the same for decades. But the paramount importance of precise pain management for successful healthcare has encouraged initiatives to improve the way pain is assessed. Recent approaches have proposed automatic pain evaluation systems using machine learning models trained with data coming from behavioural or physiological sensors. Although yielding promising results, machine learning studies for sensor-based pain recognition remain scattered and not necessarily easy to compare to each other. In particular, the important process of extracting features is usually optimised towards specific datasets. We thus introduce a comparison of feature extraction methods for pain recognition based on physiological sensors in this paper. In addition, the PainMonit Database (PMDB), a new dataset including both objective and subjective annotations for heat-induced pain in 52 subjects, is introduced. In total, five different approaches including techniques based on feature engineering and feature learning with deep learning are evaluated on the BioVid and PMDB datasets. Our studies highlight the following insights: (1) Simple feature engineering approaches can still compete with deep learning approaches in terms of performance. (2) More complex deep learning architectures do not yield better performance compared to simpler ones. (3) Subjective self-reports by subjects can be used instead of objective temperature-based annotations to build a robust pain recognition system.


Subjects
Hot Temperature, Machine Learning, Databases, Factual, Humans, Pain/diagnosis, Pain Measurement
15.
Entropy (Basel) ; 23(2)2021 Feb 21.
Article in English | MEDLINE | ID: mdl-33670018

ABSTRACT

Multi-focus image fusion is the process of combining focused regions of two or more images to obtain a single all-in-focus image. It is an important research area because a fused image is of high quality and contains more details than the source images. This makes it useful for numerous applications in image enhancement, remote sensing, object recognition, medical imaging, etc. This paper presents a novel multi-focus image fusion algorithm that proposes to group the local connected pixels with similar colors and patterns, usually referred to as superpixels, and use them to separate the focused and de-focused regions of an image. We note that these superpixels are more expressive than individual pixels, and they carry more distinctive statistical properties when compared with other superpixels. The statistical properties of superpixels are analyzed to categorize the pixels as focused or de-focused and to estimate a focus map. A spatial consistency constraint is ensured on the initial focus map to obtain a refined map, which is used in the fusion rule to obtain a single all-in-focus image. Qualitative and quantitative evaluations are performed to assess the performance of the proposed method on a benchmark multi-focus image fusion dataset. The results show that our method produces better quality fused images than existing image fusion techniques.
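The focus-map idea can be sketched with local variance as a stand-in sharpness measure: for each region, keep the pixels from whichever source image is locally sharper. The paper operates on superpixels with richer statistical properties; the fixed toy blocks and values below are synthetic simplifications.

```python
# Toy illustration of the focus-map idea in multi-focus fusion: per
# region, take the block from the sharper (higher-variance) source.
# Real superpixels replace these fixed blocks; values are synthetic.
def variance(block):
    m = sum(block) / len(block)
    return sum((v - m) ** 2 for v in block) / len(block)

def fuse(blocks_a, blocks_b):
    fused, focus_map = [], []
    for a, b in zip(blocks_a, blocks_b):
        take_a = variance(a) >= variance(b)   # sharper block wins
        fused.append(a if take_a else b)
        focus_map.append(0 if take_a else 1)  # 0 = image A, 1 = image B
    return fused, focus_map

a = [[10, 200, 15, 180], [50, 52, 51, 50]]  # sharp left, blurry right
b = [[90, 95, 92, 94], [0, 255, 10, 240]]   # blurry left, sharp right
fused, fmap = fuse(a, b)
```

The spatial-consistency step described in the abstract would then smooth this initial focus map before the final fusion.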

16.
Sensors (Basel) ; 20(16)2020 Aug 12.
Article in English | MEDLINE | ID: mdl-32806735

ABSTRACT

With the addition of the Fine Timing Measurement (FTM) protocol in IEEE 802.11-2016, a promising sensor for smartphone-based indoor positioning systems was introduced. FTM enables a Wi-Fi device to estimate the distance to a second device based on the propagation time of the signal. Recently, FTM has gotten more attention from the scientific community as more compatible devices become available. Due to the claimed robustness and accuracy, FTM is a promising addition to the often used Received Signal Strength Indication (RSSI). In this work, we evaluate FTM on the 2.4 GHz band with 20 MHz channel bandwidth in the context of realistic indoor positioning scenarios. For this purpose, we deploy a least-squares estimation method, a probabilistic positioning approach and a simplistic particle filter implementation. Each method is evaluated using FTM and RSSI separately to show the difference of the techniques. Our results show that, although FTM achieves smaller positioning errors compared to RSSI, its error behavior is similar to RSSI. Furthermore, we demonstrate that an empirically optimized correction value for FTM is required to account for the environment. This correction value can reduce the positioning error significantly.
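The FTM ranging principle reduces to converting a net round-trip time to a distance at the speed of light, minus the empirically optimized correction value the abstract mentions. The timing values and the 2 m offset below are invented for illustration.

```python
# Sketch of FTM (Fine Timing Measurement) ranging: distance follows from
# the signal's round-trip time at the speed of light, adjusted by an
# empirical, environment-specific correction. Sample numbers are invented.
C = 299_792_458.0  # speed of light in vacuum, m/s

def ftm_distance(rtt_s, turnaround_s, correction_m=0.0):
    # subtract the responder's turnaround time, halve for one-way distance
    return C * (rtt_s - turnaround_s) / 2.0 - correction_m

# a 100 ns net round trip corresponds to roughly 15 m one way
d = ftm_distance(rtt_s=150e-9, turnaround_s=50e-9, correction_m=2.0)
```

At these scales a single-nanosecond timing error already shifts the estimate by about 15 cm, which is why a per-environment correction value matters.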

17.
Sensors (Basel) ; 20(18)2020 Sep 17.
Article in English | MEDLINE | ID: mdl-32957598

ABSTRACT

General movements (GMs) are spontaneous movements of infants up to five months post-term involving the whole body varying in sequence, speed, and amplitude. The assessment of GMs has shown its importance for identifying infants at risk for neuromotor deficits, especially for the detection of cerebral palsy. As the assessment is based on videos of the infant that are rated by trained professionals, the method is time-consuming and expensive. Therefore, approaches based on Artificial Intelligence have gained significantly increased attention in the last years. In this article, we systematically analyze and discuss the main design features of all existing technological approaches seeking to transfer the Prechtl's assessment of general movements from an individual visual perception to computer-based analysis. After identifying their shared shortcomings, we explain the methodological reasons for their limited practical performance and classification rates. As a conclusion of our literature study, we conceptually propose a methodological solution to the defined problem based on the groundbreaking innovation in the area of Deep Learning.


Subjects
Artificial Intelligence, Cerebral Palsy, Cerebral Palsy/diagnosis, Humans, Infant, Movement, Publications, Videotape Recording
18.
Sensors (Basel) ; 20(15)2020 Jul 31.
Article in English | MEDLINE | ID: mdl-32751855

ABSTRACT

The scarcity of labelled time-series data can hinder proper training of deep learning models. This is especially relevant for the growing field of ubiquitous computing, where data coming from wearable devices have to be analysed using pattern recognition techniques to provide meaningful applications. To address this problem, we propose a transfer learning method based on attributing sensor modality labels to a large amount of time-series data collected from various application fields. Using these data, our method first trains a Deep Neural Network (DNN) that can learn general characteristics of time-series data, then transfers it to another DNN designed to solve a specific target problem. In addition, we propose a general architecture that can adapt the transferred DNN regardless of the sensors used in the target field, making our approach particularly suitable for multichannel data. We test our method on two ubiquitous computing problems, Human Activity Recognition (HAR) and Emotion Recognition (ER), and compare it to a baseline that trains the DNN without transfer learning. For HAR, we also introduce a new dataset, Cognitive Village-MSBand (CogAge), which contains data for 61 atomic activities acquired from three wearable devices (smartphone, smartwatch, and smartglasses). Our results show that our transfer learning approach outperforms the baseline for both HAR and ER.

19.
Sensors (Basel) ; 20(22)2020 Nov 17.
Article in English | MEDLINE | ID: mdl-33212894

ABSTRACT

With the ubiquity of smartphones, the interest in indoor localization as a research area grew. Methods based on radio data are predominant, but due to the susceptibility of these radio signals to a number of dynamic influences, good localization solutions usually rely on additional sources of information, which provide relative information about the current location. Part of this role is often taken by the field of activity recognition, e.g., by estimating whether a pedestrian is currently taking the stairs. This work presents different approaches for activity recognition, considering the four most basic locomotion activities used when moving around inside buildings: standing, walking, ascending stairs, and descending stairs, as well as an additional messing around class for rejections. As main contribution, we introduce a novel approach based on analytical transformations combined with artificially constructed sensor channels, and compare that to two approaches adapted from existing literature, one based on codebooks, the other using statistical features. Data is acquired using accelerometer and gyroscope only. In addition to the most widely adopted use-case of carrying the smartphone in the trouser pockets, we will equally consider the novel use-case of hand-carried smartphones. This is required as in an indoor localization scenario, the smartphone is often used to display a user interface of some navigation application and thus needs to be carried in hand. For evaluation the well known MobiAct dataset for the pocket-case as well as a novel dataset for the hand-case were used. The approach based on analytical transformations surpassed the other approaches resulting in accuracies of 98.0% for pocket-case and 81.8% for the hand-case trained on the combination of both datasets. With activity recognition in the supporting role of indoor localization, this accuracy is acceptable, but has room for further improvement.


Subjects
Accelerometry, Locomotion, Smartphone, Humans, Standing Position, Walking
20.
Sensors (Basel) ; 20(12)2020 Jun 19.
Article in English | MEDLINE | ID: mdl-32575451

ABSTRACT

This paper addresses wearable-based recognition of Activities of Daily Living (ADLs), which are composed of several repetitive and concurrent short movements having temporal dependencies. It is impractical to use sensor data directly to recognize these long-term composite activities, because two examples (data sequences) of the same ADL result in largely diverse sensory data. However, they may be similar in terms of more semantic and meaningful short-term atomic actions. Therefore, we propose a two-level hierarchical model for recognition of ADLs. Firstly, atomic activities are detected and their probabilistic scores are generated at the lower level. Secondly, we deal with the temporal transitions of atomic activities using a temporal pooling method, rank pooling. This enables us to encode the ordering of probabilistic scores for atomic activities at the higher level of our model. Rank pooling leads to a 5-13% improvement in results compared to other popularly used techniques. We also produce a large dataset of 61 atomic and 7 composite activities for our experiments.
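Rank pooling can be sketched in its simplest scalar form: fit a linear function that maps each time step's (time-smoothed) atomic score to its position in time, and use the fitted weight as the sequence encoding. This toy single-channel version is an assumed simplification, not the paper's exact formulation.

```python
# Toy scalar sketch of rank pooling: the least-squares weight that best
# orders the time-smoothed atomic-score sequence by time serves as the
# temporal encoding. Real rank pooling fits a weight vector per channel.
def rank_pool(scores):
    # running mean up to each step, a common smoothing before ranking
    smoothed = [sum(scores[: t + 1]) / (t + 1) for t in range(len(scores))]
    times = list(range(1, len(scores) + 1))
    num = sum(t * v for t, v in zip(times, smoothed))
    den = sum(v * v for v in smoothed)
    return num / den   # closed-form 1-D least-squares weight

w_rising = rank_pool([0.1, 0.2, 0.4, 0.8])  # atomic score ramps up
w_flat = rank_pool([0.4, 0.4, 0.4, 0.4])    # no temporal trend
```

The fitted weight differs for rising versus flat score sequences, which is exactly the ordering information a simple average would discard.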


Subjects
Activities of Daily Living, Wearable Electronic Devices, Humans