Results 1 - 13 of 13
1.
Sensors (Basel) ; 23(19)2023 Oct 03.
Article in English | MEDLINE | ID: mdl-37837064

ABSTRACT

Machine learning with deep neural networks (DNNs) is widely used for human activity recognition (HAR) to automatically learn features, identify and analyze activities, and produce a consequential outcome in numerous applications. However, learning robust features requires an enormous amount of labeled data. Implementing a DNN therefore requires either creating a large dataset or using models pre-trained on different datasets. Multitask learning (MTL) is a machine learning paradigm in which a model is trained to perform multiple tasks simultaneously, with the idea that sharing information between tasks can improve performance on each individual task. This paper presents a novel MTL approach that employs combined training for human activities with different temporal scales: atomic and composite activities. Atomic activities are basic, indivisible actions that are readily identifiable and classifiable. Composite activities are complex actions that comprise a sequence or combination of atomic activities. The proposed MTL approach can help address challenges related to recognizing and predicting both atomic and composite activities. It can also help mitigate the data scarcity problem by simultaneously learning multiple related tasks, so that knowledge from each task can be reused by the others. The proposed approach offers advantages such as improved data efficiency, reduced overfitting due to shared representations, and faster learning through the use of auxiliary information. It exploits the similarities and differences between multiple tasks so that the tasks can share parameter structure, which improves model performance. The paper also investigates which tasks should be learned together and which should be learned separately. If the tasks are properly selected, the shared structure can help each task learn more from the others.
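The abstract does not specify the network architecture, but the shared-parameter idea it describes is commonly realized as hard parameter sharing: one trunk feeds separate atomic and composite heads, trained with a summed per-task loss. The sketch below is a minimal, hypothetical illustration of that pattern (all dimensions and weights are made up, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, for illustration only).
D_IN, D_SHARED = 6, 4          # input features, shared representation size
N_ATOMIC, N_COMPOSITE = 3, 2   # classes per task

# Shared trunk: one linear layer reused by both tasks.
W_shared = rng.normal(size=(D_IN, D_SHARED))
# Task-specific heads on top of the shared representation.
W_atomic = rng.normal(size=(D_SHARED, N_ATOMIC))
W_composite = rng.normal(size=(D_SHARED, N_COMPOSITE))

def forward(x):
    """Return per-task logits computed from a single shared representation."""
    h = np.tanh(x @ W_shared)          # shared features
    return h @ W_atomic, h @ W_composite

def joint_loss(x, y_atomic, y_composite):
    """Sum of per-task cross-entropy losses, the usual MTL training signal."""
    def xent(logits, y):
        p = np.exp(logits - logits.max())
        p /= p.sum()
        return -np.log(p[y])
    la, lc = forward(x)
    return xent(la, y_atomic) + xent(lc, y_composite)

x = rng.normal(size=D_IN)
loss = joint_loss(x, y_atomic=1, y_composite=0)
```

Gradients of this joint loss update the trunk from both tasks at once, which is how information is shared between them.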


Subjects
Deep Learning; Wearable Electronic Devices; Humans; Activities of Daily Living; Neural Networks, Computer; Machine Learning
2.
Sensors (Basel) ; 23(7)2023 Mar 25.
Article in English | MEDLINE | ID: mdl-37050506

ABSTRACT

The analysis of sleep stages in children plays an important role in early diagnosis and treatment. This paper introduces our sleep stage classification method, which addresses two challenges. The first is the data imbalance problem, i.e., the highly skewed class distribution with underrepresented minority classes. For this, a Gaussian Noise Data Augmentation (GNDA) algorithm was applied to polysomnography recordings to balance the data sizes of the different sleep stages. The second challenge is the difficulty of identifying minority sleep stages, given their short duration and their similarity to other stages in terms of EEG characteristics. To overcome this, we developed a DeConvolution- and Self-Attention-based Model (DCSAM), which can invert the feature map of a hidden layer back to the input space to extract local features, and which extracts the correlations between all possible pairs of features to distinguish sleep stages. The results on our dataset show that DCSAM based on GNDA obtains an accuracy of 90.26% and a macro F1-score of 86.51%, which are higher than those of our previous method. We also tested DCSAM on a well-known public dataset, Sleep-EDFX, to assess whether it is applicable to sleep data from adults. It achieves performance comparable to state-of-the-art methods, with accuracies of 91.77%, 92.54%, 94.73%, and 95.30% for six-stage, five-stage, four-stage, and three-stage classification, respectively. These results imply that our DCSAM based on GNDA has great potential to offer performance improvements in various medical domains by accounting for data imbalance and correlations among features in time series data.
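The general idea behind GNDA, oversampling an underrepresented sleep stage by adding zero-mean Gaussian noise to copies of its epochs, can be sketched in a few lines. The noise scale and data shapes below are hypothetical choices for illustration, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(42)

def gaussian_noise_augment(X, n_new, sigma=0.05):
    """Create n_new synthetic epochs by adding zero-mean Gaussian noise
    to randomly chosen original epochs (the general GNDA idea; sigma is
    an assumed value, not the paper's)."""
    idx = rng.integers(0, len(X), size=n_new)
    return X[idx] + rng.normal(scale=sigma, size=(n_new, X.shape[1]))

# Imbalanced toy "sleep stage" data: majority stage vs. an underrepresented
# minority stage such as N1.
X_major = rng.normal(size=(100, 30))   # 100 epochs, 30 samples each
X_minor = rng.normal(size=(10, 30))    # minority stage

# Oversample the minority stage up to the majority size.
X_synth = gaussian_noise_augment(X_minor, n_new=len(X_major) - len(X_minor))
X_minor_balanced = np.vstack([X_minor, X_synth])
```

After augmentation both classes contribute the same number of epochs to training, which is the balancing effect the abstract describes.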


Subjects
Electroencephalography; Sleep; Adult; Humans; Child; Electroencephalography/methods; Sleep Stages; Polysomnography/methods; Algorithms
3.
Sensors (Basel) ; 24(1)2023 Dec 22.
Article in English | MEDLINE | ID: mdl-38202937

ABSTRACT

This paper addresses the problem of feature encoding for gait analysis using multimodal time series sensory data. In recent years, the dramatic increase in the use of sensors, e.g., inertial measurement units (IMUs), in everyday wearable devices has prompted the research community to collect kinematic and kinetic data for gait analysis. The most crucial step in gait analysis is finding a set of appropriate features from continuous time series data that accurately represents human locomotion. This paper presents a systematic assessment of numerous feature extraction techniques. In particular, three different feature encoding techniques are presented for multimodal time series sensory data. In the first technique, we utilize eighteen different handcrafted features extracted directly from the raw sensory data. The second technique follows the Bag-of-Visual-Words model: the raw sensory data are encoded using a pre-computed codebook and a locality-constrained linear coding (LLC)-based feature encoding technique. We evaluate two different machine learning algorithms to assess the effectiveness of the proposed features in encoding the raw sensory data. In the third technique, we propose two end-to-end deep learning models to automatically extract features from the raw sensory data. A thorough experimental evaluation is conducted on four large sensory datasets, and the outcomes are compared. A comparison of the recognition results with current state-of-the-art methods demonstrates the computational efficiency and high efficacy of the proposed feature encoding method. The robustness of the proposed technique is also evaluated on the recognition of human daily activities. Additionally, this paper presents a new dataset consisting of the gait patterns of 42 individuals, gathered using IMU sensors.
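The abstract does not enumerate the eighteen handcrafted features, so the sketch below uses a small illustrative subset of common per-channel statistics computed over sliding windows of a toy IMU signal. Window length, overlap, and channel count are assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

def handcrafted_features(window):
    """A few common statistical features per sensor channel (an illustrative
    subset, not the paper's full eighteen-feature set)."""
    return np.concatenate([
        window.mean(axis=0),                            # mean per channel
        window.std(axis=0),                             # standard deviation
        window.min(axis=0),                             # minimum
        window.max(axis=0),                             # maximum
        np.abs(np.diff(window, axis=0)).mean(axis=0),   # mean absolute change
    ])

# Toy IMU recording: 500 samples x 6 channels (3-axis accel + 3-axis gyro).
signal = rng.normal(size=(500, 6))

# Sliding windows of 100 samples with 50% overlap.
windows = [signal[s:s + 100] for s in range(0, 401, 50)]
features = np.array([handcrafted_features(w) for w in windows])
```

Each window is reduced to one fixed-length vector, which is the representation a downstream classifier consumes.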


Subjects
Gait Analysis; Gait; Humans; Algorithms; Kinetics; Locomotion
4.
Sensors (Basel) ; 23(12)2023 Jun 13.
Article in English | MEDLINE | ID: mdl-37420718

ABSTRACT

To drive safely, the driver must be aware of the surroundings, pay attention to road traffic, and be ready to adapt to new circumstances. Most studies on driving safety focus on detecting anomalies in driver behavior and monitoring drivers' cognitive capabilities. In our study, we propose a classifier for basic activities in driving a car, based on an approach that could likewise be applied to the recognition of basic activities in daily life, namely, using electrooculographic (EOG) signals and a one-dimensional convolutional neural network (1D CNN). Our classifier achieved an accuracy of 80% across the 16 primary and secondary activities. The accuracies for the driving activities crossroad, parking, and roundabout, and for the secondary activities, were 97.9%, 96.8%, 97.4%, and 99.5%, respectively. The F1 score for secondary driving activities (0.99) was higher than for primary driving activities (0.93-0.94). Furthermore, using the same algorithm, it was possible to distinguish four activities of daily life that occurred as secondary activities while driving.
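The abstract names the model family (1D CNN over EOG samples) without giving its architecture. As a minimal illustration of the basic building block such a network stacks, here is one valid 1D convolution with ReLU and max-pooling over a toy single-channel trace; kernel width, filter count, and signal length are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)

def conv1d(x, kernels):
    """Valid 1D convolution of a single-channel signal with several kernels,
    followed by ReLU."""
    k = kernels.shape[1]
    out = np.array([[x[i:i + k] @ w for i in range(len(x) - k + 1)]
                    for w in kernels])
    return np.maximum(out, 0.0)        # ReLU

def max_pool(x, size=2):
    """Non-overlapping temporal max-pooling of each feature map."""
    n = x.shape[1] // size
    return x[:, :n * size].reshape(x.shape[0], n, size).max(axis=2)

eog = rng.normal(size=256)             # toy EOG trace
kernels = rng.normal(size=(8, 5))      # 8 filters of width 5
feature_maps = max_pool(conv1d(eog, kernels))
```

A real 1D CNN repeats this conv/pool pattern several times before a dense classification layer; in practice one would use a deep learning framework rather than explicit loops.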


Subjects
Automobile Driving; Automobile Driving/psychology; Accidents, Traffic/prevention & control; Automobiles; Neural Networks, Computer; Algorithms
5.
Sensors (Basel) ; 22(20)2022 Oct 11.
Article in English | MEDLINE | ID: mdl-36298061

ABSTRACT

The perception of hunger and satiety is of great importance for maintaining a healthy body weight and avoiding chronic conditions such as obesity, underweight, or deficiency syndromes due to malnutrition. A number of disease patterns are characterized by a chronic loss of this perception. To the best of our knowledge, hunger and satiety cannot currently be classified using non-invasive measurements. Aiming to develop an objective classification system, this paper presents a multimodal sensory system, with associated signal processing and pattern recognition methods, for hunger and satiety detection based on non-invasive monitoring. We used an Empatica E4 smartwatch, a RespiBan wearable device, and JINS MEME smart glasses to capture physiological signals from five healthy, normal-weight subjects sitting inactively in a chair in states of hunger and satiety. After pre-processing the signals, we compared different feature extraction approaches based either on manual feature engineering or on deep feature learning. Comparative experiments were carried out to determine the most appropriate sensor channel, device, and classifier for reliably discriminating between hunger and satiety states. Our experiments showed that the most discriminative features come from three sensor modalities: Electrodermal Activity (EDA), infrared Thermopile (Tmp), and Blood Volume Pulse (BVP).


Subjects
Hunger; Wearable Electronic Devices; Humans; Hunger/physiology; Machine Learning; Obesity; Body Weight
6.
Sensors (Basel) ; 21(7)2021 Mar 29.
Article in English | MEDLINE | ID: mdl-33805368

ABSTRACT

Human activity recognition (HAR) aims to recognize the actions of the human body through a series of observations and environmental conditions. The analysis of human activities has drawn the attention of the research community over the last two decades due to its widespread applications, the diverse nature of activities, and the available recording infrastructure. One of the most challenging applications in this framework is recognizing human body actions using unobtrusive wearable motion sensors. Since the human activities of daily life (e.g., cooking, eating) comprise several repetitive and circumstantial short sequences of actions (e.g., moving an arm), it is quite difficult to use the sensory data directly for recognition, because multiple sequences of the same activity may yield largely diverse data. However, a similarity can be observed in the temporal occurrence of the atomic actions. Therefore, this paper presents a two-level hierarchical method to recognize human activities using a set of wearable sensors. In the first step, the atomic activities are detected from the original sensory data and their recognition scores are obtained. In the second step, the composite activities are recognized using the scores of the atomic actions. We propose two different methods of extracting features from the atomic scores to recognize composite activities: handcrafted features and features obtained using a subspace pooling technique. The proposed method is evaluated on the large, publicly available CogAge dataset, which contains instances of both atomic and composite activities, recorded using three unobtrusive wearable devices: a smartphone, a smartwatch, and smart glasses. We also investigate the performance of different classification algorithms in recognizing the composite activities. The proposed method achieved average recognition accuracies of 79% and 62.8% using the handcrafted features and the subspace pooling features, respectively. The recognition results and their comparison with existing state-of-the-art techniques confirm the effectiveness of the proposed method.
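The two-level pipeline can be sketched as follows: a first-level detector (assumed here, and faked with random numbers) emits per-window atomic-activity scores, and the second level summarizes the score sequence into one vector for the composite-activity classifier. The statistics used below are a simple illustrative stand-in for the paper's handcrafted-feature option:

```python
import numpy as np

rng = np.random.default_rng(3)

# Level 1 (assumed to exist): per-window atomic-activity scores, here faked
# as a random T x A matrix (T windows, A atomic classes), row-normalized.
T, A = 20, 5
atomic_scores = rng.random(size=(T, A))
atomic_scores /= atomic_scores.sum(axis=1, keepdims=True)

def score_sequence_features(S):
    """Fixed-length summary of a score sequence: simple per-class statistics
    (one illustrative choice; the paper also compares subspace pooling)."""
    return np.concatenate([S.mean(axis=0), S.std(axis=0), S.max(axis=0)])

# Level 2 input: one vector per composite-activity instance.
composite_features = score_sequence_features(atomic_scores)
```

Because the summary has a fixed length regardless of T, composite activities of different durations all map to the same feature space.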


Subjects
Human Activities; Smart Glasses; Algorithms; Humans; Recognition, Psychology; Smartphone
7.
Sensors (Basel) ; 20(18)2020 Sep 17.
Article in English | MEDLINE | ID: mdl-32957598

ABSTRACT

General movements (GMs) are spontaneous whole-body movements of infants up to five months post-term, varying in sequence, speed, and amplitude. The assessment of GMs has proven important for identifying infants at risk of neuromotor deficits, especially for the detection of cerebral palsy. As the assessment is based on videos of the infant rated by trained professionals, the method is time-consuming and expensive. Therefore, approaches based on Artificial Intelligence have gained significantly increased attention in recent years. In this article, we systematically analyze and discuss the main design features of all existing technological approaches that seek to transfer Prechtl's general movement assessment from individual visual perception to computer-based analysis. After identifying their shared shortcomings, we explain the methodological reasons for their limited practical performance and classification rates. As a conclusion of our literature study, we conceptually propose a methodological solution to the defined problem based on groundbreaking innovations in the area of Deep Learning.


Subjects
Artificial Intelligence; Cerebral Palsy; Cerebral Palsy/diagnosis; Humans; Infant; Movement; Publications; Videotape Recording
8.
Sensors (Basel) ; 20(15)2020 Jul 31.
Article in English | MEDLINE | ID: mdl-32751855

ABSTRACT

The scarcity of labelled time-series data can hinder proper training of deep learning models. This is especially relevant for the growing field of ubiquitous computing, where data coming from wearable devices have to be analysed using pattern recognition techniques to provide meaningful applications. To address this problem, we propose a transfer learning method based on attributing sensor-modality labels to a large amount of time-series data collected from various application fields. Using these data, our method first trains a Deep Neural Network (DNN) that can learn general characteristics of time-series data, then transfers it to another DNN designed to solve a specific target problem. In addition, we propose a general architecture that can adapt the transferred DNN regardless of the sensors used in the target field, making our approach particularly suitable for multichannel data. We test our method on two ubiquitous computing problems, Human Activity Recognition (HAR) and Emotion Recognition (ER), and compare it to a baseline that trains the DNN without transfer learning. For HAR, we also introduce a new dataset, Cognitive Village-MSBand (CogAge), which contains data for 61 atomic activities acquired from three wearable devices (smartphone, smartwatch, and smartglasses). Our results show that our transfer learning approach outperforms the baseline for both HAR and ER.
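The transfer step described above, reuse a feature extractor pretrained on one task and attach a fresh head for the target task, can be sketched minimally. The pretrained weights here are random stand-ins (in the paper they would come from the sensor-modality classification pretraining), and all sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in for the pretrained feature extractor: in the paper's setup these
# weights would be learned on the large modality-labelled corpus; here they
# are random for illustration.
W_pretrained = rng.normal(size=(16, 8))

def extract_features(x):
    """Shared, transferred part of the model."""
    return np.tanh(x @ W_pretrained)

def build_target_head(n_target_classes):
    """Fresh task-specific layer, trained from scratch on the target problem
    (e.g., the 61 atomic activities of the CogAge dataset)."""
    return np.zeros((W_pretrained.shape[1], n_target_classes))

W_head = build_target_head(61)
x = rng.normal(size=16)
logits = extract_features(x) @ W_head
```

During fine-tuning, only `W_head` (and optionally the transferred weights) would be updated on the target data; the baseline in the abstract instead trains both parts from scratch.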

9.
Sensors (Basel) ; 20(12)2020 Jun 19.
Article in English | MEDLINE | ID: mdl-32575451

ABSTRACT

This paper addresses wearable-based recognition of Activities of Daily Living (ADLs), which are composed of several repetitive and concurrent short movements with temporal dependencies. It is impractical to use sensor data directly to recognize these long-term composite activities, because two examples (data sequences) of the same ADL result in largely diverse sensory data. However, the examples may be similar in terms of more semantic and meaningful short-term atomic actions. Therefore, we propose a two-level hierarchical model for the recognition of ADLs. First, atomic activities are detected and their probabilistic scores are generated at the lower level. Second, we handle the temporal transitions of atomic activities using a temporal pooling method, rank pooling. This enables us to encode the ordering of probabilistic scores for atomic activities at the higher level of our model. Rank pooling leads to a 5-13% improvement in results compared to other popularly used techniques. We also produce a large dataset of 61 atomic and 7 composite activities for our experiments.
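Rank pooling encodes a sequence by the parameters of a function fit to preserve its temporal order. A common least-squares approximation, sketched below on fake atomic-score data (the exact formulation and preprocessing in the paper may differ), regresses the time index on running means of the sequence and keeps the weight vector as the fixed-length encoding:

```python
import numpy as np

rng = np.random.default_rng(9)

def rank_pool(S):
    """Least-squares approximation of rank pooling: fit a linear function
    whose output increases with time over the time-averaged score sequence,
    and use its parameter vector as the encoding."""
    T = len(S)
    V = np.cumsum(S, axis=0) / np.arange(1, T + 1)[:, None]  # running means
    t = np.arange(1, T + 1, dtype=float)                     # temporal order
    u, *_ = np.linalg.lstsq(V, t, rcond=None)
    return u

# Toy sequence of atomic-activity scores: 30 windows x 6 atomic classes.
scores = rng.random(size=(30, 6))
encoding = rank_pool(scores)
```

The encoding has one entry per atomic class regardless of sequence length, so ADL instances of different durations become comparable vectors that reflect the ordering, not just the frequency, of atomic activities.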


Subjects
Activities of Daily Living; Wearable Electronic Devices; Humans
10.
Sensors (Basel) ; 18(2)2018 Feb 24.
Article in English | MEDLINE | ID: mdl-29495310

ABSTRACT

Getting a good feature representation of data is paramount for Human Activity Recognition (HAR) using wearable sensors. An increasing number of feature learning approaches, in particular deep-learning-based ones, have been proposed to extract effective feature representations by analyzing large amounts of data. However, obtaining an objective interpretation of their performance faces two problems: the lack of a baseline evaluation setup, which makes a strict comparison between them impossible, and insufficient implementation details, which can hinder their use. In this paper, we attempt to address both issues: we first propose an evaluation framework allowing a rigorous comparison of features extracted by different methods, and use it to carry out extensive experiments with state-of-the-art feature learning approaches. We then provide all the code and implementation details to make both the reproduction of the results reported in this paper and the reuse of our framework easier for other researchers. Our studies on the OPPORTUNITY and UniMiB-SHAR datasets highlight the effectiveness of hybrid deep-learning architectures combining convolutional and Long Short-Term Memory (LSTM) layers in obtaining features that characterise both short- and long-term time dependencies in the data.
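The core idea of such a baseline evaluation setup is to hold the classifier and data split fixed so that only the features differ. The toy sketch below illustrates this with a deliberately simple nearest-centroid classifier and two synthetic feature sets (one informative, one pure noise); the classifier choice and data are assumptions, not the paper's framework:

```python
import numpy as np

rng = np.random.default_rng(11)

def nearest_centroid_accuracy(F_train, y_train, F_test, y_test):
    """Fixed classifier: predict the class with the nearest training centroid.
    Keeping this constant isolates the contribution of the features."""
    centroids = np.array([F_train[y_train == c].mean(axis=0)
                          for c in np.unique(y_train)])
    d = np.linalg.norm(F_test[:, None] - centroids, axis=2)
    return (np.argmin(d, axis=1) == y_test).mean()

# Two hypothetical feature sets extracted from the same 200 windows.
y = rng.integers(0, 3, size=200)
F_a = rng.normal(size=(200, 10)) + y[:, None]   # class-informative features
F_b = rng.normal(size=(200, 10))                # uninformative features

split = 150                                     # identical split for both
acc_a = nearest_centroid_accuracy(F_a[:split], y[:split], F_a[split:], y[split:])
acc_b = nearest_centroid_accuracy(F_b[:split], y[:split], F_b[split:], y[split:])
```

Any accuracy gap between `acc_a` and `acc_b` is then attributable to the feature extraction method alone, which is the comparison the evaluation framework is built to make rigorous.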


Subjects
Human Activities; Humans; Machine Learning; Neural Networks, Computer; Wearable Electronic Devices
11.
Comput Biol Med ; 166: 107501, 2023 Sep 18.
Article in English | MEDLINE | ID: mdl-37742416

ABSTRACT

Sleep is an important research area in nutritional medicine, playing a crucial role in the restoration of human physical and mental health. It can influence diet, metabolism, and hormone regulation, which can affect overall health and well-being. As an essential tool in sleep studies, sleep stage classification provides a parsing of sleep architecture and a comprehensive understanding of sleep patterns, helping to identify sleep disorders and facilitating the formulation of targeted sleep interventions. However, class imbalance is typically salient in sleep datasets, which severely affects classification performance. To address this issue, and to extract optimal multimodal features of EEG, EOG, and EMG that can improve the accuracy of sleep stage classification, a Borderline Synthetic Minority Oversampling Technique (B-SMOTE)-based Supervised Convolutional Contrastive Learning method (BST-SCCL) is proposed. It avoids the risk of data mismatch between various sleep knowledge domains (varying health conditions and annotation rules) and strengthens the learning of N1-stage characteristics through a pair-wise segment comparison strategy. A lightweight residual network architecture with a novel truncated cross-entropy loss function is designed to accommodate multimodal time series and to boost training speed and performance stability. The proposed model has been validated on four well-known public sleep datasets (Sleep-EDF-20, Sleep-EDF-78, ISRUC-1, and ISRUC-3), and its superior performance (overall accuracy of 91.31-92.34%, MF1 of 88.21-90.08%, and Cohen's Kappa coefficient k of 0.87-0.89) further demonstrates its effectiveness. It shows the great potential of contrastive learning for cross-domain knowledge interaction in precision medicine.
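The B-SMOTE component referenced above oversamples only "borderline" minority samples, those whose nearest neighbours are mostly (but not entirely) majority, and synthesizes new points by interpolating towards minority neighbours. The sketch below is a simplified illustration of that idea on toy data, not the paper's implementation (real Borderline-SMOTE restricts interpolation partners to the k nearest minority neighbours):

```python
import numpy as np

rng = np.random.default_rng(13)

def borderline_smote(X_min, X_maj, n_new, k=5):
    """Simplified Borderline-SMOTE: seed synthetic points only from minority
    samples in the "danger" zone (k-NN mostly majority, but not entirely,
    which would indicate noise)."""
    X_all = np.vstack([X_min, X_maj])
    is_min = np.arange(len(X_all)) < len(X_min)
    danger = []
    for i, x in enumerate(X_min):
        d = np.linalg.norm(X_all - x, axis=1)
        nn = np.argsort(d)[1:k + 1]              # skip the point itself
        n_maj = (~is_min[nn]).sum()
        if k / 2 <= n_maj < k:                   # borderline, not noise
            danger.append(i)
    if not danger:                               # fallback for tiny toy data
        danger = list(range(len(X_min)))
    synth = []
    for _ in range(n_new):
        i = rng.choice(danger)
        j = rng.integers(0, len(X_min))
        lam = rng.random()
        synth.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synth)

# Toy imbalanced epoch features: minority class overlapping the boundary.
X_maj = rng.normal(loc=0.0, size=(80, 4))
X_min = rng.normal(loc=1.0, size=(8, 4))
X_new = borderline_smote(X_min, X_maj, n_new=72)
```

Focusing synthesis on borderline samples concentrates new training data where the decision boundary is hardest, which is why it suits stages like N1 that are easily confused with neighbouring stages.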

12.
Comput Biol Med ; 166: 107489, 2023 Sep 22.
Article in English | MEDLINE | ID: mdl-37769461

ABSTRACT

BACKGROUND: Flow experience is a specific, positive, affective state that occurs when humans are completely absorbed in an activity and forget everything else. This state can lead to high performance, well-being, and productivity at work. Few studies have been conducted on determining the human flow experience using physiological wearable sensor devices; other studies rely on self-reported data. METHODS: In this article, we use physiological data collected from 25 subjects with multimodal sensing devices (the Empatica E4 wristband, the Emotiv Epoc X electroencephalography (EEG) headset, and the Biosignalplux RespiBAN) during arithmetic and reading tasks to automatically discriminate between flow and non-flow states using feature engineering and deep feature learning approaches. The most meaningful wearable device for flow detection is determined by comparing the performance of each device. We also investigate the connection between emotions and flow by testing transfer learning techniques involving an emotion-recognition-related task on the source domain. RESULTS: The EEG sensor modalities yielded the best performance, with an accuracy of 64.97% and a macro-averaged F1 (AF1) score of 64.95%. An accuracy of 73.63% and an AF1 score of 72.70% were obtained after fusing all sensor modalities from all devices. Additionally, our proposed transfer learning approach, using emotional arousal classification on the DEAP dataset, led to increased performance, with an accuracy of 75.10% and an AF1 score of 74.92%. CONCLUSION: The results of this study suggest that effective discrimination between flow and non-flow states is possible with multimodal sensor data. The success of transfer learning using the DEAP emotion dataset as a source domain indicates that emotions and flow are connected, and that emotion recognition can be used as a latent task to enhance the performance of flow recognition.

13.
Front Psychol ; 12: 697093, 2021.
Article in English | MEDLINE | ID: mdl-34566774

ABSTRACT

More and more teams are collaborating virtually across the globe, and the COVID-19 pandemic has further encouraged the spread of virtual teamwork. However, virtual teams face challenges, such as reduced informal communication, with implications for team effectiveness. Team flow is a concept with high potential for promoting team effectiveness, but its measurement and promotion are challenging. Traditional team flow measurements rely on self-report questionnaires that require interrupting the team process. Approaches in artificial intelligence, i.e., machine learning, offer methods to identify, from behavioral and sensor data, an algorithm able to detect team flow and its dynamics over time without interrupting the process. Thus, in this article we present an approach for identifying team flow in virtual teams using machine learning methods. First, based on a literature review, we provide a model of team flow characteristics, composed of characteristics that are shared with individual flow and characteristics that are unique to team flow. We argue that the characteristics unique to team flow are represented by the concept of collective communication. Based on this, we present physiological and behavioral correlates of team flow that are suitable for (but not limited to) assessment in virtual teams and that can be used as input data for a machine learning system to assess team flow in real time. Finally, we suggest interventions to support team flow that can be implemented in real time, in virtual environments, and controlled by artificial intelligence. This article thus contributes to finding indicators and dynamics of team flow in virtual teams, to stimulate future research and promote team effectiveness.
