Results 1 - 5 of 5
1.
Sensors (Basel) ; 24(4)2024 Feb 07.
Article in English | MEDLINE | ID: mdl-38400254

ABSTRACT

Stress has emerged as a major concern in modern society, significantly impacting human health and well-being. Statistical evidence underscores the extensive social influence of stress, especially in terms of work-related stress and associated healthcare costs. This paper addresses the critical need for accurate stress detection, emphasising its far-reaching effects on health and social dynamics. Focusing on remote stress monitoring, it proposes an efficient deep learning approach for stress detection from facial videos. In contrast to research on wearable devices, this paper proposes novel Hybrid Deep Learning (DL) networks for stress detection based on remote photoplethysmography (rPPG), combining Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and 1D Convolutional Neural Network (1D-CNN) models with hyperparameter optimisation and augmentation techniques to enhance performance. The proposed approach yields a substantial improvement in accuracy and efficiency in stress detection, achieving up to 95.83% accuracy on the UBFC-Phys dataset while maintaining excellent computational efficiency. The experimental results demonstrate the effectiveness of the proposed Hybrid DL models for rPPG-based stress detection.
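A minimal sketch of the hybrid idea described above: a 1D-CNN front end followed by LSTM and GRU layers classifying windows of an rPPG signal. Layer sizes, window length, and optimiser settings are illustrative assumptions, not the architecture reported in the paper.

```python
# Hybrid 1D-CNN + LSTM/GRU classifier sketch for rPPG windows (assumed shapes).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_hybrid_rppg_model(window_len=256, n_classes=2):
    inputs = layers.Input(shape=(window_len, 1))            # one rPPG channel per window
    x = layers.Conv1D(32, kernel_size=5, activation="relu")(inputs)
    x = layers.MaxPooling1D(2)(x)
    x = layers.Conv1D(64, kernel_size=5, activation="relu")(x)
    x = layers.MaxPooling1D(2)(x)
    x = layers.LSTM(64, return_sequences=True)(x)            # temporal modelling
    x = layers.GRU(32)(x)
    x = layers.Dense(32, activation="relu")(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Hyperparameter optimisation and augmentation, as mentioned in the abstract, would be applied on top of such a base model.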


Subject(s)
Deep Learning , Humans , Photoplethysmography , Face , Health Care Costs , Memory, Long-Term
2.
IEEE Trans Biomed Eng ; 71(6): 1950-1957, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38252565

ABSTRACT

This work proposes a new formulation of common spatial patterns (CSP), often used as a powerful feature extraction technique in brain-computer interfacing (BCI) and other neurological studies. In this approach, applied to data from multiple subjects and named hyperCSP, the individual covariance and mutual correlation matrices between multiple simultaneously recorded subjects' electroencephalograms are exploited in the CSP formulation. The method aims to effectively isolate the motor task common to multiple heads and to alleviate the effects of other spurious or undesired tasks inherently or intentionally performed by the subjects. This technique provides satisfactory classification performance while using a small amount of data and low computational complexity. Using the proposed hyperCSP followed by a support vector machine classifier, we obtained a classification accuracy of 81.82% over 8 trials in the presence of strong undesired tasks. We hope that this method could reduce the training error in multi-task BCI scenarios. The recorded motor-related hyperscanning dataset is available for public use to promote research in this area.
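For orientation, the sketch below shows a standard single-subject CSP filter bank followed by an SVM on log-variance features; it is a rough analogue of the pipeline described, and does not reproduce the multi-subject covariance and mutual-correlation construction that defines hyperCSP. All shapes and names are assumptions.

```python
# Standard CSP + SVM sketch (not the hyperCSP formulation itself).
import numpy as np
from scipy.linalg import eigh
from sklearn.svm import SVC

def csp_filters(X_a, X_b, n_filters=4):
    """X_a, X_b: trials of shape (n_trials, n_channels, n_samples) for two conditions."""
    cov = lambda X: np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)
    Ca, Cb = cov(X_a), cov(X_b)
    # Generalised eigenvalue problem: Ca w = lambda (Ca + Cb) w.
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    picks = np.concatenate([order[:n_filters // 2], order[-n_filters // 2:]])
    return vecs[:, picks].T                                  # (n_filters, n_channels)

def log_var_features(W, X):
    Z = np.einsum("fc,tcs->tfs", W, X)                       # spatially filtered trials
    var = Z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

# Hypothetical usage: fit an SVM on log-variance CSP features.
# W = csp_filters(trials_task, trials_rest)
# clf = SVC(kernel="rbf").fit(log_var_features(W, train_trials), train_labels)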


Subject(s)
Brain-Computer Interfaces , Electroencephalography , Signal Processing, Computer-Assisted , Support Vector Machine , Humans , Electroencephalography/methods , Algorithms , Adult , Male , Female , Brain/physiology
3.
J Neural Eng ; 18(2)2021 Feb 25.
Article in English | MEDLINE | ID: mdl-33418548

ABSTRACT

Objective. The novelty of this study lies in the exploration of several new approaches to pre-processing brainwave signals, in which statistical features are extracted and then formatted as visual images based on the order in which dimensionality reduction algorithms select them. These data are then treated as visual input for 2D and 3D convolutional neural networks (CNNs), which further extract 'features of features'. Approach. Statistical features derived from three electroencephalography (EEG) datasets are presented in visual space and processed in 2D and 3D space as pixels and voxels, respectively. Three datasets are benchmarked: mental attention states and emotional valences from the four TP9, AF7, AF8 and TP10 10-20 electrodes, and eye state data from 64 electrodes. Seven hundred and twenty-nine features are selected through three selection methods in order to form 27 × 27 images and 9 × 9 × 9 cubes from the same datasets. CNNs engineered for the 2D and 3D pre-processing representations learn to convolve useful graphical features from the data. Main results. A 70/30 split shows that the strongest feature selection methods for classification accuracy are One Rule for attention state and Relative Entropy for emotional state, both in 2D. For the eye state dataset, 3D space is best, with features selected by Symmetrical Uncertainty. Finally, 10-fold cross validation is used to train the best topologies. The final best 10-fold results are 97.03% for attention state (2D CNN), 98.4% for emotional state (3D CNN), and 97.96% for eye state (3D CNN). Significance. The findings of the framework presented in this work show that CNNs can successfully convolve useful features from a set of pre-computed statistical temporal features derived from raw EEG waves. The high performance of the k-fold validated algorithms suggests that the features learnt by the CNNs hold useful knowledge for classification in addition to the pre-computed features.
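A minimal sketch of the "features as images" idea: 729 pre-computed statistical features per EEG window are reshaped into a 27 × 27 image (or a 9 × 9 × 9 cube) and classified with a small CNN. The feature extraction, feature selection, and exact network topologies from the paper are not reproduced; the layer sizes here are assumptions and only the shapes matter.

```python
# Reshape selected statistical features into pixel/voxel grids for CNN input.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def to_image(feature_vec):
    return feature_vec.reshape(27, 27, 1)      # 729 features -> 2D "pixel" grid

def to_cube(feature_vec):
    return feature_vec.reshape(9, 9, 9, 1)     # 729 features -> 3D "voxel" grid

def small_2d_cnn(n_classes=3):
    # Illustrative 2D topology; a Conv3D analogue would consume the cubes.
    model = models.Sequential([
        layers.Input(shape=(27, 27, 1)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(32, 3, activation="relu"),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```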


Subject(s)
Electroencephalography , Neural Networks, Computer , Algorithms , Electroencephalography/methods , Emotions , Research Design
4.
PLoS One ; 15(10): e0241332, 2020.
Article in English | MEDLINE | ID: mdl-33112931

ABSTRACT

In this work we present a three-stage machine learning strategy for country-level risk classification based on countries that are reporting COVID-19 information. A K% binning discretisation (K = 25) is used to create four risk groups of countries based on the risk of transmission (coronavirus cases per million population), risk of mortality (coronavirus deaths per million population), and risk of inability to test (coronavirus tests per million population). The four risk groups produced by K% binning are labelled 'low', 'medium-low', 'medium-high', and 'high'. Coronavirus-related data are then removed, and the attributes used to predict the three types of risk are the geopolitical and demographic data describing each country. Thus, the class labels are calculated from coronavirus data, but the input attributes are country-level information independent of coronavirus data. The three four-class classification problems are then explored and benchmarked through leave-one-country-out cross validation to find the strongest model, producing a stack of Gradient Boosting and Decision Tree algorithms for risk of transmission, a stack of Support Vector Machine and Extra Trees for risk of mortality, and a Gradient Boosting algorithm for risk of inability to test. It is noted that a high risk of inability to test is often coupled with low risks of transmission and mortality; therefore, the risk of inability to test should be interpreted first, before the predicted transmission and mortality risks are considered. Finally, the approach is applied to more recent data from September 2020, and weaker results are noted, since the growth of international collaboration reduces the useful knowledge carried by country-level attributes; this suggests that similar machine learning approaches are most useful early on, before situations unfold.
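A minimal sketch of the labelling and validation scheme for the transmission-risk problem: per-million rates are discretised into four equal-frequency bins (K = 25%), then country-level attributes are evaluated with leave-one-country-out cross validation using a stacked classifier. The stack shown matches the combination named in the abstract for transmission risk, but column names and the data file are hypothetical.

```python
# K% binning labels + leave-one-country-out evaluation of a stacked model.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

df = pd.read_csv("countries.csv")                    # hypothetical input file

# K% binning with K = 25: four equal-frequency risk groups from per-million rates.
df["transmission_risk"] = pd.qcut(df["cases_per_million"], q=4,
                                  labels=["low", "medium-low", "medium-high", "high"])

# Coronavirus-related columns are removed; only country-level attributes remain.
X = df.drop(columns=["cases_per_million", "transmission_risk"])
y = df["transmission_risk"]

stack = StackingClassifier(
    estimators=[("gb", GradientBoostingClassifier()),
                ("dt", DecisionTreeClassifier())],
    final_estimator=GradientBoostingClassifier())

# Leave-one-country-out is leave-one-out when each row is a country.
scores = cross_val_score(stack, X, y, cv=LeaveOneOut())
print("LOOCV accuracy:", scores.mean())
```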


Subject(s)
Betacoronavirus , Coronavirus Infections/epidemiology , Disaster Planning , Machine Learning , Models, Theoretical , Pandemics , Pneumonia, Viral/epidemiology , Risk Assessment/methods , Algorithms , COVID-19 , COVID-19 Testing , Classification , Clinical Laboratory Techniques , Coronavirus Infections/diagnosis , Coronavirus Infections/mortality , Coronavirus Infections/transmission , Decision Trees , Forecasting , Global Health , Humans , International Cooperation , Pneumonia, Viral/diagnosis , Pneumonia, Viral/mortality , Pneumonia, Viral/transmission , Reagent Kits, Diagnostic/supply & distribution , SARS-CoV-2 , Support Vector Machine
5.
Sensors (Basel) ; 20(18)2020 Sep 09.
Article in English | MEDLINE | ID: mdl-32917024

ABSTRACT

In this work, we show that a late fusion approach to multimodality in sign language recognition improves the overall ability of the model compared with the singular approaches of image classification (88.14%) and Leap Motion data classification (72.73%). With a large synchronous dataset of 18 BSL gestures collected from multiple subjects, two deep neural networks are benchmarked and compared to derive the best topology for each. The vision model is implemented by a Convolutional Neural Network and an optimised Artificial Neural Network, and the Leap Motion model is implemented by an evolutionary search of Artificial Neural Network topology. Next, the two best networks are fused for synchronised processing, which yields a better overall result (94.44%) as complementary features are learnt in addition to the original task. The hypothesis is further supported by applying the three models to a set of completely unseen data, where the multimodality approach achieves the best results relative to the single-sensor methods. When transfer learning with weights trained on British Sign Language is applied to American Sign Language (ASL) classification, all three models outperform standard random weight initialisation, and the best model overall for ASL classification is the transfer-learning multimodality approach, which scored 82.55% accuracy.
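A minimal sketch of the late fusion step: class probabilities from a vision CNN and a Leap Motion ANN are concatenated and fed to a small fusion network. Model loading, input shapes, and layer sizes are illustrative assumptions; only the 18-class gesture setup comes from the abstract.

```python
# Late fusion of two unimodal classifiers via their softmax outputs.
import tensorflow as tf
from tensorflow.keras import layers, models

N_CLASSES = 18  # BSL gestures in the dataset described above

def fusion_head():
    # Input: concatenated softmax outputs of the two unimodal models.
    fused_in = layers.Input(shape=(2 * N_CLASSES,))
    x = layers.Dense(32, activation="relu")(fused_in)
    out = layers.Dense(N_CLASSES, activation="softmax")(x)
    model = models.Model(fused_in, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Hypothetical usage, given two pre-trained unimodal networks:
# p_vision = vision_model.predict(images)          # (n, N_CLASSES)
# p_leap = leap_model.predict(leap_features)       # (n, N_CLASSES)
# fused = tf.concat([p_vision, p_leap], axis=1)
# fusion_head().fit(fused, labels, epochs=20)
```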


Subject(s)
Machine Learning , Neural Networks, Computer , Sign Language , Computers , Humans , Movement , United Kingdom , United States