1.
IEEE Open J Eng Med Biol; 4: 21-30, 2023.
Article in English | MEDLINE | ID: mdl-37143920

ABSTRACT

Goal: To investigate whether a deep learning model can detect Covid-19 from disruptions in the human body's physiological (heart rate) and rest-activity rhythms (rhythmic dysregulation) caused by the SARS-CoV-2 virus. Methods: We propose CovidRhythm, a novel Gated Recurrent Unit (GRU) network with Multi-Head Self-Attention (MHSA) that combines sensor and rhythmic features extracted from heart rate and activity (steps) data gathered passively using consumer-grade smart wearables to predict Covid-19. A total of 39 features (including the standard deviation, mean, and min/max/average length of sedentary and active bouts) were extracted from the wearable sensor data. Biobehavioral rhythms were modeled using nine parameters (including mesor, amplitude, acrophase, and intra-daily variability). These features were then input to CovidRhythm to predict Covid-19 in the incubation phase (one day before biological symptoms manifest). Results: A combination of sensor and biobehavioral rhythm features achieved the highest AUC-ROC of 0.79 [sensitivity = 0.69, specificity = 0.89, F1 = 0.76], outperforming prior approaches in discriminating Covid-positive patients from healthy controls using 24 hours of historical wearable physiological data. Rhythmic features were the most predictive of Covid-19 infection, whether used alone or in conjunction with sensor features. Sensor features predicted healthy subjects best. Circadian rest-activity rhythms, which combine 24-hour activity and sleep information, were the most disrupted. Conclusions: CovidRhythm demonstrates that biobehavioral rhythms derived from consumer-grade wearable data can facilitate timely Covid-19 detection. To the best of our knowledge, our work is the first to detect Covid-19 using deep learning and biobehavioral rhythm features derived from consumer-grade wearable data.
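For a concrete picture of the architecture described above, here is a minimal PyTorch sketch of a CovidRhythm-style classifier: a GRU encoder whose outputs pass through multi-head self-attention before a binary Covid-19 prediction. The layer sizes, the per-timestep feature count (39 sensor + 9 rhythm = 48), and the mean-pooling step are illustrative assumptions, not the authors' published configuration.

```python
# Hedged sketch of a GRU + Multi-Head Self-Attention (MHSA) classifier,
# assuming 48 features per time step (39 sensor + 9 rhythm) over 24 hours.
import torch
import torch.nn as nn

class GRUMHSAClassifier(nn.Module):
    def __init__(self, n_features=48, hidden=64, n_heads=4):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, n_heads, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # single logit: Covid-positive vs. healthy

    def forward(self, x):              # x: (batch, time, n_features)
        h, _ = self.gru(x)             # encode the sequence: (batch, time, hidden)
        a, _ = self.attn(h, h, h)      # self-attention across time steps
        pooled = a.mean(dim=1)         # average-pool the attended sequence
        return self.head(pooled)       # (batch, 1) logits

# Example: 24 hourly feature vectors for a batch of 8 subjects.
model = GRUMHSAClassifier()
probs = torch.sigmoid(model(torch.randn(8, 24, 48)))
```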

2.
IEEE Sens J; 23(23): 29733-29748, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38186565

ABSTRACT

Consuming excessive amounts of alcohol impairs mobility and judgment and causes driving accidents, resulting in more than 800 injuries and fatalities each day. Passive methods to detect intoxicated drivers beyond the safe driving limit can facilitate Just-In-Time alerts and reduce Driving Under the Influence (DUI) incidents. Widely owned smartphones are not only equipped with motion sensors (accelerometer and gyroscope) that can passively collect gait (walk) data, but also have the processing power to run computationally expensive machine learning models. In this paper, we advance the state of the art by proposing a novel method that uses a Bi-linear Convolutional Neural Network (BiCNN) to analyze smartphone accelerometer and gyroscope data and determine from their gait whether a smartphone user is over the legal driving limit (blood alcohol concentration of 0.08). After segmenting the gait data into steps, we converted the smartphone motion sensor data to a Gramian Angular Field (GAF) image and then leveraged the BiCNN architecture for intoxication classification. Distinguishing GAF-encoded images of intoxicated vs. sober users' gait is challenging because the differences between the classes are subtle, making this a fine-grained image classification problem. The BiCNN has previously produced state-of-the-art results on fine-grained classification of natural images. To the best of our knowledge, our work is the first to utilize the BiCNN to classify GAF-encoded images of smartphone gait data to detect intoxication; prior work had explored using the BiCNN to classify natural images or addressed other gait-related tasks, but not intoxication detection. Our complete intoxication classification pipeline consists of several important pre-processing steps carefully adapted to the BAC classification task, including step detection and segmentation, data normalization to account for inter-subject variability, data fusion, GAF image generation from time-series data, and a BiCNN classification model. In a rigorous evaluation, our BiCNN model achieves an accuracy of 83.5%, outperforming the previous state of the art and demonstrating the feasibility of our approach.
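The two pipeline stages most specific to this abstract, GAF encoding of a gait segment and bilinear pooling over CNN feature maps, can be sketched as follows. This is an illustrative sketch under stated assumptions: the stream depths, the 64-sample segment length, the GASF (summation) variant of the GAF, and the normalization constants are our choices, not the authors' exact implementation.

```python
# Hedged sketch: (1) encode a 1-D gait segment as a Gramian Angular Summation
# Field (GASF) image; (2) classify it with a bilinear CNN (BiCNN) that takes
# the outer product of two convolutional feature streams before a linear head.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

def gasf(x):
    """Gramian Angular Summation Field of a 1-D series rescaled to [-1, 1]."""
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1   # min-max rescale
    phi = np.arccos(np.clip(x, -1.0, 1.0))            # polar-coordinate angles
    return np.cos(phi[:, None] + phi[None, :])        # pairwise angular sums

class BiCNN(nn.Module):
    def __init__(self, n_classes=2, ch=32):
        super().__init__()
        def stream():                                  # small illustrative CNN stream
            return nn.Sequential(
                nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            )
        self.a, self.b = stream(), stream()
        self.fc = nn.Linear(ch * ch, n_classes)

    def forward(self, x):                              # x: (batch, 1, H, W)
        fa = self.a(x).flatten(2)                      # (batch, ch, locations)
        fb = self.b(x).flatten(2)
        bil = torch.bmm(fa, fb.transpose(1, 2)) / fa.size(2)  # pooled outer product
        bil = bil.flatten(1)
        bil = torch.sign(bil) * torch.sqrt(bil.abs() + 1e-10) # signed square root
        bil = F.normalize(bil)                                 # L2 normalization
        return self.fc(bil)

# Example: classify one GASF image built from a 64-sample step segment.
img = torch.tensor(gasf(np.random.randn(64)), dtype=torch.float32)[None, None]
logits = BiCNN()(img)
```

The signed square root and L2 normalization after the outer product follow the standard bilinear-CNN recipe for fine-grained classification; the per-location outer product is what lets the model capture the subtle pairwise feature interactions that distinguish visually similar classes.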
