ABSTRACT
BACKGROUND: With nearly 20% of the US adult population using fitness trackers, there is an increasing focus on how physiological data from these devices can provide actionable insights about workplace performance. However, in-the-wild studies examining how these metrics correlate with cognitive performance measures across diverse populations are lacking, and claims made by device manufacturers are vague. While there has been extensive research leading to a variety of theories on how physiological measures affect cognitive performance, virtually all such studies have been conducted in highly controlled settings, and their validity in the real world is poorly understood. OBJECTIVE: We seek to bridge this gap by evaluating prevailing theories on the effects of a variety of sleep, activity, and heart rate parameters on cognitive performance against data collected in real-world settings. METHODS: We used a Fitbit Charge 3 and a smartphone app to collect physiological and neurobehavioral task data, respectively, as part of our 6-week-long in-the-wild study. We collected data from 24 participants across multiple population groups (shift workers, regular workers, and graduate students) on different performance measures (vigilant attention and cognitive throughput). Simultaneously, we used the fitness tracker to unobtrusively obtain physiological measures that could influence these performance measures, including over 900 nights of sleep and over 1 million minutes of heart rate and physical activity metrics. We performed a repeated measures correlation (rrm) analysis to investigate which sleep and physiological markers show associations with each performance measure. We also report how our findings relate to existing theories and previous observations from controlled studies.
RESULTS: Daytime alertness was found to be significantly correlated with total sleep duration on the previous night (rrm=0.17, P<.001) as well as the duration of rapid eye movement sleep (rrm=0.12, P<.001) and light sleep (rrm=0.15, P<.001). Cognitive throughput, by contrast, was not significantly correlated with sleep duration but with sleep timing: a circadian phase shift toward a later sleep time corresponded with lower cognitive throughput on the following day (rrm=-0.13, P<.001). Both measures showed circadian variations, but only alertness showed a decline (rrm=-0.1, P<.001) as a result of homeostatic pressure. Both heart rate and physical activity correlated positively with alertness as well as cognitive throughput. CONCLUSIONS: Our findings reveal significant differences in terms of which sleep-related physiological metrics influence each of the 2 performance measures. This makes the case for more targeted in-the-wild studies investigating how physiological measures from self-tracking data influence, or can be used to predict, specific aspects of cognitive performance.
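The repeated measures correlation used above pools within-subject covariation across participants. As a rough illustration (not the study's code), the r_rm point estimate can be computed by demeaning each participant's values and correlating the pooled residuals:

```python
import numpy as np

def rm_corr(subjects, x, y):
    """Point estimate of the repeated measures correlation (r_rm):
    remove each subject's mean from x and y, then take the Pearson
    correlation of the pooled within-subject residuals."""
    subjects, x, y = map(np.asarray, (subjects, x, y))
    xc = np.empty(len(x), dtype=float)
    yc = np.empty(len(y), dtype=float)
    for s in np.unique(subjects):
        m = subjects == s
        xc[m] = x[m] - x[m].mean()
        yc[m] = y[m] - y[m].mean()
    return float(np.corrcoef(xc, yc)[0, 1])
```

Significance testing additionally adjusts the degrees of freedom for the number of subjects; dedicated implementations (e.g., the rmcorr R package or pingouin's rm_corr in Python) handle this.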
Subjects
Cognition/physiology, Health Behavior/physiology, Sleep/physiology, Adult, Female, Humans, Longitudinal Studies, Male, Young Adult

ABSTRACT
College students experience ever-increasing levels of stress, leading to a wide range of health problems. In this context, monitoring and predicting students' stress levels is crucial and, fortunately, made possible by the growing support for data collection via mobile devices. However, predicting stress levels from mobile phone data remains a challenging task, and off-the-shelf deep learning models are inapplicable or inefficient due to data irregularity, inter-subject variability, and the "cold start problem". To overcome these challenges, we developed a platform named Branched CALM-Net that aims to predict students' stress levels through dynamic clustering in a personalized manner. This is the first platform that leverages the branching technique in a multitask setting to achieve personalization and continuous adaptation. Our method achieves state-of-the-art performance in predicting student stress from mobile sensor data collected as part of the Dartmouth StudentLife study, with a ROC AUC 37% higher and a PR AUC surpassing those of the nearest baseline models. In the cold-start online learning setting, Branched CALM-Net outperforms other models, attaining an average F1 score of 87% with just 1 week of training data for a new student, which shows it is reliable and effective at predicting stress levels from mobile data.
ABSTRACT
BACKGROUND: Respiratory rate is a crucial indicator of disease severity yet is the most neglected vital sign. Subtle changes in respiratory rate may be the first sign of clinical deterioration in a variety of disease states. Current methods of respiratory rate monitoring are labor-intensive and sensitive to motion artifacts, which often leads to inaccurate readings or underreporting; therefore, new methods of respiratory monitoring are needed. The PulsON 440 (P440; TSDR Ultra Wideband Radios and Radars) radar module is a contactless sensor that uses an ultrawideband impulse radar to detect respiratory rate. It has previously demonstrated accuracy in a laboratory setting and may be a useful alternative for contactless respiratory monitoring in clinical settings; however, it has not yet been validated in a clinical setting. OBJECTIVE: The goal of this study was to (1) compare the P440 radar module to gold standard manual respiratory rate monitoring and standard of care telemetry respiratory monitoring through transthoracic impedance plethysmography and (2) compare the P440 radar to gold standard measurements of respiratory rate in subgroups based on sex and disease state. METHODS: This was a pilot study of adults aged 18 years or older being monitored in the emergency department. Participants were monitored with the P440 radar module for 2 hours and had gold standard (manual respiratory counting) and standard of care (telemetry) respiratory rates recorded at 15-minute intervals during that time. Respiratory rates between the P440, gold standard, and standard telemetry were compared using Bland-Altman plots and intraclass correlation coefficients. RESULTS: A total of 14 participants were enrolled in the study. The P440 and gold standard Bland-Altman analysis showed a bias of -0.76 (-11.16 to 9.65) and an intraclass correlation coefficient of 0.38 (95% CI 0.06-0.60). The P440 and gold standard had the best agreement at normal physiologic respiratory rates. 
There was no change in agreement between the P440 and the gold standard when grouped by admitting diagnosis or sex. CONCLUSIONS: Although the P440 did not have statistically significant agreement with gold standard respiratory rate monitoring, it did show a trend of increased agreement in the normal physiologic range, with overestimation at low respiratory rates and underestimation at high respiratory rates. This trend is important for adjusting future models to accurately detect respiratory rates. Once validated, this contactless respiratory monitor could provide a unique solution for monitoring patients in a variety of settings.
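The Bland-Altman quantities reported above (a bias with its 95% limits of agreement) reduce to a few lines of arithmetic. A minimal sketch with synthetic numbers, not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement methods.
    Bias = mean difference; limits = bias +/- 1.96 * SD of differences."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = float(diff.mean())
    sd = float(diff.std(ddof=1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Example: radar readings vs. manually counted respiratory rates (toy values)
radar = [10, 12, 14, 16]
manual = [11, 12, 13, 17]
bias, lower, upper = bland_altman(radar, manual)
```

A wide interval between the lower and upper limits, as in the abstract's (-11.16, 9.65), indicates poor agreement even when the bias itself is small.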
ABSTRACT
Long-term and high-dose prescription opioid use places individuals at risk for opioid misuse, opioid use disorder (OUD), and overdose. Existing methods for monitoring opioid use and detecting misuse rely on self-reports, which are prone to reporting bias, and toxicology testing, which may be infeasible in outpatient settings. Although wearable technologies for monitoring day-to-day health metrics have gained significant traction in recent years due to their ease of use, flexibility, and advancements in sensor technology, their application within the opioid use space remains underexplored. In the current work, we demonstrate that oral opioid administrations can be detected using physiological signals collected from a wrist sensor. More importantly, we show that models informed by opioid pharmacokinetics increase reliability in predicting the timing of opioid administrations. Forty-two individuals who were prescribed opioids as a part of their medical treatment in-hospital and after discharge were enrolled. Participants wore a wrist sensor throughout the study, while opioid administrations were tracked using electronic medical records and self-reports. We collected 1,983 hours of sensor data containing 187 opioid administrations from the inpatient setting and 927 hours of sensor data containing 40 opioid administrations from the outpatient setting. We demonstrate that a self-supervised pre-trained model, capable of learning the canonical time series of plasma concentration of the drug derived from opioid pharmacokinetics, can reliably detect opioid administration in both settings. Our work suggests the potential of pharmacokinetic-informed, data-driven models to objectively detect opioid use in daily life.
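The "canonical time series of plasma concentration" that informs the model can be sketched with a standard one-compartment oral-absorption (Bateman) curve; the parameter values below are illustrative defaults, not the opioid-specific constants the study would have used:

```python
import math

def plasma_concentration(t, dose=10.0, F=1.0, V=50.0, ka=1.2, ke=0.25):
    """One-compartment oral model (Bateman function):
    C(t) = F*D*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t)).
    ka: absorption rate (1/h), ke: elimination rate (1/h), V: volume (L)."""
    if t < 0:
        return 0.0
    return F * dose * ka / (V * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))
```

Concentration is zero at administration, peaks at t_max = ln(ka/ke) / (ka - ke), and decays exponentially thereafter; the abstract describes the pre-trained model as learning this canonical shape and aligning physiological changes with it.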
ABSTRACT
Syndromic surveillance is an effective tool for enabling the timely detection of infectious disease outbreaks and facilitating the implementation of effective mitigation strategies by public health authorities. While various information sources are currently utilized to collect syndromic signal data for analysis, the aggregated measurement of cough, an important symptom for many illnesses, is not widely employed as a syndromic signal. With recent advancements in ubiquitous sensing technologies, it becomes feasible to continuously measure population-level cough incidence in a contactless, unobtrusive, and automated manner. In this work, we demonstrate the utility of monitoring aggregated cough count as a syndromic indicator to estimate COVID-19 cases. In our study, we deployed a sensor-based platform (Syndromic Logger) in the emergency room of a large hospital. The platform captured syndromic signals from audio, thermal imaging, and radar, while the ground truth data were collected from the hospital's electronic health record. Our analysis revealed a significant correlation between the aggregated cough count and positive COVID-19 cases in the hospital (Pearson correlation of 0.40, p-value < 0.001). Notably, this correlation was higher than that observed with the number of individuals presenting with fever (ρ = 0.22, p = 0.04), a widely used syndromic signal and screening tool for such diseases. Furthermore, we demonstrate how the data obtained from our Syndromic Logger platform could be leveraged to estimate various COVID-19-related statistics using multiple modeling approaches. Aggregated cough counts and other data collected from our platform, such as people density, can be utilized to predict metrics related to COVID-19 patient visits in a hospital waiting room; SHAP and Gini feature importance analyses identified cough count as the most important feature for these prediction models.
Furthermore, we have shown that predictions based on cough counting outperform models based on fever detection (e.g., temperatures over 39°C), which requires more intrusive engagement with the population. Our findings highlight that incorporating cough-count-based signals into syndromic surveillance systems can significantly enhance overall resilience against future public health challenges, such as emerging disease outbreaks or pandemics.
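As a toy illustration of the modeling idea (synthetic numbers, not the study's data or actual models), daily case-related metrics can be regressed on aggregated cough counts and crowd density with ordinary least squares:

```python
import numpy as np

def fit_daily_case_model(cough_counts, densities, cases):
    """Least-squares fit of cases ~ b0 + b1*cough + b2*density
    (illustrative linear model, not the study's approach)."""
    X = np.column_stack([np.ones(len(cough_counts)), cough_counts, densities])
    beta, *_ = np.linalg.lstsq(X, np.asarray(cases, float), rcond=None)
    return beta

def predict(beta, cough, density):
    # Apply the fitted coefficients to a new day's aggregated signals
    return beta[0] + beta[1] * cough + beta[2] * density
```

In practice the study compared multiple modeling approaches and used SHAP and Gini importances to rank features; this sketch only shows the basic signal-to-estimate pipeline.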
Subjects
COVID-19, Sentinel Surveillance, Humans, COVID-19/epidemiology, Waiting Rooms, Hospitals, Disease Outbreaks/prevention & control, Fever/epidemiology

ABSTRACT
Since 2005, health insurance (HI) coverage in India has significantly increased, largely because of the introduction of government-funded pro-poor insurance programs. As a result, the determinants of HI enrollment and their relative importance may have changed. Using National Family Health Survey (NFHS)-4 data, collected in 2015-2016, and employing a Probit regression model, we re-examine the determinants of household HI enrollment. Then, using a multinomial logistic regression model, we estimate the relative risk ratio for enrollment in different HI schemes. In comparison to the results on the determinants of HI enrollment using the NFHS data collected in 2005-2006, we find a decrease in the wealth gap in public HI enrollment. Nonetheless, disparities in enrollment remain, with some changes in those patterns. Households with low assets have lower enrollment in private and community-based health insurance (CBHI) programs. Households with a higher number of dependents have a higher likelihood of HI enrollment, especially in rural areas. In rural areas, poor Scheduled Caste and Scheduled Tribe households are more likely to be enrolled in public HI than general caste households. In urban areas, Muslim households have a lower likelihood of enrollment in any HI. The educational attainment of household heads is positively associated with enrollment in private HI, but it is negatively associated with enrollment in public HI. Since 2005-2006, while HI coverage has improved, disparities across social groups remain.
ABSTRACT
BACKGROUND: Caste plays a significant role in individual healthcare access and health outcomes in India. Discrimination against low-caste communities contributes to their poverty and poor health outcomes. The Rashtriya Swasthya Bima Yojana (RSBY), a national health insurance program, was created to improve healthcare access for the poor. This study accounts for caste-based disparities in RSBY enrollment in India by decomposing the contributions of relevant factors. METHODS: Using the data from the 2015-2016 round of the National Family Health Survey, we compare RSBY enrollment rates of low-caste and high-caste households. We use a non-linear extension of the Oaxaca-Blinder decomposition and estimate two models by pooling coefficients across the comparison groups and all caste groups. Enrollment differentials are decomposed into individual- and household-level characteristics, media access, and state-level fixed effects, allowing 2000 replications and random ordering of variables. RESULTS: The analysis of 480,766 households shows that scheduled tribe households have the highest enrollment (18.85%), followed by 14.13% for scheduled caste, 10.67% for other backward caste, and 9.33% for high caste households. Household factors, the family head's characteristics, media access, and state-level fixed effects account for 32% to 52% of the gap in enrollment. More specifically, the enrollment gaps are attributable to differences in wealth status, educational attainment, residence, family size, dependency ratio, media access, and occupational activities of the households. CONCLUSIONS: The weaker socio-economic status of low-caste households explains their higher RSBY enrollment.
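The decomposition logic can be illustrated in its classic linear two-fold form: the mean outcome gap between two groups splits into a part explained by different characteristics and an unexplained part attributable to different coefficients. The study uses a non-linear extension suited to binary enrollment; the sketch below is the linear analogue on synthetic data:

```python
import numpy as np

def ols(X, y):
    # OLS coefficients via least squares (X includes an intercept column)
    return np.linalg.lstsq(X, y, rcond=None)[0]

def twofold_decomposition(Xa, ya, Xb, yb):
    """Two-fold Oaxaca-Blinder: mean(ya) - mean(yb) =
    explained:   (xbar_a - xbar_b) @ beta_b   (differing characteristics)
    unexplained: xbar_a @ (beta_a - beta_b)   (differing coefficients)."""
    beta_a, beta_b = ols(Xa, ya), ols(Xb, yb)
    xa, xb = Xa.mean(axis=0), Xb.mean(axis=0)
    explained = float((xa - xb) @ beta_b)
    unexplained = float(xa @ (beta_a - beta_b))
    return explained, unexplained
```

With an intercept column included, the explained and unexplained parts sum exactly to the raw mean gap between the groups.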
Subjects
Health Insurance, Social Class, Humans, Health Services Accessibility, India, National Health Programs, Socioeconomic Factors
ABSTRACT
Objective: The opioid crisis in the USA remains severe during the COVID-19 pandemic, which has reduced access to evidence-based interventions. This Stage 1 randomized controlled trial (RCT) assessed the preliminary efficacy of Zoom-based Mindfulness-Oriented Recovery Enhancement (MORE) plus Just-in-Time Adaptive Intervention (JITAI) prompts to practice mindfulness triggered by wearable sensors (MORE + JITAI). Method: Opioid-treated chronic pain patients (n = 63) were randomized to MORE + JITAI or a Zoom-based supportive group (SG) psychotherapy control. Participants completed ecological momentary assessments (EMA) of craving and pain (co-primary outcomes), as well as positive affect and stress, at one random probe per day for 90 days. EMA probes were also triggered when a wearable sensor detected the presence of physiological stress, as indicated by changes in heart rate variability (HRV), at which time participants in MORE + JITAI were prompted by an app to engage in audio-guided mindfulness practice. Results: EMA showed significantly greater reductions in craving, pain, and stress, and increased positive affect over time for participants in MORE + JITAI than for participants in SG. JITAI-initiated mindfulness practice was associated with significant improvements in these variables, as well as increases in HRV. Machine learning predicted JITAI-initiated mindfulness practice effectiveness with reasonable sensitivity and specificity. Conclusions: In this pilot trial, MORE + JITAI demonstrated preliminary efficacy for reducing opioid craving and pain, two factors implicated in opioid misuse. MORE + JITAI is a promising intervention that warrants investigation in a fully powered RCT. Preregistration: This study is registered on ClinicalTrials.gov (NCT04567043).
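The abstract does not detail the wearable's stress-detection algorithm, but HRV-based triggers of the kind described are commonly built on RMSSD computed over a sliding window of interbeat intervals. The function names and threshold rule below are purely illustrative:

```python
import math

def rmssd(ibi_ms):
    """RMSSD: root mean square of successive interbeat-interval
    differences (ms); lower values indicate reduced vagal tone/stress."""
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def should_prompt(window_ibi_ms, baseline_rmssd, ratio=0.6):
    """Trigger a mindfulness prompt when windowed RMSSD falls below
    a fraction of the person's resting baseline (illustrative rule)."""
    return rmssd(window_ibi_ms) < ratio * baseline_rmssd
```

A real JITAI would also rate-limit prompts and personalize the baseline, but the core physiological-stress trigger reduces to a comparison like this one.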
ABSTRACT
Opioid use disorder is one of the most pressing public health problems of our time. Mobile health tools, including wearable sensors, have great potential in this space but have been underutilized. Of specific interest are digital biomarkers, or end-user-generated physiologic or behavioral measurements that correlate with health or pathology. The current manuscript describes a longitudinal, observational study of adult patients receiving opioid analgesics for acute painful conditions. Participants in the study are monitored with a wrist-worn E4 sensor, during which time physiologic parameters (heart rate/variability, electrodermal activity, skin temperature, and accelerometry) are collected continuously. Opioid use events are recorded via electronic medical record and self-report. Three hundred thirty-nine discrete opioid dosing events from 36 participants are analyzed among 2070 h of sensor data. Fifty-one features are extracted from the data and initially compared pre- and post-opioid administration, and subsequently are used to generate machine learning models. Model performance is compared based on individual and treatment characteristics. The best-performing machine learning model to detect opioid administration is a Channel-Temporal Attention-Temporal Convolutional Network (CTA-TCN) model using raw data from the wearable sensor. History of intravenous drug use is associated with better model performance, while middle age and co-administration of non-narcotic analgesia or sedative drugs are associated with worse model performance. These characteristics may be candidate input features for future opioid detection model iterations. Once mature, this technology could provide clinicians with actionable data on opioid use patterns in real-world settings, and predictive analytics for early identification of opioid use disorder risk.
ABSTRACT
Limbic-prefrontal connectivity during negative emotional challenges underpins a wide range of psychiatric disorders, yet the early development of this system is largely unknown due to difficulties imaging young children. Functional Near-Infrared Spectroscopy (fNIRS) has advanced an understanding of early emotion-related prefrontal activation and psychopathology, but cannot detect activation below the outer cortex. Galvanic skin response (GSR) is a sensitive index of autonomic arousal strongly influenced by numerous limbic structures. We recorded simultaneous lateral prefrontal cortex (lPFC) activation via fNIRS and GSR in 73 3- to 5-year-old children, who ranged from low to severe levels of irritability, during a frustration task. The goal of the study was to test how frustration-related PFC activation modulated psychophysiology in preschool children, and whether associations were moderated by irritability severity. Results showed lPFC activation significantly increased, and GSR levels significantly decreased, as children moved from frustration to rest, such that preschoolers with the highest activation had the steepest recovery. Further, this relation was moderated by irritability such that children with severe irritability showed no association between lPFC activation and GSR. Results suggest functional connections between prefrontal and autonomic nervous systems are in place early in life, with evidence of lPFC down-regulation of frustration-based stress that is altered in early psychopathology. Combining fNIRS and GSR may be a promising novel approach for inferring limbic-PFC processes that drive early emotion regulation and psychopathology.
Subjects
Emotional Regulation, Frustration, Arousal, Preschool Child, Humans, Irritable Mood/physiology, Prefrontal Cortex/physiology

ABSTRACT
Opioid use disorder is a medical condition with major social and economic consequences. While ubiquitous physiological sensing technologies have been widely adopted and extensively used to monitor day-to-day activities and deliver targeted interventions to improve human health, the use of these technologies to detect drug use in natural environments has been largely underexplored. The long-term goal of our work is to develop a mobile technology system that can identify high-risk opioid-related events (i.e., development of tolerance in the setting of prescription opioid use, return-to-use events in the setting of opioid use disorder) and deploy just-in-time interventions to mitigate the risk of overdose morbidity and mortality. In the current paper, we take an initial step by asking a crucial question: Can opioid use be detected using physiological signals obtained from a wrist-mounted sensor? Thirty-six individuals who were admitted to the hospital for an acute painful condition and received opioid analgesics as part of their clinical care were enrolled. Subjects wore a noninvasive wrist sensor during this time (1-14 days) that continuously measured physiological signals (heart rate, skin temperature, accelerometry, electrodermal activity, and interbeat interval). We collected a total of 2070 hours (≈ 86 days) of physiological data and observed a total of 339 opioid administrations. Our results are encouraging and show that using a Channel-Temporal Attention TCN (CTA-TCN) model, we can detect an opioid administration in a time window with an F1-score of 0.80, a specificity of 0.77, a sensitivity of 0.80, and an AUC of 0.77. We also predict the exact moment of administration in this time window with a normalized mean absolute error of 8.6% and an R² coefficient of 0.85.
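The detection metrics reported above follow directly from the confusion matrix over time windows. A self-contained sketch of how F1, sensitivity, and specificity are computed from binary labels (toy labels, not the study's predictions):

```python
def detection_metrics(y_true, y_pred):
    """F1 score, sensitivity (recall), and specificity from binary labels,
    where 1 = window contains an opioid administration."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return f1, sensitivity, specificity
```

AUC, by contrast, is computed from the ranked model scores rather than hard labels, which is why it can differ from sensitivity/specificity at a single operating point.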
ABSTRACT
We developed a contactless syndromic surveillance platform, FluSense, that aims to expand the current paradigm of influenza-like illness (ILI) surveillance by capturing crowd-level bio-clinical signals directly related to physical symptoms of ILI from hospital waiting areas in an unobtrusive and privacy-sensitive manner. FluSense consists of a novel edge-computing sensor system, models, and data processing pipelines to track crowd behaviors and influenza-related indicators, such as coughs, and to predict daily ILI and laboratory-confirmed influenza caseloads. FluSense uses a microphone array and a thermal camera along with a neural computing engine to passively and continuously characterize speech and cough sounds along with changes in crowd density on the edge in real time. We conducted an IRB-approved 7-month-long study from December 10, 2018 to July 12, 2019, in which we deployed FluSense in four public waiting areas within the hospital of a large university. During this period, the FluSense platform collected and analyzed more than 350,000 waiting room thermal images and 21 million non-speech audio samples from the hospital waiting areas. FluSense can accurately predict daily patient counts with a Pearson correlation coefficient of 0.95. We also compared signals from FluSense with the gold standard laboratory-confirmed influenza case data obtained in the same facility and found that our sensor-based features are strongly correlated with laboratory-confirmed influenza trends.
ABSTRACT
Respiratory rate is an extremely important but poorly monitored vital sign for medical conditions. Current modalities for respiratory monitoring are suboptimal. This paper presents a proof of concept of a new algorithm using a contactless ultra-wideband (UWB) impulse radar-based sensor to detect respiratory rate in both a laboratory setting and in a two-subject case study in the Emergency Department. This novel approach has shown correlation with manual respiratory rate in the laboratory setting and shows promise in Emergency Department subjects. To further improve respiratory rate monitoring, the UWB technology is also able to localize subject movement throughout the room. This technology has potential for utilization both in and out of hospital environments to improve monitoring and to prevent morbidity and mortality from a variety of medical conditions associated with changes in respiratory rate.
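The paper's algorithm is not described in this abstract, but a common baseline for radar-based respiration estimation is to take the dominant spectral peak of the demodulated chest-displacement signal within a plausible breathing band. A sketch under that assumption, with a synthetic 0.3 Hz (18 breaths/min) signal:

```python
import numpy as np

def respiratory_rate_bpm(signal, fs, lo=0.1, hi=1.0):
    """Estimate breaths per minute as the dominant spectral peak of a
    chest-displacement signal within a plausible respiratory band (lo-hi Hz)."""
    sig = np.asarray(signal, float)
    sig = sig - sig.mean()                      # remove the DC offset
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    mag = np.abs(np.fft.rfft(sig))
    band = (freqs >= lo) & (freqs <= hi)        # restrict to breathing band
    return float(freqs[band][np.argmax(mag[band])] * 60.0)
```

Real radar returns require clutter removal and motion-artifact rejection before this step, which is where the localization capability mentioned above becomes useful.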
ABSTRACT
Goal: The aim of the study reported herein was to review mobile health (mHealth) technologies and explore their use to monitor and mitigate the effects of the COVID-19 pandemic. Methods: A Task Force was assembled by recruiting individuals with expertise in electronic Patient-Reported Outcomes (ePRO), wearable sensors, and digital contact tracing technologies. Its members collected and discussed available information and summarized it in a series of reports. Results: The Task Force identified technologies that could be deployed in response to the COVID-19 pandemic and would likely be suitable for future pandemics. Criteria for their evaluation were agreed upon and applied to these systems. Conclusions: mHealth technologies are viable options for monitoring COVID-19 patients and can be used to predict symptom escalation for earlier intervention. These technologies could also be utilized to monitor individuals who are presumed non-infected and enable prediction of exposure to SARS-CoV-2, thus facilitating the prioritization of diagnostic testing.