Results 1 - 9 of 9
1.
Pers Ubiquitous Comput ; 27(2): 447-466, 2023.
Article in English | MEDLINE | ID: mdl-36405389

ABSTRACT

Smartphones have become an integral part of people's everyday lives. They are used across all household locations, including in bed at night. Smartphone screens and other displays emit blue light, and exposure to blue light can affect sleep quality. Smartphone use prior to bedtime could therefore disrupt sleep, yet quantitative studies of how smartphone use influences sleep are scarce. This study combines smartphone application use data from 75 participants with sleep data collected by a wearable ring. On average, participants used their smartphones in bed for 322.8 s (5 min 22.8 s; IQR 43.7-456 s) and spent an average of 42% of their time in bed using their smartphones (IQR 5.87%-55.5%). Our findings indicate that smartphone use in bed has significant adverse effects on sleep latency, awake time, average heart rate, and heart rate variability. We also find that smartphone use outside of bed does not decrease sleep quality, and that intense smartphone use alone does not negatively affect well-being. Since not all smartphone users use their phones in the same way, extending the investigation to different types of smartphone use might yield more information than studying general smartphone use. In conclusion, this paper presents the first investigation of the association between smartphone application use logs and detailed sleep metrics. Our work also validates previous research results and highlights emerging future work.
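To make the in-bed use measure in this record concrete, the following minimal sketch computes how many seconds of phone use overlap a night's in-bed interval and the corresponding percentage of time in bed. The data frame, column names, and timestamps are illustrative assumptions, not the authors' dataset or pipeline.

```python
# Minimal sketch (assumed columns and timestamps): overlap of phone-use
# sessions with a night's in-bed interval, as a per-night feature.
import pandas as pd

# Hypothetical per-session phone-use log and one night's bed interval.
sessions = pd.DataFrame({
    "start": pd.to_datetime(["2023-01-01 23:05", "2023-01-02 07:10"]),
    "end":   pd.to_datetime(["2023-01-01 23:12", "2023-01-02 07:25"]),
})
bed_start = pd.Timestamp("2023-01-01 23:00")
bed_end = pd.Timestamp("2023-01-02 07:20")

def overlap_seconds(s_start, s_end, b_start, b_end):
    """Seconds a phone-use session overlaps the in-bed interval."""
    lo, hi = max(s_start, b_start), min(s_end, b_end)
    return max((hi - lo).total_seconds(), 0.0)

in_bed_use = sum(overlap_seconds(r.start, r.end, bed_start, bed_end)
                 for r in sessions.itertuples())
time_in_bed = (bed_end - bed_start).total_seconds()
print(f"phone use in bed: {in_bed_use:.0f} s "
      f"({100 * in_bed_use / time_in_bed:.1f}% of time in bed)")
```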

2.
JMIR Mhealth Uhealth ; 9(7): e26540, 2021 07 12.
Article in English | MEDLINE | ID: mdl-34255713

ABSTRACT

BACKGROUND: Depression is a prevalent mental health challenge. Current depression assessment methods using self-reported and clinician-administered questionnaires have limitations. Instrumenting smartphones to passively and continuously collect moment-by-moment data sets to quantify human behaviors has the potential to augment current depression assessment methods for early diagnosis and scalable, longitudinal monitoring of depression. OBJECTIVE: The objective of this study was to investigate the feasibility of predicting depression with human behaviors quantified from smartphone data sets, and to identify behaviors that can influence depression. METHODS: Smartphone data sets and self-reported 8-item Patient Health Questionnaire (PHQ-8) depression assessments were collected from 629 participants in an exploratory longitudinal study over an average of 22.1 days (SD 17.90; range 8-86). We quantified 22 regularity, entropy, and SD behavioral markers from the smartphone data. We explored the relationship between the behavioral features and depression using correlation and bivariate linear mixed models (LMMs). We leveraged 5 supervised machine learning (ML) algorithms with hyperparameter optimization, nested cross-validation, and imbalanced data handling to predict depression. Finally, with the permutation importance method, we identified influential behavioral markers in predicting depression. RESULTS: Of the 629 participants from at least 56 countries, 69 (10.97%) were female, 546 (86.8%) were male, and 14 (2.2%) were nonbinary. Participants' age distribution was as follows: 73/629 (11.6%) were aged between 18 and 24 years, 204/629 (32.4%) between 25 and 34, 156/629 (24.8%) between 35 and 44, 166/629 (26.4%) between 45 and 64, and 30/629 (4.8%) were aged 65 years and over. Of the 1374 PHQ-8 assessments, 1143 (83.19%) responses were nondepressed scores (PHQ-8 score <10), while 231 (16.81%) were depressed scores (PHQ-8 score ≥10), based on the standard PHQ-8 cutoff. A significant positive Pearson correlation was found between screen status-normalized entropy and depression (r=0.14, P<.001). The LMM demonstrated an intraclass correlation of 0.7584 and a significant positive association between screen status-normalized entropy and depression (β=.48, P=.03). The best ML algorithms achieved the following metrics: precision, 85.55%-92.51%; recall, 92.19%-95.56%; F1, 88.73%-94.00%; area under the receiver operating characteristic curve, 94.69%-99.06%; Cohen κ, 86.61%-92.90%; and accuracy, 96.44%-98.14%. Including age group and gender as predictors improved ML performance. Screen and internet connectivity features were the most influential in predicting depression. CONCLUSIONS: Our findings demonstrate that behavioral markers indicative of depression can be unobtrusively identified from smartphone sensor data. Traditional assessment of depression can be augmented with behavioral markers from smartphones for depression diagnosis and monitoring.


Subject(s)
Depression , Smartphone , Adolescent , Adult , Depression/diagnosis , Depression/epidemiology , Female , Humans , Longitudinal Studies , Machine Learning , Male , Self Report , Young Adult
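The methods in the record above combine hyperparameter optimization, nested cross-validation, imbalanced-data handling, and permutation importance. The sketch below illustrates that general recipe with scikit-learn on synthetic data; the random-forest model, parameter grid, and feature count are placeholders, not the authors' exact algorithms or behavioral markers.

```python
# Minimal sketch (synthetic data, placeholder model): nested cross-validation
# with class-imbalance handling and permutation importance.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

# Synthetic stand-in for 22 behavioral markers and a binary PHQ-8 >= 10 label.
X, y = make_classification(n_samples=400, n_features=22, weights=[0.83, 0.17],
                           random_state=0)

inner = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Hyperparameter search on the inner folds; class_weight offsets the imbalance.
search = GridSearchCV(
    RandomForestClassifier(class_weight="balanced", random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 5]},
    scoring="f1", cv=inner,
)

# Outer folds give a less biased estimate of generalization performance.
scores = cross_val_score(search, X, y, scoring="f1", cv=outer)
print(f"nested-CV F1: {scores.mean():.3f} +/- {scores.std():.3f}")

# Rank features by permutation importance after refitting on all data.
search.fit(X, y)
imp = permutation_importance(search.best_estimator_, X, y,
                             n_repeats=10, random_state=0)
print("top feature indices:", np.argsort(imp.importances_mean)[::-1][:5])
```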
3.
JMIR Cancer ; 7(2): e27975, 2021 Apr 27.
Article in English | MEDLINE | ID: mdl-33904822

ABSTRACT

BACKGROUND: Cancer treatments can cause a variety of symptoms that impair quality of life and functioning but are frequently missed by clinicians. Smartphone and wearable sensors may capture behavioral and physiological changes indicative of symptom burden, enabling passive and remote real-time monitoring of fluctuating symptoms. OBJECTIVE: The aim of this study was to examine whether smartphone and Fitbit data could be used to estimate daily symptom burden before and after pancreatic surgery. METHODS: A total of 44 patients scheduled for pancreatic surgery participated in this prospective longitudinal study and provided sufficient sensor and self-reported symptom data for analyses. Participants collected smartphone sensor and Fitbit data and completed daily symptom ratings starting at least two weeks before surgery, throughout their inpatient recovery, and for up to 60 days after postoperative discharge. Day-level behavioral features reflecting mobility and activity patterns, sleep, screen time, heart rate, and communication were extracted from raw smartphone and Fitbit data and used to classify the next day as high or low symptom burden, adjusted for each individual's typical level of reported symptoms. In addition to the overall symptom burden, we examined pain, fatigue, and diarrhea specifically. RESULTS: Models using light gradient boosting machine (LightGBM) were able to correctly predict whether the next day would be a high symptom day with 73.5% accuracy, surpassing baseline models. The most important sensor features for discriminating high symptom days were related to physical activity bouts, sleep, heart rate, and location. LightGBM models predicting next-day diarrhea (79.0% accuracy), fatigue (75.8% accuracy), and pain (79.6% accuracy) performed similarly. CONCLUSIONS: Results suggest that digital biomarkers may be useful in predicting patient-reported symptom burden before and after cancer surgery. Although model performance in this small sample may not be adequate for clinical implementation, findings support the feasibility of collecting mobile sensor data from older patients who are acutely ill as well as the potential clinical value of mobile sensing for passive monitoring of patients with cancer and suggest that data from devices that many patients already own and use may be useful in detecting worsening perioperative symptoms and triggering just-in-time symptom management interventions.
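The record above labels each day as high or low symptom burden relative to the patient's own typical level and classifies the next day with LightGBM. The sketch below shows that person-mean-centered labeling and a LightGBM fit on synthetic day-level features; the feature names and data are invented for illustration, and the lightgbm package is assumed to be installed.

```python
# Minimal sketch (synthetic data, hypothetical feature names): person-mean
# centered symptom labels and a LightGBM classifier; requires lightgbm.
import numpy as np
import pandas as pd
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "patient": rng.integers(0, 20, n),
    "steps": rng.normal(4000, 1500, n),
    "sleep_min": rng.normal(400, 60, n),
    "resting_hr": rng.normal(70, 8, n),
    "symptom": rng.normal(3.0, 1.2, n),      # next-day symptom rating (synthetic)
})

# "High symptom day" = above that patient's own mean rating.
df["high_burden"] = (
    df["symptom"] > df.groupby("patient")["symptom"].transform("mean")
).astype(int)

X = df[["steps", "sleep_min", "resting_hr"]]
y = df["high_burden"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

clf = LGBMClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```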

4.
Front Psychiatry ; 12: 625247, 2021.
Article in English | MEDLINE | ID: mdl-33584388

ABSTRACT

Background: Depression and anxiety are leading causes of disability worldwide but often remain undetected and untreated. Smartphone and wearable devices may offer a unique source of data to detect moment-by-moment changes in risk factors associated with mental disorders, overcoming many of the limitations of traditional screening methods. Objective: The current study aimed to explore the extent to which data from smartphone and wearable devices could predict symptoms of depression and anxiety. Methods: A total of N = 60 adults (ages 24-68) who owned an Apple iPhone and an Oura Ring were recruited online over a 2-week period. At the beginning of the study, participants installed the Delphi data acquisition app on their smartphone. The app continuously monitored participants' location (using GPS) and smartphone usage behavior (total usage time and frequency of use). The Oura Ring provided measures related to activity (step count and metabolic equivalent of task), sleep (total sleep time, sleep onset latency, wake after sleep onset, and time in bed), and heart rate variability (HRV). In addition, participants were prompted to report their daily mood (valence and arousal). Participants completed self-reported assessments of depression, anxiety, and stress (DASS-21) at baseline, midpoint, and the end of the study. Results: Multilevel models demonstrated a significant negative association between the variability of locations visited and symptoms of depression (beta = -0.21, p = 0.037) and significant positive associations between total sleep time and depression (beta = 0.24, p = 0.023), time in bed and depression (beta = 0.26, p = 0.020), wake after sleep onset and anxiety (beta = 0.23, p = 0.035), and HRV and anxiety (beta = 0.26, p = 0.035). A combined model of smartphone and wearable features and self-reported mood provided the strongest prediction of depression. Conclusion: The current findings demonstrate that wearable devices may provide valuable sources of data in predicting symptoms of depression and anxiety, most notably data related to common measures of sleep.
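The record above reports associations from multilevel models with daily sensor features nested within participants. A minimal sketch of a random-intercept mixed model in statsmodels follows; the simulated data and variable names are assumptions, not the study's dataset or exact model specification.

```python
# Minimal sketch (simulated data, hypothetical variable names): random-intercept
# multilevel model relating daily sensor features to depression scores.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_people, n_days = 60, 14
df = pd.DataFrame({
    "pid": np.repeat(np.arange(n_people), n_days),
    "location_variability": rng.normal(0, 1, n_people * n_days),
    "total_sleep_time": rng.normal(0, 1, n_people * n_days),
})
# Simulated outcome with a person-level random intercept.
intercepts = rng.normal(0, 1, n_people)[df["pid"]]
df["depression"] = (intercepts - 0.2 * df["location_variability"]
                    + 0.2 * df["total_sleep_time"] + rng.normal(0, 1, len(df)))

# Random intercept per participant; fixed effects for the daily features.
model = smf.mixedlm("depression ~ location_variability + total_sleep_time",
                    df, groups=df["pid"]).fit()
print(model.summary())
```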

5.
JMIR Mhealth Uhealth ; 9(1): e21926, 2021 01 28.
Article in English | MEDLINE | ID: mdl-33507156

ABSTRACT

BACKGROUND: Multimodal wearable technologies have opened up wide possibilities in human activity recognition, and more specifically in personalized monitoring of eating habits. The emerging challenge is selecting the most discriminative information from high-dimensional data collected from multiple sources. Available fusion algorithms, with their complex structure, are poorly suited to computationally constrained environments that require integrating information directly at the source. As a result, simpler low-level fusion methods are needed. OBJECTIVE: In the absence of a data-combining process, directly feeding high-dimensional raw data to a deep classifier is computationally expensive in terms of response time, energy consumption, and memory requirements. Taking this into account, we aimed to develop a computationally efficient data fusion technique that provides a more comprehensive insight into human activity dynamics in a lower dimension. The major objective was to account for the statistical dependency of multisensory data and to explore intermodality correlation patterns for different activities. METHODS: In this technique, the information in time (regardless of the number of sources) is transformed into a 2D space that facilitates separating eating episodes from other activities. This is based on the hypothesis that data captured by various sensors are statistically associated with each other and that the covariance matrix of these signals has a distinct distribution for each activity, which can be encoded as a contour representation. These representations are then used as input to a deep model that learns the patterns associated with each activity. RESULTS: To show the generalizability of the proposed fusion algorithm, 2 different scenarios were taken into account. These scenarios differed in terms of temporal segment size, type of activity, wearable device, subjects, and deep learning architecture. The first scenario used a data set in which a single participant performed a limited number of activities while wearing the Empatica E4 wristband. In the second scenario, a data set related to activities of daily living was used, in which 10 different participants wore inertial measurement units while performing a more complex set of activities. The precision obtained from leave-one-subject-out cross-validation for the second scenario reached 0.803. The impact of missing data on performance degradation was also evaluated. CONCLUSIONS: The proposed fusion technique makes it possible to embed joint variability information across modalities in a single 2D representation, yielding a more global view of different aspects of daily human activities while preserving the desired level of activity recognition performance.


Subject(s)
Deep Learning , Eating , Wearable Electronic Devices , Activities of Daily Living , Algorithms , Humans
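The fusion idea in the record above encodes the covariance structure of a multichannel sensor window as a single 2D representation for a deep classifier. The sketch below shows only the covariance step on a synthetic window; the sampling rate, window length, channel count, and min-max scaling are assumptions, and the published contour encoding itself is not reproduced here.

```python
# Minimal sketch (synthetic signals, assumed sampling setup): one multichannel
# window summarized as a 2D covariance map that a CNN could take as input.
import numpy as np

rng = np.random.default_rng(0)
fs, window_s, n_channels = 32, 4, 6                     # assumed setup
window = rng.normal(size=(n_channels, fs * window_s))   # channels x samples

# Covariance across channels captures intermodality correlation in one 2D map,
# independent of the window length or number of time samples.
cov = np.cov(window)                                    # (n_channels, n_channels)

# Min-max scale to [0, 1] so maps from different windows are comparable.
cov_img = (cov - cov.min()) / (cov.max() - cov.min())
print(cov_img.shape)   # one (6, 6) "image" per window
```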
6.
JMIR Mhealth Uhealth ; 8(11): e21543, 2020 11 26.
Article in English | MEDLINE | ID: mdl-33242017

ABSTRACT

BACKGROUND: Hand tremor typically has a negative impact on a person's ability to complete many common daily activities. Previous research has investigated how to quantify hand tremor with smartphones and wearable sensors, mainly under controlled data collection conditions. Solutions for daily real-life settings remain largely underexplored. OBJECTIVE: Our objective was to monitor and assess hand tremor severity in patients with Parkinson disease (PD), and to better understand the effects of PD medications in a naturalistic environment. METHODS: Using the Welch method, we generated periodograms of accelerometer data and computed signal features to compare patients with varying degrees of PD symptoms. RESULTS: We introduced and empirically evaluated the tremor intensity parameter (TIP), an accelerometer-based metric to quantify hand tremor severity in PD using smartphones. There was a statistically significant correlation between the TIP and self-assessed Unified Parkinson Disease Rating Scale (UPDRS) II tremor scores (Kendall rank correlation test: z=30.521, P<.001, τ=0.5367379; n=11). An analysis of the "before" and "after" medication intake conditions identified a significant difference in accelerometer signal characteristics among participants with different levels of rigidity and bradykinesia (Wilcoxon rank sum test, P<.05). CONCLUSIONS: Our work demonstrates the potential use of smartphone inertial sensors as a systematic symptom severity assessment mechanism to monitor PD symptoms and to assess medication effectiveness remotely. Our smartphone-based monitoring app may also be relevant for other conditions where hand tremor is a prevalent symptom.


Subject(s)
Parkinson Disease , Smartphone , Aged , Female , Humans , Male , Middle Aged , Parkinson Disease/complications , Parkinson Disease/diagnosis , Parkinson Disease/drug therapy , Tremor/diagnosis
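The record above builds tremor features from Welch periodograms of accelerometer data and relates them to UPDRS tremor scores with a Kendall rank correlation. The sketch below computes a Welch spectrum, a band-power proxy in a typical parkinsonian rest-tremor band, and a Kendall tau on toy ratings; the band limits, sampling rate, and the band-power "TIP proxy" are assumptions, not the published TIP definition.

```python
# Minimal sketch (synthetic signal, assumed band and sampling rate): Welch
# spectrum of accelerometer data, band power as a tremor-intensity proxy,
# and a Kendall rank correlation against toy severity ratings.
import numpy as np
from scipy.integrate import trapezoid
from scipy.signal import welch
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
fs = 100.0                                    # assumed sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)
acc = 0.3 * np.sin(2 * np.pi * 5 * t) + rng.normal(0, 0.1, t.size)  # 5 Hz tremor

f, pxx = welch(acc, fs=fs, nperseg=1024)      # power spectral density
band = (f >= 4) & (f <= 7)                    # common parkinsonian rest-tremor band
tip_proxy = trapezoid(pxx[band], f[band])     # band power, a placeholder "TIP"
print("tremor-band power:", tip_proxy)

# Relating per-session intensity values to ordinal tremor ratings (toy numbers).
tau, p = kendalltau([0.1, 0.4, 0.9, 1.3], [0, 1, 2, 3])
print("Kendall tau:", tau, "p:", p)
```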
7.
Addict Behav ; 83: 42-47, 2018 08.
Article in English | MEDLINE | ID: mdl-29217132

ABSTRACT

BACKGROUND: Real-time detection of drinking could improve timely delivery of interventions aimed at reducing alcohol consumption and alcohol-related injury, but existing detection methods are burdensome or impractical. OBJECTIVE: To evaluate whether phone sensor data and machine learning models are useful for detecting alcohol use events, and to discuss implications of these results for just-in-time mobile interventions. METHODS: 38 non-treatment-seeking young adult heavy drinkers downloaded the AWARE app (which continuously collected mobile phone sensor data) and reported alcohol consumption (number of drinks, start/end time of the prior day's drinking) for 28 days. We tested various machine learning models using the 20 most informative sensor features to classify time periods as non-drinking, low-risk drinking (1 to 3/4 drinks per occasion for women/men), and high-risk drinking (>4/5 drinks per occasion for women/men). RESULTS: Among the 30 participants included in the analyses, 207 non-drinking, 41 low-risk, and 45 high-risk drinking episodes were reported. A Random Forest model using 30-min windows with 1 day of historical data performed best for detecting high-risk drinking, correctly classifying high-risk drinking windows 90.9% of the time. The most informative sensor features were related to time (i.e., day of week, time of day), movement (e.g., change in activities), device usage (e.g., screen duration), and communication (e.g., call duration, typing speed). CONCLUSIONS: Preliminary evidence suggests that sensor data captured from the mobile phones of young adults are useful for building accurate models to detect periods of high-risk drinking. Mobile phone sensor features could be used to trigger the delivery of a range of interventions, potentially improving their effectiveness.


Subject(s)
Alcoholism/diagnosis , Alcoholism/prevention & control , Biosensing Techniques/instrumentation , Cell Phone , Monitoring, Ambulatory/instrumentation , Supervised Machine Learning , Adult , Biosensing Techniques/methods , Ecological Momentary Assessment , Female , Humans , Male , Monitoring, Ambulatory/methods , Prospective Studies , Surveys and Questionnaires , Young Adult
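The record above classifies 30-minute windows into non-drinking, low-risk, and high-risk drinking using sensor features plus one day of historical data. The sketch below builds such windows with previous-day lag features and fits a random forest on synthetic data; the feature names, label distribution, and window bookkeeping are illustrative assumptions, not the study's feature set.

```python
# Minimal sketch (synthetic data, hypothetical features): 30-minute windows
# with previous-day lag features, classified into non-/low-/high-risk drinking.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
idx = pd.date_range("2018-01-01", periods=4 * 48, freq="30min")   # 4 days of windows
df = pd.DataFrame({
    "screen_minutes": rng.uniform(0, 30, len(idx)),
    "typing_speed": rng.normal(3, 1, len(idx)),
    "label": rng.choice([0, 1, 2], len(idx), p=[0.7, 0.15, 0.15]),  # none/low/high
}, index=idx)

# Append the same features from 24 h earlier as historical context.
for col in ("screen_minutes", "typing_speed"):
    df[f"{col}_prev_day"] = df[col].shift(48)    # 48 windows = 1 day
df = df.dropna()

X = df.drop(columns="label")
y = df["label"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```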
8.
J Med Internet Res ; 19(12): e420, 2017 12 19.
Article in English | MEDLINE | ID: mdl-29258977

ABSTRACT

BACKGROUND: Physical and psychological symptoms are common during chemotherapy in cancer patients, and real-time monitoring of these symptoms can improve patient outcomes. Sensors embedded in mobile phones and wearable activity trackers could potentially be useful for monitoring symptoms passively, with minimal patient burden. OBJECTIVE: The aim of this study was to explore whether passively sensed mobile phone and Fitbit data could be used to estimate daily symptom burden during chemotherapy. METHODS: A total of 14 patients undergoing chemotherapy for gastrointestinal cancer participated in the 4-week study. Participants carried an Android phone and wore a Fitbit device for the duration of the study and also completed daily severity ratings of 12 common symptoms. Symptom severity ratings were summed to create a total symptom burden score for each day, and ratings were centered on individual patient means and categorized into low, average, and high symptom burden days. Day-level features were extracted from raw mobile phone sensor and Fitbit data and included features reflecting mobility and activity, sleep, phone usage (eg, duration of interaction with the phone and apps), and communication (eg, number of incoming and outgoing calls and messages). We used a rotation random forests classifier with cross-validation and resampling with replacement to evaluate population and individual model performance, and correlation-based feature subset selection to select nonredundant features with the best predictive ability. RESULTS: Across 295 days of data with both symptom and sensor data, a number of mobile phone and Fitbit features were correlated with patient-reported symptom burden scores. We achieved an accuracy of 88.1% for our population model. The subset of features with the best accuracy included sedentary behavior as the most frequent activity, fewer minutes in light physical activity, less variable and lower average acceleration of the phone, and longer screen-on time and interactions with apps on the phone. Mobile phone features had better predictive ability than Fitbit features. Accuracy of individual models ranged from 78.1% to 100% (mean 88.4%), and subsets of relevant features varied across participants. CONCLUSIONS: Passive sensor data, including mobile phone accelerometer and usage data and Fitbit-assessed activity and sleep, were related to daily symptom burden during chemotherapy. These findings highlight opportunities for long-term monitoring of cancer patients during chemotherapy with minimal patient burden, as well as real-time adaptive interventions aimed at early management of worsening or severe symptoms.


Subject(s)
Drug Therapy/methods , Neoplasms/drug therapy , Neoplasms/therapy , Patient Reported Outcome Measures , Telemedicine/methods , Adult , Aged , Female , Humans , Male , Middle Aged
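The record above centers daily symptom totals on each patient's mean, splits days into low, average, and high burden, and uses correlation-based feature subset selection. The sketch below shows that centering and a simple greedy correlation filter on synthetic data; the cut points and the 0.7 threshold are arbitrary assumptions, and the rotation random forest classifier itself is not sketched because it has no standard scikit-learn implementation.

```python
# Minimal sketch (synthetic data, arbitrary thresholds): person-mean-centered
# burden categories and a greedy correlation-based feature filter.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 295
df = pd.DataFrame({
    "patient": rng.integers(0, 14, n),
    "burden": rng.normal(20, 6, n),              # daily total symptom score
    "light_activity_min": rng.normal(120, 40, n),
    "screen_on_min": rng.normal(180, 60, n),
})
# Make sedentary time deliberately redundant with screen time for the demo.
df["sedentary_min"] = 0.9 * df["screen_on_min"] + rng.normal(400, 20, n)

# Center on each patient's mean, then bucket into low / average / high days.
centered = df["burden"] - df.groupby("patient")["burden"].transform("mean")
df["burden_cat"] = pd.cut(centered, bins=[-np.inf, -3, 3, np.inf],
                          labels=["low", "average", "high"])

# Greedy filter: keep a feature only if |r| <= 0.7 with everything already kept.
feats = ["sedentary_min", "light_activity_min", "screen_on_min"]
corr = df[feats].corr().abs()
kept = []
for feat in feats:
    if all(corr.loc[feat, k] <= 0.7 for k in kept):
        kept.append(feat)
print("kept features:", kept)
print(df["burden_cat"].value_counts())
```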
9.
Article in English | MEDLINE | ID: mdl-35146236

ABSTRACT

Alcohol use in young adults is common, with high rates of morbidity and mortality largely due to periodic, heavy drinking episodes (HDEs). Behavioral interventions delivered through electronic communication modalities (e.g., text messaging) can reduce the frequency of HDEs in young adults, but effects are small. One way to amplify these effects is to deliver support materials proximal to drinking occasions, but this requires knowledge of when they will occur. Mobile phones have built-in sensors that can potentially be useful in monitoring behavioral patterns associated with the initiation of drinking occasions. The objective of our work is to explore the detection of daily-life behavioral markers using mobile phone sensors and their utility in identifying drinking occasions. We utilized data from 30 young adults aged 21-28 with past hazardous drinking, collecting mobile phone sensor data and daily Experience Sampling Method (ESM) reports of drinking for 28 consecutive days. We built a machine learning-based model that is 96.6% accurate at identifying non-drinking, drinking, and heavy drinking episodes. We highlight the most important features for detecting drinking episodes and identify the amount of historical data needed for accurate detection. Our results suggest that mobile phone sensors can be used for automated, continuous monitoring of at-risk populations to detect drinking episodes and support the delivery of timely interventions.
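The record above asks how much historical sensor data is needed for accurate detection of drinking episodes. The sketch below varies the number of lagged days of synthetic daily features and reports the cross-validated accuracy of a gradient-boosting classifier; the feature names, labels, and model are placeholders rather than the study's pipeline.

```python
# Minimal sketch (synthetic daily data, placeholder model): accuracy as a
# function of how many lagged days of sensor history are included.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_days = 28 * 30                         # 30 participants x 28 days, flattened
df = pd.DataFrame({
    "screen_min": rng.normal(200, 50, n_days),
    "n_calls": rng.poisson(5, n_days),
    "label": rng.choice([0, 1, 2], n_days, p=[0.7, 0.15, 0.15]),  # none/drink/heavy
})

for n_lags in (0, 1, 2, 3):              # how many prior days of history to add
    X = df[["screen_min", "n_calls"]].copy()
    for lag in range(1, n_lags + 1):
        for col in ("screen_min", "n_calls"):
            X[f"{col}_lag{lag}"] = df[col].shift(lag)
    mask = X.notna().all(axis=1)
    score = cross_val_score(GradientBoostingClassifier(random_state=0),
                            X[mask], df.loc[mask, "label"], cv=5).mean()
    print(f"{n_lags} day(s) of history: CV accuracy {score:.3f}")
```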
