1.
J Med Internet Res ; 25: e42047, 2023 09 06.
Article in English | MEDLINE | ID: mdl-37672333

ABSTRACT

BACKGROUND: Predicting the likelihood of success of weight loss interventions using machine learning (ML) models may enhance intervention effectiveness by enabling timely and dynamic modification of intervention components for nonresponders to treatment. However, a lack of understanding and trust in these ML models impacts adoption among weight management experts. Recent advances in the field of explainable artificial intelligence enable the interpretation of ML models, yet it is unknown whether they enhance model understanding, trust, and adoption among weight management experts. OBJECTIVE: This study aimed to build and evaluate an ML model that can predict 6-month weight loss success (ie, ≥7% weight loss) from 5 engagement and diet-related features collected over the initial 2 weeks of an intervention, to assess whether providing ML-based explanations increases weight management experts' agreement with ML model predictions, and to inform factors that influence the understanding and trust of ML models to advance explainability in early prediction of weight loss among weight management experts. METHODS: We trained an ML model using the random forest (RF) algorithm and data from a 6-month weight loss intervention (N=419). We leveraged findings from existing explainability metrics to develop Prime Implicant Maintenance of Outcome (PRIMO), an interactive tool to understand predictions made by the RF model. We asked 14 weight management experts to predict hypothetical participants' weight loss success before and after using PRIMO. We compared PRIMO with 2 other explainability methods, one based on feature ranking and the other based on conditional probability. We used generalized linear mixed-effects models to evaluate participants' agreement with ML predictions and conducted likelihood ratio tests to examine the relationship between explainability methods and outcomes for nested models. 
We conducted guided interviews and thematic analysis to study the impact of our tool on experts' understanding and trust in the model. RESULTS: Our RF model had 81% accuracy in the early prediction of weight loss success. Weight management experts were significantly more likely to agree with the model when using PRIMO (χ2=7.9; P=.02) compared with the other 2 methods with odds ratios of 2.52 (95% CI 0.91-7.69) and 3.95 (95% CI 1.50-11.76). From our study, we inferred that our software not only influenced experts' understanding and trust but also impacted decision-making. Several themes were identified through interviews: preference for multiple explanation types, need to visualize uncertainty in explanations provided by PRIMO, and need for model performance metrics on similar participant test instances. CONCLUSIONS: Our results show the potential for weight management experts to agree with the ML-based early prediction of success in weight loss treatment programs, enabling timely and dynamic modification of intervention components to enhance intervention effectiveness. Our findings provide methods for advancing the understandability and trust of ML models among weight management experts.
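The abstract describes training a random forest on a small set of early-intervention features to predict a binary weight-loss outcome. As a rough illustration of that modeling setup only, here is a minimal scikit-learn sketch; the feature count, cohort size, and synthetic data are assumptions for demonstration and are not the study's data or code.

```python
# Illustrative sketch: early prediction of 6-month weight-loss success
# (>=7% loss) from a handful of week-1/2 features, as described above.
# All data here are synthetic; only the shapes mirror the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 419  # cohort size reported in the abstract
# 5 hypothetical engagement/diet features from the first 2 weeks
X = rng.normal(size=(n, 5))
# Synthetic label loosely driven by the first two features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=n) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"CV accuracy: {scores.mean():.2f}")
```

A real pipeline would replace the synthetic arrays with the engagement and diet features collected over the first two weeks of the intervention.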


Subject(s)
Artificial Intelligence , Software , Humans , Machine Learning , Trust , Weight Loss
2.
IEEE J Biomed Health Inform ; 27(8): 3878-3888, 2023 08.
Article in English | MEDLINE | ID: mdl-37192033

ABSTRACT

Automated detection of intake gestures with wearable sensors has been a critical area of research for advancing our understanding and ability to intervene in people's eating behavior. Numerous algorithms have been developed and evaluated in terms of accuracy. However, ensuring the system is not only accurate in making predictions but also efficient in doing so is critical for real-world deployment. Despite the growing research on accurate detection of intake gestures using wearables, many of these algorithms are often energy inefficient, impeding on-device deployment for continuous and real-time monitoring of diet. This article presents a template-based optimized multicenter classifier that enables accurate intake gesture detection while maintaining low inference time and energy consumption using a wrist-worn accelerometer and gyroscope. We designed an Intake Gesture Counter smartphone application (CountING) and validated the practicality of our algorithm against seven state-of-the-art approaches on three public datasets (In-lab FIC, Clemson, and OREBA). Compared with other methods, we achieved optimal accuracy (81.60% F1 score) and very low inference time (15.97 msec per 2.20-sec data sample) on the Clemson dataset; among the top-performing algorithms, we achieved comparable accuracy (83.0% F1 score compared with 85.6% for the top-performing algorithm) but superior inference time (13.8x faster, 33.14 msec per 2.20-sec data sample) on the In-lab FIC dataset, and comparable accuracy (83.40% F1 score compared with 88.10% for the top-performing algorithm) but superior inference time (33.9x faster, 16.71 msec per 2.20-sec data sample) on the OREBA dataset. On average, our approach achieved a 25-hour battery lifetime (a 44% to 52% improvement over state-of-the-art approaches) when tested on a commercial smartwatch for continuous real-time detection. 
Our approach demonstrates an effective and efficient method, enabling real-time intake gesture detection using wrist-worn devices in longitudinal studies.


Subject(s)
Wearable Electronic Devices , Wrist , Humans , Algorithms , Gestures
3.
Digit Health ; 9: 20552076231158314, 2023.
Article in English | MEDLINE | ID: mdl-37138585

ABSTRACT

Objectives: Overeating interventions and research often focus on single determinants and use subjective or nonpersonalized measures. We aim to (1) identify automatically detectable features that predict overeating and (2) build clusters of eating episodes that identify theoretically meaningful and clinically known problematic overeating behaviors (e.g., stress eating), as well as new phenotypes based on social and psychological features. Method: Up to 60 adults with obesity in the Chicagoland area will be recruited for a 14-day free-living observational study. Participants will complete ecological momentary assessments and wear 3 sensors designed to capture features of overeating episodes (e.g., chews) that can be visually confirmed. Participants will also complete daily dietitian-administered 24-hour recalls of all food and beverages consumed. Analysis: Overeating is defined as caloric consumption exceeding 1 standard deviation above an individual's mean consumption per eating episode. To identify features that predict overeating, we will apply 2 complementary machine learning methods: correlation-based feature selection and wrapper-based feature selection. We will then generate clusters of overeating types and assess how they align with clinically meaningful overeating phenotypes. Conclusions: This study will be the first to assess characteristics of eating episodes in situ over a multiweek period with visual confirmation of eating behaviors. An additional strength of this study is the assessment of predictors of problematic eating during periods when individuals are not on a structured diet and/or engaged in a weight loss intervention. Our assessment of overeating episodes in real-world settings is likely to yield new insights regarding determinants of overeating that may translate into novel interventions.
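The protocol's overeating definition (an episode whose calories exceed the individual's mean episode intake by more than one standard deviation) can be expressed in a few lines. The calorie values below are invented purely for illustration.

```python
# Per-person overeating rule from the protocol above: flag any episode
# whose calories exceed the person's mean episode intake + 1 SD.
# The episode values are made up for illustration.
import statistics

episodes_kcal = [520, 610, 480, 700, 550, 1250, 590, 630]  # one person's episodes
mean = statistics.mean(episodes_kcal)
sd = statistics.pstdev(episodes_kcal)  # SD over this person's own episodes
threshold = mean + sd
overeating = [kcal for kcal in episodes_kcal if kcal > threshold]
print(f"mean={mean:.0f} kcal, threshold={threshold:.0f} kcal, flagged={overeating}")
```

Here only the 1250-kcal episode clears the personalized threshold, which is the point of the definition: the cutoff adapts to each individual's habitual intake rather than using a fixed calorie limit.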

4.
Infancy ; 28(1): 136-157, 2023 01.
Article in English | MEDLINE | ID: mdl-36070207

ABSTRACT

The association between prenatal stress and children's socioemotional development is well established. The COVID-19 pandemic has been a particularly stressful period, which may impact the gestational environment. However, most studies to date have examined prenatal stress at a single time point, potentially masking the natural variation in stress that occurs over time, especially during a time as uncertain as the pandemic. This study leveraged dense ecological momentary assessments from a prenatal randomized controlled trial to examine patterns of prenatal stress over a 14-week period (up to four assessments/day) in a U.S. sample of 72 mothers and infants. We first examined whether varied features of stress exposure (lability, mean, and baseline stress) differed depending on whether mothers reported on their stress before or during the pandemic. We next examined which features of stress were associated with 3-month-old infants' negative affect. We did not find differences in stress patterns before and during the pandemic. However, greater stress lability, accounting for baseline and mean stress, was associated with higher infant negative affect. These findings suggest that pathways from prenatal stress exposure to infant socioemotional development are complex, and close attention to stress patterns over time will be important for explicating these pathways.


Subject(s)
COVID-19 , Pandemics , Child , Female , Pregnancy , Infant , Humans , Stress, Psychological/metabolism , Stress, Psychological/psychology , Mothers/psychology , Affect
5.
Article in English | MEDLINE | ID: mdl-36448973

ABSTRACT

Automated detection and validation of fine-grained human activities from egocentric vision has gained increased attention in recent years due to the rich information afforded by RGB images. However, it is not easy to discern how much of this rich information is necessary to detect the activity of interest reliably. Localization of hands and objects in the image has proven helpful in distinguishing between hand-related fine-grained activities. This paper describes the design of a hand-object-based mask obfuscation method (HOBM) and assesses its effect on automated recognition of fine-grained human activities. HOBM masks all pixels other than the hand and object in-hand, improving the protection of personal user information (PUI). We test a deep learning model trained with and without obfuscation using a public egocentric activity dataset with 86 class labels and achieve nearly identical classification accuracies (a 2% decrease with obfuscation). Our findings show that it is possible to protect PUI at a small cost in image utility (loss of accuracy).
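The core masking operation described here (zero out every pixel outside the hand/object region) is simple to sketch with NumPy. The rectangular mask below is a stand-in for a detected hand/object segmentation, not the paper's actual detector.

```python
# Sketch of hand/object mask obfuscation as described above: keep pixels
# inside the mask, zero everything else. The rectangle is a hypothetical
# stand-in for a real hand/object segmentation mask.
import numpy as np

frame = np.random.default_rng(2).integers(0, 256, size=(8, 8, 3), dtype=np.uint8)
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 3:7] = True  # hypothetical hand/object region

obfuscated = np.where(mask[:, :, None], frame, 0)
```

The classifier then sees only the activity-relevant region, which is why recognition accuracy drops only slightly while background personal information is suppressed.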

6.
Article in English | MEDLINE | ID: mdl-36448975

ABSTRACT

Screen time is associated with several health risk behaviors including mindless eating, sedentary behavior, and decreased academic performance. Screen time behavior is traditionally assessed with self-report measures, which are known to be burdensome, inaccurate, and imprecise. Recent methods to automatically detect screen time are geared more towards detecting television screens from wearable cameras that record high-resolution video. Activity-oriented wearable cameras (i.e., cameras oriented towards the wearer with a fisheye lens) have recently been designed and shown to reduce privacy concerns, yet pose a greater challenge in capturing screens due to their orientation and fewer pixels on target. Methods that detect screens from low-power, low-resolution wearable camera video are needed given the increased adoption of such devices in longitudinal studies. We propose a method that leverages deep learning algorithms and lower-resolution images from an activity-oriented camera to detect screen presence from multiple types of screens with high variability of pixels on target (e.g., near and far TVs, smartphones, laptops, and tablets). We test our system in a real-world study comprising 10 individuals, 80 hours of data, and 1.2 million low-resolution RGB frames. Our results outperform existing state-of-the-art video screen detection methods, yielding an F1-score of 81%. This paper demonstrates the potential for detecting screen-watching behavior in longitudinal studies using activity-oriented cameras, paving the way for a nuanced understanding of screen time's relationship with health risk behaviors.

7.
Article in English | MEDLINE | ID: mdl-36447642

ABSTRACT

Wearable cameras provide an informative view of wearer activities, context, and interactions. Video obtained from wearable cameras is useful for life-logging, human activity recognition, visual confirmation, and other tasks widely utilized in mobile computing today. Extracting foreground information related to the wearer and separating irrelevant background pixels is the fundamental operation underlying these tasks. However, current wearer foreground extraction methods that depend on image data alone are slow, energy-inefficient, and even inaccurate in some cases, making many tasks, like activity recognition, challenging to implement in the absence of significant computational resources. To fill this gap, we built ActiSight, a wearable RGB-Thermal video camera that uses thermal information to make wearer segmentation practical for body-worn video. Using ActiSight, we collected a total of 59 hours of video from 6 participants, capturing a wide variety of activities in a natural setting. We show that wearer foreground extracted with ActiSight achieves a high Dice similarity score while significantly lowering execution time and energy cost when compared with an RGB-only approach.

8.
JMIR Mhealth Uhealth ; 10(8): e33850, 2022 08 02.
Article in English | MEDLINE | ID: mdl-35917157

ABSTRACT

BACKGROUND: Cognitive behavioral therapy-based interventions are effective in reducing prenatal stress, which can have severe adverse health effects on mothers and newborns if unaddressed. Predicting next-day physiological or perceived stress can help to inform and enable pre-emptive interventions for a likely physiologically and perceptibly stressful day. Machine learning models are useful tools that can be developed to predict next-day physiological and perceived stress by using data collected from the previous day. Such models can improve our understanding of the specific factors that predict physiological and perceived stress and allow researchers to develop systems that collect selected features for assessment in clinical trials to minimize the burden of data collection. OBJECTIVE: The aim of this study was to build and evaluate a machine-learned model that predicts next-day physiological and perceived stress by using sensor-based, ecological momentary assessment (EMA)-based, and intervention-based features and to explain the prediction results. METHODS: We enrolled pregnant women into a prospective proof-of-concept study and collected electrocardiography, EMA, and cognitive behavioral therapy intervention data over 12 weeks. We used the data to train and evaluate 6 machine learning models to predict next-day physiological and perceived stress. After selecting the best-performing model, Shapley Additive Explanations were used to quantify the importance of each feature and explain its contribution to the predictions. RESULTS: A total of 16 pregnant women enrolled in the study. Overall, 4157.18 hours of data were collected, and participants answered 2838 EMAs. After applying feature selection, 8 and 10 features were found to positively predict next-day physiological and perceived stress, respectively. A random forest classifier performed the best in predicting next-day physiological stress (F1 score of 0.84) and next-day perceived stress (F1 score of 0.74) by using all features. 
Although any subset of sensor-based, EMA-based, or intervention-based features could reliably predict next-day physiological stress, EMA-based features were necessary to predict next-day perceived stress. The analysis of explainability metrics showed that the prolonged duration of physiological stress was highly predictive of next-day physiological stress and that physiological stress and perceived stress were temporally divergent. CONCLUSIONS: In this study, we were able to build interpretable machine learning models to predict next-day physiological and perceived stress, and we identified unique features that were highly predictive of next-day stress that can help to reduce the burden of data collection.
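The pipeline shape described here (train a classifier on yesterday's features, then rank which features drive the prediction) can be sketched briefly. The abstract uses Shapley Additive Explanations; permutation importance is used below as a lighter-weight stand-in, and all feature names and data are synthetic assumptions.

```python
# Sketch of a next-day-stress pipeline: fit a random forest on previous-day
# features, then rank feature importance. Permutation importance stands in
# for the SHAP analysis used in the study; features and data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
features = ["hrv_mean", "stress_minutes", "ema_mood", "sleep_hours"]  # hypothetical
X = rng.normal(size=(300, len(features)))
# Synthetic next-day label, driven mainly by prolonged stress duration
y = (X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
top = features[int(np.argmax(imp.importances_mean))]
print("most predictive feature:", top)
```

In this toy setup the label is constructed from the stress-duration feature, so the importance ranking recovers it; the study's analogous finding was that prolonged physiological stress was highly predictive of next-day physiological stress.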


Subject(s)
Machine Learning , Pregnant Women , Algorithms , Female , Humans , Infant, Newborn , Pregnancy , Prospective Studies , Stress, Physiological
9.
Sensors (Basel) ; 22(4)2022 Feb 14.
Article in English | MEDLINE | ID: mdl-35214377

ABSTRACT

Mobile and wearable devices have enabled numerous applications, including activity tracking, wellness monitoring, and human-computer interaction, that measure and improve our daily lives. Many of these applications are made possible by leveraging the rich collection of low-power sensors found in many mobile and wearable devices to perform human activity recognition (HAR). Recently, deep learning has greatly pushed the boundaries of HAR on mobile and wearable devices. This paper systematically categorizes and summarizes existing work that introduces deep learning methods for wearables-based HAR and provides a comprehensive analysis of the current advancements, developing trends, and major challenges. We also present cutting-edge frontiers and future directions for deep learning-based HAR.


Subject(s)
Deep Learning , Wearable Electronic Devices , Human Activities , Humans
10.
Article in English | MEDLINE | ID: mdl-37179571

ABSTRACT

Researchers have been leveraging wearable cameras to both visually confirm and automatically detect individuals' eating habits. However, energy-intensive tasks such as continuously collecting and storing RGB images in memory, or running algorithms in real-time to automate detection of eating, greatly impact battery life. Since eating moments are spread sparsely throughout the day, battery drain can be mitigated by recording and processing data only when there is a high likelihood of eating. We present a framework comprising a golf-ball sized wearable device using a low-powered thermal sensor array and a real-time activation algorithm that activates high-energy tasks when a hand-to-mouth gesture is confirmed by the thermal sensor array. The high-energy tasks tested are turning on the RGB camera (Trigger RGB mode) and running inference on an on-device machine learning model (Trigger ML mode). Our experimental setup involved the design of a wearable camera, 6 participants collecting 18 hours of data with and without eating, the implementation of a feeding gesture detection algorithm on-device, and measures of power saving using our activation method. Our activation algorithm demonstrates an average increase of at least 31.5% in battery lifetime, with a minimal drop in recall (5%) and without impacting the accuracy of detecting eating (a slight 4.1% increase in F1-score).
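The duty-cycling idea in this abstract (keep only the cheap thermal stage always on, and wake the expensive stage when the cheap detector fires) reduces to a simple gating loop. The threshold detector and readings below are invented stand-ins for the actual thermal gesture detector.

```python
# Sketch of trigger-based duty cycling: a cheap always-on detector gates
# the expensive stage (RGB camera / on-device ML inference). The threshold
# rule and readings are hypothetical stand-ins for the thermal detector.
def thermal_detects_gesture(reading, threshold=30.0):
    """Stand-in for the low-power thermal hand-to-mouth detector."""
    return reading > threshold

def process_stream(readings):
    high_energy_activations = 0
    for r in readings:
        if thermal_detects_gesture(r):
            high_energy_activations += 1  # e.g., wake camera or run inference
    return high_energy_activations

readings = [22.5, 24.0, 33.1, 35.2, 25.0, 31.0]
print(process_stream(readings))
```

Battery savings come from how rarely the gate opens: in the study, eating gestures are sparse over a day, so the costly stages run for only a small fraction of the recording time.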

11.
Article in English | MEDLINE | ID: mdl-38031552

ABSTRACT

Smoking is the leading cause of preventable death worldwide. Cigarette smoke includes thousands of chemicals that are harmful and cause tobacco-related diseases. To date, the causal links between exposure to specific compounds and these harmful effects remain unknown. A first step in closing this gap in knowledge has been measuring smoking topography, or how the smoker smokes the cigarette (puffs, puff volume, and duration). However, current gold-standard approaches to smoking topography involve expensive, bulky, and obtrusive sensor devices, creating unnatural smoking behavior and preventing their potential for real-time interventions in the wild. Although motion-based wearable sensors and their corresponding machine-learned models have shown promise in unobtrusively tracking smoking gestures, they are notorious for confounding smoking with other similar hand-to-mouth gestures such as eating and drinking. In this paper, we present SmokeMon, a chest-worn thermal-sensing wearable system that can capture spatial, temporal, and thermal information around the wearer and cigarette all day to unobtrusively and passively detect smoking events. We also developed a deep learning-based framework to extract puffs and smoking topography. We evaluate SmokeMon in both controlled and free-living experiments with a total of 19 participants, more than 110 hours of data, and 115 smoking sessions, achieving an F1-score of 0.9 for puff detection in the laboratory and 0.8 in the wild. By providing SmokeMon as an open platform, we enable measurement of smoking topography in free-living settings and testing of smoking topography in the real world, with potential to facilitate timely smoking cessation interventions.

12.
J Acad Nutr Diet ; 122(4): 825-832.e1, 2022 04.
Article in English | MEDLINE | ID: mdl-34662722

ABSTRACT

BACKGROUND: Commercial nutrition apps are increasingly used to evaluate diet. Evaluating the comparative validity of nutrient data from commercial nutrition app databases is important to determine the merits of using these apps for dietary assessment. OBJECTIVE: Nutrient data from four commercial nutrition apps were compared with a research-based food database, Nutrition Data System for Research (NDSR) (version 2017). DESIGN: Comparative validation study. PARTICIPANTS/SETTING: An investigator identified the 50 most frequently consumed foods (22% of total reported foods) from a weight-loss study in Chicago, IL, during 2017. Nutrient data from the four commercial databases were compared with NDSR. MAIN OUTCOME MEASURES: Comparative validity of energy, macronutrients, and other nutrient data (ie, total sugars, fiber, saturated fat, cholesterol, calcium, and sodium). STATISTICAL ANALYSES PERFORMED: Intraclass correlation coefficients (ICCs) evaluated agreement between the commercial databases and NDSR for foods that were primarily un- and minimally processed and by the three most frequently consumed food groups. Bland-Altman plots determined degree of bias for calories between commercial databases and NDSR. RESULTS: This study observed excellent agreement between NDSR and CalorieKing (ICC range = 0.90 to 1.00). Compared with NDSR, agreement for Lose It! and MyFitnessPal ranged from good to excellent (ICC range = 0.89 to 1.00), with the exception of fiber in MyFitnessPal (ICC = 0.67). Fitbit showed the widest variability with NDSR (ICC range = 0.52 to 0.98). When evaluating by food group, Fitbit had poor agreement for all food groups, with the lowest agreement observed for fiber within the vegetable group (ICC = 0.16). Bland-Altman plots confirmed ICC energy results but also found that MyFitnessPal had the poorest agreement with NDSR (mean 8.35 [SD 133.31] kcal) for all food items. CONCLUSIONS: Degree of agreement varied by commercial nutrition app. 
CalorieKing and Lose It! had mostly excellent agreement with NDSR for all investigated nutrients. Fitbit showed the widest variability in agreement with NDSR for most nutrients, which may reflect how well the app can accurately capture diet.
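The agreement statistic used in this study is an intraclass correlation coefficient. The sketch below implements a two-way random-effects, absolute-agreement, single-measure ICC, i.e. ICC(2,1), for one nutrient across the same foods in two databases; the exact ICC variant used in the paper is not specified in the abstract, and the calorie values are invented for illustration.

```python
# Sketch of database-agreement analysis: ICC(2,1) between a reference
# database (NDSR) and a commercial app database for the same foods.
# The ICC variant is an assumption; the calorie values are invented.
import numpy as np

def icc_2_1(Y):
    """ICC(2,1): rows = foods, columns = databases (raters)."""
    n, k = Y.shape
    m = Y.mean()
    rows = Y.mean(axis=1)
    cols = Y.mean(axis=0)
    msr = k * ((rows - m) ** 2).sum() / (n - 1)      # between-foods mean square
    msc = n * ((cols - m) ** 2).sum() / (k - 1)      # between-databases mean square
    resid = Y - rows[:, None] - cols[None, :] + m
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))   # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

ndsr = np.array([250.0, 95.0, 160.0, 310.0, 52.0])  # reference calories per food
app = np.array([245.0, 98.0, 155.0, 320.0, 50.0])   # commercial app calories
icc = icc_2_1(np.column_stack([ndsr, app]))
print(f"ICC(2,1) = {icc:.2f}")
```

Because the per-food discrepancies here are small relative to the spread across foods, the ICC comes out near 1, mirroring the "excellent agreement" ranges reported above.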


Subject(s)
Mobile Applications , Diet , Diet Records , Energy Intake , Fast Foods , Humans , Nutritional Status , Reproducibility of Results
13.
Appetite ; 167: 105653, 2021 12 01.
Article in English | MEDLINE | ID: mdl-34418505

ABSTRACT

Personalized weight management strategies are gaining interest. However, knowledge is limited regarding eating behaviors and their association with energy intake, and current technologies limit assessment in free-living situations. We assessed associations between eating behavior and time of day with energy intake using a wearable camera under free-living conditions and explored whether obesity modifies the associations. Sixteen participants (50% with obesity) recorded free-living eating behaviors using a wearable fish-eye camera for 14 days. Videos were viewed by trained annotators who confirmed number of bites, eating speed, and time of day for each eating episode. Energy intake was determined by a trained dietitian performing 24-h diet recalls. Greater number of bites, reduced eating speed, and increased BMI significantly predicted higher energy intake among all participants (P < 0.05 for each). There were no significant interactions between obesity and number of bites, eating speed, or time of day (P > 0.05). Greater number of bites and reduced eating speed were significantly associated with higher energy intake in participants without obesity. Results show that under free-living conditions, more bites and slower eating speed predicted higher energy intake when examining consumption of foods with beverages. Obesity did not modify these associations. Findings highlight how eating behaviors can impact energy balance and can inform weight management interventions using wearable technology.


Subject(s)
Social Conditions , Wearable Electronic Devices , Humans , Diet , Eating , Energy Intake , Feeding Behavior
14.
Fam Syst Health ; 39(1): 19-28, 2021 03.
Article in English | MEDLINE | ID: mdl-34014727

ABSTRACT

INTRODUCTION: Short message service (SMS) is a widely accepted telecommunications approach used to support health informatics, including behavioral interventions, data collection, and patient-provider communication. However, SMS delivery platforms are not standardized and platforms are typically commercial "off-the-shelf" or developed "in-house." As a consequence of platform variability, implementing SMS-based interventions may be challenging for both providers and patients. Off-the-shelf SMS delivery platforms may require minimal development or technical resources from providers, but their functionality is often limited. Conversely, platforms that are developed in-house are often specified for individual projects, requiring specialized development and technical expertise. Patients are on the receiving end of programming and technical specification challenges; message delays or lagged data affect the quality of SMS communications. To date, little work has been done to develop a generalizable SMS platform that can be scaled across health initiatives. OBJECTIVE: We propose the Configurable Assessment Messaging Platform for Interventions (CAMPI) to mitigate challenges associated with SMS intervention implementation (e.g., programming, data collection, message delivery). METHOD: CAMPI aims to optimize health data captured from a multitude of sources and enhance patient-provider communication through a technology that is simple and familiar to patients. Using representative examples from three behavioral intervention case studies implemented among diverse populations (pregnant women, young sexual minority men, and parents with young children), we describe CAMPI capabilities and feasibility. CONCLUSION: As a generalizable SMS platform, CAMPI can be scaled to meet the priorities of various health initiatives, while reducing unnecessary resource utilization and burden on providers and patients. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subject(s)
Medical Informatics/trends , Text Messaging/standards , Family Health/trends , Feasibility Studies , Humans , Text Messaging/instrumentation
15.
Health Psychol ; 40(12): 897-908, 2021 Dec.
Article in English | MEDLINE | ID: mdl-33570978

ABSTRACT

OBJECTIVE: We applied the ORBIT model to digitally define dynamic treatment pathways whereby intervention improves multiple risk behaviors. We hypothesized that effective intervention improves the frequency and consistency of targeted health behaviors and that both correlate with automaticity (habit) and self-efficacy (self-regulation). METHOD: Study 1: Via location scale mixed modeling we compared effects when hybrid mobile intervention did versus did not target each behavior in the Make Better Choices 1 (MBC1) trial (n = 204). Participants had all of four risk behaviors: low moderate-vigorous physical activity (MVPA) and fruit and vegetable consumption (FV), and high saturated fat (FAT) and sedentary leisure screen time (SED). Models estimated the mean (location), between-subjects variance, and within-subject variance (scale). RESULTS: Treatment by time interactions showed that location increased for MVPA and FV (Bs = 1.68, .61; ps < .001) and decreased for SED and FAT (Bs = -2.01, -.07; ps < .05) more when treatments targeted the behavior. Within-subject variance modeling revealed group by time interactions for scale (taus = -.19, -.75, -.17, -.11; ps < .001), indicating that all behaviors grew more consistent when targeted. METHOD: Study 2: In the MBC2 trial (n = 212) we examined correlations between location, scale, self-efficacy, and automaticity for the three targeted behaviors. RESULTS: For SED, higher scale (less consistency) but not location correlated with lower self-efficacy (r = -.22, p = .014) and automaticity (r = -.23, p = .013). For FV and MVPA, higher location, but not scale, correlated with higher self-efficacy (rs = .38, .34, ps < .001) and greater automaticity (rs = .46, .42, ps < .001). CONCLUSIONS: Location scale mixed modeling suggests that both habit and self-regulation changes probably accompany acquisition of complex diet and activity behaviors. (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subject(s)
Exercise , Health Behavior , Diet , Humans , Sedentary Behavior , Vegetables
16.
Dev Psychobiol ; 63(4): 622-640, 2021 05.
Article in English | MEDLINE | ID: mdl-33225463

ABSTRACT

Prenatal stress exposure increases vulnerability to virtually all forms of psychopathology. Based on this robust evidence base, we propose a "Mental Health, Earlier" paradigm shift for prenatal stress research, which moves from the documentation of stress-related outcomes to their prevention, with a focus on infant neurodevelopmental indicators of vulnerability to subsequent mental health problems. Achieving this requires an expansive team science approach. As an exemplar, we introduce the Promoting Healthy Brain Project (PHBP), a randomized trial testing the impact of the Wellness-4-2 personalized prenatal stress-reduction intervention on stress-related alterations in infant neurodevelopmental trajectories in the first year of life. Wellness-4-2 utilizes bio-integrated stress monitoring for just-in-time adaptive intervention. We highlight unique challenges and opportunities this novel team science approach presents in synergizing expertise across predictive analytics, bioengineering, health information technology, prevention science, maternal-fetal medicine, neonatology, pediatrics, and neurodevelopmental science. We discuss how innovations across many areas of study facilitate this personalized preventive approach, using developmentally sensitive brain and behavioral methods to investigate whether altering children's adverse gestational exposures, i.e., maternal stress in the womb, can improve their mental health outlooks. In so doing, we seek to propel developmental SEED research towards preventive applications with the potential to reduce the pernicious effect of prenatal stress on neurodevelopment, mental health, and wellbeing.


Subject(s)
Mental Disorders , Prenatal Exposure Delayed Effects , Brain , Child , Female , Humans , Infant , Mental Health , Pregnancy , Prenatal Exposure Delayed Effects/prevention & control
17.
JMIR Cancer ; 6(2): e24137, 2020 Dec 03.
Article in English | MEDLINE | ID: mdl-33156810

ABSTRACT

BACKGROUND: eHealth technologies have been found to facilitate health-promoting practices among cancer survivors with BMI in overweight or obese categories; however, little is known about their engagement with eHealth to promote weight management and facilitate patient-clinician communication. OBJECTIVE: The objective of this study was to determine whether eHealth use was associated with sociodemographic characteristics, as well as medical history and experiences (ie, patient-related factors) among cancer survivors with BMI in overweight or obese categories. METHODS: Data were analyzed from a nationally representative cross-sectional survey (National Cancer Institute's Health Information National Trends Survey). Latent class analysis was used to derive distinct classes among cancer survivors based on sociodemographic characteristics, medical attributes, and medical experiences. Logistic regression was used to examine whether class membership was associated with different eHealth practices. RESULTS: Three distinct classes of cancer survivors with BMI in overweight or obese categories emerged: younger with no comorbidities, younger with comorbidities, and older with comorbidities. Compared to the other classes, the younger with comorbidities class had the highest probability of identifying as female (73%) and Hispanic (46%) and feeling that clinicians did not address their concerns (75%). The older with comorbidities class was 6.5 times more likely than the younger with comorbidities class to share eHealth data with a clinician (odds ratio [OR] 6.53, 95% CI 1.08-39.43). In contrast, the younger with no comorbidities class had a higher likelihood of using a computer to look for health information (OR 1.93, 95% CI 1.10-3.38), using an electronic device to track progress toward a health-related goal (OR 2.02, 95% CI 1.08-3.79), and using the internet to watch health-related YouTube videos (OR 2.70, 95% CI 1.52-4.81) than the older with comorbidities class. 
CONCLUSIONS: Class membership was associated with different patterns of eHealth engagement, indicating the importance of tailored digital strategies for delivering effective care. Future eHealth weight loss interventions should investigate strategies to engage younger cancer survivors with comorbidities and address racial and ethnic disparities in eHealth use.
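The odds ratios and 95% CIs reported above are standard outputs of logistic regression; they can be reproduced by hand from a 2×2 contingency table. A minimal sketch, using hypothetical counts (not the survey data) and the usual Wald interval on log(OR):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI for a 2x2 table:
    a = exposed with outcome,   b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) by the Woolf method
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only:
or_, lo, hi = odds_ratio_ci(30, 20, 15, 35)
```

An asymmetric interval such as the 6.53 (95% CI 1.08-39.43) above is typical when one cell count is small, which widens the interval on the log scale.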

18.
NPJ Digit Med ; 3: 38, 2020.
Article in English | MEDLINE | ID: mdl-32195373

ABSTRACT

Dietary intake, eating behaviors, and context are important in chronic disease development, yet our ability to accurately assess these in research settings can be limited by biased traditional self-reporting tools. Objective measurement tools, specifically, wearable sensors, present the opportunity to minimize the major limitations of self-reported eating measures by generating supplementary sensor data that can improve the validity of self-report data in naturalistic settings. This scoping review summarizes the current use of wearable devices/sensors that automatically detect eating-related activity in naturalistic research settings. Five databases were searched in December 2019, and 618 records were retrieved from the literature search. This scoping review included N = 40 studies (from 33 articles) that reported on one or more wearable sensors used to automatically detect eating activity in the field. The majority of studies (N = 26, 65%) used multi-sensor systems (incorporating more than one wearable sensor), and accelerometers were the most commonly utilized sensor (N = 25, 62.5%). All studies (N = 40, 100.0%) used either self-report or objective ground-truth methods to validate the inferred eating activity detected by the sensor(s). The most frequently reported evaluation metrics were accuracy (N = 12) and F1-score (N = 10). This scoping review highlights the current state of wearable sensors' ability to improve upon traditional eating assessment methods by passively detecting eating activity in naturalistic settings, over long periods of time, and with minimal user interaction. A key challenge in this field, wide variation in eating outcome measures and evaluation metrics, demonstrates the need for standardized, comparable outcome measures and evaluation metrics across sensors and multi-sensor systems, as well as multidisciplinary collaboration.
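Accuracy and F1-score, the two evaluation metrics most often reported in the reviewed studies, are computed from the confusion matrix of predicted versus ground-truth eating labels. A minimal sketch (binary labels, 1 = eating; illustrative data, not from the review):

```python
def accuracy_f1(y_true, y_pred):
    """Accuracy and F1-score for binary eating/non-eating labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, f1

# Hypothetical per-window labels:
acc, f1 = accuracy_f1([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
```

Because eating occupies only a small fraction of a waking day, F1-score is often preferred over accuracy for this task: a classifier that never predicts "eating" can still score high accuracy on such imbalanced data.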

19.
Article in English | MEDLINE | ID: mdl-34222759

ABSTRACT

We present the design, implementation, and evaluation of a multi-sensor, low-power necklace, NeckSense, for automatically and unobtrusively capturing fine-grained information about an individual's eating activity and eating episodes, across an entire waking day in a naturalistic setting. NeckSense fuses and classifies the proximity of the necklace from the chin, the ambient light, the Lean Forward Angle, and the energy signals to determine chewing sequences, a building block of the eating activity. It then clusters the identified chewing sequences to determine eating episodes. We tested NeckSense on 11 participants with and 9 participants without obesity, across two studies, where we collected more than 470 hours of data in a naturalistic setting. Our results demonstrate that NeckSense enables reliable eating detection for individuals with diverse body mass index (BMI) profiles, across an entire waking day, even in free-living environments. Overall, our system achieves an F1-score of 81.6% in detecting eating episodes in an exploratory study. Moreover, our system can achieve an F1-score of 77.1% for episodes even in an all-day-long free-living setting. With more than 15.8 hours of battery life, NeckSense will allow researchers and dietitians to better understand natural chewing and eating behaviors. In the future, researchers and dietitians can use NeckSense to provide appropriate real-time interventions when an eating episode is detected or when problematic eating is identified.
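The clustering step described above, grouping detected chewing sequences into eating episodes, can be approximated by a simple temporal gap rule: consecutive chewing sequences closer than some threshold belong to the same episode. A minimal sketch with an illustrative 5-minute threshold (the gap value is an assumption, not the NeckSense algorithm's parameter):

```python
def cluster_episodes(chew_times, gap_s=300):
    """Group chewing-sequence timestamps (in seconds) into eating
    episodes. A new episode starts whenever the gap since the
    previous chewing sequence exceeds gap_s."""
    episodes = []
    for t in sorted(chew_times):
        if episodes and t - episodes[-1][-1] <= gap_s:
            episodes[-1].append(t)  # extend the current episode
        else:
            episodes.append([t])    # start a new episode
    return episodes

# Three chewing sequences close together, then two after a long gap:
eps = cluster_episodes([0, 60, 120, 1000, 1050])
```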

20.
J Med Internet Res ; 21(12): e14904, 2019 12 04.
Article in English | MEDLINE | ID: mdl-31799938

ABSTRACT

BACKGROUND: Conventional diet assessment approaches such as the 24-hour self-reported recall are burdensome, suffer from recall bias, and are inaccurate in estimating energy intake. Wearable sensor technology, coupled with advanced algorithms, is increasingly showing promise in its ability to capture behaviors that provide useful information for estimating calorie and macronutrient intake. OBJECTIVE: This paper aimed to summarize current technological approaches to monitoring energy intake on the basis of expert opinion from a workshop panel and to make recommendations to advance technology and algorithms to improve estimation of energy intake. METHODS: A 1-day invitational workshop sponsored by the National Science Foundation was held at Northwestern University. A total of 30 participants, including population health researchers, engineers, and intervention developers, from 6 universities and the National Institutes of Health participated in a panel discussing the state of evidence with regard to monitoring calorie intake and eating behaviors. RESULTS: Calorie monitoring using technological approaches can be characterized into 3 domains: (1) image-based sensing (eg, wearable and smartphone-based cameras combined with machine learning algorithms); (2) eating action unit (EAU) sensors (eg, to measure feeding gesture and chewing rate); and (3) biochemical measures (eg, serum and plasma metabolite concentrations). We discussed how each domain functions, provided examples of promising solutions, and highlighted potential challenges and opportunities in each domain. Image-based sensor research requires improved ground truth (context and known information about the foods), accurate food image segmentation and recognition algorithms, and reliable methods of estimating portion size. 
EAU-based domain research is limited by incomplete understanding of when these systems (device and inference algorithm) succeed and fail, the need for privacy-protecting methods of capturing ground truth, and uncertainty in food categorization. Although biochemical sensing is an exciting novel technology, its challenges include a lack of robustness to environmental effects (eg, temperature change) and mechanical impact, instability of wearable sensor performance over time, and single-use design. CONCLUSIONS: Conventional approaches to calorie monitoring rely predominantly on self-reports. These approaches can gain contextual information from the image-based and EAU-based domains, which can map automatically captured food images to a food database and detect proxies that correlate with food volume and caloric intake. Although the continued development of advanced machine learning techniques will advance the accuracy of such wearables, biochemical sensing provides an electrochemical analysis of sweat using soft bioelectronics on human skin, enabling noninvasive measures of chemical compounds that provide insight into the digestive and endocrine systems. Future computing-based researchers should focus on reducing the burden of wearable sensors, aligning data across multiple devices, automating methods of data annotation, increasing rigor in studying system acceptability, increasing battery lifetime, and rigorously testing validity of the measure. Such research requires moving promising technological solutions from the controlled laboratory setting to the field.
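The image-based pipeline described above ends by mapping recognized foods and estimated portion sizes to a food-composition database. A minimal sketch of that final lookup step, with a hypothetical per-100-g calorie table (real systems would query a food-composition database such as USDA FoodData Central):

```python
# Hypothetical kcal-per-100-g table for illustration only
CALORIES_PER_100G = {"apple": 52, "rice": 130, "chicken": 165}

def estimate_intake(items):
    """items: (label, grams) pairs from image recognition and
    portion-size estimation. Returns total kcal, skipping labels
    absent from the table."""
    total = 0.0
    for label, grams in items:
        kcal_per_100g = CALORIES_PER_100G.get(label)
        if kcal_per_100g is not None:
            total += kcal_per_100g * grams / 100.0
    return total

# Two recognized items with estimated portions:
total_kcal = estimate_intake([("apple", 150), ("rice", 200)])
```

In practice the dominant error sources sit upstream of this lookup: segmentation/recognition mistakes and portion-size estimates, which is why the workshop flagged ground truth and portion estimation as key research needs.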


Subject(s)
Energy Intake , Feeding Behavior , Wearable Electronic Devices , Algorithms , Education , Humans , Smartphone , Telemedicine , United States