Results 1 - 20 of 125

1.
Front Nutr ; 11: 1428771, 2024.
Article in English | MEDLINE | ID: mdl-39371944

ABSTRACT

Background: Shared plate eating (SPE), defined as two or more individuals eating directly from the same plate or bowl, is a common household food consumption practice in many low- and middle-income countries (LMICs). Household engagement in SPE remains largely unexplored, leaving a gap in research when interpreting dietary information obtained from these settings. The dearth of research into SPE can be attributed to the inherent limitations of traditional dietary assessment methods, which constrain their usability in settings where SPE is common. Objective: In this expository narrative, we describe SPE as practiced in an LMIC such as Ghana and compare the frequency of SPE versus individual plate eating (IPE) by different household members in rural and urban households using a wearable camera (Automatic Ingestion Monitor version 2: AIM-2). Methods: Purposive convenience sampling was employed to recruit and enroll 30 households each from an urban and a rural community (n = 60 households) in Ghana. The AIM-2 was worn on eyeglass frames for 3 days by selected household members. When worn, the AIM-2 automatically captures images of food consumption in participants' environments, enabling passive capture of household SPE dynamics. Results: A higher percentage of SPE occasions was observed for rural (96.7%) than urban (36.7%) households (p < 0.001). Common SPE dynamics included only adults sharing, adults and children sharing, only children sharing, and non-household members participating in SPE. Conclusion: The wearable camera captured eating dynamics within households that would likely have been missed or altered by traditional dietary assessment methods. Obtaining reliable and accurate data is crucial for assessing dietary intake in settings where SPE is the norm.

2.
Sensors (Basel) ; 24(10)2024 May 11.
Article in English | MEDLINE | ID: mdl-38793899

ABSTRACT

Metabolic syndrome poses a significant health challenge worldwide, prompting the need for comprehensive strategies that integrate physical activity monitoring with energy expenditure (EE) estimation. Wearable sensor devices have been used for both energy intake and EE estimation. Traditionally, sensors are attached to the hip or wrist. The primary aim of this research is to investigate the use of an eyeglass-mounted wearable energy intake sensor (Automatic Ingestion Monitor v2, AIM-2) for simultaneous physical activity recognition (PAR) and estimation of steady-state EE, as compared to a traditional hip-worn device. Study data were collected from six participants performing six structured activities, with the reference EE measured using indirect calorimetry (COSMED K5) and reported as metabolic equivalents of tasks (METs). Next, a novel deep convolutional neural network-based multitasking model (Multitasking-CNN) was developed for PAR and EE estimation. The Multitasking-CNN was trained with a two-step progressive training approach for higher accuracy: in the first step, the model was trained for PAR, and in the second step, it was fine-tuned for EE estimation. Finally, the performance of Multitasking-CNN on the AIM-2 attached to eyeglasses was compared to the ActiGraph GT9X (AG) attached to the right hip. On the AIM-2 data, Multitasking-CNN achieved a maximum testing accuracy of 95% for PAR, a minimum mean square error (MSE) of 0.59 METs, and an 11% mean absolute percentage error (MAPE) in EE estimation. Conversely, on AG data, the Multitasking-CNN model achieved a maximum testing accuracy of 82% for PAR, a minimum MSE of 0.73 METs, and a 13% MAPE in EE estimation. These results suggest the feasibility of using an eyeglass-mounted sensor for both PAR and EE estimation.
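
The two-step progressive training described above can be pictured with a small PyTorch sketch: a shared 1-D CNN backbone feeds a PAR classification head and an EE regression head; the backbone and PAR head are trained first, then the network is fine-tuned for EE. All layer sizes, channel counts, and learning rates below are illustrative assumptions, not the published Multitasking-CNN configuration.

```python
# Minimal sketch of two-step progressive multitask training (not the authors' code).
import torch
import torch.nn as nn

class MultitaskCNN(nn.Module):
    def __init__(self, n_channels=4, n_activities=6):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.par_head = nn.Linear(64, n_activities)  # activity logits
        self.ee_head = nn.Linear(64, 1)              # METs regression

    def forward(self, x):                            # x: (batch, channels, time)
        z = self.backbone(x)
        return self.par_head(z), self.ee_head(z)

model = MultitaskCNN()

# Step 1: train backbone + PAR head with cross-entropy on labeled activity windows.
opt_par = torch.optim.Adam(list(model.backbone.parameters()) +
                           list(model.par_head.parameters()), lr=1e-3)
# ... standard training loop with nn.CrossEntropyLoss() ...

# Step 2: fine-tune for EE with MSE against reference METs, keeping the
# PAR-pretrained backbone at a lower learning rate so its features are preserved.
opt_ee = torch.optim.Adam([{"params": model.backbone.parameters(), "lr": 1e-4},
                           {"params": model.ee_head.parameters(), "lr": 1e-3}])
# ... training loop with nn.MSELoss() ...
```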


Subject(s)
Energy Metabolism, Exercise, Eyeglasses, Neural Networks (Computer), Wearable Electronic Devices, Humans, Energy Metabolism/physiology, Exercise/physiology, Adult, Male, Indirect Calorimetry/instrumentation, Indirect Calorimetry/methods, Female, Physiological Monitoring/instrumentation, Physiological Monitoring/methods
3.
Front Robot AI ; 11: 1267072, 2024.
Article in English | MEDLINE | ID: mdl-38680622

ABSTRACT

Robotic lower-limb prostheses, with their actively powered joints, may significantly improve amputee users' mobility and enable them to attain healthy-like gait in various modes of locomotion in daily life. However, timely recognition of the amputee user's locomotive mode and mode transition remains a major challenge in robotic lower-limb prosthesis control. In this paper, the authors present a new multi-dimensional dynamic time warping (mDTW)-based intent recognizer that provides high-accuracy recognition of the locomotion mode/mode transition sufficiently early in the swing phase, such that the prosthesis's joint-level motion controller can operate in the correct locomotive mode and assist the user in completing the desired (and often power-demanding) motion in the stance phase. To support the intent recognizer development, the authors conducted a multi-modal gait data collection study to obtain the related sensor signal data in various modes of locomotion. The collected data were then segmented into individual cycles, generating the templates used in the mDTW classifier. Considering the large number of sensor signals available, we conducted feature selection to identify the most useful sensor signals as the input to the mDTW classifier. We also augmented the standard mDTW algorithm with a voting mechanism to make full use of the data generated by the multiple subjects. To validate the proposed intent recognizer, we characterized its performance using the data accumulated at different percentages of progression into the gait cycle (starting from the beginning of the swing phase). The mDTW classifier was able to recognize three locomotive modes/mode transitions (walking, walking to stair ascent, and walking to stair descent) with 99.08% accuracy at 30% progression into the gait cycle, well before the stance phase starts. With its high performance, low computational load, and easy personalization (through individual template generation), the proposed mDTW intent recognizer may become a highly useful building block of a prosthesis control system, facilitating robotic prostheses' real-world use among lower-limb amputees.
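
As a rough illustration of the approach, the sketch below implements a dependent multi-dimensional DTW distance and a template classifier in which every template (pooled across subjects) casts an inverse-distance-weighted vote. The paper's feature selection, template generation, and exact voting rule are not reproduced, so treat the details as assumptions.

```python
# Hedged sketch of an mDTW template classifier with cross-subject voting.
import numpy as np

def mdtw_distance(a, b):
    """Dependent multi-dimensional DTW: a, b are (time, channels) arrays."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # all channels at once
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(query, templates):
    """templates: list of (label, (time, channels) array) pooled across subjects.
    Each template votes with weight 1/distance; the best-supported label wins."""
    votes = {}
    for label, tpl in templates:
        d = mdtw_distance(query, tpl)
        votes[label] = votes.get(label, 0.0) + 1.0 / (d + 1e-9)
    return max(votes, key=votes.get)
```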

4.
Obesity (Silver Spring) ; 32(5): 857-870, 2024 05.
Article in English | MEDLINE | ID: mdl-38426232

ABSTRACT

OBJECTIVE: Big Data are increasingly used in obesity and nutrition research to gain new insights and derive personalized guidance; however, these data in raw form are often not usable. Substantial preprocessing, which requires machine learning (ML), human judgment, and specialized software, is required to transform Big Data into artificial intelligence (AI)- and ML-ready data. These preprocessing steps are the most complex part of the entire modeling pipeline. End users' understanding of the complexity of these steps is critical for reducing misunderstanding, faulty interpretation, and erroneous downstream conclusions. METHODS: We reviewed three popular obesity/nutrition Big Data sources: microbiome, metabolomics, and accelerometry. The preprocessing pipelines, specialized software, challenges, and ways in which decisions impact the final AI- and ML-ready products were detailed. RESULTS: Opportunities for advances to improve quality control, speed of preprocessing, and intelligent end user consumption were presented. CONCLUSIONS: Big Data have exciting potential for identifying new modifiable factors that impact obesity research. However, to ensure accurate interpretation of conclusions arising from Big Data, the choices involved in preparing AI- and ML-ready data need to be transparent to the investigators and clinicians relying on those conclusions.

5.
Sci Rep ; 14(1): 1665, 2024 01 18.
Article in English | MEDLINE | ID: mdl-38238423

ABSTRACT

The first step in any dietary monitoring system is the automatic detection of eating episodes. To detect eating episodes, either sensor data or images can be used, and either method can result in false-positive detections. This study aims to reduce the number of false positives in the detection of eating episodes by a wearable sensor, the Automatic Ingestion Monitor v2 (AIM-2). Thirty participants wore the AIM-2 for two days each (one pseudo-free-living and one free-living). Eating episodes were detected by three methods: (1) recognition of solid foods and beverages in images captured by the AIM-2; (2) recognition of chewing from the AIM-2 accelerometer sensor; and (3) hierarchical classification combining confidence scores from the image and accelerometer classifiers. The integration of image- and sensor-based methods achieved 94.59% sensitivity, 70.47% precision, and an 80.77% F1-score in the free-living environment, significantly better than either method alone (8% higher sensitivity). The proposed method successfully reduces the number of false positives in the detection of eating episodes.
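
One simple way to realize a fusion of the two confidence scores is a small meta-classifier over the image and chewing classifier outputs, sketched below. The logistic-regression fusion rule and the toy scores are assumptions; the paper's exact hierarchical scheme is not described here.

```python
# Illustrative fusion of image- and sensor-classifier confidences (assumed design).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Per time segment: [confidence food/drink visible in image,
#                    confidence chewing present in accelerometer signal]
X_train = np.array([[0.9, 0.8], [0.7, 0.1], [0.1, 0.9], [0.05, 0.1]])
y_train = np.array([1, 0, 1, 0])          # 1 = eating episode, 0 = not

fusion = LogisticRegression().fit(X_train, y_train)
print(fusion.predict_proba([[0.6, 0.7]])[:, 1])  # fused eating probability
```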


Subject(s)
Diet, Mastication, Humans, Physiological Monitoring, Recognition (Psychology), Mental Processes
6.
IEEE Trans Cybern ; 54(2): 679-692, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37028043

ABSTRACT

Camera-based passive dietary intake monitoring can continuously capture the eating episodes of a subject, recording rich visual information such as the type and volume of food being consumed, as well as the subject's eating behaviors. However, no current method can incorporate these visual clues and provide a comprehensive context of dietary intake from passive recording (e.g., is the subject sharing food with others, what food the subject is eating, and how much food is left in the bowl). At the same time, privacy is a major concern when egocentric wearable cameras are used for capture. In this article, we propose a privacy-preserving secure solution (i.e., egocentric image captioning) for dietary assessment with passive monitoring, which unifies food recognition, volume estimation, and scene understanding. By converting images into rich text descriptions, nutritionists can assess individual dietary intake based on the captions instead of the original images, reducing the risk of privacy leakage from images. To this end, an egocentric dietary image captioning dataset has been built, consisting of in-the-wild images captured by head-worn and chest-worn cameras in field studies in Ghana. A novel transformer-based architecture is designed to caption egocentric dietary images. Comprehensive experiments have been conducted to evaluate the effectiveness of the proposed architecture for egocentric dietary image captioning and to justify its design. To the best of our knowledge, this is the first work that applies image captioning to dietary intake assessment in real-life settings.


Subject(s)
Eating, Privacy, Diet, Nutrition Assessment, Feeding Behavior
7.
Article in English | MEDLINE | ID: mdl-38083112

ABSTRACT

A comprehensive assessment of cigarette smoking behavior and its effect on health requires a detailed examination of smoke exposure. We propose a CNN-LSTM-based deep learning architecture named DeepPuff to quantify Respiratory Smoke Exposure Metrics (RSEM). Smoke inhalations were detected from the breathing and hand-gesture sensors of the Personal Automatic Cigarette Tracker v2 (PACT 2.0). The DeepPuff model for smoke inhalation detection was developed using data collected from 190 cigarette smoking events from 38 medium-to-heavy smokers and optimized for precision (avoidance of false positives). An independent dataset of 459 smoking events from 45 participants (90 smoking events in the lab and 369 in free-living conditions) was used for testing the model. The proposed model achieved a precision of 82.39% on the training dataset and 93.80% on the testing dataset (95.88% in the lab and 93.78% in free-living conditions). RSEM metrics were then computed from the breathing signal of each detected smoke inhalation. Results from the RSEM algorithm were compared with respiratory metrics obtained from video annotation. Smoke exposure metrics of puff duration, inhale-exhale duration, and inhalation duration were not statistically different from the ground truth generated through video annotation. The results suggest that DeepPuff may be used as a reliable means of measuring respiratory smoke exposure metrics collected under free-living conditions.
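
For readers unfamiliar with the architecture family, a minimal CNN-LSTM puff detector in the spirit of DeepPuff might look like the following PyTorch sketch; channel counts, window length, and layer sizes are assumptions rather than the published model.

```python
# Hedged CNN-LSTM sketch: convolutions capture local signal morphology,
# the LSTM adds temporal context, and a linear head emits a puff/no-puff logit.
import torch
import torch.nn as nn

class PuffDetector(nn.Module):
    def __init__(self, n_channels=3):
        super().__init__()
        self.cnn = nn.Sequential(                      # breathing/hand-gesture
            nn.Conv1d(n_channels, 16, 5, padding=2),   # channel morphology
            nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(32, 64, batch_first=True)  # temporal context
        self.out = nn.Linear(64, 1)                    # puff / no-puff logit

    def forward(self, x):                 # x: (batch, channels, time)
        z = self.cnn(x).transpose(1, 2)   # -> (batch, time', features)
        _, (h, _) = self.lstm(z)
        return self.out(h[-1])

logits = PuffDetector()(torch.randn(8, 3, 250))  # 8 windows, 3 sensor channels
```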


Subject(s)
Cigarette Smoking, Deep Learning, Tobacco Products, Humans, Respiration
8.
Article in English | MEDLINE | ID: mdl-38083186

ABSTRACT

This paper introduces a novel wearable shoe sensor named the Smart Lacelock Sensor. The sensor can be securely attached to the top of a shoe with laces and incorporates a loadcell to measure the force applied by the shoelace, providing valuable information related to ankle movement and foot loading. As a first step toward automated balance assessment, this paper investigates the correlations between various levels of physical performance measured by the wearable Smart Lacelock Sensor and the Short Physical Performance Battery (SPPB) clinical method in community-living older persons. Nineteen adults (age 76.84 ± 3.45 years), including those with and without recent fall history and with SPPB scores ranging from 4 to 12, participated in the study. The Smart Lacelock Sensor was attached to both shoes of each participant by skilled research staff, who then led them through the SPPB evaluation. The data obtained from the Smart Lacelock Sensors after the SPPB assessment were used to evaluate the correspondence between the SPPB scores assigned by the research staff and the signals generated by the sensors across participants. Results demonstrate that the standard deviation of the Smart Lacelock Sensor's loadcell response (both feet) during the side-by-side balance test is significantly correlated (R2 = 0.68) with the SPPB score, indicating the capability of the Smart Lacelock Sensor for balance assessment.


Subject(s)
Foot, Independent Living, Adult, Humans, Aged, Aged, 80 and over, Lower Extremity, Ankle Joint, Physical Functional Performance
9.
IEEE Sens J ; 23(5): 5391-5400, 2023 Mar 01.
Article in English | MEDLINE | ID: mdl-37799776

ABSTRACT

Automatic food portion size estimation (FPSE) with minimal user burden is a challenging task. Most of the existing FPSE methods use fiducial markers and/or virtual models as dimensional references. An alternative approach is to estimate the dimensions of the eating containers prior to estimating the portion size. In this article, we propose a wearable sensor system (the automatic ingestion monitor integrated with a ranging sensor) and a related method for the estimation of the dimensions of plates and bowls. The contributions of this study are: 1) the model eliminates the need for fiducial markers; 2) the camera system [automatic ingestion monitor version 2 (AIM-2)] is not restricted in terms of positioning relative to the food item; 3) our model accounts for radial lens distortion caused by lens aberrations; 4) a ranging sensor directly gives the distance between the sensor and the eating surface; 5) the model is not restricted to circular plates; and 6) the proposed system implements a passive method that can be used for assessment of container dimensions with minimal user interaction. The error rates (mean ± std. dev.) for dimension estimation were 2.01% ± 4.10% for plate widths/diameters, 2.75% ± 38.11% for bowl heights, and 4.58% ± 6.78% for bowl diameters.
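
The core geometry the ranging sensor enables is the pinhole relation between pixel extent, distance, and physical size; a toy version (ignoring the paper's radial distortion correction, with an assumed focal length and example numbers) is shown below.

```python
# Back-of-envelope pinhole projection: with sensor-to-surface distance Z known
# from the ranging sensor, a pixel extent maps to a physical dimension.
def physical_size_mm(pixel_extent, distance_mm, focal_length_px):
    """Pinhole model: X = x_px * Z / f."""
    return pixel_extent * distance_mm / focal_length_px

# e.g., a plate spanning 540 px, viewed from 600 mm with f = 1300 px (assumed):
print(physical_size_mm(540, 600, 1300))  # ~249 mm estimated plate diameter
```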

10.
Nutrients ; 15(18)2023 Sep 20.
Article in English | MEDLINE | ID: mdl-37764857

ABSTRACT

BACKGROUND: Accurate estimation of dietary intake is challenging. Whilst some progress has been made in high-income countries, low- and middle-income countries (LMICs) remain behind, contributing to critical nutritional data gaps. This study aimed to validate an objective, passive, image-based dietary intake assessment method against weighed food records in London, UK, for onward deployment to LMICs. METHODS: Wearable camera devices were used to capture food intake on eating occasions in 18 adults and 17 children of Ghanaian and Kenyan origin living in London. Participants were provided with pre-weighed meals of Ghanaian and Kenyan cuisine and camera devices to automatically capture images of the eating occasions. Food images were assessed for portion size, energy, and nutrient intake, and the relative validity of the method was assessed against the weighed food records. RESULTS: The Pearson and intraclass correlation coefficients for estimated intakes of food, energy, and 19 nutrients ranged from 0.60 to 0.95 and 0.67 to 0.90, respectively. Bland-Altman analysis showed good agreement between the image-based method and the weighed food records. Under-estimation of dietary intake by the image-based method ranged from 4 to 23%. CONCLUSIONS: Passive food image capture and analysis provides an objective assessment of dietary intake comparable to weighed food records.


Subject(s)
Eating, Food, Humans, Adult, Child, London, Ghana, Kenya
11.
Front Nutr ; 10: 1191962, 2023.
Article in English | MEDLINE | ID: mdl-37575335

ABSTRACT

Introduction: Dietary assessment is important for understanding nutritional status. Traditional methods of monitoring food intake through self-report, such as diet diaries, 24-hour dietary recall, and food frequency questionnaires, may be subject to errors and can be time-consuming for the user. Methods: This paper presents a semi-automatic dietary assessment tool we developed, a desktop application called Image to Nutrients (I2N), to process sensor-detected eating events and the images captured during these eating events by a wearable sensor. I2N offers multiple food and nutrient databases (e.g., USDA-SR, FNDDS, USDA Global Branded Food Products Database) for annotating eating episodes and food items, and it estimates energy intake, nutritional content, and the amount consumed. The components of I2N are three-fold: 1) sensor-guided image review, 2) annotation of food images for nutritional analysis, and 3) access to multiple food databases. Two studies were used to evaluate the feasibility and usefulness of I2N: 1) a US-based study with 30 participants and a total of 60 days of data, and 2) a Ghana-based study with 41 participants and a total of 41 days of data. Results: In both studies, a total of 314 eating episodes were annotated using at least three food databases. Using I2N's sensor-guided image review, the number of images that needed to be reviewed was reduced by 93% and 85% for the two studies, respectively, compared to reviewing all the images. Discussion: I2N is a unique tool that allows for simultaneous viewing of food images, sensor-guided image review, and access to multiple databases in one tool, making nutritional analysis of food images efficient. The tool is also flexible, allowing for nutritional analysis of images when sensor signals are not available.

12.
Sensors (Basel) ; 23(11)2023 Jun 05.
Article in English | MEDLINE | ID: mdl-37300082

ABSTRACT

Walking in real-world environments involves constant decision-making; e.g., when approaching a staircase, an individual decides whether to engage (climb the stairs) or avoid it. For the control of assistive robots (e.g., robotic lower-limb prostheses), recognizing such motion intent is an important but challenging task, primarily due to the lack of available information. This paper presents a novel vision-based method to recognize an individual's motion intent when approaching a staircase, before the potential transition of motion mode (walking to stair climbing) occurs. Leveraging egocentric images from a head-mounted camera, the authors trained a YOLOv5 object detection model to detect staircases. Subsequently, an AdaBoost and a gradient boosting (GB) classifier were developed to recognize the individual's intention of engaging or avoiding the upcoming stairway. This novel method has been demonstrated to provide reliable (97.69%) recognition at least two steps before the potential mode transition, which is expected to provide ample time for controller mode transition in an assistive robot in real-world use.
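
A sketch of that two-stage pipeline is given below: a staircase detector supplies per-frame bounding boxes, and a gradient boosting classifier decides engage vs. avoid from how the boxes evolve during the approach. The bounding-box features and toy training data are illustrative assumptions, not the paper's feature set.

```python
# Hedged two-stage sketch: detection -> tracked box features -> intent classifier.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Stage 1 (assumed interface): a YOLOv5 model fine-tuned on staircase images
# would return one (x1, y1, x2, y2) box per frame, e.g. via:
# model = torch.hub.load('ultralytics/yolov5', 'custom', path='stairs.pt')

def track_features(boxes):
    """boxes: (x1, y1, x2, y2) staircase detections over the last k frames."""
    areas = [(x2 - x1) * (y2 - y1) for x1, y1, x2, y2 in boxes]
    centers = [((x1 + x2) / 2, (y1 + y2) / 2) for x1, y1, x2, y2 in boxes]
    growth = areas[-1] / max(areas[0], 1)    # approaching -> box grows
    drift = centers[-1][0] - centers[0][0]   # veering away -> box drifts
    return [growth, drift, areas[-1]]

# Stage 2: gradient boosting on tracked features (toy training data).
X = np.array([[2.0, 5, 9000], [1.1, 160, 3000], [1.8, -8, 8000], [0.9, 220, 2500]])
y = np.array([1, 0, 1, 0])                   # 1 = engage, 0 = avoid
clf = GradientBoostingClassifier().fit(X, y)

boxes = [(100, 120, 160, 200), (90, 110, 180, 230), (80, 100, 200, 260)]
print(clf.predict([track_features(boxes)]))  # growing, centered box -> engage
```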


Subject(s)
Intention, Robotics, Humans, Walking
13.
Front Nutr ; 10: 1119542, 2023.
Article in English | MEDLINE | ID: mdl-37252243

ABSTRACT

Introduction: The aim of this feasibility and proof-of-concept study was to examine the use of a novel wearable device for automatic food intake detection to capture the full range of free-living eating environments of adults with overweight and obesity. In this paper, we document eating environments of individuals that have not been thoroughly described previously in nutrition software, as current practices rely on participant self-report and methods with limited eating environment options. Methods: Data from 25 participants and 116 total days (7 men, 18 women, mean age 44 ± 12 years, BMI 34.3 ± 5.2 kg/m2), who wore the passive capture device for at least 7 consecutive days (≥12 waking hours/day), were analyzed. Data were analyzed at the participant level and stratified by meal type into breakfast, lunch, dinner, and snack categories. Of the 116 days, 68.1% included breakfast, 71.5% included lunch, 82.8% included dinner, and 86.2% included at least one snack. Results: The most prevalent eating environments among all eating occasions were at home and with one or more screens in use (breakfast: 48.1%, lunch: 42.2%, dinner: 50%, snacks: 55%), eating alone (breakfast: 75.9%, lunch: 89.2%, dinner: 74.3%, snacks: 74.3%), in the dining room (breakfast: 36.7%, lunch: 30.1%, dinner: 45.8%) or living room (snacks: 28.0%), and in multiple locations (breakfast: 44.3%, lunch: 28.8%, dinner: 44.8%, snacks: 41.3%). Discussion: Results suggest a passive capture device can provide accurate detection of food intake in multiple eating environments. To our knowledge, this is the first study to classify eating occasions in multiple eating environments, and the device may be a useful tool for future behavioral research studies to accurately codify eating environments.

14.
Nicotine Tob Res ; 25(7): 1391-1399, 2023 Jun 09.
Article in English | MEDLINE | ID: mdl-36905322

ABSTRACT

INTRODUCTION: There has been little research objectively examining use-patterns among individuals who use electronic cigarettes (e-cigarettes). The primary aim of this study was to identify patterns of e-cigarette use and categorize distinct use-groups by analyzing patterns of puff topography variables over time. The secondary aim was to identify the extent to which self-report questions about use accurately assess e-cigarette use-behavior. AIMS AND METHODS: Fifty-seven adult e-cigarette-only users completed a 4-hour ad libitum puffing session. Self-reports of use were collected both before and after this session. RESULTS: Three distinct use-groups emerged from exploratory and confirmatory cluster analyses. The first was labeled the "Graze" use-group (29.8% of participants), in which the majority of puffs were unclustered (i.e., puffs were more than 60 seconds apart) with a small minority in short clusters (2-5 puffs). The second was labeled the "Clumped" use-group (12.3%), in which the majority of puffs were within clusters (short, medium [6-10 puffs], and/or long [>10 puffs]) and a small minority of puffs were unclustered. The third was labeled the "Hybrid" use-group (57.9%), in which most puffs were either within short clusters or unclustered. Significant differences emerged between observed and self-reported use-behaviors, with a general tendency for participants to overreport use. Furthermore, commonly utilized assessments demonstrated limited accuracy in capturing the use-behaviors observed in this sample. CONCLUSIONS: This research addressed several limitations previously identified in the e-cigarette literature and collected novel data that provide substantial information about e-cigarette puff topography and its relationship with self-report measures and use-type categorization. IMPLICATIONS: This is the first study to identify and distinguish three empirically based e-cigarette use-groups. These use-groups, as well as the specific topography data discussed, can provide a foundation for future research assessing the impact of use across different use types. Furthermore, as participants tended to overreport use and assessments did not capture use accurately, this study can serve as a foundation for future work developing more appropriate assessments for use in research studies as well as clinical practice.
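
The clustering rule stated in the abstract (puffs more than 60 seconds apart are unclustered; runs of closer puffs form short, medium, or long clusters) can be encoded directly, as in this small sketch with made-up timestamps.

```python
# Direct encoding of the abstract's puff-clustering rule; timestamps are illustrative.
def label_puff_clusters(puff_times, gap_s=60):
    clusters, current = [], [puff_times[0]]
    for t_prev, t in zip(puff_times, puff_times[1:]):
        if t - t_prev <= gap_s:
            current.append(t)          # within 60 s: same cluster
        else:
            clusters.append(current)   # gap > 60 s: close the cluster
            current = [t]
    clusters.append(current)

    def kind(cluster):
        n = len(cluster)
        if n == 1:
            return "unclustered"
        if n <= 5:
            return "short"
        if n <= 10:
            return "medium"
        return "long"

    return [(kind(c), c) for c in clusters]

# Puff times in seconds: a short cluster, a medium cluster, one lone puff.
print(label_puff_clusters([0, 20, 45, 300, 310, 330, 350, 365, 380, 1200]))
```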


Subject(s)
Electronic Nicotine Delivery Systems, Tobacco Products, Vaping, Adult, Humans, Self Report, Data Collection
15.
Sensors (Basel) ; 23(2)2023 Jan 04.
Article in English | MEDLINE | ID: mdl-36679357

ABSTRACT

Sensor-based food intake monitoring has become one of the fastest-growing fields in dietary assessment. Researchers are exploring imaging-sensor-based food detection, food recognition, and food portion size estimation. A major problem still being tackled in this field is the segmentation of food regions when multiple food items are present, particularly when similar-looking foods (similar in color and/or texture) appear together. Food image segmentation remains a relatively under-explored area compared with other fields. This paper proposes a novel approach to food imaging consisting of two imaging sensors: color (Red-Green-Blue, RGB) and thermal. Furthermore, we propose multi-modal, four-dimensional (RGB-T) image segmentation using a k-means clustering algorithm to segment regions of similar-looking food items in multiple combinations of hot, cold, and warm (room-temperature) foods. Six food combinations of two food items each were used to capture RGB and thermal image data. The RGB and thermal data were superimposed to form a combined RGB-T image, and three sets of data (RGB, thermal, and RGB-T) were tested. A bootstrapped optimization of the within-cluster sum of squares (WSS) was employed to determine the optimal number of clusters for each case. The combined RGB-T data achieved better results than the RGB and thermal data used individually. The mean ± standard deviation (std. dev.) of the F1 score for RGB-T data was 0.87 ± 0.1, compared with 0.66 ± 0.13 and 0.64 ± 0.39 for the RGB and thermal data, respectively.
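
A minimal version of the RGB-T pipeline, assuming registered color and thermal images: stack each pixel into a 4-D feature, choose k via a bootstrapped WSS (sklearn's inertia_) elbow, and run k-means. The array shapes, bootstrap settings, and elbow heuristic are assumptions.

```python
# Hedged sketch of RGB-T k-means segmentation with a bootstrapped-WSS elbow.
import numpy as np
from sklearn.cluster import KMeans

rgb = np.random.rand(120, 160, 3)             # stand-ins for registered images
thermal = np.random.rand(120, 160, 1)
pixels = np.concatenate([rgb, thermal], axis=2).reshape(-1, 4)  # 4-D features

def bootstrap_wss(X, k, n_boot=5, frac=0.2, seed=0):
    """Average within-cluster sum of squares over bootstrap resamples."""
    rng = np.random.default_rng(seed)
    wss = []
    for _ in range(n_boot):
        sample = X[rng.choice(len(X), int(frac * len(X)), replace=True)]
        wss.append(KMeans(n_clusters=k, n_init=5).fit(sample).inertia_)
    return float(np.mean(wss))

ks = list(range(2, 8))
curve = [bootstrap_wss(pixels, k) for k in ks]
best_k = ks[int(np.argmax(np.diff(curve, 2))) + 1]   # crude elbow: biggest bend
labels = KMeans(n_clusters=best_k, n_init=10).fit_predict(pixels)
segmentation = labels.reshape(120, 160)              # per-pixel region map
```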


Subject(s)
Algorithms, Cold Temperature, Cluster Analysis, Recognition (Psychology), Multimodal Imaging, Color
16.
Int J Obes (Lond) ; 46(11): 2050-2057, 2022 11.
Article in English | MEDLINE | ID: mdl-36192533

ABSTRACT

OBJECTIVES: Dietary assessment methods not relying on self-report are needed. The Automatic Ingestion Monitor 2 (AIM-2) combines a wearable camera that captures food images with sensors that detect food intake. We compared energy intake (EI) estimates of meals derived from AIM-2 chewing sensor signals, AIM-2 images, and an internet-based diet diary, with researcher-conducted weighed food records (WFR) as the gold standard. SUBJECTS/METHODS: Thirty adults wore the AIM-2 for meals self-selected from a university food court on one day in mixed laboratory and free-living conditions. Daily EI was determined from a sensor regression model, manual image analysis, and a diet diary, and compared with that from the WFR. A posteriori analysis identified sources of error for the differences between image analysis and WFR. RESULTS: Sensor-derived EI from regression modeling (R2 = 0.331) showed the closest agreement with EI from WFR, followed by diet diary estimates. EI from image analysis differed significantly from that by WFR. Bland-Altman analysis showed wide limits of agreement for all three test methods with WFR, with the sensor method overestimating at lower and underestimating at higher EI. Nutritionist error in portion size estimation and irreconcilable differences in portion size between the food and nutrient databases used for the WFR and image analyses were the greatest contributors to the differences between image analysis and WFR (44.4% and 44.8% of WFR EI, respectively). CONCLUSIONS: Estimation of daily EI from meals using sensor-derived features offers a promising alternative to overcome the limitations of self-report. Image analysis may benefit from computerized analytical procedures to reduce the identified sources of error.
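
For reference, the Bland-Altman quantities used in such comparisons (mean bias and 95% limits of agreement against the WFR) reduce to a few lines; the per-meal numbers below are hypothetical.

```python
# Standard Bland-Altman bias and limits of agreement; example values are made up.
import numpy as np

def bland_altman(method_kcal, wfr_kcal):
    diff = np.asarray(method_kcal) - np.asarray(wfr_kcal)
    bias = diff.mean()                      # mean difference vs. gold standard
    half_width = 1.96 * diff.std(ddof=1)    # 95% limits of agreement
    return bias, (bias - half_width, bias + half_width)

sensor = [650, 540, 800, 450]   # hypothetical per-meal EI estimates (kcal)
wfr = [600, 580, 750, 500]      # hypothetical weighed-food-record values
print(bland_altman(sensor, wfr))
```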


Subject(s)
Energy Intake, Wearable Electronic Devices, Humans, Adult, Diet Records, Meals, Diet
17.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 2993-2996, 2022 07.
Article in English | MEDLINE | ID: mdl-36085821

ABSTRACT

The choice of appropriate machine learning algorithms is crucial for classification problems. This study compares the performance of state-of-the-art time-series deep learning algorithms for classifying food intake using sensor signals. The sensor signals were collected with the help of a wearable sensor system (the Automatic Ingestion Monitor v2, or AIM-2). The AIM-2 used an optical sensor and a 3-axis accelerometer to capture temporalis muscle activation. Raw signals from those sensors were used to train five classifiers (multilayer perceptron (MLP), time Convolutional Neural Network (time-CNN), Fully Convolutional Neural Network (FCN), Residual Neural Network (ResNet), and Inception network) to differentiate food intake (eating and drinking) from other activities. Data were collected from 17 pilot subjects over the course of 23 days in free-living conditions. A leave-one-subject-out cross-validation scheme was used for training and testing. Time-CNN, FCN, ResNet, and Inception achieved average balanced classification accuracies of 88.84%, 90.18%, 93.47%, and 92.15%, respectively. The results indicate that ResNet outperforms the other state-of-the-art deep learning algorithms for this specific problem.
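
The leave-one-subject-out protocol generalizes beyond deep models; the sketch below shows the scheme with sklearn's LeaveOneGroupOut and a random forest standing in for the networks compared in the paper, on synthetic stand-in data.

```python
# Leave-one-subject-out (LOSO) cross-validation: every subject's windows are
# held out once, so the classifier is always tested on an unseen person.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))          # sensor-window features (stand-in)
y = rng.integers(0, 2, 300)            # 1 = food intake, 0 = other activity
subjects = rng.integers(0, 17, 300)    # 17 pilot subjects, as in the study

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = RandomForestClassifier().fit(X[train_idx], y[train_idx])
    scores.append(balanced_accuracy_score(y[test_idx], clf.predict(X[test_idx])))
print(f"mean balanced accuracy: {np.mean(scores):.3f}")
```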


Subject(s)
Deep Learning, Algorithms, Disease Progression, Eating, Humans, Machine Learning, Neural Networks (Computer)
18.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 1787-1791, 2022 07.
Article in English | MEDLINE | ID: mdl-36086477

ABSTRACT

Detailed assessment of smoking topography (puffing and post-puffing metrics) can lead to a better understanding of factors that influence tobacco use. Research suggests that portable mouthpiece-based devices used for puff topography measurement may alter natural smoking behavior. This paper evaluated the impact of a portable puff topography device (CReSS Pocket) on puffing and post-puffing topography, using a wearable system, the Personal Automatic Cigarette Tracker v2 (PACT 2.0), as a reference measurement. Data from 45 smokers who smoked one cigarette in the lab and an unrestricted number of cigarettes under free-living conditions over 4 consecutive days were used for analysis. The PACT 2.0 was worn on all four days. The puff topography instrument (CReSS Pocket) was used for cigarette smoking on two randomly selected days of the four-day study, in both laboratory and free-living conditions. Smoke inhalations were automatically detected using PACT 2.0 signals. Respiratory smoke exposure metrics (i.e., puff count, cigarette duration, puff duration, inhale-exhale duration, inhale-exhale volume, volume over time, smoke hold duration, and inter-puff interval) were computed for each puff/smoke inhalation. Analysis comparing respiratory smoke exposure metrics on days with and without the CReSS device revealed significant differences in puff duration, inhale-exhale duration and volume, smoke hold duration, inter-puff interval, and volume over time. However, the number of cigarettes per day and the number of puffs per cigarette were statistically the same irrespective of the use of the CReSS device. The results suggest that the use of mouthpiece-based puff topography devices may influence measures of smoking topography, with corresponding changes in smoking behavior and smoke exposure.


Subject(s)
Tobacco Products, Tobacco Use Disorder, Wearable Electronic Devices, Humans, Nicotine, Smoking
19.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 4917-4920, 2022 07.
Article in English | MEDLINE | ID: mdl-36086530

ABSTRACT

Cumulative screen exposure has increased for all people with the explosion of digital technology ownership in the past decade, including children, who face exposure-related risks such as obesity, eye problems, and disrupted sleep. Screen exposure is linked to physical and mental health risks among both children and adults. Current methods of screen exposure assessment have limitations, mostly with respect to objectivity, robustness, and invasiveness. In this paper, we propose a novel method to measure screen exposure time using a wearable sensor and computer vision technology. We use a customized, lightweight, wearable sensor to capture egocentric images and a deep-learning-based object detection module to identify the presence of electronic screens. The duration of screen exposure is then estimated by post-processing that filters consecutive frames for screen usage. Our method is non-invasive and robust, providing an objective and accurate means of measuring screen exposure. We conducted experiments in various environments to identify the presence of three types of screens and the duration of screen exposure. The experimental results demonstrate the feasibility of automatically assessing screen time exposure and great potential for application in large-scale behavioral studies.
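
The post-processing step can be pictured as gap-tolerant run-length accumulation over per-frame detections, as in the sketch below; the frame interval and gap tolerance are assumed parameters, not values from the paper.

```python
# Gap-tolerant accumulation of screen time from per-frame detector outputs:
# isolated misses inside an exposure episode are bridged rather than splitting it.
def screen_time_s(detections, frame_interval_s=10, max_gap_frames=2):
    """detections: per-frame booleans from the object detector."""
    total, gap, in_episode = 0, 0, False
    for seen in detections:
        if seen:
            if in_episode:
                total += gap * frame_interval_s    # bridge a short miss run
            total += frame_interval_s
            gap, in_episode = 0, True
        elif in_episode:
            gap += 1
            if gap > max_gap_frames:               # long miss run ends episode
                in_episode, gap = False, 0
    return total

frames = [1, 1, 0, 1, 1, 0, 0, 0, 1]               # 1 = screen detected
print(screen_time_s(frames))                        # 60: single miss bridged
```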


Subject(s)
Screen Time, Wearable Electronic Devices, Adult, Child, Computers, Humans, Obesity
20.
Front Nutr ; 9: 941001, 2022.
Article in English | MEDLINE | ID: mdl-35958246

ABSTRACT

Background: A fast rate of eating is associated with a higher risk for obesity, but existing studies are limited by reliance on self-report, and the consistency of eating rate has not been examined across all meals in a day. The goal of the current analysis was to examine associations between meal duration, rate of eating, and body mass index (BMI), and to assess the variance of meal duration and eating rate across different meals during the day. Methods: Using an observational cross-sectional study design, non-smoking participants aged 18-45 years (N = 29) consumed all meals (breakfast, lunch, and dinner) on a single day in a pseudo-free-living environment. Participants were allowed to choose any food and beverages from a university food court and consume their desired amount with no time restrictions. Weighed food records and a log of meal start and end times, used to calculate duration, were obtained by a trained research assistant. Spearman's correlations and multiple linear regressions examined associations between BMI and meal duration and rate of eating. Results: Participants were 65% male and 48% white. A shorter meal duration was associated with a higher BMI at breakfast but not lunch or dinner, after adjusting for age and sex (p = 0.03). A faster rate of eating was associated with higher BMI across all meals (p = 0.04) and higher energy intake at all meals (p < 0.001). Intra-individual rates of eating were not significantly different across breakfast, lunch, and dinner (p = 0.96). Conclusion: A shorter breakfast duration and a faster rate of eating across all meals were associated with higher BMI in a pseudo-free-living environment. An individual's rate of eating was consistent across all meals in a day. These data support weight reduction interventions focusing on the rate of eating at all meals throughout the day and provide evidence for specifically directing attention to breakfast eating behaviors.
