1.
Article in English | MEDLINE | ID: mdl-38959146

ABSTRACT

Eating speed is an important indicator that has been widely investigated in nutritional studies. The relationship between eating speed and several intake-related problems, such as obesity, diabetes, and oral health, has received increased attention from researchers. However, existing studies mainly use self-reported questionnaires to obtain participants' eating speed, asking them to choose among slow, medium, and fast options. Such a non-quantitative method is highly subjective and coarse at the individual level. This study integrates two classical tasks in the automated food intake monitoring domain, bite detection and eating episode detection, to measure eating speed automatically and objectively in near-free-living environments. Specifically, a temporal convolutional network combined with a multi-head attention module (TCN-MHA) is developed to detect bites (including eating and drinking gestures) from IMU data. The predicted bite sequences are then clustered into eating episodes. Eating speed is calculated by dividing the number of bites by the time taken to finish the eating episode. To validate the proposed approach to eating speed measurement, 7-fold cross-validation is applied to the self-collected, finely annotated full-day-I (FD-I) dataset, and a holdout experiment is conducted on the full-day-II (FD-II) dataset. The two datasets, collected from 61 participants with a total duration of 513 h, are publicly available. Experimental results show that the proposed approach achieves a mean absolute percentage error (MAPE) of 0.110 and 0.146 on the FD-I and FD-II datasets, respectively, showcasing the feasibility of automated eating speed measurement in near-free-living environments.
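For illustration, the sketch below shows how detected bite timestamps could be clustered into eating episodes and how an eating speed and its MAPE could then be computed. The 300 s episode gap, the bites-per-minute definition, and all function names are assumptions made for this example, not the exact parameters of the study.

```python
# Minimal sketch of episode clustering and eating-speed calculation (assumed parameters).
from typing import List
import numpy as np

def cluster_bites_into_episodes(bite_times_s: List[float], max_gap_s: float = 300.0) -> List[List[float]]:
    """Group detected bite timestamps into eating episodes: a new episode
    starts whenever the gap to the previous bite exceeds max_gap_s."""
    episodes: List[List[float]] = []
    for t in sorted(bite_times_s):
        if episodes and t - episodes[-1][-1] <= max_gap_s:
            episodes[-1].append(t)
        else:
            episodes.append([t])
    return episodes

def eating_speed_bites_per_min(episode: List[float]) -> float:
    """Eating speed = number of bites divided by episode duration (minutes)."""
    duration_min = (episode[-1] - episode[0]) / 60.0
    return len(episode) / max(duration_min, 1e-6)

def mape(predicted: np.ndarray, reference: np.ndarray) -> float:
    """Mean absolute percentage error between predicted and reference speeds."""
    return float(np.mean(np.abs(predicted - reference) / reference))

# Toy example: one bite every 20 s over a ~10-minute meal -> roughly 3 bites/min
bites = list(np.arange(0, 600, 20.0))
episodes = cluster_bites_into_episodes(bites)
speed = eating_speed_bites_per_min(episodes[0])
print(speed, mape(np.array([speed]), np.array([3.0])))
```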

2.
IEEE J Biomed Health Inform; 28(2): 1000-1011, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38051610

ABSTRACT

Unhealthy dietary habits are considered the primary cause of various chronic diseases, including obesity and diabetes. Automatic food intake monitoring systems have the potential to improve the quality of life (QoL) of people with diet-related diseases through dietary assessment. In this work, we propose a novel contactless radar-based approach for food intake monitoring. Specifically, a Frequency Modulated Continuous Wave (FMCW) radar sensor is employed to recognize fine-grained eating and drinking gestures. A fine-grained eating/drinking gesture comprises the series of movements from raising the hand to the mouth until moving the hand away from the mouth. A 3D temporal convolutional network with self-attention (3D-TCN-Att) is developed to detect and segment eating and drinking gestures in meal sessions by processing the Range-Doppler Cube (RD Cube). Unlike previous radar-based research, this work collects data in continuous meal sessions, a more realistic scenario. We create a public dataset comprising 70 meal sessions (4,132 eating gestures and 893 drinking gestures) from 70 participants with a total duration of 1,155 minutes. Four eating styles (fork & knife, chopsticks, spoon, hand) are included in this dataset. To validate the performance of the proposed approach, seven-fold cross-validation is applied. The 3D-TCN-Att model achieves segmental F1-scores of 0.896 and 0.868 for eating and drinking gestures, respectively. These results indicate the feasibility of using radar for fine-grained eating and drinking gesture detection and segmentation in meal sessions.
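As an illustration of how the reported metric could be computed, the following sketch implements a segmental F1-score that matches predicted gesture segments to ground-truth segments via an IoU threshold. The 0.5 threshold and the one-to-one greedy matching rule are common choices assumed here, not necessarily the authors' exact evaluation protocol.

```python
# Hedged sketch of a segmental F1-score for gesture segments (assumed matching rule).
from typing import List, Tuple

Segment = Tuple[float, float]  # (start_s, end_s) of one eating or drinking gesture

def iou(a: Segment, b: Segment) -> float:
    """Temporal intersection-over-union of two segments."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def segmental_f1(pred: List[Segment], truth: List[Segment], iou_thr: float = 0.5) -> float:
    """Count a predicted segment as a true positive if it overlaps an unmatched
    ground-truth segment with IoU >= iou_thr; each truth segment matches once."""
    matched = set()
    tp = 0
    for p in pred:
        best, best_iou = None, 0.0
        for i, g in enumerate(truth):
            if i in matched:
                continue
            v = iou(p, g)
            if v > best_iou:
                best, best_iou = i, v
        if best is not None and best_iou >= iou_thr:
            matched.add(best)
            tp += 1
    fp = len(pred) - tp
    fn = len(truth) - tp
    return 2 * tp / max(2 * tp + fp + fn, 1)

# Example: one well-localized prediction, one spurious prediction, one missed gesture
print(segmental_f1([(0.0, 2.0), (5.0, 6.0)], [(0.2, 2.1), (8.0, 9.0)]))  # 0.5
```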


Subject(s)
Gestures, Quality of Life, Humans, Radar, Hand, Upper Extremity
3.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 1778-1782, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36085938

ABSTRACT

Maintaining adequate hydration is important for health, and inadequate liquid intake can cause dehydration problems. Despite the increasing development of liquid intake monitoring, drinking detection under free-living conditions remains an open challenge. This paper proposes an automatic liquid intake monitoring system based on wrist-worn Inertial Measurement Units (IMUs) to recognize drinking gestures in free-living environments. We build an end-to-end approach for drinking gesture detection by employing a novel multi-stage temporal convolutional network (MS-TCN). Two datasets are collected in this research: one contains 8.9 hours of data from 13 participants in semi-controlled environments, and the other contains 45.2 hours of data from 7 participants in free-living environments. Leave-One-Subject-Out (LOSO) evaluation shows that this method achieves segmental F1-scores of 0.943 and 0.900 on the semi-controlled and free-living datasets, respectively. The results also indicate that our approach outperforms a combined convolutional neural network and long short-term memory model (CNN-LSTM) on our datasets. The dataset used in this paper is available at https://github.com/Pituohai/drinking-gesture-dataset/. Clinical Relevance: This automatic liquid intake monitoring system can detect drinking gestures in daily life. It has the potential to record drinking frequency for at-risk elderly people or hospitalized patients.
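The sketch below illustrates the Leave-One-Subject-Out (LOSO) protocol mentioned above on placeholder data. The feature shapes, labels, and the simple classifier are stand-ins chosen for brevity; the paper's actual model is an MS-TCN operating on IMU sequences.

```python
# Minimal sketch of Leave-One-Subject-Out evaluation (placeholder data and model).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def loso_evaluate(features: np.ndarray, labels: np.ndarray, subject_ids: np.ndarray) -> list:
    """Train on all subjects except one, test on the held-out subject, repeat."""
    scores = []
    for subject in np.unique(subject_ids):
        test_mask = subject_ids == subject
        model = LogisticRegression(max_iter=1000)
        model.fit(features[~test_mask], labels[~test_mask])
        pred = model.predict(features[test_mask])
        scores.append(f1_score(labels[test_mask], pred))
    return scores

# Toy example: random IMU-like window features, binary drink / no-drink labels
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))           # 300 windows, 12 summary features each
y = rng.integers(0, 2, size=300)         # drinking-gesture label per window
subjects = rng.integers(0, 7, size=300)  # 7 participants, as in the free-living set
print(np.mean(loso_evaluate(X, y, subjects)))
```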


Subject(s)
Gestures, Wrist, Aged, Eating, Humans, Neural Networks (Computer), Wrist Joint