Results 1 - 2 of 2
1.
Public Health Nutr ; : 1-11, 2022 May 26.
Article in English | MEDLINE | ID: mdl-35616087

ABSTRACT

OBJECTIVE: Passive, wearable sensors can be used to obtain objective information on infant feeding, but their use has not been tested. Our objective was to compare assessment of infant feeding (frequency, duration and cues) by maternal self-report with that of the Automatic Ingestion Monitor-2 (AIM-2). DESIGN: A cross-sectional pilot study was conducted in Ghana. Mothers wore the AIM-2 on eyeglasses for 1 d during waking hours to assess infant feeding using images automatically captured by the device every 15 s. Feasibility was assessed using compliance with wearing the device. Infant feeding practices captured in the AIM-2 images were annotated by a trained evaluator and compared with maternal self-report via an interviewer-administered questionnaire. SETTING: Rural and urban communities in Ghana. PARTICIPANTS: Thirty-eight breast-feeding mothers (eighteen rural and twenty urban) of infants aged ≤7 months. RESULTS: Twenty-five mothers reported exclusive breast-feeding, which was common among those <30 years of age (n 15, 60 %) and those residing in urban communities (n 14, 70 %). Compliance with wearing the AIM-2 was high (83 % of wake-time), suggesting low user burden. Maternal report differed from the AIM-2 data: mothers reported a higher mean breast-feeding frequency (eleven v. eight times, P = 0·041) and duration (18·5 v. 10 min, P = 0·007) during waking hours. CONCLUSION: The AIM-2 was a feasible, passive and objective tool for assessing infant feeding among mothers in Ghana and identified overestimation of self-reported breast-feeding frequency and duration. Future studies using the AIM-2 are warranted to determine validity on a larger scale.

2.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 4191-4195, 2020 07.
Article in English | MEDLINE | ID: mdl-33018921

ABSTRACT

With technological advancement, wearable egocentric camera systems have been studied extensively as food-intake monitoring devices for the assessment of eating behavior. This paper provides a detailed description of the implementation of a CNN-based image classifier on a Cortex-M7 microcontroller. The proposed network classifies images captured by the wearable egocentric camera as food or no-food images in real time. Real-time food-image detection can allow monitoring devices to consume less power and less storage, and to be more user-friendly in terms of privacy, by saving only the images detected as food images. A derivative of a pre-trained MobileNet is trained to detect food images among the camera-captured images. The proposed network requires 761.99 KB of flash and 501.76 KB of RAM, reflecting a trade-off between accuracy, computational cost, and memory footprint chosen for implementation on a Cortex-M7 microcontroller. The image classifier achieved an average precision of 82 % ± 3 % and an average F-score of 74 % ± 2 % when tested on 15,343 images (2,127 food and 13,216 no-food) spanning five full days collected from five participants.
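The precision and F-score reported for a binary food/no-food classifier are derived from confusion-matrix counts. A minimal sketch of that computation (the counts below are hypothetical illustrations, not the paper's actual confusion matrix):

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Compute precision, recall, and F1 for a binary (food vs. no-food) classifier.

    tp: food images correctly detected as food
    fp: no-food images incorrectly detected as food
    fn: food images missed by the classifier
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for illustration only; the true positives plus false
# negatives here sum to the 2,127 food images mentioned in the abstract.
p, r, f1 = precision_recall_f1(tp=1700, fp=400, fn=427)
print(round(p, 3), round(r, 3), round(f1, 3))
```

Note that F1 is the harmonic mean of precision and recall, so on a heavily imbalanced test set (13,216 no-food v. 2,127 food images) it is a more informative summary than raw accuracy, which a classifier could inflate by predicting "no-food" everywhere.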


Subjects
Feeding Behavior , Wearable Electronic Devices , Data Collection , Eating , Food , Humans