Results 1 - 4 of 4
1.
J Vis Exp ; (169)2021 03 13.
Article in English | MEDLINE | ID: mdl-33779595

ABSTRACT

Due to the issues and costs associated with manual dietary assessment approaches, automated solutions are required to ease and speed up the work and increase its quality. Today, automated solutions are able to record a person's dietary intake in a much simpler way, such as by taking an image with a smartphone camera. In this article, we will focus on such image-based approaches to dietary assessment. For the food image recognition problem, deep neural networks have achieved the state of the art in recent years, and we present our work in this field. In particular, we first describe the method for food and beverage image recognition using a deep neural network architecture, called NutriNet. This method, like most research done in the early days of deep learning-based food image recognition, is limited to one output per image, and therefore unsuitable for images with multiple food or beverage items. That is why approaches that perform food image segmentation are considerably more robust, as they are able to identify any number of food or beverage items in the image. We therefore also present two methods for food image segmentation - one is based on fully convolutional networks (FCNs), and the other on deep residual networks (ResNet).
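The distinction the abstract draws between single-output recognition and segmentation can be illustrated with a toy sketch (not code from the paper; the label map and class IDs are hypothetical). A per-pixel label map preserves every item in the image, while a single-output classifier can only report one class:

```python
from collections import Counter

# Toy 4x4 per-pixel label map, as a segmentation model might produce
# (0 = background; other integers are hypothetical food-class IDs).
label_map = [
    [0, 1, 1, 0],
    [0, 1, 1, 2],
    [3, 3, 2, 2],
    [3, 3, 0, 0],
]

def items_in_image(label_map, background=0):
    """Return the distinct food/beverage classes present and their pixel counts."""
    counts = Counter(px for row in label_map for px in row if px != background)
    return dict(counts)

def single_label(label_map, background=0):
    """A single-output classifier can only report one (e.g. dominant) class."""
    counts = items_in_image(label_map, background)
    return max(counts, key=counts.get)

print(items_in_image(label_map))  # three distinct items with pixel counts
print(single_label(label_map))    # only one of them survives
```

This is why segmentation-based approaches (FCN, ResNet) handle real-world plates with multiple foods, where a single-output network necessarily discards all but one item.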


Subjects
Beverages/analysis , Food Analysis/methods , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Nutrition Assessment , Smartphone/statistics & numerical data , Humans
2.
Br J Nutr ; 126(10): 1489-1497, 2021 11 28.
Article in English | MEDLINE | ID: mdl-33509307

ABSTRACT

As individuals seek increasingly individualised nutrition and lifestyle guidance, numerous apps and nutrition programmes have emerged. However, complex individual variations in dietary behaviours, genotypes, gene expression and composition of the microbiome are increasingly recognised. Advances in digital tools and artificial intelligence can help individuals more easily track nutrient intakes and identify nutritional gaps. However, the influence of these nutrients on health outcomes can vary widely among individuals depending upon life stage, genetics and microbial composition. For example, folate may elicit favourable epigenetic effects on brain development during a critical developmental time window of pregnancy. Genes affecting vitamin B12 metabolism may lead to cardiometabolic traits that play an essential role in the context of obesity. Finally, an individual's gut microbial composition can determine their response to dietary fibre interventions during weight loss. These recent advances in understanding can lead to a more complete and integrated approach to promoting optimal health through personalised nutrition, in clinical practice settings and for individuals in their daily lives. The purpose of this review is to summarise presentations made during the DSM Science and Technology Award Symposium at the 13th European Nutrition Conference, which focused on personalised nutrition and novel technologies for health in the modern world.


Subjects
Diet , Gastrointestinal Microbiome , Nutrients/administration & dosage , Nutrigenomics , Dietary Fiber , Humans , Precision Medicine
3.
Public Health Nutr ; 22(7): 1193-1202, 2019 05.
Article in English | MEDLINE | ID: mdl-29623869

ABSTRACT

OBJECTIVE: The present study tested the combination of an established and a validated food-choice research method (the 'fake food buffet') with a new food-matching technology to automate the data collection and analysis. DESIGN: The methodology combines fake-food image recognition using deep learning and food matching and standardization based on natural language processing. The former is specific because it uses a single deep learning network to perform both the segmentation and the classification at the pixel level of the image. To assess its performance, measures based on the standard pixel accuracy and Intersection over Union were applied. Food matching firstly describes each of the recognized food items in the image and then matches the food items with their compositional data, considering both their food names and their descriptors. RESULTS: The final accuracy of the deep learning model trained on fake-food images acquired by 124 study participants and providing fifty-five food classes was 92.18%, while the food matching was performed with a classification accuracy of 93%. CONCLUSIONS: The present findings are a step towards automating dietary assessment and food-choice research. The methodology outperforms other approaches in pixel accuracy, and since it is the first automatic solution for recognizing the images of fake foods, the results could be used as a baseline for possible future studies. As the approach enables a semi-automatic description of recognized food items (e.g. with respect to FoodEx2), these can be linked to any food composition database that applies the same classification and description system.
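The two evaluation measures the study applies, pixel accuracy and Intersection over Union, have standard definitions that can be sketched in a few lines (an illustrative implementation over flattened masks, not the authors' code; class IDs and masks below are made up):

```python
def pixel_accuracy(pred, target):
    """Fraction of pixels whose predicted class matches the ground truth."""
    assert len(pred) == len(target)
    correct = sum(p == t for p, t in zip(pred, target))
    return correct / len(target)

def iou(pred, target, cls):
    """Intersection over Union for one class over flattened label masks."""
    inter = sum(p == cls and t == cls for p, t in zip(pred, target))
    union = sum(p == cls or t == cls for p, t in zip(pred, target))
    return inter / union if union else 0.0

# Flattened 2x4 masks with hypothetical class IDs (0 = background).
pred   = [0, 1, 1, 2, 0, 1, 2, 2]
target = [0, 1, 1, 1, 0, 1, 2, 2]

print(pixel_accuracy(pred, target))  # one mismatched pixel out of eight
classes = sorted(set(target) - {0})
mean_iou = sum(iou(pred, target, c) for c in classes) / len(classes)
print(mean_iou)
```

Pixel accuracy alone can be inflated by large background regions, which is why per-class IoU (and its mean) is typically reported alongside it.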


Subjects
Deep Learning , Diet Records , Image Processing, Computer-Assisted , Natural Language Processing , Algorithms , Food Preferences , Humans , Nutrition Assessment
4.
Nutrients ; 9(7)2017 Jun 27.
Article in English | MEDLINE | ID: mdl-28653995

ABSTRACT

Automatic food image recognition systems are alleviating the process of food-intake estimation and dietary assessment. However, due to the nature of food images, their recognition is a particularly challenging task, which is why traditional approaches in the field have achieved a low classification accuracy. Deep neural networks have outperformed such solutions, and we present a novel approach to the problem of food and drink image detection and recognition that uses a newly-defined deep convolutional neural network architecture, called NutriNet. This architecture was tuned on a recognition dataset containing 225,953 512 × 512 pixel images of 520 different food and drink items from a broad spectrum of food groups, on which we achieved a classification accuracy of 86.72%, along with an accuracy of 94.47% on a detection dataset containing 130,517 images. We also performed a real-world test on a dataset of self-acquired images, combined with images from Parkinson's disease patients, all taken using a smartphone camera, achieving a top-five accuracy of 55%, which is an encouraging result for real-world images. Additionally, we tested NutriNet on the University of Milano-Bicocca 2016 (UNIMIB2016) food image dataset, on which we improved upon the provided baseline recognition result. An online training component was implemented to continually fine-tune the food and drink recognition model on new images. The model is being used in practice as part of a mobile app for the dietary assessment of Parkinson's disease patients.
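The top-five accuracy reported for the real-world test counts a prediction as correct if the true class appears anywhere in the model's five highest-scoring classes. A minimal sketch of the metric (illustrative only; the scores, class IDs, and labels below are invented, not from the paper):

```python
def topk_accuracy(scores, labels, k=5):
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    hits = 0
    for row, label in zip(scores, labels):
        topk = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        hits += label in topk
    return hits / len(labels)

# Two samples over six hypothetical classes, scores as a classifier might emit.
scores = [
    [0.05, 0.10, 0.40, 0.20, 0.15, 0.10],  # true class 3 is second-highest
    [0.50, 0.05, 0.05, 0.10, 0.25, 0.05],  # true class 1 scores poorly
]
labels = [3, 1]

print(topk_accuracy(scores, labels, k=1))  # strict top-1 classification accuracy
print(topk_accuracy(scores, labels, k=5))  # the more forgiving top-5 measure
```

With 520 possible classes and uncontrolled smartphone photos, the gap between top-1 and top-5 accuracy is why top-5 is the customary figure for such real-world evaluations.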


Subjects
Beverages , Food , Image Processing, Computer-Assisted , Machine Learning , Nutrition Assessment , Algorithms , Computer Simulation , Internet , Mobile Applications , Neural Networks, Computer , Smartphone