Automated detection and recognition system for chewable food items using advanced deep learning models.
Kumar, Yogesh; Koul, Apeksha; Kamini; Wozniak, Marcin; Shafi, Jana; Ijaz, Muhammad Fazal.
Affiliation
  • Kumar Y; Department of CSE, School of Technology, Pandit Deendayal Energy University, Gandhinagar, Gujarat, India.
  • Koul A; Department of Computer Science and Engineering, Punjabi University, Patiala, Punjab, India.
  • Kamini; Southern Alberta Institute of Technology, Calgary, Alberta, Canada.
  • Wozniak M; Faculty of Applied Mathematics, Silesian University of Technology, Kaszubska 23, 44100, Gliwice, Poland. marcin.wozniak@polsl.pl.
  • Shafi J; Department of Computer Engineering and Information, College of Engineering in Wadi Al Dawasir, Prince Sattam Bin Abdulaziz University, 11991, Wadi Al Dawasir, Saudi Arabia.
  • Ijaz MF; School of IT and Engineering, Melbourne Institute of Technology, Melbourne, 3000, Australia. mfazal@mit.edu.au.
Sci Rep; 14(1): 6589, 2024 Mar 19.
Article in En | MEDLINE | ID: mdl-38504098
ABSTRACT
Identifying and recognizing food on the basis of its eating sounds is a challenging task, and it plays an important role in avoiding allergenic foods, supporting the dietary preferences of people restricted to a particular diet, showcasing cultural significance, and more. The aim of this research paper is to design a novel methodology that identifies food items by analyzing their eating sounds with various deep learning models. To achieve this objective, a system is proposed that extracts meaningful features from food-eating sounds using signal processing techniques and classifies them into their respective food classes with deep learning models. Initially, 1200 audio files for 20 labeled food items were collected and visualized to find relationships between the sound files of different food items. Next, techniques such as spectrograms, spectral rolloff, spectral bandwidth, and mel-frequency cepstral coefficients were used both to clean the audio files and to capture the unique characteristics of different food items. In the next phase, deep learning models such as GRU, LSTM, InceptionResNetV2, and a customized CNN were trained to learn spectral and temporal patterns in the audio signals. In addition, hybrid models, i.e. Bidirectional LSTM + GRU, RNN + Bidirectional LSTM, and RNN + Bidirectional GRU, were evaluated on the same labeled data to associate particular sound patterns with their corresponding food classes. During evaluation, the highest accuracy (99.28%) was obtained by GRU, the highest precision (97.7%) and F1 score (97.3%) by Bidirectional LSTM + GRU, and the highest recall (97.45%) by RNN + Bidirectional LSTM. These results demonstrate that deep learning models can precisely identify foods on the basis of their eating sounds.
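The mel-frequency cepstral coefficient (MFCC) features mentioned in the abstract can be illustrated with a minimal numpy-only sketch. This is not the authors' implementation: the frame length, hop size, filter count, and the synthetic "crunch" signal below are illustrative assumptions, and the pipeline (frame, window, power spectrum, mel filterbank, log, DCT) is the standard MFCC recipe rather than anything specific to the paper.

```python
import numpy as np

def mel_filterbank(n_filters, n_fft, sr, fmin=0.0, fmax=None):
    """Triangular mel filterbank, shape (n_filters, n_fft // 2 + 1)."""
    fmax = fmax or sr / 2
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    hz_pts = inv_mel(np.linspace(mel(fmin), mel(fmax), n_filters + 2))
    bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):              # rising slope of triangle i
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):              # falling slope of triangle i
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def mfcc(signal, sr, n_mfcc=13, frame_len=1024, hop=512, n_filters=26):
    """MFCCs for a mono signal: frame -> window -> |FFT|^2 -> mel -> log -> DCT."""
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    window = np.hanning(frame_len)
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n=frame_len, axis=1)) ** 2
    fb = mel_filterbank(n_filters, frame_len, sr)
    log_mel = np.log(power @ fb.T + 1e-10)
    # Type-II DCT via an explicit cosine basis (keeps the example numpy-only)
    n = np.arange(n_filters)
    basis = np.cos(np.pi * np.outer(np.arange(n_mfcc), (2 * n + 1) / (2 * n_filters)))
    return log_mel @ basis.T               # shape: (n_frames, n_mfcc)

# Demo: one second of a synthetic, decaying noise burst (a stand-in "crunch")
sr = 16000
rng = np.random.default_rng(0)
sig = rng.standard_normal(sr) * np.exp(-np.linspace(0, 8, sr))
feats = mfcc(sig, sr)
print(feats.shape)                         # (n_frames, n_mfcc)
```

Each row of `feats` summarizes one short frame of audio, so a sequence model such as a GRU or LSTM can be trained directly on this (frames x coefficients) matrix, which is the general shape of the pipeline the abstract describes.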

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Deep Learning Limit: Humans Language: En Journal: Sci Rep Publication year: 2024 Document type: Article Country of affiliation: India