The Acoustic Dissection of Cough: Diving Into Machine Listening-based COVID-19 Analysis and Detection.
Ren, Zhao; Chang, Yi; Bartl-Pokorny, Katrin D; Pokorny, Florian B; Schuller, Björn W.
Affiliation
  • Ren Z; EIHW - Chair of Embedded Intelligence for Health Care and Wellbeing, University of Augsburg, Augsburg, Germany; L3S Research Center, Hannover, Germany. Electronic address: zren@l3s.de.
  • Chang Y; GLAM - Group on Language, Audio, & Music, Imperial College London, London, United Kingdom.
  • Bartl-Pokorny KD; EIHW - Chair of Embedded Intelligence for Health Care and Wellbeing, University of Augsburg, Augsburg, Germany; Division of Phoniatrics, Medical University of Graz, Graz, Austria; Division of Physiology, Medical University of Graz, Graz, Austria. Electronic address: katrin.bartl-pokorny@medunigraz.a
  • Pokorny FB; EIHW - Chair of Embedded Intelligence for Health Care and Wellbeing, University of Augsburg, Augsburg, Germany; Division of Phoniatrics, Medical University of Graz, Graz, Austria; Division of Physiology, Medical University of Graz, Graz, Austria.
  • Schuller BW; EIHW - Chair of Embedded Intelligence for Health Care and Wellbeing, University of Augsburg, Augsburg, Germany; GLAM - Group on Language, Audio, & Music, Imperial College London, London, United Kingdom.
J Voice; 2022 Jun 15.
Article in En | MEDLINE | ID: mdl-35835648
OBJECTIVES: The coronavirus disease 2019 (COVID-19) has caused a worldwide crisis. Enormous efforts have been made to prevent and control its transmission, from early screening to vaccination and treatment. With the recent emergence of many automatic disease recognition applications based on machine listening techniques, detecting COVID-19 from recordings of cough, a key symptom of the disease, could be fast and inexpensive. To date, knowledge of the acoustic characteristics of COVID-19 cough sounds is limited but would be essential for building effective and robust machine learning models. The present study aims to explore acoustic features for distinguishing COVID-19 positive individuals from COVID-19 negative individuals based on their cough sounds.
METHODS: Applying conventional inferential statistics, we analyze the acoustic correlates of COVID-19 cough sounds based on the ComParE feature set, i.e., a standardized set of 6,373 higher-level acoustic features. Furthermore, we train automatic COVID-19 detection models with machine learning methods and explore the latent features by evaluating the contribution of each feature to the COVID-19 status predictions.
RESULTS: The experimental results demonstrate that a set of acoustic parameters of cough sounds, e.g., statistical functionals of the root mean square energy and Mel-frequency cepstral coefficients, carries essential acoustic information, in terms of effect sizes, for differentiating between COVID-19 positive and COVID-19 negative cough samples. Our general automatic COVID-19 detection model performs significantly above chance level, achieving an unweighted average recall (UAR) of 0.632 on a data set of 1,411 cough samples (COVID-19 positive/negative: 210/1,201).
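The UAR reported above is the mean of per-class recalls, which keeps chance level at 0.5 for a two-class task even on a heavily imbalanced data set such as this one (210 positive vs. 1,201 negative samples). A minimal sketch of the metric, using only the Python standard library (illustrative, not code from the study):

```python
from collections import defaultdict

def unweighted_average_recall(y_true, y_pred):
    """Mean of per-class recalls. Unlike accuracy, each class
    contributes equally regardless of how many samples it has."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)

# Toy imbalanced example (positive/negative: 2/8): accuracy is 0.9,
# but UAR exposes that only half the positives were detected.
y_true = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
y_pred = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
print(round(unweighted_average_recall(y_true, y_pred), 3))  # 0.75
```

A model predicting the majority class everywhere would reach 0.917 accuracy here but only 0.5 UAR, which is why UAR is the standard metric for such skewed health-related audio tasks.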
CONCLUSIONS: Based on the analysis of acoustic correlates in the ComParE feature set and the feature analysis in the effective COVID-19 detection approach, we find that several acoustic features showing larger effects in conventional group difference testing are also weighted more heavily in the machine learning models.
Full text: 1 Collection: 01-international Database: MEDLINE Study type: Diagnostic_studies / Prognostic_studies Language: En Journal: J Voice Journal subject: OTOLARYNGOLOGY Year: 2022 Document type: Article Country of publication: United States