Brain-optimized extraction of complex sound features that drive continuous auditory perception.
Berezutskaya, Julia; Freudenburg, Zachary V; Güçlü, Umut; van Gerven, Marcel A J; Ramsey, Nick F.
Affiliations
  • Berezutskaya J; Department of Neurology and Neurosurgery, Brain Center, University Medical Center Utrecht, Utrecht, The Netherlands.
  • Freudenburg ZV; Department of Neurology and Neurosurgery, Brain Center, University Medical Center Utrecht, Utrecht, The Netherlands.
  • Güçlü U; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands.
  • van Gerven MAJ; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands.
  • Ramsey NF; Department of Neurology and Neurosurgery, Brain Center, University Medical Center Utrecht, Utrecht, The Netherlands.
PLoS Comput Biol. 2020 Jul;16(7):e1007992.
Article in English | MEDLINE | ID: mdl-32614826
ABSTRACT
Understanding how the human brain processes auditory input remains a challenge. Traditionally, a distinction between lower- and higher-level sound features is made, but their definition depends on a specific theoretical framework and might not match the neural representation of sound. Here, we postulate that constructing a data-driven neural model of auditory perception, with a minimum of theoretical assumptions about the relevant sound features, could provide an alternative approach and possibly a better match to the neural responses. We collected electrocorticography recordings from six patients who watched a long-duration feature film. The raw movie soundtrack was used to train an artificial neural network model to predict the associated neural responses. The model achieved high prediction accuracy and generalized well to a second dataset, where new participants watched a different film. The extracted bottom-up features captured acoustic properties that were specific to the type of sound and were associated with various response latency profiles and distinct cortical distributions. Specifically, several features encoded speech-related acoustic properties with some features exhibiting shorter latency profiles (associated with responses in posterior perisylvian cortex) and others exhibiting longer latency profiles (associated with responses in anterior perisylvian cortex). Our results support and extend the current view on speech perception by demonstrating the presence of temporal hierarchies in the perisylvian cortex and involvement of cortical sites outside of this region during audiovisual speech perception.
Subjects

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Auditory Cortex / Auditory Perception / Sound / Neural Networks, Computer / Models, Neurological Study type: Prognostic_studies Limits: Adolescent / Adult / Female / Humans / Male Language: En Journal: PLoS Comput Biol Journal subject: BIOLOGY / MEDICAL INFORMATICS Publication year: 2020 Document type: Article Affiliation country: Netherlands
