Comparison of speech and music input in North American infants' home environment over the first 2 years of life.
Hippe, Lindsay; Hennessy, Victoria; Ramirez, Naja Ferjan; Zhao, T Christina.
Affiliation
  • Hippe L; Institute for Learning and Brain Sciences, University of Washington, Seattle, Washington, USA.
  • Hennessy V; Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington, USA.
  • Ramirez NF; Institute for Learning and Brain Sciences, University of Washington, Seattle, Washington, USA.
  • Zhao TC; Institute for Learning and Brain Sciences, University of Washington, Seattle, Washington, USA.
Dev Sci; e13528, 2024 May 21.
Article in En | MEDLINE | ID: mdl-38770599
ABSTRACT
Infants are immersed in a world of sounds from the moment their auditory system becomes functional, and experience with the auditory world shapes how their brain processes sounds in their environment. Across cultures, speech and music are the two dominant auditory signals in infants' daily lives. Decades of research have repeatedly shown that both the quantity and quality of speech input play critical roles in infant language development. Less is known about the music input infants receive in their environment. This study is the first to compare music input to speech input across infancy by analyzing a longitudinal dataset of daylong audio recordings collected in English-learning infants' home environments at 6, 10, 14, 18, and 24 months of age. Using a crowdsourcing approach, 643 naïve listeners annotated 12,000 short snippets (10 s) randomly sampled from the recordings using Zooniverse, an online citizen-science platform. Results show that infants overall receive significantly more speech input than music input, and this gap widens as infants get older. At every age point, infants were exposed to more music from an electronic device than from an in-person source; this pattern was reversed for speech. The percentage of input intended for infants remained stable over time for music, while that percentage significantly increased for speech. We propose possible explanations for the limited music input, compared to speech input, observed in the present (North American) dataset and discuss future directions. We also discuss the opportunities and caveats of using a crowdsourcing approach to analyze large audio datasets. A video abstract of this article can be viewed at https://youtu.be/lFj_sEaBMN4.
Full text: 1 | Collections: 01-international | Database: MEDLINE | Language: En | Publication year: 2024 | Document type: Article
