Results 1 - 9 of 9
1.
Dev Sci ; 24(6): e13121, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34060181

ABSTRACT

The power and precision with which humans link language to cognition is unique to our species. By 3-4 months of age, infants have already established this link: simply listening to human language facilitates infants' success in fundamental cognitive processes. Initially, this link to cognition is also engaged by a broader set of acoustic stimuli, including non-human primate vocalizations (but not other sounds, like backward speech). By 6 months, however, non-human primate vocalizations no longer confer this cognitive advantage, which persists for speech. What remains unknown is the mechanism by which these sounds influence infant cognition, and how this initially broader set of privileged sounds narrows to human speech alone between 4 and 6 months. Here, we recorded 4- and 6-month-olds' EEG responses to acoustic stimuli whose behavioral effects on infant object categorization have been previously established: infant-directed speech, backward speech, and non-human primate vocalizations. We document that by 6 months, infants' 4-9 Hz neural activity is modulated in response to infant-directed speech and non-human primate vocalizations (the two stimuli that initially support categorization), but that 4-9 Hz neural activity is not modulated at either age by backward speech (an acoustic stimulus that does not support categorization at either age). These results extend the prior behavioral evidence to suggest that by 6 months, speech and non-human primate vocalizations elicit distinct changes in infants' cognitive state, influencing performance on foundational cognitive tasks such as object categorization.
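A note on method: "4-9 Hz neural activity" refers to band-limited EEG power. The sketch below shows one generic way to estimate it per epoch with Welch's method and compare conditions; the sampling rate, epoch counts, and placeholder data are illustrative assumptions, not the study's pipeline.

```python
import numpy as np
from scipy.signal import welch

def band_power(epoch, fs, band=(4.0, 9.0)):
    """Integrate the Welch PSD of one EEG epoch over a frequency band."""
    freqs, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), int(fs)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])

# Placeholder data: 50 one-second epochs per condition at an assumed 500 Hz sampling rate.
fs = 500
rng = np.random.default_rng(0)
for condition in ["infant-directed speech", "backward speech", "primate vocalization"]:
    epochs = rng.standard_normal((50, fs))        # stand-in EEG, 50 epochs x 1 s
    mean_power = np.mean([band_power(e, fs) for e in epochs])
    print(f"{condition}: mean 4-9 Hz power = {mean_power:.3f}")
# A "modulation" would appear as a reliable difference in this measure across conditions.
```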


Subject(s)
Language; Speech Perception; Animals; Child Development/physiology; Cognition/physiology; Humans; Infant; Language Development; Speech/physiology; Speech Perception/physiology
2.
PLoS One ; 16(3): e0247430, 2021.
Article in English | MEDLINE | ID: mdl-33705442

ABSTRACT

Recent evidence reveals a precocious link between language and cognition in human infants: listening to their native language supports infants' core cognitive processes, including object categorization, and does so in a way that other acoustic signals (e.g., time-reversed speech; sine-wave tone sequences) do not. Moreover, language is not the only signal that confers this cognitive advantage: listening to vocalizations of non-human primates also supports object categorization in 3- and 4-month-olds. Here, we move beyond primate vocalizations to clarify the breadth of acoustic signals that promote infant cognition. We ask whether listening to birdsong, another naturally produced animal vocalization, also supports object categorization in 3- and 4-month-old infants. We report that listening to zebra finch song failed to confer a cognitive advantage. This outcome brings us closer to identifying a boundary condition on the range of non-linguistic acoustic signals that initially support infant cognition.


Subject(s)
Acoustic Stimulation/methods; Child Development/physiology; Cognition/physiology; Animals; Auditory Perception/physiology; Female; Humans; Infant; Language; Language Development; Male; Songbirds; Speech/physiology; Vocalization, Animal/physiology
3.
Brain Lang ; 164: 43-52, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27701006

ABSTRACT

Speech communication involves integration and coordination of sensory perception and motor production, requiring precise temporal coupling. Beat synchronization, the coordination of movement with a pacing sound, can be used as an index of this sensorimotor timing. We assessed adolescents' synchronization and their capacity to correct asynchronies when given online visual feedback. Variability of synchronization while receiving feedback predicted phonological memory and reading sub-skills, as well as maturation of cortical auditory processing; less variable synchronization in the presence of feedback tracked with maturation of cortical processing of sound onsets and resting gamma activity. We suggest that the ability to incorporate feedback during synchronization is an index of intentional, multimodal timing-based integration in the maturing adolescent brain. Precision of temporal coding across modalities is important for speech processing and for literacy skills that rely on dynamic interactions with sound. Synchronization employing feedback may prove useful as a remedial strategy for individuals who struggle with timing-based language learning impairments.
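For readers unfamiliar with the metric, "variability of synchronization" is commonly indexed as the spread of tap-to-beat asynchronies. The sketch below computes that index for hypothetical tap and beat times; it is a generic illustration, not the study's scoring procedure.

```python
import numpy as np

def synchronization_variability(tap_times, beat_times):
    """SD of signed asynchronies (s): each tap is paired with its nearest beat."""
    tap_times = np.asarray(tap_times)
    beat_times = np.asarray(beat_times)
    nearest = beat_times[np.argmin(np.abs(tap_times[:, None] - beat_times[None, :]), axis=1)]
    asynchronies = tap_times - nearest
    return asynchronies.std(ddof=1)

# Hypothetical example: a 2 Hz pacing beat and slightly jittered taps.
beats = np.arange(0, 10, 0.5)
taps = beats + np.random.default_rng(1).normal(0.01, 0.03, size=beats.size)
print(f"synchronization variability: {synchronization_variability(taps, beats) * 1000:.1f} ms")
```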


Subject(s)
Brain/physiology; Feedback; Reading; Sound; Acoustic Stimulation; Adolescent; Auditory Perception/physiology; Female; Gamma Rhythm; Humans; Language Development; Learning; Linguistics; Male; Speech; Time Factors
4.
Hear Res ; 344: 148-157, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27864051

ABSTRACT

From bustling classrooms to unruly lunchrooms, school settings are noisy. To learn effectively in the unwelcome company of numerous distractions, children must clearly perceive speech in noise. In older children and adults, speech-in-noise perception is supported by sensory and cognitive processes, but the correlates underlying this critical listening skill in young children (3-5 year olds) remain undetermined. Employing a longitudinal design (two evaluations separated by ∼12 months), we followed a cohort of 59 preschoolers, ages 3.0-4.9, assessing word-in-noise perception, cognitive abilities (intelligence, short-term memory, attention), and neural responses to speech. Results reveal that changes in word-in-noise perception parallel changes in processing of the fundamental frequency (F0), an acoustic cue known to play a central role in speaker identification and auditory scene analysis. Four unique developmental trajectories (speech-in-noise perception groups) confirm this relationship, in that improvements and declines in word-in-noise perception couple with corresponding enhancements and reductions in F0 encoding. Improvements in word-in-noise perception also pair with gains in attention. Word-in-noise perception does not relate to the strength of neural harmonic representation or to short-term memory. These findings reinforce previously reported roles of F0 and attention in hearing speech in noise in older children and adults, and extend this relationship to preschool children.
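One common way to quantify "F0 encoding" is the spectral amplitude of the averaged neural response at the stimulus fundamental frequency. The sketch below illustrates that computation on fabricated data; the 100 Hz F0, analysis window, and sampling rate are assumptions for illustration only.

```python
import numpy as np

def f0_encoding_strength(response, fs, f0=100.0, half_width=5.0):
    """Spectral amplitude of an averaged neural response in a window around F0 (a.u.)."""
    spectrum = np.abs(np.fft.rfft(response * np.hanning(len(response))))
    freqs = np.fft.rfftfreq(len(response), d=1.0 / fs)
    mask = (freqs >= f0 - half_width) & (freqs <= f0 + half_width)
    return spectrum[mask].mean()

# Hypothetical averaged response: a weak 100 Hz component buried in noise, 20 kHz sampling.
fs, dur, f0 = 20000, 0.17, 100.0
t = np.arange(0, dur, 1.0 / fs)
resp = 0.05 * np.sin(2 * np.pi * f0 * t) + np.random.default_rng(2).normal(0, 0.2, t.size)
print(f"F0 amplitude: {f0_encoding_strength(resp, fs, f0):.4f}")
```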


Subject(s)
Attention; Auditory Pathways/physiology; Individuality; Neurons/physiology; Noise/adverse effects; Perceptual Masking; Speech Perception; Acoustic Stimulation; Age Factors; Speech Audiometry; Auditory Pathways/cytology; Child Behavior; Child Development; Child, Preschool; Comprehension; Evoked Potentials, Auditory, Brain Stem; Female; Humans; Male; Speech Acoustics; Speech Intelligibility; Voice Quality
5.
Sci Rep ; 6: 19737, 2016 Jan 25.
Article in English | MEDLINE | ID: mdl-26804355

ABSTRACT

Speech signals contain information in hierarchical time scales, ranging from short-duration (e.g., phonemes) to long-duration cues (e.g., syllables, prosody). A theoretical framework to understand how the brain processes this hierarchy suggests that hemispheric lateralization enables specialized tracking of acoustic cues at different time scales, with the left and right hemispheres sampling at short (25 ms; 40 Hz) and long (200 ms; 5 Hz) periods, respectively. In adults, both speech-evoked and endogenous cortical rhythms are asymmetrical: low-frequency rhythms predominate in right auditory cortex, and high-frequency rhythms in left auditory cortex. It is unknown, however, whether endogenous resting state oscillations are similarly lateralized in children. We investigated cortical oscillations in children (3-5 years; N = 65) at rest and tested our hypotheses that this temporal asymmetry is evident early in life and facilitates recognition of speech in noise. We found a systematic pattern of increasing leftward asymmetry for higher frequency oscillations; this pattern was more pronounced in children who better perceived words in noise. The observed connection between left-biased cortical oscillations in phoneme-relevant frequencies and speech-in-noise perception suggests hemispheric specialization of endogenous oscillatory activity may support speech processing in challenging listening environments, and that this infrastructure is present during early childhood.
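Hemispheric asymmetry of this kind is often summarized with a laterality index, (L - R) / (L + R), computed on band power from each hemisphere. The sketch below applies it to made-up resting power values; the bands and numbers are illustrative, not the study's data.

```python
import numpy as np

def laterality_index(left_power, right_power):
    """(L - R) / (L + R): positive = leftward asymmetry, negative = rightward."""
    return (left_power - right_power) / (left_power + right_power)

# Hypothetical resting band power (a.u.) per hemisphere for a few frequency bands.
bands = {
    "theta (4-8 Hz)": (1.10, 1.25),
    "beta (15-30 Hz)": (0.80, 0.70),
    "gamma (30-50 Hz)": (0.45, 0.33),
}
for band, (left, right) in bands.items():
    print(f"{band}: laterality index = {laterality_index(left, right):+.2f}")
```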


Subject(s)
Auditory Cortex/physiology; Functional Laterality; Noise; Speech Perception; Acoustic Stimulation; Child, Preschool; Evoked Potentials, Auditory; Humans
6.
PLoS Biol ; 13(7): e1002196, 2015 Jul.
Article in English | MEDLINE | ID: mdl-26172057

ABSTRACT

Learning to read is a fundamental developmental milestone, and achieving reading competency has lifelong consequences. Although literacy development proceeds smoothly for many children, a subset struggle with this learning process, creating a need to identify reliable biomarkers of a child's future literacy that could facilitate early diagnosis and access to crucial early interventions. Neural markers of reading skills have been identified in school-aged children and adults; many pertain to the precision of information processing in noise, but it is unknown whether these markers are present in pre-reading children. Here, in a series of experiments in 112 children (ages 3-14 y), we show brain-behavior relationships between the integrity of the neural coding of speech in noise and phonology. We harness these findings into a predictive model of preliteracy, revealing that a 30-min neurophysiological assessment predicts performance on multiple pre-reading tests and, one year later, predicts preschoolers' performance across multiple domains of emergent literacy. This same neural coding model predicts literacy and diagnosis of a learning disability in school-aged children. These findings offer new insight into the biological constraints on preliteracy during early childhood, suggesting that neural processing of consonants in noise is fundamental for language and reading development. Pragmatically, these findings open doors to early identification of children at risk for language learning problems; this early identification may in turn facilitate access to early interventions that could prevent a life spent struggling to read.
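The "predictive model" step can be pictured abstractly as a regression from neural coding measures onto later literacy scores. The sketch below fits ordinary least squares to fabricated numbers purely to show the modeling step; the predictors, coefficients, and sample are invented, not the authors' actual model or variables.

```python
import numpy as np

rng = np.random.default_rng(3)
n_children = 112

# Hypothetical predictors: a consonant-in-noise coding measure and child age (months).
neural_precision = rng.normal(0.0, 1.0, n_children)
age_months = rng.uniform(36, 168, n_children)
X = np.column_stack([np.ones(n_children), neural_precision, age_months])

# Fabricated outcome: an emergent-literacy composite partly driven by the neural measure.
literacy = 50 + 6 * neural_precision + 0.05 * age_months + rng.normal(0, 4, n_children)

coef, *_ = np.linalg.lstsq(X, literacy, rcond=None)
predicted = X @ coef
r = np.corrcoef(predicted, literacy)[0, 1]
print(f"coefficients (intercept, neural, age): {np.round(coef, 2)}")
print(f"in-sample correlation between predicted and observed literacy: r = {r:.2f}")
```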


Subject(s)
Literacy; Noise; Speech Perception/physiology; Adolescent; Biomarkers; Child; Child, Preschool; Female; Humans; Learning Disabilities/diagnosis; Male
7.
Hear Res ; 328: 34-47, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26113025

ABSTRACT

Early childhood is a critical period of auditory learning, during which children are constantly mapping sounds to meaning. But this auditory learning rarely occurs in ideal listening conditions-children are forced to listen against a relentless din. This background noise degrades the neural coding of these critical sounds, in turn interfering with auditory learning. Despite the importance of robust and reliable auditory processing during early childhood, little is known about the neurophysiology underlying speech processing in children so young. To better understand the physiological constraints these adverse listening scenarios impose on speech sound coding during early childhood, auditory-neurophysiological responses were elicited to a consonant-vowel syllable in quiet and background noise in a cohort of typically-developing preschoolers (ages 3-5 yr). Overall, responses were degraded in noise: they were smaller, less stable across trials, slower, and there was poorer coding of spectral content and the temporal envelope. These effects were exacerbated in response to the consonant transition relative to the vowel, suggesting that the neural coding of spectrotemporally-dynamic speech features is more tenuous in noise than the coding of static features-even in children this young. Neural coding of speech temporal fine structure, however, was more resilient to the addition of background noise than coding of temporal envelope information. Taken together, these results demonstrate that noise places a neurophysiological constraint on speech processing during early childhood by causing a breakdown in neural processing of speech acoustics. These results may explain why some listeners have inordinate difficulties understanding speech in noise. Speech-elicited auditory-neurophysiological responses offer objective insight into listening skills during early childhood by reflecting the integrity of neural coding in quiet and noise; this paper documents typical response properties in this age group. These normative metrics may be useful clinically to evaluate auditory processing difficulties during early childhood.
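One of the response properties mentioned here, trial-to-trial stability, is often quantified as the correlation between averages of random half-splits of the single trials. The sketch below illustrates that measure on simulated quiet and noise conditions; the data and split scheme are assumptions, not the study's analysis.

```python
import numpy as np

def response_stability(trials, n_splits=100, seed=None):
    """Mean correlation between averages of random half-splits of single trials."""
    rng = np.random.default_rng(seed)
    n = trials.shape[0]
    rs = []
    for _ in range(n_splits):
        order = rng.permutation(n)
        a = trials[order[: n // 2]].mean(axis=0)
        b = trials[order[n // 2 :]].mean(axis=0)
        rs.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(rs))

# Hypothetical single-trial responses to a syllable: a common waveform plus noise,
# with noisier trials standing in for the background-noise condition.
rng = np.random.default_rng(4)
template = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 500))
quiet = template + rng.normal(0, 0.5, (300, 500))
noise = template + rng.normal(0, 1.5, (300, 500))
print(f"stability in quiet: {response_stability(quiet, seed=0):.2f}")
print(f"stability in noise: {response_stability(noise, seed=0):.2f}")
```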


Subject(s)
Auditory Perception; Hearing; Noise/adverse effects; Speech Perception; Acoustic Stimulation; Child, Preschool; Cohort Studies; Electrophysiology; Female; Fourier Analysis; Hearing Tests; Humans; Language; Male; Neurophysiology; Phonetics; Risk Factors; Signal Processing, Computer-Assisted; Speech; Speech Acoustics
8.
Proc Natl Acad Sci U S A ; 111(40): 14559-64, 2014 Oct 07.
Article in English | MEDLINE | ID: mdl-25246562

ABSTRACT

Temporal cues are important for discerning word boundaries and syllable segments in speech; their perception facilitates language acquisition and development. Beat synchronization and neural encoding of speech reflect precision in processing temporal cues and have been linked to reading skills. In poor readers, diminished neural precision may contribute to rhythmic and phonological deficits. Here we establish links between beat synchronization and speech processing in children who have not yet begun to read: preschoolers who can entrain to an external beat have more faithful neural encoding of temporal modulations in speech and score higher on tests of early language skills. In summary, we propose precise neural encoding of temporal modulations as a key mechanism underlying reading acquisition. Because beat synchronization abilities emerge at an early age, these findings may inform strategies for early detection of and intervention for language-based learning disabilities.
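"Neural encoding of temporal modulations" is often quantified as how well a response tracks the stimulus amplitude envelope. The sketch below correlates a simulated response with the envelope of an amplitude-modulated tone, allowing a fixed neural delay; the signals and delay are illustrative stand-ins, not the study's stimuli or analysis.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_tracking(stimulus, response, fs, lag_ms=10):
    """Correlation between the stimulus amplitude envelope and a neural response,
    allowing a fixed neural delay in ms; higher r = more faithful envelope encoding."""
    env = np.abs(hilbert(stimulus))
    shift = int(fs * lag_ms / 1000)
    return np.corrcoef(env[: len(env) - shift], response[shift:])[0, 1]

# Hypothetical data: a 4 Hz amplitude-modulated tone and a delayed, noisy neural "copy"
# of its envelope (stand-ins for a speech envelope and a child's cortical response).
fs = 1000
t = np.arange(0, 2, 1 / fs)
stim = (1 + np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 200 * t)
resp = np.roll(np.abs(hilbert(stim)), 10) + np.random.default_rng(5).normal(0, 0.3, t.size)
print(f"envelope tracking r = {envelope_tracking(stim, resp, fs):.2f}")
```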


Subject(s)
Neural Pathways/physiology; Reading; Speech Perception/physiology; Speech/physiology; Acoustic Stimulation/methods; Analysis of Variance; Auditory Perception/physiology; Child, Preschool; Cues; Electrodes; Electrophysiology/instrumentation; Electrophysiology/methods; Female; Humans; Language Development; Learning/physiology; Male; Phonetics
9.
J Neurosci ; 33(45): 17667-74, 2013 Nov 06.
Article in English | MEDLINE | ID: mdl-24198359

ABSTRACT

Aging results in pervasive declines in nervous system function. In the auditory system, these declines include neural timing delays in response to fast-changing speech elements; this causes older adults to experience difficulty understanding speech, especially in challenging listening environments. These age-related declines are not inevitable, however: older adults with a lifetime of music training do not exhibit neural timing delays. Yet many people play an instrument for a few years without making a lifelong commitment. Here, we examined neural timing in a group of human older adults who had nominal amounts of music training early in life, but who had not played an instrument for decades. We found that a moderate amount (4-14 years) of music training early in life is associated with faster neural timing in response to speech later in life, long after training stopped (>40 years). We suggest that early music training sets the stage for subsequent interactions with sound. These experiences may interact over time to sustain sharpened neural processing in central auditory nuclei well into older age.
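"Neural timing" here refers to the latency of characteristic peaks in the averaged response. The sketch below measures peak latency within a post-stimulus window for two simulated waveforms, one shifted slightly earlier to mimic the trained group; the waveforms, window, and shift are illustrative assumptions.

```python
import numpy as np

def peak_latency_ms(response, fs, window_ms=(20, 60)):
    """Latency (ms) of the largest peak within a post-stimulus search window."""
    start, stop = (int(fs * m / 1000) for m in window_ms)
    segment = response[start:stop]
    return (start + int(np.argmax(segment))) * 1000.0 / fs

# Hypothetical averaged responses: the "trained" waveform peaks ~0.3 ms earlier.
fs = 20000
t = np.arange(0, 0.1, 1 / fs) * 1000                    # time axis in ms
untrained = np.exp(-((t - 45.0) ** 2) / 2.0)
trained = np.exp(-((t - 44.7) ** 2) / 2.0)
for label, resp in [("no early training", untrained), ("early music training", trained)]:
    print(f"{label}: peak latency = {peak_latency_ms(resp, fs):.2f} ms")
```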


Subject(s)
Aging/physiology; Brain/physiology; Evoked Potentials, Auditory, Brain Stem/physiology; Neuronal Plasticity/physiology; Speech Perception/physiology; Acoustic Stimulation; Aged; Female; Humans; Male; Middle Aged; Music; Time