Results 1 - 13 of 13
1.
Dev Sci ; 27(4): e13483, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38470174

ABSTRACT

Impaired sensorimotor synchronization (SMS) to acoustic rhythm may be a marker of atypical language development. Here, motion capture was used to assess gross motor rhythmic movement at six time points between 5 and 11 months of age. Infants were recorded drumming to acoustic stimuli of varying linguistic and temporal complexity: drumbeats, repeated syllables and nursery rhymes. We show, for the first time, developmental change in infants' movement timing in response to auditory stimuli over the first year of life. Longitudinal analyses revealed that whilst infants could not yet reliably synchronize their movement to auditory rhythms, infant spontaneous motor tempo became faster with age, and by 11 months a subset of infants decelerated from their spontaneous motor tempo, bringing it closer to the incoming tempo. Further, infants became more regular drummers with age, with marked decreases in the variability of spontaneous motor tempo and in variability in response to drumbeats; this latter effect was subdued in response to linguistic stimuli. The current work lays the foundation for using individual differences in precursors of SMS in infancy to predict later language outcomes. RESEARCH HIGHLIGHTS: We present the first longitudinal investigation of infant rhythmic movement over the first year of life. Whilst infants generally move more quickly and with greater regularity over their first year, by 11 months infants begin to counter this pattern when hearing slower infant-directed song. Infant movement is more variable to speech than to non-speech stimuli. In the context of the larger Cambridge UK BabyRhythm Project, we lay the foundation for using rhythmic movement in infancy to predict later language outcomes.


Subject(s)
Acoustic Stimulation, Language Development, Speech, Humans, Infant, Longitudinal Studies, Speech/physiology, Female, Male, Child Development/physiology, Movement/physiology, Periodicity, Auditory Perception/physiology
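The tempo and regularity measures described in the abstract above can be illustrated with a minimal sketch. Assuming drum-hit onset times (in seconds) have already been extracted from the motion-capture data, spontaneous motor tempo can be summarized as the median inter-hit interval and regularity as the coefficient of variation of those intervals; the function, variable names and rejection threshold below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def tempo_and_variability(hit_times):
    """Summarize drumming from an array of drum-hit onset times (seconds).

    Returns the median inter-hit interval (a proxy for spontaneous motor
    tempo) and the coefficient of variation of the intervals (a proxy for
    movement regularity). Illustrative only; not the published pipeline.
    """
    intervals = np.diff(np.sort(np.asarray(hit_times, dtype=float)))
    intervals = intervals[intervals > 0.1]          # drop implausibly short gaps (assumed threshold)
    tempo = np.median(intervals)                    # seconds per hit
    cv = np.std(intervals) / np.mean(intervals)     # dimensionless variability
    return tempo, cv

# Example: a hypothetical 11-month-old drumming roughly every 0.6 s
hits = np.cumsum(np.random.default_rng(0).normal(0.6, 0.05, size=40))
print(tempo_and_variability(hits))
```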
2.
Dev Sci ; 27(4): e13502, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38482775

ABSTRACT

It is known that the rhythms of speech are visible on the face, accurately mirroring changes in the vocal tract. These low-frequency visual temporal movements are tightly correlated with speech output, and both visual speech (e.g., mouth motion) and the acoustic speech amplitude envelope entrain neural oscillations. Low-frequency visual temporal information ('visual prosody') is known from behavioural studies to be perceived by infants, but oscillatory studies are currently lacking. Here we measured cortical tracking of low-frequency visual temporal information by 5- and 8-month-old infants using a rhythmic speech paradigm (repetition of the syllable 'ta' at 2 Hz). Eye-tracking data were collected simultaneously with EEG, enabling computation of cortical tracking and phase angle during visual-only speech presentation. Significantly higher power at the stimulus frequency indicated that cortical tracking occurred at both ages. Further, individual differences in preferred phase to visual speech were related to subsequent measures of language acquisition. The difference in phase between visual-only speech and the same speech presented as auditory-visual at 6 and 9 months was also examined. These neural data suggest that individual differences in early language acquisition may be related to the phase of entrainment to visual rhythmic input in infancy. RESEARCH HIGHLIGHTS: Infant preferred phase to visual rhythmic speech predicts language outcomes. Significant cortical tracking of visual speech is present at 5 and 8 months. Phase angle to visual speech at 8 months predicted greater receptive and productive vocabulary at 24 months.


Subject(s)
Language Development, Speech Perception, Speech, Humans, Infant, Male, Female, Speech Perception/physiology, Speech/physiology, Electroencephalography, Individuality, Visual Perception/physiology, Eye-Tracking Technology, Acoustic Stimulation, Photic Stimulation
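As a rough illustration of the tracking measure described in the abstract above, the sketch below compares spectral power at the 2 Hz stimulation rate with the power of neighbouring FFT bins, and also extracts a preferred phase angle. The epoch dimensions, number of neighbouring bins and variable names are assumptions for the example, not the study's analysis code.

```python
import numpy as np

def tracking_power_and_phase(epochs, sfreq, stim_freq=2.0, n_neighbours=3):
    """epochs: (n_epochs, n_channels, n_samples) EEG recorded during rhythmic speech.

    Compares power at the stimulation frequency with the mean power of
    neighbouring FFT bins and returns a preferred phase angle.
    Illustrative sketch only; not the study's pipeline.
    """
    spectrum = np.fft.rfft(epochs, axis=-1)
    freqs = np.fft.rfftfreq(epochs.shape[-1], d=1.0 / sfreq)
    k = np.argmin(np.abs(freqs - stim_freq))          # FFT bin nearest the 2 Hz stimulus rate
    power = np.abs(spectrum) ** 2
    target = power[..., k].mean()
    neighbours = np.concatenate([power[..., k - n_neighbours:k],
                                 power[..., k + 1:k + 1 + n_neighbours]], axis=-1).mean()
    phase = np.angle(spectrum[..., k].mean())          # preferred phase across epochs and channels
    return target / neighbours, phase                  # ratio > 1 suggests tracking at the stimulus rate

# Hypothetical data: 30 epochs, 64 channels, 5 s of EEG at 100 Hz
rng = np.random.default_rng(1)
eeg = rng.standard_normal((30, 64, 500))
print(tracking_power_and_phase(eeg, sfreq=100.0))
```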
3.
Neuroimage ; 247: 118698, 2022 02 15.
Article in English | MEDLINE | ID: mdl-34798233

ABSTRACT

The amplitude envelope of speech carries crucial low-frequency acoustic information that assists linguistic decoding at multiple time scales. Neurophysiological signals are known to track the amplitude envelope of adult-directed speech (ADS), particularly in the theta band. Acoustic analysis of infant-directed speech (IDS) has revealed significantly greater modulation energy than in ADS in an amplitude-modulation (AM) band centred on ∼2 Hz. Accordingly, cortical tracking of IDS by delta-band neural signals may be key to language acquisition. Speech also contains acoustic information within its higher-frequency bands (beta, gamma). Adult EEG and MEG studies reveal an oscillatory hierarchy, whereby low-frequency (delta, theta) neural phase dynamics temporally organize the amplitude of high-frequency signals (phase-amplitude coupling, PAC). Whilst consensus is growing around the role of PAC in the mature adult brain, its role in the development of speech processing is unexplored. Here, we examined the presence and maturation of low-frequency (<12 Hz) cortical speech tracking in infants by recording EEG longitudinally from 60 participants at 4, 7 and 11 months of age as they listened to nursery rhymes. After establishing stimulus-related neural signals in the delta and theta bands, cortical tracking at each age was assessed in the delta, theta and alpha (control) bands using a multivariate temporal response function (mTRF) method. Delta-beta, delta-gamma, theta-beta and theta-gamma PAC was also assessed. Significant delta and theta but not alpha tracking was found. Significant PAC was present at all ages, with both delta- and theta-driven coupling observed.


Subject(s)
Delta Rhythm/physiology, Speech Perception/physiology, Theta Rhythm/physiology, Acoustic Stimulation, Auditory Cortex/physiology, Brain/physiology, Electroencephalography, Humans, Infant, Longitudinal Studies, United Kingdom
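A minimal sketch of one common way to quantify the delta-gamma phase-amplitude coupling described in the abstract above: band-pass the signal, take the Hilbert transform to obtain low-frequency phase and high-frequency amplitude, and compute a mean-vector-length modulation index. The filter settings and the MVL metric are assumptions for illustration; the paper's exact PAC implementation may differ.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, sfreq, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (sfreq / 2), hi / (sfreq / 2)], btype="band")
    return filtfilt(b, a, x)

def pac_mvl(signal, sfreq, phase_band=(0.5, 4.0), amp_band=(30.0, 45.0)):
    """Mean-vector-length PAC between low-frequency phase and high-frequency amplitude.

    Illustrative sketch only; not the study's implementation.
    """
    phase = np.angle(hilbert(bandpass(signal, *phase_band, sfreq)))   # e.g. delta phase
    amp = np.abs(hilbert(bandpass(signal, *amp_band, sfreq)))         # e.g. gamma amplitude
    return np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)

# Hypothetical single-channel EEG: 60 s at 250 Hz
rng = np.random.default_rng(2)
eeg = rng.standard_normal(60 * 250)
print(pac_mvl(eeg, sfreq=250.0))
```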
4.
PLoS Biol ; 15(4): e2000219, 2017 04.
Article in English | MEDLINE | ID: mdl-28441393

ABSTRACT

Learning complex ordering relationships between sensory events in a sequence is fundamental for animal perception and human communication. While it is known that rhythmic sensory events can entrain brain oscillations at different frequencies, how learning and prior experience with sequencing relationships affect neocortical oscillations and neuronal responses is poorly understood. We used an implicit sequence learning paradigm (an "artificial grammar") in which humans and monkeys were exposed to sequences of nonsense words with regularities in the ordering relationships between the words. We then recorded neural responses directly from the auditory cortex in both species in response to novel legal sequences or to sequences violating specific ordering relationships. Neural oscillations in both monkeys and humans in response to the nonsense word sequences showed strikingly similar hierarchically nested low-frequency phase and high-gamma amplitude coupling, establishing this form of oscillatory coupling, previously associated with speech processing in the human auditory cortex, as an evolutionarily conserved biological process. Moreover, learned ordering relationships modulated the observed form of neural oscillatory coupling in both species, with temporally distinct neural oscillatory effects that appear to coordinate neuronal responses in the monkeys. This study identifies the conserved auditory cortical neural signatures involved in monitoring learned sequencing operations, evident as modulations of transient coupling and of neuronal responses to temporally structured sensory input.


Subject(s)
Auditory Cortex/physiology, Auditory Pathways/physiology, Models, Neurological, Neurons/physiology, Neurovascular Coupling, Speech Perception, Verbal Learning, Adult, Animals, Audiometry, Evoked Response, Auditory Cortex/diagnostic imaging, Auditory Pathways/diagnostic imaging, Biological Evolution, Brain Mapping, Female, Functional Neuroimaging, Humans, Macaca mulatta, Magnetic Resonance Imaging, Male, Neural Conduction, Reaction Time, Species Specificity, Task Performance and Analysis
5.
J Neurosci Methods ; 403: 110036, 2024 03.
Article in English | MEDLINE | ID: mdl-38128783

ABSTRACT

BACKGROUND: Computational models that successfully decode neural activity into speech are increasingly common in the adult literature, with convolutional neural networks (CNNs), backward linear models, and mutual information (MI) models all being applied to neural data recorded in relation to speech input. This is not the case in the infant literature. NEW METHOD: Three different computational models, two of them novel for infants, were applied to decode low-frequency speech envelope information. Previously employed backward linear models were compared with novel CNN- and MI-based models. Fifty infants provided EEG recordings at 4, 7, and 11 months of age while listening passively to natural speech (sung or chanted nursery rhymes) presented by video with a female singer. RESULTS: Each model computed speech information for these nursery rhymes in two different low-frequency bands, delta and theta, thought to provide different types of linguistic information. All three models demonstrated significant levels of performance for delta-band neural activity from 4 months of age, with two of the three models also showing significant performance for theta-band activity. All models also demonstrated higher accuracy for the delta-band neural responses. None of the models showed developmental (age-related) effects. COMPARISONS WITH EXISTING METHODS: The data demonstrate that the choice of algorithm used to decode speech envelope information from neural activity in the infant brain determines the developmental conclusions that can be drawn. CONCLUSIONS: The modelling shows that a better understanding of the strengths and weaknesses of each modelling approach is fundamental to improving our understanding of how the human brain builds a language system.


Subject(s)
Speech Perception, Speech, Adult, Humans, Female, Infant, Speech/physiology, Electroencephalography, Linear Models, Brain, Neural Networks, Computer, Speech Perception/physiology
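The backward linear model mentioned in the abstract above can be sketched as a time-lagged ridge regression that reconstructs the speech envelope from multichannel EEG and scores the reconstruction by its correlation with the true envelope. The lag range, regularization value and train/test split below are illustrative assumptions rather than the paper's settings; dedicated toolboxes (e.g., the mTRF-Toolbox) implement this more carefully.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lagged_design(eeg, max_lag):
    """Stack time-lagged copies of each EEG channel: (n_samples, n_channels * (max_lag + 1))."""
    cols = [np.roll(eeg, lag, axis=0) for lag in range(max_lag + 1)]
    X = np.concatenate(cols, axis=1)
    X[:max_lag] = 0.0                      # zero out samples contaminated by wrap-around
    return X

def backward_decoder_score(eeg, envelope, sfreq, max_lag_s=0.25, alpha=1e3):
    """Train on the first half of the data, reconstruct the envelope on the
    second half, and return the Pearson correlation. Illustrative sketch only."""
    X = lagged_design(eeg, int(max_lag_s * sfreq))
    half = len(envelope) // 2
    model = Ridge(alpha=alpha).fit(X[:half], envelope[:half])
    recon = model.predict(X[half:])
    return np.corrcoef(recon, envelope[half:])[0, 1]

# Hypothetical data: 120 s of 64-channel EEG and a speech envelope, both at 64 Hz
rng = np.random.default_rng(3)
eeg = rng.standard_normal((120 * 64, 64))
env = rng.standard_normal(120 * 64)
print(backward_decoder_score(eeg, env, sfreq=64.0))
```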
6.
Nat Commun ; 14(1): 7789, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-38040720

ABSTRACT

Even prior to producing their first words, infants are developing a sophisticated speech processing system, with robust word recognition present by 4-6 months of age. These emergent linguistic skills, observed with behavioural investigations, are likely to rely on increasingly sophisticated neural underpinnings. The infant brain is known to robustly track the speech envelope; however, previous cortical tracking studies were unable to demonstrate the presence of phonetic feature encoding. Here we utilise temporal response functions computed from electrophysiological responses to nursery rhymes to investigate the cortical encoding of phonetic features in a longitudinal cohort of infants at 4, 7 and 11 months of age, as well as in adults. The analyses reveal an increasingly detailed and acoustically invariant phonetic encoding emerging over the first year of life, providing neurophysiological evidence that the pre-verbal human cortex learns phonetic categories. By contrast, we found no credible evidence for age-related increases in cortical tracking of the acoustic spectrogram.


Subject(s)
Auditory Cortex, Speech Perception, Adult, Infant, Humans, Phonetics, Auditory Cortex/physiology, Speech Perception/physiology, Speech/physiology, Acoustics, Acoustic Stimulation
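To illustrate the encoding-model logic described in the abstract above, the sketch below estimates a forward temporal response function: a time-lagged ridge regression from a binary phonetic-feature matrix to a single EEG channel, whose fitted weights describe that channel's response to each feature as a function of lag. The feature count, lag range and regularization are assumptions for the example; the study used a multivariate TRF over many features and channels.

```python
import numpy as np
from sklearn.linear_model import Ridge

def estimate_trf(features, eeg_channel, sfreq, max_lag_s=0.4, alpha=1e2):
    """Estimate a forward TRF mapping a (n_samples, n_features) phonetic-feature
    matrix to one EEG channel via time-lagged ridge regression.

    Returns weights of shape (n_lags + 1, n_features): the modelled response of
    this channel to each feature as a function of lag. Illustrative sketch only.
    """
    n_lags = int(max_lag_s * sfreq)
    lagged = [np.roll(features, lag, axis=0) for lag in range(n_lags + 1)]
    X = np.concatenate(lagged, axis=1)
    X[:n_lags] = 0.0                                   # drop wrap-around samples
    model = Ridge(alpha=alpha).fit(X, eeg_channel)
    return model.coef_.reshape(n_lags + 1, features.shape[1])

# Hypothetical data: 19 binary phonetic features at 64 Hz, 100 s of one EEG channel
rng = np.random.default_rng(4)
feats = (rng.random((100 * 64, 19)) < 0.05).astype(float)   # sparse feature onsets
eeg = rng.standard_normal(100 * 64)
trf = estimate_trf(feats, eeg, sfreq=64.0)
print(trf.shape)        # (n_lags + 1, n_features), here (26, 19)
```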
7.
Brain Lang ; 243: 105301, 2023 08.
Article in English | MEDLINE | ID: mdl-37399686

ABSTRACT

Atypical phase alignment of low-frequency neural oscillations to speech rhythm has been implicated in phonological deficits in developmental dyslexia. Atypical phase alignment to rhythm could thus also characterize infants at risk for later language difficulties. Here, we investigate phase-language mechanisms in a neurotypical infant sample. In a longitudinal design, 122 two-, six- and nine-month-old infants were played speech and non-speech rhythms while EEG was recorded. The phase of infants' neural oscillations aligned consistently to the stimuli, with group-level convergence towards a common phase. Individual low-frequency phase alignment was related to subsequent measures of language acquisition up to 24 months of age. Accordingly, individual differences in language acquisition are related to the phase alignment of cortical tracking of auditory and audiovisual rhythms in infancy, an automatic neural mechanism. Automatic rhythmic phase-language mechanisms could eventually serve as biomarkers, identifying at-risk infants and enabling intervention at the earliest stages of development.


Subject(s)
Speech Perception, Infant, Humans, Language, Speech, Language Development
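A minimal sketch of the phase-alignment idea described in the abstract above: extract the phase at the stimulation rate for every trial and compute the inter-trial coherence (the length of the mean resultant vector), which is high when trials converge on a common phase, together with that preferred phase. The 2 Hz rate, epoch length and variable names are assumptions for illustration, not the paper's analysis.

```python
import numpy as np

def inter_trial_coherence(epochs, sfreq, stim_freq=2.0):
    """epochs: (n_trials, n_samples) EEG from one channel.

    Returns (itc, preferred_phase): the mean resultant length of the per-trial
    phase at the stimulation frequency (0 = no alignment, 1 = perfect
    alignment) and the circular-mean phase. Illustrative sketch only.
    """
    freqs = np.fft.rfftfreq(epochs.shape[-1], d=1.0 / sfreq)
    k = np.argmin(np.abs(freqs - stim_freq))
    phases = np.angle(np.fft.rfft(epochs, axis=-1)[:, k])     # one phase per trial
    resultant = np.mean(np.exp(1j * phases))
    return np.abs(resultant), np.angle(resultant)

# Hypothetical data: 40 trials of 2 s at 500 Hz
rng = np.random.default_rng(5)
trials = rng.standard_normal((40, 1000))
print(inter_trial_coherence(trials, sfreq=500.0))
```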
8.
Sci Rep ; 12(1): 6381, 2022 04 16.
Article in English | MEDLINE | ID: mdl-35430617

ABSTRACT

There is substantial evidence that learning and using multiple languages modulates selective attention in children. The current study investigated the mechanisms that drive this modification. Specifically, we asked whether the need for constant management of competing languages in bilinguals increases attentional capacity, or draws on the available resources such that they need to be economised to support optimal task performance. Monolingual and bilingual children aged 7-12 years attended to a narrative presented in one ear while ignoring different types of interference in the other ear. We used EEG to capture the neural encoding of the attended and unattended speech envelopes, and to assess how well they could be reconstructed from the responses of the neuronal populations that encode them. Despite equivalent behavioural performance, monolingual and bilingual children encoded attended speech differently, with the pattern of encoding across conditions in bilinguals suggesting a redistribution of the available attentional capacity rather than its enhancement.


Subject(s)
Multilingualism, Speech Perception, Attention/physiology, Child, Humans, Language, Speech, Speech Perception/physiology
9.
Front Neurosci ; 16: 842447, 2022.
Article in English | MEDLINE | ID: mdl-35495026

ABSTRACT

Here we replicate a neural tracking paradigm, previously published with infants (aged 4 to 11 months), with adult participants, in order to explore potential developmental similarities and differences in entrainment. Adults listened and watched passively as nursery rhymes were sung or chanted in infant-directed speech. Whole-head EEG (128 channels) was recorded, and cortical tracking of the sung speech in the delta (0.5-4 Hz), theta (4-8 Hz) and alpha (8-12 Hz) frequency bands was computed using linear decoders (multivariate Temporal Response Function models, mTRFs). Phase-amplitude coupling (PAC) was also computed to assess whether delta and theta phases temporally organize higher-frequency amplitudes in adults in the same pattern as found in the infant brain. As with the infant participants, the adults showed significant cortical tracking of the sung speech in both the delta and theta bands. However, the frequencies associated with peaks in stimulus-induced spectral power (PSD) differed between the two populations. PAC also differed in the adults compared with the infants: it was stronger for theta-driven than for delta-driven coupling in adults, but of equal strength for delta- and theta-driven coupling in infants. Adults also showed a stimulus-induced increase in low alpha power that was absent in infants. This may suggest adult recruitment of other cognitive processes, possibly related to comprehension or attention. The comparative data suggest that while infant and adult brains utilize essentially the same cortical mechanisms to track linguistic input, the operation of and interplay between these mechanisms may change with age and language experience.

10.
Dev Cogn Neurosci ; 54: 101075, 2022 04.
Article in English | MEDLINE | ID: mdl-35078120

ABSTRACT

Amplitude rise times play a crucial role in the perception of rhythm in speech, and reduced perceptual sensitivity to differences in rise time is related to developmental language difficulties. Amplitude rise times also play a mechanistic role in neural entrainment to the speech amplitude envelope. Using an ERP paradigm, here we examined for the first time whether infants at the ages of seven and eleven months exhibit an auditory mismatch response to changes in the rise times of simple repeating auditory stimuli. We found that infants exhibited a mismatch response (MMR) to all of the oddball rise times used for the study. The MMR was more positive at seven than eleven months of age. At eleven months, there was a shift to a mismatch negativity (MMN) that was more pronounced over left fronto-central electrodes. The MMR over right fronto-central electrodes was sensitive to the size of the difference in rise time. The results indicate that neural processing of changes in rise time is present at seven months, supporting the possibility that early speech processing is facilitated by neural sensitivity to these important acoustic cues.


Subject(s)
Evoked Potentials, Auditory, Speech Perception, Acoustic Stimulation/methods, Electroencephalography, Evoked Potentials, Auditory/physiology, Humans, Infant, Speech, Speech Perception/physiology
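The mismatch response described in the abstract above is, at its core, a difference wave: average the ERP to the oddball (deviant) rise times, average the ERP to the standard, subtract, and summarize the difference in a post-stimulus time window. The sketch below assumes pre-epoched, baseline-corrected data and an illustrative 100-300 ms window; it is not the paper's analysis script.

```python
import numpy as np

def mismatch_response(standard_epochs, deviant_epochs, sfreq, window=(0.10, 0.30), t0=0.0):
    """Epochs: (n_trials, n_samples) for one electrode (or electrode cluster),
    time-locked so that stimulus onset falls t0 seconds after the epoch start.

    Returns the deviant-minus-standard difference wave and its mean amplitude
    in the given post-stimulus window (seconds). Illustrative sketch only.
    """
    diff_wave = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
    times = np.arange(diff_wave.size) / sfreq - t0
    mask = (times >= window[0]) & (times <= window[1])
    return diff_wave, diff_wave[mask].mean()

# Hypothetical epochs: 100 standards and 30 deviants, 0.6 s at 500 Hz,
# with stimulus onset 0.1 s into each epoch
rng = np.random.default_rng(6)
std = rng.standard_normal((100, 300))
dev = rng.standard_normal((30, 300)) + 0.2        # toy offset standing in for an MMR
wave, amp = mismatch_response(std, dev, sfreq=500.0, t0=0.1)
print(amp)
```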
11.
Brain Lang ; 220: 104968, 2021 09.
Article in English | MEDLINE | ID: mdl-34111684

ABSTRACT

Currently there are no reliable means of identifying infants at risk for later language disorders. Infant neural responses to rhythmic stimuli may offer a solution, as neural tracking of rhythm is atypical in children with developmental language disorders. However, infant brain recordings are noisy. As a first step towards developing accurate neural biomarkers, we investigate whether infant brain responses to rhythmic stimuli can be classified reliably using EEG from 95 eight-week-old infants listening to natural stimuli (repeated syllables or drumbeats). Both Convolutional Neural Network (CNN) and Support Vector Machine (SVM) approaches were employed. Applied to one infant at a time, the CNN discriminated syllables from drumbeats with a mean AUC of 0.87 against two levels of noise. The SVM classified with AUCs of 0.95 and 0.86 at the two noise levels respectively, showing reduced performance as noise increased. Our proof-of-concept modelling opens the way to the development of clinical biomarkers for language disorders related to rhythmic entrainment.


Subject(s)
Machine Learning, Speech, Child, Electroencephalography, Humans, Infant, Neural Networks, Computer, Support Vector Machine
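The SVM branch of the classification approach described in the abstract above can be sketched with scikit-learn: for a single infant, cross-validate a linear SVM that labels single EEG epochs as syllable or drumbeat trials and score it with the area under the ROC curve. The feature extraction (here simply flattened epochs), kernel and cross-validation scheme are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def classify_one_infant(syllable_epochs, drumbeat_epochs):
    """Epochs: (n_trials, n_channels, n_samples) EEG for each stimulus type.

    Returns the mean cross-validated ROC AUC for discriminating the two
    stimulus types in this infant. Illustrative sketch only.
    """
    X = np.concatenate([syllable_epochs, drumbeat_epochs]).reshape(
        len(syllable_epochs) + len(drumbeat_epochs), -1)      # flatten channels x time
    y = np.r_[np.ones(len(syllable_epochs)), np.zeros(len(drumbeat_epochs))]
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    return cross_val_score(clf, X, y, cv=cv, scoring="roc_auc").mean()

# Hypothetical data: 60 trials per condition, 32 channels, 1 s at 100 Hz
rng = np.random.default_rng(7)
syll = rng.standard_normal((60, 32, 100)) + 0.1
drum = rng.standard_normal((60, 32, 100))
print(classify_one_infant(syll, drum))
```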
12.
Sci Rep ; 6: 36259, 2016 11 09.
Article in English | MEDLINE | ID: mdl-27827366

ABSTRACT

There is considerable interest in understanding the ontogeny and phylogeny of the human language system, yet neurobiological work at the interface of the two fields is absent. Syntactic processes in language build on sensory processing and sequencing capabilities on the receiver's side. While language-related ontogenetic changes in the human brain are increasingly well understood, it remains a mystery how neurobiological processes at specific stages of human development compare with those in phylogenetically closely related species. To address this knowledge gap, we measured EEG event-related potentials (ERPs) in two macaque monkeys using a paradigm developed to evaluate human infant and adult brain potentials associated with the processing of non-adjacent ordering relationships in sequences of syllable triplets. Frequent standard triplet sequences were interspersed with infrequent voice-pitch or non-adjacent-rule deviants. The monkey ERPs show early pitch- and rule-deviant mismatch responses that are strikingly similar to those previously reported in human infants. This stands in contrast to the later ERP responses to rule deviants seen in human adults. The results reveal how non-adjacent sequence ordering relationships are processed in the primate brain and provide evidence for evolutionarily conserved neurophysiological effects, some of which are remarkably like those seen at an early stage of human development.


Subject(s)
Acoustic Stimulation/methods, Brain/physiology, Evoked Potentials, Adult, Animals, Electroencephalography, Evoked Potentials, Auditory, Molecular Evolution, Humans, Infant, Language, Language Development, Macaca mulatta, Pitch Perception
13.
Brain Lang ; 148: 74-80, 2015 Sep.
Article in English | MEDLINE | ID: mdl-25529405

ABSTRACT

Electroencephalography (EEG) has identified human brain potentials elicited by artificial grammar (AG) learning paradigms, which present participants with rule-based sequences of stimuli. Nonhuman animals are sensitive to certain AGs; therefore, evaluating which EEG event-related potentials (ERPs) are associated with AG learning in nonhuman animals could identify evolutionarily conserved processes. We recorded EEG potentials during an auditory AG learning experiment in two Rhesus macaques. The animals were first exposed to sequences of nonsense words generated by the AG. Surface-based ERPs were then recorded in response to sequences that were 'consistent' with the AG and to 'violation' sequences containing illegal transitions. The AG violations strongly modulated an early component potentially homologous to the mismatch negativity (mMMN), a P200, and a late frontal positivity (P500). The macaque P500 is similar in polarity and time of occurrence to a late EEG positivity reported in human AG learning studies but might differ in its functional role.


Subject(s)
Brain/physiology, Electroencephalography, Evoked Potentials/physiology, Learning/physiology, Linguistics, Macaca mulatta/physiology, Acoustic Stimulation, Animals, Brain Mapping, Female, Humans, Male, Speech Perception/physiology