Results 1 - 20 of 2,118
1.
Annu Rev Neurosci ; 42: 47-65, 2019 07 08.
Article in English | MEDLINE | ID: mdl-30699049

ABSTRACT

The modern cochlear implant (CI) is the most successful neural prosthesis developed to date. CIs provide hearing to the profoundly hearing impaired and allow the acquisition of spoken language in children born deaf. Results from studies enabled by the CI have provided new insights into (a) minimal representations at the periphery for speech reception, (b) brain mechanisms for decoding speech presented in quiet and in acoustically adverse conditions, (c) the developmental neuroscience of language and hearing, and (d) the mechanisms and time courses of intramodal and cross-modal plasticity. Additionally, the results have underscored the interconnectedness of brain functions and the importance of top-down processes in perception and learning. The findings are described in this review with emphasis on the developing brain and the acquisition of hearing and spoken language.


Subjects
Auditory Perception/physiology, Cochlear Implants, Critical Period (Psychological), Language Development, Animals, Auditory Perceptual Disorders/etiology, Brain/growth & development, Cochlear Implantation, Comprehension, Cues, Deafness/congenital, Deafness/physiopathology, Deafness/psychology, Deafness/surgery, Equipment Design, Humans, Language Development Disorders/etiology, Language Development Disorders/prevention & control, Learning/physiology, Neuronal Plasticity, Photic Stimulation
2.
Proc Natl Acad Sci U S A ; 121(34): e2411167121, 2024 Aug 20.
Article in English | MEDLINE | ID: mdl-39136991

ABSTRACT

Evidence accumulates that the cerebellum's role in the brain is not restricted to motor functions. Rather, cerebellar activity seems to be crucial for a variety of tasks that rely on precise event timing and prediction. Due to its complex structure and importance in communication, human speech requires a particularly precise and predictive coordination of neural processes to be successfully comprehended. Recent studies proposed that the cerebellum is indeed a major contributor to speech processing, but how this contribution is achieved mechanistically remains poorly understood. The current study aimed to reveal a mechanism underlying cortico-cerebellar coordination and demonstrate its speech-specificity. In a reanalysis of magnetoencephalography data, we found that activity in the cerebellum aligned to rhythmic sequences of noise-vocoded speech, irrespective of its intelligibility. We then tested whether these "entrained" responses persist, and how they interact with other brain regions, when a rhythmic stimulus stopped and temporal predictions had to be updated. We found that only intelligible speech produced sustained rhythmic responses in the cerebellum. During this "entrainment echo," but not during rhythmic speech itself, cerebellar activity was coupled with that in the left inferior frontal gyrus, and specifically at rates corresponding to the preceding stimulus rhythm. This finding represents evidence for specific cerebellum-driven temporal predictions in speech processing and their relay to cortical regions.


Subjects
Cerebellum, Magnetoencephalography, Humans, Cerebellum/physiology, Male, Female, Adult, Speech Perception/physiology, Young Adult, Speech/physiology, Speech Intelligibility/physiology
3.
Cereb Cortex ; 34(9)2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39329356

ABSTRACT

Evidence suggests that the articulatory motor system contributes to speech perception in a context-dependent manner. This study tested 2 hypotheses using magnetoencephalography: (i) the motor cortex is involved in phonological processing, and (ii) it aids in compensating for speech-in-noise challenges. A total of 32 young adults performed a phonological discrimination task under 3 noise conditions while their brain activity was recorded using magnetoencephalography. We observed simultaneous activation in the left ventral primary motor cortex and bilateral posterior-superior temporal gyrus when participants correctly identified pairs of syllables. This activation was significantly more pronounced for phonologically different than identical syllable pairs. Notably, phonological differences were resolved more quickly in the left ventral primary motor cortex than in the left posterior-superior temporal gyrus. Conversely, the noise level did not modulate the activity in frontal motor regions and the involvement of the left ventral primary motor cortex in phonological discrimination was comparable across all noise conditions. Our results show that the ventral primary motor cortex is crucial for phonological processing but not for compensation in challenging listening conditions. Simultaneous activation of left ventral primary motor cortex and bilateral posterior-superior temporal gyrus supports an interactive model of speech perception, where auditory and motor regions shape perception. The ventral primary motor cortex may be involved in a predictive coding mechanism that influences auditory-phonetic processing.


Subjects
Magnetoencephalography, Motor Cortex, Phonetics, Speech Perception, Humans, Male, Female, Motor Cortex/physiology, Young Adult, Speech Perception/physiology, Adult, Functional Laterality/physiology, Discrimination (Psychology)/physiology, Acoustic Stimulation, Brain Mapping, Noise
4.
Cereb Cortex ; 34(4)2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38687241

ABSTRACT

Speech comprehension entails the neural mapping of the acoustic speech signal onto learned linguistic units. This acousto-linguistic transformation is bi-directional, whereby higher-level linguistic processes (e.g. semantics) modulate the acoustic analysis of individual linguistic units. Here, we investigated the cortical topography and linguistic modulation of the most fundamental linguistic unit, the phoneme. We presented natural speech and "phoneme quilts" (pseudo-randomly shuffled phonemes) in either a familiar (English) or unfamiliar (Korean) language to native English speakers while recording functional magnetic resonance imaging. This allowed us to dissociate the contribution of acoustic vs. linguistic processes toward phoneme analysis. We show (i) that the acoustic analysis of phonemes is modulated by linguistic analysis and (ii) that for this modulation, both acoustic and phonetic information need to be incorporated. These results suggest that the linguistic modulation of cortical sensitivity to phoneme classes minimizes prediction error during natural speech perception, thereby aiding speech comprehension in challenging listening situations.


Subjects
Brain Mapping, Magnetic Resonance Imaging, Phonetics, Speech Perception, Humans, Speech Perception/physiology, Female, Magnetic Resonance Imaging/methods, Male, Adult, Young Adult, Linguistics, Acoustic Stimulation/methods, Comprehension/physiology, Brain/physiology, Brain/diagnostic imaging
5.
Cereb Cortex ; 34(5)2024 May 02.
Article in English | MEDLINE | ID: mdl-38715408

ABSTRACT

Speech comprehension in noise depends on complex interactions between peripheral sensory and central cognitive systems. Despite having normal peripheral hearing, older adults show difficulties in speech comprehension. It remains unclear whether the brain's neural responses could serve as an index of aging. The current study examined whether individual brain activation during speech perception in different listening environments could predict age. We applied functional near-infrared spectroscopy to 93 normal-hearing human adults (20 to 70 years old) during a sentence listening task, which contained a quiet condition and 4 noisy conditions with different signal-to-noise ratios (SNR = 10, 5, 0, -5 dB). A data-driven approach, region-based brain-age predictive modeling, was adopted. We observed a significant behavioral decrease with age under the 4 noisy conditions, but not under the quiet condition. Brain activations in the SNR = 10 dB listening condition successfully predicted an individual's age. Moreover, we found that the bilateral visual sensory cortex, left dorsal speech pathway, left cerebellum, right temporal-parietal junction area, right homolog of Wernicke's area, and right middle temporal gyrus contributed most to prediction performance. These results demonstrate that activation of regions involved in sensory-motor mapping of sound, especially in noisy conditions, could be a more sensitive measure for age prediction than external behavioral measures.


Subjects
Aging, Brain, Comprehension, Noise, Near-Infrared Spectroscopy, Speech Perception, Humans, Adult, Speech Perception/physiology, Male, Female, Near-Infrared Spectroscopy/methods, Middle Aged, Young Adult, Aged, Comprehension/physiology, Brain/physiology, Brain/diagnostic imaging, Aging/physiology, Brain Mapping/methods, Acoustic Stimulation/methods
6.
Cereb Cortex ; 34(3)2024 03 01.
Article in English | MEDLINE | ID: mdl-38494418

ABSTRACT

Listeners can use prior knowledge to predict the content of noisy speech signals, enhancing perception. However, this process can also elicit misperceptions. For the first time, we employed a prime-probe paradigm and transcranial magnetic stimulation to investigate causal roles for the left and right posterior superior temporal gyri (pSTG) in the perception and misperception of degraded speech. Listeners were presented with spectrotemporally degraded probe sentences preceded by a clear prime. To produce misperceptions, we created partially mismatched pseudo-sentence probes via homophonic nonword transformations (e.g. The little girl was excited to lose her first tooth-Tha fittle girmn wam expited du roos har derst cooth). Compared to a control site (vertex), inhibitory stimulation of the left pSTG selectively disrupted priming of real but not pseudo-sentences. Conversely, inhibitory stimulation of the right pSTG enhanced priming of misperceptions with pseudo-sentences, but did not influence perception of real sentences. These results indicate qualitatively different causal roles for the left and right pSTG in perceiving degraded speech, supporting bilateral models that propose engagement of the right pSTG in sublexical processing.


Subjects
Language, Speech, Humans, Female, Speech/physiology, Temporal Lobe, Transcranial Magnetic Stimulation, Noise
7.
Cereb Cortex ; 34(1)2024 01 14.
Article in English | MEDLINE | ID: mdl-38163443

ABSTRACT

The onset of hearing loss can lead to altered brain structure and functions. However, hearing restoration may also result in distinct cortical reorganization. A differential pattern of functional remodeling was observed between post- and prelingual cochlear implant users, but it remains unclear how these speech processing networks are reorganized after cochlear implantation. To explore the impact of language acquisition and hearing restoration on speech perception in cochlear implant users, we conducted assessments of brain activation, functional connectivity, and graph theory-based analysis using functional near-infrared spectroscopy. We examined the effects of speech-in-noise stimuli on three groups: postlingual cochlear implant users (n = 12), prelingual cochlear implant users (n = 10), and age-matched normal-hearing controls (HC) (n = 22). The activation of auditory-related areas in cochlear implant users showed a lower response compared with the HC group. Wernicke's area and Broca's area demonstrated different network attributes in the speech processing networks of post- and prelingual cochlear implant users. In addition, cochlear implant users maintain a high efficiency of the speech processing network to process speech information. Taken together, our results characterize the speech processing networks, in varying noise environments, in post- and prelingual cochlear implant users and provide new insights for theories of how implantation modes impact remodeling of the speech processing functional networks.


Subjects
Cochlear Implantation, Cochlear Implants, Deafness, Speech Perception, Humans, Speech, Deafness/surgery, Hearing, Speech Perception/physiology
8.
Cereb Cortex ; 34(2)2024 01 31.
Article in English | MEDLINE | ID: mdl-38282455

ABSTRACT

Individual variability in functional connectivity underlies individual differences in cognition and behaviors, yet its association with functional specialization in the auditory cortex remains elusive. Using resting-state functional magnetic resonance imaging data from the Human Connectome Project, this study was designed to investigate the spatial distribution of auditory cortex individual variability in its whole-brain functional network architecture. An inherent hierarchical axis of the variability was discerned, which radiates from the medial to lateral orientation, with the left auditory cortex demonstrating more pronounced variations than the right. This variability exhibited a significant correlation with the variations in structural and functional metrics in the auditory cortex. Four auditory cortex subregions, which were identified from a clustering analysis based on this variability, exhibited unique connectional fingerprints and cognitive maps, with certain subregions showing specificity to speech perception functional activation. Moreover, the lateralization of the connectional fingerprint exhibited a U-shaped trajectory across the subregions. These findings emphasize the role of individual variability in functional connectivity in understanding cortical functional organization, as well as in revealing its association with functional specialization from the activation, connectome, and cognition perspectives.


Subjects
Auditory Cortex, Connectome, Humans, Auditory Cortex/diagnostic imaging, Magnetic Resonance Imaging/methods, Connectome/methods, Brain, Cognition
9.
J Neurosci ; 43(26): 4856-4866, 2023 06 28.
Article in English | MEDLINE | ID: mdl-37127361

ABSTRACT

Listening in noisy environments requires effort (the active engagement of attention and other cognitive abilities) as well as increased arousal. The ability to separately quantify the contribution of these components is key to understanding the dynamics of effort and how it may change across listening situations and in certain populations. We concurrently measured two types of ocular data in young participants (both sexes): pupil dilation (PD; thought to index arousal aspects of effort) and microsaccades (MS; hypothesized to reflect automatic visual exploratory sampling), while they performed a speech-in-noise task under high- (HL) and low- (LL) listening load conditions. Sentences were manipulated so that the behaviorally relevant information (keywords) appeared at the end (Experiment 1) or beginning (Experiment 2) of the sentence, resulting in different temporal demands on focused attention. In line with previous reports, PD effects were associated with increased dilation under load. We observed a sustained difference between HL and LL conditions, consistent with increased phasic and tonic arousal. Importantly, we show that MS rate was also modulated by listening load. This was manifested as a reduced MS rate in HL relative to LL. Critically, in contrast to the sustained difference seen for PD, MS effects were localized in time, specifically during periods when demands on auditory attention were greatest. These results demonstrate that auditory selective attention interfaces with the mechanisms controlling MS generation, establishing MS as an informative measure, complementary to PD, with which to quantify the temporal dynamics of auditory attentional processing under effortful listening conditions.

SIGNIFICANCE STATEMENT: Listening effort, reflecting the "cognitive bandwidth" deployed to effectively process sound in adverse environments, contributes critically to listening success. Understanding listening effort and the processes involved in its allocation is a major challenge in auditory neuroscience. Here, we demonstrate that microsaccade rate can be used to index a specific subcomponent of listening effort, the allocation of instantaneous auditory attention, that is distinct from the modulation of arousal indexed by pupil dilation (currently the dominant measure of listening effort). These results reveal the push-pull process through which auditory attention interfaces with the (visual) attention network that controls microsaccades, establishing microsaccades as a powerful tool for measuring auditory attention and its deficits.


Subjects
Pupil, Speech Perception, Male, Female, Humans, Auditory Perception, Noise, Arousal
10.
Neuroimage ; : 120875, 2024 Sep 26.
Article in English | MEDLINE | ID: mdl-39341475

ABSTRACT

In speech perception, low-frequency cortical activity tracks hierarchical linguistic units (e.g., syllables, phrases, and sentences) on top of acoustic features (e.g., speech envelope). Since the fluctuation of speech envelope typically corresponds to the syllabic boundaries, one common interpretation is that the acoustic envelope underlies the extraction of discrete syllables from continuous speech for subsequent linguistic processing. However, it remains unclear whether and how cortical activity encodes linguistic information when the speech envelope does not provide acoustic correlates of syllables. To address the issue, we introduced a frequency-tagging speech stream where the syllabic rhythm was obscured by echoic envelopes and investigated neural encoding of hierarchical linguistic information using electroencephalography (EEG). When listeners attended to the echoic speech, cortical activity showed reliable tracking of syllable, phrase, and sentence levels, among which the higher-level linguistic units elicited more robust neural responses. When attention was diverted from the echoic speech, reliable neural tracking of the syllable level was also observed in contrast to deteriorated neural tracking of the phrase and sentence levels. Further analyses revealed that the envelope aligned with the syllabic rhythm could be recovered from the echoic speech through a neural adaptation model, and the reconstructed envelope yielded higher predictive power for the neural tracking responses than either the original echoic envelope or anechoic envelope. Taken together, these results suggest that neural adaptation and attentional modulation jointly contribute to neural encoding of linguistic information in distorted speech where the syllabic rhythm is obscured by echoes.
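In the frequency-tagging paradigm described above, neural tracking of hierarchical linguistic units is typically quantified as spectral power of the trial-averaged EEG at the sentence, phrase, and syllable presentation rates. A minimal sketch of that readout follows; the 1/2/4 Hz rates, single-channel input, and array shapes are illustrative assumptions, not parameters taken from this study:

```python
import numpy as np

def tagged_power(eeg, fs, rates=(1.0, 2.0, 4.0)):
    """Power of the phase-locked (trial-averaged) response at tagged rates.

    eeg   : array (n_trials, n_times), single-channel epochs aligned to
            stimulus onset; averaging first keeps only phase-locked activity.
    fs    : sampling rate in Hz.
    rates : assumed sentence/phrase/syllable rates in Hz (illustrative).
    """
    evoked = eeg.mean(axis=0)                      # phase-locked component
    spectrum = np.abs(np.fft.rfft(evoked)) ** 2
    freqs = np.fft.rfftfreq(evoked.size, d=1.0 / fs)
    # Read out power at the frequency bin closest to each tagged rate
    return {r: spectrum[np.argmin(np.abs(freqs - r))] for r in rates}
```

Robust tracking of, say, the phrase level would then appear as a peak at the phrase rate relative to neighboring frequency bins.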

11.
Neuroimage ; 289: 120544, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38365164

ABSTRACT

Natural poetic speech (i.e., proverbs, nursery rhymes, and commercial ads) with strong prosodic regularities is easily memorized by children, and its harmonious acoustic patterns are suggested to facilitate integrated sentence processing. Do children have specific neural pathways for perceiving such poetic utterances, and does their speech development benefit from them? We recorded the task-induced hemodynamic changes of 94 children aged 2 to 12 years using functional near-infrared spectroscopy (fNIRS) while they listened to poetic and non-poetic natural sentences. Seventy-three adults were recruited as controls to investigate the developmental specificity of the child group. The results indicated that perceiving poetic sentences is a highly integrated process, featuring a lower brain workload in both groups. However, an early-activated large-scale network was induced only in the child group, coordinated by hubs with high connectivity diversity. Additionally, poetic speech evoked activation in phonological encoding regions in the child group but not in the adult controls, an effect that decreased with children's age. The neural responses to poetic speech were positively linked to children's speech communication performance, especially its fluency and semantic aspects. These results reveal children's neural sensitivity to integrated speech perception, which may facilitate early speech development by strengthening more sophisticated language networks and the perception-production circuit.


Subjects
Speech Perception, Speech, Child, Adult, Humans, Speech/physiology, Speech Perception/physiology, Language, Brain/physiology, Semantics, Language Development
12.
Neuroimage ; 297: 120696, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-38909761

ABSTRACT

How is information processed in the cerebral cortex? In most cases, recorded brain activity is averaged over many (stimulus) repetitions, which erases the fine structure of the neural signal. However, the brain is obviously a single-trial processor. Thus, we here demonstrate that an unsupervised machine learning approach can be used to extract meaningful information from electrophysiological recordings on a single-trial basis. We use an auto-encoder network to reduce the dimensions of single local field potential (LFP) events, creating interpretable clusters of different neural activity patterns. Strikingly, certain LFP shapes correspond to latency differences between recording channels; hence, LFP shapes can be used to determine the direction of information flux in the cerebral cortex. Furthermore, after clustering, we decoded the cluster centroids to reverse-engineer the underlying prototypical LFP event shapes. To evaluate our approach, we applied it to both extracellular neural recordings in rodents and intracranial EEG recordings in humans. Finally, we find that single-channel LFP event shapes during spontaneous activity sample from the realm of possible stimulus-evoked event shapes, a finding which so far has only been demonstrated for multi-channel population coding.
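The pipeline this abstract describes (dimensionality reduction of single LFP events, clustering in the code space, then decoding cluster centroids into prototypical waveforms) can be sketched with a linear stand-in: PCA plus k-means in place of the paper's auto-encoder network. All function names, shapes, and parameters below are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def cluster_lfp_events(events, n_components=2, n_clusters=4, seed=0):
    """Cluster single-trial event waveforms in a low-dimensional code
    space, then decode each cluster centroid back into a prototypical
    waveform. events: array (n_events, n_times)."""
    pca = PCA(n_components=n_components).fit(events)
    codes = pca.transform(events)                  # low-dim event codes
    km = KMeans(n_clusters=n_clusters, n_init=10,
                random_state=seed).fit(codes)
    # inverse_transform plays the role of the paper's decoder network
    prototypes = pca.inverse_transform(km.cluster_centers_)
    return km.labels_, prototypes
```

A linear auto-encoder with tied weights spans the same subspace as PCA, which is why this simplification preserves the structure of the approach while staying dependency-light.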


Subjects
Deep Learning, Electroencephalography, Humans, Animals, Electroencephalography/methods, Cerebral Cortex/physiology, Male, Unsupervised Machine Learning, Rats, Adult, Female
13.
Eur J Neurosci ; 59(3): 394-414, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38151889

ABSTRACT

Human speech is a particularly relevant acoustic stimulus for our species, due to its role of information transmission during communication. Speech is inherently a dynamic signal, and a recent line of research focused on neural activity following the temporal structure of speech. We review findings that characterise neural dynamics in the processing of continuous acoustics and that allow us to compare these dynamics with temporal aspects in human speech. We highlight properties and constraints that both neural and speech dynamics have, suggesting that auditory neural systems are optimised to process human speech. We then discuss the speech-specificity of neural dynamics and their potential mechanistic origins and summarise open questions in the field.


Subjects
Speech Perception, Speech, Humans, Acoustic Stimulation, Acoustics
14.
Eur J Neurosci ; 59(8): 1918-1932, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37990611

ABSTRACT

The unconscious integration of vocal and facial cues during speech perception facilitates face-to-face communication. Recent studies have provided substantial behavioural evidence concerning impairments in audiovisual (AV) speech perception in schizophrenia. However, the specific neurophysiological mechanism underlying these deficits remains unknown. Here, we investigated activities and connectivities centered on the auditory cortex during AV speech perception in schizophrenia. Using magnetoencephalography, we recorded and analysed event-related fields in response to auditory (A: voice), visual (V: face) and AV (voice-face) stimuli in 23 schizophrenia patients (13 males) and 22 healthy controls (13 males). The functional connectivity associated with the subadditive response to AV stimulus (i.e., [AV] < [A] + [V]) was also compared between the two groups. Within the healthy control group, [AV] activity was smaller than the sum of [A] and [V] at latencies of approximately 100 ms in the posterior ramus of the lateral sulcus in only the left hemisphere, demonstrating a subadditive N1m effect. Conversely, the schizophrenia group did not show such a subadditive response. Furthermore, weaker functional connectivity from the posterior ramus of the lateral sulcus of the left hemisphere to the fusiform gyrus of the right hemisphere was observed in schizophrenia. Notably, this weakened connectivity was associated with the severity of negative symptoms. These results demonstrate abnormalities in connectivity between speech- and face-related cortical areas in schizophrenia. This aberrant subadditive response and connectivity deficits for integrating speech and facial information may be the neural basis of social communication dysfunctions in schizophrenia.


Subjects
Auditory Cortex, Schizophrenia, Speech Perception, Male, Humans, Speech Perception/physiology, Magnetoencephalography, Speech/physiology, Visual Perception/physiology, Auditory Perception/physiology, Acoustic Stimulation/methods
15.
Hum Brain Mapp ; 45(13): e70023, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39268584

ABSTRACT

The relationship between speech production and perception is a topic of ongoing debate. Some argue that there is little interaction between the two, while others claim they share representations and processes. One perspective suggests increased recruitment of the speech motor system in demanding listening situations to facilitate perception. However, uncertainties persist regarding the specific regions involved and the listening conditions influencing its engagement. This study used activation likelihood estimation in coordinate-based meta-analyses to investigate the neural overlap between speech production and three speech perception conditions: speech-in-noise, spectrally degraded speech and linguistically complex speech. Neural overlap was observed in the left frontal, insular and temporal regions. Key nodes included the left frontal operculum (FOC), left posterior lateral part of the inferior frontal gyrus (IFG), left planum temporale (PT), and left pre-supplementary motor area (pre-SMA). The left IFG activation was consistently observed during linguistic processing, suggesting sensitivity to the linguistic content of speech. In comparison, the left pre-SMA activation was observed when processing degraded and noisy signals, indicating sensitivity to signal quality. Activation of the left PT and FOC was noted in all conditions, with the posterior FOC area overlapping across all conditions. Our meta-analysis reveals context-independent (FOC, PT) and context-dependent (pre-SMA, posterior lateral IFG) regions within the speech motor system during challenging speech perception. These regions could contribute to sensorimotor integration and executive cognitive control for perception and production.


Subjects
Speech Perception, Speech, Humans, Speech Perception/physiology, Speech/physiology, Brain Mapping, Likelihood Functions, Motor Cortex/physiology, Cerebral Cortex/physiology, Cerebral Cortex/diagnostic imaging
16.
Anim Cogn ; 27(1): 34, 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38625429

ABSTRACT

Humans have an impressive ability to comprehend signal-degraded speech; however, the extent to which comprehension of degraded speech relies on human-specific features of speech perception vs. more general cognitive processes is unknown. Since dogs live alongside humans and regularly hear speech, they can be used as a model to differentiate between these possibilities. One often-studied type of degraded speech is noise-vocoded speech (sometimes thought of as cochlear-implant-simulation speech). Noise-vocoded speech is made by dividing the speech signal into frequency bands (channels), identifying the amplitude envelope of each individual band, and then using these envelopes to modulate bands of noise centered over the same frequency regions - the result is a signal with preserved temporal cues, but vastly reduced frequency information. Here, we tested dogs' recognition of familiar words produced in 16-channel vocoded speech. In the first study, dogs heard their names and unfamiliar dogs' names (foils) in vocoded speech as well as natural speech. In the second study, dogs heard 16-channel vocoded speech only. Dogs listened longer to their vocoded name than vocoded foils in both experiments, showing that they can comprehend a 16-channel vocoded version of their name without prior exposure to vocoded speech, and without immediate exposure to the natural-speech version of their name. Dogs' name recognition in the second study was mediated by the number of phonemes in the dogs' name, suggesting that phonological context plays a role in degraded speech comprehension.
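The vocoding recipe described in this abstract (split the signal into frequency bands, extract each band's amplitude envelope, use the envelopes to modulate noise in the same bands) can be sketched as follows. The filter order, log-spaced band edges, and 100-8000 Hz range are assumptions for illustration, not the study's exact parameters:

```python
import numpy as np
from scipy.signal import butter, hilbert, sosfiltfilt

def noise_vocode(signal, fs, n_channels=16, f_lo=100.0, f_hi=8000.0):
    """Noise-vocode `signal`: per-band amplitude envelopes modulate bands
    of noise, preserving temporal cues but discarding fine spectral
    detail (a common cochlear-implant simulation)."""
    # Log-spaced band edges across the speech range (assumed spacing)
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
    noise = np.random.randn(signal.size)
    out = np.zeros(signal.size)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)       # speech restricted to band
        envelope = np.abs(hilbert(band))      # amplitude envelope of band
        carrier = sosfiltfilt(sos, noise)     # noise in the same band
        out += envelope * carrier
    # Match the overall RMS of the input
    out *= np.sqrt(np.mean(signal ** 2) / np.mean(out ** 2))
    return out
```

With 16 channels, enough envelope detail survives for word recognition, which is what makes the dogs' performance here informative.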


Subjects
Speech Perception, Speech, Humans, Animals, Dogs, Cues, Hearing, Linguistics
17.
Dev Sci ; 27(1): e13412, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37219071

ABSTRACT

Literacy acquisition is a complex process with genetic and environmental factors influencing cognitive and neural processes associated with reading. Previous research identified factors that predict word reading fluency (WRF), including phonological awareness (PA), rapid automatized naming (RAN), and speech-in-noise perception (SPIN). Recent theoretical accounts suggest dynamic interactions between these factors and reading, but direct investigations of such dynamics are lacking. Here, we investigated the dynamic effect of phonological processing and speech perception on WRF. More specifically, we evaluated the dynamic influence of PA, RAN, and SPIN measured in kindergarten (the year prior to formal reading instruction), first grade (the first year of formal reading instruction) and second grade on WRF in second and third grade. We also assessed the effect of an indirect proxy of family risk for reading difficulties using a parental questionnaire (Adult Reading History Questionnaire, ARHQ). We applied path modeling in a longitudinal sample of 162 Dutch-speaking children of whom the majority was selected to have an increased family and/or cognitive risk for dyslexia. We showed that parental ARHQ had a significant effect on WRF, RAN and SPIN, but unexpectedly not on PA. We also found effects of RAN and PA directly on WRF that were limited to first and second grade respectively, in contrast to previous research reporting pre-reading PA effects and prolonged RAN effects throughout reading acquisition. Our study provides important new insights into early prediction of later word reading abilities and into the optimal time window to target a specific reading-related subskill during intervention.


Subjects
Dyslexia, Reading, Child, Humans, Phonetics, Language, Cognition
18.
Dev Sci ; 27(1): e13420, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37350014

ABSTRACT

Auditory selective attention forms an important foundation of children's learning by enabling the prioritisation and encoding of relevant stimuli. It may also influence reading development, which relies on metalinguistic skills including the awareness of the sound structure of spoken language. Reports of attentional impairments and speech perception difficulties in noisy environments in dyslexic readers are also suggestive of the putative contribution of auditory attention to reading development. To date, it is unclear whether non-speech selective attention and its underlying neural mechanisms are impaired in children with dyslexia and to which extent these deficits relate to individual reading and speech perception abilities in suboptimal listening conditions. In this EEG study, we assessed non-speech sustained auditory selective attention in 106 7-to-12-year-old children with and without dyslexia. Children attended to one of two tone streams, detecting occasional sequence repeats in the attended stream, and performed a speech-in-speech perception task. Results show that when children directed their attention to one stream, inter-trial-phase-coherence at the attended rate increased in fronto-central sites; this, in turn, was associated with better target detection. Behavioural and neural indices of attention did not systematically differ as a function of dyslexia diagnosis. However, behavioural indices of attention did explain individual differences in reading fluency and speech-in-speech perception abilities: both these skills were impaired in dyslexic readers. Taken together, our results show that children with dyslexia do not show group-level auditory attention deficits but these deficits may represent a risk for developing reading impairments and problems with speech perception in complex acoustic environments. 
RESEARCH HIGHLIGHTS: Non-speech sustained auditory selective attention modulates EEG phase coherence in children with and without dyslexia. Children with dyslexia show difficulties in speech-in-speech perception. Attention relates to dyslexic readers' speech-in-speech perception and reading skills. Dyslexia diagnosis is not linked to behavioural/EEG indices of auditory attention.
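The neural attention index used in this study, inter-trial phase coherence (ITPC), measures how consistently the EEG phase at a given rate aligns across trials: it is the length of the mean unit phasor over trials, ranging from 0 (random phase) to 1 (perfect alignment). A minimal sketch of the computation, not the authors' pipeline (the function name and array shapes are illustrative):

```python
import numpy as np

def itpc(phases):
    """Inter-trial phase coherence per timepoint.

    phases: array of shape (n_trials, n_timepoints), instantaneous
    phase in radians (e.g. from a Hilbert or wavelet transform).
    Returns values in [0, 1]; 1 means identical phase across trials.
    """
    # Map each phase to a unit complex number, average across trials,
    # and take the magnitude of that average vector.
    return np.abs(np.mean(np.exp(1j * np.asarray(phases)), axis=0))
```

For example, trials with identical phase give ITPC of 1, while two trials with opposite phases (0 and pi) cancel to 0, which is why attending to a stream, and thereby phase-locking to its rate, raises ITPC.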


Subjects
Dyslexia, Speech Perception, Child, Humans, Reading, Sound, Speech, Speech Disorders, Phonetics
19.
Dev Sci ; : e13551, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39036879

ABSTRACT

Test-retest reliability (establishing that measurements remain consistent across multiple testing sessions) is critical to measuring, understanding, and predicting individual differences in infant language development. However, previous attempts to establish measurement reliability in infant speech perception tasks are limited, and the reliability of frequently used infant measures is largely unknown. The current study investigated the test-retest reliability of infants' preference for infant-directed speech over adult-directed speech in a large sample (N = 158) in the context of the ManyBabies1 collaborative research project. Labs were asked to bring participating infants in for a second appointment to retest their preference for infant-directed speech. This approach allowed us to estimate test-retest reliability across three different methods used to investigate preferential listening in infancy: the head-turn preference procedure, central fixation, and eye-tracking. Overall, we found no consistent evidence of test-retest reliability in measures of infants' speech preference (overall r = 0.09, 95% CI [-0.06, 0.25]). Increasing the number of trials infants had to contribute for inclusion in the analysis numerically increased test-retest reliability, but it also considerably reduced the study's effective sample size. Future research on infant development should therefore take into account that not all experimental measures may be appropriate for assessing individual differences between infants. RESEARCH HIGHLIGHTS: We assessed test-retest reliability of infants' preference for infant-directed over adult-directed speech in a large pre-registered sample (N = 158). There was no consistent evidence of test-retest reliability in measures of infants' speech preference. Applying stricter criteria for the inclusion of participants may lead to higher test-retest reliability, but at the cost of substantial decreases in sample size. Developmental research relying on stable individual differences should consider the underlying reliability of its measures.
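Test-retest reliability of the kind reported above is typically quantified as the Pearson correlation between session-1 and session-2 scores, with a confidence interval obtained via the Fisher z-transform. A generic sketch of both computations (not the authors' analysis code; with r = 0.09 and N = 158 the interval comes out near the reported [-0.06, 0.25], though their CI was likely estimated differently, e.g. meta-analytically across labs):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between scores from two testing sessions."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def fisher_ci_95(r, n):
    """Approximate 95% CI for a correlation via the Fisher z-transform."""
    z = math.atanh(r)              # transform r to an approximately normal scale
    se = 1.0 / math.sqrt(n - 3)    # standard error of z
    return math.tanh(z - 1.96 * se), math.tanh(z + 1.96 * se)
```

Note that the CI for r = 0.09 at N = 158 spans zero, which is what "no consistent evidence of test-retest reliability" means here.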

20.
Dev Sci ; 27(1): e13431, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37403418

ABSTRACT

As reading is inherently a multisensory, audiovisual (AV) process in which visual symbols (i.e., letters) are connected to speech sounds, the question has been raised whether individuals with reading difficulties, such as children with developmental dyslexia (DD), have broader impairments in multisensory processing. This question has been posed before, yet it remains unanswered due to (a) the complexity and contentious etiology of DD and (b) the lack of consensus on developmentally appropriate AV processing tasks. We created an ecologically valid task for measuring multisensory AV processing by leveraging the natural phenomenon that speech perception improves when listeners are provided visual information from mouth movements (particularly when the auditory signal is degraded). We designed this AV processing task with low cognitive and linguistic demands so that children with and without DD would have equal unimodal (auditory and visual) performance. We then collected data from 135 children (ages 6.5-15) on an AV speech perception task to answer the following questions: (1) How do AV speech perception benefits manifest in children with and without DD? (2) Do all children use the same perceptual weights to create AV speech perception benefits? (3) What is the role of phonological processing in AV speech perception? We show that children with and without DD gain equal AV speech perception benefits on this task, but that children with DD rely less on auditory processing in more difficult listening situations to create these benefits and weight the two incoming information streams differently. Lastly, reported differences in speech perception in children with DD might be better explained by differences in phonological processing than by differences in reading skills. RESEARCH HIGHLIGHTS: Children with and without developmental dyslexia have equal audiovisual speech perception benefits, regardless of their phonological awareness or reading skills. Children with developmental dyslexia rely less on auditory performance to create audiovisual speech perception benefits. Individual differences in speech perception in children might be better explained by differences in phonological processing than by differences in reading skills.
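One common way to quantify the AV speech perception benefit discussed above is the gain in performance (e.g. proportion of words correct) in the audiovisual condition over the better of the two unimodal conditions. A hypothetical sketch under that definition; the paper's exact metric and weighting model may differ:

```python
def av_benefit(auditory, visual, audiovisual):
    """AV gain over the better unimodal condition.

    Inputs are performance scores (e.g. proportion correct) for the
    auditory-only, visual-only, and audiovisual conditions.
    Positive values mean the two modalities combine to beat either alone.
    """
    return audiovisual - max(auditory, visual)

# A child scoring 0.55 auditory-only, 0.30 visual-only (lipreading),
# and 0.80 audiovisual shows a benefit of 0.25.
```

Under this definition, two children can show the same benefit while weighting the auditory and visual streams very differently, which is the dissociation the study reports for children with DD.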


Subjects
Dyslexia, Speech Perception, Child, Humans, Adolescent, Dyslexia/psychology, Reading, Phonetics, Awareness