Results 1 - 20 of 42
1.
Proc Natl Acad Sci U S A ; 120(49): e2309166120, 2023 Dec 05.
Article in English | MEDLINE | ID: mdl-38032934

ABSTRACT

Neural speech tracking has advanced our understanding of how our brains rapidly map an acoustic speech signal onto linguistic representations and ultimately meaning. It remains unclear, however, how speech intelligibility is related to the corresponding neural responses. Many studies addressing this question vary the level of intelligibility by manipulating the acoustic waveform, but this makes it difficult to cleanly disentangle the effects of intelligibility from underlying acoustical confounds. Here, using magnetoencephalography recordings, we study neural measures of speech intelligibility by manipulating intelligibility while keeping the acoustics strictly unchanged. Acoustically identical degraded speech stimuli (three-band noise-vocoded, ~20 s duration) are presented twice, but the second presentation is preceded by the original (nondegraded) version of the speech. This intermediate priming, which generates a "pop-out" percept, substantially improves the intelligibility of the second degraded speech passage. We investigate how intelligibility and acoustical structure affect acoustic and linguistic neural representations using multivariate temporal response functions (mTRFs). As expected, behavioral results confirm that perceived speech clarity is improved by priming. mTRF analysis reveals that auditory (speech envelope and envelope onset) neural representations are not affected by priming but only by the acoustics of the stimuli (bottom-up driven). Critically, our findings suggest that segmentation of sounds into words emerges with better speech intelligibility, and most strongly at the later (~400 ms latency) word processing stage, in prefrontal cortex, in line with engagement of top-down mechanisms associated with priming. Taken together, our results show that word representations may provide some objective measures of speech comprehension.
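As a rough illustration of the mTRF approach mentioned above, the sketch below estimates a forward temporal response function by ridge regression of time-lagged copies of a stimulus feature onto a single response channel. It uses synthetic data and illustrative parameter values (sampling rate, lag window, regularization); it is not the authors' pipeline, which typically relies on dedicated toolboxes, multiple features, and cross-validation.

```python
import numpy as np

def lagged_design(stim, lags):
    """Design matrix whose columns are time-lagged copies of the stimulus feature."""
    n = len(stim)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stim[:n - lag]
        else:
            X[:lag, j] = stim[-lag:]
    return X

def fit_trf(stim, resp, fs, tmin=-0.1, tmax=0.5, alpha=1.0):
    """Forward TRF via ridge regression: resp ~ X @ w over lags tmin..tmax (s)."""
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    X = lagged_design(stim, lags)
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ resp)
    return lags / fs, w

# Synthetic demo: an envelope-like feature and a noisy response delayed by ~100 ms
fs = 100
rng = np.random.default_rng(0)
envelope = np.abs(rng.standard_normal(6000))
response = np.roll(envelope, 10) + 0.5 * rng.standard_normal(6000)
times, weights = fit_trf(envelope, response, fs)
print("TRF peak latency (s):", times[np.argmax(weights)])
```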


Subjects
Speech Intelligibility, Speech Perception, Speech Intelligibility/physiology, Acoustic Stimulation/methods, Speech/physiology, Noise, Acoustics, Magnetoencephalography/methods, Speech Perception/physiology
2.
Cereb Cortex ; 34(2)2024 01 31.
Article in English | MEDLINE | ID: mdl-38236741

ABSTRACT

The superior temporal and Heschl's gyri of the human brain play a fundamental role in speech processing. Neurons synchronize their activity to the amplitude envelope of the speech signal to extract acoustic and linguistic features, a process known as neural tracking/entrainment. Electroencephalography has been extensively used in language-related research due to its high temporal resolution and reduced cost, but it does not allow for precise source localization. Motivated by the lack of a unified methodology for the interpretation of source-reconstructed signals, we propose a method based on modularity and signal complexity. The procedure was tested on data from an experiment in which we investigated the impact of native language on the tracking of linguistic rhythms in two groups: English natives and Spanish natives. In the experiment, we found no effect of native language but an effect of language rhythm. Here, we compare source-projected signals in the auditory areas of both hemispheres for the different conditions using nonparametric permutation tests, modularity, and a dynamical complexity measure. We found increasing values of complexity for decreased regularity in the stimuli, allowing us to conclude that languages with less complex rhythms are easier to track by the auditory cortex.
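The abstract mentions nonparametric permutation tests for comparing conditions; below is a minimal, generic two-sample permutation test on synthetic "complexity" scores. Group sizes, effect size, and variable names are invented for illustration and do not reproduce the authors' modularity or complexity measures.

```python
import numpy as np

def permutation_test(a, b, n_perm=5000, seed=0):
    """Two-sample permutation test on the difference of means."""
    rng = np.random.default_rng(seed)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = perm[:len(a)].mean() - perm[len(a):].mean()
        if abs(diff) >= abs(observed):
            count += 1
    return observed, (count + 1) / (n_perm + 1)

# Illustrative: hypothetical complexity scores for regular vs. irregular stimulus rhythms
rng = np.random.default_rng(1)
regular = rng.normal(0.40, 0.05, 20)
irregular = rng.normal(0.47, 0.05, 20)
diff, p = permutation_test(irregular, regular)
print(f"mean difference = {diff:.3f}, p = {p:.4f}")
```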


Subjects
Auditory Cortex, Speech Perception, Humans, Speech Perception/physiology, Electroencephalography/methods, Auditory Cortex/physiology, Brain/physiology, Linguistics, Acoustic Stimulation
3.
J Neurosci ; 43(40): 6779-6795, 2023 10 04.
Article in English | MEDLINE | ID: mdl-37607822

ABSTRACT

Communication difficulties are one of the core criteria in diagnosing autism spectrum disorder (ASD), and are often characterized by speech reception difficulties, whose biological underpinnings are not yet identified. This deficit could denote atypical neuronal ensemble activity, as reflected by neural oscillations. Atypical cross-frequency oscillation coupling, in particular, could disrupt the joint tracking and prediction of dynamic acoustic stimuli, a dual process that is essential for speech comprehension. Whether such oscillatory anomalies already exist in very young children with ASD, and with what specificity they relate to individual language reception capacity, is unknown. We collected neural activity data using electroencephalography (EEG) in 64 very young children with and without ASD (mean age 3; 17 females, 47 males) while they were exposed to naturalistic-continuous speech. EEG power of frequency bands typically associated with phrase-level chunking (δ, 1-3 Hz), phonemic encoding (low-γ, 25-35 Hz), and top-down control (β, 12-20 Hz) was markedly reduced in ASD relative to typically developing (TD) children. Speech neural tracking by δ and θ (4-8 Hz) oscillations was also weaker in ASD compared with TD children. After controlling for gaze-pattern differences, we found that the classical θ/γ coupling was replaced by an atypical β/γ coupling in children with ASD. This anomaly was the single most specific predictor of individual speech reception difficulties in ASD children. These findings suggest that early interventions (e.g., neurostimulation) targeting the disruption of β/γ coupling and the upregulation of θ/γ coupling could improve speech processing coordination in young children with ASD and help them engage in oral interactions. SIGNIFICANCE STATEMENT: Very young children already present marked alterations of neural oscillatory activity in response to natural speech at the time of autism spectrum disorder (ASD) diagnosis. Hierarchical processing of phonemic-range and syllabic-range information (θ/γ coupling) is disrupted in ASD children. Abnormal bottom-up (low-γ) and top-down (low-β) coordination specifically predicts speech reception deficits, and no other cognitive deficit, in very young ASD children.
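Cross-frequency (phase-amplitude) coupling such as the θ/γ and β/γ coupling discussed above is often quantified with a modulation index computed from the phase of a low-frequency band and the amplitude of a high-frequency band. The sketch below implements a Tort-style modulation index on a synthetic signal; the band edges follow the abstract, but everything else (filter order, bin count, data) is illustrative and not the authors' analysis.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase band-pass filter between lo and hi Hz."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def modulation_index(x, fs, phase_band=(4, 8), amp_band=(25, 35), n_bins=18):
    """Tort-style PAC: distribution of high-band amplitude over low-band phase bins."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    bins = np.linspace(-np.pi, np.pi, n_bins + 1)
    mean_amp = np.array([amp[(phase >= bins[i]) & (phase < bins[i + 1])].mean()
                         for i in range(n_bins)])
    p = mean_amp / mean_amp.sum()
    # KL divergence from a uniform distribution, normalized by log(n_bins)
    return (np.log(n_bins) + np.sum(p * np.log(p))) / np.log(n_bins)

# Synthetic signal whose 30 Hz amplitude is modulated by 6 Hz phase
fs = 500
t = np.arange(0, 60, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
signal = theta + (1 + theta) * 0.3 * np.sin(2 * np.pi * 30 * t) + 0.2 * np.random.randn(len(t))
print("theta/low-gamma modulation index:", modulation_index(signal, fs))
```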


Subjects
Autism Spectrum Disorder, Autistic Disorder, Male, Female, Humans, Child, Child, Preschool, Speech/physiology, Autism Spectrum Disorder/diagnosis, Electroencephalography, Acoustic Stimulation
4.
Neuroimage ; 300: 120875, 2024 Oct 15.
Article in English | MEDLINE | ID: mdl-39341475

ABSTRACT

In speech perception, low-frequency cortical activity tracks hierarchical linguistic units (e.g., syllables, phrases, and sentences) on top of acoustic features (e.g., speech envelope). Since the fluctuation of speech envelope typically corresponds to the syllabic boundaries, one common interpretation is that the acoustic envelope underlies the extraction of discrete syllables from continuous speech for subsequent linguistic processing. However, it remains unclear whether and how cortical activity encodes linguistic information when the speech envelope does not provide acoustic correlates of syllables. To address the issue, we introduced a frequency-tagging speech stream where the syllabic rhythm was obscured by echoic envelopes and investigated neural encoding of hierarchical linguistic information using electroencephalography (EEG). When listeners attended to the echoic speech, cortical activity showed reliable tracking of syllable, phrase, and sentence levels, among which the higher-level linguistic units elicited more robust neural responses. When attention was diverted from the echoic speech, reliable neural tracking of the syllable level was also observed in contrast to deteriorated neural tracking of the phrase and sentence levels. Further analyses revealed that the envelope aligned with the syllabic rhythm could be recovered from the echoic speech through a neural adaptation model, and the reconstructed envelope yielded higher predictive power for the neural tracking responses than either the original echoic envelope or anechoic envelope. Taken together, these results suggest that neural adaptation and attentional modulation jointly contribute to neural encoding of linguistic information in distorted speech where the syllabic rhythm is obscured by echoes.
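In frequency-tagging designs like the one above, neural tracking of syllables, phrases, and sentences is typically read off the EEG spectrum as peaks at the corresponding presentation rates. A minimal sketch with synthetic data and assumed rates (4 Hz syllable, 2 Hz phrase, 1 Hz sentence, common in such paradigms but not taken from this study) is shown below.

```python
import numpy as np

fs = 250                      # sampling rate (Hz), illustrative
dur = 40                      # seconds of isochronous stimulation
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(0)

# Synthetic EEG with tagged responses at syllable (4 Hz), phrase (2 Hz), sentence (1 Hz) rates
eeg = (0.5 * np.sin(2 * np.pi * 4 * t) +
       0.3 * np.sin(2 * np.pi * 2 * t) +
       0.2 * np.sin(2 * np.pi * 1 * t) +
       rng.standard_normal(len(t)))

spectrum = np.abs(np.fft.rfft(eeg)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
for rate, label in [(1.0, "sentence"), (2.0, "phrase"), (4.0, "syllable")]:
    idx = np.argmin(np.abs(freqs - rate))
    print(f"{label:8s} rate {rate} Hz: amplitude = {spectrum[idx]:.3f}")
```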


Subjects
Electroencephalography, Speech Perception, Humans, Speech Perception/physiology, Male, Female, Electroencephalography/methods, Young Adult, Adult, Cerebral Cortex/physiology, Linguistics, Acoustic Stimulation
5.
Eur J Neurosci ; 59(3): 394-414, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38151889

ABSTRACT

Human speech is a particularly relevant acoustic stimulus for our species, due to its role in information transmission during communication. Speech is inherently a dynamic signal, and a recent line of research has focused on neural activity following the temporal structure of speech. We review findings that characterise neural dynamics in the processing of continuous acoustics and that allow us to compare these dynamics with temporal aspects in human speech. We highlight properties and constraints that both neural and speech dynamics have, suggesting that auditory neural systems are optimised to process human speech. We then discuss the speech-specificity of neural dynamics and their potential mechanistic origins and summarise open questions in the field.


Subjects
Speech Perception, Speech, Humans, Acoustic Stimulation, Acoustics
6.
Eur J Neurosci ; 60(6): 5381-5399, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39188179

ABSTRACT

While infants' sensitivity to visual speech cues and the benefit of these cues have been well-established by behavioural studies, there is little evidence on the effect of visual speech cues on infants' neural processing of continuous auditory speech. In this study, we investigated whether visual speech cues, such as the movements of the lips, jaw, and larynx, facilitate infants' neural speech tracking. Ten-month-old Dutch-learning infants watched videos of a speaker reciting passages in infant-directed speech while electroencephalography (EEG) was recorded. In the videos, either the full face of the speaker was displayed or the speaker's mouth and jaw were masked with a block, obstructing the visual speech cues. To assess neural tracking, speech-brain coherence (SBC) was calculated, focusing particularly on the stress and syllabic rates (1-1.75 and 2.5-3.5 Hz respectively in our stimuli). First, overall, SBC was compared to surrogate data, and then, differences in SBC in the two conditions were tested at the frequencies of interest. Our results indicated that infants show significant tracking at both stress and syllabic rates. However, no differences were identified between the two conditions, meaning that infants' neural tracking was not modulated further by the presence of visual speech cues. Furthermore, we demonstrated that infants' neural tracking of low-frequency information is related to their subsequent vocabulary development at 18 months. Overall, this study provides evidence that infants' neural tracking of speech is not necessarily impaired when visual speech cues are not fully visible and that neural tracking may be a potential mechanism in successful language acquisition.
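Speech-brain coherence (SBC) of the kind computed above can be approximated with scipy.signal.coherence between an EEG channel and the speech envelope, averaged over the bands of interest. The sketch below uses synthetic signals and the stress-rate (1-1.75 Hz) and syllable-rate (2.5-3.5 Hz) bands reported in the abstract; the segment length and all data are illustrative, and the surrogate comparison used in the study is omitted.

```python
import numpy as np
from scipy.signal import coherence

fs = 200
t = np.arange(0, 120, 1 / fs)
rng = np.random.default_rng(2)

# Illustrative speech envelope with syllabic (~3 Hz) modulation, and an EEG channel
# that partially follows it with added noise
envelope = 1 + 0.5 * np.sin(2 * np.pi * 3 * t) + 0.2 * rng.standard_normal(len(t))
eeg = 0.4 * envelope + rng.standard_normal(len(t))

freqs, coh = coherence(eeg, envelope, fs=fs, nperseg=fs * 4)   # 0.25 Hz resolution
stress_band = (freqs >= 1.0) & (freqs <= 1.75)
syllable_band = (freqs >= 2.5) & (freqs <= 3.5)
print("stress-rate coherence:  ", coh[stress_band].mean())
print("syllable-rate coherence:", coh[syllable_band].mean())
```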


Subjects
Cues (Psychology), Electroencephalography, Speech Perception, Speech, Humans, Infant, Male, Female, Speech Perception/physiology, Electroencephalography/methods, Speech/physiology, Visual Perception/physiology, Brain/physiology
7.
Hum Brain Mapp ; 45(8): e26676, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38798131

ABSTRACT

Aphasia is a communication disorder that affects processing of language at different levels (e.g., acoustic, phonological, semantic). Recording brain activity via electroencephalography while people listen to a continuous story makes it possible to analyze brain responses to acoustic and linguistic properties of speech. When the neural activity aligns with these speech properties, it is referred to as neural tracking. Even though measuring neural tracking of speech may present an interesting approach to studying aphasia in an ecologically valid way, it has not yet been investigated in individuals with stroke-induced aphasia. Here, we explored processing of acoustic and linguistic speech representations in individuals with aphasia in the chronic phase after stroke and age-matched healthy controls. We found decreased neural tracking of acoustic speech representations (envelope and envelope onsets) in individuals with aphasia. In addition, word surprisal displayed decreased amplitudes in individuals with aphasia around 195 ms over frontal electrodes, although this effect was not corrected for multiple comparisons. These results show that there is potential to capture language processing impairments in individuals with aphasia by measuring neural tracking of continuous speech. However, more research is needed to validate these results. Nonetheless, this exploratory study shows that neural tracking of naturalistic, continuous speech presents a powerful approach to studying aphasia.


Subjects
Aphasia, Electroencephalography, Stroke, Humans, Aphasia/physiopathology, Aphasia/etiology, Aphasia/diagnostic imaging, Male, Female, Middle Aged, Stroke/complications, Stroke/physiopathology, Aged, Speech Perception/physiology, Adult, Speech/physiology
8.
Neuroimage ; 282: 120404, 2023 11 15.
Article in English | MEDLINE | ID: mdl-37806465

ABSTRACT

Despite the distortion of speech signals caused by unavoidable noise in daily life, our ability to comprehend speech in noisy environments is relatively stable. However, the neural mechanisms underlying reliable speech-in-noise comprehension remain to be elucidated. The present study investigated the neural tracking of acoustic and semantic speech information during noisy naturalistic speech comprehension. Participants listened to narrative audio recordings mixed with spectrally matched stationary noise at three signal-to-noise ratio (SNR) levels (no noise, 3 dB, -3 dB), and 60-channel electroencephalography (EEG) signals were recorded. A temporal response function (TRF) method was employed to derive event-related-like responses to the continuous speech stream at both the acoustic and the semantic levels. Whereas the amplitude envelope of the naturalistic speech was taken as the acoustic feature, word entropy and word surprisal were extracted via natural language processing methods as two semantic features. Theta-band frontocentral TRF responses to the acoustic feature were observed at around 400 ms following speech fluctuation onset over all three SNR levels, and the response latencies were more delayed with increasing noise. Delta-band frontal TRF responses to the semantic feature of word entropy were observed at around 200 to 600 ms leading speech fluctuation onset over all three SNR levels. The response latencies became more leading with increasing noise and decreasing speech comprehension and intelligibility. While the lagging responses to speech acoustics were consistent with previous studies, our study revealed the robustness of leading responses to speech semantics, which suggests a possible predictive mechanism at the semantic level for maintaining reliable speech comprehension in noisy environments.
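Word-level semantic features such as surprisal or entropy are commonly turned into a continuous regressor before TRF estimation: a zero signal with an impulse at each word onset, scaled by that word's feature value. The sketch below shows only this construction; onset times, surprisal values, and the sampling rate are invented, and the resulting regressor would feed into a TRF model such as the ridge-regression sketch after the first abstract above.

```python
import numpy as np

fs = 128                                   # EEG sampling rate, illustrative
dur = 10.0                                 # seconds
word_onsets = [0.3, 0.9, 1.6, 2.2, 3.1]    # hypothetical word onset times (s)
surprisal = [4.2, 7.8, 3.1, 9.5, 5.0]      # hypothetical -log2 p(word | context)

# Continuous regressor: zero everywhere, an impulse scaled by surprisal at each word onset
n = int(dur * fs)
regressor = np.zeros(n)
for onset, s in zip(word_onsets, surprisal):
    regressor[int(onset * fs)] = s

# This regressor would then enter a (m)TRF model alongside the acoustic envelope.
print("non-zero samples:", np.flatnonzero(regressor))
print("surprisal values:", regressor[regressor > 0])
```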


Subjects
Comprehension, Speech Perception, Humans, Comprehension/physiology, Semantics, Speech/physiology, Speech Perception/physiology, Electroencephalography, Acoustics, Acoustic Stimulation
9.
Neuroimage ; 267: 119841, 2023 02 15.
Article in English | MEDLINE | ID: mdl-36584758

ABSTRACT

BACKGROUND: Older adults process speech differently, but it is not yet clear how aging affects different levels of processing natural, continuous speech, both in terms of bottom-up acoustic analysis and top-down generation of linguistic-based predictions. We studied natural speech processing across the adult lifespan via electroencephalography (EEG) measurements of neural tracking. GOALS: Our goals are to analyze the unique contribution of linguistic speech processing across the adult lifespan using natural speech, while controlling for the influence of acoustic processing. Moreover, we also studied acoustic processing across age. In particular, we focus on changes in spatial and temporal activation patterns in response to natural speech across the lifespan. METHODS: 52 normal-hearing adults between 17 and 82 years of age listened to a naturally spoken story while the EEG signal was recorded. We investigated the effect of age on acoustic and linguistic processing of speech. Because age correlated with hearing capacity and measures of cognition, we investigated whether the observed age effect is mediated by these factors. Furthermore, we investigated whether there is an effect of age on hemisphere lateralization and on spatiotemporal patterns of the neural responses. RESULTS: Our EEG results showed that linguistic speech processing declines with advancing age. Moreover, as age increased, the neural response latency to certain aspects of linguistic speech processing increased. Acoustic neural tracking (NT) also decreased with increasing age, which is at odds with the literature. In contrast to linguistic processing, older subjects showed shorter latencies for early acoustic responses to speech. No evidence was found for hemispheric lateralization in either younger or older adults during linguistic speech processing. Most of the observed aging effects on acoustic and linguistic processing were not explained by age-related decline in hearing capacity or cognition. However, our results suggest that the decrease in word-level linguistic neural tracking with advancing age is partially due to an age-related decline in cognition rather than to a robust effect of age alone. CONCLUSION: Spatial and temporal characteristics of the neural responses to continuous speech change across the adult lifespan for both acoustic and linguistic speech processing. These changes may be traces of structural and/or functional change that occurs with advancing age.


Subjects
Speech Perception, Speech, Humans, Aged, Speech/physiology, Acoustic Stimulation/methods, Speech Perception/physiology, Electroencephalography/methods, Linguistics, Acoustics
10.
Hum Brain Mapp ; 44(17): 6149-6172, 2023 12 01.
Article in English | MEDLINE | ID: mdl-37818940

ABSTRACT

The brain tracks and encodes multi-level speech features during spoken language processing. It is evident that this speech tracking is dominant at low frequencies (<8 Hz) including delta and theta bands. Recent research has demonstrated distinctions between delta- and theta-band tracking but has not elucidated how they differentially encode speech across linguistic levels. Here, we hypothesised that delta-band tracking encodes prediction errors (enhanced processing of unexpected features) while theta-band tracking encodes neural sharpening (enhanced processing of expected features) when people perceive speech with different linguistic contents. EEG responses were recorded when normal-hearing participants attended to continuous auditory stimuli that contained different phonological/morphological and semantic contents: (1) real-words, (2) pseudo-words and (3) time-reversed speech. We employed multivariate temporal response functions to measure EEG reconstruction accuracies in response to acoustic (spectrogram), phonetic and phonemic features with the partialling procedure that singles out unique contributions of individual features. We found higher delta-band accuracies for pseudo-words than real-words and time-reversed speech, especially during encoding of phonetic features. Notably, individual time-lag analyses showed that significantly higher accuracies for pseudo-words than real-words started at early processing stages for phonetic encoding (<100 ms post-feature) and later stages for acoustic and phonemic encoding (>200 and 400 ms post-feature, respectively). Theta-band accuracies, on the other hand, were higher when stimuli had richer linguistic content (real-words > pseudo-words > time-reversed speech). Such effects also started at early stages (<100 ms post-feature) during encoding of all individual features or when all features were combined. We argue that these results indicate that delta-band tracking may play a role in predictive coding leading to greater tracking of pseudo-words due to the presence of unexpected/unpredicted semantic information, while theta-band tracking encodes sharpened signals caused by more expected phonological/morphological and semantic contents. Early presence of these effects reflects rapid computations of sharpening and prediction errors. Moreover, by measuring changes in EEG alpha power, we did not find evidence that the observed effects can be solely explained by attentional demands or listening efforts. Finally, we used directed information analyses to illustrate feedforward and feedback information transfers between prediction errors and sharpening across linguistic levels, showcasing how our results fit with the hierarchical Predictive Coding framework. Together, these findings suggest distinct roles of delta and theta neural tracking for sharpening and predictive coding of multi-level speech features during spoken language processing.
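Reconstruction accuracy of the kind reported above is usually obtained from a backward (decoding) model that maps multichannel EEG onto a stimulus feature and correlates the reconstruction with the true feature on held-out data. The sketch below is a stripped-down version of that idea on synthetic data; it omits the multivariate partialling, cross-validation folds, and feature sets used in the study, and all parameter values are illustrative.

```python
import numpy as np

def reconstruction_accuracy(eeg, stim, fs, tmin=0.0, tmax=0.25, alpha=1e2):
    """Backward model: map time-lagged multichannel EEG (time x channels) onto the stimulus,
    train on the first half of the data and report Pearson r on the second half."""
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    n, n_ch = eeg.shape
    X = np.zeros((n, n_ch * len(lags)))
    for j, lag in enumerate(lags):                     # stack lagged copies of every channel
        X[: n - lag, j * n_ch:(j + 1) * n_ch] = eeg[lag:, :]
    half = n // 2
    w = np.linalg.solve(X[:half].T @ X[:half] + alpha * np.eye(X.shape[1]),
                        X[:half].T @ stim[:half])
    recon = X[half:] @ w
    return np.corrcoef(recon, stim[half:])[0, 1]

# Synthetic 8-channel EEG carrying a delayed copy of a spectrogram-like feature plus noise
fs, n = 100, 4000
rng = np.random.default_rng(3)
stim = np.abs(rng.standard_normal(n))
eeg = np.stack([np.roll(stim, 8 + k) + rng.standard_normal(n) for k in range(8)], axis=1)
print("reconstruction accuracy (r):", reconstruction_accuracy(eeg, stim, fs))
```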


Subjects
Auditory Cortex, Speech Perception, Humans, Speech/physiology, Electroencephalography/methods, Acoustic Stimulation/methods, Speech Perception/physiology, Auditory Cortex/physiology
11.
Neuroimage ; 252: 119049, 2022 05 15.
Article in English | MEDLINE | ID: mdl-35248707

ABSTRACT

Music is often described in the laboratory and in the classroom as a beneficial tool for memory encoding and retention, with a particularly strong effect when words are sung to familiar compared to unfamiliar melodies. However, the neural mechanisms underlying this memory benefit, especially for benefits related to familiar music, are not well understood. The current study examined whether neural tracking of the slow syllable rhythms of speech and song is modulated by melody familiarity. Participants became familiar with twelve novel melodies over four days prior to MEG testing. Neural tracking of the same utterances spoken and sung revealed greater cerebro-acoustic phase coherence for sung compared to spoken utterances, but did not show an effect of familiar melody when stimuli were grouped by their assigned (trained) familiarity. However, when participants' subjective ratings of perceived familiarity were used to group stimuli, a large effect of familiarity was observed. This effect was not specific to song, as it was observed in both sung and spoken utterances. Exploratory analyses revealed some in-session learning of unfamiliar and spoken utterances, with increased neural tracking for untrained stimuli by the end of the MEG testing session. Our results indicate that top-down factors like familiarity are strong modulators of neural tracking for music and language. Participants' neural tracking was related to their perception of familiarity, which was likely driven by a combination of effects from repeated listening, stimulus-specific melodic simplicity, and individual differences. Beyond simply the acoustic features of music, top-down factors built into the music listening experience, like repetition and familiarity, play a large role in the way we attend to and encode information presented in a musical context.


Subjects
Music, Singing, Auditory Perception, Humans, Recognition (Psychology), Speech
12.
Neuroimage ; 264: 119698, 2022 12 01.
Article in English | MEDLINE | ID: mdl-36270622

ABSTRACT

Working memory load can modulate speech perception. However, since speech perception and working memory are both complex functions, it remains elusive how each component of the working memory system interacts with each speech processing stage. To investigate this issue, we concurrently measure how working memory load modulates neural activity tracking three levels of linguistic units, i.e., syllables, phrases, and sentences, using a multiscale frequency-tagging approach. Participants engage in a sentence comprehension task and the working memory load is manipulated by asking them to memorize either auditory verbal sequences or visual patterns. It is found that verbal and visual working memory load modulate speech processing in similar manners: higher working memory load attenuates neural activity tracking of phrases and sentences but enhances neural activity tracking of syllables. Since verbal and visual WM load similarly influence the neural responses to speech, such influences may derive from the domain-general component of the WM system. More importantly, working memory load asymmetrically modulates lower-level auditory encoding and higher-level linguistic processing of speech, possibly reflecting reallocation of attention induced by mnemonic load.


Subjects
Short-Term Memory, Speech Perception, Humans, Short-Term Memory/physiology, Speech/physiology, Linguistics, Speech Perception/physiology, Language
13.
Eur J Neurosci ; 55(6): 1671-1690, 2022 03.
Article in English | MEDLINE | ID: mdl-35263814

ABSTRACT

We investigated the impact of hearing loss on the neural processing of speech. Using a forward modelling approach, we compared the neural responses to continuous speech of 14 adults with sensorineural hearing loss with those of age-matched normal-hearing peers. Compared with their normal-hearing peers, hearing-impaired listeners had increased neural tracking and delayed neural responses to continuous speech in quiet. The latency also increased with the degree of hearing loss. As speech understanding decreased, neural tracking decreased in both populations; however, a significantly different trend was observed for the latency of the neural responses. For normal-hearing listeners, the latency increased with increasing background noise level. However, for hearing-impaired listeners, this increase was not observed. Our results support the idea that the neural response latency indicates the efficiency of neural speech processing: more or different brain regions are involved in processing speech, which causes longer communication pathways in the brain. These longer communication pathways hamper the information integration among these brain regions, reflected in longer processing times. Altogether, this suggests decreased neural speech processing efficiency in hearing-impaired listeners, as more time and more or different brain regions are required to process speech. Our results suggest that this reduction in neural speech processing efficiency occurs gradually as hearing deteriorates. From our results, it is apparent that sound amplification alone does not resolve the consequences of hearing loss. Even when listening to speech in silence at a comfortable loudness, hearing-impaired listeners process speech less efficiently.


Subjects
Deafness, Sensorineural Hearing Loss, Hearing Loss, Speech Perception, Adult, Humans, Noise, Speech, Speech Perception/physiology
14.
Eur J Neurosci ; 52(5): 3375-3393, 2020 09.
Article in English | MEDLINE | ID: mdl-32306466

ABSTRACT

When listening to natural speech, our brain activity tracks the slow amplitude modulations of speech, also called the speech envelope. Moreover, recent research has demonstrated that this neural envelope tracking can be affected by top-down processes. The present study was designed to examine whether neural envelope tracking is modulated by the effort that a person expends during listening. Five measures were included to quantify listening effort: two behavioral measures based on a novel dual-task paradigm, a self-report effort measure, and two neural measures related to phase synchronization and alpha power. Electroencephalography responses to sentences, presented at a wide range of subject-specific signal-to-noise ratios, were recorded in thirteen young, normal-hearing adults. A comparison of the five measures revealed different effects of listening effort as a function of speech understanding. Reaction times on the primary task and self-reported effort decreased with increasing speech understanding. In contrast, reaction times on the secondary task and alpha power showed a peak-shaped behavior with highest effort at intermediate speech understanding levels. With regard to neural envelope tracking, we found that the reaction times on the secondary task and self-reported effort explained a small part of the variability in theta-band envelope tracking. Speech understanding was found to strongly modulate neural envelope tracking. More specifically, our results demonstrated a robust increase in envelope tracking with increasing speech understanding. The present study provides new insights into the relations among different effort measures and highlights the potential of neural envelope tracking to objectively measure speech understanding in young, normal-hearing adults.


Subjects
Speech Perception, Adult, Auditory Perception, Humans, Reaction Time, Self Report, Speech
15.
Eur J Neurosci ; 51(2): 641-650, 2020 01.
Article in English | MEDLINE | ID: mdl-31430411

ABSTRACT

In a complex auditory scene, speech comprehension involves several stages: for example, segregating the target from the background, recognizing syllables, and integrating syllables into linguistic units (e.g., words). Although speech segregation is robust, as shown by invariant neural tracking of the target speech envelope, whether neural tracking of linguistic units is also robust and how this robustness is achieved remain unknown. To investigate these questions, we concurrently recorded neural responses tracking a rhythmic speech stream at its syllabic and word rates, using electroencephalography. Human participants listened to the target speech under a speech or noise distractor at varying signal-to-noise ratios. Neural tracking at the word rate was not as robust as neural tracking at the syllabic rate. Robust neural tracking of the target's words was observed only under the speech distractor, not under the noise distractor. Moreover, this robust word tracking correlated with a successful suppression of distractor tracking. Critically, both word tracking and distractor suppression correlated with behavioural comprehension accuracy. In sum, our results suggest that a robust neural tracking of higher-level linguistic units relates not only to the target tracking, but also to the distractor suppression.


Subjects
Speech Perception, Comprehension, Electroencephalography, Humans, Linguistics, Speech
16.
J Neurophysiol ; 122(2): 601-615, 2019 08 01.
Article in English | MEDLINE | ID: mdl-31141449

ABSTRACT

When we grow older, understanding speech in noise becomes more challenging. Research has demonstrated the role of auditory temporal and cognitive deficits in these age-related speech-in-noise difficulties. To better understand the underlying neural mechanisms, we recruited young, middle-aged, and older normal-hearing adults and investigated the interplay between speech understanding, cognition, and neural tracking of the speech envelope using electroencephalography. The stimuli consisted of natural speech masked by speech-weighted noise or a competing talker and were presented at several subject-specific speech understanding levels. In addition to running speech, we recorded auditory steady-state responses at low modulation frequencies to assess the effect of age on nonspeech sounds. The results show that healthy aging resulted in a supralinear increase in the speech reception threshold, i.e., worse speech understanding, most pronounced for the competing talker. Similarly, advancing age was associated with a supralinear increase in envelope tracking, with a pronounced enhancement for older adults. Additionally, envelope tracking was found to increase with speech understanding, most apparent for older adults. Because we found that worse cognitive scores were associated with enhanced envelope tracking, our results support the hypothesis that enhanced envelope tracking in older adults is the result of a higher activation of brain regions for processing speech, compared with younger adults. From a cognitive perspective, this could reflect the inefficient use of cognitive resources, often observed in behavioral studies. Interestingly, the opposite effect of age was found for auditory steady-state responses, suggesting a complex interplay of different neural mechanisms with advancing age. NEW & NOTEWORTHY: We measured neural tracking of the speech envelope across the adult lifespan and found a supralinear increase in envelope tracking with age. Using a more ecologically valid approach than auditory steady-state responses, we found that young and older, as well as middle-aged, normal-hearing adults showed an increase in envelope tracking with increasing speech understanding and that this association is stronger for older adults.


Subjects
Aging/physiology, Cerebral Cortex/physiology, Comprehension/physiology, Speech Perception/physiology, Acoustic Stimulation, Adolescent, Adult, Aged, Aged 80 and over, Electroencephalography, Female, Humans, Male, Middle Aged, Perceptual Masking/physiology, Psycholinguistics, Young Adult
17.
J Neurophysiol ; 117(1): 18-27, 2017 01 01.
Article in English | MEDLINE | ID: mdl-27707813

ABSTRACT

Hearing loss manifests as a reduced ability to understand speech, particularly in multitalker situations. In these situations, younger normal-hearing listeners' brains are known to track attended speech through phase-locking of neural activity to the slow-varying envelope of the speech. This study investigates how hearing loss, compensated by hearing aids, affects the neural tracking of the speech-onset envelope in elderly participants with varying degree of hearing loss (n = 27, 62-86 yr; hearing thresholds 11-73 dB hearing level). In an active listening task, a to-be-attended audiobook (signal) was presented either in quiet or against a competing to-be-ignored audiobook (noise) presented at three individualized signal-to-noise ratios (SNRs). The neural tracking of the to-be-attended and to-be-ignored speech was quantified through the cross-correlation of the electroencephalogram (EEG) and the temporal envelope of speech. We primarily investigated the effects of hearing loss and SNR on the neural envelope tracking. First, we found that elderly hearing-impaired listeners' neural responses reliably track the envelope of to-be-attended speech more than to-be-ignored speech. Second, hearing loss relates to the neural tracking of to-be-ignored speech, resulting in a weaker differential neural tracking of to-be-attended vs. to-be-ignored speech in listeners with worse hearing. Third, neural tracking of to-be-attended speech increased with decreasing background noise. Critically, the beneficial effect of reduced noise on neural speech tracking decreased with stronger hearing loss. In sum, our results show that a common sensorineural processing deficit, i.e., hearing loss, interacts with central attention mechanisms and reduces the differential tracking of attended and ignored speech. NEW & NOTEWORTHY: The present study investigates the effect of hearing loss in older listeners on the neural tracking of competing speech. Interestingly, we observed that whereas internal degradation (hearing loss) relates to the neural tracking of ignored speech, external sound degradation (ratio between attended and ignored speech; signal-to-noise ratio) relates to tracking of attended speech. This provides the first evidence for hearing loss affecting the ability to neurally track speech.
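The cross-correlation measure of envelope tracking described above can be sketched as the normalized correlation between an EEG channel and the speech envelope across a range of positive lags, with the peak taken as the tracking latency. The code below does this on synthetic data; the sampling rate, lag window, and simulated delay are illustrative and do not reproduce the study's attended/ignored analysis.

```python
import numpy as np

def envelope_crosscorr(eeg, envelope, fs, max_lag_s=0.5):
    """Normalized cross-correlation between EEG and envelope at lags 0..max_lag_s."""
    eeg = (eeg - eeg.mean()) / eeg.std()
    envelope = (envelope - envelope.mean()) / envelope.std()
    lags = np.arange(int(max_lag_s * fs) + 1)
    xcorr = np.array([np.mean(envelope[:len(envelope) - lag] * eeg[lag:]) for lag in lags])
    return lags / fs, xcorr

# Synthetic EEG that follows the envelope at a ~150 ms delay
fs = 100
rng = np.random.default_rng(4)
env = np.abs(rng.standard_normal(12000))
eeg = np.roll(env, 15) + rng.standard_normal(12000)
lags_s, xcorr = envelope_crosscorr(eeg, env, fs)
print("peak tracking lag (s):", lags_s[np.argmax(xcorr)])
```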


Subjects
Attention/physiology, Brain Mapping, Brain/physiopathology, Hearing Loss/pathology, Speech Perception/physiology, Speech/physiology, Acoustic Stimulation, Aged, Aged 80 and over, Auditory Threshold/physiology, Female, Hearing Loss/physiopathology, Humans, Male, Middle Aged, Psychoacoustics, Signal-to-Noise Ratio
18.
eNeuro ; 11(9)2024 Sep.
Article in English | MEDLINE | ID: mdl-39266328

ABSTRACT

Studies employing EEG to measure somatosensory responses have typically been optimized to compute event-related potentials in response to discrete events. However, tactile interactions involve continuous processing of nonstationary inputs that change in location, duration, and intensity. To fill this gap, this study aims to demonstrate the possibility of measuring the neural tracking of continuous and unpredictable tactile information. Twenty-seven young adults (15 females) were continuously and passively stimulated with a random series of gentle brushes on single fingers of each hand, which were covered from view. Thus, tactile stimulations were unique for each participant and stimulated finger. An encoding model measured the degree of synchronization between brain activity and continuous tactile input, generating a temporal response function (TRF). Brain topographies associated with the encoding of each finger stimulation showed a contralateral response at central sensors starting at 50 ms and peaking at ∼140 ms of lag, followed by a bilateral response at ∼240 ms. A series of analyses highlighted that reliable tactile TRFs emerged after just 3 min of stimulation. Strikingly, topographical patterns of the TRF allowed discriminating digit lateralization across hands and digit representation within each hand. Our results demonstrate for the first time the possibility of using EEG to measure the neural tracking of a naturalistic, continuous, and unpredictable stimulation in the somatosensory domain. Crucially, this approach allows the study of brain activity following individualized, idiosyncratic tactile events to the fingers.


Subjects
Electroencephalography, Physical Stimulation, Touch Perception, Humans, Male, Female, Young Adult, Electroencephalography/methods, Touch Perception/physiology, Adult, Brain/physiology, Fingers/physiology, Touch/physiology, Somatosensory Evoked Potentials/physiology, Brain Mapping, Functional Laterality/physiology
19.
eNeuro ; 10(7)2023 Jul.
Article in English | MEDLINE | ID: mdl-37451862

ABSTRACT

Speech comprehension is a complex neural process that relies on the activation and integration of multiple brain regions. In the current study, we evaluated whether speech comprehension can be investigated by neural tracking. Neural tracking is the phenomenon in which the brain responses time-lock to the rhythm of specific features in continuous speech. These features can be acoustic, i.e., acoustic tracking, or derived from the content of the speech using language properties, i.e., language tracking. We evaluated whether neural tracking of speech differs between a comprehensible story, an incomprehensible story, and a word list. We evaluated the neural responses to speech of 19 participants (six men). No significant difference regarding acoustic tracking was found. However, significant language tracking was only found for the comprehensible story. The most prominent effect was visible for word surprisal, a language feature at the word level. The neural response to word surprisal showed a prominent negativity between 300 and 400 ms, similar to the N400 in evoked response paradigms. This N400 was significantly more negative when the story was comprehended, i.e., when words could be integrated into the context of previous words. These results show that language tracking can capture the effect of speech comprehension.


Subjects
Electroencephalography, Speech Perception, Humans, Male, Female, Electroencephalography/methods, Comprehension/physiology, Evoked Potentials/physiology, Language, Hearing, Speech Perception/physiology
20.
Hear Res ; 434: 108785, 2023 07.
Article in English | MEDLINE | ID: mdl-37172414

ABSTRACT

Behavioral tests are currently the gold standard in measuring speech intelligibility. However, these tests can be difficult to administer in young children due to factors such as motivation, linguistic knowledge, and cognitive skills. It has been shown that measures of neural envelope tracking can be used to predict speech intelligibility and overcome these issues. However, its potential as an objective measure for speech intelligibility in noise remains to be investigated in preschool children. Here, we evaluated neural envelope tracking as a function of signal-to-noise ratio (SNR) in fourteen 5-year-old children. We examined EEG responses to natural, continuous speech presented at different SNRs ranging from -8 (very difficult) to 8 dB SNR (very easy). As expected, delta-band (0.5-4 Hz) tracking increased with increasing stimulus SNR. However, this increase was not strictly monotonic, as neural tracking reached a plateau between 0 and 4 dB SNR, similar to the behavioral speech intelligibility outcomes. These findings indicate that neural tracking in the delta band remains stable as long as the acoustical degradation of the speech signal does not reflect significant changes in speech intelligibility. Theta-band tracking (4-8 Hz), on the other hand, was found to be drastically reduced and more easily affected by noise in children, making it less reliable as a measure of speech intelligibility. By contrast, neural envelope tracking in the delta band was directly associated with behavioral measures of speech intelligibility. This suggests that neural envelope tracking in the delta band is a valuable tool for evaluating speech-in-noise intelligibility in preschoolers, highlighting its potential as an objective measure of speech intelligibility in difficult-to-test populations.
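Band-specific envelope tracking like the delta- and theta-band measures above can be sketched by band-pass filtering both the EEG and the speech envelope and correlating them within each band. The example below uses the 0.5-4 Hz and 4-8 Hz bands from the abstract on synthetic signals; the filter order, sampling rate, and data are illustrative, and it ignores the response latency that a TRF or coherence analysis would account for.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def band_tracking(eeg, envelope, fs, band):
    """Correlation between band-passed EEG and band-passed speech envelope."""
    sos = butter(4, band, btype="band", fs=fs, output="sos")
    return np.corrcoef(sosfiltfilt(sos, eeg), sosfiltfilt(sos, envelope))[0, 1]

fs = 64
rng = np.random.default_rng(5)
t = np.arange(0, 180, 1 / fs)
# Illustrative envelope dominated by slow (delta-range) modulations plus broadband noise
envelope = np.sin(2 * np.pi * 1.5 * t) + 0.5 * rng.standard_normal(len(t))
eeg = 0.6 * envelope + rng.standard_normal(len(t))   # EEG partially follows the envelope

print("delta-band tracking (0.5-4 Hz):", band_tracking(eeg, envelope, fs, (0.5, 4.0)))
print("theta-band tracking (4-8 Hz):  ", band_tracking(eeg, envelope, fs, (4.0, 8.0)))
```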


Subjects
Speech Intelligibility, Speech Perception, Child, Preschool, Humans, Speech Perception/physiology, Noise/adverse effects, Signal-to-Noise Ratio, Speech Reception Threshold Test