Results 1 - 20 of 81
1.
bioRxiv ; 2024 Feb 02.
Article in English | MEDLINE | ID: mdl-38352332

ABSTRACT

When we listen to speech, our brain's neurophysiological responses "track" its acoustic features, but it is less well understood how these auditory responses are modulated by linguistic content. Here, we recorded magnetoencephalography (MEG) responses while subjects listened to four types of continuous-speech-like passages: speech-envelope modulated noise, English-like non-words, scrambled words, and a narrative passage. Temporal response function (TRF) analysis provides strong neural evidence for the emergent features of speech processing in cortex, from acoustics to higher-level linguistics, as incremental steps in neural speech processing. Critically, we show a stepwise hierarchical progression of progressively higher-order features over time, reflected in both bottom-up (early) and top-down (late) processing stages. Linguistically driven top-down mechanisms take the form of late N400-like responses, suggesting a central role of predictive coding mechanisms at multiple levels. As expected, the neural processing of lower-level acoustic feature responses is bilateral or right lateralized, with left lateralization emerging only for lexical-semantic features. Finally, our results identify potential neural markers of the computations underlying speech perception and comprehension.

2.
Front Neurosci ; 17: 1264453, 2023.
Article in English | MEDLINE | ID: mdl-38156264

ABSTRACT

Auditory cortical responses to speech obtained by magnetoencephalography (MEG) show robust speech tracking to the speaker's fundamental frequency in the high-gamma band (70-200 Hz), but little is currently known about whether such responses depend on the focus of selective attention. In this study, 22 human subjects listened to concurrent, fixed-rate speech from male and female speakers and were asked to selectively attend to one speaker at a time, while their neural responses were recorded with MEG. The male speaker's pitch range coincided with the lower range of the high-gamma band, whereas the female speaker's higher pitch range had much less overlap, and only at the upper end of the high-gamma band. Neural responses were analyzed using the temporal response function (TRF) framework. As expected, the responses demonstrate robust speech tracking of the fundamental frequency in the high-gamma band, but only to the male's speech, with a peak latency of ~40 ms. Critically, the response magnitude depends on selective attention: the response to the male speech is significantly greater when male speech is attended than when it is not attended, under acoustically identical conditions. This is a clear demonstration that even very early cortical auditory responses are influenced by top-down, cognitive, neural processing mechanisms.
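
As context for the kind of preprocessing such an analysis involves, here is a minimal sketch of isolating the 70-200 Hz high-gamma band from a continuous MEG channel before fitting a TRF to the fundamental-frequency waveform. The sampling rate and the synthetic data are assumptions for illustration, not the study's actual pipeline.

```python
# Minimal sketch: zero-phase band-pass filtering of one MEG channel to the
# 70-200 Hz high-gamma band prior to TRF analysis. Sampling rate and data
# are placeholders, not the recordings described above.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                               # assumed MEG sampling rate (Hz)
meg = np.random.randn(60 * int(fs))       # placeholder: one minute of one channel

b, a = butter(4, [70.0, 200.0], btype="bandpass", fs=fs)   # 4th-order Butterworth
meg_high_gamma = filtfilt(b, a, meg)      # forward-backward filtering (zero phase)
```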

3.
Proc Natl Acad Sci U S A ; 120(49): e2309166120, 2023 Dec 05.
Article in English | MEDLINE | ID: mdl-38032934

ABSTRACT

Neural speech tracking has advanced our understanding of how our brains rapidly map an acoustic speech signal onto linguistic representations and ultimately meaning. It remains unclear, however, how speech intelligibility is related to the corresponding neural responses. Many studies addressing this question vary the level of intelligibility by manipulating the acoustic waveform, but this makes it difficult to cleanly disentangle the effects of intelligibility from underlying acoustical confounds. Here, using magnetoencephalography recordings, we study neural measures of speech intelligibility by manipulating intelligibility while keeping the acoustics strictly unchanged. Acoustically identical degraded speech stimuli (three-band noise-vocoded, ~20 s duration) are presented twice, but the second presentation is preceded by the original (nondegraded) version of the speech. This intermediate priming, which generates a "pop-out" percept, substantially improves the intelligibility of the second degraded speech passage. We investigate how intelligibility and acoustical structure affect acoustic and linguistic neural representations using multivariate temporal response functions (mTRFs). As expected, behavioral results confirm that perceived speech clarity is improved by priming. mTRF analysis reveals that auditory (speech envelope and envelope onset) neural representations are not affected by priming but only by the acoustics of the stimuli (bottom-up driven). Critically, our findings suggest that segmentation of sounds into words emerges with better speech intelligibility, and most strongly at the later (~400 ms latency) word processing stage, in prefrontal cortex, in line with engagement of top-down mechanisms associated with priming. Taken together, our results show that word representations may provide some objective measures of speech comprehension.
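
For readers unfamiliar with the stimulus manipulation, the sketch below illustrates the general idea of three-band noise vocoding: split the speech into three broad bands, extract each band's envelope, and use it to modulate band-limited noise. The band edges, filter order, and envelope extraction are illustrative assumptions, not the exact parameters used in the study.

```python
# Illustrative three-band noise vocoder (not the authors' exact implementation).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, edges=(80, 500, 1500, 4000)):
    """Replace the fine structure in each of three bands with noise,
    keeping only the band envelopes (assumed band edges in Hz)."""
    rng = np.random.default_rng(0)
    vocoded = np.zeros_like(speech, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        envelope = np.abs(hilbert(band))                        # band envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(speech)))
        vocoded += envelope * carrier                           # envelope-modulated noise
    return vocoded
```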


Subjects
Speech Intelligibility, Speech Perception, Speech Intelligibility/physiology, Acoustic Stimulation/methods, Speech/physiology, Noise, Acoustics, Magnetoencephalography/methods, Speech Perception/physiology
4.
Elife ; 12: 2023 Nov 29.
Article in English | MEDLINE | ID: mdl-38018501

ABSTRACT

Even though human experience unfolds continuously in time, it is not strictly linear; instead, it entails cascading processes building hierarchical cognitive structures. For instance, during speech perception, humans transform a continuously varying acoustic signal into phonemes, words, and meaning, and these levels all have distinct but interdependent temporal structures. Time-lagged regression using temporal response functions (TRFs) has recently emerged as a promising tool for disentangling electrophysiological brain responses related to such complex models of perception. Here, we introduce the Eelbrain Python toolkit, which makes this kind of analysis easy and accessible. We demonstrate its use, using continuous speech as a sample paradigm, with a freely available EEG dataset of audiobook listening. A companion GitHub repository provides the complete source code for the analysis, from raw data to group-level statistics. More generally, we advocate a hypothesis-driven approach in which the experimenter specifies a hierarchy of time-continuous representations that are hypothesized to have contributed to brain responses, and uses those as predictor variables for the electrophysiological signal. This is analogous to a multiple regression problem, but with the addition of a time dimension. TRF analysis decomposes the brain signal into distinct responses associated with the different predictor variables by estimating a multivariate TRF (mTRF), quantifying the influence of each predictor on brain responses as a function of time(-lags). This allows asking two questions about the predictor variables: (1) Is there a significant neural representation corresponding to this predictor variable? And if so, (2) what are the temporal characteristics of the neural response associated with it? Thus, different predictor variables can be systematically combined and evaluated to jointly model neural processing at multiple hierarchical levels. We discuss applications of this approach, including the potential for linking algorithmic/representational theories at different cognitive levels to brain responses through computational models with appropriate linking hypotheses.
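
To give a flavor of the workflow the toolkit supports, here is a minimal sketch of a single-predictor TRF fit with Eelbrain's boosting estimator, using synthetic data in place of the audiobook EEG dataset; the sampling rate, lag window, and other parameters are illustrative assumptions, and the companion GitHub repository should be consulted for the authors' actual pipeline.

```python
# Minimal sketch of a TRF fit with Eelbrain (synthetic data, assumed parameters).
import numpy as np
from eelbrain import NDVar, UTS, boosting

fs = 100                                    # assumed sampling rate (Hz)
n = 60 * fs                                 # one minute of data
time = UTS(0, 1 / fs, n)                    # shared time axis

envelope = NDVar(np.abs(np.random.randn(n)), (time,), name='envelope')  # predictor
eeg = NDVar(np.random.randn(n), (time,), name='eeg')                    # one EEG channel

# Estimate the TRF over lags of 0-500 ms, with 4 cross-validation partitions
res = boosting(eeg, envelope, 0.000, 0.500, basis=0.050, partitions=4)
print(res.h)     # estimated TRF (an NDVar over time lags)
print(res.r)     # cross-validated prediction accuracy (correlation)
```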


Subjects
Electroencephalography, Speech Perception, Humans, Electroencephalography/methods, Brain/physiology, Speech/physiology, Brain Mapping/methods, Speech Perception/physiology
5.
bioRxiv ; 2023 Oct 15.
Article in English | MEDLINE | ID: mdl-37546895

ABSTRACT

Auditory cortical responses to speech obtained by magnetoencephalography (MEG) show robust speech tracking to the speaker's fundamental frequency in the high-gamma band (70-200 Hz), but little is currently known about whether such responses depend on the focus of selective attention. In this study, 22 human subjects listened to concurrent, fixed-rate speech from male and female speakers and were asked to selectively attend to one speaker at a time, while their neural responses were recorded with MEG. The male speaker's pitch range coincided with the lower range of the high-gamma band, whereas the female speaker's higher pitch range had much less overlap, and only at the upper end of the high-gamma band. Neural responses were analyzed using the temporal response function (TRF) framework. As expected, the responses demonstrate robust speech tracking of the fundamental frequency in the high-gamma band, but only to the male's speech, with a peak latency of approximately 40 ms. Critically, the response magnitude depends on selective attention: the response to the male speech is significantly greater when male speech is attended than when it is not attended, under acoustically identical conditions. This is a clear demonstration that even very early cortical auditory responses are influenced by top-down, cognitive, neural processing mechanisms.

6.
bioRxiv ; 2023 Oct 09.
Article in English | MEDLINE | ID: mdl-37292644

ABSTRACT

Neural speech tracking has advanced our understanding of how our brains rapidly map an acoustic speech signal onto linguistic representations and ultimately meaning. It remains unclear, however, how speech intelligibility is related to the corresponding neural responses. Many studies addressing this question vary the level of intelligibility by manipulating the acoustic waveform, but this makes it difficult to cleanly disentangle effects of intelligibility from underlying acoustical confounds. Here, using magnetoencephalography (MEG) recordings, we study neural measures of speech intelligibility by manipulating intelligibility while keeping the acoustics strictly unchanged. Acoustically identical degraded speech stimuli (three-band noise vocoded, ~20 s duration) are presented twice, but the second presentation is preceded by the original (non-degraded) version of the speech. This intermediate priming, which generates a 'pop-out' percept, substantially improves the intelligibility of the second degraded speech passage. We investigate how intelligibility and acoustical structure affect acoustic and linguistic neural representations using multivariate Temporal Response Functions (mTRFs). As expected, behavioral results confirm that perceived speech clarity is improved by priming. TRF analysis reveals that auditory (speech envelope and envelope onset) neural representations are not affected by priming, but only by the acoustics of the stimuli (bottom-up driven). Critically, our findings suggest that segmentation of sounds into words emerges with better speech intelligibility, and most strongly at the later (~400 ms latency) word processing stage, in prefrontal cortex (PFC), in line with engagement of top-down mechanisms associated with priming. Taken together, our results show that word representations may provide some objective measures of speech comprehension.

7.
Brain Commun ; 5(3): fcad149, 2023.
Article in English | MEDLINE | ID: mdl-37288315

ABSTRACT

Cortical ischaemic strokes result in cognitive deficits depending on the affected brain area. However, we have demonstrated that difficulties with attention and processing speed can occur even with small subcortical infarcts. Symptoms appear independent of lesion location, suggesting they arise from generalized disruption of cognitive networks. Longitudinal studies evaluating directional measures of functional connectivity in this population are lacking. We evaluated six patients with minor stroke exhibiting cognitive impairment 6-8 weeks post-infarct and four age-similar controls. Resting-state magnetoencephalography data were collected. Clinical and imaging evaluations of both groups were repeated 6 and 12 months later. Network Localized Granger Causality was used to determine differences in directional connectivity between groups and across visits, which were correlated with clinical performance. Directional connectivity patterns remained stable across visits for controls. After the stroke, inter-hemispheric connectivity between the frontoparietal cortex and the non-frontoparietal cortex significantly increased between visits 1 and 2, corresponding to uniform improvement in reaction times and cognitive scores. Initially, the majority of functional links originated from non-frontal areas contralateral to the lesion, connecting to ipsilesional brain regions. By visit 2, inter-hemispheric connections directed from the ipsilesional to the contralesional cortex significantly increased. At visit 3, patients demonstrating continued favourable cognitive recovery showed less reliance on these inter-hemispheric connections. These changes were not observed in those without continued improvement. Our findings provide supporting evidence that the neural basis of early post-stroke cognitive dysfunction occurs at the network level, and that continued recovery correlates with the evolution of inter-hemispheric connectivity.

8.
J Neurophysiol ; 129(6): 1359-1377, 2023 06 01.
Article in English | MEDLINE | ID: mdl-37096924

ABSTRACT

Understanding speech in a noisy environment is crucial in day-to-day interactions and yet becomes more challenging with age, even for healthy aging. Age-related changes in the neural mechanisms that enable speech-in-noise listening have been investigated previously; however, the extent to which age affects the timing and fidelity of encoding of target and interfering speech streams is not well understood. Using magnetoencephalography (MEG), we investigated how continuous speech is represented in auditory cortex in the presence of interfering speech in younger and older adults. Cortical representations were obtained from neural responses that time-locked to the speech envelopes, using speech envelope reconstruction and temporal response functions (TRFs). TRFs showed three prominent peaks corresponding to auditory cortical processing stages: early (∼50 ms), middle (∼100 ms), and late (∼200 ms). Older adults showed exaggerated speech envelope representations compared with younger adults. Temporal analysis revealed both that the age-related exaggeration starts as early as ∼50 ms and that older adults needed a substantially longer integration time window to achieve their better reconstruction of the speech envelope. As expected, with increased speech masking, envelope reconstruction for the attended talker decreased and all three TRF peaks were delayed, with aging contributing additionally to the reduction. Interestingly, for older adults the late peak was delayed, suggesting that this late peak may receive contributions from multiple sources. Together, these results suggest that several mechanisms are at play, compensating for age-related temporal processing deficits at several stages, but that they are not able to fully reestablish unimpaired speech perception. NEW & NOTEWORTHY: We observed age-related changes in cortical temporal processing of continuous speech that may be related to older adults' difficulty in understanding speech in noise. These changes occur in both timing and strength of the speech representations at different cortical processing stages and depend on both noise condition and selective attention. Critically, their dependence on noise condition changes dramatically among the early, middle, and late cortical processing stages, underscoring how aging differentially affects these stages.
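
As an illustration of the regression target used in envelope reconstruction and TRF analyses of this kind, the sketch below computes a broadband speech envelope from the audio waveform: Hilbert envelope, low-pass filtering to keep slow modulations, then resampling to the neural sampling rate. The cutoff and rates are assumptions for illustration, not the study's exact settings.

```python
# Illustrative computation of a broadband speech envelope (assumed parameters).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, resample_poly

def speech_envelope(audio, fs_audio=16000, fs_out=100, cutoff=10.0):
    envelope = np.abs(hilbert(audio))                   # broadband Hilbert envelope
    b, a = butter(4, cutoff, btype="lowpass", fs=fs_audio)
    envelope = filtfilt(b, a, envelope)                 # keep only slow modulations
    return resample_poly(envelope, up=fs_out, down=fs_audio)   # match the MEG rate
```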


Subjects
Speech Perception, Speech, Speech/physiology, Auditory Perception, Noise, Speech Perception/physiology, Acoustic Stimulation/methods
9.
Transl Psychiatry ; 13(1): 13, 2023 01 19.
Article in English | MEDLINE | ID: mdl-36653335

ABSTRACT

Aberrant gamma frequency neural oscillations in schizophrenia have been well demonstrated using auditory steady-state responses (ASSR). However, the neural circuits underlying 40 Hz ASSR deficits in schizophrenia remain poorly understood. Sixty-six patients with schizophrenia spectrum disorders and 85 age- and gender-matched healthy controls completed one electroencephalography session measuring 40 Hz ASSR and one imaging session for resting-state functional connectivity (rsFC) assessments. The associations between the normalized power of 40 Hz ASSR and rsFC were assessed via linear regression and mediation models. We found that rsFC among auditory, precentral, postcentral, and prefrontal cortices was positively associated with 40 Hz ASSR in patients and controls separately and in the combined sample. The mediation analysis further confirmed that the deficit of gamma band ASSR in schizophrenia was nearly fully mediated by three of the rsFC circuits: right superior temporal gyrus-left medial prefrontal cortex (MPFC), left MPFC-left postcentral gyrus (PoG), and left precentral gyrus-right PoG. Gamma-band ASSR deficits in schizophrenia may be associated with deficient circuitry-level connectivity to support gamma frequency synchronization. Correcting gamma band deficits in schizophrenia may require corrective interventions to normalize these aberrant networks.
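
As a rough illustration of the EEG measure involved, the sketch below estimates 40 Hz ASSR power from trial-averaged, stimulus-locked epochs via the FFT; normalizing by neighboring frequency bins is an assumption made here for illustration, and the study's exact normalization may differ.

```python
# Illustrative 40 Hz ASSR power estimate (normalization is an assumption).
import numpy as np

def assr_40hz_power(epochs, fs):
    """epochs: array (n_trials, n_samples), stimulus-locked EEG from one channel."""
    evoked = epochs.mean(axis=0)                        # average across trials
    freqs = np.fft.rfftfreq(evoked.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(evoked)) ** 2
    target = power[np.argmin(np.abs(freqs - 40.0))]     # power at the 40 Hz bin
    neighbors = power[(np.abs(freqs - 40.0) > 2) & (np.abs(freqs - 40.0) < 10)]
    return target / neighbors.mean()                    # normalized 40 Hz power
```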


Subjects
Auditory Cortex, Connectome, Schizophrenia, Humans, Evoked Potentials, Auditory/physiology, Acoustic Stimulation/methods, Electroencephalography/methods
10.
IEEE Trans Biomed Eng ; 70(1): 88-96, 2023 01.
Article in English | MEDLINE | ID: mdl-35727788

ABSTRACT

OBJECTIVE: The Temporal Response Function (TRF) is a linear model of neural activity time-locked to continuous stimuli, including continuous speech. TRFs based on speech envelopes typically have distinct components that have provided remarkable insights into the cortical processing of speech. However, current methods may lead to less than reliable estimates of single-subject TRF components. Here, we compare two established methods for TRF component estimation and also propose novel algorithms that utilize prior knowledge of these components, bypassing the full TRF estimation. METHODS: We compared two established algorithms, ridge and boosting, and two novel algorithms based on Subspace Pursuit (SP) and Expectation Maximization (EM), which directly estimate TRF components given plausible assumptions regarding component characteristics. Single-channel, multi-channel, and source-localized TRFs were fit on simulations and real magnetoencephalographic data. Performance metrics included model fit and component estimation accuracy. RESULTS: Boosting and ridge have comparable performance in component estimation. The novel algorithms outperformed the others in simulations, but not on real data, possibly because the plausible assumptions were not actually met. Ridge had slightly better model fits on real data compared to boosting, but also more spurious TRF activity. CONCLUSION: Results indicate that both smooth (ridge) and sparse (boosting) algorithms perform comparably at TRF component estimation. The SP and EM algorithms may be accurate, but rely on assumptions of component characteristics. SIGNIFICANCE: This systematic comparison establishes the suitability of widely used and novel algorithms for estimating robust TRF components, which is essential for improved subject-specific investigations into the cortical processing of speech.
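
For concreteness, the sketch below shows the core of the ridge approach compared here: build a time-lagged design matrix from the stimulus and solve a regularized least-squares problem for the TRF. The lag range and regularization strength are illustrative assumptions rather than the settings used in the paper.

```python
# Illustrative ridge TRF estimation (assumed lag range and regularization).
import numpy as np

def lagged_matrix(x, lags):
    """Columns are copies of x delayed by each lag (in samples), zero-padded."""
    X = np.zeros((len(x), len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = x[:len(x) - lag]
        else:
            X[:lag, j] = x[-lag:]
    return X

def ridge_trf(stimulus, response, fs, tmin=0.0, tmax=0.5, alpha=1e2):
    lags = np.arange(int(round(tmin * fs)), int(round(tmax * fs)) + 1)
    X = lagged_matrix(stimulus, lags)
    # Ridge solution: (X'X + alpha*I)^-1 X'y
    trf = np.linalg.solve(X.T @ X + alpha * np.eye(len(lags)), X.T @ response)
    return lags / fs, trf
```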


Subjects
Speech Perception, Speech, Algorithms, Magnetoencephalography/methods, Speech Perception/physiology, Models, Neurological
11.
Front Neurosci ; 16: 1075369, 2022.
Article in English | MEDLINE | ID: mdl-36570848

ABSTRACT

Primary auditory cortex is a critical stage in the human auditory pathway, a gateway between subcortical and higher-level cortical areas. Receiving the output of all subcortical processing, it sends its output on to higher-level cortex. Non-invasive physiological recordings of primary auditory cortex using electroencephalography (EEG) and magnetoencephalography (MEG), however, may not have sufficient specificity to separate responses generated in primary auditory cortex from those generated in underlying subcortical areas or neighboring cortical areas. This limitation is important for investigations of effects of top-down processing (e.g., selective-attention-based) on primary auditory cortex: higher-level areas are known to be strongly influenced by top-down processes, but subcortical areas are often assumed to perform strictly bottom-up processing. Fortunately, recent advances have made it easier to isolate the neural activity of primary auditory cortex from other areas. In this perspective, we focus on time-locked responses to stimulus features in the high gamma band (70-150 Hz) and with early cortical latency (∼40 ms), intermediate between subcortical and higher-level areas. We review recent findings from physiological studies employing either repeated simple sounds or continuous speech, obtaining either a frequency following response (FFR) or temporal response function (TRF). The potential roles of top-down processing are underscored, and comparisons with invasive intracranial EEG (iEEG) and animal model recordings are made. We argue that MEG studies employing continuous speech stimuli may offer particular benefits, in that only a few minutes of speech generates robust high gamma responses from bilateral primary auditory cortex, and without measurable interference from subcortical or higher-level areas.

12.
Front Neurosci ; 16: 828546, 2022.
Article in English | MEDLINE | ID: mdl-36003957

ABSTRACT

Voice pitch carries linguistic and non-linguistic information. Previous studies have described cortical tracking of voice pitch in clean speech, with responses reflecting both pitch strength and pitch value. However, pitch is also a powerful cue for auditory stream segregation, especially when competing streams have pitch differing in fundamental frequency, as is the case when multiple speakers talk simultaneously. We therefore investigated how cortical speech pitch tracking is affected in the presence of a second, task-irrelevant speaker. We analyzed human magnetoencephalography (MEG) responses to continuous narrative speech, presented either as a single talker in a quiet background or as a two-talker mixture of a male and a female speaker. In clean speech, voice pitch was associated with a right-dominant response, peaking at a latency of around 100 ms, consistent with previous electroencephalography and electrocorticography results. The response tracked both the presence of pitch and the relative value of the speaker's fundamental frequency. In the two-talker mixture, the pitch of the attended speaker was tracked bilaterally, regardless of whether or not there was simultaneously present pitch in the speech of the irrelevant speaker. Pitch tracking for the irrelevant speaker was reduced: only the right hemisphere still significantly tracked pitch of the unattended speaker, and only during intervals in which no pitch was present in the attended talker's speech. Taken together, these results suggest that pitch-based segregation of multiple speakers, at least as measured by macroscopic cortical tracking, is not entirely automatic but strongly dependent on selective attention.

13.
Neuroimage ; 260: 119496, 2022 10 15.
Article in English | MEDLINE | ID: mdl-35870697

ABSTRACT

Identifying the directed connectivity that underlies networked activity between different cortical areas is critical for understanding the neural mechanisms behind sensory processing. Granger causality (GC) is widely used for this purpose in functional magnetic resonance imaging analysis, but its temporal resolution is low, making it difficult to capture the millisecond-scale interactions underlying sensory processing. Magnetoencephalography (MEG) has millisecond resolution, but only provides low-dimensional sensor-level linear mixtures of neural sources, which makes GC inference challenging. Conventional methods proceed in two stages: First, cortical sources are estimated from MEG using a source localization technique, followed by GC inference among the estimated sources. However, the spatiotemporal biases in estimating sources propagate into the subsequent GC analysis stage and may result in both false alarms and missed true GC links. Here, we introduce the Network Localized Granger Causality (NLGC) inference paradigm, which models the source dynamics as latent sparse multivariate autoregressive processes, estimates their parameters directly from the MEG measurements, integrated with source localization, and employs the resulting parameter estimates to produce a precise statistical characterization of the detected GC links. We offer several theoretical and algorithmic innovations within NLGC and further examine its utility via comprehensive simulations and application to MEG data from an auditory task involving tone processing from both younger and older participants. Our simulation studies reveal that NLGC is markedly robust with respect to model mismatch, network size, and low signal-to-noise ratio, whereas the conventional two-stage methods result in high false-alarm and mis-detection rates. We also demonstrate the advantages of NLGC in revealing the cortical network-level characterization of neural activity during tone processing and resting state by delineating task- and age-related connectivity changes.
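
For contrast with NLGC's direct approach, the sketch below shows the conventional pairwise Granger causality computation that a two-stage pipeline would apply to already source-localized time courses: compare the prediction-error variance of an autoregressive model of the target signal with and without the candidate source's past. The model order is an illustrative assumption.

```python
# Illustrative pairwise Granger causality on two source time courses
# (the conventional two-stage quantity, not the NLGC estimator itself).
import numpy as np

def _residual_variance(X, y):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ coef)

def granger_causality(x, y, order=5):
    """GC from x to y: log ratio of prediction-error variances of the reduced
    (y's own past) and full (y's and x's past) autoregressive models."""
    n = len(y)
    target = y[order:]
    past_y = np.array([y[t - order:t][::-1] for t in range(order, n)])
    past_x = np.array([x[t - order:t][::-1] for t in range(order, n)])
    reduced = _residual_variance(past_y, target)
    full = _residual_variance(np.hstack([past_y, past_x]), target)
    return np.log(reduced / full)       # > 0 when x's past helps predict y
```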


Subjects
Magnetic Resonance Imaging, Magnetoencephalography, Algorithms, Brain/diagnostic imaging, Computer Simulation, Humans, Magnetic Resonance Imaging/methods, Magnetoencephalography/methods
14.
Front Neurol ; 13: 819603, 2022.
Article in English | MEDLINE | ID: mdl-35418932

ABSTRACT

Stroke patients with hemiparesis display decreased beta band (13-25 Hz) rolandic activity, correlating with impaired motor function. However, clinically, patients without significant weakness, with small lesions far from sensorimotor cortex, exhibit bilaterally decreased motor dexterity and slowed reaction times. We investigate whether these minor stroke patients also display abnormal beta band activity. Magnetoencephalographic (MEG) data were collected from nine minor stroke patients (NIHSS < 4) without significant hemiparesis, at ~1 and ~6 months postinfarct, and from eight age-similar controls. Rolandic relative beta power during matching tasks and resting state, and Beta Event Related (De)Synchronization (ERD/ERS) during button press responses, were analyzed. Regardless of lesion location, patients had significantly reduced relative beta power and ERS compared to controls. Abnormalities persisted over visits and were present in both ipsi- and contra-lesional hemispheres, consistent with bilateral impairments in motor dexterity and speed. Minor stroke patients without severe weakness display reduced rolandic beta band activity in both hemispheres, which may be linked to bilaterally impaired dexterity and processing speed, implicating global connectivity dysfunction affecting sensorimotor cortex independent of lesion location. These findings not only illustrate global network disruption after minor stroke, but also suggest that rolandic beta band activity may be a potential biomarker and treatment target, even for minor stroke patients with small lesions far from sensorimotor areas.
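
As an illustration of the ERD/ERS measure referred to above, the sketch below band-passes button-press-locked epochs to the beta band, computes the trial-averaged power envelope, and expresses it as percent change from a pre-movement baseline. Epoch timing, baseline window, and filter settings are assumptions for illustration.

```python
# Illustrative beta-band ERD/ERS computation (assumed timing and filter settings).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_erd_ers(epochs, fs, t0=-2.0, baseline=(-1.5, -1.0)):
    """epochs: array (n_trials, n_samples) time-locked to the button press,
    starting at t0 seconds relative to the press."""
    b, a = butter(4, [13.0, 25.0], btype="bandpass", fs=fs)
    beta = filtfilt(b, a, epochs, axis=-1)
    power = np.abs(hilbert(beta, axis=-1)) ** 2          # instantaneous beta power
    mean_power = power.mean(axis=0)                      # average over trials
    times = t0 + np.arange(epochs.shape[-1]) / fs
    base = mean_power[(times >= baseline[0]) & (times < baseline[1])].mean()
    return times, 100.0 * (mean_power - base) / base     # ERD < 0, ERS > 0 (%)
```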

15.
Elife ; 11: 2022 01 21.
Article in English | MEDLINE | ID: mdl-35060904

ABSTRACT

Speech processing is highly incremental. It is widely accepted that human listeners continuously use the linguistic context to anticipate upcoming concepts, words, and phonemes. However, previous evidence supports two seemingly contradictory models of how a predictive context is integrated with the bottom-up sensory input: Classic psycholinguistic paradigms suggest a two-stage process, in which acoustic input initially leads to local, context-independent representations, which are then quickly integrated with contextual constraints. This contrasts with the view that the brain constructs a single coherent, unified interpretation of the input, which fully integrates available information across representational hierarchies, and thus uses contextual constraints to modulate even the earliest sensory representations. To distinguish these hypotheses, we tested magnetoencephalography responses to continuous narrative speech for signatures of local and unified predictive models. Results provide evidence that listeners employ both types of models in parallel. Two local context models uniquely predict some part of early neural responses, one based on sublexical phoneme sequences, and one based on the phonemes in the current word alone; at the same time, even early responses to phonemes also reflect a unified model that incorporates sentence-level constraints to predict upcoming phonemes. Neural source localization places the anatomical origins of the different predictive models in nonidentical parts of the superior temporal lobes bilaterally, with the right hemisphere showing a relative preference for more local models. These results suggest that speech processing recruits both local and unified predictive models in parallel, reconciling previous disparate findings. Parallel models might make the perceptual system more robust, facilitate processing of unexpected inputs, and serve a function in language acquisition.


Subjects
Language, Linguistics, Sensation/physiology, Speech Perception, Temporal Lobe/physiology, Brain/physiology, Comprehension, Female, Humans, Linear Models, Magnetoencephalography, Male, Young Adult
16.
J Neurosci ; 41(50): 10316-10329, 2021 12 15.
Article in English | MEDLINE | ID: mdl-34732519

ABSTRACT

When listening to speech, our brain responses time-lock to acoustic events in the stimulus. Recent studies have also reported that cortical responses track linguistic representations of speech. However, tracking of these representations is often described without controlling for acoustic properties. Therefore, the response to these linguistic representations might reflect unaccounted-for acoustic processing rather than language processing. Here, we evaluated the potential of several recently proposed linguistic representations as neural markers of speech comprehension. To do so, we investigated EEG responses to audiobook speech of 29 participants (22 females). We examined whether these representations contribute unique information over and beyond acoustic neural tracking and each other. Indeed, not all of these linguistic representations were significantly tracked after controlling for acoustic properties. However, phoneme surprisal, cohort entropy, word surprisal, and word frequency were all significantly tracked over and beyond acoustic properties. We also tested the generality of the associated responses by training on one story and testing on another. In general, the linguistic representations are tracked similarly across different stories spoken by different readers. These results suggest that these representations characterize the processing of the linguistic content of speech. SIGNIFICANCE STATEMENT: For clinical applications, it would be desirable to develop a neural marker of speech comprehension derived from neural responses to continuous speech. Such a measure would allow for behavior-free evaluation of speech understanding; this would open doors toward better quantification of speech understanding in populations from whom obtaining behavioral measures may be difficult, such as young children or people with cognitive impairments, and would allow better-targeted interventions and better fitting of hearing devices.


Subjects
Comprehension/physiology, Linguistics, Speech Acoustics, Speech Perception/physiology, Electroencephalography/methods, Female, Humans, Male, Signal Processing, Computer-Assisted
17.
J Neurosci ; 41(38): 8023-8039, 2021 09 22.
Article in English | MEDLINE | ID: mdl-34400518

ABSTRACT

Cortical processing of arithmetic and of language relies on both shared and task-specific neural mechanisms, which should also be dissociable from the particular sensory modality used to probe them. Here, spoken arithmetical and non-mathematical statements were employed to investigate neural processing of arithmetic, compared with general language processing, in an attention-modulated cocktail party paradigm. Magnetoencephalography (MEG) data were recorded from 22 human subjects listening to audio mixtures of spoken sentences and arithmetic equations while selectively attending to one of the two speech streams. Short sentences and simple equations were presented diotically at fixed and distinct word/symbol and sentence/equation rates. Critically, this allowed neural responses to acoustics, words, and symbols to be dissociated from responses to sentences and equations. Indeed, the simultaneous neural processing of the acoustics of words and symbols was observed in auditory cortex for both streams. Neural responses to sentences and equations, however, were predominantly to the attended stream, originating primarily from left temporal and parietal areas, respectively. Additionally, these neural responses were correlated with behavioral performance in a deviant detection task. Source-localized temporal response functions (TRFs) revealed distinct cortical dynamics of responses to sentences in left temporal areas and equations in bilateral temporal, parietal, and motor areas. Finally, the target of attention could be decoded from MEG responses, especially in left superior parietal areas. In short, the neural responses to arithmetic and language are especially well segregated during the cocktail party paradigm, and the correlation with behavior suggests that they may be linked to successful comprehension or calculation. SIGNIFICANCE STATEMENT: Neural processing of arithmetic relies on dedicated, modality-independent cortical networks that are distinct from those underlying language processing. Using a simultaneous cocktail party listening paradigm, we found that these separate networks segregate naturally when listeners selectively attend to one type over the other. Neural responses in the left temporal lobe were observed for both spoken sentences and equations, but the latter additionally showed bilateral parietal activity consistent with arithmetic processing. Critically, these responses were modulated by selective attention and correlated with task behavior, consistent with reflecting high-level processing for speech comprehension or correct calculations. The response dynamics show task-related differences that were used to reliably decode the attentional target of sentences or equations.


Subjects
Attention/physiology, Auditory Perception/physiology, Cerebral Cortex/physiology, Problem Solving/physiology, Comprehension/physiology, Female, Humans, Magnetoencephalography, Male, Mathematics, Speech Perception/physiology, Young Adult
18.
Schizophr Res ; 228: 262-270, 2021 02.
Article in English | MEDLINE | ID: mdl-33493774

ABSTRACT

Auditory hallucinations are a debilitating symptom of schizophrenia. Effective treatment is limited because the underlying neural mechanisms remain unknown. Our study investigates how local and long-range functional connectivity is associated with auditory perceptual disturbances (APD) in schizophrenia. APD was assessed using the Auditory Perceptual Trait and State Scale. Resting state fMRI data were collected for N=99 patients with schizophrenia. Local functional connectivity was estimated using regional homogeneity (ReHo) analysis; long-range connectivity was estimated using resting state functional connectivity (rsFC) analysis. Mediation analyses tested whether local (ReHo) connectivity significantly mediated associations between long-distance rsFC and APD. Severity of APD was significantly associated with reduced ReHo in left and right putamen, left temporoparietal junction (TPJ), and right hippocampus-pallidum. Higher APD was also associated with reduced rsFC between the right putamen and the contralateral putamen and auditory cortex. Local and long-distance connectivity measures together explained 40.3% of variance in APD (P < 0.001), with the strongest predictor being the left TPJ ReHo (P < 0.001). Additionally, TPJ ReHo significantly mediated the relationship between right putamen - left putamen rsFC and APD (Sobel test, P = 0.001). Our findings suggest that both local and long-range functional connectivity deficits contribute to APD, emphasizing the role of striatum and auditory cortex. Considering the translational impact of these circuit-based findings within the context of prior clinical trials to treat auditory hallucinations, we propose a model in which correction of both local and long-distance functional connectivity deficits may be necessary to treat auditory hallucinations.
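
To make the mediation logic concrete, the sketch below implements a generic Sobel test: the indirect effect (predictor → mediator → outcome) is the product of the two path coefficients, divided by its approximate standard error. It uses ordinary least squares from statsmodels; variable names are placeholders, and this is not the authors' analysis code.

```python
# Illustrative Sobel test for mediation (generic, with placeholder variables).
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def sobel_test(predictor, mediator, outcome):
    # Path a: mediator ~ predictor
    fit_a = sm.OLS(mediator, sm.add_constant(predictor)).fit()
    a, se_a = fit_a.params[1], fit_a.bse[1]
    # Path b: outcome ~ predictor + mediator (b is the mediator coefficient)
    X = sm.add_constant(np.column_stack([predictor, mediator]))
    fit_b = sm.OLS(outcome, X).fit()
    b, se_b = fit_b.params[2], fit_b.bse[2]
    z = (a * b) / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    return z, 2.0 * (1.0 - norm.cdf(abs(z)))             # Sobel z and two-sided p
```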


Subjects
Auditory Cortex, Schizophrenia, Auditory Cortex/diagnostic imaging, Hallucinations/diagnostic imaging, Hallucinations/etiology, Humans, Magnetic Resonance Imaging, Schizophrenia/complications, Schizophrenia/diagnostic imaging, Temporal Lobe
19.
Proc Natl Acad Sci U S A ; 117(52): 33578-33585, 2020 12 29.
Article in English | MEDLINE | ID: mdl-33318200

ABSTRACT

Stroke patients with small central nervous system infarcts often demonstrate an acute dysexecutive syndrome characterized by difficulty with attention, concentration, and processing speed, independent of lesion size or location. We use magnetoencephalography (MEG) to show that disruption of network dynamics may be responsible. Nine patients with recent minor strokes and eight age-similar controls underwent cognitive screening using the Montreal Cognitive Assessment (MoCA) and MEG to evaluate differences in cerebral activation patterns. During MEG, subjects participated in a visual picture-word matching task. Task complexity was increased as testing progressed. Cluster-based permutation tests determined differences in activation patterns within the visual cortex, fusiform gyrus, and lateral temporal lobe. At visit 1, MoCA scores were significantly lower for patients than controls (median [interquartile range] = 26.0 [4] versus 29.5 [3], P = 0.005), and patient reaction times were increased. The amplitude of activation was significantly lower after infarct and demonstrated a pattern of temporal dispersion independent of stroke location. Differences were prominent in the fusiform gyrus and lateral temporal lobe. The pattern suggests that distributed network dysfunction may be responsible. Additionally, controls were able to modulate their cerebral activity based on task difficulty. In contrast, stroke patients exhibited the same low-amplitude response to all stimuli. Group differences remained, to a lesser degree, 6 months later, although MoCA scores and reaction times improved for patients. This study suggests that function is a globally distributed property beyond area-specific functionality and illustrates the need for longer-term follow-up studies to determine whether abnormal activation patterns ultimately resolve or another mechanism underlies continued recovery.


Subjects
Nerve Net/physiopathology, Stroke/physiopathology, Acute Disease, Adolescent, Adult, Aged, Behavior, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Magnetoencephalography, Male, Middle Aged, Nerve Net/diagnostic imaging, Stroke/diagnostic imaging, Syndrome, Task Performance and Analysis, Time Factors, Young Adult
20.
Curr Opin Physiol ; 18: 25-31, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33225119

ABSTRACT

Speech processing in the human brain is grounded in non-specific auditory processing in the general mammalian brain, but relies on human-specific adaptations for processing speech and language. For this reason, many recent neurophysiological investigations of speech processing have turned to the human brain, with an emphasis on continuous speech. Substantial progress has been made using the phenomenon of "neural speech tracking", in which neurophysiological responses time-lock to the rhythm of auditory (and other) features in continuous speech. One broad category of investigations concerns the extent to which speech tracking measures are related to speech intelligibility, which has clinical applications in addition to its scientific importance. Recent investigations have also focused on disentangling different neural processes that contribute to speech tracking. The two lines of research are closely related, since processing stages throughout auditory cortex contribute to speech comprehension, in addition to subcortical processing and higher order and attentional processes.
