Results 1 - 6 of 6

1.
Neuroimage; 246: 118745, 2022 Feb 01.
Article in English | MEDLINE | ID: mdl-34808364

ABSTRACT

Temporal modulations in the envelope of acoustic waveforms at rates around 4 Hz constitute a strong acoustic cue in speech and other natural sounds. It is often assumed that the ascending auditory pathway is increasingly sensitive to slow amplitude modulation (AM), but sensitivity to AM is typically considered separately for individual stages of the auditory system. Here, we used blood oxygen level dependent (BOLD) fMRI in twenty human subjects (10 male) to measure the sensitivity of regional neural activity in the auditory system to 4 Hz temporal modulations. Participants were exposed to AM noise stimuli varying parametrically in modulation depth to characterize modulation-depth effects on BOLD responses. A Bayesian hierarchical modeling approach was used to model potentially nonlinear relations between AM depth and group-level BOLD responses in auditory regions of interest (ROIs). Sound stimulation activated auditory brainstem and cortical structures in single subjects. BOLD responses to noise exposure in core and belt auditory cortices scaled positively with modulation depth. This finding was corroborated by whole-brain cluster-level inference. Sensitivity to AM depth variations was particularly pronounced in Heschl's gyrus but was also found in higher-order auditory cortical regions. None of the sound-responsive subcortical auditory structures showed a BOLD response profile that reflected the parametric variation in AM depth. The results are compatible with the notion that early auditory cortical regions play a key role in processing the low-rate modulation content of sounds in the human auditory system.
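The parametric AM-depth manipulation described above can be sketched in a few lines. This is a minimal illustration, not the authors' stimulus code; the Gaussian-noise carrier, duration, and sampling rate are assumptions.

```python
import numpy as np

def am_noise(duration_s=2.0, fs=16000, fm_hz=4.0, depth=0.5, seed=0):
    """Generate Gaussian noise whose envelope is sinusoidally
    amplitude-modulated at fm_hz with the given modulation depth (0..1)."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration_s * fs)) / fs
    carrier = rng.standard_normal(t.size)
    envelope = 1.0 + depth * np.sin(2 * np.pi * fm_hz * t)
    return envelope * carrier

# Stimuli varying parametrically in modulation depth
stimuli = {d: am_noise(depth=d) for d in (0.0, 0.25, 0.5, 0.75, 1.0)}
```

With `depth=0.0` the envelope is flat and the output is unmodulated noise; increasing the depth deepens the 4 Hz envelope fluctuation while leaving the carrier unchanged.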


Subjects
Auditory Cortex/physiology, Auditory Perception/physiology, Brain Mapping/methods, Brain Stem/physiology, Magnetic Resonance Imaging/methods, Acoustic Stimulation, Adult, Auditory Cortex/diagnostic imaging, Brain Stem/diagnostic imaging, Female, Humans, Male, Young Adult
2.
Mov Disord; 37(3): 479-489, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35114035

ABSTRACT

BACKGROUND: Parkinson's disease (PD) causes a loss of neuromelanin-positive, noradrenergic neurons in the locus coeruleus (LC), which has been implicated in nonmotor dysfunction. OBJECTIVES: We used "neuromelanin-sensitive" magnetic resonance imaging (MRI) to localize structural disintegration in the LC and its association with nonmotor dysfunction in PD. METHODS: A total of 42 patients with PD and 24 age-matched healthy volunteers underwent magnetization transfer-weighted (MTw) MRI of the LC. The contrast-to-noise ratio of the MTw signal (CNRMTw) was used as an index of structural LC integrity. We performed slicewise and voxelwise analyses to map spatial patterns of structural disintegration, complemented by principal component analysis (PCA). We also tested for correlations between regional CNRMTw and the severity of nonmotor symptoms. RESULTS: Mean CNRMTw of the right LC was reduced in patients relative to controls. Voxelwise and slicewise analyses showed that the attenuation of CNRMTw was confined to the right mid-caudal LC and linked regional CNRMTw to nonmotor symptoms. CNRMTw attenuation in the left mid-caudal LC was associated with the orthostatic drop in systolic blood pressure, whereas CNRMTw attenuation in the caudalmost portion of the right LC correlated with apathy ratings. PCA identified a bilateral component that was more weakly expressed in patients. This component was characterized by a gradient in CNRMTw along the rostro-caudal and dorso-ventral axes of the nucleus. The individual expression score of this component reflected the overall severity of nonmotor symptoms. CONCLUSION: A spatially heterogeneous disintegration of the LC in PD may determine the individual expression of specific nonmotor symptoms such as orthostatic dysregulation or apathy. © 2022 The Authors. Movement Disorders published by Wiley Periodicals LLC on behalf of International Parkinson and Movement Disorder Society.
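The CNRMTw integrity index is, in essence, the mean LC signal relative to a reference region, expressed in units of the reference noise. A minimal sketch follows; the function name, array layout, and reference-region convention are assumptions, not the authors' pipeline.

```python
import numpy as np

def cnr_mtw(lc_signal, ref_signal):
    """Contrast-to-noise ratio of the MT-weighted signal:
    (mean LC intensity - mean reference intensity) / SD of the reference."""
    lc = np.asarray(lc_signal, dtype=float)
    ref = np.asarray(ref_signal, dtype=float)
    return (lc.mean() - ref.mean()) / ref.std(ddof=1)

# Example: LC voxels brighter than a (hypothetical) pontine reference region
lc = np.array([110.0, 115.0, 120.0])
ref = np.array([100.0, 98.0, 102.0, 101.0, 99.0])
```

A higher value indicates stronger MT-weighted contrast of the LC against its surround, which is the sense in which reduced CNRMTw is read as structural disintegration.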


Subjects
Adrenergic Neurons, Parkinson Disease, Adrenergic Neurons/pathology, Humans, Locus Coeruleus/metabolism, Magnetic Resonance Imaging/methods, Movement, Parkinson Disease/complications
3.
J Neurosci; 40(12): 2562-2572, 2020 Mar 18.
Article in English | MEDLINE | ID: mdl-32094201

ABSTRACT

When selectively attending to a speech stream in multi-talker scenarios, low-frequency cortical activity is known to synchronize selectively to fluctuations in the attended speech signal. Older listeners with age-related sensorineural hearing loss (presbycusis) often struggle to understand speech in such situations, even when wearing a hearing aid. Yet, it is unclear whether a peripheral hearing loss degrades the attentional modulation of cortical speech tracking. Here, we used psychoacoustics and electroencephalography (EEG) in male and female human listeners to examine potential effects of hearing loss on EEG correlates of speech envelope synchronization in cortex. Behaviorally, older hearing-impaired (HI) listeners showed degraded speech-in-noise recognition and reduced temporal acuity compared with age-matched normal-hearing (NH) controls. During EEG recordings, we used a selective attention task with two spatially separated simultaneous speech streams in which both NH and HI listeners showed high speech recognition performance. Low-frequency (<10 Hz) envelope-entrained EEG responses were enhanced in the HI listeners, both for the attended speech and for tone sequences modulated at slow rates (4 Hz) during passive listening. Compared with the attended speech, responses to the ignored stream were reduced in both HI and NH listeners, allowing the attended target to be classified from single-trial EEG data with similarly high accuracy in the two groups. However, despite robust attention-modulated speech entrainment, the HI listeners rated the competing speech task as more difficult. These results suggest that the speech-in-noise problems experienced by older HI listeners are not necessarily associated with degraded attentional selection. SIGNIFICANCE STATEMENT: People with age-related sensorineural hearing loss often struggle to follow speech in the presence of competing talkers. It is currently unclear whether hearing impairment compromises the ability to use selective attention to suppress distracting speech when the distractor is well segregated from the target. Here, we report amplified envelope-entrained cortical EEG responses to attended speech and to simple tones modulated at speech rates (4 Hz) in listeners with age-related hearing loss. Critically, despite increased self-reported listening difficulties, cortical synchronization to speech mixtures was robustly modulated by selective attention in listeners with hearing loss. This allowed the attended talker to be classified from single-trial EEG responses with high accuracy in both older hearing-impaired listeners and age-matched normal-hearing controls.
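The single-trial classification step described above, selecting the candidate stream whose envelope best matches a decoder-reconstructed envelope, can be sketched as follows. This is an illustrative simplification, not the study's pipeline; the decoder is assumed to be pre-trained, and all names are hypothetical.

```python
import numpy as np

def classify_attended(eeg, decoder, env_a, env_b):
    """Single-trial attention classification: reconstruct the speech
    envelope from multichannel EEG with a pre-trained linear backward
    decoder, then select the stream whose envelope correlates best
    with the reconstruction. Returns 0 if stream A is judged attended,
    1 if stream B."""
    recon = eeg @ decoder                      # (time,) reconstructed envelope
    r_a = np.corrcoef(recon, env_a)[0, 1]
    r_b = np.corrcoef(recon, env_b)[0, 1]
    return int(r_b > r_a)
```

The attention-modulated difference between attended and ignored envelope responses reported above is exactly what makes the two correlations separable on single trials.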


Subjects
Attention/physiology, Cortical Synchronization, Hearing Loss, Sensorineural/physiopathology, Hearing Loss, Sensorineural/psychology, Acoustic Stimulation, Aged, Electroencephalography, Evoked Potentials, Auditory, Female, Humans, Male, Middle Aged, Psychoacoustics, Psychomotor Performance, Recognition, Psychology, Speech Perception
4.
Eur J Neurosci; 51(5): 1279-1289, 2020 Mar.
Article in English | MEDLINE | ID: mdl-29392835

ABSTRACT

Neuronal oscillations are thought to play an important role in working memory (WM) and speech processing. Listening to speech in real-life situations is often cognitively demanding, but it is unknown whether WM load influences how auditory cortical activity synchronizes to speech features. Here, we developed an auditory n-back paradigm to investigate cortical entrainment to speech envelope fluctuations under different degrees of WM load. We measured the electroencephalogram, pupil dilations and behavioural performance from 22 subjects listening to continuous speech with an embedded n-back task. The speech stimuli consisted of long spoken number sequences created to match natural speech in terms of sentence intonation, syllabic rate and phonetic content. To burden different WM functions during speech processing, listeners performed the n-back task on the speech sequences at different levels of background noise. Increasing WM load at higher n-back levels was associated with a decrease in posterior alpha power as well as increased pupil dilations. Frontal theta power increased at the start of the trial and rose further with higher n-back levels. The observed alpha-theta power changes are consistent with visual n-back paradigms, suggesting general oscillatory correlates of WM processing load. Speech entrainment was measured as a linear mapping between the envelope of the speech signal and low-frequency cortical activity (<13 Hz). We found that increases in both types of WM load (background noise and n-back level) decreased cortical speech envelope entrainment. Although entrainment persisted under high load, our results suggest a top-down influence of WM processing on cortical speech entrainment.
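The embedded n-back task can be illustrated with a small helper that finds target positions in a stimulus sequence. This is a hypothetical sketch of the task logic, not the authors' paradigm code.

```python
def nback_targets(sequence, n):
    """Return the indices of n-back targets in a stimulus sequence,
    i.e. positions whose item matches the item n positions earlier."""
    return [i for i in range(n, len(sequence))
            if sequence[i] == sequence[i - n]]

# Spoken digit sequence with 2-back targets at positions 2 and 3
targets = nback_targets([1, 2, 1, 2, 2], n=2)
```

Raising `n` forces the listener to maintain and update more items while the speech stream continues, which is what loads WM in this paradigm.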


Subjects
Auditory Cortex, Speech, Auditory Perception, Electroencephalography, Humans, Memory, Short-Term
5.
J Neural Eng; 18(4), 2021 May 04.
Article in English | MEDLINE | ID: mdl-33849003

ABSTRACT

Objective. An auditory stimulus can be related to the brain response that it evokes by a stimulus-response model fit to the data. This offers insight into perceptual processes within the brain and is also of potential use for devices such as brain-computer interfaces (BCIs). The quality of the model can be quantified by measuring the fit as a regression problem, or by applying it to a classification task and measuring its performance. Approach. Here we focus on a match-mismatch (MM) task that entails deciding whether a segment of brain signal matches, via a model, the auditory stimulus that evoked it. Main results. Using these metrics, we describe a range of models of increasing complexity that we compare to methods in the literature, showing state-of-the-art performance. We document in detail one particular implementation, calibrated on a publicly available database, that can serve as a robust reference to evaluate future developments. Significance. The MM task allows stimulus-response models to be evaluated in the limit of very high model accuracy, making it an attractive alternative to the more commonly used task of auditory attention detection. The MM task does not require class labels, so it is immune to mislabeling, and it is applicable to data recorded in listening scenarios with only one sound source, so large quantities of training and testing data are cheap to obtain. Performance metrics from this task, associated with regression accuracy, provide complementary insights into the relation between stimulus and response, as well as information about discriminatory power directly applicable to BCI applications.
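A minimal sketch of the MM decision using a forward model, assuming the temporal response function has already been fit; the FIR-filter formulation and all names are simplifying assumptions, not the paper's implementation.

```python
import numpy as np

def mm_decision(env_candidate, env_other, trf, eeg_seg):
    """Match-mismatch decision with a forward model: predict the EEG
    segment from each candidate envelope by filtering it with a linear
    temporal response function (a short FIR kernel here), and accept
    the candidate whose prediction correlates better with the measured
    EEG. Returns True if env_candidate is judged the match."""
    n = eeg_seg.size
    pred_c = np.convolve(env_candidate, trf)[:n]
    pred_o = np.convolve(env_other, trf)[:n]
    r_c = np.corrcoef(pred_c, eeg_seg)[0, 1]
    r_o = np.corrcoef(pred_o, eeg_seg)[0, 1]
    return bool(r_c > r_o)
```

Because the mismatched segment is just another stretch of the same recording, no class labels are needed; accuracy over many segment pairs is the task's performance metric.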


Subjects
Brain-Computer Interfaces, Electroencephalography, Attention, Auditory Perception, Brain
6.
Front Neurosci; 12: 531, 2018.
Article in English | MEDLINE | ID: mdl-30131670

ABSTRACT

The decoding of selective auditory attention from noninvasive electroencephalogram (EEG) data is of interest in brain-computer interface and auditory perception research. The current state-of-the-art approaches for decoding the attentional selection of listeners are based on linear mappings between features of sound streams and EEG responses (forward model), or vice versa (backward model). It has been shown that when the envelope of attended speech and EEG responses are used to derive such mapping functions, the model estimates can be used to discriminate between attended and unattended talkers. However, the predictive/reconstructive performance of the models depends on how the model parameters are estimated. A number of model estimation methods have been published, along with a variety of datasets. It is currently unclear whether any of these methods perform better than others, as they have not yet been compared side by side on a single standardized dataset in a controlled fashion. Here, we present a comparative study of the ability of different estimation methods to classify attended speakers from multi-channel EEG data. The performance of the model estimation methods is evaluated using different performance metrics on a set of labeled EEG data from 18 subjects listening to mixtures of two speech streams. We find that when forward models predict the EEG from the attended audio, regularized models do not improve regression or classification accuracies. When backward models decode the attended speech from the EEG, regularization provides higher regression and classification accuracies.
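The ridge-regularized backward model referred to above can be sketched as follows. Real decoders also include time-lagged EEG features, so this spatial-only version is a simplified illustration under stated assumptions, not the study's implementation.

```python
import numpy as np

def backward_decoder(eeg, env, lam=1.0):
    """Ridge-regularized backward model: estimate spatial weights w
    mapping multichannel EEG (time x channels) to the attended speech
    envelope, w = (X'X + lam*I)^-1 X'y. Setting lam = 0 recovers
    ordinary least squares; regularization shrinks the weights, which
    helps when channels are many and correlated."""
    n_chan = eeg.shape[1]
    return np.linalg.solve(eeg.T @ eeg + lam * np.eye(n_chan),
                           eeg.T @ env)
```

The regularization parameter `lam` is the quantity whose tuning the comparative study found to matter for backward models but not for forward ones.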
