Results 1 - 4 of 4
1.
eNeuro; 10(8). 2023 Aug.
Article in English | MEDLINE | ID: mdl-37500493

ABSTRACT

When listening to speech, the low-frequency cortical response below 10 Hz can track the speech envelope. Previous studies have demonstrated that the phase lag between the speech envelope and the cortical response can reflect the mechanism by which the envelope-tracking response is generated. Here, we analyze whether this mechanism is modulated by the level of consciousness, by studying how the stimulus-response phase lag varies in patients with disorders of consciousness (DoC). DoC patients in general show less reliable neural tracking of speech. Nevertheless, for DoC patients who show reliable cortical tracking of speech, the stimulus-response phase lag changes linearly with frequency between 3.5 and 8 Hz, regardless of the consciousness state. The mean phase lag is also consistent across these DoC patients. These results suggest that the envelope-tracking response to speech can be generated by an automatic process that is barely modulated by the consciousness state.
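A linear phase-frequency relationship of the kind reported above implies a constant latency (group delay) between stimulus and response. A minimal sketch of that logic, with all signals and the 0.12 s latency simulated purely for illustration (none of the values are from the study):

```python
import numpy as np

fs = 100.0                      # sampling rate (Hz), illustrative
t = np.arange(0, 60, 1 / fs)    # 60 s of signal
rng = np.random.default_rng(0)
envelope = rng.standard_normal(t.size)   # stand-in for a speech envelope
delay_s = 0.12                           # simulated neural latency (s)
response = np.roll(envelope, int(delay_s * fs))  # response = delayed envelope

# Cross-spectrum phase between stimulus and response grows as 2*pi*f*latency
freqs = np.fft.rfftfreq(t.size, 1 / fs)
cross = np.fft.rfft(envelope) * np.conj(np.fft.rfft(response))
phase = np.angle(cross)

# Fit phase lag vs. frequency in the 3.5-8 Hz band; the slope gives latency
band = (freqs >= 3.5) & (freqs <= 8.0)
slope = np.polyfit(freqs[band], np.unwrap(phase[band]), 1)[0]
latency = slope / (2 * np.pi)   # recovers ~0.12 s
```

A single latency that explains the phase at every frequency in the band is what "phase lag changes linearly with frequency" amounts to.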


Subjects
Consciousness Disorders, Speech Perception, Humans, Consciousness, Acoustic Stimulation/methods, Speech Perception/physiology, Electroencephalography/methods
2.
Sheng Li Xue Bao; 71(6): 935-945, 2019 Dec 25.
Article in Chinese | MEDLINE | ID: mdl-31879748

ABSTRACT

Speech comprehension is a central cognitive function of the human brain. In cognitive neuroscience, a fundamental question is how neural activity encodes the acoustic properties of a continuous speech stream and resolves multiple levels of linguistic structure at the same time. This paper reviews recently developed research paradigms that employ electroencephalography (EEG) or magnetoencephalography (MEG) to capture neural tracking of acoustic features or linguistic structures of continuous speech. The review focuses on two questions in speech processing: (1) the encoding of continuously changing acoustic properties of speech; (2) the representation of hierarchical linguistic units, including syllables, words, phrases, and sentences. Studies have found that low-frequency cortical activity tracks the speech envelope. In addition, cortical activity on different time scales tracks multiple levels of linguistic units, constituting a representation of hierarchically organized linguistic structure. These studies provide new insights into how the human brain processes continuous speech.
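The multi-timescale tracking described above is often probed with frequency tagging: linguistic units presented at fixed rates produce spectral peaks at those rates in the neural response. A toy sketch of that logic, assuming illustrative rates of 4, 2, and 1 Hz for syllables, phrases, and sentences and a simulated response (not data from any study):

```python
import numpy as np

fs = 50.0
t = np.arange(0, 40, 1 / fs)          # 40 s of simulated activity
rates = {"syllable": 4.0, "phrase": 2.0, "sentence": 1.0}   # Hz, assumed

rng = np.random.default_rng(1)
response = rng.standard_normal(t.size)      # background activity
for rate in rates.values():                 # add a tracking component per rate
    response += 2.0 * np.sin(2 * np.pi * rate * t)

freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(response)) ** 2

# Each tracked rate should stand out against its neighboring frequency bins
is_peak = {}
for name, rate in rates.items():
    idx = np.argmin(np.abs(freqs - rate))
    is_peak[name] = power[idx] == power[idx - 5: idx + 6].max()
```

Distinct peaks at the syllable, phrase, and sentence rates are the spectral signature of hierarchical tracking.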


Subjects
Electroencephalography, Magnetoencephalography, Speech, Acoustic Stimulation, Humans, Speech/physiology, Speech Perception
3.
Cereb Cortex; 29(4): 1561-1571, 2019 Apr 1.
Article in English | MEDLINE | ID: mdl-29788144

ABSTRACT

Segregating concurrent sound streams is a computationally challenging task that requires integrating bottom-up acoustic cues (e.g., pitch) and top-down prior knowledge about sound streams. In a multi-talker environment, the brain can segregate different speakers in about 100 ms in auditory cortex. Here, we used magnetoencephalographic (MEG) recordings to investigate the temporal and spatial signature of how the brain utilizes prior knowledge to segregate two speech streams from the same speaker, which can hardly be separated based on bottom-up acoustic cues. In a primed condition, the participants know the target speech stream in advance, while in an unprimed condition no such prior knowledge is available. Neural encoding of each speech stream is characterized by the MEG responses tracking the speech envelope. We demonstrate an effect in bilateral superior temporal gyrus and superior temporal sulcus that is much stronger in the primed condition than in the unprimed condition. Priming effects are observed at about 100 ms latency and last more than 600 ms. Interestingly, prior knowledge about the target stream facilitates speech segregation mainly by suppressing the neural tracking of the non-target speech stream. In sum, prior knowledge leads to reliable speech segregation in auditory cortex, even in the absence of reliable bottom-up speech segregation cues.
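Envelope tracking of each stream, as used above to characterize neural encoding, can be quantified by correlating each stream's envelope with the recorded response. A hedged sketch with fully simulated signals; the mixing weights that mimic suppression of the non-target stream are illustrative assumptions, not values from the study:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 200.0
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(2)

def speech_envelope(x, fs, cutoff=10.0):
    """Magnitude of the analytic signal, low-passed below `cutoff` Hz."""
    env = np.abs(hilbert(x))
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, env)

target = rng.standard_normal(t.size)     # stand-ins for two speech streams
nontarget = rng.standard_normal(t.size)
env_t = speech_envelope(target, fs)
env_n = speech_envelope(nontarget, fs)

# Simulated "primed" response: strong target tracking, suppressed non-target
response = 1.0 * env_t + 0.1 * env_n + 0.5 * rng.standard_normal(t.size)

r_target = np.corrcoef(env_t, response)[0, 1]
r_nontarget = np.corrcoef(env_n, response)[0, 1]
```

Under these assumptions `r_target` clearly exceeds `r_nontarget`, which is the kind of asymmetry a suppression account predicts.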


Subjects
Auditory Cortex/physiology, Cues, Speech Perception/physiology, Acoustic Stimulation, Adolescent, Adult, Attention, Female, Humans, Magnetoencephalography, Male, Speech Acoustics, Young Adult
4.
Nat Commun; 9(1): 5374, 2018 Dec 18.
Article in English | MEDLINE | ID: mdl-30560906

ABSTRACT

The sensory and motor systems jointly contribute to complex behaviors, but whether motor systems are involved in high-order perceptual tasks such as speech and auditory comprehension remains debated. Here, we show that ocular muscle activity is synchronized to mentally constructed sentences during speech listening, in the absence of any sentence-related visual or prosodic cue. Ocular tracking of sentences is observed in the vertical electrooculogram (EOG), whether the eyes are open or closed, and in eye blinks measured by eye tracking. Critically, the phase of sentence-tracking ocular activity is strongly modulated by temporal attention, i.e., which word in a sentence is attended. Ocular activity also tracks high-level structures in non-linguistic auditory and visual sequences, and captures rapid fluctuations in temporal attention. Ocular tracking of non-visual rhythms possibly reflects global neural entrainment to task-relevant temporal structures across sensory and motor areas, which could serve to implement temporal attention and coordinate cortical networks.
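The phase modulation by temporal attention described above can be illustrated with a toy model: ocular deflections locked to a particular word position within each sentence shift the phase of the sentence-rate spectral component. All timing parameters below (4-word sentences, one every 2 s) are assumptions for demonstration, not values from the study:

```python
import numpy as np

fs = 100.0
sentence_dur = 2.0                 # one 4-word sentence every 2 s (assumed)
word_dur = sentence_dur / 4
n_sentences = 30
t = np.arange(0, n_sentences * sentence_dur, 1 / fs)

def ocular_trace(attended_word):
    """Unit deflection at the attended word's onset in every sentence."""
    x = np.zeros(t.size)
    for s in range(n_sentences):
        onset = int((s * sentence_dur + attended_word * word_dur) * fs)
        x[onset] = 1.0
    return x

f_sentence = 1.0 / sentence_dur

def phase_at_sentence_rate(x):
    # Phase of the complex Fourier coefficient at the sentence rate
    return np.angle(np.sum(x * np.exp(-2j * np.pi * f_sentence * t)))

phase_word0 = phase_at_sentence_rate(ocular_trace(0))
phase_word2 = phase_at_sentence_rate(ocular_trace(2))
# Attending word 2 (half a sentence later) shifts the phase by ~pi
```

The sentence-rate component is present in both traces; only its phase distinguishes which word position the activity is locked to.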


Subjects
Acoustic Stimulation/psychology, Auditory Cortex/physiology, Auditory Perception/physiology, Eye Movements/physiology, Speech, Adult, Attention/physiology, Comprehension/physiology, Electronystagmography, Electrooculography, Female, Humans, Male, Young Adult