Results 1 - 20 of 48
1.
Neuroimage ; 285: 120476, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38030051

ABSTRACT

Multimodal stimulation can reverse pathological neural activity and improve symptoms in neuropsychiatric diseases. Recent research shows that multimodal acoustic-electric trigeminal-nerve stimulation (TNS) (i.e., musical stimulation synchronized to electrical stimulation of the trigeminal nerve) can improve consciousness in patients with disorders of consciousness. However, the reliability and mechanism of this novel approach remain largely unknown. We explored the effects of multimodal acoustic-electric TNS in healthy human participants by assessing conscious perception before and after stimulation using behavioral and neural measures in tactile and auditory target-detection tasks. To explore the mechanisms underlying the putative effects of acoustic-electric stimulation, we fitted a biologically plausible neural network model to the neural data using dynamic causal modeling. We observed that (1) acoustic-electric stimulation improves conscious tactile perception without a concomitant change in auditory perception, (2) this improvement is caused by the interplay of the acoustic and electric stimulation rather than by either unimodal stimulation alone, and (3) the effect of acoustic-electric stimulation on conscious perception correlates with inter-regional connection changes in a recurrent neural processing model. These results provide evidence that acoustic-electric TNS can promote conscious perception. Alterations in inter-regional cortical connections might be the mechanism by which acoustic-electric TNS achieves its consciousness benefits.


Subjects
Auditory Perception, Consciousness, Humans, Reproducibility of Results, Electric Stimulation, Auditory Perception/physiology, Acoustic Stimulation/methods, Acoustics, Trigeminal Nerve/physiology
2.
Ear Hear ; 2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39046790

ABSTRACT

OBJECTIVES: Identifying target sounds in challenging environments is crucial in daily life and can be enhanced by nonauditory stimuli, for example, through lip-reading in an ongoing conversation. However, how tactile stimuli affect auditory processing is still relatively unclear. Recent studies have shown that brief tactile stimuli can reliably facilitate auditory perception, while studies using longer-lasting audio-tactile stimulation yielded conflicting results. This study aimed to investigate the impact of ongoing pulsating tactile stimulation on basic auditory processing. DESIGN: In experiment 1, the electroencephalogram (EEG) was recorded while 24 participants performed a loudness-discrimination task on a 4-Hz modulated tone-in-noise and received either in-phase, anti-phase, or no 4-Hz electrotactile stimulation above the median nerve. In experiment 2, another 24 participants were presented with the same tactile stimulation as before, but performed a tone-in-noise detection task while their selective auditory attention was manipulated. RESULTS: We found that in-phase tactile stimulation enhanced EEG responses to the tone, whereas anti-phase tactile stimulation suppressed these responses. No corresponding tactile effects on loudness-discrimination performance were observed in experiment 1. Using a yes/no paradigm in experiment 2, we found that in-phase tactile stimulation, but not anti-phase tactile stimulation, improved detection thresholds. Selective attention also improved thresholds but did not modulate the observed benefit from in-phase tactile stimulation. CONCLUSIONS: Our study highlights that ongoing in-phase tactile input can enhance basic auditory processing as reflected in scalp EEG and detection thresholds. This might have implications for the development of hearing enhancement technologies and interventions.

3.
Cereb Cortex ; 33(13): 8748-8758, 2023 06 20.
Article in English | MEDLINE | ID: mdl-37197766

ABSTRACT

Research on social threat has shown influences of various factors, such as agent characteristics, proximity, and social interaction on social threat perception. An important, yet understudied aspect of threat exposure concerns the ability to exert control over the threat and its implications for threat perception. In this study, we used a virtual reality (VR) environment showing an approaching avatar that was either angry (threatening body expression) or neutral (neutral body expression) and informed participants that they could stop the avatar from coming closer whenever they felt uncomfortable, with five levels of control success (0, 25, 50, 75, or 100%). Behavioral results revealed that the angry avatar triggered faster reactions, at a greater virtual distance from the participant, than the neutral avatar. Event-related potentials (ERPs) revealed that the angry avatar elicited a larger N170/vertex positive potential (VPP) and a smaller N3 than the neutral avatar. The 100% control condition elicited a larger late positive potential (LPP) than the 75% control condition. In addition, we observed enhanced theta power and accelerated heart rate for the angry avatar vs. neutral avatar, suggesting that these measures index threat perception. Our results indicate that perception of social threat takes place in early to middle cortical processing stages, and control ability is associated with cognitive evaluation in middle to late stages.


Subjects
Behavior Control, Virtual Reality, Humans, Social Perception, Electroencephalography, Cognition, Electrocardiography
4.
Proc Natl Acad Sci U S A ; 118(7)2021 02 16.
Article in English | MEDLINE | ID: mdl-33568530

ABSTRACT

Brain connectivity plays a major role in the encoding, transfer, and integration of sensory information. Interregional synchronization of neural oscillations in the γ-frequency band has been suggested as a key mechanism underlying perceptual integration. In a recent study, we found evidence for this hypothesis showing that the modulation of interhemispheric oscillatory synchrony by means of bihemispheric high-density transcranial alternating current stimulation (HD-tACS) affects binaural integration of dichotic acoustic features. Here, we aimed to establish a direct link between oscillatory synchrony, effective brain connectivity, and binaural integration. We experimentally manipulated oscillatory synchrony (using bihemispheric γ-tACS with different interhemispheric phase lags) and assessed the effect on effective brain connectivity and binaural integration (as measured with functional MRI and a dichotic listening task, respectively). We found that tACS reduced intrahemispheric connectivity within the auditory cortices and that antiphase (interhemispheric phase lag 180°) tACS modulated connectivity between the two auditory cortices. Importantly, the changes in intra- and interhemispheric connectivity induced by tACS were correlated with changes in perceptual integration. Our results indicate that γ-band synchronization between the two auditory cortices plays a functional role in binaural integration, supporting the proposed role of interregional oscillatory synchrony in perceptual integration.
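Interhemispheric phase synchrony of the kind manipulated here is commonly quantified with a phase-locking value (PLV) between two band-limited signals. Below is a minimal, hypothetical Python sketch on synthetic data (not the authors' analysis; real pipelines would band-pass filter the recordings into the γ band first, and `phase_locking` is an illustrative name):

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking(x, y):
    """Phase-locking value between two signals: the length of the mean
    phase-difference vector (1 = constant phase lag, 0 = no coupling).
    The angle of that vector estimates the phase lag itself."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    vec = np.mean(np.exp(1j * dphi))
    return np.abs(vec), np.angle(vec)

# Toy example: two 40-Hz "gamma" signals with a fixed 180-degree phase lag
fs = 1000
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(5)
left = np.sin(2 * np.pi * 40 * t) + 0.2 * rng.normal(size=t.size)
right = np.sin(2 * np.pi * 40 * t + np.pi) + 0.2 * rng.normal(size=t.size)
plv, lag = phase_locking(left, right)
```

Note that an antiphase relationship yields the same PLV magnitude as an in-phase one; the two conditions differ only in the recovered lag angle.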


Subjects
Auditory Perception, Brain/physiology, Functional Laterality, Connectome, Female, Gamma Rhythm, Humans, Magnetic Resonance Imaging, Male, Transcranial Direct Current Stimulation, Young Adult
5.
J Cogn Neurosci ; 35(8): 1262-1278, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37172122

ABSTRACT

While listening to meaningful speech, auditory input is processed more rapidly near the end (vs. beginning) of sentences. Although several studies have shown such word-to-word changes in auditory input processing, it is still unclear from which processing level these word-to-word dynamics originate. We investigated whether predictions derived from sentential context can result in auditory word-processing dynamics during sentence tracking. We presented healthy human participants with auditory stimuli consisting of word sequences, arranged into either predictable (coherent sentences) or less predictable (unstructured, random word sequences) 42-Hz amplitude-modulated speech, and a continuous 25-Hz amplitude-modulated distractor tone. We recorded RTs and frequency-tagged neuroelectric responses (auditory steady-state responses) to individual words at multiple temporal positions within the sentences, and quantified sentential context effects at each position while controlling for individual word characteristics (i.e., phonetics, frequency, and familiarity). We found that sentential context increasingly facilitates auditory word processing as evidenced by accelerated RTs and increased auditory steady-state responses to later-occurring words within sentences. These purely top-down contextually driven auditory word-processing dynamics occurred only when listeners focused their attention on the speech and did not transfer to the auditory processing of the concurrent distractor tone. These findings indicate that auditory word-processing dynamics during sentence tracking can originate from sentential predictions. The predictions depend on the listeners' attention to the speech, and affect only the processing of the parsed speech, not that of concurrently presented auditory streams.
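Frequency-tagged steady-state responses like these are typically quantified as the spectral amplitude at the tagging rate. A minimal, hypothetical sketch on synthetic data (`tagged_amplitude` and all parameters are illustrative, not the study's pipeline):

```python
import numpy as np

def tagged_amplitude(signal, fs, tag_freq):
    """Spectral amplitude at a frequency-tagged rate, read out from the FFT.

    Scaling by 2/N converts the one-sided FFT magnitude back into the
    amplitude of the underlying sinusoidal component.
    """
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal) * 2
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return spectrum[np.argmin(np.abs(freqs - tag_freq))]

# Toy signal: a 42-Hz "steady-state response" buried in noise, 2 s at 1000 Hz
fs = 1000
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(7)
sig = 1.5 * np.sin(2 * np.pi * 42 * t) + rng.normal(0, 1, t.size)
amp = tagged_amplitude(sig, fs, 42.0)
```

With a 2-s epoch the 42-Hz tag falls exactly on an FFT bin, so a response at the speech tagging rate separates cleanly from one at the 25-Hz distractor rate.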


Subjects
Speech Perception, Word Processing, Humans, Speech Perception/physiology, Auditory Perception, Language, Phonetics
6.
Neuroimage ; 274: 120140, 2023 07 01.
Article in English | MEDLINE | ID: mdl-37120042

ABSTRACT

Auditory perception can benefit from stimuli in non-auditory sensory modalities, as for example in lip-reading. Compared with such visual influences, tactile influences are still poorly understood. It has been shown that single tactile pulses can enhance the perception of auditory stimuli depending on their relative timing, but whether and how such brief auditory enhancements can be stretched in time with more sustained, phase-specific periodic tactile stimulation is still unclear. To address this question, we presented tactile stimulation that fluctuated coherently and continuously at 4 Hz with an auditory noise (either in-phase or anti-phase) and assessed its effect on the cortical processing and perception of an auditory signal embedded in that noise. Scalp-electroencephalography recordings revealed an enhancing effect of in-phase tactile stimulation on cortical responses phase-locked to the noise and a suppressive effect of anti-phase tactile stimulation on responses evoked by the auditory signal. Although these effects appeared to follow well-known principles of multisensory integration of discrete audio-tactile events, they were not accompanied by corresponding effects on behavioral measures of auditory signal perception. Our results indicate that continuous periodic tactile stimulation can enhance cortical processing of acoustically-induced fluctuations and mask cortical responses to an ongoing auditory signal. They further suggest that such sustained cortical effects can be insufficient for inducing sustained bottom-up auditory benefits.


Subjects
Auditory Evoked Potentials, Touch, Humans, Auditory Evoked Potentials/physiology, Touch/physiology, Auditory Perception/physiology, Electroencephalography, Noise, Acoustic Stimulation/methods
7.
Neuroimage ; 276: 120172, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37230207

ABSTRACT

In brain-based communication, voluntarily modulated brain signals (instead of motor output) are utilized to interact with the outside world. The possibility to circumvent the motor system constitutes an important alternative option for severely paralyzed patients. Most communication brain-computer interface (BCI) paradigms require intact visual capabilities and impose a high cognitive load, but for some patients these requirements are not met. In these situations, a better-suited, less cognitively demanding information-encoding approach may exploit auditorily-cued selective somatosensory attention to vibrotactile stimulation. Here, we propose, validate and optimize a novel communication-BCI paradigm using differential fMRI activation patterns evoked by selective somatosensory attention to tactile stimulation of the right hand or left foot. Using cytoarchitectonic probability maps and multi-voxel pattern analysis (MVPA), we show that the locus of selective somatosensory attention can be decoded from fMRI-signal patterns in (especially primary) somatosensory cortex with high accuracy and reliability, with the highest classification accuracy (85.93%) achieved when using Brodmann area 2 (SI-BA2) at a probability level of 0.2. Based on this outcome, we developed and validated a novel somatosensory attention-based yes/no communication procedure and demonstrated its high effectiveness even when using only a limited amount of (MVPA) training data. For the BCI user, the paradigm is straightforward, eye-independent, and requires only limited cognitive functioning. In addition, it is BCI-operator friendly given its objective and expertise-independent procedure. For these reasons, our novel communication paradigm has high potential for clinical applications.


Subjects
Brain-Computer Interfaces, Magnetic Resonance Imaging, Humans, Magnetic Resonance Imaging/methods, Reproducibility of Results, Electroencephalography/methods, Brain/diagnostic imaging, Hand, Somatosensory Cortex/diagnostic imaging, Somatosensory Cortex/physiology
8.
Neuroimage ; 258: 119375, 2022 09.
Article in English | MEDLINE | ID: mdl-35700949

ABSTRACT

Which processes in the human brain lead to the categorical perception of speech sounds? Investigation of this question is hampered by the fact that categorical speech perception is normally confounded by acoustic differences in the stimulus. By using ambiguous sounds, however, it is possible to dissociate acoustic from perceptual stimulus representations. Twenty-seven normally hearing individuals took part in an fMRI study in which they were presented with an ambiguous syllable (intermediate between /da/ and /ga/) in one ear and with a disambiguating acoustic feature (the third formant, F3) in the other ear. Multi-voxel pattern searchlight analysis was used to identify brain areas that consistently differentiated between response patterns associated with different syllable reports. By comparing responses to different stimuli with identical syllable reports and identical stimuli with different syllable reports, we disambiguated whether these regions primarily differentiated the acoustics of the stimuli or the syllable report. We found that BOLD activity patterns in left perisylvian regions (STG, SMG), left inferior frontal regions (vMC, IFG, AI), left supplementary motor cortex (SMA/pre-SMA), and right motor and somatosensory regions (M1/S1) represent listeners' syllable report irrespective of stimulus acoustics. Most of these regions are outside of what is traditionally regarded as auditory or phonological processing areas. Our results indicate that the process of speech sound categorization implicates decision-making mechanisms and auditory-motor transformations.


Subjects
Auditory Cortex, Speech Perception, Acoustic Stimulation/methods, Auditory Cortex/diagnostic imaging, Auditory Cortex/physiology, Auditory Perception, Hearing, Humans, Phonetics, Speech/physiology, Speech Perception/physiology
9.
Neuroimage ; 254: 119142, 2022 07 01.
Article in English | MEDLINE | ID: mdl-35342007

ABSTRACT

Developmental dyslexia is often accompanied by altered phonological processing of speech. Underlying neural changes have typically been characterized in terms of stimulus- and/or task-related responses within individual brain regions or their functional connectivity. Less is known about potential changes in the more global functional organization of brain networks. Here we recorded electroencephalography (EEG) in typical and dyslexic readers while they listened to (a) a random sequence of syllables and (b) a series of tri-syllabic real words. The network topology of the phase synchronization of evoked cortical oscillations was investigated in four frequency bands (delta, theta, alpha and beta) using minimum spanning tree graphs. We found that, compared to syllable tracking, word tracking triggered a shift toward a more integrated network topology in the theta band in both groups. Importantly, this change was significantly stronger in the dyslexic readers, who also showed increased reliance on a right frontal cluster of electrodes for word tracking. The current findings point towards an altered effect of word-level processing on the functional brain network organization that may be associated with less efficient phonological and reading skills in dyslexia.
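Minimum spanning tree (MST) analysis reduces each synchronization matrix to a unique acyclic backbone whose shape can be summarized with topology metrics such as the leaf fraction (higher values indicate a more integrated, star-like network). A hypothetical SciPy-based sketch (illustrative, not the authors' code):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_leaf_fraction(sync):
    """Build the MST of a connectivity matrix and return its leaf fraction.

    `sync` holds pairwise phase-synchronization values in [0, 1]. MST
    algorithms minimize total weight, so edges are weighted 1 - sync to make
    the strongest connections survive. Leaf fraction = leaves / (n - 1);
    1.0 corresponds to a maximally integrated star topology.
    """
    dist = 1.0 - sync
    np.fill_diagonal(dist, 0.0)              # no self-edges
    mst = minimum_spanning_tree(dist).toarray()
    adjacency = (mst + mst.T) > 0            # symmetrize the directed output
    degrees = adjacency.sum(axis=0)
    n = sync.shape[0]
    return np.sum(degrees == 1) / (n - 1)

# Toy example: node 0 is a hub, strongly synchronized with all other nodes
n = 6
sync = np.full((n, n), 0.1)
sync[0, :] = sync[:, 0] = 0.9
np.fill_diagonal(sync, 1.0)
leaf_frac = mst_leaf_fraction(sync)
```

In the toy case every non-hub node attaches directly to the hub, so all five of them are leaves and the leaf fraction is 1.0.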


Subjects
Dyslexia, Speech Perception, Auditory Perception, Brain, Electroencephalography, Humans, Reading, Speech, Speech Perception/physiology
10.
Neuroimage ; 228: 117670, 2021 03.
Article in English | MEDLINE | ID: mdl-33359352

ABSTRACT

Selective attention is essential for the processing of multi-speaker auditory scenes because they require the perceptual segregation of the relevant speech ("target") from irrelevant speech ("distractors"). For simple sounds, it has been suggested that the processing of multiple distractor sounds depends on bottom-up factors affecting task performance. However, it remains unclear whether such dependency applies to naturalistic multi-speaker auditory scenes. In this study, we tested the hypothesis that increased perceptual demand (the processing requirement posed by the scene to separate the target speech) reduces the cortical processing of distractor speech thus decreasing their perceptual segregation. Human participants were presented with auditory scenes including three speakers and asked to selectively attend to one speaker while their EEG was acquired. The perceptual demand of this selective listening task was varied by introducing an auditory cue (interaural time differences, ITDs) for segregating the target from the distractor speakers, while acoustic differences between the distractors were matched in ITD and loudness. We obtained a quantitative measure of the cortical segregation of distractor speakers by assessing the difference in how accurately speech-envelope following EEG responses could be predicted by models of averaged distractor speech versus models of individual distractor speech. In agreement with our hypothesis, results show that interaural segregation cues led to improved behavioral word-recognition performance and stronger cortical segregation of the distractor speakers. The neural effect was strongest in the δ-band and at early delays (0 - 200 ms). Our results indicate that during low perceptual demand, the human cortex represents individual distractor speech signals as more segregated. This suggests that, in addition to purely acoustical properties, the cortical processing of distractor speakers depends on factors like perceptual demand.
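Envelope-following responses of this kind are commonly modeled by regressing the EEG on time-lagged copies of a speech envelope and scoring the model by the correlation between predicted and recorded EEG. A simplified, hypothetical sketch (ordinary least squares with no regularization; the study's actual models may differ):

```python
import numpy as np

def lagged_design(envelope, max_lag):
    """Design matrix whose columns are the envelope delayed by 0..max_lag samples."""
    n = len(envelope)
    X = np.zeros((n, max_lag + 1))
    for lag in range(max_lag + 1):
        X[lag:, lag] = envelope[:n - lag]
    return X

def envelope_tracking_score(envelope, eeg, max_lag=20):
    """Fit a forward (TRF-style) model from envelope to EEG and return the
    correlation between the predicted and the recorded EEG."""
    X = lagged_design(envelope, max_lag)
    w, *_ = np.linalg.lstsq(X, eeg, rcond=None)
    return np.corrcoef(X @ w, eeg)[0, 1]

# Toy data: "EEG" is a delayed, attenuated copy of the envelope plus noise
rng = np.random.default_rng(1)
env = rng.normal(0, 1, 2000)
eeg = 0.8 * np.roll(env, 5) + rng.normal(0, 1.0, 2000)
score = envelope_tracking_score(env, eeg)
```

Comparing such scores for models built from averaged versus individual distractor envelopes gives a segregation measure in the spirit of the one described above.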


Subjects
Attention/physiology, Cerebral Cortex/physiology, Speech Perception/physiology, Acoustic Stimulation/methods, Adult, Electroencephalography/methods, Female, Humans, Male, Noise, Computer-Assisted Signal Processing, Young Adult
11.
Eur J Neurosci ; 54(10): 7626-7641, 2021 11.
Article in English | MEDLINE | ID: mdl-34697833

ABSTRACT

Rapid recognition and categorization of sounds are essential for humans and animals alike, both for understanding and reacting to our surroundings and for daily communication and social interaction. For humans, perception of speech sounds is of crucial importance. In real life, this task is complicated by the presence of a multitude of meaningful non-speech sounds. The present behavioural, magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) study set out to address how attention to speech versus attention to natural non-speech sounds within complex auditory scenes influences cortical processing. The stimuli were superimpositions of spoken words and environmental sounds, with parametric variation of the speech-to-environmental sound intensity ratio. The participants' task was to detect a repetition in either the speech or the environmental sound. We found that, specifically when participants attended to speech within the superimposed stimuli, higher speech-to-environmental sound ratios resulted in shorter sustained MEG responses, stronger BOLD fMRI signals (especially in the left supratemporal auditory cortex), and improved behavioural performance. No such effects of speech-to-environmental sound ratio were observed when participants attended to the environmental sound part within the exact same stimuli. These findings suggest stronger saliency of speech compared with other meaningful sounds during processing of natural auditory scenes, likely linked to speech-specific top-down and bottom-up mechanisms activated during speech perception that are needed for tracking speech in real-life-like auditory environments.


Subjects
Auditory Cortex, Speech Perception, Acoustic Stimulation, Animals, Auditory Perception, Brain Mapping, Humans, Magnetic Resonance Imaging, Phonetics, Speech
12.
J Cogn Neurosci ; 32(8): 1428-1437, 2020 08.
Article in English | MEDLINE | ID: mdl-32427072

ABSTRACT

Recent neuroimaging evidence suggests that the frequency of entrained oscillations in auditory cortices influences the perceived duration of speech segments, impacting word perception [Kösem, A., Bosker, H. R., Takashima, A., Meyer, A., Jensen, O., & Hagoort, P. Neural entrainment determines the words we hear. Current Biology, 28, 2867-2875, 2018]. We further tested the causal influence of neural entrainment frequency during speech processing, by manipulating entrainment with continuous transcranial alternating current stimulation (tACS) at distinct oscillatory frequencies (3 and 5.5 Hz) above the auditory cortices. Dutch participants listened to speech and were asked to report their percept of a target Dutch word, which contained a vowel with an ambiguous duration. Target words were presented either in isolation (first experiment) or at the end of spoken sentences (second experiment). We predicted that the tACS frequency would influence neural entrainment and thereby how speech is perceptually sampled, leading to a perceptual overestimation or underestimation of the vowel's duration. Whereas results from Experiment 1 did not confirm this prediction, results from Experiment 2 suggested a small effect of tACS frequency on target word perception: faster tACS led to more long-vowel word percepts, in line with the previous neuroimaging findings. Importantly, the difference in word perception induced by the different tACS frequencies was significantly larger in Experiment 2 than in Experiment 1, suggesting that the impact of tACS depends on the sensory context. tACS may have a stronger effect on spoken word perception when the words are presented in continuous speech as compared to when they are isolated, potentially because prior (stimulus-induced) entrainment of brain oscillations might be a prerequisite for tACS to be effective.


Subjects
Auditory Cortex, Transcranial Direct Current Stimulation, Auditory Perception, Hearing, Humans, Speech
13.
J Cogn Neurosci ; 32(7): 1242-1250, 2020 07.
Article in English | MEDLINE | ID: mdl-31682569

ABSTRACT

Perceiving speech requires the integration of different speech cues, that is, formants. When the speech signal is split so that different cues are presented to the right and left ear (dichotic listening), comprehension requires the integration of binaural information. Based on prior electrophysiological evidence, we hypothesized that the integration of dichotically presented speech cues is enabled by interhemispheric phase synchronization between primary and secondary auditory cortex in the gamma frequency band. We tested this hypothesis by applying transcranial alternating current stimulation (tACS) bilaterally above the superior temporal lobe to induce or disrupt interhemispheric gamma-phase coupling. In contrast to initial predictions, we found that gamma tACS applied in-phase above the two hemispheres (interhemispheric lag 0°) perturbs interhemispheric integration of speech cues, possibly because the applied stimulation perturbs an inherent phase lag between the left and right auditory cortex. We also observed this disruptive effect when applying antiphasic delta tACS (interhemispheric lag 180°). We conclude that interhemispheric phase coupling plays a functional role in interhemispheric speech integration. The direction of this effect may depend on the stimulation frequency.


Subjects
Auditory Cortex, Transcranial Direct Current Stimulation, Auditory Perception, Humans, Phonetics, Speech
14.
Neuroimage ; 202: 116134, 2019 11 15.
Article in English | MEDLINE | ID: mdl-31470124

ABSTRACT

Viewing a speaker's lip movements can improve the brain's ability to 'track' the amplitude envelope of the auditory speech signal and facilitate intelligibility. Whether such neurobehavioral benefits can also arise from tactually sensing the speech envelope on the skin is unclear. We hypothesized that tactile speech envelopes can improve neural tracking of auditory speech and thereby facilitate intelligibility. To test this, we applied continuous auditory speech and vibrotactile speech-envelope-shaped stimulation at various asynchronies to the ears and index fingers of normally-hearing human listeners while simultaneously assessing speech-recognition performance and cortical speech-envelope tracking with electroencephalography. Results indicate that tactile speech-shaped envelopes improve the cortical tracking, but not intelligibility, of degraded auditory speech. The cortical speech-tracking benefit occurs for tactile input leading the auditory input by 100 ms or less, emerges in the EEG during an early time window (~0-150 ms), and in particular involves cortical activity in the delta (1-4 Hz) range. These characteristics hint at a predictive mechanism for multisensory integration of complex slow time-varying inputs that might play a role in tactile speech communication.
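A speech-envelope-shaped tactile signal like the one described here can be derived from the audio with a standard envelope-extraction step: the magnitude of the analytic (Hilbert-transformed) signal, low-pass filtered to keep only the slow fluctuations. A hypothetical sketch (cutoff and filter order are illustrative, not the study's values):

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def speech_envelope(audio, fs, cutoff=10.0):
    """Amplitude envelope of a speech-like signal: magnitude of the analytic
    signal, low-pass filtered to retain slow (delta-range) fluctuations."""
    env = np.abs(hilbert(audio))
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, env)  # zero-phase filtering avoids group delay

# Toy "speech": a 200-Hz carrier amplitude-modulated at 3 Hz
fs = 1000
t = np.arange(0, 2.0, 1 / fs)
modulation = 1.0 + 0.8 * np.sin(2 * np.pi * 3 * t)
audio = modulation * np.sin(2 * np.pi * 200 * t)
env = speech_envelope(audio, fs)

# The recovered envelope should track the 3-Hz modulation closely
corr = np.corrcoef(env, modulation)[0, 1]
```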


Subjects
Cerebral Cortex/physiology, Delta Rhythm/physiology, Electroencephalography, Speech Intelligibility, Speech Perception/physiology, Touch Perception/physiology, Adolescent, Adult, Female, Humans, Male, Middle Aged, Physical Stimulation, Time Factors, Young Adult
15.
Neuroimage ; 202: 116175, 2019 11 15.
Article in English | MEDLINE | ID: mdl-31499178

ABSTRACT

Research on whether perception or other processes depend on the phase of neural oscillations is rapidly gaining popularity. However, it is unknown which methods are optimally suited to evaluate the hypothesized phase effect. Using a simulation approach, we here test the ability of different methods to detect such an effect on dichotomous (e.g., "hit" vs "miss") and continuous (e.g., scalp potentials) response variables. We manipulated parameters that characterise the phase effect or define the experimental approach to test for this effect. For each parameter combination and response variable, we identified an optimal method. We found that methods regressing single-trial responses on circular (sine and cosine) predictors perform best for all of the simulated parameters, regardless of the nature of the response variable (dichotomous or continuous). In sum, our study lays a foundation for optimized experimental designs and analyses in future studies investigating the role of phase for neural and behavioural responses. We provide MATLAB code for the statistical methods tested.
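The winning method, regressing single-trial responses on circular (sine and cosine) predictors, can be sketched in a few lines. The paper provides MATLAB code; this hypothetical Python analogue illustrates the idea for a continuous response variable:

```python
import numpy as np

def phase_regression(phase, response):
    """Regress single-trial responses on circular (cosine, sine) predictors.

    Returns the estimated amplitude of the phase effect and the fitted
    preferred phase, via ordinary least squares on an intercept plus
    cos(phase) and sin(phase) columns.
    """
    X = np.column_stack([np.ones_like(phase), np.cos(phase), np.sin(phase)])
    beta, *_ = np.linalg.lstsq(X, response, rcond=None)
    amplitude = np.hypot(beta[1], beta[2])          # strength of the phase effect
    preferred_phase = np.arctan2(beta[2], beta[1])  # phase of maximal response
    return amplitude, preferred_phase

# Simulated example: responses modulated by phase (true amplitude 0.8,
# true preferred phase 1.0 rad) plus Gaussian noise
rng = np.random.default_rng(0)
phase = rng.uniform(-np.pi, np.pi, 500)
response = 2.0 + 0.8 * np.cos(phase - 1.0) + rng.normal(0, 0.5, 500)
amp, pref = phase_regression(phase, response)
```

For a dichotomous response (e.g., hit vs. miss), the same cosine and sine predictors would enter a logistic regression instead of ordinary least squares.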


Subjects
Brain/physiology, Neurological Models, Neurons/physiology, Perception/physiology, Computer Simulation, Statistical Data Interpretation, Electroencephalography, Humans, Magnetoencephalography, Transcranial Direct Current Stimulation
16.
Neuroimage ; 173: 472-483, 2018 06.
Article in English | MEDLINE | ID: mdl-29518569

ABSTRACT

Often, in everyday life, we encounter auditory scenes comprising multiple simultaneous sounds and succeed in selectively attending to only one sound, typically the most relevant for ongoing behavior. Studies using basic sounds and two-talker stimuli have shown that auditory selective attention aids this by enhancing the neural representations of the attended sound in auditory cortex. It remains unknown, however, whether and how this selective attention mechanism operates on representations of auditory scenes containing natural sounds of different categories. In this high-field fMRI study we presented participants with simultaneous voices and musical instruments while manipulating their focus of attention. We found an attentional enhancement of neural sound representations in temporal cortex (as defined by spatial activation patterns) at locations that depended on the attended category (i.e., voices or instruments). In contrast, we found that in frontal cortex the site of enhancement was independent of the attended category and the same regions could flexibly represent any attended sound regardless of its category. These results are relevant to elucidate the interacting mechanisms of bottom-up and top-down processing when listening to real-life scenes comprised of multiple sound categories.


Subjects
Attention/physiology, Auditory Perception/physiology, Cerebral Cortex/physiology, Acoustic Stimulation/methods, Adult, Brain Mapping/methods, Female, Humans, Computer-Assisted Image Processing/methods, Magnetic Resonance Imaging/methods, Male, Young Adult
17.
Neuroimage ; 181: 617-626, 2018 11 01.
Article in English | MEDLINE | ID: mdl-30048749

ABSTRACT

In everyday life, we process mixtures of a variety of sounds. This processing involves the segregation of auditory input and the attentive selection of the stream that is most relevant to current goals. For natural scenes with multiple irrelevant sounds, however, it is unclear how the human auditory system represents all the unattended sounds. In particular, it remains elusive whether the sensory input to the human auditory cortex of unattended sounds biases the cortical integration/segregation of these sounds in a similar way as for attended sounds. In this study, we tested this by asking participants to selectively listen to one of two speakers or music in an ongoing 1-min sound mixture while their cortical neural activity was measured with EEG. Using a stimulus reconstruction approach, we find better reconstruction of mixed unattended sounds compared to individual unattended sounds at two early cortical stages (70 ms and 150 ms) of the auditory processing hierarchy. Crucially, at the earlier processing stage (70 ms), this cortical bias to represent unattended sounds as integrated rather than segregated increases with increasing similarity of the unattended sounds. Our results reveal an important role of acoustical properties for the cortical segregation of unattended auditory streams in natural listening situations. They further corroborate the notion that selective attention contributes functionally to cortical stream segregation. These findings highlight that a common, acoustics-based grouping principle governs the cortical representation of auditory streams not only inside but also outside the listener's focus of attention.


Assuntos
Atenção/fisiologia , Percepção Auditiva/fisiologia , Córtex Cerebral/fisiologia , Eletroencefalografia/métodos , Neuroimagem Funcional/métodos , Música , Percepção da Fala/fisiologia , Adolescente , Adulto , Córtex Auditivo/fisiologia , Feminino , Humanos , Masculino , Adulto Jovem
18.
Neuroimage ; 174: 274-287, 2018 07 01.
Artigo em Inglês | MEDLINE | ID: mdl-29571712

ABSTRACT

Paying selective attention to an audio frequency selectively enhances activity within primary auditory cortex (PAC) at the tonotopic site (frequency channel) representing that frequency. Animal PAC neurons achieve this 'frequency-specific attentional spotlight' by adapting their frequency tuning, yet comparable evidence in humans is scarce. Moreover, whether the spotlight operates in human midbrain is unknown. To address these issues, we studied the spectral tuning of frequency channels in human PAC and inferior colliculus (IC), using 7-T functional magnetic resonance imaging (FMRI) and frequency mapping, while participants focused on different frequency-specific sounds. We found that shifts in frequency-specific attention alter the response gain, but not tuning profile, of PAC frequency channels. The gain modulation was strongest in low-frequency channels and varied near-monotonically across the tonotopic axis, giving rise to the attentional spotlight. We observed less prominent, non-tonotopic spatial patterns of attentional modulation in IC. These results indicate that the frequency-specific attentional spotlight in human PAC as measured with FMRI arises primarily from tonotopic gain modulation, rather than adapted frequency tuning. Moreover, frequency-specific attentional modulation of afferent sound processing in human IC seems to be considerably weaker, suggesting that the spotlight diminishes toward this lower-order processing stage. Our study sheds light on how the human auditory pathway adapts to the different demands of selective hearing.
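
The distinction drawn above, response-gain modulation versus adapted frequency tuning, can be made concrete with a toy model (all parameter values hypothetical): if attention multiplies a channel's Gaussian tuning curve by a scalar gain, the attended/unattended response ratio is flat across frequency and the best frequency does not move, whereas a tuning shift moves the best frequency.

```python
import numpy as np

def channel_response(freqs, f0, sigma, gain):
    """Gaussian spectral tuning curve of one fMRI frequency channel."""
    return gain * np.exp(-0.5 * ((freqs - f0) / sigma) ** 2)

freqs = np.linspace(0.0, 4.0, 81)   # stimulus frequency axis (arbitrary units)

# Pure gain modulation: attention scales the response, tuning profile unchanged.
unattended = channel_response(freqs, f0=1.0, sigma=0.8, gain=1.0)
attended   = channel_response(freqs, f0=1.0, sigma=0.8, gain=1.4)

gain_ratio = attended / unattended   # constant across frequency -> pure gain
best_freq_shift = freqs[np.argmax(attended)] - freqs[np.argmax(unattended)]

# Adapted tuning, in contrast, relocates the best frequency:
shifted = channel_response(freqs, f0=1.5, sigma=0.8, gain=1.0)
tuning_shift = freqs[np.argmax(shifted)] - freqs[np.argmax(unattended)]
```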


Subjects
Attention/physiology, Auditory Cortex/physiology, Auditory Perception/physiology, Inferior Colliculi/physiology, Acoustic Stimulation, Adult, Auditory Pathways/physiology, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Young Adult
19.
Eur J Neurosci ; 48(8): 2849-2856, 2018 10.
Article in English | MEDLINE | ID: mdl-29430753

ABSTRACT

Interruptions in auditory input can be perceptually restored if they coincide with a masking sound, resulting in a continuity illusion. Previous studies have shown that this continuity illusion is associated with reduced low-frequency neural oscillations in the auditory cortex. However, the precise contribution of oscillatory amplitude changes and phase alignment to auditory restoration remains unclear. Using electroencephalography, we investigated induced power changes and phase locking in response to 3 Hz amplitude-modulated tones during the interval of an interrupting noise. We experimentally manipulated both the physical continuity of the tone (continuous vs. interrupted) and the masking potential of the noise (notched vs. full). We observed an attenuation of 3 Hz power during continuity illusions in comparison with both continuous tones and veridically perceived interrupted tones. This illusion-related suppression of low-frequency oscillations likely reflects a blurring of auditory object boundaries that supports continuity perception. We further observed increased 3 Hz phase locking during fully masked continuous tones compared with the other conditions. This low-frequency phase alignment may reflect the neural registration of the interrupting noise as a newly appearing object, whereas during continuity illusions, a spectral portion of this noise is delegated to filling the interruption. Taken together, our findings suggest that the suppression of slow cortical oscillations in both the power and phase domains supports perceptual restoration of interruptions in auditory input.
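
The two measures contrasted above can be sketched from one complex 3 Hz Fourier coefficient per trial (all signal parameters below are hypothetical): inter-trial phase coherence is the magnitude of the mean unit-normalized coefficient, while power is the mean squared magnitude.

```python
import numpy as np

fs = 250.0                                   # sampling rate in Hz (hypothetical)
t = np.arange(0.0, 2.0, 1.0 / fs)
f0 = 3.0                                     # modulation frequency of interest
# Windowed complex exponential: extracts one 3 Hz coefficient per trial.
kernel = np.exp(2j * np.pi * f0 * t) * np.hanning(t.size)

def coeffs(trials):
    return trials @ np.conj(kernel)

def phase_locking(trials):
    """Inter-trial phase coherence at f0 (0 = random phases, 1 = perfect locking)."""
    c = coeffs(trials)
    return np.abs(np.mean(c / np.abs(c)))

def mean_power(trials):
    """Mean 3 Hz power across trials (arbitrary units)."""
    return np.mean(np.abs(coeffs(trials)) ** 2)

rng = np.random.default_rng(1)
n_trials = 40
noise = lambda: 0.5 * rng.standard_normal(t.size)
# Trials phase-locked to stimulus onset vs. trials with random 3 Hz phase.
locked = np.array([np.cos(2 * np.pi * f0 * t) + noise()
                   for _ in range(n_trials)])
jittered = np.array([np.cos(2 * np.pi * f0 * t + rng.uniform(0, 2 * np.pi)) + noise()
                     for _ in range(n_trials)])
```

Phase locking separates the two simulated conditions even though their 3 Hz power is comparable, which is why the study can attribute distinct roles to the power and phase domains.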


Subjects
Acoustic Stimulation/methods, Auditory Perception/physiology, Electroencephalography/methods, Illusions/physiology, Perceptual Masking/physiology, Adult, Female, Humans, Male, Middle Aged
20.
Cereb Cortex ; 27(5): 3002-3014, 2017 05 01.
Article in English | MEDLINE | ID: mdl-27230215

ABSTRACT

A sound of interest may be tracked amid other salient sounds by focusing attention on its characteristic features including its frequency. Functional magnetic resonance imaging findings have indicated that frequency representations in human primary auditory cortex (AC) contribute to this feat. However, attentional modulations were examined at relatively low spatial and spectral resolutions, and frequency-selective contributions outside the primary AC could not be established. To address these issues, we compared blood oxygenation level-dependent (BOLD) responses in the superior temporal cortex of human listeners while they identified single frequencies versus listened selectively for various frequencies within a multifrequency scene. Using best-frequency mapping, we observed that the detailed spatial layout of attention-induced BOLD response enhancements in primary AC follows the tonotopy of stimulus-driven frequency representations, analogous to the "spotlight" of attention enhancing visuospatial representations in retinotopic visual cortex. Moreover, using an algorithm trained to discriminate stimulus-driven frequency representations, we could successfully decode the focus of frequency-selective attention from listeners' BOLD response patterns in nonprimary AC. Our results indicate that the human brain facilitates selective listening to a frequency of interest in a scene by reinforcing the fine-grained activity pattern throughout the entire superior temporal cortex that would be evoked if that frequency were present alone.
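
The decoding logic described above, training on stimulus-driven patterns and testing on attention-driven ones, can be sketched with a correlation-based template classifier. Voxel counts, noise levels, and the noisy-reinstatement model below are illustrative assumptions, not the study's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
n_vox, n_freq = 200, 4
# Hypothetical stimulus-driven voxel patterns, one per single-frequency stimulus.
templates = rng.standard_normal((n_freq, n_vox))

def classify(pattern, templates):
    """Assign a pattern to the template it spatially correlates with best."""
    r = [np.corrcoef(pattern, tpl)[0, 1] for tpl in templates]
    return int(np.argmax(r))

# Attention-driven patterns modeled as noisy reinstatements of the
# stimulus-driven pattern of the attended frequency.
n_trials = 100
correct = 0
for _ in range(n_trials):
    attended = int(rng.integers(n_freq))
    pattern = templates[attended] + rng.standard_normal(n_vox)
    correct += classify(pattern, templates) == attended
accuracy = correct / n_trials
```

Decoding accuracy well above the 1/4 chance level in this toy setting mirrors the study's finding that the focus of frequency-selective attention is recoverable from BOLD patterns.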


Subjects
Attention/physiology, Auditory Perception/physiology, Brain Mapping, Temporal Lobe/diagnostic imaging, Acoustic Stimulation, Adult, Female, Humans, Image Processing, Computer-Assisted, Judgment, Magnetic Resonance Imaging, Male, Oxygen/blood, Psychoacoustics, Young Adult