Results 1 - 20 of 35
1.
Eur J Neurosci ; 54(10): 7626-7641, 2021 11.
Article in English | MEDLINE | ID: mdl-34697833

ABSTRACT

Rapid recognition and categorization of sounds are essential for humans and animals alike, both for understanding and reacting to our surroundings and for daily communication and social interaction. For humans, perception of speech sounds is of crucial importance. In real life, this task is complicated by the presence of a multitude of meaningful non-speech sounds. The present behavioural, magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) study set out to address how attention to speech versus attention to natural non-speech sounds within complex auditory scenes influences cortical processing. The stimuli were superimpositions of spoken words and environmental sounds, with parametric variation of the speech-to-environmental sound intensity ratio. The participants' task was to detect a repetition in either the speech or the environmental sound. We found that, specifically when participants attended to speech within the superimposed stimuli, higher speech-to-environmental sound ratios resulted in shorter sustained MEG responses, stronger BOLD fMRI signals (especially in the left supratemporal auditory cortex), and improved behavioural performance. No such effects of speech-to-environmental sound ratio were observed when participants attended to the environmental sound part within the exact same stimuli. These findings suggest stronger saliency of speech compared with other meaningful sounds during processing of natural auditory scenes, likely linked to speech-specific top-down and bottom-up mechanisms activated during speech perception that are needed for tracking speech in real-life-like auditory environments.
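
As an illustration of the stimulus design, mixing a spoken word and an environmental sound at a parametric intensity ratio can be sketched as follows; the sampling rate, the stand-in signals, and the function name are assumptions, not the authors' materials.

```python
import numpy as np

def mix_at_ratio(speech, env, ratio_db):
    """Scale env so that RMS(speech)/RMS(env) matches ratio_db, then superimpose."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    gain = rms(speech) / (rms(env) * 10 ** (ratio_db / 20))
    return speech + gain * env

# Dummy 1-s signals at 16 kHz (stand-ins for a spoken word and an environmental sound)
fs = 16000
speech = np.random.randn(fs) * np.hanning(fs)
env = np.random.randn(fs)
mixture = mix_at_ratio(speech, env, ratio_db=6.0)   # +6 dB speech advantage
```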


Subjects
Auditory Cortex, Speech Perception, Acoustic Stimulation, Animals, Auditory Perception, Brain Mapping, Humans, Magnetic Resonance Imaging, Phonetics, Speech
2.
Neuropsychologia ; 158: 107889, 2021 07 30.
Article in English | MEDLINE | ID: mdl-33991561

ABSTRACT

Statistical learning, or the ability to extract statistical regularities from the sensory environment, plays a critical role in language acquisition and reading development. Here we employed electroencephalography (EEG) with frequency-tagging measures to track the temporal evolution of speech-structure learning in individuals with reading difficulties due to developmental dyslexia and in typical readers. We measured EEG while participants listened to (a) a structured stream of repeated tri-syllabic pseudowords, (b) a random stream of the same isochronous syllables, and (c) a series of tri-syllabic real Dutch words. Participants' behavioral learning outcome (pseudoword recognition) was measured after training. We found that syllable-rate tracking was comparable between the two groups and stable across both the random and structured streams of syllables. More importantly, we observed a gradual emergence of the tracking of tri-syllabic pseudoword structures in both groups. Compared to the typical readers, however, in the dyslexic readers this implicit speech structure learning seemed to build up at a slower pace. A brain-behavioral correlation analysis showed that slower learners (i.e., participants who were slower in establishing the neural tracking of pseudowords) were less skilled in phonological awareness. Moreover, those who showed stronger neural tracking of real words tended to be less fluent in the visual-verbal conversion of linguistic symbols. Taken together, our study provides an online neurophysiological approach to track the progression of implicit learning processes and gives insights into the learning difficulties associated with dyslexia from a dynamic perspective.
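
A minimal sketch of the frequency-tagging logic, not the authors' pipeline: quantify neural tracking as spectral amplitude at the syllable rate and at the tri-syllabic pseudoword rate (one third of it). The rates, duration, and sampling rate below are assumed placeholders.

```python
import numpy as np

fs = 250                        # EEG sampling rate in Hz (assumed)
f_syl = 4.0                     # syllable presentation rate in Hz (assumed)
f_word = f_syl / 3              # tri-syllabic pseudoword rate

eeg = np.random.randn(60 * fs)  # stand-in for one 60-s EEG channel
amp = np.abs(np.fft.rfft(eeg)) / len(eeg)
freqs = np.fft.rfftfreq(len(eeg), 1 / fs)

def peak_amp(f):
    """Spectral amplitude at the bin closest to frequency f."""
    return amp[np.argmin(np.abs(freqs - f))]

print("syllable-rate tracking:", peak_amp(f_syl))
print("pseudoword-rate tracking:", peak_amp(f_word))
```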


Subjects
Dyslexia, Speech, Humans, Language, Learning, Reading
4.
Proc Natl Acad Sci U S A ; 118(7)2021 02 16.
Article in English | MEDLINE | ID: mdl-33568530

ABSTRACT

Brain connectivity plays a major role in the encoding, transfer, and integration of sensory information. Interregional synchronization of neural oscillations in the γ-frequency band has been suggested as a key mechanism underlying perceptual integration. In a recent study, we found evidence for this hypothesis by showing that the modulation of interhemispheric oscillatory synchrony by means of bihemispheric high-density transcranial alternating current stimulation (HD-TACS) affects binaural integration of dichotic acoustic features. Here, we aimed to establish a direct link between oscillatory synchrony, effective brain connectivity, and binaural integration. We experimentally manipulated oscillatory synchrony (using bihemispheric γ-TACS with different interhemispheric phase lags) and assessed the effect on effective brain connectivity and binaural integration (as measured with functional MRI and a dichotic listening task, respectively). We found that TACS reduced intrahemispheric connectivity within the auditory cortices and that antiphase (interhemispheric phase lag 180°) TACS modulated connectivity between the two auditory cortices. Importantly, the changes in intra- and interhemispheric connectivity induced by TACS were correlated with changes in perceptual integration. Our results indicate that γ-band synchronization between the two auditory cortices plays a functional role in binaural integration, supporting the proposed role of interregional oscillatory synchrony in perceptual integration.
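
The stimulation logic, two gamma-band currents with a controlled interhemispheric phase lag, can be sketched as below. Frequency, amplitude, duration, and sampling rate are assumed values, not the study's parameters.

```python
import numpy as np

fs = 10000                 # DAC rate in Hz (assumed)
f_gamma = 40.0             # gamma stimulation frequency in Hz (assumed)
lag_deg = 180.0            # interhemispheric phase lag: 0 = in-phase, 180 = antiphase
amp_ma = 1.0               # peak current in mA (assumed)

t = np.arange(0, 10, 1 / fs)                                   # 10 s of stimulation
left = amp_ma * np.sin(2 * np.pi * f_gamma * t)                # left-hemisphere montage
right = amp_ma * np.sin(2 * np.pi * f_gamma * t + np.deg2rad(lag_deg))  # right, lagged
```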


Subjects
Auditory Perception, Brain/physiology, Functional Laterality, Connectome, Female, Gamma Rhythm, Humans, Magnetic Resonance Imaging, Male, Transcranial Direct Current Stimulation, Young Adult
5.
Neuroimage ; 228: 117670, 2021 03.
Article in English | MEDLINE | ID: mdl-33359352

ABSTRACT

Selective attention is essential for the processing of multi-speaker auditory scenes, as these require the perceptual segregation of the relevant speech ("target") from irrelevant speech ("distractors"). For simple sounds, it has been suggested that the processing of multiple distractor sounds depends on bottom-up factors affecting task performance. However, it remains unclear whether such dependency applies to naturalistic multi-speaker auditory scenes. In this study, we tested the hypothesis that increased perceptual demand (the processing requirement posed by the scene to separate the target speech) reduces the cortical processing of distractor speech, thus decreasing the perceptual segregation of the distractors. Human participants were presented with auditory scenes including three speakers and asked to selectively attend to one speaker while their EEG was acquired. The perceptual demand of this selective listening task was varied by introducing an auditory cue (interaural time differences, ITDs) for segregating the target from the distractor speakers, while acoustic differences between the distractors were matched in ITD and loudness. We obtained a quantitative measure of the cortical segregation of distractor speakers by assessing the difference in how accurately speech-envelope-following EEG responses could be predicted by models of averaged distractor speech versus models of individual distractor speech. In agreement with our hypothesis, results show that interaural segregation cues led to improved behavioral word-recognition performance and stronger cortical segregation of the distractor speakers. The neural effect was strongest in the δ-band and at early delays (0-200 ms). Our results indicate that during low perceptual demand, the human cortex represents individual distractor speech signals as more segregated. This suggests that, in addition to purely acoustical properties, the cortical processing of distractor speakers depends on factors like perceptual demand.
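
A minimal sketch of the described model comparison: predict an EEG channel from lagged speech envelopes and compare an averaged-distractor model against a model with individual distractor envelopes. Data shapes, the ridge regularization, and the in-sample evaluation are simplifying assumptions.

```python
import numpy as np

def lagged(x, n_lags):
    """Design matrix of x at sample lags 0..n_lags-1 (circular shift for brevity)."""
    return np.column_stack([np.roll(x, k) for k in range(n_lags)])

def ridge_predict(X, y, lam=1.0):
    """Fit a ridge regression and return the in-sample prediction."""
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return X @ w

fs, n = 64, 64 * 60                          # 64 Hz features, 60 s (assumed)
env1, env2 = np.abs(np.random.randn(2, n))   # two distractor envelopes (stand-ins)
eeg = np.random.randn(n)                     # one EEG channel (stand-in)
n_lags = int(0.2 * fs)                       # 0-200 ms lag window, as in the abstract

r = lambda a, b: np.corrcoef(a, b)[0, 1]
r_avg = r(ridge_predict(lagged((env1 + env2) / 2, n_lags), eeg), eeg)
r_ind = r(ridge_predict(np.hstack([lagged(env1, n_lags), lagged(env2, n_lags)]), eeg), eeg)
segregation = r_ind - r_avg   # larger values: distractors represented as more segregated
```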


Subjects
Attention/physiology, Cerebral Cortex/physiology, Speech Perception/physiology, Acoustic Stimulation/methods, Adult, Electroencephalography/methods, Female, Humans, Male, Noise, Signal Processing, Computer-Assisted, Young Adult
6.
Sci Rep ; 10(1): 11872, 2020 07 17.
Article in English | MEDLINE | ID: mdl-32681138

ABSTRACT

Patients with schizophrenia (ScZ) often show impairments in auditory information processing. These impairments have been related to clinical symptoms, such as auditory hallucinations. Some researchers have hypothesized that aberrant low-frequency oscillations contribute to auditory information processing deficits in ScZ. A paradigm for which modulations in low-frequency oscillations are consistently found in healthy individuals is the auditory continuity illusion (ACI), in which restoration processes lead to a perceptual grouping of tone fragments and a mask, so that a physically interrupted sound is perceived as continuous. We used the ACI paradigm to test the hypothesis that low-frequency oscillations play a role in aberrant auditory information processing in patients with ScZ (N = 23). Compared with healthy control participants, patients with ScZ showed elevated continuity illusions of interrupted, partially masked tones. Electroencephalography data demonstrate that this elevated continuity perception is reflected by diminished 3 Hz power. This suggests that reduced low-frequency oscillations relate to elevated restoration processes in ScZ. Our findings support the hypothesis that aberrant low-frequency oscillations contribute to altered perception-related auditory information processing in ScZ.


Subjects
Hallucinations, Illusions/psychology, Schizophrenia/diagnosis, Schizophrenic Psychology, Acoustic Stimulation, Data Analysis, Electroencephalography, Evoked Potentials, Auditory, Female, Humans, Male
7.
Front Neurosci ; 14: 362, 2020.
Article in English | MEDLINE | ID: mdl-32351361

ABSTRACT

Auditory perception is facilitated by prior knowledge about the statistics of the acoustic environment. Predictions about upcoming auditory stimuli are processed at various stages along the human auditory pathway, including the cortex and midbrain. Whether such auditory predictions are also processed at hierarchically lower stages, in the peripheral auditory system, is unclear. To address this question, we assessed outer hair cell (OHC) activity in response to isochronous tone sequences and varied the predictability and behavioral relevance of the individual tones (by manipulating tone-to-tone probabilities and the human participants' task, respectively). We found that predictability alters the amplitude of distortion-product otoacoustic emissions (DPOAEs, a measure of OHC activity) in a manner that depends on the behavioral relevance of the tones. Simultaneously recorded cortical responses showed a significant effect of both predictability and behavioral relevance of the tones, indicating that these experimental manipulations were effective at central auditory processing stages. Our results provide evidence for a top-down effect on the processing of auditory predictability in the human peripheral auditory system, in line with previous studies showing peripheral effects of auditory attention.
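
For context, the DPOAE amplitude is conventionally read out at the cubic distortion frequency 2f1 - f2 of the two primary tones. A minimal sketch with assumed primary frequencies and a stand-in ear-canal recording:

```python
import numpy as np

fs = 48000
f1, f2 = 4000.0, 4800.0        # primary-tone frequencies in Hz (assumed)
rec = np.random.randn(fs)      # 1-s stand-in for the ear-canal microphone signal

amp = np.abs(np.fft.rfft(rec * np.hanning(len(rec)))) / len(rec)
freqs = np.fft.rfftfreq(len(rec), 1 / fs)

f_dp = 2 * f1 - f2             # cubic distortion product: 3200 Hz here
dpoae_amp = amp[np.argmin(np.abs(freqs - f_dp))]
print(f"DPOAE amplitude at {f_dp:.0f} Hz:", dpoae_amp)
```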

8.
Front Hum Neurosci ; 14: 113, 2020.
Article in English | MEDLINE | ID: mdl-32351371

ABSTRACT

"Locked-in" patients lose their ability to communicate naturally due to motor system dysfunction. Brain-computer interfacing offers a solution for their inability to communicate by enabling motor-independent communication. Straightforward and convenient in-session communication is essential in clinical environments. The present study introduces a functional near-infrared spectroscopy (fNIRS)-based binary communication paradigm that requires limited preparation time and merely nine optodes. Eighteen healthy participants performed two mental imagery tasks, mental drawing and spatial navigation, to answer yes/no questions during one of two auditorily cued time windows. Each of the six questions was answered five times, resulting in five trials per answer. This communication paradigm thus combines both spatial (two different mental imagery tasks, here mental drawing for "yes" and spatial navigation for "no") and temporal (distinct time windows for encoding a "yes" and "no" answer) fNIRS signal features for information encoding. Participants' answers were decoded in simulated real-time using general linear model analysis. Joint analysis of all five encoding trials resulted in an average accuracy of 66.67 and 58.33% using the oxygenated (HbO) and deoxygenated (HbR) hemoglobin signal respectively. For half of the participants, an accuracy of 83.33% or higher was reached using either the HbO signal or the HbR signal. For four participants, effective communication with 100% accuracy was achieved using either the HbO or HbR signal. An explorative analysis investigated the differentiability of the two mental tasks based solely on spatial fNIRS signal features. Using multivariate pattern analysis (MVPA) group single-trial accuracies of 58.33% (using 20 training trials per task) and 60.56% (using 40 training trials per task) could be obtained. Combining the five trials per run using a majority voting approach heightened these MVPA accuracies to 62.04 and 75%. Additionally, an fNIRS suitability questionnaire capturing participants' physical features was administered to explore its predictive value for evaluating general data quality. Obtained questionnaire scores correlated significantly (r = -0.499) with the signal-to-noise of the raw light intensities. While more work is needed to further increase decoding accuracy, this study shows the potential of answer encoding using spatiotemporal fNIRS signal features or spatial fNIRS signal features only.

9.
J Cogn Neurosci ; 32(8): 1428-1437, 2020 08.
Article in English | MEDLINE | ID: mdl-32427072

ABSTRACT

Recent neuroimaging evidence suggests that the frequency of entrained oscillations in auditory cortices influences the perceived duration of speech segments, impacting word perception [Kösem, A., Bosker, H. R., Takashima, A., Meyer, A., Jensen, O., & Hagoort, P. Neural entrainment determines the words we hear. Current Biology, 28, 2867-2875, 2018]. We further tested the causal influence of neural entrainment frequency during speech processing by manipulating entrainment with continuous transcranial alternating current stimulation (tACS) at distinct oscillatory frequencies (3 and 5.5 Hz) above the auditory cortices. Dutch participants listened to speech and were asked to report their percept of a target Dutch word, which contained a vowel with an ambiguous duration. Target words were presented either in isolation (first experiment) or at the end of spoken sentences (second experiment). We predicted that the tACS frequency would influence neural entrainment and thereby how speech is perceptually sampled, leading to a perceptual overestimation or underestimation of the vowel's duration. Whereas results from Experiment 1 did not confirm this prediction, results from Experiment 2 suggested a small effect of tACS frequency on target word perception: faster tACS led to more long-vowel word percepts, in line with the previous neuroimaging findings. Importantly, the difference in word perception induced by the different tACS frequencies was significantly larger in Experiment 2 than in Experiment 1, suggesting that the impact of tACS depends on the sensory context. tACS may have a stronger effect on spoken word perception when the words are presented in continuous speech than when they are isolated, potentially because prior (stimulus-induced) entrainment of brain oscillations is a prerequisite for tACS to be effective.

10.
J Cogn Neurosci ; 32(7): 1242-1250, 2020 07.
Article in English | MEDLINE | ID: mdl-31682569

ABSTRACT

Perceiving speech requires the integration of different speech cues, that is, formants. When the speech signal is split so that different cues are presented to the right and left ear (dichotic listening), comprehension requires the integration of binaural information. Based on prior electrophysiological evidence, we hypothesized that the integration of dichotically presented speech cues is enabled by interhemispheric phase synchronization between primary and secondary auditory cortex in the gamma frequency band. We tested this hypothesis by applying transcranial alternating current stimulation (TACS) bilaterally above the superior temporal lobe to induce or disrupt interhemispheric gamma-phase coupling. In contrast to initial predictions, we found that gamma TACS applied in-phase above the two hemispheres (interhemispheric lag 0°) perturbs interhemispheric integration of speech cues, possibly because the applied stimulation disrupts an inherent phase lag between the left and right auditory cortex. We also observed this disruptive effect when applying antiphasic delta TACS (interhemispheric lag 180°). We conclude that interhemispheric phase coupling plays a functional role in interhemispheric speech integration. The direction of this effect may depend on the stimulation frequency.

11.
Neuroimage ; 202: 116175, 2019 11 15.
Article in English | MEDLINE | ID: mdl-31499178

ABSTRACT

Research on whether perception or other processes depend on the phase of neural oscillations is rapidly gaining popularity. However, it is unknown which methods are optimally suited to evaluate the hypothesized phase effect. Using a simulation approach, we here test the ability of different methods to detect such an effect on dichotomous (e.g., "hit" vs "miss") and continuous (e.g., scalp potentials) response variables. We manipulated parameters that characterise the phase effect or define the experimental approach to test for this effect. For each parameter combination and response variable, we identified an optimal method. We found that methods regressing single-trial responses on circular (sine and cosine) predictors perform best for all of the simulated parameters, regardless of the nature of the response variable (dichotomous or continuous). In sum, our study lays a foundation for optimized experimental designs and analyses in future studies investigating the role of phase for neural and behavioural responses. We provide MATLAB code for the statistical methods tested.
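
The authors provide MATLAB code; the sketch below is a Python analogue of the recommended approach on simulated data, regressing single-trial responses on sine and cosine of the phase, with logistic regression for dichotomous and linear regression for continuous outcomes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

n = 500
phase = np.random.uniform(-np.pi, np.pi, n)         # single-trial oscillatory phases
X = np.column_stack([np.sin(phase), np.cos(phase)]) # circular predictors

# Dichotomous outcome (e.g., "hit" vs "miss"), simulated with a phase effect:
hits = (np.random.rand(n) < 0.5 + 0.2 * np.cos(phase)).astype(int)
logit = LogisticRegression().fit(X, hits)

# Continuous outcome (e.g., scalp potential), simulated with a phase effect:
pot = 1.5 * np.cos(phase - 0.8) + np.random.randn(n)
lin = LinearRegression().fit(X, pot)

# The fitted sine/cosine weights give the effect amplitude and preferred phase:
amp = np.hypot(*lin.coef_)
pref_phase = np.arctan2(lin.coef_[0], lin.coef_[1])
```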


Subjects
Brain/physiology, Models, Neurological, Neurons/physiology, Perception/physiology, Computer Simulation, Data Interpretation, Statistical, Electroencephalography, Humans, Magnetoencephalography, Transcranial Direct Current Stimulation
12.
Neuroimage ; 202: 116134, 2019 11 15.
Article in English | MEDLINE | ID: mdl-31470124

ABSTRACT

Viewing a speaker's lip movements can improve the brain's ability to 'track' the amplitude envelope of the auditory speech signal and facilitate intelligibility. Whether such neurobehavioral benefits can also arise from tactually sensing the speech envelope on the skin is unclear. We hypothesized that tactile speech envelopes can improve neural tracking of auditory speech and thereby facilitate intelligibility. To test this, we applied continuous auditory speech and vibrotactile speech-envelope-shaped stimulation at various asynchronies to the ears and index fingers of normally hearing human listeners while simultaneously assessing speech-recognition performance and cortical speech-envelope tracking with electroencephalography. Results indicate that tactile speech-shaped envelopes improve the cortical tracking, but not intelligibility, of degraded auditory speech. The cortical speech-tracking benefit occurs for tactile input leading the auditory input by 100 ms or less, emerges in the EEG during an early time window (~0-150 ms), and in particular involves cortical activity in the delta (1-4 Hz) range. These characteristics hint at a predictive mechanism for multisensory integration of complex slow time-varying inputs that might play a role in tactile speech communication.
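
One plausible way to quantify delta-band envelope tracking at a given neural lag is a lagged correlation between the envelope and delta-filtered EEG, sketched below with assumed data and filter settings; the study's actual tracking measure may differ.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 128
b, a = butter(3, [1, 4], btype="bandpass", fs=fs)   # delta band, 1-4 Hz

env = np.abs(np.random.randn(60 * fs))   # speech envelope (stand-in)
eeg = np.random.randn(60 * fs)           # one EEG channel (stand-in)
eeg_delta = filtfilt(b, a, eeg)

def tracking(lag_ms):
    """Correlation with EEG shifted by lag_ms relative to the envelope."""
    k = int(lag_ms / 1000 * fs)
    return np.corrcoef(env[: len(env) - k], eeg_delta[k:])[0, 1]

print(tracking(100))   # e.g., neural response lagging the envelope by 100 ms
```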


Subjects
Cerebral Cortex/physiology, Delta Rhythm/physiology, Electroencephalography, Speech Intelligibility, Speech Perception/physiology, Touch Perception/physiology, Adolescent, Adult, Female, Humans, Male, Middle Aged, Physical Stimulation, Time Factors, Young Adult
15.
Neuroimage ; 181: 617-626, 2018 11 01.
Article in English | MEDLINE | ID: mdl-30048749

ABSTRACT

In everyday life, we process mixtures of a variety of sounds. This processing involves the segregation of auditory input and the attentive selection of the stream that is most relevant to current goals. For natural scenes with multiple irrelevant sounds, however, it is unclear how the human auditory system represents all the unattended sounds. In particular, it remains elusive whether the sensory input that unattended sounds provide to the human auditory cortex biases the cortical integration/segregation of these sounds in a similar way as for attended sounds. In this study, we tested this by asking participants to selectively listen to one of two speakers or to music in an ongoing 1-min sound mixture while their cortical neural activity was measured with EEG. Using a stimulus-reconstruction approach, we find better reconstruction of mixed unattended sounds than of individual unattended sounds at two early cortical stages (70 ms and 150 ms) of the auditory processing hierarchy. Crucially, at the earlier processing stage (70 ms), this cortical bias to represent unattended sounds as integrated rather than segregated increases with increasing similarity of the unattended sounds. Our results reveal an important role of acoustical properties for the cortical segregation of unattended auditory streams in natural listening situations. They further corroborate the notion that selective attention contributes functionally to cortical stream segregation. These findings highlight that a common, acoustics-based grouping principle governs the cortical representation of auditory streams not only inside but also outside the listener's focus of attention.
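
The stimulus-reconstruction comparison can be sketched as follows: train a linear decoder from multichannel EEG to a sound envelope and compare reconstruction accuracy for the mixture of the unattended sounds versus the individual unattended sounds. Shapes, data, and the in-sample fit are simplifying assumptions.

```python
import numpy as np

def decode(eeg, target, lam=1.0):
    """Regularized least-squares linear decoder from multichannel EEG to an envelope."""
    w = np.linalg.solve(eeg.T @ eeg + lam * np.eye(eeg.shape[1]), eeg.T @ target)
    return eeg @ w

n, ch = 64 * 60, 32                           # 60 s at 64 Hz, 32 channels (assumed)
eeg = np.random.randn(n, ch)                  # stand-in EEG
env_a, env_b = np.abs(np.random.randn(2, n))  # two unattended sound envelopes
env_mix = (env_a + env_b) / 2                 # envelope of their mixture

r = lambda a, b: np.corrcoef(a, b)[0, 1]
score_mix = r(decode(eeg, env_mix), env_mix)
score_ind = np.mean([r(decode(eeg, e), e) for e in (env_a, env_b)])
# score_mix > score_ind would indicate a cortical bias toward the integrated mixture.
```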


Subjects
Attention/physiology, Auditory Perception/physiology, Cerebral Cortex/physiology, Electroencephalography/methods, Functional Neuroimaging/methods, Music, Speech Perception/physiology, Adolescent, Adult, Auditory Cortex/physiology, Female, Humans, Male, Young Adult
16.
Neuroimage ; 173: 472-483, 2018 06.
Article in English | MEDLINE | ID: mdl-29518569

ABSTRACT

Often, in everyday life, we encounter auditory scenes comprising multiple simultaneous sounds and succeed in selectively attending to only one sound, typically the one most relevant for ongoing behavior. Studies using basic sounds and two-talker stimuli have shown that auditory selective attention supports this by enhancing the neural representations of the attended sound in auditory cortex. It remains unknown, however, whether and how this selective attention mechanism operates on representations of auditory scenes containing natural sounds of different categories. In this high-field fMRI study we presented participants with simultaneous voices and musical instruments while manipulating their focus of attention. We found an attentional enhancement of neural sound representations in temporal cortex - as defined by spatial activation patterns - at locations that depended on the attended category (i.e., voices or instruments). In contrast, we found that in frontal cortex the site of enhancement was independent of the attended category, and the same regions could flexibly represent any attended sound regardless of its category. These results help elucidate the interacting mechanisms of bottom-up and top-down processing when listening to real-life scenes comprising multiple sound categories.


Subjects
Attention/physiology, Auditory Perception/physiology, Cerebral Cortex/physiology, Acoustic Stimulation/methods, Adult, Brain Mapping/methods, Female, Humans, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Male, Young Adult
17.
Neuroimage ; 174: 274-287, 2018 07 01.
Article in English | MEDLINE | ID: mdl-29571712

ABSTRACT

Paying selective attention to an audio frequency selectively enhances activity within primary auditory cortex (PAC) at the tonotopic site (frequency channel) representing that frequency. Animal PAC neurons achieve this 'frequency-specific attentional spotlight' by adapting their frequency tuning, yet comparable evidence in humans is scarce. Moreover, whether the spotlight operates in human midbrain is unknown. To address these issues, we studied the spectral tuning of frequency channels in human PAC and inferior colliculus (IC), using 7-T functional magnetic resonance imaging (fMRI) and frequency mapping, while participants focused on different frequency-specific sounds. We found that shifts in frequency-specific attention alter the response gain, but not tuning profile, of PAC frequency channels. The gain modulation was strongest in low-frequency channels and varied near-monotonically across the tonotopic axis, giving rise to the attentional spotlight. We observed less prominent, non-tonotopic spatial patterns of attentional modulation in IC. These results indicate that the frequency-specific attentional spotlight in human PAC as measured with fMRI arises primarily from tonotopic gain modulation, rather than adapted frequency tuning. Moreover, frequency-specific attentional modulation of afferent sound processing in human IC seems to be considerably weaker, suggesting that the spotlight diminishes toward this lower-order processing stage. Our study sheds light on how the human auditory pathway adapts to the different demands of selective hearing.


Subjects
Attention/physiology, Auditory Cortex/physiology, Auditory Perception/physiology, Inferior Colliculi/physiology, Acoustic Stimulation, Adult, Auditory Pathways/physiology, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Young Adult
18.
Eur J Neurosci ; 48(8): 2849-2856, 2018 10.
Article in English | MEDLINE | ID: mdl-29430753

ABSTRACT

Interruptions in auditory input can be perceptually restored if they coincide with a masking sound, resulting in a continuity illusion. Previous studies have shown that this continuity illusion is associated with reduced low-frequency neural oscillations in the auditory cortex. However, the precise contribution of oscillatory amplitude changes and phase alignment to auditory restoration remains unclear. Using electroencephalography, we investigated induced power changes and phase locking in response to 3 Hz amplitude-modulated tones during the interval of an interrupting noise. We experimentally manipulated both the physical continuity of the tone (continuous vs. interrupted) and the masking potential of the noise (notched vs. full). We observed an attenuation of 3 Hz power during continuity illusions in comparison with both continuous tones and veridically perceived interrupted tones. This illusion-related suppression of low-frequency oscillations likely reflects a blurring of auditory object boundaries that supports continuity perception. We further observed increased 3 Hz phase locking during fully masked continuous tones compared with the other conditions. This low-frequency phase alignment may reflect the neural registration of the interrupting noise as a newly appearing object, whereas during continuity illusions, a spectral portion of this noise is delegated to filling the interruption. Taken together, our findings suggest that the suppression of slow cortical oscillations in both the power and phase domains supports perceptual restoration of interruptions in auditory input.
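
Induced power and inter-trial phase locking at 3 Hz can be estimated from band-passed single trials via the Hilbert transform. A minimal sketch with simulated trials; the published analysis may differ in detail.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs, n_trials, n = 256, 40, 256 * 2           # 2-s trials (assumed)
trials = np.random.randn(n_trials, n)        # stand-in single-trial EEG
b, a = butter(3, [2.5, 3.5], btype="bandpass", fs=fs)   # narrow band around 3 Hz

analytic = hilbert(filtfilt(b, a, trials, axis=1), axis=1)
# Single-trial 3 Hz power averaged over trials (a simple proxy for induced power):
power = np.mean(np.abs(analytic) ** 2, axis=0)
# Inter-trial phase locking (0-1): length of the mean unit phasor across trials
itc = np.abs(np.mean(analytic / np.abs(analytic), axis=0))
```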


Subjects
Acoustic Stimulation/methods, Auditory Perception/physiology, Electroencephalography/methods, Illusions/physiology, Perceptual Masking/physiology, Adult, Female, Humans, Male, Middle Aged
19.
Curr Biol ; 28(2): 161-169.e5, 2018 01 22.
Article in English | MEDLINE | ID: mdl-29290557

ABSTRACT

Speech is crucial for communication in everyday life. Speech-brain entrainment, the alignment of neural activity to the slow temporal fluctuations (envelope) of acoustic speech input, is a ubiquitous element of current theories of speech processing. Associations between speech-brain entrainment and the acoustic speech signal, listening task, and speech intelligibility have been observed repeatedly. However, a methodological bottleneck has so far prevented clarifying whether speech-brain entrainment contributes functionally to (i.e., causes) speech intelligibility or is merely an epiphenomenon of it. To address this long-standing issue, we experimentally manipulated speech-brain entrainment without concomitant acoustic and task-related variations, using a brain stimulation approach that enables modulating listeners' neural activity with transcranial currents carrying speech-envelope information. Results from two experiments involving a cocktail-party-like scenario and a listening situation devoid of aural speech-amplitude envelope input reveal consistent effects on listeners' speech-recognition performance, demonstrating a causal role of speech-brain entrainment in speech intelligibility. Our findings imply that speech-brain entrainment is critical for auditory speech comprehension and suggest that transcranial stimulation with speech-envelope-shaped currents can be utilized to modulate speech comprehension in impaired listening conditions.
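
A speech-envelope-shaped current of the kind described here can be sketched by extracting the speech amplitude envelope and keeping only its slow fluctuations; the cutoff, scaling, and data below are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 16000
speech = np.random.randn(10 * fs)              # stand-in for 10 s of speech audio
env = np.abs(hilbert(speech))                  # amplitude envelope
b, a = butter(3, 8, btype="lowpass", fs=fs)    # keep slow fluctuations (< 8 Hz)
env_slow = filtfilt(b, a, env)

env_c = env_slow - env_slow.mean()             # zero-mean stimulation waveform
current_ma = 1.0 * env_c / np.abs(env_c).max() # scaled to a +/-1 mA peak (assumed)
```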


Subjects
Speech Intelligibility, Speech Perception, Adult, Female, Humans, Male, Netherlands, Young Adult
20.
J Cogn Neurosci ; 29(6): 980-990, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28129050

ABSTRACT

In many everyday listening situations, an otherwise audible sound may go unnoticed amid multiple other sounds. This auditory phenomenon, called informational masking (IM), is sensitive to visual input and involves early (50-250 msec) activity in the auditory cortex (the so-called awareness-related negativity). It is still unclear whether and how the timing of visual input influences the neural correlates of IM in auditory cortex. To address this question, we obtained simultaneous behavioral and neural measures of IM from human listeners in the presence of a visual input stream and varied the asynchrony between the visual stream and the rhythmic auditory target stream (in-phase, antiphase, or random). Results show effects of cross-modal asynchrony on both target detectability (reaction time and sensitivity) and the awareness-related negativity measured with EEG, which were driven primarily by antiphasic audiovisual stimuli. The neural effect was limited to the interval shortly before listeners' behavioral report of the target. Our results indicate that the relative timing of visual input can influence the IM of a target sound in the human auditory cortex. They further show that this audiovisual influence occurs early during the perceptual buildup of the target sound. In summary, these findings provide novel insights into the interplay of IM and multisensory processing in the human brain.


Subjects
Auditory Cortex/physiology, Auditory Perception/physiology, Electroencephalography/methods, Evoked Potentials/physiology, Perceptual Masking/physiology, Visual Perception/physiology, Adolescent, Adult, Female, Humans, Male, Young Adult