Results 1 - 20 of 126
1.
J Cogn Neurosci ; 36(1): 128-142, 2024 01 01.
Article in English | MEDLINE | ID: mdl-37977156

ABSTRACT

Visual speech plays a powerful role in facilitating auditory speech processing and has drawn public attention with the widespread use of face masks during the COVID-19 pandemic. In a previous magnetoencephalography study, we showed that occluding the mouth area significantly impairs neural speech tracking. To rule out the possibility that this deterioration is due to degraded sound quality, in the present follow-up study we presented participants with audiovisual (AV) and audio-only (A) speech. We further independently manipulated the trials by adding a face mask and a distractor speaker. Our results clearly show that face masks affect speech tracking only in AV conditions, not in A conditions. This indicates that face masks primarily impact speech processing by blocking visual speech rather than by acoustic degradation. We further characterize how the spectrogram, lip movements, and lexical units are tracked at the sensor level, finding visual benefits for tracking the spectrogram, especially in the multi-speaker condition. While lip movements show an additional improvement and visual benefit over tracking of the spectrogram only in clear speech conditions, lexical units (phonemes and word onsets) show no visual enhancement at all. We hypothesize that in young, normal-hearing individuals, visual input is used less for specific feature extraction and acts more as a general resource for guiding attention.


Subject(s)
Speech Perception, Humans, Speech, Visual Perception, Follow-Up Studies, Pandemics, Acoustic Stimulation
2.
Eur J Neurosci ; 2024 May 06.
Article in English | MEDLINE | ID: mdl-38711271

ABSTRACT

Regularities in our surroundings lead to predictions about upcoming events. Previous research has shown that omitted sounds during otherwise regular tone sequences elicit frequency-specific neural activity related to the upcoming but omitted tone. We tested whether this neural response depends on the unpredictability of the omission. To this end, we recorded magnetoencephalography (MEG) data while participants listened to ordered or random tone sequences in which omissions occurred either at ordered or at random positions. Multivariate pattern analysis shows that the frequency-specific neural pattern during omissions within ordered tone sequences occurs independently of the regularity of the omissions. These results suggest that auditory predictions based on sensory experience are not immediately updated by violations of those expectations.

3.
Psychophysiology ; 61(1): e14435, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37691098

ABSTRACT

Predictive processing theories, which model the brain as a "prediction machine", explain a wide range of cognitive functions, including learning, perception and action. Furthermore, it is increasingly accepted that aberrant prediction tendencies play a crucial role in psychiatric disorders. Given this explanatory value for clinical psychiatry, prediction tendencies are often implicitly conceptualized as individual traits, or as tendencies that generalize across situations. As this has not yet been shown explicitly, in the current study we quantify to what extent the individual tendency to anticipate sensory features of high probability generalizes across modalities. Using magnetoencephalography (MEG), we recorded brain activity while participants were presented with a sequence of four different (either visual or auditory) stimuli, which changed according to predefined transitional probabilities of two entropy levels: ordered vs. random. Our results show that, at the group level, under conditions of low entropy, stimulus features of high probability are preactivated in the auditory but not in the visual modality. Crucially, the magnitude of the individual tendency to predict sensory events does not appear to correlate between the two modalities. Furthermore, reliability statistics indicate poor internal consistency, suggesting that the measures from the different modalities are unlikely to reflect a single, common cognitive process. In sum, our findings suggest that quantification and interpretation of individual prediction tendencies cannot be generalized across modalities.


Subject(s)
Auditory Perception, Visual Perception, Humans, Reproducibility of Results, Brain, Magnetoencephalography, Acoustic Stimulation
4.
Cereb Cortex ; 33(7): 3478-3489, 2023 03 21.
Article in English | MEDLINE | ID: mdl-35972419

ABSTRACT

Spatially selective modulation of alpha power (8-14 Hz) is a robust finding in electrophysiological studies of visual attention and has recently been generalized to auditory spatial attention. This modulation pattern is interpreted as reflecting a top-down mechanism for suppressing distracting input from unattended directions of sound origin. The present study on auditory spatial attention extends this interpretation by demonstrating that alpha power modulation is closely linked to oculomotor action. We designed an auditory paradigm in which participants were required to attend to upcoming sounds from one of 24 loudspeakers arranged in a circular array around the head. Maintaining the location of an auditory cue was associated with a topographically modulated distribution of posterior alpha power resembling the findings known from visual attention. Multivariate analyses allowed the prediction of the sound location in the horizontal plane. Importantly, this prediction was also possible when derived from signals capturing saccadic activity. A control experiment confirmed that, in the absence of any visual or auditory input, lateralization of alpha power is linked to the lateralized direction of gaze. Attending to an auditory target thus engages oculomotor and visual cortical areas in a topographic manner akin to the retinotopic organization associated with visual attention.


Subject(s)
Auditory Perception, Sound Localization, Humans, Auditory Perception/physiology, Alpha Rhythm/physiology, Brain/physiology, Sound Localization/physiology, Sound
5.
Cereb Cortex ; 33(11): 6608-6619, 2023 05 24.
Article in English | MEDLINE | ID: mdl-36617790

ABSTRACT

Listening can be conceptualized as a process of active inference, in which the brain forms internal models to integrate auditory information in a complex interaction of bottom-up and top-down processes. We propose that individuals vary in their "prediction tendency" and that this variation contributes to experiential differences in everyday listening situations and shapes the cortical processing of acoustic input such as speech. Here, we presented tone sequences of varying entropy levels to independently quantify auditory prediction tendency (the tendency to anticipate low-level acoustic features) for each individual. This measure was then used to predict cortical speech tracking in a multi-speaker listening task, in which participants listened to audiobooks narrated by a target speaker, either in isolation or with interference from one or two distractors. Furthermore, semantic violations were introduced into the story to also examine effects of word surprisal during speech processing. Our results show that cortical speech tracking is related to prediction tendency. In addition, we find interactions between prediction tendency and background noise as well as word surprisal in disparate brain regions. Our findings suggest that individual prediction tendencies are generalizable across different listening situations and may serve as a valuable factor in explaining interindividual differences in natural listening situations.


Subject(s)
Auditory Cortex, Speech Perception, Humans, Speech, Acoustic Stimulation/methods, Noise
6.
J Neurosci ; 42(7): 1343-1351, 2022 02 16.
Article in English | MEDLINE | ID: mdl-34980637

ABSTRACT

The architecture of the efferent auditory system enables prioritization among strongly overlapping spatiotemporal cochlear activation patterns elicited by relevant and irrelevant inputs. So far, attempts at finding such attentional modulations of cochlear activity have delivered only indirect insights in humans or have required direct recordings in animals. The extent to which the spiral ganglion cells forming the human auditory nerve are sensitive to selective attention remains largely unknown. We investigated this question by testing the effects of attending to either the auditory or the visual modality in human cochlear implant (CI) users (3 female, 13 male). Auditory nerve activity was directly recorded with standard CIs during a silent (anticipatory) cue-target interval. When participants attended to the upcoming auditory input, ongoing auditory nerve activity within the theta range (5-8 Hz) was enhanced. Crucially, using the broadband signal (4-25 Hz), a classifier was even able to decode the attended modality from single-trial data. Follow-up analysis showed that the effect was not driven by any narrow frequency band in particular. Using direct cochlear recordings from deaf individuals, our findings suggest that cochlear spiral ganglion cells are sensitive to top-down attentional modulation. Given the putatively broad hair-cell degeneration in these individuals, the effects are likely mediated by efferent pathways other than those probed in previous studies using otoacoustic emissions. Successful classification of single-trial data could additionally have a significant impact on future closed-loop CI developments that incorporate real-time optimization of CI parameters based on the current mental state of the user.

SIGNIFICANCE STATEMENT: The efferent auditory system in principle allows top-down modulation of auditory nerve activity; however, evidence for this has been lacking in humans. Using cochlear recordings in participants performing an audiovisual attention task, we show that ongoing auditory nerve activity in the silent cue-target period is directly modulated by selective attention. Specifically, ongoing auditory nerve activity is enhanced within the theta range when attending to upcoming auditory input. Furthermore, over a broader frequency range, the attended modality can be decoded from single-trial data. Demonstrating this direct top-down influence on auditory nerve activity substantially extends previous work focusing on outer hair cell activity. More generally, our work could promote the use of standard cochlear implant electrodes to study cognitive neuroscientific questions.


Asunto(s)
Atención/fisiología , Percepción Auditiva/fisiología , Cóclea/fisiología , Implantes Cocleares , Nervio Coclear/fisiología , Adulto , Femenino , Humanos , Masculino , Persona de Mediana Edad , Ritmo Teta
7.
J Cogn Neurosci ; 35(4): 588-602, 2023 04 01.
Article in English | MEDLINE | ID: mdl-36626349

ABSTRACT

It is widely established that sensory perception is a rhythmic process rather than a continuous one. In the context of auditory perception, this effect has so far been established only at the cortical and behavioral levels. Yet the unique architecture of the auditory sensory system allows its primary sensory cortex to modulate the processes of its sensory receptors at the cochlear level. Previously, we demonstrated the existence of a genuine cochlear theta (∼6-Hz) rhythm that is modulated in amplitude by intermodal selective attention. As that study's paradigm was not suited to assessing attentional effects on the oscillatory phase of cochlear activity, the question of whether attention can also affect the temporal organization of the cochlea's ongoing activity remained open. The present study uses an interaural attention paradigm to investigate ongoing otoacoustic activity during a stimulus-free cue-target interval and an omission period of the auditory target in humans. We were able to replicate the existence of the cochlear theta rhythm. Importantly, we found significant phase opposition between the two ears and attention conditions, both in anticipatory activity and in cochlear oscillatory activity during target presentation. The amplitude, however, was unaffected by interaural attention. These results are the first to demonstrate that intermodal and interaural attention deploy different aspects of excitation and inhibition at the first level of auditory processing: whereas intermodal attention modulates the level of cochlear activity, interaural attention modulates its timing.


Subject(s)
Auditory Perception, Cochlea, Humans, Psychological Inhibition, Theta Rhythm
8.
Neuroimage ; 268: 119894, 2023 03.
Article in English | MEDLINE | ID: mdl-36693596

ABSTRACT

Listening to speech with poor signal quality is challenging. Neural tracking of degraded speech has been used to advance our understanding of how brain processes and speech intelligibility are interrelated. However, the temporal dynamics of neural speech tracking and their relation to speech intelligibility are not clear. In the present MEG study, we exploited temporal response functions (TRFs), which have been used to describe the time course of speech tracking, on a gradient from intelligible to unintelligible degraded speech. In addition, we used interrelated facets of neural speech tracking (e.g., speech envelope reconstruction, speech-brain coherence, and components of broadband coherence spectra) to corroborate our TRF findings. Our TRF analysis yielded markedly differential temporal effects of vocoding: ∼50-110 ms (M50TRF), ∼175-230 ms (M200TRF), and ∼315-380 ms (M350TRF). Reduced intelligibility was accompanied by a large increase in the early peak response M50TRF but a strongly reduced response in M200TRF. In the late response M350TRF, the maximum response occurred for degraded speech that was still comprehensible and then declined with further reductions in intelligibility. Furthermore, we related the TRF components to our other neural "tracking" measures and found that M50TRF and M200TRF play differential roles in the shifting center frequency of the broadband coherence spectra. Overall, our study highlights the importance of time-resolved computation of neural speech tracking and decomposition of coherence spectra, and it provides a better understanding of degraded speech processing.
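
At its core, the TRF approach mentioned above is a lagged, regularized linear regression from a stimulus feature (such as the speech envelope) onto the neural response. The following toy sketch is not the authors' pipeline; it uses simulated data and an illustrative lag count and ridge parameter, simply to show how a known kernel can be recovered from a noisy convolved response:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stimulus envelope and a known "true" TRF kernel (in samples).
n, n_lags = 5000, 40
envelope = rng.standard_normal(n)
true_trf = np.exp(-np.arange(n_lags) / 8.0) * np.sin(np.arange(n_lags) / 4.0)

# Neural response = stimulus convolved with the TRF, plus sensor noise.
response = np.convolve(envelope, true_trf)[:n] + 0.1 * rng.standard_normal(n)

# Build a lagged design matrix: column k holds the envelope delayed by k samples.
X = np.zeros((n, n_lags))
for k in range(n_lags):
    X[k:, k] = envelope[: n - k]

# Ridge-regularized least squares: trf = (X'X + lambda*I)^-1 X'y.
lam = 1.0
trf_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ response)

# The estimate should closely match the true kernel.
corr = np.corrcoef(trf_hat, true_trf)[0, 1]
print(round(corr, 3))
```

In practice, dedicated toolboxes handle multichannel data and cross-validated regularization, but the underlying algebra is the same as in this sketch.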


Asunto(s)
Inteligibilidad del Habla , Percepción del Habla , Humanos , Inteligibilidad del Habla/fisiología , Percepción del Habla/fisiología , Encéfalo/fisiología , Percepción Auditiva , Cognición , Estimulación Acústica
9.
BMC Med ; 21(1): 283, 2023 08 02.
Article in English | MEDLINE | ID: mdl-37533027

ABSTRACT

BACKGROUND: Tinnitus affects 10 to 15% of the population, but its underlying causes are not yet fully understood. Hearing loss has been established as the most important risk factor. Prevalence is also known to increase with ageing; however, this risk is normally seen in the context of (age-related) hearing loss. Whether ageing per se is a risk factor has not yet been established. We specifically focused on the effect of ageing and the relationship between age, hearing loss, and tinnitus. METHODS: We used two samples for our analyses. The first, used for exploratory analyses, comprised 2249 Austrian individuals. The second included data from 16,008 people, drawn from a publicly available dataset (NHANES). We used logistic regressions to investigate the effect of age on tinnitus. RESULTS: In both samples, ageing per se was found to be a significant predictor of tinnitus. In the more decisive NHANES sample, there was an additional interaction effect between age and hearing loss. Odds ratio analyses show that, per unit increase of hearing loss, the odds of reporting tinnitus are higher in older people (1.06 vs. 1.03). CONCLUSIONS: Expanding previous findings of hearing loss as the main risk factor for tinnitus, we established ageing as a risk factor in its own right. The underlying mechanisms remain unclear, and this work calls for urgent research efforts to link biological ageing processes, hearing loss, and tinnitus. We therefore suggest a novel working hypothesis that integrates these aspects from an ageing-brain viewpoint.
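
The age-dependent odds ratios reported above (1.03 vs. 1.06 per unit of hearing loss) follow directly from a logistic model with an age-by-hearing-loss interaction: the per-unit odds ratio at a given age is exp(b_hl + b_interaction * age). A minimal sketch with illustrative coefficients, chosen only so the numbers land near the reported values (these are not the fitted NHANES estimates):

```python
import math

# Hypothetical coefficients of a logistic model
#   logit(P(tinnitus)) = b0 + b1*age + b2*hl + b3*age*hl
# (illustrative values only; not the fitted NHANES coefficients).
b2, b3 = 0.0181, 0.000574  # hearing-loss main effect, age x hl interaction

def odds_ratio_per_unit_hl(age):
    """Odds ratio for a one-unit increase in hearing loss at a given age."""
    return math.exp(b2 + b3 * age)

# With a positive interaction, the hearing-loss odds ratio grows with age.
print(round(odds_ratio_per_unit_hl(20), 2))  # → 1.03 (younger)
print(round(odds_ratio_per_unit_hl(70), 2))  # → 1.06 (older)
```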


Asunto(s)
Pérdida Auditiva , Acúfeno , Humanos , Anciano , Acúfeno/epidemiología , Acúfeno/etiología , Encuestas Nutricionales , Pérdida Auditiva/epidemiología , Envejecimiento , Factores de Riesgo
10.
Psychophysiology ; 60(11): e14362, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37350379

ABSTRACT

The most prominent acoustic features in speech are intensity modulations, represented by the amplitude envelope of speech. Synchronization of neural activity with these modulations supports speech comprehension. As the acoustic modulation of speech is related to the production of syllables, investigations of neural speech tracking commonly do not distinguish between lower-level acoustic (envelope modulation) and higher-level linguistic (syllable rate) information. Here we manipulated speech intelligibility using noise-vocoded speech and investigated the spectral dynamics of neural speech processing, across two studies at cortical and subcortical levels of the auditory hierarchy, using magnetoencephalography. Overall, cortical regions mostly track the syllable rate, whereas subcortical regions track the acoustic envelope. Furthermore, with less intelligible speech, tracking of the modulation rate becomes more dominant. Our study highlights the importance of distinguishing between envelope modulation and syllable rate and provides novel possibilities to better understand differences between auditory processing and speech/language processing disorders.
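
Distinguishing acoustic envelope modulation from linguistic rates starts with extracting the envelope itself, commonly as the magnitude of the analytic signal (Hilbert transform). A minimal numpy sketch on a simulated amplitude-modulated tone (not speech; the 4 Hz rate is chosen as syllable-like) shows how the envelope spectrum isolates the modulation rate rather than the carrier:

```python
import numpy as np

fs, dur = 1000, 4.0                       # sample rate (Hz), duration (s)
t = np.arange(int(fs * dur)) / fs

# 4 Hz amplitude modulation (a syllable-like rate) on a 100 Hz carrier.
mod_rate = 4.0
signal = (1 + np.cos(2 * np.pi * mod_rate * t)) * np.sin(2 * np.pi * 100 * t)

# Analytic signal via FFT (Hilbert transform); its magnitude is the envelope.
n = len(signal)
spec = np.fft.fft(signal)
h = np.zeros(n)
h[0] = h[n // 2] = 1.0
h[1 : n // 2] = 2.0
envelope = np.abs(np.fft.ifft(spec * h))

# The envelope spectrum peaks at the modulation rate, not the carrier.
env_spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(n, 1 / fs)
print(freqs[env_spec.argmax()])  # → 4.0
```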


Asunto(s)
Percepción del Habla , Habla , Humanos , Magnetoencefalografía , Ruido , Cognición , Estimulación Acústica , Inteligibilidad del Habla
11.
Cereb Cortex ; 32(21): 4818-4833, 2022 10 20.
Article in English | MEDLINE | ID: mdl-35062025

ABSTRACT

The integration of visual and auditory cues is crucial for successful speech processing, especially under adverse conditions. Recent reports have shown that when participants watch muted videos of speakers, the phonological information about the acoustic speech envelope, which is associated with but independent of the speakers' lip movements, is tracked by the visual cortex. However, the speech signal also carries richer acoustic details, for example about the fundamental frequency and the resonant frequencies, whose visuo-phonological transformation could aid speech processing. Here, we investigated the neural basis of the visuo-phonological transformation of these more fine-grained acoustic details and assessed how it changes as a function of age. We recorded whole-head magnetoencephalographic (MEG) data while participants watched silent normal (i.e., natural) and reversed videos of a speaker and paid attention to the lip movements. We found that the visual cortex is able to track the unheard natural modulations of the resonant frequencies (or formants) and the pitch (or fundamental frequency) linked to lip movements. Importantly, only the processing of natural unheard formants decreases significantly with age, in the visual and also in the cingulate cortex. This is not the case for the processing of the unheard speech envelope, the fundamental frequency, or the purely visual information carried by lip movements. These results show that unheard spectral fine details (along with the unheard acoustic envelope) are transformed from a mere visual to a phonological representation. Aging particularly affects the ability to derive spectral dynamics at formant frequencies. As listening in noisy environments should capitalize on the ability to track spectral fine details, our results provide a novel focus on compensatory processes in such challenging situations.


Asunto(s)
Percepción del Habla , Humanos , Estimulación Acústica , Labio , Habla , Movimiento
12.
Proc Natl Acad Sci U S A ; 117(13): 7437-7446, 2020 03 31.
Article in English | MEDLINE | ID: mdl-32184331

ABSTRACT

An increasing number of studies highlight common brain regions and processes mediating conscious sensory experience. While most studies have been performed in the visual modality, it is implicitly assumed that similar processes are involved in other sensory modalities. However, the existence of supramodal neural processes related to conscious perception has not been convincingly shown so far. Here, we aim to address this issue directly by investigating whether neural correlates of conscious perception in one modality can predict conscious perception in a different modality. In two separate experiments, we presented participants with successive blocks of near-threshold tasks involving subjective reports of tactile, visual, or auditory stimuli during the same magnetoencephalography (MEG) acquisition. Using decoding analysis in the poststimulus period between sensory modalities, our first experiment uncovered supramodal spatiotemporal neural activity patterns predicting conscious perception of the feeble stimulation. Strikingly, these supramodal patterns included activity in primary sensory regions not directly relevant to the task (e.g., neural activity in visual cortex predicting conscious perception of auditory near-threshold stimulation). We carefully replicated our results in a control experiment, which furthermore showed that the relevant patterns are independent of the type of report (i.e., whether conscious perception was reported by pressing or withholding a button press). Using standard paradigms for probing neural correlates of conscious perception, our findings reveal a common signature of conscious access across sensory modalities and illustrate the temporally late and widespread broadcasting of neural representations, even into task-unrelated primary sensory processing regions.
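
Cross-modal decoding of the kind described, training a classifier on trials from one modality and testing it on trials from another, can be sketched with a simple nearest-centroid decoder on simulated data. This is illustrative only (the study used MEG sensor patterns and more elaborate classifiers); the shared "supramodal" pattern here is injected by construction:

```python
import numpy as np

# Toy data: 100 "sensor" features per trial. Perceived trials in BOTH
# modalities share a supramodal activity pattern; unperceived trials do not.
# (Purely simulated; stands in for MEG poststimulus data.)
n_trials, n_feat = 200, 100
supramodal = np.random.default_rng(1).standard_normal(n_feat)

def make_trials(seed):
    r = np.random.default_rng(seed)
    labels = r.integers(0, 2, n_trials)            # 1 = perceived
    data = r.standard_normal((n_trials, n_feat))
    data[labels == 1] += 0.5 * supramodal          # add the shared pattern
    return data, labels

X_tactile, y_tactile = make_trials(2)   # "training" modality
X_audio, y_audio = make_trials(3)       # held-out "test" modality

# Nearest-centroid decoder trained on tactile trials ...
c_perc = X_tactile[y_tactile == 1].mean(axis=0)
c_unperc = X_tactile[y_tactile == 0].mean(axis=0)

# ... applied to auditory trials: cross-modal generalization.
d = X_audio @ (c_perc - c_unperc) - 0.5 * (c_perc @ c_perc - c_unperc @ c_unperc)
pred = (d > 0).astype(int)
accuracy = (pred == y_audio).mean()
print(accuracy)  # well above the 0.5 chance level
```

Above-chance transfer of the decoder across modalities is what licenses calling the pattern supramodal; in real data this is established against permutation-based chance distributions.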


Asunto(s)
Estado de Conciencia/fisiología , Percepción/fisiología , Estimulación Acústica/métodos , Adulto , Percepción Auditiva/fisiología , Encéfalo/fisiología , Mapeo Encefálico/métodos , Femenino , Humanos , Magnetoencefalografía/métodos , Masculino , Análisis Multivariante , Estimulación Luminosa/métodos , Estimulación Física/métodos , Tacto/fisiología , Percepción del Tacto/fisiología , Percepción Visual/fisiología
13.
J Cogn Neurosci ; 34(6): 1001-1014, 2022 05 02.
Article in English | MEDLINE | ID: mdl-35258573

ABSTRACT

Ongoing fluctuations in neural excitability and connectivity influence whether or not a stimulus is seen. Do they also influence which stimulus is seen? We recorded magnetoencephalography data while 21 human participants viewed face or house stimuli, either one at a time or under bistable conditions induced through binocular rivalry. Multivariate pattern analysis revealed common neural substrates for rivalrous versus nonrivalrous stimuli, with an additional delay of ∼36 msec for the bistable stimulus, and poststimulus signals were source-localized to the fusiform face area. Before stimulus onsets that were followed by a face rather than a house report, the fusiform face area showed stronger connectivity to primary visual cortex and to the rest of the cortex in the alpha frequency range (8-13 Hz), but there were no differences in local oscillatory alpha power. The prestimulus connectivity metrics predicted the accuracy of poststimulus decoding and the delay associated with rivalry disambiguation, suggesting that perceptual content is shaped by ongoing neural network states.


Asunto(s)
Reconocimiento Facial , Sesgo , Cara , Humanos , Magnetoencefalografía , Estimulación Luminosa , Visión Binocular , Percepción Visual
14.
Neuroimage ; 248: 118813, 2022 03.
Article in English | MEDLINE | ID: mdl-34923130

ABSTRACT

Tinnitus is hypothesised to be a predictive coding problem. Previous research indicates lower sensitivity to prediction errors (PEs) in tinnitus patients while processing auditory deviants corresponding to tinnitus-specific stimuli. However, based on research with patients who have hallucinations but no psychosis, we hypothesise that tinnitus patients may be more sensitive to PEs produced by auditory stimuli that are unrelated to their tinnitus characteristics. Specifically, in patients with minimal to no hearing loss, we hypothesise a more top-down subtype of tinnitus that may be driven by maladaptive changes in an auditory predictive coding network. To test this, we used an auditory oddball paradigm with omission of global deviants, a measure previously shown to empirically characterise hierarchical PEs. In the tinnitus group, we observe: (1) increased predictions, characterised by an increased pre-stimulus response and increased alpha connectivity of the parahippocampus with the dorsal anterior cingulate cortex, pregenual anterior cingulate cortex, and posterior cingulate cortex; (2) increased PEs, characterised by increased P300 amplitude and gamma activity and increased theta connectivity between the auditory cortices, parahippocampus, and dorsal anterior cingulate cortex; (3) increased overall feed-forward connectivity in theta from the auditory cortex and parahippocampus to the dorsal anterior cingulate cortex; and (4) correlations of pre-stimulus theta activity with tinnitus loudness and of alpha activity with tinnitus distress. These results provide empirical evidence of maladaptive changes in a hierarchical predictive coding network in a subgroup of tinnitus patients with minimal to no hearing loss. The changes in pre-stimulus activity and connectivity for non-tinnitus-specific stimuli suggest that tinnitus patients not only produce strong predictions about upcoming stimuli but may also be predisposed to stimulus-unspecific PEs in the auditory domain. Correlations with tinnitus-related characteristics may serve as a biomarker for maladaptive changes in auditory predictive coding.


Asunto(s)
Percepción Auditiva , Corteza Cerebral/fisiopatología , Conectoma , Acúfeno/fisiopatología , Adulto , Electroencefalografía , Potenciales Evocados , Femenino , Humanos , Masculino
15.
Neuroimage ; 252: 119044, 2022 05 15.
Article in English | MEDLINE | ID: mdl-35240298

ABSTRACT

Multisensory integration enables stimulus representation even when the sensory input in a single modality is weak. In the context of speech, when confronted with a degraded acoustic signal, congruent visual inputs promote comprehension; when this input is masked, speech comprehension consequently becomes more difficult. However, it remains unclear which levels of speech processing are affected by occluding the mouth area, and under which circumstances. To answer this question, we conducted an audiovisual (AV) multi-speaker experiment using naturalistic speech. In half of the trials, the target speaker wore a (surgical) face mask, while we measured the brain activity of normal-hearing participants via magnetoencephalography (MEG). In half of the trials, we additionally added a distractor speaker in order to create an ecologically difficult listening situation. A decoding model was trained on the clear AV speech and used to reconstruct crucial speech features in each condition. We found significant main effects of face masks on the reconstruction of acoustic features, such as the speech envelope and spectral speech features (i.e., pitch and formant frequencies), while the reconstruction of higher-level features of speech segmentation (phoneme and word onsets) was especially impaired by masks in difficult listening situations. As we used surgical face masks, which have only mild effects on speech acoustics, we interpret our findings as the result of the missing visual input. Our findings extend previous behavioural results by demonstrating the complex, context-dependent effects of occluding relevant visual information on speech processing.
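
A backward (decoding) model of the kind described amounts to a regularized linear map from the sensors back to the stimulus feature, with held-out reconstruction accuracy compared across conditions. A toy numpy sketch, entirely simulated: channels mix an envelope with noise, and a drop in SNR stands in for the degraded (masked) condition; none of this reflects the study's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy multichannel "MEG": each of 20 channels carries the speech envelope
# mixed with channel noise; a mask-like degradation lowers the SNR.
n, n_ch = 4000, 20
envelope = rng.standard_normal(n)

def simulate(snr):
    w = rng.standard_normal(n_ch)                  # per-channel mixing weights
    return np.outer(envelope, w) * snr + rng.standard_normal((n, n_ch))

def reconstruction_r(meg, target):
    # Backward model: least-squares map from channels to the envelope,
    # trained on the first half, evaluated on the held-out second half.
    half = len(target) // 2
    g, *_ = np.linalg.lstsq(meg[:half], target[:half], rcond=None)
    rec = meg[half:] @ g
    return np.corrcoef(rec, target[half:])[0, 1]

r_clear = reconstruction_r(simulate(snr=0.5), envelope)
r_masked = reconstruction_r(simulate(snr=0.2), envelope)
print(r_clear > r_masked)  # weaker tracking under degraded input
```

The held-out correlation is the "reconstruction accuracy" compared across conditions; real pipelines add lags and cross-validated regularization on top of this basic map.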


Asunto(s)
Percepción del Habla , Habla , Estimulación Acústica , Acústica , Humanos , Boca , Percepción Visual
16.
Eur J Neurosci ; 55(11-12): 3178-3190, 2022 06.
Article in English | MEDLINE | ID: mdl-33539589

ABSTRACT

Ongoing oscillatory neural activity before stimulus onset influences subsequent visual perception. Specifically, both the power and the phase of oscillations in the alpha-frequency band (9-13 Hz) have been reported to predict the detection of visual stimuli. To date, the functional mechanisms underlying pre-stimulus power and phase effects on upcoming visual percepts remain debated. Here, we used magnetoencephalography recordings together with a near-threshold visual detection task to investigate the neural generators of pre-stimulus power and phase and their impact on subsequent visual-evoked responses. Pre-stimulus alpha-band power and phase-opposition effects were found, consistent with previous reports. Source localization suggested clearly distinct neural generators for these pre-stimulus effects: power effects were mainly found in occipito-temporal regions, whereas phase effects also involved prefrontal areas. To be functionally relevant, the pre-stimulus correlates should influence post-stimulus processing. Using a trial-sorting approach, we observed that only pre-stimulus power modulated the hits-versus-misses difference in the evoked response, a well-established post-stimulus neural correlate of near-threshold perception, such that trials with stronger pre-stimulus power effects showed a greater post-stimulus difference. By contrast, no influence of pre-stimulus phase effects was found. In sum, our study shows distinct generators for two pre-stimulus neural patterns predicting visual perception and demonstrates that only alpha power impacts the post-stimulus correlate of conscious access. This underlines the functional relevance of pre-stimulus alpha power for perceptual awareness, while questioning the role of alpha phase.


Asunto(s)
Magnetoencefalografía , Percepción Visual , Ritmo alfa/fisiología , Estado de Conciencia , Electroencefalografía , Potenciales Evocados Visuales , Lóbulo Occipital/fisiología , Estimulación Luminosa , Percepción Visual/fisiología
17.
Eur J Neurosci ; 55(11-12): 3288-3302, 2022 06.
Article in English | MEDLINE | ID: mdl-32687616

ABSTRACT

Making sense of a poor auditory signal can pose a challenge. Previous attempts to quantify speech intelligibility in neural terms have usually focused on one of two measures, namely low-frequency speech-brain synchronization or alpha power modulations. However, reports concerning the modulation of these measures have been mixed, an issue aggravated by the fact that they have normally been studied separately. We present two MEG studies analyzing both measures. In study 1, participants listened to unimodal auditory speech at three levels of degradation (original, 7-channel, and 3-channel vocoding). Intelligibility declined with decreasing clarity, but speech was still intelligible to some extent even at the lowest clarity level (3-channel vocoding). Low-frequency (1-7 Hz) speech tracking suggested a U-shaped relationship, with the strongest effects for the medium-degraded speech (7-channel) in bilateral auditory and left frontal regions. To follow up on this finding, we implemented three additional vocoding levels (5-channel, 2-channel, and 1-channel) in a second MEG study. Across this wider range of degradation, speech-brain synchronization showed a pattern similar to that in study 1, but additionally revealed that when speech becomes unintelligible, synchronization declines again. The relationship differed for alpha power, which continued to decrease across vocoding levels, reaching a floor effect at 5-channel vocoding. Predicting subjective intelligibility from models combining both measures, or from each measure alone, showed the combined model to be superior. Our findings underline that speech tracking and alpha power are modulated differently by the degree of degradation of continuous speech but together contribute to subjective speech understanding.


Asunto(s)
Speech Perception, Brain, Brain Mapping, Humans, Speech Intelligibility
18.
Proc Natl Acad Sci U S A ; 116(32): 16056-16061, 2019 08 06.
Article in English | MEDLINE | ID: mdl-31332019

ABSTRACT

Ongoing fluctuations in neural excitability and in network-wide activity patterns before stimulus onset have been proposed to underlie variability in near-threshold stimulus detection paradigms, that is, whether or not an object is perceived. Here, we investigated the impact of prestimulus neural fluctuations on the content of perception, that is, whether one or another object is perceived. We recorded neural activity with magnetoencephalography (MEG) before and while participants briefly viewed an ambiguous image, the Rubin face/vase illusion, and asked them to report their perceived interpretation on each trial. Using multivariate pattern analysis, we showed robust decoding of the perceptual report during the poststimulus period. Applying source localization to the classifier weights suggested early recruitment of primary visual cortex (V1) and recruitment of the category-sensitive fusiform face area (FFA) at ∼160 ms. These poststimulus effects were accompanied by stronger oscillatory power in the gamma frequency band for face vs. vase reports. In prestimulus intervals, we found no differences in oscillatory power between face and vase reports in V1 or in FFA, indicating similar levels of neural excitability. Despite this, we found stronger connectivity between V1 and FFA before face reports for low-frequency oscillations. Specifically, the strength of prestimulus feedback connectivity (i.e., Granger causality) from FFA to V1 predicted not only the category of the upcoming percept but also the strength of poststimulus neural activity associated with the percept. Our work shows that prestimulus network states can help shape future processing in category-sensitive brain regions and in this way bias the content of visual experiences.
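Time-resolved multivariate pattern analysis of the kind reported here is commonly implemented by fitting a classifier independently at each time sample of the sensor data. A minimal sketch with scikit-learn; the helper name and the (trials, sensors, times) data layout are assumptions for illustration, not the study's pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def decode_over_time(X, y, cv=5):
    """Time-resolved MVPA: cross-validated classification accuracy at each
    time sample. X has shape (trials, sensors, times); y holds trial labels."""
    n_times = X.shape[2]
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = np.empty(n_times)
    for t in range(n_times):
        # one classifier per time point, scored with stratified k-fold CV
        scores[t] = cross_val_score(clf, X[:, :, t], y, cv=cv).mean()
    return scores
```

Above-chance accuracy at a given latency indicates that the sensor pattern at that time carries information about the reported percept.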


Asunto(s)
Bias, Feedback, Visual Perception/physiology, Confidence Intervals, Decision Making, Humans, Magnetoencephalography
19.
BMC Biol ; 19(1): 48, 2021 03 16.
Article in English | MEDLINE | ID: mdl-33726746

ABSTRACT

BACKGROUND: A long-standing debate concerns where in the processing hierarchy of the central nervous system (CNS) selective attention takes effect. In the auditory system, cochlear processes can be influenced via direct projections, and via projections mediated by the inferior colliculus, from the auditory cortex to the superior olivary complex (SOC). Studies illustrating attentional modulation of cochlear responses have so far been limited to sound-evoked responses. The aim of the present study was to investigate intermodal (audiovisual) selective attention in humans simultaneously at the cortical and cochlear level during a stimulus-free cue-target interval. RESULTS: We found that cochlear activity in the silent cue-target intervals was modulated by a theta-rhythmic pattern (~ 6 Hz). While this pattern was present independently of attentional focus, cochlear theta activity was clearly enhanced when attending to the upcoming auditory input. At the cortical level, classical posterior alpha and beta power enhancements were found during auditory selective attention. Interestingly, participants with a stronger release of inhibition in auditory brain regions showed a stronger attentional modulation of cochlear theta activity. CONCLUSIONS: These results hint at a putative theta-rhythmic sampling of auditory input at the cochlear level. Furthermore, they point to interindividually variable engagement of efferent pathways in an attentional context, linked to processes within and beyond auditory cortical regions.
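Band-limited power of the sort quantified here (theta, ~6 Hz) is typically estimated by bandpass filtering and squaring the Hilbert amplitude envelope. A generic sketch; the band edges and filter order are illustrative, not the study's pipeline:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_power(x, fs, f_lo=4.0, f_hi=8.0):
    """Mean squared Hilbert envelope of x within a frequency band
    (defaults approximate the theta band; parameters are illustrative)."""
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    env = np.abs(hilbert(sosfiltfilt(sos, x)))  # instantaneous amplitude
    return np.mean(env ** 2)
```

Comparing this quantity between attend-auditory and attend-visual intervals would yield an attentional modulation index of the kind described above.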


Asunto(s)
Attention/physiology, Auditory Cortex/physiology, Auditory Perception/physiology, Cochlea/physiology, Cues, Adult, Female, Humans, Male, Young Adult
20.
J Neurosci ; 40(36): 6927-6937, 2020 09 02.
Article in English | MEDLINE | ID: mdl-32753515

ABSTRACT

The visual system uses two complementary strategies to process multiple objects simultaneously within a scene and update their spatial positions in real time. It either uses selective attention to individuate a complex, dynamic scene into a few focal objects (i.e., object individuation), or it represents multiple objects as an ensemble by distributing attention more globally across the scene (i.e., ensemble grouping). Neural oscillations may be a key signature for focal object individuation versus distributed ensemble grouping, because they are thought to regulate neural excitability over visual areas through inhibitory control mechanisms. We recorded whole-head MEG data during a multiple-object tracking paradigm, in which human participants (13 female, 11 male) switched between instructions for object individuation and ensemble grouping on different trials. The stimuli, responses, and the demand to keep track of multiple spatial locations over time were held constant between the two conditions. We observed increased α-band power (9-13 Hz) packed into oscillatory bursts in bilateral inferior parietal cortex during multiple-object processing. Single-trial analysis revealed greater burst occurrences on object individuation versus ensemble grouping trials. By contrast, we found no differences using standard analyses of across-trials averaged α-band power. Moreover, the bursting effects occurred only below/at, but not above, the typical capacity limits for multiple-object processing (at ∼4 objects). Our findings reveal the real-time neural correlates underlying the dynamic processing of multiple-object scenarios, which are modulated by grouping strategies and capacity. They support a rhythmic, α-pulsed organization of dynamic attention to multiple objects and ensembles.

SIGNIFICANCE STATEMENT Dynamic multiple-object scenarios are an important problem in real-world and computer vision. They require keeping track of multiple objects as they move through space and time. Such problems can be solved in two ways: one can individuate a scene object by object, or alternatively group objects into ensembles. We observed greater occurrences of α-oscillatory burst events in parietal cortex when processing objects versus ensembles, and below/at versus above processing capacity. These results demonstrate a unique top-down mechanism by which the brain dynamically adjusts its computational level between objects and ensembles. They help to explain how the brain copes with its capacity limitations in real-time environments and may lead the way to technological innovations for time-critical video analysis in computer vision.
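Single-trial burst detection of the kind reported here is often implemented by thresholding the band-limited amplitude envelope and keeping suprathreshold runs of a minimum duration. A hedged sketch; the threshold rule (median + 2 SD) and minimum duration (two alpha cycles) are illustrative assumptions, not the authors' exact criteria:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def count_alpha_bursts(trial, fs, thresh_factor=2.0, min_cycles=2.0,
                       f_lo=9.0, f_hi=13.0):
    """Count α-band (9-13 Hz) bursts in one trial: contiguous runs of the
    amplitude envelope above median + thresh_factor * SD, lasting at least
    min_cycles alpha cycles (all criteria illustrative)."""
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    env = np.abs(hilbert(sosfiltfilt(sos, trial)))
    thresh = np.median(env) + thresh_factor * env.std()
    above = env > thresh
    min_len = int(min_cycles * fs / ((f_lo + f_hi) / 2))  # samples per 2 cycles
    # locate contiguous suprathreshold runs and keep those long enough
    edges = np.diff(above.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, len(above)]
    return int(np.sum((ends - starts) >= min_len))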


Asunto(s)
Ritmo alfa , Atención , Lóbulo Parietal/fisiología , Percepción Visual , Adulto , Femenino , Humanos , Masculino
SELECCIÓN DE REFERENCIAS
DETALLE DE LA BÚSQUEDA