ABSTRACT
Auditory distractions are known to substantially degrade the quality of information encoding during speech comprehension. This study explores electroencephalography (EEG) microstate dynamics in ecologically valid, noisy settings, aiming to uncover how these auditory distractions influence information encoding during speech comprehension. We examined three listening scenarios: (1) speech perception with background noise (LA), (2) focused attention on the background noise (BA), and (3) intentional disregard of the background noise (BUA). Our findings showed that microstate complexity and unpredictability increased when attention was directed towards speech compared with tasks without speech (LA > BA and BUA). Notably, the time elapsed between recurrences of microstates increased significantly in LA compared with both BA and BUA. This suggests that coping with background noise during speech comprehension demands more sustained cognitive effort. Additionally, a two-stage time course was observed for both microstate complexity and the alpha-to-theta power ratio: a lower level in the early epochs gradually increased and eventually reached a steady level in the later epochs. These findings suggest that the initial stage is primarily driven by sensory processes and information gathering, while the second stage involves higher-level cognitive engagement, including mnemonic binding and memory encoding.
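The alpha-to-theta power ratio tracked above can be estimated from band powers of the EEG spectrum. A minimal sketch using Welch's method (the band edges follow the conventional 8-13 Hz alpha and 4-7 Hz theta definitions; the sampling rate and synthetic signal are illustrative assumptions, not the study's pipeline):

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    # Welch PSD, then sum power density inside [lo, hi) Hz
    freqs, psd = welch(x, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

def alpha_theta_ratio(x, fs):
    # alpha: 8-13 Hz, theta: 4-7 Hz (conventional band limits)
    return band_power(x, fs, 8, 13) / band_power(x, fs, 4, 7)

rng = np.random.default_rng(0)
fs = 250  # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)
# synthetic epoch: strong 10 Hz alpha + weak 6 Hz theta + noise
x = (2.0 * np.sin(2 * np.pi * 10 * t)
     + 0.5 * np.sin(2 * np.pi * 6 * t)
     + rng.normal(0, 0.5, t.size))
print(alpha_theta_ratio(x, fs))  # > 1, since alpha dominates here
```

In the study's framing, this ratio would be computed per epoch to trace its two-stage time course.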
Subject(s)
Speech Perception, Speech, Electroencephalography, Noise, Attention
ABSTRACT
Pre-stimulus electroencephalogram (EEG) oscillations, especially in the alpha range (8-13 Hz), can affect the sensitivity to temporal lags between modalities in multisensory perception. The effects of alpha power are often explained in terms of alpha's inhibitory functions, whereas effects of alpha frequency have bolstered theories of discrete perceptual cycles, where the length of a cycle, or window of integration, is determined by alpha frequency. Such studies typically employ visual detection paradigms with near-threshold or even illusory stimuli. It is unclear whether such results generalize to above-threshold stimuli. Here, we recorded EEG, while measuring temporal discrimination sensitivity in a temporal-order judgement task using above-threshold auditory and visual stimuli. We tested whether the power and instantaneous frequency of pre-stimulus oscillations predict audiovisual temporal discrimination sensitivity on a trial-by-trial basis. By applying a jackknife procedure to link single-trial pre-stimulus oscillatory power and instantaneous frequency to psychometric measures, we identified a posterior cluster where lower alpha power was associated with higher temporal sensitivity of audiovisual discrimination. No statistically significant relationship between instantaneous alpha frequency and temporal sensitivity was found. These results suggest that temporal sensitivity for above-threshold multisensory stimuli fluctuates from moment to moment and is indexed by modulations in alpha power.
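Single-trial instantaneous alpha frequency of the kind analyzed here is commonly derived from the analytic phase of band-passed data. A hedged sketch assuming a standard Hilbert-transform approach (the filter order, band edges, and test signal are illustrative, not the paper's exact pipeline):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def instantaneous_alpha_frequency(x, fs):
    # band-pass to the alpha range (8-13 Hz), then take the
    # derivative of the unwrapped analytic phase, converted to Hz
    b, a = butter(3, [8, 13], btype="bandpass", fs=fs)
    phase = np.unwrap(np.angle(hilbert(filtfilt(b, a, x))))
    return np.diff(phase) * fs / (2 * np.pi)

fs = 500  # Hz, assumed sampling rate
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10.5 * t)  # pure 10.5 Hz "alpha" oscillation
f_inst = instantaneous_alpha_frequency(x, fs)
# discard filter/Hilbert edge effects before summarizing
print(np.median(f_inst[fs:-fs]))  # recovers ~10.5 Hz
```

On real data, the pre-stimulus window of each trial would be summarized (e.g., by its median instantaneous frequency) before being linked to psychometric measures.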
Subject(s)
Illusions, Visual Perception, Acoustic Stimulation, Auditory Perception, Electroencephalography/methods, Humans, Judgment, Photic Stimulation/methods
ABSTRACT
There is accumulating evidence for auditory dysfunction in patients with Parkinson's disease (PD). Moreover, a possible relationship has been suggested between altered auditory intensity processing and the hypophonic speech characteristics of PD. Nonetheless, further insight into the neurophysiological correlates of auditory intensity processing in patients with PD is still needed. In the present study, high-density EEG recordings were used to investigate the intensity dependence of auditory evoked potentials (IDAEP) in 14 patients with PD and 14 age- and gender-matched healthy control participants (HCs). Patients with PD were evaluated in both the on- and off-medication states. HCs were also evaluated twice. Significantly increased IDAEP of the N1/P2 was demonstrated in patients with PD evaluated in the on-medication state compared to HCs. Distinct results were found for the N1 and P2 components. Regarding the N1 component, no differences in latency or amplitude were found between patients with PD and HCs regardless of medication state. In contrast, increased P2 amplitude was demonstrated in patients with PD evaluated in the on-medication state compared to the off-medication state and HCs. Beyond a dopaminergic deficiency, the increased IDAEP also points to deficits in serotonergic neurotransmission in PD. Given the specific alterations of the N1-P2 complex, the current results suggest deficiencies in early-attentive inhibitory processing of auditory input in PD. This interpretation is consistent with the involvement of the basal ganglia and the role of dopaminergic and serotonergic neurotransmission in auditory gating.
Subject(s)
Auditory Cortex, Parkinson Disease, Acoustic Stimulation, Attention, Auditory Perception, Electroencephalography, Auditory Evoked Potentials, Humans, Parkinson Disease/complications, Synaptic Transmission
ABSTRACT
Efficiently avoiding inappropriate actions in a changing environment is central to cognitive control. One mechanism contributing to this ability is the deliberate slowing of responses in contexts where full response cancellation might occasionally be required, referred to as proactive response inhibition. The present electroencephalographic (EEG) study investigated the role of attentional processes in proactive response inhibition in humans. To this end, we compared data from a standard stop-signal task, in which stop signals required response cancellation ('stop-relevant'), to data where possible stop signals were task-irrelevant ('stop-irrelevant'). Behavioral data clearly indicated the presence of proactive slowing in the standard stop-signal task. A novel single-trial analysis was used to directly model the relationship between response time and the EEG data of the go-trials in both contexts within a multilevel linear modeling framework. We found a relationship between response time and the amplitude of the attention-related N1 component in stop-relevant blocks, a relationship that was fully absent in stop-irrelevant blocks. Specifically, the slower the response, the lower the N1 amplitude, suggesting that attentional resources were strategically down-regulated to control response speed. Drift diffusion modeling of the behavioral data indicated that multiple parameters differed across the two contexts, suggesting contributions from independent brain mechanisms to proactive slowing. Hence, the attentional mechanism of proactive response control we report here might coexist with known mechanisms that are more directly tied to motoric response inhibition. As such, our study also opens new research avenues for clinical conditions that feature deficits in proactive response inhibition.
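The abstract does not specify how the drift diffusion model was fit. As a self-contained illustration of how diffusion parameters can be recovered from behavioral summaries per context, the closed-form EZ-diffusion approximation (Wagenmakers et al., 2007) is sketched below; the input numbers are made up, and the study may well have used a different fitting method:

```python
import numpy as np

def ez_diffusion(pc, vrt, mrt, s=0.1):
    """EZ-diffusion: closed-form estimates of drift rate v, boundary
    separation a, and non-decision time Ter from proportion correct (pc),
    RT variance (vrt, in s^2), and mean RT (mrt, in s)."""
    L = np.log(pc / (1 - pc))                       # logit of accuracy
    x = L * (L * pc**2 - L * pc + pc - 0.5) / vrt
    v = np.sign(pc - 0.5) * s * x**0.25             # drift rate
    a = s**2 * L / v                                # boundary separation
    y = -v * a / s**2
    mdt = (a / (2 * v)) * (1 - np.exp(y)) / (1 + np.exp(y))
    return v, a, mrt - mdt                          # Ter = MRT - mean decision time

# hypothetical go-trial summaries for one context
v, a, ter = ez_diffusion(pc=0.85, vrt=0.012, mrt=0.45)
print(round(v, 3), round(a, 3), round(ter, 3))
```

Comparing such parameter estimates across stop-relevant and stop-irrelevant contexts is the kind of contrast the abstract reports.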
Subject(s)
Attention/physiology, Brain/physiology, Proactive Inhibition, Adolescent, Adult, Brain Mapping, Down-Regulation, Electroencephalography/methods, Female, Humans, Male, Psychomotor Performance, Reaction Time/physiology, Young Adult
ABSTRACT
To recognize individuals, the brain often integrates audiovisual information from familiar or unfamiliar faces, voices, and auditory names. To date, the effects of the semantic familiarity of stimuli on audiovisual integration remain unknown. In this functional magnetic resonance imaging (fMRI) study, we used familiar/unfamiliar facial images, auditory names, and audiovisual face-name pairs as stimuli to determine the influence of semantic familiarity on audiovisual integration. First, we performed a general linear model analysis using fMRI data and found that audiovisual integration occurred for familiar congruent and unfamiliar face-name pairs but not for familiar incongruent pairs. Second, we decoded the familiarity categories of the stimuli (familiar vs. unfamiliar) from the fMRI data and calculated the reproducibility indices of the brain patterns that corresponded to familiar and unfamiliar stimuli. The decoding accuracy rate was significantly higher for familiar congruent versus unfamiliar face-name pairs (83.2%) than for familiar versus unfamiliar faces (63.9%) and for familiar versus unfamiliar names (60.4%). This increase in decoding accuracy was not observed for familiar incongruent versus unfamiliar pairs. Furthermore, compared with the brain patterns associated with facial images or auditory names, the reproducibility index was significantly improved for the brain patterns of familiar congruent face-name pairs but not those of familiar incongruent or unfamiliar pairs. Our results indicate the modulatory effect that semantic familiarity has on audiovisual integration. Specifically, neural representations were enhanced for familiar congruent face-name pairs compared with visual-only faces and auditory-only names, whereas this enhancement effect was not observed for familiar incongruent or unfamiliar pairs. Hum Brain Mapp 37:4333-4348, 2016. © 2016 Wiley Periodicals, Inc.
Subject(s)
Brain/physiology, Facial Recognition/physiology, Names, Recognition (Psychology)/physiology, Semantics, Speech Perception/physiology, Adult, Brain/diagnostic imaging, Brain Mapping, Female, Humans, Linear Models, Magnetic Resonance Imaging, Male, Neuropsychological Tests, Reproducibility of Results
ABSTRACT
Saccadic eye movements are a major source of disruption to visual stability, yet we experience little of this disruption. We can keep track of the same object across multiple saccades. It is generally assumed that visual stability is due to the process of remapping, in which retinotopically organized maps are updated to compensate for the retinal shifts caused by eye movements. Recent behavioral and ERP evidence suggests that visual attention is also remapped, but that it may still leave a residual retinotopic trace immediately after a saccade. The current study was designed to further examine electrophysiological evidence for such a retinotopic trace by recording ERPs elicited by stimuli that were presented immediately after a saccade (80 msec SOA). Participants were required to maintain attention at a specific location (and to memorize this location) while making a saccadic eye movement. Immediately after the saccade, a visual stimulus was briefly presented at either the attended location (the same spatiotopic location), a location that matched the attended location retinotopically (the same retinotopic location), or one of two control locations. ERP data revealed an enhanced P1 amplitude for the stimulus presented at the retinotopically matched location, but a significant attenuation for probes presented at the original attended location. These results are consistent with the hypothesis that visuospatial attention lingers in retinotopic coordinates immediately following gaze shifts.
Subject(s)
Attention/physiology, Brain Mapping, Evoked Potentials/physiology, Saccades/physiology, Visual Pathways/physiology, Adult, Analysis of Variance, Electroencephalography, Female, Humans, Male, Photic Stimulation, Reaction Time/physiology, Young Adult
ABSTRACT
Cognitive control allows behaviour to be guided flexibly in a complex and ever-changing environment. It is supported by theta-band (4-7 Hz) neural oscillations that coordinate distant neural populations. However, little is known about the precise neural mechanisms permitting such flexible control. Most research has focused on theta amplitude, showing that it increases when control is needed, but a second essential aspect of theta oscillations, their peak frequency, has mostly been overlooked. Here, using computational modelling and behavioural and electrophysiological recordings in three independent datasets, we show that theta oscillations adaptively shift towards an optimal frequency depending on task demands. We provide evidence that theta frequency balances reliable set-up of task representations against gating of task-relevant sensory and motor information, and that this frequency shift predicts behavioural performance. Our study presents a mechanism supporting flexible control and calls for a reevaluation of the mechanistic role of theta oscillations in adaptive behaviour.
Subject(s)
Cognition, Theta Rhythm, Cognition/physiology, Humans, Theta Rhythm/physiology
ABSTRACT
In dynamic cluttered environments, audition and vision may benefit from each other in determining what deserves further attention and what does not. We investigated the underlying neural mechanisms responsible for attentional guidance by audiovisual stimuli in such an environment. Event-related potentials (ERPs) were measured during visual search through dynamic displays consisting of line elements that randomly changed orientation. Search accuracy improved when a target orientation change was synchronized with an auditory signal as compared to when the auditory signal was absent or synchronized with a distractor orientation change. The ERP data show that behavioral benefits were related to an early multisensory interaction over left parieto-occipital cortex (50-60 ms post-stimulus onset), which was followed by an early positive modulation (80-100 ms) over occipital and temporal areas contralateral to the audiovisual event, an enhanced N2pc (210-250 ms), and a contralateral negative slow wave (CNSW). The early multisensory interaction was correlated with behavioral search benefits, indicating that participants with a strong multisensory interaction benefited the most from the synchronized auditory signal. We suggest that an auditory signal enhances the neural response to a synchronized visual event, which increases the chances of selection in a multiple object environment.
Subject(s)
Visual Perception/physiology, Acoustic Stimulation, Adolescent, Adult, Attention/physiology, Auditory Perception/physiology, Brain Mapping, Cognition/physiology, Electroencephalography, Evoked Potentials/physiology, Eye Movements/physiology, Female, Ocular Fixation, Functional Laterality/physiology, Humans, Male, Short-Term Memory/physiology, Photic Stimulation, Psychomotor Performance/physiology, Visual Fields/physiology, Young Adult
ABSTRACT
It has recently been suggested that visual working memory capacity may vary depending on the type of material that has to be memorized. Here, we use a delayed match-to-sample paradigm and event-related potentials (ERP) to investigate the neural correlates that are linked to these changes in capacity. A variable number of stimuli (1-4) were presented in each visual hemifield. Participants were required to selectively memorize the stimuli presented in one hemifield. Following memorization, a test stimulus was presented that had to be matched against the memorized item(s). Two types of stimuli were used: one set consisting of discretely different objects (discrete stimuli) and one set consisting of more continuous variations along a single dimension (continuous stimuli). Behavioral results indicate that memory capacity was much larger for the discrete stimuli, when compared with the continuous stimuli. This behavioral effect correlated with an increase in a contralateral negative slow wave ERP component that is known to be involved in memorization. We therefore conclude that the larger working memory capacity for discrete stimuli can be directly related to an increase in activity in visual areas and propose that this increase in visual activity is due to interactions with other, non-visual representations.
Subject(s)
Brain/physiology, Short-Term Memory/physiology, Visual Perception/physiology, Adult, Analysis of Variance, Electroencephalography, Evoked Potentials/physiology, Female, Humans, Male, Photic Stimulation, Psychomotor Performance/physiology, Reaction Time/physiology
ABSTRACT
It has recently been shown that spatially uninformative sounds can cause a visual stimulus to pop out from an array of similar distractor stimuli when that sound is presented in temporal proximity to a feature change in the visual stimulus. Until now, this effect has predominantly been demonstrated by using stationary stimuli. Here, we extended these results by showing that auditory stimuli can also improve the sensitivity of visual motion change detection. To accomplish this, we presented moving visual stimuli (small dots) on a computer screen. At a random moment during a trial, one of these stimuli could abruptly move in an orthogonal direction. Participants' task was to indicate whether such an abrupt motion change occurred or not by making a corresponding button press. If a sound (a short 1,000 Hz tone pip) co-occurred with the abrupt motion change, participants were able to detect this motion change more frequently than when the sound was not present. Using measures derived from signal detection theory, we were able to demonstrate that the effect on accuracy was due to increased sensitivity rather than to changes in response bias.
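The signal-detection measures referred to above separate sensitivity (d′) from response bias (criterion c). A minimal sketch with made-up trial counts (the log-linear correction used here is one common convention for extreme rates, not necessarily the one applied in the study):

```python
from scipy.stats import norm

def sdt_measures(hits, misses, fas, crs):
    # log-linear correction guards against hit/FA rates of exactly 0 or 1
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (fas + 0.5) / (fas + crs + 1)
    d = norm.ppf(hit_rate) - norm.ppf(fa_rate)          # sensitivity d'
    c = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))  # criterion c
    return d, c

# hypothetical counts: the tone pip makes motion changes easier to detect
d_sound, c_sound = sdt_measures(hits=90, misses=10, fas=15, crs=85)
d_nosound, c_nosound = sdt_measures(hits=70, misses=30, fas=15, crs=85)
print(d_sound > d_nosound)  # higher sensitivity with the synchronized sound
```

The abstract's claim is precisely this pattern: d′ increases with the sound while the false-alarm side of the computation (and hence bias) stays comparable.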
Subject(s)
Motion Perception/physiology, Orientation/physiology, Psychomotor Performance/physiology, Sound Localization/physiology, Space Perception/physiology, Acoustic Stimulation/methods, Adolescent, Auditory Perception/physiology, Auditory Threshold/physiology, Female, Humans, Male, Photic Stimulation/methods, Young Adult
ABSTRACT
It is still debated to what degree top-down and bottom-up driven attentional control processes are subserved by shared or by separate mechanisms. Interactions between these forms of attentional control were investigated in a rapid event-related fMRI design using an attentional search task. Following a prestimulus mask, target stimuli (consisting of a letter C or a mirror image of the C, enclosed in a diamond outline) were presented either at one unique location among three nontarget items (consisting of a random letter, enclosed in a circle outline; 50% probability), or at all four possible target locations (also 50% probability). On half the trials, irrelevant color singletons were presented, consisting of a color change of one of the four prestimulus masks just prior to target appearance. Participants were required to search for a target letter inside the diamond and report its orientation. Results indicate that, in addition to a common network of parietal areas, medial frontal cortex is uniquely involved in top-down orienting, whereas bottom-up control is mainly subserved by a network of occipital and parietal areas. Additionally, we found that participants who were better able to suppress orienting to the color singleton showed middle frontal gyrus activation, and that the degree of top-down control correlated with insular activity. We conclude that, in addition to a common set of parietal areas, separate brain areas are involved in top-down and bottom-up driven attentional control, and that frontal areas play a role in the suppression of attentional capture by an irrelevant color singleton.
Subject(s)
Attention/physiology, Brain Mapping, Brain/physiology, Color Perception/physiology, Discrimination (Psychology)/physiology, Visual Evoked Potentials/physiology, Adolescent, Adult, Analysis of Variance, Brain/blood supply, Electroencephalography/methods, Eye Movements/physiology, Female, Functional Laterality, Humans, Computer-Assisted Image Processing/methods, Magnetic Resonance Imaging/methods, Male, Oxygen/blood, Visual Pattern Recognition/physiology, Photic Stimulation/methods, Reaction Time/physiology, Time Factors, Young Adult
ABSTRACT
How the human brain retains relevant vocal information while suppressing irrelevant sounds is one of the ongoing challenges in cognitive neuroscience. Knowledge of the underlying mechanisms of this ability can be used to identify whether a person is distracted during listening to a target speech, especially in a learning context. This paper investigates the neural correlates of learning from the speech presented in a noisy environment using an ecologically valid learning context and electroencephalography (EEG). To this end, the following listening tasks were performed while 64-channel EEG signals were recorded: (1) attentive listening to the lectures in background sound, (2) attentive listening to the background sound presented alone, and (3) inattentive listening to the background sound. For the first task, 13 lectures of 5 min in length embedded in different types of realistic background noise were presented to participants who were asked to focus on the lectures. As background noise, multi-talker babble, continuous highway, and fluctuating traffic sounds were used. After the second task, a written exam was taken to quantify the amount of information that participants have acquired and retained from the lectures. In addition to various power spectrum-based EEG features in different frequency bands, the peak frequency and long-range temporal correlations (LRTC) of alpha-band activity were estimated. To reduce these dimensions, a principal component analysis (PCA) was applied to the different listening conditions resulting in the feature combinations that discriminate most between listening conditions and persons. Linear mixed-effect modeling was used to explain the origin of extracted principal components, showing their dependence on listening condition and type of background sound. 
Following this unsupervised step, a supervised analysis was performed to explain the link between the exam results and the EEG principal component scores using both linear fixed- and mixed-effect modeling. Results suggest that the ability to learn from speech presented in environmental noise can be predicted better by several components over specific brain regions than by knowing the background noise type. These components were linked to deterioration in attention, speech envelope following, decreased focus during listening, cognitive prediction error, and specific inhibition mechanisms.
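A PCA of the kind described, applied to an observations-by-features matrix of EEG measures, can be sketched with a plain SVD. The matrix below is synthetic, and the study's actual features, dimensions, and toolbox are not specified in the abstract:

```python
import numpy as np

def pca(X, n_components):
    # center each feature, then SVD; rows of Vt are the principal axes
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T              # component scores per observation
    explained = S[:n_components] ** 2 / (S ** 2).sum()  # variance ratios
    return scores, explained

rng = np.random.default_rng(1)
# 60 observations (e.g., participant x listening condition) x 5 EEG features
X = rng.normal(size=(60, 5))
X[:, 0] = 3 * X[:, 1] + rng.normal(scale=0.1, size=60)  # two correlated features
scores, explained = pca(X, n_components=2)
print(scores.shape, explained.sum())
```

In the study's design, such component scores would then enter the mixed-effect models as predictors of the exam results.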
ABSTRACT
Research on auditory processing in Parkinson's disease (PD) has recently made substantial progress. At present, evidence has been found for altered auditory processing in the clinical stage of PD. The auditory alterations in PD have been demonstrated with low-cost and non-invasive assessments that are already used in routine clinical practice. Since auditory alterations have been reported early in disease progression, it would be highly relevant to investigate whether auditory markers could be provided in the prodromal stage of PD. In addition, auditory alterations in early stage PD might be modulated by dopaminergic medication. Therefore, the aim of this review is (1) to summarize the literature on auditory processing in PD with a specific focus on the early disease stages, (2) to give future perspectives on which audiological and electrophysiological measurements could be useful in the prodromal stage of PD and (3) to assess the effect of dopaminergic medication on potential auditory markers in the prodromal stage of PD.
ABSTRACT
The temporal asynchrony between inputs to different sensory modalities has been shown to be a critical factor influencing the interaction between such inputs. We used scalp-recorded event-related potentials (ERPs) to investigate the effects of attention on the processing of audiovisual multisensory stimuli as the temporal asynchrony between the auditory and visual inputs varied across the audiovisual integration window (i.e., up to 125 ms). Randomized streams of unisensory auditory stimuli, unisensory visual stimuli, and audiovisual stimuli (consisting of the temporally proximal presentation of the visual and auditory stimulus components) were presented centrally while participants attended to either the auditory or the visual modality to detect occasional target stimuli in that modality. ERPs elicited by each of the contributing sensory modalities were extracted by signal-processing techniques from the combined ERP waveforms elicited by the multisensory stimuli. This was done for each of five different 50-ms subranges of stimulus onset asynchrony (SOA: e.g., V precedes A by 125-75 ms, by 75-25 ms, etc.). The extracted ERPs for the visual inputs of the multisensory stimuli were compared among each other and with the ERPs to the unisensory visual control stimuli, separately for when attention was directed to the visual or to the auditory modality. The results showed that the attention effect on the right-hemisphere visual P1 was largest when auditory and visual stimuli were temporally aligned. In contrast, the N1 attention effect was smallest at this latency, suggesting that attention may play a role in the processing of the relative temporal alignment of the constituent parts of multisensory stimuli. At longer latencies an occipital selection negativity for the attended versus unattended visual stimuli was also observed, but this effect did not vary as a function of SOA, suggesting that by that latency a stable representation of the auditory and visual stimulus components had been established.
Subject(s)
Attention/physiology, Auditory Perception/physiology, Brain/physiology, Visual Perception/physiology, Acoustic Stimulation, Adult, Electroencephalography, Evoked Potentials, Female, Humans, Male, Multivariate Analysis, Neuropsychological Tests, Photic Stimulation, Reaction Time, Computer-Assisted Signal Processing, Task Performance and Analysis, Time Factors
ABSTRACT
Purpose: Alterations in primary auditory functioning have been reported in patients with Parkinson's disease (PD). Despite the current findings, the pathophysiological mechanisms underlying these alterations remain unclear, and the effect of dopaminergic medication on auditory functioning in PD has been explored insufficiently. Therefore, this study aimed to systematically investigate primary auditory functioning in patients with PD by using both subjective and objective audiological measurements. Method: In this case-control study, 25 patients with PD and 25 age-, gender-, and education-matched healthy controls underwent an audiological test battery consisting of tonal audiometry, short increment sensitivity index, otoacoustic emissions (OAEs), and speech audiometry. Patients with PD were tested in the on- and off-medication states. Results: Increased OAE amplitudes were found when patients with PD were tested without dopaminergic medication. In addition, speech audiometry in silence and multitalker babble noise demonstrated higher phoneme scores for patients with PD in the off-medication condition. The results showed no differences in auditory functioning between patients with PD in the on-medication condition and healthy controls. No effect of disease stage or motor score was evident. Conclusions: This study provides evidence for a top-down involvement in auditory processing in PD at both central and peripheral levels. Most important, the increase in OAE amplitude in the off-medication condition in PD is hypothesized to be linked to a dysfunction of the olivocochlear efferent system, which is known to have an inhibitory effect on outer hair cell functioning. Future studies may clarify whether OAEs may facilitate an early diagnosis of PD.
Subject(s)
Speech Audiometry/methods, Auditory Perceptual Disorders/diagnosis, Spontaneous Otoacoustic Emissions/physiology, Parkinson Disease/physiopathology, Aged, Auditory Perceptual Disorders/etiology, Auditory Threshold/physiology, Case-Control Studies, Female, Humans, Male, Middle Aged, Olivary Nucleus/physiopathology, Parkinson Disease/complications, Sensitivity and Specificity
ABSTRACT
An ongoing debate in visual working memory research concerns whether visual working memory capacity is determined solely by the number of objects to be memorized, or additionally by the number of relevant features contained within objects. Using a novel change detection task containing multi-feature objects, we examined the effect of both object number and feature number on visual working memory capacity, change detection sensitivity, and posterior slow-wave event-related brain potential (ERP) activity. Behaviorally, working memory capacity and sensitivity were modulated as a function of both the number of objects and the number of features memorized per object. However, the Contralateral Delay Activity (CDA) was sensitive only to the number of objects, not to the number of features. This suggests that while both objects and features contribute to limitations in visual working memory capacity, the CDA alone cannot account for feature-level representations.
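The CDA reported above is conventionally computed as the contralateral-minus-ipsilateral voltage difference over posterior channels during the retention interval. A toy sketch with synthetic delay-period data (the channel values and window length are made up for illustration):

```python
import numpy as np

def cda(contra, ipsi):
    # CDA = mean contralateral minus mean ipsilateral voltage
    # across the retention-interval samples
    return np.mean(contra) - np.mean(ipsi)

rng = np.random.default_rng(2)
n = 300  # samples in the retention interval
# toy delay-period ERPs: contralateral channels carry a larger negativity
ipsi = rng.normal(loc=-1.0, scale=0.2, size=n)
contra = rng.normal(loc=-2.5, scale=0.2, size=n)
amp = cda(contra, ipsi)
print(amp)  # negative-going difference, as expected for the CDA
```

In the study's logic, |amp| would grow with the number of memorized objects but stay flat as features per object increase.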
Subject(s)
Attention, Brain/physiology, Evoked Potentials/physiology, Short-Term Memory/physiology, Visual Perception, Adolescent, Electroencephalography, Female, Functional Laterality, Humans, Male, Young Adult
ABSTRACT
Reward prediction errors (RPEs) are thought to drive learning. This has been established in procedural learning (e.g., classical and operant conditioning). However, empirical evidence on whether RPEs drive declarative learning-a quintessentially human form of learning-remains surprisingly absent. We therefore coupled RPEs to the acquisition of Dutch-Swahili word pairs in a declarative learning paradigm. Signed RPEs (SRPEs; "better-than-expected" signals) during declarative learning improved recognition in a follow-up test, with increasingly positive RPEs leading to better recognition. In addition, classic declarative memory mechanisms such as time-on-task failed to explain recognition performance. The beneficial effect of SRPEs on recognition was subsequently affirmed in a replication study with visual stimuli.
Subject(s)
Learning, Reward, Adult, Female, Humans, Male, Young Adult
ABSTRACT
It has been established that there is an interaction between audition and vision in the appraisal of our living environment, and that this appraisal is influenced by personal factors. Here, we test the hypothesis that audiovisual aptitude influences appraisal of our sonic and visual environment. To measure audiovisual aptitude, an auditory deviant detection experiment was conducted in an ecologically valid and complex context. This experiment allows us to distinguish between accurate and less accurate listeners, and between participants who are easily visually distracted and those who are not. To test the hypothesis, two previously conducted laboratory experiments were re-analyzed. The first experiment focused on self-reported noise annoyance in a living-room context, whereas the second focused on the perceived pleasantness of using outdoor public spaces. In the first experiment, the influence of visibility of vegetation on self-reported noise annoyance was modified by audiovisual aptitude. In the second, the overall appraisal of walking across a bridge was influenced by audiovisual aptitude, in particular when a visually intrusive noise barrier was used to reduce highway traffic noise levels. We conclude that audiovisual aptitude may affect the appraisal of the living environment.
ABSTRACT
The synchronous occurrence of the unisensory components of a multisensory stimulus contributes to their successful merging into a coherent perceptual representation. Oscillatory gamma-band responses (GBRs, 30-80 Hz) have been linked to feature integration mechanisms and to multisensory processing, suggesting they may also be sensitive to the temporal alignment of multisensory stimulus components. Here we examined the effects on early oscillatory GBR brain activity of varying the precision of the temporal synchrony of the unisensory components of an audio-visual stimulus. Audio-visual stimuli were presented with stimulus onset asynchronies ranging from -125 to +125 ms. Randomized streams of auditory (A), visual (V), and audio-visual (AV) stimuli were presented centrally while subjects attended to either the auditory or visual modality to detect occasional targets. GBRs to auditory and visual components of multisensory AV stimuli were extracted for five subranges of asynchrony (e.g., A preceded by V by 100+/-25 ms, by 50+/-25 ms, etc.) and compared with GBRs to unisensory control stimuli. Robust multisensory interactions were observed in the early GBRs when the auditory and visual stimuli were presented with the closest synchrony. These effects were found over medial-frontal brain areas after 30-80 ms and over occipital brain areas after 60-120 ms. A second integration effect, possibly reflecting the perceptual separation of the two sensory inputs, was found over occipital areas when auditory inputs preceded visual by 100+/-25 ms. No significant interactions were observed for the other subranges of asynchrony. These results show that the precision of temporal synchrony can have an impact on early cross-modal interactions in human cortex.
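The multisensory interactions described here are typically assessed with the additive model: the response to the audiovisual stimulus is compared against the sum of the unisensory auditory and visual responses. A toy sketch (the data are synthetic scalars; real GBR analyses operate on time-frequency power over channels):

```python
import numpy as np

def multisensory_interaction(av, a, v):
    # additive-model interaction: AV - (A + V); nonzero values indicate
    # super-additive (>0) or sub-additive (<0) multisensory integration
    return av - (a + v)

rng = np.random.default_rng(3)
n = 200  # samples of band-limited (gamma) response magnitude
a = rng.normal(1.0, 0.1, n)   # auditory-alone responses
v = rng.normal(1.2, 0.1, n)   # visual-alone responses
av = a + v + 0.5              # super-additive response for close synchrony
interaction = multisensory_interaction(av, a, v)
print(interaction.mean())     # ~0.5 by construction
```

In the study, such interaction terms were evaluated per asynchrony subrange, with robust effects only for the closest synchrony.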