Results 1 - 15 of 15
1.
Philos Trans R Soc Lond B Biol Sci ; 378(1886): 20220346, 2023 09 25.
Article in English | MEDLINE | ID: mdl-37545310

ABSTRACT

To form coherent multisensory perceptual representations, the brain must solve a causal inference problem: to decide if two sensory cues originated from the same event and should be combined, or if they came from different events and should be processed independently. According to current models of multisensory integration, during this process the integrated (common cause) and segregated (different causes) internal perceptual models are entertained. In the present study, we propose that the causal inference process involves a competition between these alternative perceptual models that engages the brain's conflict-processing mechanisms. To test this hypothesis, we conducted two experiments, measuring reaction times (RTs) and electroencephalography, using an audiovisual ventriloquist illusion paradigm with varying degrees of intersensory disparity. Consistent with our hypotheses, incongruent trials led to slower RTs and higher fronto-medial theta power, both indicative of conflict. We also predicted that intermediate disparities would yield slower RTs and higher theta power than both congruent stimuli and large disparities, owing to the closer competition between the causal models. Although this prediction was validated only in the RT experiment, both experiments displayed the anticipated trend. In conclusion, our findings suggest a potential involvement of conflict mechanisms in the multisensory integration of spatial information. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
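The two-model comparison described above is usually formalized as Bayesian causal inference. The abstract does not specify the authors' exact model, so the following is a minimal sketch under standard assumptions: Gaussian sensory noise (`sv`, `sa`), a zero-mean Gaussian spatial prior (`sp`), and a prior probability of a common cause (`prior`); all parameter values are illustrative.

```python
import math

def likelihood_c1(xv, xa, sv, sa, sp):
    # P(xv, xa | C=1): both cues arise from one source s ~ N(0, sp^2),
    # which is integrated out analytically (Gaussian marginal).
    var = sv**2 * sa**2 + sv**2 * sp**2 + sa**2 * sp**2
    num = (xv - xa)**2 * sp**2 + xv**2 * sa**2 + xa**2 * sv**2
    return math.exp(-0.5 * num / var) / (2 * math.pi * math.sqrt(var))

def likelihood_c2(xv, xa, sv, sa, sp):
    # P(xv, xa | C=2): two independent sources, each ~ N(0, sp^2).
    def marginal(x, s):
        v = s**2 + sp**2
        return math.exp(-0.5 * x**2 / v) / math.sqrt(2 * math.pi * v)
    return marginal(xv, sv) * marginal(xa, sa)

def p_common(xv, xa, sv=2.0, sa=8.0, sp=15.0, prior=0.5):
    """Posterior probability that visual and auditory cues share a cause."""
    c1 = likelihood_c1(xv, xa, sv, sa, sp) * prior
    c2 = likelihood_c2(xv, xa, sv, sa, sp) * (1 - prior)
    return c1 / (c1 + c2)
```

As the audiovisual disparity grows, the posterior probability of a common cause declines, so intermediate disparities are where the two causal models compete most closely.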


Subject(s)
Auditory Perception , Illusions , Humans , Visual Perception , Brain , Electroencephalography , Photic Stimulation , Acoustic Stimulation
2.
Psychophysiology ; 59(11): e14108, 2022 11.
Article in English | MEDLINE | ID: mdl-35678104

ABSTRACT

Neural entrainment, or the synchronization of endogenous oscillations to exogenous rhythmic events, has been postulated as a powerful mechanism underlying stimulus prediction. Nevertheless, studies exploring the benefits of neural entrainment for attention, perception, and other cognitive functions have drawn criticism that could compromise their theoretical and clinical value. The aims of the present study were therefore [1] to confirm the presence of entrainment using a set of pre-established criteria and [2] to establish whether the reported behavioral benefits of entrainment remain when the temporal predictability of target appearance is reduced. To address these points, we adapted a previous neural entrainment paradigm to include a variable entrainer length and a larger number of target-absent trials, and we instructed participants to respond only when they had detected a target, to discourage guessing. Thirty-six right-handed women took part in this study. Our results indicated a significant alignment of neural activity to the external periodicity, as well as a persistence of phase alignment beyond the offset of the driving signal. This suggests that neural entrainment engages preexisting endogenous oscillations and cannot simply be explained as a succession of event-related potentials associated with the stimuli, expectation, and/or motor response. However, we found no behavioral benefit for targets in phase with the entrainers, which suggests that the effect of neural entrainment on overt behavior may be more limited than expected. These results help to clarify the mechanistic processes underlying neural entrainment and provide new insights into its applications.
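The phase-alignment criterion mentioned above is commonly quantified as inter-trial phase coherence: the length of the mean unit phasor across trials' instantaneous phases. This is a generic sketch, not the authors' analysis pipeline; the phase values would typically come from a wavelet or Hilbert transform of the EEG.

```python
import cmath
import math

def phase_coherence(phases):
    """Inter-trial phase coherence (resultant vector length).
    1.0 = identical phase on every trial; near 0 = uniformly scattered."""
    mean_vector = sum(cmath.exp(1j * p) for p in phases) / len(phases)
    return abs(mean_vector)
```

Aligned phases give a value near 1 and uniformly scattered phases a value near 0; comparing this statistic after entrainer offset against a surrogate distribution is one way to test for persistence of alignment beyond the driving signal.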


Subject(s)
Evoked Potentials , Periodicity , Acoustic Stimulation/methods , Attention , Auditory Perception/physiology , Electroencephalography , Evoked Potentials/physiology , Female , Humans
3.
Eur J Neurosci ; 49(2): 150-164, 2019 01.
Article in English | MEDLINE | ID: mdl-30270546

ABSTRACT

In everyday multisensory events, such as a glass crashing on the floor, the different sensory inputs are often experienced as simultaneous, even though the brain's processing of sound and sight is temporally misaligned. This lack of cross-modal synchrony is the unavoidable consequence of the different speeds of light and sound, and of their different neural transmission times in the corresponding sensory pathways. Hence, cross-modal synchrony must be reconstructed during perception. It has been suggested that spontaneous fluctuations in neural excitability might be involved in the temporal organisation of sensory events during perception and might account for variability in behavioural performance. Here, we addressed the relationship between ongoing brain oscillations and the perception of cross-modal simultaneity. Participants performed an audio-visual simultaneity judgement task while their EEG was recorded. We focused on pre-stimulus activity, and found that the phase of neural oscillations at 13 ± 2 Hz, 200 ms prior to the stimulus, correlated with the subjective simultaneity of otherwise identical sound-flash events. Remarkably, the correlation between EEG phase and behavioural report occurred in the absence of concomitant changes in EEG amplitude. The probability of simultaneity perception fluctuated significantly as a function of pre-stimulus phase, with the largest perceptual variation accounted for by phase angles nearly 180° apart. This pattern was reliable for sound-flash pairs but not for flash-sound pairs. Overall, these findings suggest that the phase of ongoing brain activity might underlie internal states of the observer that influence cross-modal temporal organisation between the senses and, in turn, subjective synchrony.
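One common way to relate pre-stimulus phase to perception, sketched here under assumptions the abstract does not specify (equal-width bins, 0/1 response coding), is to bin trials by phase and compute the proportion of 'simultaneous' reports per bin:

```python
import math

def report_rate_by_phase(phases, responses, n_bins=8):
    """Proportion of 'simultaneous' (1) reports in each phase bin.
    phases: radians in [-pi, pi); responses: 0/1 judgement per trial."""
    counts = [0] * n_bins
    hits = [0] * n_bins
    for phi, resp in zip(phases, responses):
        b = int((phi + math.pi) / (2 * math.pi) * n_bins) % n_bins
        counts[b] += 1
        hits[b] += resp
    return [h / c if c else float("nan") for h, c in zip(hits, counts)]
```

A sinusoidal modulation of these rates, with peak and trough in bins roughly 180° apart, would match the pattern reported above.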


Subject(s)
Auditory Perception/physiology , Brain Waves , Brain/physiology , Visual Perception/physiology , Acoustic Stimulation , Adolescent , Adult , Female , Humans , Judgment/physiology , Male , Photic Stimulation , Young Adult
4.
Neuroimage ; 119: 272-85, 2015 Oct 01.
Article in English | MEDLINE | ID: mdl-26119022

ABSTRACT

The interplay between attention and multisensory integration has proven to be a difficult question to tackle. There are almost as many studies showing that multisensory integration occurs independently from the focus of attention as studies implying that attention has a profound effect on integration. Addressing the neural expression of multisensory integration for attended vs. unattended stimuli can help disentangle this apparent contradiction. In the present study, we examined whether selective attention to sound pitch influences the expression of audiovisual integration in both behavior and neural activity. Participants were asked to attend to one of two auditory speech streams while watching a pair of talking lips that could be congruent or incongruent with the attended speech stream. We measured behavioral and neural responses (fMRI) to multisensory stimuli under attended and unattended conditions while physical stimulation was kept constant. Our results indicate that participants recognized words more accurately from an auditory stream that was both attended and audiovisually (AV) congruent, thus reflecting a benefit due to AV integration. On the other hand, no enhancement was found for AV congruency when it was unattended. Furthermore, the fMRI results indicated that activity in the superior temporal sulcus (an area known to be related to multisensory integration) was contingent on attention as well as on audiovisual congruency. This attentional modulation extended beyond heteromodal areas to affect processing in areas classically recognized as unisensory, such as the superior temporal gyrus or the extrastriate cortex, and to non-sensory areas such as the motor cortex. Interestingly, attention to audiovisual incongruence triggered responses in brain areas related to conflict processing (i.e., the anterior cingulate cortex and the anterior insula). Based on these results, we hypothesize that AV speech integration can take place automatically only when both modalities are sufficiently processed, and that if a mismatch is detected between the AV modalities, feedback from conflict areas minimizes the influence of this mismatch by reducing the processing of the least informative modality.


Subject(s)
Attention/physiology , Brain/physiology , Pitch Perception/physiology , Speech Perception/physiology , Visual Perception/physiology , Acoustic Stimulation , Adult , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Photic Stimulation , Young Adult
5.
Exp Brain Res ; 232(6): 1631-8, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24699769

ABSTRACT

That crossmodal interaction can enhance sensory processing is now widely accepted. Such benefits are often exemplified by the neural response amplification reported in physiological studies conducted with animals, which parallels behavioural demonstrations of sound-driven improvement in visual tasks in humans. Yet a good deal of controversy still surrounds the nature and interpretation of these human psychophysical studies. Here, we consider the interpretation of crossmodal enhancement findings in light of the functional and anatomical specialization of the magno- and parvocellular visual pathways, whose paramount relevance has been well established in vision research but is often overlooked in crossmodal research. We contend that more explicit consideration of this important visual division may resolve some current controversies and help optimize the design of future crossmodal research.


Subject(s)
Auditory Perception/physiology , Vision, Ocular/physiology , Visual Pathways/physiology , Visual Perception/physiology , Acoustic Stimulation , Humans , Photic Stimulation , Psychophysics
6.
Int J Psychophysiol ; 89(1): 136-47, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23797145

ABSTRACT

Audiovisual speech perception has frequently been studied at the phoneme, syllable, and word processing levels. Here, we examined the constraints that visual speech information might exert during the recognition of words embedded in a natural sentence context. We recorded event-related potentials (ERPs) to words that could be either strongly or weakly predictable on the basis of the prior semantic sentential context and whose initial phoneme varied in its degree of visual saliency in the lip movements. When the sentences were presented audio-visually (Experiment 1), words weakly predicted from semantic context elicited a larger, long-lasting N400 compared to strongly predictable words. This semantic effect interacted with the degree of visual saliency over a late part of the N400. When comparing audio-visual versus auditory-alone presentation (Experiment 2), the typical amplitude-reduction effect over the auditory-evoked N100 response was observed in the audiovisual modality. Interestingly, a specific benefit of high- versus low-visual saliency occurred over the early N100 response and at the late N400 time window, confirming the result of Experiment 1. Taken together, our results indicate that the saliency of visual speech can exert an influence over both auditory processing and word recognition at relatively late stages, and thus suggest strong interactivity between audio-visual integration and other (arguably higher) stages of information processing during natural speech comprehension.


Subject(s)
Recognition, Psychology/physiology , Speech Perception/physiology , Visual Perception/physiology , Acoustic Stimulation , Adolescent , Adult , Comprehension , Data Interpretation, Statistical , Electroencephalography , Evoked Potentials, Auditory/physiology , Female , Fixation, Ocular , Humans , Male , Phonetics , Photic Stimulation , Psycholinguistics , Reading , Semantics , Young Adult
7.
J Neurophysiol ; 109(4): 1065-77, 2013 Feb.
Article in English | MEDLINE | ID: mdl-23221404

ABSTRACT

Cross-modal enhancement can be mediated both by higher-order effects due to attention and decision making and by detection-level, stimulus-driven interactions. However, the contribution of each of these sources to behavioral improvements has not been conclusively determined and quantified separately. Here, we apply a psychophysical analysis based on Piéron functions in order to separate stimulus-dependent changes from those accounted for by decision-level contributions. Participants performed a simple visual speeded detection task on Gabor patches of different spatial frequencies and contrast values, presented with and without accompanying sounds. On one hand, we identified an additive cross-modal improvement in mean reaction times across all types of visual stimuli that is well explained by interactions not strictly based on stimulus-driven modulations (e.g., due to reduction of temporal uncertainty and motor times). On the other hand, we singled out an audio-visual benefit that strongly depended on stimulus features such as frequency and contrast. This particular enhancement was selective to low-visual-spatial-frequency stimuli, optimized for magnocellular sensitivity. We therefore conclude that the contributions to audio-visual enhancement arising at detection stages and at decisional processes in response selection can be separated online, and that they act on partly different aspects of visual processing.
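The Piéron-function logic above can be made concrete. In the standard form sketched below (the exact parameterization used in the paper is not given here), a purely decisional or motor crossmodal effect shifts the asymptote `t0` equally for all stimuli, whereas a stimulus-driven effect alters the contrast-dependent term:

```python
def pieron_rt(contrast, t0, k, beta):
    """Pieron function: RT = t0 + k * contrast**(-beta).
    t0: stimulus-independent time (decision, motor stages);
    k, beta: contrast-dependent detection stage. An additive crossmodal
    benefit lowers t0 across the board; a stimulus-driven benefit
    changes k or beta for specific stimuli only."""
    return t0 + k * contrast ** (-beta)
```

Fitting these parameters separately for sound-present and sound-absent conditions is one way to attribute an RT benefit to the detection stage versus decision-level contributions.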


Subject(s)
Auditory Perception/physiology , Sound , Vision, Ocular/physiology , Visual Perception/physiology , Acoustic Stimulation , Adult , Decision Making , Female , Humans , Male , Photic Stimulation , Reaction Time , Signal Detection, Psychological
8.
Exp Brain Res ; 216(3): 457-62, 2012 Feb.
Article in English | MEDLINE | ID: mdl-22105336

ABSTRACT

The retinal image of an object does not contain information about its actual size. Size must instead be inferred from extraretinal cues for which distance information makes an essential contribution. Asynchronies in the arrival time across visual and auditory sensory components of an audiovisual event can reliably cue its distance, although this cue has been largely neglected in vision research. Here we demonstrate that audio-visual asynchronies can produce a shift in the apparent size of an object and attribute this shift to a change in perceived distance. In the present study participants were asked to match the perceived size of a test circle paired with an asynchronous sound to a variable-size probe circle paired with a simultaneous sound. The perceived size of the circle increased when the sound followed its onset with delays up to around 100 ms. For longer sound delays and sound leads, no effect was seen. We attribute this selective modulation in perceived visual size to audiovisual timing influences on the intrinsic relationship between size and distance. This previously unsuspected cue to distance reveals a surprisingly interactive system using multisensory information for size/distance perception.
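The physics behind this distance cue is simple: light arrives effectively instantaneously at everyday distances, so a natural audiovisual event at distance d produces a sound lag of roughly d divided by the speed of sound. A minimal sketch, taking 343 m/s as the nominal speed of sound in air:

```python
def implied_distance(sound_lag_s, v_sound=343.0):
    """Distance (in metres) at which a natural audiovisual event would
    produce the given sound lag, treating light travel time as zero."""
    return v_sound * sound_lag_s
```

On this account, the ~100 ms lags that shifted perceived size correspond to event distances of roughly 34 m.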


Subject(s)
Distance Perception/physiology , Reaction Time/physiology , Size Perception/physiology , Acoustic Stimulation/methods , Adult , Female , Humans , Male , Middle Aged , Photic Stimulation/methods , Psychomotor Performance , Psychophysics , Time Factors , Young Adult
9.
Exp Brain Res ; 213(2-3): 175-83, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21431431

ABSTRACT

A critical question in multisensory processing is how the constant information flow that arrives to our different senses is organized in coherent representations. Some authors claim that pre-attentive detection of inter-sensory correlations supports crossmodal binding, whereas other findings indicate that attention plays a crucial role. We used visual and auditory search tasks for speaking faces to address the role of selective spatial attention in audiovisual binding. Search efficiency amongst faces for the match with a voice declined with the number of faces being monitored concurrently, consistent with an attentive search mechanism. In contrast, search amongst auditory speech streams for the match with a face was independent of the number of streams being monitored concurrently, as long as localization was not required. We suggest that the fundamental differences in the way in which auditory and visual information is encoded play a limiting role in crossmodal binding. Based on these unisensory limitations, we provide a unified explanation for several previous apparently contradictory findings.


Subject(s)
Attention/physiology , Auditory Perception/physiology , Signal Detection, Psychological/physiology , Visual Perception/physiology , Acoustic Stimulation/methods , Female , Humans , Male , Photic Stimulation/methods , Reaction Time/physiology , Sensitivity and Specificity , Sound Localization/physiology , Time Factors , Video Recording , Young Adult
10.
Brain Res ; 1366: 85-92, 2010 Dec 17.
Article in English | MEDLINE | ID: mdl-20940003

ABSTRACT

Although it has been previously reported that audiovisual integration can modulate performance on some visual tasks, multisensory interactions have not been explicitly assessed in the context of different visual processing pathways. In the present study, we test auditory influences on visual processing employing a psychophysical paradigm that reveals distinct spatial contrast signatures of magnocellular and parvocellular visual pathways. We found that contrast thresholds are reduced when noninformative sounds are presented with transient, low-frequency Gabor patch stimuli and thus favor the M-system. In contrast, visual thresholds are unaffected by concurrent sounds when detection is primarily attributed to P-pathway processing. These results demonstrate that the visual detection enhancement resulting from multisensory integration is mainly articulated by the magnocellular system, which is most sensitive at low spatial frequencies. Such enhancement may subserve stimulus-driven processes including the orientation of spatial attention and fast, automatic ocular and motor responses. This dissociation helps explain discrepancies between the results of previous studies investigating visual enhancement by sounds.


Subject(s)
Attention/physiology , Contrast Sensitivity/physiology , Sensory Thresholds/physiology , Visual Pathways/physiology , Acoustic Stimulation , Adult , Analysis of Variance , Female , Humans , Male , Orientation/physiology , Photic Stimulation , Psychophysics , Reaction Time/physiology , Sound , Young Adult
11.
Exp Brain Res ; 183(3): 399-404, 2007 Nov.
Article in English | MEDLINE | ID: mdl-17899043

ABSTRACT

One of the classic examples of multisensory integration in humans occurs when speech sounds are combined with the sight of corresponding articulatory gestures. Despite the longstanding assumption that this kind of audiovisual binding operates in an attention-free mode, recent findings (Alsius et al. in Curr Biol, 15(9):839-843, 2005) suggest that audiovisual speech integration decreases when visual or auditory attentional resources are depleted. The present study addressed the generalization of this attention constraint by testing whether a similar decrease in multisensory integration is observed when attention demands are imposed on a sensory domain that is not involved in speech perception, such as touch. We measured the McGurk illusion in a dual-task paradigm involving a difficult tactile task. The results showed that the percentage of visually influenced responses to audiovisual stimuli was reduced when attention was diverted to the tactile task. This finding is attributed to a modulatory effect on audiovisual speech integration mediated by supramodal attention limitations. We suggest that the interactions between the attentional system and crossmodal binding mechanisms may be much more extensive and dynamic than previously proposed.


Subject(s)
Attention/physiology , Speech Acoustics , Speech Perception/physiology , Touch/physiology , Visual Perception/physiology , Acoustic Stimulation/methods , Adolescent , Adult , Female , Humans , Male , Photic Stimulation/methods , Reaction Time/physiology , Speech , Speech Production Measurement
12.
Prog Brain Res ; 155: 273-86, 2006.
Article in English | MEDLINE | ID: mdl-17027394

ABSTRACT

Recent studies have highlighted the influence of multisensory integration mechanisms in the processing of motion information. One central issue in this research area concerns the extent to which the behavioral correlates of these effects can be attributed to late post-perceptual (i.e., response-related or decisional) processes rather than to perceptual mechanisms of multisensory binding. We investigated the influence of various top-down factors on the phenomenon of crossmodal dynamic capture, whereby the direction of motion in one sensory modality (audition) is strongly influenced by motion presented in another sensory modality (vision). In Experiment 1, we introduced extensive feedback in order to manipulate the motivation level of participants and the extent of their practice with the task. In Experiment 2, we reduced the variability of the irrelevant (visual) distractor stimulus by making its direction predictable beforehand. In Experiment 3, we investigated the effects of changing the stimulus-response mapping (task). None of these manipulations exerted any noticeable influence on the overall pattern of crossmodal dynamic capture that was observed. We therefore conclude that the integration of multisensory motion cues is robust to a number of top-down influences, thereby revealing that the crossmodal dynamic capture effect reflects the relatively automatic integration of multisensory motion information.


Subject(s)
Auditory Perception/physiology , Motion , Signal Detection, Psychological/physiology , Visual Perception/physiology , Acoustic Stimulation/methods , Adolescent , Adult , Analysis of Variance , Dose-Response Relationship, Radiation , Feedback , Humans , Photic Stimulation/methods , Reaction Time/physiology
13.
Exp Brain Res ; 166(3-4): 548-58, 2005 Oct.
Article in English | MEDLINE | ID: mdl-16132965

ABSTRACT

In this study we investigated the effect of the directional congruency of tactile, visual, or bimodal visuotactile apparent motion distractors on the perception of auditory apparent motion. Participants had to judge the direction in which an auditory apparent motion stream moved (left-to-right or right-to-left) while trying to ignore one of a range of distractor stimuli, including unimodal tactile or visual, bimodal visuotactile, and crossmodal (i.e., composed of one visual and one tactile stimulus) distractors. Significant crossmodal dynamic capture effects (i.e., better performance when the target and distractor stimuli moved in the same direction rather than in opposite directions) were demonstrated in all conditions. Bimodal distractors elicited more crossmodal dynamic capture than unimodal distractors, thus providing the first empirical demonstration of the effect of information presented simultaneously in two irrelevant sensory modalities on the perception of motion in a third (target) sensory modality. The results of a second experiment demonstrated that the capture effect reported in the crossmodal distractor condition was most probably attributable to the combined effect of the individual static distractors (i.e., to ventriloquism) rather than to any emergent property of crossmodal apparent motion.


Subject(s)
Attention/physiology , Auditory Perception/physiology , Motion Perception/physiology , Acoustic Stimulation , Adult , Discrimination, Psychological/physiology , Female , Humans , Male , Photic Stimulation , Physical Stimulation , Sound Localization/physiology
14.
Exp Brain Res ; 165(4): 505-14, 2005 Sep.
Article in English | MEDLINE | ID: mdl-15942735

ABSTRACT

We report two experiments designed to assess the consequences of posture change on audiotactile spatiotemporal interactions. In Experiment 1, participants had to discriminate the direction of an auditory stream (consisting of the sequential presentation of two tones from different spatial positions) while attempting to ignore a task-irrelevant tactile stream (consisting of the sequential presentation of two vibrations, one to each of the participant's hands). The tactile stream presented to the participants' hands was either spatiotemporally congruent or incongruent with respect to the sounds. A significant decrease in performance in incongruent trials compared with congruent trials was demonstrated when the participants adopted an uncrossed-hands posture but not when their hands were crossed over the midline. In Experiment 2, we investigated the ability of participants to discriminate the direction of two sequentially presented tactile stimuli (one presented to each hand) as a function of the presence of congruent vs incongruent auditory distractors. Here, the crossmodal effect was stronger in the crossed-hands posture than in the uncrossed-hands posture. These results demonstrate the reciprocal nature of audiotactile interactions in spatiotemporal processing, and highlight the important role played by body posture in modulating such crossmodal interactions.


Subject(s)
Auditory Perception/physiology , Hand/physiology , Posture/physiology , Space Perception/physiology , Time Perception/physiology , Touch/physiology , Acoustic Stimulation , Adolescent , Adult , Discrimination, Psychological , Female , Humans , Male , Psychomotor Performance/physiology , Sound Localization/physiology , Vibration
15.
Brain Res Cogn Brain Res ; 14(1): 139-46, 2002 Jun.
Article in English | MEDLINE | ID: mdl-12063137

ABSTRACT

Integrating dynamic information across the senses is crucial to survival. However, most laboratory studies have only examined sensory integration for static events. Here we demonstrate that strong crossmodal integration can also occur for an emergent attribute of dynamic arrays, specifically the direction of apparent motion. The results of the present study show that the perceived direction of auditory apparent motion is strongly modulated by apparent motion in vision, and that both spatial and temporal factors play a significant role in this crossmodal effect. We also demonstrate that a split-brain patient who does not perceive visual apparent motion across the midline is immune to this audiovisual dynamic capture effect, highlighting the importance of motion being experienced in order for this new multisensory illusion to occur.


Subject(s)
Auditory Perception/physiology , Illusions/physiology , Motion Perception/physiology , Acoustic Stimulation/methods , Analysis of Variance , Corpus Callosum/physiology , Corpus Callosum/surgery , Functional Laterality/physiology , Humans , Male , Photic Stimulation/methods