Results 1 - 5 of 5
1.
Nat Commun; 13(1): 2489, 2022 May 5.
Article in English | MEDLINE | ID: mdl-35513362

ABSTRACT

Neural mechanisms that arbitrate between integrating and segregating multisensory information are essential for complex scene analysis and for the resolution of the multisensory correspondence problem. However, these mechanisms and their dynamics remain largely unknown, partly because classical models of multisensory integration are static. Here, we used the Multisensory Correlation Detector, a model that provides good explanatory power for human behavior while incorporating dynamic computations. Participants judged whether sequences of auditory and visual signals originated from the same source (causal inference) or whether one modality was leading the other (temporal order), while being recorded with magnetoencephalography. First, we confirmed that the Multisensory Correlation Detector explains causal inference and temporal order behavioral judgments well. Second, we found strong fits of brain activity to the two outputs of the Multisensory Correlation Detector in temporo-parietal cortices. Finally, we report an asymmetry in the goodness of the fits, which were more reliable during the causal inference task than during the temporal order judgment task. Overall, our results suggest the existence of multisensory correlation detectors in the human brain, which explain why and how causal inference is strongly driven by the temporal correlation of multisensory signals.


Subject(s)
Auditory Perception, Visual Perception, Acoustic Stimulation, Brain, Humans, Magnetoencephalography, Parietal Lobe, Photic Stimulation
2.
PLoS Comput Biol; 13(7): e1005546, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28692700

ABSTRACT

Sensory information about the state of the world is generally ambiguous. Understanding how the nervous system resolves such ambiguities to infer the actual state of the world is a central quest for sensory neuroscience. However, the computational principles of perceptual disambiguation are still poorly understood: what drives perceptual decision-making between multiple equally valid solutions? Here we investigate how humans gather and combine sensory information, within and across modalities, to disambiguate motion perception in an ambiguous audiovisual display, where two moving stimuli could appear as either streaming through, or bouncing off, each other. By combining psychophysical classification tasks with reverse correlation analyses, we identified the particular spatiotemporal stimulus patterns that elicit a stream or a bounce percept, respectively. From that, we developed and tested a computational model for uni- and multisensory perceptual disambiguation that tightly replicates human performance. Specifically, disambiguation relies on knowledge of prototypical bouncing events that contain characteristic patterns of motion energy in the dynamic visual display. Next, the visual information is linearly integrated with auditory cues and prior knowledge about the history of recent perceptual interpretations. Moreover, we demonstrate that perceptual decision-making with ambiguous displays is systematically driven by noise, whose random patterns not only promote alternation, but also provide signal-like information that biases perception in a highly predictable fashion.


Subject(s)
Auditory Perception/physiology, Decision Making/physiology, Visual Perception/physiology, Acoustic Stimulation, Adult, Algorithms, Computational Biology, Female, Humans, Male, Models, Psychological, Photic Stimulation, Psychophysics, Young Adult
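The reverse-correlation analysis mentioned in the abstract above can be illustrated with a short sketch: average the noise fields that preceded one percept, subtract the average for the other percept, and the difference reveals which stimulus patterns drive each interpretation. This is not the authors' code; the function name, the template-based simulated observer, and all parameters are assumptions made for illustration.

```python
import numpy as np

def classification_image(stimuli, responses):
    """Reverse-correlation sketch: difference of the mean noise fields
    associated with each of the two perceptual reports ("bounce" vs.
    "stream"), revealing the stimulus pattern that biases the percept."""
    stimuli = np.asarray(stimuli, dtype=float)    # trials x features
    responses = np.asarray(responses, dtype=bool)  # one report per trial
    return stimuli[responses].mean(axis=0) - stimuli[~responses].mean(axis=0)

# Simulated observer (hypothetical): reports "bounce" when the noise
# projects positively onto an internal template.
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0.0, np.pi, 32))
stimuli = rng.normal(size=(2000, 32))
responses = stimuli @ template > 0
ci = classification_image(stimuli, responses)
```

On such simulated data the recovered classification image is proportional to the observer's internal template, which is the logic the study exploits to identify the spatiotemporal patterns behind each percept.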
3.
Nat Commun; 7: 11543, 2016 Jun 6.
Article in English | MEDLINE | ID: mdl-27265526

ABSTRACT

The brain efficiently processes multisensory information by selectively combining related signals across the continuous stream of multisensory inputs. To do so, it needs to detect correlation, lag and synchrony across the senses; optimally integrate related information; and dynamically adapt to spatiotemporal conflicts across the senses. Here we show that all these aspects of multisensory perception can be jointly explained by postulating an elementary processing unit akin to the Hassenstein-Reichardt detector, a model originally developed for visual motion perception. This unit, termed the multisensory correlation detector (MCD), integrates related multisensory signals through a set of temporal filters followed by linear combination. Our model can tightly replicate human perception as measured in a series of empirical studies, both novel and previously published. MCDs provide a unified general theory of multisensory processing, which simultaneously explains a wide spectrum of phenomena with a simple, yet physiologically plausible model.


Subject(s)
Auditory Perception/physiology, Sensation, Visual Perception/physiology, Acoustic Stimulation, Adult, Computer Simulation, Cues, Female, Humans, Judgment, Male, Models, Neurological, Photic Stimulation, Reproducibility of Results, Time Factors, Young Adult
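The abstract above describes the MCD as temporal filtering followed by multiplicative, Reichardt-style combination, with two outputs: one signalling correlation (common cause) and one signalling lag (which modality leads). A minimal toy sketch of that idea follows; the exact wiring, the exponential filters, and the time constants are assumptions for illustration, not the published model.

```python
import numpy as np

def lowpass(x, tau, dt=0.001):
    """Causal exponential low-pass filter with time constant tau (s)."""
    t = np.arange(0.0, 5.0 * tau, dt)
    k = np.exp(-t / tau)
    k /= k.sum()                       # unit-gain kernel
    return np.convolve(x, k)[: len(x)]

def mcd(audio, video, tau_fast=0.05, tau_slow=0.15, dt=0.001):
    """Toy multisensory correlation detector: two mirror-symmetric
    subunits, each multiplying a fast-filtered version of one modality
    with a slow-filtered version of the other (Reichardt-style)."""
    a_fast, a_slow = lowpass(audio, tau_fast, dt), lowpass(audio, tau_slow, dt)
    v_fast, v_slow = lowpass(video, tau_fast, dt), lowpass(video, tau_slow, dt)
    u1 = v_slow * a_fast   # subunit tuned to video-leading sequences
    u2 = a_slow * v_fast   # subunit tuned to audio-leading sequences
    mcd_corr = u1 * u2     # correlation output: evidence for a common cause
    mcd_lag = u1 - u2      # lag output: signed evidence for temporal order
    return mcd_corr, mcd_lag

# Hypothetical usage: a single synchronous audiovisual click/flash.
x = np.zeros(500)
x[100] = 1.0
corr, lag = mcd(x, x)
```

For identical, synchronous inputs the lag output cancels to zero while the correlation output is positive, which is the qualitative behavior the model uses to separate causal inference from temporal-order judgments.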
4.
Multisens Res; 26(3): 307-16, 2013.
Article in English | MEDLINE | ID: mdl-23964482

ABSTRACT

Humans are equipped with multiple sensory channels that provide both redundant and complementary information about the objects and events in the world around them. A primary challenge for the brain is therefore to solve the 'correspondence problem', that is, to bind those signals that likely originate from the same environmental source, while keeping separate those unisensory inputs that likely belong to different objects/events. Whether multiple signals have a common origin or not must, however, be inferred from the signals themselves through a causal inference process. Recent studies have demonstrated that cross-correlation, that is, the similarity in temporal structure between unimodal signals, represents a powerful cue for solving the correspondence problem in humans. Here we provide further evidence for the role of the temporal correlation between auditory and visual signals in multisensory integration. Capitalizing on the well-known fact that sensitivity to crossmodal conflict is inversely related to the strength of coupling between the signals, we measured sensitivity to crossmodal spatial conflicts as a function of the cross-correlation between the temporal structures of the audiovisual signals. Observers' performance was systematically modulated by the cross-correlation, with lower sensitivity to crossmodal conflict for correlated than for uncorrelated audiovisual signals. These results therefore provide support for the claim that cross-correlation promotes multisensory integration. A Bayesian framework is proposed to interpret the present results, whereby stimulus correlation is represented in the prior distribution of expected crossmodal co-occurrence.


Subject(s)
Auditory Perception/physiology, Bayes Theorem, Cues, Visual Perception/physiology, Acoustic Stimulation/methods, Brain Mapping/methods, Humans, Photic Stimulation/methods
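The cross-correlation cue discussed above, the similarity in temporal structure between the auditory and visual signals, can be quantified at zero lag as a Pearson correlation between the two signal envelopes. A minimal sketch follows; the function name and the normalization choice are assumptions, not the paper's analysis code.

```python
import numpy as np

def temporal_correlation(a, v):
    """Zero-lag Pearson correlation between the temporal envelopes of
    an auditory (a) and a visual (v) signal: +1 for identical temporal
    structure, near 0 for unrelated structure, -1 for anti-correlated."""
    a = (a - a.mean()) / a.std()  # z-score each envelope
    v = (v - v.mean()) / v.std()
    return float(np.mean(a * v))

# Hypothetical usage: an amplitude-modulated envelope compared with
# itself and with its inversion.
t = np.linspace(0.0, 1.0, 200)
env = np.abs(np.sin(7.0 * t))
r_same = temporal_correlation(env, env)
r_anti = temporal_correlation(env, -env)
```

Under the account in the abstract, a higher value of this statistic predicts stronger audiovisual coupling and hence lower sensitivity to crossmodal spatial conflict.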
5.
J Vis; 9(12): 7.1-16, 2009 Nov 13.
Article in English | MEDLINE | ID: mdl-20053098

ABSTRACT

After exposure to asynchronous sound and light stimuli, perceived audio-visual synchrony changes to compensate for the asynchrony. Here we investigate to what extent this audio-visual recalibration effect transfers to visual-tactile and audio-tactile simultaneity perception, in order to infer the mechanisms responsible for temporal recalibration. Results indicate that audio-visual recalibration of simultaneity can transfer to audio-tactile and visual-tactile stimuli depending on how the multisensory stimuli are presented. With co-located multisensory stimuli, we found a change in the perceptual latency of the visual stimuli. Presenting auditory stimuli through headphones, on the other hand, induced a change in the perceptual latency of the auditory stimuli. We argue that the difference in transfer depends on the relative trust in the auditory and visual estimates. Interestingly, these findings were confirmed by showing that audio-visual recalibration influences simple reaction time to visual and auditory stimuli. Presenting co-located stimuli during asynchronous exposure induced a change in reaction time to visual stimuli, while with headphones the change in reaction time occurred for the auditory stimuli. These results indicate that perceptual latency is altered with repeated exposure to asynchronous audio-visual stimuli in order to compensate (at least in part) for the presented asynchrony.


Subject(s)
Adaptation, Physiological, Auditory Perception, Time Perception, Touch Perception, Visual Perception, Acoustic Stimulation, Environment, Humans, Photic Stimulation, Reaction Time, Signal Detection, Psychological, Time Factors, Vibration