Results 1 - 6 of 6
1.
Nat Commun; 13(1): 2489, 2022 May 5.
Article in English | MEDLINE | ID: mdl-35513362

ABSTRACT

Neural mechanisms that arbitrate between integrating and segregating multisensory information are essential for complex scene analysis and for the resolution of the multisensory correspondence problem. However, these mechanisms and their dynamics remain largely unknown, partly because classical models of multisensory integration are static. Here, we used the Multisensory Correlation Detector, a model that provides good explanatory power for human behavior while incorporating dynamic computations. Participants judged whether sequences of auditory and visual signals originated from the same source (causal inference) or whether one modality was leading the other (temporal order), while being recorded with magnetoencephalography. First, we confirmed that the Multisensory Correlation Detector explains causal inference and temporal order behavioral judgments well. Second, we found strong fits of brain activity to the two outputs of the Multisensory Correlation Detector in temporo-parietal cortices. Finally, we report an asymmetry in the goodness of the fits, which were more reliable during the causal inference task than during the temporal order judgment task. Overall, our results suggest the existence of multisensory correlation detectors in the human brain, which explain why and how causal inference is strongly driven by the temporal correlation of multisensory signals.
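A minimal, hypothetical sketch of how the detector's two outputs could be mapped onto the two judgments used in this study: a correlation-like output feeding the causal inference ("same source") response and a lag-like output feeding the temporal order response. The sigmoid link and all parameter values below are illustrative assumptions, not the fitted model; see the detector sketch under entry 3 for the computation of the outputs themselves.

```python
# Hypothetical decision stage linking model outputs to judgments.
# Slopes and criteria are placeholders, not fitted values.
import numpy as np

def sigmoid(x, slope=1.0, criterion=0.0):
    return 1.0 / (1.0 + np.exp(-slope * (x - criterion)))

def p_same_source(corr_output, slope=2.0, criterion=0.5):
    """Causal inference: P('same source') rises with the correlation output."""
    return sigmoid(corr_output, slope, criterion)

def p_audio_first(lag_output, slope=2.0):
    """Temporal order: P('audio first') follows the signed lag output."""
    return sigmoid(lag_output, slope)
```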


Subject(s)
Auditory Perception; Visual Perception; Acoustic Stimulation; Brain; Humans; Magnetoencephalography; Parietal Lobe; Photic Stimulation
2.
PLoS Comput Biol; 13(7): e1005546, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28692700

ABSTRACT

Sensory information about the state of the world is generally ambiguous. Understanding how the nervous system resolves such ambiguities to infer the actual state of the world is a central quest for sensory neuroscience. However, the computational principles of perceptual disambiguation are still poorly understood: what drives perceptual decision-making between multiple equally valid solutions? Here we investigate how humans gather and combine sensory information, within and across modalities, to disambiguate motion perception in an ambiguous audiovisual display, where two moving stimuli could appear as either streaming through, or bouncing off, each other. By combining psychophysical classification tasks with reverse correlation analyses, we identified the particular spatiotemporal stimulus patterns that elicit a stream or a bounce percept, respectively. From these, we developed and tested a computational model for uni- and multisensory perceptual disambiguation that tightly replicates human performance. Specifically, disambiguation relies on knowledge of prototypical bouncing events that contain characteristic patterns of motion energy in the dynamic visual display. Next, the visual information is linearly integrated with auditory cues and prior knowledge about the history of recent perceptual interpretations. What is more, we demonstrate that perceptual decision-making with ambiguous displays is systematically driven by noise, whose random patterns not only promote alternation but also provide signal-like information that biases perception in a highly predictable fashion.
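A minimal sketch of the reverse correlation logic mentioned above: averaging the per-trial stimulus noise separately by reported percept estimates the spatiotemporal pattern that pushes observers toward one interpretation. The array shapes and random placeholder data are assumptions for illustration; real data would substitute recorded noise fields and responses.

```python
# Reverse correlation sketch: a classification image is the mean noise
# accompanying one percept minus the mean noise accompanying the other.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_frames, n_pixels = 500, 60, 32               # illustrative shapes
noise = rng.normal(size=(n_trials, n_frames, n_pixels))  # per-trial noise fields
percept = rng.integers(0, 2, size=n_trials)              # 0 = 'stream', 1 = 'bounce'

classification_image = (noise[percept == 1].mean(axis=0)
                        - noise[percept == 0].mean(axis=0))
# Peaks in classification_image mark when/where noise biased reports toward
# 'bounce'; with real data these reveal the diagnostic stimulus patterns.
```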


Subject(s)
Auditory Perception/physiology; Decision Making/physiology; Visual Perception/physiology; Acoustic Stimulation; Adult; Algorithms; Computational Biology; Female; Humans; Male; Models, Psychological; Photic Stimulation; Psychophysics; Young Adult
3.
Nat Commun; 7: 11543, 2016 Jun 6.
Article in English | MEDLINE | ID: mdl-27265526

ABSTRACT

The brain efficiently processes multisensory information by selectively combining related signals across the continuous stream of multisensory inputs. To do so, it needs to detect correlation, lag and synchrony across the senses; optimally integrate related information; and dynamically adapt to spatiotemporal conflicts across the senses. Here we show that all these aspects of multisensory perception can be jointly explained by postulating an elementary processing unit akin to the Hassenstein-Reichardt detector, a model originally developed for visual motion perception. This unit, termed the multisensory correlation detector (MCD), integrates related multisensory signals through a set of temporal filters followed by linear combination. Our model can tightly replicate human perception as measured in a series of empirical studies, both novel and previously published. MCDs provide a unified general theory of multisensory processing, which simultaneously explains a wide spectrum of phenomena with a simple, yet physiologically plausible, model.
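A minimal sketch of such a unit, assuming causal exponential low-pass filters with arbitrary time constants; the filter shapes, parameter values, and exact combination rule are illustrative assumptions rather than the published model specification.

```python
# Sketch of a multisensory correlation detector (MCD): two Reichardt-style
# subunits multiply differently filtered versions of the two modalities;
# their product and difference give correlation- and lag-like outputs.
import numpy as np

def lowpass(signal, tau, dt=0.001):
    """Causal exponential low-pass filter (unit gain)."""
    t = np.arange(0.0, 5.0 * tau, dt)
    kernel = np.exp(-t / tau)
    kernel /= kernel.sum()
    return np.convolve(signal, kernel)[:len(signal)]

def mcd(audio, video, tau_fast=0.05, tau_slow=0.15, dt=0.001):
    """Return (correlation output, lag output) for two input time series."""
    u_av = lowpass(audio, tau_fast, dt) * lowpass(video, tau_slow, dt)
    u_va = lowpass(audio, tau_slow, dt) * lowpass(video, tau_fast, dt)
    corr_output = (u_av * u_va).mean()  # large when signals co-vary in time
    lag_output = (u_av - u_va).mean()   # signed: which modality leads
    return corr_output, lag_output
```

Feeding in two identical pulse trains should maximize the correlation output, while delaying one train relative to the other flips the sign of the lag output.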


Subject(s)
Auditory Perception/physiology; Sensation; Visual Perception/physiology; Acoustic Stimulation; Adult; Computer Simulation; Cues; Female; Humans; Judgment; Male; Models, Neurological; Photic Stimulation; Reproducibility of Results; Time Factors; Young Adult
4.
Multisens Res; 26(3): 307-16, 2013.
Article in English | MEDLINE | ID: mdl-23964482

ABSTRACT

Humans are equipped with multiple sensory channels that provide both redundant and complementary information about the objects and events in the world around them. A primary challenge for the brain is therefore to solve the 'correspondence problem', that is, to bind those signals that likely originate from the same environmental source, while keeping separate those unisensory inputs that likely belong to different objects/events. Whether multiple signals have a common origin or not must, however, be inferred from the signals themselves through a causal inference process. Recent studies have demonstrated that cross-correlation, that is, the similarity in temporal structure between unimodal signals, represents a powerful cue for solving the correspondence problem in humans. Here we provide further evidence for the role of the temporal correlation between auditory and visual signals in multisensory integration. Capitalizing on the well-known fact that sensitivity to crossmodal conflict is inversely related to the strength of coupling between the signals, we measured sensitivity to crossmodal spatial conflicts as a function of the cross-correlation between the temporal structures of the audiovisual signals. Observers' performance was systematically modulated by the cross-correlation, with lower sensitivity to crossmodal conflict measured for correlated as compared to uncorrelated audiovisual signals. These results therefore support the claim that cross-correlation promotes multisensory integration. A Bayesian framework is proposed to interpret the present results, whereby stimulus correlation is represented in the prior distribution of expected crossmodal co-occurrence.
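As a concrete reading of the cue in question, here is a sketch of the normalized temporal correlation between two equally sampled signal envelopes; reducing cross-correlation to its zero-lag (Pearson) value is a simplifying assumption for illustration.

```python
# Zero-lag normalized cross-correlation between two temporal envelopes.
import numpy as np

def temporal_correlation(audio_env, video_env):
    """Pearson correlation: 1 = identical temporal structure, 0 = unrelated."""
    a = (audio_env - audio_env.mean()) / audio_env.std()
    v = (video_env - video_env.mean()) / video_env.std()
    return float(np.mean(a * v))
```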


Subject(s)
Auditory Perception/physiology; Bayes Theorem; Cues; Visual Perception/physiology; Acoustic Stimulation/methods; Brain Mapping/methods; Humans; Photic Stimulation/methods
5.
Exp Brain Res; 220(3-4): 319-33, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22706551

ABSTRACT

A growing body of empirical research on multisensory perception shows that even non-synaesthetic individuals experience crossmodal correspondences, that is, apparently arbitrary compatibility effects between stimuli in different sensory modalities. In the present study, we replicated a number of classic results from the literature on crossmodal correspondences and highlighted the existence of two new crossmodal correspondences using a modified version of the implicit association test (IAT). Given that only a single stimulus was presented on each trial, these results rule out selective attention and multisensory integration as possible mechanisms underlying the reported compatibility effects on speeded performance. The crossmodal correspondences examined in the present study all gave rise to very similar effect sizes, and the compatibility effect had a very rapid onset, thus speaking to the automatic detection of crossmodal correspondences. These results are further discussed in terms of the advantages of the IAT over traditional techniques for assessing the strength and symmetry of various crossmodal correspondences.
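One common way to quantify an IAT compatibility effect is a d-score: the difference in mean response time between incompatible and compatible blocks, scaled by the pooled standard deviation. The study's exact scoring procedure is not given here, so the sketch below is an illustrative convention, not the authors' analysis.

```python
# Illustrative IAT d-score: positive values indicate faster responses in
# the compatible block, i.e. a crossmodal compatibility effect.
import numpy as np

def iat_d_score(rt_compatible, rt_incompatible):
    """d-score from two arrays of per-trial response times (seconds)."""
    rts = np.concatenate([rt_compatible, rt_incompatible])
    return (np.mean(rt_incompatible) - np.mean(rt_compatible)) / np.std(rts)
```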


Subject(s)
Association; Attention/physiology; Auditory Perception/physiology; Visual Perception/physiology; Acoustic Stimulation; Adolescent; Adult; Female; Humans; Male; Photic Stimulation; Reaction Time/physiology; Symbolism
6.
Exp Brain Res; 214(3): 373-80, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21901453

ABSTRACT

The question of the arbitrariness of language is among the oldest in the cognitive sciences, and it concerns the nature of the associations between vocal sounds and their meaning. Growing evidence seems to support sound symbolism, which posits a naturally constrained mapping of meaning onto sounds. Most such evidence, however, comes from studies based on the interpretation of pseudowords, and to date there is little empirical evidence that sound symbolism can affect phonatory behavior. In the present study, we asked participants to utter the vowel /a/ in response to visual stimuli varying in shape, luminance, and size, and we observed consistent sound symbolic effects on vocalizations. The loudness of the utterances was modulated by stimulus shape and luminance. Moreover, stimulus shape consistently modulated the frequency of the third formant (F3). This finding reveals an automatic mapping of specific visual attributes onto phonological features of vocalizations. Furthermore, it suggests that sound-meaning associations are reciprocal, affecting active (production) as well as passive (comprehension) linguistic behavior.
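As background on the F3 measurement, here is a sketch of how formant frequencies can be estimated from a recorded vowel via linear predictive coding (LPC): fit an all-pole model to a windowed frame and read candidate formants off the pole angles. The LPC order, windowing, and the absence of bandwidth-based pole pruning are simplifying assumptions; this is not the study's analysis pipeline.

```python
# LPC-based formant estimation sketch (autocorrelation method).
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_formants(frame, sr, order=12):
    """Return sorted candidate formant frequencies (Hz) for one frame."""
    frame = frame * np.hamming(len(frame))
    r = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    a = solve_toeplitz(r[:order], r[1:order + 1])  # LPC coefficients
    poles = np.roots(np.concatenate(([1.0], -a)))  # roots of A(z)
    poles = poles[np.imag(poles) > 0]              # one pole per conjugate pair
    freqs = np.angle(poles) * sr / (2.0 * np.pi)
    return np.sort(freqs)                          # F1, F2, F3, ... candidates

# F3 would then be, roughly, the third sorted candidate:
# f3 = lpc_formants(frame, sr)[2]
```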


Subject(s)
Auditory Perception/physiology; Phonation/physiology; Semantics; Speech Perception/physiology; Symbolism; Visual Perception/physiology; Acoustic Stimulation/methods; Adult; Female; Humans; Imagination/physiology; Language Tests/standards; Male; Middle Aged; Photic Stimulation/methods; Young Adult