ABSTRACT
Human language is hierarchically structured, and reading acquisition involves integrating multisensory information (typically from the auditory and visual modalities) to access meaning. However, it is unclear how the brain processes and integrates language information at different linguistic levels (words, phrases, and sentences) when it is presented simultaneously in the auditory and visual modalities. To address this issue, we presented participants with sequences of short Chinese sentences through auditory, visual, or combined audio-visual modalities while recording electroencephalographic responses. Using a frequency-tagging approach, we analyzed the neural representations of basic linguistic units (i.e., characters/monosyllabic words) and higher-level linguistic structures (i.e., phrases and sentences) in each of the three modalities separately. We found that audio-visual integration occurred at all linguistic levels and that the brain areas involved in the integration varied across levels. In particular, the integration of sentences activated a local region of the left prefrontal cortex. We therefore used continuous theta-burst stimulation to verify that the left prefrontal cortex plays a vital role in the audio-visual integration of sentence-level information. Our findings point to an advantage of bimodal language comprehension at hierarchical stages of language processing and provide causal evidence for the role of the left prefrontal regions in processing audio-visual sentence information.
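To make the frequency-tagging logic concrete, here is a minimal sketch, assuming synthetic EEG epochs in which sentences, phrases, and characters recur at 1, 2, and 4 Hz; the sampling rate, trial count, and data below are invented for illustration and are not from the study:

```python
# Frequency-tagging sketch: average epochs to isolate the phase-locked response,
# then look for spectral peaks at the linguistic-unit presentation rates.
import numpy as np

def tagged_spectrum(epochs, fs):
    """epochs: (trials, samples). Average first, then take the amplitude spectrum."""
    evoked = epochs.mean(axis=0)
    amp = np.abs(np.fft.rfft(evoked)) / evoked.size
    freqs = np.fft.rfftfreq(evoked.size, d=1.0 / fs)
    return freqs, amp

def peak_snr(freqs, amp, f0, n_neighbors=5):
    """Amplitude at f0 relative to the mean of neighboring frequency bins."""
    i = int(np.argmin(np.abs(freqs - f0)))
    neighbors = np.r_[amp[i - n_neighbors:i], amp[i + 1:i + 1 + n_neighbors]]
    return amp[i] / neighbors.mean()

fs = 250
rng = np.random.default_rng(0)
t = np.arange(fs * 10) / fs                     # 10-s trials
epochs = (np.sin(2 * np.pi * 1 * t)             # sentence rate (1 Hz)
          + 0.8 * np.sin(2 * np.pi * 2 * t)     # phrase rate (2 Hz)
          + 0.6 * np.sin(2 * np.pi * 4 * t)     # character rate (4 Hz)
          + rng.normal(0, 3, (60, t.size)))     # 60 noisy trials
freqs, amp = tagged_spectrum(epochs, fs)
for f0 in (1.0, 2.0, 4.0):
    print(f"{f0:.0f} Hz peak SNR: {peak_snr(freqs, amp, f0):.1f}")
```

A peak well above the neighboring bins at a given rate is taken as evidence that the corresponding linguistic unit is being tracked.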
Subject(s)
Brain Mapping, Comprehension, Humans, Comprehension/physiology, Brain/physiology, Linguistics, Electroencephalography
ABSTRACT
In this study, we investigated the nature of long-term memory representations for naturalistic audio-visual scenes. Whereas previous research has shown that audio-visual scenes are recognized more accurately than their unimodal counterparts, it remains unclear whether this benefit stems from audio-visually integrated long-term memory representations or from a summation of independent retrieval cues. We tested two predictions of audio-visually integrated memory representations. First, we used a modeling approach to test whether recognition performance for audio-visual scenes is more accurate than would be expected from independent retrieval cues. This analysis showed that audio-visual integration is not necessary to explain the benefit of audio-visual scenes relative to purely auditory or purely visual scenes. Second, we report a series of experiments investigating the occurrence of study-test congruency effects for unimodal and audio-visual scenes. Most importantly, visually encoded information was immune to additional auditory information presented during testing, whereas auditorily encoded information was susceptible to additional visual information presented during testing. This renders a true integration of visual and auditory information in long-term memory representations unlikely. In sum, our results instead provide evidence for visual dominance in long-term memory: whereas associative auditory information is capable of enhancing memory performance, the long-term memory representations themselves appear to be primarily visual.
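The independent-retrieval-cues benchmark tested here can be written down directly: if the auditory and visual cues succeed independently, the predicted audio-visual hit rate is P(A) + P(V) - P(A)P(V). A toy check with invented hit rates (not the study's data):

```python
# Probability-summation benchmark for audio-visual recognition, assuming
# independent unimodal retrieval cues. All numbers are illustrative.
p_audio, p_visual = 0.55, 0.70   # hypothetical unimodal hit rates
p_av_observed = 0.82             # hypothetical audio-visual hit rate

p_av_independent = p_audio + p_visual - p_audio * p_visual
print(f"independent-cues prediction: {p_av_independent:.3f}")  # 0.865
if p_av_observed <= p_av_independent:
    print("no need to posit integrated audio-visual memory representations")
```

Only observed performance exceeding this prediction would require integrated representations; the abstract reports that it does not.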
Subject(s)
Long-Term Memory, Visual Perception, Humans, Cognition, Cues (Psychology), Recognition (Psychology)
ABSTRACT
Humans integrate visual information about their surrounding environment to properly adapt their locomotion to step over or around obstacles in their path. We know that cognition aids in the execution of locomotion and in complex maneuvers such as obstacle avoidance. However, the role of the cognitive system in performing online adjustments to an obstacle avoidance strategy during locomotion has not yet been elucidated. Nineteen young adults instrumented with kinematic markers were asked to step over or circumvent an obstacle to the left or right. In half of these trials, participants were required to adjust this strategy when cued by LED lights two steps prior to obstacle crossing. In 75% of trials, a cognitive task was simultaneously presented (incongruent or congruent auditory Stroop cue, or neutral cue). Center of mass position and velocity were estimated, and gait metrics (e.g., step length) were calculated to quantify how individuals performed this last-minute direction change and to determine how these responses changed when simultaneously performing a cognitive task. Results showed significantly shorter crossing steps, in which the trailing limb was placed further from the leading edge and the lead limb was placed closer to the trailing edge, when participants were responding to the auditory Stroop task. Performing these avoidance strategy changes also decreased cognitive task performance. Our findings suggest that visually integrating a new stepping pattern to cross an obstacle is a complex locomotor maneuver that requires the aid of the cognitive system to be performed effectively, at least in a young adult population.
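As a rough illustration of the kinematic measures mentioned above, here is a sketch of center-of-mass velocity by numerical differentiation and step length from heel positions; the sampling rate and all marker data are invented:

```python
# Toy gait-metric computations on fabricated motion-capture data.
import numpy as np

fs = 100                                             # hypothetical mocap rate (Hz)
rng = np.random.default_rng(1)
com = np.cumsum(rng.normal([0.012, 0.0, 0.0], 0.002, (500, 3)), axis=0)  # CoM path (m)

com_velocity = np.gradient(com, 1.0 / fs, axis=0)    # central differences, m/s
print(f"mean forward CoM velocity: {com_velocity[:, 0].mean():.2f} m/s")

# Step length as the anteroposterior distance between successive heel strikes.
heel_strikes_x = np.array([0.00, 0.62, 1.25, 1.86])  # invented heel positions (m)
print("step lengths (m):", np.diff(heel_strikes_x))
```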
Subject(s)
Cognition, Biomechanical Phenomena, Gait, Humans, Locomotion, Psychomotor Performance, Walking, Young Adult
ABSTRACT
While previous research has shown that during mental imagery participants look back to areas visited during encoding, it is unclear what happens when the information presented during encoding is incongruent. To investigate this question, we presented 30 participants with incongruent audio-visual associations (e.g., the image of a car paired with the sound of a cat) and later asked them to create a congruent mental representation based on the auditory cue (e.g., to create a mental representation of a cat while hearing the sound of a cat). The results revealed that participants spent more time looking at the areas where they had previously seen the object, and that incongruent audio-visual information during encoding did not appear to interfere with the generation and maintenance of mental images. This finding suggests that eye movements can be flexibly employed during mental imagery depending on the demands of the task.
Subject(s)
Eye Movements, Imagery (Psychotherapy), Animals, Cats, Humans, Sound, Visual Perception
ABSTRACT
The Bluegrass corpus includes sentences from 40 pairs of speakers. Participants from the Bluegrass Region rated one speaker from each pair as having a native North American English accent and the other as having a foreign accent (Experiment 1). Furthermore, speakers within each pair looked very similar in appearance, in that participants rated them as similarly likely to speak with a foreign accent (Experiment 2). For each speaker we selected eight sentences based on participants' ratings of difficulty (Experiment 3). The final corpus includes a selection of 640 sentences (80 speakers, 8 stimuli per speaker), freely available through the Open Science Framework. Each sentence can be downloaded in different formats (text, audio, video) so that researchers can investigate how audio-visual information influences language processing. Researchers can contribute to the corpus by validating the stimuli with new populations, selecting additional sentences, or finding new TED videos featuring appropriate speakers to answer their research questions.
Subject(s)
Poa, Speech Perception, Humans, Language, Research Personnel
ABSTRACT
In this opinion essay, I address the perennial binding problem, that is to say, how independently processed visual attributes such as form, colour and motion are brought together to give us a unified and holistic picture of the visual world. A solution to this central issue in neurobiology remains as elusive as ever; no one today knows how binding is implemented. The issue is not a new one and, though discussed most commonly in the context of the visual brain, it is not unique to it either. Karl Lashley summarized it well years ago when he wrote that a critical problem for brain studies is to understand how "the specialized areas of the cerebral cortex interact to provide the integration evident in thought and behaviour" (Lashley, 1931).
Subject(s)
Motion Perception, Visual Cortex, Brain, Brain Mapping, Motion (Physics)
ABSTRACT
The human capacity to integrate sensory signals has been investigated with respect to different sensory modalities. A common denominator of the neural network underlying the integration of sensory cues has yet to be identified. Additionally, existing brain imaging data from patients with autism spectrum disorder (ASD) do not yet address disparities in neuronal sensory processing. In this fMRI study, we compared the neural networks underlying both olfactory-visual and auditory-visual integration in patients with ASD and a group of matched healthy participants. The aim was to disentangle sensory-specific networks so as to derive a potential (amodal) common source of multisensory integration (MSI) and to investigate differences in the brain networks associated with sensory processing in individuals with ASD. In both groups, similar neural networks were found to be involved in the olfactory-visual and auditory-visual integration processes, including the primary visual cortex, the inferior parietal sulcus (IPS), and the medial and inferior frontal cortices. Amygdala activation was observed specifically during olfactory-visual integration, whereas superior temporal activation was seen during auditory-visual integration. A dynamic causal modeling analysis revealed a nonlinear top-down IPS modulation of the connection between the respective primary sensory regions in both experimental conditions and in both groups. Thus, we demonstrate that MSI has shared neural sources across olfactory-visual and audio-visual stimulation in patients and controls. The enhanced recruitment of the IPS to modulate connectivity between areas is relevant to sensory perception. Our results also indicate that, with respect to MSI processing, adults with ASD do not differ significantly from their healthy counterparts.
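For context, nonlinear modulation of a connection by a third region is usually expressed with the nonlinear DCM state equation (Stephan et al., 2008); the sketch below is that general form, not the study's fitted model:

```latex
\frac{dx}{dt} = \Big(A + \sum_{j} u_j B^{(j)} + \sum_{k} x_k D^{(k)}\Big)\, x + C u
```

Here x is the vector of regional states, u the experimental inputs, A the fixed connectivity, B^{(j)} the input-dependent modulations, C the driving-input weights, and D^{(k)} the gating of connections by activity in region k; a top-down IPS modulation corresponds to a nonzero entry in the D-matrix associated with the IPS.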
Subject(s)
Auditory Perception/physiology, Autism Spectrum Disorder/physiopathology, Brain Mapping, Smell/physiology, Visual Perception/physiology, Adolescent, Adult, Bayes Theorem, Case-Control Studies, Child, Preschool Child, Female, Humans, Magnetic Resonance Imaging, Male, Neurological Models, Psychological Models, Pleasure, Young Adult
ABSTRACT
The primate brain contains a set of face-selective areas, which are thought to extract the rich social information that faces provide, such as emotional state and personal identity. The nature of this information raises a fundamental question about these face-selective areas: Do they respond to a face purely because of its visual attributes, or because the face embodies a larger social agent? Here, we used functional magnetic resonance imaging to determine whether the macaque face patch system exhibits a whole-agent response above and beyond its responses to individually presented faces and bodies. We found a systematic development of whole-agent preference through the face patches, from subadditive integration of face and body responses in posterior face patches to superadditive integration in anterior face patches. Superadditivity was not observed for faces atop nonbody objects, implying categorical specificity of face-body interaction. Furthermore, superadditivity was robust to visual degradation of facial detail, suggesting whole-agent selectivity does not require prior face recognition. In contrast, even the body patches immediately adjacent to anterior face areas did not exhibit superadditivity. This asymmetry between face- and body-processing systems may explain why observers attribute bodies' social signals to faces, and not vice versa. The development of whole-agent selectivity from posterior to anterior face patches, in concert with the recently described development of natural motion selectivity from ventral to dorsal face patches, identifies a single face patch, AF (anterior fundus), as a likely link between the analysis of facial shape and semantic inferences about other agents.
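The super/subadditivity contrast used here reduces to comparing the whole-agent response against the sum of the unimodal responses; a toy computation with invented percent-signal-change values:

```python
# Additivity index for one face patch: positive = superadditive integration,
# negative = subadditive. All response values are illustrative.
r_face, r_body, r_whole = 1.2, 0.9, 2.6   # hypothetical responses (% signal change)

additivity_index = r_whole - (r_face + r_body)
label = "superadditive" if additivity_index > 0 else "subadditive (or additive)"
print(f"index = {additivity_index:+.2f} -> {label}")
```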
Subject(s)
Facial Recognition/physiology, Animals, Cognition/physiology, Ocular Fixation/physiology, Macaca mulatta, Magnetic Resonance Imaging, Male, Photic Stimulation, Social Behavior
ABSTRACT
Multisensory stimulus combinations trigger shorter reaction times (RTs) than individual single-modality stimuli. It has been suggested that this inter-sensory facilitation effect is found exclusively for semantically congruent stimuli, because incongruity would prevent multisensory integration. Here we provide evidence that the effect of incongruity is due to a change in response caution rather than a prevention of stimulus integration. In two experiments, participants performed two-alternative forced-choice decision tasks in which they categorized auditory stimuli, visual stimuli or audio-visual stimulus pairs. The pairs were either semantically congruent (e.g., an ambulance image and a horn sound) or incongruent (e.g., an ambulance image and a bell sound). Shorter RTs and violations of the race model inequality on congruent trials are in accordance with previous studies. However, Bayesian hierarchical drift diffusion analyses contradict earlier co-activation-based explanations of the congruency effects. Instead, they show that longer RTs on incongruent compared to congruent trials are most likely the result of an incongruity caution effect: more cautious response behaviour in the face of semantically incongruent sensory input. Further, they show that response caution can be adjusted on a trial-by-trial basis depending on incoming information. Finally, stimulus modality influenced non-cognitive components of the response; we suggest that the combined stimulus energy from simultaneously presented stimuli reduces encoding time.
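The race model inequality referenced above bounds the audio-visual RT distribution by the sum of the unimodal distributions, F_AV(t) <= F_A(t) + F_V(t) for every t; a violation is the classic marker of co-activation. A minimal sketch on simulated RTs (all distributions and parameters invented):

```python
# Race-model-inequality check on fabricated reaction times (ms).
import numpy as np

def ecdf(rts, grid):
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, grid, side="right") / rts.size

rng = np.random.default_rng(2)
rt_a = rng.normal(480, 60, 200)    # auditory-only RTs
rt_v = rng.normal(520, 60, 200)    # visual-only RTs
rt_av = rng.normal(430, 55, 200)   # audio-visual RTs

grid = np.linspace(250, 700, 91)
bound = np.minimum(ecdf(rt_a, grid) + ecdf(rt_v, grid), 1.0)
violation = ecdf(rt_av, grid) - bound
print(f"max violation: {violation.max():.3f}")  # > 0 violates the race model
```

The drift-diffusion reanalysis in the abstract goes a step further, attributing the congruency effect to the decision threshold (caution) rather than to the evidence-accumulation process itself.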
Subject(s)
Auditory Perception/physiology, Semantics, Visual Perception/physiology, Acoustic Stimulation, Analysis of Variance, Choice Behavior/physiology, Female, Humans, Male, Photic Stimulation, Reaction Time/physiology, Young Adult
ABSTRACT
This study aimed to investigate the functional connectivity in the brain during the cross-modal integration of polyphonic characters in Chinese audio-visual sentences. The visual sentences were all semantically reasonable, and the audible pronunciations of the polyphonic characters in the corresponding sentence contexts varied across four conditions. To measure functional connectivity, correlation, coherence and the phase synchronization index (PSI) were used, and multivariate pattern analysis was then performed to detect consensus functional connectivity patterns. These analyses were confined to the time windows of three event-related potential components, the P200, the N400 and the late positive shift (LPS), to investigate the dynamic changes of the connectivity patterns at different cognitive stages. We found that when differentiating polyphonic characters with abnormal pronunciations from those with appropriate ones in audio-visual sentences, significant classification results were obtained based on coherence in the time window of the P200 component, on correlation in the time window of the N400 component, and on coherence and PSI in the time window of the LPS component. Moreover, the spatial distributions in these time windows also differed, with the recruitment of frontal sites in the time window of the P200 component, frontal-central-parietal regions in the time window of the N400 component, and central-parietal sites in the time window of the LPS component. These findings demonstrate that the functional interaction mechanisms differ across stages of the audio-visual integration of polyphonic characters.
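Of the three connectivity measures, the PSI is perhaps the least standard; a minimal sketch, assuming two band-passed channels and the Hilbert-transform definition of instantaneous phase (all signals fabricated):

```python
# Phase synchronization index (PSI) between two channels in a frequency band:
# the magnitude of the mean phase-difference vector (0 = none, 1 = perfect).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def psi(x, y, fs, band):
    b, a = butter(4, np.array(band) / (fs / 2), btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.exp(1j * (phase_x - phase_y)).mean())

fs = 500
rng = np.random.default_rng(3)
t = np.arange(fs * 2) / fs
x = np.sin(2 * np.pi * 6 * t) + rng.normal(0, 1, t.size)        # 6-Hz component
y = np.sin(2 * np.pi * 6 * t + 0.4) + rng.normal(0, 1, t.size)  # phase-lagged copy
print(f"theta-band PSI: {psi(x, y, fs, (4, 8)):.2f}")
```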
Subject(s)
Asian People/psychology, Brain Mapping, Evoked Potentials/physiology, Phonetics, Semantics, Acoustic Stimulation, Adult, Electroencephalography, Electrooculography, Female, Humans, Male, Neurological Models, Photic Stimulation, Reaction Time/physiology, Statistics as Topic, Young Adult
ABSTRACT
Evidence from electrophysiological and imaging studies suggests that audio-visual (AV) stimuli presented in spatial coincidence enhance activity in the subcortical colliculo-dorsal extrastriate pathway. To test whether repetitive AV stimulation might specifically activate this neural circuit underlying multisensory integrative processes, electroencephalographic data were recorded before and after 2 h of AV training, during the execution of two lateralized visual tasks: a motion discrimination task, relying on activity in the colliculo-dorsal MT pathway, and an orientation discrimination task, relying on activity in the striate and early ventral extrastriate cortices. During training, participants were asked to detect and perform a saccade towards AV stimuli that were disproportionally allocated to one hemifield (the trained hemifield). Half of the participants underwent a training in which AV stimuli were presented in spatial coincidence, while the remaining half underwent a training in which AV stimuli were presented in spatial disparity (32°). Participants who received AV training with stimuli in spatial coincidence had a post-training enhancement of the anterior N1 component in the motion discrimination task, but only in response to stimuli presented in the trained hemifield. However, no effect was found in the orientation discrimination task. In contrast, participants who received AV training with stimuli in spatial disparity showed no effects on either task. The observed N1 enhancement might reflect enhanced discrimination for motion stimuli, probably due to increased activity in the colliculo-dorsal MT pathway induced by multisensory training.
Subject(s)
Auditory Perception, Mesencephalon/physiology, Motion Perception, Acoustic Stimulation, Adult, Discrimination (Psychology), Female, Humans, Male, Spatial Orientation, Photic Stimulation, Saccades
ABSTRACT
This study examined the intermodal integration of visual-proprioceptive feedback via a novel visual discrimination task involving delayed self-generated movement. Participants performed a goal-oriented task in which visual feedback was available only via delayed videos displayed on two monitors, each with a different delay duration. During task performance, the delay duration was varied for one of the videos in the pair relative to a standard delay, which was held constant. Participants were required to identify and use the video with the lesser delay to perform the task. Visual discrimination of the lesser-delayed video was examined under four conditions in which the standard delay was increased for each condition. A temporal limit for proprioceptive-visual intermodal integration of 3-5 s was revealed by participants' inability to reliably discriminate between the video pairs.
Subject(s)
Movement/physiology, Proprioception/physiology, Psychomotor Performance/physiology, Visual Perception/physiology, Adolescent, Adult, Female, Humans, Male, Students, Time, Universities, Young Adult
ABSTRACT
The purpose of this study was to test how the sensory modality of rhythmic stimuli affects the production of bimanual coordination patterns. To this aim, participants had to synchronize the taps of their two index fingers with auditory and visual stimuli presented separately (auditory or visual) or simultaneously (audio-visual). This kind of task requires two levels of coordination: (1) sensorimotor coordination, which can be measured by the mean asynchrony between the beat of the stimulus and the corresponding tap and by the stability of that asynchrony, and (2) inter-manual coordination, which can be assessed by the accuracy and stability of the relative phase between the right-hand and left-hand taps. Previous studies show that sensorimotor coordination is better during synchronization with auditory or audio-visual metronomes than with a visual metronome, but it was not known whether inter-manual coordination is also affected by stimulation modality. To answer this question, 13 participants were required to tap their index fingers in synchrony with the beat of auditory and/or visual stimuli specifying three coordination patterns: two preferred patterns (in-phase and antiphase) and a non-preferred intermediate pattern. A first main result demonstrated that in-phase tapping had the best inter-manual stability but the worst asynchrony stability. The second main finding revealed that, for all patterns, audio-visual stimulation improved the stability of sensorimotor coordination but not of inter-manual coordination. The combination of the visual and auditory modalities thus results in multisensory integration that improves sensorimotor coordination but not inter-manual coordination. Both results suggest a dissociation between the processes underlying sensorimotor synchronization (anticipation or reactivity) and those underlying inter-manual coordination (motor control). This finding opens new perspectives for separately evaluating the sensorimotor and inter-manual coordination deficits that may be present in movement disorders.
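Inter-manual coordination in tasks like this is typically quantified by the discrete relative phase between the hands and its circular stability; a minimal sketch on simulated tap times (metronome rate and noise level invented):

```python
# Discrete relative phase of left-hand taps within the right-hand cycle,
# plus its stability as the mean resultant length of the phase distribution.
import numpy as np

def relative_phase_deg(right_taps, left_taps, period):
    return 360.0 * ((left_taps - right_taps) % period) / period

rng = np.random.default_rng(4)
period = 0.5                                         # 2-Hz pacing (s)
right = np.arange(30) * period
left = right + period / 2 + rng.normal(0, 0.02, 30)  # intended antiphase taps

phi = np.deg2rad(relative_phase_deg(right, left, period))
resultant = np.exp(1j * phi).mean()
print(f"mean relative phase: {np.rad2deg(np.angle(resultant)) % 360:.1f} deg")
print(f"stability (0-1): {np.abs(resultant):.2f}")
```

A mean near 180 degrees with a resultant length near 1 indicates accurate, stable antiphase tapping.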
Subject(s)
Auditory Perception/physiology, Fingers/innervation, Movement/physiology, Periodicity, Psychomotor Performance/physiology, Visual Perception/physiology, Acoustic Stimulation, Adult, Analysis of Variance, Female, Humans, Male, Photic Stimulation, Reaction Time/physiology, Young Adult
ABSTRACT
Interest in the perception of the materials of objects has been growing. While material perception is a critical ability that allows animals to properly regulate behavioral interactions with surrounding objects (e.g., eating), little is known about its underlying processing. Vision and audition both provide useful information for material perception; using only an object's visual appearance or its impact sound, we can infer what the object is made of. However, what material is perceived when the visual appearance of one material is combined with the impact sound of another, and what rules govern the cross-modal integration of material information? We addressed these questions by asking 16 human participants to rate how likely it was that audiovisual stimuli (48 combinations of the visual appearances of six materials and the impact sounds of eight materials), along with visual-only and auditory-only stimuli, fell into each of 13 material categories. The results indicated strong interactions between audiovisual material perceptions; for example, the appearance of glass paired with a pepper sound is perceived as transparent plastic. Ratings of material-category likelihood follow a multiplicative integration rule, in that the categories judged to be likely are those consistent with both the visual and the auditory stimulus. Ratings of material properties, such as roughness and hardness, on the other hand, follow a weighted-average rule. Despite the difference in their integration calculations, both rules can be interpreted as optimal Bayesian integration of independent audiovisual estimates for the two types of material judgment.
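The two integration rules contrasted in the abstract are easy to state side by side; a toy sketch with invented ratings and weights:

```python
# Multiplicative rule for category likelihoods vs. weighted average for
# continuous material properties. All numbers are illustrative.
import numpy as np

# Category likelihoods (e.g., over [glass, plastic, metal]) from each modality.
p_visual = np.array([0.70, 0.20, 0.10])
p_audio = np.array([0.10, 0.60, 0.30])
p_av = p_visual * p_audio
p_av /= p_av.sum()                          # multiplicative rule, renormalized
print("audio-visual category likelihoods:", np.round(p_av, 2))

# A rated property (e.g., hardness on a 1-7 scale): reliability-weighted average.
hardness_visual, hardness_audio = 5.5, 3.0
w_visual = 0.6                              # hypothetical visual weight
print("integrated hardness:",
      w_visual * hardness_visual + (1 - w_visual) * hardness_audio)
```

Under the multiplicative rule the winning category is the one plausible under both modalities, which is how glass paired with a pepper-like sound can end up rated as transparent plastic.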
Subject(s)
Auditory Perception/physiology, Form Perception/physiology, Visual Perception/physiology, Adult, Bayes Theorem, Female, Humans, Male, Photic Stimulation/methods, Sound, Surveys and Questionnaires, Young Adult
ABSTRACT
Developmental Coordination Disorder (DCD) is a movement disorder in which atypical sensory processing may underlie the movement atypicality. However, whether altered sensory processing in DCD is domain-specific or global in nature remains an open question. Here, we measured, for the first time, different aspects of sensory processing and spatiotemporal integration in the same cohort of adult participants with DCD (N = 16), possible DCD (pDCD, N = 12) and neurotypical adults (NT, N = 28). Haptic perception was reduced in both the DCD and the extended DCD + pDCD groups when compared to NT adults. Audio-visual integration, measured using the sound-induced double-flash illusion, was reduced only in DCD participants, and not in the extended DCD + pDCD group. While low-level sensory processing was altered in DCD, the more cognitive, higher-level ability to infer temporal dimensions from spatial information, and vice versa, as assessed with Tau-Kappa effects, was intact in DCD (and extended DCD + pDCD) participants. Both the audio-visual integration and haptic perception difficulties correlated with the degree of self-reported DCD symptoms and were most apparent when comparing the DCD and NT groups directly, rather than using the expanded DCD + pDCD group. The association of sensory difficulties with DCD symptoms suggests that perceptual differences play a role in the motor difficulties seen in DCD via an underlying internal modelling mechanism.
Subject(s)
Illusions, Motor Skills Disorders, Adult, Humans, Psychomotor Performance, Motor Skills Disorders/psychology, Stereognosis, Sensation
ABSTRACT
Perceptual disorders are not part of the diagnostic criteria for schizophrenia. Yet a considerable amount of work has been conducted, especially on visual perception abnormalities, and there is little doubt that visual perception is altered in patients. There are several reasons why such perturbations are of interest in this pathology. They are observed during the prodromal phase of psychosis, they are related to the pathophysiology (clinical disorganization, disorders of the sense of self), and they are associated with disorders of neuronal connectivity. Perturbations occur at different levels of processing and likely affect how patients interact with and adapt to their surroundings. The literature has become very large, and here we try to summarize the different models that have guided the exploration of perception in patients. We also illustrate several lines of research by showing how perception has been investigated and by discussing the interpretation of the results. In addition to covering domains such as contrast sensitivity, masking, and visual grouping, we discuss more recent fields such as processing at the level of the retina and the timing of perception.
Subject(s)
Perceptual Disorders, Psychotic Disorders, Schizophrenia, Humans, Visual Perception, Schizophrenia/complications, Psychotic Disorders/complications
ABSTRACT
Objective. The primary purpose of this study was to investigate the electrophysiological mechanisms underlying different modalities of sensory feedback and multi-sensory integration in typical prosthesis control tasks. Approach. We recruited 15 subjects and developed a closed-loop setup for three prosthesis control tasks covering typical activities in practical prosthesis use: prosthesis finger position control (PFPC), equivalent grasping force control (GFC), and box and block control (BABC). All three tasks were conducted under tactile feedback (TF), visual feedback (VF) and combined tactile-visual feedback (TVF), with simultaneous electroencephalography (EEG) recording to assess the neural response under each type of feedback. Behavioral and psychophysical assessments were also administered in each feedback condition. Results. EEG results showed that VF played a predominant role in the GFC and BABC tasks, reflected by a significantly lower somatosensory alpha event-related desynchronization (ERD) in TVF than in TF and no significant difference in visual alpha ERD between TVF and VF. In the PFPC task, there was no significant difference in somatosensory alpha ERD between TF and TVF, while a significantly lower visual alpha ERD was found in TVF than in VF, indicating that TF was essential in situations involving proprioceptive position perception. Tactile-visual integration was found when TF and VF were implemented congruently, showing clear activation over the premotor cortex in all three tasks. Behavioral and psychophysical results were consistent with the EEG evaluations. Significance. Our findings provide neural evidence for multi-sensory integration and for the functional roles of tactile and visual feedback in a practical prosthesis control setting, offering a multi-dimensional insight into the functional mechanisms of sensory feedback.
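The alpha-ERD measure used throughout is a baseline-normalized band-power change, ERD% = (P_event - P_baseline) / P_baseline x 100, with negative values indicating desynchronization; a minimal sketch on fabricated trials:

```python
# Alpha (8-13 Hz) event-related desynchronization relative to a baseline window.
import numpy as np
from scipy.signal import butter, filtfilt

def alpha_erd(trials, fs, baseline, event, band=(8, 13)):
    """trials: (n_trials, n_samples); baseline/event: (start_s, end_s) windows."""
    b, a = butter(4, np.array(band) / (fs / 2), btype="band")
    power = (filtfilt(b, a, trials, axis=-1) ** 2).mean(axis=0)  # trial-averaged
    p_base = power[int(baseline[0] * fs):int(baseline[1] * fs)].mean()
    p_event = power[int(event[0] * fs):int(event[1] * fs)].mean()
    return 100.0 * (p_event - p_base) / p_base

fs = 250
rng = np.random.default_rng(5)
trials = rng.normal(0, 1, (40, fs * 3))  # 40 fabricated 3-s trials
print(f"alpha ERD: {alpha_erd(trials, fs, baseline=(0, 1), event=(1.5, 2.5)):.1f}%")
```

A lower somatosensory alpha ERD in TVF than in TF, as reported above, is consistent with the added visual channel reducing reliance on the somatosensory channel.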
Subject(s)
Artificial Limbs, Sensory Feedback, Humans, Sensory Feedback/physiology, Touch/physiology, Prosthesis Implantation, Upper Extremity
ABSTRACT
Neural oscillations subserve a broad range of speech processing and language comprehension functions. Using electroencephalography (EEG), we investigated frequency-specific directed interactions among whole-brain regions while participants processed Chinese sentences presented through different modalities (auditory, visual, and audio-visual). The results indicate that low-frequency responses correspond to the aggregation of information flow in the primary sensory cortices of the different modalities. Information flow dominated by high-frequency responses exhibited the characteristics of bottom-up flow from left posterior temporal to left frontal regions. The network pattern of top-down information flowing out of the left frontal lobe was characterized by the joint dominance of low- and high-frequency rhythms. Overall, our results suggest that the brain may be modality-independent when processing higher-order language information.
Subject(s)
Comprehension, Speech Perception, Humans, Comprehension/physiology, Brain Mapping/methods, Language, Brain/physiology, Frontal Lobe/physiology, Speech Perception/physiology, Magnetic Resonance Imaging
ABSTRACT
Attention and audiovisual integration are crucial subjects in the field of brain information processing. A large number of previous studies have sought to determine the relationship between them through specific experiments, but have failed to reach a unified conclusion. These studies explored the relationship through the frameworks of early, late, and parallel integration, though network analysis has been employed only sparingly. In this study, we employed time-varying network analysis, which offers a comprehensive and dynamic insight into cognitive processing, to explore the relationship between attention and auditory-visual integration. The combination of high-spatial-resolution functional magnetic resonance imaging (fMRI) and high-temporal-resolution electroencephalography (EEG) was used. First, a generalized linear model (GLM) was employed to find task-related fMRI activations, which were selected as regions of interest (ROIs) to serve as nodes of the time-varying network. Then, the electrical activity of the auditory-visual cortex was estimated via the normalized minimum norm estimation (MNE) source localization method. Finally, the time-varying network was constructed using the adaptive directed transfer function (ADTF) technique. Task-related fMRI activations were mainly observed in the bilateral temporoparietal junction (TPJ), the superior temporal gyrus (STG), and primary visual and auditory areas. The time-varying network analysis revealed that V1/A1→STG connectivity arose before TPJ→STG connectivity. These results therefore support the theory that auditory-visual integration occurs before attention, in line with the early integration framework.
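The ADTF is the adaptive (time-varying) extension of the directed transfer function (DTF); a compact sketch of the static DTF from a least-squares VAR fit conveys the core quantity, with the ADTF additionally letting the coefficients evolve over time (e.g., via a Kalman filter). Channel count, coefficients, and data below are invented:

```python
# Static directed transfer function (DTF) from a least-squares VAR(p) fit.
import numpy as np

def fit_var(data, p):
    """data: (channels, samples) -> coefficients A with shape (p, n, n)."""
    n, T = data.shape
    Y = data[:, p:]
    X = np.vstack([data[:, p - k:T - k] for k in range(1, p + 1)])
    A = Y @ X.T @ np.linalg.inv(X @ X.T)
    return A.reshape(n, p, n).transpose(1, 0, 2)

def dtf(A, freqs, fs):
    """dtf[f, i, j]: normalized flow from channel j to channel i at frequency f."""
    p, n, _ = A.shape
    out = np.empty((len(freqs), n, n))
    for fi, f in enumerate(freqs):
        Af = np.eye(n, dtype=complex)
        for k in range(p):
            Af -= A[k] * np.exp(-2j * np.pi * f * (k + 1) / fs)
        H = np.linalg.inv(Af)                       # transfer matrix
        out[fi] = np.abs(H) / np.sqrt((np.abs(H) ** 2).sum(axis=1, keepdims=True))
    return out

fs = 200
rng = np.random.default_rng(6)
x = rng.normal(size=(2, 2000))
x[1, 1:] += 0.6 * x[0, :-1]                         # channel 0 drives channel 1
D = dtf(fit_var(x, p=3), freqs=np.arange(1, 40), fs=fs)
print(f"0->1 flow: {D[:, 1, 0].mean():.2f}, 1->0 flow: {D[:, 0, 1].mean():.2f}")
```

In the study's pipeline, the node time series fed into this step would come from the MNE source estimates within the fMRI-defined ROIs.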
ABSTRACT
Introduction: Rehabilitation approaches take advantage of vision's important role in kinesthesia, using the mirror paradigm as a means to reduce phantom limb pain or to promote recovery from hemiparesis. Notably, it is currently applied to provide a visual reafferentation of the missing limb to relieve amputees' pain. However, the efficacy of this method is still debated, possibly due to the absence of concomitant coherent proprioceptive feedback. We know that combining congruent visuo-proprioceptive signals at the hand level enhances movement perception in healthy people. Much less is known, however, about the lower limbs, whose actions are far less visually controlled in everyday life than those of the upper limbs. Therefore, the present study aimed to explore, with the mirror paradigm, the benefit of combined visuo-proprioceptive feedback from the lower limbs of healthy participants. Methods: We compared the movement illusions driven by visual or proprioceptive afferents and tested the extent to which adding proprioceptive input to the visual reflection of the leg improved the resulting movement illusion. To this end, 23 healthy adults were exposed to mirror stimulation, proprioceptive stimulation, or concomitant visuo-proprioceptive stimulation. In the visual conditions, participants were asked to voluntarily move their left leg in extension and look at its reflection in the mirror. In the proprioceptive conditions, a mechanical vibration was applied to the hamstring muscle of the leg hidden behind the mirror to simulate an extension of the leg, either alone or concomitantly with the visual reflection of the leg in the mirror. Results: (i) Visual stimulation evoked leg movement illusions, but with a lower velocity than that of the actual movement reflected in the mirror; (ii) proprioceptive stimulation alone provided more salient illusions than the mirror illusion; and (iii) adding congruent proprioceptive stimulation improved the saliency, amplitude, and velocity of the illusion. Conclusion: The present findings confirm that visuo-proprioceptive integration occurs efficiently when the mirror paradigm is coupled with mechanical vibration at the lower limbs, thus offering promising new perspectives for rehabilitation.