Results 1 - 20 of 1,043
1.
Autism ; : 13623613241286570, 2024 Oct 05.
Article in English | MEDLINE | ID: mdl-39367733

ABSTRACT

LAY ABSTRACT: Our study explored how meaningful hand gestures, alongside spoken words, can help autistic individuals understand speech, especially when the speech quality is poor, such as when there is a lot of noise around. Previous research has suggested that meaningful hand gestures might be processed differently in autistic individuals, and we therefore expected that these gestures might aid them less in understanding speech in adverse listening conditions than they aid non-autistic people. To this end, we asked participants to watch and listen to videos of a woman uttering a Dutch action verb. In these videos, she either made a meaningful gesture while speaking or did not, and the speech was either clear or noisy. The task for participants was to identify the verb in the videos. Contrary to what we expected, we found that both autistic and non-autistic individuals use meaningful information from hand gestures when understanding unclear speech. This means that gestural information can aid communication, especially when communicative settings are suboptimal.

2.
Article in English | MEDLINE | ID: mdl-39350457

ABSTRACT

INTRODUCTION: Individuals with schizophrenia present anomalies in the extension and plasticity of the peripersonal space (PPS), the section of space surrounding the body, shaped through motor experiences. Weak multisensory integration in the PPS would contribute to an impairment of self-embodiment processing, a core feature of the disorder linked to specific subjective experiences. In this exploratory study, we aimed at: (1) testing an association between PPS features, psychopathology, and subjective experiences in schizophrenia; (2) describing the PPS profile in individuals with early-onset schizophrenia. MATERIALS AND METHODS: Twenty-seven individuals with schizophrenia underwent a task measuring PPS size and boundary demarcation before and after motor training with a tool. The Positive And Negative Syndrome Scale (PANSS), the Examination of Anomalous Self Experience scale (EASE), and the Autism Rating Scale (ARS) were used to assess psychopathology. Participants were then divided into two subgroups, early-onset and adult-onset schizophrenia, and the two groups were compared with regard to their PPS and psychopathological profiles. RESULTS: PPS patterns were associated with psychopathology, correlating positively with the PANSS negative scale score and negatively with subjective experiences of existential reorientation (EASE Domain 5 scores) and of social encounters (ARS scores). Only PPS parameters and ARS scores differentiated between early-onset and adult-onset participants. CONCLUSIONS: Our results, although preliminary and exploratory, suggest a link between PPS patterns, negative symptoms, and disturbances of subjective experience, particularly in the intersubjective domain, in schizophrenia. Moreover, they suggest that specific PPS profiles and schizophrenic autism traits could be markers of early-onset schizophrenia.

3.
Front Psychol ; 15: 1399084, 2024.
Article in English | MEDLINE | ID: mdl-39380752

ABSTRACT

This review examines how visual information enhances speech perception in individuals with hearing loss, focusing on the impact of age, linguistic stimuli, and specific hearing loss factors on the effectiveness of audiovisual (AV) integration. While existing studies offer varied and sometimes conflicting findings regarding the use of visual cues, our analysis shows that these key factors can distinctly shape AV speech perception outcomes. For instance, younger individuals and those who receive early intervention tend to benefit more from visual cues, particularly when linguistic complexity is lower. Additionally, languages with dense phoneme spaces demonstrate a higher dependency on visual information, underscoring the importance of tailoring rehabilitation strategies to specific linguistic contexts. By considering these influences, we highlight areas where understanding is still developing and suggest how personalized rehabilitation strategies and supportive systems could be tailored to better meet individual needs. Furthermore, this review brings attention to important aspects that warrant further investigation, aiming to refine theoretical models and contribute to more effective, customized approaches to hearing rehabilitation.

4.
Sci Rep ; 14(1): 20923, 2024 09 09.
Article in English | MEDLINE | ID: mdl-39251764

ABSTRACT

Does congruence between auditory and visual modalities affect aesthetic experience? While cross-modal correspondences between vision and hearing are well-documented, previous studies show conflicting results regarding whether audiovisual correspondence affects subjective aesthetic experience. Here, in collaboration with the Kentler International Drawing Space (NYC, USA), we depart from previous research by using music specifically composed to pair with visual art in the professionally-curated Music as Image and Metaphor exhibition. Our pre-registered online experiment consisted of 4 conditions: Audio, Visual, Audio-Visual-Intended (artist-intended pairing of art/music), and Audio-Visual-Random (random shuffling). Participants (N = 201) were presented with 16 pieces and could click to proceed to the next piece whenever they liked. We used time spent as an implicit index of aesthetic interest. Additionally, after each piece, participants were asked about their subjective experience (e.g., feeling moved). We found that participants spent significantly more time with Audio, followed by Audiovisual, followed by Visual pieces; however, they felt most moved in the Audiovisual (bi-modal) conditions. Ratings of audiovisual correspondence were significantly higher for the Audiovisual-Intended compared to Audiovisual-Random condition; interestingly, though, there were no significant differences between intended and random conditions on any other subjective rating scale, or for time spent. Collectively, these results call into question the relationship between cross-modal correspondence and aesthetic appreciation. Additionally, the results complicate the use of time spent as an implicit measure of aesthetic experience.


Subject(s)
Auditory Perception, Esthetics, Music, Visual Perception, Humans, Music/psychology, Female, Esthetics/psychology, Male, Adult, Visual Perception/physiology, Auditory Perception/physiology, Young Adult, Art, Photic Stimulation, Acoustic Stimulation, Adolescent
5.
J Neuroeng Rehabil ; 21(1): 155, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39252006

ABSTRACT

BACKGROUND: Planning and executing movements requires the integration of different sensory modalities, such as vision and proprioception. However, neurological diseases like stroke can lead to full or partial loss of proprioception, resulting in impaired movements. Recent advances have focused on providing additional sensory feedback to patients to compensate for the sensory loss, with vibrotactile stimulation proving to be a viable option because it is inexpensive and easy to implement. Here, we test how such vibrotactile information can be integrated with visual signals to estimate the spatial location of a reach target. METHODS: We used a center-out reach paradigm with 31 healthy human participants to investigate how artificial vibrotactile stimulation can be integrated with visual-spatial cues indicating target location. Specifically, we provided multisite vibrotactile stimulation to the moving dominant arm using eccentric rotating mass (ERM) motors. As the integration of inputs across multiple sensory modalities becomes especially relevant when one of them is uncertain, we additionally modulated the reliability of visual cues. We then compared the weighting of vibrotactile and visual inputs as a function of visual uncertainty to predictions from the maximum likelihood estimation (MLE) framework to determine whether participants achieve quasi-optimal integration. RESULTS: Our results show that participants could estimate target locations based on vibrotactile instructions. After short training, combined visual and vibrotactile cues led to higher hit rates and reduced reach errors when visual cues were uncertain. Additionally, we observed lower reaction times in trials with low visual uncertainty when vibrotactile stimulation was present. Using MLE predictions, we found that integration of vibrotactile and visual cues was optimal when the vibrotactile cues required detecting one or two active motors. However, if estimating the target location required discriminating the intensities of two cues, integration violated MLE predictions. CONCLUSION: We conclude that participants can quickly learn to integrate visual and artificial vibrotactile information. Therefore, additional vibrotactile stimulation may serve as a promising way to improve rehabilitation or the control of prosthetic devices by patients suffering loss of proprioception.
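
As a rough illustration of the maximum likelihood estimation (MLE) framework referenced in this abstract, the sketch below shows inverse-variance-weighted fusion of two location cues under the standard assumption of independent Gaussian noise; the function name and the numbers are illustrative, not taken from the study.

```python
# Minimal MLE cue-combination sketch (assumes independent Gaussian noise).
# Values and names are illustrative only, not the study's data or code.
import numpy as np

def mle_combine(mu_vis, sigma_vis, mu_tact, sigma_tact):
    """Inverse-variance weighted fusion of a visual and a vibrotactile estimate."""
    w_vis = (1 / sigma_vis**2) / (1 / sigma_vis**2 + 1 / sigma_tact**2)
    mu_fused = w_vis * mu_vis + (1 - w_vis) * mu_tact
    # The fused estimate is predicted to be less variable than either cue alone.
    sigma_fused = np.sqrt((sigma_vis**2 * sigma_tact**2) /
                          (sigma_vis**2 + sigma_tact**2))
    return mu_fused, sigma_fused

# Example: an unreliable (blurred) visual cue combined with a sharper tactile cue.
print(mle_combine(mu_vis=10.0, sigma_vis=4.0, mu_tact=12.0, sigma_tact=2.0))
```

Comparing observed cue weights and bimodal variability against these predictions is the usual way such quasi-optimality is assessed.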


Subject(s)
Cues (Psychology), Psychomotor Performance, Vibration, Visual Perception, Humans, Male, Female, Adult, Visual Perception/physiology, Psychomotor Performance/physiology, Young Adult, Sensory Feedback/physiology, Proprioception/physiology, Touch Perception/physiology, Uncertainty, Physical Stimulation/methods, Space Perception/physiology, Movement/physiology
6.
J Exp Child Psychol ; 247: 106040, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39142077

ABSTRACT

It is well accepted that multisensory integration (MSI) undergoes protracted maturation from childhood to adulthood. However, existing evidence may have been confounded by potential age-related differences in attention. To unveil neurodevelopmental changes in MSI while matching top-down attention between children and adults, we recorded event-related potentials of healthy children aged 7 to 9 years and young adults in a visual-to-auditory attentional spreading paradigm in which attention and MSI could be measured concurrently. The absence of differences between children and adults in the visual selection negativity component and in behavioral measures of auditory interference first demonstrates that the child group could maintain top-down visual attention and ignore task-irrelevant auditory information to a similar extent as adults. The stimulus-driven attentional spreading quantified by the auditory negative difference (Nd) component was then found to be overall absent in the child group, revealing children's largely immature audiovisual binding process. These findings furnish strong evidence for the protracted maturation of MSI per se from childhood to adulthood, providing a new benchmark for characterizing the developmental course of MSI. In addition, the representation-driven attentional spreading measured by another Nd was present but less robust in children, suggesting that their audiovisual representation coactivation process is substantially, but not fully, developed.


Subject(s)
Attention, Auditory Perception, Evoked Potentials, Visual Perception, Humans, Attention/physiology, Child, Female, Male, Auditory Perception/physiology, Visual Perception/physiology, Young Adult, Evoked Potentials/physiology, Adult, Electroencephalography, Child Development/physiology, Acoustic Stimulation, Age Factors, Photic Stimulation
7.
Acta Psychol (Amst) ; 249: 104386, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39174407

ABSTRACT

Virtual Reality has significantly improved the understanding of body experience through techniques such as Body Illusion. Body Illusion allows individuals to perceive an artificial body as their own, changing bodily perceptual and affective components. Prior research has predominantly focused on female participants, leaving the impact of Body Illusion on males less understood. This study seeks to fill this gap by examining the nuanced bodily experiences of men in comparison to women. Forty participants (20 females and 20 males) underwent synchronous and asynchronous visuo-tactile Body Illusion to explore changes in body satisfaction and body size estimation across three critical areas: shoulders, hips, and waist. Results revealed significant initial disparities, with females displaying greater body dissatisfaction and a tendency to overestimate body size. After Body Illusion, females adjusted the perceived size of their hips closer to that of the virtual body and reported increased body satisfaction independently of the condition. Conversely, males showed changes in waist size estimation only after synchronous stimulation, without significant shifts in body satisfaction. These results suggest a higher sensitivity of women to embodied experiences, potentially due to societal influences and a greater inclination towards self-objectification. These insights pave the way for more refined and effective interventions for body image issues, highlighting the importance of incorporating gender-specific considerations in VR-based prevention and therapeutic programs.


Subject(s)
Body Image, Illusions, Virtual Reality, Humans, Male, Female, Body Image/psychology, Illusions/physiology, Adult, Young Adult, Sex Factors, Personal Satisfaction, Body Size/physiology, Sex Characteristics, Body Dissatisfaction
8.
eNeuro ; 11(9)2024 Sep.
Article in English | MEDLINE | ID: mdl-39147580

ABSTRACT

The accurate estimation of limb state is necessary for movement planning and execution. While state estimation requires both feedforward and feedback information, we focus here on the latter. Prior literature has shown that integrating visual and proprioceptive feedback improves estimates of static limb position. However, differences in visual and proprioceptive feedback delays suggest that multisensory integration could be disadvantageous when the limb is moving. We formalized this hypothesis by modeling feedback-based state estimation using the long-standing maximum likelihood estimation model of multisensory integration, which we updated to account for sensory delays. Our model predicted that the benefit of multisensory integration was largely lost when the limb was passively moving. We tested this hypothesis in a series of experiments in human subjects that compared the degree of interference created by discrepant visual or proprioceptive feedback when estimating limb position either statically at the end of the movement or dynamically at movement midpoint. In the static case, we observed significant interference: discrepant feedback in one modality systematically biased sensory estimates based on the other modality. However, no interference was seen in the dynamic case: participants could ignore sensory feedback from one modality and accurately reproduce the motion indicated by the other modality. Together, these findings suggest that the sensory feedback used to compute a state estimate differs depending on whether the limb is stationary or moving. While the former may tend toward multimodal integration, the latter is more likely to be based on feedback from a single sensory modality.
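
The abstract's argument that feedback delays make fusion disadvantageous during movement can be made concrete with a back-of-the-envelope sketch: if vision lags proprioception, the visual estimate refers to an earlier limb position, so delay-ignorant fusion is biased in proportion to movement speed. This is our own simplified illustration under assumed delay and speed values, not the authors' model.

```python
# Back-of-the-envelope illustration (not the authors' model): with the limb moving
# at constant speed, a delay-ignorant fused position estimate lags the true
# position by a speed-weighted mixture of the sensory delays. Values are assumed.
def fusion_bias(w_vis, speed, delay_vis, delay_prop):
    """Approximate bias (meters) of a delay-ignorant fused position estimate."""
    return -speed * (w_vis * delay_vis + (1 - w_vis) * delay_prop)

# E.g., 0.3 m/s reach, 100 ms visual delay, 50 ms proprioceptive delay, 60% visual weight.
print(f"{fusion_bias(w_vis=0.6, speed=0.3, delay_vis=0.10, delay_prop=0.05) * 100:.1f} cm")
```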


Subject(s)
Sensory Feedback, Movement, Proprioception, Humans, Male, Sensory Feedback/physiology, Female, Proprioception/physiology, Young Adult, Adult, Movement/physiology, Visual Perception/physiology, Psychomotor Performance/physiology
9.
Res Dev Disabil ; 153: 104813, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39163725

ABSTRACT

Developmental dyslexia is characterized by difficulties in learning to read, affecting cognition and causing failure at school. Interventions for children with developmental dyslexia have focused on improving linguistic capabilities (phonics, orthographic and morphological instruction), but developmental dyslexia is accompanied by a wide variety of sensorimotor impairments. The goal of this study was to examine the effects of a proprioceptive intervention on reading performance and eye movements in children with developmental dyslexia. Nineteen children diagnosed with developmental dyslexia were randomly assigned to regular Speech Therapy (ST) or to a Proprioceptive and Speech Intervention (PSI), in which they received both the usual speech therapy and a proprioceptive intervention aimed at correcting their sensorimotor impairments (prism glasses, oral neurostimulation, insoles, and breathing instructions). Silent reading performance and eye movements were measured pre- and post-intervention (after nine months). In the PSI group, reading performance improved and eye movements became smoother and faster, reaching values similar to those of children with typical reading performance. The recognition of written words also improved, indicating better lexical access. These results show that PSI might constitute a valuable tool for improving reading in children with developmental dyslexia.


Subject(s)
Dyslexia, Eye Movements, Eye-Tracking Technology, Reading, Humans, Dyslexia/rehabilitation, Dyslexia/physiopathology, Dyslexia/therapy, Child, Male, Female, Eye Movements/physiology, Proprioception/physiology, Sensory Aids
10.
Curr Biol ; 34(18): 4091-4103.e4, 2024 Sep 23.
Article in English | MEDLINE | ID: mdl-39216484

ABSTRACT

Male mosquitoes form aerial aggregations, known as swarms, to attract females and maximize their chances of finding a mate. Within these swarms, individuals must be able to recognize potential mates and navigate the social environment to successfully intercept a mating partner. Prior research has almost exclusively focused on the role of acoustic cues in mediating the male mosquito's ability to recognize and pursue females. However, the role of other sensory modalities in this behavior has not been explored. Moreover, how males avoid collisions with one another in the swarm while pursuing females remains poorly understood. In this study, we combined free-flight and tethered-flight simulator experiments to demonstrate that swarming Anopheles coluzzii mosquitoes integrate visual and acoustic information to track conspecifics and avoid collisions. Our tethered experiments revealed that acoustic stimuli gated mosquito steering responses to visual objects simulating nearby mosquitoes, especially in males that exhibited a strong response toward visual objects in the presence of female flight tones. Additionally, we observed that visual cues alone could trigger changes in mosquitoes' wingbeat amplitude and frequency. These findings were corroborated by our free-flight experiments, which revealed that Anopheles coluzzii modulate their thrust-based flight responses to nearby conspecifics in a similar manner to tethered animals, potentially allowing for collision avoidance within swarms. Together, these results demonstrate that both males and females integrate multiple sensory inputs to mediate swarming behavior, and for males, the change in flight kinematics in response to multimodal cues might allow them to simultaneously track females while avoiding collisions.


Subject(s)
Anopheles, Cues (Psychology), Animals, Male, Anopheles/physiology, Female, Animal Flight/physiology, Sexual Behavior, Animal/physiology, Auditory Perception/physiology, Visual Perception/physiology
11.
Front Pain Res (Lausanne) ; 5: 1414927, 2024.
Article in English | MEDLINE | ID: mdl-39119526

ABSTRACT

Our mental representation of our body depends on integrating various sensory modalities, such as tactile information. In tactile distance estimation (TDE) tasks, participants must estimate the distance between two tactile tips applied to their skin. This measure of tactile perception has been linked to body representation assessments. Studies in individuals with fibromyalgia (FM), a chronic widespread pain syndrome, suggest the presence of body representation distortions and tactile alterations, but TDE has never been examined in this population. Twenty participants with FM and 24 pain-free controls performed a TDE task on three Body regions (upper limb, trunk, lower limb), in which they manually estimated the interstimuli distance on a tablet. TDE error, the absolute difference between the estimation and the interstimuli distance, was not different between the Groups, on any Body region. Drawings of their body as they felt it revealed clear and frequent distortions of body representation in the group with FM, compared to negligible perturbations in controls. This contrast between distorted body drawings and unaltered TDE suggests a preserved integration of tactile information but an altered integration of this information with other sensory modalities to generate a precise and accurate body representation. Future research should investigate the relative contribution of each sensory information and prior knowledge about the body in body representation in individuals with FM to shed light on the observed distortions.

12.
Front Psychol ; 15: 1396946, 2024.
Article in English | MEDLINE | ID: mdl-39091706

ABSTRACT

Introduction: The prevailing theories of consciousness consider the integration of different sensory stimuli a key component for this phenomenon to arise at the brain level. Although many theories and models have been proposed for multisensory integration between supraliminal stimuli (e.g., the optimal integration model), we do not know whether multisensory integration also occurs for subliminal stimuli, or what psychophysical mechanisms it follows. Methods: To investigate this, subjects were exposed to visual (Virtual Reality) and/or haptic stimuli (Electro-Cutaneous Stimulation) above or below their perceptual threshold. They had to discriminate, in a two-alternative forced choice task, the intensity of unimodal and/or bimodal stimuli. They were then asked to discriminate the sensory modality while their EEG responses were recorded. Results: We found evidence of multisensory integration in the supraliminal condition, following the classical optimal model. Importantly, even in subliminal trials, participants' performance in the bimodal condition was significantly more accurate when discriminating stimulation intensity. Moreover, significant differences emerged between unimodal and bimodal activity templates in parieto-temporal areas known for their integrative role. Discussion: These converging findings, although preliminary and in need of confirmation with further data, suggest that subliminal multimodal stimuli can be integrated, filling a meaningful gap in the debate about the relationship between consciousness and multisensory integration.

13.
Front Neurosci ; 18: 1390696, 2024.
Article in English | MEDLINE | ID: mdl-39161654

ABSTRACT

Background: Deficits in Multisensory Integration (MSI) in ASD have been reported repeatedly and have been suggested to be caused by altered long-range connectivity. Here we investigate behavioral and ERP correlates of MSI in ASD using ecologically valid videos of emotional expressions. Methods: In the present study, we set out to investigate the electrophysiological correlates of audiovisual MSI in young autistic and neurotypical adolescents. We employed dynamic stimuli of high ecological validity (500 ms clips produced by actors) that depicted fear or disgust in unimodal (visual and auditory), and bimodal (audiovisual) conditions. Results: We report robust MSI effects at both the behavioral and electrophysiological levels and pronounced differences between autistic and neurotypical participants. Specifically, neurotypical controls showed robust behavioral MSI for both emotions as seen through a significant speed-up of bimodal response time (RT), confirmed by Miller's Race Model Inequality (RMI), with greater MSI effects for fear than disgust. Adolescents with ASD, by contrast, showed behavioral MSI only for fear. At the electrophysiological level, the bimodal condition as compared to the unimodal conditions reduced the amplitudes of the visual P100 and auditory P200 and increased the amplitude of the visual N170 regardless of group. Furthermore, a cluster-based analysis across all electrodes revealed that adolescents with ASD showed an overall delayed and spatially constrained MSI effect compared to controls. Conclusion: Given that the variables we measured reflect attention, our findings suggest that MSI can be modulated by the differential effects on attention that fear and disgust produce. We also argue that the MSI deficits seen in autistic individuals can be compensated for at later processing stages by (a) the attention-orienting effects of fear, at the behavioral level, and (b) at the electrophysiological level via increased attentional effort.
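
For readers unfamiliar with Miller's Race Model Inequality (RMI) used here, the sketch below shows the standard way it is evaluated: the bimodal reaction-time CDF is compared against the sum of the two unimodal CDFs, and positive differences are taken as evidence of integration beyond a simple race. The reaction times below are simulated placeholders, not the study's data.

```python
# Standard Race Model Inequality (RMI) check: integration is inferred where the
# bimodal RT CDF exceeds the sum of the unimodal CDFs. Data here are simulated
# placeholders, not the study's measurements.
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative distribution of reaction times evaluated at t."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t, side="right") / rts.size

rng = np.random.default_rng(0)
rt_auditory = rng.normal(450, 70, 200)      # unimodal auditory RTs (ms)
rt_visual = rng.normal(470, 75, 200)        # unimodal visual RTs (ms)
rt_audiovisual = rng.normal(400, 60, 200)   # bimodal RTs (ms)

t_grid = np.percentile(np.concatenate([rt_auditory, rt_visual, rt_audiovisual]),
                       np.arange(5, 100, 5))
violation = ecdf(rt_audiovisual, t_grid) - (ecdf(rt_auditory, t_grid) +
                                            ecdf(rt_visual, t_grid))
print(np.round(violation, 3))  # positive entries = RMI violations
```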

14.
J Vestib Res ; 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39150839

ABSTRACT

BACKGROUND: Flight simulators have an essential role in aircrew training. Occasionally, symptoms of motion sickness, defined as simulator sickness, develop during training sessions. The reported incidence of simulator sickness ranges widely across studies. OBJECTIVE: The aims of this study were to calculate the incidence of, and to define a threshold value for, simulator sickness among rotary-wing pilots using the validated Simulator Sickness Questionnaire (SSQ). METHODS: CH-53 and UH-60 helicopter pilots who trained in helicopter simulators in the Israeli Air Force were asked to complete the SSQ. A score of 20 on the SSQ was defined as the threshold for simulator sickness. Simulator sickness incidence and the average SSQ score were calculated. Correlations of age and simulator training hours with SSQ scores were analyzed. RESULTS: A total of 207 rotary-wing aircrew participated in the study. Simulator sickness was experienced by 51.7% of trainees. The average SSQ score was 32.7. A significant negative correlation was found between age and SSQ score. CONCLUSIONS: Simulator sickness was experienced by more than half of helicopter pilots. A score of 20 on the SSQ was found to be a suitable threshold for this condition.
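
A minimal sketch of the two analyses reported here, incidence above a fixed SSQ threshold and the correlation with age, is shown below; the threshold of 20 matches the abstract, but all scores and ages are invented for illustration only.

```python
# Illustrative incidence and correlation analysis for SSQ data. The threshold of 20
# matches the abstract; all scores and ages below are invented placeholders.
import numpy as np
from scipy import stats

ssq_total = np.array([0.0, 18.7, 37.4, 22.4, 56.1, 7.5, 41.1])  # hypothetical totals
age = np.array([24, 31, 27, 38, 25, 45, 29])                     # hypothetical ages

incidence = np.mean(ssq_total >= 20) * 100
r, p = stats.pearsonr(age, ssq_total)
print(f"Simulator sickness incidence: {incidence:.1f}%")
print(f"Age vs. SSQ: r = {r:.2f}, p = {p:.3f}")
```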

15.
Hum Brain Mapp ; 45(12): e70009, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39185690

ABSTRACT

Attention and crossmodal interactions are closely linked through a complex interplay at different stages of sensory processing. Within the context of motion perception, previous research revealed that attentional demands alter audiovisual interactions in the temporal domain. In the present study, we aimed to understand the neurophysiological correlates of these attentional modulations. We utilized an audiovisual motion paradigm that elicits auditory time interval effects on perceived visual speed. The audiovisual interactions in the temporal domain were quantified by changes in perceived visual speed across different auditory time intervals. We manipulated attentional demands in the visual field by having a secondary task on a stationary object (i.e., single- vs. dual-task conditions). When the attentional demands were high (i.e., dual-task condition), there was a significant decrease in the effects of auditory time interval on perceived visual speed, suggesting a reduction in audiovisual interactions. Moreover, we found significant differences in both early and late neural activities elicited by visual stimuli across task conditions (single vs. dual), reflecting an overall increase in attentional demands in the visual field. Consistent with the changes in perceived visual speed, the audiovisual interactions in neural signals declined in the late positive component range. Compared with the findings from previous studies using different paradigms, our findings support the view that attentional modulations of crossmodal interactions are not unitary and depend on task-specific components. They also have important implications for motion processing and speed estimation in daily life situations where sensory relevance and attentional demands constantly change.


Subject(s)
Attention, Auditory Perception, Electroencephalography, Photic Stimulation, Visual Fields, Humans, Attention/physiology, Male, Female, Young Adult, Adult, Auditory Perception/physiology, Visual Fields/physiology, Photic Stimulation/methods, Motion Perception/physiology, Acoustic Stimulation, Visual Perception/physiology, Brain Mapping, Brain/physiology
16.
Front Psychol ; 15: 1353490, 2024.
Article in English | MEDLINE | ID: mdl-39156805

ABSTRACT

People can use their sense of hearing to discern thermal properties, though they are for the most part unaware that they can do so. While people unequivocally claim that they cannot perceive the temperature of pouring water from the sound of it being poured, our research further strengthens the evidence that they can. This multimodal ability is implicitly acquired in humans, likely through perceptual learning over a lifetime of exposure to the differences in the physical attributes of pouring water. In this study, we explore people's perception of this intriguing crossmodal correspondence and investigate the psychophysical foundations of this complex ecological mapping by employing machine learning. Our results show that not only can humans in practice classify the auditory properties of pouring water, but the physical characteristics underlying this phenomenon can also be classified by a pre-trained deep neural network.
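
The classification pipeline implied by the last sentence can be sketched as follows; the study used a pre-trained deep neural network, whereas this toy example substitutes MFCC features and logistic regression simply to show the shape of such an analysis. File names and labels are placeholders.

```python
# Toy sketch of classifying pouring-water recordings by water temperature from audio.
# The study used a pre-trained deep network; MFCC features + logistic regression are
# substituted here only to illustrate the pipeline. Files and labels are placeholders.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def mfcc_vector(path, n_mfcc=20):
    """Average MFCCs over time to get one feature vector per recording."""
    y, sr = librosa.load(path)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

paths = ["hot_pour_1.wav", "hot_pour_2.wav", "cold_pour_1.wav", "cold_pour_2.wav"]
labels = np.array([1, 1, 0, 0])            # 1 = hot water, 0 = cold water
X = np.vstack([mfcc_vector(p) for p in paths])

scores = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=2)
print("Mean classification accuracy:", scores.mean())
```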

17.
Hum Brain Mapp ; 45(11): e26797, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39041175

ABSTRACT

Speech comprehension is crucial for human social interaction, relying on the integration of auditory and visual cues across various levels of representation. While research has extensively studied multisensory integration (MSI) using idealised, well-controlled stimuli, there is a need to understand this process in response to complex, naturalistic stimuli encountered in everyday life. This study investigated behavioural and neural MSI in neurotypical adults experiencing audio-visual speech within a naturalistic, social context. Our novel paradigm incorporated a broader social situational context, complete words, and speech-supporting iconic gestures, allowing for context-based pragmatics and semantic priors. We investigated MSI in the presence of unimodal (auditory or visual) or complementary, bimodal speech signals. During audio-visual speech trials, compared to unimodal trials, participants more accurately recognised spoken words and showed a more pronounced suppression of alpha power, an indicator of heightened integration load. Importantly, on the neural level, these effects surpassed mere summation of unimodal responses, suggesting non-linear MSI mechanisms. Overall, our findings demonstrate that typically developing adults integrate audio-visual speech and gesture information to facilitate speech comprehension in noisy environments, highlighting the importance of studying MSI in ecologically valid contexts.
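
The claim that the bimodal neural effects "surpassed mere summation of unimodal responses" corresponds to the standard additive-model test sketched below; the per-participant values are hypothetical.

```python
# Additive-model (super-additivity) test: non-linear MSI is inferred when the
# audiovisual response exceeds the sum of the unimodal responses. All numbers
# below are hypothetical per-participant values, not the study's data.
import numpy as np
from scipy import stats

resp_auditory = np.array([0.8, 1.1, 0.9, 1.0, 1.2])
resp_visual = np.array([0.7, 0.9, 0.8, 1.1, 0.9])
resp_audiovisual = np.array([2.1, 2.4, 2.0, 2.6, 2.5])

t, p = stats.ttest_rel(resp_audiovisual, resp_auditory + resp_visual)
print(f"AV vs. A+V: t = {t:.2f}, p = {p:.3f}  (AV > A+V suggests non-linear MSI)")
```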


Subject(s)
Gestures, Speech Perception, Humans, Female, Male, Speech Perception/physiology, Young Adult, Adult, Visual Perception/physiology, Electroencephalography, Comprehension/physiology, Acoustic Stimulation, Speech/physiology, Brain/physiology, Photic Stimulation/methods
19.
Hum Brain Mapp ; 45(10): e26772, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-38962966

ABSTRACT

Humans naturally integrate signals from the olfactory and intranasal trigeminal systems. A tight interplay has been demonstrated between these two systems, and yet the neural circuitry mediating olfactory-trigeminal (OT) integration remains poorly understood. Using functional magnetic resonance imaging (fMRI), combined with psychophysics, this study investigated the neural mechanisms underlying OT integration. Fifteen participants with normal olfactory function performed a localization task with air-puff stimuli, phenylethyl alcohol (PEA; rose odor), or a combination thereof while being scanned. The ability to localize PEA to either nostril was at chance. Yet, its presence significantly improved the localization accuracy of weak, but not strong, air-puffs, when both stimuli were delivered concurrently to the same nostril, but not when different nostrils received the two stimuli. This enhancement in localization accuracy, exemplifying the principles of spatial coincidence and inverse effectiveness in multisensory integration, was associated with multisensory integrative activity in the primary olfactory (POC), orbitofrontal (OFC), superior temporal (STC), inferior parietal (IPC) and cingulate cortices, and in the cerebellum. Multisensory enhancement in most of these regions correlated with behavioral multisensory enhancement, as did increases in connectivity between some of these regions. We interpret these findings as indicating that the POC is part of a distributed brain network mediating integration between the olfactory and trigeminal systems. PRACTITIONER POINTS: Psychophysical and neuroimaging study of olfactory-trigeminal (OT) integration. Behavior, cortical activity, and network connectivity show OT integration. OT integration obeys principles of inverse effectiveness and spatial coincidence. Behavioral and neural measures of OT integration are correlated.
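
Inverse effectiveness, as described here, is commonly quantified with a multisensory enhancement index computed relative to the best unisensory response; the sketch below uses that common formulation with made-up accuracies and is not necessarily the exact metric used in the study.

```python
# Common multisensory enhancement index, shown only to illustrate inverse
# effectiveness (larger relative gains for weaker stimuli). Accuracies are made up,
# and this is not necessarily the metric used in the study.
def enhancement(multisensory, best_unisensory):
    """Percent gain of the multisensory response over the best unisensory response."""
    return 100 * (multisensory - best_unisensory) / best_unisensory

weak_alone, weak_with_odor = 0.55, 0.72      # localization accuracy, weak air-puff
strong_alone, strong_with_odor = 0.90, 0.93  # localization accuracy, strong air-puff

print(f"Weak puff:   {enhancement(weak_with_odor, weak_alone):+.1f}%")
print(f"Strong puff: {enhancement(strong_with_odor, strong_alone):+.1f}%")
```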


Subject(s)
Brain Mapping, Magnetic Resonance Imaging, Olfactory Cortex, Humans, Male, Female, Adult, Olfactory Cortex/physiology, Olfactory Cortex/diagnostic imaging, Young Adult, Olfactory Perception/physiology, Phenylethyl Alcohol, Psychophysics, Trigeminal Nerve/physiology, Trigeminal Nerve/diagnostic imaging, Odorants
20.
Curr Biol ; 34(16): 3616-3631.e5, 2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39019036

ABSTRACT

Effective detection and avoidance of environmental threats are crucial for animals' survival. Integration of sensory cues associated with threats across different modalities can significantly enhance animals' detection and behavioral responses. However, the neural circuit-level mechanisms underlying the modulation of defensive behavior or fear responses under simultaneous multimodal sensory inputs remain poorly understood. Here, we report in mice that bimodal looming stimuli combining coherent visual and auditory signals elicit more robust defensive/fear reactions than unimodal stimuli. These include intensified escape and prolonged hiding, suggesting a heightened defensive/fear state. These responses depend on the activity of the superior colliculus (SC), while its downstream nucleus, the parabigeminal nucleus (PBG), predominantly influences the duration of hiding behavior. The PBG temporally integrates visual and auditory signals and enhances the salience of threat signals by amplifying SC sensory responses through its feedback projection to the visual layer of the SC. Our results suggest an evolutionarily conserved pathway in defense circuits for multisensory integration and cross-modality enhancement.


Subject(s)
Fear, Superior Colliculi, Animals, Superior Colliculi/physiology, Mice, Fear/physiology, Male, Mice, Inbred C57BL, Visual Perception/physiology, Auditory Perception/physiology, Escape Reaction/physiology, Acoustic Stimulation, Photic Stimulation, Female