Results 1-20 of 160
1.
Proc Natl Acad Sci U S A; 119(13): e2117000119, 2022 Mar 29.
Article in English | MEDLINE | ID: mdl-35312362

ABSTRACT

Significance: Syllables are important building blocks of speech. They occur at a rate between 4 and 8 Hz, corresponding to the theta frequency range of neural activity in the cerebral cortex. When listening to speech, theta activity becomes aligned to the syllabic rhythm, presumably aiding the parsing of the speech signal into distinct syllables. However, this neural activity can be influenced not only by sound but also by somatosensory information. Here, we show that presenting vibrotactile signals at the syllabic rate can enhance the comprehension of speech in background noise. We further provide evidence that this multisensory enhancement of speech comprehension reflects the multisensory integration of auditory and tactile information in the auditory cortex.


Subject(s)
Auditory Cortex, Speech Perception, Acoustic Stimulation, Auditory Cortex/physiology, Comprehension/physiology, Speech/physiology, Speech Perception/physiology
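
The enhancement reported above rests on delivering vibrotactile pulses that follow the syllabic (theta, 4-8 Hz) rhythm of speech. A minimal sketch of how such a drive signal could be derived from audio; the Hilbert envelope, filter order, and band edges are assumptions, not the authors' exact pipeline:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def syllabic_vibrotactile_drive(audio, fs, band=(4.0, 8.0)):
    """Derive a theta-band (syllable-rate) drive signal from speech audio.

    The amplitude envelope is extracted with the Hilbert transform and
    band-pass filtered to the assumed 4-8 Hz syllabic range.
    """
    envelope = np.abs(hilbert(audio))           # instantaneous amplitude
    b, a = butter(4, band, btype="bandpass", fs=fs)
    drive = filtfilt(b, a, envelope)            # zero-phase filtering
    return drive / np.max(np.abs(drive))        # normalize for a tactor

# Usage with a synthetic stand-in for speech (noise amplitude-modulated at 5 Hz):
fs = 16000
t = np.arange(0, 5.0, 1 / fs)
speech_like = np.random.randn(t.size) * (1 + np.sin(2 * np.pi * 5 * t))
drive = syllabic_vibrotactile_drive(speech_like, fs)
```
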
2.
Proc Natl Acad Sci U S A; 119(52): e2213847119, 2022 Dec 27.
Article in English | MEDLINE | ID: mdl-36534792

ABSTRACT

Do sensory cortices process more than one sensory modality? To answer this question, scientists have generated a wide variety of studies at distinct spatiotemporal scales in different animal models, often reaching contradictory conclusions. Some conclude that multisensory processing occurs in early sensory cortices; others, that it occurs in areas more central than the sensory cortices. Here, we sought to determine whether sensory neurons process and encode the physical stimulus properties of different modalities (tactile and acoustic). To this end, we designed a bimodal detection task in which the senses of touch and hearing compete from trial to trial. Two rhesus monkeys performed this novel task while neural activity was recorded in areas 3b and 1 of the primary somatosensory cortex (S1). We analyzed the neurons' coding properties and variability, organizing them by the position of their receptive fields relative to the stimulation zone. Our results indicate that neurons in areas 3b and 1 are unimodal, encoding only the tactile modality in both firing rate and variability. Moreover, we found that neurons in area 3b carried more information about the periodic stimulus structure than those in area 1, had lower response and coding latencies, and had a shorter intrinsic timescale. Together, these differences reveal a hidden processing-based hierarchy. Finally, using a powerful nonlinear dimensionality-reduction algorithm, we show that the activity of areas 3b and 1 can be separated, establishing a clear division in the functionality of these two subareas of S1.


Subject(s)
Somatosensory Cortex, Touch Perception, Animals, Somatosensory Cortex/physiology, Touch Perception/physiology, Touch, Parietal Lobe, Sensory Receptor Cells
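
The abstract's final analysis separates population activity from areas 3b and 1 with a nonlinear dimensionality-reduction algorithm that it does not name here. A toy sketch of that logic using t-SNE on simulated firing-rate matrices; both the algorithm choice and the data are assumptions:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Simulated trial-by-neuron firing-rate matrices for two S1 subareas
# (stand-ins; the paper's algorithm and data are not specified here).
rates_3b = rng.normal(20, 4, size=(200, 50))   # 200 trials x 50 neurons
rates_1 = rng.normal(14, 4, size=(200, 50))

X = np.vstack([rates_3b, rates_1])
labels = np.array([0] * 200 + [1] * 200)       # 0 = area 3b, 1 = area 1

# Nonlinear embedding; if the subareas are functionally distinct, their
# population activity should occupy separable regions of the embedding.
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

for area, name in [(0, "area 3b"), (1, "area 1")]:
    centroid = emb[labels == area].mean(axis=0)
    print(name, "embedding centroid:", centroid.round(2))
```
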
3.
Exp Brain Res; 242(2): 451-462, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38165451

ABSTRACT

Bodily resizing illusions typically use visual and/or tactile input to produce a vivid experience of one's body changing size. Naturalistic auditory input (input reflecting the natural sounds of a stimulus) has been used to increase illusory experience during the rubber hand illusion, whilst non-naturalistic auditory input can influence estimations of finger length. We used a non-naturalistic auditory input during a hand-based resizing illusion in augmented reality to assess whether adding an auditory input would increase both subjective illusion strength and performance on related tasks. Forty-four participants completed three conditions: no finger stretching, finger stretching without tactile feedback, and finger stretching with tactile feedback. Half of the participants received auditory input throughout all conditions, whilst the other half did not. After each condition, participants were given one of three performance tasks: a stimulated (right) hand dot-touch task, a non-stimulated (left) hand dot-touch task, or a ruler judgement task. The dot tasks required participants to reach for the location of a virtual dot, whereas the ruler task required estimates of the position of the participant's own finger on a ruler whilst the hand was hidden from view. After all trials, participants completed a questionnaire capturing subjective illusion strength. The addition of auditory input increased subjective illusion strength for manipulations without tactile feedback but not for those with tactile feedback. No facilitatory effects of audio were found for any performance task. We conclude that adding auditory input to illusory finger stretching increased subjective illusory experience in the absence of tactile feedback but did not affect performance-based measures.


Subject(s)
Illusions, Touch Perception, Humans, Touch, Proprioception, Hand, Visual Perception, Body Image
4.
Adv Exp Med Biol; 1437: 121-137, 2024.
Article in English | MEDLINE | ID: mdl-38270857

ABSTRACT

Neural oscillations play a role in sensory processing by coordinating synchronized neuronal activity. Synchronization of gamma oscillations supports local computation of feedforward signals, whereas synchronization of alpha-beta oscillations supports feedback processing across long-range areas. These spatially and spectrally segregated bidirectional signals may be integrated through cross-frequency coupling. Synchronization of neural oscillations has also been proposed as a mechanism for integrating information across multiple sensory modalities. A transient or rhythmic stimulus in one modality can phase-align ongoing neural oscillations in multiple sensory cortices through cross-modal phase reset or cross-modal neural entrainment. Synchronized activity in multiple sensory cortices is more likely to drive stronger activity in downstream areas. Compared with synchronized oscillations, desynchronized oscillations may impede signal processing and may contribute to sensory selection by setting the oscillations in target-related cortex and in distractor-related cortex to opposite phases.


Subject(s)
Cerebral Cortex, Sensation, Gamma Rays, Physical Therapy Modalities, Computer-Assisted Signal Processing
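
Cross-frequency coupling of the kind described above is often quantified as phase-amplitude coupling. A minimal sketch using the mean-vector-length estimator, one common choice; the chapter does not prescribe a specific estimator, and the frequency bands here are assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, fs, lo, hi):
    b, a = butter(4, (lo, hi), btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

def pac_mean_vector_length(x, fs, phase_band=(8, 12), amp_band=(30, 80)):
    """Phase-amplitude coupling: how strongly the amplitude of a fast
    rhythm (here gamma) is modulated by the phase of a slow one (alpha)."""
    phase = np.angle(hilbert(bandpass(x, fs, *phase_band)))
    amp = np.abs(hilbert(bandpass(x, fs, *amp_band)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

# Synthetic signal in which gamma bursts ride on the alpha peak:
fs = 1000
t = np.arange(0, 10, 1 / fs)
alpha = np.sin(2 * np.pi * 10 * t)
gamma = (1 + alpha) * np.sin(2 * np.pi * 60 * t) * 0.3
signal = alpha + gamma + 0.5 * np.random.randn(t.size)
print(f"PAC (coupled signal): {pac_mean_vector_length(signal, fs):.3f}")
print(f"PAC (pure noise):     {pac_mean_vector_length(np.random.randn(t.size), fs):.3f}")
```
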
5.
J Neurosci; 42(46): 8729-8741, 2022 Nov 16.
Article in English | MEDLINE | ID: mdl-36223999

ABSTRACT

To ensure survival in a dynamic environment, the human neocortex monitors input streams from different sensory organs for important sensory events. Which principles govern whether different senses share common or modality-specific brain networks for sensory target detection? We examined whether complex targets evoke sustained supramodal activity whereas simple targets rely on modality-specific networks with short-lived supramodal contributions. In a series of hierarchical multisensory target detection studies (n = 77, of either sex) using EEG, we applied a temporal cross-decoding approach to dissociate the supramodal and modality-specific cortical dynamics elicited by rule-based global and feature-based local sensory deviations within and between the visual, somatosensory, and auditory modalities. Our data show that each sense implements a cortical hierarchy orchestrating supramodal target detection responses, which operate at local and global timescales in successive processing stages. Across sensory modalities, simple feature-based deviations presented in temporal vicinity to a monotonous input stream triggered a mismatch-negativity-like local signal that decayed quickly and early, whereas complex rule-based targets tracked across time evoked a P3b-like global neural response that generalized across a late time window. Converging results from temporal cross-modality decoding analyses across different datasets reveal that global neural responses are sustained in a supramodal higher-order network, whereas local neural responses, canonically thought to rely on modality-specific regions, evolve into short-lived supramodal activity. Together, our findings demonstrate that cortical organization largely follows a gradient in which short-lived modality-specific as well as supramodal processes dominate local responses, whereas higher-order processes encode temporally extended, abstract supramodal information fed forward from modality-specific cortices.
Significance Statement: Each sense supports a cortical hierarchy of processes tracking deviant sensory events at multiple timescales. Conflicting evidence has produced a lively debate around which of these processes are supramodal. Here, we manipulated the temporal complexity of auditory, tactile, and visual targets to determine whether local and global cortical ERP responses to sensory targets share dynamics between the senses. Using temporal cross-decoding, we found that temporally complex targets elicit a sustained supramodal response. Conversely, local responses to temporally confined targets, typically considered modality-specific, rely on early short-lived supramodal activation. Our findings provide evidence for a supramodal gradient supporting sensory target detection in the cortex, with implications for multiple fields in which these responses are studied (e.g., predictive coding, consciousness, and attention).


Subject(s)
Time Perception, Touch Perception, Humans, Brain Mapping/methods, Attention/physiology, Brain/physiology, Touch Perception/physiology, Auditory Perception/physiology, Acoustic Stimulation/methods
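
The temporal cross-decoding approach above is a form of temporal generalization: a decoder trained at one timepoint is tested at all others, so sustained (global, P3b-like) processes generalize across a broad window while transient (local, mismatch-like) ones stay near the diagonal. A schematic sketch on simulated data; the classifier choice and dimensions are assumptions, not the authors' pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Simulated EEG: trials x channels x timepoints, two target classes.
n_trials, n_ch, n_t = 120, 32, 50
X = rng.normal(size=(n_trials, n_ch, n_t))
y = rng.integers(0, 2, n_trials)
X[y == 1, 0, 20:] += 1.2          # class signal sustained from t = 20 on

half = n_trials // 2
# Temporal generalization: train a decoder at each training timepoint and
# test it at every timepoint; sustained processes fill a square off the
# diagonal, transient ones a narrow band along it.
gen = np.zeros((n_t, n_t))
for t_train in range(n_t):
    clf = LogisticRegression(max_iter=1000).fit(X[:half, :, t_train], y[:half])
    for t_test in range(n_t):
        gen[t_train, t_test] = clf.score(X[half:, :, t_test], y[half:])

print(f"mean accuracy, late-by-late block: {gen[20:, 20:].mean():.2f}")
print(f"mean accuracy, early-by-early block: {gen[:20, :20].mean():.2f}")
```
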
6.
J Neurosci; 42(11): 2344-2355, 2022 Mar 16.
Article in English | MEDLINE | ID: mdl-35091504

ABSTRACT

Most perceptual decisions rely on the active acquisition of evidence from the environment, involving stimulation from multiple senses. However, our understanding of the neural mechanisms underlying this process is limited, and it remains elusive how different sensory representations interact in the formation of perceptual decisions. To answer these questions, we used an active sensing paradigm coupled with neuroimaging, multivariate analysis, and computational modeling to probe how the human brain processes multisensory information to make perceptual judgments. Participants of both sexes actively sensed to discriminate two texture stimuli using visual (V) or haptic (H) information or the two sensory cues together (VH). Crucially, information acquisition was under the participants' control: they could choose where to sample information from and for how long on each trial. To understand the neural underpinnings of this process, we first characterized where and when active sensory experience (movement patterns) is encoded in human brain activity (EEG) in the three sensory conditions. Then, to offer a neurocomputational account of active multisensory decision formation, we used these neural representations of active sensing to inform a drift diffusion model of decision-making behavior. This revealed a multisensory enhancement of the neural representation of active sensing, which led to faster and more accurate multisensory decisions. We then dissected the interactions between the V, H, and VH representations using a novel information-theoretic methodology. Ultimately, we identified a synergistic neural interaction between the two unisensory (V, H) representations over contralateral somatosensory and motor locations that predicted multisensory (VH) decision-making performance.
Significance Statement: In real-world settings, perceptual decisions are made during active behaviors, such as crossing the road on a rainy night, and include information from different senses (e.g., car lights, slippery ground). Critically, it remains largely unknown how sensory evidence is combined and translated into perceptual decisions in such active scenarios. Here we address this knowledge gap. First, we show that the simultaneous exploration of information across senses (multi-sensing) enhances the neural encoding of active sensing movements. Second, the neural representation of active sensing modulates the evidence available for the decision, and importantly, multi-sensing yields faster evidence accumulation. Finally, we identify a cross-modal interaction in the human brain that correlates with multisensory performance, constituting a putative neural mechanism for forging active multisensory perception.


Subject(s)
Decision Making, Electroencephalography, Brain/physiology, Decision Making/physiology, Electroencephalography/methods, Female, Humans, Male, Photic Stimulation, Visual Perception/physiology
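
The modeling step above feeds neural representations into a drift diffusion model, in which a higher drift rate yields faster and more accurate decisions. A minimal simulation illustrating that relationship; the drift values are illustrative assumptions standing in for the reported multisensory enhancement:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_ddm(drift, boundary=1.0, noise=1.0, dt=0.001, t_max=3.0):
    """Euler simulation of a drift-diffusion decision: evidence
    accumulates until it hits +boundary (correct) or -boundary (error)."""
    x, t = 0.0, 0.0
    while abs(x) < boundary and t < t_max:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t, x >= boundary

# Assumed drift rates: the multisensory (VH) condition gets a higher
# drift, as a stand-in for the neurally informed enhancement above.
for label, drift in [("V", 0.8), ("H", 0.8), ("VH", 1.4)]:
    sims = [simulate_ddm(drift) for _ in range(2000)]
    rts = np.array([s[0] for s in sims])
    acc = np.mean([s[1] for s in sims])
    print(f"{label}: mean RT = {rts.mean():.2f} s, accuracy = {acc:.2f}")
```
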
7.
Hum Brain Mapp; 44(2): 656-667, 2023 Feb 1.
Article in English | MEDLINE | ID: mdl-36169038

ABSTRACT

Clear evidence demonstrates a supramodal organization of the sensory cortices, with multisensory processing occurring even at early stages of information encoding. Within this context, early recruitment of sensory areas is necessary for the development of fine domain-specific (i.e., spatial or temporal) skills regardless of the sensory modality involved, with auditory areas playing a crucial role in temporal processing and visual areas in spatial processing. Given the domain specificity and the multisensory nature of sensory areas, we hypothesized that the preferential domains of representation (i.e., space and time) of the visual and auditory cortices are also evident in the early processing of multisensory information. We therefore measured event-related potential (ERP) responses of 16 participants performing multisensory spatial and temporal bisection tasks. Audiovisual stimuli occurred at three different spatial positions and time lags, and participants had to evaluate whether the second stimulus was spatially (spatial bisection task) or temporally (temporal bisection task) farther from the first or the third audiovisual stimulus. As predicted, the second audiovisual stimulus of both tasks elicited an early ERP response (time window 50-90 ms) in visual and auditory regions. However, this early ERP component was larger in occipital areas during the spatial bisection task and in temporal regions during the temporal bisection task. Overall, these results confirm the domain specificity of the visual and auditory cortices and reveal that it also selectively modulates cortical activity in response to multisensory stimuli.


Subject(s)
Auditory Perception, Visual Perception, Humans, Visual Perception/physiology, Auditory Perception/physiology, Parietal Lobe, Evoked Potentials, Temporal Lobe, Acoustic Stimulation, Photic Stimulation
8.
Cereb Cortex; 32(21): 4818-4833, 2022 Oct 20.
Article in English | MEDLINE | ID: mdl-35062025

ABSTRACT

The integration of visual and auditory cues is crucial for successful speech processing, especially under adverse conditions. Recent reports have shown that when participants watch muted videos of speakers, the phonological information about the acoustic speech envelope, which is associated with but independent of the speakers' lip movements, is tracked by the visual cortex. However, the speech signal also carries richer acoustic details, for example about the fundamental frequency and the resonant frequencies, whose visuo-phonological transformation could aid speech processing. Here, we investigated the neural basis of the visuo-phonological transformation of these finer-grained acoustic details and assessed how it changes as a function of age. We recorded whole-head magnetoencephalographic (MEG) data while participants watched silent normal (i.e., natural) and reversed videos of a speaker and paid attention to the lip movements. We found that the visual cortex is able to track the unheard natural modulations of the resonant frequencies (or formants) and the pitch (or fundamental frequency) linked to lip movements. Importantly, only the processing of natural unheard formants decreases significantly with age, in the visual and also in the cingulate cortex. This is not the case for the processing of the unheard speech envelope, the fundamental frequency, or the purely visual information carried by lip movements. These results show that unheard spectral fine details (along with the unheard acoustic envelope) are transformed from a merely visual to a phonological representation. Aging especially affects the ability to derive spectral dynamics at formant frequencies. As listening in noisy environments should capitalize on the ability to track spectral fine details, our results provide a novel focus on compensatory processes in such challenging situations.


Subject(s)
Speech Perception, Humans, Acoustic Stimulation, Lip, Speech, Movement
9.
Virtual Real; 27(3): 2043-2057, 2023.
Article in English | MEDLINE | ID: mdl-37614716

ABSTRACT

Research has shown that high trait anxiety can alter the multisensory processing of threat cues (by amplifying the integration of angry faces and voices); however, it remains unknown whether differences in multisensory processing play a role in the psychological response to trauma. This study examined the relationship between multisensory emotion processing and intrusive memories over the seven days following exposure to an analogue trauma in a sample of 55 healthy young adults. We used an adapted version of the trauma film paradigm in which scenes of a car accident were presented in virtual reality rather than as a conventional 2D film. Multisensory processing was assessed before the trauma simulation using a forced-choice emotion recognition paradigm with happy, sad, and angry voice-only, face-only, audiovisual congruent (face and voice expressing matching emotions), and audiovisual incongruent (face and voice expressing different emotions) expressions. We found that increased accuracy in recognising anger (but not happiness or sadness) in the audiovisual condition, relative to the voice- and face-only conditions, was associated with more intrusions following VR trauma. Despite previous results linking trait anxiety and intrusion development, no significant influence of trait anxiety on intrusion frequency was observed. Enhanced integration of threat-related information (i.e., angry faces and voices) could lead to overly threatening appraisals of stressful life events and result in greater intrusion development after trauma. Supplementary Information: The online version contains supplementary material available at 10.1007/s10055-023-00784-1.

10.
Conscious Cogn; 106: 103432, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36372053

ABSTRACT

In body integrity dysphoria (BID), otherwise healthy individuals feel that a part of their physical body does not belong to them despite normal sensorimotor functioning. Theoretical and empirical evidence suggests a weakened integration of the affected body part into higher-order multisensory cortical body networks. Here, we used a multisensory stimulation paradigm in mixed reality to modulate and investigate the multisensory processing underlying body (dis)ownership in individuals with BID of the lower limb. In 20 participants with BID, delay perception and body ownership were measured after introducing delays between the visual and tactile information of viewed stroking applied to the affected and unaffected body parts. Contrary to our predictions, delay perception did not differ between the two body parts. However, specifically for the affected limb, ownership was lower and more strongly modulated by delay. These findings are consistent with the idea of a stronger reliance on online bottom-up sensory signals in BID.


Subject(s)
Illusions, Touch Perception, Humans, Illusions/physiology, Touch Perception/physiology, Touch, Lower Extremity, Body Image, Visual Perception/physiology, Hand/physiology
11.
J Neurophysiol; 124(3): 715-727, 2020 Sep 1.
Article in English | MEDLINE | ID: mdl-32727263

ABSTRACT

The environment is sampled by multiple senses, which are woven together to produce a unified perceptual state. However, optimally unifying such signals requires assigning particular signals to the same or different underlying objects or events. Many prior studies (especially in animals) have assumed fusion of cross-modal information, whereas recent work in humans has begun to probe the appropriateness of this assumption. Here we present results from a novel behavioral task in which both monkeys (Macaca mulatta) and humans localized visual and auditory stimuli and reported their perceived sources through saccadic eye movements. When the locations of visual and auditory stimuli were widely separated, subjects made two saccades; when the two stimuli were presented at the same location, they made only a single saccade. Intermediate levels of separation produced mixed response patterns: a single saccade to an intermediate position on some trials or separate saccades to both locations on others. The distribution of responses was well described by a hierarchical causal inference model that accurately predicted both the explicit "same vs. different" source judgments and the biases in localization of the source(s) under each of these conditions. The results from this task are broadly consistent with prior work in humans across a wide variety of analogous tasks, extending the study of multisensory causal inference to nonhuman primates and to a natural behavioral task with both a categorical assay of the number of perceived sources and a continuous report of the perceived position of the stimuli.
New & Noteworthy: We developed a novel behavioral paradigm for the study of multisensory causal inference in both humans and monkeys and found that both species make causal judgments in the same Bayes-optimal fashion. To our knowledge, this is the first demonstration of behavioral causal inference in animals, and this cross-species comparison lays the groundwork for future experiments using neuronal recording techniques that are impractical or impossible in human subjects.


Subject(s)
Auditory Perception/physiology, Saccades/physiology, Space Perception/physiology, Thinking/physiology, Visual Perception/physiology, Adult, Animals, Eye-Tracking Technology, Female, Humans, Male, Sound Localization/physiology
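
The hierarchical causal inference model referred to above weighs two hypotheses, one common audiovisual source versus two independent sources, and blends their location estimates by the posterior probability of a common cause. A grid-based sketch of that computation in the style of Bayesian causal inference models; all parameter values are illustrative assumptions:

```python
import numpy as np

def causal_inference(x_v, x_a, sigma_v=2.0, sigma_a=8.0,
                     sigma_p=15.0, p_common=0.5):
    """Bayesian causal inference for audiovisual localization,
    evaluated numerically on a spatial grid (degrees)."""
    s = np.linspace(-60, 60, 2001)                 # candidate source positions
    prior = np.exp(-s**2 / (2 * sigma_p**2))
    prior /= prior.sum()
    like_v = np.exp(-(x_v - s)**2 / (2 * sigma_v**2))
    like_a = np.exp(-(x_a - s)**2 / (2 * sigma_a**2))

    # C=1: one source generated both cues; C=2: two independent sources.
    ev_common = np.sum(like_v * like_a * prior)
    ev_indep = np.sum(like_v * prior) * np.sum(like_a * prior)
    post_common = (ev_common * p_common) / (
        ev_common * p_common + ev_indep * (1 - p_common))

    s_fused = np.sum(s * like_v * like_a * prior) / ev_common
    s_v_alone = np.sum(s * like_v * prior) / np.sum(like_v * prior)
    # Model averaging: the reported location blends both hypotheses.
    s_hat_v = post_common * s_fused + (1 - post_common) * s_v_alone
    return post_common, s_hat_v

for sep in (0, 10, 30):
    p_c, s_hat = causal_inference(x_v=0.0, x_a=float(sep))
    print(f"separation {sep} deg: p(common) = {p_c:.2f}, visual report = {s_hat:.1f} deg")
```
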
12.
Exp Brain Res; 238(9): 2019-2029, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32617882

ABSTRACT

Action binding refers to the observation that the perceived time of an action (e.g., a keypress) is shifted toward the distal sensory feedback (usually a sound) triggered by that action. Surprisingly, the role of somatosensory feedback in this phenomenon has been largely ignored. We fill this gap by showing that somatosensory feedback, indexed by keypress peak force, is functional in judging keypress time. Specifically, the strength of somatosensory feedback is positively correlated with reported keypress time when the keypress is not associated with auditory feedback and negatively correlated when the keypress triggers auditory feedback. This result is consistent with the view that reported keypress time is shaped by sensory information from different modalities. Moreover, individual differences in action binding can be explained by a weighting of sensory information between somatosensory and auditory feedback. At the group level, increasing the strength of somatosensory feedback can decrease action binding to a level that is no longer statistically detectable. A multisensory information integration account (between somatosensory and auditory inputs) therefore explains action binding at both the group and individual levels.


Subject(s)
Sensory Feedback, Individuality, Acoustic Stimulation, Humans
13.
Neuroimage; 196: 261-268, 2019 Aug 1.
Article in English | MEDLINE | ID: mdl-30978494

ABSTRACT

Recent studies provide evidence for changes in audiovisual perception, as well as for adaptive cross-modal auditory cortex plasticity, in older individuals with high-frequency hearing impairments (presbycusis). Here we investigated whether these changes facilitate the use of visual information, leading to an increased audiovisual benefit for hearing-impaired individuals listening to speech in noise. We used a naturalistic design in which older participants with varying degrees of high-frequency hearing loss attended to running auditory or audiovisual speech in noise and detected rare target words. Passages containing only visual speech served as a control condition. Simultaneously acquired scalp electroencephalography (EEG) data were used to study cortical speech tracking. Target word detection accuracy was significantly increased in the audiovisual compared with the auditory listening condition. The degree of this audiovisual enhancement was positively related to individual high-frequency hearing loss and to subjectively reported listening effort in challenging daily life situations, which served as a subjective marker of hearing problems. On the neural level, early cortical tracking of the speech envelope was enhanced in the audiovisual condition. Similar to the behavioral findings, individual differences in the magnitude of the enhancement were positively associated with listening effort ratings. Our results therefore suggest that hearing-impaired older individuals make increased use of congruent visual information to compensate for the degraded auditory input.


Subject(s)
Cerebral Cortex/physiopathology, Noise, Presbycusis/physiopathology, Speech Perception/physiology, Visual Perception/physiology, Acoustic Stimulation, Aged, Auditory Threshold, Female, Humans, Male, Middle Aged, Persons With Hearing Impairments, Photic Stimulation
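
Cortical speech tracking of the kind measured above is commonly quantified with a temporal response function: a ridge regression from time-lagged copies of the speech envelope to the EEG. A minimal sketch; the lag window and regularization strength are assumptions, not the authors' exact analysis:

```python
import numpy as np

def fit_trf(envelope, eeg, fs, tmin=-0.1, tmax=0.4, alpha=1.0):
    """Estimate a temporal response function mapping the speech envelope
    to one EEG channel by time-lagged ridge regression."""
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    # Design matrix: one column per lagged copy of the envelope
    # (np.roll wraps at the edges; negligible for long recordings).
    X = np.column_stack([np.roll(envelope, lag) for lag in lags])
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ eeg)
    return lags / fs, w

# Synthetic check: EEG that tracks the envelope with a ~100 ms lag.
fs = 100
rng = np.random.default_rng(3)
env = np.abs(rng.standard_normal(fs * 60))
eeg = 0.7 * np.roll(env, int(0.1 * fs)) + rng.standard_normal(env.size)
times, trf = fit_trf(env, eeg, fs)
print(f"TRF peak at {times[np.argmax(trf)]:.2f} s")
```
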
14.
Neuroimage; 201: 116004, 2019 Nov 1.
Article in English | MEDLINE | ID: mdl-31299368

ABSTRACT

Face-selective and voice-selective brain regions have been shown to represent face identity and voice identity, respectively. Here we investigated whether there are modality-general person-identity representations in the brain that can be driven by either a face or a voice and that invariantly represent naturalistically varying face videos and voice recordings of the same identity. Models of face and voice integration suggest that such representations could exist in multimodal brain regions, and in unimodal regions via direct coupling between face- and voice-selective regions. Therefore, in this study we used fMRI to measure brain activity patterns elicited by the faces and voices of familiar people in face-selective, voice-selective, and person-selective multimodal brain regions. We used representational similarity analysis to (1) compare the representational geometries (i.e., representational dissimilarity matrices) of face- and voice-elicited identities, and (2) investigate the degree to which pattern discriminants for pairs of identities generalise from one modality to the other. We did not find any evidence of similar representational geometries across modalities in any of our regions of interest. However, our results showed that pattern discriminants trained to discriminate pairs of identities from their faces could also discriminate the respective voices (and vice versa) in the right posterior superior temporal sulcus (rpSTS). Our findings suggest that the rpSTS is a person-selective multimodal region that shows a modality-general person-identity representation and integrates face and voice identity information.


Subject(s)
Auditory Perception/physiology, Facial Recognition/physiology, Recognition (Psychology)/physiology, Temporal Lobe/physiology, Voice, Adult, Female, Humans, Magnetic Resonance Imaging, Male, Young Adult
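
Both analyses named above, representational similarity analysis and cross-modal generalization of pattern discriminants, can be sketched compactly. The toy data below build in a shared identity code across modalities, an assumption made so that the cross-modal test can succeed; nothing here reflects the study's actual data:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)

# Simulated rpSTS-like patterns: 8 identities x 20 exemplars x 100 voxels,
# with a shared identity code across faces and voices (an assumption).
identity_code = rng.normal(size=(8, 100))
faces = identity_code[:, None, :] + 0.8 * rng.normal(size=(8, 20, 100))
voices = identity_code[:, None, :] + 0.8 * rng.normal(size=(8, 20, 100))

# (1) RSA: correlate the identity-by-identity dissimilarity geometries.
rdm_face = pdist(faces.mean(axis=1), metric="correlation")
rdm_voice = pdist(voices.mean(axis=1), metric="correlation")
rho, _ = spearmanr(rdm_face, rdm_voice)
print(f"RDM similarity (Spearman rho): {rho:.2f}")

# (2) Cross-modal decoding: train an identity-pair discriminant on faces,
# test it on the corresponding voices.
X_train = np.vstack([faces[0], faces[1]])
X_test = np.vstack([voices[0], voices[1]])
y = np.array([0] * 20 + [1] * 20)
clf = LinearSVC(max_iter=10000).fit(X_train, y)
print(f"face-to-voice generalization accuracy: {clf.score(X_test, y):.2f}")
```
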
15.
Eur J Neurosci; 50(8): 3296-3310, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31077463

ABSTRACT

Adaptation to a visuomotor rotation in a cursor-control task is accompanied by proprioceptive recalibration, whereas the existence of visual recalibration is uncertain and has even been doubted. In the present study, we tested both visual and proprioceptive recalibration; proprioceptive recalibration was not only assessed by means of psychophysical judgments of the perceived position of the hand, but also by an indirect procedure based on movement characteristics. Participants adapted to a gradually introduced visuomotor rotation of 30° by making center-out movements to remembered targets. In subsequent test trials, they made center-out movements without visual feedback or observed center-out motions of a cursor without moving the hand. In each test trial, they judged the endpoint of hand or cursor by matching the position of the hand or of a visual marker, respectively, moving along a semicircular path. This path ran through all possible endpoints of the center-out movements. We observed proprioceptive recalibration of 7.3° (3.1° with the indirect procedure) and a smaller, but significant, visual recalibration of 1.3°. Total recalibration of 8.6° was about half as strong as motor adaptation, the adaptive shift of the movement direction. The evidence of both proprioceptive and visual recalibration was obtained with a judgment procedure that suggests that recalibration is restricted to the type of movement performed during exposure to a visuomotor rotation. Consequently, identical physical positions of the hand can be perceived differently depending on how they have been reached, and similarly identical positions of a cursor on a monitor can be perceived differently.


Subject(s)
Physiological Adaptation, Proprioception, Visual Perception, Biomechanical Phenomena, Sensory Feedback, Female, Hand, Humans, Male, Memory, Motor Activity, Psychophysics, Rotation, Young Adult
16.
J Exp Child Psychol; 178: 317-340, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30384968

ABSTRACT

There are occasions when infants and children have difficulty in processing arbitrary auditory-visual pairings, with auditory input sometimes attenuating visual processing (i.e., auditory dominance). The current research examined possible mechanisms underlying these auditory dominance effects in infants and 4-year-olds. Do auditory dominance effects stem from auditory input attenuating encoding of visual input, from the difficulty of inhibiting auditory-based responses, or from a combination of these factors? In five reported experiments, 4-year-olds (Experiments 1A, 1B, 2A, and 2B) and 14- and 22-month-olds (Experiment 3) were presented with a variety of tasks that required simultaneous processing of auditory and visual input, and then we assessed memory for the visual items at test. Auditory dominance in young children resulted from response competition that children could not resolve. Infants' results were not as robust, but they provided some evidence that nonlinguistic sounds and possibly spoken words may attenuate encoding of visual input. The current findings shed light on mechanisms underlying cross-modal processing and auditory dominance and have implications for many tasks that hinge on the processing of arbitrary auditory-visual pairings.


Subject(s)
Auditory Perception, Sound, Visual Perception, Acoustic Stimulation, Preschool Child, Female, Humans, Infant, Male, Memory, Photic Stimulation, Visual Perception/physiology
17.
J Exp Child Psychol; 180: 141-155, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30655099

ABSTRACT

Although it is well known that attention can modulate multisensory processes in adults and infants, this relationship has not been investigated in school-age children. Attention abilities of 53 children (ages 7-13 years) were assessed using three subscales of the Test of Everyday Attention for Children (TEA-Ch): visuospatial attention (Sky Search [SS]), auditory sustained attention (Score), and audiovisual dual task (SSDT, where the SS and Score tasks are performed simultaneously). Multisensory processes were assessed using the McGurk effect (a verbal illusion where speech perception is altered by vision) and the Stream-Bounce (SB) effect (a nonverbal illusion where visual perception is altered by sound). The likelihood of perceiving both multisensory illusions tended to increase with age. The McGurk effect was significantly more pronounced in children who scored high on the audiovisual dual attention index (SSDT). In contrast, the SB effect was more pronounced in children with higher sustained auditory attention abilities as assessed by the Score index. These relationships between attention and the multisensory illusory percepts could not be explained solely by age or children's intellectual abilities. This study suggests that the interplay between attention and multisensory processing depends on both the nature of the multisensory task and the type of attention needed to effectively merge information across the senses.


Subject(s)
Attention/physiology, Auditory Perception/physiology, Speech Perception/physiology, Visual Perception/physiology, Acoustic Stimulation, Adolescent, Child, Female, Humans, Illusions, Male, Neuropsychological Tests, Photic Stimulation
18.
Eur J Neurosci; 47(7): 800-811, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29461657

ABSTRACT

Human-environment interactions are mediated through the body and occur within the peripersonal space (PPS), the space immediately adjacent to and surrounding the body. The PPS is taken to be a critical interface between the body and the environment, and indeed, body-part-specific PPS remapping has been shown to depend on body-part utilization, such as upper-limb movements in otherwise static observers. How vestibular signals induced by whole-body movement contribute to PPS representation is less well understood. In a series of experiments, we mapped the spatial extension of the PPS around the head while participants underwent passive whole-body rotations inducing vestibular stimulation. Forty-six participants, across three experiments, performed a tactile detection reaction time task while task-irrelevant auditory stimuli approached them. The maximal distance at which the auditory stimulus facilitated tactile reaction time was taken as a proxy for the boundary of peri-head space. The present results indicate two distinct vestibular effects. First, vestibular stimulation speeded tactile detection, indicating vestibular facilitation of somatosensory processing. Second, vestibular stimulation modulated the audio-tactile interaction in peri-head space in a rotation-direction-specific manner: congruent, but not incongruent, audio-vestibular motion stimuli expanded the PPS boundary farther from the body compared with no rotation. These results show that vestibular inputs dynamically update the multisensory delineation of PPS and far space, which may serve to maintain accurate tracking of objects close to the body and to update spatial self-representations.


Subject(s)
Personal Space, Vestibule (Labyrinth)/physiology, Acoustic Stimulation, Adolescent, Adult, Female, Humans, Male, Physical Stimulation, Reaction Time/physiology, Rotation, Space Perception/physiology, Touch Perception/physiology, Young Adult
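
In paradigms like the one above, the PPS boundary is commonly read out as the inflection point of a sigmoid fitted to tactile reaction times as a function of sound distance. A minimal sketch with illustrative (assumed) data, in which a congruent rotation shifts the fitted boundary outward:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(d, rt_near, rt_far, boundary, slope):
    """Tactile RT as a function of sound distance: fast near the body,
    slow far away; the inflection point serves as the PPS boundary."""
    return rt_near + (rt_far - rt_near) / (1 + np.exp(-slope * (d - boundary)))

# Illustrative RTs (ms) at six sound distances (cm); the values are
# assumptions mimicking the facilitation profile described above.
dist = np.array([15, 30, 45, 60, 75, 90], dtype=float)
rt_static = np.array([395, 400, 428, 455, 462, 465], dtype=float)
rt_congruent = np.array([393, 396, 405, 430, 458, 464], dtype=float)

for label, rts in [("static", rt_static), ("congruent rotation", rt_congruent)]:
    p, _ = curve_fit(sigmoid, dist, rts, p0=[390, 465, 50, 0.1])
    print(f"{label}: estimated PPS boundary = {p[2]:.0f} cm")
```
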
19.
J Child Psychol Psychiatry; 59(8): 872-880, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29359802

ABSTRACT

BACKGROUND: Effective multisensory processing develops in infancy and is thought to be important for the perception of unified and multimodal objects and events. Previous research suggests impaired multisensory processing in autism, but its role in the early development of the disorder remains uncertain. Here, using a prospective longitudinal design, we tested whether reduced visual attention to audiovisual synchrony is an infant marker of later-emerging autism diagnosis. METHODS: We studied 10-month-old siblings of children with autism using an eye-tracking task previously used in studies of preschoolers. The task assessed the effect of manipulations of audiovisual synchrony on viewing patterns while the infants were observing point-light displays of biological motion. We analyzed the gaze data recorded in infancy according to diagnostic status at 3 years of age (DSM-5). RESULTS: Ten-month-old infants who later received an autism diagnosis did not orient to audiovisual synchrony expressed within biological motion. In contrast, both low-risk infants and high-risk siblings without autism at follow-up showed a strong preference for this type of information. No group differences were observed in orienting to upright biological motion. CONCLUSIONS: This study suggests that reduced orienting to audiovisual synchrony within biological motion is an early sign of autism. The findings support the view that poor multisensory processing could be an important antecedent marker of this neurodevelopmental condition.


Subject(s)
Attention/physiology, Auditory Perception/physiology, Autism Spectrum Disorder/diagnosis, Motion Perception/physiology, Visual Pattern Recognition/physiology, Autism Spectrum Disorder/physiopathology, Preschool Child, Eye Movement Measurements, Female, Humans, Infant, Longitudinal Studies, Male, Prognosis, Siblings
20.
Hum Brain Mapp; 38(2): 842-854, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27696592

ABSTRACT

Previous studies on visuo-haptic shape processing provide evidence that visually learned shape information can transfer to the haptic domain. In particular, recent neuroimaging studies have shown that visually learned novel objects that were haptically tested recruited parts of the ventral pathway from early visual cortex to the temporal lobe. Interestingly, such tasks show considerable individual variation in cross-modal transfer performance. Here, we investigate whether this individual variation is reflected in the microstructural characteristics of white-matter (WM) pathways. We first trained participants on a fine-grained categorization task with novel shapes in the visual domain, followed by a haptic categorization test. We then correlated visual training performance and haptic test performance, as well as performance on a symbol-coding task requiring visuo-motor dexterity, with the microstructural properties of WM bundles potentially involved in visuo-haptic processing: the inferior longitudinal fasciculus (ILF), the fronto-temporal part of the superior longitudinal fasciculus (SLFft), and the vertical occipital fasciculus (VOF). Behaviorally, haptic categorization performance was good on average but exhibited large inter-individual variability, and it correlated with performance on the symbol-coding task. WM analyses showed that fast visual learners exhibited higher fractional anisotropy (FA) in left SLFft and left VOF. Importantly, haptic test performance (and symbol-coding performance) correlated with FA in ILF and with axial diffusivity in SLFft. These findings provide clear evidence that individual variation in visuo-haptic performance can be linked to the microstructural characteristics of WM pathways.


Subject(s)
Afferent Pathways/physiology, Brain Mapping, Visual Pattern Recognition/physiology, Touch Perception/physiology, Touch/physiology, White Matter/physiology, Adolescent, Adult, Female, Functional Laterality, Humans, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Male, Nerve Net/diagnostic imaging, Physical Stimulation, Visual Perception/physiology, White Matter/diagnostic imaging, Young Adult