Results 1 - 20 of 2,038
1.
Neuroreport ; 35(4): 269-276, 2024 Mar 06.
Article in English | MEDLINE | ID: mdl-38305131

ABSTRACT

This study explored how the human brain perceives stickiness through tactile and auditory channels, especially when presented with congruent or incongruent intensity cues. In our behavioral and functional MRI (fMRI) experiments, we presented participants with adhesive tape stimuli at two different intensities. The congruent condition involved providing stickiness stimuli with matching intensity cues in both auditory and tactile channels, whereas the incongruent condition involved cues of different intensities. Behavioral results showed that participants were able to distinguish between the congruent and incongruent conditions with high accuracy. Through fMRI searchlight analysis, we tested which brain regions could distinguish between congruent and incongruent conditions, and as a result, we identified the superior temporal gyrus, known primarily for auditory processing. Interestingly, we did not observe any significant activation in regions associated with somatosensory or motor functions. This indicates that the brain dedicates more attention to auditory cues than to tactile cues, possibly due to the unfamiliarity of conveying the sensation of stickiness through sound. Our results could provide new perspectives on the complexities of multisensory integration, highlighting the subtle yet significant role of auditory processing in understanding tactile properties such as stickiness.


Subject(s)
Auditory Perception , Magnetic Resonance Imaging , Humans , Acoustic Stimulation/methods , Auditory Perception/physiology , Brain/diagnostic imaging , Brain/physiology , Temporal Lobe , Visual Perception/physiology
2.
PLoS Biol ; 22(2): e3002494, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38319934

ABSTRACT

Effective interactions with the environment rely on the integration of multisensory signals: Our brains must efficiently combine signals that share a common source, and segregate those that do not. Healthy ageing can change or impair this process. This functional magnetic resonance imaging study assessed the neural mechanisms underlying age differences in the integration of auditory and visual spatial cues. Participants were presented with synchronous audiovisual signals at various degrees of spatial disparity and indicated their perceived sound location. Behaviourally, older adults were able to maintain localisation accuracy. At the neural level, they integrated auditory and visual cues into spatial representations along dorsal auditory and visual processing pathways similarly to their younger counterparts but showed greater activations in a widespread system of frontal, temporal, and parietal areas. According to multivariate Bayesian decoding, these areas encoded critical stimulus information beyond that which was encoded in the brain areas commonly activated by both groups. Surprisingly, however, the boost in information provided by these areas with age-related activation increases was comparable across the 2 age groups. This dissociation, between comparable information encoded in brain activation patterns across the 2 age groups but age-related increases in regional blood-oxygen-level-dependent responses, contradicts the widespread notion that older adults recruit new regions as a compensatory mechanism to encode task-relevant information. Instead, our findings suggest that activation increases in older adults reflect nonspecific or modulatory mechanisms related to less efficient or slower processing, or greater demands on attentional resources.


Subject(s)
Brain Mapping , Visual Perception , Humans , Aged , Bayes Theorem , Visual Perception/physiology , Brain/physiology , Attention/physiology , Acoustic Stimulation/methods , Auditory Perception/physiology , Photic Stimulation/methods , Magnetic Resonance Imaging
3.
Autism Res ; 17(2): 280-310, 2024 02.
Article in English | MEDLINE | ID: mdl-38334251

ABSTRACT

Autistic individuals show substantially reduced benefit from observing visual articulations during audiovisual speech perception, a multisensory integration deficit that is particularly relevant to social communication. This has mostly been studied using simple syllabic or word-level stimuli, and it remains unclear how altered lower-level multisensory integration translates to the processing of more complex natural multisensory stimulus environments in autism. Here, functional neuroimaging was used to compare neural correlates of audiovisual gain (AV-gain) in 41 autistic individuals with those of 41 age-matched non-autistic controls when presented with a complex audiovisual narrative. Participants were presented with continuous narration of a story in auditory-alone, visual-alone, and both synchronous and asynchronous audiovisual speech conditions. We hypothesized that previously identified differences in audiovisual speech processing in autism would be characterized by activation differences in brain regions well known to be associated with audiovisual enhancement in neurotypicals. However, our results did not provide evidence for altered processing of the auditory-alone, visual-alone, or audiovisual conditions, or of AV-gain, in regions associated with the respective task when comparing activation patterns between groups. Instead, we found that autistic individuals responded with higher activations in mostly frontal regions where the activation to the experimental conditions was below baseline (de-activations) in the control group. These frontal effects were observed in both unisensory and audiovisual conditions, suggesting that these altered activations were not specific to multisensory processing but reflective of more general mechanisms such as an altered disengagement of Default Mode Network processes during the observation of the language stimulus across conditions.


Subject(s)
Autism Spectrum Disorder , Autistic Disorder , Speech Perception , Adult , Child , Humans , Speech Perception/physiology , Narration , Visual Perception/physiology , Autism Spectrum Disorder/diagnostic imaging , Magnetic Resonance Imaging , Auditory Perception/physiology , Acoustic Stimulation/methods , Photic Stimulation/methods
4.
J Neurosci ; 44(10)2024 Mar 06.
Article in English | MEDLINE | ID: mdl-38199864

ABSTRACT

During communication in real-life settings, our brain often needs to integrate auditory and visual information and at the same time actively focus on the relevant sources of information, while ignoring interference from irrelevant events. The interaction between integration and attention processes remains poorly understood. Here, we use rapid invisible frequency tagging and magnetoencephalography to investigate how attention affects auditory and visual information processing and integration, during multimodal communication. We presented human participants (male and female) with videos of an actress uttering action verbs (auditory; tagged at 58 Hz) accompanied by two movie clips of hand gestures on both sides of fixation (attended stimulus tagged at 65 Hz; unattended stimulus tagged at 63 Hz). Integration difficulty was manipulated by a lower-order auditory factor (clear/degraded speech) and a higher-order visual semantic factor (matching/mismatching gesture). We observed an enhanced neural response to the attended visual information during degraded speech compared to clear speech. For the unattended information, the neural response to mismatching gestures was enhanced compared to matching gestures. Furthermore, signal power at the intermodulation frequencies of the frequency tags, indexing nonlinear signal interactions, was enhanced in the left frontotemporal and frontal regions. Focusing on the left inferior frontal gyrus, this enhancement was specific for the attended information, for those trials that benefitted from integration with a matching gesture. Together, our results suggest that attention modulates audiovisual processing and interaction, depending on the congruence and quality of the sensory input.


Subject(s)
Brain , Speech Perception , Humans , Male , Female , Brain/physiology , Visual Perception/physiology , Magnetoencephalography , Speech/physiology , Attention/physiology , Speech Perception/physiology , Acoustic Stimulation , Photic Stimulation
5.
Cortex ; 170: 26-31, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37926612

ABSTRACT

The famous "Piazza del Duomo" paper, published in Cortex in 1978, inspired a considerable amount of research on visual mental imagery in brain-damaged patients. As a consequence, single-case reports featuring dissociations between perceptual and imagery abilities challenged the prevailing model of visual mental imagery. Here we focus on mental imagery for colors. A case study published in Cortex showed perfectly preserved color imagery in a patient with acquired achromatopsia after bilateral lesions at the borders between the occipital and temporal cortex. Subsequent neuroimaging findings in healthy participants extended and specified this result: color imagery elicited activation in both a domain-general region located in the left fusiform gyrus and the anterior color-biased patch within the ventral temporal cortex, but not in more posterior color-biased patches. Detailed studies of individual neurological patients, such as those often published in Cortex, are still critical to inspire and constrain neurocognitive research and its theoretical models.


Subject(s)
Brain Injuries , Imagination , Humans , Imagination/physiology , Temporal Lobe/physiology , Cerebral Cortex , Imagery, Psychotherapy , Visual Perception/physiology
6.
J Neurosci ; 44(7)2024 Feb 14.
Article in English | MEDLINE | ID: mdl-38129133

ABSTRACT

Neuroimaging studies suggest cross-sensory visual influences in human auditory cortices (ACs). Whether these influences reflect active visual processing in human ACs, which drives neuronal firing and concurrent broadband high-frequency activity (BHFA; >70 Hz), or whether they merely modulate sound processing is still debatable. Here, we presented auditory, visual, and audiovisual stimuli to 16 participants (7 women, 9 men) with stereo-EEG depth electrodes implanted near ACs for presurgical monitoring. Anatomically normalized group analyses were facilitated by inverse modeling of intracranial source currents. Analyses of intracranial event-related potentials (iERPs) suggested cross-sensory responses to visual stimuli in ACs, which lagged the earliest auditory responses by several tens of milliseconds. Visual stimuli also modulated the phase of intrinsic low-frequency oscillations and triggered 15-30 Hz event-related desynchronization in ACs. However, BHFA, a putative correlate of neuronal firing, was not significantly increased in ACs after visual stimuli, not even when they coincided with auditory stimuli. Intracranial recordings demonstrate cross-sensory modulations, but no indication of active visual processing in human ACs.


Subject(s)
Auditory Cortex , Male , Humans , Female , Auditory Cortex/physiology , Acoustic Stimulation/methods , Evoked Potentials/physiology , Electroencephalography/methods , Visual Perception/physiology , Auditory Perception/physiology , Photic Stimulation
7.
Trends Neurosci ; 47(2): 120-134, 2024 02.
Article in English | MEDLINE | ID: mdl-38143202

ABSTRACT

The pulvinar nucleus of the thalamus is a crucial component of the visual system and plays significant roles in sensory processing and cognitive integration. The pulvinar's extensive connectivity with cortical regions allows for bidirectional communication, contributing to the integration of sensory information across the visual hierarchy. Recent findings underscore the pulvinar's involvement in attentional modulation, feature binding, and predictive coding. In this review, we highlight recent advances in clarifying the pulvinar's circuitry and function. We discuss the contributions of the pulvinar to signal modulation across the global cortical network and place these findings within theoretical frameworks of cortical processing, particularly the global neuronal workspace (GNW) theory and predictive coding.


Subject(s)
Pulvinar , Humans , Pulvinar/physiology , Thalamus/physiology , Visual Perception/physiology , Attention/physiology , Sensation
8.
Cortex ; 169: 259-278, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37967476

ABSTRACT

There is a growing interest in the relationship between mental images and attentional templates as both are considered pictorial representations that involve similar neural mechanisms. Here, we investigated the role of mental imagery in the automatic implementation of attentional templates and their effect on involuntary attention. We developed a novel version of the contingent capture paradigm designed to encourage the generation of a new template on each trial and measure contingent spatial capture by a template-matching visual feature (color). Participants were required to search at four different locations for a specific object indicated at the start of each trial. Immediately prior to the search display, color cues were presented surrounding the potential target locations, one of which matched the target color (e.g., red for strawberry). Across three experiments, our task induced a robust contingent capture effect, reflected by faster responses when the target appeared in the location previously occupied by the target-matching cue. Contrary to our predictions, this effect remained consistent regardless of self-reported individual differences in visual mental imagery (Experiment 1, N = 216) or trial-by-trial variation of voluntary imagery vividness (Experiment 2, N = 121). Moreover, contingent capture was observed even among aphantasic participants, who report no imagery (Experiment 3, N = 91). The magnitude of the effect was not reduced in aphantasics compared to a control sample of non-aphantasics, although the two groups reported substantial differences in their search strategy and exhibited differences in overall speed and accuracy. Our results hence establish a dissociation between the generation and implementation of attentional templates for a visual feature (color) and subjectively experienced imagery.


Subject(s)
Attention , Cues , Humans , Attention/physiology , Imagery, Psychotherapy , Self Report , Individuality , Visual Perception/physiology , Reaction Time/physiology , Color Perception/physiology
9.
Brain Lang ; 247: 105359, 2023 12.
Article in English | MEDLINE | ID: mdl-37951157

ABSTRACT

Visual information from a speaker's face enhances auditory neural processing and speech recognition. To determine whether auditory memory can be influenced by visual speech, the degree of auditory neural adaptation to an auditory syllable preceded by an auditory, visual, or audiovisual syllable was examined using EEG. Consistent with previous findings and additional adaptation of auditory neurons tuned to acoustic features, stronger adaptation of N1, P2 and N2 auditory evoked responses was observed when the auditory syllable was preceded by an auditory rather than a visual syllable. However, adaptation was weaker when the auditory syllable was preceded by an audiovisual syllable than by an auditory one, although still stronger than when it was preceded by a visual syllable, and N1 and P2 latencies were longer in the audiovisual case. These results further demonstrate that visual speech acts on auditory memory but suggest competing visual influences in the case of audiovisual stimulation.


Subject(s)
Speech Perception , Humans , Speech Perception/physiology , Speech , Electroencephalography , Visual Perception/physiology , Auditory Perception/physiology , Evoked Potentials, Auditory/physiology , Acoustic Stimulation , Photic Stimulation
10.
Brain Res Bull ; 205: 110817, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37989460

ABSTRACT

Sensory deprivation can offset the balance of audio versus visual information in multimodal processing. Such a phenomenon could persist for children born deaf, even after they receive cochlear implants (CIs), and could potentially explain why one modality is given priority over the other. Here, we recorded cortical responses to a single speaker uttering two syllables, presented in audio-only (A), visual-only (V), and audio-visual (AV) modes. Electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) were successively recorded in seventy-five school-aged children. Twenty-five were children with normal hearing (NH) and fifty wore CIs, among whom 26 had relatively high language abilities (HL) comparable to those of NH children, while 24 others had low language abilities (LL). In EEG data, visual-evoked potentials were captured in occipital regions, in response to V and AV stimuli, and they were accentuated in the HL group compared to the LL group (the NH group being intermediate). Close to the vertex, auditory-evoked potentials were captured in response to A and AV stimuli and reflected a differential treatment of the two syllables but only in the NH group. None of the EEG metrics revealed any interaction between group and modality. In fNIRS data, each modality induced a corresponding activity in visual or auditory regions, but no group difference was observed in A, V, or AV stimulation. The present study did not reveal any sign of abnormal AV integration in children with CI. An efficient multimodal integrative network (at least for rudimentary speech materials) is clearly not a sufficient condition to exhibit good language and literacy.


Subject(s)
Cochlear Implants , Deafness , Speech Perception , Child , Humans , Speech Perception/physiology , Visual Perception/physiology , Acoustic Stimulation , Electroencephalography
11.
Brain Res ; 1821: 148582, 2023 12 15.
Article in English | MEDLINE | ID: mdl-37717887

ABSTRACT

Conscious experiences normally result from the flow of external input into our sensory systems. However, we can also create conscious percepts independently of sensory stimulation. These internally generated percepts are referred to as mental images, and they have many similarities with real visual percepts. Consequently, mental imagery is often referred to as "seeing in the mind's eye". While the neural basis of imagery has been widely studied, the interaction between internal and external sources of visual information has received little interest. Here we examined this question by using fMRI to record brain activity of healthy human volunteers while they were performing visual imagery that was distracted with a concurrent presentation of a visual stimulus. Multivariate pattern analysis (MVPA) was used to identify the brain basis of this interaction. Visual imagery was reflected in several brain areas in ventral temporal, lateral occipitotemporal, and posterior frontal cortices, with a left-hemisphere dominance. The key finding was that imagery content representations in the left lateral occipitotemporal cortex were disrupted when a visual distractor was presented during imagery. Our results thus demonstrate that the representations of internal and external visual information interact in brain areas associated with the encoding of visual objects and shapes.


Subject(s)
Brain , Imagination , Humans , Imagination/physiology , Brain/physiology , Cerebral Cortex , Brain Mapping , Imagery, Psychotherapy , Magnetic Resonance Imaging , Visual Perception/physiology
12.
Multisens Res ; 36(6): 527-556, 2023 06 06.
Article in English | MEDLINE | ID: mdl-37582519

ABSTRACT

Atypical sensory processing is now considered a diagnostic feature of autism. Although multisensory integration (MSI) may have cascading effects on the development of higher-level skills such as socio-communicative functioning, there is a clear lack of understanding of how autistic individuals integrate multiple sensory inputs. Multisensory dynamic information is a more ecological construct than static stimuli, reflecting naturalistic sensory experiences, given that our environment involves moving stimuli in more than one sensory modality at a time. In particular, movement in depth conveys crucial social (approaching to interact) and non-social (avoiding threats or collisions) information. As autistic characteristics are distributed on a spectrum across clinical and general populations, our work aimed to explore the multisensory integration of depth cues in the autistic personality spectrum, using a go/no-go detection task. The autistic profile of 38 participants from the general population was assessed using questionnaires extensively used in the literature. Participants performed a detection task on auditory and/or visual stimuli moving in depth, compared to static stimuli. We found that subjects with high autistic traits overreacted to depth movement and exhibited faster reaction times to audiovisual cues, particularly when the audiovisual stimuli were looming and/or were presented at a fast speed. These results provide evidence of sensory particularities in people with high autistic traits and suggest that low-level stages of multisensory integration could operate differently all along the autistic personality spectrum.


Subject(s)
Autistic Disorder , Humans , Autistic Disorder/diagnosis , Auditory Perception/physiology , Visual Perception/physiology , Acoustic Stimulation/methods , Photic Stimulation/methods
13.
Cortex ; 166: 338-347, 2023 09.
Article in English | MEDLINE | ID: mdl-37481856

ABSTRACT

Different individuals experience varying degrees of vividness in their visual mental images. The distribution of these variations across different imagery domains, such as object shape, color, written words, faces, and spatial relationships, remains unknown. To address this issue, we conducted a study with 117 healthy participants who reported different levels of imagery vividness. Of these participants, 44 reported experiencing absent or nearly absent visual imagery, a condition known as "aphantasia". These individuals were compared to those with typical (N = 42) or unusually vivid (N = 31) imagery ability. We used an online version of the French-language Batterie Imagination-Perception (eBIP), which consists of tasks tapping each of the above-mentioned domains, both in visual imagery and in visual perception. We recorded the accuracy and response times (RTs) of participants' responses. Aphantasic participants reached similar levels of accuracy on all tasks compared to the other groups (Bayesian repeated measures ANOVA, BF = .02). However, their RTs were slower in both imagery and perceptual tasks (BF = 266), and they had lower confidence in their responses on perceptual tasks (BF = 7.78e5). A Bayesian regression analysis revealed that there was an inverse correlation between subjective vividness and RTs for the entire participant group: higher levels of vividness were associated with faster RTs. The pattern was similar in all the explored domains. The findings suggest that individuals with congenital aphantasia experience a slowing in processing visual information in both imagery and perception, but the precision of their processing remains unaffected. The observed performance pattern lends support to the hypotheses that congenital aphantasia is primarily a deficit of phenomenal consciousness, or that it employs alternative strategies other than visualization to access preserved visual information.


Subject(s)
Imagination , Visual Perception , Humans , Bayes Theorem , Visual Perception/physiology , Imagination/physiology , Imagery, Psychotherapy/methods , Consciousness
14.
Neuroimage ; 278: 120271, 2023 09.
Article in English | MEDLINE | ID: mdl-37442310

ABSTRACT

Humans have the unique ability to decode the rapid stream of language elements that constitute speech, even when it is contaminated by noise. Two reliable observations about noisy speech perception are that seeing the face of the talker improves intelligibility and the existence of individual differences in the ability to perceive noisy speech. We introduce a multivariate BOLD fMRI measure that explains both observations. In two independent fMRI studies, clear and noisy speech was presented in visual, auditory and audiovisual formats to thirty-seven participants who rated intelligibility. An event-related design was used to sort noisy speech trials by their intelligibility. Individual-differences multidimensional scaling was applied to fMRI response patterns in superior temporal cortex and the dissimilarity between responses to clear speech and noisy (but intelligible) speech was measured. Neural dissimilarity was less for audiovisual speech than auditory-only speech, corresponding to the greater intelligibility of noisy audiovisual speech. Dissimilarity was less in participants with better noisy speech perception, corresponding to individual differences. These relationships held for both single word and entire sentence stimuli, suggesting that they were driven by intelligibility rather than the specific stimuli tested. A neural measure of perceptual intelligibility may aid in the development of strategies for helping those with impaired speech perception.


Subject(s)
Speech Perception , Speech , Humans , Magnetic Resonance Imaging , Individuality , Visual Perception/physiology , Speech Perception/physiology , Temporal Lobe/diagnostic imaging , Temporal Lobe/physiology , Speech Intelligibility , Acoustic Stimulation/methods
15.
Conscious Cogn ; 113: 103548, 2023 08.
Article in English | MEDLINE | ID: mdl-37451040

ABSTRACT

Aphantasia is the experience of having little to no visual imagery. We assessed the prevalence rate of aphantasia in 5,010 people from the general population of adults in the United States through self-report and responses to two visual imagery scales. The self-reported prevalence rate of aphantasia was 8.9% in this sample. However, not all participants who reported themselves as aphantasic showed low-imagery profiles on the questionnaire scales, and scale prevalence was much lower (1.5%). Self-reported aphantasic individuals reported lower dream frequencies and self-talk and showed poorer memory performance compared to individuals who reported average and high mental imagery. Self-reported aphantasic individuals showed a greater preference for written instruction compared to video instruction for learning a hypothetical new task although there were differences for men and women in this regard. Categorizing aphantasia using a scale measure and relying on self-identification may provide a more consistent picture of who lacks visual imagery.


Subject(s)
Imagination , Task Performance and Analysis , Male , Adult , Humans , Female , Imagination/physiology , Self Report , Prevalence , Cognition/physiology , Visual Perception/physiology
16.
Trends Hear ; 27: 23312165221076681, 2023.
Article in English | MEDLINE | ID: mdl-37377212

ABSTRACT

The reduction in spectral resolution by cochlear implants oftentimes requires complementary visual speech cues to facilitate understanding. Despite substantial clinical characterization of auditory-only speech measures, relatively little is known about the audiovisual (AV) integrative abilities that most cochlear implant (CI) users rely on for daily speech comprehension. In this study, we tested AV integration in 63 CI users and 69 normal-hearing (NH) controls using the McGurk and sound-induced flash illusions. To our knowledge, this study is the largest to date measuring the McGurk effect in this population and the first that tests the sound-induced flash illusion (SIFI). When presented with conflicting AV speech stimuli (i.e., the phoneme "ba" dubbed onto the viseme "ga"), we found that 55 CI users (87%) reported a fused percept of "da" or "tha" on at least one trial. After applying an error correction based on unisensory responses, we found that among those susceptible to the illusion, CI users experienced lower fusion than controls, a result that was concordant with results from the SIFI, where the pairing of a single circle flashing on the screen with multiple beeps resulted in fewer illusory flashes for CI users. While illusion perception in these two tasks appears to be uncorrelated among CI users, we identified a negative correlation in the NH group. Because neither illusion appears to provide further explanation of variability in CI outcome measures, further research is needed to determine how these findings relate to CI users' speech understanding, particularly in ecological listening conditions that are naturally multisensory.


Subject(s)
Cochlear Implants , Illusions , Speech Perception , Humans , Speech Perception/physiology , Illusions/physiology , Visual Perception/physiology , Auditory Perception/physiology , Photic Stimulation , Acoustic Stimulation
17.
Behav Res Ther ; 165: 104311, 2023 06.
Article in English | MEDLINE | ID: mdl-37037182

ABSTRACT

Bilateral eye movement (EM) is a critical component in eye movement desensitization and reprocessing (EMDR), an effective treatment for post-traumatic stress disorder. However, the role of bilateral EM in alleviating trauma-related symptoms is unclear. Here we hypothesize that bilateral EM selectively disrupts the perceptual representation of traumatic memories. We used the trauma film paradigm as an analog for trauma experience. Nonclinical participants viewed trauma films followed by a bilateral EM intervention or a static Fixation period as a control. Perceptual and semantic memories for the film were assessed with different measures. Results showed a significant decrease in perceptual memory recognition shortly after the EM intervention and subsequently in the frequency and vividness of film-related memory intrusions across one week, relative to the Fixation condition. The EM intervention did not affect the explicit recognition of semantic memories, suggesting a dissociation between perceptual and semantic memory disruption. Furthermore, the EM intervention effectively reduced psychophysiological affective responses, including the skin conductance response and pupil size, to film scenes and subjective affective ratings of film-related intrusions. Together, bilateral EMs effectively reduce the perceptual representation and affective response of trauma-related memories. Further theoretical developments are needed to elucidate the mechanism of bilateral EMs in trauma treatment.


Subject(s)
Eye Movements , Memory , Psychological Trauma , Visual Perception , Eye Movements/physiology , Memory/physiology , Psychological Trauma/physiopathology , Humans , Affect , Male , Female , Adolescent , Young Adult , Adult , Self Report , Surveys and Questionnaires , Emotions , Visual Perception/physiology , Recognition, Psychology/physiology , Fixation, Ocular/physiology , Eye Movement Desensitization Reprocessing , Stress Disorders, Post-Traumatic/physiopathology
18.
J Cogn Neurosci ; 35(6): 1045-1060, 2023 06 01.
Article in English | MEDLINE | ID: mdl-37043235

ABSTRACT

Visual perception and mental imagery have been shown to share a hierarchical topological visual structure of neural representation, despite the existence of dissociation of neural substrate between them in function and structure. However, we have limited knowledge about how the visual hierarchical cortex is involved in visual perception and visual imagery in a unique and shared fashion. In this study, a data set including a visual perception and an imagery experiment with human participants was used to train 2 types of voxel-wise encoding models. These models were based on Gabor features and voxel activity patterns of high-level visual cortex (i.e., fusiform face area, parahippocampal place area, and lateral occipital complex) to predict activity in the early visual cortex (EVC, i.e., V1, V2, V3) during perception, and then tested with respect to the generalization of these models to mental imagery. Our results showed that during perception and imagery, activities in the EVC could be independently predicted by the Gabor features and activity of high-level visual cortex via voxel-wise encoding models, which suggested that perception and imagery might share neural representation in the EVC. We further found Gabor-specific and non-Gabor-specific patterns of neural response to stimuli in the EVC, which were shared by perception and imagery. These findings provide insight into the mechanisms of how visual perception and imagery share representation in the EVC.


Subject(s)
Imagination , Visual Cortex , Humans , Imagination/physiology , Visual Perception/physiology , Visual Cortex/diagnostic imaging , Visual Cortex/physiology , Magnetic Resonance Imaging
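The abstract above describes training a voxel-wise encoding model on perception data and testing its generalization to imagery. A minimal sketch of that analysis pattern is below, using closed-form ridge regression on simulated data; all dimensions, names, and the regularization value are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ridge(X, Y, alpha=1.0):
    """Closed-form ridge regression: one weight column per voxel."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ Y)

# Simulated data: 200 perception trials, 50 imagery trials,
# 30 stimulus features (e.g., Gabor responses), 100 "EVC" voxels
# sharing one linear code across both conditions.
W_true = rng.normal(size=(30, 100))
X_perc = rng.normal(size=(200, 30))
Y_perc = X_perc @ W_true + 0.1 * rng.normal(size=(200, 100))
X_imag = rng.normal(size=(50, 30))
Y_imag = X_imag @ W_true + 0.1 * rng.normal(size=(50, 100))

W = fit_ridge(X_perc, Y_perc)   # train on perception only
pred = X_imag @ W               # predict held-out imagery activity

# Per-voxel prediction accuracy (Pearson r), as in typical encoding analyses.
r = np.array([np.corrcoef(pred[:, v], Y_imag[:, v])[0, 1]
              for v in range(Y_imag.shape[1])])
print(round(float(r.mean()), 2))
```

High cross-condition prediction accuracy in a setup like this is what motivates the shared-representation interpretation: a model fit only to perception still accounts for imagery responses.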
19.
Exp Brain Res ; 241(4): 1021-1039, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36928694

ABSTRACT

Recent evidence suggests that imagined auditory and visual sensory stimuli can be integrated with real sensory information from a different sensory modality to change the perception of external events via cross-modal multisensory integration mechanisms. Here, we explored whether imagined voluntary movements can integrate visual and proprioceptive cues to change how we perceive our own limbs in space. Participants viewed a robotic hand wearing a glove repetitively moving its right index finger up and down at a frequency of 1 Hz, while they imagined executing the corresponding movements synchronously or asynchronously (kinesthetic-motor imagery); electromyography (EMG) from the participants' right index flexor muscle confirmed that the participants kept their hand relaxed while imagining the movements. The questionnaire results revealed that the synchronously imagined movements elicited illusory ownership and a sense of agency over the moving robotic hand-the moving rubber hand illusion-compared with asynchronously imagined movements; individuals who affirmed experiencing the illusion with real synchronous movement also did so with synchronous imagined movements. The results from a proprioceptive drift task further demonstrated a shift in the perceived location of the participants' real hand toward the robotic hand in the synchronous versus the asynchronous motor imagery condition. These results suggest that kinesthetic motor imagery can be used to replace veridical congruent somatosensory feedback from a moving finger in the moving rubber hand illusion to trigger illusory body ownership and agency, but only if the temporal congruence rule of the illusion is obeyed. This observation extends previous studies on the integration of mental imagery and sensory perception to the case of multisensory bodily awareness, which has potentially important implications for research into embodiment of brain-computer interface controlled robotic prostheses and computer-generated limbs in virtual reality.


Subject(s)
Illusions , Touch Perception , Humans , Illusions/physiology , Touch Perception/physiology , Feedback, Sensory , Hand/physiology , Fingers , Proprioception/physiology , Visual Perception/physiology , Body Image
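The proprioceptive drift task mentioned in the abstract is typically scored as the shift in where participants point to their felt hand position before versus after stimulation. The sketch below shows that standard scoring logic with made-up numbers; the function name and all values are illustrative assumptions, not the study's data.

```python
# Proprioceptive drift scoring (illustrative): participants indicate their
# felt hand position (cm, relative to the true position) before and after
# the imagery period; drift is the mean post-minus-pre shift, with positive
# values meaning a shift toward the robotic hand.

def proprioceptive_drift(pre_cm, post_cm):
    """Mean pointing shift in cm; positive = toward the robotic hand."""
    pre = sum(pre_cm) / len(pre_cm)
    post = sum(post_cm) / len(post_cm)
    return post - pre

# Hypothetical pointing judgments for one participant.
sync_drift = proprioceptive_drift([0.2, -0.1, 0.0], [1.4, 1.1, 1.6])
async_drift = proprioceptive_drift([0.1, 0.0, -0.2], [0.3, 0.1, 0.2])

# The reported pattern: larger drift after synchronous than asynchronous
# imagined movements.
print(sync_drift > async_drift)
```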
20.
Psychophysiology ; 60(8): e14295, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36966486

ABSTRACT

Efference copy-based forward model mechanisms may help us to distinguish between self-generated and externally generated sensory consequences. Previous studies have shown that self-initiation modulates neural and perceptual responses to identical stimulation. For example, event-related potentials (ERPs) elicited by tones that follow a button press are reduced in amplitude relative to ERPs elicited by passively attended tones. However, previous EEG studies investigating visual stimuli in this context are rare, provide inconclusive results, and lack adequate control conditions with passive movements. Furthermore, although self-initiation is known to modulate behavioral responses, it is not known whether differences in ERP amplitude also reflect differences in the perception of sensory outcomes. In this study, participants viewed visual stimuli consisting of gray discs that followed either active button presses or passive button presses in which an electromagnet moved the participant's finger. Each button press was followed by two discs presented 500-1250 ms apart, and participants judged which of the two was more intense. Early components of the primary visual response (N1 and P2) over the occipital electrodes were suppressed in the active condition. Interestingly, suppression in the intensity judgment task was only correlated with suppression of the visual P2 component. These data support the notion of efference copy-based forward model predictions in the visual sensory modality, but especially later processes (P2) seem to be perceptually relevant. Taken together, the results challenge the assumption that N1 differences reflect perceptual suppression and emphasize the relevance of the P2 ERP component.


Subject(s)
Electroencephalography , Evoked Potentials, Auditory , Humans , Evoked Potentials, Auditory/physiology , Evoked Potentials/physiology , Fingers , Perception , Auditory Perception/physiology , Visual Perception/physiology , Acoustic Stimulation/methods
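The ERP comparison described above boils down to averaging stimulus-locked EEG epochs per condition and comparing component amplitudes in fixed time windows. Below is a hedged sketch of that logic on simulated data; the sampling rate, component latencies, window bounds, and suppression factor are all assumptions for illustration, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 500                            # sampling rate (Hz), assumed
t = np.arange(-0.1, 0.4, 1 / fs)    # epoch from -100 to 400 ms

def simulated_epochs(gain, n=60):
    """Evoked response (N1 ~100 ms, P2 ~200 ms) scaled by `gain`, plus noise."""
    evoked = (-2.0 * np.exp(-((t - 0.10) ** 2) / (2 * 0.01 ** 2))
              + 3.0 * np.exp(-((t - 0.20) ** 2) / (2 * 0.02 ** 2)))
    return gain * evoked + 0.5 * rng.normal(size=(n, t.size))

def erp(epochs):
    """Average over trials to obtain the event-related potential."""
    return epochs.mean(axis=0)

def mean_amplitude(erp_wave, lo, hi):
    """Mean amplitude of the ERP within a component time window (s)."""
    mask = (t >= lo) & (t <= hi)
    return erp_wave[mask].mean()

erp_passive = erp(simulated_epochs(gain=1.0))
erp_active = erp(simulated_epochs(gain=0.7))   # self-initiation suppression

# Compare P2 amplitude (window around 200 ms, assumed) across conditions.
p2_passive = mean_amplitude(erp_passive, 0.18, 0.22)
p2_active = mean_amplitude(erp_active, 0.18, 0.22)
print(p2_active < p2_passive)
```

The study's additional step, correlating per-participant ERP suppression with suppression in the intensity-judgment task, would then relate these amplitude differences to behavior.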