Results 1 - 15 of 15
1.
Cognition; 254: 105970, 2024 Oct 04.
Article in English | MEDLINE | ID: mdl-39368349

ABSTRACT

Our perceptual experience is generally framed in multisensory environments abundant in predictive information. Previous research on statistical learning has shown that humans can learn regularities in different sensory modalities in parallel, but it has not yet been determined whether multisensory predictions are generated through modality-specific predictive mechanisms or instead rely on a supra-modal predictive system. Here, across two experiments, we tested these hypotheses by presenting participants with concurrent pairs of predictable auditory and visual low-level stimuli (i.e., tones and gratings). In different experimental blocks, participants had to attend to the stimuli in one modality while ignoring stimuli from the other sensory modality (distractors), and perform a perceptual discrimination task on the second stimulus of the attended modality (targets). Orthogonal to the task goal, both the attended and unattended pairs followed transitional probabilities, so targets and distractors could be expected or unexpected. We found that participants performed better for expected than for unexpected targets. This effect generalized to the distractors, but only when the relevant targets were expected. Such interactive effects suggest that predictions may be gated by a supra-modal system with shared resources across sensory modalities that are distributed according to their respective behavioural relevance.
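For readers unfamiliar with this kind of design, a transitional-probability structure like the one described above can be sketched in a few lines. The stimulus labels, the 0.75 probability, and the pairing scheme below are illustrative assumptions, not the study's actual parameters:

```python
import random

def make_pairs(n_trials, p_expected=0.75, seed=0):
    """Generate leading/trailing stimulus pairs under a transitional probability.

    Labels and the default probability are hypothetical stand-ins for the
    tones/gratings and pairing scheme used in the actual experiment.
    """
    rng = random.Random(seed)
    transitions = {"tone_A": "tone_B", "tone_C": "tone_D"}  # learned pairings
    pairs = []
    for _ in range(n_trials):
        lead = rng.choice(sorted(transitions))
        if rng.random() < p_expected:
            trail = transitions[lead]  # expected trailing stimulus
        else:
            # unexpected: the trailing stimulus paired with the other leader
            trail = next(v for k, v in transitions.items() if k != lead)
        pairs.append((lead, trail))
    return pairs
```

Over many trials, roughly 75% of pairs follow the learned transitions, so the trailing stimulus is statistically predictable from the leading one without the pairing ever being task-relevant.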

2.
Cereb Cortex; 33(13): 8300-8311, 2023 Jun 20.
Article in English | MEDLINE | ID: mdl-37005064

ABSTRACT

The human brain is capable of using statistical regularities to predict future inputs. In the real world, such inputs typically comprise a collection of objects (e.g., a forest is made up of numerous trees). The present study aimed to investigate whether perceptual anticipation relies on lower-level or higher-level information. Specifically, we examined whether the human brain anticipates each object in a scene individually or anticipates the scene as a whole. To explore this issue, we first trained participants to associate co-occurring objects within fixed spatial arrangements. Meanwhile, participants implicitly learned temporal regularities between these displays. We then used fMRI to test how spatial and temporal violations of the structure modulated behavior and neural activity in the visual system. We found that participants showed a behavioral advantage of temporal regularities only when the displays conformed to their previously learned spatial structure, demonstrating that humans form configuration-specific temporal expectations instead of predicting individual objects. Similarly, we found suppression of neural responses for temporally expected compared with temporally unexpected objects in lateral occipital cortex only when the objects were embedded within expected configurations. Overall, our findings indicate that humans form expectations about object configurations, demonstrating the prioritization of higher-level over lower-level information in temporal expectation.


Subjects
Pattern Recognition, Visual; Trees; Humans; Pattern Recognition, Visual/physiology; Occipital Lobe/physiology; Learning; Magnetic Resonance Imaging; Brain Mapping; Forests; Visual Perception/physiology; Photic Stimulation
3.
Curr Biol; 33(9): 1836-1843.e6, 2023 May 08.
Article in English | MEDLINE | ID: mdl-37060906

ABSTRACT

Computational models and in vivo studies in rodents suggest that the emergence of gamma activity (40-140 Hz) during memory encoding and retrieval is coupled to opposed-phase states of the underlying hippocampal theta rhythm (4-9 Hz).1,2,3,4,5,6,7,8,9,10 However, direct evidence for whether human hippocampal gamma-modulated oscillatory activity in memory processes is coupled to opposed-phase states of the ongoing theta rhythm remains elusive. Here, we recorded local field potentials (LFPs) directly from the hippocampus of 10 patients with epilepsy, using depth electrodes. We used a memory encoding and retrieval task in which trial-unique sequences of pictures depicting real-life episodes were presented and, 24 h later, participants were asked to recall them upon the appearance of the first picture of the encoded episodic sequence. We found theta-to-gamma cross-frequency coupling that was specific to the hippocampus during both the encoding and retrieval of episodic memories. We also revealed that gamma was coupled to opposing theta phases during encoding and recall. Additionally, the degree of theta-gamma phase opposition between encoding and recall was associated with participants' memory performance: gamma power was modulated by theta phase for both remembered and forgotten trials, but only for remembered trials did the dominant theta phase differ between encoding and recall. The current results offer direct empirical evidence in support of hippocampal theta-gamma phase opposition models in human long-term memory and provide fundamental insights into mechanistic predictions derived from computational and animal work, thereby contributing to establishing similarities and differences across species.
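The theta-gamma coupling described above is commonly quantified with a mean-vector-length statistic (Canolty-style). The sketch below assumes the theta phase and gamma amplitude time series have already been extracted (e.g., by band-pass filtering and a Hilbert transform); it is an illustration of the measure, not the authors' analysis pipeline:

```python
import cmath
from math import pi, cos

def mean_vector_length(theta_phase, gamma_amp):
    """Phase-amplitude coupling strength: length of the mean complex vector
    formed by gamma amplitude at each theta phase. Near 0 = no coupling."""
    n = len(theta_phase)
    vec = sum(a * cmath.exp(1j * p) for p, a in zip(theta_phase, gamma_amp)) / n
    return abs(vec)

# Toy example: gamma amplitude peaking at theta phase 0 yields strong coupling,
# while phase-independent gamma amplitude yields none.
phases = [2 * pi * k / 1000 for k in range(1000)]
coupled = mean_vector_length(phases, [1 + cos(p) for p in phases])
flat = mean_vector_length(phases, [1.0] * 1000)
```

The complex mean also carries the preferred theta phase (its angle), which is what allows the comparison of encoding versus recall phases described in the abstract.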


Subjects
Memory, Episodic; Animals; Humans; Mental Recall; Theta Rhythm; Hippocampus; Memory, Long-Term
4.
Commun Biol; 6(1): 12, 2023 Jan 06.
Article in English | MEDLINE | ID: mdl-36604455

ABSTRACT

Sounds enhance the detection of visual stimuli while concurrently biasing an observer's decisions. To investigate the neural mechanisms that underlie such multisensory interactions, we decoded time-resolved Signal Detection Theory sensitivity and criterion parameters from magnetoencephalographic recordings of participants who performed a visual detection task. We found that sounds improved visual detection sensitivity by enhancing the accumulation and maintenance of perceptual evidence over time. Meanwhile, criterion decoding analyses revealed that sounds induced brain activity patterns resembling those evoked by an actual visual stimulus. These two complementary mechanisms of audiovisual interplay differed in their automaticity: whereas the sound-induced enhancement in visual sensitivity depended on participants being actively engaged in a detection task, sounds activated the visual cortex irrespective of task demands, potentially inducing visual illusory percepts. These results challenge the classical assumption that sound-induced increases in false alarms exclusively reflect decision-level biases.
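The Signal Detection Theory parameters referred to above have standard closed forms. A minimal sketch computing them from hit and false-alarm rates (the study itself decodes these parameters from MEG activity, which is far more involved):

```python
from math import erf, sqrt

def z(p):
    """Inverse standard-normal CDF via bisection (avoids external dependencies)."""
    lo, hi = -8.0, 8.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if 0.5 * (1 + erf(mid / sqrt(2))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def sdt_params(hit_rate, fa_rate):
    """Sensitivity d' = z(H) - z(F); criterion c = -(z(H) + z(F)) / 2."""
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion
```

In these terms, a sound that genuinely improves perception raises d', while a sound that merely biases the observer toward reporting "yes" shifts c without changing d' — the distinction the abstract's final sentence turns on.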


Subjects
Auditory Perception; Magnetoencephalography; Humans; Auditory Perception/physiology; Acoustic Stimulation; Electroencephalography; Sound
5.
J Vis; 21(5): 13, 2021 May 03.
Article in English | MEDLINE | ID: mdl-33988675

ABSTRACT

A set of recent neuroimaging studies observed that the perception of an illusory shape can elicit both positive and negative feedback modulations in different parts of the early visual cortex. When three Pac-Man shapes were aligned in such a way that they created an illusory triangle (i.e., the Kanizsa illusion), neural activity in early visual cortex was enhanced in neurons whose receptive fields overlapped with the illusory shape but suppressed in neurons whose receptive fields overlapped with the Pac-Man inducers. These results were interpreted as congruent with the predictive coding framework, in which neurons in early visual cortex enhance or suppress their activity depending on whether top-down predictions match the bottom-up sensory inputs. However, there are several plausible alternative explanations for these activity modulations. Here we tested a recent proposal (Moors, 2015) that the activity suppression in early visual cortex during illusory shape perception reflects neural adaptation to perceptually stable input. Namely, the inducers appear perceptually stable during the illusory shape condition (discs on which a triangle is superimposed) but not during the control condition (discs that change into Pac-Men). We examined this hypothesis by manipulating the perceptual stability of the inducers. When the inducers could be perceptually interpreted as persistent circles, we replicated the up- and downregulation pattern shown in previous studies. However, when the inducers could not be perceived as persistent circles, we still observed enhanced activity in neurons representing the illusory shape, but the suppression of activity in neurons representing the inducers was absent. Thus, our results support the hypothesis that the activity suppression in neurons representing the inducers during the Kanizsa illusion is better explained by neural adaptation to perceptually stable input than by reduced prediction error.


Subjects
Form Perception; Illusions; Optical Illusions; Visual Cortex; Feedback; Humans; Male; Neurons; Visual Perception
6.
J Cogn Neurosci; 32(4): 691-702, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31820679

ABSTRACT

Perceptual expectations can change how a visual stimulus is perceived. Recent studies have shown mixed results in terms of whether expectations modulate sensory representations. Here, we used a statistical learning paradigm to study the temporal characteristics of perceptual expectations. We presented participants with pairs of object images organized in a predictive manner and then recorded their brain activity with magnetoencephalography while they viewed expected and unexpected image pairs on the subsequent day. We observed stronger alpha-band (7-14 Hz) activity in response to unexpected compared with expected object images. Specifically, the alpha-band modulation occurred as early as the onset of the stimuli and was most pronounced in left occipito-temporal cortex. Given that the differential response to expected versus unexpected stimuli occurred in sensory regions early in time, our results suggest that expectations modulate perceptual decision-making by changing the sensory response elicited by the stimuli.


Subjects
Alpha Rhythm; Brain/physiology; Learning/physiology; Visual Perception/physiology; Adult; Female; Humans; Magnetoencephalography; Male; Young Adult
7.
Sci Rep; 8(1): 16637, 2018 Nov 09.
Article in English | MEDLINE | ID: mdl-30413736

ABSTRACT

The spatial context in which we view a visual stimulus strongly determines how we perceive the stimulus. In the visual tilt illusion, the perceived orientation of a visual grating is affected by the orientation signals in its surrounding context. Conceivably, the spatial context in which a visual grating is perceived can be defined by interactive multisensory information rather than visual signals alone. Here, we tested the hypothesis that tactile signals engage the neural mechanisms supporting visual contextual modulation. Because tactile signals also convey orientation information and touch can selectively interact with visual orientation perception, we predicted that tactile signals would modulate the visual tilt illusion. We applied a bias-free method to measure the tilt illusion while testing visual-only, tactile-only or visuo-tactile contextual surrounds. We found that a tactile context can influence visual tilt perception. Moreover, combining visual and tactile orientation information in the surround results in a larger tilt illusion relative to the illusion achieved with the visual-only surround. These results demonstrate that the visual tilt illusion is subject to multisensory influences and imply that non-visual signals access the neural circuits whose computations underlie the contextual modulation of vision.


Subjects
Illusions/physiology; Pattern Recognition, Visual/physiology; Spatial Processing/physiology; Touch/physiology; Visual Cortex/physiology; Visual Perception/physiology; Adult; Female; Humans; Male; Photic Stimulation; Young Adult
8.
R Soc Open Sci; 5(3): 170909, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29657743

ABSTRACT

The human brain can quickly adapt to changes in the environment. One example is phonetic recalibration: a speech sound is interpreted differently depending on the visual speech and this interpretation persists in the absence of visual information. Here, we examined the mechanisms of phonetic recalibration. Participants categorized the auditory syllables /aba/ and /ada/, which were sometimes preceded by the so-called McGurk stimuli (in which an /aba/ sound, due to visual /aga/ input, is often perceived as 'ada'). We found that only one trial of exposure to the McGurk illusion was sufficient to induce a recalibration effect, i.e. an auditory /aba/ stimulus was subsequently more often perceived as 'ada'. Furthermore, phonetic recalibration took place only when auditory and visual inputs were integrated to 'ada' (McGurk illusion). Moreover, this recalibration depended on the sensory similarity between the preceding and current auditory stimulus. Finally, signal detection theoretical analysis showed that McGurk-induced phonetic recalibration resulted in both a criterion shift towards /ada/ and a reduced sensitivity to distinguish between /aba/ and /ada/ sounds. The current study shows that phonetic recalibration is dependent on the perceptual integration of audiovisual information and leads to a perceptual shift in phoneme categorization.

9.
Cereb Cortex; 28(11): 3908-3921, 2018 Nov 01.
Article in English | MEDLINE | ID: mdl-29045579

ABSTRACT

Recent studies have challenged the traditional notion of modality-dedicated cortical systems by showing that audition and touch evoke responses in the same sensory brain regions. While much of this work has focused on somatosensory responses in auditory regions, fewer studies have investigated sound responses and representations in somatosensory regions. In this functional magnetic resonance imaging (fMRI) study, we measured BOLD signal changes in participants performing an auditory frequency discrimination task and characterized activation patterns related to stimulus frequency using both univariate and multivariate analysis approaches. Outside of bilateral temporal lobe regions, we observed robust and frequency-specific responses to auditory stimulation in classically defined somatosensory areas. Moreover, using representational similarity analysis to define the relationships between multi-voxel activation patterns for all sound pairs, we found clear similarity patterns for auditory responses in the parietal lobe that correlated significantly with perceptual similarity judgments. Our results demonstrate that auditory frequency representations can be distributed over brain regions traditionally considered to be dedicated to somatosensation. The broad distribution of auditory and tactile responses over parietal and temporal regions reveals a number of candidate brain areas that could support general temporal frequency processing and mediate the extensive and robust perceptual interactions between audition and touch.
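Representational similarity analysis, as used above, compares dissimilarity matrices (RDMs) rather than raw activation patterns. A minimal sketch of the second-order correlation, with toy matrices standing in for the real neural and perceptual RDMs (the published analysis is more elaborate and may use rank rather than linear correlation):

```python
def upper_triangle(rdm):
    """Vectorize the upper triangle (excluding the diagonal) of a square RDM."""
    n = len(rdm)
    return [rdm[i][j] for i in range(n) for j in range(i + 1, n)]

def pearson(x, y):
    """Pearson correlation between two equal-length vectors."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def rsa_correlation(neural_rdm, perceptual_rdm):
    """Second-order similarity: correlate the two RDMs' off-diagonal entries."""
    return pearson(upper_triangle(neural_rdm), upper_triangle(perceptual_rdm))
```

Because only the pairwise dissimilarity structure is compared, this lets a pattern of parietal voxel responses be tested directly against behavioral similarity judgments, as in the study.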


Subjects
Auditory Perception/physiology; Somatosensory Cortex/physiology; Acoustic Stimulation; Adult; Auditory Pathways/physiology; Brain Mapping; Discrimination, Psychological/physiology; Female; Humans; Magnetic Resonance Imaging; Male; Young Adult
10.
J Neurophysiol; 117(3): 1352-1362, 2017 Mar 01.
Article in English | MEDLINE | ID: mdl-28077668

ABSTRACT

Our ability to process temporal frequency information by touch underlies our capacity to perceive and discriminate surface textures. Auditory signals, which also provide extensive temporal frequency information, can systematically alter the perception of vibrations on the hand. How auditory signals shape tactile processing is unclear; perceptual interactions between contemporaneous sounds and vibrations are consistent with multiple neural mechanisms. Here we used a crossmodal adaptation paradigm, which separated auditory and tactile stimulation in time, to test the hypothesis that tactile frequency perception depends on neural circuits that also process auditory frequency. We reasoned that auditory adaptation effects would transfer to touch only if signals from both senses converge on common representations. We found that auditory adaptation can improve tactile frequency discrimination thresholds. This occurred only when adaptor and test frequencies overlapped. In contrast, auditory adaptation did not influence tactile intensity judgments. Thus auditory adaptation enhances touch in a frequency- and feature-specific manner. A simple network model in which tactile frequency information is decoded from sensory neurons that are susceptible to auditory adaptation recapitulates these behavioral results. Our results imply that the neural circuits supporting tactile frequency perception also process auditory signals. This finding is consistent with the notion of supramodal operators performing canonical operations, like temporal frequency processing, regardless of input modality.

NEW & NOTEWORTHY: Auditory signals can influence the tactile perception of temporal frequency. Multiple neural mechanisms could account for the perceptual interactions between contemporaneous auditory and tactile signals. Using a crossmodal adaptation paradigm, we found that auditory adaptation causes frequency- and feature-specific improvements in tactile perception. This crossmodal transfer of aftereffects between audition and touch implies that tactile frequency perception relies on neural circuits that also process auditory frequency.
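The abstract describes its "simple network model" only at a high level. Purely as an illustration of the idea (not the authors' actual model, and with all parameter values — tuning width, preferred frequencies, adaptor frequency, gain reduction — invented here), one can sketch a population of frequency-tuned units whose gains are reduced by adaptation near the adaptor frequency, with frequency decoded from the population response:

```python
from math import exp

def population_response(stim_freq, preferred, gains, tuning_width=50.0):
    """Responses of Gaussian-tuned units to a stimulus frequency (Hz)."""
    return [g * exp(-0.5 * ((stim_freq - p) / tuning_width) ** 2)
            for p, g in zip(preferred, gains)]

def decode_frequency(responses, preferred):
    """Decode frequency as the response-weighted mean of preferred frequencies."""
    total = sum(responses)
    return sum(r * p for r, p in zip(responses, preferred)) / total

preferred = list(range(50, 501, 25))   # hypothetical preferred frequencies (Hz)
baseline = [1.0] * len(preferred)

# Hypothetical auditory adaptation at 200 Hz: gain is halved at the adaptor
# frequency, with the reduction falling off for more distant units.
adapted = [g * (1 - 0.5 * exp(-0.5 * ((p - 200) / 50) ** 2))
           for p, g in zip(preferred, baseline)]
```

In such a model, adapting near the test frequency sharpens or shifts the population readout of nearby stimuli while leaving distant frequencies untouched — a frequency-specific effect of the kind the behavioral results show.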


Subjects
Adaptation, Physiological/physiology; Auditory Perception/physiology; Discrimination, Psychological/physiology; Touch Perception/physiology; Touch/physiology; Acoustic Stimulation; Adult; Female; Humans; Linear Models; Male; Physical Stimulation; Psychophysics; Young Adult
11.
J Vis; 15(14): 5, 2015.
Article in English | MEDLINE | ID: mdl-26462174

ABSTRACT

Recent studies have proposed that some cross-modal illusions might be expressed in what were previously thought of as sensory-specific brain areas. One interesting question, therefore, is whether auditory-driven visual illusory percepts respond to manipulations of low-level visual attributes (such as luminance or chromatic contrast) in the same way as their nonillusory analogs. Here, we addressed this question using the double flash illusion (DFI), whereby one brief flash can be perceived as two when combined with two beeps presented in rapid succession. Our results showed that the perception of two illusory flashes depended on luminance contrast, just as the temporal resolution for two real flashes did. Specifically, we found that the higher the luminance contrast, the stronger the DFI. Such a pattern seems to contradict what would be predicted from a maximum likelihood estimation perspective and can be explained by considering that low-level visual stimulus attributes similarly modulate the perception of sound-induced visual phenomena and "real" visual percepts. This finding provides psychophysical support for the involvement of sensory-specific brain areas in the expression of the DFI. On the other hand, the addition of chromatic contrast failed to change the strength of the DFI even though it improved visual sensitivity to real flashes. The null impact of chromaticity on the cross-modal illusion might suggest a weaker interaction of the parvocellular visual pathway with the auditory system for cross-modal illusions.


Subjects
Auditory Perception/physiology; Optical Illusions/physiology; Visual Perception/physiology; Acoustic Stimulation/methods; Adult; Female; Humans; Likelihood Functions; Male; Photic Stimulation; Visual Pathways/physiology; Young Adult
12.
J Neurophysiol; 113(6): 1800-1818, 2015 Mar 15.
Article in English | MEDLINE | ID: mdl-25520431

ABSTRACT

The mechanisms responsible for the integration of sensory information from different modalities have become a topic of intense interest in psychophysics and neuroscience. Many authors now claim that early, sensory-based cross-modal convergence improves performance in detection tasks. An important strand of supporting evidence for this claim is based on statistical models such as the Pythagorean model or the probabilistic summation model. These models establish statistical benchmarks representing the best predicted performance under the assumption that there are no interactions between the two sensory paths. Following this logic, when observed detection performance surpasses the predictions of these models, it is often inferred that such improvement indicates cross-modal convergence. We present a theoretical analysis scrutinizing some of these models and the statistical criteria most frequently used to infer early cross-modal interactions during detection tasks. Our analysis shows how some common misinterpretations of these models lead to their inadequate use and, in turn, to contradictory results and misleading conclusions. To further illustrate the latter point, we introduce a model that accounts for detection performance in multimodal detection tasks but for which surpassing the Pythagorean or probabilistic summation benchmark can be explained without resorting to early cross-modal interactions. Finally, we report three experiments that put our theoretical interpretation to the test and further propose how to adequately measure multimodal interactions in audiotactile detection tasks.
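For concreteness, the two independence benchmarks discussed above can be written down directly. These are just the textbook forms; the abstract's contribution is precisely that exceeding them does not by itself establish early cross-modal convergence:

```python
def probability_summation(p_a, p_v):
    """Probabilistic-summation benchmark: the bimodal stimulus is detected
    if either independent channel detects its component."""
    return 1 - (1 - p_a) * (1 - p_v)

def pythagorean_dprime(d_a, d_v):
    """Pythagorean benchmark: combined sensitivity of two independent
    channels, d'_AV = sqrt(d'_A**2 + d'_V**2)."""
    return (d_a ** 2 + d_v ** 2) ** 0.5
```

For example, two channels each detecting 50% of the time already yield a 75% bimodal benchmark under probability summation, with no interaction between the sensory paths at all.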


Subjects
Models, Neurological; Somatosensory Cortex/physiology; Adult; Female; Humans; Male; Psychomotor Performance; Sensory Thresholds
13.
Exp Brain Res; 232(6): 1631-1638, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24699769

ABSTRACT

Crossmodal interaction conferring enhancement in sensory processing is now widely accepted. Such benefit is often exemplified by the neural response amplification reported in physiological studies conducted with animals, which parallels behavioural demonstrations of sound-driven improvement in visual tasks in humans. Yet a good deal of controversy still surrounds the nature and interpretation of these human psychophysical studies. Here, we consider the interpretation of crossmodal enhancement findings in light of the functional as well as anatomical specialization of the magno- and parvocellular visual pathways, whose paramount relevance has been well established in visual research but often overlooked in crossmodal research. We contend that a more explicit consideration of this important visual division may resolve some current controversies and help optimize the design of future crossmodal research.


Subjects
Auditory Perception/physiology; Vision, Ocular/physiology; Visual Pathways/physiology; Visual Perception/physiology; Acoustic Stimulation; Humans; Photic Stimulation; Psychophysics
14.
Curr Biol; 24(5): 531-535, 2014 Mar 03.
Article in English | MEDLINE | ID: mdl-24530067

ABSTRACT

Developmental dyslexia affects 5%-10% of the population, resulting in poor spelling and reading skills. While there are well-documented differences in the way dyslexics process low-level visual and auditory stimuli, it is mostly unknown whether there are similar differences in audiovisual multisensory processes. Here, we investigated audiovisual integration using the redundant target effect (RTE) paradigm. Some conditions demonstrating audiovisual integration appear to depend upon magnocellular pathways, and dyslexia has been associated with deficits in this pathway, so we postulated that developmental dyslexics ("dyslexics" hereafter) would show differences in audiovisual integration compared with controls. Reaction times (RTs) to multisensory stimuli were compared with predictions from Miller's race model. Dyslexics showed difficulty shifting their attention between modalities, but such "sluggish attention shifting" (SAS) appeared only when dyslexics shifted their attention from the visual to the auditory modality. These results suggest that dyslexics distribute their crossmodal attention resources differently from controls, causing different patterns of multisensory response. From this, we propose that dyslexia training programs should take these asymmetric shifts of crossmodal attention into account.
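Miller's race-model inequality, mentioned above, bounds the multisensory RT distribution by the sum of the unisensory ones: P(RT ≤ t | AV) ≤ P(RT ≤ t | A) + P(RT ≤ t | V). A minimal sketch of the test over empirical CDFs sampled at common time points (illustrative only; the RT values below are made up):

```python
def empirical_cdf(rts, time_points):
    """Empirical CDF of reaction times evaluated at the given time points."""
    n = len(rts)
    return [sum(rt <= t for rt in rts) / n for t in time_points]

def race_model_violations(cdf_av, cdf_a, cdf_v):
    """Indices of time points where the audiovisual CDF exceeds Miller's
    bound, suggesting coactivation rather than a parallel race."""
    bound = [min(a + v, 1.0) for a, v in zip(cdf_a, cdf_v)]
    return [i for i, (av, b) in enumerate(zip(cdf_av, bound)) if av > b]
```

When the observed audiovisual CDF stays at or below the bound everywhere, the data remain compatible with two independent channels racing; violations are the classic signature of genuine integration.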


Subjects
Attention/physiology; Dyslexia; Reaction Time; Acoustic Stimulation; Case-Control Studies; Humans; Photic Stimulation; Visual Perception
15.
J Neurophysiol; 109(4): 1065-1077, 2013 Feb.
Article in English | MEDLINE | ID: mdl-23221404

ABSTRACT

Cross-modal enhancement can be mediated both by higher-order effects due to attention and decision making and by detection-level stimulus-driven interactions. However, the contribution of each of these sources to behavioral improvements has not been conclusively determined and quantified separately. Here, we apply a psychophysical analysis based on Piéron functions in order to separate stimulus-dependent changes from those accounted for by decision-level contributions. Participants performed a simple visual speeded detection task on Gabor patches of different spatial frequencies and contrast values, presented with and without accompanying sounds. On the one hand, we identified an additive cross-modal improvement in mean reaction times across all types of visual stimuli that would be well explained by interactions not strictly based on stimulus-driven modulations (e.g., due to reduction of temporal uncertainty and motor times). On the other hand, we singled out an audio-visual benefit that strongly depended on stimulus features such as frequency and contrast. This particular enhancement was selective to low-spatial-frequency visual stimuli, optimized for magnocellular sensitivity. We therefore conclude that interactions at detection stages and at decisional processes in response selection that contribute to audio-visual enhancement can be separated online and are expressed in partly different aspects of visual processing.
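Piéron functions, as used above, model reaction time as a power-law decrease with stimulus intensity, RT(I) = t0 + k·I^(−β). In this framework a purely decisional or motor benefit shifts t0 additively at every contrast, whereas a stimulus-driven gain change alters k or β. A sketch with hypothetical parameter values (all numbers below are invented for illustration):

```python
def pieron_rt(intensity, t0, k, beta):
    """Piéron's law: RT = t0 + k * I**(-beta). t0 absorbs residual
    (decision/motor) latency; k and beta capture sensory processing."""
    return t0 + k * intensity ** (-beta)

# Hypothetical illustration: a sound that only reduces temporal uncertainty
# lowers t0 by a constant, leaving the contrast dependence untouched.
contrasts = [0.05, 0.1, 0.2, 0.4, 0.8]
visual_only = [pieron_rt(c, t0=0.30, k=0.05, beta=0.5) for c in contrasts]
with_sound = [pieron_rt(c, t0=0.27, k=0.05, beta=0.5) for c in contrasts]
```

Fitting the two conditions separately and comparing which parameters move is what lets the additive (decision-level) and stimulus-dependent (detection-level) contributions be separated, as the abstract describes.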


Subjects
Auditory Perception/physiology; Sound; Vision, Ocular/physiology; Visual Perception/physiology; Acoustic Stimulation; Adult; Decision Making; Female; Humans; Male; Photic Stimulation; Reaction Time; Signal Detection, Psychological