Results 1 - 20 of 120
1.
Cereb Cortex ; 33(16): 9465-9477, 2023 08 08.
Article in English | MEDLINE | ID: mdl-37365814

ABSTRACT

Pre-stimulus endogenous neural activity can influence the processing of upcoming sensory input and subsequent behavioral reactions. Although it is known that spontaneous oscillatory activity mostly appears in stochastic bursts, typical approaches based on trial averaging fail to capture this. We aimed to relate spontaneous oscillatory bursts in the alpha band (8-13 Hz) to visual detection behavior, via an electroencephalography-based brain-computer interface (BCI) that allowed for burst-triggered stimulus presentation in real time. According to alpha theories, we hypothesized that visual targets presented during alpha bursts should lead to slower responses and higher miss rates, whereas targets presented in the absence of bursts (low alpha activity) should lead to faster responses and higher false alarm rates. Our findings support the role of bursts of alpha oscillations in visual perception and exemplify how real-time BCI systems can be used as a test bench for brain-behavioral theories.
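The burst-based logic described above can be made concrete. The sketch below is a minimal offline illustration, not the authors' actual pipeline: it marks alpha bursts as stretches where the amplitude envelope of the alpha-band signal exceeds a threshold (here, twice the median envelope) for a minimum number of cycles. The threshold, duration criterion, and FFT-based envelope method are all assumptions for illustration.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (same idea as scipy.signal.hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def detect_bursts(alpha_signal, fs, thresh_factor=2.0, min_cycles=3, f0=10.0):
    """Mark samples belonging to alpha bursts: amplitude envelope above
    thresh_factor * median(envelope) for at least min_cycles alpha cycles."""
    env = np.abs(analytic_signal(alpha_signal))
    above = env > thresh_factor * np.median(env)
    min_len = int(min_cycles * fs / f0)
    bursts = np.zeros_like(above)
    start = None
    # scan for contiguous supra-threshold runs of sufficient length
    for i, a in enumerate(np.append(above, False)):
        if a and start is None:
            start = i
        elif not a and start is not None:
            if i - start >= min_len:
                bursts[start:i] = True
            start = None
    return bursts
```

In a real-time system the same criterion would be evaluated on a sliding window so that stimulus presentation can be triggered while a burst is (or is not) ongoing.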


Subjects
Brain, Visual Perception, Brain/physiology, Electroencephalography, Photic Stimulation, Visual Perception/physiology, Humans
2.
PLoS Biol ; 18(11): e3000895, 2020 11.
Article in English | MEDLINE | ID: mdl-33137084

ABSTRACT

A crucial aspect of learning a language is discovering the rules that govern how words are combined to convey meaning. Because rules are characterized by sequential co-occurrences between elements (e.g., "These cupcakes are unbelievable"), tracking the statistical relationships between these elements is fundamental. However, purely bottom-up statistical learning alone cannot fully account for the ability to create abstract rule representations that can be generalized, a paramount requirement of linguistic rules. Here, we provide evidence that, after the statistical relations between words have been extracted, the engagement of goal-directed attention is key to enabling rule generalization. Incidental learning performance during a rule-learning task on an artificial language revealed a progressive shift from statistical learning to goal-directed attention. In addition, and consistent with the recruitment of attention, functional MRI (fMRI) analyses of late learning stages showed left parietal activity within a broad bilateral dorsal frontoparietal network. Critically, repetitive transcranial magnetic stimulation (rTMS) over participants' peak of activation within the left parietal cortex impaired their ability to generalize learned rules to a structurally analogous new language. Neither the no-stimulation condition nor rTMS over a nonrelevant brain region had the same interfering effect on generalization. Performance on an additional attentional task showed that rTMS over the parietal site hindered participants' ability to integrate "what" (stimulus identity) and "when" (stimulus timing) information about an expected target. The present findings suggest that learning rules from speech is a two-stage process: following statistical learning, goal-directed attention, involving left parietal regions, integrates "what" and "when" stimulus information to facilitate rapid rule generalization.


Subjects
Attention/physiology, Learning/physiology, Parietal Lobe/physiology, Adult, Brain/physiology, Brain Mapping/methods, Cognition/physiology, Female, Frontal Lobe/physiology, Functional Laterality/physiology, Humans, Language, Linguistics/methods, Magnetic Resonance Imaging/methods, Male, Photic Stimulation/methods, Reaction Time/physiology, Transcranial Magnetic Stimulation/methods, Young Adult
3.
Behav Res Methods ; 55(1): 58-76, 2023 01.
Article in English | MEDLINE | ID: mdl-35262897

ABSTRACT

In the last few decades, the field of neuroscience has witnessed major technological advances that have allowed researchers to measure and control neural activity in great detail. Yet, behavioral experiments in humans remain an essential approach to investigating the mysteries of the mind. Their relatively modest technological and economic requirements make behavioral research an attractive and accessible experimental avenue for neuroscientists with very diverse backgrounds. However, like any experimental enterprise, it has its own inherent challenges that may pose practical hurdles, especially to less experienced behavioral researchers. Here, we aim to provide a practical guide for a steady walk through the workflow of a typical behavioral experiment with human subjects. This primer covers the design of an experimental protocol, research ethics, and subject care, as well as best practices for data collection, analysis, and sharing. The goal is to provide clear instructions for both beginners and experienced researchers from diverse backgrounds in planning behavioral experiments.


Subjects
Research Ethics, Researchers, Humans, Data Collection
4.
J Headache Pain ; 24(1): 104, 2023 Aug 07.
Article in English | MEDLINE | ID: mdl-37545005

ABSTRACT

BACKGROUND: Migraine is a cyclic, neurosensory disorder characterized by recurrent headaches and altered sensory processing. The latter is manifested in hypersensitivity to visual stimuli, measured with questionnaires and sensory thresholds, as well as in abnormal cortical excitability and a lack of habituation, assessed with visual evoked potentials elicited by pattern-reversal stimulation. Here, the goal was to determine whether factors such as age and/or disease severity exert a modulatory influence on sensory sensitivity, cortical excitability, and habituation. METHODS: Two similar experiments were carried out, the first comparing 24 young episodic migraine patients and 28 healthy age- and gender-matched controls, and the second 36 middle-aged episodic migraine patients and 30 healthy age- and gender-matched controls. A neurologist confirmed the diagnoses. Migraine phases were obtained using eDiaries. Sensory sensitivity was assessed with the Sensory Perception Quotient, and group comparisons were carried out. We obtained pattern-reversal visual evoked potentials and calculated the N1-P1 peak-to-peak amplitude. Two linear mixed-effects models were fitted to these data. The first model had Block (first block, last block) and Group (patients, controls) as fixed factors, whereas the second model had Trial (all trials) and Group as fixed factors. Participant was included as a random factor in both. N1-P1 first-block amplitude was used to assess cortical excitability, and habituation was defined as a decrease of N1-P1 amplitude across Blocks/Trials. Both experiments were performed interictally. RESULTS: The final samples consisted of 18 patients with episodic migraine and 27 headache-free controls (first experiment) and 19 patients and 29 controls (second experiment). In both experiments, patients reported increased visual hypersensitivity on the Sensory Perception Quotient compared to controls. Regarding the N1-P1 peak-to-peak data, there was no main effect of Group, indicating no differences in cortical excitability between groups. Finally, significant main effects of both Block and Trial were found, indicating habituation in both groups regardless of age and headache frequency. CONCLUSIONS: The results of this study yielded evidence for significant hypersensitivity in patients but no significant differences in either habituation or cortical excitability compared to headache-free controls. Although the alterations in patients may be less pronounced than originally anticipated, they demonstrate the need for the definition and standardization of optimal methodological parameters.


Subjects
Evoked Potentials, Visual, Migraine Disorders, Humans, Middle Aged, Habituation, Psychophysiologic/physiology, Headache, Patient Acuity, Case-Control Studies
5.
Eur J Neurosci ; 55(1): 138-153, 2022 01.
Article in English | MEDLINE | ID: mdl-34872157

ABSTRACT

To make sense of ambiguous and, at times, fragmentary sensory input, the brain must rely on a process of active interpretation. At any given moment, only one of several possible perceptual representations prevails in our conscious experience. Our hypothesis is that the competition between alternative representations induces a pattern of neural activation resembling cognitive conflict, eventually leading to fluctuations between different perceptual outcomes when competition is steep. To test this hypothesis, we probed changes in perceptual awareness between competing images using binocular rivalry. We drew our predictions from the conflict monitoring theory, which holds that cognitive control is invoked by the detection of conflict during information processing. Our results show that fronto-medial theta oscillations (5-7 Hz), an established electroencephalography (EEG) marker of conflict, increase right before perceptual alternations and decrease thereafter, suggesting that conflict monitoring occurs during perceptual competition. Furthermore, to investigate conflict resolution via attentional engagement, we examined parieto-occipital alpha oscillations (8-12 Hz) as a neural marker of perceptual switches. The power of parieto-occipital alpha displayed a pattern inverse to that of fronto-medial theta, reflecting periods of high interocular inhibition during stable perception and low inhibition around moments of perceptual change. These findings help elucidate the relationship between conflict monitoring mechanisms and perceptual awareness.


Subjects
Vision, Binocular, Visual Perception, Attention/physiology, Brain, Electroencephalography/methods, Photic Stimulation, Vision, Binocular/physiology, Visual Perception/physiology
6.
Eur J Neurosci ; 55(11-12): 3224-3240, 2022 06.
Article in English | MEDLINE | ID: mdl-32745332

ABSTRACT

Electrical brain oscillations reflect fluctuations in neural excitability. Fluctuations in the alpha band (α, 8-12 Hz) in the occipito-parietal cortex are thought to regulate sensory responses, leading to cyclic variations in visual perception. Inspired by this theory, some past and recent studies have addressed the relationship between the α-phase from extra-cranial EEG and behavioural responses to visual stimuli in humans. The latest studies have used offline approaches to confirm α-gated cyclic patterns. However, a particularly relevant implication is the possibility of using this principle online, whereby stimuli are time-locked to specific α-phases, leading to predictable outcomes in performance. Here, we aimed to provide a proof of concept for such real-time neurotechnology. Participants performed a speeded response task to visual targets that were presented upon real-time estimation of the α-phase via an EEG closed-loop brain-computer interface (BCI). According to the theory, we predicted a modulation of reaction times (RTs) along the α-cycle. Our BCI system achieved reliable trial-to-trial phase locking of stimuli to the phase of individual occipito-parietal α-oscillations. Yet, the behavioural results did not support a consistent relation between RTs and the phase of the α-cycle, either at the group or at the single-participant level. We must conclude that although the α-phase might play a role in perceptual decisions from a theoretical perspective, its impact on EEG-based BCI applications appears negligible.
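One simple way to approximate the closed-loop phase estimation described above (a sketch under stated assumptions, not necessarily the authors' implementation) is to fit a sinusoid at the individual alpha frequency to the most recent EEG window, read off the current phase, and compute the delay until a target phase recurs. The least-squares fit and the fixed alpha frequency are illustrative assumptions.

```python
import numpy as np

def estimate_phase(window, fs, f_alpha=10.0):
    """Least-squares fit of a sinusoid at f_alpha to the latest EEG
    window; returns the oscillation's phase (radians) at the last sample."""
    t = np.arange(len(window)) / fs
    a = np.cos(2 * np.pi * f_alpha * t)
    b = np.sin(2 * np.pi * f_alpha * t)
    A = np.column_stack([a, b, np.ones_like(t)])  # cos, sin, DC offset
    (ca, cb, _), *_ = np.linalg.lstsq(A, window, rcond=None)
    # window ≈ ca*cos(wt) + cb*sin(wt) = R*cos(wt - phi), phi = atan2(cb, ca)
    return (2 * np.pi * f_alpha * t[-1] - np.arctan2(cb, ca)) % (2 * np.pi)

def delay_to_phase(current_phase, target_phase, f_alpha=10.0):
    """Seconds to wait so that the oscillation reaches target_phase."""
    dphi = (target_phase - current_phase) % (2 * np.pi)
    return dphi / (2 * np.pi * f_alpha)
```

A real closed-loop BCI would additionally have to compensate for hardware and processing latencies by adding them to the extrapolation horizon.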


Subjects
Brain-Computer Interfaces, Electroencephalography/methods, Humans, Parietal Lobe/physiology, Photic Stimulation/methods, Visual Perception/physiology
7.
Eur J Neurosci ; 49(2): 150-164, 2019 01.
Article in English | MEDLINE | ID: mdl-30270546

ABSTRACT

In everyday multisensory events, such as a glass crashing on the floor, the different sensory inputs are often experienced as simultaneous, even though the sensory processing of sound and sight within the brain is temporally misaligned. This lack of cross-modal synchrony is the unavoidable consequence of different light and sound speeds, and of their different neural transmission times in the corresponding sensory pathways. Hence, cross-modal synchrony must be reconstructed during perception. It has been suggested that spontaneous fluctuations in neural excitability might be involved in the temporal organisation of sensory events during perception and account for variability in behavioural performance. Here, we addressed the relationship between ongoing brain oscillations and the perception of cross-modal simultaneity. Participants performed an audio-visual simultaneity judgement task while their EEG was recorded. We focused on pre-stimulus activity, and found that the phase of neural oscillations at 13 ± 2 Hz, 200 ms prior to the stimulus, correlated with the subjective simultaneity of otherwise identical sound-flash events. Remarkably, the correlation between EEG phase and behavioural report occurred in the absence of concomitant changes in EEG amplitude. The probability of simultaneity perception fluctuated significantly as a function of pre-stimulus phase, with the largest perceptual variation being accounted for by phase angles nearly 180° apart. This pattern was strongly reliable for sound-flash pairs but not for flash-sound pairs. Overall, these findings suggest that the phase of ongoing brain activity might underlie internal states of the observer that influence cross-modal temporal organisation between the senses and, in turn, subjective synchrony.


Subjects
Auditory Perception/physiology, Brain Waves, Brain/physiology, Visual Perception/physiology, Acoustic Stimulation, Adolescent, Adult, Female, Humans, Judgment/physiology, Male, Photic Stimulation, Young Adult
8.
Psychol Sci ; 30(10): 1483-1496, 2019 10.
Article in English | MEDLINE | ID: mdl-31532709

ABSTRACT

Humans can effectively search visual scenes by spatial location, visual feature, or whole object. Here, we showed that visual search can also benefit from fast appraisal of relations between individuals in human groups. Healthy adults searched for a facing (seemingly interacting) body dyad among nonfacing dyads or a nonfacing dyad among facing dyads. We varied the task parameters to emphasize processing of targets or distractors. Facing-dyad targets were more likely to recruit attention than nonfacing-dyad targets (Experiments 1, 2, and 4). Facing-dyad distractors were checked and rejected more efficiently than nonfacing-dyad distractors (Experiment 3). Moreover, search for an individual body was more difficult when it was embedded in a facing dyad than in a nonfacing dyad (Experiment 5). We propose that fast grouping of interacting bodies in one attentional unit is the mechanism that accounts for efficient processing of dyads within human groups and for the inefficient access to individual parts within a dyad.


Subjects
Attention, Human Body, Pattern Recognition, Visual/physiology, Social Perception, Adult, Female, Humans, Male, Reaction Time, Young Adult
9.
Psychol Res ; 83(8): 1626-1639, 2019 Nov.
Article in English | MEDLINE | ID: mdl-29774432

ABSTRACT

Temporal orienting leads to well-documented behavioural benefits for sensory events occurring at the anticipated moment. However, the consequences of temporal orienting in cross-modal contexts are still unclear. On the one hand, some studies using audio-tactile paradigms suggest that attentional orienting in time and modality form a closely coupled system, in which temporal orienting dominates modality orienting, similar to what happens in cross-modal spatial attention. On the other hand, recent findings using a visuo-tactile paradigm suggest that attentional orienting in time can unfold independently in each modality, leading to cross-modal decoupling. In the present study, we investigated whether cross-modal decoupling in time can be extrapolated to audio-tactile contexts. If so, decoupling might represent a general property of cross-modal attention in time. To this end, we used a speeded discrimination task in which we manipulated the probability of target presentation in time and modality. In each trial, a manipulation of time-based expectancy was used to guide participants' attention to task-relevant events, either tactile or auditory, at different points in time. In two experiments, we found that participants generally showed enhanced behavioural performance at the most likely onset time of each modality and no evidence for coupling. This pattern supports the hypothesis that cross-modal decoupling could be a general phenomenon in temporal orienting.


Subjects
Attention/physiology, Auditory Perception/physiology, Touch Perception/physiology, Touch/physiology, Adult, Female, Humans, Male, Orientation/physiology, Young Adult
10.
Eur J Neurosci ; 47(7): 832-844, 2018 04.
Article in English | MEDLINE | ID: mdl-29495127

ABSTRACT

In everyday life, we often must coordinate information across spatial locations and different senses for action. It is well known, for example, that reactions are faster when an imperative stimulus and its required response are congruent than when they are not, even if stimulus location itself is completely irrelevant for the task (the so-called Simon effect). However, because these effects have frequently been investigated in single-modality scenarios, the consequences of spatial congruence when more than one sensory modality is at play are less well known. Interestingly, at a behavioral level, the visual Simon effect vanishes in mixed (visual and tactile) modality scenarios, suggesting that irrelevant spatial information ceases to exert influence on vision. To shed some light on this surprising result, here we address the expression of irrelevant spatial information in EEG markers typical of the visual Simon effect (P300, theta power modulation, LRP) in mixed-modality contexts. Our results showed no evidence that visual-spatial information affected performance at either the behavioral or the neurophysiological level. The absence of the neural markers of visual S-R conflict in the mixed-modality scenario implies that some aspects of spatial representations that are strongly expressed in single-modality scenarios might be bypassed.


Subjects
Brain Waves/physiology, Space Perception/physiology, Touch Perception/physiology, Visual Perception/physiology, Adolescent, Adult, Cues, Electroencephalography, Female, Humans, Male, Photic Stimulation, Psychomotor Performance/physiology, Young Adult
11.
Eur J Neurosci ; 48(7): 2630-2641, 2018 10.
Article in English | MEDLINE | ID: mdl-29250857

ABSTRACT

The McGurk illusion is one of the most famous illustrations of cross-modal integration in human perception. It has often been used as a proxy of audiovisual (AV) integration and to infer the properties of the integration process in natural (congruent) AV conditions. Nonetheless, a blatant difference between McGurk stimuli and natural, congruent AV speech is the conflict between the auditory and the visual information in the former. Here, we hypothesized that McGurk stimuli (and any AV incongruency) engage brain responses similar to those found in more general cases of perceptual conflict (e.g., Stroop), and propose that the McGurk illusion arises as a result of the resolution of such conflict. We used electroencephalography to measure variations in the power of theta, a well-known marker of the brain response to conflict. The results showed that perception of AV McGurk stimuli, just like AV incongruence in general, induces an increase in activity in the theta band. This response was similar to that evoked by Stroop stimuli, as measured in the same participants. This finding suggests that the McGurk illusion is mediated by general-purpose conflict mechanisms, and calls for caution in generalizing findings obtained using the McGurk illusion to the general case of multisensory integration.


Subjects
Auditory Perception/physiology, Illusions/physiology, Speech Perception/physiology, Visual Perception/physiology, Adolescent, Adult, Brain/physiology, Brain Mapping, Electroencephalography/methods, Evoked Potentials, Auditory/physiology, Female, Humans, Male, Speech/physiology
12.
Hum Brain Mapp ; 38(11): 5691-5705, 2017 11.
Article in English | MEDLINE | ID: mdl-28792094

ABSTRACT

There are two main behavioral expressions of multisensory integration (MSI) in speech: the perceptual enhancement produced by the sight of the congruent lip movements of the speaker, and the illusory sound perceived when a speech syllable is dubbed with incongruent lip movements, in the McGurk effect. These two phenomena have been used very often to study MSI. Here, we contend that, unlike congruent audiovisual (AV) speech, the McGurk effect involves brain areas related to conflict detection and resolution. To test this hypothesis, we used fMRI to measure blood oxygen level dependent responses to AV speech syllables. We analyzed brain activity as a function of the nature of the stimuli (McGurk or non-McGurk) and the perceptual outcome regarding MSI (integrated or not integrated response) in a 2 × 2 factorial design. The results showed that, regardless of perceptual outcome, AV mismatch activated general-purpose conflict areas (e.g., anterior cingulate cortex) as well as specific AV speech conflict areas (e.g., inferior frontal gyrus), compared with AV matching stimuli. Moreover, these conflict areas showed stronger activation on trials where the McGurk illusion was perceived compared with non-illusory trials, even though the stimuli were physically identical. We conclude that the AV incongruence in McGurk stimuli triggers the activation of conflict processing areas and that the process of resolving the cross-modal conflict is critical for the McGurk illusion to arise.


Subjects
Brain/physiology, Facial Recognition/physiology, Illusions/physiology, Speech Perception/physiology, Adult, Brain/diagnostic imaging, Brain Mapping, Cerebrovascular Circulation/physiology, Female, Humans, Lipreading, Magnetic Resonance Imaging, Male, Models, Psychological, Neuropsychological Tests, Oxygen/blood
13.
Psychol Sci ; 28(3): 369-379, 2017 03.
Article in English | MEDLINE | ID: mdl-28140764

ABSTRACT

How does one perceive groups of people? It is known that functionally interacting objects (e.g., a glass and a pitcher tilted as if pouring water into it) are perceptually grouped. Here, we showed that processing of multiple human bodies is also influenced by their relative positioning. In a series of categorization experiments, bodies facing each other (seemingly interacting) were recognized more accurately than bodies facing away from each other (noninteracting). Moreover, recognition of facing body dyads (but not nonfacing body dyads) was strongly impaired when those stimuli were inverted, similar to what has been found for individual bodies. This inversion effect demonstrates sensitivity of the visual system to facing body dyads in their common upright configuration and might imply recruitment of configural processing (i.e., processing of the overall body configuration without prior part-by-part analysis). These findings suggest that facing dyads are represented as one structured unit, which may be the intermediate level of representation between multiple-object (body) perception and representation of social actions.


Subjects
Human Body, Pattern Recognition, Visual/physiology, Social Perception, Adolescent, Adult, Humans, Young Adult
14.
Neuroimage ; 132: 129-137, 2016 05 15.
Article in English | MEDLINE | ID: mdl-26892858

ABSTRACT

During public addresses, speakers accompany their discourse with spontaneous hand gestures (beats) that are tightly synchronized with the prosodic contour of the discourse. It has been proposed that speech and beat gestures originate from a common underlying linguistic process whereby both speech prosody and beats serve to emphasize relevant information. We hypothesized that breaking the consistency between beats and prosody by temporal desynchronization would modulate activity in brain areas sensitive to speech-gesture integration. To this end, we measured BOLD responses as participants watched a natural discourse in which the speaker used beat gestures. In order to identify brain areas specifically involved in processing hand gestures with communicative intention, beat synchrony was evaluated against arbitrary visual cues bearing equivalent rhythmic and spatial properties as the gestures. Our results revealed that left MTG and IFG were specifically sensitive to speech synchronized with beats, compared to the arbitrary vision-speech pairing. Our results suggest that listeners assign beats a function of visual prosody, complementary to the prosodic structure of speech. We conclude that the emphasizing function of beat gestures in speech perception is instantiated through a specialized brain network sensitive to the communicative intent conveyed by a speaker with his/her hands.


Subjects
Frontal Lobe/physiology, Gestures, Linguistics, Speech Perception/physiology, Temporal Lobe/physiology, Visual Perception/physiology, Adult, Brain/physiology, Cues, Female, Hand, Humans, Magnetic Resonance Imaging, Male, Young Adult
15.
Exp Brain Res ; 234(5): 1307-23, 2016 May.
Article in English | MEDLINE | ID: mdl-26931340

ABSTRACT

Our understanding of how perception operates in real-world environments has been substantially advanced by studying both multisensory processes and "top-down" control processes influencing sensory processing via activity from higher-order brain areas, such as attention, memory, and expectations. As the two topics have been traditionally studied separately, the mechanisms orchestrating real-world multisensory processing remain unclear. Past work has revealed that the observer's goals gate the influence of many multisensory processes on brain and behavioural responses, whereas some other multisensory processes might occur independently of these goals. Consequently, other forms of top-down control beyond goal dependence are necessary to explain the full range of multisensory effects currently reported at the brain and the cognitive level. These forms of control include sensitivity to stimulus context as well as the detection of matches (or lack thereof) between a multisensory stimulus and categorical attributes of naturalistic objects (e.g. tools, animals). In this review we discuss and integrate the existing findings that demonstrate the importance of such goal-, object- and context-based top-down control over multisensory processing. We then put forward a few principles emerging from this literature review with respect to the mechanisms underlying multisensory processing and discuss their possible broader implications.


Subjects
Attention, Brain Mapping, Brain/physiology, Goals, Perception/physiology, Female, Humans, Male, Physical Stimulation
16.
Neuroimage ; 119: 272-85, 2015 Oct 01.
Article in English | MEDLINE | ID: mdl-26119022

ABSTRACT

The interplay between attention and multisensory integration has proven to be a difficult question to tackle. There are almost as many studies showing that multisensory integration occurs independently of the focus of attention as studies implying that attention has a profound effect on integration. Addressing the neural expression of multisensory integration for attended vs. unattended stimuli can help disentangle this apparent contradiction. In the present study, we examine whether selective attention to sound pitch influences the expression of audiovisual integration in both behavior and neural activity. Participants were asked to attend to one of two auditory speech streams while watching a pair of talking lips that could be congruent or incongruent with the attended speech stream. We measured behavioral and neural responses (fMRI) to multisensory stimuli under attended and unattended conditions while physical stimulation was kept constant. Our results indicate that participants recognized words more accurately from an auditory stream that was both attended and audiovisually (AV) congruent, thus reflecting a benefit due to AV integration. On the other hand, no enhancement was found for AV congruency when it was unattended. Furthermore, the fMRI results indicated that activity in the superior temporal sulcus (an area known to be related to multisensory integration) was contingent on attention as well as on audiovisual congruency. This attentional modulation extended beyond heteromodal areas to affect processing in areas classically recognized as unisensory, such as the superior temporal gyrus or the extrastriate cortex, and to non-sensory areas such as the motor cortex. Interestingly, attention to audiovisual incongruence triggered responses in brain areas related to conflict processing (i.e., the anterior cingulate cortex and the anterior insula). Based on these results, we hypothesize that AV speech integration can take place automatically only when both modalities are sufficiently processed, and that if a mismatch is detected between the AV modalities, feedback from conflict areas minimizes the influence of this mismatch by reducing the processing of the least informative modality.


Subjects
Attention/physiology, Brain/physiology, Pitch Perception/physiology, Speech Perception/physiology, Visual Perception/physiology, Acoustic Stimulation, Adult, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Photic Stimulation, Young Adult
17.
J Neurophysiol ; 113(6): 1800-18, 2015 Mar 15.
Article in English | MEDLINE | ID: mdl-25520431

ABSTRACT

The mechanisms responsible for the integration of sensory information from different modalities have become a topic of intense interest in psychophysics and neuroscience. Many authors now claim that early, sensory-based cross-modal convergence improves performance in detection tasks. An important strand of supporting evidence for this claim is based on statistical models such as the Pythagorean model or the probabilistic summation model. These models establish statistical benchmarks representing the best predicted performance under the assumption that there are no interactions between the two sensory paths. Following this logic, when observed detection performances surpass the predictions of these models, it is often inferred that such improvement indicates cross-modal convergence. We present a theoretical analysis scrutinizing some of these models and the statistical criteria most frequently used to infer early cross-modal interactions during detection tasks. Our analysis shows how some common misinterpretations of these models lead to their inadequate use and, in turn, to contradictory results and misleading conclusions. To further illustrate the latter point, we introduce a model that accounts for detection performances in multimodal detection tasks but for which surpassing of the Pythagorean or probabilistic summation benchmark can be explained without resorting to early cross-modal interactions. Finally, we report three experiments that put our theoretical interpretation to the test and further propose how to adequately measure multimodal interactions in audiotactile detection tasks.
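For reference, the two benchmarks named in this abstract have standard textbook forms: probabilistic summation predicts the bimodal hit rate from independent unisensory hit rates (a detection occurs unless both channels miss), and the Pythagorean model combines unisensory d' values as orthogonal evidence. A minimal sketch of both:

```python
from math import hypot

def probabilistic_summation(p_a, p_v):
    """Predicted bimodal hit rate under channel independence:
    P(AV) = 1 - (1 - p_a)(1 - p_v) = p_a + p_v - p_a * p_v."""
    return p_a + p_v - p_a * p_v

def pythagorean_dprime(d_a, d_v):
    """Predicted bimodal sensitivity under orthogonal (independent)
    combination of unisensory d' values: sqrt(d_a^2 + d_v^2)."""
    return hypot(d_a, d_v)
```

The abstract's point is precisely that exceeding these benchmarks does not, by itself, license the inference of early cross-modal convergence.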


Subjects
Models, Neurological, Somatosensory Cortex/physiology, Adult, Female, Humans, Male, Psychomotor Performance, Sensory Thresholds
18.
J Vis ; 15(14): 5, 2015.
Article in English | MEDLINE | ID: mdl-26462174

ABSTRACT

Recent studies have proposed that some cross-modal illusions might be expressed in what were previously thought of as sensory-specific brain areas. Therefore, one interesting question is whether auditory-driven visual illusory percepts respond to manipulations of low-level visual attributes (such as luminance or chromatic contrast) in the same way as their nonillusory analogs. Here, we addressed this question using the double flash illusion (DFI), whereby one brief flash can be perceived as two when combined with two beeps presented in rapid succession. Our results showed that the perception of two illusory flashes depended on luminance contrast, just as the temporal resolution for two real flashes did. Specifically, we found that the higher the luminance contrast, the stronger the DFI. Such a pattern seems to contradict what would be predicted from a maximum likelihood estimation perspective, and can be explained by considering that low-level visual stimulus attributes similarly modulate the perception of sound-induced visual phenomena and "real" visual percepts. This finding provides psychophysical support for the involvement of sensory-specific brain areas in the expression of the DFI. On the other hand, the addition of chromatic contrast failed to produce a change in the strength of the DFI, even though it improved visual sensitivity to real flashes. The null impact of chromaticity on the cross-modal illusion might suggest a weaker interaction of the parvocellular visual pathway with the auditory system for cross-modal illusions.


Subjects
Auditory Perception/physiology , Optical Illusions/physiology , Visual Perception/physiology , Acoustic Stimulation/methods , Adult , Female , Humans , Likelihood Functions , Male , Photic Stimulation , Visual Pathways/physiology , Young Adult
19.
Eur J Neurosci ; 39(12): 2089-97, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24689879

ABSTRACT

Prior studies have repeatedly reported behavioural benefits to events occurring at attended, compared to unattended, points in time. It has been suggested that, as for spatial orienting, temporal orienting of attention spreads across sensory modalities in a synergistic fashion. However, the consequences of cross-modal temporal orienting of attention remain poorly understood. One challenge is that the passage of time leads to an increase in event predictability throughout a trial, thus making it difficult to interpret possible effects (or lack thereof). Here we used a design that avoids complete temporal predictability to investigate whether attending to a sensory modality (vision or touch) at a point in time confers beneficial access to events in the other, non-attended, sensory modality (touch or vision, respectively). In contrast to previous studies and to what happens with spatial attention, we found that events in one (unattended) modality do not automatically benefit from happening at the time point when another modality is expected. Instead, it seems that attention can be deployed in time with relative independence for different sensory modalities. Based on these findings, we argue that temporal orienting of attention can be cross-modally decoupled in order to flexibly react according to the environmental demands, and that the efficiency of this selective decoupling unfolds in time.


Subjects
Attention , Time Perception , Touch Perception , Visual Perception , Adolescent , Adult , Female , Hand , Humans , Male , Models, Psychological , Photic Stimulation , Physical Stimulation , Psychological Tests , Reaction Time , Young Adult
20.
Exp Brain Res ; 232(6): 1631-8, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24699769

ABSTRACT

It is now widely accepted that crossmodal interactions can enhance sensory processing. Such benefit is often exemplified by the neural response amplification reported in physiological studies conducted with animals, which parallels behavioural demonstrations of sound-driven improvement in visual tasks in humans. Yet a good deal of controversy still surrounds the nature and interpretation of these human psychophysical studies. Here, we consider the interpretation of crossmodal enhancement findings in light of the functional as well as anatomical specialization of the magno- and parvocellular visual pathways, whose paramount relevance has been well established in visual research but is often overlooked in crossmodal research. We contend that a more explicit consideration of this important visual division may resolve some current controversies and help optimize the design of future crossmodal research.


Subjects
Auditory Perception/physiology , Vision, Ocular/physiology , Visual Pathways/physiology , Visual Perception/physiology , Acoustic Stimulation , Humans , Photic Stimulation , Psychophysics