Results 1 - 20 of 90
1.
Annu Rev Neurosci ; 44: 449-473, 2021 07 08.
Article in English | MEDLINE | ID: mdl-33882258

ABSTRACT

Adaptive behavior in a complex, dynamic, and multisensory world poses some of the most fundamental computational challenges for the brain, notably inference, decision-making, learning, binding, and attention. We first discuss how the brain integrates sensory signals from the same source to support perceptual inference and decision-making by weighting them according to their momentary sensory uncertainties. We then show how observers solve the binding or causal inference problem-deciding whether signals come from common causes and should hence be integrated or else be treated independently. Next, we describe the multifarious interplay between multisensory processing and attention. We argue that attentional mechanisms are crucial to compute approximate solutions to the binding problem in naturalistic environments when complex time-varying signals arise from myriad causes. Finally, we review how the brain dynamically adapts multisensory processing to a changing world across multiple timescales.
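
The reliability-weighted integration described above is commonly formalised as inverse-variance (maximum-likelihood) cue combination. A minimal sketch of that computation, with locations and variances invented purely for illustration:

```python
def fuse_cues(x_a, var_a, x_v, var_v):
    """Reliability-weighted fusion of an auditory and a visual location estimate."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)   # weight = relative reliability (inverse variance)
    fused = w_a * x_a + (1 - w_a) * x_v           # combined location estimate
    fused_var = 1 / (1 / var_a + 1 / var_v)       # fused variance is smaller than either cue's
    return fused, fused_var

# A noisy auditory cue at 10 deg combined with a sharper visual cue at 4 deg
print(fuse_cues(x_a=10.0, var_a=16.0, x_v=4.0, var_v=4.0))   # -> (5.2, 3.2)
```

The fused estimate is pulled towards the more reliable (visual) cue, which is the behavioural signature of the uncertainty-weighted integration discussed in the review.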


Subjects
Attention, Auditory Perception, Brain, Learning, Visual Perception
2.
PLoS Biol ; 22(2): e3002494, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38319934

ABSTRACT

Effective interactions with the environment rely on the integration of multisensory signals: Our brains must efficiently combine signals that share a common source, and segregate those that do not. Healthy ageing can change or impair this process. This functional magnetic resonance imaging study assessed the neural mechanisms underlying age differences in the integration of auditory and visual spatial cues. Participants were presented with synchronous audiovisual signals at various degrees of spatial disparity and indicated their perceived sound location. Behaviourally, older adults were able to maintain localisation accuracy. At the neural level, they integrated auditory and visual cues into spatial representations along dorsal auditory and visual processing pathways similarly to their younger counterparts but showed greater activations in a widespread system of frontal, temporal, and parietal areas. According to multivariate Bayesian decoding, these areas encoded critical stimulus information beyond that which was encoded in the brain areas commonly activated by both groups. Surprisingly, however, the boost in information provided by these areas with age-related activation increases was comparable across the 2 age groups. This dissociation-between comparable information encoded in brain activation patterns across the 2 age groups, but age-related increases in regional blood-oxygen-level-dependent responses-contradicts the widespread notion that older adults recruit new regions as a compensatory mechanism to encode task-relevant information. Instead, our findings suggest that activation increases in older adults reflect nonspecific or modulatory mechanisms related to less efficient or slower processing, or greater demands on attentional resources.


Subjects
Brain Mapping, Visual Perception, Humans, Aged, Bayes Theorem, Visual Perception/physiology, Brain/physiology, Attention/physiology, Acoustic Stimulation/methods, Auditory Perception/physiology, Photic Stimulation/methods, Magnetic Resonance Imaging
3.
J Cogn Neurosci ; 36(4): 730-733, 2024 04 01.
Article in English | MEDLINE | ID: mdl-38307128

ABSTRACT

The papers collected in this Special Focus, prompted by S. Buergers and U. Noppeney [The role of alpha oscillations in temporal binding within and across the senses. Nature Human Behaviour, 6, 732-742, 2022], have raised several interesting ideas, arguments, and empirical results relating to the alpha temporal resolution hypothesis. Here we briefly respond to these, and in the process emphasize four challenges for future research: defining the scope and limitation of the hypothesis; developing experimental paradigms and study designs that rigorously test its tenets; decomposing the scalp-level signal and isolating underlying neural circuits; and bringing uniformity to the current diversity of analysis and statistical methods. Addressing these challenges will facilitate the progression from merely correlating alpha frequency with various perceptual phenomena to establishing whether and (if so) how alpha frequency influences sensory integration and segregation.


Subjects
Visual Perception, Humans, Visual Perception/physiology
4.
J Cogn Neurosci ; 36(4): 655-690, 2024 04 01.
Article in English | MEDLINE | ID: mdl-38330177

ABSTRACT

An intriguing question in cognitive neuroscience is whether alpha oscillations shape how the brain transforms the continuous sensory inputs into distinct percepts. According to the alpha temporal resolution hypothesis, sensory signals arriving within a single alpha cycle are integrated, whereas those in separate cycles are segregated. Consequently, shorter alpha cycles should be associated with smaller temporal binding windows and higher temporal resolution. However, the evidence supporting this hypothesis is contentious, and the neural mechanisms remain unclear. In this review, we first elucidate the alpha temporal resolution hypothesis and the neural circuitries that generate alpha oscillations. We then critically evaluate study designs, experimental paradigms, psychophysics, and neurophysiological analyses that have been employed to investigate the role of alpha frequency in temporal binding. Through the lens of this methodological framework, we then review evidence from between-subject, within-subject, and causal perturbation studies. Our review highlights the inherent interpretational ambiguities posed by previous study designs and experimental paradigms and the extensive variability in analysis choices across studies. We also suggest best practice recommendations that may help to guide future research. To establish a mechanistic role of alpha frequency in temporal parsing, future research is needed that demonstrates its causal effects on the temporal binding window with consistent, experimenter-independent methods.
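
A toy illustration of the hypothesis's core arithmetic: one alpha cycle lasts 1000/f ms, so an observer with faster alpha should integrate stimuli only over a shorter window. The frequencies and stimulus onset asynchrony below are assumptions for illustration, not values from the review:

```python
def alpha_cycle_ms(freq_hz):
    """Duration of a single alpha cycle in milliseconds."""
    return 1000.0 / freq_hz

def predicted_integration(soa_ms, freq_hz):
    """Toy prediction: stimuli separated by less than one cycle are integrated."""
    return soa_ms <= alpha_cycle_ms(freq_hz)

for freq in (8.0, 10.0, 12.0):   # slower vs. faster alpha observers
    print(f"{freq} Hz: cycle = {alpha_cycle_ms(freq):.0f} ms, "
          f"100 ms SOA integrated? {predicted_integration(100.0, freq)}")
```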


Subjects
Brain, Visual Perception, Humans, Visual Perception/physiology, Brain/physiology, Research Design
5.
Hum Brain Mapp ; 45(4): e26653, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38488460

ABSTRACT

Face-to-face communication relies on the integration of acoustic speech signals with the corresponding facial articulations. In the McGurk illusion, an auditory /ba/ phoneme presented simultaneously with a facial articulation of a /ga/ (i.e., viseme), is typically fused into an illusory 'da' percept. Despite its widespread use as an index of audiovisual speech integration, critics argue that it arises from perceptual processes that differ categorically from natural speech recognition. Conversely, Bayesian theoretical frameworks suggest that both the illusory McGurk and the veridical audiovisual congruent speech percepts result from probabilistic inference based on noisy sensory signals. According to these models, the inter-sensory conflict in McGurk stimuli may only increase observers' perceptual uncertainty. This functional magnetic resonance imaging (fMRI) study presented participants (20 male and 24 female) with audiovisual congruent, McGurk (i.e., auditory /ba/ + visual /ga/), and incongruent (i.e., auditory /ga/ + visual /ba/) stimuli along with their unisensory counterparts in a syllable categorization task. Behaviorally, observers' response entropy was greater for McGurk compared to congruent audiovisual stimuli. At the neural level, McGurk stimuli increased activations in a widespread neural system, extending from the inferior frontal sulci (IFS) to the pre-supplementary motor area (pre-SMA) and insulae, typically involved in cognitive control processes. Crucially, in line with Bayesian theories these activation increases were fully accounted for by observers' perceptual uncertainty as measured by their response entropy. Our findings suggest that McGurk and congruent speech processing rely on shared neural mechanisms, thereby supporting the McGurk illusion as a valid measure of natural audiovisual speech perception.
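
Response entropy, the behavioural index of perceptual uncertainty used here, is the Shannon entropy of each observer's distribution of categorisation responses. A minimal sketch with invented response counts (illustrative only):

```python
import math

def response_entropy(counts):
    """Shannon entropy (bits) of a categorical response distribution."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]      # empty categories contribute zero
    return -sum(p * math.log2(p) for p in probs)

# Hypothetical counts of /ba/, /da/, /ga/ reports over 50 trials
congruent = [46, 2, 2]     # consistent percept -> low entropy
mcgurk    = [15, 25, 10]   # mixed percepts     -> high entropy
print(response_entropy(congruent), response_entropy(mcgurk))   # ~0.48 vs ~1.49 bits
```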


Subjects
Illusions, Speech Perception, Humans, Male, Female, Auditory Perception/physiology, Speech/physiology, Illusions/physiology, Visual Perception/physiology, Bayes Theorem, Uncertainty, Speech Perception/physiology, Acoustic Stimulation/methods, Photic Stimulation/methods
6.
PLoS Biol ; 19(11): e3001465, 2021 11.
Article in English | MEDLINE | ID: mdl-34793436

ABSTRACT

To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals' causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via two distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.


Subjects
Attention/physiology, Cerebral Cortex/physiology, Sensation/physiology, Adolescent, Adult, Bayes Theorem, Female, Humans, Magnetic Resonance Imaging, Male, Multivariate Analysis, Oxygen/blood, Time Factors, Young Adult
7.
Adv Exp Med Biol ; 1437: 59-76, 2024.
Article in English | MEDLINE | ID: mdl-38270853

ABSTRACT

Multisensory perception is critical for effective interaction with the environment, but human responses to multisensory stimuli vary across the lifespan and appear changed in some atypical populations. In this review chapter, we consider multisensory integration within a normative Bayesian framework. We begin by outlining the complex computational challenges of multisensory causal inference and reliability-weighted cue integration, and discuss whether healthy young adults behave in accordance with normative Bayesian models. We then compare their behaviour with various other human populations (children, older adults, and those with neurological or neuropsychiatric disorders). In particular, we consider whether the differences seen in these groups are due only to changes in their computational parameters (such as sensory noise or perceptual priors), or whether the fundamental computational principles (such as reliability weighting) underlying multisensory perception may also be altered. We conclude by arguing that future research should aim explicitly to differentiate between these possibilities.
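
The causal inference computation outlined in this chapter is usually formalised with the Bayesian causal inference model: the posterior probability that two signals share a common cause is computed from their discrepancy, the sensory noise, and a prior over causal structures. A compact sketch under Gaussian assumptions (all parameter values are illustrative assumptions, not the chapter's):

```python
import math

def lik_common(x_a, x_v, var_a, var_v, var_p, mu_p=0.0):
    """p(x_a, x_v | common cause): one source drawn from a Gaussian spatial prior."""
    denom = var_a * var_v + var_a * var_p + var_v * var_p
    quad = ((x_a - x_v) ** 2 * var_p + (x_a - mu_p) ** 2 * var_v + (x_v - mu_p) ** 2 * var_a)
    return math.exp(-0.5 * quad / denom) / (2 * math.pi * math.sqrt(denom))

def lik_separate(x_a, x_v, var_a, var_v, var_p, mu_p=0.0):
    """p(x_a, x_v | separate causes): each signal has its own independent source."""
    va, vv = var_a + var_p, var_v + var_p
    quad = (x_a - mu_p) ** 2 / va + (x_v - mu_p) ** 2 / vv
    return math.exp(-0.5 * quad) / (2 * math.pi * math.sqrt(va * vv))

def p_common(x_a, x_v, var_a, var_v, var_p, prior_common=0.5):
    """Posterior probability that the auditory and visual signals share a common cause."""
    l1 = lik_common(x_a, x_v, var_a, var_v, var_p)
    l2 = lik_separate(x_a, x_v, var_a, var_v, var_p)
    return l1 * prior_common / (l1 * prior_common + l2 * (1 - prior_common))

# Small vs. large audiovisual disparity (deg) with noisy audition and precise vision
print(p_common(2.0, 0.0, var_a=16.0, var_v=1.0, var_p=100.0))   # high  -> integrate
print(p_common(20.0, 0.0, var_a=16.0, var_v=1.0, var_p=100.0))  # low   -> segregate
```

Differences in the sensory variances, the prior probability of a common cause, or the decision rule built on this posterior are exactly the kinds of computational parameters and principles the chapter proposes to compare across populations.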


Subjects
Health Status, Longevity, Child, Young Adult, Humans, Aged, Bayes Theorem, Reproducibility of Results, Causality
8.
Ann Neurol ; 89(4): 646-656, 2021 04.
Article in English | MEDLINE | ID: mdl-33368496

ABSTRACT

OBJECTIVE: Patients with traumatic brain injury who fail to obey commands after sedation-washout pose one of the most significant challenges for neurological prognostication. Reducing prognostic uncertainty will lead to more appropriate care decisions and ensure provision of limited rehabilitation resources to those most likely to benefit. Bedside markers of covert residual cognition, including speech comprehension, may reduce this uncertainty. METHODS: We recruited 28 patients with acute traumatic brain injury who were 2 to 7 days sedation-free and failed to obey commands. Patients heard streams of isochronous monosyllabic words that built meaningful phrases and sentences while their brain activity via electroencephalography (EEG) was recorded. In healthy individuals, EEG activity only synchronizes with the rhythm of phrases and sentences when listeners consciously comprehend the speech. This approach therefore provides a measure of residual speech comprehension in unresponsive patients. RESULTS: Seventeen and 16 patients were available for assessment with the Glasgow Outcome Scale Extended (GOSE) at 3 months and 6 months, respectively. Outcome significantly correlated with the strength of patients' acute cortical tracking of phrases and sentences (r > 0.6, p < 0.007), quantified by inter-trial phase coherence. Linear regressions revealed that the strength of this comprehension response (beta = 0.603, p = 0.006) significantly improved the accuracy of prognoses relative to clinical characteristics alone (eg, Glasgow Coma Scale [GCS], computed tomography [CT] grade). INTERPRETATION: A simple, passive, auditory EEG protocol improves prognostic accuracy in a critical period of clinical decision making. Unlike other approaches to probing covert cognition for prognostication, this approach is entirely passive and therefore less susceptible to cognitive deficits, increasing the number of patients who may benefit. ANN NEUROL 2021;89:646-656.
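
Inter-trial phase coherence, used here to quantify cortical tracking of phrase and sentence rhythms, is the length of the mean resultant vector of single-trial phases at a given frequency (0 = random phase across trials, 1 = perfectly consistent phase). A minimal sketch with simulated phases (not the study's analysis pipeline):

```python
import numpy as np

def itpc(phases):
    """Inter-trial phase coherence of a set of single-trial phase angles (radians)."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

rng = np.random.default_rng(0)
tracking    = rng.vonmises(mu=0.0, kappa=4.0, size=60)   # phases clustered across trials
no_tracking = rng.uniform(-np.pi, np.pi, size=60)        # phases scattered across trials
print(itpc(tracking), itpc(no_tracking))                 # high vs. near-zero coherence
```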


Subjects
Brain Death/diagnosis, Comprehension, Speech, Adult, Aged, Aged, 80 and over, Brain Death/diagnostic imaging, Brain Injuries, Traumatic/diagnosis, Brain Injuries, Traumatic/psychology, Cerebral Cortex/physiopathology, Electroencephalography, Female, Glasgow Outcome Scale, Humans, Linear Models, Male, Middle Aged, Predictive Value of Tests, Prognosis, Tomography, X-Ray Computed
9.
PLoS Biol ; 17(4): e3000210, 2019 04.
Article in English | MEDLINE | ID: mdl-30939128

ABSTRACT

To form a percept of the environment, the brain needs to solve the binding problem-inferring whether signals come from a common cause and are integrated or come from independent causes and are segregated. Behaviourally, humans solve this problem near-optimally as predicted by Bayesian causal inference; but the neural mechanisms remain unclear. Combining Bayesian modelling, electroencephalography (EEG), and multivariate decoding in an audiovisual spatial localisation task, we show that the brain accomplishes Bayesian causal inference by dynamically encoding multiple spatial estimates. Initially, auditory and visual signal locations are estimated independently; next, an estimate is formed that combines information from vision and audition. Yet, it is only from 200 ms onwards that the brain integrates audiovisual signals weighted by their bottom-up sensory reliabilities and top-down task relevance into spatial priority maps that guide behavioural responses. As predicted by Bayesian causal inference, these spatial priority maps take into account the brain's uncertainty about the world's causal structure and flexibly arbitrate between sensory integration and segregation. The dynamic evolution of perceptual estimates thus reflects the hierarchical nature of Bayesian causal inference, a statistical computation, which is crucial for effective interactions with the environment.
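
The final, behaviourally relevant estimate under Bayesian causal inference is typically obtained by model averaging: the forced-fusion estimate and the segregated task-relevant estimate are mixed in proportion to the posterior probability of a common cause. A schematic sketch (the posterior probability is passed in as a number, a flat spatial prior is assumed for brevity, and all values are illustrative rather than taken from the study):

```python
def fused_estimate(x_a, x_v, var_a, var_v):
    """Forced-fusion estimate: reliability-weighted average of both signals."""
    return (x_a / var_a + x_v / var_v) / (1 / var_a + 1 / var_v)

def bci_estimate(x_a, x_v, var_a, var_v, post_common, report="auditory"):
    """Model-averaging estimate of the reported location: mix the fusion estimate and
    the segregated unisensory estimate by the posterior probability of a common cause."""
    segregated = x_a if report == "auditory" else x_v
    fusion = fused_estimate(x_a, x_v, var_a, var_v)
    return post_common * fusion + (1 - post_common) * segregated

# The visual signal biases the reported auditory location only when a common cause is probable
print(bci_estimate(10.0, 0.0, var_a=16.0, var_v=1.0, post_common=0.9))   # strong visual bias
print(bci_estimate(10.0, 0.0, var_a=16.0, var_v=1.0, post_common=0.1))   # mostly auditory
```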


Subjects
Auditory Perception/physiology, Psychophysics/methods, Visual Perception/physiology, Acoustic Stimulation, Adult, Bayes Theorem, Brain/physiology, Electroencephalography/methods, Eye Movements/physiology, Female, Humans, Male, Photic Stimulation, Young Adult
10.
J Neurosci ; 40(34): 6600-6612, 2020 08 19.
Article in English | MEDLINE | ID: mdl-32669354

ABSTRACT

In our natural environment the senses are continuously flooded with a myriad of signals. To form a coherent representation of the world, the brain needs to integrate sensory signals arising from a common cause and segregate signals coming from separate causes. An unresolved question is how the brain solves this binding or causal inference problem and determines the causal structure of the sensory signals. In this functional magnetic resonance imaging (fMRI) study human observers (female and male) were presented with synchronous auditory and visual signals at the same location (i.e., common cause) or different locations (i.e., separate causes). On each trial, observers decided whether signals come from common or separate sources (i.e., "causal decisions"). To dissociate participants' causal inference from the spatial correspondence cues we adjusted the audiovisual disparity of the signals individually for each participant to threshold accuracy. Multivariate fMRI pattern analysis revealed the lateral prefrontal cortex as the only region that encodes predominantly the outcome of observers' causal inference (i.e., common vs separate causes). By contrast, the frontal eye field (FEF) and the intraparietal sulcus (IPS0-4) form a circuitry that concurrently encodes spatial (auditory and visual stimulus locations), decisional (causal inference), and motor response dimensions. These results suggest that the lateral prefrontal cortex plays a key role in inferring and making explicit decisions about the causal structure that generates sensory signals in our environment. By contrast, informed by observers' inferred causal structure, the FEF-IPS circuitry integrates auditory and visual spatial signals into representations that guide motor responses. SIGNIFICANCE STATEMENT: In our natural environment, our senses are continuously flooded with a myriad of signals. Transforming this barrage of sensory signals into a coherent percept of the world relies inherently on solving the causal inference problem, deciding whether sensory signals arise from a common cause and should hence be integrated or else be segregated. This functional magnetic resonance imaging study shows that the lateral prefrontal cortex plays a key role in inferring the causal structure of the environment. Crucially, informed by the spatial correspondence cues and the inferred causal structure the frontal eye field and the intraparietal sulcus form a circuitry that integrates auditory and visual spatial signals into representations that guide motor responses.


Subjects
Auditory Perception/physiology, Brain/physiology, Discrimination, Psychological/physiology, Visual Perception/physiology, Acoustic Stimulation, Adolescent, Adult, Brain Mapping, Female, Frontal Lobe/physiology, Humans, Magnetic Resonance Imaging, Male, Multivariate Analysis, Parietal Lobe/physiology, Photic Stimulation, Prefrontal Cortex/physiology, Psychophysics, Sound Localization/physiology, Young Adult
11.
J Neurosci ; 39(12): 2301-2312, 2019 03 20.
Article in English | MEDLINE | ID: mdl-30659086

ABSTRACT

Spatial attention (i.e., task-relevance) and expectation (i.e., signal probability) are two critical top-down mechanisms guiding perceptual inference. Spatial attention prioritizes processing of information at task-relevant locations. Spatial expectations encode the statistical structure of the environment. An unresolved question is how the brain allocates attention and forms expectations in a multisensory environment, where task-relevance and signal probability over space can differ across sensory modalities. We used functional magnetic resonance imaging in human participants (female and male) to investigate whether the brain encodes task-relevance and signal probability over space separately or interactively across sensory modalities. In a novel multisensory paradigm, we manipulated spatial attention and expectation selectively in audition and assessed their effects on behavioral and neural responses to auditory and visual stimuli. Our results show that both auditory and visual stimuli increased activations in a right-lateralized frontoparietal system, when they were presented at locations that were task-irrelevant in audition. Yet, only auditory stimuli increased activations in the medial prefrontal cortex when presented at expected locations and in audiovisual and frontoparietal cortices signaling a prediction error when presented at unexpected locations. This dissociation in multisensory generalization for attention and expectation effects shows that the brain controls attentional resources interactively across the senses but encodes the statistical structure of the environment as spatial expectations independently for each sensory system. Our results demonstrate that spatial attention and expectation engage partly overlapping neural systems via distinct mechanisms to guide perceptual inference in a multisensory world. SIGNIFICANCE STATEMENT: In our natural environment the brain is exposed to a constant influx of signals through all our senses. How does the brain allocate attention and form spatial expectations in this multisensory environment? Because observers need to respond to stimuli regardless of their sensory modality, they may allocate attentional resources and encode the probability of events jointly across the senses. This psychophysics and neuroimaging study shows that the brain controls attentional resources interactively across the senses via a frontoparietal system but encodes the statistical structure of the environment independently for each sense in sensory and frontoparietal areas. Thus, spatial attention and expectation engage partly overlapping neural systems via distinct mechanisms to guide perceptual inference in a multisensory world.


Subjects
Attention/physiology, Auditory Perception/physiology, Spatial Processing/physiology, Visual Perception/physiology, Acoustic Stimulation, Adolescent, Adult, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Photic Stimulation, Psychophysics, Young Adult
12.
Eur J Neurosci ; 52(12): 4709-4731, 2020 12.
Article in English | MEDLINE | ID: mdl-32725895

ABSTRACT

Asynchrony is a critical cue informing the brain whether sensory signals are caused by a common source and should be integrated or segregated. This psychophysics-electroencephalography (EEG) study investigated the influence of asynchrony on how the brain binds audiotactile (AT) signals to enable faster responses in a redundant target paradigm. Human participants actively responded (psychophysics) or passively attended (EEG) to noise bursts, "taps-to-the-face" and their AT combinations at seven AT asynchronies: 0, ±20, ±70 and ±500 ms. Behaviourally, observers were faster at detecting AT than unisensory stimuli within a temporal integration window: the redundant target effect was maximal for synchronous stimuli and declined within a ≤70 ms AT asynchrony. EEG revealed a cascade of AT interactions that relied on different neural mechanisms depending on AT asynchrony. At small (≤20 ms) asynchronies, AT interactions arose for evoked response potentials (ERPs) at 110 ms and ~400 ms post-stimulus. Selectively at ±70 ms asynchronies, AT interactions were observed for the P200 ERP, theta-band inter-trial coherence (ITC) and power at ~200 ms post-stimulus. In conclusion, AT binding was mediated by distinct neural mechanisms depending on the asynchrony of the AT signals. Early AT interactions in ERPs and theta-band ITC and power were critical for the behavioural response facilitation within a ≤±70 ms temporal integration window.
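
The redundant target effect reported here is the response-time benefit of the bimodal condition over the faster unisensory condition. A toy sketch of that summary (the reaction times are invented for illustration and are not the study's data):

```python
def redundant_target_effect(rt_auditory, rt_tactile, rt_at):
    """Redundant target effect (ms): mean RT advantage of the audiotactile condition
    over the faster of the two unisensory conditions."""
    mean = lambda xs: sum(xs) / len(xs)
    return min(mean(rt_auditory), mean(rt_tactile)) - mean(rt_at)

rt_a, rt_t = [420, 450, 433], [400, 410, 395]                  # illustrative unisensory RTs (ms)
print(redundant_target_effect(rt_a, rt_t, [350, 365, 355]))    # synchronous: large benefit
print(redundant_target_effect(rt_a, rt_t, [398, 405, 401]))    # 500 ms asynchrony: ~no benefit
```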


Subjects
Auditory Perception, Visual Perception, Acoustic Stimulation, Electroencephalography, Evoked Potentials, Humans, Photic Stimulation, Reaction Time
13.
J Vis ; 20(8): 1, 2020 08 03.
Article in English | MEDLINE | ID: mdl-32744617

ABSTRACT

In our natural environment, the brain needs to combine signals from multiple sensory modalities into a coherent percept. Whereas spatial attention guides perceptual decisions by prioritizing processing of signals that are task-relevant, spatial expectations encode the probability of signals over space. Previous studies have shown that behavioral effects of spatial attention generalize across sensory modalities. However, because they manipulated spatial attention as signal probability over space, these studies could not dissociate attention and expectation or assess their interaction. In two experiments, we orthogonally manipulated spatial attention (i.e., task-relevance) and expectation (i.e., signal probability) selectively in one sensory modality (i.e., primary modality) (experiment 1: audition, experiment 2: vision) and assessed their effects on primary and secondary sensory modalities in which attention and expectation were held constant. Our results show behavioral effects of spatial attention that are comparable for audition and vision as primary modalities; however, signal probabilities were learned more slowly in audition, so that spatial expectations were formed later in audition than vision. Critically, when these differences in learning between audition and vision were accounted for, both spatial attention and expectation affected responses more strongly in the primary modality in which they were manipulated and generalized to the secondary modality only in an attenuated fashion. Collectively, our results suggest that both spatial attention and expectation rely on modality-specific and multisensory mechanisms.


Subjects
Attention/physiology, Auditory Perception/physiology, Pattern Recognition, Visual/physiology, Psychomotor Performance/physiology, Space Perception/physiology, Adult, Female, Humans, Male, Motivation, Uncertainty
14.
PLoS Biol ; 13(2): e1002073, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25710328

ABSTRACT

To form a veridical percept of the environment, the brain needs to integrate sensory signals from a common source but segregate those from independent sources. Thus, perception inherently relies on solving the "causal inference problem." Behaviorally, humans solve this problem optimally as predicted by Bayesian Causal Inference; yet, the underlying neural mechanisms are unexplored. Combining psychophysics, Bayesian modeling, functional magnetic resonance imaging (fMRI), and multivariate decoding in an audiovisual spatial localization task, we demonstrate that Bayesian Causal Inference is performed by a hierarchy of multisensory processes in the human brain. At the bottom of the hierarchy, in auditory and visual areas, location is represented on the basis that the two signals are generated by independent sources (= segregation). At the next stage, in posterior intraparietal sulcus, location is estimated under the assumption that the two signals are from a common source (= forced fusion). Only at the top of the hierarchy, in anterior intraparietal sulcus, the uncertainty about the causal structure of the world is taken into account and sensory signals are combined as predicted by Bayesian Causal Inference. Characterizing the computational operations of signal interactions reveals the hierarchical nature of multisensory perception in human neocortex. It unravels how the brain accomplishes Bayesian Causal Inference, a statistical computation fundamental for perception and cognition. Our results demonstrate how the brain combines information in the face of uncertainty about the underlying causal structure of the world.


Subjects
Auditory Perception/physiology, Nerve Net/physiology, Neural Pathways/physiology, Psychomotor Performance/physiology, Visual Perception/physiology, Acoustic Stimulation, Adult, Auditory Cortex/anatomy & histology, Auditory Cortex/physiology, Bayes Theorem, Brain Mapping, Cognition/physiology, Eye Movements/physiology, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Parietal Lobe/anatomy & histology, Parietal Lobe/physiology, Photic Stimulation, Psychophysics, Reaction Time, Visual Cortex/anatomy & histology, Visual Cortex/physiology
16.
Eur J Neurosci ; 46(12): 2807-2816, 2017 Dec.
Article in English | MEDLINE | ID: mdl-29044872

ABSTRACT

For effective interactions with the environment, the brain needs to form perceptual decisions based on noisy sensory evidence. Accumulating evidence suggests that perceptual decisions are formed by widespread interactions amongst sensory areas representing the noisy sensory evidence and fronto-parietal areas integrating the evidence into a decision variable that is compared to a decisional threshold. This concurrent transcranial magnetic stimulation (TMS)-fMRI study applied 10 Hz bursts of four TMS (or Sham) pulses to the intraparietal sulcus (IPS) to investigate the causal influence of IPS on the neural systems involved in perceptual decision-making. Participants had to detect visual signals at threshold intensity that were presented in their left lower visual field on 50% of the trials. Critically, we adjusted the signal strength such that participants failed to detect the visual stimulus on approximately 30% of the trials, allowing us to categorise trials into hits, misses and correct rejections (CR). Our results show that IPS-TMS relative to Sham-TMS attenuated activation increases for misses relative to CR in the left middle and superior frontal gyri. Critically, while IPS-TMS did not significantly affect participants' performance accuracy, it affected how observers adjusted their response times after making an error. We therefore suggest that activation increases in superior frontal gyri for misses relative to correct responses may not be critical for signal detection performance, but rather reflect post-decisional processing such as metacognitive monitoring of choice accuracy or decisional confidence.
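
Trials are categorised into hits, misses, and correct rejections; one standard way to summarise such detection data (not necessarily the measure reported in this study) is signal detection theory's d', computed from hit and false-alarm rates. A hypothetical sketch:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate), with a log-linear
    correction so extreme rates of 0 or 1 do not yield infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts: ~70% hits on signal trials, ~10% false alarms on noise trials
print(d_prime(hits=56, misses=24, false_alarms=8, correct_rejections=72))   # ~1.8
```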


Subjects
Parietal Lobe/physiology, Prefrontal Cortex/physiology, Visual Perception, Adult, Aged, Decision Making, Female, Humans, Male, Middle Aged, Reaction Time, Sensory Thresholds, Transcranial Magnetic Stimulation
17.
J Neurosci ; 35(32): 11445-57, 2015 Aug 12.
Article in English | MEDLINE | ID: mdl-26269649

ABSTRACT

Adaptive behavior relies on combining bottom-up sensory inputs with top-down control signals to guide responses in line with current goals and task demands. Over the past decade, accumulating evidence has suggested that the dorsal and ventral frontoparietal attentional systems are recruited interactively in this process. This fMRI study used concurrent transcranial magnetic stimulation (TMS) as a causal perturbation approach to investigate the interactions between dorsal and ventral attentional systems and sensory processing areas. In a sustained spatial attention paradigm, human participants detected weak visual targets that were presented in the lower-left visual field on 50% of the trials. Further, we manipulated the presence/absence of task-irrelevant auditory signals. Critically, on each trial we applied 10 Hz bursts of four TMS (or Sham) pulses to the intraparietal sulcus (IPS). IPS-TMS relative to Sham-TMS increased activation in the parietal cortex regardless of sensory stimulation, confirming the neural effectiveness of TMS stimulation. Visual targets increased activations in the anterior insula, a component of the ventral attentional system responsible for salience detection. Conversely, they decreased activations in the ventral visual areas. Importantly, IPS-TMS abolished target-evoked activation increases in the right temporoparietal junction (TPJ) of the ventral attentional system, whereas it eliminated target-evoked activation decreases in the right fusiform. Our results demonstrate that IPS-TMS exerts profound directional causal influences not only on visual areas but also on the TPJ as a critical component of the ventral attentional system. They reveal a complex interplay between dorsal and ventral attentional systems during target detection under sustained spatial attention. SIGNIFICANCE STATEMENT: Adaptive behavior relies on combining bottom-up sensory inputs with top-down attentional control. Although the dorsal and ventral frontoparietal systems are key players in attentional control, their distinct contributions remain unclear. In this TMS-fMRI study, participants attended to the left visual field to detect weak visual targets presented on half of the trials. We applied brief TMS bursts (or Sham-TMS) to the dorsal intraparietal sulcus (IPS) 100 ms after visual stimulus onset. IPS-TMS abolished the visual induced response suppression in the ventral occipitotemporal cortex and the response enhancement to visual targets in the temporoparietal junction. Our results demonstrate that IPS causally influences neural activity in the ventral attentional system 100 ms poststimulus. They have important implications for our understanding of the neural mechanisms underlying attentional control.


Subjects
Attention/physiology, Brain/physiology, Visual Fields/physiology, Adult, Brain Mapping, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Photic Stimulation, Transcranial Magnetic Stimulation, Young Adult
18.
Neuroimage ; 124(Pt A): 876-886, 2016 Jan 01.
Article in English | MEDLINE | ID: mdl-26419391

ABSTRACT

The brain generates a representation of our environment by integrating signals from a common source, but segregating signals from different sources. This fMRI study investigated how the brain arbitrates between perceptual integration and segregation based on top-down congruency expectations and bottom-up stimulus-bound congruency cues. Participants were presented audiovisual movies of phonologically congruent, incongruent or McGurk syllables that can be integrated into an illusory percept (e.g. "ti" percept for visual "ki" with auditory /pi/). They reported the syllable they perceived. Critically, we manipulated participants' top-down congruency expectations by presenting McGurk stimuli embedded in blocks of congruent or incongruent syllables. Behaviorally, participants were more likely to fuse audiovisual signals into an illusory McGurk percept in congruent than incongruent contexts. At the neural level, the left inferior frontal sulcus (lIFS) showed increased activations for bottom-up incongruent relative to congruent inputs. Moreover, lIFS activations were increased for physically identical McGurk stimuli, when participants segregated the audiovisual signals and reported their auditory percept. Critically, this activation increase for perceptual segregation was amplified when participants expected audiovisually incongruent signals based on prior sensory experience. Collectively, our results demonstrate that the lIFS combines top-down prior (in)congruency expectations with bottom-up (in)congruency cues to arbitrate between multisensory integration and segregation.


Subjects
Perception/physiology, Set (Psychology), Acoustic Stimulation, Adult, Auditory Perception/physiology, Cues, Female, Humans, Illusions/physiology, Magnetic Resonance Imaging, Male, Middle Aged, Noise, Photic Stimulation, Prefrontal Cortex/physiology, Psychometrics, Reaction Time/physiology, Speech Perception/physiology, Visual Perception/physiology, Young Adult
19.
Neuroimage ; 124(Pt A): 641-653, 2016 Jan 01.
Article in English | MEDLINE | ID: mdl-26363344

ABSTRACT

Speech recognition is rapid, automatic and amazingly robust. How the brain is able to decode speech from noisy acoustic inputs is unknown. We show that the brain recognizes speech by integrating bottom-up acoustic signals with top-down predictions. Subjects listened to intelligible normal and unintelligible fine structure speech that lacked the predictability of the temporal envelope and did not enable access to higher linguistic representations. Their top-down predictions were manipulated using priming. Activation for unintelligible fine structure speech was confined to primary auditory cortices, but propagated into posterior middle temporal areas when fine structure speech was made intelligible by top-down predictions. By contrast, normal speech engaged posterior middle temporal areas irrespective of subjects' predictions. Critically, when speech violated subjects' expectations, activation increases in anterior temporal gyri/sulci signalled a prediction error and the need for new semantic integration. In line with predictive coding, our findings compellingly demonstrate that top-down predictions determine whether and how the brain translates bottom-up acoustic inputs into intelligible speech.


Subjects
Speech Intelligibility/physiology, Speech Perception/physiology, Acoustic Stimulation, Adult, Auditory Cortex, Bayes Theorem, Brain Mapping, Female, Humans, Linguistics, Magnetic Resonance Imaging, Male, Repetition Priming/physiology, Temporal Lobe/physiology, Young Adult
20.
Neuroimage ; 122: 203-13, 2015 Nov 15.
Article in English | MEDLINE | ID: mdl-26244276

ABSTRACT

In everyday life, our auditory system is bombarded with many signals in complex auditory scenes. Limited processing capacities allow only a fraction of these signals to enter perceptual awareness. This magnetoencephalography (MEG) study used informational masking to identify the neural mechanisms that enable auditory awareness. On each trial, participants indicated whether they detected a pair of sequentially presented tones (i.e., the target) that were embedded within a multi-tone background. We analysed MEG activity for 'hits' and 'misses', separately for the first and second tones within a target pair. Comparing physically identical stimuli that were detected or missed provided insights into the neural processes underlying auditory awareness. While the first tone within a target elicited a stronger early P50m on hit trials, only the second tone evoked a negativity at 150 ms, which may index segregation of the tone pair from the multi-tone background. Notably, a later sustained deflection peaking around 300 and 500 ms (P300m) was the only component that was significantly amplified for both tones, when they were detected pointing towards its key role in perceptual awareness. Additional Dynamic Causal Modelling analyses indicated that the negativity at 150 ms underlying auditory stream segregation is mediated predominantly via changes in intrinsic connectivity within auditory cortices. By contrast, the later P300m response as a signature of perceptual awareness relies on interactions between parietal and auditory cortices. In conclusion, our results suggest that successful detection and hence auditory awareness of a two-tone pair within complex auditory scenes relies on recurrent processing between auditory and higher-order parietal cortices.


Subjects
Auditory Perception/physiology, Cerebral Cortex/physiology, Acoustic Stimulation, Adult, Awareness/physiology, Brain Mapping, Event-Related Potentials, P300, Evoked Potentials, Auditory, Female, Humans, Magnetic Resonance Imaging, Magnetoencephalography, Male, Young Adult