Results 1 - 20 of 139
1.
Cereb Cortex ; 34(1)2024 01 14.
Article in English | MEDLINE | ID: mdl-37950879

ABSTRACT

Bistable perception follows from observing a static, ambiguous (visual) stimulus with two possible interpretations. Here, we present an active (Bayesian) inference account of bistable perception and posit that perceptual transitions between different interpretations (i.e. inferences) of the same stimulus ensue from specific eye movements that shift the focus to a different visual feature. Formally, these inferences are a consequence of precision control that determines how confident beliefs are and changes the frequency with which one can perceive-and alternate between-two distinct percepts. We hypothesized that there are multiple, but distinct, ways in which precision modulation can interact to give rise to a similar frequency of bistable perception. We validated this using numerical simulations of the Necker cube paradigm and demonstrated the multiple routes that underwrite the frequency of perceptual alternation. Our results provide an (enactive) computational account of the intricate precision balance underwriting bistable perception. Importantly, these precision parameters can be considered the computational homologs of particular neurotransmitters-i.e. acetylcholine, noradrenaline, dopamine-that have been previously implicated in controlling bistable perception, providing a computational link between neurochemistry and perception.
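
A toy illustration of how a single precision parameter can set the alternation rate (a schematic sketch, not the authors' generative model; all parameter values are invented for illustration): the log-odds between the two Necker-cube interpretations are pulled towards the currently dominant percept with a strength scaled by precision, while unmodelled noise occasionally flips the sign, producing a perceptual switch.

```python
import numpy as np

rng = np.random.default_rng(0)

def count_alternations(precision, n_steps=20000, bias=0.02, leak=0.05, noise_sd=0.1):
    """Toy simulation: log-odds of percept A vs. B are attracted towards the
    currently dominant percept with a strength scaled by precision, while
    unmodelled fluctuations occasionally flip the sign (a perceptual switch)."""
    log_odds, percept, switches = 0.0, 1, 0
    for _ in range(n_steps):
        evidence = bias * percept            # self-confirming evidence for the current percept
        log_odds = (1 - leak) * log_odds + precision * evidence + rng.normal(0, noise_sd)
        new_percept = 1 if log_odds >= 0 else -1
        if new_percept != percept:
            switches += 1
            percept = new_percept
    return switches

for precision in (0.25, 0.5, 1.0, 2.0, 4.0):
    print(f"precision = {precision:>4}: {count_alternations(precision)} perceptual switches")
```

In this sketch, higher precision yields a deeper attractor and hence fewer switches, which is the qualitative pattern the abstract attributes to precision control.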


Subjects
Eye Movements, Visual Perception, Bayes Theorem, Photic Stimulation/methods
2.
Neuroimage ; 279: 120310, 2023 10 01.
Article in English | MEDLINE | ID: mdl-37544417

ABSTRACT

This article details a scheme for approximate Bayesian inference, which has underpinned thousands of neuroimaging studies since its introduction 15 years ago. Variational Laplace (VL) provides a generic approach to fitting linear or non-linear models, which may be static or dynamic, returning a posterior probability density over the model parameters and an approximation of log model evidence, which enables Bayesian model comparison. VL applies variational Bayesian inference in conjunction with quadratic or Laplace approximations of the evidence lower bound (free energy). Importantly, update equations do not need to be derived for each model under consideration, providing a general method for fitting a broad class of models. This primer is intended for experimenters and modellers who may wish to fit models to data using variational Bayesian methods, without assuming previous experience of variational Bayes or machine learning. Accompanying code demonstrates how to fit different kinds of model using the reference implementation of the VL scheme in the open-source Statistical Parametric Mapping (SPM) software package. In addition, we provide a standalone software function that does not require SPM, in order to ease translation to other fields, together with detailed pseudocode. Finally, the supplementary materials provide worked derivations of the key equations.
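
A minimal, self-contained sketch of the Laplace step at the heart of such schemes (this is not the SPM reference implementation; the model, priors, and noise level below are invented for illustration): find the posterior mode of a nonlinear model by damped Gauss-Newton, take the local curvature as the posterior precision, and read off an approximate log evidence (free energy).

```python
import numpy as np

# Illustrative nonlinear model: y = a * exp(-b * t) + Gaussian noise (not an SPM model)
rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 40)
y = 2.0 * np.exp(-0.7 * t) + rng.normal(0.0, 0.1, t.size)
sigma2 = 0.1 ** 2                                   # assumed known noise variance

prior_mean = np.zeros(2)
prior_prec = np.eye(2) / 4.0                        # prior variance of 4 on each parameter

def f(theta):
    return theta[0] * np.exp(-theta[1] * t)

def jac(theta, eps=1e-6):
    return np.column_stack([(f(theta + e) - f(theta - e)) / (2 * eps)
                            for e in np.eye(2) * eps])

theta = prior_mean.copy()
for _ in range(200):                                # damped Gauss-Newton ascent on the log joint
    J, r = jac(theta), y - f(theta)
    grad = J.T @ r / sigma2 - prior_prec @ (theta - prior_mean)
    hess = J.T @ J / sigma2 + prior_prec            # curvature (negative Hessian of log joint)
    theta = theta + 0.5 * np.linalg.solve(hess, grad)

post_cov = np.linalg.inv(hess)                      # Laplace (Gaussian) posterior covariance
log_joint = (-0.5 * np.sum((y - f(theta)) ** 2) / sigma2
             - 0.5 * t.size * np.log(2 * np.pi * sigma2)
             - 0.5 * (theta - prior_mean) @ prior_prec @ (theta - prior_mean)
             + 0.5 * np.log(np.linalg.det(prior_prec / (2 * np.pi))))
free_energy = log_joint + 0.5 * np.log(np.linalg.det(2 * np.pi * post_cov))
print("posterior mean:", np.round(theta, 3))
print("approximate log evidence (free energy):", round(free_energy, 2))
```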


Subjects
Algorithms, Neuroimaging, Humans, Bayes Theorem, Machine Learning, Software
3.
Neural Comput ; 35(5): 807-852, 2023 04 18.
Article in English | MEDLINE | ID: mdl-36944240

ABSTRACT

Active inference is a probabilistic framework for modeling the behavior of biological and artificial agents, which derives from the principle of minimizing free energy. In recent years, this framework has been applied successfully to a variety of situations where the goal was to maximize reward, often offering comparable and sometimes superior performance to alternative approaches. In this article, we clarify the connection between reward maximization and active inference by demonstrating how and when active inference agents execute actions that are optimal for maximizing reward. More precisely, we show the conditions under which active inference produces the optimal solution to the Bellman equation, a formulation that underlies several approaches to model-based reinforcement learning and control. On partially observed Markov decision processes, the standard active inference scheme can produce Bellman optimal actions for planning horizons of 1 but not beyond. In contrast, a recently developed recursive active inference scheme (sophisticated inference) can produce Bellman optimal actions on any finite temporal horizon. We supplement the analysis with a discussion of the broader relationship between active inference and reinforcement learning.
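
The Bellman optimality equation mentioned here can be made concrete with a short value-iteration sketch on a toy, fully observed MDP (transition probabilities and rewards are illustrative, not taken from the paper):

```python
import numpy as np

# Toy MDP: 3 states, 2 actions. P[a, s, s'] are transition probabilities,
# R[s, a] are expected rewards; gamma is the discount factor.
P = np.array([[[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],
              [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]]])
R = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
gamma = 0.9

V = np.zeros(3)
for _ in range(200):
    # Bellman optimality backup: V(s) = max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ]
    Q = R + gamma * np.einsum("ast,t->sa", P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

print("optimal values:", np.round(V, 3), "optimal policy:", Q.argmax(axis=1))
```

Value iteration repeatedly applies the Bellman optimality backup until the value function stops changing; the resulting greedy policy is the reward-maximizing benchmark against which the abstract compares active inference.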


Subjects
Choice Behavior, Learning, Reward
4.
PLoS Comput Biol ; 18(9): e1010490, 2022 09.
Article in English | MEDLINE | ID: mdl-36099315

ABSTRACT

A growing body of evidence highlights the intricate linkage of exteroceptive perception to the rhythmic activity of the visceral body. In parallel, interoceptive inference theories of affective perception and self-consciousness are on the rise in cognitive science. However, thus far no formal theory has emerged to integrate these twin domains; instead, most extant work is conceptual in nature. Here, we introduce a formal model of cardiac active inference, which explains how ascending cardiac signals entrain exteroceptive sensory perception and uncertainty. Through simulated psychophysics, we reproduce the defensive startle reflex and commonly reported effects linking the cardiac cycle to affective behaviour. We further show that simulated 'interoceptive lesions' blunt affective expectations, induce psychosomatic hallucinations, and exacerbate biases in perceptual uncertainty. Through synthetic heart-rate variability analyses, we illustrate how the balance of arousal-priors and visceral prediction errors produces idiosyncratic patterns of physiological reactivity. Our model thus offers a roadmap for computationally phenotyping disordered brain-body interaction.


Subjects
Interoception, Brain, Emotions/physiology, Heart Rate/physiology, Interoception/physiology
5.
Oecologia ; 202(4): 795-806, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37582947

ABSTRACT

The impacts of animals on the biogeochemical cycles of major bioelements like C, N, and P are well-studied across ecosystem types. However, more than 20 elements are necessary for life. The feedbacks between animals and the biogeochemical cycles of the other bioelements are an emerging research priority. We explored the extent to which freshwater mussels (Bivalvia: Unionoida) were related to variability in ecosystem pools of 10 bioelements (Ca, Cu, Fe, K, Mn, Na, Mg, P, S and Zn) in streams containing a natural mussel density gradient in the US Interior Highlands. We studied the concentrations of these bioelements across the aquatic-terrestrial interface: in the porewater of riverine gravel bars and in the emergent macrophyte Justicia americana. Higher mussel density was associated with increased calcium in gravel bars and macrophytes. Mussel density also correlated with variability in iron and other redox-sensitive trace elements in gravel bars and macrophytes, although this relationship was mediated by sediment grain size. We found that two explanations for the patterns we observed are worthy of further research: (1) increased calcium availability in gravel bars near denser mussel aggregations may be a product of the buildup and dissolution of shells in the gravel bar, and (2) mussels may alter redox conditions, and thus elemental availability in gravel bars with fine sediments, either behaviorally or through physical structure provided by shell material. A better understanding of the physical and biogeochemical impacts of animals on a wide range of elemental cycles is thus necessary to conserve the societal value of freshwater ecosystems.


Subjects
Bivalvia, Ecosystem, Animals, Calcium, Fresh Water, Rivers
6.
Entropy (Basel) ; 25(7)2023 Jun 21.
Article in English | MEDLINE | ID: mdl-37509911

ABSTRACT

This paper introduces a variational formulation of natural selection, paying special attention to the nature of 'things' and the way that different 'kinds' of 'things' are individuated from-and influence-each other. We use the Bayesian mechanics of particular partitions to understand how slow phylogenetic processes constrain-and are constrained by-fast, phenotypic processes. The main result is a formulation of adaptive fitness as a path integral of phenotypic fitness. Paths of least action, at the phenotypic and phylogenetic scales, can then be read as inference and learning processes, respectively. In this view, a phenotype actively infers the state of its econiche under a generative model, whose parameters are learned via natural (Bayesian model) selection. The ensuing variational synthesis features some unexpected aspects. Perhaps the most notable is that it is not possible to describe or model a population of conspecifics per se. Rather, it is necessary to consider populations of distinct natural kinds that influence each other. This paper is limited to a description of the mathematical apparatus and accompanying ideas. Subsequent work will use these methods for simulations and numerical analyses-and identify points of contact with related mathematical formulations of evolution.

7.
Neural Comput ; 34(4): 829-855, 2022 03 23.
Article in English | MEDLINE | ID: mdl-35231935

ABSTRACT

Under the Bayesian brain hypothesis, behavioral variations can be attributed to different priors over generative model parameters. This provides a formal explanation for why individuals exhibit inconsistent behavioral preferences when confronted with similar choices. For example, greedy preferences are a consequence of confident (or precise) beliefs over certain outcomes. Here, we offer an alternative account of behavioral variability using Rényi divergences and their associated variational bounds. Rényi bounds are analogous to the variational free energy (or evidence lower bound) and can be derived under the same assumptions. Importantly, these bounds provide a formal way to establish behavioral differences through an α parameter, given fixed priors. This rests on changes in α that alter the bound (on a continuous scale), inducing different posterior estimates and consequent variations in behavior. Thus, it looks as if individuals have different priors and have reached different conclusions. More specifically, α→0+ optimization constrains the variational posterior to be positive whenever the true posterior is positive. This leads to mass-covering variational estimates and increased variability in choice behavior. Furthermore, α→+∞ optimization constrains the variational posterior to be zero whenever the true posterior is zero. This leads to mass-seeking variational posteriors and greedy preferences. We exemplify this formulation through simulations of the multiarmed bandit task. We note that these α parameterizations may be especially relevant (i.e., shape preferences) when the true posterior is not in the same family of distributions as the assumed (simpler) approximate density, which may be the case in many real-world scenarios. The ensuing departure from vanilla variational inference provides a potentially useful explanation for differences in behavioral preferences of biological (or artificial) agents under the assumption that the brain performs variational Bayesian inference.
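
The mass-covering versus mass-seeking behaviour described here can be illustrated numerically by fitting a single Gaussian to a bimodal "true posterior" while minimising the Rényi divergence D_alpha(q||p) for different alpha (a grid-search sketch with invented distributions, not the paper's simulations):

```python
import numpy as np

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

def normal(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Bimodal "true posterior" p and a single-Gaussian variational family q(mu, sd)
p = 0.5 * normal(x, -3, 1) + 0.5 * normal(x, 3, 1)

def renyi_divergence(q, p, alpha):
    # D_alpha(q || p) = 1/(alpha - 1) * log  integral of q^alpha p^(1-alpha) dx
    integrand = np.where((q > 0) & (p > 0), q**alpha * p**(1 - alpha), 0.0)
    return np.log(np.sum(integrand) * dx) / (alpha - 1)

def best_gaussian(alpha):
    best = None
    for mu in np.linspace(-4, 4, 41):
        for sd in np.linspace(0.5, 4.0, 36):
            d = renyi_divergence(normal(x, mu, sd), p, alpha)
            if best is None or d < best[0]:
                best = (d, mu, sd)
    return best

for alpha in (0.05, 2.0, 10.0):
    d, mu, sd = best_gaussian(alpha)
    print(f"alpha={alpha:>5}: best q has mu={mu:+.1f}, sd={sd:.1f}  (D={d:.3f})")
```

With small alpha the best single Gaussian spreads over both modes (mass-covering); with large alpha it collapses onto one mode (mass-seeking), mirroring the behavioural variability the abstract attributes to alpha.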


Subjects
Brain, Bayes Theorem, Humans
8.
Brain ; 144(6): 1799-1818, 2021 07 28.
Article in English | MEDLINE | ID: mdl-33704439

ABSTRACT

We propose a computational neurology of movement based on the convergence of theoretical neurobiology and clinical neurology. A significant development in the former is the idea that we can frame brain function as a process of (active) inference, in which the nervous system makes predictions about its sensory data. These predictions depend upon an implicit predictive (generative) model used by the brain. This means neural dynamics can be framed as generating actions to ensure sensations are consistent with these predictions-and adjusting predictions when they are not. We illustrate the significance of this formulation for clinical neurology by simulating a clinical examination of the motor system using an upper limb coordination task. Specifically, we show how tendon reflexes emerge naturally under the right kind of generative model. Through simulated perturbations, pertaining to prior probabilities of this model's variables, we illustrate the emergence of hyperreflexia and pendular reflexes, reminiscent of neurological lesions in the corticospinal tract and cerebellum. We then turn to the computational lesions causing hypokinesia and deficits of coordination. This in silico lesion-deficit analysis provides an opportunity to revisit classic neurological dichotomies (e.g. pyramidal versus extrapyramidal systems) from the perspective of modern approaches to theoretical neurobiology-and our understanding of the neurocomputational architecture of movement control based on first principles.


Subjects
Brain/physiology, Computer Simulation, Models, Neurological, Movement/physiology, Neurology/methods, Humans
9.
Behav Brain Sci ; 45: e203, 2022 09 29.
Article in English | MEDLINE | ID: mdl-36172764

ABSTRACT

This commentary suggests that, although Markov blankets may have different interpretations in different systems, these distinctions rest not upon the type of blanket, but upon the model that determines the blanket. As an example, the conditions for a model in which the Markov blanket may be interpretable as a physical (spatial) boundary are considered.

10.
Neural Comput ; 33(3): 674-712, 2021 03.
Article in English | MEDLINE | ID: mdl-33400903

ABSTRACT

Active inference is a first principle account of how autonomous agents operate in dynamic, nonstationary environments. This problem is also considered in reinforcement learning, but limited work exists on comparing the two approaches on the same discrete-state environments. In this letter, we provide (1) an accessible overview of the discrete-state formulation of active inference, highlighting natural behaviors in active inference that are generally engineered in reinforcement learning, and (2) an explicit discrete-state comparison between active inference and reinforcement learning on an OpenAI gym baseline. We begin by providing a condensed overview of the active inference literature, in particular viewing the various natural behaviors of active inference agents through the lens of reinforcement learning. We show that by operating in a pure belief-based setting, active inference agents can carry out epistemic exploration-and account for uncertainty about their environment-in a Bayes-optimal fashion. Furthermore, we show that the reliance on an explicit reward signal in reinforcement learning is removed in active inference, where reward can simply be treated as another observation we have a preference over; even in the total absence of rewards, agent behaviors are learned through preference learning. We make these properties explicit by showing two scenarios in which active inference agents can infer behaviors in reward-free environments compared to both Q-learning and Bayesian model-based reinforcement learning agents and by placing zero prior preferences over rewards and learning the prior preferences over the observations corresponding to reward. We conclude by noting that this formalism can be applied to more complex settings (e.g., robotic arm movement, Atari games) if appropriate generative models can be formulated. In short, we aim to demystify the behavior of active inference agents by presenting an accessible discrete state-space and time formulation and demonstrate these behaviors in an OpenAI gym environment, alongside reinforcement learning agents.
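
The belief-based, epistemic character of action selection can be sketched with a one-step expected free energy computation for a small discrete model (all matrices are illustrative placeholders, not the paper's gym benchmark): each action is scored by a pragmatic term (expected log preference, the reward-like part) plus an epistemic term (expected information gain about hidden states).

```python
import numpy as np

# Discrete generative model (illustrative): 2 hidden states, 2 observations, 2 actions.
A = np.array([[0.9, 0.2],          # A[o, s] = P(o | s): likelihood mapping
              [0.1, 0.8]])
B = np.stack([np.array([[0.9, 0.1], [0.1, 0.9]]),   # B[a][s', s]: transitions per action
              np.array([[0.5, 0.5], [0.5, 0.5]])])
C = np.log(np.array([0.7, 0.3]))   # log preferences over observations
q_s = np.array([0.5, 0.5])         # current beliefs about hidden states

def expected_free_energy(a):
    q_next = B[a] @ q_s                     # predicted state distribution
    q_o = A @ q_next                        # predicted observation distribution
    pragmatic = q_o @ C                     # expected log preference (reward-like term)
    # Epistemic value: expected information gain about states from the next observation
    epistemic = 0.0
    for o in range(A.shape[0]):
        post = A[o] * q_next
        post = post / post.sum()
        epistemic += q_o[o] * np.sum(post * (np.log(post + 1e-16) - np.log(q_next + 1e-16)))
    return -(pragmatic + epistemic)         # lower G = better action

G = np.array([expected_free_energy(a) for a in range(2)])
print("expected free energy per action:", np.round(G, 3), "-> choose action", G.argmin())
```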

11.
Neural Comput ; 33(3): 713-763, 2021 03.
Article in English | MEDLINE | ID: mdl-33626312

ABSTRACT

Active inference offers a first principle account of sentient behavior, from which special and important cases-for example, reinforcement learning, active learning, Bayes optimal inference, Bayes optimal design-can be derived. Active inference finesses the exploitation-exploration dilemma in relation to prior preferences by placing information gain on the same footing as reward or value. In brief, active inference replaces value functions with functionals of (Bayesian) beliefs, in the form of an expected (variational) free energy. In this letter, we consider a sophisticated kind of active inference using a recursive form of expected free energy. Sophistication describes the degree to which an agent has beliefs about beliefs. We consider agents with beliefs about the counterfactual consequences of action for states of affairs and beliefs about those latent states. In other words, we move from simply considering beliefs about "what would happen if I did that" to "what I would believe about what would happen if I did that." The recursive form of the free energy functional effectively implements a deep tree search over actions and outcomes in the future. Crucially, this search is over sequences of belief states as opposed to states per se. We illustrate the competence of this scheme using numerical simulations of deep decision problems.
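
A schematic of the recursion (a toy depth-limited search over belief states with invented parameters, not the paper's implementation): at each depth, every action is scored by its pragmatic and epistemic value plus the expected free energy of the best continuation from the belief that would follow each possible observation.

```python
import numpy as np

A = np.array([[0.9, 0.2], [0.1, 0.8]])                 # P(o | s)
B = [np.array([[0.9, 0.1], [0.1, 0.9]]),               # P(s' | s, a) per action
     np.array([[0.5, 0.5], [0.5, 0.5]])]
logC = np.log(np.array([0.8, 0.2]))                    # log preferences over observations

def G(q, depth):
    """Recursive (sophisticated) expected free energy over *belief* states."""
    if depth == 0:
        return 0.0
    values = []
    for b in B:
        q_next = b @ q
        q_o = A @ q_next
        value = q_o @ logC                             # pragmatic value of this action
        for o, p_o in enumerate(q_o):                  # branch over possible observations
            q_post = A[o] * q_next
            q_post = q_post / q_post.sum()
            # epistemic gain plus the best continuation from the updated belief
            gain = np.sum(q_post * (np.log(q_post + 1e-16) - np.log(q_next + 1e-16)))
            value += p_o * (gain - G(q_post, depth - 1))
        values.append(-value)
    return min(values)                                  # act to minimise expected free energy

q0 = np.array([0.5, 0.5])
print("G per planning depth:", [round(G(q0, d), 3) for d in (1, 2, 3)])
```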

12.
Neural Comput ; 33(2): 398-446, 2021 02.
Article in English | MEDLINE | ID: mdl-33253028

ABSTRACT

The positive-negative axis of emotional valence has long been recognized as fundamental to adaptive behavior, but its origin and underlying function have largely eluded formal theorizing and computational modeling. Using deep active inference, a hierarchical inference scheme that rests on inverting a model of how sensory data are generated, we develop a principled Bayesian model of emotional valence. This formulation asserts that agents infer their valence state based on the expected precision of their action model-an internal estimate of overall model fitness ("subjective fitness"). This index of subjective fitness can be estimated within any environment and exploits the domain generality of second-order beliefs (beliefs about beliefs). We show how maintaining internal valence representations allows the ensuing affective agent to optimize confidence in action selection preemptively. Valence representations can in turn be optimized by leveraging the (Bayes-optimal) updating term for subjective fitness, which we label affective charge (AC). AC tracks changes in fitness estimates and lends a sign to otherwise unsigned divergences between predictions and outcomes. We simulate the resulting affective inference by subjecting an in silico affective agent to a T-maze paradigm requiring context learning, followed by context reversal. This formulation of affective inference offers a principled account of the link between affect, (mental) action, and implicit metacognition. It characterizes how a deep biological system can infer its affective state and reduce uncertainty about such inferences through internal action (i.e., top-down modulation of priors that underwrite confidence). Thus, we demonstrate the potential of active inference to provide a formal and computationally tractable account of affect. Our demonstration of the face validity and potential utility of this formulation represents the first step within a larger research program. Next, this model can be leveraged to test the hypothesized role of valence by fitting the model to behavioral and neuronal responses.

13.
Cereb Cortex ; 30(2): 682-695, 2020 03 21.
Article in English | MEDLINE | ID: mdl-31298270

ABSTRACT

The prefrontal cortex is vital for a range of cognitive processes, including working memory, attention, and decision-making. Notably, its absence impairs the performance of tasks requiring the maintenance of information through a delay period. In this paper, we formulate a rodent task-which requires maintenance of delay-period activity-as a Markov decision process and treat optimal task performance as an (active) inference problem. We simulate the behavior of a Bayes optimal mouse presented with 1 of 2 cues that instructs the selection of concurrent visual and auditory targets on a trial-by-trial basis. Formulating inference as message passing, we reproduce features of neuronal coupling within and between prefrontal regions engaged by this task. We focus on the micro-circuitry that underwrites delay-period activity and relate it to functional specialization within the prefrontal cortex in primates. Finally, we simulate the electrophysiological correlates of inference and demonstrate the consequences of lesions to each part of our in silico prefrontal cortex. In brief, this formulation suggests that recurrent excitatory connections-which support persistent neuronal activity-encode beliefs about transition probabilities over time. We argue that attentional modulation can be understood as the contextualization of sensory input by these persistent beliefs.
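
For readers unfamiliar with the discrete formulation, models of this kind are usually specified by a small set of arrays: a likelihood mapping (A), controllable transitions (B), prior preferences (C), and initial-state priors (D). The skeleton below shows only the general shape of such a specification; the dimensions and values are placeholders, not the paper's actual task model.

```python
import numpy as np

n_states, n_obs, n_actions = 4, 3, 2

# A[o, s]: probability of each observation given each hidden state (likelihood)
A = np.full((n_obs, n_states), 1.0 / n_obs)

# B[a, s', s]: state transitions under each action (here: "hold" vs. "switch")
B = np.zeros((n_actions, n_states, n_states))
B[0] = np.eye(n_states)                         # action 0 maintains the current state
B[1] = np.roll(np.eye(n_states), 1, axis=0)     # action 1 moves to the next state

# C[o]: log prior preferences over observations (which outcomes the agent "wants")
C = np.log(np.array([0.1, 0.1, 0.8]))

# D[s]: prior beliefs about the initial hidden state
D = np.full(n_states, 1.0 / n_states)

print({name: arr.shape for name, arr in {"A": A, "B": B, "C": C, "D": D}.items()})
```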


Subjects
Attention/physiology, Decision Making/physiology, Memory, Short-Term/physiology, Models, Neurological, Prefrontal Cortex/physiology, Animals, Bayes Theorem, Behavior, Animal, Brain/physiology, Humans, Markov Chains, Mice, Neural Networks, Computer, Neural Pathways/physiology, Neurons
14.
Cereb Cortex ; 30(11): 5750-5766, 2020 10 01.
Article in English | MEDLINE | ID: mdl-32488244

ABSTRACT

The notions of degeneracy and redundancy are important constructs in many areas, ranging from genomics through to network science. Degeneracy finds a powerful role in neuroscience, explaining key aspects of distributed processing and structure-function relationships in the brain. For example, degeneracy accounts for the superadditive effect of lesions on functional deficits in terms of a "many-to-one" structure-function mapping. In this paper, we offer a principled account of degeneracy and redundancy, when function is operationalized in terms of active inference, namely, a formulation of perception and action as belief updating under generative models of the world. In brief, "degeneracy" is quantified by the "entropy" of posterior beliefs about the causes of sensations, while "redundancy" is the "complexity" cost incurred by forming those beliefs. From this perspective, degeneracy and redundancy are complementary: Active inference tries to minimize redundancy while maintaining degeneracy. This formulation is substantiated using statistical and mathematical notions of degenerate mappings and statistical efficiency. We then illustrate changes in degeneracy and redundancy during the learning of a word repetition task. Finally, we characterize the effects of lesions-to intrinsic and extrinsic connections-using in silico disconnections. These numerical analyses highlight the fundamental difference between degeneracy and redundancy-and how they score distinct imperatives for perceptual inference and structure learning that are relevant to synthetic and biological intelligence.
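
These two quantities have direct numerical counterparts: the entropy of the posterior over causes (degeneracy) and the KL divergence of the posterior from the prior (redundancy, or complexity). A toy calculation with invented distributions:

```python
import numpy as np

prior = np.array([0.25, 0.25, 0.25, 0.25])       # prior beliefs about the causes of a sensation
posterior = np.array([0.40, 0.40, 0.15, 0.05])   # posterior beliefs after seeing the data

# "Degeneracy": entropy of the posterior -- many causes remain plausible when this is high
degeneracy = -np.sum(posterior * np.log(posterior))

# "Redundancy": complexity, the KL divergence of the posterior from the prior
redundancy = np.sum(posterior * (np.log(posterior) - np.log(prior)))

print(f"degeneracy (posterior entropy): {degeneracy:.3f} nats")
print(f"redundancy (complexity):        {redundancy:.3f} nats")
```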


Subjects
Brain/physiology, Models, Neurological, Animals, Bayes Theorem, Humans
15.
Entropy (Basel) ; 23(5)2021 May 14.
Article in English | MEDLINE | ID: mdl-34068913

ABSTRACT

Active inference is an increasingly prominent paradigm in theoretical biology. It frames the dynamics of living systems as if they were solving an inference problem. This rests upon their flow towards some (non-equilibrium) steady state-or equivalently, their maximisation of the Bayesian model evidence for an implicit probabilistic model. For many models, these self-evidencing dynamics manifest as messages passed among elements of a system. Such messages resemble synaptic communication at a neuronal network level but could also apply to other network structures. This paper attempts to apply the same formulation to biochemical networks. The chemical computation that occurs in regulation of metabolism relies upon sparse interactions between coupled reactions, where enzymes induce conditional dependencies between reactants. We will see that these reactions may be viewed as the movement of probability mass between alternative categorical states. When framed in this way, the master equations describing such systems can be reformulated in terms of their steady-state distribution. This distribution plays the role of a generative model, affording an inferential interpretation of the underlying biochemistry. Finally, we see that-in analogy with computational neurology and psychiatry-metabolic disorders may be characterized as false inference under aberrant prior beliefs.
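
The move from a master equation to its steady-state distribution can be illustrated with a small rate matrix (illustrative rates, not a real reaction network): the steady state is the null eigenvector of the generator, and its negative log gives the surprisal that plays the role of a generative model.

```python
import numpy as np

# Illustrative master equation over 3 categorical (chemical) states.
# Q[i, j] is the rate of moving from state j to state i; columns sum to zero.
Q = np.array([[-0.7,  0.3,  0.1],
              [ 0.5, -0.5,  0.4],
              [ 0.2,  0.2, -0.5]])

# dp/dt = Q p. The (non-equilibrium) steady state p* satisfies Q p* = 0.
eigvals, eigvecs = np.linalg.eig(Q)
p_star = np.real(eigvecs[:, np.argmin(np.abs(eigvals))])
p_star = p_star / p_star.sum()

print("steady-state distribution:", np.round(p_star, 3))
# Surprisal (self-information) of each state under the steady-state "generative model"
print("surprisal per state:      ", np.round(-np.log(p_star), 3))
```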

16.
Entropy (Basel) ; 23(8)2021 Aug 19.
Article in English | MEDLINE | ID: mdl-34441216

ABSTRACT

Biehl et al. (2021) present some interesting observations on an early formulation of the free energy principle. We use these observations to scaffold a discussion of the technical arguments that underwrite the free energy principle. This discussion focuses on solenoidal coupling between various (subsets of) states in sparsely coupled systems that possess a Markov blanket-and the distinction between exact and approximate Bayesian inference, implied by the ensuing Bayesian mechanics.

17.
Entropy (Basel) ; 23(9)2021 Sep 17.
Article in English | MEDLINE | ID: mdl-34573845

ABSTRACT

In this treatment of random dynamical systems, we consider the existence-and identification-of conditional independencies at nonequilibrium steady-state. These independencies underwrite a particular partition of states, in which internal states are statistically secluded from external states by blanket states. The existence of such partitions has interesting implications for the information geometry of internal states. In brief, this geometry can be read as a physics of sentience, where internal states look as if they are inferring external states. However, the existence of such partitions-and the functional form of the underlying densities-have yet to be established. Here, using the Lorenz system as the basis of stochastic chaos, we leverage the Helmholtz decomposition-and polynomial expansions-to parameterise the steady-state density in terms of surprisal or self-information. We then show how Markov blankets can be identified-using the accompanying Hessian-to characterise the coupling between internal and external states in terms of a generalised synchrony or synchronisation of chaos. We conclude by suggesting that this kind of synchronisation may provide a mathematical basis for an elemental form of (autonomous or active) sentience in biology.
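
A minimal version of this setup is a stochastically driven Lorenz system integrated by Euler-Maruyama, from which the steady-state density, and hence surprisal, can be estimated by sampling (the paper's polynomial parameterisation and Helmholtz decomposition are not reproduced; the noise level and run length below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0      # standard Lorenz parameters
noise, dt, n_steps = 2.0, 1e-3, 100_000

x = np.array([1.0, 1.0, 25.0])
samples = np.empty((n_steps, 3))
for i in range(n_steps):                      # Euler-Maruyama integration
    drift = np.array([sigma * (x[1] - x[0]),
                      x[0] * (rho - x[2]) - x[1],
                      x[0] * x[1] - beta * x[2]])
    x = x + drift * dt + noise * np.sqrt(dt) * rng.normal(size=3)
    samples[i] = x

# Crude estimate of the steady-state density (and surprisal) over the first coordinate
hist, edges = np.histogram(samples[:, 0], bins=50, density=True)
surprisal = -np.log(hist + 1e-12)
print("most probable x-bin:", edges[np.argmax(hist)].round(2),
      "| minimum surprisal:", surprisal.min().round(2))
```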

18.
Entropy (Basel) ; 23(4)2021 Apr 12.
Article in English | MEDLINE | ID: mdl-33921298

ABSTRACT

Active inference is a normative framework for explaining behaviour under the free energy principle-a theory of self-organisation originating in neuroscience. It specifies neuronal dynamics for state-estimation in terms of a descent on (variational) free energy-a measure of the fit between an internal (generative) model and sensory observations. The free energy gradient is a prediction error-plausibly encoded in the average membrane potentials of neuronal populations. Conversely, the expected probability of a state can be expressed in terms of neuronal firing rates. We show that this is consistent with current models of neuronal dynamics and establish face validity by synthesising plausible electrophysiological responses. We then show that these neuronal dynamics approximate natural gradient descent, a well-known optimisation algorithm from information geometry that follows the steepest descent of the objective in information space. We compare the information length of belief updating in both schemes, a measure of the distance travelled in information space that has a direct interpretation in terms of metabolic cost. We show that neural dynamics under active inference are metabolically efficient and suggest that neural representations in biological agents may evolve by approximating steepest descent in information space towards the point of optimal inference.
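
Natural gradient descent preconditions the ordinary gradient with the inverse Fisher information of the belief distribution, so that steps are of equal length in information space rather than in parameter space. A minimal sketch for a categorical belief parameterised by a softmax, descending a KL (free-energy-like) objective towards an invented target posterior:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

p_target = np.array([0.7, 0.2, 0.1])          # "posterior" the dynamics should settle on
z = np.zeros(3)                                # natural parameters of the belief q = softmax(z)
lr = 0.5

for step in range(100):
    q = softmax(z)
    # Gradient of KL(q || p_target) with respect to z (free-energy-like objective)
    grad = q * (np.log(q) - np.log(p_target) - np.sum(q * (np.log(q) - np.log(p_target))))
    # Fisher information of the categorical w.r.t. z, with damping for invertibility
    F = np.diag(q) - np.outer(q, q) + 1e-6 * np.eye(3)
    z = z - lr * np.linalg.solve(F, grad)      # natural gradient step

print("final belief:", np.round(softmax(z), 3), "target:", p_target)
```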

19.
Entropy (Basel) ; 23(9)2021 Aug 25.
Article in English | MEDLINE | ID: mdl-34573730

ABSTRACT

In theoretical biology, we are often interested in random dynamical systems-like the brain-that appear to model their environments. This can be formalized by appealing to the existence of a (possibly non-equilibrium) steady state, whose density preserves a conditional independence between a biological entity and its surroundings. From this perspective, the conditioning set, or Markov blanket, induces a form of vicarious synchrony between creature and world-as if one were modelling the other. However, this results in an apparent paradox. If all conditional dependencies between a system and its surroundings depend upon the blanket, how do we account for the mnemonic capacity of living systems? It might appear that any shared dependence upon past blanket states violates the independence condition, as the variables on either side of the blanket now share information not available from the current blanket state. This paper aims to resolve this paradox, and to demonstrate that conditional independence does not preclude memory. Our argument rests upon drawing a distinction between the dependencies implied by a steady state density, and the density dynamics of the system conditioned upon its configuration at a previous time. The interesting question then becomes: What determines the length of time required for a stochastic system to 'forget' its initial conditions? We explore this question for an example system, whose steady state density possesses a Markov blanket, through simple numerical analyses. We conclude with a discussion of the relevance for memory in cognitive systems like us.

20.
J Neurosci ; 39(32): 6265-6275, 2019 08 07.
Article in English | MEDLINE | ID: mdl-31182633

ABSTRACT

In this paper, we draw from recent theoretical work on active perception, which suggests that the brain makes use of an internal (i.e., generative) model to make inferences about the causes of sensations. This view treats visual sensations as consequent on action (i.e., saccades) and implies that visual percepts must be actively constructed via a sequence of eye movements. Oculomotor control calls on a distributed set of brain sources that includes the dorsal and ventral frontoparietal (attention) networks. We argue that connections from the frontal eye fields to ventral parietal sources represent the mapping from "where" (fixation location) to information derived from "what" representations in the ventral visual stream. During scene construction, this mapping must be learned, putatively through changes in the effective connectivity of these synapses. Here, we test the hypothesis that the coupling between the dorsal frontal cortex and the right temporoparietal cortex is modulated during saccadic interrogation of a simple visual scene. Using dynamic causal modeling for magnetoencephalography with (male and female) human participants, we assess the evidence for changes in effective connectivity by comparing models that allow for this modulation with models that do not. We find strong evidence for modulation of connections between the two attention networks; namely, a disinhibition of the ventral network by its dorsal counterpart.

SIGNIFICANCE STATEMENT: This work draws from recent theoretical accounts of active vision and provides empirical evidence for changes in synaptic efficacy consistent with these computational models. In brief, we used magnetoencephalography in combination with eye-tracking to assess the neural correlates of a form of short-term memory during a dot cancellation task. Using dynamic causal modeling to quantify changes in effective connectivity, we found evidence that the coupling between the dorsal and ventral attention networks changed during the saccadic interrogation of a simple visual scene. Intuitively, this is consistent with the idea that these neuronal connections may encode beliefs about "what I would see if I looked there", and that this mapping is optimized as new data are obtained with each fixation.


Subjects
Attention/physiology, Models, Neurological, Visual Pathways/physiology, Visual Perception/physiology, Adolescent, Adult, Causality, Connectome, Culture, Dominance, Cerebral, Female, Fixation, Ocular/physiology, Frontal Lobe/physiology, Humans, Magnetoencephalography, Male, Nerve Net/physiology, Parietal Lobe/physiology, Perceptual Disorders/physiopathology, Photic Stimulation, Saccades/physiology, Temporal Lobe/physiology, Young Adult