1.
Proc Natl Acad Sci U S A; 121(17): e2320239121, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38630721

ABSTRACT

Collective motion is ubiquitous in nature; groups of animals, such as fish, birds, and ungulates, appear to move as a whole, exhibiting a rich behavioral repertoire that ranges from directed movement to milling to disordered swarming. Typically, such macroscopic patterns arise from decentralized, local interactions among constituent components (e.g., individual fish in a school). Preeminent models of this process describe individuals as self-propelled particles, subject to self-generated motion and "social forces" such as short-range repulsion and long-range attraction or alignment. However, organisms are not particles; they are probabilistic decision-makers. Here, we introduce an approach to modeling collective behavior based on active inference. This cognitive framework casts behavior as the consequence of a single imperative: to minimize surprise. We demonstrate that many empirically observed collective phenomena, including cohesion, milling, and directed motion, emerge naturally when behavior is treated as driven by active Bayesian inference, without explicitly building behavioral rules or goals into individual agents. Furthermore, we show that active inference can recover and generalize the classical notion of social forces as agents attempt to suppress prediction errors that conflict with their expectations. By exploring the parameter space of the belief-based model, we reveal nontrivial relationships between individual beliefs and group properties such as polarization and the tendency to visit different collective states. We also explore how individual beliefs about uncertainty determine collective decision-making accuracy. Finally, we show how agents can update their generative model over time, resulting in groups that are collectively more sensitive to external fluctuations and encode information more robustly.


Subject(s)
Mass Behavior; Models, Biological; Animals; Bayes Theorem; Movement; Motion; Fishes; Social Behavior; Behavior, Animal
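
To make the mechanism concrete, here is a minimal sketch, not the paper's model: point agents perform gradient descent on Gaussian surprise about their distance to the group centroid. The parameters `expected_dist` and `precision` are hypothetical; cohesion emerges without any explicit attraction rule.

```python
import numpy as np

rng = np.random.default_rng(0)
N, steps, dt = 50, 500, 0.1
expected_dist = 1.0   # hypothetical prior belief: distance to the group centroid
precision = 2.0       # hypothetical confidence (inverse variance) in that belief

pos = rng.normal(scale=3.0, size=(N, 2))
for _ in range(steps):
    centroid = pos.mean(axis=0)
    offset = pos - centroid
    dist = np.linalg.norm(offset, axis=1, keepdims=True) + 1e-9
    error = dist - expected_dist                # sensory prediction error
    # Surprise is 0.5 * precision * error**2 under a Gaussian belief; each agent
    # descends its gradient (treating the centroid as fixed, a simplification):
    pos -= dt * precision * error * (offset / dist)

print("mean distance to centroid:", np.linalg.norm(pos - pos.mean(axis=0), axis=1).mean())
```

With these settings the printed mean distance settles near `expected_dist`, the belief the agents act to fulfil.
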
2.
PLoS Comput Biol; 20(4): e1011183, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38557984

ABSTRACT

One of the key problems the brain faces is inferring the state of the world from a sequence of dynamically changing stimuli, and it is not yet clear how the sensory system achieves this task. A well-established computational framework for describing perceptual processes in the brain is provided by the theory of predictive coding. Although the original proposals of predictive coding discussed temporal prediction, later work developing this theory mostly focused on static stimuli, and key questions about the neural implementation and computational properties of temporal predictive coding networks remain open. Here, we address these questions and present a formulation of the temporal predictive coding model that can be naturally implemented in recurrent networks, in which activity dynamics rely only on local inputs to the neurons, and learning utilises only local Hebbian plasticity. Additionally, we show that temporal predictive coding networks can approximate the performance of the Kalman filter in predicting the behaviour of linear systems, and behave as a variant of the Kalman filter that does not track its own subjective posterior variance. Importantly, temporal predictive coding networks can achieve accuracy similar to the Kalman filter without performing complex mathematical operations, employing only simple computations that can be implemented by biological networks. Moreover, when trained with natural dynamic inputs, temporal predictive coding can produce Gabor-like, motion-sensitive receptive fields resembling those observed in real neurons in visual areas. In addition, we demonstrate how the model can be effectively generalized to nonlinear systems. Overall, the models presented in this paper show how biologically plausible circuits can predict future stimuli and may guide research on specific neural circuits in brain areas involved in temporal prediction.


Subject(s)
Brain; Models, Neurological; Brain/physiology; Learning; Neurons/physiology
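
The following is a hedged sketch of the idea, not the paper's network: latent estimates are refined by a few gradient steps on squared prediction errors at each time point, approximating a Kalman filter that never tracks its posterior variance. All matrices, noise levels, and step sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0.95, 0.1], [-0.1, 0.95]])   # latent dynamics (assumed known here)
C = np.array([[1.0, 0.0]])                  # observation matrix
q, r = 0.1, 0.1                             # process / observation noise variances

T = 200                                     # simulate the true linear system
x, y = np.zeros((T, 2)), np.zeros((T, 1))
for t in range(1, T):
    x[t] = A @ x[t - 1] + rng.normal(scale=np.sqrt(q), size=2)
    y[t] = C @ x[t] + rng.normal(scale=np.sqrt(r))

x_hat = np.zeros((T, 2))                    # inference: a few gradient steps per time point
for t in range(1, T):
    est = A @ x_hat[t - 1]                  # prior prediction from the previous estimate
    for _ in range(30):
        e_y = y[t] - C @ est                # observation prediction error
        e_x = est - A @ x_hat[t - 1]        # temporal prediction error
        est = est + 0.05 * (C.T @ e_y / r - e_x / q)
    x_hat[t] = est

print("mean absolute tracking error:", np.abs(x - x_hat).mean())
```
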
3.
PLoS Comput Biol; 19(8): e1011280, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37531366

ABSTRACT

Predictive coding is an influential model of cortical neural activity. It proposes that perceptual beliefs are furnished by sequentially minimising "prediction errors", the differences between predicted and observed data. Implicit in this proposal is the idea that successful perception requires multiple cycles of neural activity. This is at odds with evidence that several aspects of visual perception, including complex forms of object recognition, arise from an initial "feedforward sweep" that occurs on fast timescales which preclude substantial recurrent activity. Here, we propose that the feedforward sweep can be understood as performing amortized inference (applying a learned function that maps directly from data to beliefs) and recurrent processing can be understood as performing iterative inference (sequentially updating neural activity to improve the accuracy of beliefs). We propose a hybrid predictive coding network that combines iterative and amortized inference in a principled manner by describing both in terms of a dual optimization of a single objective function. We show that the resulting scheme can be implemented in a biologically plausible neural architecture that approximates Bayesian inference utilising local Hebbian update rules. We demonstrate that our hybrid predictive coding model combines the benefits of both amortized and iterative inference, obtaining rapid and computationally cheap perceptual inference for familiar data while maintaining the context-sensitivity, precision, and sample efficiency of iterative inference schemes. Moreover, we show how our model is inherently sensitive to its uncertainty and adaptively balances iterative and amortized inference to obtain accurate beliefs at minimum computational expense. Hybrid predictive coding offers a new perspective on the functional relevance of the feedforward and recurrent activity observed during visual perception and yields novel insights into distinct aspects of visual phenomenology.


Subject(s)
Learning; Visual Perception; Bayes Theorem
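
A minimal sketch of the hybrid scheme, assuming a one-layer linear generative model; the imperfect encoder `V` stands in for a learned amortized network. A cheap feedforward guess is refined by iterative prediction-error minimisation.

```python
import numpy as np

rng = np.random.default_rng(2)
d_obs, d_lat = 8, 3
W = rng.normal(size=(d_obs, d_lat))         # generative weights: prediction = W @ z
V = np.linalg.pinv(W) + 0.2 * rng.normal(size=(d_lat, d_obs))  # imperfect amortized encoder

y = W @ rng.normal(size=d_lat)              # an observation to be explained

z = V @ y                                   # 1) amortized inference: one feedforward sweep
print("error after amortized pass:  ", np.linalg.norm(y - W @ z))

for _ in range(200):                        # 2) iterative inference: recurrent refinement
    error = y - W @ z                       # prediction error at the data layer
    z = z + 0.02 * W.T @ error              # descend the squared prediction-error energy

print("error after iterative steps:", np.linalg.norm(y - W @ z))
```
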
4.
PLoS Comput Biol; 19(4): e1010719, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37058541

ABSTRACT

The computational principles adopted by the hippocampus in associative memory (AM) tasks have been one of the most studied topics in computational and theoretical neuroscience. Recent theories have suggested that AM and the predictive activities of the hippocampus could be described within a unitary account, and that predictive coding (PC) underlies the computations supporting AM in the hippocampus. Following this theory, a computational model based on classical hierarchical predictive networks was proposed and shown to perform well in various AM tasks. However, this fully hierarchical model did not incorporate recurrent connections, an architectural component of the CA3 region of the hippocampus that is crucial for AM. This makes the structure of the model inconsistent with the known connectivity of CA3 and with classical recurrent models such as Hopfield networks, which learn the covariance of inputs through their recurrent connections to perform AM. Earlier PC models that learn the covariance information of inputs explicitly via recurrent connections offer a potential solution to these issues. Here, we show that although these models can perform AM, they do so in an implausible and numerically unstable way. Instead, we propose alternatives to these earlier covariance-learning predictive coding networks, which learn the covariance information implicitly and plausibly, and can use dendritic structures to encode prediction errors. We show analytically that our proposed models are perfectly equivalent to the earlier predictive coding model that learns covariance explicitly, and encounter no numerical issues when performing AM tasks in practice. We further show that our models can be combined with hierarchical predictive coding networks to model hippocampo-neocortical interactions. Our models provide a biologically plausible approach to modelling the hippocampal network, pointing to a potential computational mechanism during hippocampal memory formation and recall that employs both predictive coding and covariance learning based on the recurrent network structure of the hippocampus.


Subject(s)
Hippocampus; Learning; Mental Recall; Conditioning, Classical; Models, Neurological
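
For orientation, here is a classical covariance-learning associative memory in the Hopfield style, an illustration of the recurrent mechanism at issue rather than the paper's dendritic model:

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_patterns = 100, 5
patterns = rng.choice([-1.0, 1.0], size=(n_patterns, n))

W = patterns.T @ patterns / n               # recurrent weights store input covariance
np.fill_diagonal(W, 0.0)

cue = patterns[0].copy()
cue[:30] = rng.choice([-1.0, 1.0], size=30) # corrupt 30% of the stored pattern

state = cue
for _ in range(10):                         # recurrent settling corrects the errors
    state = np.sign(W @ state)

print("overlap with stored pattern:", state @ patterns[0] / n)
```
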
5.
Entropy (Basel); 26(9), 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39330098

ABSTRACT

Evolution by natural selection is believed to be the only possible source of spontaneous adaptive organisation in the natural world. This places strict limits on the kinds of systems that can exhibit adaptation spontaneously, i.e., without design. Physical systems can show some properties relevant to adaptation without natural selection or design. (1) The relaxation, or local energy minimisation, of a physical system constitutes a natural form of optimisation insomuch as it finds locally optimal solutions to the frustrated forces acting on it or between its components. (2) When internal structure 'gives way' or accommodates a pattern of forcing on a system, this constitutes learning insomuch as it can store, recall, and generalise past configurations. Both these effects are quite natural and general, but in themselves insufficient to constitute non-trivial adaptation. However, here we show that the recurrent interaction of physical optimisation and physical learning together results in significant spontaneous adaptive organisation. We call this adaptation by natural induction. The effect occurs in dynamical systems described by a network of viscoelastic connections subject to occasional disturbances. When the internal structure of such a system accommodates slowly across many disturbances and relaxations, it spontaneously learns to preferentially visit solutions of increasingly greater quality (exceptionally low energy). We show that adaptation by natural induction thus produces network organisations that improve problem-solving competency with experience (without supervised training or system-level reward). We note that the conditions for adaptation by natural induction, and its adaptive competency, are different from those of natural selection. We therefore suggest that natural selection is not the only possible source of spontaneous adaptive organisation in the natural world.
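
A minimal sketch of the described effect under illustrative parameters, caricaturing the viscoelastic network as a Hopfield-style system: fast relaxation finds local energy minima, slow Hebbian-like accommodation 'gives way' to them, and the solutions visited on the original frustrated problem improve over repeated disturbances.

```python
import numpy as np

rng = np.random.default_rng(4)
n, eps = 60, 0.002                               # network size; slow accommodation rate
W0 = rng.normal(size=(n, n)); W0 = (W0 + W0.T) / 2; np.fill_diagonal(W0, 0)
W = W0.copy()                                    # W0 is the fixed 'problem'; W slowly gives way

def relax(s, W, sweeps=20):
    """Fast timescale: settle to a local minimum of the energy -0.5 * s @ W @ s."""
    for _ in range(sweeps):
        for i in rng.permutation(n):
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

for epoch in range(201):
    s = relax(rng.choice([-1.0, 1.0], size=n), W)  # disturbance (reset), then relaxation
    W += eps * np.outer(s, s)                    # slow timescale: connections accommodate
    np.fill_diagonal(W, 0)
    if epoch % 50 == 0:                          # quality is measured on the ORIGINAL problem
        print(epoch, "energy on original network:", round(-0.5 * s @ W0 @ s, 1))
```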

6.
Neural Comput; 34(6): 1329-1368, 2022 May 19.
Article in English | MEDLINE | ID: mdl-35534010

ABSTRACT

Backpropagation of error (backprop) is a powerful algorithm for training machine learning architectures through end-to-end differentiation. Recently, it has been shown that backprop in multilayer perceptrons (MLPs) can be approximated using predictive coding, a biologically plausible process theory of cortical computation that relies solely on local and Hebbian updates. The power of backprop, however, lies not in its instantiation in MLPs but in the concept of automatic differentiation, which allows for the optimization of any differentiable program expressed as a computation graph. Here, we demonstrate that predictive coding converges asymptotically (and in practice, rapidly) to exact backprop gradients on arbitrary computation graphs using only local learning rules. We apply this result to develop a straightforward strategy to translate core machine learning architectures into their predictive coding equivalents. We construct predictive coding convolutional neural networks, recurrent neural networks, and the more complex long short-term memory network, which includes a non-layer-like branching internal graph structure and multiplicative interactions. Our models perform equivalently to backprop on challenging machine learning benchmarks while using only local and (mostly) Hebbian plasticity. Our method raises the possibility that standard machine learning algorithms could in principle be directly implemented in neural circuitry, and may also contribute to the development of completely distributed neuromorphic architectures.


Subject(s)
Neural Networks, Computer; Neurons; Algorithms; Machine Learning
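
A small sketch of the claim, with illustrative sizes and a target chosen near the forward output (the exact equality holds in the asymptotic regime the paper analyses): after prediction-error relaxation, the local error signals reproduce the backprop gradients.

```python
import numpy as np

rng = np.random.default_rng(5)
f, df = np.tanh, lambda x: 1 - np.tanh(x) ** 2
W1, W2 = 0.5 * rng.normal(size=(4, 3)), 0.5 * rng.normal(size=(2, 4))
x0 = rng.normal(size=3)

a1 = W1 @ x0; out = W2 @ f(a1)               # feedforward pass
target = out + 0.1 * rng.normal(size=2)      # nearby target (small-error regime)

# Backprop gradients of the loss 0.5 * ||target - out||^2.
delta2 = out - target
delta1 = (W2.T @ delta2) * df(a1)
gW2_bp, gW1_bp = np.outer(delta2, f(a1)), np.outer(delta1, x0)

# Predictive coding: clamp input and output, relax the hidden activity to an
# equilibrium of the prediction-error energy, then read gradients off errors.
x1, x2 = a1.copy(), target.copy()
for _ in range(500):
    e1 = x1 - W1 @ x0                        # hidden-layer prediction error
    e2 = x2 - W2 @ f(x1)                     # output-layer prediction error
    x1 += 0.1 * (-e1 + df(x1) * (W2.T @ e2))

gW2_pc, gW1_pc = np.outer(-e2, f(x1)), np.outer(-e1, x0)
print("max gradient gap:", max(np.abs(gW2_pc - gW2_bp).max(), np.abs(gW1_pc - gW1_bp).max()))
```
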
7.
Neural Comput; 33(2): 447-482, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33400900

ABSTRACT

The expected free energy (EFE) is a central quantity in the theory of active inference. It is the quantity that all active inference agents are mandated to minimize through action, and its decomposition into extrinsic and intrinsic value terms is key to the balance of exploration and exploitation that active inference agents evince. Despite its importance, the mathematical origins of this quantity and its relation to the variational free energy (VFE) remain unclear. In this letter, we investigate the origins of the EFE in detail and show that it is not simply "the free energy in the future." We present a functional that we argue is the natural extension of the VFE but actively discourages exploratory behavior, thus demonstrating that exploration does not directly follow from free energy minimization into the future. We then develop a novel objective, the free energy of the expected future (FEEF), which possesses both the epistemic component of the EFE and an intuitive mathematical grounding as the divergence between predicted and desired futures.
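
For reference, one common rendering of the quantities discussed here (notation is hedged to the usual conventions; the tilde marks the biased generative model encoding preferences):

```latex
% Expected free energy and its usual (approximate) decomposition:
G(\pi) \;=\; \mathbb{E}_{Q(o,s\mid\pi)}\!\big[\ln Q(s\mid\pi) - \ln \tilde{P}(o,s)\big]
\;\approx\; \underbrace{-\,\mathbb{E}_{Q(o\mid\pi)}\big[\ln \tilde{P}(o)\big]}_{\text{extrinsic value}}
\;-\; \underbrace{\mathbb{E}_{Q(o\mid\pi)}\,D_{\mathrm{KL}}\!\big[Q(s\mid o,\pi)\,\big\|\,Q(s\mid\pi)\big]}_{\text{epistemic value}}

% Free energy of the expected future: a divergence between the predicted
% future and the desired (biased) future, which retains the epistemic drive:
\mathrm{FEEF}(\pi) \;=\; D_{\mathrm{KL}}\!\big[\,Q(o,s\mid\pi)\;\big\|\;\tilde{P}(o)\,Q(s\mid o)\,\big]
```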

8.
Nat Neurosci; 27(2): 348-358, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38172438

ABSTRACT

For both humans and machines, the essence of learning is to pinpoint which components in the information-processing pipeline are responsible for an error in the output, a challenge known as 'credit assignment'. It has long been assumed that credit assignment is best solved by backpropagation, which is also the foundation of modern machine learning. Here, we set out a fundamentally different principle of credit assignment called 'prospective configuration'. In prospective configuration, the network first infers the pattern of neural activity that should result from learning, and then the synaptic weights are modified to consolidate the change in neural activity. We demonstrate that this distinct mechanism, in contrast to backpropagation, (1) underlies learning in a well-established family of models of cortical circuits, (2) enables learning that is more efficient and effective in many contexts faced by biological organisms, and (3) reproduces surprising patterns of neural activity and behavior observed in diverse human and rat learning experiments.


Subject(s)
Machine Learning; Neural Networks, Computer; Humans; Rats; Animals; Prospective Studies; Neuronal Plasticity
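
A minimal 'infer first, consolidate second' sketch under hypothetical sizes and rates, using an energy-based network of the family the abstract refers to: activities settle toward the pattern that should follow learning, and only then do the weights change.

```python
import numpy as np

rng = np.random.default_rng(6)
f, df = np.tanh, lambda x: 1 - np.tanh(x) ** 2
W1, W2 = 0.5 * rng.normal(size=(4, 3)), 0.5 * rng.normal(size=(2, 4))
x0, target = rng.normal(size=3), rng.normal(size=2)

# 1) Infer first: clamp the output at the target and let the hidden activity
#    settle into the pattern that *should* follow learning.
x1 = W1 @ x0
for _ in range(300):
    e1 = x1 - W1 @ x0                       # hidden-layer prediction error
    e2 = target - W2 @ f(x1)                # output-layer prediction error
    x1 += 0.1 * (-e1 + df(x1) * (W2.T @ e2))

# 2) Consolidate second: weights move so that the settled activities become
#    self-consistent predictions.
lr = 0.2
W2 += lr * np.outer(target - W2 @ f(x1), f(x1))
W1 += lr * np.outer(x1 - W1 @ x0, x0)

print("output error after one update:", np.linalg.norm(target - W2 @ f(W1 @ x0)))
```
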
9.
Interface Focus; 13(3): 20220029, 2023 Jun 06.
Article in English | MEDLINE | ID: mdl-37213925

ABSTRACT

The aim of this paper is to introduce a field of study that has emerged over the last decade, called Bayesian mechanics. Bayesian mechanics is a probabilistic mechanics, comprising tools that enable us to model systems endowed with a particular partition (i.e. into particles), where the internal states (or the trajectories of internal states) of a particular system encode the parameters of beliefs about external states (or their trajectories). These tools allow us to write down mechanical theories for systems that look as if they are estimating posterior probability distributions over the causes of their sensory states. This provides a formal language for modelling the constraints, forces, potentials and other quantities determining the dynamics of such systems, especially as they entail dynamics on a space of beliefs (i.e. on a statistical manifold). Here, we will review the state of the art in the literature on the free energy principle, distinguishing between three ways in which Bayesian mechanics has been applied to particular systems (i.e. path-tracking, mode-tracking and mode-matching). We go on to examine a duality between the free energy principle and the constrained maximum entropy principle, both of which lie at the heart of Bayesian mechanics, and discuss its implications.

10.
Phys Life Rev; 40: 24-50, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34895862

ABSTRACT

The free energy principle (FEP) states that any dynamical system can be interpreted as performing Bayesian inference upon its surrounding environment. Although, in theory, the FEP applies to a wide variety of systems, there has been almost no direct exploration or demonstration of the principle in concrete systems. In this work, we examine in depth the assumptions required to derive the FEP in the simplest possible set of systems: weakly coupled non-equilibrium linear stochastic systems. Specifically, we explore (i) how general the requirements imposed on the statistical structure of a system are and (ii) how informative the FEP is about the behaviour of such systems. We discover that two requirements of the FEP, the Markov blanket condition (i.e. a statistical boundary precluding direct coupling between internal and external states) and stringent restrictions on its solenoidal flows (i.e. tendencies driving a system out of equilibrium), are valid only for a very narrow space of parameters. Suitable systems require an absence of perception-action asymmetries that is highly unusual for living systems interacting with an environment. More importantly, we observe that a mathematically central step in the argument, connecting the behaviour of a system to variational inference, relies on an implicit equivalence between the dynamics of the average states of a system and the average of the dynamics of those states. This equivalence does not hold in general even for linear stochastic systems, since it requires an effective decoupling from the system's history of interactions. These observations are critical for evaluating the generality and applicability of the FEP and indicate significant problems with the theory in its current form. These issues make the FEP, as it stands, not straightforwardly applicable to the simple linear systems studied here, and suggest that more development is needed before the theory can be applied to the kinds of complex systems that describe living and cognitive processes.


Subject(s)
Automobile Driving; Physics; Bayes Theorem; Entropy; Existentialism
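
For orientation, a hedged sketch of the linear setting in one standard FEP notation (a common rendering, not a quotation from the paper):

```latex
% Stationary linear diffusion with density p^*(x); the drift splits into a
% dissipative (gradient) part and a solenoidal part:
dx_t = f(x_t)\,dt + \varsigma\,dW_t, \qquad
f(x) = (\Gamma + Q)\,\nabla\ln p^*(x), \qquad
\Gamma = \tfrac{1}{2}\varsigma\varsigma^{\top}, \quad Q = -Q^{\top}

% Markov blanket condition for a partition x = (\eta, b, \mu) into external,
% blanket, and internal states: conditional independence given the blanket,
p^*(\eta,\mu\mid b) = p^*(\eta\mid b)\,p^*(\mu\mid b)
```
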
11.
Proc Mach Learn Res; 162: 15561-15583, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36751405

ABSTRACT

A large number of neural network models of associative memory have been proposed in the literature. These include the classical Hopfield networks (HNs), sparse distributed memories (SDMs), and, more recently, the modern continuous Hopfield networks (MCHNs), which possess close links with self-attention in machine learning. In this paper, we propose a general framework for understanding the operation of such memory networks as a sequence of three operations: similarity, separation, and projection. We derive all of these memory models as instances of our general framework with differing similarity and separation functions. We extend the mathematical framework of Krotov & Hopfield (2020) to express general associative memory models using neural network dynamics with local computation, and derive a general energy function that is a Lyapunov function of the dynamics. Finally, using our framework, we empirically investigate the use of similarity functions beyond the dot product in these associative memory models, and demonstrate that Euclidean or Manhattan distance similarity metrics perform substantially better in practice on many tasks, enabling more robust retrieval and higher memory capacity than existing models.
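
A minimal sketch of the similarity-separation-projection factorization, with illustrative sizes: softmax separation over dot-product similarities gives MCHN-style retrieval, and swapping in negative Euclidean distance gives the variant the authors report to be more robust.

```python
import numpy as np

rng = np.random.default_rng(7)
M = rng.normal(size=(16, 64))               # 16 stored memories, one per row

def retrieve(query, M, similarity="euclidean", beta=4.0):
    if similarity == "dot":
        scores = M @ query                  # similarity: dot product
    else:
        scores = -np.linalg.norm(M - query, axis=1)  # similarity: negative Euclidean
    w = np.exp(beta * scores - beta * scores.max())
    w = w / w.sum()                         # separation: softmax sharpens the best match
    return w @ M                            # projection: weighted recombination of memories

true = M[3]
cue = true + 0.3 * rng.normal(size=64)      # corrupted query
for sim in ("dot", "euclidean"):
    out = retrieve(cue, M, sim)
    cos = out @ true / (np.linalg.norm(out) * np.linalg.norm(true))
    print(sim, "retrieval cosine:", round(cos, 4))
```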

12.
Adv Neural Inf Process Syst; 35: 38232-38244, 2022 Nov.
Article in English | MEDLINE | ID: mdl-37090087

ABSTRACT

Training with backpropagation (BP) in standard deep learning consists of two main steps: a forward pass that maps a data point to its prediction, and a backward pass that propagates the error of this prediction back through the network. This process is highly effective when the goal is to minimize a specific objective function. However, it does not allow training on networks with cyclic or backward connections. This is an obstacle to reaching brain-like capabilities, as the highly complex heterarchical structure of the neural connections in the neocortex is potentially fundamental for its effectiveness. In this paper, we show how predictive coding (PC), a theory of information processing in the cortex, can be used to perform inference and learning on arbitrary graph topologies. We experimentally show how this formulation, called PC graphs, can be used to flexibly perform different tasks with the same network by simply stimulating specific neurons. This enables the model to be queried on stimuli with different structures, such as partial images, images with labels, or images without labels. We conclude by investigating how the topology of the graph influences the final performance, and by comparing against simple baselines trained with BP.
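
A hedged sketch of inference on an arbitrary graph (the edge list, weights, and rates are hypothetical): any subset of nodes can be clamped as the query while the rest settle by minimising prediction errors, cycles included.

```python
import numpy as np

rng = np.random.default_rng(8)
n_nodes = 5
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4)]   # includes the cycle 0 -> 1 -> 2 -> 0
W = {e: rng.normal(scale=0.5) for e in edges}      # one scalar weight per edge

def settle(x, clamped, iters=300, lr=0.05):
    for _ in range(iters):
        # prediction error at every node: activity minus weighted input predictions
        err = {j: x[j] - sum(W[(i, k)] * x[i] for (i, k) in edges if k == j)
               for j in range(n_nodes)}
        for j in range(n_nodes):
            if j in clamped:
                continue                           # clamped nodes carry the query
            # energy gradient: own error minus the errors this node helps predict
            grad = err[j] - sum(W[(i, k)] * err[k] for (i, k) in edges if i == j)
            x[j] -= lr * grad
    return x

x = np.zeros(n_nodes)
x[0] = 1.0                                         # stimulate node 0
x = settle(x, clamped={0})
print("settled activities:", np.round(x, 3))
```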

13.
Front Neurorobot; 15: 729665, 2021.
Article in English | MEDLINE | ID: mdl-34675792

ABSTRACT

This paper presents an active inference-based simulation study of visual foraging. The goal of the simulation is to show the effect of the acquisition of culturally patterned attention styles on cognitive task performance, under active inference. We show how cultural artefacts like antique vase decorations drive cognitive functions such as perception, action, and learning, as well as task performance in a simple visual discrimination task. We thus describe a new active inference-based research pipeline that future work may employ to inquire into the deep guiding principles that determine how material culture drives human thought, by building and rebuilding our patterns of attention.
