ABSTRACT
Flexible behavior requires the creation, updating, and expression of memories to depend on context. While the neural underpinnings of each of these processes have been intensively studied, recent advances in computational modeling revealed a key challenge in context-dependent learning that had been largely ignored previously: Under naturalistic conditions, context is typically uncertain, necessitating contextual inference. We review a theoretical approach to formalizing context-dependent learning in the face of contextual uncertainty and the core computations it requires. We show how this approach begins to organize a large body of disparate experimental observations, from multiple levels of brain organization (including circuits, systems, and behavior) and multiple brain regions (most prominently the prefrontal cortex, the hippocampus, and motor cortices), into a coherent framework. We argue that contextual inference may also be key to understanding continual learning in the brain. This theory-driven perspective places contextual inference as a core component of learning.
Subject(s)
Brain, Learning, Hippocampus, Prefrontal Cortex, Computer Simulation
ABSTRACT
Sub-additivity and variability are ubiquitous response motifs in the primary visual cortex (V1). Response sub-additivity enables the construction of useful interpretations of the visual environment, whereas response variability indicates the factors that limit the precision with which the brain can do this. There is increasing evidence that experimental manipulations that elicit response sub-additivity often also quench response variability. Here, we provide an overview of these phenomena and suggest that they may have common origins. We discuss empirical findings and recent model-based insights into the functional operations, computational objectives and circuit mechanisms underlying V1 activity. These different modelling approaches all predict that response sub-additivity and variability quenching often co-occur. The phenomenology of these two response motifs, as well as many of the insights obtained about them in V1, generalize to other cortical areas. Thus, the connection between response sub-additivity and variability quenching may be a canonical motif across the cortex.
Subject(s)
Visual Cortex, Humans, Visual Cortex/physiology, Brain, Photic Stimulation, Visual Pathways/physiology
ABSTRACT
Humans spend a lifetime learning, storing and refining a repertoire of motor memories. For example, through experience, we become proficient at manipulating a large range of objects with distinct dynamical properties. However, it is unknown what principle underlies how our continuous stream of sensorimotor experience is segmented into separate memories and how we adapt and use this growing repertoire. Here we develop a theory of motor learning based on the key principle that memory creation, updating and expression are all controlled by a single computation: contextual inference. Our theory reveals that adaptation can arise both by creating and updating memories (proper learning) and by changing how existing memories are differentially expressed (apparent learning). This insight enables us to account for key features of motor learning that had no unified explanation: spontaneous recovery [1], savings [2], anterograde interference [3], how environmental consistency affects learning rate [4,5], and the distinction between explicit and implicit learning [6]. Critically, our theory also predicts new phenomena, evoked recovery and context-dependent single-trial learning, which we confirm experimentally. These results suggest that contextual inference, rather than classical single-context mechanisms [1,4,7-9], is the key principle underlying how a diverse set of experiences is reflected in our motor behaviour.
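To make the central computation concrete, here is a minimal sketch of contextual inference controlling both the expression and the updating of a small set of motor memories. It is loosely inspired by the model described in the abstract rather than a reproduction of it; the Gaussian likelihood, the fixed learning rate, and all variable names are illustrative assumptions.

```python
# Minimal sketch: contextual inference controls memory expression and updating.
# All modelling choices below (Gaussian likelihood, fixed learning rate) are
# illustrative assumptions, not the published model.
import numpy as np

def contextual_update(memories, priors, observation, obs_noise=0.5, lr=0.2):
    """One trial of inference and learning over a fixed set of context memories."""
    # Likelihood of the observed perturbation under each context's memory
    lik = np.exp(-0.5 * ((observation - memories) / obs_noise) ** 2)
    # Posterior responsibility of each context (contextual inference)
    resp = priors * lik
    resp /= resp.sum()
    # Apparent learning: output is the responsibility-weighted expression of memories
    output = np.dot(resp, memories)
    # Proper learning: each memory is updated in proportion to its responsibility
    memories = memories + lr * resp * (observation - memories)
    return output, memories, resp

memories = np.array([0.0, 1.0])   # e.g. a null-field memory and a force-field memory
priors = np.array([0.5, 0.5])
out, memories, resp = contextual_update(memories, priors, observation=0.9)
print(out, resp)                  # expression shifts toward the force-field context
```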
Subject(s)
Learning, Psychomotor Performance, Physiological Adaptation, Psychological Conditioning, Humans, Learning/physiology, Psychomotor Performance/physiology
ABSTRACT
Working memory involves the short-term maintenance of information and is critical in many tasks. The neural circuit dynamics underlying working memory remain poorly understood, with different aspects of prefrontal cortical (PFC) responses explained by different putative mechanisms. Using mathematical analysis, numerical simulations, and recordings from monkey PFC, we investigate a critical but hitherto ignored aspect of working memory dynamics: information loading. We find that, contrary to common assumptions, optimal loading of information into working memory involves inputs that are largely orthogonal, rather than similar, to the late delay activities observed during memory maintenance, naturally leading to the widely observed phenomenon of dynamic coding in PFC. Using a theoretically principled metric, we show that PFC exhibits the hallmarks of optimal information loading. We also find that optimal information loading emerges as a general dynamical strategy in task-optimized recurrent neural networks. Our theory unifies previous, seemingly conflicting theories of memory maintenance based on attractor or purely sequential dynamics and reveals a normative principle underlying dynamic coding.
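As a toy illustration of why loading inputs orthogonal to the persistent delay activity can be advantageous, consider a two-dimensional non-normal linear circuit. This is a generic textbook construction with arbitrary parameter values, not the networks analyzed in the study.

```python
# Toy non-normal linear circuit: input delivered orthogonally to the slow
# "persistent" mode ends up producing larger, longer-lasting delay activity
# than input delivered along it. Matrix values are arbitrary assumptions.
import numpy as np
from scipy.linalg import expm

A = np.array([[-0.01, 5.0],    # slow, persistent mode driven by...
              [0.00, -1.0]])   # ...a fast, transient mode (feedforward coupling)

def delay_activity(x0, t=3.0):
    return expm(A * t) @ x0

aligned    = delay_activity(np.array([1.0, 0.0]))   # input along the slow mode
orthogonal = delay_activity(np.array([0.0, 1.0]))   # input orthogonal to it

print(np.linalg.norm(aligned), np.linalg.norm(orthogonal))
# the orthogonal input yields the larger persistent response
```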
Subject(s)
Short-Term Memory, Neurons, Short-Term Memory/physiology, Neurons/physiology, Prefrontal Cortex/physiology, Neural Networks (Computer)
ABSTRACT
Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, requiring extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a simple neural network to achieve optimal performance and solves one-step decision-making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision-making tasks within 1 s. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, whereas the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision-making tasks with multiple rewards. SIGNIFICANCE STATEMENT: Goal-directed actions requiring prospective planning pervade decision making, but their circuit-level mechanisms remain elusive. We show how a model circuit of biologically realistic spiking neurons can solve this computationally challenging problem in a novel way. The synaptic weights of our network can be learned using local plasticity rules such that its dynamics devise a near-optimal plan of action. By systematically comparing our model results to experimental data, we show that it reproduces behavioral decision times and choice probabilities as well as neural responses in a rich set of tasks. Our results thus offer the first biologically realistic account for complex goal-directed decision making at a computational, algorithmic, and implementational level.
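For orientation, the online value estimation problem that the network is claimed to solve can be stated compactly as planning in a small Markov decision process. The sketch below is plain value iteration on a random MDP and illustrates the computational problem only, not the spiking implementation or its plasticity rules.

```python
# Value iteration on a small random MDP: the computational problem of
# goal-directed (online) value estimation, stated in its simplest form.
import numpy as np

n_states, n_actions, gamma = 4, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # transition probabilities
R = rng.random((n_states, n_actions))                              # immediate rewards

V = np.zeros(n_states)
for _ in range(100):                 # iterate until the values converge
    Q = R + gamma * P @ V            # action values for every state
    V = Q.max(axis=1)                # greedy (goal-directed) evaluation

print(Q.argmax(axis=1))              # resulting goal-directed policy
```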
Subject(s)
Action Potentials/physiology, Decision Making/physiology, Goals, Neural Networks (Computer), Neurons/physiology, Animals, Choice Behavior/physiology, Rats, Reinforcement (Psychology)
ABSTRACT
Probabilistic inference offers a principled framework for understanding both behaviour and cortical computation. However, two basic and ubiquitous properties of cortical responses seem difficult to reconcile with probabilistic inference: neural activity displays prominent oscillations in response to constant input, and large transient changes in response to stimulus onset. Indeed, cortical models of probabilistic inference have typically either concentrated on tuning curve or receptive field properties and remained agnostic as to the underlying circuit dynamics, or had simplistic dynamics that gave neither oscillations nor transients. Here we show that these dynamical behaviours may in fact be understood as hallmarks of the specific representation and algorithm that the cortex employs to perform probabilistic inference. We demonstrate that a particular family of probabilistic inference algorithms, Hamiltonian Monte Carlo (HMC), naturally maps onto the dynamics of excitatory-inhibitory neural networks. Specifically, we constructed a model of an excitatory-inhibitory circuit in primary visual cortex that performed HMC inference, and thus inherently gave rise to oscillations and transients. These oscillations were not mere epiphenomena but served an important functional role: speeding up inference by rapidly spanning a large volume of state space. Inference thus became an order of magnitude more efficient than in a non-oscillatory variant of the model. In addition, the network matched two specific properties of observed neural dynamics that would otherwise be difficult to account for using probabilistic inference. First, the frequency of oscillations as well as the magnitude of transients increased with the contrast of the image stimulus. Second, excitation and inhibition were balanced, and inhibition lagged excitation. These results suggest a new functional role for the separation of cortical populations into excitatory and inhibitory neurons, and for the neural oscillations that emerge in such excitatory-inhibitory networks: enhancing the efficiency of cortical computations.
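For reference, the algorithm family onto which the circuit dynamics are mapped is Hamiltonian Monte Carlo. Below is a bare-bones, generic HMC sketch (with the Metropolis accept/reject step omitted for brevity); it illustrates the algorithm itself, not the excitatory-inhibitory network model.

```python
# Generic Hamiltonian Monte Carlo step (textbook form; accept/reject omitted).
# The position/momentum coupling produces oscillatory trajectories that span
# state space far faster than random-walk sampling.
import numpy as np

def hmc_step(x, log_p_grad, step=0.1, n_leapfrog=20, rng=np.random):
    p = rng.standard_normal(x.shape)          # auxiliary momentum variable
    x_new, p_new = x.copy(), p.copy()
    for _ in range(n_leapfrog):               # leapfrog integration of Hamiltonian dynamics
        p_new += 0.5 * step * log_p_grad(x_new)
        x_new += step * p_new
        p_new += 0.5 * step * log_p_grad(x_new)
    return x_new

grad = lambda x: -x                            # gradient of log N(0, I): a toy posterior
x, samples = np.zeros(2), []
for _ in range(500):
    x = hmc_step(x, grad)
    samples.append(x)
```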
Subject(s)
Neurological Models, Statistical Models, Nerve Net/physiology, Neural Inhibition/physiology, Visual Cortex/physiology, Visual Perception/physiology, Animals, Computer Simulation, Physiological Feedback/physiology, Humans, Synaptic Transmission/physiology
ABSTRACT
A venerable history of classical work on autoassociative memory has significantly shaped our understanding of several features of the hippocampus, and most prominently of its CA3 area, in relation to memory storage and retrieval. However, existing theories of hippocampal memory processing ignore a key biological constraint affecting memory storage in neural circuits: the bounded dynamical range of synapses. Recent treatments based on the notion of metaplasticity provide a powerful model for individual bounded synapses; however, their implications for the ability of the hippocampus to retrieve memories well and the dynamics of neurons associated with that retrieval are both unknown. Here, we develop a theoretical framework for memory storage and recall with bounded synapses. We formulate the recall of a previously stored pattern from a noisy recall cue and limited-capacity (and therefore lossy) synapses as a probabilistic inference problem, and derive neural dynamics that implement approximate inference algorithms to solve this problem efficiently. In particular, for binary synapses with metaplastic states, we demonstrate for the first time that memories can be efficiently read out with biologically plausible network dynamics that are completely constrained by the synaptic plasticity rule, and the statistics of the stored patterns and of the recall cue. Our theory organises into a coherent framework a wide range of existing data about the regulation of excitability, feedback inhibition, and network oscillations in area CA3, and makes novel and directly testable predictions that can guide future experiments.
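A toy version of the recall problem conveys the setup: patterns are stored with a clipped (and therefore lossy) Hebbian rule, and recall combines the synaptic evidence with the log-odds evidence carried by the noisy cue. This is a deliberately simplified stand-in for the inference-derived dynamics described above; the clipping rule, network size, and cue-weighting scheme are all assumptions.

```python
# Toy recall with bounded (sign-clipped) synapses: combine synaptic evidence
# with the log-odds evidence of a noisy cue. Illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 10
patterns = rng.choice([-1, 1], size=(P, N))

W = np.sign(patterns.T @ patterns)      # Hebbian storage, clipped to a bounded range
np.fill_diagonal(W, 0)

cue_noise = 0.2                         # probability of flipping each cue bit
target = patterns[0]
cue = target * np.where(rng.random(N) < cue_noise, -1, 1)

cue_weight = np.log((1 - cue_noise) / cue_noise)   # log-odds reliability of the cue
x = cue.copy()
for _ in range(20):                     # each unit weighs synaptic input against cue evidence
    x = np.sign(W @ x / np.sqrt(N) + cue_weight * cue)

print("overlap with stored pattern:", (x * target).mean())
```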
Subject(s)
Hippocampal CA3 Region/physiology, Mental Recall/physiology, Neurological Models, Physiological Adaptation, Algorithms, Animals, Computational Biology, Physiological Feedback, Humans, Psychological Models, Statistical Models, Neuronal Plasticity, Synapses/physiology
ABSTRACT
Working memory (WM) is a fundamental aspect of cognition. WM maintenance is classically thought to rely on stable patterns of neural activities. However, recent evidence shows that neural population activities during WM maintenance undergo dynamic variations before settling into a stable pattern. Although this has been difficult to explain theoretically, neural network models optimized for WM typically also exhibit such dynamics. Here, we examine stable versus dynamic coding in neural data, classical models, and task-optimized networks. We review principled mathematical reasons for why classical models do not, while task-optimized models naturally do exhibit dynamic coding. We suggest an update to our understanding of WM maintenance, in which dynamic coding is a fundamental computational feature rather than an epiphenomenon.
Subject(s)
Short-Term Memory, Neurological Models, Short-Term Memory/physiology, Humans, Brain/physiology, Neural Networks (Computer), Animals
ABSTRACT
Context is widely regarded as a major determinant of learning and memory across numerous domains, including classical and instrumental conditioning, episodic memory, economic decision-making, and motor learning. However, studies across these domains remain disconnected due to the lack of a unifying framework formalizing the concept of context and its role in learning. Here, we develop a unified vernacular allowing direct comparisons between different domains of contextual learning. This leads to a Bayesian model positing that context is unobserved and needs to be inferred. Contextual inference then controls the creation, expression, and updating of memories. This theoretical approach reveals two distinct components that underlie adaptation, proper and apparent learning, respectively referring to the creation and updating of memories versus time-varying adjustments in their expression. We review a number of extensions of the basic Bayesian model that allow it to account for increasingly complex forms of contextual learning.
Subject(s)
Learning, Memory, Humans, Bayes Theorem, Hippocampus
ABSTRACT
Variations in the geometry of the environment, such as the shape and size of an enclosure, have profound effects on navigational behavior and its neural underpinning. Here, we show that these effects arise as a consequence of a single, unifying principle: to navigate efficiently, the brain must maintain and update the uncertainty about one's location. We developed an image-computable Bayesian ideal observer model of navigation, continually combining noisy visual and self-motion inputs, and a neural encoding model optimized to represent the location uncertainty computed by the ideal observer. Through mathematical analysis and numerical simulations, we show that the ideal observer accounts for a diverse range of sometimes paradoxical distortions of human homing behavior in anisotropic and deformed environments, including 'boundary tethering', and its neural encoding accounts for distortions of rodent grid cell responses under identical environmental manipulations. Our results demonstrate that spatial uncertainty plays a key role in navigation.
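The core idea, continually maintaining and updating location uncertainty while fusing self-motion and vision, can be illustrated with a one-dimensional Kalman filter. This is a minimal stand-in, not the image-computable ideal observer of the study; the noise levels and step sizes are arbitrary.

```python
# 1D navigation as a Kalman filter: uncertainty grows with each self-motion
# step and shrinks with each visual observation. Parameters are arbitrary.
import numpy as np

def navigate(steps, motion_noise=0.1, visual_noise=0.5, seed=1):
    rng = np.random.default_rng(seed)
    true_pos, est, var = 0.0, 0.0, 0.0
    for v in steps:
        true_pos += v + rng.normal(0, motion_noise)   # world: noisy movement
        est += v                                      # prediction from self-motion
        var += motion_noise ** 2                      # ...uncertainty grows
        obs = true_pos + rng.normal(0, visual_noise)  # noisy visual observation
        gain = var / (var + visual_noise ** 2)        # reliability-weighted correction
        est += gain * (obs - est)
        var *= (1 - gain)                             # ...uncertainty shrinks
    return est, np.sqrt(var)

est, sd = navigate(steps=np.full(20, 0.5))
print(f"estimated position {est:.2f} +/- {sd:.2f}")
```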
ABSTRACT
The input-output transformation of individual neurons is a key building block of neural circuit dynamics. While previous models of this transformation vary widely in their complexity, they all describe the underlying functional architecture as unitary, such that each synaptic input makes a single contribution to the neuronal response. Here, we show that the input-output transformation of CA1 pyramidal cells is instead best captured by two distinct functional architectures operating in parallel. We used statistically principled methods to fit flexible, yet interpretable, models of the transformation of input spikes into the somatic "output" voltage and to automatically select among alternative functional architectures. With dendritic Na+ channels blocked, responses are accurately captured by a single static and global nonlinearity. In contrast, dendritic Na+-dependent integration requires a functional architecture with multiple dynamic nonlinearities and clustered connectivity. These two architectures incorporate distinct morphological and biophysical properties of the neuron and its synaptic organization.
Subject(s)
Dendrites, Neurons, Dendrites/physiology, Neurons/physiology, Pyramidal Cells/physiology, Action Potentials/physiology, Synapses/physiology, Neurological Models
ABSTRACT
Recent breakthroughs in artificial intelligence (AI) have enabled machines to plan in tasks previously thought to be uniquely human. Meanwhile, the planning algorithms implemented by the brain itself remain largely unknown. Here, we review neural and behavioral data in sequential decision-making tasks that elucidate the ways in which the brain does, and does not, plan. To systematically review available biological data, we create a taxonomy of planning algorithms by summarizing the relevant design choices for such algorithms in AI. Across species, recording techniques, and task paradigms, we find converging evidence that the brain represents future states consistent with a class of planning algorithms within our taxonomy: focused, depth-limited, and serial. However, we argue that current data are insufficient for addressing more detailed algorithmic questions. We propose a new approach leveraging AI advances to drive experiments that can adjudicate between competing candidate algorithms.
Subject(s)
Algorithms, Artificial Intelligence, Brain, Humans
ABSTRACT
Therapeutic targets in cancer cells defective for the tumor suppressor ARID1A are fundamental to synthetic lethal strategies. However, whether modulating ARID1A function in premalignant breast epithelial cells could be exploited to reduce carcinogenic potential remains to be elucidated. In search of chromatin-modulating mechanisms activated by anti-proliferative agents in normal breast epithelial (HME-hTert) cells, we identified a distinct pattern of genome-wide H3K27 histone acetylation marks characteristic of combined treatment with the cancer-preventive rexinoid bexarotene (Bex) and carvedilol (Carv). Among these marks, several enhancers functionally linked to TGF-β signaling were enriched for ARID1A and Brg1, subunits within the SWI/SNF chromatin-remodeling complex. The recruitment of ARID1A and Brg1 was associated with the suppression of TGFBR2, KLF4, and FoxQ1, and the induction of BMP6, while the inverse pattern ensued upon the knock-down of ARID1A. Bex+Carv treatment resulted in fewer cells expressing N-cadherin and dictated a more epithelial phenotype. However, the silencing of ARID1A expression reversed the ability of Bex and Carv to limit epithelial-mesenchymal transition. The nuclear levels of SMAD4, a canonical mediator of TGF-β action, were more effectively suppressed by the combination than by TGF-β. In contrast, TGF-β treatment exceeded the ability of Bex+Carv to lower nuclear FoxQ1 levels and induced markedly higher E-cadherin positivity, indicating a target-selective antagonism of Bex+Carv to TGF-β action. In summary, the chromatin-wide redistribution of ARID1A by Bex and Carv treatment is instrumental in the suppression of genes mediating TGF-β signaling and, thus, the morphologic reprogramming of normal breast epithelial cells. The concerted engagement of functionally linked targets using low-toxicity clinical agents represents an attractive new approach for cancer interception.
Subject(s)
Epithelial-Mesenchymal Transition, Neoplasms, Cadherins, Chromatin, Chromatin Assembly and Disassembly, Epithelial-Mesenchymal Transition/genetics, Forkhead Transcription Factors, Humans, Transforming Growth Factor beta
ABSTRACT
Sequential activity reflecting previously experienced temporal sequences is considered a hallmark of learning across cortical areas. However, it is unknown how cortical circuits avoid the converse problem: producing spurious sequences that do not reflect sequences in their inputs. We develop methods to quantify and study sequentiality in neural responses. We show that recurrent circuit responses generally include spurious sequences, which are specifically prevented in circuits that obey two widely known features of cortical microcircuit organization: Dale's law and Hebbian connectivity. In particular, spike-timing-dependent plasticity in excitation-inhibition networks leads to an adaptive erasure of spurious sequences. We tested our theory in multielectrode recordings from the visual cortex of awake ferrets. Although responses to natural stimuli were largely non-sequential, responses to artificial stimuli initially included spurious sequences, which diminished over extended exposure. These results reveal an unexpected role for Hebbian experience-dependent plasticity and Dale's law in sensory cortical circuits.
Subject(s)
Neurological Models, Visual Cortex, Animals, Ferrets, Neuronal Plasticity/physiology, Parietal Lobe, Visual Cortex/physiology
ABSTRACT
The brain exhibits coherent, long-range oscillations, and it now appears that these oscillations play a substantial role in neural coding: they can boost the information contained in action potentials by as much as 50%.
Subject(s)
Action Potentials/physiology, Biological Clocks/physiology, Visual Cortex/physiology, Animals, Macaca
ABSTRACT
Efficient and versatile processing of any hierarchically structured information requires a learning mechanism that combines lower-level features into higher-level chunks. We investigated this chunking mechanism in humans with a visual pattern-learning paradigm. We developed an ideal learner based on Bayesian model comparison that extracts and stores only those chunks of information that are minimally sufficient to encode a set of visual scenes. Our ideal Bayesian chunk learner not only reproduced the results of a large set of previous empirical findings in the domain of human pattern learning but also made a key prediction that we confirmed experimentally. In accordance with Bayesian learning but contrary to associative learning, human performance was well above chance when pair-wise statistics in the exemplars contained no relevant information. Thus, humans extract chunks from complex visual patterns by generating accurate yet economical representations and not by encoding the full correlational structure of the input.
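The logic of the ideal learner, preferring the most economical description that still explains the scenes, can be illustrated by comparing two generative models of a single shape pair via their marginal likelihoods. The Beta-Bernoulli setup below is a deliberately stripped-down analogue of Bayesian model comparison, not the model used in the study.

```python
# Chunk extraction as Bayesian model comparison: is the evidence higher for
# "A and B form one chunk" or for "A and B occur independently"?
# Beta(1,1) priors; a simplified analogue, not the paper's ideal learner.
from math import lgamma

def log_beta_bernoulli(k, n):                  # log marginal likelihood of k "present" out of n scenes
    return lgamma(k + 1) + lgamma(n - k + 1) - lgamma(n + 2)

def log_evidence_chunk(n_both, n_only_a, n_only_b, n_neither):
    if n_only_a or n_only_b:                   # a strict chunk never appears in halves
        return float("-inf")
    return log_beta_bernoulli(n_both, n_both + n_neither)

def log_evidence_independent(n_both, n_only_a, n_only_b, n_neither):
    n = n_both + n_only_a + n_only_b + n_neither
    return (log_beta_bernoulli(n_both + n_only_a, n) +    # appearances of shape A
            log_beta_bernoulli(n_both + n_only_b, n))     # appearances of shape B

counts = dict(n_both=12, n_only_a=0, n_only_b=0, n_neither=8)
print(log_evidence_chunk(**counts), log_evidence_independent(**counts))
# the chunk model wins whenever A and B reliably appear and disappear together
```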
Subject(s)
Learning/physiology, Ocular Vision/physiology, Bayes Theorem, Humans
ABSTRACT
Perception is often described as probabilistic inference requiring an internal representation of uncertainty. However, it is unknown whether uncertainty is represented in a task-dependent manner, solely at the level of decisions, or in a fully Bayesian manner, across the entire perceptual pathway. To address this question, we first codify and evaluate the possible strategies the brain might use to represent uncertainty, and highlight the normative advantages of fully Bayesian representations. In such representations, uncertainty information is explicitly represented at all stages of processing, including early sensory areas, allowing for flexible and efficient computations in a wide variety of situations. Next, we critically review neural and behavioral evidence for fully Bayesian representations of uncertainty in the brain. We argue that sufficient behavioral evidence for fully Bayesian representations is lacking and suggest experimental approaches for demonstrating the existence of multivariate posterior distributions along the perceptual pathway.
ABSTRACT
The dendritic tree contributes significantly to the elementary computations a neuron performs while converting its synaptic inputs into action potential output. Traditionally, these computations have been characterized as both temporally and spatially localized. Under this localist account, neurons compute near-instantaneous mappings from their current input to their current output, brought about by somatic summation of dendritic contributions that are generated in functionally segregated compartments. However, recent evidence about the presence of oscillations in dendrites suggests a qualitatively different mode of operation: the instantaneous phase of such oscillations can depend on a long history of inputs, and under appropriate conditions, even dendritic oscillators that are remote may interact through synchronization. Here, we develop a mathematical framework to analyze the interactions of local dendritic oscillations and the way these interactions influence single cell computations. Combining weakly coupled oscillator methods with cable theoretic arguments, we derive phase-locking states for multiple oscillating dendritic compartments. We characterize how the phase-locking properties depend on key parameters of the oscillating dendrite: the electrotonic properties of the (active) dendritic segment, and the intrinsic properties of the dendritic oscillators. As a direct consequence, we show how input to the dendrites can modulate phase-locking behavior and hence global dendritic coherence. In turn, dendritic coherence is able to gate the integration and propagation of synaptic signals to the soma, ultimately leading to an effective control of somatic spike generation. Our results suggest that dendritic oscillations enable the dendritic tree to operate on more global temporal and spatial scales than previously thought; notably that local dendritic activity may be a mechanism for generating on-going whole-cell voltage oscillations.
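The flavour of the weakly-coupled-oscillator reduction can be conveyed with two phase variables standing in for two dendritic compartments. The sinusoidal interaction function and the parameter values below are generic assumptions, not the phase-interaction functions derived in the framework itself.

```python
# Two weakly coupled phase oscillators: the sign of the coupling determines
# whether the compartments lock in-phase or in anti-phase.
import numpy as np

def steady_phase_difference(omega1, omega2, coupling, T=2000, dt=0.01):
    theta = np.array([0.0, np.pi / 2])          # initial phases of the two compartments
    for _ in range(T):
        d1 = omega1 + coupling * np.sin(theta[1] - theta[0])
        d2 = omega2 + coupling * np.sin(theta[0] - theta[1])
        theta = theta + dt * np.array([d1, d2])
    return (theta[1] - theta[0]) % (2 * np.pi)

print(steady_phase_difference(1.0, 1.2, coupling=0.5))    # locks near zero phase lag
print(steady_phase_difference(1.0, 1.2, coupling=-0.5))   # locks near anti-phase
```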