ABSTRACT
Mounting evidence suggests that during conscious states, the electrodynamics of the cortex are poised near a critical point or phase transition and that this near-critical behavior supports the vast flow of information through cortical networks during conscious states. Here, we empirically identify a mathematically specific critical point near which waking cortical oscillatory dynamics operate: the edge-of-chaos critical point, or the boundary between stability and chaos. We do so by applying the recently developed modified 0-1 chaos test to electrocorticography (ECoG) and magnetoencephalography (MEG) recordings from the cortices of humans and macaques across normal waking, generalized seizure, anesthesia, and psychedelic states. Our evidence suggests that cortical information processing is disrupted during unconscious states because of a transition of low-frequency cortical electric oscillations away from this critical point; conversely, we show that psychedelics may increase the information richness of cortical activity by tuning low-frequency cortical oscillations closer to this critical point. Finally, we analyze clinical electroencephalography (EEG) recordings from patients with disorders of consciousness (DOC) and show that assessing the proximity of slow cortical oscillatory electrodynamics to the edge-of-chaos critical point may be useful as an index of consciousness in the clinical setting.
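As background for the method, the following is a minimal numpy sketch of the standard 0-1 chaos test (the Gottwald-Melbourne K statistic), of which the modified test used in the study is a refinement; the refinements (e.g., noise titration and finite-length corrections) are not reproduced here, and the example data and parameter choices are illustrative assumptions.

```python
import numpy as np

def chaos_01_K(phi, c=None, rng=None):
    """Standard 0-1 chaos test: K ~ 0 for regular dynamics, K ~ 1 for chaos.
    The modified test applied in the study adds refinements not shown here."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(phi)
    c = rng.uniform(np.pi / 5, 4 * np.pi / 5) if c is None else c
    j = np.arange(1, n + 1)
    # Translation variables: project the series onto rotating coordinates.
    p = np.cumsum(phi * np.cos(j * c))
    q = np.cumsum(phi * np.sin(j * c))
    # Mean-square displacement, evaluated only up to n/10 for stability.
    ncut = n // 10
    M = np.array([np.mean((p[d:] - p[:-d]) ** 2 + (q[d:] - q[:-d]) ** 2)
                  for d in range(1, ncut + 1)])
    # K = correlation of displacement with time (the "correlation method").
    return np.corrcoef(np.arange(1, ncut + 1), M)[0, 1]

# Chaotic logistic map -> K near 1; a pure sinusoid -> K near 0.
x = np.empty(2000); x[0] = 0.4
for t in range(1999):
    x[t + 1] = 3.99 * x[t] * (1.0 - x[t])
print(chaos_01_K(x - x.mean()), chaos_01_K(np.sin(0.3 * np.arange(2000))))
```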
Subjects
Cerebral Cortex/physiology; Consciousness/physiology; Electrophysiological Phenomena; Animals; Brain Mapping; Humans
ABSTRACT
Inhibitory neurons dominate the intrinsic circuits in the visual thalamus. Interneurons in the lateral geniculate nucleus innervate relay cells and each other densely to provide powerful inhibition. The visual sector of the overlying thalamic reticular nucleus receives input from relay cells and supplies feedback inhibition to them in return. Together, these two inhibitory circuits influence all information transmitted from the retina to the primary visual cortex. By contrast, relay cells make few local connections. This review explores the role of thalamic inhibition from the dual perspectives of feature detection and information theory. For example, we describe how inhibition sharpens tuning for spatial and temporal features of the stimulus and how it might enhance image perception. We also discuss how inhibitory circuits help to reduce redundancy in signals sent downstream and, at the same time, are adapted to maximize the amount of information conveyed to the cortex.
Subjects
Neural Inhibition/physiology; Thalamus/physiology; Visual Pathways/physiology; Visual Perception/physiology; Animals; Geniculate Bodies/physiology; Interneurons/physiology; Visual Cortex/physiology
ABSTRACT
We investigate the task of retrieving information from compositional distributed representations formed by hyperdimensional computing/vector symbolic architectures and present novel techniques that achieve new information rate bounds. First, we provide an overview of the decoding techniques that can be used to approach the retrieval task. The techniques are categorized into four groups. We then evaluate the considered techniques in several settings that involve, for example, inclusion of external noise and storage elements with reduced precision. In particular, we find that the decoding techniques from the sparse coding and compressed sensing literature (rarely used for hyperdimensional computing/vector symbolic architectures) are also well suited for decoding information from the compositional distributed representations. Combining these decoding techniques with interference cancellation ideas from communications improves previously reported bounds (Hersche et al., 2021) of the information rate of the distributed representations from 1.20 to 1.40 bits per dimension for smaller codebooks and from 0.60 to 1.26 bits per dimension for larger codebooks.
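As an illustration of the retrieval task, here is a hedged numpy sketch of one possible decoding setup: bipolar codevectors bound to random position keys and superposed, decoded by a matched-filter (codebook correlation) readout followed by one pass of successive interference cancellation. The encoding scheme and parameters are assumptions for illustration, not the exact configurations benchmarked in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D, M, K = 1024, 32, 8           # dimension, codebook size, stored items
codebook = rng.choice([-1.0, 1.0], size=(M, D))
keys = rng.choice([-1.0, 1.0], size=(K, D))   # one random key per position

symbols = rng.integers(0, M, size=K)
s = np.sum(keys * codebook[symbols], axis=0)  # superposition of bound pairs

def decode(s):
    # Matched filter: unbind each position key, correlate with the codebook.
    return np.array([np.argmax(codebook @ (keys[k] * s)) for k in range(K)])

est = decode(s)
# One successive-interference-cancellation pass: strip the other positions'
# current estimates from the trace before re-decoding each position.
for k in range(K):
    residual = s - np.sum(keys * codebook[est], axis=0) + keys[k] * codebook[est[k]]
    est[k] = np.argmax(codebook @ (keys[k] * residual))
print((est == symbols).mean())
```

Re-decoding against the interference-stripped residual is the communications-inspired step that the abstract credits with lifting the achievable information rate.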
ABSTRACT
Energy-based models (EBMs) assign an unnormalized log probability to data samples. This functionality has a variety of applications, such as sample synthesis, data denoising, sample restoration, outlier detection, Bayesian reasoning and many more. But, the training of EBMs using standard maximum likelihood is extremely slow because it requires sampling from the model distribution. Score matching potentially alleviates this problem. In particular, denoising-score matching has been successfully used to train EBMs. Using noisy data samples with one fixed noise level, these models learn fast and yield good results in data denoising. However, demonstrations of such models in the high-quality sample synthesis of high-dimensional data were lacking. Recently, a paper showed that a generative model trained by denoising-score matching accomplishes excellent sample synthesis when trained with data samples corrupted with multiple levels of noise. Here we provide an analysis and empirical evidence showing that training with multiple noise levels is necessary when the data dimension is high. Leveraging this insight, we propose a novel EBM trained with multiscale denoising-score matching. Our model exhibits a data-generation performance comparable to state-of-the-art techniques such as GANs and sets a new baseline for EBMs. The proposed model also provides density information and performs well on an image-inpainting task.
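A minimal PyTorch sketch of the multiscale denoising-score-matching objective on toy two-dimensional data; the network, noise schedule, and data here are illustrative assumptions (the paper's model operates on images).

```python
import torch, torch.nn as nn

# Toy 2-D data: a Gaussian mixture stands in for images.
def sample_data(n):
    centers = torch.tensor([[-2., 0.], [2., 0.]])
    return centers[torch.randint(0, 2, (n,))] + 0.3 * torch.randn(n, 2)

sigmas = torch.tensor([1.0, 0.5, 0.25, 0.1])   # multiple noise scales
net = nn.Sequential(nn.Linear(3, 128), nn.SiLU(),
                    nn.Linear(128, 128), nn.SiLU(), nn.Linear(128, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(2000):
    x = sample_data(256)
    idx = torch.randint(0, len(sigmas), (256,))
    sigma = sigmas[idx].unsqueeze(1)
    noise = torch.randn_like(x)
    x_noisy = x + sigma * noise
    # Score network conditioned on the noise scale.
    score = net(torch.cat([x_noisy, sigma], dim=1))
    # DSM target: -(x_noisy - x)/sigma^2; weighting each scale by sigma^2
    # gives the loss below, so all scales contribute comparable magnitudes.
    loss = ((sigma * score + noise) ** 2).sum(dim=1).mean() / 2
    opt.zero_grad(); loss.backward(); opt.step()
```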
ABSTRACT
This article reviews recent progress in the development of the computing framework Vector Symbolic Architectures (also known as Hyperdimensional Computing). This framework is well suited for implementation in stochastic, emerging hardware, and it naturally expresses the types of cognitive operations required for Artificial Intelligence (AI). We demonstrate in this article that the field-like algebraic structure of Vector Symbolic Architectures offers simple but powerful operations on high-dimensional vectors that can support all data structures and manipulations relevant to modern computing. In addition, we illustrate the distinguishing feature of Vector Symbolic Architectures, "computing in superposition," which sets it apart from conventional computing. It also opens the door to efficient solutions to the difficult combinatorial search problems inherent in AI applications. We sketch ways of demonstrating that Vector Symbolic Architectures are computationally universal. We see them acting as a framework for computing with distributed representations that can play the role of an abstraction layer for emerging computing hardware. This article serves as a reference for computer architects by illustrating the philosophy behind Vector Symbolic Architectures, techniques of distributed computing with them, and their relevance to emerging computing hardware, such as neuromorphic computing.
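A toy numpy sketch of the core algebra, using one common VSA instantiation among several: bipolar vectors with binding by componentwise multiplication, bundling by the sign of a sum, and similarity by normalized dot product. The role/filler names and record structure are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000
vec = lambda: rng.choice([-1, 1], size=D)
sim = lambda a, b: a @ b / D           # normalized dot product

# Atomic hypervectors for roles and fillers.
NAME, CITY = vec(), vec()
alice, bob, rome, oslo = vec(), vec(), vec(), vec()

# Bind role-filler pairs and bundle them into one record vector.
record = np.sign(NAME * alice + CITY * rome)

# Query "which city?" by unbinding: multiplication is its own inverse.
query = CITY * record
for label, v in [("rome", rome), ("oslo", oslo), ("alice", alice)]:
    print(label, round(sim(query, v), 3))   # rome ~ 0.5, others ~ 0
```

Because binding with a bipolar vector is its own inverse, the query retrieves the filler of the CITY role directly from the superposed record; operating on all stored items at once through one vector is the "computing in superposition" the review highlights.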
ABSTRACT
Information coding by precise timing of spikes can be faster and more energy efficient than traditional rate coding. However, spike-timing codes are often brittle, which has limited their use in theoretical neuroscience and computing applications. Here, we propose a type of attractor neural network in complex state space and show how it can be leveraged to construct spiking neural networks with robust computational properties through a phase-to-timing mapping. Building on Hebbian neural associative memories, like Hopfield networks, we first propose threshold phasor associative memory (TPAM) networks. Complex phasor patterns whose components can assume continuous-valued phase angles and binary magnitudes can be stored and retrieved as stable fixed points in the network dynamics. TPAM achieves high memory capacity when storing sparse phasor patterns, and we derive the energy function that governs its fixed-point attractor dynamics. Second, we construct two spiking neural networks to approximate the complex algebraic computations in TPAM: a reductionist model with resonate-and-fire neurons, and a biologically plausible network of integrate-and-fire neurons with synaptic delays and recurrently connected inhibitory interneurons. The fixed points of TPAM correspond to stable periodic states of precisely timed spiking activity that are robust to perturbation. The link established between rhythmic firing patterns and complex attractor dynamics has implications for the interpretation of spike patterns seen in neuroscience and can serve as a framework for computation in emerging neuromorphic devices.
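A hedged numpy sketch of a TPAM-like network: sparse phasor patterns stored by complex (conjugate) Hebbian outer products and retrieved with a phase-only update gated by a magnitude threshold. The threshold and normalization choices below are simplifications, not the derivation in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P, K = 400, 10, 40            # neurons, patterns, active units per pattern

# Sparse phasor patterns: K random phases, zero magnitude elsewhere.
patterns = np.zeros((P, N), dtype=complex)
for p in range(P):
    idx = rng.choice(N, K, replace=False)
    patterns[p, idx] = np.exp(1j * rng.uniform(0, 2 * np.pi, K))

# Complex Hebbian storage (conjugate outer products, zero diagonal).
W = sum(np.outer(z, z.conj()) for z in patterns) / N
np.fill_diagonal(W, 0)

def retrieve(z, steps=50, theta=0.05):
    for _ in range(steps):
        u = W @ z
        # Threshold phasor update: keep the phase, binarize the magnitude.
        z = np.where(np.abs(u) > theta, u / np.maximum(np.abs(u), 1e-12), 0)
    return z

# Cue with a phase-jittered copy of pattern 0 and check the overlap.
cue = patterns[0] * np.exp(1j * 0.3 * rng.standard_normal(N))
out = retrieve(cue)
print(abs(np.vdot(out, patterns[0])) / K)   # ~1 on successful retrieval
```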
Subjects
Action Potentials/physiology; Interneurons/physiology; Memory/physiology; Models, Neurological; Nerve Net/physiology; Neural Networks, Computer; Humans
ABSTRACT
Even though the lateral geniculate nucleus of the thalamus (LGN) is associated with form vision, that is not its sole role. Only the dorsal portion of LGN (dLGN) projects to V1. The ventral division (vLGN) connects subcortically, sending inhibitory projections to sensorimotor structures, including the superior colliculus (SC) and regions associated with certain behavioral states, such as fear (Monavarfeshani et al., 2017; Salay et al., 2018). We combined computational, physiological, and anatomical approaches to explore visual processing in vLGN of mice of both sexes, making comparisons to dLGN and SC for perspective. Compatible with past, qualitative descriptions, the receptive fields we quantified in vLGN were larger than those in dLGN, and most cells preferred bright versus dark stimuli (Harrington, 1997). Dendritic arbors spanned the length and/or width of vLGN and were often asymmetric, positioned to collect input from large but discrete territories. By contrast, arbors in dLGN are compact (Krahe et al., 2011). Consistent with spatially coarse receptive fields in vLGN, visually evoked changes in spike timing were less precise than for dLGN and SC. Notably, however, the membrane currents and spikes of some cells in vLGN displayed gamma oscillations whose phase and strength varied with stimulus pattern, as for SC (Stitt et al., 2013). Thus, vLGN can engage its targets using oscillation-based and conventional rate codes. Finally, dark shadows activate SC and drive escape responses, whereas vLGN prefers bright stimuli. Thus, one function of long-range inhibitory projections from vLGN might be to enable movement by releasing motor targets, such as SC, from suppression. SIGNIFICANCE STATEMENT: Only the dorsal lateral geniculate nucleus (dLGN) connects to cortex to serve form vision; the ventral division (vLGN) projects subcortically to sensorimotor nuclei, including the superior colliculus (SC), via long-range inhibitory connections. Here, we asked how vLGN processes visual information, making comparisons with dLGN and SC for perspective. Cells in vLGN versus dLGN had wider dendritic arbors, larger receptive fields, and fired with lower temporal precision, consistent with a modulatory role. Like SC, but not dLGN, visual stimuli entrained oscillations in vLGN, perhaps reflecting shared strategies for visuomotor processing. Finally, most neurons in vLGN preferred bright shapes, whereas dark stimuli activate SC and drive escape behaviors, suggesting that vLGN enables rapid movement by releasing target motor structures from inhibition.
Subjects
Geniculate Bodies/physiology; Visual Perception/physiology; Animals; Evoked Potentials, Visual/physiology; Female; Male; Mice; Mice, Inbred C57BL; Visual Pathways/physiology
ABSTRACT
Markov Chain Monte Carlo (MCMC) methods sample from unnormalized probability distributions and offer guarantees of exact sampling. However, in the continuous case, unfavorable geometry of the target distribution can greatly limit the efficiency of MCMC methods. Augmenting samplers with neural networks can potentially improve their efficiency. Previous neural network-based samplers were trained with objectives that either did not explicitly encourage exploration, or contained a term that encouraged exploration but only for well structured distributions. Here we propose to maximize proposal entropy for adapting the proposal to distributions of any shape. To optimize proposal entropy directly, we devised a neural network MCMC sampler that has a flexible and tractable proposal distribution. Specifically, our network architecture utilizes the gradient of the target distribution for generating proposals. Our model achieved significantly higher efficiency than previous neural network MCMC techniques in a variety of sampling tasks, sometimes by more than an order of magnitude. Further, the sampler was demonstrated through the training of a convergent energy-based model of natural images. The adaptive sampler achieved unbiased sampling with significantly higher proposal entropy than a Langevin dynamics sampler. The trained sampler also achieved better sample quality.
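For orientation, the Langevin baseline mentioned above can itself be wrapped in an exact sampler. Below is a sketch of MALA (Metropolis-adjusted Langevin), which, like the paper's learned sampler, builds proposals from the target's gradient; the paper's neural proposal network and entropy objective are not reproduced here, and this is named plainly as MALA rather than the paper's method.

```python
import numpy as np

def mala(logp, grad_logp, x0, step=0.1, n=5000, rng=None):
    """Metropolis-adjusted Langevin: gradient-informed proposals plus
    an accept/reject step that keeps the sampling exact."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n):
        mu = x + step * grad_logp(x)                 # drift toward high density
        prop = mu + np.sqrt(2 * step) * rng.standard_normal(x.shape)
        mu_rev = prop + step * grad_logp(prop)
        # Log proposal densities q(prop|x) and q(x|prop), up to constants.
        lq_fwd = -np.sum((prop - mu) ** 2) / (4 * step)
        lq_rev = -np.sum((x - mu_rev) ** 2) / (4 * step)
        if np.log(rng.uniform()) < logp(prop) - logp(x) + lq_rev - lq_fwd:
            x = prop
        samples.append(x.copy())
    return np.array(samples)

# Example target: standard 2-D Gaussian.
s = mala(lambda x: -0.5 * x @ x, lambda x: -x, np.zeros(2))
print(s.mean(axis=0), s.var(axis=0))   # ~[0, 0] and ~[1, 1]
```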
ABSTRACT
We develop theoretical foundations of resonator networks, a new type of recurrent neural network introduced in Frady, Kent, Olshausen, and Sommer (2020), a companion article in this issue, to solve a high-dimensional vector factorization problem arising in Vector Symbolic Architectures. Given a composite vector formed by the Hadamard product between a discrete set of high-dimensional vectors, a resonator network can efficiently decompose the composite into these factors. We compare the performance of resonator networks against optimization-based methods, including Alternating Least Squares and several gradient-based algorithms, showing that resonator networks are superior in several important ways. This advantage is achieved by leveraging a combination of nonlinear dynamics and searching in superposition, by which estimates of the correct solution are formed from a weighted superposition of all possible solutions. While the alternative methods also search in superposition, the dynamics of resonator networks allow them to strike a more effective balance between exploring the solution space and exploiting local information to drive the network toward probable solutions. Resonator networks are not guaranteed to converge, but within a particular regime they almost always do. In exchange for relaxing the guarantee of global convergence, resonator networks are dramatically more effective at finding factorizations than all alternative approaches considered.
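A minimal numpy sketch of a three-factor resonator network for bipolar codevectors; the codebook sizes and iteration count are illustrative assumptions, and, as the article discusses, convergence is not guaranteed.

```python
import numpy as np

rng = np.random.default_rng(3)
D, M, F = 1500, 20, 3           # dimension, codebook size, number of factors
books = [rng.choice([-1.0, 1.0], size=(M, D)) for _ in range(F)]

truth = [int(rng.integers(0, M)) for _ in range(F)]
s = np.prod([books[f][truth[f]] for f in range(F)], axis=0)  # composite

# Initialize each estimate as the superposition of its whole codebook.
est = [np.sign(b.sum(axis=0) + 1e-9) for b in books]
for _ in range(100):
    for f in range(F):
        others = np.prod([est[g] for g in range(F) if g != f], axis=0)
        # Unbind the other factors (multiplication is self-inverse for +-1),
        # then clean up by projecting onto this factor's codebook.
        a = books[f] @ (s * others)
        est[f] = np.sign(books[f].T @ a + 1e-9)

decoded = [int(np.argmax(books[f] @ est[f])) for f in range(F)]
print(decoded == truth)
```

Each update unbinds the current estimates of the other factors from the composite and projects the result onto that factor's codebook, so every estimate remains a weighted superposition of candidate codevectors: this is the searching-in-superposition dynamic the analysis credits for the method's effectiveness.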
Subjects
Brain/physiology; Cognition/physiology; Neural Networks, Computer; Animals; Humans
ABSTRACT
The ability to encode and manipulate data structures with distributed neural representations could qualitatively enhance the capabilities of traditional neural networks by supporting rule-based symbolic reasoning, a central property of cognition. Here we show how this may be accomplished within the framework of Vector Symbolic Architectures (VSAs) (Plate, 1991; Gayler, 1998; Kanerva, 1996), whereby data structures are encoded by combining high-dimensional vectors with operations that together form an algebra on the space of distributed representations. In particular, we propose an efficient solution to a hard combinatorial search problem that arises when decoding elements of a VSA data structure: the factorization of products of multiple codevectors. Our proposed algorithm, called a resonator network, is a new type of recurrent neural network that interleaves VSA multiplication operations and pattern completion. We show in two examples (parsing of a tree-like data structure and parsing of a visual scene) how the factorization problem arises and how the resonator network can solve it. More broadly, resonator networks open the possibility of applying VSAs to myriad artificial intelligence problems in real-world domains. The companion article in this issue (Kent, Frady, Sommer, & Olshausen, 2020) presents a rigorous analysis and evaluation of the performance of resonator networks, showing it outperforms alternative approaches.
Subjects
Brain/physiology; Cognition/physiology; Neural Networks, Computer; Animals; Humans
ABSTRACT
An outstanding problem in neuroscience is to understand how information is integrated across the many modules of the brain. While classic information-theoretic measures have transformed our understanding of feedforward information processing in the brain's sensory periphery, comparable measures for information flow in the massively recurrent networks of the rest of the brain have been lacking. To address this, recent work in information theory has produced a sound measure of network-wide "integrated information", which can be estimated from time-series data. But, a computational hurdle has stymied attempts to measure large-scale information integration in real brains. Specifically, the measurement of integrated information involves a combinatorial search for the informational "weakest link" of a network, a process whose computation time explodes super-exponentially with network size. Here, we show that spectral clustering, applied on the correlation matrix of time-series data, provides an approximate but robust solution to the search for the informational weakest link of large networks. This reduces the computation time for integrated information in large systems from longer than the lifespan of the universe to just minutes. We evaluate this solution in brain-like systems of coupled oscillators as well as in high-density electrocorticography data from two macaque monkeys, and show that the informational "weakest link" of the monkey cortex splits posterior sensory areas from anterior association areas. Finally, we use our solution to provide evidence in support of the long-standing hypothesis that information integration is maximized by networks with a high global efficiency, and that modular network structures promote the segregation of information.
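A hedged numpy sketch of the clustering step: build the channel correlation matrix, convert it to a nonnegative affinity, and split it with the Fiedler vector of the graph Laplacian. In the full method the candidate bipartition would then be scored with the integrated-information measure; that step, and the exact affinity construction used in the paper, are omitted here.

```python
import numpy as np

def spectral_bipartition(X):
    """X: (channels, time) array. Returns a boolean channel split found
    from the Fiedler vector of the correlation-based graph Laplacian."""
    A = np.abs(np.corrcoef(X))          # nonnegative affinity between channels
    np.fill_diagonal(A, 0)
    d = A.sum(axis=1)
    L = np.diag(d) - A                  # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]                # eigenvector of 2nd-smallest eigenvalue
    return fiedler > 0                  # sign pattern defines the two parts

# Toy example: two weakly coupled groups of noisy oscillators.
rng = np.random.default_rng(4)
t = np.arange(0, 60, 0.01)
g1 = np.sin(2 * np.pi * 1.0 * t) + 0.5 * rng.standard_normal((5, t.size))
g2 = np.sin(2 * np.pi * 1.7 * t) + 0.5 * rng.standard_normal((5, t.size))
print(spectral_bipartition(np.vstack([g1, g2])))  # splits first 5 from last 5
```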
Subjects
Cerebral Cortex/physiology; Information Theory; Models, Neurological; Nerve Net/physiology; Animals; Computational Biology; Macaca
ABSTRACT
To accommodate structured approaches of neural computation, we propose a class of recurrent neural networks for indexing and storing sequences of symbols or analog data vectors. These networks with randomized input weights and orthogonal recurrent weights implement coding principles previously described in vector symbolic architectures (VSA) and leverage properties of reservoir computing. In general, the storage in reservoir computing is lossy, and crosstalk noise limits the retrieval accuracy and information capacity. A novel theory to optimize memory performance in such networks is presented and compared with simulation experiments. The theory describes linear readout of analog data and readout with winner-take-all error correction of symbolic data as proposed in VSA models. We find that diverse VSA models from the literature have universal performance properties, which are superior to what previous analyses predicted. Further, we propose novel VSA models with the statistically optimal Wiener filter in the readout that exhibit much higher information capacity, in particular for storing analog data. The theory we present also applies to memory buffers, networks with gradual forgetting, which can operate on infinite data streams without memory overflow. Interestingly, we find that different forgetting mechanisms, such as attenuating recurrent weights or neural nonlinearities, produce very similar behavior if the forgetting time constants are matched. Such models exhibit extensive capacity when their forgetting time constant is optimized for given noise conditions and network size. These results enable the design of new types of VSA models for the online processing of data streams.
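A minimal numpy sketch of such a sequence-indexing network: random bipolar input codes, a permutation as the orthogonal recurrent matrix, and readout by delay-specific matched filtering with winner-take-all cleanup. The statistically optimal Wiener-filter readout and the forgetting mechanisms analyzed in the paper are not shown; sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
D, M, T = 2000, 26, 12           # dimension, alphabet size, stream length
codebook = rng.choice([-1.0, 1.0], size=(M, D))    # random input weights
perm = rng.permutation(D)                          # orthogonal recurrent matrix

stream = rng.integers(0, M, size=T)
x = np.zeros(D)
for s in stream:                  # write: rotate the trace, add the new symbol
    x = x[perm] + codebook[s]

def recall(x, delay):
    # A symbol written `delay` steps ago sits at permutation power `delay`.
    probe = codebook.copy()
    for _ in range(delay):
        probe = probe[:, perm]
    return int(np.argmax(probe @ x))   # winner-take-all cleanup

print([recall(x, T - 1 - t) for t in range(T)] == list(stream))
```

Because the permutation is orthogonal, earlier symbols are neither amplified nor erased, only rotated into delay-specific subcodes; crosstalk between them is the noise whose statistics the theory characterizes.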
Subjects
Memory, Short-Term/physiology; Models, Neurological; Neural Networks, Computer; Neurons/physiology; Algorithms; Computer Simulation; Humans
ABSTRACT
Comparative physiological and anatomical studies have greatly advanced our understanding of sensory systems. Many lines of evidence show that the murine lateral geniculate nucleus (LGN) has unique attributes, compared with other species such as cat and monkey. For example, in rodent, thalamic receptive field structure is markedly diverse, and many cells are sensitive to stimulus orientation and direction. To explore shared and different strategies of synaptic integration across species, we made whole-cell recordings in vivo from the murine LGN during the presentation of visual stimuli, analyzed the results with different computational approaches, and compared our findings with those from cat. As for carnivores, murine cells with classical center-surround receptive fields had a "push-pull" structure of excitation and inhibition within a given On or Off subregion. These cells compose the largest single population in the murine LGN (~40%), indicating that push-pull is key in the form vision pathway across species. For two cell types with overlapping On and Off responses, which recalled either W3 or suppressed-by-contrast ganglion cells in murine retina, inhibition took a different form and was most pronounced for spatially extensive stimuli. Other On-Off cells were selective for stimulus orientation and direction. In these cases, retinal inputs were tuned and, for oriented cells, the second-order subunit of the receptive field predicted the preferred angle. By contrast, suppression was not tuned and appeared to sharpen stimulus selectivity. Together, our results provide new perspectives on the role of excitation and inhibition in retinothalamic processing. SIGNIFICANCE STATEMENT: We explored the murine lateral geniculate nucleus from a comparative physiological perspective. In cat, most retinal cells have center-surround receptive fields and push-pull excitation and inhibition, including neurons with the smallest (highest acuity) receptive fields. The same is true for thalamic relay cells. In mouse retina, the most numerous cell type has the smallest receptive fields but lacks push-pull. The most common receptive field in rodent thalamus, however, is center-surround with push-pull. Thus, receptive field structure supersedes size per se for form vision. Further, for many orientation-selective cells, the second-order component of the receptive field aligned with stimulus preference, whereas suppression was untuned. Thus, inhibition may improve spatial resolution and sharpen other forms of selectivity in rodent lateral geniculate nucleus.
Subjects
Geniculate Bodies/physiology; Nerve Net/physiology; Synapses/physiology; Visual Fields/physiology; Visual Pathways/physiopathology; Visual Perception/physiology; Animals; Brain Mapping; Cats; Female; Male; Mice; Mice, Inbred C57BL; Models, Neurological; Neural Inhibition/physiology; Rats; Rats, Long-Evans; Retinal Ganglion Cells/physiology; Species Specificity; Synaptic Transmission/physiology
ABSTRACT
We propose a normative model for spatial representation in the hippocampal formation that combines optimality principles, such as maximizing coding range and spatial information per neuron, with an algebraic framework for computing in distributed representation. Spatial position is encoded in a residue number system, with individual residues represented by high-dimensional, complex-valued vectors. These are composed into a single vector representing position by a similarity-preserving, conjunctive vector-binding operation. Self-consistency between the representations of the overall position and of the individual residues is enforced by a modular attractor network whose modules correspond to the grid cell modules in entorhinal cortex. The vector binding operation can also associate different contexts to spatial representations, yielding a model for entorhinal cortex and hippocampus. We show that the model achieves normative desiderata including superlinear scaling of patterns with dimension, robust error correction, and hexagonal, carry-free encoding of spatial position. These properties in turn enable robust path integration and association with sensory inputs. More generally, the model formalizes how compositional computations could occur in the hippocampal formation and leads to testable experimental predictions.
ABSTRACT
Variable binding is a cornerstone of symbolic reasoning and cognition. But how binding can be implemented in connectionist models has puzzled neuroscientists, cognitive psychologists, and neural network researchers for many decades. One type of connectionist model that naturally includes a binding operation is vector symbolic architectures (VSAs). In contrast to other proposals for variable binding, the binding operation in VSAs is dimensionality-preserving, which enables representing complex hierarchical data structures, such as trees, while avoiding a combinatoric expansion of dimensionality. Classical VSAs encode symbols by dense randomized vectors, in which information is distributed throughout the entire neuron population. By contrast, in the brain, features are encoded more locally, by the activity of single neurons or small groups of neurons, often forming sparse vectors of neural activation. Following Laiho et al. (2015), we explore symbolic reasoning with a special case of sparse distributed representations. Using techniques from compressed sensing, we first show that variable binding in classical VSAs is mathematically equivalent to tensor product binding between sparse feature vectors, another well-known binding operation which increases dimensionality. This theoretical result motivates us to study two dimensionality-preserving binding methods that include a reduction of the tensor matrix into a single sparse vector. One binding method, based on random projections, applies to general sparse vectors; the other, block-local circular convolution, is defined for sparse vectors with block structure, known as sparse block-codes. Our experiments reveal that block-local circular convolution binding has ideal properties, whereas random-projection-based binding also works but is lossy. We demonstrate in example applications that a VSA with block-local circular convolution and sparse block-codes reaches similar performance as classical VSAs. Finally, we discuss our results in the context of neuroscience and neural networks.
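A sketch of block-local circular convolution in the special case of sparse block-codes with exactly one active unit per block, where convolution within a block reduces to modular addition of the active indices. Only the indices are stored here; magnitudes and the general multi-active case are omitted.

```python
import numpy as np

rng = np.random.default_rng(6)
B, L = 100, 32                    # number of blocks, block length (D = B*L)

def random_block_code():
    return rng.integers(0, L, size=B)   # one active index per block

def bind(a, b):
    # Block-local circular convolution of one-hot blocks = index addition mod L.
    return (a + b) % L

def unbind(c, a):
    return (c - a) % L            # block-wise circular correlation

def sim(a, b):
    return np.mean(a == b)        # fraction of blocks that agree

x, y = random_block_code(), random_block_code()
bound = bind(x, y)
print(sim(unbind(bound, x), y))            # 1.0: exact recovery
print(sim(bound, x), sim(bound, y))        # ~1/L: bound vector is dissimilar
```

The dimensionality-preserving property is visible directly: the bound vector lives in the same B-block space as its factors, yet is nearly orthogonal to both.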
Subjects
Cognition; Neural Networks, Computer; Brain; Problem Solving; Neurons/physiology
ABSTRACT
Multilayer neural networks set the current state of the art for many technical classification problems. But, these networks are still, essentially, black boxes in terms of analyzing them and predicting their performance. Here, we develop a statistical theory for the one-layer perceptron and show that it can predict the performance of a surprisingly large variety of neural networks with different architectures. A general theory of classification with perceptrons is developed by generalizing an existing theory for analyzing reservoir computing models and connectionist models for symbolic reasoning known as vector symbolic architectures. Our statistical theory offers three formulas that leverage the signal statistics with increasing detail. The formulas are analytically intractable, but can be evaluated numerically. The description level that captures the most detail requires stochastic sampling methods. Depending on the network model, the simpler formulas already yield high prediction accuracy. The quality of the theory predictions is assessed in three experimental settings: a memorization task for echo state networks (ESNs) from the reservoir computing literature, a collection of classification datasets for shallow randomly connected networks, and the ImageNet dataset for deep convolutional neural networks. We find that the second description level of the perceptron theory can predict the performance of types of ESNs that could not be described previously. Furthermore, the theory can predict the performance of deep multilayer neural networks when applied to their output layer. While other methods for predicting neural network performance commonly require training an estimator model, the proposed theory requires only the first two moments of the distribution of the postsynaptic sums in the output neurons. Moreover, the perceptron theory compares favorably to other methods that do not rely on training an estimator model.
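To illustrate the flavor of the approach, here is a hedged Monte Carlo sketch: treat each output unit's postsynaptic sum as Gaussian with given first two moments and estimate accuracy as the probability that the correct unit wins. The paper's actual formulas differ, and its higher description levels model dependencies that this sketch ignores (the units below are assumed independent).

```python
import numpy as np

def predicted_accuracy(mu, var, n_mc=20000, rng=None):
    """mu[c, k], var[c, k]: mean and variance of output unit k's
    postsynaptic sum when the true class is c (the first two moments).
    Returns the Gaussian-approximation probability that unit c wins,
    assuming independent output units (a simplification)."""
    rng = np.random.default_rng() if rng is None else rng
    C = mu.shape[0]
    acc = 0.0
    for c in range(C):
        draws = mu[c] + np.sqrt(var[c]) * rng.standard_normal((n_mc, C))
        acc += np.mean(draws.argmax(axis=1) == c)
    return acc / C

# Toy check: correct unit has mean 1, others mean 0, unit variance everywhere.
C = 10
mu = np.zeros((C, C)) + np.eye(C)
var = np.ones((C, C))
print(predicted_accuracy(mu, var))
```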
ABSTRACT
A prominent approach to solving combinatorial optimization problems on parallel hardware is Ising machines, i.e., hardware implementations of networks of interacting binary spin variables. Most Ising machines leverage second-order interactions, although important classes of optimization problems, such as satisfiability problems, map more seamlessly to Ising networks with higher-order interactions. Here, we demonstrate that higher-order Ising machines can solve satisfiability problems more resource-efficiently in terms of the number of spin variables and their connections when compared to traditional second-order Ising machines. Further, on a benchmark dataset of Boolean k-satisfiability problems, we show that higher-order Ising machines implemented with coupled oscillators rapidly find better solutions than second-order Ising machines, thus improving the current state of the art for Ising machines.
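A sketch of the underlying mapping: each k-SAT clause becomes a single higher-order energy term that equals 1 exactly when the clause is violated, so a ground state of the higher-order Ising energy is a satisfying assignment. For simplicity this demo minimizes the energy with Metropolis spin flips rather than the coupled-oscillator dynamics used in the paper, and it may not always reach zero energy on hard instances.

```python
import numpy as np

rng = np.random.default_rng(7)

def sat_energy(s, clauses):
    # clauses: list of (vars, signs); sign +1 means a positive literal.
    # Each clause contributes one higher-order term: the product of
    # (1 - sign*s)/2 factors is 1 exactly when every literal is false.
    return sum(np.prod((1 - signs * s[vars_]) / 2) for vars_, signs in clauses)

def metropolis_solve(n, clauses, beta=3.0, steps=20000):
    s = rng.choice([-1, 1], size=n)
    E = sat_energy(s, clauses)
    for _ in range(steps):
        i = rng.integers(n)
        s[i] *= -1
        E_new = sat_energy(s, clauses)
        if E_new <= E or rng.random() < np.exp(-beta * (E_new - E)):
            E = E_new
        else:
            s[i] *= -1               # reject the flip
        if E == 0:
            break
    return s, E

# Random 3-SAT instance: 20 variables, 80 clauses.
n, m = 20, 80
clauses = [(rng.choice(n, 3, replace=False), rng.choice([-1, 1], 3))
           for _ in range(m)]
print(metropolis_solve(n, clauses)[1])   # 0 means all clauses satisfied
```

Encoding a clause this way needs one third-order term instead of the several auxiliary spins and pairwise couplings a second-order reduction would require, which is the resource advantage the abstract quantifies.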
ABSTRACT
Local field potentials (LFPs) reflect the collective dynamics of neural populations, yet their exact relationship to neural codes remains unknown [1]. One notable exception is the theta rhythm of the rodent hippocampus, which seems to provide a reference clock to decode the animal's position from spatiotemporal patterns of neuronal spiking [2] or LFPs [3]. But when the animal stops, theta becomes irregular [4], potentially indicating the breakdown of temporal coding by neural populations. Here we show that no such breakdown occurs, introducing an artificial neural network that can recover position-tuned rhythmic patterns (pThetas) without relying on the more prominent theta rhythm as a reference clock. pTheta and theta preferentially correlate with place cell and interneuron spiking, respectively. When rats forage in an open field, pTheta is jointly tuned to position and head orientation, a property not seen in individual place cells but expected to emerge from place cell sequences [5]. Our work demonstrates that weak and intermittent oscillations, as seen in many brain regions and species, can carry behavioral information commensurate with population spike codes.
ABSTRACT
By influencing the type and quality of information that relay cells transmit, local interneurons in thalamus have a powerful impact on cortex. To define the sensory features that these inhibitory neurons encode, we mapped receptive fields of optogenetically identified cells in the murine dorsolateral geniculate nucleus. Although few in number, local interneurons had diverse types of receptive fields, like their counterpart relay cells. This result differs markedly from visual cortex, where inhibitory cells are typically less selective than excitatory cells. To explore how thalamic interneurons might converge on relay cells, we took a computational approach. Using an evolutionary algorithm to search through a library of interneuron models generated from our results, we show that aggregated output from different groups of local interneurons can simulate the inhibitory component of the relay cell's receptive field. Thus, our work provides proof-of-concept that groups of diverse interneurons can supply feature-specific inhibition to relay cells.
ABSTRACT
We introduce Residue Hyperdimensional Computing, a computing framework that unifies residue number systems with an algebra defined over random, high-dimensional vectors. We show how residue numbers can be represented as high-dimensional vectors in a manner that allows algebraic operations to be performed with component-wise, parallelizable operations on the vector elements. The resulting framework, when combined with an efficient method for factorizing high-dimensional vectors, can represent and operate on numerical values over a large dynamic range using vastly fewer resources than previous methods, and it exhibits impressive robustness to noise. We demonstrate the potential for this framework to solve computationally difficult problems in visual perception and combinatorial optimization, showing improvement over baseline methods. More broadly, the framework provides a possible account for the computational operations of grid cells in the brain, and it suggests new machine learning architectures for representing and manipulating numerical data.
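A toy numpy sketch of the idea: each modulus gets a random phasor base vector whose componentwise powers encode the residue, the per-modulus codes are combined by componentwise multiplication (binding), and adding two encoded numbers is again a componentwise product. The exhaustive-correlation decoding shown here is a stand-in for the efficient resonator-based factorization the framework actually uses; dimensions and moduli are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
D = 2000
moduli = (3, 5, 7)                        # dynamic range = 3*5*7 = 105

# One random phasor base vector per modulus, with phases on the m-th roots
# of unity so each code is exactly periodic with period m.
bases = [np.exp(2j * np.pi * rng.integers(0, m, D) / m) for m in moduli]

def encode(x):
    # Bind the per-modulus residue codes (componentwise multiplication).
    return np.prod([b ** x for b in bases], axis=0)

def decode(v, max_val=105):
    sims = [np.real(np.vdot(encode(x), v)) / D for x in range(max_val)]
    return int(np.argmax(sims))

a, b = 17, 41
print(decode(encode(a) * encode(b)))      # 58: addition is a componentwise product
```

Because exponentiation distributes over the componentwise product, encode(a) * encode(b) equals encode(a + b) exactly, which is the carry-free arithmetic over a large dynamic range that the abstract describes.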