Results 1 - 20 of 50
1.
ArXiv ; 2023 Nov 08.
Article in English | MEDLINE | ID: mdl-37986727

ABSTRACT

We introduce Residue Hyperdimensional Computing, a computing framework that unifies residue number systems with an algebra defined over random, high-dimensional vectors. We show how residue numbers can be represented as high-dimensional vectors in a manner that allows algebraic operations to be performed with component-wise, parallelizable operations on the vector elements. The resulting framework, when combined with an efficient method for factorizing high-dimensional vectors, can represent and operate on numerical values over a large dynamic range using vastly fewer resources than previous methods, and it exhibits impressive robustness to noise. We demonstrate the potential for this framework to solve computationally difficult problems in visual perception and combinatorial optimization, showing improvement over baseline methods. More broadly, the framework provides a possible account for the computational operations of grid cells in the brain, and it suggests new machine learning architectures for representing and manipulating numerical data.
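The core encoding idea can be sketched in a few lines of NumPy: residues become element-wise powers of a random phasor base vector, so component-wise multiplication implements modular addition. The dimension, modulus, and function names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D, m = 256, 7  # vector dimension and modulus; hypothetical sizes for illustration

# Base vector of random m-th roots of unity. Encoding an integer n as base**n
# makes component-wise multiplication of codes act as addition modulo m.
base = np.exp(2j * np.pi * rng.integers(0, m, D) / m)

def encode(n):
    return base ** (n % m)

def decode(v):
    # Nearest-neighbour readout: compare against the m residue codes.
    sims = [np.real(np.vdot(encode(k), v)) for k in range(m)]
    return int(np.argmax(sims))

a, b = encode(3), encode(5)
assert decode(a * b) == (3 + 5) % m  # component-wise product = modular addition
```

Because every operation is component-wise, the scheme parallelizes trivially across vector elements, which is the property the framework exploits.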

2.
Entropy (Basel) ; 25(10)2023 Sep 22.
Article in English | MEDLINE | ID: mdl-37895489

ABSTRACT

Energy-based models (EBMs) assign an unnormalized log probability to data samples. This functionality has a variety of applications, such as sample synthesis, data denoising, sample restoration, outlier detection, Bayesian reasoning and many more. But, the training of EBMs using standard maximum likelihood is extremely slow because it requires sampling from the model distribution. Score matching potentially alleviates this problem. In particular, denoising-score matching has been successfully used to train EBMs. Using noisy data samples with one fixed noise level, these models learn fast and yield good results in data denoising. However, demonstrations of such models in the high-quality sample synthesis of high-dimensional data were lacking. Recently, a paper showed that a generative model trained by denoising-score matching accomplishes excellent sample synthesis when trained with data samples corrupted with multiple levels of noise. Here we provide an analysis and empirical evidence showing that training with multiple noise levels is necessary when the data dimension is high. Leveraging this insight, we propose a novel EBM trained with multiscale denoising-score matching. Our model exhibits a data-generation performance comparable to state-of-the-art techniques such as GANs and sets a new baseline for EBMs. The proposed model also provides density information and performs well on an image-inpainting task.
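The denoising-score-matching objective, averaged over several noise levels, can be sketched on toy 1-D Gaussian data. The model scores, noise levels, and sample sizes below are illustrative assumptions, not the paper's network or training setup.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 50_000)   # toy 1-D "data": standard normal samples

def dsm_loss(model_score, sigma, data):
    # Denoising score matching: corrupt the data, then regress the model's
    # score at the noisy point onto the score of the corruption kernel,
    # which points from the noisy sample back toward the clean one.
    noise = rng.normal(0.0, sigma, size=data.shape)
    noisy = data + noise
    target = -noise / sigma**2
    return float(np.mean((model_score(noisy) - target) ** 2))

def optimal_score(sigma):
    # For N(0,1) data corrupted at level sigma, the noisy data are
    # N(0, 1 + sigma^2), whose score is -z / (1 + sigma^2).
    return lambda z: -z / (1 + sigma**2)

def zero_score(z):
    return np.zeros_like(z)

# A multiscale objective averages the DSM loss over several noise levels.
sigmas = (0.3, 0.6, 1.0)
multiscale_good = np.mean([dsm_loss(optimal_score(s), s, x) for s in sigmas])
multiscale_bad = np.mean([dsm_loss(zero_score, s, x) for s in sigmas])
assert multiscale_good < multiscale_bad  # the matched score achieves lower loss
```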

3.
Nat Commun ; 14(1): 6033, 2023 Sep 27.
Article in English | MEDLINE | ID: mdl-37758716

ABSTRACT

A prominent approach to solving combinatorial optimization problems on parallel hardware is Ising machines, i.e., hardware implementations of networks of interacting binary spin variables. Most Ising machines leverage second-order interactions, although important classes of optimization problems, such as satisfiability problems, map more seamlessly to Ising networks with higher-order interactions. Here, we demonstrate that higher-order Ising machines can solve satisfiability problems more resource-efficiently, in terms of the number of spin variables and their connections, than traditional second-order Ising machines. Further, on a benchmark dataset of Boolean k-satisfiability problems, our results show that higher-order Ising machines implemented with coupled oscillators rapidly find solutions that are better than those found by second-order Ising machines, thus improving the current state of the art for Ising machines.
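The mapping from satisfiability to higher-order Ising energies can be illustrated on a toy instance: a 3-SAT clause is violated only when all three literals are false, and expanding that condition in spin variables yields third-order interactions. The clauses below are hypothetical, and brute-force search stands in for oscillator dynamics.

```python
import itertools
import numpy as np

# A literal (i, s) is variable i (spin +1 = true) with sign s (+1 plain,
# -1 negated); this is a small hypothetical 3-SAT instance over 4 variables.
clauses = [[(0, +1), (1, -1), (2, +1)],
           [(1, +1), (2, -1), (3, +1)],
           [(0, -1), (2, +1), (3, -1)]]

def clause_energy(spins, clause):
    # A clause is violated only when every literal is false; this product of
    # three terms expands into third-order spin interactions, i.e. a
    # higher-order Ising energy rather than the usual pairwise one.
    return np.prod([(1 - s * spins[i]) / 2 for i, s in clause])

def energy(spins):
    return sum(clause_energy(spins, c) for c in clauses)

# Ground states of the higher-order energy are exactly the satisfying assignments.
best = min(itertools.product([-1, +1], repeat=4), key=energy)
assert energy(best) == 0
```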

4.
bioRxiv ; 2023 Aug 12.
Article in English | MEDLINE | ID: mdl-37609295

ABSTRACT

By influencing the type and quality of information that relay cells transmit, local interneurons in thalamus have a powerful impact on cortex. To define the sensory features that these inhibitory neurons encode, we mapped receptive fields of optogenetically identified cells in the murine dorsolateral geniculate nucleus. Although few in number, local interneurons had diverse types of receptive fields, like their counterpart relay cells. This result differs markedly from visual cortex, where inhibitory cells are typically less selective than excitatory cells. To explore how thalamic interneurons might converge on relay cells, we took a computational approach. Using an evolutionary algorithm to search through a library of interneuron models generated from our results, we show that aggregated output from different groups of local interneurons can simulate the inhibitory component of the relay cell's receptive field. Thus, our work provides proof-of-concept that groups of diverse interneurons can supply feature-specific inhibition to relay cells.
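The search strategy can be sketched generically: evolve a weighting over a library of candidate response kernels so that their aggregate matches a target inhibitory field. The library, target, fitness function, and hyperparameters below are toy stand-ins, not the paper's fitted interneuron models.

```python
import numpy as np

rng = np.random.default_rng(11)

library = rng.normal(size=(50, 100))        # 50 candidate kernels, 100 samples each
target = library[[3, 17, 41]].sum(axis=0)   # target built from three library members

def fitness(w):
    # Negative squared error between the weighted aggregate and the target.
    return -np.sum((w @ library - target) ** 2)

pop = rng.random((40, 50))                  # initial population of weightings
init_best = max(fitness(w) for w in pop)
for _ in range(200):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]            # elitism: keep the 10 fittest
    children = parents[rng.integers(0, 10, 30)] \
        + 0.05 * rng.normal(size=(30, 50))             # mutated offspring
    pop = np.vstack([parents, children])

best_fit = max(fitness(w) for w in pop)
assert best_fit > init_best   # evolution improves the aggregate fit
```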

5.
Neural Comput ; 35(7): 1159-1186, 2023 Jun 12.
Article in English | MEDLINE | ID: mdl-37187162

ABSTRACT

We investigate the task of retrieving information from compositional distributed representations formed by hyperdimensional computing/vector symbolic architectures and present novel techniques that achieve new information rate bounds. First, we provide an overview of the decoding techniques that can be used to approach the retrieval task. The techniques are categorized into four groups. We then evaluate the considered techniques in several settings that involve, for example, inclusion of external noise and storage elements with reduced precision. In particular, we find that the decoding techniques from the sparse coding and compressed sensing literature (rarely used for hyperdimensional computing/vector symbolic architectures) are also well suited for decoding information from the compositional distributed representations. Combining these decoding techniques with interference cancellation ideas from communications improves previously reported bounds (Hersche et al., 2021) of the information rate of the distributed representations from 1.20 to 1.40 bits per dimension for smaller codebooks and from 0.60 to 1.26 bits per dimension for larger codebooks.
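A minimal sketch of matched-filter decoding combined with successive interference cancellation, the idea borrowed from communications: decode the strongest codeword, subtract it from the trace, and repeat. Bipolar random codevectors and the sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
D, M, K = 1024, 32, 4   # dimension, codebook size, items bundled (hypothetical)

codebook = rng.choice([-1.0, 1.0], size=(M, D))
items = rng.choice(M, size=K, replace=False)
s = codebook[items].sum(axis=0)           # compositional superposition

def decode_ic(trace, codebook, K):
    # Matched-filter decoding with interference cancellation: peel off the
    # best-matching codeword and subtract it from the residual.
    residual, found = trace.copy(), []
    for _ in range(K):
        scores = codebook @ residual
        scores[found] = -np.inf           # don't pick the same item twice
        best = int(np.argmax(scores))
        found.append(best)
        residual = residual - codebook[best]
    return sorted(found)

assert decode_ic(s, codebook, K) == sorted(items.tolist())
```

Subtracting recovered items removes their crosstalk from the remaining decisions, which is what lifts the achievable bits per dimension over one-shot matched filtering.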

6.
Article in English | MEDLINE | ID: mdl-37022402

ABSTRACT

Multilayer neural networks set the current state of the art for many technical classification problems. But, these networks are still, essentially, black boxes in terms of analyzing them and predicting their performance. Here, we develop a statistical theory for the one-layer perceptron and show that it can predict performances of a surprisingly large variety of neural networks with different architectures. A general theory of classification with perceptrons is developed by generalizing an existing theory for analyzing reservoir computing models and connectionist models for symbolic reasoning known as vector symbolic architectures. Our statistical theory offers three formulas leveraging the signal statistics with increasing detail. The formulas are analytically intractable, but can be evaluated numerically. The description level that captures maximum details requires stochastic sampling methods. Depending on the network model, the simpler formulas already yield high prediction accuracy. The quality of the theory predictions is assessed in three experimental settings: a memorization task for echo state networks (ESNs) from the reservoir computing literature, a collection of classification datasets for shallow randomly connected networks, and the ImageNet dataset for deep convolutional neural networks. We find that the second description level of the perceptron theory can predict the performance of types of ESNs that could not be described previously. Furthermore, the theory can predict the performance of deep multilayer neural networks when applied to their output layer. While other methods for predicting neural network performance commonly require training an estimator model, the proposed theory requires only the first two moments of the distribution of the postsynaptic sums in the output neurons. Moreover, the perceptron theory compares favorably to other methods that do not rely on training an estimator model.
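The flavor of predicting accuracy from only the first two moments of the postsynaptic sums can be shown with a Gaussian approximation. The two-class setup below is an illustrative assumption, not one of the paper's three formulas.

```python
import numpy as np
from math import erf, sqrt

# Treat the postsynaptic sums of the correct and the competing output unit as
# Gaussians and predict accuracy from their means and variances alone
# (hypothetical two-class version of the moment-based idea).
def predicted_accuracy(mu_correct, mu_wrong, var_correct, var_wrong):
    d = (mu_correct - mu_wrong) / sqrt(var_correct + var_wrong)
    return 0.5 * (1 + erf(d / sqrt(2)))   # P(correct unit wins)

# Monte-Carlo check against simulated postsynaptic sums.
rng = np.random.default_rng(3)
a = rng.normal(2.0, 1.0, 100_000)   # sums of the correct output unit
b = rng.normal(0.0, 1.0, 100_000)   # sums of the competing unit
empirical = float(np.mean(a > b))
theory = predicted_accuracy(2.0, 0.0, 1.0, 1.0)
```

No estimator model is trained; the prediction uses only the two moments, which is the property the abstract emphasizes.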

7.
IEEE Trans Neural Netw Learn Syst ; 34(5): 2191-2204, 2023 05.
Article in English | MEDLINE | ID: mdl-34478381

ABSTRACT

Variable binding is a cornerstone of symbolic reasoning and cognition. But how binding can be implemented in connectionist models has puzzled neuroscientists, cognitive psychologists, and neural network researchers for many decades. One type of connectionist model that naturally includes a binding operation is vector symbolic architectures (VSAs). In contrast to other proposals for variable binding, the binding operation in VSAs is dimensionality-preserving, which enables representing complex hierarchical data structures, such as trees, while avoiding a combinatoric expansion of dimensionality. Classical VSAs encode symbols by dense randomized vectors, in which information is distributed throughout the entire neuron population. By contrast, in the brain, features are encoded more locally, by the activity of single neurons or small groups of neurons, often forming sparse vectors of neural activation. Following Laiho et al. (2015), we explore symbolic reasoning with a special case of sparse distributed representations. Using techniques from compressed sensing, we first show that variable binding in classical VSAs is mathematically equivalent to tensor product binding between sparse feature vectors, another well-known binding operation which increases dimensionality. This theoretical result motivates us to study two dimensionality-preserving binding methods that reduce the tensor matrix into a single sparse vector. One binding method for general sparse vectors uses random projections; the other, block-local circular convolution, is defined for sparse vectors with block structure, sparse block-codes. Our experiments reveal that block-local circular convolution binding has ideal properties, whereas random-projection-based binding also works, but is lossy. We demonstrate in example applications that a VSA with block-local circular convolution and sparse block-codes reaches performance similar to that of classical VSAs. Finally, we discuss our results in the context of neuroscience and neural networks.
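Block-local circular convolution can be sketched directly with FFTs: for one-hot blocks, binding shifts the active index of one block by that of the other, so it preserves dimensionality and sparsity and is exactly invertible. Block counts and lengths below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
B, L = 8, 16   # number of blocks and block length (hypothetical sizes)

def random_block_code():
    # Sparse block-code: exactly one active unit per block.
    v = np.zeros((B, L))
    v[np.arange(B), rng.integers(0, L, B)] = 1.0
    return v

def bind(x, y):
    # Block-local circular convolution: for one-hot blocks this shifts the
    # active index of x by the active index of y (mod L) in every block.
    return np.stack([np.real(np.fft.ifft(np.fft.fft(xb) * np.fft.fft(yb)))
                     for xb, yb in zip(x, y)])

def unbind(z, y):
    # Circular correlation (conjugate in the Fourier domain) inverts binding.
    return np.stack([np.real(np.fft.ifft(np.fft.fft(zb) * np.conj(np.fft.fft(yb))))
                     for zb, yb in zip(z, y)])

x, y = random_block_code(), random_block_code()
z = bind(x, y)
assert np.allclose(unbind(z, y), x, atol=1e-8)  # dimensionality-preserving, invertible
```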


Asunto(s)
Cognición , Redes Neurales de la Computación , Encéfalo , Solución de Problemas , Neuronas/fisiología
8.
bioRxiv ; 2023 Dec 23.
Article in English | MEDLINE | ID: mdl-38187593

ABSTRACT

Local field potentials (LFPs) reflect the collective dynamics of neural populations, yet their exact relationship to neural codes remains unknown [1]. One notable exception is the theta rhythm of the rodent hippocampus, which seems to provide a reference clock to decode the animal's position from spatiotemporal patterns of neuronal spiking [2] or LFPs [3]. But when the animal stops, theta becomes irregular [4], potentially indicating the breakdown of temporal coding by neural populations. Here we show that no such breakdown occurs, by introducing an artificial neural network that can recover position-tuned rhythmic patterns (pThetas) without relying on the more prominent theta rhythm as a reference clock. pTheta and theta preferentially correlate with place cell and interneuron spiking, respectively. When rats forage in an open field, pTheta is jointly tuned to position and head orientation, a property not seen in individual place cells but expected to emerge from place cell sequences [5]. Our work demonstrates that weak and intermittent oscillations, as seen in many brain regions and species, can carry behavioral information commensurate with population spike codes.

9.
Proc Natl Acad Sci U S A ; 119(7)2022 02 15.
Article in English | MEDLINE | ID: mdl-35145021

ABSTRACT

Mounting evidence suggests that during conscious states, the electrodynamics of the cortex are poised near a critical point or phase transition and that this near-critical behavior supports the vast flow of information through cortical networks during conscious states. Here, we empirically identify a mathematically specific critical point near which waking cortical oscillatory dynamics operate, which is known as the edge-of-chaos critical point, or the boundary between stability and chaos. We do so by applying the recently developed modified 0-1 chaos test to electrocorticography (ECoG) and magnetoencephalography (MEG) recordings from the cortices of humans and macaques across normal waking, generalized seizure, anesthesia, and psychedelic states. Our evidence suggests that cortical information processing is disrupted during unconscious states because of a transition of low-frequency cortical electric oscillations away from this critical point; conversely, we show that psychedelics may increase the information richness of cortical activity by tuning low-frequency cortical oscillations closer to this critical point. Finally, we analyze clinical electroencephalography (EEG) recordings from patients with disorders of consciousness (DOC) and show that assessing the proximity of slow cortical oscillatory electrodynamics to the edge-of-chaos critical point may be useful as an index of consciousness in the clinical setting.
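For orientation, the standard (unmodified) 0-1 test can be sketched as follows; the paper applies a modified variant to ECoG/MEG, and the signals below are toy examples, not neural data.

```python
import numpy as np

def zero_one_test(x, c=1.7):
    # Gottwald-Melbourne 0-1 test sketch: drive a planar rotation with the
    # series and measure how its mean-square displacement grows with lag.
    # K near 1 indicates chaos; K near 0 indicates regular dynamics.
    n = np.arange(1, len(x) + 1)
    p = np.cumsum(x * np.cos(c * n))
    q = np.cumsum(x * np.sin(c * n))
    lags = np.arange(1, len(x) // 10)
    msd = np.array([np.mean((p[j:] - p[:-j]) ** 2 + (q[j:] - q[:-j]) ** 2)
                    for j in lags])
    return float(np.corrcoef(lags, msd)[0, 1])   # growth-rate estimate K

t = np.arange(5000)
regular = np.cos(0.3 * t)                        # regular (non-chaotic) signal

x = np.empty(5000)
x[0] = 0.4
for i in range(1, 5000):
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])     # fully chaotic logistic map

K_regular, K_chaotic = zero_one_test(regular), zero_one_test(x)
```

In practice the test is run over many values of the rotation angle c and aggregated with a median to avoid resonances; a single c is used here for brevity.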


Subject(s)
Cerebral Cortex/physiology, Consciousness/physiology, Electrophysiological Phenomena, Animals, Brain Mapping, Humans
10.
Neuroinformatics ; 20(2): 507-512, 2022 04.
Article in English | MEDLINE | ID: mdl-35061216

ABSTRACT

In this perspective article, we consider the critical issue of data and other research object standardisation and, specifically, how international collaboration, and organizations such as the International Neuroinformatics Coordinating Facility (INCF) can encourage that emerging neuroscience data be Findable, Accessible, Interoperable, and Reusable (FAIR). As neuroscientists engaged in the sharing and integration of multi-modal and multiscale data, we see the current insufficiency of standards as a major impediment in the Interoperability and Reusability of research results. We call for increased international collaborative standardisation of neuroscience data to foster integration and efficient reuse of research objects.


Subject(s)
Data Collection, Neurosciences
11.
Proc IEEE Inst Electr Electron Eng ; 110(10): 1538-1571, 2022 Oct.
Article in English | MEDLINE | ID: mdl-37868615

ABSTRACT

This article reviews recent progress in the development of the computing framework Vector Symbolic Architectures (also known as Hyperdimensional Computing). This framework is well suited for implementation in stochastic, emerging hardware and it naturally expresses the types of cognitive operations required for Artificial Intelligence (AI). We demonstrate in this article that the field-like algebraic structure of Vector Symbolic Architectures offers simple but powerful operations on high-dimensional vectors that can support all data structures and manipulations relevant to modern computing. In addition, we illustrate the distinguishing feature of Vector Symbolic Architectures, "computing in superposition," which sets it apart from conventional computing. It also opens the door to efficient solutions to the difficult combinatorial search problems inherent in AI applications. We sketch ways of demonstrating that Vector Symbolic Architectures are computationally universal. We see them acting as a framework for computing with distributed representations that can play the role of an abstraction layer for emerging computing hardware. This article serves as a reference for computer architects by illustrating the philosophy behind Vector Symbolic Architectures, techniques of distributed computing with them, and their relevance to emerging computing hardware, such as neuromorphic computing.
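A minimal sketch of a VSA data structure: a two-field record held in a single vector by binding (element-wise multiplication) and superposition (addition). The field names, values, and dimension are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
D = 2048  # hypothetical dimension

def rand_vec():
    return rng.choice([-1.0, 1.0], size=D)

# A record as a single vector: bind each field's key to its value, then
# superpose the bound pairs ("computing in superposition").
keys = {name: rand_vec() for name in ("colour", "shape")}
vals = {name: rand_vec() for name in ("red", "square")}
record = keys["colour"] * vals["red"] + keys["shape"] * vals["square"]

def query(record, key, candidates):
    # Unbinding with the key (self-inverse for +/-1 vectors), then a
    # clean-up lookup against the candidate value codevectors.
    probe = record * key
    return max(candidates, key=lambda n: probe @ vals[n])

assert query(record, keys["colour"], ["red", "square"]) == "red"
```

The other bound pair survives unbinding only as pseudo-random crosstalk, which the clean-up lookup rejects; this is what makes holding several variables in one vector workable.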

12.
IEEE Trans Neural Netw Learn Syst ; 33(6): 2701-2713, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34699370

ABSTRACT

Various nonclassical approaches of distributed information processing, such as neural networks, reservoir computing (RC), vector symbolic architectures (VSAs), and others, employ the principle of collective-state computing. In this type of computing, the variables relevant in computation are superimposed into a single high-dimensional state vector, the collective state. The variable encoding uses a fixed set of random patterns, which has to be stored and kept available during the computation. In this article, we show that an elementary cellular automaton with rule 90 (CA90) enables the space-time tradeoff for collective-state computing models that use random dense binary representations, i.e., memory requirements can be traded off with computation running CA90. We investigate the randomization behavior of CA90, in particular, the relation between the length of the randomization period and the size of the grid, and how CA90 preserves similarity in the presence of the initialization noise. Based on these analyses, we discuss how to optimize a collective-state computing model, in which CA90 expands representations on the fly from short seed patterns-rather than storing the full set of random patterns. The CA90 expansion is applied and tested in concrete scenarios using RC and VSAs. Our experimental results show that collective-state computing with CA90 expansion performs similarly compared to traditional collective-state models, in which random patterns are generated initially by a pseudorandom number generator and then stored in a large memory.
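Rule 90 expansion of a short seed is one XOR per step, so a long pseudorandom pattern can be regenerated on the fly instead of being stored. The grid size and expansion factor below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

def ca90_step(state):
    # Rule 90 on a circular grid: each cell becomes the XOR of its neighbours.
    return np.roll(state, 1) ^ np.roll(state, -1)

def expand(seed, steps):
    # Expand a short seed into a long pattern by concatenating successive
    # CA90 states, trading computation for memory.
    states, s = [seed], seed
    for _ in range(steps - 1):
        s = ca90_step(s)
        states.append(s)
    return np.concatenate(states)

seed = rng.integers(0, 2, 64, dtype=np.uint8)
v = expand(seed, 8)                       # 64-bit seed -> 512-bit representation
assert v.shape == (512,)

# Because rule 90 is linear over GF(2), a small seed perturbation stays small:
noisy = seed.copy()
noisy[:4] ^= 1
overlap = float(np.mean(expand(noisy, 8) == v))
```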

13.
Entropy (Basel) ; 23(3)2021 Feb 25.
Article in English | MEDLINE | ID: mdl-33668743

ABSTRACT

Markov Chain Monte Carlo (MCMC) methods sample from unnormalized probability distributions and offer guarantees of exact sampling. However, in the continuous case, unfavorable geometry of the target distribution can greatly limit the efficiency of MCMC methods. Augmenting samplers with neural networks can potentially improve their efficiency. Previous neural network-based samplers were trained with objectives that either did not explicitly encourage exploration, or contained a term that encouraged exploration but only for well structured distributions. Here we propose to maximize proposal entropy for adapting the proposal to distributions of any shape. To optimize proposal entropy directly, we devised a neural network MCMC sampler that has a flexible and tractable proposal distribution. Specifically, our network architecture utilizes the gradient of the target distribution for generating proposals. Our model achieved significantly higher efficiency than previous neural network MCMC techniques in a variety of sampling tasks, sometimes by more than an order of magnitude. Further, the sampler was demonstrated through the training of a convergent energy-based model of natural images. The adaptive sampler achieved unbiased sampling with significantly higher proposal entropy than a Langevin dynamics sampler. The trained sampler also achieved better sample quality.
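For contrast with the learned sampler, the classic gradient-based baseline mentioned above (Langevin dynamics with a Metropolis correction, i.e. MALA) can be sketched on a toy 1-D target. The target and step size are illustrative assumptions; this is not the paper's neural sampler.

```python
import numpy as np

rng = np.random.default_rng(10)

# Hypothetical unnormalized target: log p(x) = -x^4 / 4.
def logp(x):
    return -x ** 4 / 4

def grad_logp(x):
    return -x ** 3

def mala_step(x, eps=0.5):
    # Proposal drifts along the gradient of the target, like the paper's
    # network-based proposals, but with a fixed (non-learned) form.
    mu = x + 0.5 * eps ** 2 * grad_logp(x)
    prop = mu + eps * rng.normal()
    mu_back = prop + 0.5 * eps ** 2 * grad_logp(prop)
    # Metropolis-Hastings correction keeps sampling exact.
    log_a = (logp(prop) - logp(x)
             - (x - mu_back) ** 2 / (2 * eps ** 2)
             + (prop - mu) ** 2 / (2 * eps ** 2))
    return prop if np.log(rng.random()) < log_a else x

x, chain = 0.0, []
for _ in range(20_000):
    x = mala_step(x)
    chain.append(x)
samples = np.array(chain[2_000:])   # discard burn-in
```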

14.
Neural Comput ; 32(12): 2332-2388, 2020 12.
Article in English | MEDLINE | ID: mdl-33080160

ABSTRACT

We develop theoretical foundations of resonator networks, a new type of recurrent neural network introduced in Frady, Kent, Olshausen, and Sommer (2020), a companion article in this issue, to solve a high-dimensional vector factorization problem arising in Vector Symbolic Architectures. Given a composite vector formed by the Hadamard product between a discrete set of high-dimensional vectors, a resonator network can efficiently decompose the composite into these factors. We compare the performance of resonator networks against optimization-based methods, including Alternating Least Squares and several gradient-based algorithms, showing that resonator networks are superior in several important ways. This advantage is achieved by leveraging a combination of nonlinear dynamics and searching in superposition, by which estimates of the correct solution are formed from a weighted superposition of all possible solutions. While the alternative methods also search in superposition, the dynamics of resonator networks allow them to strike a more effective balance between exploring the solution space and exploiting local information to drive the network toward probable solutions. Resonator networks are not guaranteed to converge, but within a particular regime they almost always do. In exchange for relaxing the guarantee of global convergence, resonator networks are dramatically more effective at finding factorizations than all alternative approaches considered.
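A minimal resonator-network sketch for a three-factor Hadamard product of bipolar codevectors: each factor estimate starts as a superposition of its whole codebook and iteratively "explains away" the others. Sizes, the hard-sign nonlinearity, and the iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
D, M = 2048, 8   # vector dimension and per-factor codebook size (hypothetical)

books = [rng.choice([-1.0, 1.0], size=(M, D)) for _ in range(3)]
true = [int(rng.integers(0, M)) for _ in range(3)]
s = books[0][true[0]] * books[1][true[1]] * books[2][true[2]]  # composite vector

hard = lambda v: np.where(v >= 0, 1.0, -1.0)

def cleanup(v, book):
    # Project onto the codebook and re-expand: the estimate remains a
    # weighted superposition of all candidate codevectors for that factor.
    return book.T @ (book @ v)

# Initialize each estimate with the superposition of its whole codebook.
est = [hard(b.sum(axis=0)) for b in books]
for _ in range(30):
    est[0] = hard(cleanup(s * est[1] * est[2], books[0]))
    est[1] = hard(cleanup(s * est[0] * est[2], books[1]))
    est[2] = hard(cleanup(s * est[0] * est[1], books[2]))

found = [int(np.argmax(b @ e)) for b, e in zip(books, est)]
assert found == true   # the network settles on the correct factorization
```

The search space here (8^3 combinations) is tiny compared to the operational capacity the paper analyzes; it is only meant to show the iteration structure.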


Subject(s)
Brain/physiology, Cognition/physiology, Neural Networks (Computer), Animals, Humans
15.
Neural Comput ; 32(12): 2311-2331, 2020 12.
Article in English | MEDLINE | ID: mdl-33080162

ABSTRACT

The ability to encode and manipulate data structures with distributed neural representations could qualitatively enhance the capabilities of traditional neural networks by supporting rule-based symbolic reasoning, a central property of cognition. Here we show how this may be accomplished within the framework of Vector Symbolic Architectures (VSAs) (Plate, 1991; Gayler, 1998; Kanerva, 1996), whereby data structures are encoded by combining high-dimensional vectors with operations that together form an algebra on the space of distributed representations. In particular, we propose an efficient solution to a hard combinatorial search problem that arises when decoding elements of a VSA data structure: the factorization of products of multiple codevectors. Our proposed algorithm, called a resonator network, is a new type of recurrent neural network that interleaves VSA multiplication operations and pattern completion. We show in two examples-parsing of a tree-like data structure and parsing of a visual scene-how the factorization problem arises and how the resonator network can solve it. More broadly, resonator networks open the possibility of applying VSAs to myriad artificial intelligence problems in real-world domains. The companion article in this issue (Kent, Frady, Sommer, & Olshausen, 2020) presents a rigorous analysis and evaluation of the performance of resonator networks, showing it outperforms alternative approaches.


Subject(s)
Brain/physiology, Cognition/physiology, Neural Networks (Computer), Animals, Humans
16.
Front Neuroinform ; 14: 27, 2020.
Article in English | MEDLINE | ID: mdl-33041776

ABSTRACT

The Neurodata Without Borders (NWB) format is a current technology for storing neurophysiology data along with the associated metadata. Data stored in the format are organized into separate HDF5 files, each file usually storing the data associated with a single recording session. While the NWB format provides a structured method for storing data, until now there have been no tools that enable searching a collection of NWB files to find data of interest for a particular purpose. We describe here three tools for searching NWB files. The tools have different features, making each of them most useful for a particular task. The first tool, called the NWB Query Engine, is written in Java. It allows searching the complete content of NWB files. It was designed for the first version of NWB (NWB 1) and supports most (but not all) features of the most recent version (NWB 2). For some searches, it is the fastest tool. The second tool, called "search_nwb", is written in Python and also allows searching the complete contents of NWB files. It works with both NWB 1 and NWB 2, as does the third tool. The third tool, called "nwbindexer", enables searching a collection of NWB files using a two-step process. In the first step, a utility is run which creates an SQLite database containing the metadata in a collection of NWB files. This database is then searched in the second step, using another utility. Once the index is built, this two-step process allows faster searches than the other tools, but the searches it supports are less complete. All three tools use a simple query language that was developed for this project. Software integrating the three tools into a web interface is provided, which enables searching NWB files by submitting a web form.
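The two-step indexing idea behind the third tool can be sketched with the standard-library sqlite3 module: first build an index of metadata harvested from the files, then answer queries against the index instead of reopening each HDF5 file. The schema, rows, and query below are hypothetical stand-ins, not nwbindexer's actual format or query language.

```python
import sqlite3

# Step 1: build an in-memory index of (file, path, key, value) metadata rows
# (in the real tool these would be harvested from a collection of NWB files).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE meta (file TEXT, path TEXT, key TEXT, value TEXT)")
rows = [
    ("sess1.nwb", "/general", "species", "Mus musculus"),
    ("sess1.nwb", "/general", "lab", "Example Lab"),
    ("sess2.nwb", "/general", "species", "Macaca mulatta"),
]
con.executemany("INSERT INTO meta VALUES (?, ?, ?, ?)", rows)

# Step 2: queries hit the index, not the original HDF5 files.
def search(key, value):
    cur = con.execute(
        "SELECT DISTINCT file FROM meta WHERE key = ? AND value = ?",
        (key, value))
    return [r[0] for r in cur.fetchall()]

assert search("species", "Mus musculus") == ["sess1.nwb"]
```

The trade-off matches the abstract: queries against the prebuilt index are fast, but can only see whatever metadata the indexing step extracted.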

17.
J Neurosci ; 40(26): 5019-5032, 2020 06 24.
Article in English | MEDLINE | ID: mdl-32350041

ABSTRACT

Even though the lateral geniculate nucleus of the thalamus (LGN) is associated with form vision, that is not its sole role. Only the dorsal portion of LGN (dLGN) projects to V1. The ventral division (vLGN) connects subcortically, sending inhibitory projections to sensorimotor structures, including the superior colliculus (SC) and regions associated with certain behavioral states, such as fear (Monavarfeshani et al., 2017; Salay et al., 2018). We combined computational, physiological, and anatomical approaches to explore visual processing in vLGN of mice of both sexes, making comparisons to dLGN and SC for perspective. Compatible with past, qualitative descriptions, the receptive fields we quantified in vLGN were larger than those in dLGN, and most cells preferred bright versus dark stimuli (Harrington, 1997). Dendritic arbors spanned the length and/or width of vLGN and were often asymmetric, positioned to collect input from large but discrete territories. By contrast, arbors in dLGN are compact (Krahe et al., 2011). Consistent with spatially coarse receptive fields in vLGN, visually evoked changes in spike timing were less precise than for dLGN and SC. Notably, however, the membrane currents and spikes of some cells in vLGN displayed gamma oscillations whose phase and strength varied with stimulus pattern, as for SC (Stitt et al., 2013). Thus, vLGN can engage its targets using oscillation-based and conventional rate codes. Finally, dark shadows activate SC and drive escape responses, whereas vLGN prefers bright stimuli. Thus, one function of long-range inhibitory projections from vLGN might be to enable movement by releasing motor targets, such as SC, from suppression.SIGNIFICANCE STATEMENT Only the dorsal lateral geniculate nucleus (dLGN) connects to cortex to serve form vision; the ventral division (vLGN) projects subcortically to sensorimotor nuclei, including the superior colliculus (SC), via long-range inhibitory connections. 
Here, we asked how vLGN processes visual information, making comparisons with dLGN and SC for perspective. Cells in vLGN versus dLGN had wider dendritic arbors, larger receptive fields, and fired with lower temporal precision, consistent with a modulatory role. Like SC, but not dLGN, visual stimuli entrained oscillations in vLGN, perhaps reflecting shared strategies for visuomotor processing. Finally, most neurons in vLGN preferred bright shapes, whereas dark stimuli activate SC and drive escape behaviors, suggesting that vLGN enables rapid movement by releasing target motor structures from inhibition.


Subject(s)
Geniculate Bodies/physiology, Visual Perception/physiology, Animals, Evoked Potentials, Visual/physiology, Female, Male, Mice, Mice, Inbred C57BL, Visual Pathways/physiology
18.
Commun Biol ; 3: 11, 2020.
Article in English | MEDLINE | ID: mdl-31909203

ABSTRACT

Chaos, or exponential sensitivity to small perturbations, appears everywhere in nature. Moreover, chaos is predicted to play diverse functional roles in living systems. A method for detecting chaos from empirical measurements should therefore be a key component of the biologist's toolkit. But, classic chaos-detection tools are highly sensitive to measurement noise and break down for common edge cases, making it difficult to detect chaos in domains, like biology, where measurements are noisy. However, newer tools promise to overcome these limitations. Here, we combine several such tools into an automated processing pipeline, and show that our pipeline can detect the presence (or absence) of chaos in noisy recordings, even for difficult edge cases. As a first-pass application of our pipeline, we show that heart rate variability is not chaotic as some have proposed, and instead reflects a stochastic process in both health and disease. Our tool is easy-to-use and freely available.


Subject(s)
Heart Rate Determination/instrumentation, Nonlinear Dynamics, Stochastic Processes, Humans
19.
Proc Natl Acad Sci U S A ; 116(36): 18050-18059, 2019 09 03.
Article in English | MEDLINE | ID: mdl-31431524

ABSTRACT

Information coding by precise timing of spikes can be faster and more energy efficient than traditional rate coding. However, spike-timing codes are often brittle, which has limited their use in theoretical neuroscience and computing applications. Here, we propose a type of attractor neural network in complex state space and show how it can be leveraged to construct spiking neural networks with robust computational properties through a phase-to-timing mapping. Building on Hebbian neural associative memories, like Hopfield networks, we first propose threshold phasor associative memory (TPAM) networks. Complex phasor patterns whose components can assume continuous-valued phase angles and binary magnitudes can be stored and retrieved as stable fixed points in the network dynamics. TPAM achieves high memory capacity when storing sparse phasor patterns, and we derive the energy function that governs its fixed-point attractor dynamics. Second, we construct two spiking neural networks to approximate the complex algebraic computations in TPAM: a reductionist model with resonate-and-fire neurons and a biologically plausible network of integrate-and-fire neurons with synaptic delays and recurrently connected inhibitory interneurons. The fixed points of TPAM correspond to stable periodic states of precisely timed spiking activity that are robust to perturbation. The link established between rhythmic firing patterns and complex attractor dynamics has implications for the interpretation of spike patterns seen in neuroscience and can serve as a framework for computation in emerging neuromorphic devices.
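TPAM-style storage and retrieval can be sketched with complex Hebbian weights: sparse phasor patterns are stored by conjugate outer products, and the update lets the most strongly driven units fire with the phase of their postsynaptic sum. Sizes, the jitter level, and the exact thresholding details below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
N, P, K = 256, 5, 32   # neurons, stored patterns, active units per pattern (hypothetical)

# Sparse phasor patterns: K units with unit magnitude and random phase, rest silent.
patterns = np.zeros((P, N), dtype=complex)
for p in range(P):
    idx = rng.choice(N, K, replace=False)
    patterns[p, idx] = np.exp(2j * np.pi * rng.random(K))

# Hebbian (conjugate outer-product) storage; no self-connections.
W = sum(np.outer(xi, np.conj(xi)) for xi in patterns)
np.fill_diagonal(W, 0)

def update(state):
    z = W @ state
    # Threshold nonlinearity: the K most strongly driven units fire with the
    # phase of their postsynaptic sum; all other units stay silent.
    out = np.zeros_like(z)
    idx = np.argsort(-np.abs(z))[:K]
    out[idx] = z[idx] / np.abs(z[idx])
    return out

# Retrieve pattern 0 from a phase-jittered cue.
state = patterns[0] * np.exp(0.3j * rng.normal(size=N))
for _ in range(5):
    state = update(state)

overlap = float(np.abs(np.vdot(patterns[0], state)) / K)
assert overlap > 0.9   # the stored pattern is a (near-)fixed point
```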


Subject(s)
Action Potentials/physiology, Interneurons/physiology, Memory/physiology, Models, Neurological, Nerve Net/physiology, Neural Networks (Computer), Humans
20.
PLoS Comput Biol ; 15(2): e1006807, 2019 02.
Article in English | MEDLINE | ID: mdl-30730907

ABSTRACT

An outstanding problem in neuroscience is to understand how information is integrated across the many modules of the brain. While classic information-theoretic measures have transformed our understanding of feedforward information processing in the brain's sensory periphery, comparable measures for information flow in the massively recurrent networks of the rest of the brain have been lacking. To address this, recent work in information theory has produced a sound measure of network-wide "integrated information", which can be estimated from time-series data. But, a computational hurdle has stymied attempts to measure large-scale information integration in real brains. Specifically, the measurement of integrated information involves a combinatorial search for the informational "weakest link" of a network, a process whose computation time explodes super-exponentially with network size. Here, we show that spectral clustering, applied on the correlation matrix of time-series data, provides an approximate but robust solution to the search for the informational weakest link of large networks. This reduces the computation time for integrated information in large systems from longer than the lifespan of the universe to just minutes. We evaluate this solution in brain-like systems of coupled oscillators as well as in high-density electrocorticography data from two macaque monkeys, and show that the informational "weakest link" of the monkey cortex splits posterior sensory areas from anterior association areas. Finally, we use our solution to provide evidence in support of the long-standing hypothesis that information integration is maximized by networks with a high global efficiency, and that modular network structures promote the segregation of information.
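The spectral-clustering step can be sketched on a toy two-module system: build an affinity from the correlation matrix, then bipartition with the Fiedler vector of the graph Laplacian. The simulated data, affinity choice, and sizes are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy system: two modules of 5 channels each, sharing a common drive within a
# module, so the informational "weakest link" should split the modules apart.
T = 2000
drive = rng.normal(size=(2, T))
X = np.vstack([drive[0] + 0.3 * rng.normal(size=(5, T)),
               drive[1] + 0.3 * rng.normal(size=(5, T))])

A = np.abs(np.corrcoef(X))               # affinity from the correlation matrix
np.fill_diagonal(A, 0.0)
Lap = np.diag(A.sum(axis=1)) - A         # graph Laplacian
w, V = np.linalg.eigh(Lap)
fiedler = V[:, 1]                        # eigenvector of 2nd-smallest eigenvalue
partition = fiedler > 0                  # approximate minimum-information bipartition
```

Replacing the super-exponential search over bipartitions with one eigendecomposition is exactly the speedup the abstract describes, at the cost of the partition being approximate.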


Subject(s)
Cerebral Cortex/physiology, Information Theory, Models, Neurological, Nerve Net/physiology, Animals, Computational Biology, Macaca