ABSTRACT
The directionality of network information flow dictates how networks process information. A central component of information processing in both biological and artificial neural networks is their ability to perform synergistic integration, a type of computation. We established previously that synergistic integration varies directly with the strength of feedforward information flow. However, the relationships between both recurrent and feedback information flow and synergistic integration remain unknown. To address this, we analyzed the spiking activity of hundreds of neurons in organotypic cultures of mouse cortex. We asked how empirically observed synergistic integration (determined from partial information decomposition) varied with local functional network structure that was categorized into motifs with varying recurrent and feedback information flow. We found that synergistic integration was elevated in motifs with greater recurrent information flow beyond that expected from the local feedforward information flow. Feedback information flow was interrelated with feedforward information flow and was associated with decreased synergistic integration. Our results indicate that synergistic integration is distinctly influenced by the directionality of local information flow.
Subjects
Models, Neurological; Nerve Net/physiology; Neural Networks, Computer; Somatosensory Cortex/physiology; Action Potentials/physiology; Animals; Computational Biology; Feedback, Physiological; Mice; Neurons/physiology; Organ Culture Techniques; Synaptic Transmission/physiology
ABSTRACT
The varied cognitive abilities and rich adaptive behaviors enabled by the animal nervous system are often described in terms of information processing. This framing raises the issue of how biological neural circuits actually process information, and some of the most fundamental outstanding questions in neuroscience center on understanding the mechanisms of neural information processing. Classical information theory has long been understood to be a natural framework within which information processing can be understood, and recent advances in the field of multivariate information theory offer new insights into the structure of computation in complex systems. In this review, we provide an introduction to the conceptual and practical issues associated with using multivariate information theory to analyze information processing in neural circuits, as well as discussing recent empirical work in this vein. Specifically, we provide an accessible introduction to the partial information decomposition (PID) framework. PID reveals redundant, unique, and synergistic modes by which neurons integrate information from multiple sources. We focus particularly on the synergistic mode, which quantifies the "higher-order" information carried in the patterns of multiple inputs and is not reducible to input from any single source. Recent work in a variety of model systems has revealed that synergistic dynamics are ubiquitous in neural circuitry and show reliable structure-function relationships, emerging disproportionately in neuronal rich clubs, downstream of recurrent connectivity, and in the convergence of correlated activity. We draw on the existing literature on higher-order information dynamics in neuronal networks to illustrate the insights that have been gained by taking an information decomposition perspective on neural activity. 
Finally, we briefly discuss future promising directions for information decomposition approaches to neuroscience, such as work on behaving animals, multi-target generalizations of PID, and time-resolved local analyses.
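To make the decomposition concrete, here is a minimal Python sketch of a two-source PID using the Williams-Beer I_min redundancy measure applied to simple logic gates. This is an illustrative toy implementation under the I_min definition, not the estimation pipeline used in the studies reviewed; the example distributions (XOR, AND) are standard textbook cases.

```python
import math
from collections import defaultdict

def pid_two_sources(p_joint):
    """Williams-Beer I_min partial information decomposition for two
    sources X1, X2 and one target S.
    p_joint: dict mapping (x1, x2, s) -> probability."""
    p_s = defaultdict(float)
    p_x1s, p_x2s, p_x12s = defaultdict(float), defaultdict(float), defaultdict(float)
    p_x1, p_x2, p_x12 = defaultdict(float), defaultdict(float), defaultdict(float)
    for (x1, x2, s), p in p_joint.items():
        p_s[s] += p
        p_x1s[(x1, s)] += p; p_x2s[(x2, s)] += p; p_x12s[((x1, x2), s)] += p
        p_x1[x1] += p; p_x2[x2] += p; p_x12[(x1, x2)] += p

    def spec_info(s, p_xs, p_x):
        # Specific information I_spec(S=s; X) = sum_x p(x|s) log2(p(s|x)/p(s))
        total = 0.0
        for (x, s2), pxs in p_xs.items():
            if s2 == s and pxs > 0:
                total += (pxs / p_s[s]) * math.log2((pxs / p_x[x]) / p_s[s])
        return total

    def mi(p_xs, p_x):
        # I(S; X) = sum_s p(s) * I_spec(S=s; X)
        return sum(p_s[s] * spec_info(s, p_xs, p_x) for s in p_s)

    # Redundancy: expected minimum specific information over sources
    redundancy = sum(p_s[s] * min(spec_info(s, p_x1s, p_x1),
                                  spec_info(s, p_x2s, p_x2)) for s in p_s)
    i1, i2, i12 = mi(p_x1s, p_x1), mi(p_x2s, p_x2), mi(p_x12s, p_x12)
    unique1, unique2 = i1 - redundancy, i2 - redundancy
    synergy = i12 - redundancy - unique1 - unique2
    return {"redundancy": redundancy, "unique1": unique1,
            "unique2": unique2, "synergy": synergy}

# XOR target: neither input alone is informative, so all 1 bit is synergy
xor = {(x1, x2, x1 ^ x2): 0.25 for x1 in (0, 1) for x2 in (0, 1)}
print(pid_two_sources(xor))  # synergy ~ 1 bit, redundancy ~ 0
```

The AND gate, by contrast, splits its target information into roughly 0.311 bits of redundancy and 0.5 bits of synergy under I_min, which is why it is a common test case for comparing redundancy measures.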
ABSTRACT
Much evidence seems to suggest the cortex operates near a critical point, yet a single set of exponents defining its universality class has not been found. In fact, when critical exponents are estimated from data, they widely differ across species, individuals of the same species, and even over time, or depending on stimulus. Interestingly, these exponents still approximately hold to a dynamical scaling relation. Here we show that the theory of quasicriticality, an organizing principle for brain dynamics, can account for this paradoxical situation. As external stimuli drive the cortex, quasicriticality predicts a departure from criticality along a Widom line with exponents that decrease in absolute value, while still holding approximately to a dynamical scaling relation. We use simulations and experimental data to confirm these predictions and describe new ones that could be tested soon.
Subjects
Brain/physiology; Models, Neurological; Stochastic Processes
ABSTRACT
Detecting synaptic connections using large-scale extracellular spike recordings presents a statistical challenge. Although previous methods often treat the detection of each putative connection as a separate hypothesis test, here we develop a modeling approach that infers synaptic connections while incorporating circuit properties learned from the whole network. We use an extension of the generalized linear model framework to describe the cross-correlograms between pairs of neurons and separate correlograms into two parts: a slowly varying effect due to background fluctuations and a fast, transient effect due to the synapse. We then use the observations from all putative connections in the recording to estimate two network properties: the presynaptic neuron type (excitatory or inhibitory) and the relationship between synaptic latency and distance between neurons. Constraining the presynaptic neuron's type, synaptic latencies, and time constants improves synapse detection. In data from simulated networks, this model outperforms two previously developed synapse detection methods, especially on the weak connections. We also apply our model to in vitro multielectrode array recordings from the mouse somatosensory cortex. Here, our model automatically recovers plausible connections from hundreds of neurons, and the properties of the putative connections are largely consistent with previous research.
NEW & NOTEWORTHY: Detecting synaptic connections using large-scale extracellular spike recordings is a difficult statistical problem. Here, we develop an extension of a generalized linear model that explicitly separates fast synaptic effects and slow background fluctuations in cross-correlograms between pairs of neurons while incorporating circuit properties learned from the whole network.
This model outperforms two previously developed synapse detection methods in the simulated networks and recovers plausible connections from hundreds of neurons in in vitro multielectrode array data.
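The correlogram-splitting idea above rests on the ordinary spike-train cross-correlogram. As a hedged sketch (not the authors' GLM code), the raw correlogram can be computed as follows, where `pre` and `post` are hypothetical spike-time arrays in seconds; a putative synapse appears as a fast, transient peak at a short positive lag on top of slow background structure.

```python
import numpy as np

def cross_correlogram(pre, post, window=0.02, bin_size=0.001):
    """Histogram of spike-time lags (post - pre) within +/- `window` seconds,
    binned at `bin_size`. Returns (bin edges, counts per bin)."""
    edges = np.arange(-window, window + bin_size / 2, bin_size)
    counts = np.zeros(len(edges) - 1)
    for t in pre:
        lags = post - t
        lags = lags[(lags >= -window) & (lags < window)]
        counts += np.histogram(lags, bins=edges)[0]
    return edges, counts

# Toy example: the postsynaptic train echoes the presynaptic one at +2.5 ms,
# so the correlogram shows a single sharp peak at that lag.
pre = np.arange(0.0, 10.0, 0.1)
post = pre + 0.0025
edges, counts = cross_correlogram(pre, post)
```

The model described in the abstract then decomposes such counts into a smooth baseline plus a transient synaptic kernel; the sketch above only produces the raw histogram that serves as input to that kind of fit.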
Subjects
Action Potentials/physiology; Models, Theoretical; Nerve Net/physiology; Neurons/physiology; Somatosensory Cortex/physiology; Synapses/physiology; Synaptic Transmission/physiology; Animals; Mice; Neural Networks, Computer
ABSTRACT
Prenatal cannabis exposure (PCE) influences human brain development, but it is challenging to model PCE using animals and current cell culture techniques. Here, we developed a one-stop microfluidic platform to assemble and culture human cerebral organoids from human embryonic stem cells (hESC) to investigate the effect of PCE on early human brain development. By incorporating perfusable culture chambers, air-liquid interface, and one-stop protocol, this microfluidic platform can simplify the fabrication procedure and produce a large number of organoids (169 organoids per 3.5 cm × 3.5 cm device area) without fusion, as compared with conventional fabrication methods. These one-stop microfluidic assembled cerebral organoids not only recapitulate early human brain structure, biology, and electrophysiology but also have minimal size variation and hypoxia. Under on-chip exposure to the psychoactive cannabinoid, Δ-9-tetrahydrocannabinol (THC), cerebral organoids exhibited reduced neuronal maturation, downregulation of cannabinoid receptor type 1 (CB1) receptors, and impaired neurite outgrowth. Moreover, transient on-chip THC treatment also decreased spontaneous firing in these organoids. This one-stop microfluidic technique enables a simple, scalable, and repeatable organoid culture method that can be used not only for human brain organoids but also for many other human organoids including liver, kidney, retina, and tumor organoids. This technology could be widely used in modeling brain and other organ development, developmental disorders, developmental pharmacology and toxicology, and drug screening.
Subjects
Brain/drug effects; Cannabis/adverse effects; Lab-On-A-Chip Devices; Models, Biological; Organoids/drug effects; Brain/diagnostic imaging; Cells, Cultured; Electrodes; Embryonic Stem Cells/drug effects; Female; Humans; Hypoxia/diagnostic imaging; Organoids/diagnostic imaging; Pregnancy; Prenatal Exposure Delayed Effects/chemically induced
ABSTRACT
The performance of complex networks, like the brain, depends on how effectively their elements communicate. Despite the importance of communication, it is virtually unknown how information is transferred in local cortical networks, consisting of hundreds of closely spaced neurons. To address this, it is important to record simultaneously from hundreds of neurons at a spacing that matches typical axonal connection distances, and at a temporal resolution that matches synaptic delays. We used a 512-electrode array (60 µm spacing) to record spontaneous activity at 20 kHz from up to 500 neurons simultaneously in slice cultures of mouse somatosensory cortex for 1 h at a time. We applied a previously validated version of transfer entropy to quantify information transfer. Similar to in vivo reports, we found an approximately lognormal distribution of firing rates. Pairwise information transfer strengths also were nearly lognormally distributed, similar to reports of synaptic strengths. Some neurons transferred and received much more information than others, which is consistent with previous predictions. Neurons with the highest outgoing and incoming information transfer were more strongly connected to each other than chance, thus forming a "rich club." We found similar results in networks recorded in vivo from rodent cortex, suggesting the generality of these findings. A rich-club structure has been found previously in large-scale human brain networks and is thought to facilitate communication between cortical regions. The discovery of a small, but information-rich, subset of neurons within cortical regions suggests that this population will play a vital role in communication, learning, and memory.
Significance statement: Many studies have focused on communication networks between cortical brain regions. In contrast, very few studies have examined communication networks within a cortical region.
This is the first study to combine such a large number of neurons (several hundred at a time) with such high temporal resolution (so we can know the direction of communication between neurons) for mapping networks within cortex. We found that information was not transferred equally through all neurons. Instead, ~70% of the information passed through only 20% of the neurons. Network models suggest that this highly concentrated pattern of information transfer would be both efficient and robust to damage. Therefore, this work may help in understanding how the cortex processes information and responds to neurodegenerative diseases.
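Transfer entropy, the directed measure used in this study, quantifies how much a source's past improves prediction of a target's next state beyond the target's own past. A minimal lag-1 plug-in estimator for binary sequences is sketched below for intuition only; the study used a previously validated, delay-aware variant, and the toy signals here are hypothetical.

```python
import math
import random
from collections import Counter

def transfer_entropy(x, y):
    """Lag-1 plug-in transfer entropy TE(X -> Y) in bits: how much x_t
    tells us about y_{t+1} beyond what y_t already tells us."""
    n = len(x) - 1
    triples = Counter((y[t], x[t], y[t + 1]) for t in range(n))
    pairs_past = Counter((y[t], x[t]) for t in range(n))      # (y_t, x_t)
    pairs_self = Counter((y[t], y[t + 1]) for t in range(n))  # (y_t, y_{t+1})
    singles = Counter(y[t] for t in range(n))                 # y_t
    te = 0.0
    for (yt, xt, y1), c in triples.items():
        p_full = c / pairs_past[(yt, xt)]             # p(y_{t+1} | y_t, x_t)
        p_self = pairs_self[(yt, y1)] / singles[yt]   # p(y_{t+1} | y_t)
        te += (c / n) * math.log2(p_full / p_self)
    return te

# Y is a one-step delayed copy of a random X:
# TE(X -> Y) should approach 1 bit, while TE(Y -> X) should be near 0.
random.seed(0)
x = [random.randint(0, 1) for _ in range(10000)]
y = [0] + x[:-1]
te_fwd = transfer_entropy(x, y)
te_rev = transfer_entropy(y, x)
```

The asymmetry between the two directions is the point: unlike correlation, transfer entropy distinguishes sender from receiver, which is what makes the directed "rich club" analysis possible.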
Subjects
Nerve Net/cytology; Nerve Net/physiology; Neurons/physiology; Somatosensory Cortex/cytology; Somatosensory Cortex/physiology; Animals; Animals, Newborn; Mice; Mice, Inbred C57BL; Organ Culture Techniques
ABSTRACT
Recent work has shown that functional connectivity among cortical neurons is highly varied, with a small percentage of neurons having many more connections than others. Also, recent theoretical developments now make it possible to quantify how neurons modify information from the connections they receive. Therefore, it is now possible to investigate how information modification, or computation, depends on the number of connections a neuron receives (in-degree) or sends out (out-degree). To do this, we recorded the simultaneous spiking activity of hundreds of neurons in cortico-hippocampal slice cultures using a high-density 512-electrode array. This preparation and recording method combination produced large numbers of neurons recorded at temporal and spatial resolutions that are not currently available in any in vivo recording system. We utilized transfer entropy (a well-established method for detecting linear and nonlinear interactions in time series) and the partial information decomposition (a powerful, recently developed tool for dissecting multivariate information processing into distinct parts) to quantify computation between neurons where information flows converged. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to receive connections from high out-degree neurons. However, the in-degree of a neuron was not related to the amount of information it computed. To gain insight into these findings, we developed a simple feedforward network model. We found that a degree-modified Hebbian wiring rule best reproduced the pattern of computation and degree correlation results seen in the real data. Interestingly, this rule also maximized signal propagation in the presence of network-wide correlations, suggesting a mechanism by which cortex could deal with common random background input. 
These are the first results to show that the extent to which a neuron modifies incoming information streams depends on its topological location in the surrounding functional network.
Subjects
Cerebral Cortex/physiology; Models, Neurological; Neurons/physiology; Synaptic Transmission/physiology; Action Potentials; Animals; Cerebral Cortex/cytology; Computational Biology; Feedback, Physiological; Hippocampus/cytology; Hippocampus/physiology; Information Theory; Mice; Mice, Inbred C57BL; Multivariate Analysis; Nerve Net/cytology; Nerve Net/physiology
ABSTRACT
Although relationships between networks of different scales have been observed in macroscopic brain studies, relationships between structures of different scales in networks of neurons are unknown. To address this, we recorded from up to 500 neurons simultaneously from slice cultures of rodent somatosensory cortex. We then measured directed effective networks with transfer entropy, previously validated in simulated cortical networks. These effective networks enabled us to evaluate distinctive nonrandom structures of connectivity at 2 different scales. We have 4 main findings. First, at the scale of 3-6 neurons (clusters), we found that high numbers of connections occurred significantly more often than expected by chance. Second, the distribution of the number of connections per neuron (degree distribution) had a long tail, indicating that the network contained distinctively high-degree neurons, or hubs. Third, at the scale of tens to hundreds of neurons, we typically found 2-3 significantly large communities. Finally, we demonstrated that communities were relatively more robust than clusters against shuffling of connections. We conclude the microconnectome of the cortex has specific organization at different scales, as revealed by differences in robustness. We suggest that this information will help us to understand how the microconnectome is robust against damage.
Subjects
Connectome; Neurons/physiology; Somatosensory Cortex/anatomy & histology; Somatosensory Cortex/physiology; Animals; Mice; Models, Neurological; Nerve Net/anatomy & histology; Nerve Net/physiology; Organ Culture Techniques
ABSTRACT
Information theory has long been used to quantify interactions between two variables. With the rise of complex systems research, multivariate information measures have been increasingly used to investigate interactions between groups of three or more variables, often with an emphasis on so called synergistic and redundant interactions. While bivariate information measures are commonly agreed upon, the multivariate information measures in use today have been developed by many different groups, and differ in subtle, yet significant ways. Here, we will review these multivariate information measures with special emphasis paid to their relationship to synergy and redundancy, as well as examine the differences between these measures by applying them to several simple model systems. In addition to these systems, we will illustrate the usefulness of the information measures by analyzing neural spiking data from a dissociated culture through early stages of its development. Our aim is that this work will aid other researchers as they seek the best multivariate information measure for their specific research goals and system. Finally, we have made software available online which allows the user to calculate all of the information measures discussed within this paper.
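Two of the classic multivariate measures this kind of review compares, total correlation and co-information, can be computed directly from a small joint distribution. The sketch below is illustrative (standalone Python, not the software released with the work); note that sign conventions for the three-variable interaction term differ across the literature, and here negative co-information indicates a synergy-dominated interaction.

```python
import math
from collections import defaultdict

def entropy(p):
    """Shannon entropy in bits of a dict distribution."""
    return -sum(v * math.log2(v) for v in p.values() if v > 0)

def marginal(p_joint, idx):
    """Marginalize a joint dict over all variables except those in idx."""
    m = defaultdict(float)
    for state, v in p_joint.items():
        m[tuple(state[i] for i in idx)] += v
    return dict(m)

def total_correlation(p_joint, n):
    """TC = sum_i H(X_i) - H(X_1..X_n): total statistical dependence."""
    return sum(entropy(marginal(p_joint, [i])) for i in range(n)) - entropy(p_joint)

def co_information(p_joint):
    """Co-information I(X;Y) - I(X;Y|Z) for three variables: negative values
    signal synergy-dominated interactions, positive values redundancy."""
    H = lambda idx: entropy(marginal(p_joint, idx))
    i_xy = H([0]) + H([1]) - H([0, 1])
    i_xy_given_z = H([0, 2]) + H([1, 2]) - H([0, 1, 2]) - H([2])
    return i_xy - i_xy_given_z

# XOR is purely synergistic; three identical copies are purely redundant.
xor = {(x, y, x ^ y): 0.25 for x in (0, 1) for y in (0, 1)}
copy = {(x, x, x): 0.5 for x in (0, 1)}
print(co_information(xor), co_information(copy))  # -1.0 vs +1.0
```

A single signed number like co-information conflates redundancy and synergy when both are present, which is precisely the limitation that motivates the partial information decomposition discussed elsewhere in this collection.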
Subjects
Information Theory; Models, Neurological; Neurons/physiology; Action Potentials/physiology; Animals; Electronic Data Processing; Entropy; Humans; Probability
ABSTRACT
Motivated by the unexplored potential of in vitro neural systems for computing and by the corresponding need of versatile, scalable interfaces for multimodal interaction, an accurate, modular, fully customizable, and portable recording/stimulation solution that can be easily fabricated, robustly operated, and broadly disseminated is presented. This approach entails a reconfigurable platform that works across multiple industry standards and that enables a complete signal chain, from neural substrates sampled through micro-electrode arrays (MEAs) to data acquisition, downstream analysis, and cloud storage. Built-in modularity supports the seamless integration of electrical/optical stimulation and fluidic interfaces. Custom MEA fabrication leverages maskless photolithography, favoring the rapid prototyping of a variety of configurations, spatial topologies, and constitutive materials. Through a dedicated analysis and management software suite, the utility and robustness of this system are demonstrated across neural cultures and applications, including embryonic stem cell-derived and primary neurons, organotypic brain slices, 3D engineered tissue mimics, concurrent calcium imaging, and long-term recording. Overall, this technology, termed "mind in vitro" to underscore the computing inspiration, provides an end-to-end solution that can be widely deployed due to its affordable (>10× cost reduction) and open-source nature, catering to the expanding needs of both conventional and unconventional electrophysiology.
Subjects
Brain; Neurons; Electrodes; Brain/physiology; Neurons/physiology; Electric Stimulation; Electrophysiological Phenomena/physiology
ABSTRACT
The tasks of neural computation are remarkably diverse. To function optimally, neuronal networks have been hypothesized to operate near a nonequilibrium critical point. However, experimental evidence for critical dynamics has been inconclusive. Here, we show that the dynamics of cultured cortical networks are critical. We analyze neuronal network data collected at the individual neuron level using the framework of nonequilibrium phase transitions. Among the most striking predictions confirmed is that the mean temporal profiles of avalanches of widely varying durations are quantitatively described by a single universal scaling function. We also show that the data have three additional features predicted by critical phenomena: approximate power law distributions of avalanche sizes and durations, samples in subcritical and supercritical phases, and scaling laws between anomalous exponents.
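The avalanche analysis described above starts from two simple ingredients: segmenting binned population activity into avalanches, and estimating a power-law exponent for the resulting size distribution. The sketch below illustrates both steps under a continuous-size approximation with a Clauset-style maximum-likelihood estimator; it is a toy version for intuition, not the full scaling-collapse analysis, and the synthetic data are hypothetical.

```python
import math
import random

def avalanche_sizes(activity):
    """An avalanche is a run of consecutive time bins with nonzero population
    activity; its size is the total number of spikes in the run."""
    sizes, current = [], 0
    for a in activity:
        if a > 0:
            current += a
        elif current:
            sizes.append(current)
            current = 0
    if current:
        sizes.append(current)
    return sizes

def powerlaw_mle(samples, x_min=1.0):
    """Continuous power-law exponent MLE:
    alpha_hat = 1 + n / sum(ln(x_i / x_min)) over the tail x_i >= x_min."""
    tail = [x for x in samples if x >= x_min]
    return 1.0 + len(tail) / sum(math.log(x / x_min) for x in tail)

# Sanity check: recover a known exponent from synthetic power-law samples
# drawn by inverse-transform sampling (avalanche-size exponents are often
# reported near -1.5 at criticality).
random.seed(1)
alpha_true = 1.5
samples = [(1 - random.random()) ** (-1.0 / (alpha_true - 1.0))
           for _ in range(50000)]
```

A power-law fit alone is weak evidence; the abstract's stronger tests, such as the collapse of mean avalanche profiles onto one universal scaling function and the scaling relation between exponents, are what distinguish genuine criticality from generic heavy tails.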
Subjects
Models, Neurological; Nerve Net/physiology; Neurons/physiology; Action Potentials/physiology; Animals; Cells, Cultured; Rats
ABSTRACT
The hypothesis that living neural networks operate near a critical phase transition point has received substantial discussion. This "criticality hypothesis" is potentially important because experiments and theory show that optimal information processing and health are associated with operating near the critical point. Despite the promise of this idea, there have been several objections to it. While earlier objections have been addressed already, the more recent critiques of Touboul and Destexhe have not yet been fully met. The purpose of this paper is to describe their objections and offer responses. Their first objection is that the well-known Brunel model for cortical networks does not display a peak in mutual information near its phase transition, in apparent contradiction to the criticality hypothesis. In response I show that it does have such a peak near the phase transition point, provided it is not strongly driven by random inputs. Their second objection is that even simple models like a coin flip can satisfy multiple criteria of criticality. This suggests that the emergent criticality claimed to exist in cortical networks is just the consequence of a random walk put through a threshold. In response I show that while such processes can produce many signatures of criticality, these signatures (1) do not emerge from collective interactions, (2) do not support information processing, and (3) do not have long-range temporal correlations. Because experiments show these three features are consistently present in living neural networks, such random walk models are inadequate. Nevertheless, I conclude that these objections have been valuable for refining research questions and should always be welcomed as a part of the scientific process.
ABSTRACT
The two visual streams hypothesis is a robust example of neural functional specialization that has inspired countless studies over the past four decades. According to one prominent version of the theory, the fundamental goal of the dorsal visual pathway is the transformation of retinal information for visually-guided motor behavior. To that end, the dorsal stream processes input using absolute (or veridical) metrics only when the movement is initiated, necessitating very little, or no, memory. Conversely, because the ventral visual pathway does not involve motor behavior (its output does not influence the real world), the ventral stream processes input using relative (or illusory) metrics and can accumulate or integrate sensory evidence over long time constants, which provides a substantial capacity for memory. In this study, we tested these relations between functional specialization, processing metrics, and memory by training identical recurrent neural networks to perform either a viewpoint-invariant object classification task or an orientation/size determination task. The former task relies on relative metrics, benefits from accumulating sensory evidence, and is usually attributed to the ventral stream. The latter task relies on absolute metrics, can be computed accurately in the moment, and is usually attributed to the dorsal stream. To quantify the amount of memory required for each task, we chose two types of neural network models. Using a long short-term memory (LSTM) recurrent network, we found that viewpoint-invariant object categorization (object task) required a longer memory than orientation/size determination (orientation task). Additionally, to dissect this memory effect, we considered factors that contributed to longer memory in object tasks. First, we used two different sets of objects, one with self-occlusion of features and one without.
Second, we defined object classes either strictly by visual feature similarity or (more liberally) by semantic label. The models required greater memory when features were self-occluded and when object classes were defined by visual feature similarity, showing that self-occlusion and visual similarity among object-task samples contribute to the longer memory requirement. The same set of tasks modeled using modified leaky-integrator echo state recurrent networks (LiESN), however, did not replicate the results, except under some conditions. This may be because LiESNs cannot perform fine-grained memory adjustments due to their network-wide memory coefficient and fixed recurrent weights. In sum, the LSTM simulations suggest that longer memory is advantageous for performing viewpoint-invariant object classification (a putative ventral stream function) because it allows for interpolation of features across viewpoints. The results further suggest that orientation/size determination (a putative dorsal stream function) does not benefit from longer memory. These findings are consistent with the two visual streams theory of functional specialization. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s11571-021-09703-z.
ABSTRACT
Aging impacts the brain's structural and functional organization and over time leads to various disorders, such as Alzheimer's disease and cognitive impairment. The process also impacts sensory function, bringing about a general slowing in various perceptual and cognitive functions. Here, we analyze the Cambridge Centre for Ageing and Neuroscience (Cam-CAN) resting-state magnetoencephalography (MEG) dataset-the largest aging cohort available-in light of the quasicriticality framework, a novel organizing principle for brain functionality which relates information processing and scaling properties of brain activity to brain connectivity and stimulus. Examination of the data using this framework reveals interesting correlations with age and gender of test subjects. Using simulated data as verification, our results suggest a link between changes to brain connectivity due to aging and increased dynamical fluctuations of neuronal firing rates. Our findings suggest a platform to develop biomarkers of neurological health.
ABSTRACT
Understanding the mechanisms of distributed computation in cellular automata requires techniques for characterizing the emergent structures that underlie information processing in such systems. Recently, techniques from information theory have been brought to bear on this problem. Building on this work, we utilize the new technique of partial information decomposition to show that previous information-theoretic measures can confound distinct sources of information. We then propose a new set of filters and demonstrate that they more cleanly separate out the background domains, particles, and collisions that are typically associated with information storage, transfer, and modification in cellular automata.
Subjects
Information Theory; Models, Theoretical; Entropy; Time Factors
ABSTRACT
BACKGROUND: How living neural networks retain information is still incompletely understood. Two prominent ideas on this topic have developed in parallel, but have remained somewhat unconnected. The first of these, the "synaptic hypothesis," holds that information can be retained in synaptic connection strengths, or weights, between neurons. Recent work inspired by statistical mechanics has suggested that networks will retain the most information when their weights are distributed in a skewed manner, with many weak weights and only a few strong ones. The second of these ideas is that information can be represented by stable activity patterns. Multineuron recordings have shown that sequences of neural activity distributed over many neurons are repeated above chance levels when animals perform well-learned tasks. Although these two ideas are compelling, no one to our knowledge has yet linked the predicted optimum distribution of weights to stable activity patterns actually observed in living neural networks. RESULTS: Here, we explore this link by comparing stable activity patterns from cortical slice networks recorded with multielectrode arrays to stable patterns produced by a model with a tunable weight distribution. This model was previously shown to capture central features of the dynamics in these slice networks, including neuronal avalanche cascades. We find that when the model weight distribution is appropriately skewed, it correctly matches the distribution of repeating patterns observed in the data. In addition, this same distribution of weights maximizes the capacity of the network model to retain stable activity patterns. Thus, the distribution that best fits the data is also the distribution that maximizes the number of stable patterns. CONCLUSIONS: We conclude that local cortical networks are very likely to use a highly skewed weight distribution to optimize information retention, as predicted by theory. 
Fixed distributions impose constraints on learning, however. The network must have mechanisms for preserving the overall weight distribution while allowing individual connection strengths to change with learning.
Subjects
Models, Neurological; Neural Networks, Computer; Neurons/physiology; Somatosensory Cortex/physiology; Synapses/physiology; Synaptic Transmission/physiology; Algorithms; Animals; Cells, Cultured; In Vitro Techniques; Information Theory; Microelectrodes; Neural Pathways/physiology; Probability; Rats; Rats, Sprague-Dawley
ABSTRACT
OBJECTIVE: Many neural systems display spontaneous, spatiotemporal patterns of neural activity that are crucial for information processing. While these cascading patterns presumably arise from the underlying network of synaptic connections between neurons, the precise contribution of the network's local and global connectivity to these patterns and information processing remains largely unknown. APPROACH: Here, we demonstrate how network structure supports information processing through network dynamics in empirical and simulated spiking neurons using mathematical tools from linear systems theory, network control theory, and information theory. MAIN RESULTS: In particular, we show that activity, and the information that it contains, travels through cycles in real and simulated networks. SIGNIFICANCE: Broadly, our results demonstrate how cascading neural networks could contribute to cognitive faculties that require lasting activation of neuronal patterns, such as working memory or attention.
Subjects
Neural Networks, Computer; Neurons; Action Potentials; Models, Neurological; Nerve Net
ABSTRACT
Neural information processing is widely understood to depend on correlations in neuronal activity. However, whether correlation is favorable or not is contentious. Here, we sought to determine how correlated activity and information processing are related in cortical circuits. Using recordings of hundreds of spiking neurons in organotypic cultures of mouse neocortex, we asked whether mutual information between neurons that feed into a common third neuron increased synergistic information processing by the receiving neuron. We found that mutual information and synergistic processing were positively related at synaptic timescales (0.05-14 ms), where mutual information values were low. This effect was mediated by the increase in information transmission-of which synergistic processing is a component-that resulted as mutual information grew. However, at extrasynaptic windows (up to 3,000 ms), where mutual information values were high, the relationship between mutual information and synergistic processing became negative. In this regime, greater mutual information resulted in a disproportionate increase in redundancy relative to information transmission. These results indicate that the emergence of synergistic processing from correlated activity differs according to timescale and correlation regime. In a low-correlation regime, synergistic processing increases with greater correlation, and in a high-correlation regime, synergistic processing decreases with greater correlation.
ABSTRACT
Multineuron firing patterns are often observed, yet are predicted to be rare by models that assume independent firing. To explain these correlated network states, two groups recently applied a second-order maximum entropy model that used only observed firing rates and pairwise interactions as parameters (Schneidman et al., 2006; Shlens et al., 2006). Interestingly, with these minimal assumptions they predicted 90-99% of network correlations. If generally applicable, this approach could vastly simplify analyses of complex networks. However, this initial work was done largely on retinal tissue, and its applicability to cortical circuits is mostly unknown. This work also did not address the temporal evolution of correlated states. To investigate these issues, we applied the model to multielectrode data containing spontaneous spikes or local field potentials from cortical slices and cultures. The model worked slightly less well in cortex than in retina, accounting for 88 +/- 7% (mean +/- SD) of network correlations. In addition, in 8 of 13 preparations, the observed sequences of correlated states were significantly longer than predicted by concatenating states from the model. This suggested that temporal dependencies are a common feature of cortical network activity, and should be considered in future models. We found a significant relationship between strong pairwise temporal correlations and observed sequence length, suggesting that pairwise temporal correlations may allow the model to be extended into the temporal domain. We conclude that although a second-order maximum entropy model successfully predicts correlated states in cortical networks, it should be extended to account for temporal correlations observed between states.
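The second-order maximum entropy model referenced above is the distribution that matches the observed firing rates and pairwise correlations while assuming nothing else. For a handful of binary units it can be fit by brute force, enumerating all 2^n states and matching moments by gradient ascent; the sketch below (with hypothetical parameters) illustrates the idea, whereas real analyses over many neurons require more scalable fitting.

```python
import itertools
import numpy as np

def fit_pairwise_maxent(p_data, n, lr=0.5, steps=5000):
    """Fit the second-order maximum entropy (pairwise, Ising-like) model
    p(x) proportional to exp(sum_i h_i x_i + sum_{i<j} J_ij x_i x_j)
    to a distribution over binary patterns by moment matching."""
    states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    pair_feats = np.stack([states[:, i] * states[:, j] for i, j in pairs], axis=1)
    h = np.zeros(n)
    J = np.zeros(len(pairs))
    m1_data = p_data @ states       # target first moments <x_i>
    m2_data = p_data @ pair_feats   # target second moments <x_i x_j>
    for _ in range(steps):
        logits = states @ h + pair_feats @ J
        p = np.exp(logits - logits.max())
        p /= p.sum()
        h += lr * (m1_data - p @ states)      # close the first-moment gap
        J += lr * (m2_data - p @ pair_feats)  # close the second-moment gap
    return p

# Toy check: a distribution generated by a pairwise model is recovered exactly.
n = 3
states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
h_true = np.array([0.2, -0.3, 0.1])
J_true = {(0, 1): 0.5, (0, 2): -0.4, (1, 2): 0.3}
logits = states @ h_true + sum(w * states[:, i] * states[:, j]
                               for (i, j), w in J_true.items())
p_true = np.exp(logits)
p_true /= p_true.sum()
p_fit = fit_pairwise_maxent(p_true, n)
```

The abstract's finding is about what such a fit leaves out: when the observed state sequences are longer than concatenations of model-sampled states would predict, the missing ingredient is temporal dependence, which a static pairwise model cannot represent.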
Subjects
Cerebral Cortex/cytology; Entropy; Models, Neurological; Nerve Net/physiology; Neurons/physiology; Action Potentials/physiology; Analysis of Variance; Animals; Cells, Cultured; Computer Simulation; Embryo, Mammalian; Hippocampus/cytology; Humans; In Vitro Techniques; Membrane Potentials/physiology; Membrane Potentials/radiation effects; Patch-Clamp Techniques; Rats; Rats, Sprague-Dawley
ABSTRACT
The criticality hypothesis predicts that cortex operates near a critical point for optimum information processing. In this issue of Neuron, Ma et al. (2019) find evidence consistent with a mechanism that tunes cortex to criticality, even in the face of a strong perturbation over several days.