1.
Adv Sci (Weinh) ; 11(11): e2306826, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38161217

ABSTRACT

Motivated by the unexplored potential of in vitro neural systems for computing and by the corresponding need for versatile, scalable interfaces for multimodal interaction, an accurate, modular, fully customizable, and portable recording/stimulation solution that can be easily fabricated, robustly operated, and broadly disseminated is presented. This approach entails a reconfigurable platform that works across multiple industry standards and that enables a complete signal chain, from neural substrates sampled through micro-electrode arrays (MEAs) to data acquisition, downstream analysis, and cloud storage. Built-in modularity supports the seamless integration of electrical/optical stimulation and fluidic interfaces. Custom MEA fabrication leverages maskless photolithography, favoring the rapid prototyping of a variety of configurations, spatial topologies, and constitutive materials. Through a dedicated analysis and management software suite, the utility and robustness of this system are demonstrated across neural cultures and applications, including embryonic stem cell-derived and primary neurons, organotypic brain slices, 3D engineered tissue mimics, concurrent calcium imaging, and long-term recording. Overall, this technology, termed "mind in vitro" to underscore the computing inspiration, provides an end-to-end solution that can be widely deployed due to its affordable (>10× cost reduction) and open-source nature, catering to the expanding needs of both conventional and unconventional electrophysiology.


Subject(s)
Brain , Neurons , Electrodes , Brain/physiology , Neurons/physiology , Electric Stimulation , Electrophysiological Phenomena/physiology
2.
Front Comput Neurosci ; 16: 1037550, 2022.
Article in English | MEDLINE | ID: mdl-36532868

ABSTRACT

Aging impacts the brain's structural and functional organization and over time leads to various disorders, such as Alzheimer's disease and cognitive impairment. The process also impacts sensory function, bringing about a general slowing in various perceptual and cognitive functions. Here, we analyze the Cambridge Centre for Ageing and Neuroscience (Cam-CAN) resting-state magnetoencephalography (MEG) dataset (the largest aging cohort available) in light of the quasicriticality framework, a novel organizing principle for brain functionality that relates information processing and scaling properties of brain activity to brain connectivity and stimulus. Examination of the data using this framework reveals interesting correlations with the age and gender of test subjects. Using simulated data as verification, our results suggest a link between changes to brain connectivity due to aging and increased dynamical fluctuations of neuronal firing rates. Our findings suggest a platform to develop biomarkers of neurological health.

3.
Front Comput Neurosci ; 16: 703865, 2022.
Article in English | MEDLINE | ID: mdl-36185712

ABSTRACT

The hypothesis that living neural networks operate near a critical phase transition point has received substantial discussion. This "criticality hypothesis" is potentially important because experiments and theory show that optimal information processing and health are associated with operating near the critical point. Despite the promise of this idea, there have been several objections to it. While earlier objections have already been addressed, the more recent critiques of Touboul and Destexhe have not yet been fully met. The purpose of this paper is to describe their objections and offer responses. Their first objection is that the well-known Brunel model for cortical networks does not display a peak in mutual information near its phase transition, in apparent contradiction to the criticality hypothesis. In response I show that it does have such a peak near the phase transition point, provided it is not strongly driven by random inputs. Their second objection is that even simple models like a coin flip can satisfy multiple criteria of criticality. This suggests that the emergent criticality claimed to exist in cortical networks is just the consequence of a random walk put through a threshold. In response I show that while such processes can produce many signatures of criticality, these signatures (1) do not emerge from collective interactions, (2) do not support information processing, and (3) do not have long-range temporal correlations. Because experiments show these three features are consistently present in living neural networks, such random walk models are inadequate. Nevertheless, I conclude that these objections have been valuable for refining research questions and should always be welcomed as a part of the scientific process.

4.
Entropy (Basel) ; 24(7)2022 Jul 05.
Article in English | MEDLINE | ID: mdl-35885153

ABSTRACT

The varied cognitive abilities and rich adaptive behaviors enabled by the animal nervous system are often described in terms of information processing. This framing raises the issue of how biological neural circuits actually process information, and some of the most fundamental outstanding questions in neuroscience center on understanding the mechanisms of neural information processing. Classical information theory has long been understood to be a natural framework within which information processing can be understood, and recent advances in the field of multivariate information theory offer new insights into the structure of computation in complex systems. In this review, we provide an introduction to the conceptual and practical issues associated with using multivariate information theory to analyze information processing in neural circuits, as well as discussing recent empirical work in this vein. Specifically, we provide an accessible introduction to the partial information decomposition (PID) framework. PID reveals redundant, unique, and synergistic modes by which neurons integrate information from multiple sources. We focus particularly on the synergistic mode, which quantifies the "higher-order" information carried in the patterns of multiple inputs and is not reducible to input from any single source. Recent work in a variety of model systems has revealed that synergistic dynamics are ubiquitous in neural circuitry and show reliable structure-function relationships, emerging disproportionately in neuronal rich clubs, downstream of recurrent connectivity, and in the convergence of correlated activity. We draw on the existing literature on higher-order information dynamics in neuronal networks to illustrate the insights that have been gained by taking an information decomposition perspective on neural activity. 
Finally, we briefly discuss future promising directions for information decomposition approaches to neuroscience, such as work on behaving animals, multi-target generalizations of PID, and time-resolved local analyses.
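The decomposition described above can be made concrete with a small worked example. The sketch below is our own minimal illustration of the Williams-Beer redundancy measure (Imin) for two sources, not any published PID toolbox, and all function names are ours; it recovers the textbook result that an XOR relationship between inputs and output carries purely synergistic information.

```python
import numpy as np
from itertools import product

def mi(p_xy):
    """Mutual information (bits) from a joint distribution table p(x, y)."""
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float((p_xy[nz] * np.log2(p_xy[nz] / (px @ py)[nz])).sum())

def pid_imin(p):
    """Williams-Beer PID for p[x1, x2, y] (two sources, one target).
    Returns (redundant, unique1, unique2, synergy) in bits."""
    p = p / p.sum()
    p_y = p.sum(axis=(0, 1))

    def spec(axis_other):
        """Specific information I(Y=y; Xi), marginalizing out the other source."""
        p_xy = p.sum(axis=axis_other)          # joint p(xi, y)
        p_x = p_xy.sum(axis=1, keepdims=True)  # p(xi)
        out = np.zeros(len(p_y))
        for y in range(len(p_y)):
            for x in range(p_xy.shape[0]):
                if p_xy[x, y] > 0:
                    p_x_given_y = p_xy[x, y] / p_y[y]
                    p_y_given_x = p_xy[x, y] / p_x[x, 0]
                    out[y] += p_x_given_y * np.log2(p_y_given_x / p_y[y])
        return out

    spec1, spec2 = spec(1), spec(0)
    redundant = float((p_y * np.minimum(spec1, spec2)).sum())
    mi1 = mi(p.sum(axis=1))                    # I(Y; X1)
    mi2 = mi(p.sum(axis=0))                    # I(Y; X2)
    mi12 = mi(p.reshape(-1, p.shape[2]))       # I(Y; X1, X2)
    return redundant, mi1 - redundant, mi2 - redundant, mi12 - mi1 - mi2 + redundant

# XOR: each input alone carries no information about the output,
# so the full bit of I(Y; X1, X2) must be synergy.
p_xor = np.zeros((2, 2, 2))
for x1, x2 in product([0, 1], repeat=2):
    p_xor[x1, x2, x1 ^ x2] = 0.25
r, u1, u2, s = pid_imin(p_xor)
```

For realistic analyses, dedicated software implements a range of PID measures beyond this minimal one, including the multi-target and time-resolved variants mentioned above.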

5.
Cogn Neurodyn ; 16(1): 149-165, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35126775

ABSTRACT

The two visual streams hypothesis is a robust example of neural functional specialization that has inspired countless studies over the past four decades. According to one prominent version of the theory, the fundamental goal of the dorsal visual pathway is the transformation of retinal information for visually-guided motor behavior. To that end, the dorsal stream processes input using absolute (or veridical) metrics only when the movement is initiated, necessitating very little, or no, memory. Conversely, because the ventral visual pathway does not involve motor behavior (its output does not influence the real world), the ventral stream processes input using relative (or illusory) metrics and can accumulate or integrate sensory evidence over long time constants, which provides a substantial capacity for memory. In this study, we tested these relations between functional specialization, processing metrics, and memory by training identical recurrent neural networks to perform either a viewpoint-invariant object classification task or an orientation/size determination task. The former task relies on relative metrics, benefits from accumulating sensory evidence, and is usually attributed to the ventral stream. The latter task relies on absolute metrics, can be computed accurately in the moment, and is usually attributed to the dorsal stream. To quantify the amount of memory required for each task, we chose two types of neural network models. Using a long short-term memory (LSTM) recurrent network, we found that viewpoint-invariant object categorization (object task) required a longer memory than orientation/size determination (orientation task). Additionally, to dissect this memory effect, we considered factors that contributed to longer memory in object tasks. First, we used two different sets of objects, one with self-occlusion of features and one without. 
Second, we defined object classes either strictly by visual feature similarity or (more liberally) by semantic label. The models required greater memory when features were self-occluded and when object classes were defined by visual feature similarity, showing that self-occlusion and visual similarity among object-task samples contribute to the need for longer memory. The same set of tasks modeled using modified leaky-integrator echo state recurrent networks (LiESN), however, did not replicate the results, except under some conditions. This may be because LiESNs cannot perform fine-grained memory adjustments due to their network-wide memory coefficient and fixed recurrent weights. In sum, the LSTM simulations suggest that longer memory is advantageous for performing viewpoint-invariant object classification (a putative ventral stream function) because it allows for interpolation of features across viewpoints. The results further suggest that orientation/size determination (a putative dorsal stream function) does not benefit from longer memory. These findings are consistent with the two visual streams theory of functional specialization. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s11571-021-09703-z.
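The contrast drawn above between the LSTM's fine-grained memory control and the LiESN's fixed network-wide leak can be illustrated with a toy NumPy cell. This is a generic textbook LSTM of our own construction, not the networks trained in the study: biasing the forget gate open preserves the cell state across many steps, while biasing it shut erases the state almost immediately.

```python
import numpy as np

rng = np.random.default_rng(6)

def lstm_step(x, h, c, W, U, b):
    """One step of a standard LSTM cell (gate order: input, forget, cell, output)."""
    z = W @ x + U @ h + b
    i, f, g, o = np.split(z, 4)
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sig(i), sig(f), sig(o)
    c = f * c + i * np.tanh(g)     # forget gate f sets the memory decay rate
    h = o * np.tanh(c)
    return h, c

d_in, d_h = 3, 4
W = rng.normal(0, 0.1, (4 * d_h, d_in))   # input weights (unused here: x = 0)
U = rng.normal(0, 0.02, (4 * d_h, d_h))   # small recurrent weights

def retained_after(forget_bias, steps=100):
    """Start with cell state c = 1, drive with zero input, and measure
    how much state survives after `steps` updates."""
    b = np.zeros(4 * d_h)
    b[d_h:2 * d_h] = forget_bias          # bias only the forget gate
    h, c = np.zeros(d_h), np.ones(d_h)
    for _ in range(steps):
        h, c = lstm_step(np.zeros(d_in), h, c, W, U, b)
    return np.abs(c).mean()

long_mem = retained_after(5.0)    # forget gate held open: state persists
short_mem = retained_after(-5.0)  # forget gate held shut: state vanishes
```

Because the forget gate is learned per unit and per step, an LSTM can adjust this decay rate during training, whereas a LiESN's leak coefficient is fixed network-wide, which is the mechanism the abstract proposes for the failure to replicate.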

6.
PLoS Comput Biol ; 17(7): e1009196, 2021 07.
Article in English | MEDLINE | ID: mdl-34252081

ABSTRACT

The directionality of network information flow dictates how networks process information. A central component of information processing in both biological and artificial neural networks is their ability to perform synergistic integration-a type of computation. We established previously that synergistic integration varies directly with the strength of feedforward information flow. However, the relationships between both recurrent and feedback information flow and synergistic integration remain unknown. To address this, we analyzed the spiking activity of hundreds of neurons in organotypic cultures of mouse cortex. We asked how empirically observed synergistic integration-determined from partial information decomposition-varied with local functional network structure that was categorized into motifs with varying recurrent and feedback information flow. We found that synergistic integration was elevated in motifs with greater recurrent information flow beyond that expected from the local feedforward information flow. Feedback information flow was interrelated with feedforward information flow and was associated with decreased synergistic integration. Our results indicate that synergistic integration is distinctly influenced by the directionality of local information flow.


Subject(s)
Models, Neurological , Nerve Net/physiology , Neural Networks, Computer , Somatosensory Cortex/physiology , Action Potentials/physiology , Animals , Computational Biology , Feedback, Physiological , Mice , Neurons/physiology , Organ Culture Techniques , Synaptic Transmission/physiology
7.
Phys Rev Lett ; 126(9): 098101, 2021 Mar 05.
Article in English | MEDLINE | ID: mdl-33750159

ABSTRACT

Much evidence seems to suggest the cortex operates near a critical point, yet a single set of exponents defining its universality class has not been found. In fact, when critical exponents are estimated from data, they widely differ across species, individuals of the same species, and even over time, or depending on stimulus. Interestingly, these exponents still approximately hold to a dynamical scaling relation. Here we show that the theory of quasicriticality, an organizing principle for brain dynamics, can account for this paradoxical situation. As external stimuli drive the cortex, quasicriticality predicts a departure from criticality along a Widom line with exponents that decrease in absolute value, while still holding approximately to a dynamical scaling relation. We use simulations and experimental data to confirm these predictions and describe new ones that could be tested soon.
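A minimal way to see the dynamical scaling relation at work is a critical branching process, the standard mean-field toy model of avalanche dynamics. This is our own sketch, not the paper's simulations: at the critical point, the mean avalanche size at fixed duration should grow roughly as T raised to 1/(σνz), which is 2 in mean field.

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche(m=1.0, max_steps=5_000):
    """One avalanche of a branching process with mean offspring m.
    Returns (total size, duration in generations); m = 1 is critical."""
    active, size, duration = 1, 1, 1
    while active and duration < max_steps:
        active = rng.poisson(m * active)
        if active:
            size += active
            duration += 1
    return size, duration

res = np.array([avalanche() for _ in range(50_000)])
sizes, durations = res[:, 0], res[:, 1]

# Dynamical scaling: <S>(T) ~ T^(1/(sigma nu z)); mean field predicts exponent 2.
ts = np.arange(2, 25)
mean_s = np.array([sizes[durations == t].mean() for t in ts])
slope = np.polyfit(np.log(ts), np.log(mean_s), 1)[0]
```

With finite samples and short durations the fitted slope sits somewhat below the asymptotic value of 2; the quasicriticality prediction is that driving shifts such exponents along the Widom line while the scaling relation between them approximately survives.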


Subject(s)
Brain/physiology , Models, Neurological , Stochastic Processes
8.
J Neural Eng ; 17(5): 056045, 2020 11 04.
Article in English | MEDLINE | ID: mdl-33036007

ABSTRACT

OBJECTIVE: Many neural systems display spontaneous, spatiotemporal patterns of neural activity that are crucial for information processing. While these cascading patterns presumably arise from the underlying network of synaptic connections between neurons, the precise contribution of the network's local and global connectivity to these patterns and information processing remains largely unknown. APPROACH: Here, we demonstrate how network structure supports information processing through network dynamics in empirical and simulated spiking neurons using mathematical tools from linear systems theory, network control theory, and information theory. MAIN RESULTS: In particular, we show that activity, and the information that it contains, travels through cycles in real and simulated networks. SIGNIFICANCE: Broadly, our results demonstrate how cascading neural networks could contribute to cognitive faculties that require lasting activation of neuronal patterns, such as working memory or attention.


Subject(s)
Neural Networks, Computer , Neurons , Action Potentials , Models, Neurological , Nerve Net
9.
J Neurophysiol ; 124(6): 1588-1604, 2020 12 01.
Article in English | MEDLINE | ID: mdl-32937091

ABSTRACT

Detecting synaptic connections using large-scale extracellular spike recordings presents a statistical challenge. Although previous methods often treat the detection of each putative connection as a separate hypothesis test, here we develop a modeling approach that infers synaptic connections while incorporating circuit properties learned from the whole network. We use an extension of the generalized linear model framework to describe the cross-correlograms between pairs of neurons and separate correlograms into two parts: a slowly varying effect due to background fluctuations and a fast, transient effect due to the synapse. We then use the observations from all putative connections in the recording to estimate two network properties: the presynaptic neuron type (excitatory or inhibitory) and the relationship between synaptic latency and distance between neurons. Constraining the presynaptic neuron's type, synaptic latencies, and time constants improves synapse detection. In data from simulated networks, this model outperforms two previously developed synapse detection methods, especially for weak connections. We also apply our model to in vitro multielectrode array recordings from the mouse somatosensory cortex. Here, our model automatically recovers plausible connections from hundreds of neurons, and the properties of the putative connections are largely consistent with previous research. NEW & NOTEWORTHY: Detecting synaptic connections using large-scale extracellular spike recordings is a difficult statistical problem. Here, we develop an extension of a generalized linear model that explicitly separates fast synaptic effects and slow background fluctuations in cross-correlograms between pairs of neurons while incorporating circuit properties learned from the whole network. 
This model outperforms two previously developed synapse detection methods in the simulated networks and recovers plausible connections from hundreds of neurons in in vitro multielectrode array data.
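The structure the model exploits — a slow baseline plus a fast, transient synaptic peak in the cross-correlogram — can be sketched on synthetic spike trains. The 2 ms latency and 30% efficacy below are arbitrary choices of ours for illustration, and this plain correlogram is not the paper's GLM.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 0.001, 600.0                       # 1 ms bins, 10 min recording
n = int(T / dt)

pre = rng.random(n) < 5 * dt               # presynaptic train: 5 Hz Poisson
post = rng.random(n) < 5 * dt              # postsynaptic background: 5 Hz
# hypothetical synapse: each pre spike adds a post spike 2 ms later w.p. 0.3
syn = np.zeros(n, dtype=bool)
syn[2:] = pre[:-2] & (rng.random(n - 2) < 0.3)
post |= syn

def ccg(a, b, max_lag=10):
    """Cross-correlogram: counts of b spikes at each lag relative to a spikes."""
    m = len(a)
    lags = np.arange(-max_lag, max_lag + 1)
    counts = [np.sum(a[:m - l] & b[l:]) if l >= 0
              else np.sum(a[-l:] & b[:m + l]) for l in lags]
    return lags, np.array(counts)

lags, counts = ccg(pre, post)
peak_lag = lags[np.argmax(counts)]         # the synaptic peak sits at +2 ms
```

The GLM approach described above goes further by fitting the slow component explicitly and pooling latency and cell-type information across all pairs, which is what rescues weak connections that a raw correlogram peak would miss.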


Subject(s)
Action Potentials/physiology , Models, Theoretical , Nerve Net/physiology , Neurons/physiology , Somatosensory Cortex/physiology , Synapses/physiology , Synaptic Transmission/physiology , Animals , Mice , Neural Networks, Computer
10.
Netw Neurosci ; 4(3): 678-697, 2020.
Article in English | MEDLINE | ID: mdl-32885121

ABSTRACT

Neural information processing is widely understood to depend on correlations in neuronal activity. However, whether correlation is favorable or not is contentious. Here, we sought to determine how correlated activity and information processing are related in cortical circuits. Using recordings of hundreds of spiking neurons in organotypic cultures of mouse neocortex, we asked whether mutual information between neurons that feed into a common third neuron increased synergistic information processing by the receiving neuron. We found that mutual information and synergistic processing were positively related at synaptic timescales (0.05-14 ms), where mutual information values were low. This effect was mediated by the increase in information transmission (of which synergistic processing is a component) that resulted as mutual information grew. However, at extrasynaptic windows (up to 3,000 ms), where mutual information values were high, the relationship between mutual information and synergistic processing became negative. In this regime, greater mutual information resulted in a disproportionate increase in redundancy relative to information transmission. These results indicate that the emergence of synergistic processing from correlated activity differs according to timescale and correlation regime. In a low-correlation regime, synergistic processing increases with greater correlation, and in a high-correlation regime, synergistic processing decreases with greater correlation.
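The timescale dependence of pairwise correlation described above can be reproduced with a plug-in estimator on synthetic trains that share only a slow rate modulation. This is an illustration with arbitrary parameters of our own, not the study's data or estimator: mutual information computed in 5 ms bins is far smaller than in 1 s bins.

```python
import numpy as np

rng = np.random.default_rng(2)

def mutual_info(x, y):
    """Plug-in mutual information (bits) between two discrete sequences."""
    xv, xi = np.unique(x, return_inverse=True)
    yv, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xv), len(yv)))
    np.add.at(joint, (xi, yi), 1)
    joint /= joint.sum()
    px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# two neurons sharing a common slow rate modulation (correlated activity)
n_ms = 600_000                                              # 10 min at 1 ms
slow = 5 + 4 * np.sin(2 * np.pi * np.arange(n_ms) / 10_000)  # shared rate (Hz)
a = rng.random(n_ms) < slow * 0.001
b = rng.random(n_ms) < slow * 0.001

def binned(train, width_ms):
    """Spike counts in non-overlapping bins of the given width."""
    m = len(train) // width_ms
    return train[:m * width_ms].reshape(m, width_ms).sum(1)

mi_fast = mutual_info(binned(a, 5), binned(b, 5))        # synaptic timescale
mi_slow = mutual_info(binned(a, 1000), binned(b, 1000))  # extrasynaptic window
```

This captures only the "low MI at short windows, high MI at long windows" setting of the study; the relationship to synergistic processing requires the PID machinery discussed in the other entries.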

11.
Anal Chem ; 92(6): 4630-4638, 2020 03 17.
Article in English | MEDLINE | ID: mdl-32070103

ABSTRACT

Prenatal cannabis exposure (PCE) influences human brain development, but it is challenging to model PCE using animals and current cell culture techniques. Here, we developed a one-stop microfluidic platform to assemble and culture human cerebral organoids from human embryonic stem cells (hESC) to investigate the effect of PCE on early human brain development. By incorporating perfusable culture chambers, air-liquid interface, and one-stop protocol, this microfluidic platform can simplify the fabrication procedure and produce a large number of organoids (169 organoids per 3.5 cm × 3.5 cm device area) without fusion, as compared with conventional fabrication methods. These one-stop microfluidic assembled cerebral organoids not only recapitulate early human brain structure, biology, and electrophysiology but also have minimal size variation and hypoxia. Under on-chip exposure to the psychoactive cannabinoid Δ-9-tetrahydrocannabinol (THC), cerebral organoids exhibited reduced neuronal maturation, downregulation of the cannabinoid type 1 (CB1) receptor, and impaired neurite outgrowth. Moreover, transient on-chip THC treatment also decreased spontaneous firing in these organoids. This one-stop microfluidic technique enables a simple, scalable, and repeatable organoid culture method that can be used not only for human brain organoids but also for many other human organoids including liver, kidney, retina, and tumor organoids. This technology could be widely used in modeling brain and other organ development, developmental disorders, developmental pharmacology and toxicology, and drug screening.


Subject(s)
Brain/drug effects , Cannabis/adverse effects , Lab-On-A-Chip Devices , Models, Biological , Organoids/drug effects , Brain/diagnostic imaging , Cells, Cultured , Electrodes , Embryonic Stem Cells/drug effects , Female , Humans , Hypoxia/diagnostic imaging , Organoids/diagnostic imaging , Pregnancy , Prenatal Exposure Delayed Effects/chemically induced
12.
Neuron ; 104(4): 623-624, 2019 11 20.
Article in English | MEDLINE | ID: mdl-31751539

ABSTRACT

The criticality hypothesis predicts that cortex operates near a critical point for optimum information processing. In this issue of Neuron, Ma et al. (2019) find evidence consistent with a mechanism that tunes cortex to criticality, even in the face of a strong perturbation over several days.


Subject(s)
Cerebral Cortex , Neurons , Cerebral Cortex/physiology
13.
Netw Neurosci ; 3(2): 384-404, 2019.
Article in English | MEDLINE | ID: mdl-30793088

ABSTRACT

To understand how neural circuits process information, it is essential to identify the relationship between computation and circuit organization. Rich clubs, highly interconnected sets of neurons, are known to propagate a disproportionate amount of information within cortical circuits. Here, we test the hypothesis that rich clubs also perform a disproportionate amount of computation. To do so, we recorded the spiking activity of on average ∼300 well-isolated individual neurons from organotypic cortical cultures. We then constructed weighted, directed networks reflecting the effective connectivity between the neurons. For each neuron, we quantified the amount of computation it performed based on its inputs. We found that rich-club neurons compute ∼160% more information than neurons outside of the rich club. The amount of computation performed in the rich club was proportional to the amount of information propagated by the same neurons. This suggests that in these circuits, information propagation drives computation. In total, our findings indicate that rich-club organization in effective cortical circuits supports not only information propagation but also neural computation.
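The rich-club notion used here can be quantified as an elevated connection density among high-degree nodes. A toy sketch on a synthetic directed network of our own construction (unnormalized coefficient; a real analysis, like the study's, would compare against degree-preserving null models):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200
# toy effective network: 20% of nodes form a densely wired "club",
# the rest connect sparsely at random
A = rng.random((N, N)) < 0.02
club = np.arange(N) < N // 5
A[np.ix_(club, club)] |= rng.random((N // 5, N // 5)) < 0.3
np.fill_diagonal(A, False)

deg = A.sum(0) + A.sum(1)                 # total degree (in + out)

def rich_club_coeff(A, deg, k):
    """Density of directed edges among nodes with total degree > k."""
    rich = deg > k
    m = rich.sum()
    if m < 2:
        return np.nan
    sub = A[np.ix_(rich, rich)]
    return sub.sum() / (m * (m - 1))

k_low, k_high = np.percentile(deg, [25, 80])
phi_low = rich_club_coeff(A, deg, k_low)
phi_high = rich_club_coeff(A, deg, k_high)  # higher density among the hubs
```

Rising density with the degree threshold is the rich-club signature; the study's contribution is showing that PID-measured computation concentrates in that same subset of neurons.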

14.
Front Physiol ; 7: 425, 2016.
Article in English | MEDLINE | ID: mdl-27729870

ABSTRACT

The analysis of neural systems leverages tools from many different fields. Drawing on techniques from the study of critical phenomena in statistical mechanics, several studies have reported signatures of criticality in neural systems, including power-law distributions, shape collapses, and optimized quantities under tuning. Independently, neural complexity, an information theoretic measure, has been introduced in an effort to quantify the strength of correlations across multiple scales in a neural system. This measure is an important tool in complex systems research because it condenses these multiscale correlations into a single quantity. In this analysis, we studied the relationships between neural complexity and criticality in neural culture data. We analyzed neural avalanches in 435 recordings from dissociated hippocampal cultures produced from rats, as well as neural avalanches from a cortical branching model. We utilized recently developed maximum likelihood estimation power-law fitting methods that account for doubly truncated power-laws, an automated shape collapse algorithm, and neural complexity and branching ratio calculation methods that account for sub-sampling, all of which are implemented in the freely available Neural Complexity and Criticality MATLAB toolbox. We found evidence that neural systems operate at or near a critical point and that neural complexity is optimized in these neural systems at or near the critical point. Surprisingly, we found evidence that complexity in neural systems is dependent upon avalanche profiles and neuron firing rate, but not precise spiking relationships between neurons. In order to facilitate future research, we made all of the culture data utilized in this analysis freely available online.
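Maximum-likelihood fitting of a doubly truncated power law, of the kind the toolbox automates, reduces to maximizing the truncated log-likelihood over the exponent. A self-contained sketch on synthetic avalanche sizes (plain grid search in Python; not the MATLAB toolbox's implementation):

```python
import numpy as np

rng = np.random.default_rng(4)

def fit_truncated_powerlaw(data, xmin, xmax):
    """MLE exponent for a discrete power law truncated to [xmin, xmax],
    found by grid search over the log-likelihood."""
    s = data[(data >= xmin) & (data <= xmax)]
    support = np.arange(xmin, xmax + 1)
    sum_log, n = np.log(s).sum(), len(s)
    best_tau, best_ll = None, -np.inf
    for tau in np.arange(1.01, 4.0, 0.001):
        z = (support ** -tau).sum()        # normalizing constant on the support
        ll = -tau * sum_log - n * np.log(z)
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    return best_tau

# synthetic avalanche sizes drawn with the mean-field exponent tau = 1.5
support = np.arange(10, 1000)
p = support ** -1.5
p /= p.sum()
data = rng.choice(support, size=50_000, p=p)
tau_hat = fit_truncated_powerlaw(data, 10, 999)
```

Fitting both cutoffs jointly, as the abstract emphasizes, matters because head and tail truncation each bias a naive unbounded-support fit.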

15.
Front Physiol ; 7: 250, 2016.
Article in English | MEDLINE | ID: mdl-27445842

ABSTRACT

Neural systems include interactions that occur across many scales. Two divergent methods for characterizing such interactions have drawn on the physical analysis of critical phenomena and the mathematical study of information. Inferring criticality in neural systems has traditionally rested on fitting power laws to the property distributions of "neural avalanches" (contiguous bursts of activity), but the fractal nature of avalanche shapes has recently emerged as another signature of criticality. On the other hand, neural complexity, an information theoretic measure, has been used to capture the interplay between the functional localization of brain regions and their integration for higher cognitive functions. Unfortunately, treatments of all three methods (power-law fitting, avalanche shape collapse, and neural complexity) have suffered from shortcomings. Empirical data often contain biases that introduce deviations from a true power law in the tail and head of the distribution, but deviations in the tail have often been overlooked; avalanche shape collapse has required manual parameter tuning; and the estimation of neural complexity has relied on small data sets or statistical assumptions for the sake of computational efficiency. In this paper we present technical advancements in the analysis of criticality and complexity in neural systems. We use maximum-likelihood estimation to automatically fit power laws with left and right cutoffs, present the first automated shape collapse algorithm, and describe new techniques to account for large numbers of neural variables and small data sets in the calculation of neural complexity. In order to facilitate future research in criticality and complexity, we have made the software utilized in this analysis freely available online in the MATLAB NCC (Neural Complexity and Criticality) Toolbox.

16.
PLoS Comput Biol ; 12(5): e1004858, 2016 05.
Article in English | MEDLINE | ID: mdl-27159884

ABSTRACT

Recent work has shown that functional connectivity among cortical neurons is highly varied, with a small percentage of neurons having many more connections than others. Also, recent theoretical developments now make it possible to quantify how neurons modify information from the connections they receive. Therefore, it is now possible to investigate how information modification, or computation, depends on the number of connections a neuron receives (in-degree) or sends out (out-degree). To do this, we recorded the simultaneous spiking activity of hundreds of neurons in cortico-hippocampal slice cultures using a high-density 512-electrode array. This combination of preparation and recording method yielded recordings from large numbers of neurons at temporal and spatial resolutions that are not currently available in any in vivo recording system. We utilized transfer entropy (a well-established method for detecting linear and nonlinear interactions in time series) and the partial information decomposition (a powerful, recently developed tool for dissecting multivariate information processing into distinct parts) to quantify computation between neurons where information flows converged. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to receive connections from high out-degree neurons. However, the in-degree of a neuron was not related to the amount of information it computed. To gain insight into these findings, we developed a simple feedforward network model. We found that a degree-modified Hebbian wiring rule best reproduced the pattern of computation and degree correlation results seen in the real data. Interestingly, this rule also maximized signal propagation in the presence of network-wide correlations, suggesting a mechanism by which cortex could deal with common random background input. 
These are the first results to show that the extent to which a neuron modifies incoming information streams depends on its topological location in the surrounding functional network.
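Transfer entropy, the directed interaction measure used in this and several of the studies above, has a compact plug-in form for binary spike trains with one-time-step histories. A minimal sketch of our own, with an artificial one-step coupling (the 70% copy probability and 10% rates are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)

def transfer_entropy(x, y):
    """Plug-in TE(X -> Y) in bits, with one-step histories for both trains."""
    yf, yp, xp = y[1:], y[:-1], x[:-1]
    c = np.zeros((2, 2, 2))
    np.add.at(c, (yf, yp, xp), 1)              # counts over (y_fut, y_past, x_past)
    p = c / c.sum()
    p_ypxp = p.sum(axis=0, keepdims=True)      # p(y_past, x_past)
    p_yp = p.sum(axis=(0, 2), keepdims=True)   # p(y_past)
    p_yfyp = p.sum(axis=2, keepdims=True)      # p(y_fut, y_past)
    nz = p > 0
    ratio = (p * p_yp) / (p_ypxp * p_yfyp)     # p(yf | yp, xp) / p(yf | yp)
    return float((p[nz] * np.log2(ratio[nz])).sum())

n = 200_000
x = (rng.random(n) < 0.1).astype(int)
y = np.zeros(n, dtype=int)
# y copies x with a one-step delay 70% of the time, else fires at 10%
driven = rng.random(n) < 0.7
y[1:] = np.where(driven[1:], x[:-1], (rng.random(n - 1) < 0.1).astype(int))

te_xy = transfer_entropy(x, y)
te_yx = transfer_entropy(y, x)   # no y -> x coupling: near zero
```

The asymmetry between the two directions is what makes the inferred networks directed; the studies above additionally use delay-aware variants validated on simulated cortical networks.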


Subject(s)
Cerebral Cortex/physiology , Models, Neurological , Neurons/physiology , Synaptic Transmission/physiology , Action Potentials , Animals , Cerebral Cortex/cytology , Computational Biology , Feedback, Physiological , Hippocampus/cytology , Hippocampus/physiology , Information Theory , Mice , Mice, Inbred C57BL , Multivariate Analysis , Nerve Net/cytology , Nerve Net/physiology
17.
J Neurosci ; 36(3): 670-84, 2016 Jan 20.
Article in English | MEDLINE | ID: mdl-26791200

ABSTRACT

The performance of complex networks, like the brain, depends on how effectively their elements communicate. Despite the importance of communication, it is virtually unknown how information is transferred in local cortical networks, consisting of hundreds of closely spaced neurons. To address this, it is important to record simultaneously from hundreds of neurons at a spacing that matches typical axonal connection distances, and at a temporal resolution that matches synaptic delays. We used a 512-electrode array (60 µm spacing) to record spontaneous activity at 20 kHz from up to 500 neurons simultaneously in slice cultures of mouse somatosensory cortex for 1 h at a time. We applied a previously validated version of transfer entropy to quantify information transfer. Similar to in vivo reports, we found an approximately lognormal distribution of firing rates. Pairwise information transfer strengths also were nearly lognormally distributed, similar to reports of synaptic strengths. Some neurons transferred and received much more information than others, which is consistent with previous predictions. Neurons with the highest outgoing and incoming information transfer were more strongly connected to each other than chance, thus forming a "rich club." We found similar results in networks recorded in vivo from rodent cortex, suggesting the generality of these findings. A rich-club structure has been found previously in large-scale human brain networks and is thought to facilitate communication between cortical regions. The discovery of a small, but information-rich, subset of neurons within cortical regions suggests that this population will play a vital role in communication, learning, and memory. SIGNIFICANCE STATEMENT: Many studies have focused on communication networks between cortical brain regions. In contrast, very few studies have examined communication networks within a cortical region. 
This is the first study to combine such a large number of neurons (several hundred at a time) with such high temporal resolution (so we can know the direction of communication between neurons) for mapping networks within cortex. We found that information was not transferred equally through all neurons. Instead, ∼70% of the information passed through only 20% of the neurons. Network models suggest that this highly concentrated pattern of information transfer would be both efficient and robust to damage. Therefore, this work may help in understanding how the cortex processes information and responds to neurodegenerative diseases.


Subject(s)
Nerve Net/cytology , Nerve Net/physiology , Neurons/physiology , Somatosensory Cortex/cytology , Somatosensory Cortex/physiology , Animals , Animals, Newborn , Mice , Mice, Inbred C57BL , Organ Culture Techniques
18.
Cereb Cortex ; 25(10): 3743-57, 2015 Oct.
Article in English | MEDLINE | ID: mdl-25336598

ABSTRACT

Although relationships between networks of different scales have been observed in macroscopic brain studies, relationships between structures of different scales in networks of neurons are unknown. To address this, we recorded from up to 500 neurons simultaneously from slice cultures of rodent somatosensory cortex. We then measured directed effective networks with transfer entropy, previously validated in simulated cortical networks. These effective networks enabled us to evaluate distinctive nonrandom structures of connectivity at 2 different scales. We have 4 main findings. First, at the scale of 3-6 neurons (clusters), we found that high numbers of connections occurred significantly more often than expected by chance. Second, the distribution of the number of connections per neuron (degree distribution) had a long tail, indicating that the network contained distinctively high-degree neurons, or hubs. Third, at the scale of tens to hundreds of neurons, we typically found 2-3 significantly large communities. Finally, we demonstrated that communities were relatively more robust than clusters against shuffling of connections. We conclude that the microconnectome of the cortex has specific organization at different scales, as revealed by differences in robustness. We suggest that this information will help us to understand how the microconnectome is robust against damage.


Subject(s)
Connectome , Neurons/physiology , Somatosensory Cortex/anatomy & histology , Somatosensory Cortex/physiology , Animals , Mice , Models, Neurological , Nerve Net/anatomy & histology , Nerve Net/physiology , Organ Culture Techniques
19.
PLoS One ; 9(12): e115764, 2014.
Article in English | MEDLINE | ID: mdl-25536059

ABSTRACT

Recent studies have emphasized the importance of multiplex networks--interdependent networks with shared nodes and different types of connections--in systems primarily outside of neuroscience. Though the multiplex properties of networks are frequently not considered, most networks are actually multiplex networks, and the multiplex-specific features of networks can greatly affect network behavior (e.g., fault tolerance). Thus, the study of networks of neurons could potentially be greatly enhanced by a multiplex perspective. Given the wide range of temporally dependent rhythms and phenomena present in neural systems, we chose to examine multiplex networks of individual neurons with time scale dependent connections. To study these networks, we used transfer entropy--an information theoretic quantity that can measure both linear and nonlinear interactions--to systematically measure the connectivity between individual neurons at different time scales in cortical and hippocampal slice cultures. We recorded the spiking activity of almost 12,000 neurons across 60 tissue samples using a 512-electrode array with 60 micrometer inter-electrode spacing and 50 microsecond temporal resolution. To the best of our knowledge, this preparation and recording method combines a number of recorded neurons with temporal and spatial resolutions superior to those of any currently available in vivo system. We found that highly connected neurons ("hubs") were localized to certain time scales, which, we hypothesize, increases the fault tolerance of the network. Conversely, a large proportion of non-hub neurons were not localized to certain time scales. In addition, we found that long and short time scale connectivity was uncorrelated. Finally, we found that long time scale networks were significantly less modular and more disassortative than short time scale networks in both tissue types. As far as we are aware, this analysis represents the first systematic study of temporally dependent multiplex networks among individual neurons.
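Transfer entropy, the central quantity in this abstract, can be sketched for binary spike trains. The estimator below is a minimal lag-1 plug-in version (an assumption for illustration), not the validated toolkit used in the study; it computes TE(x→y) = I(y_t ; x_{t−lag} | y_{t−lag}) in bits.

```python
import numpy as np

def transfer_entropy(x, y, lag=1):
    """Lag-`lag` transfer entropy TE(x -> y) for binary spike trains,
    in bits. Minimal plug-in estimator; no bias correction."""
    x = np.asarray(x, dtype=int)[:-lag]       # x in the past
    y_past = np.asarray(y, dtype=int)[:-lag]  # y's own past
    y_now = np.asarray(y, dtype=int)[lag:]    # y in the present
    te = 0.0
    for a in (0, 1):              # present of y
        for b in (0, 1):          # past of y
            for c in (0, 1):      # past of x
                p_abc = np.mean((y_now == a) & (y_past == b) & (x == c))
                p_bc = np.mean((y_past == b) & (x == c))
                p_ab = np.mean((y_now == a) & (y_past == b))
                p_b = np.mean(y_past == b)
                if p_abc > 0:     # skip empty cells (0 log 0 = 0)
                    te += p_abc * np.log2(p_abc * p_b / (p_bc * p_ab))
    return te

# Toy pair: y copies x with a one-step delay, so information flows x -> y
rng = np.random.default_rng(2)
x = (rng.random(5000) < 0.2).astype(int)
y = np.roll(x, 1)
y[0] = 0
te_xy = transfer_entropy(x, y)    # driver -> follower: large
te_yx = transfer_entropy(y, x)    # follower -> driver: near zero
```

The asymmetry (te_xy large, te_yx near zero) is what makes transfer entropy a *directed* connectivity measure, unlike the symmetric correlation.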


Subject(s)
Cerebral Cortex/cytology , Hippocampus/cytology , Nerve Net/cytology , Neurons/cytology , Action Potentials , Animals , Cerebral Cortex/physiology , Entropy , Hippocampus/physiology , Mice, Inbred C57BL , Models, Neurological , Nerve Net/physiology , Neurons/physiology
20.
PLoS One ; 9(8): e105324, 2014.
Article in English | MEDLINE | ID: mdl-25126851

ABSTRACT

Understanding the detailed circuitry of functioning neuronal networks is one of the major goals of neuroscience. Recent improvements in neuronal recording techniques have made it possible to record the spiking activity from hundreds of neurons simultaneously with sub-millisecond temporal resolution. Here we used a 512-channel multielectrode array system to record the activity from hundreds of neurons in organotypic cultures of cortico-hippocampal brain slices from mice. To probe the network structure, we employed a wavelet transform of the cross-correlogram to categorize the functional connectivity in different frequency ranges. With this method we directly compare, for the first time in any preparation, the neuronal network structures of cortex and hippocampus, on the scale of hundreds of neurons, with sub-millisecond time resolution. Among the three frequency ranges that we investigated, the lower two (the gamma (30-80 Hz) and beta (12-30 Hz) ranges) showed similar network structure between cortex and hippocampus, but there were many significant differences between these structures in the high frequency range (100-1000 Hz). The high frequency networks in cortex showed short-tailed degree distributions, shorter decay length of connectivity density, smaller clustering coefficients, and positive assortativity. Our results suggest that our method can characterize frequency-dependent differences of network architecture from different brain regions. Crucially, because these differences between brain regions require millisecond temporal scales to be observed and characterized, these results underscore the importance of high temporal resolution recordings for the understanding of functional networks in neuronal systems.
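The frequency-resolved cross-correlogram approach can be sketched in simplified form. The study applied a wavelet transform; the sketch below substitutes a plainer FFT band-power summary of the correlogram (a named simplification), with toy spike trains sharing a 40 Hz (gamma-band) rate modulation. All function names and parameters are illustrative assumptions.

```python
import numpy as np

def cross_correlogram(x, y, max_lag):
    """Mean-subtracted cross-correlogram of two binned spike trains,
    out to +/- max_lag bins."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    y = np.asarray(y, dtype=float) - np.mean(y)
    full = np.correlate(x, y, mode="full")
    mid = len(x) - 1                       # index of zero lag
    return full[mid - max_lag: mid + max_lag + 1]

def band_power(ccg, fs, lo, hi):
    """Summed spectral power of the correlogram within [lo, hi) Hz,
    for bin rate `fs` in Hz (FFT stand-in for the wavelet analysis)."""
    spec = np.abs(np.fft.rfft(ccg)) ** 2
    freqs = np.fft.rfftfreq(len(ccg), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs < hi)
    return spec[mask].sum()

# Toy pair: two trains driven by a shared 40 Hz rate, binned at 1 ms
fs = 1000.0
t = np.arange(10000) / fs
rate = 0.05 * (1 + np.sin(2 * np.pi * 40 * t))   # gamma-modulated rate
rng = np.random.default_rng(3)
a = (rng.random(t.size) < rate).astype(float)
b = (rng.random(t.size) < rate).astype(float)
ccg = cross_correlogram(a, b, max_lag=250)
gamma = band_power(ccg, fs, 30, 80)              # gamma range, per abstract
high = band_power(ccg, fs, 100, 500)             # high-frequency range
```

Because the shared modulation sits at 40 Hz, the correlogram's power concentrates in the gamma band, illustrating how correlogram structure can be assigned to the frequency ranges the abstract compares.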


Subject(s)
Hippocampus/physiology , Nerve Net , Action Potentials , Animals , Electroencephalography , Female , Hippocampus/cytology , Male , Mice, Inbred C57BL , Tissue Culture Techniques