ABSTRACT
We can now measure the connectivity of every neuron in a neural circuit [1-9], but we cannot measure other biological details, including the dynamical characteristics of each neuron. The degree to which measurements of connectivity alone can inform the understanding of neural computation is an open question [10]. Here we show that with experimental measurements of only the connectivity of a biological neural network, we can predict the neural activity underlying a specified neural computation. We constructed a model neural network with the experimentally determined connectivity for 64 cell types in the motion pathways of the fruit fly optic lobe [1-5] but with unknown parameters for the single-neuron and single-synapse properties. We then optimized the values of these unknown parameters using techniques from deep learning [11], to allow the model network to detect visual motion [12]. Our mechanistic model makes detailed, experimentally testable predictions for each neuron in the connectome. We found that model predictions agreed with experimental measurements of neural activity across 26 studies. Our work demonstrates a strategy for generating detailed hypotheses about the mechanisms of neural circuit function from connectivity measurements. We show that this strategy is more likely to be successful when neurons are sparsely connected, a universally observed feature of biological neural networks across species and brain regions.
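An illustrative sketch of this kind of connectome-constrained optimization, in Python/PyTorch, under assumptions that are not from the paper: a hypothetical binary connectivity mask C, a generic rate-based neuron model, and a placeholder regression task standing in for visual motion detection.

    # Minimal sketch: the connectivity (who contacts whom) is fixed by measurement,
    # while synaptic strengths and membrane time constants are free parameters
    # optimized with gradient descent so that the network performs a task.
    import torch

    n = 64                                             # number of cell types (illustrative)
    C = (torch.rand(n, n) < 0.1).float()               # fixed, measured binary connectivity
    W = torch.nn.Parameter(0.1 * torch.randn(n, n))    # unknown synaptic strengths
    tau = torch.nn.Parameter(torch.ones(n))            # unknown membrane time constants
    opt = torch.optim.Adam([W, tau], lr=1e-2)

    def simulate(stimulus, T=100, dt=0.1):
        r = torch.zeros(stimulus.shape[0], n)          # firing rates, batch x neurons
        for _ in range(T):
            inp = r @ (W * C).T + stimulus             # synapses exist only where C == 1
            r = r + dt / torch.abs(tau) * (-r + torch.relu(inp))
        return r

    for step in range(1000):                           # task-driven optimization (placeholder task)
        stimulus, target = torch.randn(32, n), torch.randn(32, n)
        loss = ((simulate(stimulus) - target) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()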
ABSTRACT
Forming a complete picture of the relationship between neural activity and skeletal kinematics requires quantification of skeletal joint biomechanics during free behavior; however, without detailed knowledge of the underlying skeletal motion, inferring limb kinematics using surface-tracking approaches is difficult, especially for animals where the relationship between the surface and underlying skeleton changes during motion. Here we developed a videography-based method enabling detailed three-dimensional kinematic quantification of an anatomically defined skeleton in untethered freely behaving rats and mice. This skeleton-based model was constrained using anatomical principles and joint motion limits and provided skeletal pose estimates for a range of body sizes, even when limbs were occluded. Model-inferred limb positions and joint kinematics during gait and gap-crossing behaviors were verified by direct measurement of either limb placement or limb kinematics using inertial measurement units. Together we show that complex decision-making behaviors can be accurately reconstructed at the level of skeletal kinematics using our anatomically constrained model.
Subjects
Gait, Rodents, Animals, Rats, Mice, Biomechanical Phenomena, Joint Range of Motion
ABSTRACT
Neural oscillations are ubiquitously observed in many brain areas. One proposed functional role of these oscillations is that they serve as an internal clock, or 'frame of reference'. Information can be encoded by the timing of neural activity relative to the phase of such oscillations. In line with this hypothesis, there have been multiple empirical observations of such phase codes in the brain. Here we ask: what kinds of neural dynamics support phase coding of information with neural oscillations? We tackled this question by analyzing recurrent neural networks (RNNs) that were trained on a working memory task. The networks were given access to an external reference oscillation and tasked with producing an oscillation such that the phase difference between the reference and output oscillation maintains the identity of transient stimuli. We found that networks converged to stable oscillatory dynamics. Reverse engineering these networks revealed that each phase-coded memory corresponds to a separate limit cycle attractor. We characterized how the stability of the attractor dynamics depends on both reference oscillation amplitude and frequency, properties that can be experimentally observed. To understand the connectivity structures that underlie these dynamics, we showed that trained networks can be described as two phase-coupled oscillators. Using this insight, we condensed our trained networks to a reduced model consisting of two functional modules: one that generates an oscillation and one that implements a coupling function between the internal oscillation and the external reference. In summary, by reverse engineering the dynamics and connectivity of trained RNNs, we propose a mechanism by which neural networks can harness reference oscillations for working memory. Specifically, we propose that a phase-coding network generates autonomous oscillations which it couples to an external reference oscillation in a multi-stable fashion.
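A minimal sketch of the reduced two-module picture, assuming a sinusoidal coupling function with n-fold symmetry so that several phase offsets relative to the reference are stable; the coupling function recovered from trained RNNs will generally have a different shape.

    # Reduced model sketch: an internal oscillator phase-coupled to an external reference.
    # Each stable fixed point of the phase difference plays the role of one phase-coded memory.
    import numpy as np

    omega = 2 * np.pi * 8.0            # shared frequency of reference and internal oscillator
    K, n_states = 3.0, 4               # coupling strength, number of stored items (assumed)
    dt, steps = 1e-3, 5000

    theta_ref, theta_int = 0.0, np.random.default_rng(0).uniform(0, 2 * np.pi)
    for _ in range(steps):
        coupling = np.sin(n_states * (theta_ref - theta_int))   # n-fold symmetric coupling
        theta_ref += dt * omega
        theta_int += dt * (omega + K * coupling)

    offset = (theta_ref - theta_int) % (2 * np.pi)
    print(offset)                      # settles near one of the n_states offsets 2*pi*k/n_states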
Subjects
Brain, Short-Term Memory, Neural Networks (Computer)
ABSTRACT
Neural circuits can produce similar activity patterns from vastly different combinations of channel and synaptic conductances. These conductances are tuned for specific activity patterns but might also reflect additional constraints, such as metabolic cost or robustness to perturbations. How do such constraints influence the range of permissible conductances? Here we investigate how metabolic cost affects the parameters of neural circuits with similar activity in a model of the pyloric network of the crab Cancer borealis. We present a machine learning method that can identify a range of network models that generate activity patterns matching experimental data and find that neural circuits can consume widely different amounts of energy despite similar circuit activity. Furthermore, a reduced but still significant range of circuit parameters gives rise to energy-efficient circuits. We then examine the space of parameters of energy-efficient circuits and identify potential tuning strategies for low metabolic cost. Finally, we investigate the interaction between metabolic cost and temperature robustness. We show that metabolic cost can vary across temperatures but that robustness to temperature changes does not necessarily incur an increased metabolic cost. Our analyses show that despite metabolic efficiency and temperature robustness constraining circuit parameters, neural systems can generate functional, efficient, and robust network activity with widely disparate sets of conductances.
Subjects
Pylorus, Temperature
ABSTRACT
Single-molecule localization microscopy (SMLM) has had remarkable success in imaging cellular structures with nanometer resolution, but standard analysis algorithms require sparse emitters, which limits imaging speed and labeling density. Here, we overcome this major limitation using deep learning. We developed DECODE (deep context dependent), a computational tool that can localize single emitters at high density in three dimensions with highest accuracy for a large range of imaging modalities and conditions. In a public software benchmark competition, it outperformed all other fitters on 12 out of 12 datasets when comparing both detection accuracy and localization error, often by a substantial margin. DECODE allowed us to acquire fast dynamic live-cell SMLM data with reduced light exposure and to image microtubules at ultra-high labeling density. Packaged for simple installation and use, DECODE will enable many laboratories to reduce imaging times and increase localization density in SMLM.
Subjects
Deep Learning, Computer-Assisted Image Processing/methods, Single Molecule Imaging/methods, Animals, COS Cells, Chlorocebus aethiops, Factual Databases, Software
ABSTRACT
Recent advances in connectomics research enable the acquisition of increasing amounts of data about the connectivity patterns of neurons. How can we use this wealth of data to efficiently derive and test hypotheses about the principles underlying these patterns? A common approach is to simulate neuronal networks using a hypothesized wiring rule in a generative model and to compare the resulting synthetic data with empirical data. However, most wiring rules have at least some free parameters, and identifying parameters that reproduce empirical data can be challenging as it often requires manual parameter tuning. Here, we propose to use simulation-based Bayesian inference (SBI) to address this challenge. Rather than optimizing a fixed wiring rule to fit the empirical data, SBI considers many parametrizations of a rule and performs Bayesian inference to identify the parameters that are compatible with the data. It uses simulated data from multiple candidate wiring rule parameters and relies on machine learning methods to estimate a probability distribution (the 'posterior distribution over parameters conditioned on the data') that characterizes all data-compatible parameters. We demonstrate how to apply SBI in computational connectomics by inferring the parameters of wiring rules in an in silico model of the rat barrel cortex, given in vivo connectivity measurements. SBI identifies a wide range of wiring rule parameters that reproduce the measurements. We show how access to the posterior distribution over all data-compatible parameters allows us to analyze their relationship, revealing biologically plausible parameter interactions and enabling experimentally testable predictions. We further show how SBI can be applied to wiring rules at different spatial scales to quantitatively rule out invalid wiring hypotheses. Our approach is applicable to a wide range of generative models used in connectomics, providing a quantitative and efficient way to constrain model parameters with empirical connectivity data.
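A schematic of this workflow, sketched with the open-source sbi toolbox (interface as in recent releases; it may differ across versions). The simulator, the two-parameter prior, and the 'observed' summary statistics below are hypothetical placeholders, not the barrel cortex model.

    # Sketch: simulation-based inference for a wiring rule with two free parameters.
    import torch
    from sbi.inference import SNPE
    from sbi.utils import BoxUniform

    def simulate_wiring_rule(theta):
        # placeholder simulator: map rule parameters to summary statistics of connectivity
        a, b = theta[0].item(), theta[1].item()
        return torch.tensor([a + 0.1 * torch.randn(1).item(),
                             a * b + 0.1 * torch.randn(1).item()])

    prior = BoxUniform(low=torch.zeros(2), high=torch.ones(2))
    theta = prior.sample((5000,))                               # candidate rule parameters
    x = torch.stack([simulate_wiring_rule(t) for t in theta])   # simulated connectivity summaries

    inference = SNPE(prior=prior)
    density_estimator = inference.append_simulations(theta, x).train()
    posterior = inference.build_posterior(density_estimator)

    x_o = torch.tensor([0.4, 0.12])                             # measured summaries (placeholder)
    samples = posterior.sample((1000,), x=x_o)                  # all data-compatible parameters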
Subjects
Connectome, Animals, Rats, Connectome/methods, Bayes Theorem, Computer Simulation, Neurons/physiology, Machine Learning
ABSTRACT
We combine amortized neural posterior estimation with importance sampling for fast and accurate gravitational-wave inference. We first generate a rapid proposal for the Bayesian posterior using neural networks, and then attach importance weights based on the underlying likelihood and prior. This provides (1) a corrected posterior free from network inaccuracies, (2) a performance diagnostic (the sample efficiency) for assessing the proposal and identifying failure cases, and (3) an unbiased estimate of the Bayesian evidence. By establishing this independent verification and correction mechanism, we address some of the most frequent criticisms of deep learning for scientific inference. We carry out a large study analyzing 42 binary black hole mergers observed by LIGO and Virgo with the SEOBNRv4PHM and IMRPhenomXPHM waveform models. This shows a median sample efficiency of ≈10% (two orders of magnitude better than standard samplers) as well as a tenfold reduction in the statistical uncertainty in the log evidence. Given these advantages, we expect a significant impact on gravitational-wave inference, and for this approach to serve as a paradigm for harnessing deep learning methods in scientific applications.
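A sketch of the reweighting step and the two diagnostics mentioned above (sample efficiency and evidence estimate); the log-density arrays are placeholders for the waveform-model likelihood, the prior, and the network proposal evaluated at the proposal samples.

    # Attach importance weights w = p(d|theta) p(theta) / q(theta|d) to proposal samples,
    # then compute the sample efficiency and a Monte Carlo estimate of the evidence.
    import numpy as np
    from scipy.special import logsumexp

    def reweight(log_like, log_prior, log_proposal):
        log_w = log_like + log_prior - log_proposal
        log_evidence = logsumexp(log_w) - np.log(len(log_w))    # log of the mean raw weight
        w = np.exp(log_w - log_w.max())                         # stabilized weights
        efficiency = w.sum() ** 2 / (len(w) * (w ** 2).sum())   # equals 1 if proposal == posterior
        return w / w.sum(), efficiency, log_evidence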
ABSTRACT
BACKGROUND: Stroke is one of the most frequent diseases, and half of all stroke survivors are left with permanent impairment. Prediction of individual outcome is still difficult. Many, but not all, patients with stroke improve by approximately 0.7 times the initial impairment, a regularity that has been termed the proportional recovery rule. The present study aims to identify factors that predict motor outcome after stroke more accurately than currently possible and to examine associations between rehabilitation treatment and outcome. METHODS: The study is designed as a multi-centre prospective clinical observational trial. An extensive primary data set of clinical, neuroimaging, electrophysiological, and laboratory data will be collected within 96 h of stroke onset from patients with a relevant upper-extremity deficit, as indexed by a Fugl-Meyer Upper Extremity (FM-UE) score ≤ 50. At least 200 patients will be recruited. Clinical scores will include the FM-UE score (range 0-66, with 66 indicating unimpaired function), the Action Research Arm Test, the modified Rankin Scale, the Barthel Index and the Stroke-Specific Quality of Life Scale. Follow-up clinical scores and the types and amount of rehabilitation treatment applied will be documented in the rehabilitation hospitals. Final follow-up clinical scoring will be performed 90 days after the stroke event. The primary endpoint is the change in FM-UE, defined as the FM-UE score at 90 days minus the initial FM-UE score, divided by the initial FM-UE impairment. Changes in the other clinical scores serve as secondary endpoints. Machine learning methods will be employed to analyze the data and predict primary and secondary endpoints based on the primary data set and the different rehabilitation treatments. DISCUSSION: If successful, outcome and its relation to rehabilitation treatment in patients with acute motor stroke will be predictable more reliably than is currently possible, paving the way for personalized neurorehabilitation. An important regulatory aspect of this trial is the first-time implementation of systematic patient data transfer between emergency and rehabilitation hospitals, which are separate institutions in Germany. TRIAL REGISTRATION: This study was registered at ClinicalTrials.gov (NCT04688970) on 30 December 2020.
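For concreteness, the primary endpoint defined above amounts to the following computation (illustrative numbers only; not trial data).

    # Relative FM-UE change: points regained divided by the initial impairment (maximum score 66).
    def fm_ue_endpoint(fm_initial, fm_day90, max_score=66):
        return (fm_day90 - fm_initial) / (max_score - fm_initial)

    print(fm_ue_endpoint(30, 55))   # 25 points regained out of 36 lost -> about 0.69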
Subjects
Stroke Rehabilitation, Stroke, Humans, Precision Medicine, Prospective Studies, Quality of Life, Recovery of Function/physiology, Stroke/complications, Stroke Rehabilitation/methods, Upper Extremity
ABSTRACT
We demonstrate unprecedented accuracy for rapid gravitational wave parameter estimation with deep learning. Using neural networks as surrogates for Bayesian posterior distributions, we analyze eight gravitational wave events from the first LIGO-Virgo Gravitational-Wave Transient Catalog and find very close quantitative agreement with standard inference codes, but with inference times reduced from O(day) to 20 s per event. Our networks are trained using simulated data, including an estimate of the detector noise characteristics near the event. This encodes the signal and noise models within millions of neural-network parameters and enables inference for any observed data consistent with the training distribution, accounting for noise nonstationarity from event to event. Our algorithm, called "DINGO", sets a new standard in fast and accurate inference of physical parameters of detected gravitational wave events, which should enable real-time data analysis without sacrificing accuracy.
ABSTRACT
Understanding how rich dynamics emerge in neural populations requires models exhibiting a wide range of behaviors while remaining interpretable in terms of connectivity and single-neuron dynamics. However, it has been challenging to fit such mechanistic spiking networks at the single-neuron scale to empirical population data. To close this gap, we propose to fit such data at a mesoscale, using a mechanistic but low-dimensional and, hence, statistically tractable model. The mesoscopic representation is obtained by approximating a population of neurons as multiple homogeneous pools and modeling the dynamics of the aggregate population activity within each pool. We derive the likelihood of both single-neuron and connectivity parameters given this activity, which can then be used to optimize parameters by gradient ascent on the log likelihood or to perform Bayesian inference using Markov chain Monte Carlo (MCMC) sampling. We illustrate this approach using a model of generalized integrate-and-fire neurons for which mesoscopic dynamics have been previously derived and show that both single-neuron and connectivity parameters can be recovered from simulated data. In particular, our inference method extracts posterior correlations between model parameters, which define parameter subsets able to reproduce the data. We compute the Bayesian posterior for combinations of parameters using MCMC sampling and investigate how the approximations inherent in a mesoscopic population model affect the accuracy of the inferred single-neuron parameters.
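A generic sketch of the MCMC step, with log_likelihood standing in for the mesoscopic population-model likelihood derived in the paper (a placeholder here, not that derivation).

    # Random-walk Metropolis sampling of single-neuron and connectivity parameters
    # given a (placeholder) log likelihood of the observed mesoscopic population activity.
    import numpy as np

    def metropolis(log_likelihood, log_prior, theta0, n_steps=10000, step=0.05, seed=0):
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, dtype=float)
        logp = log_likelihood(theta) + log_prior(theta)
        samples = []
        for _ in range(n_steps):
            proposal = theta + step * rng.standard_normal(theta.shape)
            logp_prop = log_likelihood(proposal) + log_prior(proposal)
            if np.log(rng.uniform()) < logp_prop - logp:        # accept with prob min(1, ratio)
                theta, logp = proposal, logp_prop
            samples.append(theta.copy())
        return np.array(samples)   # posterior samples expose correlations between parameters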
ABSTRACT
During perceptual decisions, the activity of sensory neurons covaries with choice, a covariation often quantified as "choice probability". Moreover, choices are influenced by a subject's previous choice (serial dependence), and neuronal activity often shows temporal correlations on long (seconds) timescales. Here, we test whether these findings are linked. Using generalized linear models, we analyze simultaneous measurements of behavior and V2 neural activity in macaques performing a visual discrimination task. Both decisions and spiking activity show substantial temporal correlations and cross-correlations, but they seem to reflect two mostly separate processes. Indeed, removing history effects using semipartial correlation analysis leaves choice probabilities largely unchanged. The serial dependencies in choices and neural activity therefore cannot explain the observed choice probability. Rather, serial dependencies in choices and spiking activity reflect two predominantly separate but parallel processes, which are coupled on each trial by covariations between choices and activity. These findings provide important constraints for computational models of perceptual decision-making that include feedback signals. SIGNIFICANCE STATEMENT: Correlations, unexplained by the sensory input, between the activity of sensory neurons and an animal's perceptual choice ("choice probabilities") have received attention from both a systems and a computational neuroscience perspective. Conversely, temporal correlations in both spiking activity ("non-stationarities") and a subject's choices in perceptual tasks ("serial dependencies") have long been established but have typically been ignored when measuring choice probabilities. Some accounts of choice probabilities incorporating feedback predict that these observations are linked. Here, we explore the extent to which this is the case. We find that, contrasting with these predictions, choice probabilities are largely independent of serial dependencies, which adds new constraints to accounts of choice probabilities that include feedback.
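A sketch of the choice-probability computation itself (the area under the ROC curve for choice-conditioned spike counts, via its equivalence to the Mann-Whitney U statistic); the history-removal step of the semipartial analysis is not shown, and the counts are made up.

    # Choice probability: probability that a spike count from choice-1 trials exceeds one
    # from choice-2 trials (ties count half) -- the ROC area between the two distributions.
    import numpy as np

    def choice_probability(counts_choice1, counts_choice2):
        c1 = np.asarray(counts_choice1, dtype=float)
        c2 = np.asarray(counts_choice2, dtype=float)
        greater = (c1[:, None] > c2[None, :]).sum()
        ties = (c1[:, None] == c2[None, :]).sum()
        return (greater + 0.5 * ties) / (len(c1) * len(c2))

    print(choice_probability([12, 15, 11, 14], [10, 13, 12, 9]))   # about 0.78 for these counts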
Subjects
Choice Behavior, Neurological Models, Visual Cortex/physiology, Animals, Discrimination (Psychology), Macaca mulatta, Male, Visual Perception
ABSTRACT
In recent years, two-photon calcium imaging has become a standard tool to probe the function of neural circuits and to study computations in neuronal populations. However, the acquired signal is only an indirect measurement of neural activity due to the comparatively slow dynamics of fluorescent calcium indicators. Different algorithms for estimating spike rates from noisy calcium measurements have been proposed in the past, but it is an open question how far performance can be improved. Here, we report the results of the spikefinder challenge, launched to catalyze the development of new spike rate inference algorithms through crowd-sourcing. We present ten of the submitted algorithms which show improved performance compared to previously evaluated methods. Interestingly, the top-performing algorithms are based on a wide range of principles from deep neural networks to generative models, yet provide highly correlated estimates of the neural activity. The competition shows that benchmark challenges can drive algorithmic developments in neuroscience.
Subjects
Action Potentials/physiology, Calcium/metabolism, Computational Biology/methods, Neurological Models, Algorithms, Animals, Calcium/chemistry, Calcium/physiology, Factual Databases, Mice, Molecular Imaging, Optical Imaging, Retina/cytology, Retinal Neurons/cytology, Retinal Neurons/metabolism
ABSTRACT
The rise of large-scale recordings of neuronal activity has fueled the hope to gain new insights into the collective activity of neural ensembles. How can one link the statistics of neural population activity to underlying principles and theories? One attempt to interpret such data builds upon analogies to the behaviour of collective systems in statistical physics. Divergence of the specific heat, a measure of population statistics derived from thermodynamics, has been used to suggest that neural populations are optimized to operate at a "critical point". However, these findings have been challenged by theoretical studies which have shown that common inputs can lead to diverging specific heat. Here, we connect "signatures of criticality", and in particular the divergence of specific heat, back to statistics of neural population activity commonly studied in neural coding: firing rates and pairwise correlations. We show that the specific heat diverges whenever the average correlation strength does not depend on population size. This is necessarily true when data with correlations are randomly subsampled during the analysis process, irrespective of the detailed structure or origin of the correlations. We also show how the characteristic shape of specific heat capacity curves depends on firing rates and correlations, using both analytically tractable models and numerical simulations of a canonical feed-forward population model. To analyze these simulations, we develop efficient methods for characterizing large-scale neural population activity with maximum entropy models. We find that, consistent with experimental findings, increases in firing rates and correlations directly lead to more pronounced signatures. Thus, previous reports of thermodynamical criticality in neural populations based on the analysis of specific heat can be explained by average firing rates and correlations, and are not indicative of an optimized coding strategy. We conclude that a reliable interpretation of statistical tests for theories of neural coding is possible only in reference to relevant ground-truth models.
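A sketch of the specific-heat computation at unit temperature, assuming access to a fitted population model's log-probability function; the maximum entropy fitting itself and the temperature sweep are not shown.

    # Specific heat at T = 1: c(1) = Var[-log p(x)] / N for a fitted model log_p evaluated
    # on population activity patterns (binary words). Tracking c(1) while randomly
    # subsampling neurons reproduces the growth with population size discussed above.
    import numpy as np

    def specific_heat(log_p, patterns):
        n_samples, N = patterns.shape              # patterns: (n_samples, N) binary array
        energies = -np.array([log_p(x) for x in patterns])
        return energies.var() / N

    # For temperatures T != 1, one would evaluate Var[E] / (N * T**2) under the tilted
    # distribution proportional to p(x)**(1/T), e.g. via reweighting or MCMC sampling.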
Subjects
Computational Biology/methods, Neurological Models, Neurons/physiology, Animals, Cats, Retinal Ganglion Cells/cytology, Retinal Ganglion Cells/physiology, Thermodynamics
ABSTRACT
In the perceptual sciences, experimenters study the causal mechanisms of perceptual systems by probing observers with carefully constructed stimuli. It has long been known, however, that perceptual decisions are determined not only by the stimulus, but also by internal factors. Internal factors could lead to a statistical influence of previous stimuli and responses on the current trial, resulting in serial dependencies, which complicate the causal inference between stimulus and response. However, the majority of studies do not take serial dependencies into account, and it has been unclear how strongly they influence perceptual decisions. We hypothesize that one reason for this neglect is that there has been no reliable tool to quantify them and to correct for their effects. Here we develop a statistical method to detect, estimate, and correct for serial dependencies in behavioral data. We show that even trained psychophysical observers suffer from strong history dependence. A substantial fraction of the decision variance on difficult stimuli was independent of the stimulus but dependent on experimental history. We discuss the strong dependence of perceptual decisions on internal factors and its implications for correct data interpretation.
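One common way to detect such history dependence, sketched here as a logistic regression of the current binary response on the current stimulus plus lagged stimuli and responses; the paper's actual model and correction procedure may differ in detail, and the data below are synthetic.

    # Nonzero weights on the lagged columns indicate serial dependencies in the choices.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def history_design(stimuli, responses, n_lags=3):
        X, y = [], []
        for t in range(n_lags, len(stimuli)):
            row = [stimuli[t]]                                       # current stimulus
            row += [stimuli[t - k] for k in range(1, n_lags + 1)]    # lagged stimuli
            row += [responses[t - k] for k in range(1, n_lags + 1)]  # lagged responses
            X.append(row)
            y.append(responses[t])
        return np.array(X), np.array(y)

    rng = np.random.default_rng(0)
    stimuli = rng.normal(size=1000)                                  # signed stimulus strengths
    responses = (stimuli + 0.5 * np.r_[0, stimuli[:-1]]              # synthetic history-dependent observer
                 + rng.normal(0, 1, 1000) > 0).astype(int)

    X, y = history_design(stimuli, responses)
    print(LogisticRegression().fit(X, y).coef_)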
Subjects
Decision Making/physiology, Statistical Models, Visual Perception/physiology, Humans, Photic Stimulation/methods, Psychophysics
ABSTRACT
Computational models, particularly finite-element (FE) models, are essential for interpreting experimental data and predicting system behavior, especially when direct measurements are limited. A major challenge in tuning these models is the large number of parameters involved. Traditional methods, such as one-by-one sensitivity analyses, are time-consuming, subjective, and often return only a single set of parameter values, focusing on reproducing averaged data rather than capturing the full variability of experimental measurements. In this study, we applied simulation-based inference (SBI) using neural posterior estimation (NPE) to tune an FE model of the human middle ear. The training dataset consisted of 10,000 FE simulations of stapes velocity, ear-canal (EC) input impedance, and absorbance, paired with seven FE parameter values randomly sampled within plausible ranges. The neural network learned the association between parameters and simulation outcomes, returning the probability distribution of parameter values that can reproduce experimental data. Our approach successfully identified parameter sets that reproduced three experimental datasets simultaneously. By accounting for experimental noise and variability during training, the method provided a probability distribution of parameters, representing all valid combinations that could fit the data, rather than tuning to averaged values. The network demonstrated robustness to noise and exhibited an efficient learning curve due to the large training dataset. SBI offers an objective alternative to laborious sensitivity analyses, providing probability distributions for each parameter and uncovering interactions between them. This method can be applied to any biological FE model, and we demonstrated its effectiveness using a middle-ear model. Importantly, it holds promise for objective differential diagnosis of conductive hearing loss by providing insight into the mechanical properties of the middle ear.
ABSTRACT
Ongoing advances in experimental technique are making it commonplace to record simultaneously from tens to hundreds of cortical neurons at high temporal resolution. Latent population models, including Gaussian-process factor analysis and hidden linear dynamical system (LDS) models, have proven effective at capturing the statistical structure of such data sets. They can be estimated efficiently, yield useful visualisations of population activity, and are also integral building blocks of decoding algorithms for brain-machine interfaces (BMIs). One practical challenge, particularly for LDS models, is that when parameters are learned using realistic volumes of data, the resulting models often fail to reflect the true temporal continuity of the dynamics; indeed, they may describe biologically implausible, unstable population dynamics, that is, they may predict neural activity that grows without bound. We propose a method for learning LDS models based on expectation maximisation that constrains parameters to yield stable systems and at the same time promotes capture of temporal structure by appropriate regularisation. We show that when little training data is available, our method yields LDS parameter estimates which provide a substantially better statistical description of the data than alternatives, whilst guaranteeing stable dynamics. We demonstrate our methods using both synthetic data and extracellular multi-electrode recordings from motor cortex.
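A sketch of the stability criterion, plus one crude remedy (rescaling the dynamics matrix to spectral radius below 1); the paper's approach instead builds the stability constraint and regularization into the EM procedure, which is not reproduced here.

    # A latent LDS x_{t+1} = A x_t + noise is stable iff the spectral radius of A is below 1;
    # otherwise predicted activity grows without bound.
    import numpy as np

    def spectral_radius(A):
        return np.max(np.abs(np.linalg.eigvals(A)))

    def stabilize(A, margin=0.99):
        rho = spectral_radius(A)
        return A if rho < margin else A * (margin / rho)    # crude projection, not constrained EM

    A_hat = np.array([[1.05, 0.2],                          # e.g. an M-step estimate from little data
                      [0.0, 0.9]])
    print(spectral_radius(A_hat))                           # 1.05: unstable, trajectories diverge
    print(spectral_radius(stabilize(A_hat)))                # rescaled below 1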
Subjects
Artificial Intelligence, Neural Networks (Computer), Algorithms, Animals, Computer Simulation, Statistical Data Interpretation, Implanted Electrodes, Likelihood Functions, Linear Models, Macaca mulatta, Neurological Models, Motor Cortex/physiology, Nerve Net/physiology, Normal Distribution, Population Dynamics, User-Computer Interface
ABSTRACT
Inferring parameters of computational models that capture experimental data is a central task in cognitive neuroscience. Bayesian statistical inference methods usually require the ability to evaluate the likelihood of the model; however, for many models of interest in cognitive neuroscience, the associated likelihoods cannot be computed efficiently. Simulation-based inference (SBI) offers a solution to this problem by requiring only access to simulations produced by the model. Previously, Fengler et al. introduced likelihood approximation networks (LANs; Fengler et al., 2021), which make it possible to apply SBI to models of decision-making but require billions of simulations for training. Here, we provide a new SBI method that is substantially more simulation efficient. Our approach, mixed neural likelihood estimation (MNLE), trains neural density estimators on model simulations to emulate the simulator, and is designed to capture both the continuous (e.g., reaction times) and discrete (e.g., choices) data of decision-making models. The likelihoods of the emulator can then be used to perform Bayesian parameter inference on experimental data using standard approximate inference methods like Markov chain Monte Carlo sampling. We demonstrate MNLE on two variants of the drift-diffusion model and show that it is substantially more efficient than LANs: MNLE achieves similar likelihood accuracy with six orders of magnitude fewer training simulations, and is significantly more accurate than LANs when both are trained with the same budget. Our approach enables researchers to perform SBI on custom-tailored models of decision-making, leading to fast iteration of model design for scientific discovery.
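A sketch of the kind of simulator MNLE is aimed at: a drift-diffusion process emitting a continuous reaction time and a discrete choice per trial. The neural likelihood emulator itself is not shown, and the parameter values are arbitrary.

    # Drift-diffusion simulator producing mixed data: (reaction time, choice) per trial.
    import numpy as np

    def simulate_ddm(drift, boundary=1.0, noise=1.0, dt=1e-3, max_t=5.0, seed=None):
        rng = np.random.default_rng(seed)
        x, t = 0.0, 0.0
        while abs(x) < boundary and t < max_t:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choice = int(x >= boundary)          # 1 = upper boundary, 0 = lower boundary (or timeout)
        return t, choice

    print([simulate_ddm(drift=0.8, seed=s) for s in range(5)])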
Subjects
Algorithms, Research Design, Bayes Theorem, Computer Simulation, Markov Chains, Monte Carlo Method
ABSTRACT
The neurons in the cerebral cortex are not randomly interconnected. This specificity in wiring can result from synapse formation mechanisms that connect neurons depending on their electrical activity and genetically defined identity. Here, we report that the morphological properties of the neurons provide an additional prominent source by which wiring specificity emerges in cortical networks. This morphologically determined wiring specificity reflects similarities between the neurons' axo-dendritic projection patterns, the packing density, and the cellular diversity of the neuropil. The higher these three factors are, the more recurrent is the topology of the network. Conversely, the lower these factors are, the more feedforward is the network's topology. These principles predict the empirically observed occurrences of clusters of synapses, cell type-specific connectivity patterns, and nonrandom network motifs. Thus, we demonstrate that wiring specificity emerges in the cerebral cortex at subcellular, cellular, and network scales from the specific morphological properties of its neuronal constituents.
Subjects
Cerebral Cortex, Neurons, Neurological Models, Nerve Net/physiology, Neurons/physiology, Synapses/physiology
ABSTRACT
A striking feature of cortical organization is that the encoding of many stimulus features, for example, orientation or direction selectivity, is arranged into topographic maps. Functional imaging methods such as optical imaging of intrinsic signals, voltage-sensitive dye imaging or functional magnetic resonance imaging are important tools for studying the structure of cortical maps. As functional imaging measurements are usually noisy, statistical processing is necessary to extract maps from the imaging data. Here we present a probabilistic model of functional imaging data based on Gaussian processes. In comparison to conventional approaches, our model yields superior estimates of cortical maps from smaller amounts of data. In addition, we obtain quantitative uncertainty estimates, i.e., error bars on properties of the estimated map. We use our probabilistic model to study the coding properties of the map and the role of noise correlations by decoding the stimulus from single trials of an imaging experiment.
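A minimal sketch of GP-based map estimation on a one-dimensional pixel grid with a squared-exponential kernel; the paper's model for two-dimensional cortical maps and its treatment of noise are richer than this.

    # Gaussian-process posterior mean (de-noised map) and variance (error bars) from noisy data.
    import numpy as np

    def rbf_kernel(xa, xb, lengthscale=3.0, variance=1.0):
        d2 = (xa[:, None] - xb[None, :]) ** 2
        return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

    rng = np.random.default_rng(1)
    pixels = np.arange(50.0)                          # measurement locations (1-D grid)
    true_map = np.sin(pixels / 5.0)                   # hypothetical underlying map
    y = true_map + 0.3 * rng.standard_normal(50)      # one noisy imaging measurement

    K = rbf_kernel(pixels, pixels)
    noise_var = 0.3 ** 2
    posterior_mean = K @ np.linalg.solve(K + noise_var * np.eye(50), y)
    posterior_var = np.diag(K - K @ np.linalg.solve(K + noise_var * np.eye(50), K))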