ABSTRACT
Pattern separation is a valuable computational function performed by neuronal circuits, such as the dentate gyrus, where dissimilarity between inputs is increased, reducing noise and increasing the storage capacity of downstream networks. Pattern separation is studied from both in vivo experimental and computational perspectives, and a number of different measures (such as orthogonalisation, decorrelation, or spike train distance) have been applied to quantify it. However, these measures are known to yield conclusions that can differ qualitatively depending on the choice of measure and the parameters used to calculate it. Here we demonstrate that arbitrarily increasing sparsity, a noticeable feature of dentate granule cell firing and one that is believed to be key to pattern separation, typically improves classical measures of pattern separation, inappropriately, even up to the point where almost all information about the inputs is lost. Standard measures therefore cannot differentiate between pattern separation and pattern destruction, and give results that may depend on arbitrary parameter choices. We propose that techniques from information theory, in particular mutual information, transfer entropy, and redundancy, should be applied to penalise the potential for lost information (often due to increased sparsity) that is neglected by existing measures. We compare five commonly used measures of pattern separation with three novel techniques based on information theory, showing that the latter can be applied in a principled way and provide a robust and reliable measure for comparing the pattern separation performance of different neurons and networks. We demonstrate our new measures on detailed compartmental models of individual dentate granule cells and a dentate microcircuit, and show how structural changes associated with epilepsy affect pattern separation performance. We also demonstrate how our measures of pattern separation can predict pattern completion accuracy. Overall, our measures solve a widely acknowledged problem in assessing the pattern separation of neural circuits such as the dentate gyrus, as well as the cerebellum and mushroom body. Finally, we provide a publicly available toolbox allowing for easy analysis of pattern separation in spike train ensembles.
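As a minimal illustration of this point (a toy simulation with arbitrary parameters, not the published toolbox), the sketch below sparsifies correlated input patterns by thresholding. Sparsification tends to reduce output correlations, which classical measures report as better separation, while the per-cell mutual information between input and output collapses towards zero.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_patterns = 500, 40

# Correlated analog input drives: shared component + pattern-specific noise.
shared = rng.random(n_cells)
drive = 0.5 * shared + 0.5 * rng.random((n_patterns, n_cells))
inputs = (drive > 0.6).astype(float)               # binary input patterns

def mean_pairwise_corr(p):
    """Mean pairwise Pearson correlation between population vectors."""
    c = np.corrcoef(p)
    return np.nanmean(c[np.triu_indices_from(c, k=1)])

def mutual_information(x, y):
    """MI (bits) between two binary vectors via their joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=2)
    p = joint / joint.sum()
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

print(f"input corr = {mean_pairwise_corr(inputs):.3f}")
for frac_active in (0.2, 0.05, 0.005):             # increasing output sparsity
    thr = np.quantile(drive, 1 - frac_active, axis=1)[:, None]
    outputs = (drive > thr).astype(float)
    mi = np.mean([mutual_information(inputs[:, i], outputs[:, i])
                  for i in range(n_cells)])
    print(f"active frac {frac_active:5.3f}: output corr = "
          f"{mean_pairwise_corr(outputs):.3f}, per-cell MI = {mi:.3f} bits")
```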
Subjects
Dentate Gyrus, Information Theory, Dentate Gyrus/physiology, Neurons/physiology, Brain, Neurological Models
ABSTRACT
Investigating and modelling the functionality of human neurons remains challenging due to technical limitations, which result in scarce and incomplete 3D anatomical reconstructions. Here we used a morphological modelling approach based on optimal wiring to repair the parts of a dendritic morphology that were lost due to incomplete tissue samples. In Drosophila, where dendritic regrowth has been studied experimentally using laser ablation, we found that modelling the regrowth reproduced a bimodal distribution between regeneration of cut branches and invasion by neighbouring branches. Interestingly, our repair model followed growth rules similar to those for the generation of a new dendritic tree. To generalise the repair algorithm from Drosophila to mammalian neurons, we artificially sectioned reconstructed dendrites from mouse and human hippocampal pyramidal cell morphologies, and showed that the regrown dendrites were morphologically similar to the original ones. Furthermore, we were able to restore their electrophysiological functionality, as evidenced by the recovery of their firing behaviour. Importantly, we show that such repairs also apply to other neuron types, including hippocampal granule cells and cerebellar Purkinje cells. We then extrapolated the repair to incomplete human CA1 pyramidal neurons, where the anatomical boundaries of the particular brain areas innervated by the neurons in question were known. Interestingly, the repair of incomplete human dendrites helped to simulate the recently observed increased synaptic thresholds for dendritic NMDA spikes in human versus mouse dendrites. To make the repair tool available to the neuroscience community, we have developed an intuitive and simple graphical user interface (GUI), which is available in the TREES toolbox (www.treestoolbox.org).
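For orientation, here is a toy re-implementation of the greedy optimal-wiring growth rule (minimum spanning tree with a balancing factor, as popularised by the TREES toolbox) that such repairs build on. This is a sketch only; the actual repair tool additionally uses the intact part of the tree, the cut surface, and target density information.

```python
import numpy as np

def grow_optimal_wiring_tree(root, targets, bf=0.4):
    """Greedily connect target points to a growing tree.

    Each unconnected target is attached to the existing node that minimises
    (euclidean wiring cost) + bf * (resulting path length back to the root).
    Returns (parent_index, child_index) edges indexing nodes in the order
    they were connected (index 0 is the root).
    """
    nodes = [np.asarray(root, float)]
    path_len = [0.0]                         # path length from root per node
    remaining = [np.asarray(t, float) for t in targets]
    edges = []
    while remaining:
        best = None
        for ti, t in enumerate(remaining):
            for ni, n in enumerate(nodes):
                d = np.linalg.norm(t - n)
                cost = d + bf * (path_len[ni] + d)
                if best is None or cost < best[0]:
                    best = (cost, d, ti, ni)
        _, d, ti, ni = best
        nodes.append(remaining.pop(ti))
        path_len.append(path_len[ni] + d)
        edges.append((ni, len(nodes) - 1))
    return edges

# Example: regrow a tree towards 20 hypothetical target points lost distal to a cut.
rng = np.random.default_rng(0)
targets = rng.uniform(0, 100, size=(20, 3))
print(grow_optimal_wiring_tree([0.0, 0.0, 0.0], targets, bf=0.4))
```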
Subjects
Dendrites, Neurons, Humans, Mice, Animals, Dendrites/physiology, Neurons/physiology, Pyramidal Cells/physiology, Hippocampus/physiology, Drosophila, Mammals
ABSTRACT
The electrical and computational properties of neurons in our brains are determined by a rich repertoire of membrane-spanning ion channels and elaborate dendritic trees. However, the precise reason for this inherent complexity remains unknown, given that simpler models with fewer ion channels are also able to functionally reproduce the behaviour of some neurons. Here, we stochastically varied the ion channel densities of a biophysically detailed dentate gyrus granule cell model to produce a large population of putative granule cells, comparing those with all 15 original ion channels to their reduced but functional counterparts containing only 5 ion channels. Strikingly, valid parameter combinations were dramatically more frequent in the full models (~6%) than in the reduced models (~1%). The full models were also more stable in the face of perturbations to channel expression levels. Artificially scaling up the number of ion channels in the reduced models recovered these advantages, confirming the key contribution of the actual number of ion channel types. We conclude that the diversity of ion channels gives a neuron greater flexibility and robustness to achieve a target excitability.
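The population-modelling workflow described above can be sketched as below. The conductance names and values are arbitrary placeholders, and passes_acceptance_criteria() stands in for a hypothetical wrapper around the compartmental simulation and the study's electrophysiological acceptance criteria.

```python
import numpy as np

rng = np.random.default_rng(0)
# Arbitrary placeholder conductances (S/cm^2) standing in for the model's channel set.
baseline = {"gNa": 0.12, "gKdr": 0.036, "gKA": 0.01, "gCaT": 0.002, "gBK": 0.005}

def sample_population(baseline, n_models=10_000, spread=0.5):
    """Scale each maximal conductance independently, log-uniformly in
    [1 - spread, 1 + spread] times its baseline value."""
    keys = list(baseline)
    base = np.array([baseline[k] for k in keys])
    factors = np.exp(rng.uniform(np.log(1 - spread), np.log(1 + spread),
                                 size=(n_models, len(keys))))
    return [dict(zip(keys, base * f)) for f in factors]

population = sample_population(baseline)
# fraction_valid = np.mean([passes_acceptance_criteria(p) for p in population])
# where passes_acceptance_criteria() would run the detailed model and apply
# the study's validity checks (spike shape, firing rate, etc.).
print(len(population), population[0])
```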
Subjects
Neurological Models, Neurons, Action Potentials/physiology, Neurons/physiology, Ion Channels/physiology
ABSTRACT
Neuronal hyperexcitability is a pathological characteristic of Alzheimer's disease (AD). Three main mechanisms have been proposed to explain it: (i) dendritic degeneration leading to increased input resistance, (ii) ion channel changes leading to enhanced intrinsic excitability, and (iii) synaptic changes leading to excitation-inhibition (E/I) imbalance. However, the relative contribution of these mechanisms is not fully understood. Therefore, we performed biophysically realistic multi-compartmental modelling of neuronal excitability in reconstructed CA1 pyramidal neurons from wild-type and APP/PS1 mice, a well-established animal model of AD. We show that, for synaptic activation, the excitability-promoting effects of dendritic degeneration are cancelled out by decreased excitation due to synaptic loss. We find an interesting balance between excitability regulation and an enhanced degeneration in the basal dendrites of APP/PS1 cells, potentially leading to increased excitation by the apical but decreased excitation by the basal Schaffer collateral pathway. Furthermore, our simulations reveal three pathomechanistic scenarios that can account for the experimentally observed increase in firing and bursting of CA1 pyramidal neurons in APP/PS1 mice: scenario 1: enhanced E/I ratio; scenario 2: alteration of intrinsic ion channels (IAHP down-regulated; INaP, INa and ICaT up-regulated) in addition to enhanced E/I ratio; and scenario 3: increased excitatory burst input. Our work supports the hypothesis that pathological network and ion channel changes are major contributors to neuronal hyperexcitability in AD. Overall, our results are in line with the concept of multi-causality, according to which multiple different disruptions are separately sufficient but no single particular disruption is necessary for neuronal hyperexcitability. KEY POINTS: This work presents simulations of synaptically driven responses in pyramidal cells (PCs) with Alzheimer's disease (AD)-related dendritic degeneration. Dendritic degeneration alone alters PC responses to layer-specific input but additional pathomechanistic scenarios are required to explain neuronal hyperexcitability in AD as follows. Possible scenario 1: AD-related increased excitatory input together with decreased inhibitory input (E/I imbalance) can lead to hyperexcitability in PCs. Possible scenario 2: changes in E/I balance combined with altered ion channel properties can account for hyperexcitability in AD. Possible scenario 3: burst hyperactivity of the surrounding network can explain hyperexcitability of PCs during AD.
Subjects
Alzheimer Disease, Mice, Animals, Hippocampus/physiology, Neurons/physiology, Pyramidal Cells/physiology, Ion Channels/metabolism, Animal Disease Models
ABSTRACT
The formation of neuronal dendrite branches is fundamental for the wiring and function of the nervous system. Indeed, dendrite branching enhances the coverage of the neuron's receptive field and modulates the initial processing of incoming stimuli. Complex dendrite patterns are achieved in vivo through a dynamic process of de novo branch formation, branch extension and retraction. The first step towards branch formation is the generation of a dynamic filopodium-like branchlet. The mechanisms underlying the initiation of dendrite branchlets are therefore crucial to the shaping of dendrites. Through in vivo time-lapse imaging of the subcellular localization of actin during the process of branching of Drosophila larva sensory neurons, combined with genetic analysis and electron tomography, we have identified the Actin-related protein (Arp) 2/3 complex as the major actin nucleator involved in the initiation of dendrite branchlet formation, under the control of the activator WAVE and of the small GTPase Rac1. Transient recruitment of an Arp2/3 component marks the site of branchlet initiation in vivo. These data position the activation of Arp2/3 as an early hub for the initiation of branchlet formation.
Subjects
Actin-Related Protein 2-3 Complex/metabolism, Dendrites/metabolism, Actin Cytoskeleton/metabolism, Actin-Related Protein 2-3 Complex/genetics, Actins/metabolism, Animals, Drosophila, Drosophila melanogaster, Sensory Receptor Cells/metabolism
ABSTRACT
Artificial neural networks, taking inspiration from biological neurons, have become an invaluable tool for machine learning applications. Recent studies have developed techniques to effectively tune the connectivity of sparsely-connected artificial neural networks, which have the potential to be more computationally efficient than their fully-connected counterparts and more closely resemble the architectures of biological systems. We here present a normalisation, based on the biophysical behaviour of neuronal dendrites receiving distributed synaptic inputs, that divides the weight of an artificial neuron's afferent contacts by their number. We apply this dendritic normalisation to various sparsely-connected feedforward network architectures, as well as simple recurrent and self-organised networks with spatially extended units. The learning performance is significantly increased, providing an improvement over other widely-used normalisations in sparse networks. The results are two-fold, being both a practical advance in machine learning and an insight into how the structure of neuronal dendritic arbours may contribute to computation.
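A minimal numpy sketch of the normalisation described above, dividing each unit's incoming weights by its number of afferent contacts; this is illustrative only and not the authors' released implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, sparsity = 100, 20, 0.9

mask = (rng.random((n_in, n_out)) > sparsity).astype(float)   # sparse connectivity
weights = rng.standard_normal((n_in, n_out)) * mask

def dendritic_normalised_activation(x, weights, mask):
    """Pre-activation with each output unit's weights scaled by
    1 / (number of its non-zero afferent contacts)."""
    n_contacts = np.maximum(mask.sum(axis=0), 1.0)            # afferents per output unit
    return x @ (weights / n_contacts)

x = rng.standard_normal((5, n_in))                             # batch of 5 inputs
print(dendritic_normalised_activation(x, weights, mask).shape) # -> (5, 20)
```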
Subjects
Dendrites/physiology, Machine Learning, Neurological Models, Neural Networks (Computer), Action Potentials/physiology, Animals, Computational Biology, Deep Learning, Humans, Nerve Net/physiology, Neurons/physiology, Stochastic Processes
ABSTRACT
Throughout the animal kingdom, the structure of the central nervous system varies widely from distributed ganglia in worms to compact brains with varying degrees of folding in mammals. The differences in structure may indicate a fundamentally different circuit organization. However, the folded brain is most likely a direct result of mechanical forces, given that a larger cortical surface area must pack into the restricted volume provided by the skull. Here, we introduce a computational model that, instead of modeling mechanical forces, relies on dimension reduction methods to place neurons according to specific connectivity requirements. For a simplified connectivity with strong local and weak long-range connections, our model predicts a transition from separate ganglia through smooth brain structures to heavily folded brains as the number of cortical columns increases. The model reproduces experimentally determined relationships between metrics of cortical folding and its pathological phenotypes in lissencephaly, polymicrogyria, microcephaly, autism, and schizophrenia. This suggests that mechanical forces, which are known to lead to cortical folding, may synergistically contribute to arrangements that reduce wiring. Our model provides a unified conceptual understanding of gyrification linking cellular connectivity and macroscopic structures in large-scale neural network models of the brain.
Subjects
Cerebral Cortex/anatomy & histology, Cerebral Cortex/physiology, Neurological Models, Nerve Net/anatomy & histology, Nerve Net/physiology, Animals, Brain/anatomy & histology, Brain/physiology, Humans
ABSTRACT
Adult newborn hippocampal granule cells (abGCs) contribute to spatial learning and memory. abGCs are thought to play a specific role in pattern separation, distinct from developmentally born mature GCs (mGCs). Here we examine at which exact cell age abGCs are synaptically integrated into the adult network and which forms of synaptic plasticity are expressed in abGCs and mGCs. We used virus-mediated labeling of abGCs and mGCs to analyze changes in spine morphology as an indicator of plasticity in rats in vivo. High-frequency stimulation of the medial perforant path induced long-term potentiation in the middle molecular layer (MML) and long-term depression in the nonstimulated outer molecular layer (OML). This stimulation protocol elicited NMDA receptor-dependent homosynaptic spine enlargement in the MML and heterosynaptic spine shrinkage in the inner molecular layer and OML. Both processes were concurrently present on individual dendritic trees of abGCs and mGCs. Spine shrinkage counteracted spine enlargement and thus could play a homeostatic role, normalizing synaptic weights. Structural homosynaptic spine plasticity had a clear onset, appearing in abGCs by 28 d postinjection (dpi), followed by heterosynaptic spine plasticity at 35 dpi, and by 77 dpi was equally present in mature abGCs and mGCs. From 35 dpi on, about 60% of abGCs and mGCs showed significant homo- and heterosynaptic plasticity on the single-cell level. This demonstration of structural homo- and heterosynaptic plasticity in abGCs and mGCs defines the time course of the appearance of synaptic plasticity and integration for abGCs.
Subjects
Cytoplasmic Granules/metabolism, Dendritic Spines/physiology, Hippocampus/cytology, Hippocampus/physiology, Neuronal Plasticity/physiology, Neurons/physiology, Synapses/physiology, Animals, Newborn Animals, Cultured Cells, Electric Stimulation, Long-Term Potentiation, Male, Neurological Models, Neurons/cytology, Rats, Sprague-Dawley Rats
ABSTRACT
Neurons sharing similar features are often selectively connected with a higher probability and should be located in close vicinity to save wiring. Selective connectivity has, therefore, been proposed to be the cause for spatial organization in cortical maps. Interestingly, orientation preference (OP) maps in the visual cortex are found in carnivores, ungulates, and primates but are not found in rodents, indicating fundamental differences in selective connectivity that seem unexpected for closely related species. Here, we investigate this finding by using multidimensional scaling to predict the locations of neurons based on minimizing wiring costs for any given connectivity. Our model shows a transition from an unstructured salt-and-pepper organization to a pinwheel arrangement when increasing the number of neurons, even without changing the selectivity of the connections. Increasing neuronal numbers also leads to the emergence of layers, retinotopy, or ocular dominance columns for the selective connectivity corresponding to each arrangement. We further show that neuron numbers impact overall interconnectivity as the primary reason for the appearance of neural maps, which we link to a known phase transition in an Ising-like model from statistical mechanics. Finally, we curated biological data from the literature to show that neural maps appear as the number of neurons in visual cortex increases over a wide range of mammalian species. Our results provide a simple explanation for the existence of salt-and-pepper arrangements in rodents and pinwheel arrangements in the visual cortex of primates, carnivores, and ungulates without assuming differences in the general visual cortex architecture and connectivity.
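The core placement step can be sketched as classical multidimensional scaling on a connectivity-derived dissimilarity, so that strongly connected neurons are placed close together. This toy version omits the paper's full model and uses a hypothetical block-structured connectivity matrix as input.

```python
import numpy as np

def mds_layout(connectivity, dims=2):
    """Classical MDS on dissimilarity = 1 - normalised connection strength."""
    c = connectivity / connectivity.max()
    d = 1.0 - (c + c.T) / 2.0                       # symmetric dissimilarity
    np.fill_diagonal(d, 0.0)
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n             # double-centring matrix
    b = -0.5 * j @ (d ** 2) @ j
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:dims]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Example: two strongly intra-connected groups with weak cross connections.
rng = np.random.default_rng(0)
n = 40
conn = rng.random((n, n)) * 0.1
conn[:20, :20] += 0.9
conn[20:, 20:] += 0.9
coords = mds_layout(conn)
# The two groups separate along the first axis (means have opposite signs).
print(coords[:20, 0].mean(), coords[20:, 0].mean())
```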
Subjects
Biological Models, Nerve Net, Animals, Temperature
ABSTRACT
Neurons collect their inputs from other neurons by sending out arborized dendritic structures. However, the relationship between the shape of dendrites and the precise organization of synaptic inputs in the neural tissue remains unclear. Inputs could be distributed in tight clusters, entirely randomly or else in a regular grid-like manner. Here, we analyze dendritic branching structures using a regularity index R, based on average nearest neighbor distances between branch and termination points, characterizing their spatial distribution. We find that the distributions of these points depend strongly on cell types, indicating possible fundamental differences in synaptic input organization. Moreover, R is independent of cell size and we find that it is only weakly correlated with other branching statistics, suggesting that it might reflect features of dendritic morphology that are not captured by commonly studied branching statistics. We then use morphological models based on optimal wiring principles to study the relation between input distributions and dendritic branching structures. Using our models, we find that branch point distributions correlate more closely with the input distributions while termination points in dendrites are generally spread out more randomly with a close to uniform distribution. We validate these model predictions with connectome data. Finally, we find that in spatial input distributions with increasing regularity, characteristic scaling relationships between branching features are altered significantly. In summary, we conclude that local statistics of input distributions and dendrite morphology depend on each other leading to potentially cell type specific branching features.
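One common way to compute such a nearest-neighbour regularity index is a Clark-Evans-style normalisation in 3D, sketched below; the paper's exact definition and normalisation may differ in detail.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gamma

def regularity_index(points):
    """Mean nearest-neighbour distance divided by its expectation for a
    uniform random (Poisson) point process of the same density."""
    points = np.asarray(points, float)
    n, dim = points.shape
    d, _ = cKDTree(points).query(points, k=2)        # k=2: nearest neighbour besides self
    mean_nn = d[:, 1].mean()
    volume = np.prod(points.max(0) - points.min(0))  # crude bounding-box volume
    density = n / volume
    # Expected nearest-neighbour distance for a Poisson process in `dim` dimensions.
    expected = gamma(1 + 1 / dim) / (density * np.pi ** (dim / 2) /
                                     gamma(dim / 2 + 1)) ** (1 / dim)
    return mean_nn / expected

rng = np.random.default_rng(0)
print(regularity_index(rng.uniform(0, 100, (500, 3))))                       # ~1, random
print(regularity_index(np.mgrid[0:10, 0:10, 0:5].reshape(3, -1).T * 10.0))   # >1, grid-like
```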
Subjects
Computational Biology/methods, Dendrites/physiology, Computer-Assisted Image Processing/methods, Neurons/physiology, Animals, Cell Size, Computer Simulation, Connectome, Diptera, Neurological Models, Neuronal Plasticity, Automated Pattern Recognition, Software, Synapses/physiology
ABSTRACT
Dendrites form predominantly binary trees that are exquisitely embedded in the networks of the brain. While neuronal computation is known to depend on the morphology of dendrites, their underlying topological blueprint remains unknown. Here, we used a centripetal branch ordering scheme originally developed to describe river networks-the Horton-Strahler order (SO)-to examine hierarchical relationships of branching statistics in reconstructed and model dendritic trees. We report on a number of universal topological relationships with SO that are true for all binary trees and distinguish those from SO-sorted metric measures that appear to be cell type-specific. The latter are therefore potential new candidates for categorising dendritic tree structures. Interestingly, we find a faithful correlation of branch diameters with centripetal branch orders, indicating a possible functional importance of SO for dendritic morphology and growth. Also, simulated local voltage responses to synaptic inputs are strongly correlated with SO. In summary, our study identifies important SO-dependent measures in dendritic morphology that are relevant for neural function while at the same time it describes other relationships that are universal for all dendrites.
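For reference, the Horton-Strahler order of a tree given as a parent-index list can be computed as follows; this is a generic sketch of the standard definition, not the paper's code.

```python
def strahler_orders(parents):
    """Horton-Strahler order per node for a tree given as parent indices (root = -1).
    Terminals get order 1; a node is incremented when two or more children tie
    for the maximal order, otherwise it inherits that maximum."""
    children = {i: [] for i in range(len(parents))}
    root = None
    for node, par in enumerate(parents):
        if par < 0:
            root = node
        else:
            children[par].append(node)

    order = [0] * len(parents)
    def visit(node):
        if not children[node]:                 # termination point
            order[node] = 1
            return 1
        child_orders = [visit(c) for c in children[node]]
        top = max(child_orders)
        order[node] = top + 1 if child_orders.count(top) > 1 else top
        return order[node]

    visit(root)
    return order

# Example: root 0 with children 1 and 2; node 1 has children 3 and 4.
print(strahler_orders([-1, 0, 0, 1, 1]))   # -> [2, 2, 1, 1, 1]
```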
Subjects
Dendrites/ultrastructure, Anatomical Models, Neurological Models, Statistical Models, Neuronal Plasticity, Animals, Computer Simulation, Humans
ABSTRACT
Integration of synaptic currents across an extensive dendritic tree is a prerequisite for computation in the brain. Dendritic tapering away from the soma has been suggested to both equalise contributions from synapses at different locations and maximise the current transfer to the soma. To find out how this is achieved precisely, an analytical solution for the current transfer in dendrites with arbitrary taper is required. We derive here an asymptotic approximation that accurately matches results from numerical simulations. From this we then determine the diameter profile that maximises the current transfer to the soma. We find a simple quadratic form that matches diameters obtained experimentally, indicating a fundamental architectural principle of the brain that links dendritic diameters to signal transmission.
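The starting point of such an analysis can be written as the steady-state passive cable equation with a location-dependent diameter d(x), neglecting the small correction to membrane area from the taper itself. The quadratic taper below is an illustrative member of the family of profiles considered, not the paper's fitted coefficients.

```latex
% Passive steady-state cable equation with diameter profile d(x)
% (axial resistivity R_a, specific membrane resistance R_m):
\[
  \frac{\mathrm{d}}{\mathrm{d}x}\!\left[\frac{\pi\, d(x)^{2}}{4 R_a}\,
  \frac{\mathrm{d}V}{\mathrm{d}x}\right] \;=\; \frac{\pi\, d(x)}{R_m}\,V(x).
\]
% Illustrative quadratic taper over a branch of length \ell:
\[
  d(x) \;=\; d_{0}\left(1 - \frac{x}{\ell}\right)^{2}, \qquad 0 \le x \le \ell .
\]
```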
Subjects
Dendrites/physiology, Neurological Models, Synaptic Transmission/physiology, Algorithms, Animals, Brain/physiology, Computational Biology, Computer Simulation, Neurons/physiology
ABSTRACT
Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local field potential (LFP). Given that LFPs are generated by spatially separated currents across the neuronal membrane, they cannot be computed directly from quantities defined in models of point-like LIF neurons. Here, we explore the best approximation for predicting the LFP based on standard output from point-neuron LIF networks. To search for this best "LFP proxy", we compared LFP predictions from candidate proxies based on LIF network output (e.g., firing rates, membrane potentials, synaptic currents) with "ground-truth" LFP obtained when the LIF network synaptic input currents were injected into an analogous three-dimensional (3D) network model of multi-compartmental neurons with realistic morphologies and spatial distributions of somata and synapses. We found that a specific fixed linear combination of the LIF synaptic currents provided an accurate LFP proxy, accounting for most of the variance of the LFP time course observed in the 3D network for all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo.
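A generic sketch of such a "weighted sum of synaptic currents" proxy is shown below; the weights and delays are placeholders, not the values reported in the paper.

```python
import numpy as np

def lfp_proxy(i_ampa, i_gaba, dt, alpha=1.0, beta=1.5,
              delay_ampa_ms=6.0, delay_gaba_ms=0.0):
    """Linear combination of population-summed excitatory and inhibitory
    synaptic currents, each shifted by a fixed delay (ms); dt in ms."""
    def shift(x, delay_ms):
        k = int(round(delay_ms / dt))
        return np.concatenate([np.full(k, x[0]), x[:len(x) - k]]) if k else x
    return alpha * np.abs(shift(i_ampa, delay_ampa_ms)) \
         + beta * np.abs(shift(i_gaba, delay_gaba_ms))

# Usage with made-up current traces sampled at dt = 0.1 ms:
t = np.arange(0, 1000, 0.1)
i_ampa = -np.abs(np.sin(2 * np.pi * t / 100.0))      # arbitrary example traces
i_gaba = 0.5 * np.abs(np.cos(2 * np.pi * t / 100.0))
print(lfp_proxy(i_ampa, i_gaba, dt=0.1).shape)
```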
Subjects
Action Potentials/physiology, Brain Mapping/methods, Brain/physiology, Neurological Models, Nerve Net/physiology, Neurons/physiology, Computer Simulation, Electromagnetic Fields, Humans, Membrane Potentials/physiology, Synaptic Transmission/physiology
ABSTRACT
Dendritic morphology has been shown to have a dramatic impact on neuronal function. However, population features such as the inherent variability in dendritic morphology between cells belonging to the same neuronal type are often overlooked when studying computation in neural networks. While detailed models for morphology and electrophysiology exist for many types of single neurons, the role of detailed single cell morphology in the population has not been studied quantitatively or computationally. Here we use the structural context of the neural tissue in which dendritic trees exist to drive their generation in silico. We synthesize the entire population of dentate gyrus granule cells, the most numerous cell type in the hippocampus, by growing their dendritic trees within their characteristic dendritic fields bounded by the realistic structural context of (1) the granule cell layer that contains all somata and (2) the molecular layer that contains the dendritic forest. This process enables branching statistics to be linked to larger scale neuroanatomical features. We find large differences in dendritic total length and individual path length measures as a function of location in the dentate gyrus and of somatic depth in the granule cell layer. We also predict the number of unique granule cell dendrites invading a given volume in the molecular layer. This work enables the complete population-level study of morphological properties and provides a framework to develop complex and realistic neural network models.
Subjects
Computational Biology/methods, Computer Simulation, Dendrites/physiology, Neurological Models, Neuroanatomy/methods, Neurons/cytology, Animals, Dentate Gyrus/cytology, Rats
ABSTRACT
The wide diversity of dendritic trees is one of the most striking features of neural circuits. Here we develop a general quantitative theory relating the total length of dendritic wiring to the number of branch points and synapses. We show that optimal wiring predicts a 2/3 power law between these measures. We demonstrate that the theory is consistent with data from a wide variety of neurons across many different species and helps define the computational compartments in dendritic trees. Our results imply fundamentally distinct design principles for dendritic arbors compared with vascular, bronchial, and botanical trees.
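The 2/3 exponent can be motivated by a back-of-the-envelope scaling argument for optimally wired trees (a sketch, not the paper's full derivation): N targets spread over a volume V have a typical nearest-neighbour spacing of order (V/N)^{1/3}, and a minimally wired tree connects each target over roughly that distance, giving

```latex
\[
  L \;\sim\; N \cdot \left(\frac{V}{N}\right)^{1/3}
    \;=\; V^{1/3}\, N^{2/3},
  \qquad\text{so}\qquad L \propto N^{2/3}\ \text{at fixed spanning volume } V .
\]
```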
Subjects
Dendrites/physiology, Neurological Models, Neural Pathways/physiology, Neurons/physiology, Synapses/physiology, Animals, Humans, Neurons/ultrastructure
ABSTRACT
Background: Repetitive transcranial magnetic stimulation (rTMS) induces long-term changes of synapses, but the mechanisms behind these modifications are not fully understood. Although there has been progress in the development of multi-scale modeling tools, no comprehensive module for simulating rTMS-induced synaptic plasticity in biophysically realistic neurons exists. Objective: We developed a modelling framework that allows the replication and detailed prediction of long-term changes of excitatory synapses in neurons stimulated by rTMS. Methods: We implemented a voltage-dependent plasticity model that has been previously established for simulating frequency-, time-, and compartment-dependent spatio-temporal changes of excitatory synapses in neuronal dendrites. The plasticity model can be incorporated into biophysical neuronal models and coupled to electrical field simulations. Results: We show that the plasticity modelling framework replicates long-term potentiation (LTP)-like plasticity in hippocampal CA1 pyramidal cells evoked by 10-Hz repetitive magnetic stimulation (rMS). This plasticity was strongly distance dependent and concentrated at the proximal synapses of the neuron. We predicted a decrease in the plasticity amplitude for 5 Hz and 1 Hz protocols with decreasing frequency. Finally, we successfully modelled plasticity in distal synapses upon local electrical theta-burst stimulation (TBS) and predicted proximal and distal plasticity for rMS TBS. Notably, the rMS TBS-evoked synaptic plasticity exhibited robust facilitation by dendritic spikes and low sensitivity to inhibitory suppression. Conclusion: The plasticity modelling framework enables precise simulations of LTP-like cellular effects with high spatio-temporal resolution, enhancing the efficiency of parameter screening and the development of plasticity-inducing rTMS protocols.
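A toy example of a voltage-dependent plasticity update of the general kind used in such frameworks, where weight changes are gated by local dendritic depolarisation crossing LTD/LTP thresholds; the thresholds, amplitudes, and the exact rule used in the paper differ.

```python
import numpy as np

def voltage_dependent_update(w, v_dend, pre_spike_trace, dt,
                             theta_ltd=-65.0, theta_ltp=-45.0,
                             a_ltd=1e-5, a_ltp=5e-5, w_max=2.0):
    """One Euler step (dt in ms) of a toy voltage-dependent plasticity rule:
    depression when the dendritic voltage exceeds a low threshold, potentiation
    when it also exceeds a higher threshold while the presynaptic trace is active."""
    ltd = a_ltd * pre_spike_trace * max(v_dend - theta_ltd, 0.0)
    ltp = a_ltp * pre_spike_trace * max(v_dend - theta_ltp, 0.0)
    return float(np.clip(w + (ltp - ltd) * dt, 0.0, w_max))

# Usage: sustained depolarisation with an active presynaptic trace potentiates.
w = 1.0
for _ in range(1000):                       # 1000 steps of dt = 0.1 ms
    w = voltage_dependent_update(w, v_dend=-30.0, pre_spike_trace=1.0, dt=0.1)
print(w)
```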
ABSTRACT
Branching allows neurons to make synaptic contacts with large numbers of other neurons, facilitating the high connectivity of nervous systems. Neuronal arbors have geometric properties such as branch lengths and diameters that are optimal in that they maximize signaling speeds while minimizing construction costs. In this work, we asked whether neuronal arbors have topological properties that may also optimize their growth or function. We discovered that for a wide range of invertebrate and vertebrate neurons the distributions of their subtree sizes follow power laws, implying that they are scale invariant. The power-law exponent distinguishes different neuronal cell types. Postsynaptic spines and branchlets perturb scale invariance. Through simulations, we show that the subtree-size distribution depends on the symmetry of the branching rules governing arbor growth and that optimal morphologies are scale invariant. Thus, the subtree-size distribution is a topological property that recapitulates the functional morphology of dendrites.
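The topological quantity analysed here, the size of the subtree rooted at each node, can be computed from a parent-index representation as follows (a generic sketch); the resulting size distribution can then be checked against a power law.

```python
from collections import defaultdict

def subtree_sizes(parents):
    """Number of nodes in the subtree rooted at every node of a tree given as
    parent indices (root = -1). Assumes children have larger indices than their
    parent, as in most reconstruction formats (e.g. SWC ordering)."""
    children = defaultdict(list)
    for node, par in enumerate(parents):
        if par >= 0:
            children[par].append(node)
    sizes = [1] * len(parents)
    for node in reversed(range(len(parents))):
        for c in children[node]:
            sizes[node] += sizes[c]
    return sizes

# Example tree rooted at node 0 with a branch point at node 1.
print(subtree_sizes([-1, 0, 0, 1, 1]))   # -> [5, 3, 1, 1, 1]
```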
Subjects
Dendrites, Neurons, Dendrites/metabolism, Neurons/physiology, Morphogenesis
ABSTRACT
Dendritic spines are crucial for excitatory synaptic transmission as the size of a spine head correlates with the strength of its synapse. The distribution of spine head sizes follows a lognormal-like distribution with more small spines than large ones. We analysed the impact of synaptic activity and plasticity on the spine size distribution in adult-born hippocampal granule cells from rats with induced homo- and heterosynaptic long-term plasticity in vivo and in CA1 pyramidal cells from Munc13-1/Munc13-2 knockout mice with completely blocked synaptic transmission. Neither the induction of extrinsic synaptic plasticity nor the blockade of presynaptic activity degrades the lognormal-like distribution, although both change its mean, variance and skewness. The skewed distribution develops early in the life of the neuron. Our findings and their computational modelling support the idea that intrinsic synaptic plasticity is sufficient to generate the lognormal-like distribution of spine sizes, while a combination of intrinsic and extrinsic synaptic plasticity maintains it.
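As a minimal illustration of the idea that intrinsic dynamics alone can produce a lognormal-like distribution (not the authors' specific model), the sketch below lets the logarithm of each spine's size follow a mean-reverting random walk, whose stationary distribution is lognormal and therefore right-skewed.

```python
import numpy as np

rng = np.random.default_rng(0)
n_spines, n_steps = 10_000, 2000
tau, sigma, mu = 200.0, 0.6, np.log(0.08)        # arbitrary illustrative parameters

log_size = np.full(n_spines, np.log(0.1))
for _ in range(n_steps):
    # Mean-reverting multiplicative fluctuations (Ornstein-Uhlenbeck on log size).
    log_size += (mu - log_size) / tau + rng.normal(0.0, sigma / np.sqrt(tau), n_spines)

sizes = np.exp(log_size)
skew = float(((sizes - sizes.mean()) ** 3).mean() / sizes.std() ** 3)
print(f"median={np.median(sizes):.3f}  mean={sizes.mean():.3f}  skewness={skew:.2f}")
```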
Subjects
Neuronal Plasticity, Neurons, Mice, Rats, Animals, Neuronal Plasticity/physiology, Neurons/physiology, Pyramidal Cells/metabolism, Dendritic Spines/metabolism, Synaptic Transmission/physiology, Synapses/physiology, Neurogenesis
ABSTRACT
Neurons encounter unavoidable evolutionary trade-offs between multiple tasks. They must consume as little energy as possible while effectively fulfilling their functions. Cells displaying the best performance for such multi-task trade-offs are said to be Pareto optimal, with their ion channel configurations underpinning their functionality. Ion channel degeneracy, however, implies that multiple ion channel configurations can lead to functionally similar behaviour. Therefore, instead of a single model, neuroscientists often use populations of models with distinct combinations of ionic conductances. This approach is called population (database or ensemble) modelling. It remains unclear which ion channel parameters in the vast population of functional models are more likely to be found in the brain. Here we argue that Pareto optimality can serve as a guiding principle for addressing this issue by helping to identify the subpopulations of conductance-based models that perform best for the trade-off between economy and functionality. In this way, the high-dimensional parameter space of neuronal models might be reduced to geometrically simple low-dimensional manifolds, potentially explaining experimentally observed ion channel correlations. Conversely, Pareto inference might also help deduce neuronal functions from high-dimensional Patch-seq data. In summary, Pareto optimality is a promising framework for improving population modelling of neurons and their circuits.
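In practice, Pareto-optimal subpopulations can be extracted from a model database by non-dominated sorting over the competing objectives. Below is a sketch assuming two objectives to minimise, an energy cost and a functional error, with hypothetical random scores standing in for simulated models.

```python
import numpy as np

def pareto_front(costs):
    """Boolean mask of non-dominated rows in an (n_models, n_objectives) array,
    where lower is better in every objective."""
    costs = np.asarray(costs, float)
    optimal = np.ones(len(costs), dtype=bool)
    for i, c in enumerate(costs):
        others = np.delete(costs, i, axis=0)
        # Model i is dominated if some other model is at least as good everywhere
        # and strictly better in at least one objective.
        optimal[i] = not np.any(np.all(others <= c, axis=1) &
                                np.any(others < c, axis=1))
    return optimal

# Example: hypothetical (energy, functional-error) scores for a model population.
rng = np.random.default_rng(0)
scores = rng.random((1000, 2))
front = pareto_front(scores)
print(f"{front.sum()} of {len(scores)} models are Pareto-optimal")
```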