Results 1 - 20 of 28
1.
J Neurosci ; 44(5)2024 Jan 31.
Article in English | MEDLINE | ID: mdl-37989593

ABSTRACT

Scientists have long conjectured that the neocortex learns patterns in sensory data to generate top-down predictions of upcoming stimuli. In line with this conjecture, different responses to pattern-matching vs pattern-violating visual stimuli have been observed in both spiking and somatic calcium imaging data. However, it remains unknown whether these pattern-violation signals are different between the distal apical dendrites, which are heavily targeted by top-down signals, and the somata, where bottom-up information is primarily integrated. Furthermore, it is unknown how responses to pattern-violating stimuli evolve over time as an animal gains more experience with them. Here, we address these unanswered questions by analyzing responses of individual somata and dendritic branches of layer 2/3 and layer 5 pyramidal neurons tracked over multiple days in primary visual cortex of awake, behaving female and male mice. We use sequences of Gabor patches with patterns in their orientations to create pattern-matching and pattern-violating stimuli, and two-photon calcium imaging to record neuronal responses. Many neurons in both layers show large differences between their responses to pattern-matching and pattern-violating stimuli. Interestingly, these responses evolve in opposite directions in the somata and distal apical dendrites, with somata becoming less sensitive to pattern-violating stimuli and distal apical dendrites more sensitive. These differences between the somata and distal apical dendrites may be important for hierarchical computation of sensory predictions and learning, since these two compartments tend to receive bottom-up and top-down information, respectively.


Subject(s)
Calcium, Neocortex, Male, Female, Mice, Animals, Calcium/physiology, Neurons/physiology, Dendrites/physiology, Pyramidal Cells/physiology, Neocortex/physiology
2.
Annu Rev Neurosci ; 40: 603-627, 2017 07 25.
Article in English | MEDLINE | ID: mdl-28772102

ABSTRACT

A commonly observed neural correlate of working memory is firing that persists after the triggering stimulus disappears. Substantial effort has been devoted to understanding the many potential mechanisms that may underlie memory-associated persistent activity. These rely either on the intrinsic properties of individual neurons or on the connectivity within neural circuits to maintain the persistent activity. Nevertheless, it remains unclear which mechanisms are at play in the many brain areas involved in working memory. Herein, we first summarize the palette of different mechanisms that can generate persistent activity. We then discuss recent work that asks which mechanisms underlie persistent activity in different brain areas. Finally, we discuss future studies that might tackle this question further. Our goal is to bridge between the communities of researchers who study either single-neuron biophysical, or neural circuit, mechanisms that can generate the persistent activity that underlies working memory.
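Among the circuit mechanisms surveyed here, the simplest is recurrent self-excitation tuned to cancel a neuron's leak. A one-unit rate model (a generic textbook sketch with illustrative parameters, not a model from this review) shows how tuned feedback turns a transient input into persistent activity:

```python
import numpy as np

def simulate_rate_unit(w, tau=0.1, dt=0.001, t_total=2.0, t_stim=(0.2, 0.4)):
    """Leaky rate unit with recurrent self-excitation w:
    tau * dr/dt = -r + w*r + I(t).
    With w ~= 1 the feedback cancels the leak, so activity set up by a
    transient input persists after the input ends."""
    steps = int(t_total / dt)
    r = np.zeros(steps)
    for i in range(1, steps):
        t = i * dt
        I = 1.0 if t_stim[0] <= t < t_stim[1] else 0.0
        r[i] = r[i - 1] + dt * (-r[i - 1] + w * r[i - 1] + I) / tau
    return r

persistent = simulate_rate_unit(w=1.0)  # tuned feedback: activity is held
decaying = simulate_rate_unit(w=0.5)    # weak feedback: activity decays
```

With w = 1 the stimulus-driven activity remains at its offset value indefinitely; with w = 0.5 the effective time constant is finite and activity relaxes back to zero.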


Subject(s)
Action Potentials/physiology, Cerebral Cortex/physiology, Memory, Short-Term/physiology, Models, Neurological, Nerve Net/physiology, Animals, Neurons/physiology, Synaptic Transmission/physiology
3.
J Neurosci ; 40(30): 5807-5819, 2020 07 22.
Article in English | MEDLINE | ID: mdl-32561674

ABSTRACT

Simple stimuli have been critical to understanding neural population codes in sensory systems. Yet it remains necessary to determine the extent to which this understanding generalizes to more complex conditions. To examine this problem, we measured how populations of direction-selective ganglion cells (DSGCs) from the retinas of male and female mice respond to a global motion stimulus with its direction and speed changing dynamically. We then examined the encoding and decoding of motion direction in both individual and populations of DSGCs. Individual cells integrated global motion over ∼200 ms, and responses were tuned to direction. However, responses were sparse and broadly tuned, which severely limited decoding performance from small DSGC populations. In contrast, larger populations compensated for response sparsity, enabling decoding with high temporal precision (<100 ms). At these timescales, correlated spiking was minimal and had little impact on decoding performance, unlike results obtained using simpler local motion stimuli decoded over longer timescales. We use these data to define different DSGC population decoding regimes that use or mitigate correlated spiking to achieve high spatial versus high temporal resolution.

SIGNIFICANCE STATEMENT: ON-OFF direction-selective ganglion cells (ooDSGCs) in the mammalian retina are typically thought to signal local motion to the brain. However, several recent studies suggest they may signal global motion. Here we analyze the fidelity of encoding and decoding global motion in a natural scene across large populations of ooDSGCs. We show that large populations of ooDSGCs are capable of signaling rapid changes in global motion.
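The kind of direction decoding described above can be illustrated with a generic population-vector readout over broadly tuned, Poisson-variable cells (an illustrative sketch with made-up parameters; the paper's decoders are more sophisticated):

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells = 200
preferred = rng.uniform(0.0, 2.0 * np.pi, n_cells)  # preferred directions

def dsgc_responses(direction, gain=2.0, width=1.5):
    """Broadly tuned, sparse (Poisson) direction-selective spike counts."""
    rates = gain * np.exp(width * (np.cos(direction - preferred) - 1.0))
    return rng.poisson(rates)

def population_vector(spikes):
    """Sum unit vectors along each cell's preferred direction, weighted
    by its spike count; the resultant angle is the direction estimate."""
    x = np.sum(spikes * np.cos(preferred))
    y = np.sum(spikes * np.sin(preferred))
    return np.arctan2(y, x)

true_dir = 1.0
errors = []
for _ in range(200):
    est = population_vector(dsgc_responses(true_dir))
    errors.append(abs(np.angle(np.exp(1j * (est - true_dir)))))
mean_error = float(np.mean(errors))  # mean absolute angular error, radians
```

Even though each cell is broadly tuned and fires few spikes, pooling a few hundred cells yields a small mean angular error, mirroring the paper's point that larger populations compensate for response sparsity.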


Subject(s)
Motion Perception/physiology, Orientation/physiology, Photic Stimulation/methods, Retinal Ganglion Cells/physiology, Animals, Female, Male, Mice, Mice, Inbred C57BL, Mice, Inbred CBA
4.
J Sleep Res ; 28(4): e12806, 2019 08.
Article in English | MEDLINE | ID: mdl-30549130

ABSTRACT

Parkinson's disease (PD) is highly comorbid with sleep dysfunction. In contrast to motor symptoms, few therapeutic interventions exist to address sleep symptoms in PD. Subthalamic nucleus (STN) deep brain stimulation (DBS) treats advanced PD motor symptoms and may improve sleep architecture. As a proof of concept toward demonstrating that STN-DBS could be used to identify sleep stages commensurate with clinician-scored polysomnography (PSG), we developed a novel artificial neural network (ANN) that could trigger targeted stimulation in response to the sleep state inferred from STN local field potentials (LFPs) recorded from implanted DBS electrodes. STN LFP recordings were collected from nine PD patients via a percutaneous cable attached to the DBS lead during a full night's sleep (6-8 hr) with concurrent PSG. We trained a feedforward neural network to prospectively identify sleep stage with PSG-level accuracy from 30-s epochs of LFP recordings. Our model's sleep-stage predictions match clinician-identified sleep stages with a mean accuracy of 91% on held-out epochs. Furthermore, leave-one-group-out analysis also demonstrates 91% mean classification accuracy for novel subjects. These results, obtained across a typically heterogeneous sample of PD patients, suggest that spectral biomarkers could support automatic sleep-stage scoring in PD patients with implanted DBS devices. Further development of this model may also focus on adapting stimulation during specific sleep stages to treat targeted sleep deficits.
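The classification stage can be sketched as a minimal single-layer softmax network over per-epoch spectral features (entirely synthetic data and feature dimensions; the paper's ANN and its LFP-derived features are more elaborate):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "band power" features, one 3-vector per 30-s epoch, drawn as
# three Gaussian clusters standing in for three sleep states (the real
# model uses richer spectral features from STN LFPs).
centers = np.array([[3.0, 1.0, 0.5], [1.0, 3.0, 1.0], [0.5, 1.0, 3.0]])
X = np.vstack([c + 0.5 * rng.standard_normal((200, 3)) for c in centers])
y = np.repeat(np.arange(3), 200)

# Minimal single-layer softmax classifier trained by gradient descent.
W = np.zeros((3, 3))
b = np.zeros(3)
onehot = np.eye(3)[y]
for _ in range(1000):
    logits = X @ W.T + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = (p - onehot) / len(X)       # mean cross-entropy gradient
    W -= 0.1 * grad.T @ X
    b -= 0.1 * grad.sum(axis=0)

accuracy = float(np.mean((X @ W.T + b).argmax(axis=1) == y))
```

On these well-separated synthetic clusters the linear readout reaches high accuracy; the paper's 91% figure applies to real, noisier LFP epochs and a deeper network.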


Subject(s)
Parkinson Disease/complications, Polysomnography/methods, Sleep Stages/physiology, Subthalamic Nucleus/physiopathology, Female, Humans, Male, Middle Aged, Parkinson Disease/physiopathology
5.
J Vis ; 19(4): 29, 2019 04 01.
Article in English | MEDLINE | ID: mdl-31026016

ABSTRACT

Primary visual cortex (V1) is the first stage of cortical image processing, and a major effort in systems neuroscience is devoted to understanding how it encodes information about visual stimuli. Within V1, many neurons respond selectively to edges of a given preferred orientation: these are known as either simple or complex cells. Other neurons respond to localized center-surround image features. Still others respond selectively to certain image stimuli, but the specific features that excite them are unknown. Moreover, even for the simple and complex cells (the best-understood V1 neurons), it is challenging to predict how they will respond to natural image stimuli. Thus, there are important gaps in our understanding of how V1 encodes images. To fill this gap, we trained deep convolutional neural networks to predict the firing rates of V1 neurons in response to natural image stimuli, and we find that the predicted firing rates are highly correlated (CC_norm = 0.556 ± 0.01) with the neurons' actual firing rates over a population of 355 neurons. This performance value is quoted for all neurons, with no selection filter. Performance is better for more active neurons: when evaluated only on neurons with mean firing rates above 5 Hz, our predictors achieve correlations of CC_norm = 0.69 ± 0.01 with the neurons' true firing rates. We find that the firing rates of both orientation-selective and non-orientation-selective neurons can be predicted with high accuracy. Additionally, we use a variety of models to benchmark performance and find that our convolutional neural-network model makes more accurate predictions.
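The CC_norm metric quoted above rescales the raw prediction-response correlation by the best correlation any model could achieve given trial-to-trial noise. A sketch of one published estimator (following Schoppe et al., 2016; the paper may use a variant):

```python
import numpy as np

def cc_norm(pred, trials):
    """Normalized correlation coefficient: correlation between a model
    prediction and the trial-averaged response, divided by the maximum
    correlation attainable given trial-to-trial variability. One common
    estimator (Schoppe et al., 2016); variants exist."""
    n_trials = trials.shape[0]
    ybar = trials.mean(axis=0)
    cc_abs = np.corrcoef(pred, ybar)[0, 1]
    # Signal power: stimulus-driven variance with trial noise removed.
    sp = (np.var(trials.sum(axis=0), ddof=1)
          - np.sum(np.var(trials, axis=1, ddof=1))) / (n_trials * (n_trials - 1))
    cc_max = np.sqrt(sp / np.var(ybar, ddof=1))
    return cc_abs / cc_max

rng = np.random.default_rng(0)
signal = rng.standard_normal(1000)                       # "true" rate trace
trials = signal + 0.8 * rng.standard_normal((10, 1000))  # 10 noisy repeats
score = cc_norm(signal, trials)  # a perfect model of the signal scores near 1
```

A model that predicts the underlying signal exactly scores close to 1 even though its raw correlation with any noisy trial average is below 1; that is the point of the normalization.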


Subject(s)
Deep Learning, Neurons/physiology, Visual Cortex/physiology, Visual Perception/physiology, Animals, Macaca, Neural Networks, Computer, Orientation, Orientation, Spatial
6.
PLoS Comput Biol ; 13(4): e1005497, 2017 04.
Article in English | MEDLINE | ID: mdl-28419098

ABSTRACT

Sensory neurons give highly variable responses to stimulation, which can limit the amount of stimulus information available to downstream circuits. Much work has investigated the factors that affect the amount of information encoded in these population responses, leading to insights about the role of covariability among neurons, tuning curve shape, etc. However, the informativeness of neural responses is not the only relevant feature of population codes; of potentially equal importance is how robustly that information propagates to downstream structures. For instance, to quantify the retina's performance, one must consider not only the informativeness of the optic nerve responses, but also the amount of information that survives the spike-generating nonlinearity and noise corruption in the next stage of processing, the lateral geniculate nucleus. Our study identifies the set of covariance structures for the upstream cells that optimize the ability of information to propagate through noisy, nonlinear circuits. Within this optimal family are covariances with "differential correlations", which are known to reduce the information encoded in neural population activities. Thus, covariance structures that maximize information in neural population codes, and those that maximize the ability of this information to propagate, can be very different. Moreover, redundancy is neither necessary nor sufficient to make population codes robust against corruption by noise: redundant codes can be very fragile, and synergistic codes can-in some cases-optimize robustness against noise.


Subject(s)
Models, Neurological, Nerve Net/physiology, Sensory Receptor Cells/physiology, Computational Biology, Computer Simulation
7.
Entropy (Basel) ; 20(7)2018 Jun 23.
Article in English | MEDLINE | ID: mdl-33265579

ABSTRACT

Correlations in neural activity have been demonstrated to have profound consequences for sensory encoding. To understand how neural populations represent stimulus information, it is therefore necessary to model how pairwise and higher-order spiking correlations between neurons contribute to the collective structure of population-wide spiking patterns. Maximum entropy models are an increasingly popular method for capturing collective neural activity by including successively higher-order interaction terms. However, incorporating higher-order interactions in these models is difficult in practice due to two factors. First, the number of parameters exponentially increases as higher orders are added. Second, because triplet (and higher) spiking events occur infrequently, estimates of higher-order statistics may be contaminated by sampling noise. To address this, we extend previous work on the Reliable Interaction class of models to develop a normalized variant that adaptively identifies the specific pairwise and higher-order moments that can be estimated from a given dataset for a specified confidence level. The resulting "Reliable Moment" model is able to capture cortical-like distributions of population spiking patterns. Finally, we show that, compared with the Reliable Interaction model, the Reliable Moment model infers fewer strong spurious higher-order interactions and is better able to predict the frequencies of previously unobserved spiking patterns.
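The moment-selection idea can be sketched as a simple count threshold on co-spiking events (an illustrative fixed-count criterion; the paper derives its threshold from a specified confidence level rather than a fixed count):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
# Binarized population spike words: 5000 time bins x 8 neurons.
words = (rng.random((5000, 8)) < 0.08).astype(int)

def reliable_moments(words, order, min_count=20):
    """Keep only co-spiking moments of the given order whose raw event
    counts are large enough to estimate reliably; rare higher-order
    events are dropped as sampling noise."""
    kept = {}
    for idx in combinations(range(words.shape[1]), order):
        count = int(words[:, list(idx)].prod(axis=1).sum())
        if count >= min_count:
            kept[idx] = count / len(words)
    return kept

pairs = reliable_moments(words, order=2)     # common enough to keep
triplets = reliable_moments(words, order=3)  # too rare: filtered out
```

With these firing probabilities, pairwise co-spiking events occur often enough to estimate, while triplet events are too rare and are dropped, which is exactly the sampling-noise problem the abstract describes.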

8.
Hippocampus ; 26(5): 623-32, 2016 May.
Article in English | MEDLINE | ID: mdl-26482936

ABSTRACT

The dentate gyrus (DG) is thought to perform pattern separation on inputs received from the entorhinal cortex, such that the DG forms distinct representations of different input patterns. Neuronal responses, however, are known to be variable, and that variability has the potential to confuse the representations of different inputs, thereby hindering the pattern separation function. This variability can be especially problematic for tissues such as the DG, in which the responses can persist for tens of seconds following stimulation: the long response duration allows for variability from many different sources to accumulate. To understand how the DG can robustly encode different input patterns, we investigated a recently developed in vitro hippocampal DG preparation that generates persistent responses to transient electrical stimulation. For 10-20 s after stimulation, the responses are indicative of the pattern of stimulation that was applied, even though the responses exhibit significant trial-to-trial variability. Analyzing the dynamical trajectories of the evoked responses, we found that, following stimulation, the neural responses follow distinct paths through the space of possible neural activations, with a different path associated with each stimulation pattern. The neural responses' trial-to-trial variability shifts the responses along these paths rather than between them, maintaining the separability of the input patterns. Manipulations that redistributed the variability more isotropically over the space of possible neural activations impeded the pattern separation function. Consequently, we conclude that the confinement of neuronal variability to these one-dimensional paths mitigates the impacts of variability on pattern encoding and, thus, may be an important aspect of the DG's ability to robustly encode input patterns.


Subject(s)
Action Potentials/physiology, Dentate Gyrus/cytology, Dentate Gyrus/physiology, Neurons/physiology, Nonlinear Dynamics, Animals, Electric Stimulation, Excitatory Postsynaptic Potentials/physiology, In Vitro Techniques, Models, Neurological, Rats
9.
PLoS Comput Biol ; 10(2): e1003469, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24586128

ABSTRACT

Over repeat presentations of the same stimulus, sensory neurons show variable responses. This "noise" is typically correlated between pairs of cells, and a question with rich history in neuroscience is how these noise correlations impact the population's ability to encode the stimulus. Here, we consider a very general setting for population coding, investigating how information varies as a function of noise correlations, with all other aspects of the problem - neural tuning curves, etc. - held fixed. This work yields unifying insights into the role of noise correlations. These are summarized in the form of theorems, and illustrated with numerical examples involving neurons with diverse tuning curves. Our main contributions are as follows. (1) We generalize previous results to prove a sign rule (SR) - if noise correlations between pairs of neurons have opposite signs vs. their signal correlations, then coding performance will improve compared to the independent case. This holds for three different metrics of coding performance, and for arbitrary tuning curves and levels of heterogeneity. This generality is true for our other results as well. (2) As also pointed out in the literature, the SR does not provide a necessary condition for good coding. We show that a diverse set of correlation structures can improve coding. Many of these violate the SR, as do experimentally observed correlations. There is structure to this diversity: we prove that the optimal correlation structures must lie on boundaries of the possible set of noise correlations. (3) We provide a novel set of necessary and sufficient conditions, under which the coding performance (in the presence of noise) will be as good as it would be if there were no noise present at all.
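The sign rule (SR) in result (1) can be checked numerically with linear Fisher information, I = f'^T Sigma^{-1} f', for a pair of neurons whose tuning slopes give a positive signal correlation (a two-neuron toy case, far simpler than the paper's general setting):

```python
import numpy as np

# Tuning-curve slopes: both positive, so this pair has positive signal
# correlation.
fprime = np.array([1.0, 1.0])

def linear_fisher_info(rho):
    """Linear Fisher information I = f'^T Sigma^{-1} f' for two neurons
    with unit noise variance and noise correlation rho."""
    sigma = np.array([[1.0, rho], [rho, 1.0]])
    return float(fprime @ np.linalg.solve(sigma, fprime))

I_indep = linear_fisher_info(0.0)
I_opposite_sign = linear_fisher_info(-0.3)  # noise corr opposite to signal corr
I_same_sign = linear_fisher_info(0.3)
```

With f' = [1, 1], I(rho) = 2/(1 + rho), so a noise correlation opposite in sign to the signal correlation always increases information relative to the independent case, as the sign rule predicts.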


Subject(s)
Models, Neurological, Sensory Receptor Cells/physiology, Animals, Brain/physiology, Computational Biology, Linear Models, Mathematical Concepts, Nerve Net/physiology, Normal Distribution, Signal-To-Noise Ratio
10.
J Neurosci ; 33(13): 5475-85, 2013 Mar 27.
Article in English | MEDLINE | ID: mdl-23536063

ABSTRACT

Sparse coding models of natural scenes can account for several physiological properties of primary visual cortex (V1), including the shapes of simple cell receptive fields (RFs) and the highly kurtotic firing rates of V1 neurons. Current spiking network models of pattern learning and sparse coding require direct inhibitory connections between the excitatory simple cells, in conflict with the physiological distinction between excitatory (glutamatergic) and inhibitory (GABAergic) neurons (Dale's Law). At the same time, the computational role of inhibitory neurons in cortical microcircuit function has yet to be fully explained. Here we show that adding a separate population of inhibitory neurons to a spiking model of V1 provides conformance to Dale's Law, proposes a computational role for at least one class of interneurons, and accounts for certain observed physiological properties in V1. When trained on natural images, this excitatory-inhibitory spiking circuit learns a sparse code with Gabor-like RFs as found in V1 using only local synaptic plasticity rules. The inhibitory neurons enable sparse code formation by suppressing predictable spikes, which actively decorrelates the excitatory population. The model predicts that only a small number of inhibitory cells is required relative to excitatory cells and that excitatory and inhibitory input should be correlated, in agreement with experimental findings in visual cortex. We also introduce a novel local learning rule that measures stimulus-dependent correlations between neurons to support "explaining away" mechanisms in neural coding.


Subject(s)
Action Potentials/physiology, Interneurons/physiology, Models, Neurological, Nerve Net/physiology, Neural Inhibition/physiology, Visual Cortex/cytology, Animals, Computer Simulation, Humans, Learning/physiology, Nerve Net/cytology, Neural Pathways/physiology, Nonlinear Dynamics, Predictive Value of Tests, Statistics as Topic
11.
PLoS Comput Biol ; 9(8): e1003182, 2013.
Article in English | MEDLINE | ID: mdl-24009489

ABSTRACT

The sparse coding hypothesis has enjoyed much success in predicting response properties of simple cells in primary visual cortex (V1) based solely on the statistics of natural scenes. In typical sparse coding models, model neuron activities and receptive fields are optimized to accurately represent input stimuli using the least amount of neural activity. As these networks develop to represent a given class of stimulus, the receptive fields are refined so that they capture the most important stimulus features. Intuitively, this is expected to result in sparser network activity over time. Recent experiments, however, show that stimulus-evoked activity in ferret V1 becomes less sparse during development, presenting an apparent challenge to the sparse coding hypothesis. Here we demonstrate that some sparse coding models, such as those employing homeostatic mechanisms on neural firing rates, can exhibit decreasing sparseness during learning, while still achieving good agreement with mature V1 receptive field shapes and a reasonably sparse mature network state. We conclude that observed developmental trends do not rule out sparseness as a principle of neural coding per se: a mature network can perform sparse coding even if sparseness decreases somewhat during development. To make comparisons between model and physiological receptive fields, we introduce a new nonparametric method for comparing receptive field shapes using image registration techniques.
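The objective underlying these models minimizes reconstruction error plus an activity penalty, 0.5*||x - Da||^2 + lam*||a||_1. A sketch of the inference step using ISTA (a standard non-spiking formulation with a random dictionary; the models discussed here are spiking networks with learned receptive fields):

```python
import numpy as np

rng = np.random.default_rng(3)

def ista(x, D, lam=0.05, n_iter=500):
    """Sparse coding inference via ISTA: minimize
    0.5 * ||x - D @ a||^2 + lam * ||a||_1 over activities a."""
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = a - D.T @ (D @ a - x) / L    # gradient step on the quadratic term
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)           # unit-norm dictionary elements
a_true = np.zeros(128)
a_true[[3, 40, 90]] = [1.5, -2.0, 1.0]   # a genuinely sparse cause
x = D @ a_true
a_hat = ista(x, D)
```

The inferred activities reconstruct the input accurately while leaving most units silent, which is the "least amount of neural activity" criterion the abstract refers to.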


Subject(s)
Models, Neurological, Neurons/physiology, Visual Cortex/physiology, Visual Fields/physiology, Animals, Computational Biology, Computer Simulation, Ferrets, Macaca, Statistics, Nonparametric
12.
bioRxiv ; 2023 Jul 03.
Article in English | MEDLINE | ID: mdl-37461471

ABSTRACT

Information is processed by networks of neurons in the brain. On the timescale of sensory processing, those neuronal networks have relatively fixed anatomical connectivity, while functional connectivity, which defines the interactions between neurons, can vary depending on the ongoing activity of the neurons within the network. We thus hypothesized that different types of stimuli, which drive different neuronal activities in the network, could lead those networks to display stimulus-dependent functional connectivity patterns. To test this hypothesis, we analyzed electrophysiological data from the Allen Brain Observatory, which utilized Neuropixels probes to simultaneously record stimulus-evoked activity from hundreds of neurons across 6 different regions of mouse visual cortex. The recordings had single-cell resolution and high temporal fidelity, enabling us to determine fine-scale functional connectivity. Comparing the functional connectivity patterns observed when different stimuli were presented to the mice, we made several nontrivial observations. First, while the frequencies of different connectivity motifs (i.e., the patterns of connectivity between triplets of neurons) were preserved across stimuli, the identities of the neurons within those motifs changed. This means that functional connectivity dynamically changes along with the input stimulus, but does so in a way that preserves the motif frequencies. Secondly, we found that the degree to which functional modules are contained within a single brain region (as opposed to being distributed between regions) increases with increasing stimulus complexity. This suggests a mechanism for how the brain could dynamically alter its computations based on its inputs. Altogether, our work reveals unexpected stimulus-dependence to the way groups of neurons interact to process incoming sensory information.
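Motif frequencies over neuron triplets can be tallied from a binarized functional connectivity matrix. A crude sketch that bins triplets by edge count only (the paper distinguishes motifs by their full directed edge configuration; the matrix and its density here are made up):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
# Binarized directed functional connectivity between 30 neurons.
A = (rng.random((30, 30)) < 0.1).astype(int)
np.fill_diagonal(A, 0)

def motif_edge_histogram(A):
    """For every neuron triplet, count how many of the 6 possible
    directed edges are present and histogram the result over triplets."""
    hist = {}
    for i, j, k in combinations(range(A.shape[0]), 3):
        edges = A[i, j] + A[j, i] + A[i, k] + A[k, i] + A[j, k] + A[k, j]
        hist[edges] = hist.get(edges, 0) + 1
    return hist

hist = motif_edge_histogram(A)
```

Comparing such histograms across stimulus conditions, while separately tracking which neurons occupy each motif, distinguishes the two effects the abstract reports: preserved motif frequencies but changing motif membership.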

13.
Sci Data ; 10(1): 287, 2023 05 17.
Article in English | MEDLINE | ID: mdl-37198203

ABSTRACT

The apical dendrites of pyramidal neurons in sensory cortex receive primarily top-down signals from associative and motor regions, while cell bodies and nearby dendrites are heavily targeted by locally recurrent or bottom-up inputs from the sensory periphery. Based on these differences, a number of theories in computational neuroscience postulate a unique role for apical dendrites in learning. However, due to technical challenges in data collection, little data is available for comparing the responses of apical dendrites to cell bodies over multiple days. Here we present a dataset collected through the Allen Institute Mindscope's OpenScope program that addresses this need. This dataset comprises high-quality two-photon calcium imaging from the apical dendrites and the cell bodies of visual cortical pyramidal neurons, acquired over multiple days in awake, behaving mice that were presented with visual stimuli. Many of the cell bodies and dendrite segments were tracked over days, enabling analyses of how their responses change over time. This dataset allows neuroscientists to explore the differences between apical and somatic processing and plasticity.


Subject(s)
Pyramidal Cells, Visual Cortex, Animals, Mice, Cell Body, Dendrites/physiology, Neurons, Pyramidal Cells/physiology, Visual Cortex/physiology
14.
PLoS Comput Biol ; 7(10): e1002250, 2011 Oct.
Article in English | MEDLINE | ID: mdl-22046123

ABSTRACT

Sparse coding algorithms trained on natural images can accurately predict the features that excite visual cortical neurons, but it is not known whether such codes can be learned using biologically realistic plasticity rules. We have developed a biophysically motivated spiking network, relying solely on synaptically local information, that can predict the full diversity of V1 simple cell receptive field shapes when trained on natural images. This represents the first demonstration that sparse coding principles, operating within the constraints imposed by cortical architecture, can successfully reproduce these receptive fields. We further prove, mathematically, that sparseness and decorrelation are the key ingredients that allow for synaptically local plasticity rules to optimize a cooperative, linear generative image model formed by the neural representation. Finally, we discuss several interesting emergent properties of our network, with the intent of bridging the gap between theoretical and experimental studies of visual cortex.


Subject(s)
Action Potentials/physiology, Models, Neurological, Neuronal Plasticity/physiology, Neurons/physiology, Visual Cortex/physiology, Animals, Macaca, Rats, Visual Cortex/cytology
15.
Nat Commun ; 11(1): 4605, 2020 09 14.
Article in English | MEDLINE | ID: mdl-32929073

ABSTRACT

From starlight to sunlight, adaptation alters retinal output, changing both the signal and noise among populations of retinal ganglion cells (RGCs). Here we determine how these light level-dependent changes impact decoding of retinal output, testing the importance of accounting for RGC noise correlations to optimally read out retinal activity. We find that at moonlight conditions, correlated noise is greater and assuming independent noise severely diminishes decoding performance. In fact, assuming independence among a local population of RGCs produces worse decoding than using a single RGC, demonstrating a failure of population codes when correlated noise is substantial and ignored. We generalize these results with a simple model to determine what conditions dictate this failure of population processing. This work elucidates the circumstances in which accounting for noise correlations is necessary to take advantage of population-level codes and shows that sensory adaptation can strongly impact decoding requirements on downstream brain areas.
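The failure mode described here, where an independence-assuming decoder does worse than a single cell, can be reproduced in a two-neuron Gaussian sketch (all numbers illustrative, not fit to retinal data):

```python
import numpy as np

rng = np.random.default_rng(4)
dmu = np.array([1.0, 0.5])    # stimulus-driven change in the two mean rates
rho = 0.9                     # strong noise correlation, as at low light
Sigma = np.array([[1.0, rho], [rho, 1.0]])
L = np.linalg.cholesky(Sigma)

n = 20000
resp_A = (L @ rng.standard_normal((2, n))).T        # stimulus A trials
resp_B = dmu + (L @ rng.standard_normal((2, n))).T  # stimulus B trials

def accuracy(w):
    """Linear readout w with a midpoint decision threshold."""
    thresh = w @ (dmu / 2.0)
    return 0.5 * (np.mean(resp_A @ w < thresh) + np.mean(resp_B @ w > thresh))

acc_full = accuracy(np.linalg.solve(Sigma, dmu))  # correlation-aware decoder
acc_indep = accuracy(dmu)                         # assumes independent noise
acc_single = accuracy(np.array([1.0, 0.0]))       # best single neuron alone
```

With these numbers the independence-assuming readout falls below even the single-neuron decoder, while the correlation-aware readout beats both, mirroring the population-code failure the abstract describes.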


Subject(s)
Retina/physiology, Adaptation, Ocular/radiation effects, Animals, Bayes Theorem, Light, Linear Models, Night Vision/physiology, Photic Stimulation, Rats, Long-Evans, Retina/radiation effects, Retinal Ganglion Cells/physiology, Retinal Ganglion Cells/radiation effects
16.
Neural Netw ; 131: 103-114, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32771841

ABSTRACT

The current state-of-the-art object recognition algorithms, deep convolutional neural networks (DCNNs), are inspired by the architecture of the mammalian visual system and are capable of human-level performance on many tasks. It has been shown that, as they are trained for object recognition tasks, DCNNs develop hidden representations that resemble those observed in the mammalian visual system (Khaligh-Razavi and Kriegeskorte, 2014; Yamins and DiCarlo, 2016; Güçlü and van Gerven, 2015; McClure and Kriegeskorte, 2016). Moreover, DCNNs trained on object recognition tasks are currently among the best models we have of the mammalian visual system. This led us to hypothesize that teaching DCNNs to achieve even more brain-like representations could improve their performance. To test this, we trained DCNNs on a composite task, wherein networks were trained to: (a) classify images of objects; while (b) having intermediate representations that resemble those observed in neural recordings from monkey visual cortex. Compared with DCNNs trained purely for object categorization, DCNNs trained on the composite task had better object recognition performance and were more robust to label corruption. Interestingly, we found that neural data were not required for this process: randomized data with the same statistical properties as neural data also boosted performance. While the performance gains we observed when training on the composite task vs the "pure" object recognition task were modest, they were remarkably robust. Notably, we observed these performance gains across all network variations we studied, including: smaller (CORNet-Z) vs larger (VGG-16) architectures; variations in optimizers (Adam vs gradient descent); variations in activation function (ReLU vs ELU); and variations in network initialization. Our results demonstrate the potential utility of a new approach to training object recognition networks, using strategies in which the brain - or at least the statistical properties of its activation patterns - serves as a teacher signal for training DCNNs.
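The composite objective described above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' code: the representational-dissimilarity-matrix penalty, the weighting `alpha`, and all array shapes are illustrative assumptions.

```python
import numpy as np

def cross_entropy(logits, labels):
    """Standard softmax cross-entropy, averaged over the batch."""
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def rdm(acts):
    """Representational dissimilarity matrix: 1 - correlation between
    the activation patterns evoked by each pair of stimuli (rows)."""
    return 1.0 - np.corrcoef(acts)

def composite_loss(logits, labels, hidden_acts, target_acts, alpha=0.1):
    """Classification loss plus a penalty for mismatch between the
    network's RDM and the RDM of (real or randomized) 'neural' data."""
    task = cross_entropy(logits, labels)
    similarity = np.mean((rdm(hidden_acts) - rdm(target_acts)) ** 2)
    return task + alpha * similarity

# Toy example: 8 stimuli, 4 classes, a 16-unit hidden layer, 32 "neurons".
rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 4))
labels = rng.integers(0, 4, size=8)
hidden = rng.normal(size=(8, 16))
neural = rng.normal(size=(8, 32))
loss = composite_loss(logits, labels, hidden, neural)
```

In a real training loop the penalty would be differentiated through the network along with the task loss; the point here is only the shape of the objective, which also explains why randomized targets with neural-like statistics can act as a regularizer.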


Subject(s)
Models, Neurological; Neural Networks, Computer; Pattern Recognition, Automated/methods; Pattern Recognition, Visual; Animals; Haplorhini; Pattern Recognition, Automated/standards; Visual Cortex/physiology
17.
Front Cell Neurosci ; 14: 614377, 2020.
Article in English | MEDLINE | ID: mdl-33414707

ABSTRACT

A common feature of the primary processing structures of sensory systems is the presence of parallel output "channels" that convey different information about a stimulus. In the mammalian olfactory bulb, this is reflected in the mitral cells (MCs) and tufted cells (TCs) that have differing sensitivities to odors, with TCs being more sensitive than MCs. In this study, we examined potential mechanisms underlying the different responses of MCs vs. TCs. For TCs, we focused on superficial TCs (sTCs), which are a population of output TCs that reside in the superficial-most portion of the external plexiform layer, along with external tufted cells (eTCs), which are glutamatergic interneurons in the glomerular layer. Using whole-cell patch-clamp recordings in mouse bulb slices, we first measured excitatory currents in MCs, sTCs, and eTCs following olfactory sensory neuron (OSN) stimulation, separating the responses into a fast, monosynaptic component reflecting direct inputs from OSNs and a prolonged component partially reflecting eTC-mediated feedforward excitation. Responses were measured to a wide range of OSN stimulation intensities, simulating the different levels of OSN activity that would be expected to be produced by varying odor concentrations in vivo. Over a range of stimulation intensities, we found that the monosynaptic current varied significantly between the cell types, in the order of eTC > sTC > MC. The prolonged component was smaller in sTCs vs. both MCs and eTCs. sTCs also had much higher whole-cell input resistances than MCs, reflecting their smaller size and greater membrane resistivity. To evaluate how these different electrophysiological aspects contributed to spiking of the output MCs and sTCs, we used computational modeling. By exchanging the different cell properties in our modeled MCs and sTCs, we could evaluate each property's contribution to spiking differences between these cell types. This analysis suggested that the higher sensitivity of spiking in sTCs vs. MCs reflected both their larger monosynaptic OSN signal as well as their higher input resistance, while their smaller prolonged currents had a modest opposing effect. Taken together, our results indicate that both synaptic and intrinsic cellular features contribute to the production of parallel output channels in the olfactory bulb.
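The logic of the modeling comparison (a larger monosynaptic current plus a higher input resistance yields lower spiking thresholds in sTCs than MCs) can be illustrated with a simple leaky integrate-and-fire sketch. The parameter values below are illustrative, not fitted to the recordings, and the model is far simpler than the one used in the study.

```python
import numpy as np

def lif_spike_count(r_in, i_syn, v_th=20.0, tau=20.0, dt=0.1, t_max=200.0):
    """Leaky integrate-and-fire response to a step of synaptic current.
    r_in: input resistance (MOhm); i_syn: synaptic current (nA);
    v_th: spike threshold relative to rest (mV); tau: membrane time
    constant (ms). Returns the number of spikes in t_max milliseconds."""
    v, spikes = 0.0, 0
    for _ in range(int(t_max / dt)):
        v += dt / tau * (-v + r_in * i_syn)  # forward-Euler membrane update
        if v >= v_th:
            spikes += 1
            v = 0.0  # reset to rest after each spike
    return spikes

# sTCs get a larger monosynaptic OSN current AND have a higher input
# resistance than MCs, so the same stimulation drives them past threshold.
mc_spikes = lif_spike_count(r_in=100.0, i_syn=0.15)   # steady state 15 mV < threshold
stc_spikes = lif_spike_count(r_in=300.0, i_syn=0.25)  # steady state 75 mV >> threshold
```

Exchanging one property at a time between the two parameter sets (as the authors did with their more detailed models) isolates how much of the spiking difference each feature explains.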

18.
Phys Rev Lett ; 103(24): 241301, 2009 Dec 11.
Article in English | MEDLINE | ID: mdl-20366194

ABSTRACT

Future weak lensing surveys will map the evolution of matter perturbations and gravitational potentials, yielding a new test of general relativity on cosmic scales. They will probe the relations between matter overdensities, local curvature, and the Newtonian potential. These relations can be modified in alternative gravity theories or by the effects of massive neutrinos or exotic dark energy fluids. We introduce two functions of time and scale which account for any such modifications in the linear regime. We use a principal component analysis to find the eigenmodes of these functions that cosmological data will constrain. The number of constrained modes gives a model-independent forecast of how many parameters describing deviations from general relativity could be constrained, along with w(z). The modes' scale and time dependence tell us which theoretical models will be better tested.
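The mode-counting procedure can be illustrated with a toy example: diagonalize a Fisher matrix for binned values of a modified-gravity function and count the eigenmodes whose forecast error beats a threshold. The matrix below is synthetic and the unit error threshold is arbitrary; a real forecast would build the Fisher matrix from survey specifications.

```python
import numpy as np

# Toy Fisher matrix for 10 bins of a modified-gravity function. Built
# from 6 independent "measurements", so only 6 linear combinations of
# the bins are actually constrained by the data.
rng = np.random.default_rng(1)
A = rng.normal(size=(10, 6))
fisher = A @ A.T

# Principal component analysis: the eigenmodes of the Fisher matrix,
# sorted so the best-constrained modes come first.
eigvals, eigvecs = np.linalg.eigh(fisher)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# The forecast 1-sigma error on mode i is 1/sqrt(eigval_i); count modes
# constrained better than the (arbitrary) threshold of unity.
errors = 1.0 / np.sqrt(np.maximum(eigvals, 1e-30))
n_constrained = int(np.sum(errors < 1.0))
```

The eigenvectors (`eigvecs`) carry the physics: their scale and time dependence shows which deviations from general relativity the survey can actually test, which is the model-independent forecast described in the abstract.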

19.
Neuroscience ; 408: 191-203, 2019 06 01.
Article in English | MEDLINE | ID: mdl-30981865

ABSTRACT

Selecting and moving to spatial targets are critical components of goal-directed behavior, yet their neural bases are not well understood. The superior colliculus (SC) is thought to contain a topographic map of contralateral space in which the activity of specific neuronal populations corresponds to particular spatial locations. However, these spatial representations are modulated by several decision-related variables, suggesting that they reflect information beyond simply the location of an upcoming movement. Here, we examine the extent to which these representations arise from competitive spatial choice. We recorded SC activity in male mice performing a behavioral task requiring orienting movements to targets for a water reward in two contexts. In "competitive" trials, either the left or right target could be rewarded, depending on which stimulus was presented at the central port. In "noncompetitive" trials, the same target (e.g., left) was rewarded throughout an entire block. While both trial types required orienting movements to the same spatial targets, only in competitive trials do targets compete for selection. We found that in competitive trials, pre-movement SC activity predicted movement to contralateral targets, as expected. However, in noncompetitive trials, some neurons lost their spatial selectivity and in others activity predicted movement to ipsilateral targets. Consistent with these findings, unilateral optogenetic inactivation of pre-movement SC activity ipsiversively biased competitive, but not noncompetitive, trials. Incorporating these results into an attractor model of SC activity points to distinct pathways for orienting movements under competitive and noncompetitive conditions, with the SC specifically required for selecting among multiple potential targets.
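The competitive selection attributed to the SC is commonly modeled as mutual inhibition between target-encoding populations. A minimal sketch, with illustrative parameters (not the paper's fitted model), shows winner-take-all dynamics: the population with the stronger input suppresses its competitor.

```python
import numpy as np

def sc_competition(input_left, input_right, w_inh=1.5, tau=10.0,
                   dt=0.1, t_max=300.0):
    """Two mutually inhibiting rate populations representing the left and
    right targets. Returns the final firing rates after the competition
    settles; rates are rectified at zero."""
    r = np.zeros(2)
    drive = np.array([input_left, input_right])
    for _ in range(int(t_max / dt)):
        inhibition = w_inh * r[::-1]  # each population inhibits the other
        r += dt / tau * (-r + np.maximum(drive - inhibition, 0.0))
    return r

# "Competitive" trial: both targets receive input, with the side cued by
# the central stimulus slightly stronger; the attractor dynamics amplify
# this small difference into a categorical choice.
rates = sc_competition(input_left=1.0, input_right=1.2)
```

In this framework, a noncompetitive block corresponds to driving only one population (no competitor to suppress), which is consistent with the finding that unilateral SC inactivation biases competitive but not noncompetitive trials.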


Subject(s)
Decision Making/physiology; Neurons/physiology; Orientation, Spatial/physiology; Spatial Behavior/physiology; Superior Colliculi/physiology; Animals; Male; Mice; Movement/physiology; Optogenetics; Photic Stimulation; Reward
20.
Nat Neurosci ; 22(11): 1761-1770, 2019 11.
Article in English | MEDLINE | ID: mdl-31659335

ABSTRACT

Systems neuroscience seeks explanations for how the brain implements a wide variety of perceptual, cognitive and motor tasks. Conversely, artificial intelligence attempts to design computational systems based on the tasks they will have to solve. In artificial neural networks, the three components specified by design are the objective functions, the learning rules and the architectures. With the growing success of deep learning, which utilizes brain-inspired architectures, these three designed components have increasingly become central to how we model, engineer and optimize complex artificial learning systems. Here we argue that a greater focus on these components would also benefit systems neuroscience. We give examples of how this optimization-based framework can drive theoretical and experimental progress in neuroscience. We contend that this principled perspective on systems neuroscience will help to generate more rapid progress.


Subject(s)
Artificial Intelligence; Deep Learning; Neural Networks, Computer; Animals; Brain/physiology; Humans