ABSTRACT
Neural network pruning is a popular approach to reducing the computational cost of training and/or deploying a network while minimizing accuracy loss. Pruning methods that remove individual weights (fine granularity) can remove more total network parameters before reaching a given degree of accuracy loss, while methods that preserve some or all of a network's structure (coarser granularity, such as pruning channels from a CNN) take better advantage of hardware and software optimized for dense matrix computations. We compare intelligent iterative pruning, using several different criteria sampled from the literature, against random pruning at initialization across multiple granularities on two different architectures and three image classification tasks. Our work is the first direct and comprehensive investigation of the relationship between granularity and the efficacy of intelligent pruning relative to a random-pruning baseline. We find that the accuracy advantage of intelligent over random pruning decreases dramatically as granularity becomes coarser, with minimal advantage for intelligent pruning at granularities coarse enough to fully preserve network structure. For instance, at pruning rates where random pruning leaves ResNet-20 at 85.0% test accuracy on CIFAR-10 after 30,000 training iterations, intelligent weight pruning with the best-in-context criterion leaves it at about 90.0% accuracy (on par with the unpruned network), kernel pruning leaves it at about 86.5%, and channel pruning leaves it at about 85.5%. Our results suggest that, compared to coarse pruning, fine pruning combined with efficient implementation of the resulting networks is a more promising direction for easing the trade-off between high accuracy and low computational cost.
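To make the granularity distinction concrete, here is a minimal NumPy sketch (our illustration, not the paper's code) of magnitude-based pruning at weight versus channel granularity, with a random baseline at matched sparsity; the tensor shape and the 50% sparsity level are arbitrary choices.

```python
# Magnitude pruning at two granularities on a conv layer's weight tensor;
# all shapes and thresholds here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8, 3, 3))  # (out_channels, in_channels, kH, kW)
sparsity = 0.5

# Fine granularity: zero the individual weights with smallest |w|.
thresh = np.quantile(np.abs(W), sparsity)
W_fine = np.where(np.abs(W) > thresh, W, 0.0)

# Coarse granularity: zero whole output channels with smallest L1 norm.
norms = np.abs(W).reshape(W.shape[0], -1).sum(axis=1)
keep = norms >= np.quantile(norms, sparsity)
W_coarse = W * keep[:, None, None, None]

# Random-pruning baseline at the same sparsity, for comparison.
W_rand = W * (rng.random(W.shape) > sparsity)
print(f"fine: {np.mean(W_fine == 0):.2f}  coarse: {np.mean(W_coarse == 0):.2f}")
```

The coarse variant keeps the remaining weights as dense sub-tensors, which is why it maps well onto dense matrix hardware even though it removes capacity less selectively.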
ABSTRACT
Adaptation is a universal aspect of neural systems that changes circuit computations to match prevailing inputs. These changes facilitate efficient encoding of sensory inputs while avoiding saturation. Conventional artificial neural networks (ANNs) have limited adaptive capabilities, hindering their ability to reliably predict neural output under dynamic input conditions. Can embedding neural adaptive mechanisms in ANNs improve their performance? To answer this question, we develop a new deep learning model of the retina that incorporates the biophysics of photoreceptor adaptation at the front-end of conventional convolutional neural networks (CNNs). These conventional CNNs build on 'Deep Retina,' a previously developed model of retinal ganglion cell (RGC) activity. CNNs that include this new photoreceptor layer outperform conventional CNN models at predicting male and female primate and rat RGC responses to naturalistic stimuli that include dynamic local intensity changes and large changes in the ambient illumination. These improved predictions result directly from adaptation within the phototransduction cascade. This research underscores the potential of embedding models of neural adaptation in ANNs and using them to determine how neural circuits manage the complexities of encoding natural inputs that are dynamic and span a large range of light levels.
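As an illustration of the general architecture described above, an adaptive, stateful front-end feeding a conventional CNN, the sketch below substitutes a much simpler divisive-adaptation model for the paper's biophysical phototransduction cascade; the function name, time constant, and divisive form are our assumptions, not the authors' model.

```python
# A stateful adaptive front-end placed before a CNN. The paper uses a
# biophysical phototransduction model; this divisive-adaptation stand-in
# only illustrates the idea of per-pixel gain control with slow dynamics.
import numpy as np

def adaptive_frontend(stimulus, tau=50.0, dt=1.0, a0=1.0):
    """stimulus: (T, H, W) light intensities; returns adapted responses."""
    a = np.full(stimulus.shape[1:], a0)        # slow adaptation state per pixel
    out = np.empty_like(stimulus)
    for t, frame in enumerate(stimulus):
        out[t] = frame / (frame + a + 1e-8)    # divisive gain control
        a += (dt / tau) * (frame - a)          # state tracks mean intensity
    return out  # feed this into the conventional CNN (e.g., Deep Retina)

T, H, W = 200, 8, 8
stim = np.abs(np.random.default_rng(1).normal(1.0, 0.3, size=(T, H, W)))
adapted = adaptive_frontend(stim)
```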
Subjects
Neural Networks, Computer; Retina; Retinal Ganglion Cells; Animals; Retinal Ganglion Cells/physiology; Rats; Retina/physiology; Male; Female; Deep Learning; Adaptation, Physiological/physiology; Models, Neurological; Photic Stimulation
ABSTRACT
On the timescale of sensory processing, neuronal networks have relatively fixed anatomical connectivity, while functional interactions between neurons can vary depending on the ongoing activity of the neurons within the network. We thus hypothesized that different types of stimuli could lead those networks to display stimulus-dependent functional connectivity patterns. To test this hypothesis, we analyzed single-cell resolution electrophysiological data from the Allen Institute, with simultaneous recordings of stimulus-evoked activity from neurons across 6 different regions of mouse visual cortex. Comparing the functional connectivity patterns during different stimulus types, we made several nontrivial observations: (1) while the frequencies of different functional motifs were preserved across stimuli, the identities of the neurons within those motifs changed; (2) the degree to which functional modules are contained within a single brain region increases with stimulus complexity. Altogether, our work reveals unexpected stimulus-dependence to the way groups of neurons interact to process incoming sensory information.
Subjects
Nerve Net; Neurons; Photic Stimulation; Visual Cortex; Animals; Visual Cortex/physiology; Visual Cortex/cytology; Mice; Neurons/physiology; Nerve Net/physiology; Mice, Inbred C57BL; Male
ABSTRACT
Scientists have long conjectured that the neocortex learns patterns in sensory data to generate top-down predictions of upcoming stimuli. In line with this conjecture, different responses to pattern-matching vs pattern-violating visual stimuli have been observed in both spiking and somatic calcium imaging data. However, it remains unknown whether these pattern-violation signals are different between the distal apical dendrites, which are heavily targeted by top-down signals, and the somata, where bottom-up information is primarily integrated. Furthermore, it is unknown how responses to pattern-violating stimuli evolve over time as an animal gains more experience with them. Here, we address these unanswered questions by analyzing responses of individual somata and dendritic branches of layer 2/3 and layer 5 pyramidal neurons tracked over multiple days in primary visual cortex of awake, behaving female and male mice. We use sequences of Gabor patches with patterns in their orientations to create pattern-matching and pattern-violating stimuli, and two-photon calcium imaging to record neuronal responses. Many neurons in both layers show large differences between their responses to pattern-matching and pattern-violating stimuli. Interestingly, these responses evolve in opposite directions in the somata and distal apical dendrites, with somata becoming less sensitive to pattern-violating stimuli and distal apical dendrites more sensitive. These differences between the somata and distal apical dendrites may be important for hierarchical computation of sensory predictions and learning, since these two compartments tend to receive bottom-up and top-down information, respectively.
Subjects
Calcium; Neocortex; Male; Female; Mice; Animals; Calcium/physiology; Neurons/physiology; Dendrites/physiology; Pyramidal Cells/physiology; Neocortex/physiology
ABSTRACT
Information is processed by networks of neurons in the brain. On the timescale of sensory processing, those neuronal networks have relatively fixed anatomical connectivity, while functional connectivity, which defines the interactions between neurons, can vary depending on the ongoing activity of the neurons within the network. We thus hypothesized that different types of stimuli, which drive different neuronal activities in the network, could lead those networks to display stimulus-dependent functional connectivity patterns. To test this hypothesis, we analyzed electrophysiological data from the Allen Brain Observatory, which utilized Neuropixels probes to simultaneously record stimulus-evoked activity from hundreds of neurons across 6 different regions of mouse visual cortex. The recordings had single-cell resolution and high temporal fidelity, enabling us to determine fine-scale functional connectivity. Comparing the functional connectivity patterns observed when different stimuli were presented to the mice, we made several nontrivial observations. First, while the frequencies of different connectivity motifs (i.e., the patterns of connectivity between triplets of neurons) were preserved across stimuli, the identities of the neurons within those motifs changed. This means that functional connectivity dynamically changes along with the input stimulus, but does so in a way that preserves the motif frequencies. Second, we found that the degree to which functional modules are contained within a single brain region (as opposed to being distributed between regions) increases with increasing stimulus complexity. This suggests a mechanism for how the brain could dynamically alter its computations based on its inputs. Altogether, our work reveals unexpected stimulus-dependence to the way groups of neurons interact to process incoming sensory information.
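A minimal sketch of this style of analysis, under assumptions of our own choosing (synthetic Poisson spike counts, a correlation-based functional connectivity estimate, an arbitrary 0.1 binarization threshold, and undirected triplet motifs grouped by edge count):

```python
# Estimate a functional network from pairwise spike-count correlations,
# then tabulate triplet motif frequencies. Data and threshold are toy.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
counts = rng.poisson(3.0, size=(40, 500))            # 40 neurons x 500 time bins
fc = np.corrcoef(counts)                             # functional connectivity
adj = (np.abs(fc) > 0.1) & ~np.eye(40, dtype=bool)   # binarize, drop diagonal

motif_freq = np.zeros(4)                             # triplets with 0..3 edges
for i, j, k in combinations(range(40), 3):
    n_edges = int(adj[i, j]) + int(adj[i, k]) + int(adj[j, k])
    motif_freq[n_edges] += 1
print(motif_freq / motif_freq.sum())
```

Comparing these motif frequencies (and the identities of neurons within each motif) across stimulus conditions is the kind of contrast the abstract describes.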
ABSTRACT
The apical dendrites of pyramidal neurons in sensory cortex receive primarily top-down signals from associative and motor regions, while cell bodies and nearby dendrites are heavily targeted by locally recurrent or bottom-up inputs from the sensory periphery. Based on these differences, a number of theories in computational neuroscience postulate a unique role for apical dendrites in learning. However, due to technical challenges in data collection, little data is available for comparing the responses of apical dendrites to cell bodies over multiple days. Here we present a dataset collected through the Allen Institute Mindscope's OpenScope program that addresses this need. This dataset comprises high-quality two-photon calcium imaging from the apical dendrites and the cell bodies of visual cortical pyramidal neurons, acquired over multiple days in awake, behaving mice that were presented with visual stimuli. Many of the cell bodies and dendrite segments were tracked over days, enabling analyses of how their responses change over time. This dataset allows neuroscientists to explore the differences between apical and somatic processing and plasticity.
Subjects
Pyramidal Cells; Visual Cortex; Animals; Mice; Cell Body; Dendrites/physiology; Neurons; Pyramidal Cells/physiology; Visual Cortex/physiology
ABSTRACT
From starlight to sunlight, adaptation alters retinal output, changing both the signal and noise among populations of retinal ganglion cells (RGCs). Here we determine how these light level-dependent changes impact decoding of retinal output, testing the importance of accounting for RGC noise correlations to optimally read out retinal activity. We find that under moonlight conditions, correlated noise is greater and assuming independent noise severely diminishes decoding performance. In fact, assuming independence among a local population of RGCs produces worse decoding than using a single RGC, demonstrating a failure of population codes when correlated noise is substantial and ignored. We generalize these results with a simple model to determine what conditions dictate this failure of population processing. This work elucidates the circumstances in which accounting for noise correlations is necessary to take advantage of population-level codes and shows that sensory adaptation can strongly impact decoding requirements on downstream brain areas.
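The decoding comparison at the heart of this abstract can be sketched in a few lines: a Gaussian maximum-likelihood decoder applied to synthetic population responses with strong shared noise, read out either with the true covariance or under an (incorrect) independence assumption. All numbers here are illustrative, not fit to retinal data.

```python
# Two-stimulus discrimination from correlated population responses,
# decoded with the full covariance vs. a diagonal (independent) model.
import numpy as np

rng = np.random.default_rng(3)
n, trials = 20, 2000
mu0, mu1 = np.zeros(n), np.linspace(0.2, 0.6, n)   # mean responses per stimulus
C = 0.9 * np.ones((n, n)) + 0.1 * np.eye(n)        # strong shared noise
L = np.linalg.cholesky(C)

def decode(cov):
    """Fraction correct for a Gaussian ML decoder using covariance `cov`."""
    P = np.linalg.inv(cov)
    correct = 0
    for s, mu in enumerate((mu0, mu1)):
        r = mu + (L @ rng.normal(size=(n, trials))).T   # true noise is correlated
        d0 = np.einsum('ij,jk,ik->i', r - mu0, P, r - mu0)
        d1 = np.einsum('ij,jk,ik->i', r - mu1, P, r - mu1)
        correct += np.sum((d1 < d0) == bool(s))
    return correct / (2 * trials)

print("full covariance:", decode(C))
print("independence assumed:", decode(np.diag(np.diag(C))))
```

With shared noise this strong, the independence-assuming decoder picks up the common noise mode and performs far worse, mirroring the failure mode described above.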
Subjects
Retina/physiology; Adaptation, Ocular/radiation effects; Animals; Bayes Theorem; Light; Linear Models; Night Vision/physiology; Photic Stimulation; Rats, Long-Evans; Retina/radiation effects; Retinal Ganglion Cells/physiology; Retinal Ganglion Cells/radiation effects
ABSTRACT
The current state-of-the-art object recognition algorithms, deep convolutional neural networks (DCNNs), are inspired by the architecture of the mammalian visual system, and are capable of human-level performance on many tasks. As they are trained for object recognition tasks, it has been shown that DCNNs develop hidden representations that resemble those observed in the mammalian visual system (Khaligh-Razavi and Kriegeskorte, 2014; Yamins and DiCarlo, 2016; Güçlü and van Gerven, 2015; McClure and Kriegeskorte, 2016). Moreover, DCNNs trained on object recognition tasks are currently among the best models we have of the mammalian visual system. This led us to hypothesize that teaching DCNNs to achieve even more brain-like representations could improve their performance. To test this, we trained DCNNs on a composite task, wherein networks were trained to (a) classify images of objects while (b) maintaining intermediate representations that resemble those observed in neural recordings from monkey visual cortex. Compared with DCNNs trained purely for object categorization, DCNNs trained on the composite task had better object recognition performance and were more robust to label corruption. Interestingly, we found that actual neural data was not required for this benefit: randomized data with the same statistical properties as neural data also boosted performance. While the performance gains we observed when training on the composite task vs the "pure" object recognition task were modest, they were remarkably robust. Notably, we observed these performance gains across all network variations we studied, including: smaller (CORNet-Z) vs larger (VGG-16) architectures; variations in optimizers (Adam vs gradient descent); variations in activation function (ReLU vs ELU); and variations in network initialization. Our results demonstrate the potential utility of a new approach to training object recognition networks, using strategies in which the brain, or at least the statistical properties of its activation patterns, serves as a teacher signal for training DCNNs.
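A sketch of how such a composite objective can be set up (our reading of the approach, not the authors' code; the toy architecture, the 0.1 loss weighting, and the use of a Euclidean representational dissimilarity matrix are placeholder assumptions):

```python
# Composite loss: classification + representational-similarity penalty
# pulling an intermediate layer toward a target similarity structure,
# which could come from neural data or from matched random noise.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU(),
                    nn.Linear(128, 10))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.randn(64, 3, 32, 32)              # stand-in image batch
y = torch.randint(0, 10, (64,))
target_rdm = torch.rand(64, 64)             # stand-in "neural" similarity target
target_rdm = (target_rdm + target_rdm.T) / 2

opt.zero_grad()
feats = net[:3](x)                          # intermediate representation
rdm = torch.cdist(feats, feats)             # model dissimilarity matrix
loss = (nn.functional.cross_entropy(net(x), y)
        + 0.1 * ((rdm - target_rdm) ** 2).mean())
loss.backward()
opt.step()
```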
Subjects
Models, Neurological; Neural Networks, Computer; Pattern Recognition, Automated/methods; Pattern Recognition, Visual; Animals; Haplorhini; Pattern Recognition, Automated/standards; Visual Cortex/physiology
ABSTRACT
Simple stimuli have been critical to understanding neural population codes in sensory systems. Yet it remains necessary to determine the extent to which this understanding generalizes to more complex conditions. To examine this problem, we measured how populations of direction-selective ganglion cells (DSGCs) from the retinas of male and female mice respond to a global motion stimulus with its direction and speed changing dynamically. We then examined the encoding and decoding of motion direction in both individual DSGCs and populations of DSGCs. Individual cells integrated global motion over ~200 ms, and responses were tuned to direction. However, responses were sparse and broadly tuned, which severely limited decoding performance from small DSGC populations. In contrast, larger populations compensated for response sparsity, enabling decoding with high temporal precision (<100 ms). At these timescales, correlated spiking was minimal and had little impact on decoding performance, unlike results obtained using simpler local motion stimuli decoded over longer timescales. We use these data to define different DSGC population decoding regimes that use or mitigate correlated spiking to achieve high-spatial versus high-temporal resolution.
SIGNIFICANCE STATEMENT: ON-OFF direction-selective ganglion cells (ooDSGCs) in the mammalian retina are typically thought to signal local motion to the brain. However, several recent studies suggest they may signal global motion. Here we analyze the fidelity of encoding and decoding global motion in a natural scene across large populations of ooDSGCs. We show that large populations of DSGCs are capable of signaling rapid changes in global motion.
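As a toy illustration of decoding direction from sparse, broadly tuned cells like those described above, the sketch below applies a population-vector read-out to synthetic cosine-tuned responses; the tuning width, rates, and decoder are illustrative assumptions, not the paper's analysis.

```python
# Population-vector decoding of motion direction from a DSGC-like
# population with broad tuning and sparse spiking in a short window.
import numpy as np

rng = np.random.default_rng(9)
n = 200
preferred = rng.uniform(0, 2 * np.pi, size=n)   # preferred directions
true_dir = 1.0                                   # stimulus direction (radians)

rate = 2.0 * np.exp(1.5 * (np.cos(true_dir - preferred) - 1))  # broad tuning
spikes = rng.poisson(rate * 0.1)                 # sparse counts in a short window

# Population vector: spike-weighted average of preferred directions.
vec = np.sum(spikes[:, None] * np.column_stack([np.cos(preferred),
                                                np.sin(preferred)]), axis=0)
est = np.arctan2(vec[1], vec[0])
print(f"true {true_dir:.2f} rad, decoded {est:.2f} rad")
```

With few spikes per cell, single short windows give noisy estimates; accumulating over a larger population is what rescues temporal precision, as the abstract reports.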
Subjects
Motion Perception/physiology; Orientation/physiology; Photic Stimulation/methods; Retinal Ganglion Cells/physiology; Animals; Female; Male; Mice; Mice, Inbred C57BL; Mice, Inbred CBA
ABSTRACT
A common feature of the primary processing structures of sensory systems is the presence of parallel output "channels" that convey different information about a stimulus. In the mammalian olfactory bulb, this is reflected in the mitral cells (MCs) and tufted cells (TCs) that have differing sensitivities to odors, with TCs being more sensitive than MCs. In this study, we examined potential mechanisms underlying the different responses of MCs vs. TCs. For TCs, we focused on superficial TCs (sTCs), which are a population of output TCs that reside in the superficial-most portion of the external plexiform layer, along with external tufted cells (eTCs), which are glutamatergic interneurons in the glomerular layer. Using whole-cell patch-clamp recordings in mouse bulb slices, we first measured excitatory currents in MCs, sTCs, and eTCs following olfactory sensory neuron (OSN) stimulation, separating the responses into a fast, monosynaptic component reflecting direct inputs from OSNs and a prolonged component partially reflecting eTC-mediated feedforward excitation. Responses were measured to a wide range of OSN stimulation intensities, simulating the different levels of OSN activity that would be expected to be produced by varying odor concentrations in vivo. Over a range of stimulation intensities, we found that the monosynaptic current varied significantly between the cell types, in the order of eTC > sTC > MC. The prolonged component was smaller in sTCs vs. both MCs and eTCs. sTCs also had much higher whole-cell input resistances than MCs, reflecting their smaller size and greater membrane resistivity. To evaluate how these different electrophysiological aspects contributed to spiking of the output MCs and sTCs, we used computational modeling. By exchanging the different cell properties in our modeled MCs and sTCs, we could evaluate each property's contribution to spiking differences between these cell types. This analysis suggested that the higher sensitivity of spiking in sTCs vs. MCs reflected both their larger monosynaptic OSN signal as well as their higher input resistance, while their smaller prolonged currents had a modest opposing effect. Taken together, our results indicate that both synaptic and intrinsic cellular features contribute to the production of parallel output channels in the olfactory bulb.
ABSTRACT
Systems neuroscience seeks explanations for how the brain implements a wide variety of perceptual, cognitive and motor tasks. Conversely, artificial intelligence attempts to design computational systems based on the tasks they will have to solve. In artificial neural networks, the three components specified by design are the objective functions, the learning rules and the architectures. With the growing success of deep learning, which utilizes brain-inspired architectures, these three designed components have increasingly become central to how we model, engineer and optimize complex artificial learning systems. Here we argue that a greater focus on these components would also benefit systems neuroscience. We give examples of how this optimization-based framework can drive theoretical and experimental progress in neuroscience. We contend that this principled perspective on systems neuroscience will help to generate more rapid progress.
Subjects
Artificial Intelligence; Deep Learning; Neural Networks, Computer; Animals; Brain/physiology; Humans
ABSTRACT
Primary visual cortex (V1) is the first stage of cortical image processing, and a major effort in systems neuroscience is devoted to understanding how it encodes information about visual stimuli. Within V1, many neurons respond selectively to edges of a given preferred orientation: These are known as either simple or complex cells. Other neurons respond to localized center-surround image features. Still others respond selectively to certain image stimuli, but the specific features that excite them are unknown. Moreover, even for the simple and complex cells (the best-understood V1 neurons), it is challenging to predict how they will respond to natural image stimuli. Thus, there are important gaps in our understanding of how V1 encodes images. To fill this gap, we trained deep convolutional neural networks to predict the firing rates of V1 neurons in response to natural image stimuli, and we find that the predicted firing rates are highly correlated (\(\overline{\mathrm{CC}}_{\mathrm{norm}} = 0.556 \pm 0.01\)) with the neurons' actual firing rates over a population of 355 neurons. This performance value is quoted for all neurons, with no selection filter. Performance is better for more active neurons: When evaluated only on neurons with mean firing rates above 5 Hz, our predictors achieve correlations of \(\overline{\mathrm{CC}}_{\mathrm{norm}} = 0.69 \pm 0.01\) with the neurons' true firing rates. We find that the firing rates of both orientation-selective and non-orientation-selective neurons can be predicted with high accuracy. Additionally, we use a variety of models to benchmark performance and find that our convolutional neural-network model makes more accurate predictions.
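\(\overline{\mathrm{CC}}_{\mathrm{norm}}\) normalizes the raw prediction-response correlation by the ceiling imposed by trial-to-trial variability. Below is a sketch of one standard estimator of this quantity (the signal-power estimator of Schoppe et al., 2016); whether this paper used exactly this variant is an assumption on our part, and the data are synthetic stand-ins.

```python
# Normalized correlation coefficient CC_norm = CC_abs / CC_max, where
# CC_max is estimated from repeated trials via the signal-power formula.
import numpy as np

def cc_norm(pred, trials):
    """pred: (T,) model prediction; trials: (N, T) repeated responses."""
    N = trials.shape[0]
    mean_resp = trials.mean(axis=0)
    cc_abs = np.corrcoef(pred, mean_resp)[0, 1]
    # Signal power: variance shared across trials, noise averaged out.
    sp = (np.var(trials.sum(axis=0)) - trials.var(axis=1).sum()) / (N * (N - 1))
    cc_max = np.sqrt(sp / np.var(mean_resp))
    return cc_abs / cc_max

rng = np.random.default_rng(4)
signal = rng.normal(size=500)
trials = signal + rng.normal(scale=0.8, size=(10, 500))   # 10 noisy repeats
print(cc_norm(0.9 * signal + rng.normal(scale=0.3, size=500), trials))
```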
Subjects
Deep Learning; Neurons/physiology; Visual Cortex/physiology; Visual Perception/physiology; Animals; Macaca; Neural Networks, Computer; Orientation; Orientation, Spatial
ABSTRACT
Selecting and moving to spatial targets are critical components of goal-directed behavior, yet their neural bases are not well understood. The superior colliculus (SC) is thought to contain a topographic map of contralateral space in which the activity of specific neuronal populations corresponds to particular spatial locations. However, these spatial representations are modulated by several decision-related variables, suggesting that they reflect information beyond simply the location of an upcoming movement. Here, we examine the extent to which these representations arise from competitive spatial choice. We recorded SC activity in male mice performing a behavioral task requiring orienting movements to targets for a water reward in two contexts. In "competitive" trials, either the left or right target could be rewarded, depending on which stimulus was presented at the central port. In "noncompetitive" trials, the same target (e.g., left) was rewarded throughout an entire block. While both trial types required orienting movements to the same spatial targets, only in competitive trials do targets compete for selection. We found that in competitive trials, pre-movement SC activity predicted movement to contralateral targets, as expected. However, in noncompetitive trials, some neurons lost their spatial selectivity and in others activity predicted movement to ipsilateral targets. Consistent with these findings, unilateral optogenetic inactivation of pre-movement SC activity ipsiversively biased competitive, but not noncompetitive, trials. Incorporating these results into an attractor model of SC activity points to distinct pathways for orienting movements under competitive and noncompetitive conditions, with the SC specifically required for selecting among multiple potential targets.
Subjects
Decision Making/physiology; Neurons/physiology; Orientation, Spatial/physiology; Spatial Behavior/physiology; Superior Colliculi/physiology; Animals; Male; Mice; Movement/physiology; Optogenetics; Photic Stimulation; Reward
ABSTRACT
Parkinson's disease (PD) is highly comorbid with sleep dysfunction. In contrast to motor symptoms, few therapeutic interventions exist to address sleep symptoms in PD. Subthalamic nucleus (STN) deep brain stimulation (DBS) treats advanced PD motor symptoms and may improve sleep architecture. As a proof of concept toward demonstrating that STN-DBS could be used to identify sleep stages commensurate with clinician-scored polysomnography (PSG), we developed a novel artificial neural network (ANN) that could trigger targeted stimulation in response to sleep state inferred from STN local field potentials (LFPs) recorded from implanted DBS electrodes. STN LFP recordings were collected from nine PD patients via a percutaneous cable attached to the DBS lead, during a full night's sleep (6-8 hr) with concurrent PSG. We trained a feedforward neural network to prospectively identify sleep stage with PSG-level accuracy from 30-s epochs of LFP recordings. Our model's sleep-stage predictions match clinician-identified sleep stages with a mean accuracy of 91% on held-out epochs. Furthermore, leave-one-group-out analysis also demonstrates 91% mean classification accuracy for novel subjects. These results, obtained by classifying sleep stage across a typical heterogeneous sample of PD patients, may point to spectral biomarkers for automatically scoring sleep stage in PD patients with implanted DBS devices. Further development of this model may also focus on adapting stimulation during specific sleep stages to treat targeted sleep deficits.
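A minimal sketch of this kind of pipeline: band-power features from 30-s LFP epochs feeding a small feedforward classifier. The sampling rate, band edges, network size, and synthetic data are all illustrative assumptions rather than the study's actual feature set.

```python
# Band-power features from 30-s LFP epochs -> feedforward sleep-stage
# classifier. Everything here is a stand-in for the study's pipeline.
import numpy as np
import torch
import torch.nn as nn

fs, n_epochs = 250, 128
lfp = np.random.default_rng(5).normal(size=(n_epochs, 30 * fs))  # stand-in LFP
labels = torch.randint(0, 5, (n_epochs,))       # Wake, N1, N2, N3, REM

# Band power in canonical frequency bands via the FFT of each epoch.
freqs = np.fft.rfftfreq(lfp.shape[1], d=1 / fs)
power = np.abs(np.fft.rfft(lfp, axis=1)) ** 2
bands = [(0.5, 4), (4, 8), (8, 13), (13, 30), (30, 90)]
feats = np.stack([power[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
                  for lo, hi in bands], axis=1)
x = torch.tensor(np.log(feats), dtype=torch.float32)

clf = nn.Sequential(nn.Linear(5, 32), nn.ReLU(), nn.Linear(32, 5))
opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
for _ in range(100):                             # toy training loop
    opt.zero_grad()
    loss = nn.functional.cross_entropy(clf(x), labels)
    loss.backward()
    opt.step()
```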
Subjects
Parkinson Disease/complications; Polysomnography/methods; Sleep Stages/physiology; Subthalamic Nucleus/physiopathology; Female; Humans; Male; Middle Aged; Parkinson Disease/physiopathology
ABSTRACT
Working memory requires information about external stimuli to be represented in the brain even after those stimuli go away. This information is encoded in the activities of neurons, and neural activities change over timescales of tens of milliseconds. Information in working memory, however, is retained for tens of seconds, raising the question of how time-varying neural activities maintain stable representations. Prior work shows that, if the neural dynamics are in the 'null space' of the representation, so that changes to neural activity do not affect the downstream read-out of stimulus information, then information can be retained for periods much longer than the timescale of individual neurons' activities. The prior work, however, requires precisely constructed synaptic connectivity matrices, without explaining how this would arise in a biological neural network. To identify mechanisms through which biological networks can self-organize to learn memory function, we derived biologically plausible synaptic plasticity rules that dynamically modify the connectivity matrix to enable information storage. Networks implementing these plasticity rules can successfully learn to form memory representations even if only 10% of the synapses are plastic, they are robust to synaptic noise, and they can represent information about multiple stimuli.
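The 'null space' condition that this work builds on is easy to state concretely: if the recurrent weights W satisfy D W = D for a fixed readout matrix D, activity can keep changing while the decoded memory stays put. A minimal sketch (our construction; the dimensions and noise scale are arbitrary):

```python
# Recurrent dynamics confined to the null space of the readout D:
# D @ W = D, so D @ x is invariant even though x itself drifts.
import numpy as np

rng = np.random.default_rng(6)
n, k = 50, 3                                   # neurons, readout dimensions
D = rng.normal(size=(k, n))                    # fixed downstream readout

# Build W = I + P with the columns of P confined to null(D).
_, _, Vt = np.linalg.svd(D)
null_basis = Vt[k:].T                          # (n, n-k) basis of null(D)
W = np.eye(n) + null_basis @ rng.normal(scale=0.02, size=(n - k, n))

x0 = rng.normal(size=n)                        # activity encoding the stimulus
x = x0.copy()
readout0 = D @ x0
for _ in range(50):                            # activity changes every step...
    x = W @ x
print("activity drift:", round(np.linalg.norm(x - x0), 2))
print("readout preserved:", np.allclose(D @ x, readout0))  # ...readout does not
```

The paper's contribution is a plasticity rule that drives a network toward this kind of connectivity on its own, rather than having it constructed by hand as here.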
Subjects
Brain/physiology; Memory, Short-Term/physiology; Neural Networks, Computer; Neuronal Plasticity/physiology; Humans; Models, Neurological; Nerve Net/physiology; Neurons/physiology; Synapses/physiology
ABSTRACT
Correlations in neural activity have been demonstrated to have profound consequences for sensory encoding. To understand how neural populations represent stimulus information, it is therefore necessary to model how pairwise and higher-order spiking correlations between neurons contribute to the collective structure of population-wide spiking patterns. Maximum entropy models are an increasingly popular method for capturing collective neural activity by including successively higher-order interaction terms. However, incorporating higher-order interactions in these models is difficult in practice due to two factors. First, the number of parameters exponentially increases as higher orders are added. Second, because triplet (and higher) spiking events occur infrequently, estimates of higher-order statistics may be contaminated by sampling noise. To address this, we extend previous work on the Reliable Interaction class of models to develop a normalized variant that adaptively identifies the specific pairwise and higher-order moments that can be estimated from a given dataset for a specified confidence level. The resulting "Reliable Moment" model is able to capture cortical-like distributions of population spiking patterns. Finally, we show that, compared with the Reliable Interaction model, the Reliable Moment model infers fewer strong spurious higher-order interactions and is better able to predict the frequencies of previously unobserved spiking patterns.
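As a toy sketch of the moment-screening idea, one can retain only those pairwise and triplet co-firing moments whose empirical event counts clear a reliability cutoff; the paper's actual confidence-level criterion differs in detail, and the cutoff, rates, and data below are our own illustrative choices.

```python
# Screen pairwise and triplet co-firing moments by how often the joint
# event is actually observed; rarely observed moments are excluded.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)
spikes = rng.random((10000, 8)) < 0.08         # 10000 time bins, 8 neurons
T = spikes.shape[0]
min_count = 20                                  # illustrative reliability cutoff

reliable = []
for order in (2, 3):
    for idx in combinations(range(spikes.shape[1]), order):
        count = np.all(spikes[:, list(idx)], axis=1).sum()  # joint-firing events
        if count >= min_count:                  # enough events to trust the moment
            reliable.append((idx, count / T))
print(f"{len(reliable)} reliable moments retained")
```

With these rates, most pairs survive while most triplets are excluded, illustrating why higher-order moments are the ones most vulnerable to sampling noise.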
ABSTRACT
A commonly observed neural correlate of working memory is firing that persists after the triggering stimulus disappears. Substantial effort has been devoted to understanding the many potential mechanisms that may underlie memory-associated persistent activity. These rely either on the intrinsic properties of individual neurons or on the connectivity within neural circuits to maintain the persistent activity. Nevertheless, it remains unclear which mechanisms are at play in the many brain areas involved in working memory. Herein, we first summarize the palette of different mechanisms that can generate persistent activity. We then discuss recent work that asks which mechanisms underlie persistent activity in different brain areas. Finally, we discuss future studies that might tackle this question further. Our goal is to bridge between the communities of researchers who study either single-neuron biophysical, or neural circuit, mechanisms that can generate the persistent activity that underlies working memory.
Subjects
Action Potentials/physiology; Cerebral Cortex/physiology; Memory, Short-Term/physiology; Models, Neurological; Nerve Net/physiology; Animals; Neurons/physiology; Synaptic Transmission/physiology
ABSTRACT
Sensory neurons give highly variable responses to stimulation, which can limit the amount of stimulus information available to downstream circuits. Much work has investigated the factors that affect the amount of information encoded in these population responses, leading to insights about the role of covariability among neurons, tuning curve shape, etc. However, the informativeness of neural responses is not the only relevant feature of population codes; of potentially equal importance is how robustly that information propagates to downstream structures. For instance, to quantify the retina's performance, one must consider not only the informativeness of the optic nerve responses, but also the amount of information that survives the spike-generating nonlinearity and noise corruption in the next stage of processing, the lateral geniculate nucleus. Our study identifies the set of covariance structures for the upstream cells that optimize the ability of information to propagate through noisy, nonlinear circuits. Within this optimal family are covariances with "differential correlations", which are known to reduce the information encoded in neural population activities. Thus, covariance structures that maximize information in neural population codes, and those that maximize the ability of this information to propagate, can be very different. Moreover, redundancy is neither necessary nor sufficient to make population codes robust against corruption by noise: redundant codes can be very fragile, and synergistic codes can, in some cases, optimize robustness against noise.
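The information-limiting role of differential correlations mentioned above follows from the standard linear Fisher information formula \(I = f'^{\top} \Sigma^{-1} f'\); the sketch below (synthetic tuning derivatives and noise scales of our choosing) shows how noise along the signal direction caps the information regardless of population size.

```python
# Linear Fisher information with and without differential correlations,
# i.e., noise variance injected along the signal direction f'.
import numpy as np

rng = np.random.default_rng(8)
n = 100
fprime = rng.normal(size=n)                    # tuning-curve derivatives
Sigma0 = np.eye(n)                             # independent baseline noise

def fisher_info(Sigma):
    return fprime @ np.linalg.solve(Sigma, fprime)

for eps in (0.0, 0.01, 0.1):
    Sigma = Sigma0 + eps * np.outer(fprime, fprime)   # differential correlations
    print(f"eps={eps:>4}: I = {fisher_info(Sigma):.1f}")
# As eps grows, I saturates near 1/eps no matter how large n is.
```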