ABSTRACT
Place cells in the hippocampus (HC) are active when an animal visits a particular location (the cell's place field) within an environment. Grid cells in the medial entorhinal cortex (MEC) respond at multiple locations, with firing fields that form a periodic, hexagonal tiling of the environment. The joint activity of the grid and place cell populations, as a function of location, forms a neural code for space. In this article, we develop an understanding of the coding-theoretically relevant properties of the combined activity of these populations and of how these properties limit the robustness of the representation to noise-induced interference. We revisit these relationships by measuring the performance of biologically realizable algorithms implemented by networks of place and grid cell populations, together with constraint neurons that perform denoising operations. Contributions of this work include an investigation of the coding-theoretic limitations of the mammalian neural code for location and of how communication between grid and place cell networks may improve the accuracy of each population's representation. Simulations demonstrate that the denoising mechanisms analyzed here can significantly improve the fidelity of this neural representation of space. Furthermore, patterns observed in the connectivity of each simulated cell population predict that anti-Hebbian learning drives decreases in HC-MEC connectivity along the dorsoventral axis.
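The coding-theoretic view of grid-cell activity can be illustrated with a deliberately simplified one-dimensional sketch (an assumption for illustration, not the model analyzed in the article): treating grid modules with coprime spatial periods as a residue number system, a position is encoded by its firing phase in each module, and the combined code uniquely covers a range equal to the product of the periods.

```python
# Minimal 1-D residue-code sketch of grid-cell modules (illustrative only;
# real grid cells fire on 2-D hexagonal lattices with continuous phases).
PERIODS = [3, 5, 7]  # hypothetical module periods; coprime -> range 3*5*7 = 105

def grid_encode(x, periods=PERIODS):
    """Encode an integer position as its residue (firing phase) per module."""
    return [x % p for p in periods]

def grid_decode(residues, periods=PERIODS):
    """Brute-force maximum-agreement decoding: return the position whose
    residues best match the observed (possibly noisy) code."""
    span = 1
    for p in periods:
        span *= p
    best, best_score = 0, -1
    for x in range(span):
        score = sum(int(x % p == r) for p, r in zip(periods, residues))
        if score > best_score:
            best, best_score = x, score
    return best

code = grid_encode(17)          # -> [2, 2, 3]
assert grid_decode(code) == 17  # unique within the range 0..104
```

The combinatorial capacity (105 positions from only 3 + 5 + 7 = 15 phase states) is what makes this code efficient, and also what makes it fragile: a corrupted phase in one module can decode to a distant location, which is the kind of noise-induced interference the denoising networks in the article are meant to suppress.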
Subjects
Entorhinal Cortex/physiology , Hippocampus/physiology , Models, Neurological , Place Cells/physiology , Spatial Learning/physiology , Algorithms , Animals , Entorhinal Cortex/cytology , Hippocampus/cytology , Synaptic Transmission
ABSTRACT
Despite recent advances in high-throughput combinatorial mutagenesis assays, the number of labeled sequences available for predicting molecular function remains small relative to the vastness of sequence space and the ruggedness of many fitness functions. While deep neural networks (DNNs) can capture high-order epistatic interactions among mutational sites, they tend to overfit the small number of labeled sequences available for training. Here, we developed Epistatic Net (EN), a method for the spectral regularization of DNNs that exploits evidence that the epistatic interactions in many fitness functions are sparse. We built a scalable extension of EN for longer sequences that performs spectral regularization using fast sparse-recovery algorithms informed by coding theory. Results on several biological landscapes show that EN consistently improves the prediction accuracy of DNNs and enables them to outperform competing models that assume other priors. EN estimates the higher-order epistatic interactions of DNNs trained on massive sequence spaces, a computational problem that would otherwise take years to solve.
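The sparsity prior that EN exploits can be made concrete with a small sketch (assumed notation for illustration, not the released implementation): over binary sequences, the epistatic interaction coefficients of a fitness function are its Walsh-Hadamard transform, and a purely additive (epistasis-free) landscape is spectrally sparse, so an L1 penalty on this spectrum discourages a DNN from inventing spurious high-order interactions.

```python
import numpy as np

def walsh_hadamard(f):
    """Fast Walsh-Hadamard transform of a length-2^n vector; coefficient k
    measures the interaction among the sites in k's bit mask."""
    a = np.asarray(f, dtype=float).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

# Additive 3-site landscape: fitness is a weighted sum of mutations, no epistasis.
weights = np.array([1.0, 2.0, 4.0])
seqs = np.array([[(idx >> b) & 1 for b in range(3)] for idx in range(8)])
fitness = seqs @ weights

spectrum = walsh_hadamard(fitness)
# Only the constant term and the three single-site terms are nonzero -> sparse.
support = np.flatnonzero(np.abs(spectrum) > 1e-9)

# An L1 penalty on this spectrum is the kind of sparsity-promoting term
# spectral regularization adds to the training loss (computed here, not optimized):
l1_penalty = np.abs(spectrum).sum()
```

Enumerating all 2^n sequences, as above, is only feasible for short sequences; the scalable extension described in the abstract replaces this exhaustive transform with fast sparse-recovery algorithms from coding theory.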
Subjects
Algorithms , Neural Networks, Computer , Bacteria , Green Fluorescent Proteins
ABSTRACT
Spike-timing-dependent plasticity (STDP) is a learning mechanism that can capture causal relationships between events and is considered a foundational element of memory and learning in biological neural networks. Previous research has sought to understand the functionality of the STDP learning window in spiking neural networks (SNNs). In this study, we investigate the interaction among different encoding/decoding schemes, STDP learning windows, and normalization rules for an SNN classifier trained and tested on the MNIST, NIST, and ETH80-Contour datasets. The results show that, first, when no normalization rule is applied, classical STDP typically achieves the best performance. Second, first-spike decoding classifiers require much less decoding time than spike-count decoding classifiers. Third, when no normalization rule is applied, classifier accuracy decreases as the encoding duration increases from 10 ms to 34 ms under the spike-count decoding scheme. Finally, normalizing the output weights is shown to improve the performance of a first-spike decoding classifier, which reveals the importance of weight normalization to SNNs.
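The classical pair-based STDP window referenced above can be sketched as follows (parameter values are illustrative defaults, not those used in the study): a presynaptic spike that precedes a postsynaptic spike potentiates the synapse, the reverse order depresses it, and both effects decay exponentially with the spike-time difference.

```python
import math

def stdp_dw(delta_t_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair under classical STDP.
    delta_t_ms = t_post - t_pre: positive -> causal pairing -> potentiation."""
    if delta_t_ms > 0:
        return a_plus * math.exp(-delta_t_ms / tau_plus)   # LTP branch
    elif delta_t_ms < 0:
        return -a_minus * math.exp(delta_t_ms / tau_minus)  # LTD branch
    return 0.0

# Causal pairings strengthen the synapse, anti-causal pairings weaken it,
# and the magnitude fades as the pairing interval grows:
assert stdp_dw(5.0) > 0 > stdp_dw(-5.0)
assert stdp_dw(5.0) > stdp_dw(25.0) > 0
```

Variant learning windows of the kind compared in the study (e.g., asymmetric amplitudes or time constants) correspond to different choices of `a_plus`, `a_minus`, `tau_plus`, and `tau_minus` in this template.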