Results 1 - 20 of 42
1.
PLoS Comput Biol ; 20(8): e1012297, 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39093861

ABSTRACT

Understanding the computational mechanisms that underlie the encoding and decoding of environmental stimuli is a crucial pursuit in neuroscience. Central to this pursuit is the exploration of how the brain represents visual information across its hierarchical architecture. A prominent challenge resides in discerning the neural underpinnings of the processing of dynamic natural visual scenes. Although considerable research efforts have been made to characterize individual components of the visual pathway, a systematic understanding of the distinctive neural coding associated with visual stimuli, as they traverse this hierarchical landscape, remains elusive. In this study, we leverage the comprehensive Allen Visual Coding-Neuropixels dataset and utilize the capabilities of deep learning neural network models to study neural coding in response to dynamic natural visual scenes across an expansive array of brain regions. Our decoding model reliably reconstructs visual scenes from the spiking patterns of each distinct brain area. Comparing decoding performance across areas reveals markedly stronger encoding in the visual cortex and subcortical nuclei than in hippocampal neurons. Strikingly, our results unveil a robust correlation between our decoding metrics and well-established anatomical and functional hierarchy indexes. These findings corroborate existing knowledge of visual coding obtained with artificial visual stimuli and illuminate the functional role of these deeper brain regions using dynamic stimuli. Consequently, our results suggest a novel perspective on the utility of decoding neural network models as a metric for quantifying the encoding quality of dynamic natural visual scenes represented by neural responses, thereby advancing our comprehension of visual coding within the complex hierarchy of the brain.

2.
Article in English | MEDLINE | ID: mdl-38833393

ABSTRACT

Sensory information recognition is primarily processed through the ventral and dorsal visual pathways of the primate visual system, which exhibit layered feature representations bearing a strong resemblance to convolutional neural networks (CNNs) and together support reconstruction and classification. However, existing studies often treat these pathways as distinct entities, focusing individually on pattern reconstruction or classification tasks and overlooking spikes, a key feature of biological neurons and the fundamental unit of neural computation for visual sensory information. Addressing these limitations, we introduce a unified framework for sensory information recognition with augmented spikes. By integrating pattern reconstruction and classification within a single framework, our approach not only accurately reconstructs multimodal sensory information but also provides precise classification through definitive labeling. Experimental evaluations conducted on various datasets, including video scenes, static images, dynamic auditory scenes, and functional magnetic resonance imaging (fMRI) brain activities, demonstrate that our framework delivers state-of-the-art pattern reconstruction quality and classification accuracy. The proposed framework enhances the biological realism of multimodal pattern recognition models, offering insights into how the primate visual system effectively accomplishes reconstruction and classification through the integration of the ventral and dorsal pathways.

3.
Neural Netw ; 176: 106346, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38713970

ABSTRACT

Spiking neural networks (SNNs) provide necessary models and algorithms for neuromorphic computing. A popular way of building high-performance deep SNNs is to convert ANNs to SNNs, taking advantage of advanced and well-trained ANNs. Here we propose an ANN-to-SNN conversion methodology that uses a time-based coding scheme, named At-most-two-spike Exponential Coding (AEC), and a corresponding AEC spiking neuron model for ANN-SNN conversion. AEC neurons employ quantization-compensating spikes to improve coding accuracy and capacity, with each neuron generating up to two spikes within the time window. Two exponential decay functions with tunable parameters are proposed to represent the dynamic encoding thresholds, based on which pixel intensities are encoded into spike times and spike times are decoded into pixel intensities. The hyper-parameters of AEC neurons are fine-tuned by minimizing the loss between SNN-decoded values and ANN activation values. In addition, we design two regularization terms on the number of spikes, making it possible to trade off accuracy, latency, and power consumption. The experimental results show that, compared to other similar methods, the proposed scheme not only obtains deep SNNs with higher accuracy, but also has more significant advantages in terms of energy efficiency and inference latency. More details can be found at https://github.com/RPDS2020/AEC.git.
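
As an illustration of the coding scheme described above, the sketch below encodes a pixel intensity into at most two spike times using an exponentially decaying threshold and decodes it back. The window length, threshold parameters, and residual-compensation rule are illustrative assumptions; the exact AEC formulation is given in the paper and repository.

```python
import numpy as np

# Hypothetical parameters; the paper tunes these per layer.
T = 16            # time steps in the encoding window
THETA0 = 1.0      # initial threshold
TAU = 6.0         # decay time constant

def threshold(t, theta0=THETA0, tau=TAU):
    """Exponentially decaying encoding threshold."""
    return theta0 * np.exp(-t / tau)

def aec_encode(x):
    """Encode an intensity x in [0, 1] into at most two spike times.

    The first spike fires at the earliest step whose threshold drops below x;
    a second, compensating spike encodes the residual quantization error the
    same way (illustrative rule, not necessarily the paper's exact one).
    """
    spikes, residual = [], x
    for _ in range(2):                           # at most two spikes
        ts = np.arange(T)
        hit = np.where(threshold(ts) <= residual)[0]
        if len(hit) == 0:
            break
        t_spk = int(hit[0])
        spikes.append(t_spk)
        residual -= threshold(t_spk)             # compensate what was already coded
        if residual <= 0:
            break
    return spikes

def aec_decode(spikes):
    """Decode spike times back into an intensity estimate."""
    return float(sum(threshold(t) for t in spikes))

x = 0.37
spk = aec_encode(x)
print(spk, aec_decode(spk))   # reconstruction error bounded by the coding resolution
```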


Subjects
Action Potentials, Algorithms, Neural Networks, Computer, Neurons, Action Potentials/physiology, Neurons/physiology, Models, Neurological, Humans
4.
J Neural Eng ; 21(2)2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38621378

ABSTRACT

Objective: Epilepsy is a complex disease spanning multiple scales, from ion channels in neurons to neuronal circuits across the entire brain. Over the past decades, computational models have been used to describe the pathophysiological activity of the epileptic brain from different aspects. Traditionally, each computational model can aid in optimizing therapeutic interventions, therefore providing a particular view to design strategies for treating epilepsy. As a result, most studies are concerned with generating specific models of the epileptic brain that can help us understand the specific machinery of the pathological state. Those specific models vary in complexity and biological accuracy, with system-level models often lacking biological details. Approach: Here, we review various types of computational models of epilepsy and discuss their potential for different therapeutic approaches and scenarios, including drug discovery, surgical strategies, brain stimulation, and seizure prediction. We propose that we need to consider an integrated approach with a unified modelling framework across multiple scales to understand the epileptic brain. Our proposal is based on the recent increase in computational power, which has opened up the possibility of unifying those specific epileptic models into simulations with an unprecedented level of detail. Main results: A multi-scale epilepsy model can bridge the gap between biologically detailed models, used to address molecular and cellular questions, and brain-wide models based on abstract models which can account for complex neurological and behavioural observations. Significance: With these efforts, we move toward the next generation of epileptic brain models capable of connecting cellular features, such as ion channel properties, with standard clinical measures such as seizure severity.


Subjects
Brain, Computer Simulation, Epilepsy, Models, Neurological, Humans, Epilepsy/physiopathology, Epilepsy/therapy, Brain/physiopathology, Animals, Nerve Net/physiopathology
5.
Commun Biol ; 7(1): 487, 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38649503

ABSTRACT

Semantic satiation, the loss of meaning of a word or phrase after it is repeated many times, is a well-known psychological phenomenon. However, the microscopic neural computational principles underlying it remain unknown. In this study, we use a deep learning model of continuous coupled neural networks to investigate the mechanism underlying semantic satiation and precisely describe this process with neuronal components. Our results suggest that, from a mesoscopic perspective, semantic satiation may be a bottom-up process, in contrast to existing macroscopic psychological studies that describe it as a top-down process; our simulations follow an experimental paradigm similar to classical psychology experiments and reproduce similar results. Satiation of semantic objectives, similar to the learning process of our network model used for object recognition, relies on continuous learning and switching between objects. The underlying neural coupling strengthens or weakens satiation. Taken together, both neural and network mechanisms play a role in controlling semantic satiation.


Subjects
Deep Learning, Semantics, Humans, Neural Networks, Computer, Models, Neurological
6.
Article in English | MEDLINE | ID: mdl-38265909

ABSTRACT

Sensory information transmitted to the brain activates neurons to create a series of coping behaviors. Understanding the mechanisms of neural computation and reverse engineering the brain to build intelligent machines requires establishing a robust relationship between stimuli and neural responses. Neural decoding aims to reconstruct the original stimuli that trigger neural responses. With the recent upsurge of artificial intelligence, neural decoding provides an insightful perspective for designing novel algorithms for brain-machine interfaces. For humans, vision is the dominant contributor to the interaction between the external environment and the brain. In this study, utilizing retinal spike data collected over multiple trials with visual stimuli of two movies with different levels of scene complexity, we used a neural network decoder and quantified the decoded visual stimuli with six image quality assessment metrics, establishing a comprehensive inspection of decoding. Through a detailed and systematic study of the effects of single versus multiple trials of data, different types of noise in spikes, and blurred images, our results provide an in-depth investigation of decoding dynamic visual scenes from retinal spikes. These results provide insights into the neural coding of visual scenes and serve as a guideline for designing next-generation decoding algorithms for neuroprostheses and other brain-machine interface devices.
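
The abstract does not name the six image quality metrics, so the sketch below scores a decoded frame with a few standard ones (MSE, PSNR, SSIM) as stand-ins; the random arrays are placeholders for an actual movie frame and the decoder output.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def psnr(ref, rec, data_range=1.0):
    """Peak signal-to-noise ratio between a reference frame and its reconstruction."""
    mse = np.mean((ref - rec) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

def assess(ref, rec):
    """Score one decoded frame with a few standard image quality metrics."""
    return {
        "mse": float(np.mean((ref - rec) ** 2)),
        "psnr": float(psnr(ref, rec)),
        "ssim": float(ssim(ref, rec, data_range=1.0)),
    }

ref = np.random.rand(64, 64)                                  # stand-in for a movie frame
rec = np.clip(ref + 0.05 * np.random.randn(64, 64), 0, 1)     # stand-in for the decoder output
print(assess(ref, rec))
```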

7.
J Cheminform ; 15(1): 118, 2023 Dec 08.
Article in English | MEDLINE | ID: mdl-38066570

ABSTRACT

Protein solubility is a pivotal factor in pharmaceutical research and production. To enhance production efficiency and curtail experimental costs, computational models that accurately predict solubility from available datasets are in demand. Prior investigations have leveraged deep learning models and feature engineering techniques to distill features from raw protein sequences for solubility prediction. However, these methodologies have not thoroughly examined the interdependencies among features or their relative importance. This study introduces HybridGCN, a Hybrid Graph Convolutional Network that improves solubility prediction accuracy by combining diverse features, encompassing sophisticated deep-learning features and classical biophysical features. An exploration of the interplay between deep-learning features and biophysical features revealed that specific biophysical attributes, notably evolutionary features, complement the features extracted by advanced deep-learning models. To augment the model's capability for feature representation, we employed ESM, a large protein language model, to derive a zero-shot learning feature capturing comprehensive and pertinent information concerning protein functions and structures. Furthermore, we propose a novel feature fusion module termed Adaptive Feature Re-weighting (AFR) to integrate multiple features, thereby enabling the fine-tuning of feature importance. Ablation experiments and comparative analyses attest to the efficacy of the HybridGCN approach, culminating in state-of-the-art performance on the public eSOL and S. cerevisiae datasets.
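
A minimal sketch of what an adaptive feature re-weighting fusion layer could look like, assuming a learned sigmoid gate over the concatenated deep-learning and biophysical feature vectors; the dimensions and gating design are illustrative, not the paper's exact AFR module.

```python
import torch
import torch.nn as nn

class AdaptiveFeatureReweighting(nn.Module):
    """Illustrative AFR-style fusion layer (assumed design, not the paper's exact one).

    Learns a per-dimension gate over the concatenated deep-learning and
    biophysical feature vectors, letting the model tune the importance
    of each feature source during training.
    """
    def __init__(self, dl_dim, bio_dim, hidden=64):
        super().__init__()
        fused = dl_dim + bio_dim
        self.gate = nn.Sequential(
            nn.Linear(fused, hidden), nn.ReLU(),
            nn.Linear(hidden, fused), nn.Sigmoid(),
        )

    def forward(self, dl_feat, bio_feat):
        x = torch.cat([dl_feat, bio_feat], dim=-1)
        return x * self.gate(x)                  # re-weighted fused representation

afr = AdaptiveFeatureReweighting(dl_dim=1280, bio_dim=57)
fused = afr(torch.randn(8, 1280), torch.randn(8, 57))   # batch of 8 proteins
print(fused.shape)                                       # torch.Size([8, 1337])
```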

8.
STAR Protoc ; 4(4): 102722, 2023 Dec 15.
Article in English | MEDLINE | ID: mdl-37976152

ABSTRACT

Finding the complete functional circuits of neurons is a challenging problem in brain research. Here, we present a protocol, based on visual stimuli and spikes, for obtaining the complete circuit of recorded neurons using spike-triggered nonnegative matrix factorization. We describe steps for data preprocessing, inferring the spatial receptive fields of the subunits, and analyzing the module matrix. This approach identifies computational components of the feedforward network of retinal ganglion cells and dissects the network structure based on natural image stimuli. For complete details on the use and execution of this protocol, please refer to Jia et al. (2021).
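
A rough sketch of the spike-triggered nonnegative matrix factorization step, assuming a spike-triggered ensemble built by repeating stimulus frames according to their evoked spike counts and factorized with scikit-learn's NMF; the actual protocol's preprocessing and fitting details differ.

```python
import numpy as np
from sklearn.decomposition import NMF

def stnmf(stimuli, spike_counts, n_subunits=10):
    """Sketch of spike-triggered non-negative matrix factorization.

    stimuli:      (n_frames, n_pixels) stimulus images shown to the cell
    spike_counts: (n_frames,) spike count evoked by each frame
    Returns the spatial subunit filters and the module (weight) matrix.
    """
    # Build an effective spike-triggered ensemble by repeating each frame
    # according to how many spikes it evoked (illustrative choice).
    ste = np.repeat(stimuli, spike_counts.astype(int), axis=0)
    ste = ste - ste.min()                        # NMF requires non-negative data
    model = NMF(n_components=n_subunits, init="nndsvda", max_iter=500)
    weights = model.fit_transform(ste)           # module matrix (per-spike weights)
    subunits = model.components_                 # spatial receptive fields of the subunits
    return subunits, weights

rng = np.random.default_rng(0)
stim = rng.random((2000, 30 * 30))               # 2000 frames of a 30x30 stimulus
spikes = rng.poisson(0.5, size=2000)
subs, w = stnmf(stim, spikes)
print(subs.shape)                                # (10, 900): one spatial filter per subunit
```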


Subjects
Models, Neurological, Retinal Ganglion Cells, Neural Networks, Computer, Algorithms, Brain
9.
Neural Netw ; 165: 135-149, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37285730

ABSTRACT

Depression is a global mental health problem that still lacks effective screening methods for early detection and treatment. This paper aims to facilitate the large-scale screening of depression by focusing on the speech depression detection (SDD) task. Currently, direct modeling of the raw signal yields a large number of parameters, and existing deep learning-based SDD models mainly use fixed Mel-scale spectral features as input. However, these features are not designed for depression detection, and the manual settings limit the exploration of fine-grained feature representations. In this paper, we learn effective representations of the raw signals from an interpretable perspective. Specifically, we present a joint learning framework with attention-guided learnable time-domain filterbanks for depression classification (DALF), which combines the depression filterbanks feature learning (DFBL) module with the multi-scale spectral attention learning (MSSA) module. DFBL produces biologically meaningful acoustic features by employing learnable time-domain filters, and MSSA guides the learnable filters to better retain the useful frequency sub-bands. We collect a new dataset, the Neutral Reading-based Audio Corpus (NRAC), to facilitate research in depression analysis, and we evaluate the performance of DALF on the NRAC and the public DAIC-woz datasets. The experimental results demonstrate that our method outperforms state-of-the-art SDD methods with an F1 of 78.4% on the DAIC-woz dataset. In particular, DALF achieves F1 scores of 87.3% and 81.7% on the two parts of the NRAC dataset. By analyzing the filter coefficients, we find that the most important frequency range identified by our method is 600-700 Hz, which corresponds to the Mandarin vowels /e/ and /eˆ/ and can be considered an effective biomarker for the SDD task. Taken together, our DALF model provides a promising approach to depression detection.
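
A minimal sketch of a learnable time-domain filterbank front end of the kind described above, implemented as a 1-D convolution over the raw waveform; the filter count, kernel length, stride, and log compression are assumed values, not the DFBL module's actual configuration.

```python
import torch
import torch.nn as nn

class LearnableTimeFilterbank(nn.Module):
    """Illustrative learnable time-domain filterbank (assumed layout, not the exact DFBL module).

    A 1-D convolution over the raw waveform whose kernels play the role of
    band-pass filters; training shapes their frequency responses instead of
    fixing them to a Mel scale.
    """
    def __init__(self, n_filters=40, kernel_size=401, stride=160):
        super().__init__()
        self.filters = nn.Conv1d(1, n_filters, kernel_size, stride=stride,
                                 padding=kernel_size // 2, bias=False)

    def forward(self, wav):                       # wav: (batch, samples)
        x = self.filters(wav.unsqueeze(1))        # (batch, n_filters, frames)
        return torch.log1p(x.abs())               # compressed, spectrogram-like features

fb = LearnableTimeFilterbank()
feats = fb(torch.randn(4, 16000))                 # four one-second clips at 16 kHz
print(feats.shape)                                # torch.Size([4, 40, 100])
```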


Subjects
Depression, Methyl Parathion, Depression/diagnosis, Speech, Acoustics
10.
Article in English | MEDLINE | ID: mdl-37227906

ABSTRACT

Representation learning in heterogeneous graphs with massive unlabeled data has aroused great interest. The heterogeneity of graphs not only contains rich information, but also raises difficult barriers to designing unsupervised or self-supervised learning (SSL) strategies. Existing methods such as random-walk-based approaches mainly depend on the proximity information of neighbors and lack the ability to integrate node features into a higher-level representation. Furthermore, previous self-supervised or unsupervised frameworks are usually designed for node-level tasks, which commonly fall short of capturing global graph properties and may not perform well on graph-level tasks. Therefore, a label-free framework that can better capture the global properties of heterogeneous graphs is urgently required. In this article, we propose a self-supervised heterogeneous graph neural network (GNN) based on cross-view contrastive learning (HeGCL). HeGCL presents two views for encoding heterogeneous graphs: the meta-path view and the outline view. Compared with the meta-path view, which provides semantic information, the outline view encodes the complex edge relations and captures graph-level properties by using a nonlocal block. Thus, HeGCL learns node embeddings by maximizing mutual information (MI) between global and semantic representations coming from the outline and meta-path views, respectively. Experiments on both node-level and graph-level tasks show the superiority of the proposed model over other methods, and further exploration studies show that the introduction of the nonlocal block contributes significantly to graph-level tasks.
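
A compact sketch of a cross-view contrastive objective between meta-path-view and outline-view node embeddings, written here in InfoNCE form; HeGCL's actual mutual-information estimator and view encoders may differ.

```python
import torch
import torch.nn.functional as F

def cross_view_infonce(z_metapath, z_outline, temperature=0.2):
    """Cross-view contrastive loss (assumed InfoNCE form, not necessarily HeGCL's exact MI estimator).

    z_metapath, z_outline: (n_nodes, dim) node embeddings from the two views.
    Positive pairs are the same node seen in both views; all other nodes in
    the batch act as negatives.
    """
    a = F.normalize(z_metapath, dim=-1)
    b = F.normalize(z_outline, dim=-1)
    logits = a @ b.t() / temperature              # (n, n) similarity matrix
    targets = torch.arange(a.size(0))             # matching rows are the positives
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = cross_view_infonce(torch.randn(128, 64), torch.randn(128, 64))
print(loss.item())
```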

11.
PLoS Comput Biol ; 19(4): e1011019, 2023 04.
Article in English | MEDLINE | ID: mdl-37036844

ABSTRACT

Neurons, whose morphology can be represented as a tree structure, have various distinct dendritic branches. Different types of synaptic receptors distributed over the dendrites are responsible for receiving inputs from other neurons. NMDA receptors (NMDARs) are expressed as excitatory units and play a key physiological role in synaptic function. Although NMDARs are widely expressed in most types of neurons, they play a different role in cerebellar Purkinje cells (PCs). Utilizing a computational PC model with detailed dendritic morphology, we explored the role of NMDARs at different parts of the dendritic branches and regions. We found that somatic responses can switch from silence to simple spikes and complex spikes, depending on the specific dendritic branches stimulated. Detailed examination of the dendrites with respect to their diameters and distance to the soma revealed diverse response patterns that nevertheless reduce to two firing modes, simple and complex spikes. Taken together, these results suggest that NMDARs play an important role in controlling excitability while taking dendritic properties into account. Given that the complexity of neural morphology varies across cell types, our work suggests that the functional role of NMDARs is not stereotyped but highly interwoven with the local properties of neuronal structure.


Subjects
Dendrites, Receptors, N-Methyl-D-Aspartate, Dendrites/physiology, Neurons/physiology, Purkinje Cells/physiology, Synapses/physiology, Action Potentials/physiology
12.
IEEE Trans Neural Netw Learn Syst ; 34(9): 5841-5855, 2023 09.
Article in English | MEDLINE | ID: mdl-34890341

ABSTRACT

Spiking neural networks (SNNs), inspired by the neuronal networks in the brain, provide biologically relevant and low-power-consuming models for information processing. Existing studies either mimic the learning mechanism of brain neural networks as closely as possible, for example, the temporally local learning rule of spike-timing-dependent plasticity (STDP), or apply the gradient descent rule to optimize a multilayer SNN with fixed structure. However, the learning rule used in the former is local, and how the real brain might perform global-scale credit assignment is still not clear, which means that such shallow SNNs are robust but deep SNNs are difficult to train globally and do not work as well. For the latter, the non-differentiable problem caused by discrete spike trains leads to inaccuracy in gradient computation and difficulties in training effective deep SNNs. Hence, a hybrid solution is attractive: combining shallow SNNs with an appropriate machine learning (ML) technique that does not require gradient computation can provide both energy-saving and high-performance advantages. In this article, we propose HybridSNN, a deep and strong SNN composed of multiple simple SNNs, in which data-driven greedy optimization is used to build powerful classifiers, avoiding the derivative problem of gradient descent. During the training process, the output features (spikes) of selected weak classifiers are fed back to the pool for the subsequent weak SNN training and selection. This guarantees that HybridSNN not only represents a linear combination of simple SNNs, as the regular AdaBoost algorithm generates, but also contains neuron connection information, thus closely resembling the neural networks of the brain. HybridSNN has the benefits of both low power consumption in the weak units and overall data-driven optimization strength. The network structure in HybridSNN is learned from training samples, which is more flexible and effective compared with existing fixed multilayer SNNs. Moreover, the topological tree of HybridSNN resembles the neural system of the brain, where pyramidal neurons receive thousands of synaptic input signals through their dendrites. Experimental results show that the proposed HybridSNN is highly competitive among state-of-the-art SNNs.
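
A toy sketch of the greedy, boosting-style construction described above, with decision stumps standing in for the weak SNN classifiers: each selected unit's outputs are fed back as additional features for the next round. The stand-in learners and the feedback rule are assumptions for illustration only.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier  # stand-in for a weak SNN classifier

def greedy_hybrid(X, y, rounds=5):
    """Greedy, AdaBoost-like construction (illustrative stand-in, not the paper's HybridSNN).

    At each round a weak classifier is trained on the current feature pool,
    and its output is appended to the pool as a new feature for later rounds.
    """
    features, learners = X.copy(), []
    for _ in range(rounds):
        clf = DecisionTreeClassifier(max_depth=1).fit(features, y)
        out = clf.predict_proba(features)[:, 1:]  # output of the selected weak unit
        features = np.hstack([features, out])     # feed it back into the pool
        learners.append(clf)
    return learners, features

X, y = np.random.rand(300, 10), np.random.randint(0, 2, 300)
learners, feats = greedy_hybrid(X, y)
print(len(learners), feats.shape)                 # 5 weak units, 10 + 5 feature columns
```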


Subjects
Algorithms, Neural Networks, Computer, Machine Learning, Neurons/physiology, Brain/physiology
13.
Front Neurosci ; 17: 1291051, 2023.
Article in English | MEDLINE | ID: mdl-38249589

ABSTRACT

Spiking neural networks (SNNs), as brain-inspired neural network models based on spikes, have the advantage of processing information with low complexity and efficient energy consumption. Currently, there is a growing trend to design dedicated hardware accelerators for SNNs to overcome the limitations of running them under the traditional von Neumann architecture. Probabilistic sampling is an effective approach for implementing SNNs that simulate how the brain may achieve Bayesian inference. However, sampling consumes considerable time, so dedicated hardware implementations of SNN sampling models are needed to accelerate inference operations. Here, we design an FPGA-based hardware accelerator that speeds up the execution of SNN algorithms through parallelization. We use streaming pipelining and array partitioning to accelerate model operations with the least possible resource consumption, and we use the Python productivity for Zynq (PYNQ) framework to migrate the model to the FPGA while increasing the speed of model operations. We verify the functionality and performance of the hardware architecture on the Xilinx Zynq ZCU104. The experimental results show that the proposed hardware accelerator for the SNN sampling model can significantly improve computing speed while preserving inference accuracy. In addition, Bayesian inference with spiking neural networks through the PYNQ framework can fully exploit the high performance and low power consumption of FPGAs in embedded applications. Taken together, our proposed FPGA implementation of Bayesian inference with SNNs has great potential for a wide range of applications and is well suited to implementing complex probabilistic model inference in embedded systems.

14.
Front Comput Neurosci ; 16: 883065, 2022.
Article in English | MEDLINE | ID: mdl-36157841

ABSTRACT

Alpha rhythms in the human electroencephalogram (EEG), oscillating at 8-13 Hz, are located in parieto-occipital cortex and are strongest when awake people close their eyes. It has been suggested that alpha rhythms are related to attention-related functions and mental disorders, e.g., attention-deficit/hyperactivity disorder (ADHD). However, many studies have shown inconsistent results on the difference in alpha oscillations between ADHD and control groups, so it is essential to verify this difference. In this study, a dataset of EEG recordings (128-channel EGI) from 87 healthy controls (HC) and 162 adults with ADHD (141 persisters and 21 remitters) in a resting state with their eyes closed was used to address this question, and a three-Gaussian model (a sum of baseline and alpha components) was fitted to the data. To our surprise, the power of the alpha component did not differ significantly among the three groups. Instead, the baseline power in the alpha band of the remitter and HC groups was significantly stronger than that of the persister group. Our results suggest that ADHD recovery may involve compensatory mechanisms and that many abnormalities in EEG may be due to the influence of behavior rather than differences in brain signals.
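
A small sketch of fitting a three-Gaussian model (baseline plus alpha components) to a power spectrum with SciPy; the synthetic spectrum, initial guesses, and the choice of which component is labeled "alpha" are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(f, a, mu, sigma):
    return a * np.exp(-(f - mu) ** 2 / (2 * sigma ** 2))

def three_gauss(f, a1, m1, s1, a2, m2, s2, a3, m3, s3):
    """Sum of three Gaussians: two broad baseline components plus an alpha-band peak."""
    return gauss(f, a1, m1, s1) + gauss(f, a2, m2, s2) + gauss(f, a3, m3, s3)

# Synthetic spectrum built from known components plus noise (illustrative only).
freqs = np.linspace(1, 30, 200)
true_params = [4.0, 3.0, 6.0,   1.0, 20.0, 8.0,   1.5, 10.0, 1.2]
psd = three_gauss(freqs, *true_params) + 0.02 * np.random.randn(freqs.size)

p0 = [3, 2, 5,   1, 18, 10,   1, 10, 1.5]            # rough initial guesses
params, _ = curve_fit(three_gauss, freqs, psd, p0=p0, maxfev=20000)
alpha_amp, alpha_freq = params[6], params[7]          # the third Gaussian models the alpha peak
print(f"fitted alpha component: amplitude {alpha_amp:.2f} at {alpha_freq:.1f} Hz")
```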

15.
Neural Comput ; 34(6): 1369-1397, 2022 05 19.
Article in English | MEDLINE | ID: mdl-35534008

ABSTRACT

Images of visual scenes comprise essential features important for visual cognition of the brain. The complexity of visual features lies at different levels, from simple artificial patterns to natural images with different scenes. Much work has focused on using stimulus images to predict neural responses; however, it remains unclear how to extract features from neuronal responses. Here we address this question by leveraging two-photon calcium neural data recorded from the visual cortex of awake macaque monkeys. With stimuli including various categories of artificial patterns and diverse scenes of natural images, we employed a deep neural network decoder inspired by image segmentation techniques. Consistent with the notion of sparse coding for natural images, a few neurons with stronger responses dominated the decoding performance, whereas decoding of artificial patterns required a large number of neurons. When natural images are decoded using the model pretrained on artificial patterns, salient features of natural scenes can be extracted, as well as the conventional category information. Altogether, our results give a new perspective on studying neural encoding principles using reverse-engineering decoding strategies.


Subjects
Calcium, Visual Cortex, Animals, Brain, Macaca, Neural Networks, Computer, Photic Stimulation, Visual Cortex/physiology, Visual Perception/physiology
16.
Patterns (N Y) ; 3(3): 100424, 2022 Mar 11.
Article in English | MEDLINE | ID: mdl-35510192

ABSTRACT

A crucial question in data science is how to extract meaningful information embedded in high-dimensional data into a low-dimensional set of features that can represent the original data at different levels. Wavelet analysis is a pervasive method for decomposing time-series signals into a few levels with detailed temporal resolution. However, the obtained wavelets are intertwined and over-represented across levels for each sample and across different samples within one population. Here, using neuroscience data of simulated spikes, experimental spikes, calcium imaging signals, and human electrocorticography signals, we leveraged conditional mutual information between wavelets for feature selection. The meaningfulness of the selected features was verified by decoding stimulus or condition with high accuracy using only a small set of features. These results provide a new way of using wavelet analysis to extract essential features of the dynamics of spatiotemporal neural data, which can then support novel machine learning model designs based on representative features.
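
A simplified sketch of the pipeline: decompose each trial with a multi-level wavelet transform and rank coefficients by their dependence on the condition label. Plain mutual information is used here as a stand-in for the paper's conditional mutual information criterion, and the data are synthetic.

```python
import numpy as np
import pywt
from sklearn.feature_selection import mutual_info_classif

def wavelet_features(signals, wavelet="db4", level=4):
    """Flatten a multi-level wavelet decomposition of each trial into one feature vector."""
    feats = []
    for x in signals:
        coeffs = pywt.wavedec(x, wavelet, level=level)
        feats.append(np.concatenate(coeffs))
    return np.array(feats)

# Illustrative data: 200 trials of a 256-sample signal under two stimulus conditions.
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=200)
signals = rng.standard_normal((200, 256)) + labels[:, None] * np.sin(np.linspace(0, 8 * np.pi, 256))

X = wavelet_features(signals)
# The paper uses conditional mutual information between wavelets; plain MI
# against the condition label serves here as a simplified stand-in.
mi = mutual_info_classif(X, labels, random_state=0)
top = np.argsort(mi)[::-1][:20]                   # keep a small set of informative coefficients
print(top, X[:, top].shape)
```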

17.
PLoS Comput Biol ; 18(3): e1009925, 2022 03.
Article in English | MEDLINE | ID: mdl-35259159

ABSTRACT

A central goal in sensory neuroscience is to understand the neuronal signal processing involved in the encoding of natural stimuli. A critical step towards this goal is the development of successful computational encoding models. For ganglion cells in the vertebrate retina, the development of satisfactory models for responses to natural visual scenes is an ongoing challenge. Standard models typically apply linear integration of visual stimuli over space, yet many ganglion cells are known to show nonlinear spatial integration, in particular when stimulated with contrast-reversing gratings. Here we study the influence of spatial nonlinearities in the encoding of natural images by ganglion cells, using multielectrode-array recordings from isolated salamander and mouse retinas. We assess how responses to natural images depend on first- and second-order statistics of spatial patterns inside the receptive field. This leads us to a simple extension of current standard ganglion cell models. We show that taking not only the weighted average of light intensity inside the receptive field into account but also its variance over space can partly account for nonlinear integration and substantially improve predictions of responses to novel images. For salamander ganglion cells, we find that response predictions for cell classes with large receptive fields profit most from including spatial contrast information. Finally, we demonstrate how this model framework can be used to assess the spatial scale of nonlinear integration. Our results underscore that nonlinear spatial stimulus integration translates to stimulation with natural images. Furthermore, the introduced model framework provides a simple, yet powerful extension of standard models and may serve as a benchmark for the development of more detailed models of the nonlinear structure of receptive fields.
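
A minimal sketch of the model extension described above: the predicted response is an output nonlinearity applied to a weighted sum of the mean light intensity inside the receptive field and its spatial variance. The parameter names, the Gaussian receptive field, and the rectifying nonlinearity are illustrative assumptions.

```python
import numpy as np

def predict_response(image, rf_weights, w_mean, w_var, offset):
    """Extended ganglion cell model sketch: nonlinearity applied to a weighted
    combination of the receptive-field mean intensity and its spatial variance
    (parameter names are illustrative, not taken from the paper).
    """
    w = rf_weights / rf_weights.sum()
    mean_intensity = np.sum(w * image)
    var_intensity = np.sum(w * (image - mean_intensity) ** 2)
    drive = w_mean * mean_intensity + w_var * var_intensity + offset
    return np.maximum(drive, 0.0)                 # rectifying output nonlinearity

# Illustrative 2-D Gaussian receptive field over a 40x40 image patch.
yy, xx = np.mgrid[0:40, 0:40]
rf = np.exp(-((xx - 20) ** 2 + (yy - 20) ** 2) / (2 * 6.0 ** 2))
patch = np.random.rand(40, 40)
print(predict_response(patch, rf, w_mean=2.0, w_var=8.0, offset=-1.0))
```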


Subjects
Retina, Retinal Ganglion Cells, Animals, Light, Mice, Photic Stimulation/methods, Retina/physiology, Retinal Ganglion Cells/physiology, Urodela
18.
Article in English | MEDLINE | ID: mdl-37015639

ABSTRACT

Thanks to their event-driven nature, spiking neural networks (SNNs) are expected to be highly computation-efficient models: spiking neurons encode useful temporal information and possess strong anti-noise properties. However, high-quality encoding of spatio-temporal complexity and the training optimization of SNNs remain challenging. This article proposes a novel hierarchical event-driven visual system to explore how information is transmitted and represented in the retina using biologically plausible mechanisms. This cognitive model is an augmented spiking-based framework that combines the feature learning capacity of convolutional neural networks (CNNs) with the cognitive capability of SNNs. Furthermore, the visual system is modeled in a biologically realistic way with unsupervised learning rules and spike firing-rate encoding methods. We train and test it on several image datasets (Modified National Institute of Standards and Technology (MNIST), Canadian Institute for Advanced Research (CIFAR-10), and their noisy versions) to show that our model can process more essential information than existing cognitive models. This article also proposes a novel quantization approach to make the proposed spiking-based model more efficient for neuromorphic hardware implementation. The results show that this joint CNN-SNN model can achieve high recognition accuracy and better generalization ability.
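
As one example of the firing-rate encoding step mentioned above, the sketch below converts pixel intensities into spike trains by treating each intensity as a per-time-step spike probability; the window length and maximum rate are assumed values, not the paper's settings.

```python
import numpy as np

def rate_encode(image, time_steps=25, max_rate=0.9, rng=None):
    """Firing-rate spike encoding sketch: each pixel emits spikes over the time
    window with probability proportional to its intensity (Bernoulli sampling).
    """
    if rng is None:
        rng = np.random.default_rng()
    p = np.clip(image, 0, 1) * max_rate                       # per-step spike probability
    return (rng.random((time_steps,) + image.shape) < p).astype(np.uint8)

img = np.random.rand(28, 28)                                  # stand-in for an MNIST digit
spikes = rate_encode(img)
print(spikes.shape, spikes.mean())                            # (25, 28, 28), average firing probability
```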

19.
IEEE Trans Cybern ; 52(1): 39-50, 2022 Jan.
Article in English | MEDLINE | ID: mdl-32167923

ABSTRACT

Deep convolutional neural networks (CNNs) have demonstrated impressive performance on many visual tasks. Recently, they have become useful models of the visual system in neuroscience. However, it is still not clear what is learned by CNNs in terms of neuronal circuits. When a deep CNN with many layers is used for the visual system, it is not easy to compare its structural components with possible neuroscience underpinnings due to the highly complex circuits from the retina to the higher visual cortex. Here, we address this issue by focusing on single retinal ganglion cells with biophysical models and recording data from animals. By training CNNs with white-noise images to predict neuronal responses, we found that fine structures of the retinal receptive field can be revealed. Specifically, the learned convolutional filters resemble biological components of the retinal circuit. This suggests that a CNN learning from one single retinal cell reveals a minimal neural network carried out in this cell. Furthermore, when CNNs learned from different cells are transferred between cells, transfer learning performance varies widely, which indicates that the learned CNNs are cell specific. Moreover, when CNNs are transferred between different types of input images, here white noise versus natural images, transfer learning shows good performance, which implies that CNNs indeed capture the full computational ability of a single retinal cell for different inputs. Taken together, these results suggest that CNNs can be used to reveal structural components of neuronal circuits, and provide a powerful model for neural system identification.


Subjects
Deep Learning, Animals, Neural Networks, Computer, Neurons, Retina/diagnostic imaging
20.
IEEE Trans Neural Netw Learn Syst ; 33(5): 1935-1946, 2022 05.
Article in English | MEDLINE | ID: mdl-34665741

ABSTRACT

Neural coding, including encoding and decoding, is one of the key problems in neuroscience for understanding how the brain uses neural signals to relate sensory perception and motor behaviors with neural systems. However, most existing studies deal only with the continuous signals of neural systems, lacking a unique feature of biological neurons, the spike, which is the fundamental information unit for neural computation as well as a building block for brain-machine interfaces. To address these limitations, we propose a transcoding framework to encode multi-modal sensory information into neural spikes and then reconstruct stimuli from spikes. Sensory information can be compressed to 10% of its original size in terms of neural spikes, yet 100% of the information can be re-extracted by reconstruction. Our framework can not only feasibly and accurately reconstruct dynamic visual and auditory scenes, but also rebuild the stimulus patterns from functional magnetic resonance imaging (fMRI) brain activities. More importantly, it shows strong noise immunity against various types of artificial noise and background signals. The proposed framework provides efficient ways to perform multimodal feature representation and reconstruction in a high-throughput fashion, with potential usage for efficient neuromorphic computing in a noisy environment.


Subjects
Brain-Computer Interfaces, Neural Networks, Computer, Action Potentials/physiology, Brain/physiology, Models, Neurological, Neurons/physiology