Results 1 - 20 of 26
1.
Article in English | MEDLINE | ID: mdl-38116240

ABSTRACT

microRNA-9 (miR-9) is one of the most abundant microRNAs in the mammalian brain, essential for its development and normal function. In neurons, it regulates the expression of several key molecules, ranging from ion channels and enzymes to transcription factors that broadly affect the expression of many genes. The neuronal effects of alcohol, one of the most abused drugs in the world, seem to depend at least partially on the regulation of miR-9 expression. We previously observed that the molecular mechanisms underlying the development of alcohol tolerance are miR-9 dependent. Since a critical feature of alcohol action is the time course of exposure to the drug, we decided to better understand the time dependence of alcohol's regulation of miR-9 biogenesis and expression. We measured the effect of an intoxicating concentration of alcohol (20 mM ethanol) on the expression of all major elements of miR-9 biogenesis: three pri-precursors (pri-mir-9-1, pri-mir-9-2, pri-mir-9-3), three pre-precursors (pre-mir-9-1, pre-mir-9-2, pre-mir-9-3), and two mature microRNAs, miR-9-5p and miR-9-3p, using digital PCR and RT-qPCR in murine primary medium spiny neuron (MSN) cultures. We subjected the neurons to alcohol following an exposure/withdrawal matrix of different exposure times (from 15 min to 24 h) and different withdrawal times (from 0 h to 24 h). We observed that a short exposure increased mature miR-9-5p expression, followed by a gradual decrease and subsequent increase, with expression returning to pre-exposure levels within 24 h. Temporal changes in miR-9-3p expression complemented the miR-9-5p changes. Interestingly, an extended, continuous presence of the drug produced a similar pattern. These results suggest adaptive mechanisms of miR-9 expression in both the presence and absence of alcohol.
Measurement of miR-9 pri- and pre-precursors further showed that the primary effect of alcohol on miR-9 occurs through the mir-9-2 precursor pathway, with smaller contributions from the mir-9-1 and mir-9-3 precursors. Our results provide new insight into the adaptive mechanisms of neurons under alcohol exposure. It would be of interest to determine next which microRNA-based mechanisms are involved in the transition from the acute, intoxicating effects of alcohol to the chronic, addictive effects of the drug.

2.
Int J Mol Sci ; 24(19)2023 Sep 27.
Article in English | MEDLINE | ID: mdl-37834096

ABSTRACT

One of the most important aspects of successful cancer therapy is the identification of a target protein whose interactions should be inhibited. Conventionally, this consists of screening a panel of genes to assess which are mutated and then developing a small molecule that inhibits the interaction of two proteins or simply blocks a specific protein from all interactions. In previous work, we have proposed computational methods that analyze protein-protein networks using both topological approaches and a thermodynamic quantification provided by the Gibbs free energy. To make these approaches easier to implement and free of arbitrary topological filtration criteria, in the present paper we propose a modification of the topological-thermodynamic analysis that focuses on selecting the most thermodynamically stable proteins and their subnetwork interaction partners with the highest expression levels. We illustrate the implementation of the new approach with two specific cases, glioblastoma (glioma brain tumors) and chronic lymphocytic leukemia (CLL), based on publicly available patient-derived datasets. We also discuss how this can be used in clinical practice in connection with the availability of approved and investigational drugs.
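A thermodynamic quantification of this kind can be sketched in a few lines of Python. The snippet below assigns each protein a network "Gibbs free energy" computed from its expression level and that of its interaction partners, using the normalized-concentration form G_i = c_i ln(c_i / Σ_j c_j) (sum over the node and its neighbors), which is one commonly used convention in network thermodynamics; the exact functional form, the toy expression values, and the gene names here are illustrative assumptions, not data from the paper.

```python
import math

def gibbs_energy(expression, edges):
    """Per-node 'Gibbs free energy' of a protein-protein interaction
    network: G_i = c_i * ln(c_i / sum of c_j over node i and its
    neighbors). More negative values indicate more thermodynamically
    stable nodes (assumed convention, not the paper's exact formula)."""
    neighbors = {node: {node} for node in expression}
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    energy = {}
    for node, c in expression.items():
        total = sum(expression[j] for j in neighbors[node])
        energy[node] = c * math.log(c / total)
    return energy

# Hypothetical toy network: a highly expressed hub with three partners.
expr = {"TP53": 8.0, "MDM2": 2.0, "EGFR": 4.0, "AKT1": 1.0}
edges = [("TP53", "MDM2"), ("TP53", "EGFR"), ("TP53", "AKT1")]
G = gibbs_energy(expr, edges)
most_stable = min(G, key=G.get)  # lowest (most negative) energy
```

On this toy input the hub, being both highly expressed and highly connected, gets the most negative energy, which is the kind of node the selection step above would flag.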


Subjects
Brain Neoplasms , Glioma , Humans , Thermodynamics , Proteins , Gene Expression , Protein Interaction Maps , Computational Biology/methods
3.
Article in English | MEDLINE | ID: mdl-37022224

ABSTRACT

We propose a new learning framework, signal propagation (sigprop), for propagating a learning signal and updating neural network parameters via a forward pass, as an alternative to backpropagation (BP). In sigprop, there is only the forward path for inference and learning. Consequently, learning requires no structural or computational machinery beyond the inference model itself: no feedback connectivity, weight transport, or backward pass, all of which exist under BP-based approaches. That is, sigprop enables global supervised learning with only a forward path, which is ideal for parallel training of layers or modules. In biology, this explains how neurons without feedback connections can still receive a global learning signal; in hardware, it provides an approach to global supervised learning without backward connectivity. By construction, sigprop is more compatible with models of learning in the brain and in hardware than BP, including alternative approaches that relax learning constraints. We also demonstrate that sigprop is more efficient in time and memory than these alternatives. To further explain the behavior of sigprop, we provide evidence that sigprop supplies useful learning signals relative to those of BP. To further support its relevance to biological and hardware learning, we use sigprop to train continuous-time neural networks with Hebbian updates and to train spiking neural networks (SNNs) using only the voltage or with biologically and hardware-compatible surrogate functions.
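The forward-only idea can be illustrated with a deliberately simple toy, which is NOT the paper's sigprop algorithm: here the learning signal is a fixed target code per class that travels through the forward path, and the single trainable layer adapts with a purely local delta rule. There is no backward pass and no feedback connectivity; the data, codes, and dimensions are all made up for illustration.

```python
# Toy forward-only local learning (illustrative sketch, not sigprop):
# each class has a fixed target code; the layer's weights are updated
# locally so the input's forward activation moves toward its class code.
DIM_IN, DIM_H, LR = 2, 4, 0.05
codes = [[1.0, 1.0, -1.0, -1.0],    # target code for class 0 (assumed)
         [-1.0, -1.0, 1.0, 1.0]]    # target code for class 1 (assumed)
W = [[0.0] * DIM_IN for _ in range(DIM_H)]

def hidden(x):
    """Forward pass of the single trainable layer."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def train(data, labels, epochs=100):
    for _ in range(epochs):
        for x, y in zip(data, labels):
            h = hidden(x)
            for i in range(DIM_H):          # local LMS update, no backprop
                err = codes[y][i] - h[i]
                for j in range(DIM_IN):
                    W[i][j] += LR * err * x[j]

def predict(x):
    h = hidden(x)
    dists = [sum((hi - ci) ** 2 for hi, ci in zip(h, c)) for c in codes]
    return dists.index(min(dists))          # nearest class code wins

data = [(1.0, 0.1), (0.9, 0.0), (0.1, 1.0), (0.0, 0.9)]
labels = [0, 0, 1, 1]
train(data, labels)
accuracy = sum(predict(x) == y for x, y in zip(data, labels)) / len(data)
```

All information needed for the weight update is available at the layer itself during the forward pass, which is the property the framework above exploits for parallel, layer-local training.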

4.
J Biophotonics ; 16(8): e202300001, 2023 08.
Article in English | MEDLINE | ID: mdl-37078262

ABSTRACT

Skin cancer, an anomalous development of skin cells in the epidermis, is among the most common types of cancer worldwide. Because of its clinical importance, and to improve early diagnosis and patient management, there is an urgent need to develop noninvasive, accurate medical diagnostic tools. To this end, light reflectance spectroscopy over the visible and near-infrared spectral range (400-1000 nm), based on a single-fiber six-around-one optical probe, was applied to extract nine features used for diagnostics. These features include skewness, entropy, energy, kurtosis, scattering amplitude, and others, and are spread over each of four different spectral signatures, namely light reflectance, absorbance, scattering profile approximation, and the absorption/scattering ratio. Our preliminary studies focused on 11 adult patients with diagnoses of malignant melanoma (n = 4), basal cell carcinoma (n = 5), and squamous cell carcinoma (n = 2) in a variety of locations on the body. Measurements were taken first in vivo before surgery, at the site of the lesion and from healthy skin of the same patient, and then ex vivo after surgical excision, where the lesion was rinsed in saline solution and measurements of the light reflected from the "inside" facing plane of the tissue were taken in the same manner. Overall, the experimental results demonstrate that by examining a variety of wavebands, features, and statistical metrics, we can detect and distinguish cancer from normal tissue and between different cancer subtypes. Nevertheless, discrepancies between the in vivo and ex vivo results were observed, and explanations for these discrepancies are discussed.
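Several of the statistical shape features named above can be computed directly from a sampled spectrum. The sketch below uses the standard textbook definitions of energy, skewness, kurtosis, and Shannon entropy; the paper's exact feature definitions, normalizations, and wavebands may differ, and the example spectra are invented.

```python
import math

def spectral_features(spectrum):
    """Shape features of a sampled spectrum: energy, skewness, kurtosis,
    and Shannon entropy of the normalized spectrum. Names follow the
    abstract; the exact definitions used in the paper may differ."""
    n = len(spectrum)
    mean = sum(spectrum) / n
    std = math.sqrt(sum((s - mean) ** 2 for s in spectrum) / n)
    skewness = sum(((s - mean) / std) ** 3 for s in spectrum) / n
    kurtosis = sum(((s - mean) / std) ** 4 for s in spectrum) / n
    energy = sum(s * s for s in spectrum)
    total = sum(spectrum)
    p = [s / total for s in spectrum]          # normalize to a distribution
    entropy = -sum(pi * math.log2(pi) for pi in p if pi > 0)
    return {"energy": energy, "skewness": skewness,
            "kurtosis": kurtosis, "entropy": entropy}

# Invented example spectra: a broad smooth band vs. a single sharp peak.
smooth = [0.1, 0.2, 0.4, 0.8, 0.4, 0.2, 0.1, 0.05]
peaked = [0.01] * 7 + [1.0]
f_smooth = spectral_features(smooth)
f_peaked = spectral_features(peaked)
```

A sharply peaked spectrum concentrates its normalized intensity in few samples and therefore has lower entropy than a broad one, which is the kind of contrast such features exploit for tissue discrimination.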


Subjects
Basal Cell Carcinoma , Melanoma , Skin Neoplasms , Adult , Humans , Skin Neoplasms/diagnostic imaging , Skin Neoplasms/pathology , Melanoma/diagnostic imaging , Melanoma/pathology , Skin/diagnostic imaging , Skin/pathology , Basal Cell Carcinoma/diagnostic imaging , Basal Cell Carcinoma/pathology , Spectrum Analysis/methods
5.
Neural Comput ; 33(11): 2908-2950, 2021 10 12.
Article in English | MEDLINE | ID: mdl-34474476

ABSTRACT

Replay is the reactivation of one or more neural patterns that are similar to the activation patterns experienced during past waking experiences. Replay was first observed in biological neural networks during sleep, and it is now thought to play a critical role in memory formation, retrieval, and consolidation. Replay-like mechanisms have been incorporated in deep artificial neural networks that learn over time to avoid catastrophic forgetting of previous knowledge. Replay algorithms have been successfully used in a wide range of deep learning methods within supervised, unsupervised, and reinforcement learning paradigms. In this letter, we provide the first comprehensive comparison between replay in the mammalian brain and replay in artificial neural networks. We identify multiple aspects of biological replay that are missing in deep learning systems and hypothesize how they could be used to improve artificial neural networks.
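The simplest replay-like mechanism in deep learning is the experience-replay buffer: past experiences are stored and random samples of them are interleaved with new data during training, which mitigates catastrophic forgetting. A minimal sketch (generic, not any specific system from the comparison):

```python
import random

class ReplayBuffer:
    """Minimal ring-buffer for experience replay: store past experiences
    and sample random batches of them to interleave with new data."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []
        self.position = 0

    def add(self, experience):
        if len(self.buffer) < self.capacity:
            self.buffer.append(experience)
        else:                        # overwrite the oldest entry once full
            self.buffer[self.position] = experience
        self.position = (self.position + 1) % self.capacity

    def sample(self, batch_size):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

buf = ReplayBuffer(capacity=3)
for step in range(5):
    buf.add(("state", step))         # oldest experiences are evicted
batch = buf.sample(2)
```

Biological replay, as the letter details, is far richer than this uniform random sampling (it is prioritized, temporally structured, and occurs offline), which is exactly where the authors see room to improve artificial systems.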


Subjects
Deep Learning , Algorithms , Animals , Hippocampus , Neural Networks (Computer) , Psychological Reinforcement , Sleep
6.
Sci Rep ; 11(1): 5331, 2021 03 05.
Article in English | MEDLINE | ID: mdl-33674620

ABSTRACT

Brains demonstrate varying spatial scales of nested hierarchical clustering. Identifying the brain's neuronal cluster size to be presented as nodes in a network computation is critical to both neuroscience and artificial intelligence, as these define the cognitive blocks capable of building intelligent computation. Experiments support various forms and sizes of neural clustering, from handfuls of dendrites to thousands of neurons, and hint at their behavior. Here, we use computational simulations with a brain-derived fMRI network to show that not only do brain networks remain structurally self-similar across scales but also neuron-like signal integration functionality ("integrate and fire") is preserved at particular clustering scales. As such, we propose a coarse-graining of neuronal networks to ensemble-nodes, with multiple spikes making up its ensemble-spike and time re-scaling factor defining its ensemble-time step. This fractal-like spatiotemporal property, observed in both structure and function, permits strategic choice in bridging across experimental scales for computational modeling while also suggesting regulatory constraints on developmental and evolutionary "growth spurts" in brain size, as per punctuated equilibrium theories in evolutionary biology.
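The coarse-graining step can be sketched as follows: neurons are grouped into clusters, time is re-scaled into windows, and an ensemble-node emits an "ensemble-spike" when the fraction of member spikes in a window crosses a threshold, an integrate-and-fire-like rule at the cluster scale. This is an illustrative toy, not the paper's fMRI-derived model; all sizes and thresholds are assumptions.

```python
def ensemble_spikes(member_spikes, cluster_size, window, threshold):
    """Coarse-grain single-neuron spike trains into ensemble-nodes.
    member_spikes[i][t] is 0/1 for neuron i at time step t. Neurons are
    grouped into consecutive clusters; time is re-scaled into windows of
    `window` steps; an ensemble-spike (1) is emitted when the fraction of
    member spikes in the window reaches `threshold`."""
    n_neurons = len(member_spikes)
    n_steps = len(member_spikes[0])
    clusters = [range(i, i + cluster_size)
                for i in range(0, n_neurons, cluster_size)]
    out = []
    for members in clusters:
        row = []
        for t0 in range(0, n_steps, window):
            count = sum(member_spikes[i][t]
                        for i in members
                        for t in range(t0, min(t0 + window, n_steps)))
            frac = count / (cluster_size * window)
            row.append(1 if frac >= threshold else 0)
        out.append(row)
    return out

# Two clusters of two neurons over eight steps; only the first cluster
# is active enough to produce ensemble-spikes.
spikes = [
    [1, 1, 0, 1, 1, 1, 0, 1],
    [1, 0, 1, 1, 0, 1, 1, 1],
    [0, 0, 0, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
]
coarse = ensemble_spikes(spikes, cluster_size=2, window=4, threshold=0.5)
```

The output has the same form as the input (binary spike trains, now per ensemble-node per ensemble-time step), which is what makes the self-similarity claim testable across scales.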


Subjects
Cerebellar Cortex/cytology , Computer Simulation , Neurological Models , Nerve Net/cytology , Neurons/cytology , Humans
7.
Proc Natl Acad Sci U S A ; 117(47): 29872-29882, 2020 11 24.
Article in English | MEDLINE | ID: mdl-33154155

ABSTRACT

The prefrontal cortex encodes and stores numerous, often disparate, schemas and flexibly switches between them. Recent research on artificial neural networks trained by reinforcement learning has made it possible to model fundamental processes underlying schema encoding and storage. Yet how the brain is able to create new schemas while preserving and utilizing old schemas remains unclear. Here we propose a simple neural network framework that incorporates hierarchical gating to model the prefrontal cortex's ability to flexibly encode and use multiple disparate schemas. We show how gating naturally leads to transfer learning and robust memory savings. We then show how neuropsychological impairments observed in patients with prefrontal damage are mimicked by lesions of our network. Our architecture, which we call DynaMoE, provides a fundamental framework for how the prefrontal cortex may handle the abundance of schemas necessary to navigate the real world.
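The gating idea can be sketched as a skeleton mixture-of-experts, a heavily simplified stand-in for the paper's DynaMoE architecture (the real model is a recurrent network trained by reinforcement learning): a gate routes each context to the expert that performs best on it, new schemas are handled by adding experts, and old experts are left intact, which is what yields transfer and memory savings. All names and the routing rule below are illustrative assumptions.

```python
class GatedExperts:
    """Skeleton of expert modules behind a gating unit (illustrative
    sketch, not the paper's DynaMoE implementation): the gate routes each
    context to the expert with the lowest error on it; learning a new
    schema adds an expert without modifying the existing ones."""
    def __init__(self):
        self.experts = {}                 # name -> callable policy

    def add_expert(self, name, policy):
        self.experts[name] = policy       # old experts are never touched

    def route(self, context, evaluate):
        # Gate: pick the expert that currently handles this context best.
        scores = {name: evaluate(policy, context)
                  for name, policy in self.experts.items()}
        return min(scores, key=scores.get)

# Two hypothetical "schemas": double the input vs. negate the input.
moe = GatedExperts()
moe.add_expert("double", lambda x: 2 * x)
moe.add_expert("negate", lambda x: -x)

def error(policy, context):
    x, target = context
    return abs(policy(x) - target)

chosen = moe.route((3, 6), error)         # target 6 fits the doubling schema
```

Lesioning the gate versus lesioning individual experts produces qualitatively different failure modes even in this skeleton, which mirrors how the authors map network lesions onto distinct neuropsychological impairments.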


Subjects
Learning/physiology , Neurological Models , Neural Networks (Computer) , Prefrontal Cortex/physiology , Psychological Reinforcement , Behavior Observation Techniques , Cognition Disorders/etiology , Cognition Disorders/physiopathology , Humans , Mental Disorders/etiology , Mental Disorders/physiopathology , Prefrontal Cortex/injuries
8.
Nat Commun ; 11(1): 4069, 2020 08 13.
Article in English | MEDLINE | ID: mdl-32792531

ABSTRACT

Artificial neural networks suffer from catastrophic forgetting. Unlike humans, when these networks are trained on something new, they rapidly forget what was learned before. In the brain, a mechanism thought to be important for protecting memories is the reactivation of neuronal activity patterns representing those memories. In artificial neural networks, such memory replay can be implemented as 'generative replay', which can successfully - and surprisingly efficiently - prevent catastrophic forgetting on toy examples even in a class-incremental learning scenario. However, scaling up generative replay to complicated problems with many tasks or complex inputs is challenging. We propose a new, brain-inspired variant of replay in which internal or hidden representations are replayed that are generated by the network's own, context-modulated feedback connections. Our method achieves state-of-the-art performance on challenging continual learning benchmarks (e.g., class-incremental learning on CIFAR-100) without storing data, and it provides a novel model for replay in the brain.
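The core loop of generative replay can be sketched in miniature: instead of storing raw data from earlier tasks, keep a small generative model per class and interleave samples drawn from it with data from the new task. The toy below uses a trivially simple "generator" (stored feature means plus noise) purely to show the data flow; the paper's method replays internal representations through learned, context-modulated feedback connections, which this sketch does not attempt to reproduce.

```python
import random

random.seed(2)

class GenerativeReplay:
    """Toy generative replay: no raw data from earlier tasks is stored,
    only a tiny per-class generative model (here: feature means + noise).
    Training batches for a new task are augmented with pseudo-data drawn
    from the models of previously learned classes."""
    def __init__(self, noise=0.1):
        self.models = {}                  # class label -> feature means
        self.noise = noise

    def fit_class(self, label, samples):
        dim = len(samples[0])
        self.models[label] = [sum(s[d] for s in samples) / len(samples)
                              for d in range(dim)]

    def generate(self, label):
        return [m + random.uniform(-self.noise, self.noise)
                for m in self.models[label]]

    def replay_batch(self, new_data, new_label, n_replay):
        batch = [(x, new_label) for x in new_data]
        for old_label in self.models:     # pseudo-data for earlier tasks
            batch += [(self.generate(old_label), old_label)
                      for _ in range(n_replay)]
        return batch

gr = GenerativeReplay()
gr.fit_class("A", [[1.0, 0.0], [0.8, 0.2]])          # earlier task
batch = gr.replay_batch([[0.0, 1.0]], new_label="B", n_replay=2)
```

Because only the compact generative model is kept, memory does not grow with the amount of past data, which is the property that makes this family of methods attractive for class-incremental learning.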

9.
Int J Mol Sci ; 21(3)2020 Feb 07.
Article in English | MEDLINE | ID: mdl-32046179

ABSTRACT

We propose to use a Gibbs free energy function as a measure of human brain development. We apply this approach to the development of the human brain over the lifespan, from the prenatal stage to advanced age. We used proteomic expression data together with the Gibbs free energy to quantify the human brain's protein-protein interaction networks. The data, obtained from BioGRID, comprised tissue samples from the 16 main brain areas, at different ages, of 57 post-mortem human brains. We found a consistent functional dependence of the Gibbs free energies on age for most of the areas and both sexes. A significant upward trend in the Gibbs function was found during the fetal stages, followed by a sharp drop at birth, a subsequent period of relative stability, and a final upward trend toward advanced age. We interpret these data in terms of structure formation followed by its stabilization and eventual deterioration. Furthermore, analysis of the data by sex uncovered functional differences, with male Gibbs function values lower than female values at prenatal and neonatal ages, higher at ages 8 to 40, and finally converging with the female values in late adulthood.


Subjects
Aging/metabolism , Brain/metabolism , Thermodynamics , Adolescent , Adult , Brain/embryology , Brain/growth & development , Child , Preschool Child , Female , Humans , Infant , Male , Middle Aged , Protein Interaction Maps , Transcriptome
10.
Neural Netw ; 120: 108-115, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31500931

ABSTRACT

Deep Reinforcement Learning (RL) demonstrates excellent performance on tasks that can be solved by a trained policy, and it plays a dominant role among cutting-edge machine learning approaches based on multi-layer neural networks (NNs). At the same time, Deep RL suffers from high sensitivity to noisy, incomplete, and misleading input data. Following biological intuition, we employ Spiking Neural Networks (SNNs) to address some deficiencies of deep RL solutions. Previous studies in the image classification domain demonstrated that standard NNs (with ReLU nonlinearity) trained using supervised learning can be converted to SNNs with negligible deterioration in performance. In this paper, we extend those conversion results to the domain of Q-learning NNs trained using RL. We provide a proof of principle of the conversion of a standard NN to an SNN. In addition, we show that the SNN has improved robustness to occlusion in the input image. Finally, we present results on converting a full-scale Deep Q-network to an SNN, paving the way for future research on robust Deep RL applications.
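The principle behind such conversions is rate coding: under a constant input, the firing rate of a non-leaky integrate-and-fire neuron approximates the ReLU activation of the source unit. The sketch below demonstrates just this single-neuron correspondence; a full conversion pipeline (which the paper applies to whole Q-networks) additionally normalizes weights and thresholds, which is omitted here.

```python
def if_neuron_rate(input_current, t_steps=1000, threshold=1.0):
    """Simulate a non-leaky integrate-and-fire neuron driven by a
    constant current and return its firing rate over t_steps steps.
    For inputs in [0, 1], the rate approximates relu(input_current)."""
    v, spikes = 0.0, 0
    for _ in range(t_steps):
        v += input_current
        if v >= threshold:
            spikes += 1
            v -= threshold      # reset by subtraction preserves the rate
    return spikes / t_steps

# Firing rate tracks the ReLU of the input: zero for negative drive,
# proportional for positive drive in [0, 1].
rates = {a: if_neuron_rate(a) for a in (-0.3, 0.0, 0.25, 0.5)}
```

Resetting by subtraction rather than to zero is the standard choice here because it avoids discarding residual charge, keeping the rate-to-activation mapping linear.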


Subjects
Machine Learning/standards , Game Theory
11.
Neural Netw ; 119: 332-340, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31499357

ABSTRACT

In recent years, spiking neural networks (SNNs) have demonstrated great success in completing various machine learning tasks. We introduce a method for learning image features with locally connected layers in SNNs using a spike-timing-dependent plasticity (STDP) rule. In our approach, sub-networks compete via inhibitory interactions to learn features from different locations of the input space. These locally connected SNNs (LC-SNNs) manifest key topological features of the spatial interaction of biological neurons. We explore a biologically inspired n-gram classification approach allowing parallel processing over various patches of the image space. We report the classification accuracy of simple two-layer LC-SNNs on two image datasets, which respectively matches state-of-the-art performance and constitutes the first results reported to date. LC-SNNs have the advantage of fast convergence to a dataset representation, and they require fewer learnable parameters than other SNN approaches with unsupervised learning. Robustness tests demonstrate that LC-SNNs exhibit graceful degradation of performance despite the random deletion of large numbers of synapses and neurons. Our results were obtained using the BindsNET library, which allows efficient machine learning implementations of spiking neural networks.
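The learning rule at the heart of this approach is pair-based STDP: a synapse is potentiated when the presynaptic spike precedes the postsynaptic one and depressed otherwise, with exponentially decaying magnitude in the spike-time difference. The sketch below is the generic textbook form with made-up constants; the specific rule and parameters used for LC-SNNs may differ.

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.055,
                tau=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when pre fires before post (LTP),
    depress when post fires before pre (LTD), with exponential decay
    in |t_post - t_pre|. The weight is clipped to [w_min, w_max].
    Generic textbook rule; constants are illustrative."""
    dt = t_post - t_pre
    if dt > 0:        # pre before post -> long-term potentiation
        w += a_plus * math.exp(-dt / tau)
    elif dt < 0:      # post before pre -> long-term depression
        w -= a_minus * math.exp(dt / tau)
    return min(w_max, max(w_min, w))

w0 = 0.5
w_ltp = stdp_update(w0, t_pre=10.0, t_post=15.0)   # pre leads: strengthen
w_ltd = stdp_update(w0, t_pre=15.0, t_post=10.0)   # post leads: weaken
```

Combined with the inhibitory competition described above, repeated application of this rule drives different sub-networks to specialize on features from different input locations.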


Subjects
Machine Learning , Neural Networks (Computer) , Neuronal Plasticity/physiology , Neurons/physiology , Neurological Models
12.
Front Neuroinform ; 12: 89, 2018.
Article in English | MEDLINE | ID: mdl-30631269

ABSTRACT

The development of spiking neural network simulation software is a critical component enabling the modeling of neural systems and the development of biologically inspired algorithms. Existing software frameworks support a wide range of neural functionality, software abstraction levels, and hardware devices, yet are typically not suitable for rapid prototyping or application to problems in the domain of machine learning. In this paper, we describe a new Python package for the simulation of spiking neural networks, specifically geared toward machine learning and reinforcement learning. Our software, called BindsNET, enables rapid building and simulation of spiking networks and features user-friendly, concise syntax. BindsNET is built on the PyTorch deep neural networks library, facilitating the implementation of spiking neural networks on fast CPU and GPU computational platforms. Moreover, the BindsNET framework can be adjusted to utilize other existing computing and hardware backends, e.g., TensorFlow and SpiNNaker. We provide an interface with the OpenAI gym library, allowing for training and evaluation of spiking networks in reinforcement learning environments. We argue that this package facilitates the use of spiking networks for large-scale machine learning problems and show some simple examples of using BindsNET in practice.

13.
Front Neurosci ; 11: 80, 2017.
Article in English | MEDLINE | ID: mdl-28289370

ABSTRACT

Overview: We model energy constraints in a network of spiking neurons, while exploring general questions of resource limitation on network function abstractly. Background: Metabolic states like dietary ketosis or hypoglycemia have a large impact on brain function and disease outcomes. Glia provide metabolic support for neurons, among other functions. Yet, in computational models of glia-neuron cooperation, there have been no previous attempts to explore the effects of direct realistic energy costs on network activity in spiking neurons. Currently, biologically realistic spiking neural networks assume that membrane potential is the main driving factor for neural spiking, and do not take into consideration energetic costs. Methods: We define local energy pools to constrain a neuron model, termed Spiking Neuron Energy Pool (SNEP), which explicitly incorporates energy limitations. Each neuron requires energy to spike, and resources in the pool regenerate over time. Our simulation displays an easy-to-use GUI, which can be run locally in a web browser, and is freely available. Results: Energy dependence drastically changes behavior of these neural networks, causing emergent oscillations similar to those in networks of biological neurons. We analyze the system via Lotka-Volterra equations, producing several observations: (1) energy can drive self-sustained oscillations, (2) the energetic cost of spiking modulates the degree and type of oscillations, (3) harmonics emerge with frequencies determined by energy parameters, and (4) varying energetic costs have non-linear effects on energy consumption and firing rates. Conclusions: Models of neuron function which attempt biological realism may benefit from including energy constraints. Further, we assert that observed oscillatory effects of energy limitations exist in networks of many kinds, and that these findings generalize to abstract graphs and technological applications.
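The core mechanism can be sketched in a few lines: a neuron integrates a constant drive toward its threshold, but a spike is only emitted if a local energy pool can pay its cost, and the pool regenerates at a fixed rate. This is a minimal SNEP-like toy with invented parameters, not the paper's simulation; it shows how the interplay of cost and regeneration alone turns steady drive into burst-then-throttle dynamics.

```python
def run_snep(steps, drive, spike_cost, regen, threshold=1.0, pool0=5.0):
    """Minimal energy-constrained spiking neuron (SNEP-like sketch):
    the membrane integrates `drive` each step and spikes at `threshold`
    only if the local energy pool can pay `spike_cost`; the pool gains
    `regen` per step. Returns the spike times and the final pool level."""
    v, pool, spikes = 0.0, pool0, []
    for t in range(steps):
        v += drive
        pool += regen
        if v >= threshold and pool >= spike_cost:
            spikes.append(t)
            v = 0.0
            pool -= spike_cost
        elif v >= threshold:
            v = threshold      # ready to fire, but energy-starved
    return spikes, pool

# Demand (one affordable spike every 2 steps) exceeds the sustainable
# rate (regen pays for one spike every 4 steps): an initial burst drains
# the pool, after which firing is paced by energy, not by voltage.
spikes, pool_left = run_snep(steps=100, drive=0.6, spike_cost=2.0, regen=0.5)
```

The transition from the voltage-limited inter-spike interval (2 steps) to the energy-limited one (4 steps) is the single-neuron analog of the network-level oscillations the paper analyzes with Lotka-Volterra equations.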

14.
Int J Neural Syst ; 24(8): 1450029, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25354762

ABSTRACT

We study the computational capabilities of a biologically inspired neural model in which the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the concept of plasticity itself, so the nature of the updates is left unconstrained. In this context, we show that the so-called plastic recurrent neural networks (RNNs) are capable of precisely the same super-Turing computational power as static analog neural networks, irrespective of whether their synaptic weights are modeled by rational or real numbers, and irrespective of whether their patterns of plasticity are restricted to bi-valued updates or expressed in any other more general form of updating. Consequently, the incorporation of only bi-valued plastic capabilities in a basic model of RNNs suffices to break the Turing barrier and achieve the super-Turing level of computation. The consideration of more general mechanisms of architectural plasticity or of real synaptic weights does not further increase the capabilities of the networks. These results support the claim that the general mechanism of plasticity is crucially involved in the computational and dynamical capabilities of biological neural networks. They further show that the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.


Subjects
Neurological Models , Neural Networks (Computer) , Neuronal Plasticity
15.
Article in English | MEDLINE | ID: mdl-24653679

ABSTRACT

A unique delayed self-inhibitory pathway mediated by layer 5 Martinotti cells was studied in a biologically inspired neural network simulation. Inclusion of this pathway, along with layer 5 basket cell lateral inhibition, produced balanced competitive learning, which led to the formation of neuronal clusters such as have indeed been reported in the same region. The Martinotti pathway proves to act as a learning "conscience," causing overly successful regions in the network to restrict themselves and let others fire. It thus spreads connectivity more evenly throughout the network and solves the "dead unit" problem of clustering algorithms in a local and biologically plausible manner.
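The "conscience" idea has a classic rate-based counterpart: frequency-sensitive competitive learning, where units that win too often are handicapped by a bias so that no unit stays dead. The sketch below is that generic rule, offered as an abstraction of the computational role the simulation attributes to the Martinotti pathway, not as the paper's spiking model; the data, initial centers, and constants are invented.

```python
def conscience_clustering(data, centers, epochs=30, lr=0.2, bias_rate=0.1):
    """Competitive learning with a 'conscience': the winner for each
    input is the unit with the smallest distance PLUS a penalty that
    grows with how often the unit has already won, so overly successful
    units step aside and let others learn (generic rule, illustrative
    constants)."""
    wins = [0] * len(centers)
    for _ in range(epochs):
        for x in data:
            total = sum(wins) + 1
            scores = [sum((c - xi) ** 2 for c, xi in zip(centers[i], x))
                      + bias_rate * wins[i] / total     # the "conscience"
                      for i in range(len(centers))]
            winner = scores.index(min(scores))
            wins[winner] += 1
            for d in range(len(x)):        # only the winner learns
                centers[winner][d] += lr * (x[d] - centers[winner][d])
    return centers, wins

# Both units start inside the same cluster: the classic "dead unit" trap
# for plain competitive learning.
data = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (0.9, 1.0)]
centers, wins = conscience_clustering(data, [[0.0, 0.0], [0.05, 0.0]])
```

With the bias in place, the two units split the two data clusters between them and win roughly equally often, which is the even spreading of connectivity described above.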


Subjects
Computer Simulation , Learning/physiology , Neurological Models , Neocortex/physiology , Neurons/physiology , Action Potentials/physiology , Algorithms , Animals , Synapses/physiology
16.
Prog Biophys Mol Biol ; 113(1): 117-26, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23583352

ABSTRACT

Biological processes are often compared to computation and modeled on the Universal Turing Machine. While many systems, or aspects of systems, can be well described in this manner, Turing computation can only compute what it has been programmed for; it has no ability to learn or adapt to new situations. Yet adaptation, choice, and learning are all hallmarks of living organisms. This suggests that there must be a different form of computation capable of this sort of calculation. It also suggests that some current computational models of biological systems may be fundamentally incorrect. We argue that the Super-Turing model is both capable of modeling adaptive computation and, furthermore, a possible answer to the computational model sought by Turing himself.


Subjects
Physiological Adaptation/physiology , Algorithms , Artificial Intelligence , Computer Simulation , Mathematics , Biological Models , Systems Biology/methods , Biophysics/methods , Feedback , Molecular Biology/methods , Systems Integration
17.
Neural Comput ; 24(4): 996-1019, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22295978

ABSTRACT

In classical computation, rational- and real-weighted recurrent neural networks were shown to be, respectively, equivalent to and strictly more powerful than the standard Turing machine model. Here, we study the computational power of recurrent neural networks in a more biologically oriented computational framework, capturing the aspects of sequential interactivity and persistence of memory. In this context, we prove that so-called interactive rational- and real-weighted neural networks show the same computational powers as interactive Turing machines and interactive Turing machines with advice, respectively. A mathematical characterization of each of these computational powers is also provided. It follows from these results that interactive real-weighted neural networks can perform uncountably many more translations of information than interactive Turing machines, endowing them with super-Turing capabilities.


Subjects
Computer Simulation , Neurological Models , Neural Networks (Computer) , Neurons/physiology , Memory/physiology
18.
Curr Opin Genet Dev ; 20(6): 644-9, 2010 Dec.
Article in English | MEDLINE | ID: mdl-20961750

ABSTRACT

Logical models provide insight about key control elements of biological networks. Based solely on the logical structure, we can determine state transition diagrams that give the allowed possible transitions in a coarse grained phase space. Attracting pathways and stable nodes in the state transition diagram correspond to robust attractors that would be found in several different types of dynamical systems that have the same logical structure. Attracting nodes in the state transition diagram correspond to stable steady states. Furthermore, the sequence of logical states appearing in biological networks with robust attracting pathways would be expected to appear also in Boolean networks, asynchronous switching networks, and differential equations having the same underlying structure. This provides a basis for investigating naturally occurring and synthetic systems, both to predict the dynamics if the structure is known, and to determine the structure if the transitions are known.
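A synchronous state-transition diagram of the kind described above is straightforward to enumerate for a small Boolean network: each global state maps to its successor under the logical update rules, and attracting (fixed-point) nodes satisfy f(s) = s. The two-gene mutual-repression switch below is a standard illustrative circuit, not an example from the paper.

```python
from itertools import product

def state_transition_diagram(update_fns):
    """Synchronous state-transition diagram of a Boolean network:
    maps each global state (a 0/1 tuple) to its successor under the
    per-node logical update rules."""
    n = len(update_fns)
    return {state: tuple(f(state) for f in update_fns)
            for state in product((0, 1), repeat=n)}

# Two-gene mutual repression: each gene is ON iff the other is OFF.
rules = [lambda s: 1 - s[1],
         lambda s: 1 - s[0]]
diagram = state_transition_diagram(rules)
fixed_points = [s for s, t in diagram.items() if s == t]
```

The two fixed points (gene 1 ON / gene 2 OFF, and vice versa) are exactly the robust attractors the logical analysis predicts for any dynamical system, Boolean, asynchronous, or differential, that shares this logical structure.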


Subjects
Biological Models , Animals , Computer Simulation , Humans , Software
19.
Chaos ; 20(3): 037112, 2010 Sep.
Article in English | MEDLINE | ID: mdl-20887078

ABSTRACT

One of the brain's most basic functions is integrating sensory data from diverse sources. This ability leads us to ask whether the neural system is computationally capable of intelligently integrating data not only when sources have known, fixed relative dependencies but also when it must determine such relative weightings under dynamic conditions and then use the learned weightings to accurately infer information about the world. We suggest that the brain is, in fact, fully capable of computing this parallel task in a single network, and we describe a neurally inspired circuit with this property. Our implementation suggests that evidence learning may require a more complex organization of the network than was previously assumed, in which neurons develop different specialties whose emergence yields the adaptivity seen in human online inference.
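The normative benchmark for such integration is inverse-variance (reliability-weighted) cue combination: for independent Gaussian cues, the optimal fused estimate weights each cue by the inverse of its variance. The sketch below shows that textbook computation, offered here only to illustrate the kind of weighting the circuit must learn when the reliabilities are not given in advance; the numbers are invented.

```python
def fuse_cues(estimates, variances):
    """Statistically optimal fusion of independent Gaussian cues via
    inverse-variance weighting. Returns the fused mean and its variance;
    the fused variance is always below that of the best single cue."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, estimates)) / total
    fused_variance = 1.0 / total
    return mean, fused_variance

# A reliable cue (variance 1) and an unreliable one (variance 4):
# the fused estimate sits much closer to the reliable cue.
mean, var = fuse_cues([10.0, 14.0], [1.0, 4.0])
```

The hard part, which the circuit described above addresses, is estimating those variances online from the data stream itself rather than receiving them as fixed parameters.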


Subjects
Learning , Neurons/physiology , Bayes Theorem , Humans , Biological Models , Nerve Net/physiology , Synapses/physiology
20.