1.
Article in English | MEDLINE | ID: mdl-38827109

ABSTRACT

Motivation: The proliferation of genetic testing and consumer genomics poses a logistical challenge for the personalized use of GWAS data in VCF format: retrieving target genetic variation from large compressed files filled with unrelated variant information. Compounding the data-traversal challenge, privacy-sensitive VCF files are typically managed as large stand-alone single files (no companion index file) composed of variable-sized compressed chunks, hosted in consumer-facing environments with no native support for hosted execution. Results: A portable JavaScript module was developed to support in-browser fetching of partial content using byte-range requests. This includes on-the-fly decompression of irregularly positioned compressed chunks, coupled with a binary search algorithm that iteratively identifies chromosome-position ranges. The in-browser, zero-footprint solution (no downloads, no installations) enables the interoperability, reusability, and user-facing governance advanced by the FAIR principles for stewardship of scientific data. Availability: https://episphere.github.io/vcf, including supplementary material.
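To make the retrieval strategy concrete, here is a minimal JavaScript sketch of the two ingredients the abstract describes: byte-range fetching with in-browser decompression, and a binary search over byte offsets. The probe() helper, the chunk-alignment handling, and all constants are illustrative assumptions, not the published module's code.

```javascript
// Sketch only: locate a chromosome:position in a compressed, un-indexed VCF
// on a server that honors HTTP Range requests.

async function fetchRange(url, start, end) {
  // Byte-range request: only bytes [start, end] are transferred.
  const res = await fetch(url, { headers: { Range: `bytes=${start}-${end}` } });
  return new Uint8Array(await res.arrayBuffer());
}

async function inflateChunk(bytes) {
  // BGZF chunks are self-contained gzip members, so a chunk fetched on its
  // own can be inflated in the browser (assumes the range starts on a chunk).
  const stream = new Blob([bytes]).stream()
    .pipeThrough(new DecompressionStream('gzip'));
  return new Response(stream).text();
}

// Iterative binary search over byte offsets: probe(url, offset) is assumed to
// fetch a window at offset, inflate it, and return the chromosome/position of
// the first complete VCF record it contains.
async function findVariant(url, fileSize, chromTarget, posTarget, probe) {
  let lo = 0, hi = fileSize;
  while (hi - lo > 1 << 16) {          // stop once a 64 KB window remains
    const mid = Math.floor((lo + hi) / 2);
    const { chrom, pos } = await probe(url, mid);
    // Assumes records are sorted by (chrom, pos), as VCF requires.
    if (chrom < chromTarget || (chrom === chromTarget && pos < posTarget)) lo = mid;
    else hi = mid;
  }
  return probe(url, lo);               // final scan within the last window
}
```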

2.
Natl Sci Rev ; 11(5): nwad301, 2024 May.
Article in English | MEDLINE | ID: mdl-38577672

ABSTRACT

The author provides four design principles for mapping cortical microcircuits onto neuromorphic hardware, shedding light on the design of next-generation neuromorphic hardware.

3.
Sci Rep ; 14(1): 8557, 2024 04 12.
Article in English | MEDLINE | ID: mdl-38609429

ABSTRACT

Spiking neural networks are of high current interest, both for modelling the neural networks of the brain and for porting their fast learning capability and energy efficiency into neuromorphic hardware. But so far, the fast learning capability of the brain has not been reproduced in spiking neural networks. Biological data suggest that a synergy of synaptic plasticity on a slow time scale with network dynamics on a faster time scale is responsible for the brain's fast learning capability. We show here that a suitable orchestration of this synergy between synaptic plasticity and network dynamics does in fact reproduce fast learning capability in generic recurrent networks of spiking neurons. This points to the important role of recurrent connections in spiking networks, since these are necessary for enabling salient network dynamics. We show more specifically that the proposed synergy enables synaptic weights to encode more general information such as priors and task structures, since moment-to-moment processing of new information can be delegated to the network dynamics.


Subject(s)
Brain , Learning , Neuronal Plasticity , Drugs, Generic , Neural Networks, Computer
4.
Nat Commun ; 15(1): 2344, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38490999

ABSTRACT

Planning and problem solving are cornerstones of higher brain function, but we do not know how the brain accomplishes them. We show that learning a suitable cognitive map of the problem space suffices. Furthermore, this can be reduced to learning to predict the next observation through local synaptic plasticity. Importantly, the resulting cognitive map encodes relations between actions and observations, and its emergent high-dimensional geometry provides a sense of direction for reaching distant goals. This quasi-Euclidean sense of direction provides a simple heuristic for online planning that works almost as well as the best offline planning algorithms from AI. If the problem space is a physical space, this method automatically extracts structural regularities from the sequence of observations that it receives, so that it can generalize to unseen parts. This speeds up the learning of navigation in 2D mazes and of locomotion with complex actuator systems, such as legged bodies. Like self-attention networks (Transformers), the cognitive map learner that we propose does not require a teacher. But in contrast to Transformers, it does not require backpropagation of errors or very large datasets for learning. Hence it provides a blueprint for future energy-efficient neuromorphic hardware that acquires advanced cognitive capabilities through autonomous on-chip learning.
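As a rough illustration of the planning heuristic described above (not the paper's model, which learns its map through local synaptic plasticity), the greedy step can be sketched as follows; predictNext() and distance() are hypothetical stand-ins for the learned next-observation model and the emergent embedding geometry.

```javascript
// Greedy online planning with a learned cognitive map: score each available
// action by how far its predicted next observation moves the agent toward
// the goal in embedding space, and take the best one.

function planStep(state, goal, actions, predictNext, distance) {
  let best = null, bestDist = Infinity;
  for (const action of actions) {
    const next = predictNext(state, action); // predicted next observation
    const d = distance(next, goal);          // "sense of direction" to the goal
    if (d < bestDist) { bestDist = d; best = action; }
  }
  return best; // no search tree required
}

// Example distance for vector embeddings:
const euclidean = (a, b) => Math.hypot(...a.map((v, i) => v - b[i]));
```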

5.
PLoS Comput Biol ; 20(3): e1011921, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38452057

ABSTRACT

In an ever-changing visual world, animals' survival depends on their ability to perceive and respond to rapidly changing motion cues. The primary visual cortex (V1) is at the forefront of this sensory processing, orchestrating neural responses to perturbations in visual flow. However, the underlying neural mechanisms that lead to distinct cortical responses to such perturbations remain enigmatic. In this study, our objective was to uncover the neural dynamics that govern V1 neurons' responses to visual flow perturbations using a biologically realistic computational model. By subjecting the model to sudden changes in visual input, we observed opposing cortical responses in excitatory layer 2/3 (L2/3) neurons, namely, depolarizing and hyperpolarizing responses. We found that this segregation was primarily driven by the competition between external visual input and recurrent inhibition, particularly within L2/3 and L4. This division was not observed in excitatory L5/6 neurons, suggesting a more prominent role for inhibitory mechanisms in the visual processing of the upper cortical layers. Our findings share similarities with recent experimental studies focusing on the opposing influence of top-down and bottom-up inputs in the mouse primary visual cortex during visual flow perturbations.


Subject(s)
Visual Cortex , Mice , Animals , Visual Cortex/physiology , Photic Stimulation , Neurons/physiology , Sensation , Visual Perception/physiology
6.
Article in German | MEDLINE | ID: mdl-38086924

ABSTRACT

Since December 2019, digital health applications (DiGA) have been included in standard care in Germany and are therefore reimbursed by the statutory health insurance funds to support patients in the treatment of diseases or impairments. There are 48 registered DiGA listed in the directory of the Federal Institute for Drugs and Medical Devices (BfArM), mainly in the areas of mental health; hormones and metabolism; and muscles, bones, and joints. In this article, the "Digital Health" specialist group of the German Informatics Society describes the current developments around DiGA as well as the current sentiment on topics such as user-centricity, patient and practitioner acceptance, and innovation potential. In summary, over the past three years, DiGA have experienced a positive development, characterized by a gradually increasing number of available DiGA and coverage areas as well as rising prescription numbers. Nevertheless, significant regulatory adjustments are still required in some areas to establish DiGA as a well-established instrument in long-term routine healthcare. Key challenges include user-centricity and the sustainable use of the applications.


Subject(s)
Academies and Institutes , Digital Health , Humans , Germany
7.
Bioinform Adv ; 3(1): vbad145, 2023.
Article in English | MEDLINE | ID: mdl-37868335

ABSTRACT

Motivation: Currently, the Polygenic Score (PGS) Catalog curates over 400 publications on over 500 traits, corresponding to over 3,000 polygenic risk scores (PRSs). To assess the feasibility of privately calculating the underlying multivariate relative risk for individuals with consumer genomics data, we developed an in-browser PRS calculator for genomic data that does not circulate any data or engage in any computation outside of the user's personal device. Results: A prototype personal risk score calculator, created for research purposes, was developed to demonstrate how the PGS Catalog can be privately and readily applied to widely available direct-to-consumer genetic testing services, such as 23andMe. No software download, installation, or configuration is needed. The PRS web calculator matches individual PGS Catalog entries with an individual's 23andMe genome data, composed of 600,000 to 1.4 million single-nucleotide polymorphisms (SNPs). Beta coefficients provide researchers with a convenient assessment of the risk associated with matched SNPs. This in-browser application was tested on a variety of personal devices, including smartphones, establishing the feasibility of privately calculating personal risk scores with up to a few thousand reference genetic variations and from the full 23andMe SNP data file (compressed or not). Availability and implementation: The PRScalc web application is developed in JavaScript, HTML, and CSS and is available in a GitHub repository (https://episphere.github.io/prs) under an MIT license. The datasets were derived from sources in the public domain: [PGS Catalog, Personal Genome Project].
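The core arithmetic of such a PRS calculator can be sketched in a few lines of JavaScript. Field names follow PGS Catalog scoring-file conventions and the 23andMe tab-separated raw-data layout; the parsing is deliberately simplified and is not the PRScalc source.

```javascript
// Sketch of the PRS arithmetic: sum, over SNPs shared between a scoring file
// and a 23andMe raw-data file, of beta weight times effect-allele dosage.

function parse23andMe(text) {
  const genotypes = new Map(); // rsid -> genotype string, e.g. "AG"
  for (const line of text.split('\n')) {
    if (!line || line.startsWith('#')) continue; // skip header comments
    const [rsid, , , genotype] = line.split('\t');
    genotypes.set(rsid, genotype);
  }
  return genotypes;
}

function computePRS(scoringRows, genotypes) {
  let score = 0, matched = 0;
  for (const { rsID, effect_allele, effect_weight } of scoringRows) {
    const genotype = genotypes.get(rsID);
    if (!genotype) continue; // SNP not assayed on this chip
    // Dosage: number of copies (0, 1, or 2) of the effect allele.
    const dosage = [...genotype].filter(a => a === effect_allele).length;
    score += Number(effect_weight) * dosage;
    matched += 1;
  }
  return { score, matched }; // raw score plus how many catalog SNPs matched
}
```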

8.
Sci Adv ; 8(44): eabq7592, 2022 Nov 04.
Article in English | MEDLINE | ID: mdl-36322646

ABSTRACT

We analyze visual processing capabilities of a large-scale model for area V1 that arguably provides the most comprehensive accumulation of anatomical and neurophysiological data to date. We find that this brain-like neural network model can reproduce a number of characteristic visual processing capabilities of the brain, in particular the capability to solve diverse visual processing tasks, also on temporally dispersed visual information, with remarkable robustness to noise. This V1 model, whose architecture and neurons markedly differ from those of deep neural networks used in current artificial intelligence (AI), such as convolutional neural networks (CNNs), also reproduces a number of characteristic neural coding properties of the brain, which provides explanations for its superior noise robustness. Because visual processing is substantially more energy efficient in the brain compared with CNNs in AI, such brain-like neural networks are likely to have an impact on future technology: as blueprints for visual processing in more energy-efficient neuromorphic hardware.

9.
Front Surg ; 9: 962844, 2022.
Article in English | MEDLINE | ID: mdl-35990096

ABSTRACT

The pandemic led to a significant change in the clinical routine of many orthopaedic surgeons. To observe the impact of the pandemic on scientific output, all studies in the field of orthopaedics listed in the Web of Science databases were analysed with respect to the scientific output of the years 2019, 2020, and 2021. Subsequently, correlation analyses were performed with parameters of the regional pandemic situation (obtained from the WHO) and economic strength (obtained from the World Bank). The investigation revealed that the Covid-19 pandemic led to a decrease in the annual publication rate for the first time in 20 years (2020 to 2021: -5.69%). There were regional differences in the publication rate, which correlated significantly with the respective Covid-19 case count (r = -.77, p < 0.01), the associated death count (r = -.63, p < 0.01), and the gross domestic product per capita (r = -.40, p < 0.01), but not with the number of vaccinations (r = .09, p = 0.30). Furthermore, there was a drastic decrease in funding from private agencies (relative share: 2019: 36.43%; 2020: 22.66%; 2021: 19.22%) and a balanced decrease in publication output across research areas of acute and elective patient care. The Covid-19 pandemic resulted in a decline in annual orthopaedic publication rates for the first time in 20 years. This reduction showed marked regional differences, correlated directly with the pandemic burden, and was associated with decreased research funding from the private sector.
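For reference, the statistic reported throughout this abstract is Pearson's r; a minimal JavaScript implementation (illustrative only, not the authors' analysis code) looks like this:

```javascript
// Pearson correlation coefficient between two equal-length numeric arrays.
function pearsonR(x, y) {
  const mean = a => a.reduce((s, v) => s + v, 0) / a.length;
  const mx = mean(x), my = mean(y);
  let cov = 0, vx = 0, vy = 0;
  for (let i = 0; i < x.length; i++) {
    cov += (x[i] - mx) * (y[i] - my);
    vx += (x[i] - mx) ** 2;
    vy += (y[i] - my) ** 2;
  }
  return cov / Math.sqrt(vx * vy);
}

// e.g. pearsonR(publicationRateChangeByCountry, covidCasesByCountry)
```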

10.
Elife ; 10, 2021 07 26.
Article in English | MEDLINE | ID: mdl-34310281

ABSTRACT

For solving tasks such as recognizing a song, answering a question, or inverting a sequence of symbols, cortical microcircuits need to integrate and manipulate information that was dispersed over time during the preceding seconds. Creating biologically realistic models for the underlying computations, especially with spiking neurons and for behaviorally relevant integration time spans, is notoriously difficult. We examine the role of spike frequency adaptation in such computations and find that it has a surprisingly large impact. The inclusion of this well-known property of a substantial fraction of neurons in the neocortex - especially in higher areas of the human neocortex - moves the performance of spiking neural network models for computations on network inputs that are temporally dispersed from a fairly low level up to the performance level of the human brain.
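The mechanism in question, spike frequency adaptation, can be sketched as a leaky integrate-and-fire neuron whose firing threshold rises with each spike and decays back over seconds, giving the cell a memory of its recent activity. Parameter values below are illustrative, not taken from the paper.

```javascript
// One simulation step of an adaptive leaky integrate-and-fire neuron.
function stepAdaptiveLIF(state, inputCurrent, dt = 1 /* ms */) {
  const tauM = 20, tauA = 2000;   // membrane vs. adaptation time constant (ms)
  const vTh0 = 1.0, beta = 1.7;   // baseline threshold, adaptation strength
  let { v, a } = state;
  v += (dt / tauM) * (-v + inputCurrent); // leaky integration of input
  a *= Math.exp(-dt / tauA);              // slow decay of adaptation variable
  const spike = v >= vTh0 + beta * a;     // activity-dependent threshold
  if (spike) { v = 0; a += 1; }           // reset and raise the threshold
  return { state: { v, a }, spike };
}

// Usage: let s = { v: 0, a: 0 }; ({ state: s } = stepAdaptiveLIF(s, 1.2));
```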


Subject(s)
Action Potentials/physiology , Models, Neurological , Neocortex/physiology , Nerve Net/physiology , Neurons/physiology , Adaptation, Physiological , Computers, Molecular , Humans , Neural Networks, Computer
11.
Neuron ; 109(4): 571-575, 2021 02 17.
Article in English | MEDLINE | ID: mdl-33600754

ABSTRACT

Recent research resolves the challenging problem of building biophysically plausible spiking neural models that are also capable of complex information processing. This advance creates new opportunities in neuroscience and neuromorphic engineering, which we discussed at an online focus meeting.


Subject(s)
Biomedical Engineering/trends , Models, Neurological , Neural Networks, Computer , Neurosciences/trends , Biomedical Engineering/methods , Forecasting , Humans , Neurons/physiology , Neurosciences/methods
12.
Nat Commun ; 11(1): 3625, 2020 07 17.
Article in English | MEDLINE | ID: mdl-32681001

ABSTRACT

Recurrently connected networks of spiking neurons underlie the astounding information processing capabilities of the brain. Yet in spite of extensive research, how they can learn through synaptic plasticity to carry out complex network computations remains unclear. We argue that two pieces of this puzzle were provided by experimental data from neuroscience. A mathematical result tells us how these pieces need to be combined to enable biologically plausible online network learning through gradient descent, in particular deep reinforcement learning. This learning method, called e-prop, approaches the performance of backpropagation through time (BPTT), the best-known method for training recurrent neural networks in machine learning. In addition, it suggests a method for powerful on-chip learning in energy-efficient spike-based hardware for artificial intelligence.
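The factorization behind e-prop can be sketched as follows: each synapse maintains a local eligibility trace, and the weight change is that trace multiplied by an online learning signal, with no backpropagation through time. This single-synapse JavaScript sketch compresses the paper's derivation; the pseudo-derivative and the learning signal are assumptions standing in for the full model.

```javascript
// Local eligibility trace: low-pass-filtered presynaptic activity, gated by
// how sensitive the postsynaptic neuron currently is to its input.
function eligibilityStep(trace, preActivity, postPseudoDeriv, decay = 0.9) {
  return decay * trace + postPseudoDeriv * preActivity;
}

// Online weight change: the local trace times a learning signal broadcast to
// the neuron (e.g., its contribution to the current error), instead of
// backpropagating errors through time.
function epropUpdate(weight, trace, learningSignal, lr = 1e-3) {
  return weight - lr * learningSignal * trace;
}
```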


Subject(s)
Brain/physiology , Models, Neurological , Nerve Net/physiology , Neurons/physiology , Reward , Action Potentials/physiology , Animals , Brain/cytology , Deep Learning , Humans , Mice , Neuronal Plasticity/physiology
13.
Proc Natl Acad Sci U S A ; 117(25): 14464-14472, 2020 06 23.
Article in English | MEDLINE | ID: mdl-32518114

ABSTRACT

Assemblies are large populations of neurons believed to imprint memories, concepts, words, and other cognitive information. We identify a repertoire of operations on assemblies. These operations correspond to properties of assemblies observed in experiments, and can be shown, analytically and through simulations, to be realizable by generic, randomly connected populations of neurons with Hebbian plasticity and inhibition. Assemblies and their operations constitute a computational model of the brain which we call the Assembly Calculus, occupying a level of detail intermediate between the level of spiking neurons and synapses and that of the whole brain. The resulting computational system can be shown, under assumptions, to be, in principle, capable of carrying out arbitrary computations. We hypothesize that something like it may underlie higher human cognitive functions such as reasoning, planning, and language. In particular, we propose a plausible brain architecture based on assemblies for implementing the syntactic processing of language in cortex, which is consistent with recent experimental results.
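One of the primitive operations, projection, can be sketched as repeated rounds of the following step: a stimulus drives a random target area, inhibition keeps only the k most activated neurons (a k-cap), and Hebbian plasticity strengthens the synapses onto the winners. A toy JavaScript version, with illustrative dimensions and rates:

```javascript
// One round of assembly projection. weights[i][j] is a random sparse synapse
// from input neuron i to target neuron j; k is the inhibition cap; beta is
// the Hebbian learning rate (all illustrative).
function projectOnce(stimulus, weights, k, beta = 0.1) {
  const n = weights[0].length;
  const drive = new Array(n).fill(0);
  for (const i of stimulus)
    for (let j = 0; j < n; j++) drive[j] += weights[i][j];
  // k-cap: inhibition lets only the k most strongly driven neurons fire.
  const winners = [...drive.keys()]
    .sort((a, b) => drive[b] - drive[a]).slice(0, k);
  // Hebbian plasticity: strengthen synapses that drove the winners.
  for (const i of stimulus)
    for (const j of winners) weights[i][j] *= 1 + beta;
  return winners; // iterating until the winner set stabilizes yields the assembly
}
```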


Subject(s)
Cerebral Cortex/physiology , Cognition/physiology , Models, Neurological , Neurons/physiology , Synapses/physiology , Cerebral Cortex/cytology , Computer Simulation , Humans , Language
14.
eNeuro ; 7(3), 2020.
Article in English | MEDLINE | ID: mdl-32381648

ABSTRACT

Humans can reason at an abstract level and structure information into abstract categories, but the underlying neural processes have remained unknown. Recent experimental data provide a hint that this is likely to involve specific subareas of the brain from which structural information can be decoded. Based on these data, we introduce the concept of assembly projections, a general principle for attaching structural information to content in generic networks of spiking neurons. According to the assembly projections principle, structure-encoding assemblies emerge and are dynamically attached to content representations through Hebbian plasticity mechanisms. This model explains a number of experimental findings and provides a basis for modeling abstract computational operations of the brain.


Subject(s)
Models, Neurological , Neural Networks, Computer , Brain , Humans , Neurons
15.
Cereb Cortex ; 30(3): 952-968, 2020 03 14.
Article in English | MEDLINE | ID: mdl-31403679

ABSTRACT

Memory traces and associations between them are fundamental for cognitive brain function. Neuron recordings suggest that distributed assemblies of neurons in the brain serve as memory traces for spatial information, real-world items, and concepts. However, there is conflicting evidence regarding neural codes for associated memory traces. Some studies suggest the emergence of overlaps between assemblies during an association, while others suggest that the assemblies themselves remain largely unchanged and new assemblies emerge as neural codes for associated memory items. Here we study the emergence of neural codes for associated memory items in a generic computational model of recurrent networks of spiking neurons with a data-constrained rule for spike-timing-dependent plasticity. The model depends critically on 2 parameters, which control the excitability of neurons and the scale of initial synaptic weights. By modifying these 2 parameters, the model can reproduce both experimental data from the human brain on the fast formation of associations through emergent overlaps between assemblies, and rodent data where new neurons are recruited to encode the associated memories. Hence, our findings suggest that the brain can use both of these neural codes for associations, and dynamically switch between them during consolidation.


Subject(s)
Memory/physiology , Models, Neurological , Neural Networks, Computer , Neuronal Plasticity , Neurons/physiology , Action Potentials , Humans , Learning/physiology
16.
Front Neurorobot ; 13: 81, 2019.
Article in English | MEDLINE | ID: mdl-31632262

ABSTRACT

The endeavor to understand the brain involves multiple collaborating research fields. Classically, synaptic plasticity rules derived by theoretical neuroscientists are evaluated in isolation on pattern classification tasks. This contrasts with the biological brain, whose purpose is to control a body in a closed loop. This paper contributes to bringing the fields of computational neuroscience and robotics closer together by integrating open-source software components from these two fields. The resulting framework makes it possible to evaluate the validity of biologically plausible plasticity models in closed-loop robotics environments. We demonstrate this framework by evaluating Synaptic Plasticity with Online REinforcement learning (SPORE), a reward-learning rule based on synaptic sampling, on two visuomotor tasks: reaching and lane following. We show that SPORE is capable of learning to perform policies within the course of simulated hours for both tasks. Provisional parameter explorations indicate that the learning rate and the temperature driving the stochastic processes that govern synaptic learning dynamics need to be regulated for performance improvements to be retained. We conclude by discussing recent deep reinforcement learning techniques that could increase the functionality of SPORE on visuomotor tasks.

17.
Front Neurosci ; 13: 483, 2019.
Article in English | MEDLINE | ID: mdl-31178681

ABSTRACT

Hyperparameters and learning algorithms for neuromorphic hardware are usually chosen by hand to suit a particular task. In contrast, networks of neurons in the brain were optimized through extensive evolutionary and developmental processes to work well on a range of computing and learning tasks. Occasionally this process has been emulated through genetic algorithms, but these themselves require hand-designed details and tend to provide a limited range of improvements. We employ instead other powerful gradient-free optimization tools, such as cross-entropy methods and evolutionary strategies, in order to port the function of biological optimization processes to neuromorphic hardware. As an example, we show that these optimization algorithms enable neuromorphic agents to learn very efficiently from rewards. In particular, meta-plasticity, i.e., the optimization of the learning rule they use, substantially enhances the reward-based learning capability of the hardware. In addition, we demonstrate for the first time the benefits of Learning-to-Learn on such hardware, in particular the capability to extract abstract knowledge from prior learning experiences, which speeds up the learning of new but related tasks. Learning-to-Learn is especially suited for accelerated neuromorphic hardware, since it makes it feasible to carry out the required very large number of network computations.
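One of the gradient-free tools named above, the cross-entropy method, fits in a short sketch: sample candidate hyperparameter vectors from a Gaussian, score them with a caller-supplied reward function, and refit the Gaussian to the elite fraction. This is a generic textbook version with illustrative constants, not the hardware-specific implementation.

```javascript
// Cross-entropy method. evaluate(params) is assumed to return the reward
// earned by an agent run with those hyperparameters.
function crossEntropyMethod(evaluate, dim, iters = 50, pop = 64, eliteFrac = 0.125) {
  let mean = new Array(dim).fill(0), std = new Array(dim).fill(1);
  const gauss = () => Math.sqrt(-2 * Math.log(1 - Math.random()))
                    * Math.cos(2 * Math.PI * Math.random()); // Box-Muller
  for (let t = 0; t < iters; t++) {
    // Sample a population of candidate parameter vectors.
    const cands = Array.from({ length: pop }, () =>
      mean.map((m, i) => m + std[i] * gauss()));
    // Keep the elite fraction with the highest reward.
    const elites = cands.map(p => ({ p, r: evaluate(p) }))
      .sort((a, b) => b.r - a.r)
      .slice(0, Math.ceil(pop * eliteFrac));
    // Refit the sampling distribution to the elites.
    for (let i = 0; i < dim; i++) {
      mean[i] = elites.reduce((s, e) => s + e.p[i], 0) / elites.length;
      std[i] = Math.sqrt(elites.reduce((s, e) =>
        s + (e.p[i] - mean[i]) ** 2, 0) / elites.length) + 1e-6;
    }
  }
  return mean;
}
```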

18.
IEEE Trans Biomed Circuits Syst ; 13(3): 579-591, 2019 06.
Article in English | MEDLINE | ID: mdl-30932847

ABSTRACT

Advances in neuroscience uncover the mechanisms employed by the brain to efficiently solve complex learning tasks with very limited resources. However, this efficiency is often lost when one tries to port these findings to a silicon substrate, since brain-inspired algorithms often make extensive use of complex functions, such as random number generators, that are expensive to compute on standard general-purpose hardware. The prototype chip of the second-generation SpiNNaker system is designed to overcome this problem. Low-power advanced RISC machine (ARM) processors equipped with a random number generator and an exponential function accelerator enable the efficient execution of brain-inspired algorithms. We implement the recently introduced reward-based synaptic sampling model that employs structural plasticity to learn a function or task. The numerical simulation of the model requires updating the synapse variables in each time step, including an explorative random term. To the best of our knowledge, this is the most complex synapse model implemented so far on the SpiNNaker system. By making efficient use of the hardware accelerators and numerical optimizations, the computation time of one plasticity update is reduced by a factor of 2. This, combined with fitting the model into the local static random access memory (SRAM), leads to a 62% energy reduction compared with the case that uses no accelerators and external dynamic random access memory (DRAM). The model implementation is integrated into the SpiNNaker software framework, allowing for scalability onto larger systems. The hardware-software system presented in this paper paves the way for power-efficient mobile and biomedical applications with biologically plausible brain-inspired algorithms.
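The per-synapse update at the heart of the model can be sketched as a Langevin-style step: a reward-driven drift term plus the explorative random term mentioned above, followed by an exponential mapping from parameter to weight (the operation the exp accelerator speeds up). Constants are illustrative; this is not the SpiNNaker kernel.

```javascript
// One synaptic sampling update. theta is the synapse parameter, rewardGrad
// the reward-driven drift, T a temperature controlling exploration.
function synapticSamplingStep(theta, rewardGrad, dt = 1, T = 0.1, lr = 1e-3) {
  const gauss = Math.sqrt(-2 * Math.log(1 - Math.random()))
              * Math.cos(2 * Math.PI * Math.random()); // Box-Muller
  // Drift toward higher reward plus an explorative random term.
  theta += lr * rewardGrad * dt + Math.sqrt(2 * T * lr * dt) * gauss;
  // Exponential parameter-to-weight mapping: the hot spot the hardware
  // exponential function accelerator speeds up.
  return { theta, weight: Math.exp(theta) };
}
```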


Subject(s)
Brain/physiology , Machine Learning , Models, Neurological , Neural Networks, Computer , Software , Synapses/physiology , Humans
19.
Front Neurosci ; 12: 840, 2018.
Article in English | MEDLINE | ID: mdl-30505263

ABSTRACT

The memory requirement of deep learning algorithms is considered incompatible with the memory restrictions of energy-efficient hardware. A low memory footprint can be achieved by pruning obsolete connections or reducing the precision of connection strengths after the network has been trained. Yet these techniques are not applicable when neural networks have to be trained directly on hardware with hard memory constraints. Deep Rewiring (DEEP R) is a training algorithm that continuously rewires the network while preserving very sparse connectivity throughout the training procedure. We apply DEEP R to a deep neural network implementation on a prototype chip of the second-generation SpiNNaker system. The local memory of a single core on this chip is limited to 64 KB, and a deep network architecture is trained entirely within this constraint without the use of external memory. Throughout training, the proportion of active connections is limited to 1.3%. On the handwritten digits dataset MNIST, this extremely sparse network achieves 96.6% classification accuracy at convergence. Utilizing the multi-processor feature of the SpiNNaker system, we found very good scaling in terms of computation time, per-core memory consumption, and energy. Compared to an x86 CPU implementation, neural network training on the SpiNNaker 2 prototype reduces power and energy consumption by two orders of magnitude.
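The rewiring step of DEEP R can be sketched as follows: each active connection holds a fixed sign and a trainable magnitude; a gradient step plus noise moves the magnitude, and any connection whose magnitude crosses zero is pruned and replaced by a randomly drawn dormant one, so the number of active connections never grows. A simplified per-connection JavaScript view, with illustrative constants (not the SpiNNaker implementation):

```javascript
// One DEEP R step. active holds {from, to, sign, mag} records; dormant holds
// currently inactive candidate slots {from, to}; grads are per-connection
// loss gradients.
function deepRStep(active, dormant, grads, lr = 0.05, noise = 1e-4) {
  for (let i = 0; i < active.length; i++) {
    const c = active[i];
    // Gradient step on the magnitude plus a small random-walk term.
    c.mag += -lr * c.sign * grads[i] + noise * (Math.random() - 0.5);
    if (c.mag < 0) {
      // Prune, and wake a randomly chosen dormant connection in its place,
      // so the active-connection count (the memory footprint) stays fixed.
      dormant.push({ from: c.from, to: c.to });
      const j = Math.floor(Math.random() * dormant.length);
      const fresh = dormant.splice(j, 1)[0];
      active[i] = { ...fresh, sign: Math.random() < 0.5 ? -1 : 1, mag: 0 };
    }
  }
}
```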

20.
eNeuro ; 5(2), 2018.
Article in English | MEDLINE | ID: mdl-29696150

ABSTRACT

Synaptic connections between neurons in the brain are dynamic because of continuously ongoing spine dynamics, axonal sprouting, and other processes. In fact, it was recently shown that the spontaneous, synapse-autonomous component of spine dynamics is at least as large as the component that depends on the history of pre- and postsynaptic neural activity. These data are inconsistent with common models of network plasticity and raise the following questions: how can neural circuits maintain a stable computational function in spite of these continuously ongoing processes, and what could be the functional uses of these processes? Here, we present a rigorous theoretical framework for these seemingly stochastic spine dynamics and rewiring processes in the context of reward-based learning tasks. We show that spontaneous, synapse-autonomous processes, in combination with reward signals such as dopamine, can explain the capability of networks of neurons in the brain to configure themselves for specific computational tasks and to compensate automatically for later changes in the network or task. Furthermore, we show theoretically and through computer simulations that stable computational performance is compatible with continuously ongoing synapse-autonomous changes. After good computational performance is reached, these changes cause primarily a slow drift of network architecture and dynamics in task-irrelevant dimensions, as observed for neural activity in motor cortex and other areas. On the more abstract level of reinforcement learning, the resulting model gives rise to an understanding of reward-driven network plasticity as continuous sampling of network configurations.


Subject(s)
Connectome , Models, Theoretical , Nerve Net/physiology , Neural Networks, Computer , Neuronal Plasticity/physiology , Reward , Synapses/physiology , Animals , Computer Simulation , Humans