Results 1 - 16 of 16
1.
Neurosci Biobehav Rev ; 157: 105508, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38097096

ABSTRACT

Semantic representations in higher sensory cortices form the basis for robust, yet flexible behavior. These representations are acquired over the course of development in an unsupervised fashion and continuously maintained over an organism's lifespan. Predictive processing theories propose that these representations emerge from predicting or reconstructing sensory inputs. However, brains are known to generate virtual experiences, such as during imagination and dreaming, that go beyond previously experienced inputs. Here, we suggest that virtual experiences may be just as relevant as actual sensory inputs in shaping cortical representations. In particular, we discuss two complementary learning principles that organize representations through the generation of virtual experiences. First, "adversarial dreaming" proposes that creative dreams support a cortical implementation of adversarial learning in which feedback and feedforward pathways engage in a productive game of trying to fool each other. Second, "contrastive dreaming" proposes that the invariance of neuronal representations to irrelevant factors of variation is acquired by trying to map similar virtual experiences together via a contrastive learning process. These principles are compatible with known cortical structure and dynamics as well as with the phenomenology of sleep, thus providing promising directions to explain cortical learning beyond the classical predictive processing paradigm.
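
A minimal sketch of the "contrastive dreaming" idea, assuming an InfoNCE-style objective and NumPy as the implementation vehicle (all names and values are illustrative, not taken from the review): latent codes of two virtual experiences derived from the same content are pulled together, while codes of different contents are pushed apart.

    import numpy as np

    def contrastive_loss(z_a, z_b, temperature=0.1):
        # z_a, z_b: (batch, dim) latent codes of paired virtual experiences
        z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
        z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
        logits = z_a @ z_b.T / temperature            # pairwise similarities
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))           # matched pairs sit on the diagonal

    rng = np.random.default_rng(0)
    z = rng.normal(size=(8, 16))
    print(contrastive_loss(z, z + 0.1 * rng.normal(size=(8, 16))))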


Subjects
Dreams, Imagination, Humans, Dreams/physiology, Imagination/physiology, Sleep, Brain, Sensation
2.
Proc Natl Acad Sci U S A ; 120(32): e2300558120, 2023 08 08.
Article in English | MEDLINE | ID: mdl-37523562

ABSTRACT

While sensory representations in the brain depend on context, it remains unclear how such modulations are implemented at the biophysical level, and how processing layers further in the hierarchy can extract useful features for each possible contextual state. Here, we demonstrate that dendritic N-Methyl-D-Aspartate spikes can, within physiological constraints, implement contextual modulation of feedforward processing. Such neuron-specific modulations exploit prior knowledge, encoded in stable feedforward weights, to achieve transfer learning across contexts. In a network of biophysically realistic neuron models with context-independent feedforward weights, we show that modulatory inputs to dendritic branches can solve linearly nonseparable learning problems with a Hebbian, error-modulated learning rule. We also demonstrate that local prediction of whether representations originate either from different inputs, or from different contextual modulations of the same input, results in representation learning of hierarchical feedforward weights across processing layers that accommodate a multitude of contexts.
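
The contextual-modulation mechanism described above can be summarized in a small sketch (structure and variable names are assumptions, not the paper's code): fixed feedforward weights are grouped into dendritic branches, a context signal multiplicatively gates each branch (standing in for NMDA-spike-mediated modulation), and only the modulatory weights are updated with a Hebbian term scaled by a globally broadcast error.

    import numpy as np

    rng = np.random.default_rng(1)
    n_in, n_branches, n_contexts = 6, 3, 2
    W_ff = rng.normal(size=(n_branches, n_in))   # fixed, context-independent feedforward weights
    M = np.zeros((n_branches, n_contexts))       # modulatory weights per dendritic branch and context

    def forward(x, context_id):
        branch_drive = W_ff @ x                  # feedforward drive per branch
        gain = 1.0 + M[:, context_id]            # contextual gain on each branch
        return np.sum(gain * branch_drive)       # somatic output

    def hebbian_error_update(x, context_id, target, lr=0.05):
        error = target - forward(x, context_id)  # globally broadcast error signal
        branch_drive = W_ff @ x
        M[:, context_id] += lr * error * branch_drive   # Hebbian, error-modulated update
        return error

    x = rng.normal(size=n_in)
    print(hebbian_error_update(x, context_id=0, target=1.0))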


Subjects
Neurological Models, N-Methylaspartate, Learning/physiology, Neurons/physiology, Perception
3.
Acta Biomater ; 169: 118-129, 2023 10 01.
Article in English | MEDLINE | ID: mdl-37507032

ABSTRACT

The liver is a highly vascularized organ where fluid properties, including vascular pressure, vessel integrity and fluid viscosity, play a critical role in gross mechanical properties. To study the effects of portal pressure, liver confinement, fluid viscosity, and tissue crosslinking on liver stiffness, water diffusion, and vessel size, we applied multiparametric magnetic resonance imaging (mpMRI), including multifrequency magnetic resonance elastography (MRE) and apparent diffusion coefficient (ADC) measurements, to ex vivo livers from healthy male rats (13.6±1.6 weeks) at room temperature. Four scenarios, including altered liver confinement, tissue crosslinking, and vascular fluid viscosity, were investigated with mpMRI at different portal pressure levels (0-17.5 cmH2O). Our experiments demonstrated that, with increasing portal pressure, rat livers showed higher water content, water diffusivity, and increased vessel sizes quantified by the vessel tissue volume fraction (VTVF). These effects were most pronounced in native, unconfined livers (VTVF: 300±120%, p<0.05, ADC: 88±29%, p<0.01), while still significant under confinement (confined: VTVF: 53±32%, p<0.01, ADC: 28±19%, p<0.05; confined-fixed: VTVF: 52±20%, p<0.001, ADC: 11±2%, p<0.01; confined-viscous: VTVF: 210±110%, p<0.01, ADC: 26±9%, p<0.001). Softening with elevated portal pressure (-12±5, p<0.05) occurred regardless of confinement and fixation. However, the liver stiffened when exposed to a more viscous inflow fluid (11±4%, p<0.001). Taken together, our results elucidate the complex relationship between macroscopic-biophysical parameters of liver tissue measured by mpMRI and vascular-fluid properties. Influenced by portal pressure, vascular permeability, and matrix crosslinking, liver stiffness is sensitive to intrinsic poroelastic properties, which, alongside vascular architecture and water diffusivity, may aid in the differential diagnosis of liver disease. STATEMENT OF SIGNIFICANCE: Using highly controllable ex vivo rat liver phantoms, hepatic biophysical properties such as tissue-vascular structure, stiffness, and water diffusivity were investigated using multiparametric MRI including multifrequency magnetic resonance elastography (MRE) and diffusion-weighted imaging (DWI). Through careful tuning of experimental conditions, such as static portal pressure, flow viscosity, and the amount and distribution of fluid content in the liver, we identified the contributions of the fluid component to the overall imaging-based biophysical properties of the liver. Our findings demonstrate the sensitivity of liver stiffness to the hepatic poroelastic properties, which may aid in the differential diagnosis of liver diseases.


Subjects
Elasticity Imaging Techniques, Liver Diseases, Male, Animals, Rats, Portal Pressure, Liver/diagnostic imaging, Liver/pathology, Diffusion Magnetic Resonance Imaging/methods, Liver Diseases/pathology, Water, Magnetic Resonance Imaging/methods
4.
Sci Adv ; 9(8): eade5839, 2023 Feb 22.
Article in English | MEDLINE | ID: mdl-36812315

ABSTRACT

The structure and dynamics of isolated nanosamples in free flight can be directly visualized via single-shot coherent diffractive imaging using the intense and short pulses of x-ray free-electron lasers. Wide-angle scattering images encode three-dimensional (3D) morphological information of the samples, but its retrieval remains a challenge. Up to now, effective 3D morphology reconstructions from single shots were only achieved via fitting with highly constrained models, requiring a priori knowledge about possible geometries. Here, we present a much more generic imaging approach. Relying on a model that allows for any sample morphology described by a convex polyhedron, we reconstruct wide-angle diffraction patterns from individual silver nanoparticles. In addition to known structural motifs with high symmetries, we retrieve imperfect shapes and agglomerates that were not previously accessible. Our results open unexplored routes toward true 3D structure determination of single nanoparticles and, ultimately, 3D movies of ultrafast nanoscale dynamics.

5.
Front Neuroinform ; 16: 837549, 2022.
Article in English | MEDLINE | ID: mdl-35645755

ABSTRACT

Modern computational neuroscience strives to develop complex network models to explain dynamics and function of brains in health and disease. This process goes hand in hand with advancements in the theory of neuronal networks and increasing availability of detailed anatomical data on brain connectivity. Large-scale models that study interactions between multiple brain areas with intricate connectivity and investigate phenomena on long time scales such as system-level learning require progress in simulation speed. The corresponding development of state-of-the-art simulation engines relies on information provided by benchmark simulations which assess the time-to-solution for scientifically relevant, complementary network models using various combinations of hardware and software revisions. However, maintaining comparability of benchmark results is difficult due to a lack of standardized specifications for measuring the scaling performance of simulators on high-performance computing (HPC) systems. Motivated by the challenging complexity of benchmarking, we define a generic workflow that decomposes the endeavor into unique segments consisting of separate modules. As a reference implementation for the conceptual workflow, we develop beNNch: an open-source software framework for the configuration, execution, and analysis of benchmarks for neuronal network simulations. The framework records benchmarking data and metadata in a unified way to foster reproducibility. For illustration, we measure the performance of various versions of the NEST simulator across network models with different levels of complexity on a contemporary HPC system, demonstrating how performance bottlenecks can be identified, ultimately guiding the development toward more efficient simulation technology.
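
The unified recording of benchmark data and metadata can be illustrated with a short sketch (illustrative only; function and field names are assumptions, not the beNNch API): time-to-solution is stored together with hardware and software metadata in one machine-readable record so that runs remain comparable.

    import json, platform, time

    def run_benchmark(model_name, simulate, n_nodes, n_threads):
        t0 = time.perf_counter()
        simulate()                               # placeholder for the actual network simulation
        record = {
            "model": model_name,
            "time_to_solution_s": time.perf_counter() - t0,
            "n_nodes": n_nodes,
            "threads_per_node": n_threads,
            "hostname": platform.node(),
            "python": platform.python_version(),
        }
        with open(f"{model_name}_benchmark.json", "w") as f:
            json.dump(record, f, indent=2)       # unified, reproducible benchmark record
        return record

    print(run_benchmark("toy_model", lambda: time.sleep(0.1), n_nodes=1, n_threads=8))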

6.
Elife ; 11, 2022 04 06.
Article in English | MEDLINE | ID: mdl-35384841

ABSTRACT

Humans and other animals learn to extract general concepts from sensory experience without extensive teaching. This ability is thought to be facilitated by offline states like sleep where previous experiences are systemically replayed. However, the characteristic creative nature of dreams suggests that learning semantic representations may go beyond merely replaying previous experiences. We support this hypothesis by implementing a cortical architecture inspired by generative adversarial networks (GANs). Learning in our model is organized across three different global brain states mimicking wakefulness, non-rapid eye movement (NREM), and REM sleep, optimizing different, but complementary, objective functions. We train the model on standard datasets of natural images and evaluate the quality of the learned representations. Our results suggest that generating new, virtual sensory inputs via adversarial dreaming during REM sleep is essential for extracting semantic concepts, while replaying episodic memories via perturbed dreaming during NREM sleep improves the robustness of latent representations. The model provides a new computational perspective on sleep states, memory replay, and dreams, and suggests a cortical implementation of GANs.
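
A schematic sketch of the three-phase training organization described above, with stand-in linear networks and simplified surrogate losses (structure and names are assumptions for illustration; this is not the published model code):

    import numpy as np

    rng = np.random.default_rng(2)
    d_img, d_lat = 32, 8
    E = rng.normal(scale=0.1, size=(d_lat, d_img))   # encoder (feedforward pathway)
    G = rng.normal(scale=0.1, size=(d_img, d_lat))   # generator (feedback pathway)
    w_D = rng.normal(scale=0.1, size=d_img)          # linear "reality" discriminator

    def D(x):
        return 1.0 / (1.0 + np.exp(-w_D @ x))

    def wake_loss(x):
        z = E @ x
        # reconstruct the input and train the discriminator to label it as real
        return np.mean((G @ z - x) ** 2) - np.log(D(x) + 1e-9)

    def nrem_loss(x):
        z = E @ x
        x_perturbed = x + 0.1 * rng.normal(size=d_img)       # perturbed episodic replay
        return np.mean((E @ x_perturbed - z) ** 2)           # robustness of latent representations

    def rem_loss(z_mix):
        x_dream = G @ z_mix                                  # creative, virtual sensory input
        return -np.log(D(x_dream) + 1e-9)                    # adversarial: generator tries to fool D

    x = rng.normal(size=d_img)
    z_mix = 0.5 * (E @ x) + 0.5 * rng.normal(size=d_lat)     # mix of stored latents and noise
    print(wake_loss(x), nrem_loss(x), rem_loss(z_mix))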


Subjects
Dreams, Slow-Wave Sleep, Animals, Sleep, REM Sleep, Wakefulness
7.
Magn Reson Med ; 87(3): 1435-1445, 2022 03.
Article in English | MEDLINE | ID: mdl-34752638

ABSTRACT

PURPOSE: The zebrafish (Danio rerio) has become an important animal model in a wide range of biomedical research disciplines. Growing awareness of the role of biomechanical properties in tumor progression and neuronal development has led to an increasing interest in the noninvasive mapping of the viscoelastic properties of zebrafish by elastography methods applicable to bulky and nontranslucent tissues. METHODS: Microscopic multifrequency MR elastography is introduced for mapping shear wave speed (SWS) and loss angle (φ) as markers of stiffness and viscosity of muscle, brain, and neuroblastoma tumors in postmortem zebrafish with 60 µm in-plane resolution. Experiments were performed in a 7 Tesla MR scanner at 1, 1.2, and 1.4 kHz driving frequencies. RESULTS: Detailed zebrafish viscoelasticity maps revealed that the midbrain region (SWS = 3.1 ± 0.7 m/s, φ = 1.2 ± 0.3 radian [rad]) was stiffer and less viscous than the telencephalon (SWS = 2.6 ± 0.5 m/s, φ = 1.4 ± 0.2 rad) and optic tectum (SWS = 2.6 ± 0.5 m/s, φ = 1.3 ± 0.4 rad), whereas the cerebellum (SWS = 2.9 ± 0.6 m/s, φ = 0.9 ± 0.4 rad) was stiffer but less viscous than both (all p < .05). Overall, brain tissue (SWS = 2.9 ± 0.4 m/s, φ = 1.2 ± 0.2 rad) had similar stiffness but lower viscosity values than muscle tissue (SWS = 2.9 ± 0.5 m/s, φ = 1.4 ± 0.2 rad), whereas neuroblastoma (SWS = 2.4 ± 0.3 m/s, φ = 0.7 ± 0.1 rad, all p < .05) was the softest and least viscous tissue. CONCLUSION: Microscopic multifrequency MR elastography yields maps of zebrafish that show many details of viscoelasticity and resolve tissue regions of great interest in neuromechanical and oncological research, for which our study provides first reference values.


Subjects
Elasticity Imaging Techniques, Animals, Brain/diagnostic imaging, Reference Values, Viscosity, Zebrafish
8.
Acta Biomater ; 140: 389-397, 2022 03 01.
Article in English | MEDLINE | ID: mdl-34818577

ABSTRACT

An abdominal aortic aneurysm (AAA) is a permanent dilatation of the abdominal aorta, usually accompanied by thrombus formation. The current clinical imaging modalities cannot reliably visualize the thrombus composition. Remodeling of the extracellular matrix (ECM) during AAA development leads to stiffness changes, providing a potential imaging marker. Fourteen apolipoprotein E-deficient mice underwent surgery for implantation of angiotensin II-loaded osmotic minipumps. Four weeks post-op, five animals had developed an AAA. The aneurysm was imaged ex vivo by microscopic multifrequency magnetic resonance elastography (µMMRE) with an in-plane resolution of 40 microns. Experiments were performed on a 7-Tesla preclinical magnetic resonance imaging scanner with drive frequencies between 1000 Hz and 1400 Hz. Shear wave speed (SWS) maps indicating stiffness were computed based on tomoelastography multifrequency inversion. As controls, the aortas of five C57BL/6J mice were examined with the same imaging protocol. The regional variation of SWS in the thrombus, ranging from 0.44 ± 0.07 to 1.20 ± 0.31 m/s, correlated fairly strongly with regional histology-quantified ECM accumulation (R2 = 0.79). Our results suggest that stiffness changes in aneurysmal thrombus reflect ECM remodeling, which is critical for AAA risk assessment. In the future, µMMRE could be used for a mechanics-based clinical characterization of AAAs in patients. STATEMENT OF SIGNIFICANCE: To our knowledge, this is the first study mapping the stiffness of abdominal aortic aneurysms with a microscopic resolution of 40 µm. Our work revealed that stiffness changes critically due to extracellular matrix (ECM) remodeling in the aneurysmal thrombus. We were able to image various levels of ECM remodeling in the aneurysm, reflected in distinct shear wave speed patterns with a strong correlation to regional histology-quantified ECM accumulation. The generated results are significant for the application of microscopic multifrequency magnetic resonance elastography for quantification of pathological remodeling of the ECM and may be of great interest for detailed characterization of AAAs in patients.


Subjects
Abdominal Aortic Aneurysm, Elasticity Imaging Techniques, Animals, Abdominal Aorta/diagnostic imaging, Abdominal Aorta/pathology, Abdominal Aortic Aneurysm/diagnostic imaging, Abdominal Aortic Aneurysm/pathology, Animal Disease Models, Extracellular Matrix/pathology, Humans, Magnetic Resonance Imaging, Mice, Inbred C57BL Mice
9.
Elife ; 10, 2021 10 28.
Article in English | MEDLINE | ID: mdl-34709176

ABSTRACT

Continuous adaptation allows survival in an ever-changing world. Adjustments in the synaptic coupling strength between neurons are essential for this capability, setting us apart from simpler, hard-wired organisms. How these changes can be mathematically described at the phenomenological level, as so-called 'plasticity rules', is essential both for understanding biological information processing and for developing cognitively performant artificial systems. We suggest an automated approach for discovering biophysically plausible plasticity rules based on the definition of task families, associated performance measures and biophysical constraints. By evolving compact symbolic expressions, we ensure the discovered plasticity rules are amenable to intuitive understanding, fundamental for successful communication and human-guided generalization. We successfully apply our approach to typical learning scenarios and discover previously unknown mechanisms for learning efficiently from rewards, recover efficient gradient-descent methods for learning from target signals, and uncover various functionally equivalent STDP-like rules with tuned homeostatic mechanisms.
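
The search strategy can be illustrated with a deliberately small sketch (the study evolves symbolic expressions with genetic programming; the primitive set, toy task, and selection scheme below are assumptions made for illustration): candidate plasticity rules are compact expressions over local quantities, scored on a task and evolved by mutation and selection.

    import numpy as np

    rng = np.random.default_rng(3)
    PRIMITIVES = [
        ("x*y", lambda x, y, r, w: x * y),
        ("r*x*y", lambda x, y, r, w: r * x * y),
        ("r*x*y - w", lambda x, y, r, w: r * x * y - w),
        ("x - w", lambda x, y, r, w: x - w),
    ]

    def fitness(rule_fn, n_trials=200, lr=0.1):
        # Toy task: respond (y = 1) whenever the input is present (x = 1);
        # reward follows correct responses and is available to the rule.
        w, total_reward = 0.0, 0.0
        for _ in range(n_trials):
            x = rng.choice([0.0, 1.0])
            y = 1.0 if w * x > 0.5 else 0.0
            r = 1.0 if (x == 1.0 and y == 1.0) else 0.0
            w += lr * rule_fn(x, y, r, w)        # apply the candidate plasticity rule
            total_reward += r
        return total_reward / n_trials

    population = [int(rng.integers(len(PRIMITIVES))) for _ in range(6)]
    for generation in range(5):
        population.sort(key=lambda i: fitness(PRIMITIVES[i][1]), reverse=True)
        parent = population[0]                   # elitist selection of the best rule
        population = [parent] + [int(rng.integers(len(PRIMITIVES))) for _ in range(5)]
    print("best surviving rule:", PRIMITIVES[population[0]][0])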


Our brains are incredibly adaptive. Every day we form memories, acquire new knowledge or refine existing skills. This stands in contrast to our current computers, which typically can only perform pre-programmed actions. Our own ability to adapt is the result of a process called synaptic plasticity, in which the strength of the connections between neurons can change. To better understand brain function and build adaptive machines, researchers in neuroscience and artificial intelligence (AI) are modeling the underlying mechanisms. So far, most work towards this goal has been guided by human intuition, that is, by the strategies scientists think are most likely to succeed. Despite the tremendous progress, this approach has two drawbacks. First, human time is limited and expensive. And second, researchers have a natural, and reasonable, tendency to incrementally improve upon existing models, rather than starting from scratch. Jordan, Schmidt et al. have now developed a new approach based on 'evolutionary algorithms'. These computer programs search for solutions to problems by mimicking the process of biological evolution, such as the concept of survival of the fittest. The approach exploits the increasing availability of cheap but powerful computers. Compared to its predecessors (or indeed human brains), it also uses search strategies that are less biased by previous models. The evolutionary algorithms were presented with three typical learning scenarios. In the first, the computer had to spot a repeating pattern in a continuous stream of input without receiving feedback on how well it was doing. In the second scenario, the computer received virtual rewards whenever it behaved in the desired manner, an example of reinforcement learning. Finally, in the third 'supervised learning' scenario, the computer was told exactly how much its behavior deviated from the desired behavior. For each of these scenarios, the evolutionary algorithms were able to discover mechanisms of synaptic plasticity to solve the new task successfully. Using evolutionary algorithms to study how computers 'learn' will provide new insights into how brains function in health and disease. It could also pave the way for developing intelligent machines that can better adapt to the needs of their users.


Subjects
Nerve Net, Neuronal Plasticity, Neurons/physiology, Animals, Humans, Neurological Models
10.
Elife ; 10, 2021 01 26.
Article in English | MEDLINE | ID: mdl-33494860

ABSTRACT

Dendrites shape information flow in neurons. Yet, there is little consensus on the level of spatial complexity at which they operate. Through carefully chosen parameter fits, solvable in the least-squares sense, we obtain accurate reduced compartmental models at any level of complexity. We show that (back-propagating) action potentials, Ca2+ spikes, and N-methyl-D-aspartate spikes can all be reproduced with few compartments. We also investigate whether afferent spatial connectivity motifs admit simplification by ablating targeted branches and grouping affected synapses onto the next proximal dendrite. We find that voltage in the remaining branches is reproduced if temporal conductance fluctuations stay below a limit that depends on the average difference in input resistance between the ablated branches and the next proximal dendrite. Furthermore, our methodology fits reduced models directly from experimental data, without requiring morphological reconstructions. We provide software that automates the simplification, eliminating a common hurdle toward including dendritic computations in network models.
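
The least-squares flavor of the fitting procedure can be illustrated for the simplest possible case, a passive two-compartment model whose impedance matrix is matched to a target (target values and the parametrization below are made up for the example; the actual method handles arbitrary numbers of compartments and active channels):

    import numpy as np

    Z_target = np.array([[120.0, 30.0],
                         [30.0, 200.0]])             # MOhm, assumed target impedance matrix

    # Conductance matrix of the reduced model is linear in (g1, g2, gc):
    # G = [[g1 + gc, -gc], [-gc, g2 + gc]]
    basis = [np.array([[1.0, 0.0], [0.0, 0.0]]),     # g1 (leak, compartment 1)
             np.array([[0.0, 0.0], [0.0, 1.0]]),     # g2 (leak, compartment 2)
             np.array([[1.0, -1.0], [-1.0, 1.0]])]   # gc (coupling)

    # Least-squares condition: Z_target @ G(params) should approximate the identity.
    A = np.column_stack([(Z_target @ B).ravel() for B in basis])
    b = np.eye(2).ravel()
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    g1, g2, gc = params
    print(f"g1={g1:.5f}, g2={g2:.5f}, gc={gc:.5f} (1/MOhm)")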


Subjects
Action Potentials/physiology, Dendrites/physiology, Synapses/physiology
11.
Front Neuroinform ; 15: 785068, 2021.
Article in English | MEDLINE | ID: mdl-35300490

ABSTRACT

Generic simulation code for spiking neuronal networks spends the major part of the time in the phase where spikes have arrived at a compute node and need to be delivered to their target neurons. These spikes were emitted over the last interval between communication steps by source neurons distributed across many compute nodes and are inherently irregular and unsorted with respect to their targets. For finding those targets, the spikes need to be dispatched to a three-dimensional data structure with decisions on target thread and synapse type to be made on the way. With growing network size, a compute node receives spikes from an increasing number of different source neurons until, in the limit, each synapse on the compute node has a unique source. Here, we show analytically how this sparsity emerges over the practically relevant range of network sizes from a hundred thousand to a billion neurons. By profiling a production code, we investigate opportunities for algorithmic changes to avoid indirections and branching. Every thread hosts an equal share of the neurons on a compute node. In the original algorithm, all threads search through all spikes to pick out the relevant ones. With increasing network size, the fraction of hits remains invariant but the absolute number of rejections grows. Our new alternative algorithm equally divides the spikes among the threads and immediately sorts them in parallel according to target thread and synapse type. After this, every thread completes delivery solely of the section of spikes for its own neurons. Independent of the number of threads, all spikes are looked at only twice. The new algorithm halves the number of instructions in spike delivery, which leads to a reduction of simulation time of up to 40%. Thus, spike delivery is a fully parallelizable process with a single synchronization point and thereby well suited for many-core systems. Our analysis indicates that further progress requires a reduction of the latency that the instructions experience in accessing memory. The study provides the foundation for the exploration of methods of latency hiding like software pipelining and software-induced prefetching.
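
A compact sketch of the alternative algorithm (a sequential Python stand-in for the multi-threaded C++ kernel; data layout and names are assumptions): spikes are first binned by target thread and synapse type, then each thread delivers only the section addressed to its own neurons, so every spike is touched exactly twice.

    from collections import defaultdict

    def deliver_spikes(spikes, n_threads, deliver):
        # spikes: list of (target_thread, synapse_type, target_neuron) tuples
        # Pass 1 -- in the real kernel, each thread takes an equal share of the
        # incoming spikes and sorts/bins its share in parallel by target thread
        # and synapse type (first time every spike is looked at).
        bins = defaultdict(list)
        for target_thread, synapse_type, target_neuron in spikes:
            bins[(target_thread, synapse_type)].append(target_neuron)

        # Pass 2 -- every thread then walks only the sections addressed to its
        # own neurons and delivers them (second time every spike is looked at).
        synapse_types = {key[1] for key in bins}
        for thread in range(n_threads):
            for synapse_type in synapse_types:
                for target_neuron in bins.get((thread, synapse_type), []):
                    deliver(thread, synapse_type, target_neuron)

    deliver_spikes([(0, "static", 7), (1, "stdp", 3), (0, "stdp", 5)],
                   n_threads=2,
                   deliver=lambda t, s, n: print(f"thread {t}: {s} spike -> neuron {n}"))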

12.
Front Neuroinform ; 14: 12, 2020.
Article in English | MEDLINE | ID: mdl-32431602

ABSTRACT

Investigating the dynamics and function of large-scale spiking neuronal networks with realistic numbers of synapses is made possible today by state-of-the-art simulation code that scales to the largest contemporary supercomputers. However, simulations that involve electrical interactions, also called gap junctions, in addition to chemical synapses scale only poorly due to a communication scheme that collects global data on each compute node. In comparison to chemical synapses, gap junctions are far less abundant. To improve scalability we exploit this sparsity by integrating an existing framework for continuous interactions with a recently proposed directed communication scheme for spikes. Using a reference implementation in the NEST simulator we demonstrate excellent scalability of the integrated framework, accelerating large-scale simulations with gap junctions by more than an order of magnitude. This allows, for the first time, the efficient exploration of the interactions of chemical and electrical coupling in large-scale neuronal network models with natural synapse density distributed across thousands of compute nodes.

13.
Sci Rep ; 9(1): 18303, 2019 12 04.
Article in English | MEDLINE | ID: mdl-31797943

ABSTRACT

Neuronal network models of high-level brain functions such as memory recall and reasoning often rely on the presence of some form of noise. The majority of these models assumes that each neuron in the functional network is equipped with its own private source of randomness, often in the form of uncorrelated external noise. In vivo, synaptic background input has been suggested to serve as the main source of noise in biological neuronal networks. However, the finiteness of the number of such noise sources constitutes a challenge to this idea. Here, we show that shared-noise correlations resulting from a finite number of independent noise sources can substantially impair the performance of stochastic network models. We demonstrate that this problem is naturally overcome by replacing the ensemble of independent noise sources by a deterministic recurrent neuronal network. By virtue of inhibitory feedback, such networks can generate small residual spatial correlations in their activity which, counter to intuition, suppress the detrimental effect of shared input. We exploit this mechanism to show that a single recurrent network of a few hundred neurons can serve as a natural noise source for a large ensemble of functional networks performing probabilistic computations, each comprising thousands of units.
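
The decorrelation mechanism can be illustrated with a rate-based toy example (a stand-in for the spiking networks in the study; all values are made up): units that share global inhibitory feedback acquire weak negative pairwise correlations, which cancels the shared-input correlation between two downstream readouts sampling overlapping subsets of the same noise ensemble.

    import numpy as np

    rng = np.random.default_rng(4)
    n_sources, n_steps, subset = 200, 5000, 50
    xi = rng.normal(size=(n_steps, n_sources))              # private drive to each noise unit

    independent = xi                                        # ensemble of independent noise sources
    recurrent = xi - xi.mean(axis=1, keepdims=True)         # global inhibitory feedback subtracts the population mean

    idx_a = rng.choice(n_sources, size=subset, replace=False)
    idx_b = rng.choice(n_sources, size=subset, replace=False)   # overlapping subsets -> shared input
    for name, src in [("independent sources", independent), ("recurrent feedback ", recurrent)]:
        a = src[:, idx_a].sum(axis=1)                       # summed input to readout A
        b = src[:, idx_b].sum(axis=1)                       # summed input to readout B
        print(name, "correlation between the two summed inputs:",
              round(np.corrcoef(a, b)[0, 1], 3))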

14.
Front Comput Neurosci ; 13: 46, 2019.
Article in English | MEDLINE | ID: mdl-31427939

ABSTRACT

Neural network simulation is an important tool for generating and evaluating hypotheses on the structure, dynamics, and function of neural circuits. For scientific questions addressing organisms operating autonomously in their environments, in particular where learning is involved, it is crucial to be able to operate such simulations in a closed-loop fashion. In such a set-up, the neural agent continuously receives sensory stimuli from the environment and provides motor signals that manipulate the environment or move the agent within it. So far, most studies requiring such functionality have been conducted with custom simulation scripts and manually implemented tasks. This makes it difficult for other researchers to reproduce and build upon previous work and nearly impossible to compare the performance of different learning architectures. In this work, we present a novel approach to solve this problem, connecting benchmark tools from the field of machine learning and state-of-the-art neural network simulators from computational neuroscience. The resulting toolchain enables researchers in both fields to make use of well-tested high-performance simulation software supporting biologically plausible neuron, synapse and network models and allows them to evaluate and compare their approach on the basis of standardized environments with various levels of complexity. We demonstrate the functionality of the toolchain by implementing a neuronal actor-critic architecture for reinforcement learning in the NEST simulator and successfully training it on two different environments from the OpenAI Gym. We compare its performance to a previously suggested neural network model of reinforcement learning in the basal ganglia and a generic Q-learning algorithm.
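
A skeleton of such a closed-loop set-up (placeholder functions stand in for the NEST-side implementation; the classic Gym API returning (obs, reward, done, info) from step() is assumed): the environment provides observations, the network is stimulated accordingly, and its activity is decoded into the next action.

    import gym

    def encode_observation(obs):
        # Placeholder: convert an observation into stimulation for the network.
        return obs

    def simulate_network(stimulus, duration_ms=50.0):
        # Placeholder: run the spiking network (e.g., in NEST) and return its activity.
        return stimulus

    def decode_action(activity):
        # Placeholder: read out a discrete action from the network's activity.
        return 0 if sum(activity) < 0 else 1

    env = gym.make("CartPole-v1")
    obs = env.reset()
    total_reward, done = 0.0, False
    while not done:
        stimulus = encode_observation(obs)
        activity = simulate_network(stimulus)
        action = decode_action(activity)
        obs, reward, done, info = env.step(action)
        total_reward += reward                  # reward would drive the critic / plasticity here
    print("episode return:", total_reward)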

16.
Front Neuroinform ; 12: 2, 2018.
Article in English | MEDLINE | ID: mdl-29503613

ABSTRACT

State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
