Results 1 - 20 of 51,933
1.
Nat Commun ; 15(1): 3542, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38719802

ABSTRACT

Understanding the functional connectivity between brain regions and its emergent dynamics is a central challenge. Here we present a theory-experiment hybrid approach involving iteration between a minimal computational model and in vivo electrophysiological measurements. Our model not only predicted spontaneous persistent activity (SPA) during Up-Down-State oscillations, but also spontaneous persistent inactivity (SPI), which had not previously been reported. These were confirmed in vivo in the membrane potential of neurons, especially from layer 3 of the medial and lateral entorhinal cortices. The data were then used to constrain two free parameters, yielding a unique, experimentally determined model for each neuron. Analytic and computational analysis of the model generated a dozen quantitative predictions about network dynamics, all of which were confirmed in vivo with high accuracy. Our technique predicted functional connectivity; for example, recurrent excitation is stronger in the medial than in the lateral entorhinal cortex. This too was confirmed with connectomics data. This technique uncovers how differential cortico-entorhinal dialogue generates SPA and SPI, which could form an energetically efficient working-memory substrate and influence the consolidation of memories during sleep. More broadly, our procedure can reveal the functional connectivity of large networks and a theory of their emergent dynamics.


Subject(s)
Entorhinal Cortex , Models, Neurological , Neurons , Entorhinal Cortex/physiology , Animals , Neurons/physiology , Male , Connectome , Nerve Net/physiology , Membrane Potentials/physiology , Neural Pathways/physiology , Computer Simulation , Mice
2.
Commun Biol ; 7(1): 550, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38719883

ABSTRACT

Perceptual and cognitive processing relies on flexible communication among cortical areas; however, the underlying neural mechanism remains unclear. Here we report a mechanism based on the realistic spatiotemporal dynamics of propagating wave patterns in neural population activity. Using a biophysically plausible, multiarea spiking neural circuit model, we demonstrate that these wave patterns, characterized by their rich and complex dynamics, can account for a wide variety of empirically observed neural processes. The coordinated interactions of these wave patterns give rise to distributed and dynamic communication (DDC) that enables flexible and rapid routing of neural activity across cortical areas. We elucidate how DDC unifies the previously proposed oscillation synchronization-based and subspace-based views of interareal communication, offering experimentally testable predictions that we validate through the analysis of Allen Institute Neuropixels data. Furthermore, we demonstrate that DDC can be effectively modulated during attention tasks through the interplay of neuromodulators and cortical feedback loops. This modulation process explains many neural effects of attention, underscoring the fundamental functional role of DDC in cognition.


Subject(s)
Attention , Models, Neurological , Attention/physiology , Humans , Cerebral Cortex/physiology , Animals , Nerve Net/physiology , Visual Perception/physiology , Neurons/physiology , Cognition/physiology
3.
Sci Rep ; 14(1): 10536, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38719897

ABSTRACT

Precisely timed and reliably emitted spikes are hypothesized to serve multiple functions, including improving the accuracy and reproducibility of encoding stimuli, memories, or behaviours across trials. When these spikes occur as a repeating sequence, they can be used to encode and decode a potential time series. Here, we show both analytically and in simulations that the error incurred in approximating a time series with precisely timed and reliably emitted spikes decreases linearly with the number of neurons or spikes used in the decoding. This was verified numerically with synthetically generated patterns of spikes. Further, we found that if spikes were imprecise in their timing or unreliable in their emission, the decrease in decoding error would be sub-linear. However, if spike precision or spike reliability increased with network size, the error incurred in decoding a time series with sequences of spikes would maintain a linear decrease with network size. The spike precision had to increase linearly with network size, while the probability of spike failure had to decrease with the square root of the network size. Finally, we identified a candidate circuit to test this scaling relationship: the repeating sequences of spikes with sub-millisecond precision in area HVC (proper name) of the zebra finch. This scaling relationship can be tested using both neural data and song-spectrogram-based recordings while taking advantage of the natural fluctuation in HVC network size due to neurogenesis.
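
A minimal numerical sketch of this kind of decoding setup (assumed details: each neuron contributes one causal exponential kernel at its spike time, and readout weights are fit by least squares). It illustrates the roles of jitter and spike failure but does not reproduce the paper's analytic scaling results.

```python
import numpy as np

rng = np.random.default_rng(0)
T, dt = 1.0, 0.001
t = np.arange(0.0, T, dt)
target = np.sin(2 * np.pi * 3 * t)              # time series to be approximated

def decoding_error(n_neurons, jitter_sd=0.0, p_fail=0.0):
    """RMS error of a least-squares decode from n_neurons spike-triggered kernels."""
    spike_times = np.sort(rng.uniform(0.0, T, n_neurons))   # one spike per neuron, fixed sequence
    basis = np.zeros((len(t), n_neurons))
    for j, ts in enumerate(spike_times):
        if rng.random() < p_fail:                # unreliable emission: spike dropped
            continue
        ts = ts + rng.normal(0.0, jitter_sd)     # imprecise timing: jittered spike
        k = int(np.clip(ts / dt, 0, len(t) - 1))
        basis[k:, j] = np.exp(-(t[k:] - t[k]) / 0.02)        # causal exponential kernel
    w, *_ = np.linalg.lstsq(basis, target, rcond=None)
    return float(np.sqrt(np.mean((basis @ w - target) ** 2)))

for n in (10, 40, 160):
    print(n, decoding_error(n), decoding_error(n, jitter_sd=0.01, p_fail=0.2))
```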


Subject(s)
Action Potentials , Models, Neurological , Neurons , Animals , Action Potentials/physiology , Neurons/physiology , Vocalization, Animal/physiology , Reproducibility of Results
4.
Commun Biol ; 7(1): 555, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38724614

ABSTRACT

Spatio-temporal activity patterns have been observed in a variety of brain areas in spontaneous activity, prior to or during action, or in response to stimuli. Biological mechanisms endowing neurons with the ability to distinguish between different sequences remain largely unknown. Learning sequences of spikes raises multiple challenges, such as maintaining in memory spike history and discriminating partially overlapping sequences. Here, we show that anti-Hebbian spike-timing dependent plasticity (STDP), as observed at cortico-striatal synapses, can naturally lead to learning spike sequences. We design a spiking model of the striatal output neuron receiving spike patterns defined as sequential input from a fixed set of cortical neurons. We use a simple synaptic plasticity rule that combines anti-Hebbian STDP and non-associative potentiation for a subset of the presented patterns called rewarded patterns. We study the ability of striatal output neurons to discriminate rewarded from non-rewarded patterns by firing only after the presentation of a rewarded pattern. In particular, we show that two biological properties of striatal networks, spiking latency and collateral inhibition, contribute to an increase in accuracy, by allowing a better discrimination of partially overlapping sequences. These results suggest that anti-Hebbian STDP may serve as a biological substrate for learning sequences of spikes.
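
A minimal sketch of an anti-Hebbian STDP window of the kind invoked here, with illustrative amplitudes and time constant; the paper's full rule, including the non-associative potentiation applied to rewarded patterns, is not reproduced.

```python
import numpy as np

def anti_hebbian_stdp(dt_ms, a_plus=0.005, a_minus=0.005, tau=20.0):
    """Weight change for a pre/post spike pair, dt_ms = t_post - t_pre.

    Anti-Hebbian window (as reported at cortico-striatal synapses): pre-before-post
    pairings depress the synapse, post-before-pre pairings potentiate it, the
    mirror image of the classical Hebbian STDP window.
    """
    if dt_ms > 0:       # pre leads post -> depression
        return -a_minus * np.exp(-dt_ms / tau)
    else:               # post leads pre -> potentiation
        return a_plus * np.exp(dt_ms / tau)

# Example: accumulate weight changes over a few pairings
w = 0.5
for dt in (+5.0, -5.0, +15.0):
    w += anti_hebbian_stdp(dt)
print(w)
```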


Subject(s)
Corpus Striatum , Learning , Neuronal Plasticity , Neuronal Plasticity/physiology , Learning/physiology , Corpus Striatum/physiology , Models, Neurological , Animals , Action Potentials/physiology , Neurons/physiology , Humans
5.
Elife ; 12, 2024 May 03.
Article in English | MEDLINE | ID: mdl-38700934

ABSTRACT

Probing memory of a complex visual image within a few hundred milliseconds after its disappearance reveals significantly greater fidelity of recall than if the probe is delayed by as little as a second. Classically interpreted, the former taps into a detailed but rapidly decaying visual sensory or 'iconic' memory (IM), while the latter relies on capacity-limited but comparatively stable visual working memory (VWM). While iconic decay and VWM capacity have been extensively studied independently, currently no single framework quantitatively accounts for the dynamics of memory fidelity over these time scales. Here, we extend a stationary neural population model of VWM with a temporal dimension, incorporating rapid sensory-driven accumulation of activity encoding each visual feature in memory, and a slower accumulation of internal error that causes memorized features to randomly drift over time. Instead of facilitating read-out from an independent sensory store, an early cue benefits recall by lifting the effective limit on VWM signal strength imposed when multiple items compete for representation, allowing memory for the cued item to be supplemented with information from the decaying sensory trace. Empirical measurements of human recall dynamics validate these predictions while excluding alternative model architectures. A key conclusion is that differences in capacity classically thought to distinguish IM and VWM are in fact contingent upon a single resource-limited WM store.
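
A toy sketch of the two ingredients the extended model combines, under assumed (not fitted) parameters: encoding signal that is shared among the items competing for representation, plus error that accumulates diffusively over the retention delay.

```python
import numpy as np

rng = np.random.default_rng(1)

def recall_error(delay_s, n_items, n_trials=5000,
                 encode_rate=50.0, encode_dur=0.2, diff_const=0.3):
    """RMS recall error for one probed item after a given delay.

    Illustrative assumptions, not the paper's fitted model: encoding signal is
    divided across the n_items competing for representation, and the stored
    feature value then diffuses (accumulates error) during the delay.
    """
    signal = encode_rate * encode_dur / n_items          # resource shared across items
    enc_sd = 1.0 / np.sqrt(signal)                       # weaker signal -> noisier encoding
    drift_sd = np.sqrt(diff_const * delay_s)             # error accumulates with time
    err = rng.normal(0, enc_sd, n_trials) + rng.normal(0, drift_sd, n_trials)
    return float(np.sqrt(np.mean(err ** 2)))

for delay in (0.1, 1.0, 4.0):
    print(delay, recall_error(delay, n_items=4))
```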


Subject(s)
Memory, Short-Term , Models, Neurological , Humans , Memory, Short-Term/physiology , Visual Perception/physiology , Adult , Mental Recall/physiology , Male , Female , Young Adult
6.
PLoS Comput Biol ; 20(5): e1011350, 2024 May.
Article in English | MEDLINE | ID: mdl-38701063

ABSTRACT

A fundamental challenge in neuroscience is accurately defining brain states and predicting how and where to perturb the brain to force a transition. Here, we investigated resting-state fMRI data of patients suffering from disorders of consciousness (DoC) after coma (minimally conscious and unresponsive wakefulness states) and healthy controls. We applied model-free and model-based approaches to help elucidate the underlying brain mechanisms of patients with DoC. The model-free approach allowed us to characterize brain states in DoC and healthy controls as a probabilistic metastable substate (PMS) space. The PMS of each group was defined by a repertoire of unique patterns (i.e., metastable substates) with different probabilities of occurrence. In the model-based approach, we adjusted the PMS of each DoC group to a causal whole-brain model. This allowed us to explore optimal strategies for promoting transitions by applying off-line in silico probing. Furthermore, this approach enabled us to evaluate the impact of local perturbations in terms of their global effects and sensitivity to stimulation, which is a model-based biomarker providing a deeper understanding of the mechanisms underlying DoC. Our results show that transitions were obtained in a synchronous protocol, in which the somatomotor network, thalamus, precuneus and insula were the most sensitive areas to perturbation. This motivates further work to continue understanding brain function and treatments of disorders of consciousness.


Subject(s)
Brain , Computer Simulation , Consciousness Disorders , Magnetic Resonance Imaging , Models, Neurological , Humans , Magnetic Resonance Imaging/methods , Brain/physiopathology , Brain/diagnostic imaging , Consciousness Disorders/physiopathology , Consciousness Disorders/diagnostic imaging , Male , Female , Computational Biology , Adult , Middle Aged , Consciousness/physiology , Brain Mapping/methods , Aged
7.
Nat Commun ; 15(1): 4084, 2024 May 14.
Article in English | MEDLINE | ID: mdl-38744847

ABSTRACT

Animals can quickly adapt learned movements to external perturbations, and their existing motor repertoire likely influences their ease of adaptation. Long-term learning causes lasting changes in neural connectivity, which shapes the activity patterns that can be produced during adaptation. Here, we examined how a neural population's existing activity patterns, acquired through de novo learning, affect subsequent adaptation by modeling motor cortical neural population dynamics with recurrent neural networks. We trained networks on different motor repertoires comprising varying numbers of movements, which they acquired following various learning experiences. Networks with multiple movements had more constrained and robust dynamics, which were associated with more defined neural 'structure', that is, organization in the available population activity patterns. This structure facilitated adaptation, but only when the changes imposed by the perturbation were congruent with the organization of the inputs and the structure in neural activity acquired during de novo learning. These results highlight trade-offs in skill acquisition and demonstrate how different learning experiences can shape the geometrical properties of neural population activity and subsequent adaptation.


Subject(s)
Adaptation, Physiological , Learning , Models, Neurological , Motor Cortex , Learning/physiology , Adaptation, Physiological/physiology , Motor Cortex/physiology , Animals , Neural Networks, Computer , Neurons/physiology , Movement/physiology , Nerve Net/physiology
8.
PLoS Comput Biol ; 20(5): e1012074, 2024 May.
Article in English | MEDLINE | ID: mdl-38696532

ABSTRACT

We investigate the ability of the pairwise maximum entropy (PME) model to describe the spiking activity of large populations of neurons recorded from the visual, auditory, motor, and somatosensory cortices. To quantify this performance, we use (1) Kullback-Leibler (KL) divergences, (2) the extent to which the pairwise model predicts third-order correlations, and (3) its ability to predict the probability that multiple neurons are simultaneously active. We compare these with the performance of a model with independent neurons and study the relationship between the different performance measures, while varying the population size, the mean firing rate of the chosen population, and the bin size used for binarizing the data. We confirm the previously reported excellent performance of the PME model for small population sizes N < 20, but we also find that larger mean firing rates and bin sizes generally decrease performance. Performance for larger populations was generally not as good: for large populations, pairwise models may predict third-order correlations and the probability of multiple neurons being active reasonably well, but their improvement over the independent model in terms of KL divergence is significantly smaller than for small populations. We show that these results are independent of the cortical area and of whether approximate methods or Boltzmann learning are used for inferring the pairwise couplings. We compare the scaling of the inferred couplings with N and find it to be well explained by the Sherrington-Kirkpatrick (SK) model, whose strong-coupling regime shows a complex phase with many metastable states. We find that, up to the maximum population size studied here, the fitted PME model remains outside its complex phase. However, the standard deviation of the couplings relative to their mean increases, and the model gets closer to the boundary of the complex phase as the population size grows.
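
For a small population, the pairwise maximum-entropy (Ising-type) distribution can be evaluated exactly. The sketch below, with arbitrary couplings rather than parameters inferred from data, compares it to a rate-matched independent model via the KL divergence, one of the three performance measures listed above.

```python
import itertools
import numpy as np

def pme_distribution(h, J):
    """Exact pairwise maximum-entropy distribution for small N.

    P(s) is proportional to exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j), s_i in {0, 1}.
    """
    n = len(h)
    states = np.array(list(itertools.product([0, 1], repeat=n)))
    energies = states @ h + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
    p = np.exp(energies)
    return states, p / p.sum()

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) in bits."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

# Toy comparison against an independent model with matched firing rates
rng = np.random.default_rng(0)
n = 6
h = rng.normal(-1.0, 0.3, n)
J = rng.normal(0, 0.4, (n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0)
states, p_pair = pme_distribution(h, J)
rates = p_pair @ states                                      # mean activity per neuron
p_ind = np.prod(np.where(states == 1, rates, 1 - rates), axis=1)
print("D(pairwise || independent) =", kl(p_pair, p_ind), "bits")
```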


Subject(s)
Entropy , Models, Neurological , Neurons , Animals , Neurons/physiology , Cerebral Cortex/physiology , Action Potentials/physiology , Computational Biology , Computer Simulation
9.
Elife ; 12, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38712831

ABSTRACT

Representational drift refers to the dynamic nature of neural representations in the brain despite seemingly stable behavior. Although drift has been observed in many different brain regions, the mechanisms underlying it are not known. Since intrinsic neural excitability is suggested to play a key role in regulating memory allocation, fluctuations of excitability could bias the reactivation of previously stored memory ensembles and therefore act as a driver of drift. Here, we propose a rate-based plastic recurrent neural network with slow fluctuations of intrinsic excitability. We first show that subsequent reactivations of a neural ensemble can lead to drift of this ensemble. The model predicts that drift is induced by co-activation of previously active neurons along with neurons of high excitability, which leads to remodeling of the recurrent weights. Consistent with previous experimental work, the drifting ensemble is informative about its temporal history. Crucially, we show that the gradual nature of the drift is necessary for decoding temporal information from the activity of the ensemble. Finally, we show that the memory is preserved and can be decoded by an output neuron having plastic synapses with the main region.
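
A deliberately stripped-down, discrete-time sketch of the proposed mechanism, substituting a binary recruit-the-top-k reactivation for the paper's rate-based network and using arbitrary parameters: slowly fluctuating excitability biases which neurons join each reactivation, and Hebbian-like remodelling then gradually shifts the ensemble.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, eta, n_react = 200, 20, 0.02, 60

ensemble = np.zeros(n); ensemble[:k] = 1.0          # originally stored ensemble
W = (2.0 / k) * np.outer(ensemble, ensemble)        # recurrent weights supporting it
excit = rng.normal(size=n)                          # intrinsic excitability

overlaps = []
for step in range(n_react):
    drive = W @ ensemble + excit                    # recurrent drive plus excitability bias
    active = np.zeros(n)
    active[np.argsort(drive)[-k:]] = 1.0            # reactivated ensemble: k most driven neurons
    W = (1 - eta) * W + eta * (2.0 / k) * np.outer(active, active)   # Hebbian-like remodelling
    overlaps.append(float(active @ ensemble) / k)   # overlap with the original ensemble
    ensemble = active
    excit = 0.9 * excit + np.sqrt(1 - 0.9 ** 2) * rng.normal(size=n)  # slow excitability fluctuations

print(np.round(overlaps[::10], 2))                  # gradual decay of overlap = drift
```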


Subject(s)
Models, Neurological , Neuronal Plasticity , Neurons , Neurons/physiology , Neuronal Plasticity/physiology , Memory/physiology , Brain/physiology , Nerve Net/physiology , Animals , Humans , Action Potentials/physiology
10.
Sci Rep ; 14(1): 10180, 2024 May 03.
Article in English | MEDLINE | ID: mdl-38702384

ABSTRACT

In this manuscript, a mathematical model known as the Heimburg model is investigated analytically to obtain soliton solutions. Both biomembranes and nerves can be studied using this model. The model treats the cell membrane's lipid bilayer as a substance that undergoes phase transitions, implying that the membrane responds to electrical disturbances in a nonlinear way. Heimburg's model highlights the role of ionic conductance in nerve impulse propagation. The dynamics of the electromechanical pulse in a nerve are analytically investigated using the Hirota bilinear method. Various types of solutions are investigated, such as homoclinic breather waves, interactions via double exponents, lump waves, multi-wave and mixed-type solutions, and periodic cross-kink solutions. The resulting three-dimensional and contour profiles of the electromechanical pulse offer insight into how nerves function and may eventually find application in medicine and the biological sciences. This work improves our grasp of soliton dynamics and opens new directions for biomedical investigation. Three-dimensional and contour profiles are also presented for the new solutions, and their interaction behaviors are illustrated.
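
For context, the density-pulse equation usually associated with the Heimburg-Jackson soliton model (the form commonly analyzed in this literature; the paper's exact starting equation and nondimensionalization may differ) reads

\[
\frac{\partial^{2} \Delta\rho^{A}}{\partial t^{2}}
= \frac{\partial}{\partial x}\left[\left(c_{0}^{2} + p\,\Delta\rho^{A} + q\,(\Delta\rho^{A})^{2}\right)\frac{\partial \Delta\rho^{A}}{\partial x}\right]
- h\,\frac{\partial^{4} \Delta\rho^{A}}{\partial x^{4}},
\]

where \(\Delta\rho^{A}\) is the change in lateral membrane density, \(c_{0}\) the low-frequency sound velocity, \(p\) and \(q\) capture the nonlinear elasticity near the lipid phase transition, and \(h\) the dispersion coefficient.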


Subject(s)
Cell Membrane , Cell Membrane/physiology , Lipid Bilayers/chemistry , Lipid Bilayers/metabolism , Humans , Models, Neurological , Models, Biological , Models, Theoretical
11.
Nat Commun ; 15(1): 3689, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38693165

ABSTRACT

Human visual neurons rely on event-driven, energy-efficient spikes for communication, while silicon image sensors do not. The energy-budget mismatch between biological systems and machine vision technology has inspired the development of artificial visual neurons for use in spiking neural networks (SNNs). However, the lack of multiplexed data coding schemes reduces the ability of artificial visual neurons in SNNs to emulate the visual perception ability of biological systems. Here, we present an artificial visual spiking neuron that enables rate and temporal fusion (RTF) coding of external visual information. The artificial neuron can code visual information at different spiking frequencies (rate coding) and enables precise and energy-efficient time-to-first-spike (TTFS) coding. This multiplexed sensory coding scheme could improve the computing capability and efficacy of artificial visual neurons. A hardware-based SNN with the RTF coding scheme exhibits good consistency with real-world ground truth data and achieves highly accurate steering and speed predictions for self-driving vehicles in complex conditions. The multiplexed RTF coding scheme demonstrates the feasibility of developing highly efficient spike-based neuromorphic hardware.
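
A minimal sketch of a generic time-to-first-spike (TTFS) code of the kind referred to here, using a leaky integrator driven by a constant input; the actual device characteristic and the rate-coding branch of the RTF scheme are not modeled.

```python
import numpy as np

def ttfs_encode(intensity, v_th=1.0, tau=0.02, t_max=0.05):
    """Time-to-first-spike code from a leaky integrator, v(t) = I * (1 - exp(-t / tau)).

    Stronger inputs reach threshold earlier; inputs with I <= v_th, or crossings
    after t_max, emit no spike (returned as inf). Generic illustrative scheme,
    not the device characteristic reported in the paper.
    """
    i = np.asarray(intensity, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        t = -tau * np.log(1.0 - v_th / i)
    return np.where((i > v_th) & (t <= t_max), t, np.inf)

print(ttfs_encode([0.8, 1.5, 3.0, 10.0]))   # weak input never spikes; stronger inputs spike earlier
```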


Subject(s)
Action Potentials , Neural Networks, Computer , Neurons , Visual Perception , Humans , Neurons/physiology , Action Potentials/physiology , Visual Perception/physiology , Models, Neurological
12.
Int J Neural Syst ; 34(6): 2450028, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38706265

ABSTRACT

Spiking neural membrane systems (or spiking neural P systems, SNP systems) are a new type of computation model which have attracted the attention of plentiful scholars for parallelism, time encoding, interpretability and extensibility. The original SNP systems only consider the time delay caused by the execution of rules within neurons, but not caused by the transmission of spikes via synapses between neurons and its adaptive adjustment. In view of the importance of time delay for SNP systems, which are a time encoding computation model, this study proposes SNP systems with adaptive synaptic time delay (ADSNP systems) based on the dynamic regulation mechanism of synaptic transmission delay in neural systems. In ADSNP systems, besides neurons, astrocytes that can generate adenosine triphosphate (ATP) are introduced. After receiving spikes, astrocytes convert spikes into ATP and send ATP to the synapses controlled by them to change the synaptic time delays. The Turing universality of ADSNP systems in number generating and accepting modes is proved. In addition, a small universal ADSNP system using 93 neurons and astrocytes is given. The superiority of the ADSNP system is demonstrated by comparison with the six variants. Finally, an ADSNP system is constructed for credit card fraud detection, which verifies the feasibility of the ADSNP system for solving real-world problems. By considering the adaptive synaptic delay, ADSNP systems better restore the process of information transmission in biological neural networks, and enhance the adaptability of SNP systems, making the control of time more accurate.


Subject(s)
Astrocytes , Models, Neurological , Neural Networks, Computer , Neurons , Synapses , Synaptic Transmission , Synapses/physiology , Astrocytes/physiology , Neurons/physiology , Synaptic Transmission/physiology , Action Potentials/physiology , Adenosine Triphosphate/metabolism , Time Factors , Humans
13.
Chaos ; 34(5), 2024 May 01.
Article in English | MEDLINE | ID: mdl-38717399

ABSTRACT

Neuronal activity gives rise to behavior, and behavior influences neuronal dynamics, in a closed-loop control system. Is it possible, then, to find a relationship between the statistical properties of behavior and neuronal dynamics? Measurements of neuronal activity and behavior have suggested a direct relationship between scale-free neuronal and behavioral dynamics. Yet, these studies captured only local dynamics in brain sub-networks. Here, we investigate the relationship between internal dynamics and output statistics in a mathematical model system where we have access to the dynamics of all network units. We train a recurrent neural network (RNN), initialized in a high-dimensional chaotic state, to sustain behavioral states for durations following a power-law distribution, as observed experimentally. Changes in network connectivity due to training affect the internal dynamics of neuronal firings, leading to neuronal avalanche size distributions approximating power laws over some ranges. Yet, randomizing the changes in network connectivity can leave these power-law features largely unaltered. Specifically, whereas neuronal avalanche duration distributions show some variations between RNNs with trained and randomized decoders, neuronal avalanche size distributions are invariant, in the total population and in output-correlated sub-populations. This is true independent of whether the randomized decoders preserve power-law distributed behavioral dynamics. This demonstrates that a one-to-one correspondence between the considered statistical features of behavior and neuronal dynamics cannot be established and that their relationship is non-trivial. Our findings also indicate that statistical properties of the intrinsic dynamics may be preserved even as the internal state responsible for generating the desired output dynamics is perturbed.
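
A small sketch of the standard operational definition of neuronal avalanches used in this kind of analysis: maximal runs of consecutive non-empty time bins, with size given by the total spike count of the run. The toy input here is Poisson counts, not RNN activity.

```python
import numpy as np

def avalanche_sizes(spike_counts):
    """Avalanche sizes from a 1-D array of population spike counts per time bin.

    An avalanche is a maximal run of consecutive non-empty bins (standard
    operational definition); its size is the total number of spikes in the run.
    """
    sizes, current = [], 0
    for c in spike_counts:
        if c > 0:
            current += c
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return np.array(sizes)

# Toy raster: Poisson counts with frequent silent bins
rng = np.random.default_rng(0)
counts = rng.poisson(0.8, size=10000)
sizes = avalanche_sizes(counts)
print(len(sizes), int(sizes.max()), float(sizes.mean()))
```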


Subject(s)
Models, Neurological , Neurons , Neurons/physiology , Neural Networks, Computer , Nerve Net/physiology , Nonlinear Dynamics , Behavior , Humans , Animals
14.
Sci Adv ; 10(18): eadk7257, 2024 May 03.
Article in English | MEDLINE | ID: mdl-38701208

ABSTRACT

Neuromodulators have been shown to alter the temporal profile of short-term synaptic plasticity (STP); however, the computational function of this neuromodulation remains unexplored. Here, we propose that the neuromodulation of STP provides a general mechanism to scale neural dynamics and motor outputs in time and space. We trained recurrent neural networks that incorporated STP to produce complex motor trajectories-handwritten digits-with different temporal (speed) and spatial (size) scales. Neuromodulation of STP produced temporal and spatial scaling of the learned dynamics and enhanced temporal or spatial generalization compared to standard training of the synaptic weights in the absence of STP. The model also accounted for the results of two experimental studies involving flexible sensorimotor timing. Neuromodulation of STP provides a unified and biologically plausible mechanism to control the temporal and spatial scales of neural dynamics and sensorimotor behaviors.
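
A minimal sketch of the Tsodyks-Markram style short-term plasticity update frequently used in such recurrent network models, with illustrative parameters; the paper's exact formulation, and the way neuromodulation rescales it, may differ.

```python
import numpy as np

def tm_efficacies(spike_times, U=0.2, tau_f=0.6, tau_d=0.3):
    """Relative synaptic efficacy at each presynaptic spike under the
    Tsodyks-Markram short-term plasticity model (facilitation u, depression x).

    Between spikes, u decays to 0 and x recovers to 1; at each spike, u jumps by
    U * (1 - u) and a fraction u of the available resources x is consumed.
    Neuromodulation can be mimicked by scaling U, tau_f, or tau_d.
    """
    u, x = 0.0, 1.0
    last_t, eff = None, []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            u *= np.exp(-dt / tau_f)
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_d)
        u += U * (1.0 - u)
        eff.append(u * x)        # efficacy transmitted by this spike
        x *= (1.0 - u)           # resources consumed
        last_t = t
    return np.array(eff)

print(tm_efficacies(np.arange(0.0, 0.5, 0.05)))   # 20 Hz train: facilitation, then depression
```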


Subject(s)
Neuronal Plasticity , Neuronal Plasticity/physiology , Humans , Models, Neurological , Neurotransmitter Agents/metabolism , Animals , Learning/physiology , Neural Networks, Computer
15.
Nat Commun ; 15(1): 3722, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38697981

ABSTRACT

An important difference between brains and deep neural networks is the way they learn. Nervous systems learn online, from a stream of noisy data points presented in a non-independent and identically distributed (non-i.i.d.) fashion. Further, synaptic plasticity in the brain depends only on information local to synapses. Deep networks, on the other hand, typically use non-local learning algorithms and are trained offline on noise-free, independent and identically distributed (i.i.d.) data. Understanding how neural networks learn under the same constraints as the brain is an open problem for neuroscience and neuromorphic computing, and a standard approach to it has yet to be established. In this paper, we propose that discrete graphical models that learn via an online maximum a posteriori learning algorithm could provide such an approach. We implement this kind of model in a neural network called the Sparse Quantized Hopfield Network. We show our model outperforms state-of-the-art neural networks on associative memory tasks, outperforms these networks in online, continual settings, learns efficiently with noisy inputs, and is better than baselines on an episodic memory task.
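
For orientation, a classical dense Hopfield associative memory baseline (Hebbian storage with synchronous sign updates); the paper's Sparse Quantized Hopfield Network and its online maximum a posteriori learning rule are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_patterns = 200, 10
patterns = rng.choice([-1, 1], size=(n_patterns, n))

# Hebbian storage (classical dense Hopfield baseline, not the paper's SQHN)
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0)

def recall(cue, steps=20):
    """Synchronous recall from a corrupted cue."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

# Corrupt 20% of one stored pattern and recover it
cue = patterns[0].copy()
flip = rng.choice(n, size=n // 5, replace=False)
cue[flip] *= -1
print("overlap after recall:", float(recall(cue) @ patterns[0]) / n)
```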


Subject(s)
Algorithms , Neural Networks, Computer , Humans , Memory/physiology , Models, Neurological , Brain/physiology , Neuronal Plasticity/physiology , Deep Learning
16.
Commun Biol ; 7(1): 528, 2024 May 04.
Article in English | MEDLINE | ID: mdl-38704445

ABSTRACT

Neuronal dysfunction and cognitive deterioration in Alzheimer's disease (AD) are likely caused by multiple pathophysiological factors. However, mechanistic evidence in humans remains scarce, requiring improved non-invasive techniques and integrative models. We introduce personalized AD computational models built on whole-brain Wilson-Cowan oscillators and incorporating resting-state functional MRI, amyloid-ß (Aß) and tau-PET from 132 individuals in the AD spectrum to evaluate the direct impact of toxic protein deposition on neuronal activity. This subject-specific approach uncovers key patho-mechanistic interactions, including synergistic Aß and tau effects on cognitive impairment and neuronal excitability increases with disease progression. The data-derived neuronal excitability values strongly predict clinically relevant AD plasma biomarker concentrations (p-tau217, p-tau231, p-tau181, GFAP) and grey matter atrophy obtained through voxel-based morphometry. Furthermore, reconstructed EEG proxy quantities show the hallmark AD electrophysiological alterations (theta band activity enhancement and alpha reductions) which occur with Aß-positivity and after limbic tau involvement. Microglial activation influences on neuronal activity are less definitive, potentially due to neuroimaging limitations in mapping neuroprotective vs detrimental activation phenotypes. Mechanistic brain activity models can further clarify intricate neurodegenerative processes and accelerate preventive/treatment interventions.
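
A single Wilson-Cowan excitatory/inhibitory node of the type used as the building block here, integrated with Euler steps. Parameters are textbook-style choices, not the subject-specific values fitted in the paper; the whole-brain model couples many such nodes through a connectome and modulates them by local amyloid and tau burden.

```python
import numpy as np

def wilson_cowan_node(T=1.0, dt=1e-3, c_ee=16.0, c_ei=12.0, c_ie=15.0, c_ii=3.0,
                      p_e=1.25, p_i=0.0, tau=0.01):
    """Euler simulation of one Wilson-Cowan excitatory (E) / inhibitory (I) unit.

    S_e and S_i are the usual thresholded sigmoids; coupling and drive values are
    illustrative, and whether the node oscillates or settles depends on them.
    """
    s_e = lambda x: 1.0 / (1.0 + np.exp(-1.3 * (x - 4.0)))
    s_i = lambda x: 1.0 / (1.0 + np.exp(-2.0 * (x - 3.7)))
    n = int(T / dt)
    E, I = np.zeros(n), np.zeros(n)
    for k in range(n - 1):
        E[k + 1] = E[k] + dt / tau * (-E[k] + (1 - E[k]) * s_e(c_ee * E[k] - c_ei * I[k] + p_e))
        I[k + 1] = I[k] + dt / tau * (-I[k] + (1 - I[k]) * s_i(c_ie * E[k] - c_ii * I[k] + p_i))
    return E, I

E, I = wilson_cowan_node()
print("E(t) range:", float(E.min()), float(E.max()))
```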


Subject(s)
Alzheimer Disease , Amyloid beta-Peptides , Brain , tau Proteins , Alzheimer Disease/metabolism , Alzheimer Disease/physiopathology , Humans , tau Proteins/metabolism , Amyloid beta-Peptides/metabolism , Brain/metabolism , Brain/diagnostic imaging , Brain/pathology , Male , Female , Aged , Magnetic Resonance Imaging , Middle Aged , Positron-Emission Tomography , Models, Neurological , Biomarkers/blood , Aged, 80 and over , Electroencephalography , Neurons/metabolism
17.
Biointerphases ; 19(3)2024 May 01.
Article in English | MEDLINE | ID: mdl-38738941

ABSTRACT

This paper introduces a physical neuron model that incorporates magnetoelectric nanoparticles (MENPs) as an essential electrical circuit component to wirelessly control local neural activity. Availability of such a model is important as MENPs, due to their magnetoelectric effect, can wirelessly and noninvasively modulate neural activity, which, in turn, has implications for both finding cures for neurological diseases and creating a wireless noninvasive high-resolution brain-machine interface. When placed on a neuronal membrane, MENPs act as magnetic-field-controlled finite-size electric dipoles that generate local electric fields across the membrane in response to magnetic fields, thus allowing to controllably activate local ion channels and locally initiate an action potential. Herein, the neuronal electrical characteristic description is based on ion channel activation and inhibition mechanisms. A MENP-based memristive Hodgkin-Huxley circuit model is extracted by combining the Hodgkin-Huxley model and an equivalent circuit model for a single MENP. In this model, each MENP becomes an integral part of the neuron, thus enabling wireless local control of the neuron's electric circuit itself. Furthermore, the model is expanded to include multiple MENPs to describe collective effects in neural systems.


Subject(s)
Neurons , Neurons/physiology , Neurons/drug effects , Nanoparticles/chemistry , Humans , Models, Neurological , Action Potentials/drug effects , Action Potentials/physiology , Magnetic Fields
18.
Proc Natl Acad Sci U S A ; 121(19): e2318757121, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38691591

ABSTRACT

How breathing is generated by the preBötzinger complex (preBötC) remains divided between two ideological frameworks, and a persistent sodium current (INaP) lies at the heart of this debate. Although INaP is widely expressed, the pacemaker hypothesis considers it essential because it endows a small subset of neurons with intrinsic bursting or "pacemaker" activity. In contrast, burstlet theory considers INaP dispensable because rhythm emerges from "preinspiratory" spiking activity driven by feed-forward network interactions. Using computational modeling, we find that small changes in spike shape can dissociate INaP from intrinsic bursting. Consistent with many experimental benchmarks, conditional effects on spike shape during simulated changes in oxygenation, development, extracellular potassium, and temperature alter the prevalence of intrinsic bursting and preinspiratory spiking without altering the role of INaP. Our results support a unifying hypothesis where INaP and excitatory network interactions, but not intrinsic bursting or preinspiratory spiking, are critical interdependent features of preBötC rhythmogenesis.


Subject(s)
Action Potentials , Animals , Action Potentials/physiology , Models, Neurological , Neurons/physiology , Respiration , Nerve Net/physiology , Respiratory Center/physiology , Computer Simulation , Sodium/metabolism
19.
J Math Biol ; 89(1): 3, 2024 May 13.
Article in English | MEDLINE | ID: mdl-38740613

ABSTRACT

Dynamical systems on networks typically involve several dynamical processes evolving at different timescales. For instance, in Alzheimer's disease, the spread of toxic protein throughout the brain not only disrupts neuronal activity but is also influenced by neuronal activity itself, establishing a feedback loop between the fast neuronal activity and the slow protein spreading. Motivated by the case of Alzheimer's disease, we study the multiple-timescale dynamics of a heterodimer spreading process on an adaptive network of Kuramoto oscillators. Using a minimal two-node model, we establish that heterogeneous oscillatory activity facilitates toxic outbreaks and induces symmetry breaking in the spreading patterns. We then extend the model formulation to larger networks and perform numerical simulations of the slow-fast dynamics on common network motifs and on the brain connectome. The simulations corroborate the findings from the minimal model, underscoring the significance of multiple-timescale dynamics in the modeling of neurodegenerative diseases.
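
A two-node sketch of the fast/slow ingredients described: Kuramoto phase oscillators (fast) and heterodimer protein kinetics with diffusive spread (slow). The activity-dependent adaptation that couples the two processes, which is the paper's central feature, is deliberately omitted, and all parameters are illustrative.

```python
import numpy as np

def two_node_sketch(T=2000.0, dt=0.01, eps=0.01, coupling=0.5, rho=0.05,
                    k0=1.0, k1=1.0, k1t=1.0, k12=1.5, omega=(1.0, 1.3)):
    """Fast Kuramoto phases and slow heterodimer kinetics (healthy p, toxic pt)
    on two diffusively connected nodes; eps sets the timescale separation.

    Heterodimer kinetics use the standard form dp/dt = k0 - k1*p - k12*p*pt and
    dpt/dt = -k1t*pt + k12*p*pt, plus diffusion between the two nodes.
    """
    omega = np.asarray(omega, dtype=float)
    theta = np.array([0.0, 1.0])
    p = np.array([1.0, 1.0])                   # healthy protein concentration
    pt = np.array([0.01, 0.0])                 # small toxic seed in node 0
    for _ in range(int(T / dt)):
        dtheta = omega + coupling * np.sin(theta[::-1] - theta)
        dp = k0 - k1 * p - k12 * p * pt + rho * (p[::-1] - p)
        dpt = -k1t * pt + k12 * p * pt + rho * (pt[::-1] - pt)
        theta = theta + dt * dtheta
        p = p + dt * eps * dp
        pt = pt + dt * eps * dpt
    return theta % (2 * np.pi), p, pt

print(two_node_sketch())   # with these rates the toxic species grows and spreads to node 1
```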


Subject(s)
Alzheimer Disease , Brain , Computer Simulation , Mathematical Concepts , Models, Neurological , Neurons , Humans , Alzheimer Disease/physiopathology , Neurons/physiology , Brain/physiopathology , Connectome , Neurodegenerative Diseases/physiopathology , Neurodegenerative Diseases/pathology , Nerve Net/physiopathology , Nerve Net/physiology
20.
PLoS Biol ; 22(4): e3002623, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38687807

ABSTRACT

How the activities of large neural populations are integrated in the brain to ensure accurate perception and behavior remains a central problem in systems neuroscience. Here, we investigated population coding of naturalistic self-motion by neurons within early vestibular pathways in rhesus macaques (Macaca mulatta). While vestibular neurons displayed similar dynamic tuning to self-motion, inspection of their spike trains revealed significant heterogeneity. Further analysis revealed that, during natural but not artificial stimulation, heterogeneity resulted primarily from variability across neurons as opposed to trial-to-trial variability. Interestingly, vestibular neurons displayed different correlation structures during naturalistic and artificial self-motion. Specifically, while correlations due to the stimulus (i.e., signal correlations) did not differ, correlations between the trial-to-trial variabilities of neural responses (i.e., noise correlations) were significantly positive during naturalistic but not artificial stimulation. Using computational modeling, we show that positive noise correlations during naturalistic stimulation benefit information transmission by heterogeneous vestibular neural populations. Taken together, our results provide evidence that neurons within early vestibular pathways are adapted to the statistics of natural self-motion stimuli at the population level. We suggest that similar adaptations will be found in other systems and species.
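
A small sketch of one common way to separate the two correlation types discussed: trial-averaged responses for signal correlations, trial-to-trial residuals for noise correlations. The data here are synthetic, and the paper's exact estimator may differ.

```python
import numpy as np

def signal_and_noise_corr(responses):
    """Signal and noise correlations for a pair of neurons.

    `responses` has shape (2, n_trials, n_time): repeated presentations of the
    same self-motion stimulus. Signal correlation: correlation over time of the
    trial-averaged responses. Noise correlation: correlation across trials of the
    time-averaged residuals around those averages (one common operationalization).
    """
    mean_resp = responses.mean(axis=1)                         # (2, n_time) stimulus-driven part
    sig = np.corrcoef(mean_resp[0], mean_resp[1])[0, 1]
    resid = responses - mean_resp[:, None, :]                  # trial-to-trial variability
    trial_resid = resid.mean(axis=2)                           # (2, n_trials)
    noise = np.corrcoef(trial_resid[0], trial_resid[1])[0, 1]
    return float(sig), float(noise)

# Toy example: shared stimulus drive plus a shared trial-to-trial gain fluctuation
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
stim = np.sin(2 * np.pi * 2 * t)
gain = rng.normal(size=(40, 1))
r = np.stack([stim + 0.5 * gain + 0.3 * rng.normal(size=(40, 500)),
              0.8 * stim + 0.5 * gain + 0.3 * rng.normal(size=(40, 500))])
print(signal_and_noise_corr(r))   # high signal correlation, positive noise correlation
```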


Subject(s)
Macaca mulatta , Motion Perception , Neurons , Vestibule, Labyrinth , Animals , Macaca mulatta/physiology , Neurons/physiology , Vestibule, Labyrinth/physiology , Motion Perception/physiology , Action Potentials/physiology , Male , Adaptation, Physiological/physiology , Models, Neurological