Results 1 - 20 of 52,210
1.
Chaos ; 34(8)2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39088349

ABSTRACT

In biological neural networks, it is well recognized that a healthy brain exhibits 1/f noise patterns. However, in artificial neural networks, which increasingly match or even outperform human cognition, this phenomenon has yet to be established. In this work, we found that, as in their biological counterparts, 1/f noise exists in artificial neural networks trained on time series classification tasks. Additionally, we found that neuron activations are closest to 1/f noise when the neurons are highly utilized. Conversely, if the network is too large and many neurons are underutilized, the neuron activations deviate from 1/f noise patterns toward white noise.


Subject(s)
Neural Networks, Computer , Neurons , Humans , Neurons/physiology , Models, Neurological
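The spectral-exponent estimate behind a claim like this can be sketched in a few lines. The following is a hypothetical illustration (not the authors' code): fit a line to the low-frequency log-log periodogram of a signal, here applied to two synthetic reference cases.

```python
import numpy as np

def spectral_exponent(x, fs=1.0, fmax_frac=0.125):
    """Estimate a in power ~ 1/f^a from a line fit to the low-frequency
    log-log periodogram (spectra flatten toward Nyquist, so high
    frequencies are excluded from the fit)."""
    n = len(x)
    psd = np.abs(np.fft.rfft(x - x.mean())) ** 2 / n
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    keep = (f > 0) & (f <= fmax_frac * fs)   # drop DC, keep the 1/f range
    slope, _ = np.polyfit(np.log(f[keep]), np.log(psd[keep]), 1)
    return -slope

rng = np.random.default_rng(0)
white = rng.standard_normal(16384)  # flat spectrum: exponent near 0
walk = np.cumsum(white)             # random walk, 1/f^2: exponent near 2
print(spectral_exponent(white), spectral_exponent(walk))
```

Genuine 1/f activity (exponent near 1) sits between these two reference cases, which is the diagnostic the abstract relies on.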
2.
Nat Commun ; 15(1): 6497, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39090084

ABSTRACT

Behavioral flexibility relies on the brain's ability to switch rapidly between multiple tasks, even when the task rule is not explicitly cued but must be inferred through trial and error. The underlying neural circuit mechanism remains poorly understood. We investigated recurrent neural networks (RNNs) trained to perform an analog of the classic Wisconsin Card Sorting Test. The networks consist of two modules responsible for rule representation and sensorimotor mapping, respectively, where each module comprises a circuit with excitatory neurons and three major types of inhibitory neurons. We found that rule representation through self-sustained persistent activity across trials, error monitoring, and gated sensorimotor mapping all emerged from training. Systematic dissection of trained RNNs revealed a detailed circuit mechanism that is consistent across networks trained with different hyperparameters. The networks' dynamical trajectories for different rules resided in separate subspaces of population activity; the subspaces collapsed and performance was reduced to chance level when dendrite-targeting somatostatin-expressing interneurons were silenced, illustrating how a phenomenological description of representational subspaces is explained by a specific circuit mechanism.


Subject(s)
Models, Neurological , Neural Networks, Computer , Animals , Nerve Net/physiology , Neurons/physiology , Interneurons/physiology , Brain/physiology , Humans
3.
Nat Commun ; 15(1): 6479, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39090091

ABSTRACT

Animals likely use a variety of strategies to solve laboratory tasks. Combined analysis of behavioral and neural recording data across subjects who employ different strategies may obscure important signals and give confusing results. Hence, it is essential to develop techniques that can infer strategy at the single-subject level. We analyzed an experiment in which two male monkeys performed a visually cued rule-based task. The analysis of their performance shows no indication that they used different strategies. However, when we examined the geometry of stimulus representations in the state space of the neural activities recorded in dorsolateral prefrontal cortex, we found striking differences between the two monkeys. Our purely neural results prompted us to reanalyze the behavior. The new analysis showed that the differences in representational geometry are associated with differences in reaction times, revealing behavioral differences we were unaware of. All these analyses suggest that the monkeys were using different strategies. Finally, using recurrent neural network models trained to perform the same task, we show that these strategies correlate with the amount of training, suggesting a possible explanation for the observed neural and behavioral differences.


Subject(s)
Behavior, Animal , Macaca mulatta , Prefrontal Cortex , Animals , Male , Behavior, Animal/physiology , Prefrontal Cortex/physiology , Macaca mulatta/physiology , Reaction Time/physiology , Neural Networks, Computer , Nerve Net/physiology , Cues , Neurons/physiology , Models, Neurological
4.
Elife ; 12, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39088258

ABSTRACT

Deep neural networks have made tremendous gains in emulating human-like intelligence, and have been used increasingly as ways of understanding how the brain may solve the complex computational problems on which this relies. However, these networks still fall short of, and therefore fail to provide insight into, the strong forms of generalization of which humans are capable. One such case is out-of-distribution (OOD) generalization - successful performance on test examples that lie outside the distribution of the training set. Here, we identify properties of processing in the brain that may contribute to this ability. We describe a two-part algorithm that draws on specific features of neural computation to achieve OOD generalization, and provide a proof of concept by evaluating performance on two challenging cognitive tasks. First, we draw on the fact that the mammalian brain represents metric spaces using a grid cell code (e.g., in the entorhinal cortex): abstract representations of relational structure, organized in recurring motifs that cover the representational space. Second, we propose an attentional mechanism that operates over the grid cell code using a determinantal point process (DPP), which we call DPP attention (DPP-A) - a transformation that ensures maximum sparseness in the coverage of that space. We show that a loss function that combines standard task-optimized error with DPP-A can exploit the recurring motifs in the grid cell code, and can be integrated with common architectures to achieve strong OOD generalization performance on analogy and arithmetic tasks. This provides both an interpretation of how the grid cell code in the mammalian brain may contribute to generalization performance and a potential means of improving such capabilities in artificial neural networks.


Subject(s)
Grid Cells , Neural Networks, Computer , Humans , Grid Cells/physiology , Algorithms , Models, Neurological , Animals , Attention/physiology , Brain/physiology , Entorhinal Cortex/physiology
5.
Nat Commun ; 15(1): 5865, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38997282

ABSTRACT

The macroscale connectome is the network of physical, white-matter tracts between brain areas. The connections are generally weighted, and their values are interpreted as measures of communication efficacy. In most applications, weights are either assigned based on imaging features-e.g., diffusion parameters-or inferred using statistical models. In reality, the ground-truth weights are unknown, motivating the exploration of alternative edge weighting schemes. Here, we explore a multi-modal, regression-based model that endows reconstructed fiber tracts with directed and signed weights. We find that the model fits observed data well, outperforming a suite of null models. The estimated weights are subject-specific and highly reliable, even when fit using relatively few training samples, and the networks maintain a number of desirable features. In summary, we offer a simple framework for weighting connectome data, demonstrating its ease of implementation and benchmarking its utility for typical connectome analyses, including graph theoretic modeling and brain-behavior associations.


Subject(s)
Brain , Connectome , White Matter , Humans , Brain/diagnostic imaging , Brain/anatomy & histology , Brain/physiology , White Matter/diagnostic imaging , White Matter/anatomy & histology , White Matter/physiology , Male , Female , Adult , Models, Neurological , Nerve Net/physiology , Nerve Net/diagnostic imaging , Nerve Net/anatomy & histology , Diffusion Tensor Imaging/methods , Young Adult , Magnetic Resonance Imaging/methods
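The regression idea can be illustrated with a toy sketch (hypothetical data and parameters, not the authors' pipeline): restrict each node's predictors to its structurally connected neighbors and ridge-regress its signal on theirs at the previous time step, which yields directed, signed edge weights on the tract mask.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, n_time, lam = 8, 500, 1.0

# Toy ground truth: a signed, directed weight matrix restricted to a
# binary "tract" mask drives a linear autoregressive node signal.
mask = rng.random((n_nodes, n_nodes)) < 0.4
np.fill_diagonal(mask, False)
W_true = np.where(mask, rng.normal(0.0, 0.2, (n_nodes, n_nodes)), 0.0)
ts = np.zeros((n_time, n_nodes))
for t in range(1, n_time):
    ts[t] = ts[t - 1] @ W_true.T + rng.normal(0.0, 1.0, n_nodes)

# Fit: ridge-regress each node's signal on its structural neighbours at
# the previous time step, yielding directed, signed edge weights.
W_hat = np.zeros_like(W_true)
X_all, Y = ts[:-1], ts[1:]
for i in range(n_nodes):
    nbrs = np.where(mask[i])[0]
    if nbrs.size == 0:
        continue
    X = X_all[:, nbrs]
    beta = np.linalg.solve(X.T @ X + lam * np.eye(nbrs.size), X.T @ Y[:, i])
    W_hat[i, nbrs] = beta

corr = np.corrcoef(W_hat[mask], W_true[mask])[0, 1]
print(round(float(corr), 2))
```

With enough samples the recovered edge weights correlate strongly with the generating ones, including their signs and directions.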
6.
Commun Biol ; 7(1): 852, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38997325

ABSTRACT

Astrocytes play a key role in the regulation of synaptic strength and are thought to orchestrate synaptic plasticity and memory. Yet, how specifically astrocytes and their neuroactive transmitters control learning and memory is currently an open question. Recent experiments have uncovered an astrocyte-mediated feedback loop in CA1 pyramidal neurons which is started by the release of endocannabinoids by active neurons and closed by astrocytic regulation of the D-serine levels at the dendrites. D-serine is a co-agonist for the NMDA receptor regulating the strength and direction of synaptic plasticity. Activity-dependent D-serine release mediated by astrocytes is therefore a candidate for mediating between long-term synaptic depression (LTD) and potentiation (LTP) during learning. Here, we show that the mathematical description of this mechanism leads to a biophysical model of synaptic plasticity consistent with the phenomenological model known as the BCM model. The resulting mathematical framework can explain the learning deficit observed in mice upon disruption of the D-serine regulatory mechanism. It shows that D-serine enhances plasticity during reversal learning, ensuring fast responses to changes in the external environment. The model provides new testable predictions about the learning process, advancing our understanding of the functional role of neuron-glia interactions in learning.


Subject(s)
Astrocytes , Neuronal Plasticity , Reversal Learning , Animals , Astrocytes/physiology , Astrocytes/metabolism , Neuronal Plasticity/physiology , Mice , Reversal Learning/physiology , Serine/metabolism , Models, Neurological , Receptors, N-Methyl-D-Aspartate/metabolism
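The BCM rule the abstract refers to has a compact form. A minimal sketch with illustrative values (not the paper's biophysical model, where the threshold is set by astrocytic D-serine regulation):

```python
import numpy as np

def bcm_step(w, x, theta, eta=0.01):
    """One BCM update: dw = eta * x * y * (y - theta). A postsynaptic
    rate y above the modification threshold theta gives potentiation
    (LTP); below it gives depression (LTD)."""
    y = float(w @ x)
    return w + eta * x * y * (y - theta), y

w = np.array([0.5, 0.5])
x = np.array([1.0, 2.0])
w_ltp, y = bcm_step(w, x, theta=1.0)   # y = 1.5 > theta: weights grow
w_ltd, _ = bcm_step(w, x, theta=2.0)   # y = 1.5 < theta: weights shrink
print(y, w_ltp, w_ltd)
```

In the full BCM model theta is not fixed but slides with the running average of y squared, which is the role the abstract assigns to activity-dependent D-serine levels.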
7.
PLoS Comput Biol ; 20(7): e1012240, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38985828

ABSTRACT

The efficient coding approach proposes that neural systems represent as much sensory information as biological constraints allow. It aims at formalizing encoding as a constrained optimal process. A different approach, which aims at formalizing decoding, proposes that neural systems instantiate a generative model of the sensory world. Here, we put forth a normative framework that characterizes neural systems as jointly optimizing encoding and decoding. It takes the form of a variational autoencoder: sensory stimuli are encoded in the noisy activity of neurons to be interpreted by a flexible decoder; encoding must allow for an accurate stimulus reconstruction from neural activity. Jointly, neural activity is required to represent the statistics of latent features which are mapped by the decoder into distributions over sensory stimuli; decoding correspondingly optimizes the accuracy of the generative model. This framework yields a family of encoding-decoding models, which result in equally accurate generative models, indexed by a measure of the stimulus-induced deviation of neural activity from the marginal distribution over neural activity. Each member of this family predicts a specific relation between properties of the sensory neurons-such as the arrangement of the tuning curve means (preferred stimuli) and widths (degrees of selectivity) in the population-as a function of the statistics of the sensory world. Our approach thus generalizes the efficient coding approach. Notably, here, the form of the constraint on the optimization derives from the requirement of an accurate generative model, whereas it is arbitrary in efficient coding models. Moreover, solutions do not require knowledge of the stimulus distribution, but are learned from data samples; the constraint further acts as a regularizer, allowing the model to generalize beyond the training data. Finally, we characterize the family of models we obtain through alternative measures of performance, such as the error in stimulus reconstruction. We find that a range of models admits comparable performance; in particular, a population of sensory neurons with broad tuning curves, as observed experimentally, yields both low stimulus reconstruction error and an accurate generative model that generalizes robustly to unseen data.


Subject(s)
Computational Biology , Models, Neurological , Animals , Sensory Receptor Cells/physiology , Humans , Neurons/physiology , Algorithms , Action Potentials/physiology , Computer Simulation
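The joint encoding-decoding objective is the standard variational autoencoder ELBO. A minimal Monte-Carlo sketch for a Gaussian encoder, decoder, and prior (an illustration of the objective, not the paper's neural model):

```python
import numpy as np

def gaussian_elbo(x, mu_z, logvar_z, decode, n_samples=200, seed=0):
    """Monte-Carlo ELBO for q(z|x) = N(mu_z, diag(var_z)), prior N(0, I),
    and a unit-variance Gaussian decoder p(x|z) = N(decode(z), I):
    ELBO = E_q[log p(x|z)] - KL(q(z|x) || p(z))."""
    rng = np.random.default_rng(seed)
    std = np.exp(0.5 * logvar_z)
    z = mu_z + std * rng.standard_normal((n_samples, mu_z.size))  # reparam.
    sq_err = np.sum((x - decode(z)) ** 2 + np.log(2.0 * np.pi), axis=1)
    recon = -0.5 * np.mean(sq_err)                  # E_q[log p(x|z)]
    kl = 0.5 * np.sum(np.exp(logvar_z) + mu_z ** 2 - 1.0 - logvar_z)
    return recon - kl

x = np.array([0.2, -0.1])
elbo = gaussian_elbo(x, mu_z=x.copy(), logvar_z=np.log(0.1) * np.ones(2),
                     decode=lambda z: z)   # identity decoder for the sketch
print(round(float(elbo), 2))
```

The KL term is the constraint the abstract describes: it penalizes stimulus-induced deviation of the encoding distribution from the prior, acting as the regularizer that efficient coding models instead impose by hand.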
8.
Elife ; 12, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39037765

ABSTRACT

Hippocampal place cells in freely moving rodents display both theta phase precession and procession, which is thought to play important roles in cognition, but the neural mechanism for producing theta phase shift remains largely unknown. Here, we show that firing rate adaptation within a continuous attractor neural network causes the neural activity bump to oscillate around the external input, resembling theta sweeps of decoded position during locomotion. These forward and backward sweeps naturally account for theta phase precession and procession of individual neurons, respectively. By tuning the adaptation strength, our model explains the difference between 'bimodal cells' showing interleaved phase precession and procession, and 'unimodal cells' in which phase precession predominates. Our model also explains the constant cycling of theta sweeps along different arms in a T-maze environment, the speed modulation of place cells' firing frequency, and the continued phase shift after transient silencing of the hippocampus. We hope that this study will aid an understanding of the neural mechanism supporting theta phase coding in the brain.


Subject(s)
Action Potentials , Place Cells , Theta Rhythm , Animals , Theta Rhythm/physiology , Place Cells/physiology , Action Potentials/physiology , Models, Neurological , Hippocampus/physiology , Hippocampus/cytology , Adaptation, Physiological , Rats
9.
J Neural Eng ; 21(4)2024 Jul 24.
Article in English | MEDLINE | ID: mdl-38994790

ABSTRACT

We define and explain the quasistatic approximation (QSA) as applied to field modeling for electrical and magnetic stimulation. Neuromodulation analysis pipelines include discrete stages, and QSA is applied specifically when calculating the electric and magnetic fields generated in tissues by a given stimulation dose. QSA simplifies the modeling equations to support tractable analysis, enhanced understanding, and computational efficiency. The application of QSA in neuromodulation is based on four underlying assumptions: (A1) no wave propagation or self-induction in tissue, (A2) linear tissue properties, (A3) purely resistive tissue, and (A4) non-dispersive tissue. As a consequence of these assumptions, each tissue is assigned a fixed conductivity, and the simplified equations (e.g. Laplace's equation) are solved for the spatial distribution of the field, which is separated from the field's temporal waveform. Recognizing that electrical tissue properties may be more complex, we explain how QSA can be embedded in parallel or iterative pipelines to model frequency dependence or nonlinearity of conductivity. We survey the history and validity of QSA across specific applications, such as microstimulation, deep brain stimulation, spinal cord stimulation, transcranial electrical stimulation, and transcranial magnetic stimulation. The precise definition and explanation of QSA in neuromodulation are essential for rigor when using QSA models or testing their limits.


Subject(s)
Transcranial Magnetic Stimulation , Humans , Transcranial Magnetic Stimulation/methods , Models, Neurological , Deep Brain Stimulation/methods , Electric Stimulation/methods , Animals , Computer Simulation
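A minimal illustration of the QSA workflow described above (toy 2-D geometry with homogeneous conductivity and a grounded boundary; not a validated modeling pipeline): relax Laplace's equation for the spatial map, then scale that map by the stimulus waveform, exploiting the space-time separability that QSA grants.

```python
import numpy as np

# Under QSA (assumptions A1-A4) the spatial field solves Laplace's
# equation with a fixed conductivity. Toy setup: uniform 2-D grid,
# grounded boundary, two point electrodes held at +1 V and -1 V,
# relaxed by Jacobi iteration.
n = 41
phi = np.zeros((n, n))
fixed = np.zeros((n, n), dtype=bool)
phi[20, 5], phi[20, 35] = 1.0, -1.0          # anode, cathode
fixed[20, 5] = fixed[20, 35] = True
fixed[0, :] = fixed[-1, :] = fixed[:, 0] = fixed[:, -1] = True

for _ in range(5000):
    avg = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
                  + np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
    phi = np.where(fixed, phi, avg)          # keep Dirichlet nodes pinned

# QSA separates space and time: the field at time t is this spatial map
# scaled by the stimulus waveform, e.g. a charge-balanced biphasic pulse.
waveform = np.array([1.0, 1.0, -1.0, -1.0])
field_near_anode = phi[20, 6] * waveform
print(round(float(phi[20, 6]), 3), round(float(phi[20, 20]), 6))
```

By the antisymmetry of the electrode placement, the potential at the midpoint between the electrodes is (numerically) zero, and the field simply flips sign with the pulse polarity - exactly the separation of spatial map and temporal waveform the abstract describes.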
10.
PLoS Comput Biol ; 20(7): e1011826, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38995970

ABSTRACT

Electrical stimulation of peripheral nerves has been used in various pathological contexts for rehabilitation purposes or to alleviate the symptoms of neuropathologies, thus improving the overall quality of life of patients. However, the development of novel therapeutic strategies is still a challenging issue requiring extensive in vivo experimental campaigns and technical development. To facilitate the design of new stimulation strategies, we provide a fully open source and self-contained software framework for the in silico evaluation of peripheral nerve electrical stimulation. Our modeling approach, developed in the popular and well-established Python language, uses an object-oriented paradigm to map the physiological and electrical context. The framework is designed to facilitate multi-scale analysis, from single fiber stimulation to whole multifascicular nerves. It also allows the simulation of complex strategies such as multiple electrode combinations and waveforms ranging from conventional biphasic pulses to more complex modulated kHz stimuli. In addition, we provide automated support for stimulation strategy optimization and handle the computational backend transparently to the user. Our framework has been extensively tested and validated with several existing results in the literature.


Subject(s)
Computational Biology , Computer Simulation , Peripheral Nerves , Software , Peripheral Nerves/physiology , Humans , Electric Stimulation/methods , Electric Stimulation Therapy/methods , Models, Neurological
11.
eNeuro ; 11(7)2024 Jul.
Article in English | MEDLINE | ID: mdl-39054054

ABSTRACT

The role of gamma rhythm (30-80 Hz) in visual processing is debated; stimuli like gratings and hue patches generate strong gamma, but many natural images do not. Could image gamma responses be predicted by approximating images as gratings or hue patches? Surprisingly, this question remains unanswered, since the joint dependence of gamma on multiple features is poorly understood. We recorded local field potentials and electrocorticogram from two female monkeys while presenting natural images and parametric stimuli varying along several feature dimensions. Gamma responses to different grating/hue features were separable, allowing for a multiplicative model based on individual features. By fitting a hue patch to the image around the receptive field, this simple model could predict gamma responses to chromatic images across scales with reasonably high accuracy. Our results provide a simple "baseline" model to predict gamma from local image properties, against which more complex models of natural vision can be tested.


Subject(s)
Color Perception , Gamma Rhythm , Photic Stimulation , Animals , Female , Gamma Rhythm/physiology , Photic Stimulation/methods , Color Perception/physiology , Electrocorticography , Macaca mulatta , Visual Cortex/physiology , Models, Neurological
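The separable, multiplicative model can be sketched with hypothetical numbers (not the recorded data): generate a response matrix that is a product of per-feature curves, then fit those curves from the marginal means and predict the joint response as their (gain-normalized) product.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "gamma response" matrix over hue x size combinations,
# built to be separable (R = f(hue) * g(size)) plus small noise, mirroring
# the separability of feature dependencies reported in the abstract.
f_hue = np.array([1.0, 3.0, 2.0, 0.5])
g_size = np.array([0.5, 1.0, 2.0])
R = np.outer(f_hue, g_size) * np.exp(0.05 * rng.standard_normal((4, 3)))

# Multiplicative model fitted from the marginal mean responses.
f_hat = R.mean(axis=1)
g_hat = R.mean(axis=0)
R_hat = np.outer(f_hat, g_hat) / R.mean()     # divide out the double gain
rel_err = np.max(np.abs(R_hat - R) / R)
print(round(float(rel_err), 3))
```

For an exactly separable matrix this marginal-product reconstruction is exact; with noise it stays close, which is the sense in which per-feature curves alone can predict the joint gamma response.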
12.
Sci Rep ; 14(1): 16714, 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39030197

ABSTRACT

Studies on the neural correlates of navigation in 3D environments face several unresolved issues. For example, experimental studies show markedly different place cell responses in rats and bats, both navigating in 3D environments. In this study, we focus on modelling the spatial cells of rodents in a 3D environment. We propose a deep autoencoder network to model the place and grid cells in a simulated agent navigating in a 3D environment. The input layer to the autoencoder network is the head direction (HD) layer, which encodes the agent's HD in terms of azimuth (θ) and pitch (ϕ) angles. The output of this layer is given as input to the Path Integration (PI) layer, which computes displacement in all the preferred directions. The bottleneck layer of the autoencoder encodes the spatial cell-like responses. Both grid cell- and place cell-like responses are observed. The proposed model is verified using two experimental studies with two 3D environments. This model paves the way for a holistic approach using deep neural networks to model spatial cells in 3D navigation.


Subject(s)
Hippocampus , Animals , Hippocampus/physiology , Hippocampus/cytology , Rats , Models, Neurological , Place Cells/physiology , Neural Networks, Computer , Spatial Navigation/physiology , Grid Cells/physiology , Rodentia
13.
Sci Rep ; 14(1): 16682, 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39030222

ABSTRACT

Preparatory brain activity is a cornerstone of proactive cognitive control, a top-down process optimizing attention, perception, and inhibition, fostering cognitive flexibility and adaptive attention control in the human brain. In this study, we proposed a neuroimaging-informed convolutional neural network model to predict cognitive control performance from the baseline pre-stimulus preparatory electrophysiological activity of core cognitive control regions. In particular, combined with perturbation-based occlusion sensitivity analysis, we pinpointed the regions whose preparatory activity is most predictive of proactive cognitive control. We found that preparatory arrhythmic broadband neural dynamics in the right anterior insula, the right precentral gyrus, and the right opercular part of the inferior frontal gyrus (posterior ventrolateral prefrontal cortex) are highly predictive of prospective cognitive control performance. The pre-stimulus preparatory activity in these regions corresponds to readiness for conflict detection, inhibitory control, and overall elaborate attentional processing. We integrated the convolutional neural network with a biologically inspired Jansen-Rit neural mass model to investigate neurostimulation effects on cognitive control. High-frequency stimulation (130 Hz) of the left anterior insula provides significant cognitive enhancement, especially in reducing conflict errors, despite the right anterior insula's higher predictive value for prospective cognitive control performance. Thus, effective neurostimulation targets may differ from regions showing biomarker activity. Finally, we validated our theoretical finding by evaluating intrinsic neuromodulation through neurofeedback-guided volitional control in an independent dataset. We found that the left anterior insula was intrinsically modulated in real time by volitional control of emotional valence, but not arousal. Our findings further highlight the central role of the anterior insula in orchestrating proactive cognitive control processes, positioning it at the top of the cognitive control hierarchy.


Subject(s)
Cognition , Neural Networks, Computer , Humans , Male , Female , Cognition/physiology , Adult , Insular Cortex/physiology , Insular Cortex/diagnostic imaging , Young Adult , Attention/physiology , Brain Mapping/methods , Conflict, Psychological , Models, Neurological , Electroencephalography
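Perturbation-based occlusion sensitivity, as used above to pinpoint predictive regions, can be sketched generically (dummy scoring function standing in for the trained CNN): occlude each patch of the input, record how much the model's score drops, and read the map of drops as importance.

```python
import numpy as np

def occlusion_map(model, x, patch=4, baseline=0.0):
    """Occlusion sensitivity: zero out each patch of the input and record
    how much the model's score drops; large drops mark influential regions."""
    score = model(x)
    h, w = x.shape
    sens = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            x_occ = x.copy()
            x_occ[i:i + patch, j:j + patch] = baseline
            sens[i // patch, j // patch] = score - model(x_occ)
    return sens

# Toy "model": responds only to the top-left 4x4 corner of its input,
# so the sensitivity map should light up only there.
model = lambda x: float(x[:4, :4].sum())
x = np.ones((16, 16))
sens = occlusion_map(model, x)
print(sens)
```

In the study's setting, the input patches correspond to the pre-stimulus activity of candidate regions, and the peak of the map identifies the region whose occlusion most degrades predicted performance.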
14.
Nat Commun ; 15(1): 6118, 2024 Jul 20.
Article in English | MEDLINE | ID: mdl-39033142

ABSTRACT

A fundamental task for the brain is to generate predictions of future sensory inputs and to signal errors in these predictions. Many neurons have been shown to signal omitted stimuli during periodic stimulation, even in the retina. However, the mechanisms of this error signaling are unclear. Here we show that depressing inhibitory synapses shape the timing of the response to an omitted stimulus in the retina. While ganglion cells, the retinal output, responded to an omitted flash with a constant latency over many frequencies of the flash sequence, we found that this was not the case once inhibition was blocked. We built a simple circuit model and showed that depressing inhibitory synapses were a necessary component to reproduce our experimental findings. A new prediction of our model is that the accuracy of the constant latency requires a sufficient number of flashes in the stimulus, which we confirmed experimentally. Depressing inhibitory synapses could thus be a key component in generating the predictive responses observed in the retina, and potentially in many brain areas.


Subject(s)
Retinal Ganglion Cells , Synapses , Retinal Ganglion Cells/physiology , Synapses/physiology , Animals , Photic Stimulation , Models, Neurological , Retina/physiology , Neural Inhibition/physiology , Mice
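A standard stand-in for a depressing synapse is the Tsodyks-Markram resource model; the minimal sketch below (illustrative parameters, not the paper's fitted circuit) shows the key ingredient: synaptic efficacy falls progressively over a periodic train, so the state of the synapse encodes the stimulus history.

```python
import numpy as np

def depressing_synapse(spike_times, U=0.5, tau_rec=0.3):
    """Tsodyks-Markram-style short-term depression: each presynaptic
    spike releases a fraction U of the available resources x, which
    recover toward 1 with time constant tau_rec (seconds). Returns the
    per-spike release amplitudes."""
    x, t_last, amps = 1.0, None, []
    for t in spike_times:
        if t_last is not None:
            x = 1.0 - (1.0 - x) * np.exp(-(t - t_last) / tau_rec)
        amps.append(U * x)
        x -= U * x
        t_last = t
    return np.array(amps)

amps = depressing_synapse(np.arange(0, 0.5, 0.1))   # 10 Hz flash train
print(np.round(amps, 3))
```

Amplitudes decline monotonically toward a frequency-dependent steady state; it is this history dependence that lets depressing inhibition time the omitted-stimulus response in the abstract's circuit model.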
15.
PLoS Comput Biol ; 20(7): e1012257, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38959262

ABSTRACT

Neuromechanical studies investigate how the nervous system interacts with the musculoskeletal (MSK) system to generate volitional movements. Such studies have been supported by simulation models that provide insights into variables that cannot be measured experimentally and allow a large number of conditions to be tested before the experimental analysis. However, current simulation models of electromyography (EMG), a core physiological signal in neuromechanical analyses, remain either limited in accuracy and conditions or are computationally heavy to apply. Here, we provide a computational platform to enable future work to overcome these limitations by presenting NeuroMotion, an open-source simulator that can modularly test a variety of approaches to the full-spectrum synthesis of EMG signals during voluntary movements. We demonstrate NeuroMotion using three sample modules. The first module is an upper-limb MSK model with OpenSim API to estimate the muscle fibre lengths and muscle activations during movements. The second module is BioMime, a deep neural network-based EMG generator that receives nonstationary physiological parameter inputs, like the afore-estimated muscle fibre lengths, and efficiently outputs motor unit action potentials (MUAPs). The third module is a motor unit pool model that transforms the muscle activations into discharge timings of motor units. The discharge timings are convolved with the output of BioMime to simulate EMG signals during the movement. We first show how MUAP waveforms change during different levels of physiological parameter variations and different movements. We then show that the synthetic EMG signals during two-degree-of-freedom hand and wrist movements can be used to augment experimental data for regressing joint angles. Ridge regressors trained on the synthetic dataset were directly used to predict joint angles from experimental data. 
In this way, NeuroMotion was able to generate full-spectrum EMG for the first use-case of human forearm electrophysiology during voluntary hand, wrist, and forearm movements. All intermediate variables are available, which allows the user to study cause-effect relationships in the complex neuromechanical system, to iterate quickly on algorithms before collecting experimental data, and to validate algorithms that estimate non-measurable parameters in experiments. We expect this modular platform will enable validation of generative EMG models, complement experimental approaches, and empower neuromechanical research.


Subject(s)
Computational Biology , Electromyography , Movement , Muscle, Skeletal , Electromyography/methods , Humans , Movement/physiology , Muscle, Skeletal/physiology , Neural Networks, Computer , Biomechanical Phenomena/physiology , Computer Simulation , Action Potentials/physiology , Models, Neurological
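The final synthesis step described above - convolving motor-unit discharge timings with MUAP waveforms and summing across units - can be sketched as follows. The waveforms and timings here are hypothetical placeholders; in NeuroMotion they would come from the BioMime and motor unit pool modules respectively.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur = 2048, 1.0                          # sampling rate (Hz), seconds
n_units, n_samp = 5, int(fs * dur)

# Hypothetical biphasic MUAP waveforms and sparse discharge trains.
t = np.arange(-0.01, 0.01, 1 / fs)
emg = np.zeros(n_samp)
for k in range(n_units):
    muap = (1 + 0.3 * k) * t * np.exp(-(t / 0.003) ** 2)  # biphasic shape
    train = np.zeros(n_samp)
    spikes = rng.choice(n_samp, size=12 + 2 * k, replace=False)
    train[spikes] = 1.0                      # discharge timings
    emg += np.convolve(train, muap, mode="same")
print(emg.shape)
```

Because every intermediate (waveforms, timings, per-unit contributions) is explicit, the same structure supports the cause-effect analyses the abstract highlights.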
16.
Hum Brain Mapp ; 45(10): e26782, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-38989630

ABSTRACT

This study assesses the reliability of resting-state dynamic causal modelling (DCM) of magnetoencephalography (MEG) under conductance-based canonical microcircuit models, in terms of both posterior parameter estimates and model evidence. We use resting-state MEG data from two sessions, acquired 2 weeks apart, from a cohort with high between-subject variance arising from Alzheimer's disease. Our focus is not on the effect of disease, but on the reliability of the methods (as within-subject, between-session agreement), which is crucial for future studies of disease progression and drug intervention. To assess the reliability of first-level DCMs, we compare model evidence associated with the covariance among subject-specific free energies (i.e., the 'quality' of the models) with versus without intraclass correlations. We then use parametric empirical Bayes (PEB) to investigate the differences between the inferred DCM parameter probability distributions at the between-subject level. Specifically, we examined the evidence for or against parameter differences (i) within-subject, within-session, and between epochs; (ii) within-subject, between-session; and (iii) within-site, between-subjects, accommodating the conditional dependency among parameter estimates. We show that for data acquired close in time, and under similar circumstances, more than 95% of inferred DCM parameters are unlikely to differ, speaking to mutual predictability over sessions. Using PEB, we show a reciprocal relationship between a conventional definition of 'reliability' and the conditional dependency among inferred model parameters. Our analyses confirm the reliability and reproducibility of conductance-based DCMs for resting-state neurophysiological data. In this respect, the implicit generative modelling is suitable for interventional and longitudinal studies of neurological and psychiatric disorders.


Subject(s)
Alzheimer Disease , Magnetoencephalography , Humans , Magnetoencephalography/methods , Magnetoencephalography/standards , Reproducibility of Results , Alzheimer Disease/physiopathology , Male , Female , Aged , Models, Neurological , Bayes Theorem
17.
Hum Brain Mapp ; 45(10): e26720, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-38994740

ABSTRACT

Electro/Magneto-EncephaloGraphy (EEG/MEG) source imaging (EMSI) of epileptic activity from deep generators is often challenging due to the higher sensitivity of EEG/MEG to superficial regions and to the spatial configuration of subcortical structures. We previously demonstrated the ability of the coherent Maximum Entropy on the Mean (cMEM) method to accurately localize the superficial cortical generators and their spatial extent. Here, we propose a depth-weighted adaptation of cMEM to localize deep generators more accurately. These methods were evaluated using realistic MEG/high-density EEG (HD-EEG) simulations of epileptic activity and actual MEG/HD-EEG recordings from patients with focal epilepsy. We incorporated depth-weighting within the MEM framework to compensate for its preference for superficial generators. We also included a mesh of both hippocampi, as an additional deep structure in the source model. We generated 5400 realistic simulations of interictal epileptic discharges for MEG and HD-EEG involving a wide range of spatial extents and signal-to-noise ratio (SNR) levels, before investigating EMSI on clinical HD-EEG in 16 patients and MEG in 14 patients. Clinical interictal epileptic discharges were marked by visual inspection. We applied three EMSI methods: cMEM, depth-weighted cMEM and depth-weighted minimum norm estimate (MNE). The ground truth was defined as the true simulated generator or as a drawn region based on clinical information available for patients. For deep sources, depth-weighted cMEM improved the localization when compared to cMEM and depth-weighted MNE, whereas depth-weighted cMEM did not deteriorate localization accuracy for superficial regions. For patients' data, we observed improvement in localization for deep sources, especially for the patients with mesial temporal epilepsy, for which cMEM failed to reconstruct the initial generator in the hippocampus. Depth weighting was more crucial for MEG (gradiometers) than for HD-EEG. 
Similar results were found when considering depth weighting for the wavelet extension of MEM. In conclusion, depth-weighted cMEM improved the localization of deep sources with little or no deterioration in the localization of superficial sources. This was demonstrated using extensive MEG and HD-EEG simulations, as well as clinical MEG and HD-EEG recordings from patients with epilepsy.


Subject(s)
Electroencephalography , Entropy , Magnetoencephalography , Humans , Magnetoencephalography/methods , Electroencephalography/methods , Adult , Female , Male , Computer Simulation , Young Adult , Epilepsy/physiopathology , Epilepsy/diagnostic imaging , Middle Aged , Brain Mapping/methods , Brain/diagnostic imaging , Brain/physiopathology , Hippocampus/diagnostic imaging , Hippocampus/physiopathology , Models, Neurological
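Depth weighting in a minimum norm setting can be sketched on a toy lead field (a random matrix standing in for a real head model, so the numbers are purely illustrative): scale each source's prior variance by its lead-field column norm raised to a negative power, which counteracts the bias toward superficial sources.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 20, 50

# Toy lead field: deeper sources project more weakly to the sensors,
# which biases the plain minimum norm estimate toward superficial ones.
depth_gain = np.linspace(1.0, 0.2, n_sources)      # 1 = most superficial
L = rng.standard_normal((n_sensors, n_sources)) * depth_gain

def mne_estimate(L, y, lam=1e-2, p=0.0):
    """Minimum norm estimate with optional depth weighting:
    J = R L^T (L R L^T + lam I)^{-1} y with R = diag(||l_i||^(-2p));
    p = 0 is the unweighted MNE, p = 1 compensates for depth."""
    R = np.sum(L ** 2, axis=0) ** (-p)             # source prior variances
    G = (L * R) @ L.T + lam * np.eye(L.shape[0])
    return R * (L.T @ np.linalg.solve(G, y))

j_true = np.zeros(n_sources)
j_true[-1] = 1.0                                   # activate deepest source
y = L @ j_true
peak_plain = int(np.argmax(np.abs(mne_estimate(L, y, p=0.0))))
peak_depth = int(np.argmax(np.abs(mne_estimate(L, y, p=1.0))))
print(peak_plain, peak_depth)
```

The same compensation idea, embedded in the MEM framework rather than in a minimum norm prior, is what the abstract's depth-weighted cMEM implements.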
18.
Nat Commun ; 15(1): 5861, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38997274

ABSTRACT

Electrical stimulation is a key tool in neuroscience, both in brain mapping studies and in many therapeutic applications such as cochlear, vestibular, and retinal neural implants. Due to safety considerations, stimulation is restricted to short biphasic pulses. Despite decades of research and development, neural implants lead to varying restoration of function in patients. In this study, we use computational modeling to provide an explanation for how pulsatile stimulation affects axonal channels and therefore leads to variability in restoration of neural responses. The phenomenological explanation is transformed into equations that predict induced firing rate as a function of pulse rate, pulse amplitude, and spontaneous firing rate. We show that these equations predict simulated responses to pulsatile stimulation with a variety of parameters as well as several features of experimentally recorded primate vestibular afferent responses to pulsatile stimulation. We then discuss the implications of these effects for improving clinical stimulation paradigms and electrical stimulation-based experiments.
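As a hedged illustration of why induced firing rate depends nonlinearly on pulse rate and spontaneous rate, here is a generic refractory-blocking toy: each pulse or spontaneous event elicits a spike only if it falls outside a refractory window. This is not the paper's fitted equations, and it ignores pulse amplitude entirely:

```python
import numpy as np

# Generic toy model: pulses and spontaneous spikes compete through a
# refractory window, so induced rate saturates at high pulse rates.
rng = np.random.default_rng(0)

def firing_rate(pulse_rate_hz, spont_rate_hz, t_ref=0.003, dur=10.0):
    """Spikes per second when regular pulses mix with Poisson spontaneous
    events, with a spike blocked if it arrives within t_ref of the last."""
    pulses = (np.arange(0.0, dur, 1.0 / pulse_rate_hz)
              if pulse_rate_hz > 0 else np.array([]))
    spont = rng.uniform(0.0, dur, rng.poisson(spont_rate_hz * dur))
    events = np.sort(np.concatenate([pulses, spont]))
    last, n_spikes = -np.inf, 0
    for t in events:
        if t - last >= t_ref:       # outside refractory window -> spike
            n_spikes += 1
            last = t
    return n_spikes / dur

for pr in (50, 200, 500):
    print(pr, round(firing_rate(pr, spont_rate_hz=30), 1))
```

At low pulse rates the induced rate tracks pulse rate plus spontaneous rate almost additively; near 1/`t_ref` (about 333 spikes/s here) pulses increasingly fall in the refractory window and the curve saturates, one simple source of the nonlinear pulse-rate dependence the abstract describes.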


Subject(s)
Electric Stimulation , Animals , Electric Stimulation/methods , Models, Neurological , Macaca mulatta , Action Potentials/physiology , Neurons/physiology , Computer Simulation , Humans , Vestibule, Labyrinth/physiology
19.
Adv Neurobiol ; 38: 237-257, 2024.
Article in English | MEDLINE | ID: mdl-39008019

ABSTRACT

Memory engrams in mouse brains are potentially related to groups of concept cells in human brains. A single concept cell in the human hippocampus responds, for example, not only to different images of the same object or person but also to its name written down in characters. Importantly, a single mental concept (object or person) is represented by several concept cells, and each concept cell can respond to more than one concept. Computational work shows how mental concepts can be embedded in recurrent artificial neural networks as memory engrams and how neurons shared between different engrams can lead to associations between concepts. Observations at the level of neurons can therefore be linked to cognitive notions of memory recall and association chains between memory items.
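The engram picture can be reproduced in a few lines with a Hopfield-style attractor network; the network size, patterns, and overlap below are illustrative choices, not taken from the chapter:

```python
import numpy as np

# Toy Hopfield-style network: two overlapping "engrams" (binary patterns)
# stored in one Hebbian weight matrix.
n = 40
pat_a = -np.ones(n); pat_a[:12] = 1    # engram for concept A (neurons 0-11)
pat_b = -np.ones(n); pat_b[8:20] = 1   # engram for concept B; shares 8-11 with A

W = np.outer(pat_a, pat_a) + np.outer(pat_b, pat_b)   # Hebbian storage
np.fill_diagonal(W, 0)

def recall(cue, steps=10):
    """Iterate sign(W s): the state falls into the nearest stored engram."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

# A corrupted cue for concept A still settles onto the full engram, even
# though neurons 8-11 also belong to concept B's engram.
cue = pat_a.copy(); cue[20:30] *= -1   # flip 10 of A's bits
print(np.array_equal(recall(cue), pat_a))  # prints "True"
```

Because neurons 8-11 sit in both engrams, activating one concept partially excites the other, a minimal substrate for the associations between concepts described above.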


Subject(s)
Hippocampus , Memory , Neural Networks, Computer , Animals , Humans , Mice , Brain/physiology , Hippocampus/physiology , Memory/physiology , Mental Recall/physiology , Models, Neurological , Neurons/physiology
20.
Proc Natl Acad Sci U S A ; 121(29): e2316765121, 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-38990946

ABSTRACT

How does the brain simultaneously process signals that carry complementary information, like raw sensory signals and their transformed counterparts, without any disruptive interference? Contemporary research underscores the brain's adeptness at using decorrelated responses to reduce such interference. Both neurophysiological findings and artificial neural networks support the notion of orthogonal representations for signal differentiation and parallel processing. Yet where and how raw sensory signals are transformed into more abstract representations remains unclear. Using a temporal pattern discrimination task in trained monkeys, we revealed that the second somatosensory cortex (S2) efficiently segregates faithful and transformed neural responses into orthogonal subspaces. Importantly, S2 population encoding for transformed signals, but not for faithful ones, disappeared during a nondemanding version of this task, which suggests that signal transformation and its decoding by downstream areas are only active on demand. A mechanistic computational model points to gain modulation as a possible biological mechanism for the observed context-dependent computation. Furthermore, the individual neural activities underlying the orthogonal population representations exhibited a continuum of responses, with no well-determined clusters. These findings suggest that the brain, while employing a continuum of heterogeneous neural responses, splits population signals into orthogonal subspaces in a context-dependent fashion to enhance robustness and performance and to improve coding efficiency.


Subject(s)
Macaca mulatta , Somatosensory Cortex , Animals , Somatosensory Cortex/physiology , Models, Neurological , Male