Results 1 - 20 of 41
1.
J Comput Neurosci; 52(2): 145-164, 2024 May.
Article in English | MEDLINE | ID: mdl-38607466

ABSTRACT

Traveling waves of neural activity emerge in cortical networks both spontaneously and in response to stimuli. The spatiotemporal structure of waves can indicate the information they encode and the physiological processes that sustain them. Here, we investigate the stimulus-response relationships of traveling waves emerging in adaptive neural fields as a model of visual motion processing. Neural field equations model the activity of cortical tissue as a continuum excitable medium, and adaptive processes provide negative feedback, generating localized activity patterns. Synaptic connectivity in our model is described by an integral kernel that weakens dynamically due to activity-dependent synaptic depression, leading to marginally stable traveling fronts (with attenuated backs) or pulses of a fixed speed. Our analysis quantifies how weak stimuli shift the relative position of these waves over time, characterized by a wave response function we obtain perturbatively. Persistent and continuously visible stimuli model moving visual objects. Intermittent flashes that hop across visual space can produce the experience of smooth apparent visual motion. Entrainment of waves to both kinds of moving stimuli is well characterized by our theory and numerical simulations, providing a mechanistic description of the perception of visual motion.
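
A standard form of a neural field with activity-dependent synaptic depression, consistent with the description above (the notation and the weak stimulus term are illustrative, not necessarily the paper's exact formulation), is

\begin{aligned}
\tau\,\partial_t u(x,t) &= -u(x,t) + \int_{-\infty}^{\infty} w(x-y)\, q(y,t)\, f\big(u(y,t)\big)\, dy + \varepsilon I(x,t),\\
\partial_t q(x,t) &= \frac{1-q(x,t)}{\alpha} - \beta\, q(x,t)\, f\big(u(x,t)\big),
\end{aligned}

where u is the neural activity, q is the fraction of available synaptic resources (the depression variable), w is the synaptic kernel, f is a firing-rate nonlinearity, and the weak moving input \varepsilon I(x,t) is the perturbation whose effect on wave position the wave response function quantifies.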


Subject(s)
Models, Neurological; Motion Perception; Photic Stimulation; Motion Perception/physiology; Humans; Neurons/physiology; Animals; Computer Simulation; Visual Cortex/physiology; Adaptation, Physiological/physiology
2.
PLoS Comput Biol; 19(11): e1011622, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37943956

ABSTRACT

Experience shapes our expectations and helps us learn the structure of the environment. Inference models render such learning as a gradual refinement of the observer's estimate of the environmental prior. For instance, when retaining an estimate of an object's features in working memory, learned priors may bias the estimate in the direction of common feature values. Humans display such biases when retaining color estimates on short time intervals. We propose that these systematic biases emerge from modulation of synaptic connectivity in a neural circuit based on the experienced stimulus history, shaping the persistent and collective neural activity that encodes the stimulus estimate. Resulting neural activity attractors are aligned to common stimulus values. Using recently published human response data from a delayed-estimation task in which stimuli (colors) were drawn from a heterogeneous distribution that did not necessarily correspond with reported population biases, we confirm that most subjects' response distributions are better described by experience-dependent learning models than by models with fixed biases. This work suggests systematic limitations in working memory reflect efficient representations of inferred environmental structure, providing new insights into how humans integrate environmental knowledge into their cognitive strategies.
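
As a hedged illustration of the kind of experience-dependent learning the abstract describes (not the authors' fitted model), the sketch below incrementally learns a prior over a circular color space from the stimulus history and biases each noisy memory toward it; all names and parameter values are placeholders.

import numpy as np

rng = np.random.default_rng(1)
n_bins = 64
colors = np.linspace(0.0, 2 * np.pi, n_bins, endpoint=False)   # circular feature space
log_prior = np.zeros(n_bins)                                    # start from a flat prior

def circ_dist(a, b):
    """Signed circular distance from b to a."""
    return np.angle(np.exp(1j * (a - b)))

kappa_learn = 2.0    # width of the kernel used to update the prior (assumed)
eta = 0.05           # prior learning rate (assumed)
kappa_mem = 8.0      # precision of the noisy memory trace (assumed)

def report(stimulus):
    """Return a biased report and update the learned prior from this trial."""
    global log_prior
    memory = stimulus + rng.vonmises(0.0, kappa_mem)            # noisy memory of the stimulus
    log_like = kappa_mem * np.cos(circ_dist(colors, memory))    # von Mises log-likelihood
    post = np.exp(log_like + log_prior)
    post /= post.sum()
    estimate = np.angle(np.sum(post * np.exp(1j * colors)))     # circular posterior mean
    bump = np.exp(kappa_learn * np.cos(circ_dist(colors, stimulus)))
    log_prior = (1 - eta) * log_prior + eta * np.log(bump / bump.sum())  # slow prior update
    return estimate

history = rng.choice([0.5, 1.5], size=500, p=[0.8, 0.2])        # one common and one rare color
reports = np.array([report(s) for s in history])
rare = history == 1.5
print("mean signed error for the rare color (pull toward the common value 0.5):",
      float(np.mean(circ_dist(reports[rare], 1.5))))

Reports of the rarely experienced color are pulled toward the frequently experienced value, the signature of a learned, experience-dependent bias.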


Subject(s)
Learning; Memory, Short-Term; Humans; Memory, Short-Term/physiology
3.
PLoS Comput Biol; 18(7): e1010323, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35853038

ABSTRACT

Solutions to challenging inference problems are often subject to a fundamental trade-off between: 1) bias (being systematically wrong) that is minimized with complex inference strategies, and 2) variance (being oversensitive to uncertain observations) that is minimized with simple inference strategies. However, this trade-off is based on the assumption that the strategies being considered are optimal for their given complexity and thus has unclear relevance to forms of inference based on suboptimal strategies. We examined inference problems applied to rare, asymmetrically available evidence, which a large population of human subjects solved using a diverse set of strategies that varied in form and complexity. In general, subjects using more complex strategies tended to have lower bias and variance, but with a dependence on the form of strategy that reflected an inversion of the classic bias-variance trade-off: subjects who used more complex, but imperfect, Bayesian-like strategies tended to have lower variance but higher bias because of incorrect tuning to latent task features, whereas subjects who used simpler heuristic strategies tended to have higher variance, because they operated more directly on the observed samples, but lower, near-normative bias. Our results help define new principles that govern individual differences in behavior that depends on rare-event inference and, more generally, clarify the information-processing trade-offs that can be sensitive not just to the complexity, but also to the optimality, of the inference process.
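
The inverted trade-off can be made concrete with a toy rare-event estimation problem (not the task used in the study): estimating a small probability either from raw sample frequencies (simple, unbiased, higher variance) or with a Beta prior whose mean is mistuned (more complex, lower variance, biased). Parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(0)
p_true = 0.05          # true rate of the rare event
n_obs = 40             # observations per simulated subject
n_subjects = 20000

counts = rng.binomial(n_obs, p_true, size=n_subjects)

# Simple heuristic strategy: raw sample frequency.
freq_est = counts / n_obs

# More complex, Bayesian-like strategy: posterior mean under a Beta prior whose
# mean (0.15) is mistuned relative to the true rate, mimicking incorrect tuning
# to latent task features.
a0, b0 = 3.0, 17.0
bayes_est = (counts + a0) / (n_obs + a0 + b0)

for name, est in [("frequency", freq_est), ("mistuned Bayes", bayes_est)]:
    bias = est.mean() - p_true
    var = est.var()
    print(f"{name:>14s}: bias = {bias:+.4f}, variance = {var:.5f}, MSE = {bias**2 + var:.5f}")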


Subject(s)
Cognition; Bayes Theorem; Bias; Humans
4.
SIAM J Appl Dyn Syst; 21(4): 2579-2609, 2022.
Article in English | MEDLINE | ID: mdl-38250343

ABSTRACT

Localized persistent cortical neural activity is a validated neural substrate of parametric working memory. Such activity "bumps" represent the continuous location of a cue over several seconds. Pyramidal (excitatory (E)) and interneuronal (inhibitory (I)) subpopulations exhibit tuned bumps of activity, linking neural dynamics to behavioral inaccuracies observed in memory recall. However, many bump attractor models collapse these subpopulations into a single joint E/I (lateral inhibitory) population and do not consider the role of interpopulation neural architecture and noise correlations. Both factors have a high potential to impinge upon the stochastic dynamics of these bumps, ultimately shaping behavioral response variance. In our study, we consider a neural field model with separate E/I populations and leverage asymptotic analysis to derive a nonlinear Langevin system describing E/I bump interactions. While the E bump attracts the I bump, the I bump stabilizes but can also repel the E bump, which can result in prolonged relaxation dynamics when both bumps are perturbed. Furthermore, the structure of noise correlations within and between subpopulations strongly shapes the variance in bump position. Surprisingly, higher interpopulation correlations reduce variance.
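
A minimal sketch of the kind of Langevin description referred to above, linearized for simplicity and with illustrative coupling strengths, noise amplitudes, and correlations (not the derived coefficients): the I bump is attracted to the E bump, the E bump is weakly repelled by the I bump, and increasing the interpopulation noise correlation lowers the variance of the stored position.

import numpy as np

rng = np.random.default_rng(2)
dt, T, trials = 1e-3, 5.0, 2000
steps = int(T / dt)

a_IE = 4.0          # I bump drifts toward the E bump (attraction); illustrative value
a_EI = -1.0         # negative sign: the I bump repels the E bump
sigma_E = sigma_I = 0.3

def e_bump_variance(rho):
    """Variance of the E-bump position at time T for interpopulation noise correlation rho."""
    xE = np.zeros(trials)
    xI = np.zeros(trials)
    for _ in range(steps):
        zE = rng.standard_normal(trials)
        zI = rho * zE + np.sqrt(1 - rho**2) * rng.standard_normal(trials)  # correlated noise
        xE += a_EI * (xI - xE) * dt + sigma_E * np.sqrt(dt) * zE
        xI += a_IE * (xE - xI) * dt + sigma_I * np.sqrt(dt) * zI
    return xE.var()

for rho in (0.0, 0.5, 0.9):
    print(f"noise correlation {rho:.1f}: Var[x_E(T)] = {e_bump_variance(rho):.3f}")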

5.
J Math Biol; 83(2): 20, 2021 Jul 29.
Article in English | MEDLINE | ID: mdl-34324069

ABSTRACT

Honey bees make decisions regarding foraging and nest-site selection in groups ranging from hundreds to thousands of individuals. To effectively make these decisions, bees need to communicate within a spatially distributed group. However, the spatiotemporal dynamics of honey bee communication have been mostly overlooked in models of collective decisions, which focus primarily on mean-field models of opinion dynamics. We analyze how the spatial properties of the nest or hive, and the movement of individuals with different belief states (uncommitted or committed) therein, affect the rate of information transmission using spatially-extended models of collective decision-making within a hive. Honey bees waggle-dance to recruit conspecifics with an intensity that is a threshold nonlinear function of the waggler concentration. Our models range from treating the hive as a chain of discrete patches to a continuous line (long narrow hive). The combination of population-thresholded recruitment and compartmentalized populations generates tradeoffs between rapid information propagation with strong population dispersal and recruitment failures resulting from excessive population diffusion. The same combination also creates an effective colony-level signal-detection mechanism whereby recruitment to low-quality objectives is blocked.
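
A hedged sketch of a discrete-patch caricature of such a model (uncommitted and committed bees on a chain of patches, diffusive movement, and recruitment that switches on only above a waggler-concentration threshold); the patch count, rates, and threshold are illustrative rather than the paper's values.

import numpy as np

n_patches, dt, T = 20, 0.01, 50.0
D = 0.5          # movement (diffusion) rate between neighboring patches
r = 1.0          # recruitment rate once dancing exceeds threshold
theta = 0.2      # waggler-concentration threshold for recruitment

u = np.full(n_patches, 1.0)      # uncommitted bees per patch
c = np.zeros(n_patches)
c[0] = 0.5                       # committed scouts start at one end of the hive

def laplacian(x):
    """Discrete diffusion with no-flux boundaries."""
    return np.concatenate(([x[1] - x[0]], x[2:] - 2 * x[1:-1] + x[:-2], [x[-2] - x[-1]]))

for _ in range(int(T / dt)):
    recruit = r * u * c * (c > theta)        # thresholded, waggler-dependent recruitment
    u += dt * (D * laplacian(u) - recruit)
    c += dt * (D * laplacian(c) + recruit)

print("fraction committed per patch:", np.round(c / (u + c), 2))

Raising D spreads information along the chain faster but can dilute the committed population below theta, illustrating the propagation-versus-dilution tension described above.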


Subject(s)
Animal Communication; Movement; Animals; Bees
6.
Phys Rev Lett; 125(21): 218302, 2020 Nov 20.
Article in English | MEDLINE | ID: mdl-33274999

ABSTRACT

How does temporally structured private and social information shape collective decisions? To address this question we consider a network of rational agents who independently accumulate private evidence that triggers a decision upon reaching a threshold. When seen by the whole network, the first agent's choice initiates a wave of new decisions; later decisions have less impact. In heterogeneous networks, first decisions are made quickly by impulsive individuals who need little evidence to make a choice but, even when wrong, can reveal the correct options to nearly everyone else. We conclude that groups composed of diverse individuals can make more efficient decisions than homogeneous ones.
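
A hedged toy version of this setup (not the paper's formulation): each agent runs an independent drift-diffusion accumulator to a heterogeneous threshold, and once the first decision becomes visible every undecided agent adds a fixed social increment toward the observed choice. The thresholds, drift, and size of the social kick are arbitrary.

import numpy as np

rng = np.random.default_rng(3)
n_agents, dt, drift, sigma = 200, 1e-3, 0.5, 1.0   # the correct option has positive drift
thresholds = rng.uniform(0.3, 3.0, n_agents)       # heterogeneous: small threshold = impulsive
social_kick = 1.0                                  # evidence added when the first decision is seen

x = np.zeros(n_agents)
decided = np.zeros(n_agents, dtype=bool)
choice = np.zeros(n_agents)
first_seen = False

for _ in range(200000):
    x[~decided] += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal((~decided).sum())
    hit = ~decided & (np.abs(x) >= thresholds)
    choice[hit] = np.sign(x[hit])
    decided |= hit
    if hit.any() and not first_seen:
        x[~decided] += social_kick * np.sign(x[hit][0])   # the first decision is broadcast
        first_seen = True
    if decided.all():
        break

order = np.argsort(thresholds)
print("accuracy of the 50 most impulsive agents :", (choice[order[:50]] == 1).mean())
print("accuracy of the 50 most deliberate agents:", (choice[order[-50:]] == 1).mean())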


Subject(s)
Models, Theoretical; Social Networking; Decision Trees
7.
J Comput Neurosci; 48(2): 177-192, 2020 May.
Article in English | MEDLINE | ID: mdl-32338341

ABSTRACT

Ambiguous visual images can generate dynamic and stochastic switches in perceptual interpretation known as perceptual rivalry. Such dynamics have primarily been studied in the context of rivalry between two percepts, but there is growing interest in the neural mechanisms that drive rivalry between more than two percepts. In recent experiments, we showed that split images presented to each eye lead to subjects perceiving four stochastically alternating percepts (Jacot-Guillarmod et al. Vision research, 133, 37-46, 2017): two single eye images and two interocularly grouped images. Here we propose a hierarchical neural network model that exhibits dynamics consistent with our experimental observations. The model consists of two levels, with the first representing monocular activity, and the second representing activity in higher visual areas. The model produces stochastically switching solutions, whose dependence on task parameters is consistent with four generalized Levelt propositions and with experiments. Moreover, dynamics restricted to invariant subspaces of the model demonstrate simpler forms of bistable rivalry. Thus, our hierarchical model generalizes past, validated models of binocular rivalry. This neuromechanistic model also allows us to probe the roles of interactions between populations at the network level. The generalized Levelt propositions hold as long as feedback from the higher to the lower visual areas is weak, and the adaptation and mutual inhibition at the higher level are not too strong. Our results suggest constraints on the architecture of the visual system and show that complex visual stimuli can be used in perceptual rivalry experiments to develop more detailed mechanistic models of perceptual processing.
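
On the invariant subspaces mentioned above, such models reduce to two-population rivalry. A generic (not the authors') rate model with mutual inhibition and slow adaptation that produces dominance alternations is sketched below; this deterministic version alternates periodically, and added noise would make the switches stochastic. All parameters are placeholders.

import numpy as np

def f(x):                          # firing-rate nonlinearity
    return 1.0 / (1.0 + np.exp(-10.0 * (x - 0.2)))

dt, T = 1e-3, 20.0
tau, tau_a = 0.01, 1.0             # fast activity, slow adaptation
beta, g, I = 1.5, 3.0, 0.6         # cross-inhibition, adaptation gain, common drive

r = np.array([0.6, 0.4])           # two competing percept populations
a = np.zeros(2)
dominant = []
for _ in range(int(T / dt)):
    inhib = beta * r[::-1]                      # mutual (cross) inhibition
    r += dt / tau * (-r + f(I - inhib - g * a))
    a += dt / tau_a * (-a + r)
    dominant.append(int(r[0] > r[1]))

print("dominance switches over", T, "s:", int(np.abs(np.diff(dominant)).sum()))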


Subject(s)
Ocular Physiological Phenomena; Visual Perception/physiology; Algorithms; Dominance, Ocular/physiology; Feedback, Sensory/physiology; Humans; Models, Neurological; Neural Networks, Computer; Stochastic Processes; Vision Disparity/physiology; Vision, Monocular/physiology
8.
SIAM J Appl Dyn Syst; 19(3): 1884-1919, 2020.
Article in English | MEDLINE | ID: mdl-36051948

ABSTRACT

To make decisions we are guided by the evidence we collect and the opinions of friends and neighbors. How do we combine our private beliefs with information we obtain from our social network? To understand the strategies humans use to do so, it is useful to compare them to observers that optimally integrate all evidence. Here we derive network models of rational (Bayes optimal) agents who accumulate private measurements and observe the decisions of their neighbors to make an irreversible choice between two options. The resulting information exchange dynamics has interesting properties: When decision thresholds are asymmetric, the absence of a decision can be increasingly informative over time. In a recurrent network of two agents, the absence of a decision can lead to a sequence of belief updates akin to those in the literature on common knowledge. On the other hand, in larger networks a single decision can trigger a cascade of agreements and disagreements that depend on the private information agents have gathered. Our approach provides a bridge between social decision making models in the economics literature, which largely ignore the temporal dynamics of decisions, and the single-observer evidence accumulator models used widely in neuroscience and psychology.

9.
J Comput Neurosci; 47(2-3): 205-222, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31734803

ABSTRACT

Decision-making in dynamic environments typically requires adaptive evidence accumulation that weights new evidence more heavily than old observations. Recent experimental studies of dynamic decision tasks require subjects to make decisions for which the correct choice switches stochastically throughout a single trial. In such cases, an ideal observer's belief is described by an evolution equation that is doubly stochastic, reflecting stochasticity in both the observations and the environmental changes. In these contexts, we show that the probability density of the belief can be represented using differential Chapman-Kolmogorov equations, allowing efficient computation of ensemble statistics. This allows us to reliably compare normative models to near-normative approximations using, as model performance metrics, decision response accuracy and Kullback-Leibler divergence of the belief distributions. Such belief distributions could be obtained empirically from subjects by asking them to report their decision confidence. We also study how response accuracy is affected by additional internal noise, showing that optimality requires longer integration timescales as more noise is added. Lastly, we demonstrate that our method can be applied to tasks in which evidence arrives in a discrete, pulsatile fashion, rather than continuously.
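
For the two-alternative case, a widely used discrete-time form of this kind of adaptive belief update in an environment that switches with hazard rate h is sketched below (a generic normative update of this type, not necessarily the paper's exact equations); larger h discounts old evidence more strongly.

import numpy as np

rng = np.random.default_rng(4)

def update_belief(L, llr, h):
    """One step of the log-posterior-odds update in a switching (hazard rate h) environment."""
    prior_odds = L + np.log((1 - h) / h + np.exp(-L)) - np.log((1 - h) / h + np.exp(L))
    return prior_odds + llr

# Example: Gaussian observations with means +/-mu and unit variance, hazard rate 0.05.
h, mu, T = 0.05, 1.0, 5000
state, L, correct = 1, 0.0, 0
for t in range(T):
    if rng.random() < h:
        state = -state                          # the environment switches
    obs = state * mu + rng.standard_normal()
    L = update_belief(L, 2 * mu * obs, h)       # Gaussian log-likelihood ratio is 2*mu*obs
    correct += int(np.sign(L) == state)

print("response accuracy:", correct / T)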


Subject(s)
Decision Making/physiology; Models, Psychological; Algorithms; Humans
10.
J Comput Neurosci; 44(3): 273-295, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29546529

ABSTRACT

Working memory (WM) is limited in its temporal length and capacity. Classic conceptions of WM capacity assume the system possesses a finite number of slots, but recent evidence suggests WM may be a continuous resource. Resource models typically assume there is no hard upper bound on the number of items that can be stored, but WM fidelity decreases with the number of items. We analyze a neural field model of multi-item WM that associates each item with the location of a bump in a finite spatial domain, considering items that span a one-dimensional continuous feature space. Our analysis relates the neural architecture of the network to accumulated errors and capacity limitations arising during the delay period of a multi-item WM task. Networks with stronger synapses support wider bumps that interact more, whereas networks with weaker synapses support narrower bumps that are more susceptible to noise perturbations. There is an optimal synaptic strength that limits both bump interaction events and the effects of noise perturbations. This optimum shifts to weaker synapses as the number of items stored in the network is increased. Our model not only provides a circuit-based explanation for WM capacity, but also speaks to how capacity relates to the arrangement of stored items in a feature space.


Subject(s)
Memory, Short-Term/physiology; Models, Neurological; Neurons/physiology; Synapses/physiology; Action Potentials/physiology; Attention; Computer Simulation; Humans; Nonlinear Dynamics; Photic Stimulation; Visual Perception/physiology
11.
Neural Comput; 29(6): 1561-1610, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28333591

ABSTRACT

In a constantly changing world, animals must account for environmental volatility when making decisions. To appropriately discount older, irrelevant information, they need to learn the rate at which the environment changes. We develop an ideal observer model capable of inferring the present state of the environment along with its rate of change. Key to this computation is an update of the posterior probability of all possible change point counts. This computation can be challenging, as the number of possibilities grows rapidly with time. However, we show how the computations can be simplified in the continuum limit by a moment closure approximation. The resulting low-dimensional system can be used to infer the environmental state and change rate with accuracy comparable to the ideal observer. The approximate computations can be performed by a neural network model via a rate-correlation-based plasticity rule. We thus show how optimal observers accumulate evidence in changing environments and map this computation to reduced models that perform inference using plausible neural mechanisms.
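
One simple way to realize joint inference over the environmental state and its change rate, in the spirit of (though not identical to) the observer described above, is a recursive Bayesian filter over a grid of candidate hazard rates; the grid, observation model, and priors below are placeholders.

import numpy as np

rng = np.random.default_rng(5)

hazards = np.linspace(0.01, 0.5, 50)            # candidate change rates
states = np.array([-1.0, 1.0])                  # two environmental states
P = np.full((2, hazards.size), 1.0 / (2 * hazards.size))   # joint posterior P(state, hazard)

def step(P, obs, mu=1.0):
    """Propagate through the switching prior, then weight by the likelihood of obs."""
    switched = P[::-1, :]                        # mass that arrives if a change occurred
    P = (1 - hazards) * P + hazards * switched   # marginalize over change / no change
    P = P * np.exp(-0.5 * (obs - mu * states[:, None]) ** 2)
    return P / P.sum()

true_h, true_state, mu = 0.1, 1.0, 1.0
for t in range(2000):
    if rng.random() < true_h:
        true_state = -true_state
    P = step(P, true_state * mu + rng.standard_normal(), mu)

print("posterior mean hazard rate:", float((P.sum(axis=0) * hazards).sum()), "(true value 0.1)")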

12.
J Comput Neurosci; 40(2): 137-55, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26754972

ABSTRACT

Mammalian spatial navigation systems utilize several different sensory information channels. This information is converted into a neural code that represents the animal's current position in space by engaging place cell, grid cell, and head direction cell networks. In particular, sensory landmark (allothetic) cues can be utilized in concert with an animal's knowledge of its own velocity (idiothetic) cues to generate a more accurate representation of position than path integration provides on its own (Battaglia et al. The Journal of Neuroscience 24(19):4541-4550 (2004)). We develop a computational model that merges path integration with feedback from external sensory cues that provide a reliable representation of spatial position along an annular track. Starting with a continuous bump attractor model, we explore the impact of synaptic spatial asymmetry and heterogeneity, which disrupt the position code of the path integration process. We use asymptotic analysis to reduce the bump attractor model to a single scalar equation whose potential represents the impact of asymmetry and heterogeneity. Such imperfections cause errors to build up when the network performs path integration, but these errors can be corrected by an external control signal representing the effects of sensory cues. We demonstrate that there is an optimal strength and decay rate of the control signal when cues appear either periodically or randomly. A similar analysis is performed when errors in path integration arise from dynamic noise fluctuations. Again, there is an optimal strength and decay of discrete control that minimizes the path integration error.
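
The reduced description can be caricatured as a single position estimate that drifts because of asymmetry, heterogeneity, and noise, with an exponentially decaying control signal pulling it back toward each landmark cue; this is a hedged sketch with arbitrary values, not the derived scalar equation.

import numpy as np

rng = np.random.default_rng(6)
dt, T = 1e-3, 20.0

v = 0.5            # true angular velocity around the annular track
drift = 0.05       # systematic drift from synaptic asymmetry/heterogeneity
sigma = 0.1        # dynamic noise in the network's position estimate
k, lam = 5.0, 2.0  # control strength and decay rate of the cue signal
cue_every = int(1.0 / dt)     # a landmark cue arrives once per second

theta_true = theta_est = control = 0.0
errors = []
for n in range(int(T / dt)):
    theta_true += v * dt
    if n % cue_every == 0:
        control = k                               # a cue resets the control signal
    control *= np.exp(-lam * dt)                  # the control signal decays between cues
    err = np.angle(np.exp(1j * (theta_true - theta_est)))
    theta_est += (v + drift) * dt + control * err * dt \
        + sigma * np.sqrt(dt) * rng.standard_normal()
    errors.append(err)

print("RMS position error with periodic cues:", float(np.sqrt(np.mean(np.square(errors)))))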


Subject(s)
Feedback, Sensory/physiology; Models, Neurological; Nerve Net/physiology; Nonlinear Dynamics; Place Cells/physiology; Space Perception/physiology; Animals; Computer Simulation; Cues; Humans; Stochastic Processes
13.
J Math Biol; 73(5): 1131-1160, 2016 Nov.
Article in English | MEDLINE | ID: mdl-26972459

ABSTRACT

Slow oscillations in the firing rate of neural populations are commonly observed during slow wave sleep. These oscillations are partitioned into up and down states, where the population switches between high and low firing rates (Sanchez-Vives and McCormick in Nat Neurosci 3:1027-1034, 2000). Transitions between up and down states can be synchronized at considerably long ranges (Volgushev et al. in J Neurosci 26:5665-5672, 2006). To explore how external perturbations shape the phase of slow oscillations, we analyze a reduced model of up and down state transitions involving a population neural activity variable and a global adaptation variable. The adaptation variable represents the average of all the slow hyperpolarizing currents received by neurons in a large population. Recurrent connectivity leads to a bistable neural population in which a low firing rate state coexists with a high firing rate state whose persistent activity is maintained via excitatory connections. Adaptation eventually inactivates the high activity state, and the low activity state then persists until adaptation has significantly decayed. We analyze the phase response of the rate model by taking advantage of the separation of timescales between the fast activity and slow adaptation variables. This analysis reveals that perturbations to the neural activity variable have a considerably weaker effect on the oscillation phase than adaptation perturbations. When noise is not incorporated into the rate model, the period of the slow oscillation is determined by the timescale of the slow adaptation variable. In the presence of noise, times at which the population transitions between the low and high activity states become variable. This is because the rise and decay of the adaptation variable are now stochastically driven, leading to a distribution of transition times. Interestingly, common noise in the adaptation variable can lead to a correlation of two distinct slow oscillating populations. This effect is still significant in the event that each population contains its own local sources of noise. We also show that this phenomenon can occur in a spiking network. Our results demonstrate the relative contributions of excitatory input and hyperpolarizing current fluctuations to the phase of slow oscillations.
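
A generic rate-plus-adaptation relaxation model of the type analyzed (placeholder nonlinearity and parameters; not the paper's calibrated equations) is sketched below. Adaptation noise makes the up- and down-state durations variable, and, as the abstract notes, sharing that noise across two such populations tends to correlate their transitions.

import numpy as np

rng = np.random.default_rng(7)

def f(x):                                    # steep firing-rate nonlinearity
    return 1.0 / (1.0 + np.exp(-20.0 * (x - 0.5)))

dt, T = 1e-3, 200.0
tau_u, tau_a = 0.01, 1.0                     # fast activity, slow adaptation
w, g, I, sigma = 1.0, 1.0, 0.5, 0.05         # recurrence, adaptation gain, drive, adaptation noise

u, a = 0.0, 0.2
up_durations, t_up = [], None
for n in range(int(T / dt)):
    u += dt / tau_u * (-u + f(w * u - g * a + I))
    a += dt / tau_a * (u - a) + sigma * np.sqrt(dt) * rng.standard_normal()
    if u > 0.8 and t_up is None:
        t_up = n * dt                        # transition into the up state
    elif u < 0.2 and t_up is not None:
        up_durations.append(n * dt - t_up)   # transition back to the down state
        t_up = None

print("up-state duration: mean %.2f s, std %.2f s over %d episodes"
      % (np.mean(up_durations), np.std(up_durations), len(up_durations)))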


Subject(s)
Models, Neurological; Neurons/physiology; Stochastic Processes
14.
J Comput Neurosci; 39(3): 235-54, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26334992

ABSTRACT

Neuronal circuits can learn and replay firing patterns evoked by sequences of sensory stimuli. After training, a brief cue can trigger a spatiotemporal pattern of neural activity similar to that evoked by a learned stimulus sequence. Network models show that such sequence learning can occur through the shaping of feedforward excitatory connectivity via long term plasticity. Previous models describe how event order can be learned, but they typically do not explain how precise timing can be recalled. We propose a mechanism for learning both the order and precise timing of event sequences. In our recurrent network model, long term plasticity leads to the learning of the sequence, while short term facilitation enables temporally precise replay of events. Learned synaptic weights between populations determine the time necessary for one population to activate another. Long term plasticity adjusts these weights so that the trained event times are matched during playback. While we chose short term facilitation as a time-tracking process, we also demonstrate that other mechanisms, such as spike rate adaptation, can fulfill this role. We also analyze the impact of trial-to-trial variability, showing how observational errors as well as neuronal noise result in variability in learned event times. The dynamics of the playback process determines how stochasticity is inherited in learned sequence timings. Future experiments that characterize such variability can therefore shed light on the neural mechanisms of sequence learning.


Subject(s)
Computer Simulation; Models, Neurological; Nerve Net/physiology; Neural Networks, Computer; Time; Humans; Learning/physiology; Machine Learning; Neuronal Plasticity; Pyramidal Cells/physiology; Synapses/physiology; Time Perception/physiology
15.
Biophys J; 106(9): 2071-81, 2014 May 06.
Article in English | MEDLINE | ID: mdl-24806939

ABSTRACT

In mammals, most cells in the brain and peripheral tissues generate circadian (∼24 h) rhythms autonomously. These self-sustained rhythms are coordinated and entrained by a master circadian clock in the suprachiasmatic nucleus (SCN). Within the SCN, the individual rhythms of each neuron are synchronized through intercellular signaling. One important feature of the SCN is that the synchronized period is close to the population mean of cells' intrinsic periods. In this way, the synchronized period of the SCN stays close to the periods of cells in peripheral tissues. This is important because the SCN must entrain cells throughout the body. However, the mechanism that drives the period of the coupled SCN cells to the population mean is not known. We use mathematical modeling and analysis to show that the mechanism of transcription repression in the intracellular feedback loop plays a pivotal role in regulating the coupled period. Specifically, we use phase response curve analysis to show that the coupled period within the SCN stays near the population mean if transcriptional repression occurs via protein sequestration. In contrast, the coupled period is far from the mean if repression occurs through highly nonlinear Hill-type regulation (e.g., oligomer- or phosphorylation-based repression), as widely assumed in previous mathematical models. Furthermore, we find that the timescale of intercellular coupling needs to be fast compared to that of intracellular feedback to maintain the mean period. These findings reveal the important relationship between the intracellular transcriptional feedback loop and intercellular coupling. This relationship explains why transcriptional repression appears to occur via protein sequestration in multicellular organisms such as mammals and Drosophila, in contrast with the phosphorylation-based repression in unicellular organisms and syncytia. That is, the transition to protein sequestration is essential for synchronizing multiple cells with a period close to the population mean (∼24 h).
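
The two repression mechanisms being contrasted can be written compactly. As a hedged illustration (generic forms; the sequestration expression follows from simple tight-binding mass action and is not necessarily the paper's exact notation):

f_{\mathrm{Hill}}(R) = \frac{1}{1 + (R/K)^n},
\qquad
f_{\mathrm{seq}}(R) = \frac{A - R - K_d + \sqrt{(A - R - K_d)^2 + 4 A K_d}}{2A},

where R is the repressor concentration, A the total activator, K and K_d dissociation constants, and n a Hill coefficient. Here f_seq is the fraction of activator left free when the repressor stoichiometrically sequesters it; for small K_d it becomes a sharp, nearly piecewise-linear switch around R ≈ A, which is the qualitative feature distinguishing it from Hill-type repression.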


Subject(s)
Circadian Clocks/genetics; Gene Expression Regulation; Mammals; Models, Biological; Animals; Feedback, Physiological; Intracellular Space/metabolism; Single-Cell Analysis; Suprachiasmatic Nucleus/cytology; Suprachiasmatic Nucleus/physiology; Transcription, Genetic
16.
J Neurosci; 33(48): 18999-9011, 2013 Nov 27.
Article in English | MEDLINE | ID: mdl-24285904

ABSTRACT

A neural correlate of parametric working memory is a stimulus-specific rise in neuron firing rate that persists long after the stimulus is removed. Network models with local excitation and broad inhibition support persistent neural activity, linking network architecture and parametric working memory. Cortical neurons receive noisy input fluctuations that cause persistent activity to diffusively wander about the network, degrading memory over time. We explore how cortical architecture that supports parametric working memory affects the diffusion of persistent neural activity. Studying both a spiking network and a simplified potential well model, we show that spatially heterogeneous excitatory coupling stabilizes a discrete number of persistent states, reducing the diffusion of persistent activity over the network. However, heterogeneous coupling also coarse-grains the stimulus representation space, limiting the storage capacity of parametric working memory. The storage errors due to coarse-graining and diffusion trade off so that information transfer between the initial and recalled stimulus is optimized at a fixed network heterogeneity. For sufficiently long delay times, the optimal number of attractors is less than the number of possible stimuli, suggesting that memory networks can under-represent stimulus space to optimize performance. Our results clearly demonstrate the combined effects of network architecture and stochastic fluctuations on parametric memory storage.
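
The simplified potential-well picture can be illustrated with a one-dimensional sketch (a hedged caricature, not the calibrated model from the paper): the remembered position diffuses freely when the network is homogeneous, whereas heterogeneity adds a periodic potential whose discrete wells pin the memory. The well count, depth, and noise level are arbitrary.

import numpy as np

rng = np.random.default_rng(8)
dt, T, trials = 1e-3, 10.0, 2000
sigma = 0.2                     # effective noise driving the remembered position
n_wells = 8                     # number of attractors created by heterogeneous coupling

def memory_variance(depth):
    """Variance of the stored position after time T for a given heterogeneity (well depth)."""
    x = np.zeros(trials)
    for _ in range(int(T / dt)):
        force = -depth * n_wells * np.sin(n_wells * x)   # -U'(x) for U(x) = -depth*cos(n_wells*x)
        x += force * dt + sigma * np.sqrt(dt) * rng.standard_normal(trials)
    return x.var()

for depth in (0.0, 0.05, 0.2):
    print(f"heterogeneity {depth:.2f}: Var[x(T)] = {memory_variance(depth):.4f}")

Deeper wells reduce wandering but quantize the representable positions to n_wells values, which is the coarse-graining versus diffusion trade-off the abstract describes.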


Subject(s)
Cerebral Cortex/physiology; Memory, Short-Term/physiology; Algorithms; Cerebral Cortex/cytology; Diffusion; Entropy; Humans; Models, Neurological; Neural Networks, Computer; Neurons/physiology; Psychomotor Performance/physiology; Receptors, AMPA/physiology; Receptors, GABA/physiology; Receptors, N-Methyl-D-Aspartate/physiology; Stochastic Processes
17.
J Comput Neurosci; 37(1): 29-48, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24271061

ABSTRACT

Persistent activity in neuronal populations has been shown to represent the spatial position of remembered stimuli. Networks that support bump attractors are often used to model such persistent activity. Such models usually exhibit translational symmetry. Thus activity bumps are neutrally stable, and perturbations in position do not decay away. We extend previous work on bump attractors by constructing model networks capable of encoding the certainty or salience of a stimulus stored in memory. Such networks support bumps that are neutrally stable not only to perturbations in position, but also to perturbations in amplitude. Possible bump solutions then lie on a two-dimensional attractor, determined by a continuum of positions and amplitudes. Such an attractor requires precisely balancing the strength of recurrent synaptic connections. The amplitude of activity bumps represents certainty, and is determined by the initial input to the system. Moreover, bumps with larger amplitudes are more robust to noise, and over time provide a more faithful representation of the stored stimulus. In networks with separate excitatory and inhibitory populations, generating bumps with a continuum of possible amplitudes requires tuning the strength of inhibition to precisely cancel background excitation.


Subject(s)
Action Potentials/physiology; Memory, Short-Term/physiology; Models, Neurological; Neural Networks, Computer; Neurons/physiology; Animals; Humans; Neural Inhibition
18.
ArXiv; 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-38259347

ABSTRACT

Decisions are often made by heterogeneous groups of individuals, each with distinct initial biases and access to information of different quality. We show that in large groups of independent agents who accumulate evidence, the first to decide are those with the strongest initial biases. Their decisions align with their initial bias, regardless of the underlying truth. In contrast, agents who decide last make decisions as if they were initially unbiased, and hence make better choices. We obtain asymptotic expressions in the large population limit that quantify how agents' initial inclinations shape early decisions. Our analysis shows how bias, information quality, and decision order interact in non-trivial ways to determine the reliability of decisions in a group.

19.
ArXiv; 2023 Jun 28.
Article in English | MEDLINE | ID: mdl-37426453

ABSTRACT

The distinct timescales of synaptic plasticity and neural activity dynamics play an important role in the brain's learning and memory systems. Activity-dependent plasticity reshapes neural circuit architecture, determining spontaneous and stimulus-encoding spatiotemporal patterns of neural activity. Neural activity bumps maintain short term memories of continuous parameter values, emerging in spatially-organized models with short term excitation and long-range inhibition. Previously, we demonstrated that nonlinear Langevin equations derived using an interface method accurately describe the dynamics of bumps in continuum neural fields with separate excitatory/inhibitory populations. Here we extend this analysis to incorporate the effects of slow short term plasticity that modifies connectivity described by an integral kernel. Linear stability analysis adapted to these piecewise smooth models with Heaviside firing rates further indicates how plasticity shapes bumps' local dynamics. Facilitation (depression), which strengthens (weakens) synaptic connectivity originating from active neurons, tends to increase (decrease) stability of bumps when acting on excitatory synapses. The relationship is inverted when plasticity acts on inhibitory synapses. Multiscale approximations of the stochastic dynamics of bumps perturbed by weak noise reveal that the plasticity variables evolve to slowly diffusing and blurred versions of those arising in the stationary solution. Nonlinear Langevin equations associated with bump positions or interfaces coupled to slowly evolving projections of plasticity variables accurately describe the wandering of bumps underpinned by these smoothed synaptic efficacy profiles.

20.
ArXiv; 2023 Dec 11.
Article in English | MEDLINE | ID: mdl-38168459

ABSTRACT

Traveling waves of neural activity emerge in cortical networks both spontaneously and in response to stimuli. The spatiotemporal structure of waves can indicate the information they encode and the physiological processes that sustain them. Here, we investigate the stimulus-response relationships of traveling waves emerging in adaptive neural fields as a model of visual motion processing. Neural field equations model the activity of cortical tissue as a continuum excitable medium, and adaptive processes provide negative feedback, generating localized activity patterns. Synaptic connectivity in our model is described by an integral kernel that weakens dynamically due to activity-dependent synaptic depression, leading to marginally stable traveling fronts (with attenuated backs) or pulses of a fixed speed. Our analysis quantifies how weak stimuli shift the relative position of these waves over time, characterized by a wave response function we obtain perturbatively. Persistent and continuously visible stimuli model moving visual objects. Intermittent flashes that hop across visual space can produce the experience of smooth apparent visual motion. Entrainment of waves to both kinds of moving stimuli is well characterized by our theory and numerical simulations, providing a mechanistic description of the perception of visual motion.
