Results 1 - 3 of 3
1.
J Math Biol; 86(2): 30, 2023 Jan 13.
Article in English | MEDLINE | ID: mdl-36637504

ABSTRACT

May and Leonard (SIAM J Appl Math 29:243-253, 1975) introduced a three-species Lotka-Volterra type population model that exhibits heteroclinic cycling. Rather than producing a periodic limit cycle, the trajectory takes longer and longer to complete each "cycle", passing closer and closer to unstable fixed points in which one population dominates and the others approach zero. Aperiodic heteroclinic dynamics have subsequently been studied in ecological systems (side-blotched lizards; colicinogenic Escherichia coli), in the immune system, in neural information processing models ("winnerless competition"), and in models of neural central pattern generators. Yet as May and Leonard observed "Biologically, the behavior (produced by the model) is nonsense. Once it is conceded that the variables represent animals, and therefore cannot fall below unity, it is clear that the system will, after a few cycles, converge on some single population, extinguishing the other two." Here, we explore different ways of introducing discrete stochastic dynamics based on May and Leonard's ODE model, with application to ecological population dynamics, and to a neuromotor central pattern generator system. We study examples of several quantitatively distinct asymptotic behaviors, including total extinction of all species, extinction to a single species, and persistent cyclic dominance with finite mean cycle length.
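As an illustration of the dynamics described above, the sketch below integrates the deterministic May-Leonard ODE with SciPy. The competition coefficients (alpha = 0.8, beta = 1.3) are illustrative values chosen to satisfy the heteroclinic-cycling condition (0 < alpha < 1 < beta, alpha + beta > 2); the discrete stochastic variants studied in the article are not reproduced here.

```python
# Minimal sketch: deterministic May-Leonard three-species competition model.
# ALPHA and BETA are illustrative values satisfying the heteroclinic-cycling
# condition 0 < alpha < 1 < beta, alpha + beta > 2; they are not taken from
# the article's stochastic models.
import numpy as np
from scipy.integrate import solve_ivp

ALPHA, BETA = 0.8, 1.3  # assumed illustrative competition coefficients

def may_leonard(t, x, alpha=ALPHA, beta=BETA):
    """Right-hand side of the three-species May-Leonard ODE."""
    x1, x2, x3 = x
    return [
        x1 * (1 - x1 - alpha * x2 - beta * x3),
        x2 * (1 - x2 - alpha * x3 - beta * x1),
        x3 * (1 - x3 - alpha * x1 - beta * x2),
    ]

if __name__ == "__main__":
    sol = solve_ivp(may_leonard, (0, 300), [0.6, 0.3, 0.1],
                    dense_output=True, rtol=1e-9, atol=1e-12)
    t = np.linspace(0, 300, 5000)
    x = sol.sol(t)
    # Each "cycle" takes longer as the trajectory lingers near the
    # single-species fixed points (1,0,0), (0,1,0), (0,0,1).
    print("final state:", x[:, -1])
```

Plotting the three populations against time shows the lengthening "cycles" and ever-closer approaches to the single-species fixed points that motivate the discrete stochastic treatment, where such excursions can end in extinction.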


Subject(s)
Ecosystem, Biological Models, Animals, Population Dynamics, Mathematics, Predatory Behavior, Stochastic Processes
2.
J Comput Neurosci; 47(2-3): 205-222, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31734803

ABSTRACT

Decision-making in dynamic environments typically requires adaptive evidence accumulation that weights new evidence more heavily than old observations. Recent experimental studies of dynamic decision tasks require subjects to make decisions for which the correct choice switches stochastically throughout a single trial. In such cases, an ideal observer's belief is described by an evolution equation that is doubly stochastic, reflecting stochasticity in both the observations and the environmental changes. In these contexts, we show that the probability density of the belief can be represented using differential Chapman-Kolmogorov equations, allowing efficient computation of ensemble statistics. This allows us to reliably compare normative models to near-normative approximations using, as model performance metrics, decision response accuracy and Kullback-Leibler divergence of the belief distributions. Such belief distributions could be obtained empirically from subjects by asking them to report their decision confidence. We also study how response accuracy is affected by additional internal noise, showing optimality requires longer integration timescales as more noise is added. Lastly, we demonstrate that our method can be applied to tasks in which evidence arrives in a discrete, pulsatile fashion, rather than continuously.
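For orientation, the sketch below simulates a simple discrete-time version of the doubly stochastic setting described above: a two-state environment whose correct choice switches with hazard rate H, observed through Gaussian noise and tracked by an ideal observer's log posterior odds. The two-state setup, Gaussian likelihoods, and all parameter values are assumptions for illustration; the article's differential Chapman-Kolmogorov computation of the full belief density is not reproduced here.

```python
# Minimal sketch: discrete-time ideal observer in a two-state environment whose
# correct choice flips with hazard rate H (a "doubly stochastic" process: noisy
# observations plus random environmental switches). All parameters are assumed
# illustrative values.
import numpy as np

rng = np.random.default_rng(0)

H = 0.05               # hazard rate: per-step probability the true state flips
MU, SIGMA = 1.0, 2.0   # observation mean (+/- MU) and noise level
N_STEPS, N_TRIALS = 200, 5000

def run_trial():
    state = rng.choice([-1, 1])
    L = 0.0  # log posterior odds (belief) in favor of state = +1
    for _ in range(N_STEPS):
        if rng.random() < H:            # environment may switch mid-trial
            state = -state
        x = rng.normal(MU * state, SIGMA)
        llr = 2 * MU * x / SIGMA**2     # log-likelihood ratio of the observation
        # Discount old evidence through the hazard rate, then add new evidence.
        prior = np.log((1 - H) * np.exp(L) + H) - np.log((1 - H) + H * np.exp(L))
        L = llr + prior
    return np.sign(L) == state          # correct if belief matches final state

accuracy = np.mean([run_trial() for _ in range(N_TRIALS)])
print(f"response accuracy at trial end: {accuracy:.3f}")
```

Running many such trials gives a Monte Carlo estimate of response accuracy; the article's point is that the same ensemble statistics can be computed far more efficiently from the evolution equation for the belief density itself.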


Subject(s)
Decision Making/physiology, Psychological Models, Algorithms, Humans
3.
Elife; 11, 2022 Oct 25.
Article in English | MEDLINE | ID: mdl-36282065

ABSTRACT

Models based on normative principles have played a major role in our understanding of how the brain forms decisions. However, these models have typically been derived for simple, stable conditions, and their relevance to decisions formed under more naturalistic, dynamic conditions is unclear. We previously derived a normative decision model in which evidence accumulation is adapted to fluctuations in the evidence-generating process that occur during a single decision (Glaze et al., 2015), but the evolution of commitment rules (e.g. thresholds on the accumulated evidence) under dynamic conditions is not fully understood. Here, we derive a normative model for decisions based on changing contexts, which we define as changes in evidence quality or reward, over the course of a single decision. In these cases, performance (reward rate) is maximized using decision thresholds that respond to and even anticipate these changes, in contrast to the static thresholds used in many decision models. We show that these adaptive thresholds exhibit several distinct temporal motifs that depend on the specific predicted and experienced context changes and that adaptive models perform robustly even when implemented imperfectly (noisily). We further show that decision models with adaptive thresholds outperform those with constant or urgency-gated thresholds in accounting for human response times on a task with time-varying evidence quality and average reward. These results further link normative and neural decision-making while expanding our view of both as dynamic, adaptive processes that update and use expectations to govern both deliberation and commitment.
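The sketch below illustrates the qualitative point about commitment rules: a drift-diffusion accumulator faces a known mid-trial drop in evidence quality, and a constant bound is compared against a hand-picked bound that collapses around the change point. Both bound schedules and all parameters are illustrative assumptions; they are not the dynamic-programming-derived normative thresholds of the article.

```python
# Minimal sketch: how a time-varying decision threshold interacts with a
# mid-trial change in evidence quality. The "adaptive" bound here is a
# hand-picked illustration, NOT the article's normative threshold.
import numpy as np

rng = np.random.default_rng(1)

DT, T_MAX = 0.005, 3.0
DRIFT_HI, DRIFT_LO, T_SWITCH = 1.5, 0.2, 1.0   # evidence quality drops at t = 1 s
NOISE = 1.0
N_TRIALS = 5000

def simulate(bound_fn):
    """Return (accuracy, mean decision time) for a given bound schedule."""
    correct, rts = [], []
    for _ in range(N_TRIALS):
        x, t = 0.0, 0.0
        while t < T_MAX:
            drift = DRIFT_HI if t < T_SWITCH else DRIFT_LO
            x += drift * DT + NOISE * np.sqrt(DT) * rng.normal()
            t += DT
            if abs(x) >= bound_fn(t):
                correct.append(x > 0)
                rts.append(t)
                break
        else:
            correct.append(x > 0)   # forced guess at the deadline
            rts.append(T_MAX)
    return np.mean(correct), np.mean(rts)

constant_bound = lambda t: 1.2
# Illustrative adaptive bound: stay high while evidence is informative, then
# collapse near the anticipated drop in evidence quality to avoid slow,
# low-quality deliberation.
adaptive_bound = lambda t: 1.2 if t < 0.8 * T_SWITCH else 0.4

for name, fn in [("constant", constant_bound), ("adaptive", adaptive_bound)]:
    acc, rt = simulate(fn)
    # acc / rt is a crude reward-rate proxy (correct responses per second).
    print(f"{name:8s} bound: accuracy = {acc:.3f}, mean RT = {rt:.2f} s, "
          f"reward rate ~ {acc / rt:.3f} correct/s")
```

Comparing the two printed reward rates shows why a threshold that anticipates a context change can outperform a static one, which is the behavior the normative model derives rather than hand-tunes.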


How do we make good choices? Should I have cake or yoghurt for breakfast? The strategies we use to make decisions are important not just for our daily lives, but also for learning more about how the brain works. Decision-making strategies have two components: first, a deliberation period (when we gather information to determine which choice is 'best'); and second, a decision 'rule' (which tells us when to stop deliberating and commit to a choice).

Although deliberation is relatively well-understood, less is known about the decision rules people use, or how those rules produce different outcomes. Another issue is that even the simplest decisions must sometimes adapt to a changing world. For example, if it starts raining while you are deciding which route to walk into town, you would probably choose the driest route, even if it did not initially look the best. However, most studies of decision strategies have assumed that the decision-maker's environment does not change during the decision process. In other words, we know much less about the decision rules used in real-life situations, where the environment changes. Barendregt et al. therefore wanted to extend the approaches previously used to study decisions in static environments, to determine which decision rules might be best suited to more realistic environments that change over time.

First, Barendregt et al. constructed a computer simulation of decision-making with environmental changes built in. These changes were either alterations in the quality of evidence for or against a particular choice, or the 'reward' from a choice, i.e., feedback on how good the decision was. They then used the computer simulation to model single decisions where these changes took place. These virtual experiments showed that the best performance (for example, the most accurate decisions) resulted when the threshold for moving from deliberation (i.e., considering the evidence) to selecting an option could respond to, or even anticipate, the changing situations. Importantly, the simulations' results also predicted real-world choices made by human participants when given a decision-making task with similar variations in evidence and reward over time. In other words, the virtual decision-making rules could explain real behavior.

This study sheds new light on how we make decisions in a changing environment. In the future, Barendregt et al. hope that this will contribute to a broader understanding of decision-making and behavior in a wide range of contexts, from psychology to economics and even ecology.


Subject(s)
Decision Making, Reward, Humans, Decision Making/physiology, Reaction Time, Brain, Physiological Adaptation