1 - 20 of 26
1.
J Vis ; 24(5): 17, 2024 May 01.
Article En | MEDLINE | ID: mdl-38819805

What is the link between eye movements and sensory learning? Although some theories have argued for an automatic interaction between what we know and where we look, one that continuously modulates human information-gathering behavior during both implicit and explicit learning, there is limited experimental evidence supporting such an ongoing interplay. To address this issue, we used a visual statistical learning paradigm combined with gaze-contingent stimulus presentation and manipulated the explicitness of the task to explore how learning and eye movements interact. During both implicit exploration and explicit visual learning of unknown composite visual scenes, spatial eye movement patterns systematically and gradually changed in accordance with the underlying statistical structure of the scenes. Moreover, the degree of change was directly correlated with the amount and type of knowledge the observers acquired. This suggests that eye movements are potential indicators of active learning, a process in which long-term knowledge, current visual stimuli, and an inherent tendency to reduce uncertainty about the visual environment jointly determine where we look.


Eye Movements , Learning , Photic Stimulation , Humans , Eye Movements/physiology , Learning/physiology , Male , Young Adult , Female , Adult , Photic Stimulation/methods , Visual Perception/physiology , Fixation, Ocular/physiology
2.
Nat Hum Behav ; 8(4): 679-691, 2024 Apr.
Article En | MEDLINE | ID: mdl-38216691

Normative and descriptive models have long vied to explain and predict human risky choices, such as those between goods or gambles. A recent study reported the discovery of a new, more accurate model of human decision-making by training neural networks on a new online large-scale dataset, choices13k. Here we systematically analyse the relationships between several models and datasets using machine-learning methods and find evidence for dataset bias. Because participants' choices in stochastically dominated gambles were consistently skewed towards equipreference in the choices13k dataset, we hypothesized that this reflected increased decision noise. Indeed, a probabilistic generative model adding structured decision noise to a neural network trained on data from a laboratory study transferred best, that is, it outperformed all models apart from those trained on choices13k. We conclude that a careful combination of theory and data analysis is still required to understand the complex interactions between machine-learning models and data on human risky choices.
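The decision-noise idea can be illustrated with a simple lapse model, in which a choice occasionally ignores the underlying valuation and is made at random, pulling predicted choice probabilities toward equipreference. This is a minimal sketch, not the authors' generative model; the function name and epsilon value are assumptions:

```python
import numpy as np

def add_decision_noise(p_choose_a, epsilon):
    """Mix model-predicted choice probabilities with uniform lapses.

    With probability (1 - epsilon) the choice follows the underlying
    model; with probability epsilon the chooser picks at random, which
    skews predictions toward equipreference (p = 0.5).
    """
    return (1.0 - epsilon) * np.asarray(p_choose_a) + epsilon * 0.5

# A stochastically dominated gamble should be chosen rarely (p = 0.05),
# but added decision noise pulls the prediction toward indifference.
print(add_decision_noise([0.05, 0.95], epsilon=0.3))  # [0.185, 0.815]
```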


Decision Making , Machine Learning , Humans , Neural Networks, Computer , Risk-Taking , Datasets as Topic , Models, Psychological , Choice Behavior , Adult , Bias
3.
Psychol Res ; 88(1): 167-186, 2024 Feb.
Article En | MEDLINE | ID: mdl-37083875

People can use the constant target-heading (CTH) strategy or the constant bearing (CB) strategy to guide their locomotor interception, but it remains unclear whether people can learn new interception behavior. Here, we investigated how people learn to adjust their steering to intercept targets faster. Participants steered a car to intercept a moving target in a virtual environment resembling a natural open field. Their baseline interceptions were better accounted for by the CTH strategy. After five learning sessions across multiple days, in which participants received feedback about their interception durations, they adopted a two-stage control: a quick initial burst of turning, accompanied by an increase of the target-heading angle during early interception, was followed by significantly less turning with small changes in target-heading angle during late interception. The target's bearing angle not only showed this two-stage pattern but also changed comparatively little during late interception, leaving it unclear which strategy participants had adopted. In a subsequent test session, the two-stage pattern of participants' turning adjustment and the target-heading angle transferred to new target conditions and to a new environment without visual information about an allocentric reference frame, which should preclude participants from using the CB strategy. Indeed, the pattern of the target's bearing angle did not transfer to all the new conditions. These results suggest that participants learned a two-stage control for faster interception: they learned to quickly increase the target-heading angle during early interception and subsequently follow the CTH strategy during late interception.


Motion Perception , Psychomotor Performance , Humans , Learning
4.
J Vis Exp ; (194), 2023 04 21.
Article En | MEDLINE | ID: mdl-37154551

To grasp an object successfully, we must select appropriate contact regions for our hands on the surface of the object. However, identifying such regions is challenging. This paper describes a workflow to estimate the contact regions from marker-based tracking data. Participants grasp real objects while we track the 3D position of both the objects and the hand, including the fingers' joints. We first determine the joint Euler angles from a selection of tracked markers positioned on the back of the hand. Then, we use state-of-the-art hand mesh reconstruction algorithms to generate a mesh model of the participant's hand in its current pose and 3D position. Using objects that were either 3D printed or 3D scanned, and are thus available both as real objects and as mesh data, allows the hand and object meshes to be co-registered. In turn, this allows the estimation of approximate contact regions by calculating the intersections between the hand mesh and the co-registered 3D object mesh. The method may be used to estimate where and how humans grasp objects under a variety of conditions. It could therefore be of interest to researchers studying visual and haptic perception, motor control, human-computer interaction in virtual and augmented reality, and robotics.
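A minimal sketch of the final co-registration step, assuming the trimesh library, hypothetical mesh file names, watertight meshes already aligned in a shared world frame, and a 2 mm contact threshold; a thresholded proximity query stands in for the full intersection computation:

```python
import trimesh

# Hypothetical file names; both meshes are assumed to be co-registered
# in the same world coordinate frame (e.g. from the tracking pipeline).
hand = trimesh.load("hand_mesh_posed.ply")
obj = trimesh.load("object_scan.ply")

# Signed distance from each hand vertex to the object surface
# (trimesh convention: positive inside the mesh, negative outside).
dist = trimesh.proximity.signed_distance(obj, hand.vertices)

# Hand vertices within 2 mm of (or penetrating) the object surface
# are labeled as the approximate contact region.
contact = dist > -0.002  # units assumed to be meters
print(f"{contact.sum()} of {len(hand.vertices)} hand vertices in contact")
```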


Hand , Robotics , Humans , Hand Strength
5.
Int J Psychophysiol ; 181: 125-140, 2022 11.
Article En | MEDLINE | ID: mdl-36116610

It is hypothesized that the ability to discriminate between threat and safety is impaired in individuals with high dispositional negativity, resulting in maladaptive behavior. A large body of research has investigated differential learning during fear conditioning and extinction protocols as a function of individual differences in intolerance of uncertainty (IU) and trait anxiety (TA), two closely related dimensions of dispositional negativity, with heterogeneous results. These inconsistencies might be due to varying degrees of induced threat/safety uncertainty. Here, we compared two groups with high vs. low IU/TA during periods of low uncertainty (instructed fear acquisition) and high uncertainty (delayed non-instructed extinction training and reinstatement). Dependent variables comprised subjective (US expectancy, valence, arousal), psychophysiological (skin conductance response, SCR, and startle blink), and neural (fMRI BOLD) measures of threat responding. During fear acquisition, we found strong threat/safety discrimination in both groups. During early extinction (high uncertainty), the low IU/TA group showed an increased physiological response to the safety signal, resulting in a lack of CS discrimination. In contrast, the high IU/TA group showed strong initial threat/safety discrimination in physiology, lacked discriminative learning on startle, and showed reduced neural activation in regions linked to threat/safety processing throughout extinction training, indicating sustained but rigid, non-adaptive responding. Similar neural patterns were found after the reinstatement test. Taken together, we provide evidence that high dispositional negativity, as indexed here by IU and TA, is associated with greater responding to threat cues at the beginning of delayed extinction and thus demonstrates altered learning patterns under changing environments.


Extinction, Psychological , Galvanic Skin Response , Anxiety , Extinction, Psychological/physiology , Fear/physiology , Humans , Uncertainty
6.
Elife ; 11, 2022 09 29.
Article En | MEDLINE | ID: mdl-36173094

Psychophysical methods are a cornerstone of psychology, cognitive science, and neuroscience, where they have been used to quantify behavior and its neural correlates for a vast range of mental phenomena. Their power derives from the combination of controlled experiments and rigorous analysis through signal detection theory. Unfortunately, they require many tedious trials and preferably highly trained participants. A recently developed approach, continuous psychophysics, promises to transform the field by abandoning the rigid trial structure involving binary responses and replacing it with continuous behavioral adjustments to dynamic stimuli. What has precluded wide adoption of this approach, however, is that current analysis methods do not account for the additional variability introduced by the motor component of the task and therefore recover perceptual thresholds that are larger than those from equivalent traditional psychophysical experiments. Here, we introduce a computational analysis framework for continuous psychophysics based on Bayesian inverse optimal control. We show via simulations and previously published data that this framework not only recovers the perceptual thresholds but additionally estimates subjects' action variability, internal behavioral costs, and subjective beliefs about the experimental stimulus dynamics. Taken together, we provide further evidence for the importance of including acting uncertainties, subjective beliefs, and, crucially, the intrinsic costs of behavior, even in experiments seemingly investigating only perception.
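The core confound, that raw tracking error lumps together perceptual noise, motor noise, and the control law, can be seen in a forward simulation of a minimal tracking model. This sketches only the generative side with assumed parameter values, not the authors' inference code:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000
sigma_walk = 1.0     # target random-walk step size
sigma_percept = 4.0  # perceptual noise on the target position
sigma_motor = 2.0    # motor noise on cursor updates
gain = 0.4           # proportional tracking gain

target = np.cumsum(rng.normal(0, sigma_walk, T))
cursor = np.zeros(T)
for t in range(1, T):
    observed = target[t - 1] + rng.normal(0, sigma_percept)
    cursor[t] = (cursor[t - 1] + gain * (observed - cursor[t - 1])
                 + rng.normal(0, sigma_motor))

# A naive threshold estimate based on this raw error conflates all three
# sources; inverse optimal control instead fits them jointly from the
# same trajectories.
print("RMS tracking error:", np.sqrt(np.mean((cursor - target) ** 2)))
```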


Humans often perceive the world around them subjectively. Factors like light brightness, the speed of a moving object, or an individual's interpretation of facial expressions may influence perception. Understanding how humans perceive the world can provide valuable insights into neuroscience, psychology, and even people's spending habits, making human perception studies important. However, these so-called psychophysical studies often consist of thousands of simple yes or no questions, which are tedious for adult volunteers, and nearly impossible for children. A new approach called 'continuous psychophysics' makes perception studies shorter, easier, and more fun for participants. Instead of answering yes or no questions (like in classical psychophysics experiments), the participants follow an object on a screen with their fingers or eyes. One question about this new approach is whether it accounts for differences that affect how well participants follow the object. For example, some people may have jittery hands, while others may be unmotivated to complete the task. To overcome this issue, Straub and Rothkopf have developed a mathematical model that can correct for differences between participants in the variability of their actions, their internal costs of actions, and their subjective beliefs about how the target moves. Accounting for these factors in a model can lead to more reliable study results. Straub and Rothkopf used data from three previous continuous psychophysics studies to construct a mathematical model that could best predict the experimental results. To test their model, they then used it on data from a continuous psychophysics study conducted alongside a classical psychophysics study. The model was able to correct the results of the continuous psychophysics study so they were more consistent with the results of the classical study. This new technique may enable wider use of continuous psychophysics to study a range of human behavior. It will allow larger, more complex studies that would not have been possible with conventional approaches, as well as enable research on perception in infants and children. Brain scientists may also use this technique to understand how brain activity relates to perception.


Perception , Bayes Theorem , Humans , Psychophysics/methods
7.
Cogn Neuropsychol ; 38(7-8): 440-454, 2021.
Article En | MEDLINE | ID: mdl-34877918

The success of visuomotor interactions in everyday activities such as grasping or sliding a cup is inescapably governed by the laws of physics. Research on intuitive physics has predominantly investigated reasoning about objects' behavior using binary forced-choice responses. We investigated how the type of visuomotor response influences the beliefs about physical quantities and their lawful relationships that are implicit in participants' active behavior. Participants propelled pucks towards targets positioned at different distances. Analysis with a probabilistic model of interactions showed that subjects adopted the non-linear control prescribed by Newtonian physics when sliding real pucks in a virtual environment, even in the absence of visual feedback. However, they used a linear heuristic when viewing the scene on a monitor and implementing interactions through key presses. These results support the notion of probabilistic internal physics models but additionally suggest that humans can take advantage of embodied, sensorimotor, multimodal representations in physical scenarios.


Hand Strength , Physics , Humans
8.
Q J Exp Psychol (Hove) ; 74(10): 1686-1696, 2021 Oct.
Article En | MEDLINE | ID: mdl-33749396

Which strategy people use to guide locomotor interception remains unclear despite considerable research and the importance of an answer, which has ramifications for the heuristics-and-biases debate. Because the constant bearing (CB) strategy corresponds to the constant target-heading (CTH) strategy with an additional constraint, the two strategies can be confounded experimentally. However, they are distinct in the information they require: while the CTH strategy requires access only to the relative angle between the direction of motion and the target, the CB strategy requires access to a stable allocentric reference frame. Here, we manipulated the visual information about allocentric reference frames in three virtual environments and asked participants to steer a car to intercept a moving target. Participants' interception paths showed different degrees of curvature, and their target-heading angles were approximately constant, consistent with the CTH strategy. By contrast, the target's bearing angle changed continuously in all participants except one. This particular participant produced linear interception paths with little change in the target's bearing angle, seemingly consistent with both strategies. The same participant maintained this pattern of steering even in the environment without any visual information about allocentric reference frames; the pattern is therefore attributed to the CTH strategy rather than the CB strategy. The overall results add important evidence for the conclusion that locomotor interception is better accounted for by the CTH strategy and that observing a straight interception trajectory with a constant bearing angle is not, by itself, sufficient evidence for the CB strategy.
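The informational difference between the two strategies is just a difference of reference frames: the target-heading angle is measured against the pursuer's own direction of motion, while the bearing angle needs a fixed allocentric axis. A minimal sketch with assumed 2D coordinates and function names:

```python
import numpy as np

def target_heading_angle(pos, vel, target):
    """Angle between the pursuer's heading and the line to the target
    (egocentric; no allocentric reference frame required)."""
    to_target = np.asarray(target) - np.asarray(pos)
    heading = np.arctan2(vel[1], vel[0])
    return np.arctan2(to_target[1], to_target[0]) - heading

def bearing_angle(pos, target, reference=0.0):
    """Angle of the line to the target against a fixed allocentric
    reference direction (here: the world x-axis)."""
    to_target = np.asarray(target) - np.asarray(pos)
    return np.arctan2(to_target[1], to_target[0]) - reference

# CTH steering nulls changes in target_heading_angle; CB steering nulls
# changes in bearing_angle, which is only available if the environment
# provides a stable allocentric reference.
print(target_heading_angle([0, 0], [1, 0], [5, 5]))  # ~0.785 rad
print(bearing_angle([0, 0], [5, 5]))                 # ~0.785 rad
```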


Automobiles , Motion Perception , Heuristics , Humans , Motion , Psychomotor Performance
9.
Front Psychol ; 12: 641471, 2021.
Article En | MEDLINE | ID: mdl-33692732

The efficient coding hypothesis posits that sensory systems are tuned to the regularities of their natural input. The statistics of natural image databases have been the topic of many studies, which have revealed biases in the distribution of orientations that are related to neural representations as well as to behavior in psychophysical tasks. However, commonly used natural image databases contain images taken with a camera with a planar image sensor and a limited field of view. Thus, these images do not incorporate the physical properties of the visual system or its active use through body and eye movements. Here, we investigate quantitatively whether the active use of the visual system influences image statistics across the visual field by simulating visual behaviors of an avatar in a naturalistic virtual environment. Images with a field of view of 120° were generated during exploration of a virtual forest environment for both a human and a cat avatar. The physical properties of the visual system were taken into account by projecting the images onto idealized retinas according to models of the eyes' geometrical optics. Crucially, different active gaze behaviors were simulated to obtain image ensembles that allow investigating the consequences of active visual behaviors on the statistics of the input to the visual system. In the central visual field, the statistics of the virtual images matched those of photographic images with regard to their power spectra and a bias in edge orientations toward cardinal directions. At larger eccentricities, the cardinal bias was superimposed with a gradually increasing radial bias. The strength of this effect depends on the active visual behavior and the physical properties of the eye. There were also significant differences between the upper and lower visual field, which became stronger depending on how the environment was actively sampled. Taken together, the results show that quantitatively relating natural image statistics to neural representations and psychophysical behavior requires taking into account not only the structure of the environment, but also the physical properties of the visual system and its active use in behavior.
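The cardinal and radial biases reported here are measured from edge-orientation histograms. A minimal sketch of such a histogram computed from image gradients, assuming a grayscale image array; this is a generic recipe, not the paper's analysis pipeline:

```python
import numpy as np

def orientation_histogram(img, n_bins=36):
    """Gradient-magnitude-weighted histogram of edge orientations.

    img: 2D grayscale array. Orientations are folded into [0, pi), so
    0 and pi/2 correspond to the cardinal directions.
    """
    gy, gx = np.gradient(img.astype(float))
    magnitude = np.hypot(gx, gy)
    # The edge orientation is perpendicular to the gradient direction.
    orientation = (np.arctan2(gy, gx) + np.pi / 2) % np.pi
    hist, edges = np.histogram(orientation, bins=n_bins,
                               range=(0, np.pi), weights=magnitude)
    return hist / hist.sum(), edges

# A cardinal bias shows up as peaks near 0 and pi/2; the radial bias at
# large eccentricities would be measured relative to the fixation point.
```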

10.
PLoS Comput Biol ; 16(10): e1007730, 2020 10.
Article En | MEDLINE | ID: mdl-33075051

While interacting with objects during everyday activities, e.g. when sliding a glass on a countertop, people obtain constant feedback about whether they are acting in accordance with physical laws. However, classical research on intuitive physics has revealed that people's judgments systematically deviate from the predictions of Newtonian physics. Recent research has explained at least some of these deviations not as a consequence of misconceptions about physics but instead as the consequence of the probabilistic interaction between inevitable perceptual uncertainties and prior beliefs. Much less is known about how intuitive physical reasoning relates to visuomotor action. Here, we present an experiment in which participants had to slide pucks under the influence of naturalistic friction in a simulated virtual environment. The puck was controlled by the duration of a button press, which needed to be scaled linearly with the puck's mass and with the square root of the initial distance to the target. Over four phases of the experiment, uncertainties were manipulated by altering the availability of sensory feedback and providing different degrees of knowledge about the physical properties of the pucks. A hierarchical Bayesian model of the visuomotor interaction task, incorporating perceptual uncertainty and press-time variability, found substantial evidence that subjects adjusted their button presses so that the sliding was in accordance with Newtonian physics. After observing collisions between pucks, which were analyzed with a hierarchical Bayesian model of the perceptual observation task, subjects transferred the relative masses inferred perceptually to adjust subsequent sliding actions. Crucial to the modeling was the inclusion of a cost function that quantitatively captures participants' implicit sensitivity to errors due to their motor variability. Taken together, we find evidence that our participants transferred their intuitive physical reasoning to a subsequent visuomotor control task consistent with Newtonian physics and weighed potential outcomes with a cost function based on knowledge of their own motor variability.
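The stated scaling law follows directly from sliding friction: a puck launched at speed v0 decelerates at mu*g and travels d = v0^2 / (2*mu*g), so the required launch speed is sqrt(2*mu*g*d) and the required impulse, hence press time under an assumed constant force per press, is proportional to m*sqrt(d). A worked sketch with assumed parameter values:

```python
import numpy as np

MU, G = 0.3, 9.81  # assumed friction coefficient and gravity (m/s^2)
FORCE = 10.0       # assumed constant force delivered during the press (N)

def press_time(mass, distance):
    """Press duration that makes the puck stop exactly at `distance`.

    Friction decelerates the puck at mu*g, so it travels
    d = v0**2 / (2*mu*g); inverting gives v0 = sqrt(2*mu*g*d), and the
    impulse FORCE * t = mass * v0 yields a press time that is linear in
    mass and square-root in distance, as in the task described above.
    """
    v0 = np.sqrt(2 * MU * G * distance)
    return mass * v0 / FORCE

print(press_time(0.5, 1.0))  # doubling the distance scales this
print(press_time(0.5, 2.0))  # by sqrt(2), not by 2
```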


Heuristics/physiology , Learning/physiology , Models, Psychological , Physics , Psychomotor Performance/physiology , Adolescent , Adult , Bayes Theorem , Computational Biology , Female , Humans , Knowledge , Male , Uncertainty , Young Adult
11.
J Vis ; 19(14): 11, 2019 12 02.
Article En | MEDLINE | ID: mdl-31830240

The visually guided interception of a moving target is a fundamental visuomotor task that humans perform with ease, yet how humans carry out this task is still unclear despite numerous empirical investigations. Measurements of angular variables during human interception have suggested three possible strategies: the pursuit strategy, the constant bearing angle strategy, and the constant target-heading strategy. Here, we review previous experimental paradigms and show that some of them do not allow one to distinguish among the three strategies. Based on this analysis, we devised a virtual driving task that allows us to investigate which of the three strategies best describes human interception. Crucially, we measured participants' steering, head, and gaze directions over time for three different target velocities. Subjects initially aligned their head and gaze with the car's heading direction. When the target appeared, subjects centered their gaze on the target, pointed their head slightly off the heading direction toward the target, and maintained an approximately constant target-heading angle, whose magnitude varied across participants, while the target's bearing angle changed continuously. With a second condition, in which the target was partially occluded, we investigated several alternative hypotheses about participants' visual strategies. Overall, the results suggest that interceptive steering is best described by the constant target-heading strategy and that gaze and head movements are coordinated to continuously acquire the visual information needed for successful interception.


Automobile Driving , Automobiles , Motion Perception , Psychomotor Performance , Vision, Ocular , Adolescent , Adult , Eye Movements , Female , Head Movements , Humans , Male , Young Adult
12.
Sci Rep ; 9(1): 144, 2019 01 15.
Article En | MEDLINE | ID: mdl-30644423

The capability of directing gaze to relevant parts of the environment is crucial for our survival. Computational models have proposed quantitative accounts of human gaze selection in a range of visual search tasks. Initially, models suggested that gaze is directed to the locations in a visual scene at which some criterion, such as the probability of target location, the reduction of uncertainty, or the maximization of reward, appears to be maximal. Subsequent studies established, however, that in some tasks humans instead direct their gaze to locations such that the criterion is expected to become maximal after the single next look. Yet in tasks going beyond a single action, the entire action sequence may determine future rewards, thereby necessitating planning beyond a single next gaze shift. While previous empirical studies have suggested that human gaze sequences are planned, quantitative evidence for whether the human visual system is capable of finding optimal eye movement sequences according to probabilistic planning has been missing. Here we employ a series of computational models to investigate whether humans are capable of looking ahead further than the next single eye movement. We found clear evidence that subjects' behavior was better explained by the model of a planning observer than by a myopic, greedy observer, which selects only a single saccade at a time. In particular, the location of our subjects' first fixation differed depending on the stimulus and the time available for the search, which was well predicted quantitatively by a probabilistic planning model. Overall, our results are the first evidence that the human visual system's gaze selection agrees with optimal planning under uncertainty.
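The greedy-versus-planning distinction can be made concrete with a toy search problem: looks have different durations, a deadline limits the sequence, and a myopic observer that always takes the currently most informative look can end up with a worse sequence than a planner that optimizes over whole sequences. A minimal sketch under these toy assumptions, not the paper's model:

```python
from itertools import combinations

# Toy search: look i checks one region, detecting the target there with
# prior probability p[i], and costs time t[i].
p = [0.45, 0.35, 0.20]   # prior probability the target is in region i
t = [0.9, 0.5, 0.4]      # duration of each look (seconds)
budget = 1.0             # time available for the whole search

def total_detection(looks):
    return sum(p[i] for i in looks)

# Myopic observer: repeatedly take the single best affordable look.
greedy, left = [], budget
while True:
    options = [i for i in range(len(p))
               if i not in greedy and t[i] <= left]
    if not options:
        break
    best = max(options, key=lambda i: p[i])
    greedy.append(best)
    left -= t[best]

# Planning observer: evaluate whole look sequences (sets suffice here,
# since order does not change the toy objective) within the budget.
plans = [c for r in range(len(p) + 1)
         for c in combinations(range(len(p)), r)
         if sum(t[i] for i in c) <= budget]
planned = max(plans, key=total_detection)

print("greedy :", greedy, total_detection(greedy))          # region 0 only: 0.45
print("planned:", list(planned), total_detection(planned))  # regions 1+2: 0.55
```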


Eye Movements/physiology , Fixation, Ocular/physiology , Appetitive Behavior/physiology , Humans , Planning Techniques , Probability , Reward , Uncertainty
13.
Front Behav Neurosci ; 12: 253, 2018.
Article En | MEDLINE | ID: mdl-30515084

Theories of embodied cognition postulate that the world can serve as an external memory. This implies that instead of storing visual information in working memory, the information may equally well be retrieved by appropriate eye movements. Given this assumption, the question arises of how we balance the effort of memorization against the effort of visually sampling our environment. We analyzed eye-tracking data in a sensorimotor task in which participants had to produce a copy of a LEGO® block model displayed on a computer screen. In the unconstrained condition, the model appeared immediately upon eye fixation on the Model Area. In the constrained condition, we introduced a 0.7 s delay before uncovering the model. The model disappeared as soon as participants made a saccade outside of the Model Area. To successfully copy a model of 8 blocks, participants made saccades to the Model Area on average 7.9 times in the unconstrained condition and 5.2 times in the constrained condition. However, the mean duration of a trial was 2.9 s (14%) longer in the constrained condition, even when taking the delayed visibility of the model into account. Thus, introducing a price for a certain type of saccade shifted subjects' behavior toward memorization; this shift was maladaptive rather than adaptive, however, as the increased reliance on memorization led to longer overall performance times.

14.
PLoS Comput Biol ; 14(10): e1006518, 2018 10.
Article En | MEDLINE | ID: mdl-30359364

Although a standard reinforcement learning model can capture many aspects of reward-seeking behaviors, it may not be practical for modeling natural human behaviors because of the richness of dynamic environments and limitations in cognitive resources. We propose a modular reinforcement learning model that addresses these factors. Based on this model, a modular inverse reinforcement learning algorithm is developed to estimate both the rewards and the discount factors from human behavioral data, which allows predictions of human navigation behaviors in virtual reality with high accuracy across different subjects and tasks. Complex human navigation trajectories in novel environments can be reproduced by an artificial agent based on the modular model. This model provides a strategy for estimating the subjective value of actions and how they influence sensory-motor decisions in natural behavior.
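In a modular scheme of this kind, each subtask contributes its own value function with its own reward weight, and actions are chosen by their summed module values. A minimal sketch of the action-selection step with hypothetical module definitions and weights, not the paper's implementation:

```python
import numpy as np

# Hypothetical modules for 1D navigation: each maps (state, action) to
# a module-specific Q-value learned with its own reward and discount.
def q_approach_target(state, action):
    return -abs((state + action) - 10)  # prefer moving toward x = 10

def q_avoid_obstacle(state, action):
    return -5.0 if (state + action) == 6 else 0.0  # obstacle at x = 6

modules = [q_approach_target, q_avoid_obstacle]
weights = [1.0, 2.0]  # relative reward contributions; these weights are
                      # what modular inverse RL estimates from behavior

def select_action(state, actions=(-1, 0, 1)):
    """Pick the action maximizing the weighted sum of module Q-values."""
    scores = [sum(w * q(state, a) for w, q in zip(weights, modules))
              for a in actions]
    return actions[int(np.argmax(scores))]

print(select_action(5))  # stays put rather than stepping onto the obstacle
```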


Decision Making/physiology , Psychomotor Performance/physiology , Reinforcement, Psychology , Algorithms , Computational Biology , Humans , Models, Biological , Reward
15.
Proc Natl Acad Sci U S A ; 115(9): 2246-2251, 2018 02 27.
Article En | MEDLINE | ID: mdl-29444860

Eye blinking is one of the most frequent human actions. The control of blinking is thought to reflect complex interactions between maintaining clear and healthy vision and influences tied to central dopaminergic functions, including cognitive states, psychological factors, and medical conditions. The most immediate consequence of blinking is a temporary loss of vision. Minimizing this loss of information is a prominent explanation for changes in blink rates and for temporarily suppressed blinks, but quantifying the loss is difficult, as environmental regularities are usually complex and unknown. Here we used a controlled detection experiment with parametrically generated event statistics to investigate human blinking control. Subjects were able to learn the environmental regularities and adapted their blinking behavior strategically to better detect future events. Crucially, our design enabled us to develop a computational model that quantifies the consequence of blinking in terms of task performance. The model formalizes ideas from active perception by describing blinking in terms of optimal control, trading off intrinsic costs for blink suppression against task-related costs for missing an event under perceptual uncertainty. Remarkably, this model is not only sufficient to reproduce key characteristics of the observed blinking behavior, such as blink suppression and blink compensation, but also predicts, without further assumptions, the well-known and diverse distributions of time intervals between blinks, for which an explanation has long been elusive.
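The trade-off can be sketched numerically: pick the blink time that minimizes an intrinsic cost of withholding the blink plus the expected cost of missing an event that falls inside the blink's blackout. A toy version with assumed costs and an assumed event-time distribution, not the paper's model:

```python
import numpy as np
from scipy import stats

BLINK_DUR = 0.15   # vision is lost for this long during a blink (s)
C_MISS = 10.0      # task cost of missing the event
C_SUPPRESS = 0.5   # intrinsic cost per second of withholding the blink

# Assumed environmental regularity: event time ~ Exponential(mean 1 s),
# so events are most likely early in the interval.
event = stats.expon(scale=1.0)

def expected_cost(blink_at):
    # Probability the event falls inside the blink's blackout window.
    p_miss = event.cdf(blink_at + BLINK_DUR) - event.cdf(blink_at)
    return C_MISS * p_miss + C_SUPPRESS * blink_at

times = np.linspace(0, 5, 501)
best = times[np.argmin([expected_cost(t) for t in times])]
print(f"optimal blink time: {best:.2f} s")  # deferred past the risky period
```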


Behavior/physiology , Blinking/physiology , Environment , Saccades/physiology , Vision, Ocular/physiology , Adult , Computer Simulation , Female , Humans , Male , Middle Aged , Models, Biological , Normal Distribution , Probability , Young Adult
16.
PLoS Comput Biol ; 13(8): e1005632, 2017 Aug.
Article En | MEDLINE | ID: mdl-28767646

The ability to learn sequential behaviors is a fundamental property of our brains. Yet a long stream of studies, including recent experiments investigating motor sequence learning in adult human subjects, has produced a number of puzzling and seemingly contradictory results. In particular, when subjects have to learn multiple action sequences, learning is sometimes impaired by proactive and retroactive interference effects. In other situations, however, learning is accelerated, as reflected in facilitation and transfer effects. At present it is unclear what the underlying neural mechanisms are that give rise to these diverse findings. Here we show that a recently developed recurrent neural network model readily reproduces this diverse set of findings. The self-organizing recurrent neural network (SORN) model is a network of recurrently connected threshold units that combines a simplified form of spike-timing-dependent plasticity (STDP) with homeostatic plasticity mechanisms ensuring network stability, namely intrinsic plasticity (IP) and synaptic normalization (SN). When trained on sequence learning tasks modeled after recent experiments, the network reproduces the full range of interference, facilitation, and transfer effects. We show how these effects are rooted in the network's changing internal representation of the different sequences across learning and how they depend on an interaction of training schedule and task similarity. Furthermore, since learning in the model is based on fundamental neuronal plasticity mechanisms, the model reveals how these plasticity mechanisms are ultimately responsible for the network's sequence learning abilities. In particular, we find that all three plasticity mechanisms are essential for the network to learn effective internal models of the different training sequences. This ability to form effective internal models is also the basis for the observed interference and facilitation effects. This suggests that STDP, IP, and SN may be the driving forces behind our ability to learn complex action sequences.
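A minimal sketch of one update step for binary threshold units combining the three plasticity rules named above (simplified STDP, synaptic normalization, intrinsic plasticity); network size, learning rates, and initialization are assumptions, and input drive is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)
N, eta_stdp, eta_ip, target_rate = 50, 0.01, 0.005, 0.1

# Sparse excitatory recurrent weights, unit thresholds, initial activity.
W = rng.random((N, N)) * (rng.random((N, N)) < 0.1)
np.fill_diagonal(W, 0.0)
theta = rng.random(N) * 0.5
x = (rng.random(N) < target_rate).astype(float)

for step in range(1000):
    x_new = (W @ x - theta > 0).astype(float)
    # STDP: strengthen pre-before-post pairs, weaken post-before-pre.
    W += eta_stdp * (np.outer(x_new, x) - np.outer(x, x_new))
    W = np.clip(W, 0.0, None)
    # SN: keep each unit's incoming weights summing to one.
    W /= W.sum(axis=1, keepdims=True) + 1e-12
    # IP: nudge thresholds so each unit fires near the target rate.
    theta += eta_ip * (x_new - target_rate)
    x = x_new
```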


Action Potentials/physiology , Learning/physiology , Models, Neurological , Neuronal Plasticity/physiology , Computational Biology , Humans
17.
Proc Natl Acad Sci U S A ; 113(29): 8332-7, 2016 07 19.
Article En | MEDLINE | ID: mdl-27382164

During active behavior humans redirect their gaze several times every second within the visual environment. Where we look within static images is highly efficient, as quantified by computational models of human gaze shifts in visual search and face recognition tasks. However, when we shift gaze is mostly unknown, despite its fundamental importance for survival in a dynamic world. It has been suggested that during naturalistic visuomotor behavior gaze deployment is coordinated with task-relevant events, often predictive of future events, and studies of athletes suggest that the timing of eye movements is learned. Here we establish that humans efficiently learn to adjust the timing of eye movements in response to environmental regularities when monitoring locations in the visual scene to detect probabilistically occurring events. To detect the events, humans adopt strategies that can be understood through a computational model that includes perceptual and acting uncertainties, a minimal processing time, and, crucially, the intrinsic costs of gaze behavior. Thus, subjects traded off event detection rate against the behavioral costs of carrying out eye movements. Remarkably, based on this rational bounded actor model, the time course of learning the gaze strategies is fully explained by an optimal Bayesian learner with humans' characteristic uncertainty in time estimation, the well-known scalar law of biological timing. Taken together, these findings establish that the human visual system is highly efficient at learning temporal regularities in the environment and can use these regularities to control the timing of eye movements to detect behaviorally relevant events.
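The learner component can be sketched as grid-based Bayesian updating in which the observation noise scales with the interval itself (the scalar law, sigma = w * t). Parameter values and the flat prior are assumptions, not the paper's fitted model:

```python
import numpy as np
from scipy import stats

WEBER = 0.15                       # scalar-timing Weber fraction
grid = np.linspace(0.2, 5.0, 481)  # hypotheses for the event interval (s)
log_post = np.zeros_like(grid)     # flat prior over intervals

def update(log_post, observed_interval):
    """One Bayesian update; noise grows with the interval (sigma = w*t)."""
    loglik = stats.norm.logpdf(observed_interval, loc=grid,
                               scale=WEBER * grid)
    log_post = log_post + loglik
    return log_post - log_post.max()  # renormalize for stability

rng = np.random.default_rng(2)
true_interval = 1.5
for _ in range(20):
    sample = rng.normal(true_interval, WEBER * true_interval)
    log_post = update(log_post, sample)

post = np.exp(log_post)
post /= post.sum()
print("posterior mean interval:", (grid * post).sum())  # close to 1.5
```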


Eye Movements/physiology , Adolescent , Adult , Environment , Female , Humans , Learning , Male , Models, Biological , Photic Stimulation , Young Adult
18.
Biol Cybern ; 107(4): 477-90, 2013 Aug.
Article En | MEDLINE | ID: mdl-23832417

In a large variety of situations one would like to have an expressive and accurate model of observed animal or human behavior. While general-purpose mathematical models may successfully capture properties of observed behavior, it is desirable to root models in biological facts. Because of ample empirical evidence for reward-based learning in visuomotor tasks, we use a computational model based on the assumption that the observed agent is balancing the costs and benefits of its behavior to meet its goals. This leads to the framework of reinforcement learning, which additionally provides well-established algorithms for learning solutions to visuomotor tasks. To quantify the agent's goals, we propose to use inverse reinforcement learning, which recovers the rewards implicit in the observed behavior. Based on the assumption of a modular cognitive architecture, we introduce a modular inverse reinforcement learning algorithm that estimates the relative reward contributions of the component tasks in navigation, consisting of following a path while avoiding obstacles and approaching targets. We show how to recover the component reward weights for the individual tasks and that variability in observed trajectories can be explained succinctly through behavioral goals. Simulations demonstrate that good estimates can be obtained even with modest amounts of observation data, which in turn allows the prediction of behavior in novel configurations.
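The estimation idea can be sketched as maximum-likelihood recovery of module reward weights under a softmax action model, assuming the component Q-values are known; a toy grid search over a single weight stands in for the full algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)
n_states = 200
# Known module Q-values (toy): path following and obstacle avoidance,
# each over three candidate actions per state.
q_path = -np.abs(rng.normal(size=(n_states, 3)))
q_avoid = -np.abs(rng.normal(size=(n_states, 3)))
true_w = 0.7  # true relative weight of the path-following module

def action_logprobs(w, beta=5.0):
    """Softmax policy over the weighted sum of module Q-values."""
    z = beta * (w * q_path + (1 - w) * q_avoid)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

# Simulate observed actions from the policy at the true weight.
probs = np.exp(action_logprobs(true_w))
obs = np.array([rng.choice(3, p=p) for p in probs])

# Grid-search the weight that maximizes the observed-action likelihood.
ws = np.linspace(0, 1, 101)
ll = [action_logprobs(w)[np.arange(n_states), obs].sum() for w in ws]
print("recovered weight:", ws[int(np.argmax(ll))])  # should land near 0.7
```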


Learning , Psychomotor Performance , Vision, Ocular , Algorithms , Computer Simulation , Humans , Models, Theoretical
19.
Multisens Res ; 26(1-2): 177-204, 2013.
Article En | MEDLINE | ID: mdl-23713205

Cognition can appear complex because the brain is capable of an enormous repertoire of behaviors. However, this complexity can be greatly reduced when constraints of time and space are taken into account. The brain is constrained by the body to limit its goal-directed behaviors to just a few independent tasks over the scale of 1-2 min, and it can pursue only a very small number of independent agendas. These limitations have been characterized from a number of different vantage points, such as attention, working memory, and dual-task performance. The disparate perspectives of all these methodologies may be unified if behaviors are seen as modular and hierarchically organized. From this vantage point, the central problem of cognition is the scheduling of behaviors to achieve short-term goals. Dual-task paradigms can then be seen as studying the concurrent management of simultaneous, competing agendas; attention as focusing on the decision whether to interrupt the current agenda or persevere; and working memory as the bookkeeping necessary to manage the state of the currently active agenda items.


Brain/physiology , Cognition/physiology , Models, Neurological , Reward , Attention/physiology , Calibration , Executive Function/physiology , Humans , Memory, Short-Term/physiology , Visual Perception/physiology
20.
J Vis ; 12(13): 19, 2012 Dec 21.
Article En | MEDLINE | ID: mdl-23262151

Eye movements during natural tasks are well coordinated with ongoing task demands, and many variables could influence gaze strategies. Sprague and Ballard (2003) proposed a gaze-scheduling model that uses a utility-weighted uncertainty metric to prioritize fixations on task-relevant objects, predicting that human gaze should be influenced by both reward structure and task-relevant uncertainties. To test this conjecture, we tracked the eye movements of participants in a simulated driving task in which uncertainty and implicit reward (via task priority) were varied. Participants were instructed to simultaneously perform a Follow Task, in which they followed a lead car at a specific distance, and a Speed Task, in which they drove at an exact speed. We varied implicit reward by instructing participants to emphasize one task over the other, and we varied uncertainty in the Speed Task through the presence or absence of uniform noise added to the car's velocity. Subjects' gaze data were classified by the image content near fixation and segmented into looks. Gaze measures, including look proportion, look duration, and interlook interval, showed that drivers monitored the speedometer more closely when it had a high level of uncertainty, but only if it was also associated with high task priority or implicit reward. The observed interaction appears to be an example of a simple mechanism whereby the reduction of visual uncertainty is gated by behavioral relevance. This lends qualitative support to the primary variables proposed to control gaze allocation in the Sprague and Ballard model.
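The scheduling rule being tested can be sketched directly: each task's state uncertainty grows while it is unmonitored, shrinks when fixated, and gaze goes to the task with the largest utility-weighted uncertainty. A toy loop with assumed priorities and growth rates, loosely patterned on the Follow/Speed design above, not the Sprague and Ballard implementation:

```python
import numpy as np

tasks = ["follow", "speed"]
priority = np.array([2.0, 1.0])  # implicit reward: Follow Task emphasized
growth = np.array([0.2, 0.8])    # uncertainty growth per step (noisy speed)
variance = np.array([1.0, 1.0])  # current state uncertainty per task

looks = {name: 0 for name in tasks}
for step in range(1000):
    # Gaze goes to the task with maximal utility-weighted uncertainty.
    target = int(np.argmax(priority * variance))
    looks[tasks[target]] += 1
    variance[target] = 0.1  # fixation collapses that task's uncertainty
    variance += growth      # unmonitored uncertainty keeps growing

# High uncertainty alone does not capture gaze; it draws looks in
# proportion to task priority, mirroring the interaction reported above.
print(looks)
```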


Automobile Driving/psychology , Computer Simulation , Eye Movements/physiology , Psychomotor Performance/physiology , Uncertainty , Adult , Female , Fixation, Ocular , Humans , Male
...