ABSTRACT
Humans explore visual scenes by alternating short fixations with saccades directing the fovea to points of interest. During fixation, the visual system not only examines the foveal stimulus at high resolution, but it also processes the extrafoveal input to plan the next saccade. Although foveal analysis and peripheral selection occur in parallel, little is known about the temporal dynamics of foveal and peripheral processing upon saccade landing, during fixation. Here we investigate whether the ability to localize changes across the visual field differs depending on when the change occurs during fixation, and on whether the change localization involves foveal processing, extrafoveal processing, or both. Our findings reveal that the ability to localize changes in peripheral areas of the visual field improves as a function of time after fixation onset, whereas localization accuracy for foveal stimuli remains approximately constant. Importantly, this pattern holds regardless of whether individuals monitor only foveal or peripheral stimuli, or both simultaneously. Altogether, these results show that the visual system is more attuned to the foveal input early on during fixation, whereas change localization for peripheral stimuli progressively improves throughout fixation, possibly as a consequence of an increased readiness to plan the next saccade.
Subject(s)
Fixation, Ocular , Fovea Centralis , Saccades , Visual Fields , Humans , Fixation, Ocular/physiology , Fovea Centralis/physiology , Saccades/physiology , Male , Female , Adult , Visual Fields/physiology , Young Adult , Photic Stimulation/methods , Visual Perception/physiology
ABSTRACT
Accurately estimating time to contact (TTC) is crucial for successful interactions with moving objects, yet it is challenging under conditions of sensory and contextual uncertainty, such as occlusion. In this study, participants engaged in a prediction motion task, monitoring a target that moved rightward and an occluder. The participants' task was to press a key when they predicted the target would be aligned with the occluder's right edge. We manipulated sensory uncertainty by varying the visible and occluded periods of the target, thereby modulating the time available to integrate sensory information and the duration over which motion must be extrapolated. Additionally, contextual uncertainty was manipulated via predictable and unpredictable conditions: the occluder either reliably indicated where the moving target would disappear or provided no such indication. Results showed differences in accuracy between the predictable and unpredictable occluder conditions, with different eye movement patterns in each case. Importantly, the ratio of the time the target was visible, which allows for the integration of sensory information, to the occlusion time, which determines perceptual uncertainty, was a key factor in determining performance. This ratio is central to our proposed model, which provides a robust framework for understanding and predicting human performance in dynamic environments with varying degrees of uncertainty.
Subject(s)
Motion Perception , Humans , Motion Perception/physiology , Uncertainty , Male , Female , Adult , Young Adult , Photic Stimulation/methods , Eye Movements/physiology , Reaction Time/physiology , Time Perception/physiology , Psychomotor Performance/physiology
ABSTRACT
Reaching movements are guided by estimates of the target object's location. Since the precision of instantaneous estimates is limited, one might accumulate visual information over time. However, if the object is not stationary, accumulating information can bias the estimate. How do people deal with this trade-off between improving precision and reducing the bias? To find out, we asked participants to tap on targets. The targets were stationary or moving, with jitter added to their positions. By analysing the response to the jitter, we show that people continuously use the latest available information about the target's position. When the target is moving, they combine this instantaneous target position with an extrapolation based on the target's average velocity during the last several hundred milliseconds. This strategy leads to a bias if the target's velocity changes systematically. Having people tap on accelerating targets showed that the bias that results from ignoring systematic changes in velocity is removed by compensating for endpoint errors if such errors are consistent across trials. We conclude that combining simple continuous updating of visual information with the low-pass filter characteristics of muscles, and adjusting movements to compensate for errors made in previous trials, leads to precise and accurate human goal-directed movements.
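The strategy described in this abstract — taking the latest available position sample and adding an extrapolation based on the target's average recent velocity — can be sketched as follows. The function name, the 300-ms averaging window, and the 100-ms sensorimotor delay are illustrative assumptions, not values fitted in the study:

```python
import numpy as np

def extrapolated_estimate(times, positions, t_now, window=0.3, delay=0.1):
    """Sketch: use the latest position sample available to the motor
    system (lagged by `delay` seconds) and extrapolate it with the
    target's average velocity over the preceding `window` seconds.
    Parameter values are illustrative, not fitted."""
    times = np.asarray(times, dtype=float)
    positions = np.asarray(positions, dtype=float)
    usable = times <= t_now - delay          # samples the system can act on
    t_last, p_last = times[usable][-1], positions[usable][-1]
    recent = usable & (times >= t_last - window)
    # Average velocity over the recent window: a crude low-pass filter
    v_avg = (positions[recent][-1] - positions[recent][0]) / (
        times[recent][-1] - times[recent][0])
    return p_last + v_avg * (t_now - t_last)
```

For an accelerating target this estimate lags systematically, which is exactly the bias the abstract describes; removing it requires the trial-to-trial endpoint-error compensation the authors report.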
Subject(s)
Motion Perception , Psychomotor Performance , Humans , Psychomotor Performance/physiology , Motion Perception/physiology , Uncertainty , Motion , Movement/physiology
ABSTRACT
When intercepting moving targets, people perform slightly better if they follow their natural tendency to pursue the target with their eyes. Is this because the velocity is judged more precisely when pursuing the target? To find out, we compared how well people could determine which of two sequentially presented moving bars was moving faster. There was always also a static bar on the screen. People judged the moving bar's velocity about 10% more precisely when pursuing it than when fixating the static bar.
Subject(s)
Motion Perception , Humans , Pursuit, Smooth
ABSTRACT
People usually follow a moving object with their gaze if they intend to interact with it. What would happen if they did not? We recorded eye and finger movements while participants moved a cursor toward a moving target. An unpredictable delay in updating the position of the cursor on the basis of that of the invisible finger made it essential to use visual information to guide the finger's ongoing movement. Decreasing the contrast between the cursor and the background from trial to trial made it difficult to see the cursor without looking at it. In separate experiments, either participants were free to hit the target anywhere along its trajectory or they had to move along a specified path. In the two experiments, participants tracked the cursor rather than the target with their gaze on 13% and 32% of the trials, respectively. They hit fewer targets when the contrast was low or a path was imposed. Not looking at the target did not disrupt the visual guidance that was required to deal with the delays that we imposed. Our results suggest that peripheral vision can be used to guide one item to another, irrespective of which item one is looking at.
Subject(s)
Eye Movements , Motion , Movement , Psychomotor Performance , Visual Perception/physiology , Adult , Female , Fingers/physiology , Humans , Male , Middle Aged , Reaction Time , Time Factors , Young Adult
ABSTRACT
Does the predictability of a target's movement and of the interception location influence how the target is intercepted? In a first experiment, we manipulated the predictability of the interception location. A target moved along a haphazardly curved path, and subjects attempted to tap on it when it entered a hitting zone. The hitting zone was either a large ring surrounding the target's starting position (ring condition) or a small disk that became visible before the target appeared (disk condition). The interception location gradually became apparent in the ring condition, whereas it was immediately apparent in the disk condition. In the ring condition, subjects pursued the target with their gaze. Their heads and hands gradually moved in the direction of the future tap position. In the disk condition, subjects immediately directed their gaze toward the hitting zone by moving both their eyes and heads. They also moved their hands to the future tap position sooner than in the ring condition. In a second and third experiment, we made the target's movement more predictable. Although this made the targets easier to pursue, subjects now shifted their gaze to the hitting zone soon after the target appeared in the ring condition. In the disk condition, they still usually shifted their gaze to the hitting zone at the beginning of the trial. Together, the experiments show that predictability of the interception location is more important than predictability of target movement in determining how we move to intercept targets. NEW & NOTEWORTHY We show that if people are required to intercept a target at a known location, they direct their gaze to the interception point as soon as they can rather than pursuing the target with their eyes for as long as possible. The predictability of the interception location rather than the predictability of the path to that location largely determines how the eyes, head, and hand move.
Subject(s)
Fixation, Ocular/physiology , Hand/physiology , Motion Perception/physiology , Movement/physiology , Psychomotor Performance/physiology , Adult , Female , Humans , Male
ABSTRACT
The increased reliance on electronic devices such as smartphones in our everyday life exposes us to various delays between our actions and their consequences. Whereas it is known that people can adapt to such delays, the mechanisms underlying such adaptation remain unclear. To better understand these mechanisms, the current study explored the role of eye movements in interception with delayed visual feedback. In two experiments, eye movements were recorded as participants tried to intercept a moving target with their unseen finger while receiving delayed visual feedback about their own movement. In Experiment 1, the target randomly moved in one of two different directions at one of two different velocities. The delay between the participant's finger movement and movement of the cursor that provided feedback about the finger movements was gradually increased. Despite the delay, participants followed the target with their gaze. They were quite successful at hitting the target with the cursor. Thus, they moved their finger to a position that was ahead of where they were looking. Removing the feedback showed that participants had adapted to the delay. In Experiment 2, the target always moved in the same direction and at the same velocity, while the cursor's delay varied across trials. Participants still always directed their gaze at the target. They adjusted their movement to the delay on each trial, often succeeding in intercepting the target with the cursor. Since their gaze was always directed at the target, and they could not know the delay until the cursor started moving, participants must have been using peripheral vision of the delayed cursor to guide it to the target. Thus, people deal with delays by directing their gaze at the target and using both experience from previous trials (Experiment 1) and peripheral visual information (Experiment 2) to guide their finger in a way that will make the cursor hit the target.
Subject(s)
Adaptation, Physiological/physiology , Eye Movements/physiology , Feedback, Sensory/physiology , Psychomotor Performance/physiology , Visual Perception/physiology , Adult , Female , Humans , Male , Middle Aged , Photic Stimulation , Reaction Time/physiology
ABSTRACT
It is well known that when we intentionally make large head movements, the resulting motion parallax helps us judge objects' distances. The information about distance could be obtained in various ways: from the changes in the object's position with respect to ourselves, from the changes in its orientation relative to the line of sight, and from the relative retinal motion between the target's image and that of the background. We explore here whether these motion parallax cues are used when we think we are standing still. To answer this question we asked subjects to indicate the position of a virtual target with their unseen finger. The position and the size of the target changed across trials. There were pairs of trials in which the same target was presented at the same location, except that one or more of the three motion parallax cues indicated that the target was either 10 cm closer or 10 cm farther away than the 'true' distance. Any systematic difference between the positions indicated for the closer and farther targets of such pairs indicates that the cues in question influence subjects' judgments. The results show that motion parallax cues have a detectable influence on our judgments, even when the head only moves a few millimeters. Relative retinal image motion has the clearest effect. Subjects did not move their head differently when we presented the targets to only one eye in order to increase the benefit of considering motion parallax.
Subject(s)
Cues , Distance Perception/physiology , Judgment/physiology , Motion Perception/physiology , Orientation/physiology , Retina/physiology , Female , Head Movements/physiology , Humans , Male , Motion , Posture
ABSTRACT
When vision of the hand is unavailable, movements drift systematically away from their targets. It is unclear, however, why this drift occurs. We investigated whether drift is an active process, in which people deliberately modify their movements based on biased position estimates, causing the real hand to move away from the real target location, or a passive process, in which execution error accumulates because people have diminished sensory feedback and fail to adequately compensate for the execution error. In our study participants reached back and forth between two targets when vision of the hand, targets, or both the hand and targets was occluded. We observed the most drift when hand vision and target vision were occluded and equivalent amounts of drift when either hand vision or target vision was occluded. In a second experiment, we observed movement drift even when no visual target was ever present, providing evidence that drift is not driven by a visual-proprioceptive discrepancy. The observed drift in both experiments was consistent with a model of passive error accumulation in which the amount of drift is determined by the precision of the sensory estimate of movement error.
Subject(s)
Darkness , Feedback, Sensory , Hand , Motor Activity , Visual Perception , Adult , Female , Humans , Male , Middle Aged , Models, Neurological , Models, Psychological , Proprioception , Psychophysics , Young Adult
ABSTRACT
Hitting a moving target requires not missing it when it is around the aimed position. The time available to hit the target when it is at the position of interest is usually called the time window and depends on the target's speed and size. These variables, among others, have been manipulated in previous studies but kept constant within the same trial or session. Here, we present results of a hitting task in which targets underwent simple harmonic motion, which is defined by a sinusoidal function. Target velocity changes continuously in this motion, and so does the time window, which within a single trial is shorter in the centre (peak velocity) and longer at the turning points (lowest velocity). We studied two conditions in which the target moved with either a reliable (across trials) amplitude displacement or a reliable peak velocity, and subjects were free to decide where and when to hit it. Results show that subjects made a compromise between the target's maximum and minimum speed, hitting the target at intermediate speeds. Interestingly, the reliability of the target's peak velocity (or displacement) modulated the point of interception. When the target's peak velocity was more reliable, subjects intercepted the target at positions with smaller temporal windows; the reverse was true when displacement was reliable. Subjects adapted their interceptive behaviour to the underlying statistical structure of the targets. Finally, in a control condition in which the temporal window also depended on the instantaneous size and not only on speed, subjects intercepted the target when it moved at speeds similar to those when the size was constant. This finding suggests that velocity, rather than the temporal window, contributed more to controlling the interceptive movements.
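The dependence of the time window on instantaneous speed under simple harmonic motion can be illustrated with a short sketch. The size/|velocity| approximation and the parameter names are our own illustrative choices, not the study's exact definition:

```python
import numpy as np

def shm_time_window(t, amplitude, omega, target_size):
    """Approximate temporal window for hitting a target in simple
    harmonic motion, x(t) = A * sin(omega * t): the time the target
    spends within one target-size of a point, size / |velocity|."""
    speed = np.abs(amplitude * omega * np.cos(omega * t))
    # Near the turning points speed -> 0 and the window grows without
    # bound; clip the denominator to keep the sketch well-behaved.
    return target_size / np.maximum(speed, 1e-9)
```

At the centre of the motion (t = 0, peak speed) the window is shortest; close to a turning point, a quarter period later, it is far longer — the within-trial variation the abstract describes.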
Subject(s)
Motion Perception/physiology , Movement/physiology , Psychomotor Performance/physiology , Size Perception/physiology , Analysis of Variance , Female , Humans , Male , Reaction Time/physiology , Space Perception/physiology , Time Factors
ABSTRACT
It is known that people can learn to deal with delays between their actions and the consequences of such actions. We wondered whether they do so by adjusting their anticipations about the sensory consequences of their actions or whether they simply learn to move in certain ways when performing specific tasks. To find out, we examined details of how people learn to intercept a moving target with a cursor that follows the hand with a delay and examined the transfer of learning between this task and various other tasks that require temporal precision. Subjects readily learned to intercept the moving target with the delayed cursor. The compensation for the delay generalized across modifications of the task, so subjects did not simply learn to move in a certain way in specific circumstances. The compensation did not generalize to completely different timing tasks, so subjects did not generally expect the consequences of their motor commands to be delayed. We conclude that people specifically learn to control the delayed visual consequences of their actions to perform certain tasks.
Subject(s)
Motion Perception/physiology , Psychomotor Performance/physiology , Transfer, Psychology/physiology , Adaptation, Physiological/physiology , Feedback, Psychological/physiology , Female , Generalization, Psychological/physiology , Hand , Humans , Male , Photic Stimulation , Reaction Time/physiology
ABSTRACT
Information about position and velocity is essential to predict where moving targets will be in the future, and to accurately move towards them. But how are the two signals combined over time to complete goal-directed movements? We show that when velocity information is impaired by the use of second-order motion stimuli, saccades directed towards moving targets land at positions the targets occupied ~100 ms before saccade initiation, but hand movements are accurate. Importantly, the longer latencies of hand movements allow additional time to process the available sensory information. When the period for which one sees the moving target before making the saccade is increased, saccades become accurate. In line with this, hand movements with short latencies show higher curvature, indicating corrections based on an update of incoming sensory information. These results suggest that movements are controlled by an independent and evolving combination of sensory information about the target's position and velocity.
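A minimal, hypothetical way to picture this dissociation is that saccades rely on position information alone, lagged by roughly 100 ms, whereas the longer-latency hand movement also exploits velocity to extrapolate. All names and values below are illustrative, not the authors' model:

```python
def saccade_landing(target_pos, target_vel, lag=0.1):
    """With impaired velocity information (second-order motion), the
    saccade lands where the target was ~`lag` seconds earlier."""
    return target_pos - target_vel * lag

def hand_endpoint(target_pos, target_vel, extra_time):
    """The longer-latency hand movement has had time to process velocity
    and extrapolates to the target's position `extra_time` seconds ahead."""
    return target_pos + target_vel * extra_time
```

With a target at 1.0 moving at 2.0 units/s, the sketch places the saccade behind the target (0.8) while the hand, given 200 ms of extrapolation, arrives ahead at the target's future position (1.4).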
Subject(s)
Goals , Hand , Movement , Saccades , Hand/physiology , Movement/physiology , Humans , Male , Female , Young Adult , Adult , Saccades/physiology , Time Factors
ABSTRACT
Being able to see the object that you are aiming for is evidently useful for guiding the hand to a moving object. We examined to what extent seeing the moving hand also influences performance. Subjects tried to intercept moving targets while either instantaneous or delayed feedback about the moving hand was provided at certain times. After each attempt, subjects had to indicate whether they thought they had hit the target, had passed ahead of it, or had passed behind it. Providing visual feedback early in the movement enabled subjects to use visual information about the moving hand to correct their movements. Providing visual feedback when the moving hand passed the target helped them judge how they had performed. Performance was almost as good when visual feedback about the moving hand was provided only when the hand was passing the target as when it was provided throughout the movement. We conclude that seeing the temporal relationship between the hand and the target as the hand crosses the target's path is instrumental for adapting to a temporal delay.
Subject(s)
Adaptation, Physiological , Feedback, Sensory/physiology , Motion Perception/physiology , Psychomotor Performance/physiology , Humans , Reaction Time
ABSTRACT
We constantly perform tasks within complex and dynamic environments. Some of these tasks (e.g., road crossing or playing team sports) require predicting future states of the world to decide which action to unfold and when to do so. However, it remains largely unexplored how the variability in a scene influences perceptual decision-making. Here we examine how increasing the scene variability influences our ability to make perceptual judgements and decisions, using a go/no-go decision task in a dynamic scenario mimicking a road-crossing situation with different levels of stimulus variability. Parameters of psychometric functions revealed that differences in variability do not influence judgements about the objects' time-to-contact, or the difficulty of making such judgements. Nevertheless, increases in scene variability influence the go/no-go decisions, leading people to adopt more conservative criteria. How much the criterion changes across levels of variability is well accounted for by the actual amount of variance in the scene, but the overall criterion is tightly linked to the precision or reliability with which one can estimate perceptual information about the objects' arrival time. These results suggest that the reliability of our own perceptual estimates modulates our criterion when completing perceptual decision-making tasks under different scene variabilities.
Subject(s)
Decision Making , Visual Perception , Humans , Reproducibility of Results , Bias
ABSTRACT
We often need to interact with targets that move along arbitrary trajectories in the 3D scene. In these situations, information about parameters such as speed, time-to-contact, or motion direction is required to solve a broad class of timing tasks (e.g., shooting, or interception). There is a large body of literature addressing how we estimate different parameters when objects move both in the fronto-parallel plane and in depth. However, we do not know to what extent the timing of interceptive actions is affected when motion-in-depth (MID) is involved. Unlike previous studies that have looked at the timing of interceptive actions using constant distances and fronto-parallel motion, we here use immersive virtual reality to look at how differences in the above-mentioned variables influence timing errors in a shooting task performed in a 3D environment. Participants had to shoot at targets that moved following different angles of approach with respect to the observer when those reached designated shooting locations. We recorded the shooting time, the temporal and spatial errors, and the head's position and orientation in two conditions that differed in the interval between the shot and the interception of the target's path. Results show a consistent change in the temporal error across approaching angles: the larger the angle, the earlier the error. Interestingly, we also found different error patterns within a given angle that depended on whether participants tracked the whole target's trajectory or only its end-point. These differences had a larger impact when the target moved in depth and are consistent with underestimating motion-in-depth in the periphery. We conclude that the strategy participants use to track the target's trajectory interacts with MID and affects timing performance.
ABSTRACT
In daily life we often interact with moving objects in tasks that involve analyzing visual motion, like catching a ball. To do so successfully we track objects with our gaze, using a combination of smooth pursuit and saccades. Previous work has shown that the occurrence and direction of corrective saccades lead to changes in the perceived velocity of moving objects. Here we investigate whether such changes lead to equivalent biases in interception. Participants had to track moving targets with their gaze, and in separate sessions either judge the targets' velocities or intercept them by tapping on them. We separated trials in which target movements were tracked with pure pursuit from trials in which identical target movements were tracked with a combination of pursuit and corrective saccades. Our results show that interception errors are shifted in accordance with the observed influence of corrective saccades on velocity judgments. Furthermore, while the time at which corrective saccades occurred did not affect velocity judgments, it did influence their effect in the interception task. Corrective saccades around 100 ms before the tap had a stronger effect on the endpoint error than earlier saccades. This might explain why participants made earlier corrective saccades in the interception task.
Subject(s)
Fixation, Ocular/physiology , Judgment/physiology , Motion Perception/physiology , Saccades/physiology , Adult , Female , Humans , Male , Middle Aged , Motion , Photic Stimulation , Psychomotor Performance/physiology , Pursuit, Smooth/physiology , Time Factors
ABSTRACT
There are two main anatomically and physiologically defined visual pathways connecting the primary visual cortex with higher visual areas: the ventral and the dorsal pathway. The influential two-visual-systems hypothesis postulates that visual attributes are analyzed differently for different functions: in the dorsal pathway visual information is analyzed to guide actions, whereas in the ventral pathway visual information is analyzed for perceptual judgments. We here show that a person who cannot identify objects due to an extensive bilateral ventral brain lesion is able to judge the velocity at which an object moves. Moreover, both his velocity judgments and his interceptive actions are as susceptible to a motion illusion as those of people without brain lesions. These findings speak in favor of the idea that dorsal structures process information about attributes such as velocity, irrespective of whether such information is used for perceptual judgments or to guide actions.
Subject(s)
Illusions , Visual Cortex/physiology , Visual Pathways , Visual Perception , Aged , Brain Diseases/diagnostic imaging , Brain Diseases/pathology , Brain Diseases/physiopathology , Female , Fixation, Ocular , Humans , Judgment , Magnetic Resonance Imaging , Male , Middle Aged , Photic Stimulation
ABSTRACT
It has been hypothesised that our actions are less susceptible to visual illusions than our perceptual judgements because similar information is processed for perception and action in separate pathways. We test this hypothesis for subjects intercepting a moving object that appears to move at a different speed than its true speed due to an illusion. The object was a moving Gabor patch: a sinusoidal grating whose luminance contrast is modulated by a two-dimensional Gaussian. We manipulated the patch's apparent speed by moving the grating relative to the Gaussian. We used separate two-interval forced choice discrimination tasks to determine how moving the grating influenced ten people's judgements of the object's position and velocity while they were fixating. Based on their perceptual judgements, and knowing that our ability to correct for errors that arise from relying on incorrect judgements is limited by a sensorimotor delay of about 100 msec, we predicted the extent to which subjects would tap ahead of or behind similar targets when trying to intercept them at the fixation location. The predicted errors closely matched the actual errors that subjects made when trying to intercept the targets. This finding does not support the two visual streams hypothesis. The results are consistent with the idea that the extent to which an illusion influences an action tells us something about the extent to which the action relies on the percept in question.
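The prediction logic can be sketched as follows: if the final ~100 ms of the movement cannot be corrected, a bias in perceived speed translates into a spatial tap error of roughly the speed bias times the delay. The linear form and the function name are our simplification, not the authors' exact model:

```python
def predicted_tap_error(true_speed, perceived_speed, delay=0.1):
    """If the last `delay` seconds of the movement cannot be corrected,
    a target that appears faster than it is should be tapped ahead of
    its true position by roughly the speed bias times the delay.
    Positive values mean tapping ahead of the target."""
    return (perceived_speed - true_speed) * delay
```

For a target moving at 10 units/s that appears to move at 12 units/s, the sketch predicts a tap about 0.2 units ahead; an illusory slowing predicts a tap behind by the same logic.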
Subject(s)
Judgment/physiology , Motion Perception/physiology , Visual Pathways/physiology , Visual Perception/physiology , Adult , Choice Behavior/physiology , Humans , Psychomotor Performance/physiology
ABSTRACT
Humans time their interceptive actions with remarkable precision. This daily-life performance is far too good to be explained by reported experimental perceptual estimates of when an object will arrive at the interception location. One option is that people use general principles to reduce variability, such as integrating early estimates from predictive mechanisms with late estimates from online vision. Here we explore this possibility by presenting virtual balls that people had to catch, comparing three conditions: early, late, and full vision of a parabolic path. If people integrate these different estimates, the precision of the timing under full vision should be higher than when only late vision is available. We tested this hypothesis and found a benefit for full vision, but only for those (steeper) trajectories in which early and late estimates are likely based on different cues. Overall, the integration of the different estimates of the impending interceptive event was optimal and can help explain the observed high temporal precision in many daily-life situations. Finally, by revealing the situations in which people do not take early predictions into account and rely on online visual information only, we elucidate the theoretical controversy between predictive and online control of timed actions.
Subject(s)
Motion Perception/physiology , Psychomotor Performance/physiology , Space Perception/physiology , Time Perception/physiology , Adult , Humans , Young Adult
ABSTRACT
Many actions involve limb movements toward a target. Visual and proprioceptive estimates are available online, and by optimally combining both modalities during the movement (Ernst and Banks, 2002), the system can increase the precision of the hand estimate. The notion that both sensory modalities are integrated is also motivated by the intuition that we do not consciously perceive any discrepancy between the felt and seen positions of the hand. This coherence as a result of integration does not necessarily imply realignment between the two modalities (Smeets et al., 2006). For example, the two estimates (visual and proprioceptive) might be different without either of them (e.g., proprioception) ever being adjusted after recovering the other (e.g., vision). The implication that the felt and seen positions might be different has a temporal analog. Because the actual feedback from the hand at a given instantaneous position reaches brain areas at different times for proprioception and vision (sooner for proprioception), the corresponding instantaneous unisensory position estimates will differ, with the proprioceptive one being ahead of the visual one. Based on the assumption that the system optimally integrates the available evidence from both senses online, we introduce a temporal mechanism that explains the reported overestimation of hand positions when vision is occluded, for active and passive movements (Gritsenko et al., 2007), without the need to resort to initial feedforward estimates (Wolpert et al., 1995). We set up hypotheses to test the validity of the model, and we contrast simulation-based predictions with empirical data.
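The reliability-weighted (minimum-variance) fusion rule referred to above (Ernst and Banks, 2002) can be sketched as follows; the function name is ours:

```python
def optimal_combination(x_vis, var_vis, x_prop, var_prop):
    """Minimum-variance (inverse-variance weighted) fusion of visual and
    proprioceptive position estimates (Ernst & Banks, 2002). The combined
    variance is smaller than that of either estimate alone."""
    w_vis, w_prop = 1.0 / var_vis, 1.0 / var_prop
    x_hat = (w_vis * x_vis + w_prop * x_prop) / (w_vis + w_prop)
    var_hat = 1.0 / (w_vis + w_prop)
    return x_hat, var_hat
```

In the temporal mechanism sketched in the abstract, x_prop would correspond to a more recent instant than x_vis because proprioceptive feedback arrives sooner, so for a moving hand the fused position estimate is pulled toward the older visual sample.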