Results 1 - 17 of 17
1.
Vision Res ; 189: 93-103, 2021 12.
Article in English | MEDLINE | ID: mdl-34688109

ABSTRACT

Radial motion is perceived as faster than linear motion when local spatiotemporal properties are matched. This radial speed bias (RSB) is thought to occur because radial motion is partly interpreted as motion-in-depth. Geometry dictates that a fixed amount of radial expansion at increasing eccentricities is consistent with smaller motion in depth, so it is perhaps surprising that the impact of eccentricity on RSB has not been examined. With this issue in mind, across 3 experiments we investigated the RSB as a function of eccentricity. In a 2IFC task, participants judged which of a linear (test - variable speed) or radial (reference - 2 or 4°/s) stimulus appeared to move faster. Linear and radial stimuli comprised 4 Gabor patches arranged left, right, above and below fixation at varying eccentricities (3.5°-14°). For linear stimuli, Gabors all drifted left or right, whereas for radial stimuli Gabors drifted towards or away from the centre. The RSB (difference in perceived speeds between matched linear and radial stimuli) was recovered from fitted psychometric functions. Across all 3 experiments we found that the RSB decreased with eccentricity, but this tendency was less marked beyond 7°; that is, at odds with the geometry, the effect did not continue to decrease as a function of eccentricity. This was true irrespective of whether stimuli were fixed in size (Experiment 1) or varied in size to account for changes in spatial scale across the retina (Experiment 2). It was also true when we removed conflicting stereo cues via monocular viewing (Experiment 3). To further investigate our data, we extended a previous model of speed perception, which suggests perceived motion for such stimuli reflects a balance between two opposing perceptual interpretations, one for motion in depth and the other for object deformation. We propose, in the context of this model, that our data are consistent with placing greater weight on the motion-in-depth interpretation with increasing eccentricity, and that this is why the RSB does not continue to reduce in line with purely geometric constraints.


Subject(s)
Motion Perception , Cues , Humans , Motion , Retina
2.
J Vis ; 20(9): 12, 2020 09 02.
Article in English | MEDLINE | ID: mdl-32945848

ABSTRACT

Moving around safely relies critically on our ability to detect object movement. This is made difficult because retinal motion can arise from object movement or our own movement. Here we investigate the ability to detect scene-relative object movement using a neural mechanism called optic flow parsing. This mechanism acts to subtract retinal motion caused by self-movement. Because older observers exhibit marked changes in visual motion processing, we consider performance across a broad age range (N = 30, range: 20-76 years). In Experiment 1 we measured thresholds for reliably discriminating the scene-relative movement direction of a probe presented among three-dimensional objects moving onscreen to simulate observer movement. Performance in this task did not correlate with age, suggesting that the ability to detect scene-relative object movement from retinal information is preserved in ageing. In Experiment 2 we investigated changes in the underlying optic flow parsing mechanism that supports this ability, using a well-established task that measures the magnitude of globally subtracted optic flow. We found strong evidence for a positive correlation between age and global flow subtraction. These data suggest that the ability to identify object movement during self-movement from visual information is preserved in ageing, but that there are changes in the flow parsing mechanism that underpins this ability. We suggest that these changes reflect compensatory processing required to counteract other impairments in the ageing visual system.


Subject(s)
Aging/physiology , Motion Perception/physiology , Optic Flow/physiology , Retina/physiology , Adult , Aged , Female , Humans , Longevity , Male , Middle Aged , Photic Stimulation/methods , Young Adult
3.
Multisens Res ; 32(8): 771-796, 2019 12 11.
Article in English | MEDLINE | ID: mdl-31291612

ABSTRACT

Multisensory integration typically follows the predictions of a statistically optimal model whereby the contribution of each sensory modality is weighted according to its reliability. Previous research has shown that multisensory integration is affected by ageing; however, it is less certain whether older adults follow this statistically optimal model. Additionally, previous studies often present multisensory cues which are conflicting in size, shape or location, yet naturally occurring multisensory cues are usually non-conflicting. Therefore, the mechanisms of integration in older adults might differ depending on whether the multisensory cues are consistent or conflicting. In the current experiment, young (n = 21) and older (n = 30) adults were asked to make judgements regarding the height of wooden blocks using visual, haptic or combined visual-haptic information. Dual-modality visual-haptic blocks could be presented as equal or conflicting in size. Young and older adults' size discrimination thresholds (i.e., precision) were not significantly different for visual, haptic or visual-haptic cues. In addition, both young and older adults' discrimination thresholds and points of subjective equality did not follow model predictions of optimal integration, for both conflicting and non-conflicting cues. Instead, there was considerable between-subject variability as to how visual and haptic cues were processed when presented simultaneously. This finding has implications for the development of multisensory therapeutic aids and interventions to assist older adults with everyday activities, where these should be tailored to the needs of each individual.


Subject(s)
Aging/physiology , Cues , Size Perception/physiology , Touch Perception/physiology , Visual Perception/physiology , Adult , Aged , Female , Humans , Male , Middle Aged , Photic Stimulation , Physical Stimulation , Reproducibility of Results , Young Adult
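The statistically optimal model that the abstract above tests against can be sketched in a few lines: each cue is weighted by its reliability (inverse variance), and the combined estimate is more precise than either cue alone. The function name and the example sizes below are illustrative, not values from the study.

```python
import math

def mle_combine(means, sigmas):
    """Reliability-weighted (MLE) combination of unisensory estimates.

    Each cue's weight is proportional to its reliability 1/sigma^2, and the
    combined standard deviation is always at most the best single cue's.
    """
    reliabilities = [1.0 / s**2 for s in sigmas]
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    combined_mean = sum(w * m for w, m in zip(weights, means))
    combined_sigma = math.sqrt(1.0 / total)
    return combined_mean, combined_sigma

# Illustrative visual-haptic size conflict: visual estimate 55 mm
# (sigma 2 mm), haptic estimate 53 mm (sigma 4 mm).
mean, sigma = mle_combine([55.0, 53.0], [2.0, 4.0])
```

Here the visual cue is four times as reliable as the haptic one, so the combined estimate (54.6 mm) sits much closer to the visual estimate; a failure to observe this precision benefit in the dual-modality condition is what marks performance as non-optimal under the model.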
4.
Vision Res ; 140: 66-72, 2017 11.
Article in English | MEDLINE | ID: mdl-28822716

ABSTRACT

Speed perception is vital for safe activity in the environment. However, considerable evidence suggests that perceived speed changes as a function of stimulus contrast, with some investigators suggesting that this might have meaningful real-world consequences (e.g. driving in fog). In the present study we investigate whether the neural effects of contrast on speed perception occur at the level of local or global motion processing. To do this we examine both speed discrimination thresholds and contrast-dependent speed perception for two global motion configurations that have matched local spatio-temporal structure. Specifically, we compare linear and radial configurations, the latter of which arises very commonly due to self-movement. In Experiment 1 the stimuli comprised circular grating patches. In Experiment 2, to match stimuli even more closely, motion was presented in multiple local Gabor patches equidistant from central fixation. Each patch contained identical linear motion but the global configuration was consistent with either linear or radial motion. In both Experiments 1 and 2, discrimination thresholds and contrast-induced speed biases were similar in linear and radial conditions. These results suggest that contrast-based speed effects occur only at the level of local motion processing, irrespective of global structure. This result is interpreted in the context of previous models of speed perception and evidence suggesting differences in perceived speed of locally matched linear and radial stimuli.


Subject(s)
Acceleration , Contrast Sensitivity/physiology , Motion Perception/physiology , Humans , Psychophysics , Sensory Thresholds
5.
Multisens Res ; 30(6): 509-536, 2017 Jan 01.
Article in English | MEDLINE | ID: mdl-31287089

ABSTRACT

A number of studies have shown that multisensory performance is well predicted by a statistically optimal maximum likelihood estimation (MLE) model. Under this model unisensory estimates are combined additively and weighted according to relative reliability. Recent theories have proposed that atypical sensation and perception commonly reported in autism spectrum condition (ASC) may result from differences in the use of reliability information. Furthermore, experimental studies have indicated that multisensory processing is less effective in those with the condition in comparison to neurotypical (NT) controls. In the present study, adults with ASC (n = 13) and a matched NT group (n = 13) completed a visual-haptic size judgement task (cf. Gori et al., 2008) in which participants compared the height of wooden blocks using either vision or haptics, and in a dual modality condition in which visual-haptic stimuli were presented in size conflict. Participants with ASC tended to produce more reliable estimates than the NT group. However, dual modality performance was not well predicted by the MLE model for either group. Performance was subsequently compared to alternative models: a cue-switching model in which the participant switched between modalities from trial to trial (rather than integrating), and a model of non-optimal integration. Performance of both groups was statistically comparable to the cue-switching model. These findings suggest that adults with ASC adopted a similar strategy to NTs when processing conflicting visual-haptic information. Findings are discussed in relation to multisensory perception in ASC and methodological considerations associated with multisensory conflict paradigms.

6.
Proc Biol Sci ; 279(1736): 2171-9, 2012 Jun 07.
Article in English | MEDLINE | ID: mdl-22298845

ABSTRACT

Humans commonly face choices between multiple options with uncertain outcomes. Such situations occur in many contexts, from purely financial decisions (which shares should I buy?) to perceptuo-motor decisions between different actions (where should I aim my shot at goal?). Regardless of context, successful decision-making requires that the uncertainty at the heart of the decision-making problem is taken into account. Here, we ask whether humans can recover an estimate of exogenous uncertainty and then use it to make good decisions. Observers viewed a small dot that moved erratically until it disappeared behind an occluder. We varied the size of the occluder and the unpredictability of the dot's path. The observer attempted to capture the dot as it emerged from behind the occluded region by setting the location and extent of a 'catcher' along the edge of the occluder. The reward for successfully catching the dot was reduced as the size of the catcher increased. We compared human performance with that of an agent maximizing expected gain and found that observers consistently selected a catcher size close to this theoretical solution. These results suggest that humans are finely tuned to exogenous uncertainty information and can exploit it to guide action.


Subject(s)
Decision Making , Observer Variation , Uncertainty , Choice Behavior , Humans , Models, Theoretical , Monte Carlo Method , Reward , Visual Perception
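The expected-gain benchmark described above can be sketched as follows. The Gaussian exit-position distribution, the linear reward schedule, and all parameter values are assumptions for illustration, not the study's actual payoff function; the qualitative prediction is that a more unpredictable path (larger sigma) pushes the optimal catcher wider.

```python
import math

def p_catch(width, sigma):
    # Probability that the dot emerges inside a catcher of the given width,
    # centred on the mean of a Gaussian exit-position distribution.
    return math.erf((width / 2.0) / (sigma * math.sqrt(2.0)))

def expected_gain(width, sigma, max_reward=100.0, cost_per_unit=2.0):
    # Hypothetical payoff schedule: reward shrinks linearly as the catcher
    # widens, mirroring the reduced reward for larger catchers.
    reward = max(0.0, max_reward - cost_per_unit * width)
    return reward * p_catch(width, sigma)

def best_width(sigma, widths):
    # The ideal agent picks the catcher width that maximises expected gain.
    return max(widths, key=lambda w: expected_gain(w, sigma))

widths = [w / 10.0 for w in range(1, 400)]  # candidate widths 0.1 .. 39.9
```

Comparing an observer's chosen widths against `best_width` at each uncertainty level is one way to quantify how close behaviour comes to the maximizing agent.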
7.
J Vis ; 10(6): 14, 2010 Jun 01.
Article in English | MEDLINE | ID: mdl-20884563

ABSTRACT

There is little direct psychophysical evidence that the visual system contains mechanisms tuned to head-centered velocity when observers make a smooth pursuit eye movement. Much of the evidence is implicit, relying on measurements of bias (e.g., matching and nulling). We therefore measured discrimination contours in a space dimensioned by pursuit target motion and relative motion between target and background. Within this space, lines of constant head-centered motion are parallel to the main negative diagonal, so judgments dominated by mechanisms that combine individual components should produce contours with a similar orientation. Conversely, contours oriented parallel to the cardinal axes of the space indicate judgments based on individual components. The results provided evidence for mechanisms tuned to head-centered velocity: discrimination ellipses were significantly oriented away from the cardinal axes, toward the main negative diagonal. However, ellipse orientation was considerably less steep than predicted by a pure combination of components. This suggests that observers used a mixture of two strategies across trials, one based on individual components and another based on their sum. We provide a model that simulates this type of behavior and is able to reproduce the ellipse orientations we found.


Subject(s)
Discrimination, Psychological/physiology , Eye Movements/physiology , Form Perception/physiology , Pursuit, Smooth/physiology , Space Perception/physiology , Humans , Models, Neurological , Motion Perception , Orientation/physiology , Reflex, Vestibulo-Ocular/physiology
8.
Vision Res ; 50(16): 1510-8, 2010 Jul 21.
Article in English | MEDLINE | ID: mdl-20452369

ABSTRACT

Ground-planes have an important influence on the perception of 3D space (Gibson, 1950) and it has been shown that the assumption that a ground-plane is present in the scene plays a role in the perception of object distance (Bruno & Cutting, 1988). Here, we investigate whether this influence is exerted at an early stage of processing, to affect the rapid estimation of 3D size. Participants performed a visual search task in which they searched for a target object that was larger or smaller than distracter objects. Objects were presented against a background that contained either a frontoparallel or slanted 3D surface, defined by texture gradient cues. We measured the effect on search performance of target location within the scene (near vs. far) and how this was influenced by scene orientation (which, e.g., might be consistent with a ground or ceiling plane, etc.). In addition, we investigated how scene orientation interacted with texture gradient information (indicating surface slant), to determine how these separate cues to scene layout were combined. We found that the difference in target detection performance between targets at the front and rear of the simulated scene was maximal when the scene was consistent with a ground-plane, in line with the use of an elevation cue to object distance. In addition, we found a significant increase in the size of this effect when texture gradient information (indicating surface slant) was present, but no interaction between texture gradient and scene orientation information. We conclude that scene orientation plays an important role in the estimation of 3D size at an early stage of processing, and suggest that elevation information is linearly combined with texture gradient information for the rapid estimation of 3D size.


Subject(s)
Discrimination, Psychological , Size Perception/physiology , Cues , Humans , Reaction Time , Space Perception/physiology
9.
Curr Biol ; 20(8): 757-62, 2010 Apr 27.
Article in English | MEDLINE | ID: mdl-20399096

ABSTRACT

During smooth pursuit eye movement, observers often misperceive velocity. Pursued stimuli appear slower (Aubert-Fleischl phenomenon [1, 2]), stationary objects appear to move (Filehne illusion [3]), the perceived direction of moving objects is distorted (trajectory misperception [4]), and self-motion veers away from its true path (e.g., the slalom illusion [5]). Each illusion demonstrates that eye speed is underestimated with respect to image speed, a finding that has been taken as evidence of early sensory signals that differ in accuracy [4, 6-11]. Here we present an alternative Bayesian account, based on the idea that perceptual estimates are increasingly influenced by prior expectations as signals become more uncertain [12-15]. We show that the speeds of pursued stimuli are more difficult to discriminate than fixated stimuli. Observers are therefore less certain about motion signals encoding the speed of pursued stimuli, a finding we use to quantify the Aubert-Fleischl phenomenon based on the assumption that the prior for motion is centered on zero [16-20]. In doing so, we reveal an important property currently overlooked by Bayesian models of motion perception. Two Bayes estimates are needed at a relatively early stage in processing, one for pursued targets and one for image motion.


Subject(s)
Bayes Theorem , Head , Models, Biological , Movement/physiology , Pursuit, Smooth/physiology , Humans , Illusions , Motion Perception/physiology
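For Gaussian signals and a zero-centred Gaussian prior, the Bayesian account above reduces to simple shrinkage of the measured speed toward zero, with noisier measurements shrinking more. The numbers below are illustrative, not fitted values from the paper.

```python
def posterior_speed(v_measured, sigma_likelihood, sigma_prior):
    # Posterior mean for a Gaussian likelihood centred on v_measured and a
    # Gaussian prior centred on zero: a reliability-weighted shrinkage.
    w = sigma_prior**2 / (sigma_prior**2 + sigma_likelihood**2)
    return w * v_measured

# Same physical speed, two measurement reliabilities (hypothetical sigmas):
fixated = posterior_speed(10.0, 1.0, 4.0)  # precise signal during fixation
pursued = posterior_speed(10.0, 3.0, 4.0)  # noisier signal during pursuit
```

Because the pursuit signal is noisier, its posterior estimate is pulled harder toward zero, so the pursued stimulus is perceived as slower than the fixated one; this is the logic linking discrimination difficulty to the Aubert-Fleischl phenomenon.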
10.
J Vis ; 9(1): 33.1-11, 2009 Jan 23.
Article in English | MEDLINE | ID: mdl-19271903

ABSTRACT

One way the visual system estimates object motion during pursuit is to combine estimates of eye velocity and retinal motion. This raises the question of whether observers need direct access to retinal motion during pursuit. We tested this idea by varying the correlation between retinal motion and objective motion in a two-interval speed discrimination task. Responses were classified according to three motion cues: retinal speed (based on measured eye movements), objective speed, and the relative motion between pursuit target and stimulus. In the first experiment, feedback was based on relative motion and this cue fit the response curves best. In the second experiment, simultaneous relative motion was removed but observers still used the sequential relative motion between pursuit target and dot pattern to make their judgements. In a final experiment, feedback was given explicitly on the retinal motion, using online measurements of eye movements. Nevertheless, sequential relative motion still provided the best account of the data. The results suggest that observers do not have direct access to retinal motion when making perceptual judgements about movement during pursuit.


Subject(s)
Eye Movements/physiology , Motion , Pursuit, Smooth/physiology , Retina/physiology , Adult , Cues , Differential Threshold , Discrimination, Psychological , Feedback , Female , Humans , Male , Photic Stimulation/methods , Psychophysics , Time Factors
11.
Vision Res ; 48(17): 1820-30, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18597808

ABSTRACT

In order to compute a representation of an object's size within a 3D scene, the visual system must scale retinal size by an estimate of the distance to the object. Evidence from size discrimination and visual search studies suggests that we have no access to the representation of retinal size when performing such tasks. In this study we investigate whether observers have early access to retinal size prior to scene size. Observer performance was assessed in a visual search task (requiring search within a 3D scene) in which processing was interrupted at a range of short presentation times. If observers have access to retinal size then we might expect to find a presentation time before which observers behave as if using retinal size and after which they behave as if using scene size. Observers searched for a larger or smaller target object within a group of objects viewed against a textured plane slanted at 0° or 60°. Stimuli were presented for 100, 200, 400 or 800 ms and immediately followed by a mask. We measured the effect of target location within a stimulus (near vs. far) on task performance and how this was influenced by the background slant. The results of experiments 1 and 2 suggest that background slant had a significant influence on performance at all presentation times consistent with the use of scene size and not retinal size. Experiment 3 shows that this finding cannot be explained by a 2D texture contrast effect. Experiment 4 indicates that contextual information learned across a block of trials could be an important factor in such visual search experiments. In spite of this finding, our results suggest that distance scaling may occur prior to 100 ms and we find no clear evidence for explicit access to a retinal representation of size.


Subject(s)
Discrimination, Psychological , Saccades/physiology , Visual Perception/physiology , Form Perception/physiology , Humans , Photic Stimulation/methods , Psychophysics , Reaction Time , Space Perception/physiology
12.
J Vis ; 7(13): 10.1-10, 2007 Oct 29.
Article in English | MEDLINE | ID: mdl-17997638

ABSTRACT

Studies of visual search performance with shaded stimuli, in which the target is rotated by 180 degrees relative to the distracters, typically demonstrate more efficient performance in stimuli with vertical compared to horizontal shading gradients. In addition, performance is usually better for vertically shaded stimuli with top-light (seen as convex) distracters compared to those with bottom-light (seen as concave) distracters. These findings have been cited as evidence for the use of the prior assumptions of overhead lighting and convexity in the interpretation of shaded stimuli and suggest that these priors affect preattentive processing. Here we attempt to modify these priors by providing observers with visual-haptic training in an environment inconsistent with their priors. Observers' performance was measured in a visual search task and a shape judgment task before and after training. Following training, we found a reduced asymmetry between visual search performance with convex and concave distracters, suggesting a modification of the convexity prior. However, although evidence of a change in the light-from-above prior was found in the shape judgment task, no change was found in the visual search task. We conclude that experience can modify the convexity prior at a preattentive stage in processing; however, our training did not modify the light-from-above prior that is measured via visual search.


Subject(s)
Adaptation, Physiological , Depth Perception/physiology , Light , Visual Perception/physiology , Adult , Calibration , Form Perception/physiology , Humans , Judgment , Perceptual Masking , Photic Stimulation/methods
13.
Vision Res ; 47(4): 564-8, 2007 Feb.
Article in English | MEDLINE | ID: mdl-17011014

ABSTRACT

Our perception of speed has been shown to be distorted under a number of viewing conditions. Recently the well-known reduction of perceived speed at low contrast has led to Bayesian models of speed perception that account for these distortions with a slow speed 'prior'. To test the predictive, rather than the descriptive, power of the Bayesian approach we have investigated perceived speed at low luminance. Our results indicate that, for the mesopic and photopic range (0.13-30 cd m⁻²), the perceived speed of lower luminance patterns is virtually unaffected at low speeds (<4 deg s⁻¹) but is over-estimated at higher speeds (>4 deg s⁻¹). We show here that the results can be accounted for by an extension to a simple ratio model of speed encoding [Hammett, S. T., Champion, R. A., Morland, A. & Thompson, P. G. (2005). A ratio model of perceived speed in the human visual system. Proceedings of the Royal Society B, 272, 2351-2356.] that takes account of known changes in neural responses as a function of luminance, contrast and temporal frequency. The results are not consistent with current Bayesian approaches to modelling speed encoding that postulate a slow speed prior.


Subject(s)
Models, Psychological , Motion Perception , Perceptual Distortion , Bayes Theorem , Humans , Lighting , Pattern Recognition, Visual , Photic Stimulation/methods , Psychophysics
14.
Vision Res ; 47(3): 375-83, 2007 Feb.
Article in English | MEDLINE | ID: mdl-17187840

ABSTRACT

It has been shown that the perceived direction of a plaid with components of unequal contrast is biased towards the direction of the higher-contrast component [Stone, L. S., Watson, A. B., & Mulligan, J. B. (1990). Effect of contrast on the perceived direction of a moving plaid. Vision Research, 30, 1049-1067]. It was proposed that this effect is due to the influence of contrast on the perceived speed of the plaid components. This led to the conclusion that perceived plaid direction is computed by the intersection of constraints (IOC) of the perceived speed of the components rather than their physical speeds. We tested this proposal at a wider range of component speeds (2-16 deg/s) than used previously, across which the effect of contrast on perceived speed is seen to reverse. We find that across this range, perceived plaid direction cannot be predicted by a model taking the IOC of either the physical or the perceived component speeds. Our results are consistent with an explanation of 2D motion perception proposed by [Bowns, L. (1996). Evidence for a feature tracking explanation of why Type II plaids move in the vector sum direction at short durations. Vision Research, 36, 3685-3694.] in which the motion of the zero-crossing edges of the features in the stimulus contribute to the perceived direction of motion.


Subject(s)
Motion Perception , Pattern Recognition, Visual , Contrast Sensitivity , Humans , Models, Psychological , Photic Stimulation/methods , Psychophysics
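The intersection-of-constraints (IOC) computation that the abstract above tests can be sketched as a 2x2 linear solve: each grating component constrains the plaid velocity to a line, and the IOC solution is the point where the two lines meet. The component angles and speeds below are illustrative.

```python
import math

def ioc_velocity(theta1_deg, s1, theta2_deg, s2):
    # Each component constrains the plaid velocity v to the line
    # v . n_i = s_i, where n_i is the component's unit normal and s_i its
    # (physical or perceived) speed. Solve the resulting 2x2 system.
    t1, t2 = math.radians(theta1_deg), math.radians(theta2_deg)
    a, b = math.cos(t1), math.sin(t1)   # n1
    c, d = math.cos(t2), math.sin(t2)   # n2
    det = a * d - b * c                 # zero if the normals are parallel
    vx = (s1 * d - s2 * b) / det
    vy = (a * s2 - c * s1) / det
    return vx, vy

# Symmetric plaid: component normals at +/-30 deg, equal speeds; the IOC
# solution moves horizontally at s / cos(30 deg).
vx, vy = ioc_velocity(30.0, 2.0, -30.0, 2.0)
```

Feeding perceived rather than physical speeds into `s1` and `s2` rotates the IOC direction toward the component whose speed is over-estimated, which is the contrast-bias prediction the study evaluates.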
15.
Proc Biol Sci ; 272(1579): 2351-6, 2005 Nov 22.
Article in English | MEDLINE | ID: mdl-16243695

ABSTRACT

The perceived speed of moving images changes over time. Prolonged viewing of a pattern (adaptation) leads to an exponential decrease in its perceived speed. Similarly, responses of neurones tuned to motion reduce exponentially over time. It is tempting to link these phenomena. However, under certain conditions, perceived speed increases after adaptation and the time course of these perceptual effects varies widely. We propose a model that comprises two temporally tuned mechanisms whose sensitivities reduce exponentially over time. Perceived speed is taken as the ratio of these filters' outputs. The model captures increases and decreases in perceived speed following adaptation and describes our data well with just four free parameters. Whilst the model captures perceptual time courses that vary widely, parameter estimates for the time constants of the underlying filters are in good agreement with estimates of the time course of adaptation of direction selective neurones in the mammalian visual system.


Subject(s)
Models, Neurological , Motion Perception/physiology , Humans , Photic Stimulation , Time Factors
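The two-filter ratio model described above can be sketched as follows. All time constants, sensitivity floors and initial values are hypothetical placeholders, not the paper's fitted parameters; the point is only the mechanism, in which perceived speed is the ratio of two temporally tuned filter outputs whose sensitivities decay exponentially during adaptation.

```python
import math

def perceived_speed_ratio(t, tau_fast=20.0, tau_slow=60.0,
                          floor_fast=0.4, floor_slow=0.8):
    # Each filter's sensitivity decays exponentially from 1.0 toward a
    # floor; perceived speed is taken as the fast/slow output ratio.
    r_fast = floor_fast + (1.0 - floor_fast) * math.exp(-t / tau_fast)
    r_slow = floor_slow + (1.0 - floor_slow) * math.exp(-t / tau_slow)
    return r_fast / r_slow
```

With these placeholder floors the ratio falls below 1 as adaptation proceeds (perceived slowing); choosing floors the other way round would make the ratio rise, which is how the model accommodates conditions under which perceived speed increases after adaptation.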
16.
Perception ; 33(2): 237-47, 2004.
Article in English | MEDLINE | ID: mdl-15109164

ABSTRACT

The failure of shape constancy from stereoscopic information is widely reported in the literature. In this study we investigate how shape constancy is influenced by the size of the object and by the shape of the object's surface. Participants performed a shape-judgment task on objects of five sizes with three different surface shapes. The shapes used were: a frontoparallel rectangle, a triangular ridge surface, and a cylindrical surface, all of which contained the same maximum depth information, but different variations in depth across the surface. The results showed that, generally, small objects appear stretched and large objects appear squashed along the depth dimension. We also found a larger variance in shape judgments for rectangular stimuli than for cylindrical and ridge-shaped stimuli, suggesting that, when performing shape judgments with cylindrical and ridge-shaped stimuli, observers rely on a higher-order shape representation.


Subject(s)
Space Perception/physiology , Depth Perception/physiology , Humans , Judgment , Mathematics , Pattern Recognition, Visual/physiology , Psychological Tests , Size Perception/physiology , Surface Properties
17.
Vision Res ; 44(5): 483-7, 2004 Mar.
Article in English | MEDLINE | ID: mdl-14680774

ABSTRACT

The interaction of the depth cues of binocular disparity and motion parallax could potentially be used by the visual system to recover an estimate of the viewing distance. The present study investigated whether an interaction of stereo and motion has effects that persist over time to influence the perception of shape from stereo when the motion information is removed. Static stereoscopic ellipsoids were presented following the presentation of rotating stereoscopic ellipsoids, which were located either at the same or a different viewing distance. It was predicted that shape judgements for static stimuli would be better after presentation of a rotating stimulus at the same viewing distance, than after presentation of one at a different viewing distance. No such difference was found. It was concluded that an interaction between stereo and motion depth cues does not influence the perception of subsequently presented static objects.


Subject(s)
Cues , Photic Stimulation , Visual Perception/physiology , Depth Perception/physiology , Form Perception/physiology , Humans , Motion Perception/physiology , Psychophysics