ABSTRACT
Stereoscopic imagery often aims to evoke three-dimensional (3-D) percepts that are accurate and realistic-looking. The "gap" between 3-D imagery and real scenes is small, but focus cues typically remain incorrect because images are displayed on a single focal plane. Research has concentrated on the resulting vergence-accommodation conflicts. Yet, incorrect focus cues may also affect the appearance of 3-D imagery. We investigated whether incorrect focus cues reduce perceived realism of 3-D structure ("depth realism"). Experiment 1 used a multiple-focal-planes display to compare depth realism with correct focus cues vs. conventional stereo presentation. The stimuli were random-dot stereograms, which isolated the role of focus cues. Depth realism was consistently lower with incorrect focus cues, providing proof-of-principle evidence that they contribute to perceptual realism. Experiments 2 and 3 examined whether focus cues play a similar role with realistic objects, presented with an almost complete set of visual cues using a high-resolution, high-dynamic-range multiple-focal-planes display. We also examined the efficacy of approximating correct focus cues via gaze-contingent depth-of-field rendering. Improvements in depth realism with correct focus cues were less clear in more realistic scenes, indicating that the role of focus cues in depth realism depends on scene content. Rendering-based approaches, if anything, reduced depth realism, which we attribute to their inability to present higher-order aspects of blur correctly. Our findings suggest future general 3-D display solutions may need to present focus cues correctly to maximise perceptual realism.
Subjects
Ocular Accommodation , Cues (Psychology) , Humans , Technology , Perception
ABSTRACT
Errors of touch localization after hand nerve injuries are common, and their measurement is important for evaluating functional recovery. Available empirical accounts have significant methodological limitations, however, and a quantitatively rigorous and detailed description of touch localization in nerve injury is lacking. Here, we develop a new method of measuring touch localization and evaluate its value for use in nerve injury. Eighteen patients with transection injuries to the median/ulnar nerves and 33 healthy controls were examined. The hand was blocked from the participant's view and points were marked on the volar surface using an ultraviolet (UV) pen. These points served as targets for touch stimulation. Two photographs were taken, one with and one without UV lighting, rendering targets seen and unseen, respectively. The experimenter used the photograph with visible targets to register their locations, and participants reported the felt position of each stimulation on the photograph with unseen targets. The error of localization and its directional components were measured, separately from misreferrals (errors made across digits, or from a digit to the palm). Nerve injury was found to significantly increase the error of localization. These effects were specific to the territory of the repaired nerve and showed considerable variability at the individual level, with some patients showing no evidence of impairment. A few patients also made abnormally high numbers of misreferrals, and the pattern of misreferrals in patients differed from that observed in healthy controls.
NEW & NOTEWORTHY We provide a more rigorous and comprehensive account of touch localization in nerve injury than previously available. Our results show that touch localization is significantly impaired following median/ulnar nerve transection injuries and that these impairments are specific to the territory of the repaired nerve(s), vary considerably between patients, and can involve frequent errors spanning between digits.
Subjects
Touch Perception , Touch , Humans , Touch/physiology , Hand/innervation , Median Nerve , Ulnar Nerve/physiology
ABSTRACT
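The localization measures described in the abstract above reduce to simple vector arithmetic on photograph coordinates. A minimal sketch follows; the coordinate convention, units, and function name are illustrative assumptions, not the authors' analysis code.

```python
import math

def localization_error(target, response):
    """Euclidean localization error and its directional components.

    Coordinates are (x, y) in mm on the hand photograph; here +x is
    taken toward the radial side and +y toward the fingertips. The axis
    convention and function name are illustrative assumptions."""
    dx = response[0] - target[0]  # radial-ulnar component
    dy = response[1] - target[1]  # proximal-distal component
    return math.hypot(dx, dy), dx, dy

err, dx, dy = localization_error((10.0, 20.0), (13.0, 24.0))
# err = 5.0, dx = 3.0, dy = 4.0
```

Misreferrals would be handled separately, by checking whether target and response fall on different digits before computing the error vector.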
The brain must interpret sensory input from diverse receptor systems to estimate object properties. Much has been learned about the brain mechanisms behind these processes in vision, but our understanding of haptic perception remains less clear. Here we examined haptic judgments of object size, which require integrating multiple cutaneous and proprioceptive afferent signals, as a model problem. To identify candidate human brain regions that support this process, participants (n = 16) in an event-related functional MRI experiment grasped objects to categorize them as one of four sizes. Object sizes were calibrated psychophysically to be equally distinct for each participant. We applied representational similarity logic to whole brain, multivoxel searchlight analyses to identify brain regions that exhibit size-relevant voxelwise activity patterns. Of particular interest was to identify regions for which more similar sizes produce more similar patterns of activity, which constitutes evidence of a metric size code. Regions of the intraparietal sulcus and the lateral prefrontal cortex met this criterion, both within hands and across hands. We suggest that these regions compute representations of haptic size that abstract over the specific peripheral afferent signals generated in a grasp. Results of a matched visual size task, performed by the same participants and analyzed in the same fashion, identified similar regions, indicating that these representations may be partly modality general. We consider these results with respect to perspectives on magnitude estimation in general and to computational views on perceptual signal integration.
NEW & NOTEWORTHY Our understanding of the neural basis of haptics (perceiving the world through touch) remains incomplete. We used functional MRI to study human haptic judgments of object size, which require integrating multiple afferent signals. Multivoxel pattern analyses identified intraparietal and prefrontal regions that encode size haptically in a metric and hand-invariant fashion. Effector-independent haptic size estimates are useful on their own and in combination with other sensory estimates for a variety of perceptual and motor tasks.
Subjects
Brain Mapping/methods , Cerebral Cortex/physiology , Judgment/physiology , Size Perception/physiology , Touch Perception/physiology , Visual Perception/physiology , Adult , Cerebral Cortex/diagnostic imaging , Female , Humans , Computer-Assisted Image Processing , Magnetic Resonance Imaging , Male , Automated Pattern Recognition , Young Adult
ABSTRACT
Ideal grasping movements should maintain an appropriate probability of success, while controlling movement-related costs, in the presence of varying visual (and motor) uncertainty. It is often assumed that the probability of errors is managed by adjusting a margin for error in hand opening (e.g., opening the hand wider with increased visual uncertainty). This idea is intuitive, but non-trivial. It implies not only that the brain can estimate the amount of uncertainty, but also that it can compute how different possible alterations to the movement will affect the probability of errors, which we term the 'probability landscape'. Previous work suggests the amount of uncertainty is factored into grasping movements. Our aim was to determine whether grasping movements are also sensitive to the probability landscape. Subjects completed three different grasping tasks, with naturally different probability landscapes, such that appropriate margin-for-error responses to increased uncertainty were qualitatively different (opening the hand wider, the same amount, or less wide). We increased visual uncertainty by blurring vision, and by covering one eye. Movements were performed without visual feedback to isolate uncertainty in the brain's initial estimate of object properties. Changes to hand opening in response to increased visual uncertainty closely resembled those predicted by the margin-for-error account, suggesting that grasping is sensitive to the probability landscape associated with different tasks. Our findings therefore support the intuitive idea that grasping movements employ a true margin-for-error mechanism, which exerts active control over the probability of errors across changing circumstances.
Subjects
Motor Activity/physiology , Psychomotor Performance/physiology , Uncertainty , Visual Perception/physiology , Adolescent , Adult , Female , Humans , Male , Young Adult
ABSTRACT
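The margin-for-error idea in the abstract above has a simple quantitative core: if the size estimate carries Gaussian uncertainty, the aperture needed to hold the error probability constant grows linearly with that uncertainty. The following is a toy sketch of one slice of such a "probability landscape"; it is a deliberately simplified model (not the authors'), ignoring motor noise and movement costs, and all names and numbers are illustrative.

```python
import math

def p_error(aperture, size_est, sigma):
    """Probability the object turns out wider than the planned grip
    aperture, assuming true size ~ Normal(size_est, sigma).
    A deliberately simplified error model (illustrative only)."""
    z = (aperture - size_est) / sigma
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def aperture_for(p_target, size_est, sigma):
    """Smallest aperture that keeps the error probability at p_target,
    found by bisection (the stdlib has no inverse normal CDF)."""
    lo, hi = size_est, size_est + 10.0 * sigma
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if p_error(mid, size_est, sigma) > p_target:
            lo = mid
        else:
            hi = mid
    return hi

# the margin (aperture minus estimated size) scales with uncertainty:
# doubling sigma doubles the margin needed for the same error rate
m1 = aperture_for(0.05, 60.0, 2.0) - 60.0   # ~3.29 mm (1.645 * sigma)
m2 = aperture_for(0.05, 60.0, 4.0) - 60.0   # ~6.58 mm
```

The study's point is that the appropriate response to increased sigma depends on the shape of `p_error` for the task at hand, which in other tasks can make the optimal margin grow less, or not at all.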
When we feel and see an object, the nervous system integrates visual and haptic information optimally, exploiting the redundancy in multiple signals to estimate properties more precisely than is possible from either signal alone. We examined whether optimal integration is similarly achieved when using articulated tools. Such tools (tongs, pliers, etc.) are a defining characteristic of human hand function, but complicate the classical sensory 'correspondence problem' underlying multisensory integration. Optimal integration requires establishing the relationship between signals acquired by different sensors (hand and eye), and therefore in fundamentally unrelated units. The system must also determine when signals refer to the same property of the world (seeing and feeling the same thing) and only integrate those that do. This could be achieved by comparing the pattern of current visual and haptic input to known statistics of their normal relationship. Articulated tools disrupt this relationship, however, by altering the geometrical relationship between object properties and hand posture (the haptic signal). We examined whether different tool configurations are taken into account in visual-haptic integration. We indexed integration by measuring the precision of size estimates, and compared our results to optimal predictions from a maximum-likelihood integrator. Integration was near optimal, independent of tool configuration/hand posture, provided that visual and haptic signals referred to the same object in the world. Thus, sensory correspondence was determined correctly (trial-by-trial), taking tool configuration into account. This reveals highly flexible multisensory integration underlying tool use, consistent with the brain constructing internal models of tools' properties.
Subjects
Depth Perception/physiology , Psychomotor Performance/physiology , Touch Perception/physiology , Touch/physiology , Ocular Vision/physiology , Visual Perception/physiology , Adult , Psychological Discrimination , Sensory Feedback , Female , Humans , Male , Photic Stimulation , Physical Stimulation , Psychophysics , Size Perception
ABSTRACT
Binocular vision is often assumed to make a specific, critical contribution to online visual control of grasping by providing precise information about the separation between digits and object. This account overlooks the 'viewing geometry' typically encountered in grasping, however. Separation of hand and object is rarely aligned precisely with the line of sight (the visual depth dimension), and analysis of the raw signals suggests that, for most other viewing angles, binocular feedback is less precise than monocular feedback. Thus, online grasp control relying selectively on binocular feedback would not be robust to natural changes in viewing geometry. Alternatively, sensory integration theory suggests that different signals contribute according to their relative precision, in which case the role of binocular feedback should depend on viewing geometry, rather than being 'hard-wired'. We manipulated viewing geometry, and assessed the role of binocular feedback by measuring the effects on grasping of occluding one eye at movement onset. Loss of binocular feedback resulted in a significantly less extended final slow-movement phase when hand and object were separated primarily in the frontoparallel plane (where binocular information is relatively imprecise), compared to when they were separated primarily along the line of sight (where binocular information is relatively precise). Consistent with sensory integration theory, this suggests the role of binocular (and monocular) vision in online grasp control is not a fixed, 'architectural' property of the visuo-motor system, but arises instead from the interaction of viewer and situation, allowing robust online control across natural variations in viewing geometry.
Subjects
Sensory Feedback/physiology , Hand Strength/physiology , Psychomotor Performance/physiology , Binocular Vision/physiology , Visual Perception/physiology , Adult , Female , Humans , Male , Middle Aged , Movement/physiology , Online Systems , Photic Stimulation , Time Factors , Wrist/innervation , Young Adult
ABSTRACT
Most stereoscopic displays present images at a single focal plane, resulting in "conflicts" between the stimuli to vergence and accommodation. Minimizing these conflicts is beneficial because they can cause distorted depth percepts, visual fatigue, and reduced stereoscopic performance. One proposed solution is to present a sum of images at multiple focal planes and to vary focal depth continuously by distributing image intensity across planes, a technique referred to as depth filtering. We evaluated this digital approximation to real-world variations in focal depth by measuring accommodation responses to depth-filtered stimuli at various simulated distances. Specifically, we determined the maximum image-plane separation that supported accurate and reliable accommodation. We used an analysis of retinal-image formation to predict when responses might be inaccurate. Accommodation to depth-filtered images was accurate and precise for image-plane separations up to ~1 diopter, suggesting that depth filtering can be used to precisely match accommodation and vergence demands in a practical display. At larger plane separations, responses broke down in a manner consistent with our analysis. We develop this approach to consider how different spatial frequencies contribute to accommodation control. The results suggest that higher spatial frequencies contribute less to the accommodation response than has previously been thought.
Subjects
Ocular Accommodation/physiology , Depth Perception/physiology , Ocular Fixation/physiology , Binocular Vision/physiology , Adolescent , Adult , Asthenopia/physiopathology , Humans , Reference Values , Young Adult
ABSTRACT
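Linear depth filtering, as evaluated in the abstract above, splits a pixel's intensity between the two focal planes that bracket its simulated distance, in proportion to dioptric distance. A minimal sketch under that linear-interpolation assumption; the function name and example distances are illustrative.

```python
def depth_filter_weights(target_d, near_plane_d, far_plane_d):
    """Split a pixel's intensity between the two focal planes bracketing
    its simulated distance, so the intensity-weighted mean dioptric
    distance equals the target (linear depth filtering).

    All arguments are in diopters (1/metres); the nearer plane has the
    larger dioptric value. Function name is an assumption."""
    if not far_plane_d <= target_d <= near_plane_d:
        raise ValueError("target must lie between the two planes")
    w_near = (target_d - far_plane_d) / (near_plane_d - far_plane_d)
    return w_near, 1.0 - w_near

# simulate a point at 1.6 D using planes at 2.0 D and 1.0 D
# (a 1 D separation, the largest that supported accurate accommodation)
w_near, w_far = depth_filter_weights(1.6, 2.0, 1.0)
# w_near ~ 0.6, w_far ~ 0.4
```

Interpolating in diopters rather than metres matters because accommodation demand, and defocus blur, are linear in diopters.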
The role of binocular vision in grasping has frequently been assessed by measuring the effects on grasp kinematics of covering one eye. These studies have typically used three or fewer objects presented at three or fewer distances, raising the possibility that participants learn the properties of the stimulus set. If so, even relatively poor visual information may be sufficient to identify which object/distance configuration is presented on a given trial, in effect providing an additional source of depth information. Here we show that the availability of this uncontrolled cue leads to an underestimate of the effects of removing binocular information, and therefore to an overestimate of the effectiveness of the remaining cues. We measured the effects of removing binocular cues on visually open-loop grasps using (1) a conventional small stimulus-set, and (2) a large, pseudo-randomised stimulus set, which could not be learned. Removing binocular cues resulted in a significant change in grip aperture scaling in both conditions: peak grip apertures were larger (when reaching to small objects), and scaled less with increases in object size. However, this effect was significantly larger with the randomised stimulus set. These results confirm that binocular information makes a significant contribution to grasp planning. Moreover, they suggest that learned stimulus information can contribute to grasping in typical experiments, and so the contribution of information from binocular vision (and from other depth cues) may not have been measured accurately.
Subjects
Hand/physiology , Psychomotor Performance , Binocular Vision , Adult , Biomechanical Phenomena , Cues (Psychology) , Female , Humans , Linear Models , Male , Photic Stimulation , Young Adult
ABSTRACT
When integrating signals from vision and haptics the brain must solve a "correspondence problem" so that it only combines information referring to the same object. An invariant spatial rule could be used when grasping with the hand: here the two signals should only be integrated when the estimates of hand and object position coincide. Tools complicate this relationship, however, because visual information about the object, and the location of the hand, are separated spatially. We show that when a simple tool is used to estimate size, the brain integrates visual and haptic information in a near-optimal fashion, even with a large spatial offset between the signals. Moreover, we show that an offset between the tool-tip and the object results in similar reductions in cross-modal integration as when the felt and seen positions of an object are offset in normal grasping. This suggests that during tool use the haptic signal is treated as coming from the tool-tip, not the hand. The brain therefore appears to combine visual and haptic information, not based on the spatial proximity of sensory stimuli, but based on the proximity of the distal causes of stimuli, taking into account the dynamics and geometry of tools.
Subjects
Psychomotor Performance/physiology , Tool Use Behavior/physiology , Touch Perception/physiology , Visual Perception/physiology , Adult , Psychological Discrimination , Hand/physiology , Humans , Photic Stimulation/methods , Physical Stimulation , Young Adult
ABSTRACT
In the past few years a new scenario for robot-based applications has emerged. Service and mobile robots have opened new market niches. Also, new frameworks for shop-floor robot applications have been developed. In all these contexts, robots are requested to perform tasks within open-ended, possibly dynamically varying conditions. These new requirements also call for a change of paradigm in the design of robots: on-line and safe feedback motion control becomes the core of modern robot systems. Future robots will learn autonomously, interact safely and possess qualities like self-maintenance. Attaining these features would have been relatively easy if a complete model of the environment were available, and if the robot actuators could execute motion commands perfectly relative to this model. Unfortunately, a complete world model is not available, and robots have to plan and execute tasks in the presence of environmental uncertainties, which makes sensing an important component of new-generation robots. For this reason, today's new-generation robots are equipped with more and more sensing components, and consequently they are ready to actively deal with the high complexity of the real world. Complex sensorimotor tasks such as exploration require coordination between the motor system and the sensory feedback. For robot control purposes, sensory feedback should be adequately organized in terms of relevant features and the associated data representation. In this paper, we propose an overall functional picture linking sensing to action in closed-loop sensorimotor control of robots for touch (hands, fingers). Basic qualities of haptic perception in humans inspire the models and categories comprising the proposed classification. The objective is to provide a reasoned, principled perspective on the connections between different taxonomies used in the Robotics and human haptics literature. The specific case of active exploration is chosen to ground interesting use cases. Two reasons motivate this choice. First, in the literature on haptics, exploration has been treated only to a limited extent compared to grasping and manipulation. Second, exploration involves specific robot behaviors that exploit distributed and heterogeneous sensory data.
ABSTRACT
Depth information from focus cues (accommodation and the gradient of retinal blur) is typically incorrect in three-dimensional (3-D) displays because the light comes from a planar display surface. If the visual system incorporates information from focus cues into its calculation of 3-D scene parameters, this could cause distortions in perceived depth even when the 2-D retinal images are geometrically correct. In Experiment 1 we measured the direct contribution of focus cues to perceived slant by varying independently the physical slant of the display surface and the slant of a simulated surface specified by binocular disparity (binocular viewing) or perspective/texture (monocular viewing). In the binocular condition, slant estimates were unaffected by display slant. In the monocular condition, display slant had a systematic effect on slant estimates. Estimates were consistent with a weighted average of slant from focus cues and slant from disparity/texture, where the cue weights are determined by the reliability of each cue. In Experiment 2, we examined whether focus cues also have an indirect effect on perceived slant via the distance estimate used in disparity scaling. We varied independently the simulated distance and the focal distance to a disparity-defined 3-D stimulus. Perceived slant was systematically affected by changes in focal distance. Accordingly, depth constancy (with respect to simulated distance) was significantly reduced when focal distance was held constant compared to when it varied appropriately with the simulated distance to the stimulus. The results of both experiments show that focus cues can contribute to estimates of 3-D scene parameters. Inappropriate focus cues in typical 3-D displays may therefore contribute to distortions in perceived space.
Subjects
Cues (Psychology) , Depth Perception/physiology , Ocular Fixation , Ocular Accommodation , Adult , Distance Perception , Humans , Three-Dimensional Imaging , Psychological Models , Orientation , Photic Stimulation/methods , Retina/physiology , Vision Disparity , Binocular Vision , Monocular Vision
ABSTRACT
Neuropsychological results support the proposal that the human visual system is organised into distinct processing pathways, one for conscious perception and one for the control of action. Here, we compare perceptual and action responses following a pre-response delay. Experiment 1 required participants to reproduce remembered locations and found that although perceptual matches were unaffected by delays of up to 4 s, pointing responses were significantly biased after only 2 s. Experiment 2 examined whether both the transport and grasp components of a natural prehensile movement were similarly affected by delay. Both peak wrist velocities and peak grip-apertures were affected equivalently by delay, suggesting that the two components of a prehensile movement have similar temporal constraints. The results from both experiments are consistent with the general perception-action dichotomy as originally proposed by Milner and Goodale [The visual brain in action, Oxford: Oxford University Press, 1995].
Subjects
Movement , Reaction Time/physiology , Visual Perception , Adult , Female , Hand , Humans , Male , Memory , Mental Processes
ABSTRACT
Binocular cues have been shown previously to make an important contribution to the control of natural prehensile movements in adults [Visual Cognition 4 (1997) 113, Vision Research 32 (1992) 1513, Neuropsychologia 38 (2000) 1473]. The present study examined the role of binocular vision in the control of prehension in middle childhood. Fourteen children aged 5-6 years, and 16 children aged 10-11 years reached out and grasped different sized objects at different distances, in either binocular or monocular viewing conditions. In contrast to adult data, many of the principal kinematic indices of the children's reaches were unaffected by the removal of binocular information. The older children, like adults, spent an increased amount of time in the final approach to the object when only monocular information was available. However, both peak wrist velocities and peak grip apertures were unaffected by the removal of binocular information and continued to scale with object properties in the normal way. These results suggest that the use of binocular cues to control prehensile movements is not yet mature at the age of 10-11 years.
Subjects
Psychomotor Performance/physiology , Binocular Vision/physiology , Child , Child Development , Preschool Child , Distance Perception/physiology , Female , Humans , Male , Reaction Time/physiology , Size Perception/physiology , Monocular Vision/physiology
ABSTRACT
The primary visual sources of depth and size information are binocular cues and motion parallax. Here, the authors determine the efficacy of these cues to control prehension by presenting them in isolation from other visual cues. When only binocular cues were available, reaches showed normal scaling of the transport and grasp components with object distance and size. However, when only motion parallax was available, only the transport component scaled reliably. No additional increase in scaling was found when both cues were available simultaneously. Therefore, although equivalent information is available from binocular and motion parallax information, the latter may be of relatively limited use for the control of the grasp. Binocular disparity appears selectively important for the control of the grasp.
Subjects
Motion Perception/physiology , Motor Skills/physiology , Movement/physiology , Space Perception/physiology , Binocular Vision/physiology , Physiological Adaptation , Adult , Arm/physiology , Depth Perception/physiology , Distance Perception/physiology , Female , Hand/physiology , Hand Strength/physiology , Humans , Male , Vision Disparity/physiology
ABSTRACT
The present study examined the effects of a pre-movement delay on the kinematics of prehension in middle childhood. Twenty-five children between the ages of 5 and 11 years made visually open-loop reaches to two different sized objects at two different distances along the midline. Reaches took place either (i) immediately, or (ii) 2 s after the occlusion of the stimulus. In all age groups, reaches following the pre-movement delay were characterised by longer movement durations, lower peak velocities, larger peak grip apertures and longer time spent in the final slow phase of the movement. This pattern of results suggests that the representations that control the transport and grasp component are affected similarly by delay, and is consistent with the results previously reported for adults. Such representations therefore appear to develop before the age of 5.
Subjects
Hand Strength/physiology , Motor Skills/physiology , Movement/physiology , Biomechanical Phenomena , Child , Preschool Child , Female , Humans , Male , Time Factors
ABSTRACT
How does the visual system combine information from different depth cues to estimate three-dimensional scene parameters? We tested a maximum-likelihood estimation (MLE) model of cue combination for perspective (texture) and binocular disparity cues to surface slant. By factoring the reliability of each cue into the combination process, MLE provides more reliable estimates of slant than would be available from either cue alone. We measured the reliability of each cue in isolation across a range of slants and distances using a slant-discrimination task. The reliability of the texture cue increases as |slant| increases and does not change with distance. The reliability of the disparity cue decreases as distance increases and varies with slant in a way that also depends on viewing distance. The trends in the single-cue data can be understood in terms of the information available in the retinal images and issues related to solving the binocular correspondence problem. To test the MLE model, we measured perceived slant of two-cue stimuli when disparity and texture were in conflict and the reliability of slant estimation when both cues were available. Results from the two-cue study indicate, consistent with the MLE model, that observers weight each cue according to its relative reliability: Disparity weight decreased as distance and |slant| increased. We also observed the expected improvement in slant estimation when both cues were available. With few discrepancies, our data indicate that observers combine cues in a statistically optimal fashion and thereby reduce the variance of slant estimates below that which could be achieved from either cue alone. These results are consistent with other studies that quantitatively examined the MLE model of cue combination. Thus, there is a growing empirical consensus that MLE provides a good quantitative account of cue combination and that sensory information is used in a manner that maximizes the precision of perceptual estimates.
Subjects
Cues (Psychology) , Depth Perception/physiology , Form Perception/physiology , Vision Disparity/physiology , Binocular Vision/physiology , Humans , Likelihood Functions , Visual Pattern Recognition
ABSTRACT
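The MLE combination rule tested in the abstract above has a compact closed form: each cue is weighted by its relative reliability (inverse variance), and the combined variance is lower than either single-cue variance. A minimal sketch; the function name and example numbers are illustrative, not values from the study.

```python
def mle_combine(estimates, variances):
    """Maximum-likelihood combination of independent Gaussian cues:
    each estimate is weighted by its relative reliability (inverse
    variance), and the combined variance is smaller than any single
    cue's, which is the precision benefit the abstract describes."""
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    combined = sum(w * e for w, e in zip(weights, estimates))
    return combined, weights, 1.0 / total

# illustrative numbers only: disparity cue signals 30 deg slant
# (variance 4), texture cue signals 36 deg (variance 12)
slant, weights, var = mle_combine([30.0, 36.0], [4.0, 12.0])
# slant ~ 31.5, weights ~ [0.75, 0.25], var ~ 3.0
```

The cue-conflict logic of the experiment follows directly: perturbing one cue shifts the combined percept in proportion to that cue's weight, which is how the weights are measured empirically.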
When we hold an object while looking at it, estimates from visual and haptic cues to size are combined in a statistically optimal fashion, whereby the "weight" given to each signal reflects their relative reliabilities. This allows object properties to be estimated more precisely than would otherwise be possible. Tools such as pliers and tongs systematically perturb the mapping between object size and the hand opening. This could complicate visual-haptic integration because it may alter the reliability of the haptic signal, thereby disrupting the determination of appropriate signal weights. To investigate this we first measured the reliability of haptic size estimates made with virtual pliers-like tools (created using a stereoscopic display and force-feedback robots) with different "gains" between hand opening and object size. Haptic reliability in tool use was straightforwardly determined by a combination of sensitivity to changes in hand opening and the effects of tool geometry. The precise pattern of sensitivity to hand opening, which violated Weber's law, meant that haptic reliability changed with tool gain. We then examined whether the visuo-motor system accounts for these reliability changes. We measured the weight given to visual and haptic stimuli when both were available, again with different tool gains, by measuring the perceived size of stimuli in which visual and haptic sizes were varied independently. The weight given to each sensory cue changed with tool gain in a manner that closely resembled the predictions of optimal sensory integration. The results are consistent with the idea that different tool geometries are modeled by the brain, allowing it to calculate not only the distal properties of objects felt with tools, but also the certainty with which those properties are known. These findings highlight the flexibility of human sensory integration and tool-use, and potentially provide an approach for optimizing the design of visual-haptic devices.
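The reliability bookkeeping described above can be sketched under a simplified assumption: if a tool maps hand opening to object size with gain g, a hand-opening error of e becomes an object-size error of g * e, so haptic variance in object units scales with g squared and the predicted visual weight shifts accordingly. All names and numbers below are illustrative, and the model deliberately ignores the Weber's-law violations the abstract reports.

```python
def haptic_var_through_tool(hand_var, gain):
    """Variance of an object-size estimate felt through a pliers-like
    tool: a hand-opening error e maps to an object-size error gain * e,
    so variance scales with gain ** 2. A simplified model that ignores
    the Weber's-law violations reported in the abstract."""
    return gain ** 2 * hand_var

def predicted_visual_weight(visual_var, hand_var, gain):
    """Optimal-integration prediction for the weight given to vision
    when the haptic signal arrives through a tool with the given gain."""
    hv = haptic_var_through_tool(hand_var, gain)
    return (1.0 / visual_var) / (1.0 / visual_var + 1.0 / hv)

# a higher-gain tool makes haptics less reliable in object units,
# so the prediction is that vision is weighted more heavily
low = predicted_visual_weight(visual_var=4.0, hand_var=4.0, gain=1.0)   # 0.5
high = predicted_visual_weight(visual_var=4.0, hand_var=4.0, gain=2.0)  # 0.8
```

The study's finding is that measured weights tracked this kind of gain-dependent prediction, implying the brain models the tool's geometry when setting cue weights.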
ABSTRACT
Stereoscopic displays have become important for many applications, including operation of remote devices, medical imaging, surgery, scientific visualization, and computer-assisted design. But the most significant and exciting development is the incorporation of stereo technology into entertainment: specifically, cinema, television, and video games. In these applications for stereo, three-dimensional (3D) imagery should create a faithful impression of the 3D structure of the scene being portrayed. In addition, the viewer should be comfortable and not leave the experience with eye fatigue or a headache. Finally, the presentation of the stereo images should not create temporal artifacts like flicker or motion judder. This paper reviews current research on stereo human vision and how it informs us about how best to create and present stereo 3D imagery. The paper is divided into four parts: (1) getting the geometry right, (2) depth cue interactions in stereo 3D media, (3) focusing and fixating on stereo images, and (4) how temporal presentation protocols affect flicker, motion artifacts, and depth distortion.
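The "getting the geometry right" part above concerns the mapping between on-screen disparities and portrayed depth. A minimal sketch of the standard viewing geometry; the function names and the 62 mm interocular distance are illustrative assumptions, not values from the paper.

```python
import math

def screen_disparity_mm(ipd_mm, viewing_dist_mm, portrayed_dist_mm):
    """On-screen horizontal separation between left- and right-eye image
    points needed to portray a point at portrayed_dist_mm, by similar
    triangles. Positive = uncrossed (behind the screen), negative =
    crossed (in front). Textbook geometry, not taken from the paper."""
    return ipd_mm * (portrayed_dist_mm - viewing_dist_mm) / portrayed_dist_mm

def angular_disparity_deg(ipd_mm, viewing_dist_mm, portrayed_dist_mm):
    """Small-angle approximation to the angular disparity: the change in
    vergence demand between the screen plane and the portrayed point."""
    return math.degrees(ipd_mm / viewing_dist_mm - ipd_mm / portrayed_dist_mm)

# a point portrayed 1 m behind a screen viewed at 1 m,
# for an assumed 62 mm interocular distance
d = screen_disparity_mm(62.0, 1000.0, 2000.0)
# d = 31.0 mm, uncrossed
```

Note that the accommodation stimulus stays at the screen regardless of the portrayed distance, which is the vergence-accommodation conflict discussed throughout these abstracts.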
ABSTRACT
When we grasp with one eye covered, the finger and thumb are typically opened wider than for binocularly guided grasps, as if to build a margin-for-error into the movement. Also, patients with visual form agnosia can have profound deficits in their (otherwise relatively normal) grasping when binocular information is removed. One interpretation of these findings is that there is a functional specialism for binocular vision in the control of grasping. Alternatively, cue-integration theory suggests that binocular and monocular depth cues are combined in the control of grasping, and so impaired performance reflects not the loss of 'critical' binocular cues, but increased uncertainty per se. Unfortunately, removing binocular information confounds removing particular (binocular) depth cues with an overall reduction in the available information, and so such experiments cannot distinguish between these alternatives. We measured the effects on visually open-loop grasping of selectively removing monocular (texture) or binocular depth cues. To allow meaningful comparisons, we made psychophysical measurements of the uncertainty in size estimates in each case, so that the informativeness of binocular and monocular cues was known in each condition. Consistent with cue-integration theory, removing either binocular or monocular cues resulted in similar increases in grip apertures. In a separate experiment, we also confirmed that changes in uncertainty per se (keeping the same depth cues available) resulted in larger grip apertures. Overall, changes in the margin-for-error in grasping movements were determined by the uncertainty in size estimates and not by the presence or absence of particular depth cues. Our data therefore argue against a binocular specialism for grasp programming. Instead, grip apertures were smaller when binocular and monocular cues were available than with either cue alone, providing strong evidence that the visuo-motor system exploits the redundancy available in multiple sources of information, and integrates binocular and monocular cues to improve grasping performance.
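The margin-for-error account above can be sketched as a toy model. All numbers and the margin constant `k` are hypothetical assumptions, not values fitted to the data: the planned grip aperture exceeds the estimated object size by a safety margin proportional to the size-estimate uncertainty, so combining binocular and monocular cues, which lowers that uncertainty, shrinks the aperture.

```python
import math

def grip_aperture(size_mm, sigma_mm, k=2.0):
    """Hypothetical margin-for-error model: the planned aperture exceeds
    the estimated size by k standard deviations of the size estimate."""
    return size_mm + k * sigma_mm

def combined_sigma(sigma_bin, sigma_mono):
    """SD of the optimal binocular + monocular combination
    (reliabilities, i.e. inverse variances, add)."""
    return math.sqrt(1.0 / (1.0 / sigma_bin**2 + 1.0 / sigma_mono**2))

# Hypothetical single-cue SDs (mm), assumed equally informative.
sigma_bin, sigma_mono = 3.0, 3.0
sigma_both = combined_sigma(sigma_bin, sigma_mono)

# Combining cues reduces uncertainty, so the planned aperture is smaller
# with both cues than with either cue alone -- the pattern reported above.
aperture_both = grip_aperture(50.0, sigma_both)
aperture_single = grip_aperture(50.0, sigma_bin)
```

Under this model, removing either of two equally informative cues inflates the margin by the same amount, matching the finding that removing binocular or monocular cues produced similar aperture increases.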
Subjects
Cues (Psychology), Depth Perception/physiology, Hand Strength/physiology, Vision, Binocular/physiology, Adult, Discrimination, Psychological, Female, Humans, Male, Movement/physiology, Photic Stimulation/methods, Predictive Value of Tests, Psychophysics, Young Adult
ABSTRACT
Recent evidence suggests that the visual control of prehension may be less dependent on binocular information than has previously been thought. Studies investigating this question, however, have generally only examined reaches to single objects presented in isolation, even though natural prehensile movements are typically directed at objects in cluttered scenes which contain many objects. The present study was designed, therefore, to assess the contribution of binocular information to the control of prehensile movements in multiple-object scenes. Subjects reached for and grasped objects presented either in isolation or in the presence of one, two or four additional 'flanking' objects, under binocular and monocular viewing conditions. So that the role of binocular information could be clearly determined, subjects made reaches both in the absence of a visible scene around the target objects (self-illuminated objects presented in the dark) and under normal ambient lighting conditions. Analysis of kinematic parameters indicated that the removal of binocular information did not significantly affect many of the major indices of the transport component, including peak wrist velocity. However, peak grip apertures increased and subjects spent more time in the final slow phase of movement, prior to grasping the object, during monocularly guided reaches. The dissociation between effects of binocular versus monocular viewing on transport and grasp parameters was observed irrespective of the presence of flanking objects. These results therefore further question the view that binocular vision is pre-eminent in the control of natural prehensile movements.