ABSTRACT
Selecting suitable grasps on three-dimensional objects is a challenging visuomotor computation, which involves combining information about an object (e.g., its shape, size, and mass) with information about the actor's body (e.g., the optimal grasp aperture and hand posture for comfortable manipulation). Here, we used functional magnetic resonance imaging to investigate brain networks associated with these distinct aspects during grasp planning and execution. Human participants of either sex viewed and then executed preselected grasps on L-shaped objects made of wood and/or brass. By leveraging a computational approach that accurately predicts human grasp locations, we selected grasp points that disentangled the role of multiple grasp-relevant factors, that is, grasp axis, grasp size, and object mass. Representational Similarity Analysis revealed that grasp axis was encoded along dorsal-stream regions during grasp planning. Grasp size was first encoded in ventral-stream areas during grasp planning, then in premotor regions during grasp execution. Object mass was encoded in ventral-stream and (pre)motor regions only during grasp execution. Premotor regions further encoded visual predictions of grasp comfort, whereas the ventral stream encoded grasp comfort during execution, suggesting its involvement in haptic evaluation. These shifts in neural representations thus capture the sensorimotor transformations that allow humans to grasp objects.
SIGNIFICANCE STATEMENT
Grasping requires integrating object properties with constraints on hand and arm postures. Using a computational approach that accurately predicts human grasp locations by combining such constraints, we selected grasps on objects that disentangled the relative contributions of object mass, grasp size, and grasp axis during grasp planning and execution in a neuroimaging study.
Our findings reveal a greater role of dorsal-stream visuomotor areas during grasp planning, and, surprisingly, increasing ventral stream engagement during execution. We propose that during planning, visuomotor representations initially encode grasp axis and size. Perceptual representations of object material properties become more relevant instead as the hand approaches the object and motor programs are refined with estimates of the grip forces required to successfully lift the object.
Subjects
Brain, Psychomotor Performance, Humans, Brain Mapping/methods, Hand Strength, Hand
ABSTRACT
We rarely experience difficulty picking up objects, yet of all potential contact points on an object's surface, only a small proportion yield effective grasps. Here, we present extensive behavioral data alongside a normative model that correctly predicts human precision grasping of unfamiliar 3D objects. We tracked participants' forefinger and thumb as they picked up objects composed of 10 wood and brass cubes, configured to tease apart effects of shape, weight, orientation, and mass distribution. Grasps were highly systematic and consistent across repetitions and participants. We employed these data to construct a model that combines five cost functions related to force closure, torque, natural grasp axis, grasp aperture, and visibility. Even without free parameters, the model predicts individual grasps almost as well as different individuals predict one another's, but fitting the weights reveals the relative importance of the different constraints. The model also accurately predicts human grasps on novel 3D-printed objects with more naturalistic geometries and is robust to perturbations of its key parameters. Together, the findings provide a unified account of how we successfully grasp objects of different 3D shape, orientation, mass, and mass distribution.
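The abstract above describes scoring grasps by a weighted combination of five cost functions. As an illustrative sketch only (not the authors' implementation), candidate grasps could be ranked like this; the cost columns are hypothetical stand-ins for the five constraints named in the abstract:

```python
import numpy as np

def combined_grasp_cost(costs, weights=None):
    """Score candidate grasps by a weighted sum of normalized cost terms.

    costs: (n_grasps, n_costs) array; columns are hypothetical stand-ins for
    constraints such as force closure, torque, natural grasp axis,
    grasp aperture, and visibility.
    """
    costs = np.asarray(costs, dtype=float)
    if weights is None:
        # parameter-free variant: weight all constraints equally
        weights = np.ones(costs.shape[1])
    # rescale each cost column to [0, 1] so different units are comparable
    lo, hi = costs.min(axis=0), costs.max(axis=0)
    rng = np.where(hi > lo, hi - lo, 1.0)
    normalized = (costs - lo) / rng
    return normalized @ np.asarray(weights, dtype=float)

def best_grasp(costs, weights=None):
    """Index of the minimum combined-cost candidate grasp."""
    return int(np.argmin(combined_grasp_cost(costs, weights)))
```

Passing explicit weights instead of the equal-weight default would correspond to the fitted variant, in which the weights express the relative importance of the constraints.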
Subjects
Hand Strength/physiology, Models, Biological, Psychomotor Performance/physiology, Adult, Computational Biology, Female, Hand/physiology, Humans, Male, Torque, Young Adult
ABSTRACT
Visually inferring the elasticity of a bouncing object poses a challenge to the visual system: The observable behavior of the object depends on its elasticity but also on extrinsic factors, such as its initial position and velocity. Estimating elasticity requires disentangling these different contributions to the observed motion. We created 2-second simulations of a cube bouncing in a room and varied the cube's elasticity in 10 steps. The cube's initial position, orientation, and velocity were varied randomly to gain three random samples for each level of elasticity. We systematically limited the visual information by creating three versions of each stimulus: (a) a full rendering of the scene, (b) the cube in a completely black environment, and (c) a rigid version of the cube following the same trajectories but without rotating or deforming (also in a completely black environment). Thirteen observers rated the apparent elasticity of the cubes and the typicality of their motion. Generally, stimuli were judged as less typical if they showed rigid motion without rotations, highly elastic cubes, or unlikely events. Overall, elasticity judgments correlated strongly with the true elasticity but did not show perfect constancy. Yet, importantly, we found similar results for all three stimulus conditions, despite significant differences in their apparent typicality. This suggests that the trajectory alone contains the information required to make elasticity judgments.
Subjects
Elasticity/physiology, Motion Perception/physiology, Adult, Computer Simulation, Female, Humans, Judgment, Male, Motion, Orientation, Young Adult
ABSTRACT
The material-weight illusion (MWI) occurs when an object that looks heavy (e.g., stone) and one that looks light (e.g., Styrofoam) have the same mass. When such stimuli are lifted, the heavier-looking object feels lighter than the lighter-looking object, presumably because well-learned priors about the density of different materials are violated. We examined whether a similar illusion occurs when a certain weight distribution is expected (such as the metal end of a hammer being heavier) but weight is uniformly distributed. In experiment 1, participants lifted bipartite objects that appeared to be made of two materials (combinations of stone, Styrofoam, and wood) but were manipulated to have a uniform weight distribution. Most participants experienced an inverted MWI (i.e., the heavier-looking side felt heavier), suggesting an integration of incoming sensory information with density priors. However, a replication of the classic MWI was found when the objects appeared to be uniformly made of just one of the materials (experiment 2). Both illusions seemed to be independent of the forces used when the objects were lifted. When lifting bipartite objects but asked to judge the weight of the whole object, participants experienced no illusion (experiment 3). In experiment 4, we investigated weight perception in objects with a nonuniform weight distribution and again found evidence for an integration of prior and sensory information. Taken together, our seemingly contradictory results challenge most theories about the MWI. However, Bayesian integration of competing density priors with the likelihood of incoming sensory information may explain the opposing illusions.
NEW & NOTEWORTHY
We report a novel weight illusion that contradicts all current explanations of the material-weight illusion: When lifting an object composed of two materials, the heavier-looking side feels heavier, even when the true weight distribution is uniform. The opposite (classic) illusion is found when the same materials are lifted in two separate objects. Identifying the common mechanism underlying both illusions will have implications for perception more generally. A potential candidate is Bayesian inference with competing priors.
Subjects
Illusions, Weight Perception/physiology, Female, Humans, Male, Young Adult
ABSTRACT
When haptically exploring softness, humans use higher peak forces when indenting harder versus softer objects. Here, we investigated the influence of different channels and types of prior knowledge on initial peak forces. Participants explored two stimuli (hard vs. soft) and judged which was softer. In Experiment 1, participants received either semantic information (the words "hard" and "soft"), visual information (a video of the indentation), or prior information from recurring presentation (blocks of harder or softer pairs only). In a control condition, no prior information was given (randomized presentation). In the recurring condition, participants used higher initial forces when exploring harder stimuli. No effects were found in the control and semantic conditions. With visual prior information, participants used less force for harder objects. We speculate that these findings reflect differences between implicit knowledge induced by recurring presentation and explicit knowledge induced by visual and semantic information. To test this hypothesis, we investigated in Experiment 2 whether explicit prior information interferes with implicit information. Two groups of participants discriminated the softness of harder or softer stimuli in two conditions (blocked and randomized). The interference group received additional explicit information during the blocked condition; the implicit-only group did not. Implicit prior information was used for force adaptation only when no additional explicit information was given, whereas explicit information interfered with movement adaptation. Integrating prior knowledge thus only seems possible when that knowledge is induced implicitly, not explicitly.
Subjects
Cognition/physiology, Concept Formation/physiology, Learning/physiology, Adult, Female, Humans, Male, Semantics, Young Adult
ABSTRACT
Nonrigid materials, such as jelly, rubber, or sponge, move and deform in distinctive ways depending on their stiffness. Which cues do we use to infer stiffness? We simulated cubes of varying stiffness and optical appearance (e.g., wood, metal, wax, jelly) being subjected to two kinds of deformation: (a) a rigid cylinder pushing downward into the cube to various extents (shape change, but little motion: shape dominant) and (b) a rigid cylinder retracting rapidly from the cube (same initial shapes, differences in motion: motion dominant). Observers rated the apparent softness/hardness of the cubes. In the shape-dominant condition, ratings mainly depended on how deeply the rod penetrated the cube and were almost unaffected by the cube's intrinsic physical properties. In contrast, in the motion-dominant condition, ratings varied systematically with the cube's intrinsic stiffness and were less influenced by the extent of the perturbation. We find that both results are well predicted by the absolute magnitude of deformation, suggesting that when asked to judge stiffness, observers resort to simple heuristics based on the amount of deformation. Softness ratings for static, unperturbed cubes varied substantially and systematically depending on the optical properties. However, when animated, the ratings were again dominated by the extent of the deformation, and the effect of optical appearance was negligible. Together, our results suggest that to estimate stiffness, the visual system strongly relies on measures of the extent to which an object changes shape in response to forces.
Subjects
Cues, Form Perception/physiology, Motion Perception/physiology, Elasticity, Humans, Motion
ABSTRACT
Visually inferring the stiffness of objects is important for many tasks but is challenging because, unlike optical properties (e.g., gloss), mechanical properties do not directly affect image values. Stiffness must be inferred either (a) by recognizing materials and recalling their properties (associative approach) or (b) from shape and motion cues when the material is deformed (estimation approach). Here, we investigated interactions between these two inference types. Participants viewed renderings of unfamiliar shapes with 28 materials (e.g., nickel, wax, cork). In Experiment 1, they viewed nondeformed, static versions of the objects and rated 11 material attributes (e.g., soft, fragile, heavy). The results confirm that the optical materials elicited a wide range of apparent properties. In Experiment 2, using a blue plastic material with intermediate apparent softness, the objects were subjected to physical simulations of 12 shape-transforming processes (e.g., twisting, crushing, stretching). Participants rated softness and extent of deformation. Both correlated with the physical magnitude of deformation. Experiment 3 combined variations in optical cues with shape cues. We find that optical cues completely dominate. Experiment 4 included the entire motion sequence of the deformation, yielding significant contributions of optical as well as motion cues. Our findings suggest participants integrate shape, motion, and optical cues to infer stiffness, with optical cues playing a major role for our range of stimuli.
Subjects
Cues, Form Perception/physiology, Motion Perception/physiology, Pattern Recognition, Visual/physiology, Vision, Ocular/physiology, Female, Humans, Male, Surface Properties, Young Adult
ABSTRACT
Successfully picking up and handling objects requires taking into account their physical properties (e.g., material) and position relative to the body. Such features are often inferred by sight, but it remains unclear to what extent observers vary their actions depending on the perceived properties. To investigate this, we asked participants to grasp, lift and carry cylinders to a goal location with a precision grip. The cylinders were made of four different materials (Styrofoam, wood, brass and an additional brass cylinder covered with Vaseline) and were presented at six different orientations with respect to the participant (0°, 30°, 60°, 90°, 120°, 150°). Analysis of their grasping kinematics revealed differences in timing and spatial modulation at all stages of the movement that depended on both material and orientation. Object orientation affected the spatial configuration of index finger and thumb during the grasp, but also the timing of handling and transport duration. Material affected the choice of local grasp points and the duration of the movement from the first visual input until release of the object. We find that conditions that make grasping more difficult (orientation with the base pointing toward the participant, high weight and low surface friction) lead to longer durations of individual movement segments and a more careful placement of the fingers on the object.
Subjects
Fingers/physiology, Motor Activity/physiology, Psychomotor Performance/physiology, Visual Perception/physiology, Adult, Biomechanical Phenomena, Female, Humans, Male, Young Adult
ABSTRACT
We present an apparatus that allows independent stimulation of rods and short (S)-, middle (M)-, and long (L)-wavelength-sensitive cones. Previously presented devices allow rod and cone stimulation independently, but only for a spatially invariant stimulus design (Pokorny, Smithson, & Quinlan, 2004; Sun, Pokorny, & Smith, 2001b). We overcame this limitation by using two spectrally filtered projectors with overlapping projections. This approach allows independent rod and cone stimulation in a dynamic two-dimensional scene with appropriate resolution in the spatial, temporal, and receptor domains. Modulation depths were ±15% for M-cones and L-cones, ±20% for rods, and ±50% for S-cones, all with respect to an equal-energy mesopic background at 3.4 cd/m². Validation was provided by radiometric measures and behavioral data from two trichromats, one protanope, one deuteranope, and one night-blind observer.
Subjects
Photic Stimulation/methods, Retinal Cone Photoreceptor Cells/radiation effects, Retinal Rod Photoreceptor Cells/radiation effects, Adolescent, Adult, Choice Behavior, Color Vision/physiology, Contrast Sensitivity/physiology, Female, Humans, Male, Retinal Cone Photoreceptor Cells/physiology, Retinal Rod Photoreceptor Cells/physiology, Young Adult
ABSTRACT
Choosing appropriate grasp points is necessary for successfully interacting with objects in our environment. We brought two possible determinants of grasp point selection into conflict: the attempt to grasp an object near its center of mass to minimize torque and ensure stability, and the attempt to minimize movement distance. We let our participants grasp two elongated objects of different mass and surface friction that were approached from different distances to both sides of the object. Maximizing stability predicts grasp points close to the object's center, while minimizing movement costs predicts a bias of the grasp axis toward the side at which the movement started. We found smaller deviations from the center of mass for the smooth and heavy object, presumably because the larger torques and more slippery surface for the heavy object increase the chance of unwanted object rotation. However, our right-handed participants tended to grasp the objects to the right of the center of mass, irrespective of where the movement started. The rightward bias persisted when vision was removed once the hand was halfway to the object. It was reduced when the required precision was increased. Starting the movement above the object eliminated the bias. Grasping with the left hand, participants tended to grasp the object to the left of its center. Thus, the selected grasp points seem to reflect a compromise between maximizing stability by grasping near the center of mass and grasping on the side of the acting hand, perhaps to increase visibility of the object.
Subjects
Bias, Hand Strength/physiology, Movement/physiology, Psychomotor Performance/physiology, Visual Perception/physiology, Adult, Analysis of Variance, Female, Friction, Humans, Male, Photic Stimulation, Rotation, Young Adult
ABSTRACT
[This corrects the article DOI: 10.3389/fnins.2020.591898.].
ABSTRACT
How humans visually select where to grasp objects is determined by the physical object properties (e.g., size, shape, weight), the degrees of freedom of the arm and hand, as well as the task to be performed. We recently demonstrated that human grasps are near-optimal with respect to a weighted combination of different cost functions that make grasps uncomfortable, unstable, or impossible, e.g., due to unnatural grasp apertures or large torques. Here, we ask whether humans can consciously access these rules. We test whether humans can explicitly judge grasp quality derived from rules regarding grasp size, orientation, torque, and visibility. More specifically, we test whether grasp quality can be inferred (i) by using visual cues and motor imagery alone, (ii) from watching grasps executed by others, and (iii) through performing grasps, i.e., receiving visual, proprioceptive, and haptic feedback. Stimuli were novel objects made of 10 cubes of brass and wood (side length 2.5 cm) in various configurations. On each object, one near-optimal and one sub-optimal grasp were selected based on one cost function (e.g., torque), while the other constraints (grasp size, orientation, and visibility) were kept approximately constant or counterbalanced. Participants were visually cued to the location of the selected grasps on each object and verbally reported which of the two grasps was best. Across three experiments, participants were required to either (i) passively view the static objects and imagine executing the two competing grasps, (ii) passively view videos of other participants grasping the objects, or (iii) actively grasp the objects themselves. Our results show that, for a majority of tested objects, participants could already judge grasp optimality from simply viewing the objects and imagining grasping them, but were significantly better in the video and grasping sessions.
These findings suggest that humans can determine grasp quality even without performing the grasp, perhaps through motor imagery, and can further refine their understanding of how to correctly grasp an object through sensorimotor feedback, as well as by passively viewing others grasp objects.
ABSTRACT
Humans exhibit spatial biases when grasping objects. These biases may be due to actors attempting to shorten their reaching movements and therefore minimize energy expenditures. An alternative explanation could be that they arise from actors attempting to minimize the portion of a grasped object occluded from view by the hand. We reanalyze data from a recent study, in which a key condition decouples these two competing hypotheses. The analysis reveals that object visibility, not energy expenditure, most likely accounts for spatial biases observed in human grasping.
ABSTRACT
We report an illusion in which the felt weight of an object changes depending on whether a previously manipulated object was lighter or heavier. The illusion is not modulated by visual weight cues, yet it transfers across hands.
ABSTRACT
Perceiving material properties can be crucial for many tasks, such as determining food edibility or avoiding getting splashed, yet the visual perception of materials remains poorly understood. Most previous research has focused on optical characteristics (e.g., gloss, translucency). Here, however, we show that shape also provides powerful visual cues to material properties. When liquids pour, splash, or ooze, they organize themselves into characteristic shapes, which are highly diagnostic of the material's properties. Subjects viewed snapshots of simulated liquids of different viscosities and rated their similarity. Using maximum likelihood difference scaling (Maloney & Yang, 2003), we reconstructed perceptual scales for perceived viscosity as a function of the physical viscosity of the simulated fluids. The resulting psychometric function revealed a distinct sigmoidal shape, distinguishing runny liquids that flow easily from viscous gels that clump up into piles. A parameter-free model based on 20 simple shape statistics predicted the subjects' data surprisingly well. This suggests that when subjects are asked to compare the viscosity of static snapshots of liquids that differ only in terms of viscosity, they rely primarily on relatively simple measures of shape similarity.
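The sigmoidal psychometric function described above can be illustrated with a minimal sketch: a logistic curve standing in for the MLDS-derived perceptual scale, with a coarse grid-search fit. All parameter values and function names here are hypothetical illustrations, not the study's analysis:

```python
import numpy as np

def sigmoid_scale(log_viscosity, midpoint, slope):
    """Sigmoidal perceptual scale: perceived viscosity (0-1) as a function of
    log physical viscosity, saturating for very runny and very thick fluids."""
    x = np.asarray(log_viscosity, dtype=float)
    return 1.0 / (1.0 + np.exp(-slope * (x - midpoint)))

def fit_sigmoid(log_visc, perceived, midpoints, slopes):
    """Coarse grid search for the (midpoint, slope) pair minimizing squared
    error; a simple stand-in for a proper MLDS fit, used only to illustrate
    the sigmoidal shape of the scale."""
    best, best_err = None, np.inf
    for m in midpoints:
        for s in slopes:
            err = np.sum((sigmoid_scale(log_visc, m, s) - perceived) ** 2)
            if err < best_err:
                best, best_err = (m, s), err
    return best
```

The midpoint of the fitted curve would mark the transition between liquids perceived as runny and those perceived as gel-like, and the slope how abrupt that transition appears.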
Subjects
Form Perception/physiology, Vision, Ocular/physiology, Visual Perception/physiology, Adult, Cues, Female, Humans, Male, Models, Theoretical, Viscosity, Young Adult
ABSTRACT
When we search for visual targets in a cluttered background, we systematically move our eyes around to bring different regions of the scene into foveal view. We explored how visual search behavior changes when the fovea is not functional, as is the case in scotopic vision. Scotopic contrast sensitivity is significantly lower overall, with a functional scotoma in the fovea. We found that in scotopic search, for a medium- and a low-spatial-frequency target, individuals made longer-lasting fixations that were not broadly distributed across the entire search display but tended to peak in the upper center, especially for the medium-frequency target. The distributions of fixation locations are qualitatively similar to those of an ideal searcher that has human scotopic detectability across the visual field, and interestingly, these predicted distributions are different from those predicted by an ideal searcher with human photopic detectability. We conclude that although there are some qualitative differences between human and ideal search behavior, humans make principled adjustments in their search behavior as ambient light level decreases.