1.
Proc Natl Acad Sci U S A; 120(49): e2303162120, 2023 Dec 05.
Article in English | MEDLINE | ID: mdl-37983484

ABSTRACT

Many actions have instrumental aims, in which we move our bodies to achieve a physical outcome in the environment. However, we also perform actions with epistemic aims, in which we move our bodies to acquire information and learn about the world. A large literature on action recognition investigates how observers represent and understand the former class of actions; but what about the latter class? Can one person tell, just by observing another person's movements, what they are trying to learn? Here, five experiments explore epistemic action understanding. We filmed volunteers playing a "physics game" consisting of two rounds: Players shook an opaque box and attempted to determine i) the number of objects hidden inside, or ii) the shape of the objects inside. Then, independent subjects watched these videos and were asked to determine which videos came from which round: Who was shaking for number and who was shaking for shape? Across several variations, observers successfully determined what an actor was trying to learn, based only on their actions (i.e., how they shook the box), even when the box's contents were identical across rounds. These results demonstrate that humans can infer epistemic intent from physical behaviors, adding a new dimension to research on action understanding.


Subjects
Learning, Movement, Humans, Recognition (Psychology), Intention
2.
J Vis; 17(9): 21, 2017 Aug 01.
Article in English | MEDLINE | ID: mdl-28837967

ABSTRACT

Binocular vision is widely recognized as the most reliable source of 3D information within the peripersonal space, where grasping takes place. Since grasping is normally successful, it is often assumed that stereovision for action is accurate. This claim contradicts psychophysical studies showing that observers cannot estimate the 3D properties of an object veridically from binocular information. In two experiments, we compared a front-to-back grasp with a perceptual depth estimation task and found that in both conditions participants consistently relied on the same distorted 3D representation. The subjects experienced (a) compression of egocentric distances: objects looked closer to each other along the z-axis than they were, and (b) underconstancy of relative depth: closer objects looked deeper than farther objects. These biases, which stem from the same mechanism, varied in magnitude across observers, but they equally affected the perceptual and grasping task of each subject. In a third experiment, we found that the visuomotor system compensates for these systematic errors, which are present at planning, through online corrections allowed by visual and haptic feedback of the hand. Furthermore, we hypothesized that the two phenomena would give rise to estimates of the same depth interval that are geometrically inconsistent. Indeed, in a fourth experiment, we show that the landing positions of the grasping digits differ systematically depending on whether they result from absolute distance estimates or relative depth estimates, even when the targeted spatial locations are identical.
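The claim that the two biases yield geometrically inconsistent estimates of the same depth interval can be made concrete with a small numerical sketch. The parameterization below (an affine compression of egocentric distance toward a reference distance, plus a constant underconstancy factor on relative depth) is a hypothetical toy model chosen for illustration only; the distances, gains, and function names are not taken from the paper.

```python
# Toy illustration (hypothetical parameters, not fitted values from the study).
# Two surfaces lie at egocentric distances Z_NEAR and Z_FAR along the line of sight.
# Bias (a): perceived egocentric distance is compressed toward a reference distance Z0.
# Bias (b): perceived relative depth between the surfaces is underconstant (gain < 1).

Z_NEAR, Z_FAR = 40.0, 48.0      # true distances in cm; true depth interval = 8 cm
Z0, DIST_GAIN = 45.0, 0.6       # hypothetical compression: z' = Z0 + DIST_GAIN * (z - Z0)
DEPTH_GAIN = 0.8                # hypothetical underconstancy of relative depth

def perceived_distance(z: float) -> float:
    """Compressed egocentric distance estimate (bias a)."""
    return Z0 + DIST_GAIN * (z - Z0)

def perceived_depth(z_near: float, z_far: float) -> float:
    """Underconstant relative-depth estimate (bias b)."""
    return DEPTH_GAIN * (z_far - z_near)

# Two routes to localizing the far surface, as in a front-to-back grasp:
far_via_absolute = perceived_distance(Z_FAR)
far_via_relative = perceived_distance(Z_NEAR) + perceived_depth(Z_NEAR, Z_FAR)

print(f"far surface via absolute distance: {far_via_absolute:.1f} cm")   # 46.8 cm
print(f"far surface via near + depth:      {far_via_relative:.1f} cm")   # 48.4 cm
print(f"inconsistency:                     {far_via_relative - far_via_absolute:.1f} cm")
```

With these illustrative numbers the two computations place the same physical surface about 1.6 cm apart, because the compression gain applied to egocentric distance differs from the gain applied to relative depth. This is the kind of geometric inconsistency that the fourth experiment probes by comparing digit landing positions derived from absolute distance estimates versus relative depth estimates.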


Subjects
Distance Perception/physiology, Psychomotor Performance/physiology, Space Perception/physiology, Binocular Vision/physiology, Female, Humans, Male, Psychophysics/methods, Young Adult