Results 1 - 20 of 47
1.
Psychol Res ; 88(2): 307-337, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37847268

ABSTRACT

Accounting for how the human mind represents the internal and external world is a crucial feature of many theories of human cognition. Central to this question is the distinction between modal and amodal representational formats. It has often been assumed that one but not both of these two types of representations underlie processing in specific domains of cognition (e.g., perception, mental imagery, and language). However, in this paper, we suggest that both formats play a major role in most cognitive domains. We believe that a comprehensive theory of cognition requires a solid understanding of these representational formats and their functional roles within and across different domains of cognition, the developmental trajectory of these representational formats, and their role in dysfunctional behavior. Here we sketch such an overarching perspective that brings together research from diverse subdisciplines of psychology on modal and amodal representational formats so as to unravel their functional principles and their interactions.


Subject(s)
Cognition, Humans
2.
Mem Cognit ; 46(1): 158-171, 2018 01.
Article in English | MEDLINE | ID: mdl-28875474

ABSTRACT

Previous behavioral and neurophysiological research has shown better memory for horizontal than for vertical locations. In these studies, participants navigated toward these locations. In the present study we investigated whether the orientation of the spatial plane per se was responsible for this difference. We thus had participants learn locations visually from a single perspective and retrieve them from multiple viewpoints. In three experiments, participants studied colored tags on a horizontally or vertically oriented board within a virtual room and recalled these locations with different layout orientations (Exp. 1) or from different room-based perspectives (Exps. 2 and 3). All experiments revealed evidence for equal recall performance in horizontal and vertical memory. In addition, the patterns for recall from different test orientations were rather similar. Consequently, our results suggest that memory is qualitatively similar for both vertical and horizontal two-dimensional locations, given that these locations are learned from a single viewpoint. Thus, prior differences in spatial memory may have originated from the structure of the space or the fact that participants navigated through it. Additionally, the strong performance advantages for perspective shifts (Exps. 2 and 3) relative to layout rotations (Exp. 1) suggest that configurational judgments are not only based on memory of the relations between target objects, but also encompass the relations between target objects and the surrounding room, for example, in the form of a memorized view.


Subject(s)
Mental Recall/physiology, Space Perception/physiology, Spatial Learning/physiology, Spatial Memory/physiology, Adult, Female, Humans, Male, Middle Aged, Young Adult
3.
Exp Brain Res ; 235(4): 1063-1079, 2017 04.
Article in English | MEDLINE | ID: mdl-28078359

ABSTRACT

Although several process models have described the cognitive processing stages that are involved in mentally rotating objects, the exact nature of the rotation process itself remains elusive. According to embodied cognition, cognitive functions are deeply grounded in the sensorimotor system. We thus hypothesized that modal rotation perceptions should influence mental rotations. We conducted two studies in which participants had to judge if a rotated letter was visually presented canonically or mirrored. Concurrently, participants had to judge if a tactile rotation on their palm changed direction during the trial. The results show that tactile rotations can systematically influence mental rotation performance in that same rotations are favored. In addition, the results show that mental rotations produce a response compatibility effect: clockwise mental rotations facilitate responses to the right, while counterclockwise mental rotations facilitate responses to the left. We conclude that the execution of mental rotations activates cognitive mechanisms that are also used to perceive rotations in different modalities and that are associated with directional motor control processes.


Subject(s)
Imagination/physiology, Mental Processes/physiology, Rotation, Space Perception/physiology, Touch/physiology, Adolescent, Analysis of Variance, Female, Humans, Male, Photic Stimulation, Physical Stimulation/instrumentation, Psychomotor Performance, Reaction Time/physiology, Young Adult
4.
Cogn Process ; 18(3): 211-228, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28349249

ABSTRACT

According to embodied cognition, bodily interactions with our environment shape the perception and representation of our body and the surrounding space, that is, peripersonal space. To investigate the adaptive nature of these spatial representations, we introduced a multisensory conflict between vision and proprioception in an immersive virtual reality. During individual bimanual interaction trials, we gradually shifted the visual hand representation. As a result, participants unknowingly shifted their actual hands to compensate for the visual shift. We then measured the adaptation to the invoked multisensory conflict by means of a self-localization and an external localization task. While effects of the conflict were observed in both tasks, the effects systematically interacted with the type of localization task and the available visual information while performing the localization task (i.e., the visibility of the virtual hands). The results imply that the localization of one's own hands is based on a multisensory integration process, which is modulated by the saliency of the currently most relevant sensory modality and the involved frame of reference. Moreover, the results suggest that our brain strives for consistency between its body and spatial estimates, thereby adapting multiple, related frames of reference, and the spatial estimates within, due to a sensory conflict in one of them.


Subject(s)
Brain/physiology, Proprioception/physiology, Spatial Processing/physiology, Visual Perception/physiology, Adolescent, Adult, Female, Hand, Humans, Male, Personal Space, Space Perception, Virtual Reality, Vision, Ocular, Young Adult
5.
Exp Brain Res ; 234(8): 2415-31, 2016 08.
Article in English | MEDLINE | ID: mdl-27068808

ABSTRACT

Action-oriented eye-tracking studies have shown that eye fixations reveal much about current behavioral intentions. The eyes typically fixate those positions of a tool or an object where the fingers will be placed next, or those positions in a scene where obstacles need to be avoided to successfully reach or transport a tool or object. Here, we asked to what extent eye fixations can also reveal active cognitive inference processes, which are expected to integrate bottom-up visual information with internal knowledge for planning suitable object interactions task-dependently. In accordance with the available literature, we expected that task-relevant knowledge would include sensorimotor, semantic, and mechanical aspects. To investigate if and in which way this internal knowledge influences eye fixation behavior while planning an object interaction, we presented pictures of familiar and unfamiliar tools and instructed participants to either pantomime 'lifting' or 'using' the respective tool. When confronted with unfamiliar tools, participants fixated the tool's effector part more closely and longer in comparison with familiar tools. This difference was particularly prominent during 'using' trials when compared with 'lifting' trials. We suggest that this difference indicates that the brain actively extracts mechanical information about the unknown tool in order to infer its appropriate usage. Moreover, the successive fixations over a trial indicate that a dynamic, task-oriented, active cognitive process unfolds, which integrates available tool knowledge with visually gathered information to plan and determine the currently intended tool interaction.


Subject(s)
Anticipation, Psychological/physiology, Fixation, Ocular/physiology, Motor Activity/physiology, Visual Perception/physiology, Adult, Female, Humans, Male, Recognition, Psychology, Young Adult
6.
J Vis ; 16(1): 18, 2016.
Article in English | MEDLINE | ID: mdl-26818971

ABSTRACT

It is well known that our eyes typically fixate those objects in a scene with which interactions are about to unfold. During manual interactions, our eyes usually anticipate the next subgoal and thus serve top-down, goal-driven information-extraction requirements, probably driven by a schema-based task representation. On the other hand, motor control research concerning object manipulations has extensively demonstrated how grasping choices are often influenced by deeper considerations about the final goal of manual interactions. Here we show that these deeper considerations are also reflected in early eye fixation behavior, significantly before the hand makes contact with the object. In this study, subjects were asked to either pretend to drink out of the presented object or to hand it over to the experimenter. The objects were presented upright or upside down, thus affording a thumb-up (prone) or a thumb-down (supine) grasp. Eye fixation data show a clear anticipatory preference for the region where the index finger is going to be placed. Indeed, fixations highly correlate with the final index finger position, thus subserving the planning of the actual manual action. Moreover, eye fixations reveal several orders of manual planning: Fixation distributions depend not only on the object orientation but also on the interaction task. These results suggest a fully embodied, bidirectional sensorimotor coupling of eye-hand coordination: The eyes help in planning and determining the actual manual object interaction, considering where to grasp the presented object in light of the orientation and type of the presented object and the actual manual task to be accomplished with the object.


Subject(s)
Fixation, Ocular/physiology, Motor Activity/physiology, Task Performance and Analysis, Visual Perception/physiology, Adult, Biomechanical Phenomena/physiology, Female, Humans, Male, Young Adult
7.
Cogn Process ; 16 Suppl 1: 249-53, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26224266

ABSTRACT

Grasp selection for object manipulation depends on a person's preferences (e.g., a comfortable grasp) and on the object's shape (i.e., how the object can be grasped). Both have to be matched when planning to grasp an object. According to the simulation hypothesis, humans simulate the action outcome for each of the grasp options to select the best grasp. However, if an object offers many different grasp options, further processing is necessary to reduce the number of possibilities. According to the preference hypothesis, a preferred grasp is first computed and then adjusted to comply with the object's shape, if necessary. To test the hypotheses, we asked participants to grasp knobs that could be grasped with two, four, or an unconstrained range of grasps. When participants chose among two or four options, planning time increased with the number of possible grasps, which is in line with the simulation hypothesis. However, when grasps were unconstrained, planning times were as short as in the two-grasp condition, suggesting another (possibly preference-based) selection process in this case. In contrast to planning time, grasp choices were comparable regardless of the knob's shape. This suggests that a common criterion most likely determined grasp selection in all conditions.


Subject(s)
Hand Strength/physiology, Internal-External Control, Orientation, Psychomotor Performance/physiology, Touch Perception/physiology, Adult, Analysis of Variance, Biomechanical Phenomena, Female, Humans, Male, Photic Stimulation, Reaction Time/physiology, Young Adult
8.
Exp Brain Res ; 232(6): 1677-88, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24534913

ABSTRACT

Object-directed grasping movements are usually adjusted in anticipation of the direction and extent of a subsequent object rotation. Such anticipatory grasp selections have been mostly explained in terms of the kinematics of the arm movement. However, object rotations of different directions and extents also differ in their dynamics and in how the tasks are represented. Here, we examined how the dynamics, the kinematics, and the cognitive representation of an object manipulation affect anticipatory grasp selections. We asked participants to grasp an object and rotate it by different angles and in different directions. To examine the influence of dynamic factors, we varied the object's weight. To examine the influence of the cognitive task representation, we instructed identical object rotations as either toward-top or away-from-top rotations. While instructed object rotation and cognitive task representation did affect grasp selection over the entire course of the experiment, a rather small effect of object weight only appeared late in the experiment. We suggest that grasp selections are determined on different levels. The representation of the kinematics of the object movement determines grasp selection on a trial-by-trial basis. The effect of object weight affects grasp selection by a slower adaptation process. This result implies that even simple motor acts, such as grasping, can only be understood when cognitive factors, such as the task representation, are taken into account.


Subject(s)
Adaptation, Physiological/physiology, Cognition/physiology, Hand Strength/physiology, Orientation/physiology, Psychomotor Performance/physiology, Adult, Analysis of Variance, Biomechanical Phenomena, Female, Forearm/physiology, Functional Laterality/physiology, Humans, Male, Nonlinear Dynamics, Reaction Time/physiology, Rotation, Young Adult
9.
Evol Comput ; 22(1): 139-58, 2014.
Article in English | MEDLINE | ID: mdl-23746295

ABSTRACT

It has been shown previously that the control of a robot arm can be efficiently learned using the XCSF learning classifier system, which is a nonlinear regression system based on evolutionary computation. So far, however, the predictive knowledge about how actual motor activity changes the state of the arm system has not been exploited. In this paper, we utilize the forward velocity kinematics knowledge of XCSF to alleviate the negative effect of noisy sensors for successful learning and control. We incorporate Kalman filtering for estimating successive arm positions, iteratively combining sensory readings with XCSF-based predictions of hand position changes over time. The filtered arm position is used to improve both trajectory planning and further learning of the forward velocity kinematics. We test the approach on a simulated kinematic robot arm model. The results show that the combination can improve learning and control performance significantly. However, it also shows that variance estimates of XCSF prediction may be underestimated, in which case self-delusional spiraling effects can hinder effective learning. Thus, we introduce a heuristic parameter, which can be motivated by theory, and which limits the influence of XCSF's predictions on its own further learning input. As a result, we obtain drastic improvements in noise tolerance, allowing the system to cope with more than 10 times higher noise levels.
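The filtering step described above, iteratively blending a forward-model prediction with a noisy sensory reading, can be sketched as a scalar Kalman filter. This is an illustrative reconstruction, not the paper's XCSF-based implementation: the one-dimensional state, the known forward model, and the noise parameters `q` and `r` are all simplifying assumptions.

```python
import numpy as np

def kalman_step(x_est, p_est, u, z, q=0.01, r=0.25):
    """One scalar Kalman-filter iteration: predict the next position
    from the commanded displacement u, then correct the prediction
    with the noisy position reading z."""
    x_pred = x_est + u            # forward-model prediction
    p_pred = p_est + q            # process noise inflates uncertainty
    k = p_pred / (p_pred + r)     # Kalman gain: how much to trust the sensor
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# The filtered estimate tracks the true position more closely than the
# raw noisy readings do, which is what keeps further learning stable.
rng = np.random.default_rng(0)
x_true, x_est, p_est = 0.0, 0.0, 1.0
err_raw, err_filt = [], []
for _ in range(200):
    x_true += 0.1                          # commanded movement
    z = x_true + rng.normal(0.0, 0.5)      # noisy position sensor
    x_est, p_est = kalman_step(x_est, p_est, 0.1, z)
    err_raw.append(abs(z - x_true))
    err_filt.append(abs(x_est - x_true))
print(np.mean(err_filt) < np.mean(err_raw))
```

In the paper's setting, the forward prediction comes from XCSF's learned velocity kinematics rather than a fixed model, and the heuristic parameter mentioned above additionally caps how strongly such predictions feed back into learning.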


Subject(s)
Artificial Intelligence/trends, Models, Theoretical, Robotics/methods, Biomechanical Phenomena, Regression Analysis, Time Factors
10.
Behav Brain Res ; 471: 115096, 2024 08 05.
Article in English | MEDLINE | ID: mdl-38849007

ABSTRACT

BACKGROUND: Theoretical models and behavioural studies indicate faster approach behaviour for high-calorie food (approach bias) among healthy participants. A previous study with Virtual Reality (VR) and online motion-capture quantified this approach bias towards food and non-food cues in a controlled VR environment with hand movements. The aim of this study was to test the specificity of a manual approach bias for high-calorie food in grasp movements compared to low-calorie food and neutral objects of different complexity, namely, simple balls and geometrically more complex office tools. METHODS: In a VR setting, healthy participants (N = 27) repeatedly grasped or pushed high-calorie food, low-calorie food, balls and office tools in randomized order with 30 item repetitions. All objects were rated for valence and arousal. RESULTS: High-calorie food was less attractive and more arousing in subjective ratings than low-calorie food and neutral objects. Movement onset was faster for high-calorie food in push-trials, but overall push responses were comparable. In contrast, responses to high-calorie food relative to low-calorie food and to control objects were faster in grasp trials for later stages of interaction (grasp and collect). Non-parametric tests confirmed an approach bias for high-calorie food. CONCLUSION: The behavioural bias was specific to high-calorie food objects. The results confirm the presence of bottom-up advantages in motor-cognitive behaviour for high-calorie food in a non-clinical population. More systematic variations of object fidelity, as well as studies in clinical populations, are still needed. The study confirms the utility of VR for assessing approach behaviour by exploring manual interactions in a controlled environment.


Subject(s)
Food, Virtual Reality, Humans, Male, Female, Adult, Young Adult, Psychomotor Performance/physiology, Hand Strength/physiology, Cues, Movement/physiology
11.
Biol Cybern ; 107(1): 61-82, 2013 Feb.
Article in English | MEDLINE | ID: mdl-23090574

ABSTRACT

Humans show admirable capabilities in movement planning and execution. They can perform complex tasks in various contexts, using the available sensory information very effectively. Body models and continuous body state estimations appear necessary to realize such capabilities. We introduce the Modular Modality Frame (MMF) model, which maintains a highly distributed, modularized body model, continuously updating modularized, probabilistic body state estimations over time. Modularization is realized with respect to modality frames, that is, sensory modalities in particular frames of reference, and with respect to particular body parts. We evaluate MMF performance on a simulated, nine-degree-of-freedom arm in 3D space. The results show that MMF is able to maintain accurate body state estimations despite high sensor and motor noise. Moreover, by comparing the sensory information available in different modality frames, MMF can identify faulty sensory measurements on the fly. In the near future, applications to lightweight robot control should be pursued. Moreover, MMF may be enhanced with neural encodings by introducing neural population codes and learning techniques. Finally, more dexterous goal-directed behavior should be realized by exploiting the available redundant state representations.


Subject(s)
Models, Theoretical, Movement, Humans, Probability
12.
Open Mind (Camb) ; 7: 111-129, 2023.
Article in English | MEDLINE | ID: mdl-37416076

ABSTRACT

Human behavioral choices can reveal intrinsic and extrinsic decision-influencing factors. We investigate the inference of choice priors in situations of referential ambiguity. In particular, we use the scenario of signaling games and investigate to which extent study participants profit from actively engaging in the task. Previous work has revealed that speakers are able to infer listeners' choice priors upon observing ambiguity resolution. However, it was also shown that only a small group of participants was able to strategically construct ambiguous situations to create learning opportunities. This paper sets out to address how prior inference unfolds in more complex learning scenarios. In Experiment 1, we examine whether participants accumulate evidence about inferred choice priors across a series of four consecutive trials. Despite the intuitive simplicity of the task, information integration turns out to be only partially successful. Integration errors result from a variety of sources, including transitivity failure and recency bias. In Experiment 2, we investigate how the ability to actively construct learning scenarios affects the success of prior inference and whether the iterative settings improve the ability to choose utterances strategically. The results suggest that full task engagement and explicit access to the reasoning pipeline facilitate the invocation of optimal utterance choices as well as the accurate inference of listeners' choice priors.
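Accumulating evidence about a listener's choice prior across consecutive trials can be illustrated with a simple Beta-Bernoulli update, in which each observed ambiguity resolution counts as one Bernoulli observation. This is a generic sketch, not the study's model: the binary coding of choices and the uniform Beta(1, 1) starting belief are assumptions for illustration.

```python
def update(alpha, beta, chose_target):
    """Bayesian update of a Beta(alpha, beta) belief about the
    probability that the listener prefers the target referent."""
    return (alpha + 1, beta) if chose_target else (alpha, beta + 1)

alpha, beta = 1, 1                        # uniform prior over the preference
for choice in [True, True, False, True]:  # four consecutive observations
    alpha, beta = update(alpha, beta, choice)
mean = alpha / (alpha + beta)             # inferred preference strength
print(round(mean, 2))                     # 4 of 6 pseudo-counts -> 0.67
```

An ideal observer of this kind integrates all four trials symmetrically; the transitivity failures and recency bias reported above are exactly the ways in which participants deviated from such order-invariant accumulation.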

13.
Psychol Res ; 76(3): 345-63, 2012 May.
Article in English | MEDLINE | ID: mdl-21499901

ABSTRACT

The grasp orientation when grasping an object is frequently aligned in anticipation of the intended rotation of the object (end-state comfort effect). We analyzed grasp orientation selection in a continuous task to determine the mechanisms underlying the end-state comfort effect. Participants had to grasp a box by a circular handle, which allowed for arbitrary grasp orientations, and then had to rotate the box by various angles. Experiments 1 and 2 revealed both that the rotation's direction considerably determined grasp orientations and that end-postures varied considerably. Experiments 3 and 4 further showed that visual stimuli and initial arm postures biased grasp orientations if the intended rotation could be easily achieved. The data show that end-state comfort but also other factors determine grasp orientation selection. A simple mechanism that integrates multiple weighted biases can account for the data.


Subject(s)
Hand Strength/physiology, Orientation/physiology, Psychomotor Performance/physiology, Biomechanical Phenomena/physiology, Female, Hand, Humans, Male, Rotation, Young Adult
14.
Cogn Process ; 13 Suppl 1: S113-6, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22806661

ABSTRACT

The brain often integrates multisensory sources of information in a way that is close to the optimal according to Bayesian principles. Since sensory modalities are grounded in different, body-relative frames of reference, multisensory integration requires accurate transformations of information. We have shown experimentally, for example, that a rotating tactile stimulus on the palm of the right hand can influence the judgment of ambiguously rotating visual displays. Most significantly, this influence depended on the palm orientation: when facing upwards, a clockwise rotation on the palm yielded a clockwise visual judgment bias; when facing downwards, the same clockwise rotation yielded a counterclockwise bias. Thus, tactile rotation cues biased visual rotation judgment in a head-centered reference frame. Recently, we have generated a modular, multimodal arm model that is able to mimic aspects of such experiments. The model co-represents the state of an arm in several modalities, including a proprioceptive, joint angle modality as well as head-centered orientation and location modalities. Each modality represents each limb or joint separately. Sensory information from the different modalities is exchanged via local forward and inverse kinematic mappings. Also, re-afferent sensory feedback is anticipated and integrated via Kalman filtering. Information across modalities is integrated probabilistically via Bayesian-based plausibility estimates, continuously maintaining a consistent global arm state estimation. This architecture is thus able to model the described effect of posture-dependent motion cue integration: tactile and proprioceptive sensory information may yield top-down biases on visual processing. Equally, such information may influence top-down visual attention, expecting particular arm-dependent motion patterns. Current research implements such effects on visual processing and attention.
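For Gaussian cues, the near-optimal integration described above reduces to a precision-weighted average. The following sketch is a generic illustration of that principle with made-up means and variances, not code from the arm model itself:

```python
def fuse(mu_a, var_a, mu_b, var_b):
    """Bayes-optimal fusion of two independent Gaussian estimates:
    the posterior mean is a precision-weighted average, and the
    posterior variance is smaller than either cue's variance."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    mu = w_a * mu_a + (1.0 - w_a) * mu_b
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    return mu, var

# An ambiguous visual rotation estimate (mean 0, high variance) is
# pulled toward a reliable clockwise tactile cue (mean 1, low variance),
# mirroring the tactile bias on visual rotation judgments.
mu, var = fuse(0.0, 4.0, 1.0, 1.0)
print(round(mu, 3), round(var, 3))   # 0.8 0.8
```

The posture dependence reported above enters before this step: the tactile cue must first be transformed into the head-centered reference frame, so the same rotation on the palm contributes a different signed mean depending on hand orientation.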


Subject(s)
Judgment/physiology, Models, Biological, Motion Perception/physiology, Proprioception, Touch/physiology, Attention/physiology, Humans, Metacarpus/innervation, Orientation, Photic Stimulation, Posture, Probability, Rotation, Time Factors
15.
Front Psychol ; 13: 867328, 2022.
Article in English | MEDLINE | ID: mdl-35846607

ABSTRACT

Pursuing a precise, focused train of thought requires cognitive effort. Even more effort is necessary when more alternatives need to be considered or when the imagined situation becomes more complex. Cognitive resources available to us limit the cognitive effort we can spend. In line with previous work, an information-theoretic, Bayesian brain approach to cognitive effort is pursued: to solve tasks in our environment, our brain needs to invest information, that is, negative entropy, to impose structure, or focus, away from a uniform structure or other task-incompatible, latent structures. To get a more complete formalization of cognitive effort, a resourceful event-predictive inference model (REPI) is introduced, which offers computational and algorithmic explanations about the latent structure of our generative models, the active inference dynamics that unfold within, and the cognitive effort required to steer the dynamics, for example, to purposefully process sensory signals, decide on responses, and invoke their execution. REPI suggests that we invest cognitive resources to infer preparatory priors, activate responses, and anticipate action consequences. Due to our limited resources, though, the inference dynamics are prone to task-irrelevant distractions. For example, the task-irrelevant side of the imperative stimulus causes the Simon effect and, due to similar reasons, we fail to optimally switch between tasks. An actual model implementation simulates such task interactions and offers first estimates of the involved cognitive effort. The approach may be further studied and promises to offer deeper explanations about why we get quickly exhausted from multitasking, how we are influenced by irrelevant stimulus modalities, why we exhibit magnitude interference, and, during social interactions, why we often fail to take the perspective of others into account.
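The idea that imposing focus costs information, moving a belief away from a uniform (maximum-entropy) distribution, can be made concrete as the KL divergence from the focused distribution to the uniform one. This is a generic formalization consistent with the information-theoretic framing above, not code from the REPI model itself:

```python
import numpy as np

def focus_cost(p):
    """Information (in nats) invested to turn a uniform belief over
    len(p) options into the distribution p:
    KL(p || uniform) = log(n) - H(p)."""
    p = np.asarray(p, dtype=float)
    logp = np.log(p, out=np.zeros_like(p), where=p > 0)
    entropy = -np.sum(p * logp)
    return np.log(p.size) - entropy

# Fully committing to one of four candidate responses costs log(4) nats;
# a partially focused belief costs less; a uniform belief costs nothing.
print(round(focus_cost([1, 0, 0, 0]), 3))          # 1.386  (full focus)
print(round(focus_cost([0.7, 0.1, 0.1, 0.1]), 3))  # 0.446  (partial focus)
```

On this reading, a Simon-type distractor is costly because the task-irrelevant stimulus dimension injects probability mass onto the wrong response, and additional nats must be invested to refocus the distribution on the correct one.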

16.
Front Neurorobot ; 16: 881673, 2022.
Article in English | MEDLINE | ID: mdl-36035589

ABSTRACT

Flexible, goal-directed behavior is a fundamental aspect of human life. Based on the free energy minimization principle, the theory of active inference formalizes the generation of such behavior from a computational neuroscience perspective. Based on the theory, we introduce an output-probabilistic, temporally predictive, modular artificial neural network architecture, which processes sensorimotor information, infers behavior-relevant aspects of its world, and invokes highly flexible, goal-directed behavior. We show that our architecture, which is trained end-to-end to minimize an approximation of free energy, develops latent states that can be interpreted as affordance maps. That is, the emerging latent states signal which actions lead to which effects dependent on the local context. In combination with active inference, we show that flexible, goal-directed behavior can be invoked, incorporating the emerging affordance maps. As a result, our simulated agent flexibly steers through continuous spaces, avoids collisions with obstacles, and prefers pathways that lead to the goal with high certainty. Additionally, we show that the learned agent is highly suitable for zero-shot generalization across environments: After training the agent in a handful of fixed environments with obstacles and other terrains affecting its behavior, it performs similarly well in procedurally generated environments containing different amounts of obstacles and terrains of various sizes at different locations.

17.
Front Rehabil Sci ; 3: 806114, 2022.
Article in English | MEDLINE | ID: mdl-36189032

ABSTRACT

Currently, there is neither a standardized mode for the documentation of phantom sensations and phantom limb pain, nor for their visualization as perceived by patients. We have therefore created a tool that allows for both, as well as for the quantification of the patient's visible and invisible body image. A first version provides the principal functions: (1) Adapting a 3D avatar for self-identification of the patient; (2) modeling the shape of the phantom limb; (3) adjusting the position of the phantom limb; (4) drawing pain and cramps directly onto the avatar; and (5) quantifying their respective intensities. Our tool (C.A.L.A.) was evaluated with 33 occupational therapists, physiotherapists, and other medical staff. Participants were presented with two cases in which the appearance and the position of the phantom had to be modeled and pain and cramps had to be drawn. The usability of the software was evaluated using the System Usability Scale and its functional range was evaluated using a self-developed questionnaire and semi-structured interview. In addition, our tool was evaluated on 22 patients with limb amputations. For each patient, body image as well as phantom sensation and pain were modeled to evaluate the software's functional scope. The accuracy of the created body image was evaluated using a self-developed questionnaire and semi-structured interview. Additionally, pain sensation was assessed using the SF-McGill Pain Questionnaire. The System Usability Scale reached a level of 81%, indicating high usability. Observing the participants, though, identified several operational difficulties. While the provided functions were considered useful by most participants, the semi-structured interviews revealed the need for an improved pain documentation component. In conclusion, our tool allows for an accurate visualization of phantom limbs and phantom limb sensations. It can be used as both a descriptive and quantitative documentation tool for analyzing and monitoring phantom limbs. Thus, it can help to bridge the gap between the therapist's conception and the patient's perception. Based on the collected requirements, an improved version with extended functionality will be developed.

18.
Cognition ; 218: 104862, 2022 01.
Article in English | MEDLINE | ID: mdl-34634532

ABSTRACT

Bayesian accounts of social cognition successfully model the human ability to infer goals and intentions of others on the basis of their behavior. In this paper, we extend this paradigm to the analysis of ambiguity resolution during brief communicative exchanges. In a reference game experimental setup, we observed that participants were able to infer listeners' preferences when analyzing their choice of object given referential ambiguity. Moreover, a subset of speakers was able to strategically choose ambiguous over unambiguous utterances in an epistemic manner, although a different group preferred unambiguous utterances. We show that a modified Rational Speech Act model well-approximates the data of both the inference of listeners' preferences and their utterance choices. In particular, the observed preference inference is modeled by Bayesian inference, which computes posteriors over hypothetical, behavior-influencing inner states of conversation partners, such as their knowledge, beliefs, intentions, or preferences, after observing their utterance-interpretation behavior. Utterance choice is modeled by anticipating social information gain, which we formalize as the expected knowledge change when choosing a particular utterance and watching the listener's response. Taken together, our results demonstrate how social conversations allow us to (sometimes strategically) learn about each other when observing interpretations of ambiguous utterances.
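The abstract does not spell out the modified Rational Speech Act model, but the vanilla RSA recursion it builds on can be sketched in a few lines. The two-referent lexicon, the uniform referent prior, and the rationality level below are illustrative assumptions:

```python
import numpy as np

def normalize(m, axis):
    """Turn a nonnegative matrix into conditional probabilities along axis."""
    s = m.sum(axis=axis, keepdims=True)
    return np.divide(m, s, out=np.zeros_like(m), where=s > 0)

# Rows = utterances, columns = referents; 1 means "true of".
# The first utterance is ambiguous (true of both referents),
# the second is unambiguous (true of referent B only).
lexicon = np.array([[1.0, 1.0],
                    [0.0, 1.0]])
prior = np.array([0.5, 0.5])              # listener's referent prior

L0 = normalize(lexicon * prior, axis=1)   # literal listener P(r | u)
S1 = normalize(L0, axis=0)                # speaker P(u | r), rationality 1
L1 = normalize(S1 * prior, axis=1)        # pragmatic listener P(r | u)

# Hearing the ambiguous utterance, the pragmatic listener favors
# referent A: a speaker who meant B would have used the unambiguous form.
print(L1[0].round(2))   # [0.75 0.25]
```

A preference inference on top of this recursion treats the listener's prior itself as the unknown quantity and computes a posterior over it after observing which referent the listener picked; the strategic speakers in the study can be viewed as choosing the utterance whose anticipated response is most informative about that prior.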


Subject(s)
Comprehension, Speech Perception, Bayes Theorem, Humans, Learning, Speech
19.
Exp Brain Res ; 213(4): 371-82, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21748333

ABSTRACT

A habitual and a goal-directed system contribute to action selection in the human CNS. We examined to which extent both systems interact when selecting grasps for handling everyday objects. In Experiment 1, an upright or inverted cup had to be rotated or moved. To-be-rotated upright cups were more frequently grasped with a thumb-up grasp, which is habitually used to hold an upright cup, than inverted cups, which are not associated with a specific grasp. Additionally, grasp selection depended on the overarching goal of the movement sequence (rotation vs. transport) according to the end-state comfort principle. This shows that the habitual system and the goal-directed system both contribute to grasp selection. Experiment 2 revealed that this object-orientation-dependent grasp selection was present for movements of the dominant and the non-dominant hand. In Experiment 3, different everyday objects had to be moved or rotated. Grasp selection depended on object orientation only if different orientations of an object were associated with different habitual grasps. Additionally, grasp selection was affected by the horizontal direction of the forthcoming movement. In sum, the experiments provide evidence that the interaction between the habitual and the goal-directed system determines grasp selection for the interaction with everyday objects.


Subject(s)
Goals, Habituation, Psychophysiologic/physiology, Hand Strength/physiology, Orientation/physiology, Psychomotor Performance/physiology, Adolescent, Adult, Female, Humans, Male, Middle Aged, Space Perception/physiology, Young Adult
20.
Front Psychol ; 12: 695550, 2021.
Article in English | MEDLINE | ID: mdl-34447336

ABSTRACT

During the observation of goal-directed actions, infants usually predict the goal at an earlier age when the agent is familiar (e.g., human hand) compared to unfamiliar (e.g., mechanical claw). These findings implicate a crucial role of the developing agentive self for infants' processing of others' action goals. Recent theoretical accounts suggest that predictive gaze behavior relies on an interplay between infants' agentive experience (top-down processes) and perceptual information about the agent and the action-event (bottom-up information; e.g., agency cues). The present study examined 7-, 11-, and 18-month-old infants' predictive gaze behavior for a grasping action performed by an unfamiliar tool, depending on infants' age-related action knowledge about tool-use and the display of the agency cue of producing a salient action effect. The results are in line with the notion of a systematic interplay between experience-based top-down processes and cue-based bottom-up information: Regardless of the salient action effect, predictive gaze shifts did not occur in the 7-month-olds (least experienced age group), but did occur in the 18-month-olds (most experienced age group). In the 11-month-olds, however, predictive gaze shifts occurred only when a salient action effect was presented. This sheds new light on how the developing agentive self, in interplay with available agency cues, supports infants' action-goal prediction also for observed tool-use actions.
