ABSTRACT
Research on goal-predictive gaze shifts in infancy has so far mostly focused on the effect of infants' experience with observed actions or the effect of agency cues displayed by the observed agent. However, the perspective from which an action is presented to infants (egocentric vs. allocentric) has received little attention from researchers, even though the natural observation of one's own actions is always linked to an egocentric perspective, whereas the observation of others' actions is often linked to an allocentric perspective. The current study investigated the timing of 6-, 9-, and 12-month-olds' goal-predictive gaze behavior, as well as that of adults, during the observation of simple human grasping actions presented from either an egocentric or an allocentric perspective (within-participants design). The results showed that at 6 and 9 months of age, infants predicted the action goal only when observing the action from the egocentric perspective. The 12-month-olds and adults, in contrast, predicted the action goal in both perspectives. The results are therefore in line with accounts proposing an advantage of egocentric over allocentric processing of social stimuli, at least early in development. This study is among the first to show such an egocentric bias as early as the first year of life.
ABSTRACT
Looking times and gaze behavior indicate that infants can predict the goal state of an observed simple action event (e.g., object-directed grasping) as early as the first year of life. The present paper focuses mainly on infants' predictive gaze shifts toward the goal of an ongoing action. For this, infants need to generate a forward model of the to-be-obtained goal state and disengage their gaze from the moving agent at a time when information about the action event is still incomplete. By about 6 months of age, infants show goal-predictive gaze shifts, but mainly for familiar actions that they can perform themselves (e.g., grasping) and for familiar agents (e.g., a human hand). Accordingly, some theoretical models have highlighted close relations between infants' ability for action-goal prediction and their motor development and/or emerging action experience. Recent research indicates that infants can also predict the action goals of familiar simple actions performed by non-human agents (e.g., object-directed grasping by a mechanical claw) when these agents display agency cues, such as self-propelled movement, equifinality of goal approach, or the production of a salient action effect. This paper reviews relevant findings and theoretical models, and proposes that the impacts of action experience and of agency cues can be explained from an action-event perspective. In particular, infants' goal-predictive gaze shifts are seen as resulting from an interplay between bottom-up processing of perceptual information and top-down influences exerted by event schemata that store information about previously executed or observed actions.
Subjects
Cues (Psychology), Goals, Humans, Infant, Motivation
ABSTRACT
From about 7 months of age onward, infants start to reliably fixate the goal of an observed action, such as a grasp, before the action is complete. The available research has identified a variety of factors that influence such goal-anticipatory gaze shifts, including experience with the shown action events and familiarity with the observed agents. However, the underlying cognitive processes are still heavily debated. We propose that our minds (i) tend to structure sensorimotor dynamics into probabilistic, generative models that predict events and event boundaries, and (ii) choose actions with the objective of minimizing predicted uncertainty. We implement this proposition by means of event-predictive learning and active inference. The implemented learning mechanism induces an inductive, event-predictive bias, thus developing schematic encodings of experienced events and event boundaries. The implemented active-inference principle chooses actions that aim at minimizing expected future uncertainty. We train our system on multiple object-manipulation events. As a result, the generation of goal-anticipatory gaze shifts emerges while the system learns about object manipulations: the model starts fixating the inferred goal at the very start of an observed event once it has sampled some experience with possible events and a familiar agent (i.e., a hand) is involved. Meanwhile, the model keeps reactively tracking an unfamiliar agent (i.e., a mechanical claw) performing the same movement. We qualitatively compare these modeling results to behavioral data of infants and conclude that event-predictive learning combined with active inference may be critical for eliciting goal-anticipatory gaze behavior in infants.
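The mechanism described in this abstract can be illustrated in miniature. The following Python toy sketch is written for this summary and is not the authors' implementation; all names (EventSchema, choose_fixation, the 0.5-nat entropy threshold) are illustrative assumptions. It shows the two ingredients: schema-like event-predictive learning from observed outcomes, and gaze selection that fixates the predicted goal only when predicted uncertainty is low, otherwise falling back to reactively tracking the agent.

```python
# Toy sketch (assumed names, not the paper's model): event-predictive
# learning as outcome counting plus uncertainty-minimizing gaze selection.
from collections import Counter, defaultdict
import math

class EventSchema:
    """Stores goal-outcome counts per agent; predicts a goal distribution."""
    def __init__(self):
        self.counts = defaultdict(Counter)  # agent -> Counter of observed goals

    def learn(self, agent, goal):
        self.counts[agent][goal] += 1

    def predict(self, agent, goals):
        c = self.counts[agent]
        total = sum(c[g] for g in goals)
        if total == 0:                       # unfamiliar agent: uniform prediction
            return {g: 1 / len(goals) for g in goals}
        return {g: c[g] / total for g in goals}

def entropy(p):
    return -sum(q * math.log(q) for q in p.values() if q > 0)

def choose_fixation(schema, agent, goals):
    """Fixate the predicted goal when the event prediction is confident
    (low entropy); otherwise keep tracking the moving agent reactively."""
    p = schema.predict(agent, goals)
    if entropy(p) < 0.5:                     # confidence threshold (assumed)
        return max(p, key=p.get)             # goal-anticipatory gaze shift
    return agent                             # reactive tracking

schema = EventSchema()
goals = ["cup", "ball"]
for _ in range(5):                           # experience with hand-cup events
    schema.learn("hand", "cup")

print(choose_fixation(schema, "hand", goals))  # -> 'cup' (predictive)
print(choose_fixation(schema, "claw", goals))  # -> 'claw' (tracking)
```

Under these assumptions, anticipatory gaze emerges only for the experienced agent, mirroring the hand/claw contrast the abstract reports.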
Subjects
Goals, Motivation, Hand, Humans, Infant, Learning, Movement
ABSTRACT
During the observation of goal-directed actions, infants usually predict the goal at an earlier age when the agent is familiar (e.g., a human hand) than when it is unfamiliar (e.g., a mechanical claw). These findings implicate a crucial role of the developing agentive self in infants' processing of others' action goals. Recent theoretical accounts suggest that predictive gaze behavior relies on an interplay between infants' agentive experience (top-down processes) and perceptual information about the agent and the action event (bottom-up information; e.g., agency cues). The present study examined 7-, 11-, and 18-month-old infants' predictive gaze behavior for a grasping action performed by an unfamiliar tool, depending on infants' age-related action knowledge about tool use and on the presence of an agency cue (the production of a salient action effect). The results are in line with the notion of a systematic interplay between experience-based top-down processes and cue-based bottom-up information: regardless of the salient action effect, predictive gaze shifts did not occur in the 7-month-olds (the least experienced age group) but did occur in the 18-month-olds (the most experienced age group). In the 11-month-olds, however, predictive gaze shifts occurred only when a salient action effect was presented. This sheds new light on how the developing agentive self, in interplay with available agency cues, supports infants' action-goal prediction for observed tool-use actions as well.
ABSTRACT
When infants observe a human grasping action, experience-based accounts predict that all infants familiar with grasping actions should be able to predict the goal, regardless of additional agency cues such as an action effect. Cue-based accounts, however, suggest that infants use agency cues to identify and predict action goals when the action or the agent is not familiar. Based on these accounts, we hypothesized that younger infants would need additional agency cues, such as a salient action effect, to predict the goal of a human grasping action, whereas older infants should be able to predict the goal regardless of agency cues. In three experiments, we presented 6-, 7-, and 11-month-olds with videos of a manual grasping action shown either with or without an additional salient action effect (Experiments 1 and 2), or we presented 7-month-olds with videos of a mechanical claw performing a grasping action with a salient action effect (Experiment 3). The 6-month-olds showed tracking gaze behavior and the 11-month-olds showed predictive gaze behavior, regardless of the action effect. The 7-month-olds, however, showed predictive gaze behavior in the action-effect condition but tracking gaze behavior in the no-action-effect condition and in the action-effect condition with the mechanical claw. The results therefore support the idea that salient action effects are especially important for infants' goal predictions from 7 months on, and that this facilitating influence of action effects is selective to the observation of human hands.
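Across the gaze studies summarized here, the contrast between predictive and tracking gaze behavior rests on whether the infant's gaze reaches the goal area before the moving agent does. A minimal Python sketch of this operationalization follows; the variable names and the simple zero-latency criterion are assumptions for illustration (published studies typically also correct for saccade latency):

```python
# Assumed operationalization, not any specific paper's exact pipeline:
# gaze that reaches the goal area before the moving agent counts as
# predictive; gaze arriving at or after the agent counts as tracking.
def classify_gaze(gaze_arrival_ms: float, agent_arrival_ms: float) -> str:
    latency = gaze_arrival_ms - agent_arrival_ms  # negative = gaze arrived first
    return "predictive" if latency < 0 else "tracking"

print(classify_gaze(gaze_arrival_ms=1400, agent_arrival_ms=1800))  # predictive
print(classify_gaze(gaze_arrival_ms=2100, agent_arrival_ms=1800))  # tracking
```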
Subjects
Ocular Fixation/physiology, Goals, Hand Strength/physiology, Female, Humans, Infant, Male, Photic Stimulation, Time Factors, Video Recording
ABSTRACT
Successful communication often involves comprehension of both spoken language and observed actions, with and without objects. Even very young infants can learn associations between actions and objects as well as between words and objects. In daily life, however, children are usually confronted with both kinds of input simultaneously. Choosing the critical information to attend to in such situations might help children structure the input and thereby allow for successful learning. In the current study, we therefore investigated the developmental time course of children's and adults' word and action learning when given the opportunity to learn both word-object and action-object associations for the same object. All participants went through a learning phase and a test phase. In the learning phase, they were presented with two novel objects, each associated with a distinct novel name (e.g., "Look, a Tanu") and a distinct novel action (e.g., moving up and down while tilting sideways). In the test phase, participants saw both objects on screen in a baseline phase, then either heard one of the two labels or saw one of the two actions in a prime phase, and then saw the two objects on screen again in a recognition phase. Throughout the trial, participants' target looking was recorded to investigate whether they looked at the target object upon hearing its label or seeing its action, which would indicate learning of the word-object and action-object associations. Growth curve analyses revealed that 12-month-olds showed modest learning of action-object associations, 36-month-olds learned word-object associations, and adults learned both word-object and action-object associations. These results highlight how children weight the different types of information in the two modalities through which communication is addressed to them. Over time, with increased exposure to systematic word-object mappings, children attend less to action-object mappings, and the latter may be mediated by word-object learning even in adulthood. Thus, choosing between the different kinds of input available in a rich, multimodal environment might support learning at different points in development.
Subjects
Association Learning/physiology, Comprehension/physiology, Language Development, Speech Perception/physiology, Verbal Learning/physiology, Adult, Preschool Child, Female, Humans, Infant, Male
ABSTRACT
Communication with young children is often multimodal in nature, involving, for example, language and actions. The simultaneous presentation of information from both domains may boost language learning by highlighting the connection between an object and a word, owing to the temporal overlap in the presentation of multimodal input. However, the overlap is not merely temporal: input can also vary in the extent to which particular actions co-occur with particular words and objects; for example, carers typically produce a hopping action when talking about rabbits and a snapping action for crocodiles. The frequency with which actions and words co-occur in the presence of the referents of these words may also impact young children's word learning. We therefore examined the extent to which consistency in the co-occurrence of particular actions and words affected children's learning of novel word-object associations. Children (18 months, 30 months, and 36-48 months) and adults were presented with two novel objects and heard their novel labels while different actions were performed on these objects, such that the particular actions and word-object pairings either always co-occurred (Consistent group) or varied across trials (Inconsistent group). At test, participants saw both objects and heard one of the labels, allowing us to examine whether they recognized the target object upon hearing its label. Growth curve models revealed that 18-month-olds did not learn words for objects in either condition, and that 30-month-old and 36- to 48-month-old children learned words for objects only in the Consistent condition, in contrast to adults, who learned words for objects independently of the actions presented. Thus, consistency in the multimodal input influenced word learning in early childhood but not in adulthood. In terms of a dynamic systems account of word learning, our study shows how multimodal learning settings interact with the child's perceptual abilities to shape the learning experience.
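Both this study and the preceding one report growth curve analyses of target looking over the trial. As a rough illustration of that analysis family only (the papers' exact model specifications are not given here, and such analyses are often run in R with lme4), here is a minimal Python sketch using statsmodels with simulated stand-in data:

```python
# Assumed illustration of a growth curve analysis: proportion of target
# looking across the recognition window is modeled with polynomial time
# terms (raw, not orthogonalized, for brevity) and random effects per child.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
# Simulated stand-in data: 20 children x 10 time bins of target looking.
df = pd.DataFrame({
    "child": np.repeat(np.arange(20), 10),
    "time": np.tile(np.linspace(0, 1, 10), 20),
})
df["target_looking"] = 0.5 + 0.2 * df["time"] + rng.normal(0, 0.05, len(df))

# Linear and quadratic time terms capture the shape of the looking curve.
df["time2"] = df["time"] ** 2
model = smf.mixedlm("target_looking ~ time + time2", df, groups=df["child"])
result = model.fit()
print(result.summary())  # a positive 'time' slope indicates growing target looking
```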
ABSTRACT
Action effects have been proposed to be important for infants' processing of goal-directed actions. In this study, 11-month-olds showed equally fast predictive gaze shifts to a claw's action goal when the grasping action was presented either with three agency cues (self-propelled movement, equifinality of goal achievement, and a salient action effect) or with a salient action effect alone, but they showed tracking gaze behavior when the claw displayed only self-propelled movement and equifinality of goal achievement. The results suggest that action effects, compared with purely kinematic cues, are especially important for infants' online processing of goal-directed actions.
Subjects
Ocular Fixation/physiology, Infant Behavior/physiology, Biomechanical Phenomena, Cues (Psychology), Female, Goals, Humans, Infant, Male, Motivation
ABSTRACT
Previous research indicates that infants' prediction of the goals of observed actions is influenced by their own experience with the type of agent performing the action (i.e., human hand vs. non-human agent) as well as by action-relevant features of goal objects (e.g., object size). The present study investigated the combined effects of these factors on 12-month-olds' action prediction. Infants' (N = 49) goal-directed gaze shifts were recorded as they observed 14 trials in which either a human hand or a mechanical claw reached for a small goal area (low-saliency goal) or a large goal area (high-saliency goal). Only infants who had observed the human hand reaching for a high-saliency goal fixated the goal object ahead of time, and they rapidly learned to predict the action goal across trials. By contrast, infants in all other conditions did not track the observed action in a predictive manner, and their gaze shifts to the action goal did not change systematically across trials. Thus, high-saliency goals seem to boost infants' predictive gaze shifts during the observation of human manual actions, but not of actions performed by a mechanical device. This supports the assumption that infants' action predictions are based on the interactive effects of action-relevant object features (e.g., size) and infants' own action experience.