Results 1 - 13 of 13
1.
J Exp Child Psychol; 235: 105690, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37419010

ABSTRACT

Children can anticipate upcoming input in sentences with semantically constraining verbs. In the visual world, the sentence context is used to anticipatorily fixate the only object matching potential sentence continuations. Adults can even process multiple visual objects in parallel when predicting language. This study examined whether young children can also maintain multiple prediction options in parallel during language processing. In addition, we aimed to replicate the finding that children's receptive vocabulary size modulates their prediction. German children (5-6 years, n = 26) and adults (19-40 years, n = 37) listened to 32 subject-verb-object sentences with semantically constraining verbs (e.g., "The father eats the waffle") while looking at visual scenes of four objects. The number of objects consistent with the verb constraints (e.g., being edible) varied between 0, 1, 3, and 4. A linear mixed-effects model on the proportion of target fixations, with the effect-coded factors condition (i.e., the number of consistent objects), time window, and age group, revealed that upon hearing the verb, children and adults anticipatorily fixated the single visual object, or even multiple visual objects, consistent with the verb constraints, whereas inconsistent objects were fixated less. This provides the first evidence that, comparable to adults, young children maintain multiple prediction options in parallel. Moreover, children with larger receptive vocabulary sizes (Peabody Picture Vocabulary Test) anticipatorily fixated potential targets more often than children with smaller vocabularies, showing that verbal abilities affect children's prediction in the complex visual world.
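
For readers unfamiliar with this type of analysis, below is a minimal sketch of an effect-coded linear mixed-effects model on fixation proportions in Python (statsmodels). It is illustrative only, not the authors' analysis code; the data file and column names (fixation_prop, condition, time_window, age_group, subject) are assumptions, and a full analysis would typically also model by-item variability.

```python
# Minimal sketch of an effect-coded mixed model on target-fixation proportions.
# Data frame layout and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("fixation_proportions.csv")  # hypothetical long-format file

# Sum (effect) coding for condition, time window, and age group;
# random intercepts by participant.
model = smf.mixedlm(
    "fixation_prop ~ C(condition, Sum) * C(time_window, Sum) * C(age_group, Sum)",
    data=data,
    groups=data["subject"],
)
result = model.fit()
print(result.summary())
```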


Subject(s)
Language, Speech Perception, Adult, Humans, Child, Preschool Child, Vocabulary, Aptitude, Cognition, Comprehension
2.
Brain Cogn; 135: 103571, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31202157

ABSTRACT

Behavioral studies have shown that speaker gaze to objects in a co-present scene can influence listeners' sentence comprehension. To gain deeper insight into the mechanisms involved in gaze processing and integration, we conducted two ERP experiments (N = 30, Age: [18, 32] and [19, 33], respectively). Participants watched a centrally positioned face performing gaze actions aligned to utterances comparing two out of three displayed objects. They were asked to judge whether the sentence was true given the provided scene. We manipulated the second gaze cue to be either Congruent (baseline), Incongruent, or Averted (Exp1)/Mutual (Exp2). When speaker gaze was used to form lexical expectations about upcoming referents, we found an attenuated N200 when phonological information confirmed these expectations (Congruent). Similarly, we observed attenuated N400 amplitudes when gaze-cued expectations (Congruent) facilitated lexical retrieval. Crucially, only a violation of gaze-cued lexical expectations (Incongruent) led to a P600 effect, suggesting the need to revise the mental representation of the situation. Our results support the hypothesis that gaze is utilized above and beyond simply enhancing a cued object's prominence. Rather, gaze to objects leads to their integration into the mental representation of the situation before they are mentioned.


Subject(s)
Attention/physiology, Comprehension/physiology, Evoked Potentials/physiology, Eye Movements/physiology, Language, Speech Perception/physiology, Adolescent, Adult, Cues (Psychology), Electroencephalography/methods, Female, Humans, Male, Young Adult
3.
Cognition; 236: 105449, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37030139

ABSTRACT

Behavioral studies have shown that speaker gaze to objects in a co-present scene can influence listeners' expectations about how the utterance will unfold. These findings have recently been supported by ERP studies that linked the underlying mechanisms of integrating speaker gaze with an utterance meaning representation to multiple ERP components. This raises the question, however, of whether speaker gaze should be considered part of the communicative signal itself, such that the referential information conveyed by gaze can help listeners not only to form expectations but also to confirm referential expectations induced by the prior linguistic context. In the current study, we investigated this question in an ERP experiment (N = 24, Age: [19, 31]) in which referential expectations were established by the linguistic context together with several depicted objects in the scene. These expectations could then be confirmed by subsequent speaker gaze preceding the referential expression. Participants were presented with a centrally positioned face performing gaze actions aligned to utterances comparing two out of three displayed objects, with the task of judging whether the sentence was true given the provided scene. We manipulated the gaze cue to be either Present (toward the subsequently named object) or Absent preceding contextually Expected or Unexpected referring nouns. The results provided strong evidence for gaze being treated as an integral part of the communicative signal: in the absence of gaze, effects of phonological verification (PMN), word meaning retrieval (N400), and sentence meaning integration/evaluation (P600) were found on the unexpected noun, whereas in the presence of gaze, effects of retrieval (N400) and integration/evaluation (P300) were found solely in response to the pre-referent gaze cue when it was directed toward the unexpected referent, with attenuated effects on the following referring noun.


Subject(s)
Comprehension, Electroencephalography, Humans, Male, Female, Comprehension/physiology, Evoked Potentials/physiology, Attention/physiology, Language
4.
Acta Psychol (Amst); 226: 103558, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35439618

ABSTRACT

Developmental and longitudinal studies with children increasingly use pictorial stimuli in cognitive, psychological, and psycholinguistic research. To enhance validity and comparability within and across those studies, the use of normed pictures is recommended. Moreover, creating picture sets and evaluating them in rating studies is very time-consuming, particularly with samples of young children, for whom testing time is rather limited. As an increasing number of studies investigate young German children's semantic language processing with colored clipart stimuli, this work provides a first set of 247 colored cliparts with ratings from German native-speaking children aged 4 to 6 years. We assessed two central rating aspects of pictures: name agreement (do pictures elicit the intended name of an object?) and semantic categorization (are objects classified as members of the intended semantic category?). Our ratings indicate that children are proficient in naming and even better in semantic categorization of objects, and that both seem to improve with age across early childhood. Finally, this paper discusses some features of pictorial objects that might be important for children's name agreement and semantic categorization and that could be considered in future picture rating studies.
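
As an illustration of the two norming measures described above (not the published procedure or data), name agreement and semantic categorization can be computed per picture roughly as follows; the response table and its column names are assumptions.

```python
# Sketch: per-picture name agreement and semantic-categorization accuracy,
# computed from a hypothetical long-format table of child responses with
# columns picture, intended_name, produced_name, intended_category, chosen_category.
import pandas as pd

responses = pd.read_csv("child_responses.csv")  # hypothetical file

norms = responses.groupby("picture").apply(
    lambda g: pd.Series({
        # Share of children who produced the intended object name.
        "name_agreement": (g["produced_name"] == g["intended_name"]).mean(),
        # Share of children who chose the intended semantic category.
        "semantic_categorization": (g["chosen_category"] == g["intended_category"]).mean(),
    })
)
print(norms.sort_values("name_agreement").head())
```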


Subject(s)
Names, Semantics, Child, Preschool Child, Humans, Language, Visual Pattern Recognition, Psycholinguistics
5.
Front Psychol; 12: 661898, 2021.
Article in English | MEDLINE | ID: mdl-34122248

ABSTRACT

Recent work has shown that linguistic and visual contexts jointly modulate linguistic expectancy and, thus, the processing effort for a (more or less) expected critical word. According to these findings, uncertainty about the upcoming referent in a visually situated sentence can be reduced by exploiting the selectional restrictions of a preceding word (e.g., a verb or an adjective), which then reduces processing effort on the critical word (e.g., a referential noun). Interestingly, however, no such modulation was observed in these studies on the expectation-generating word itself. The goal of the current study is to investigate whether the reduction of uncertainty (i.e., the generation of expectations) simply does not modulate processing effort, or whether the particular subject-verb-object (SVO) sentence structure used in these studies (which emphasizes the referential nature of the noun as a direct pointer to visually co-present objects) accounts for the observed pattern. To test these questions, the current design reverses the functional roles of nouns and verbs by using sentence constructions in which the noun reduces uncertainty about upcoming verbs, and the verb provides the disambiguating and reference-resolving piece of information. Experiment 1 (a Visual World Paradigm study) and Experiment 2 (a Grammaticality Maze study) both replicate the effect found in previous work (i.e., the effect of visually situated context on the word that uniquely identifies the referent), albeit on the verb in the current study. Results on the noun, where uncertainty is reduced and expectations are generated in the current design, were mixed and were most likely influenced by design decisions specific to each experiment. These results show that processing of the reference-resolving word, whether it be a noun or a verb, reliably benefits from the prior linguistic and visual information that leads to the generation of concrete expectations.

6.
Psychon Bull Rev; 28(2): 624-631, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33269463

ABSTRACT

Recently, Ankener et al. (Frontiers in Psychology, 9, 2387, 2018) presented a visual world study which combined both attention and pupillary measures to demonstrate that anticipating a target results in lower effort to integrate that target (noun). However, they found no indication that the anticipatory processes themselves, i.e., the reduction of uncertainty about upcoming referents, result in processing effort (cf. Linzen and Jaeger, Cognitive Science, 40(6), 1382-1411, 2016). In contrast, Maess et al. (Frontiers in Human Neuroscience, 10, 1-11, 2016) found that more constraining verbs elicited a higher N400 amplitude than unconstraining verbs. The aim of the present study was therefore twofold: Firstly, we examined whether the graded ICA effect, which was previously found on the noun as a result of a likelihood manipulation, replicates in ERP measures. Secondly, we set out to investigate whether the processes leading to the generation of expectations (derived during verb and scene processing) induce an N400 modulation. Our results confirm that visual context is combined with the verb's meaning to establish expectations about upcoming nouns and that these expectations affect the retrieval of the upcoming noun (modulated N400 on the noun). Importantly, however, we find no evidence for different costs in generating more or less specific expectations for upcoming nouns. Thus, the benefits of generating expectations are not associated with any costs in situated language comprehension.


Subject(s)
Psychological Anticipation/physiology, Comprehension/physiology, Evoked Potentials/physiology, Psycholinguistics, Visual Perception/physiology, Adult, Electroencephalography, Female, Humans, Male, Young Adult
7.
Cogn Sci; 42(8): 2418-2458, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30294808

ABSTRACT

Referential gaze has been shown to benefit language processing in situated communication in terms of shifting visual attention and leading to shorter reaction times on subsequent tasks. The present study simultaneously assessed both visual attention and, importantly, the immediate cognitive load induced at different stages of sentence processing. We aimed to examine the dynamics of combining visual and linguistic information in creating anticipation for a specific object and the effect this has on language processing. We report evidence from three visual-world eye-tracking experiments, showing that referential gaze leads to a shift in visual attention toward the cued object, which consequently lowers the effort required for processing the linguistic reference. Importantly, perceiving and following the gaze cue did not prove costly in terms of cognitive effort, unless the cued object did not fit the verb selectional preferences.


Subject(s)
Attention/physiology, Cognition/physiology, Comprehension/physiology, Eye Movements/physiology, Adolescent, Adult, Cues (Psychology), Female, Humans, Male, Middle Aged, Young Adult
8.
Front Psychol; 9: 2387, 2018.
Article in English | MEDLINE | ID: mdl-30618905

ABSTRACT

A word's predictability or surprisal, as determined by cloze probabilities or language models (Frank, 2013), is related to processing effort, in that less expected words take more effort to process (Hale, 2001; Lau et al., 2013). A word's surprisal, however, may also be influenced by the non-linguistic context, such as visual cues: in the visual world paradigm (VWP), anticipatory eye movements suggest that listeners exploit the scene to predict what will be mentioned next (Altmann and Kamide, 1999). How visual context affects surprisal and processing effort, however, remains unclear. Here, we present a series of four studies providing evidence on how visually determined probabilistic expectations for a spoken target word, as indicated by anticipatory eye movements, predict graded processing effort for that word, as assessed by a pupillometric measure (the Index of Cognitive Activity, ICA). These findings are a clear and robust demonstration that the non-linguistic context can immediately influence both lexical expectations and surprisal-based processing effort.
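
For reference, surprisal is standardly defined as the negative log probability of a word given its context, surprisal(w) = -log2 P(w | context). The short sketch below computes it from cloze-style completion counts; the counts and words are invented for illustration and are not the study's materials.

```python
# Sketch: surprisal (in bits) of a target word from cloze-style completion counts.
# The completion counts below are invented for illustration.
import math

cloze_counts = {"cake": 30, "apple": 12, "soup": 6, "book": 2}
total = sum(cloze_counts.values())

def surprisal(word: str) -> float:
    """Return -log2 P(word | context), estimated from the cloze counts."""
    return -math.log2(cloze_counts[word] / total)

print(surprisal("cake"))  # highly expected completion -> low surprisal
print(surprisal("book"))  # unexpected completion -> high surprisal
```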

9.
Cogn Res Princ Implic; 3(1): 51, 2018 Dec 29.
Article in English | MEDLINE | ID: mdl-30594976

ABSTRACT

Referential success is crucial for collaborative task-solving in shared environments. In face-to-face interactions, humans, therefore, exploit speech, gesture, and gaze to identify a specific object. We investigate if and how the gaze behavior of a human interaction partner can be used by a gaze-aware assistance system to improve referential success. Specifically, our system describes objects in the real world to a human listener using on-the-fly speech generation. It continuously interprets listener gaze and implements alternative strategies to react to this implicit feedback. We used this system to investigate an optimal strategy for task performance: providing an unambiguous, longer instruction right from the beginning, or starting with a shorter, yet ambiguous instruction. Further, the system provides gaze-driven feedback, which could be either underspecified ("No, not that one!") or contrastive ("Further left!"). As expected, our results show that ambiguous instructions followed by underspecified feedback are not beneficial for task performance, whereas contrastive feedback results in faster interactions. Interestingly, this approach even outperforms unambiguous instructions (manipulation between subjects). However, when the system alternates between underspecified and contrastive feedback to initially ambiguous descriptions in an interleaved manner (within subjects), task performance is similar for both approaches. This suggests that listeners engage more intensely with the system when they can expect it to be cooperative. This, rather than the actual informativity of the spoken feedback, may determine the efficiency of information uptake and performance.
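
A gaze-driven feedback decision of this general kind might look roughly like the sketch below. This is an illustrative reconstruction, not the system described in the article; the class and function names are hypothetical, while the two feedback phrases are taken from the abstract.

```python
# Sketch of a gaze-driven feedback decision: if the listener is not fixating the
# intended referent, respond with underspecified or contrastive feedback.
# Class and function names are hypothetical; the feedback phrases are from the abstract.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GazeSample:
    fixated_object: str  # id of the object the listener currently fixates

def choose_feedback(target: str, gaze: GazeSample, contrastive: bool) -> Optional[str]:
    """Return a feedback utterance, or None if the listener already attends to the target."""
    if gaze.fixated_object == target:
        return None
    if contrastive:
        # A real system would derive the direction from scene geometry.
        return "Further left!"
    return "No, not that one!"

# Usage with invented object ids:
sample = GazeSample(fixated_object="mug_2")
print(choose_feedback(target="mug_1", gaze=sample, contrastive=True))
```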

10.
Psychon Bull Rev; 24(2): 400-407, 2017 Apr.
Article in English | MEDLINE | ID: mdl-27432003

ABSTRACT

So-called "looks-at-nothing" have previously been used to show that recalling what something was also elicits recall of where it was. Here, we present evidence from an eye-tracking study showing that disrupting looks to "there" does not disrupt recall of what was there, nor do (anticipatory) looks to "there" facilitate recall of what was there. Our results therefore suggest that recalling where does not entail recalling what.


Subject(s)
Attention, Ocular Fixation, Mental Recall, Orientation, Visual Pattern Recognition, Saccades, Spatial Learning, Psychological Anticipation, Cues (Psychology), Discrimination (Psychology), Female, Humans, Male, Probability, Young Adult
11.
Cogn Sci; 40(7): 1671-1703, 2016 Sep.
Article in English | MEDLINE | ID: mdl-26471391

ABSTRACT

Beyond the observation that both speakers and listeners rapidly inspect the visual targets of referring expressions, it has been argued that such gaze may constitute part of the communicative signal. In this study, we investigate whether a speaker may, in principle, exploit listener gaze to improve communicative success. In the context of a virtual environment where listeners follow computer-generated instructions, we provide two kinds of support for this claim. First, we show that listener gaze provides a reliable real-time index of understanding even in dynamic and complex environments, and on a per-utterance basis. Second, we show that a language generation system that uses listener gaze to provide rapid feedback improves overall task performance in comparison with two systems that do not use gaze. Aside from demonstrating the utility of listener gaze in situated communication, our findings open the door to new methods for developing and evaluating multi-modal models of situated interaction.
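
As a toy illustration of using listener gaze as a per-utterance index of understanding (not the system reported here), one could check whether the listener's fixations dwell on the intended referent within a short window after the referring expression; all names and thresholds below are invented.

```python
# Sketch: a per-utterance index of understanding from listener fixations,
# given as (timestamp_ms, object_id) samples. Window and dwell thresholds are invented.
from typing import Iterable, Tuple

def understood(fixations: Iterable[Tuple[float, str]],
               target: str,
               ref_onset_ms: float,
               window_ms: float = 2000.0,
               min_dwell_ms: float = 300.0) -> bool:
    """True if the listener dwells on the target long enough after the referring expression."""
    dwell = 0.0
    prev_t = None
    for t, obj in fixations:
        if prev_t is not None and obj == target and ref_onset_ms <= t <= ref_onset_ms + window_ms:
            dwell += t - prev_t  # rough dwell estimate from sample spacing
        prev_t = t
    return dwell >= min_dwell_ms

# Usage with invented samples taken every 100 ms:
samples = [(1000 + 100 * i, "lamp" if i > 4 else "vase") for i in range(20)]
print(understood(samples, target="lamp", ref_onset_ms=1200))  # -> True
```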


Subject(s)
Attention/physiology, Communication, Comprehension/physiology, Eye Movements/physiology, Virtual Reality, Adult, Female, Humans, Language, Male
12.
Cognition; 133(1): 317-328, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25079951

ABSTRACT

Previous research has shown that listeners follow speaker gaze to mentioned objects in a shared environment to ground referring expressions, both for human and robot speakers. What is less clear is whether the benefit of speaker gaze is due to the inference of referential intentions (Staudte and Crocker, 2011) or simply to the (reflexive) shifts in visual attention. That is, is gaze special in how it affects simultaneous utterance comprehension? In four eye-tracking studies we directly contrast speech-aligned speaker gaze of a virtual agent with a non-gaze visual cue (an arrow). Our findings show that both cues direct listeners' attention similarly and that listeners can benefit in utterance comprehension from both cues. Only when the cues are similarly precise, however, does this equality extend to incongruent cueing sequences: that is, even when the cue sequence does not match the concurrent sequence of spoken referents, listeners can benefit from gaze as well as from arrows. The results suggest that listeners are able to learn a counter-predictive mapping of both cues to the sequence of referents. Thus, gaze and arrows can in principle be applied with equal flexibility and efficiency during language comprehension.


Subject(s)
Attention/physiology, Comprehension/physiology, Ocular Fixation/physiology, Speech Perception/physiology, Speech/physiology, Adult, Female, Humans, Intention, Male, Reaction Time/physiology
13.
Cognition; 120(2): 268-291, 2011 Aug.
Article in English | MEDLINE | ID: mdl-21665198

ABSTRACT

Referential gaze during situated language production and comprehension is tightly coupled with the unfolding speech stream (Griffin, 2001; Meyer, Sleiderink, & Levelt, 1998; Tanenhaus, Spivey-Knowlton, Eberhard, & Sedivy, 1995). In a shared environment, utterance comprehension may further be facilitated when the listener can exploit the speaker's focus of (visual) attention to anticipate, ground, and disambiguate spoken references. To investigate the dynamics of such gaze-following and its influence on utterance comprehension in a controlled manner, we use a human-robot interaction setting. Specifically, we hypothesize that referential gaze is interpreted as a cue to the speaker's referential intentions, which facilitates or disrupts reference resolution. Moreover, the use of a dynamic and yet extremely controlled gaze cue enables us to shed light on the simultaneous and incremental integration of the unfolding speech and gaze movement. We report evidence from two eye-tracking experiments in which participants saw videos of a robot looking at and describing objects in a scene. The results reveal a quantified benefit-disruption spectrum of gaze on utterance comprehension and, further, show that gaze is used, even during the initial movement phase, to restrict the spatial domain of potential referents. These findings more broadly suggest that people treat artificial agents similarly to human agents and, thus, validate such a setting for further explorations of joint attention mechanisms.


Subject(s)
Attention, Communication, Eye Movements, User-Computer Interface, Adult, Comprehension, Humans, Language