Results 1 - 20 of 23
1.
Proc Biol Sci ; 288(1943): 20202419, 2021 01 27.
Article in English | MEDLINE | ID: mdl-33499783

ABSTRACT

Beat gestures-spontaneously produced biphasic movements of the hand-are among the most frequently encountered co-speech gestures in human communication. They are closely temporally aligned to the prosodic characteristics of the speech signal, typically occurring on lexically stressed syllables. Despite their prevalence across speakers of the world's languages, how beat gestures impact spoken word recognition is unclear. Can these simple 'flicks of the hand' influence speech perception? Across a range of experiments, we demonstrate that beat gestures influence the explicit and implicit perception of lexical stress (e.g. distinguishing OBject from obJECT), and in turn can influence what vowels listeners hear. Thus, we provide converging evidence for a manual McGurk effect: relatively simple and widely occurring hand movements influence which speech sounds we hear.


Subject(s)
Gestures, Speech Perception, Humans, Language, Phonetics, Speech
2.
Behav Res Methods ; 50(3): 1047-1054, 2018 06.
Article in English | MEDLINE | ID: mdl-28646401

ABSTRACT

The use of immersive virtual reality as a research tool is rapidly increasing in numerous scientific disciplines. By combining ecological validity with strict experimental control, immersive virtual reality provides the potential to develop and test scientific theories in rich environments that closely resemble everyday settings. This article introduces the first standardized database of colored three-dimensional (3-D) objects that can be used in virtual reality and augmented reality research and applications. The 147 objects have been normed for name agreement, image agreement, familiarity, visual complexity, and corresponding lexical characteristics of the modal object names. The availability of standardized 3-D objects for virtual reality research is important, because reaching valid theoretical conclusions hinges critically on the use of well-controlled experimental stimuli. Sharing standardized 3-D objects across different virtual reality labs will allow science to move forward more quickly.


Subject(s)
Three-Dimensional Imaging/methods, Research Design, Virtual Reality, Factual Databases, Humans, Photic Stimulation/methods, Recognition (Psychology)
3.
Behav Res Methods ; 50(3): 1102-1115, 2018 06.
Article in English | MEDLINE | ID: mdl-28791625

ABSTRACT

Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects. Here we present a visual-world paradigm study in a three-dimensional (3-D) immersive virtual reality environment. Despite significant changes in the stimulus materials and the different mode of stimulus presentation, language-mediated anticipatory eye movements were still observed. These findings thus indicate that people do predict upcoming words during language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eyetracking in rich and multimodal 3-D virtual environments.


Subject(s)
Comprehension, Eye Movements, Virtual Reality, Acoustic Stimulation, Adult, Cues, Eye Movement Measurements, Female, Humans, Language, Male, Photic Stimulation
4.
Behav Res Methods ; 50(2): 862-869, 2018 04.
Article in English | MEDLINE | ID: mdl-28550656

ABSTRACT

When we comprehend language, we often do this in rich settings where we can use many cues to understand what someone is saying. However, it has traditionally been difficult to design experiments with rich three-dimensional contexts that resemble our everyday environments, while maintaining control over the linguistic and nonlinguistic information that is available. Here we test the validity of combining electroencephalography (EEG) and virtual reality (VR) to overcome this problem. We recorded electrophysiological brain activity during language processing in a well-controlled three-dimensional virtual audiovisual environment. Participants were immersed in a virtual restaurant while wearing EEG equipment. In the restaurant, participants encountered virtual restaurant guests. Each guest was seated at a separate table with an object on it (e.g., a plate with salmon). The restaurant guest would then produce a sentence (e.g., "I just ordered this salmon."). The noun in the spoken sentence could either match ("salmon") or mismatch ("pasta") the object on the table, creating a situation in which the auditory information was either appropriate or inappropriate in the visual context. We observed a reliable N400 effect as a consequence of the mismatch. This finding validates the combined use of VR and EEG as a tool to study the neurophysiological mechanisms of everyday language comprehension in rich, ecologically valid settings.


Subject(s)
Comprehension/physiology, Electroencephalography, Language, Natural Language Processing, Speech Perception/physiology, Virtual Reality, Adolescent, Adult, Analysis of Variance, Cues, Evoked Potentials/physiology, Female, Humans, Male, Young Adult
5.
J Cogn Neurosci ; 27(12): 2352-68, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26284993

ABSTRACT

In everyday human communication, we often express our communicative intentions by manually pointing out referents in the material world around us to an addressee, often in tight synchronization with referential speech. This study investigated whether and how the kinematic form of index finger pointing gestures is shaped by the gesturer's communicative intentions and how this is modulated by the presence of concurrently produced speech. Furthermore, we explored the neural mechanisms underpinning the planning of communicative pointing gestures and speech. Two experiments were carried out in which participants pointed at referents for an addressee while the informativeness of their gestures and speech was varied. Kinematic and electrophysiological data were recorded online. It was found that participants prolonged the duration of the stroke and poststroke hold phase of their gesture to be more communicative, in particular when the gesture was carrying the main informational burden in their multimodal utterance. Frontal and P300 effects in the ERPs suggested the importance of intentional and modality-independent attentional mechanisms during the planning phase of informative pointing gestures. These findings contribute to a better understanding of the complex interplay between action, attention, intention, and language in the production of pointing gestures, a communicative act core to human interaction.


Subject(s)
Brain/physiology, Fingers/physiology, Gestures, Psychomotor Performance/physiology, Speech/physiology, Attention/physiology, Biomechanical Phenomena, Electroencephalography, Evoked Potentials, Female, Humans, Interpersonal Relations, Male, Neuropsychological Tests, Young Adult
6.
J Cogn ; 7(1): 35, 2024.
Article in English | MEDLINE | ID: mdl-38638461

ABSTRACT

Bilinguals, by definition, are capable of expressing themselves in more than one language. But which cognitive mechanisms allow them to switch from one language to another? Previous experimental research using the cued language-switching paradigm supports theoretical models that assume that both transient, reactive and sustained, proactive inhibitory mechanisms underlie bilinguals' capacity to flexibly and efficiently control which language they use. Here we used immersive virtual reality to test the extent to which these inhibitory mechanisms may be active when unbalanced Dutch-English bilinguals i) produce full sentences rather than individual words, ii) to a life-size addressee rather than only into a microphone, iii) using a message that is relevant to that addressee rather than communicatively irrelevant, iv) in a rich visual environment rather than in front of a computer screen. We observed a reversed language dominance paired with switch costs for the L2 but not for the L1 when participants were stand owners in a virtual marketplace and informed their monolingual customers in full sentences about the price of their fruits and vegetables. These findings strongly suggest that the subtle balance between the application of reactive and proactive inhibitory mechanisms that support bilingual language control may be different in the everyday life of a bilingual compared to in the (traditional) psycholinguistic laboratory.

7.
Neuropsychologia ; 193: 108764, 2024 01 29.
Article in English | MEDLINE | ID: mdl-38141963

ABSTRACT

Bilinguals possess the ability of expressing themselves in more than one language, and typically do so in contextually rich and dynamic settings. Theories and models have indeed long considered context factors to affect bilingual language production in many ways. However, most experimental studies in this domain have failed to fully incorporate linguistic, social, or physical context aspects, let alone combine them in the same study. Indeed, most experimental psycholinguistic research has taken place in isolated and constrained lab settings with carefully selected words or sentences, rather than under rich and naturalistic conditions. We argue that the most influential experimental paradigms in the psycholinguistic study of bilingual language production fall short of capturing the effects of context on language processing and control presupposed by prominent models. This paper therefore aims to enrich the methodological basis for investigating context aspects in current experimental paradigms and thereby move the field of bilingual language production research forward theoretically. After considering extensions of existing paradigms proposed to address context effects, we present three far-ranging innovative proposals, focusing on virtual reality, dialog situations, and multimodality in the context of bilingual language production.


Subject(s)
Multilingualism, Virtual Reality, Humans, Language, Linguistics, Psycholinguistics
8.
Cogn Neurosci ; 14(2): 61-62, 2023.
Article in English | MEDLINE | ID: mdl-36803570

ABSTRACT

The use of naturalistic stimuli in cognitive neuroscience experiments inspires and requires theoretical foundations that bring together different cognitive domains, such as emotion, language, and morality. By zooming in on the digital environments in which we often perceive emotional messages today, and inspired by the Mixed and Ambiguous Emotions and Morality model, we here argue that successfully processing emotional information in the twenty-first century will often have to rely not only on simulation and/or mentalizing, but also on executive control and attention regulation.


Subject(s)
Attention, Emotions, Humans, Emotions/physiology, Attention/physiology, Executive Function/physiology, Language
9.
Cognition ; 240: 105581, 2023 11.
Article in English | MEDLINE | ID: mdl-37573692

ABSTRACT

Human communication involves the process of translating intentions into communicative actions. But how exactly do our intentions surface in the visible communicative behavior we display? Here we focus on pointing gestures, a fundamental building block of everyday communication, and investigate whether and how different types of underlying intent modulate the kinematics of the pointing hand and the brain activity preceding the gestural movement. In a dynamic virtual reality environment, participants pointed at a referent to either share attention with their addressee, inform their addressee, or get their addressee to perform an action. Behaviorally, it was observed that these different underlying intentions modulated how long participants kept their arm and finger still, both prior to starting the movement and when keeping their pointing hand in apex position. In early planning stages, a neurophysiological distinction was observed between a gesture that is used to share attitudes and knowledge with another person versus a gesture that mainly uses that person as a means to perform an action. Together, these findings suggest that our intentions influence our actions from the earliest neurophysiological planning stages to the kinematic endpoint of the movement itself.


Subject(s)
Gestures, Intention, Humans, Fingers, Hand, Movement
10.
Neuropsychologia ; 191: 108730, 2023 Dec 15.
Article in English | MEDLINE | ID: mdl-37939871

ABSTRACT

EEG and eye-tracking provide complementary information when investigating language comprehension. Evidence that speech processing may be facilitated by speech prediction comes from the observation that a listener's eye gaze moves towards a referent before it is mentioned if the remainder of the spoken sentence is predictable. However, changes to the trajectory of anticipatory fixations could result from a change in prediction or an attention shift. Conversely, N400 amplitudes and concurrent spectral power provide information about the ease of word processing the moment the word is perceived. In a proof-of-principle investigation, we combined EEG and eye-tracking to study linguistic prediction in naturalistic, virtual environments. We observed increased processing, reflected in theta band power, either during verb processing - when the verb was predictive of the noun - or during noun processing - when the verb was not predictive of the noun. Alpha power was higher in response to the predictive verb and unpredictable nouns. We replicated typical effects of noun congruence but not predictability on the N400 in response to the noun. Thus, the rich visual context that accompanied speech in virtual reality influenced language processing compared to previous reports, where the visual context may have facilitated processing of unpredictable nouns. Finally, anticipatory fixations were predictive of spectral power during noun processing and the length of time fixating the target could be predicted by spectral power at verb onset, conditional on the object having been fixated. Overall, we show that combining EEG and eye-tracking provides a promising new method to answer novel research questions about the prediction of upcoming linguistic input, for example, regarding the role of extralinguistic cues in prediction during language comprehension.


Subject(s)
Electroencephalography, Speech Perception, Humans, Male, Female, Speech, Eye-Tracking Technology, Comprehension/physiology, Evoked Potentials, Speech Perception/physiology
11.
Nat Hum Behav ; 7(12): 2099-2110, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37904020

ABSTRACT

The extent to which languages share properties reflecting the non-linguistic constraints of the speakers who speak them is key to the debate regarding the relationship between language and cognition. A critical case is spatial communication, where it has been argued that semantic universals should exist, if anywhere. Here, using an experimental paradigm able to separate variation within a language from variation between languages, we tested the use of spatial demonstratives-the most fundamental and frequent spatial terms across languages. In n = 874 speakers across 29 languages, we show that speakers of all tested languages use spatial demonstratives as a function of being able to reach or act on an object being referred to. In some languages, the position of the addressee is also relevant in selecting between demonstrative forms. Commonalities and differences across languages in spatial communication can be understood in terms of universal constraints on action shaping spatial language and cognition.


Subject(s)
Language, Semantics, Humans, Cognition
12.
Psychon Bull Rev ; 28(2): 409-433, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33037583

ABSTRACT

Language allows us to efficiently communicate about the things in the world around us. Seemingly simple words like this and that are a cornerstone of our capability to refer, as they contribute to guiding the attention of our addressee to the specific entity we are talking about. Such demonstratives are acquired early in life, ubiquitous in everyday talk, often closely tied to our gestural communicative abilities, and present in all spoken languages of the world. Based on a review of recent experimental work, here we introduce a new conceptual framework of demonstrative reference. In the context of this framework, we argue that several physical, psychological, and referent-intrinsic factors dynamically interact to influence whether a speaker will use one demonstrative form (e.g., this) or another (e.g., that) in a given setting. However, the relative influence of these factors themselves is argued to be a function of the cultural language setting at hand, the theory-of-mind capacities of the speaker, and the affordances of the specific context in which the speech event takes place. It is demonstrated that the framework has the potential to reconcile findings in the literature that previously seemed irreconcilable. We show that the framework may to a large extent generalize to instances of endophoric reference (e.g., anaphora) and speculate that it may also describe the specific form and kinematics a speaker's pointing gesture takes. Testable predictions and novel research questions derived from the framework are presented and discussed.


Subject(s)
Gestures, Language, Psychological Models, Speech, Theory of Mind, Humans
13.
Cognition ; 195: 104107, 2020 02.
Article in English | MEDLINE | ID: mdl-31731119

ABSTRACT

Perhaps the main advantage of being bilingual is the capacity to communicate with interlocutors that have different language backgrounds. In the life of a bilingual, switching interlocutors hence sometimes involves switching languages. We know that the capacity to switch from one language to another is supported by control mechanisms, such as task-set reconfiguration. This study investigates whether similar neurophysiological mechanisms support bilingual switching between different listeners, within and across languages. A group of 48 unbalanced Dutch-English bilinguals named pictures for two monolingual Dutch and two monolingual English life-size virtual listeners in an immersive virtual reality environment. In terms of reaction times, switching languages came at a cost over and above the significant cost of switching from one listener to another. Analysis of event-related potentials showed similar electrophysiological correlates for switching listeners and switching languages. However, it was found that having to switch listeners and languages at the same time delays the onset of lexical processes more than a switch between listeners within the same language. Findings are interpreted in light of the interplay between proactive (sustained inhibition) and reactive (task-set reconfiguration) control in bilingual speech production. It is argued that a possible bilingual advantage in executive control may not be due to the process of switching per se. This study paves the way for the study of bilingual language switching in ecologically valid, naturalistic, experimental settings.


Subject(s)
Evoked Potentials/physiology, Executive Function/physiology, Multilingualism, Proactive Inhibition, Psycholinguistics, Speech/physiology, Adolescent, Adult, Electroencephalography, Female, Humans, Male, Visual Pattern Recognition/physiology, Virtual Reality, Young Adult
14.
Q J Exp Psychol (Hove) ; 73(10): 1523-1536, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32160814

ABSTRACT

Spoken words are highly variable and therefore listeners interpret speech sounds relative to the surrounding acoustic context, such as the speech rate of a preceding sentence. For instance, a vowel midway between short /ɑ/ and long /a:/ in Dutch is perceived as short /ɑ/ in the context of preceding slow speech, but as long /a:/ if preceded by a fast context. Despite the well-established influence of visual articulatory cues on speech comprehension, it remains unclear whether visual cues to speech rate also influence subsequent spoken word recognition. In two "Go Fish"-like experiments, participants were presented with audio-only (auditory speech + fixation cross), visual-only (mute videos of talking head), and audiovisual (speech + videos) context sentences, followed by ambiguous target words containing vowels midway between short /ɑ/ and long /a:/. In Experiment 1, target words were always presented auditorily, without visual articulatory cues. Although the audio-only and audiovisual contexts induced a rate effect (i.e., more long /a:/ responses after fast contexts), the visual-only condition did not. When, in Experiment 2, target words were presented audiovisually, rate effects were observed in all three conditions, including visual-only. This suggests that visual cues to speech rate in a context sentence influence the perception of following visual target cues (e.g., duration of lip aperture), which at an audiovisual integration stage bias participants' target categorisation responses. These findings contribute to a better understanding of how what we see influences what we hear.


Subject(s)
Cues, Phonetics, Speech Perception, Speech, Acoustic Stimulation, Acoustics, Adolescent, Adult, Female, Humans, Male, Photic Stimulation, Young Adult
15.
J Exp Psychol Learn Mem Cogn ; 46(3): 403-415, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31192681

ABSTRACT

When learning a second spoken language, cognates, words overlapping in form and meaning with one's native language, help break into the language one wishes to acquire. But what happens when the to-be-acquired second language is a sign language? We tested whether hearing nonsigners rely on their gestural repertoire at first exposure to a sign language. Participants saw iconic signs with high and low overlap with the form of iconic gestures while electrophysiological brain activity was recorded. Upon first exposure, signs with low overlap with gestures elicited enhanced positive amplitude in the P3a component compared to signs with high overlap. This effect disappeared after a training session. We conclude that nonsigners generate expectations about the form of iconic signs never seen before based on their implicit knowledge of gestures, even without having to produce them. Learners thus draw from any available semiotic resources when acquiring a second language, and not only from their linguistic experience.


Subject(s)
Evoked Potentials/physiology, Gestures, Learning/physiology, Multilingualism, Psycholinguistics, Sign Language, Adult, Psychological Anticipation/physiology, Electroencephalography, P300 Event-Related Potentials/physiology, Female, Humans, Male, Psychological Practice
16.
Psychon Bull Rev ; 26(3): 894-900, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30734158

ABSTRACT

This paper introduces virtual reality as an experimental method for the language sciences and provides a review of recent studies using the method to answer fundamental, psycholinguistic research questions. It is argued that virtual reality demonstrates that ecological validity and experimental control should not be conceived of as two extremes on a continuum, but rather as two orthogonal factors. Benefits of using virtual reality as an experimental method include that in a virtual environment, as in the real world, there is no artificial spatial divide between participant and stimulus. Moreover, virtual reality experiments do not necessarily have to include a repetitive trial structure or an unnatural experimental task. Virtual agents outperform experimental confederates in terms of the consistency and replicability of their behavior, allowing for reproducible science across participants and research labs. The main promise of virtual reality as a tool for the experimental language sciences, however, is that it shifts theoretical focus towards the interplay between different modalities (e.g., speech, gesture, eye gaze, facial expressions) in dynamic and communicative real-world environments, complementing studies that focus on one modality (e.g., speech) in isolation.


Subject(s)
Psycholinguistics/methods, Virtual Reality, Humans
17.
Cortex ; 111: 63-73, 2019 02.
Article in English | MEDLINE | ID: mdl-30458296

ABSTRACT

Research into bilingual language production has identified a language control network that subserves control operations when bilinguals produce speech. Here we explore which brain areas are recruited for control purposes in bilingual language comprehension. In two experimental fMRI sessions, Dutch-English unbalanced bilinguals read words that differed in cross-linguistic form and meaning overlap across their two languages. The need for control operations was further manipulated by varying stimulus list composition across the two experimental sessions. We observed activation of the language control network in bilingual language comprehension as a function of both cross-linguistic form and meaning overlap and stimulus list composition. These findings suggest that the language control network is shared across bilingual language production and comprehension. We argue that activation of the language control network in language comprehension allows bilinguals to quickly and efficiently grasp the context-relevant meaning of words.


Subject(s)
Brain/physiology, Comprehension/physiology, Language, Multilingualism, Nerve Net/physiology, Recognition (Psychology)/physiology, Adult, Brain/diagnostic imaging, Female, Humans, Magnetic Resonance Imaging, Male, Nerve Net/diagnostic imaging, Reading, Young Adult
19.
Neuropsychologia ; 95: 21-29, 2017 01 27.
Article in English | MEDLINE | ID: mdl-27939189

ABSTRACT

In everyday communication speakers often refer in speech and/or gesture to objects in their immediate environment, thereby shifting their addressee's attention to an intended referent. The neurobiological infrastructure involved in the comprehension of such basic multimodal communicative acts remains unclear. In an event-related fMRI study, we presented participants with pictures of a speaker and two objects while they concurrently listened to her speech. In each picture, one of the objects was singled out, either through the speaker's index-finger pointing gesture or through a visual cue that made the object perceptually more salient in the absence of gesture. A mismatch (compared to a match) between speech and the object singled out by the speaker's pointing gesture led to enhanced activation in left IFG and bilateral pMTG, showing the importance of these areas in conceptual matching between speech and referent. Moreover, a match (compared to a mismatch) between speech and the object made salient through a visual cue led to enhanced activation in the mentalizing system, arguably reflecting an attempt to converge on a jointly attended referent in the absence of pointing. These findings shed new light on the neurobiological underpinnings of the core communicative process of comprehending a speaker's multimodal referential act and stress the power of pointing as an important natural device to link speech to objects.


Subject(s)
Brain/physiology, Gestures, Speech Perception/physiology, Visual Perception/physiology, Adolescent, Adult, Brain Mapping, Comprehension/physiology, Cues, Female, Humans, Magnetic Resonance Imaging, Male, Neuropsychological Tests, Young Adult
20.
Cognition ; 136: 64-84, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25490130

ABSTRACT

A fundamental property of language is that it can be used to refer to entities in the extra-linguistic physical context of a conversation in order to establish a joint focus of attention on a referent. Typological and psycholinguistic work across a wide range of languages has put forward at least two different theoretical views on demonstrative reference. Here we contrasted and tested these two accounts by investigating the electrophysiological brain activity underlying the construction of indexical meaning in comprehension. In two EEG experiments, participants watched pictures of a speaker who referred to one of two objects using speech and an index-finger pointing gesture. In contrast with separately collected native speakers' linguistic intuitions, N400 effects showed a preference for a proximal demonstrative when speaker and addressee were in a face-to-face orientation and all possible referents were located in the shared space between them, irrespective of the physical proximity of the referent to the speaker. These findings reject egocentric proximity-based accounts of demonstrative reference, support a sociocentric approach to deixis, suggest that interlocutors construe a shared space during conversation, and imply that the psychological proximity of a referent may be more important than its physical proximity.


Subject(s)
Attention/physiology, Brain/physiology, Comprehension/physiology, Language, Adolescent, Adult, Electroencephalography, Female, Humans, Speech/physiology, Young Adult