ABSTRACT
We exploit the phenomenon of cross-modal, cross-language activation to examine the dynamics of language processing. Previous within-language work showed that seeing a sign coactivates phonologically related signs, just as hearing a spoken word coactivates phonologically related words. In this study, we conducted a series of eye-tracking experiments using the visual world paradigm to investigate the time course of cross-language coactivation in hearing bimodal bilinguals (Spanish and Spanish Sign Language) and unimodal bilinguals (Spanish/Basque). The aim was to gauge whether (and how) seeing a sign could coactivate words and, conversely, how hearing a word could coactivate signs, and how such cross-language coactivation patterns differ from within-language coactivation. The results revealed cross-language, cross-modal activation in both directions. Furthermore, comparison with previous findings of within-language lexical coactivation for spoken and signed language showed how the impact of temporal structure changes across modalities. Spoken word activation follows the temporal structure of the word only when the word itself is heard; for signs, the temporal structure of the sign does not govern the time course of lexical access (location coactivation precedes handshape coactivation), even when the sign is seen. We provide evidence that this pattern of activation is instead driven by how frequent the signs' sublexical units are in the lexicon. These results reveal the interaction between the perceptual properties of the explicit signal and structural linguistic properties. Examining languages across modalities illustrates how this interaction impacts language processing.
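The time-course claims above come from visual world paradigm eye tracking, where coactivation is indexed by more looks to a related competitor picture than to unrelated distractors as the spoken word or sign unfolds. As an illustration only, here is a minimal sketch of that computation in Python, assuming a hypothetical sample-level data format (column names are invented; this is not the authors' analysis pipeline):

# Minimal sketch of a visual-world-paradigm time-course analysis.
# Hypothetical data format: one row per eye-tracking sample, with columns
# 'participant', 'time_ms' (relative to word/sign onset), and 'roi' in
# {'target', 'competitor', 'distractor'}. Not the authors' pipeline.
import pandas as pd

def fixation_proportions(samples: pd.DataFrame, bin_ms: int = 50) -> pd.DataFrame:
    """Proportion of looks to each region of interest per participant and time bin."""
    samples = samples.copy()
    samples["bin"] = (samples["time_ms"] // bin_ms) * bin_ms
    counts = (samples.groupby(["participant", "bin", "roi"])
                     .size()
                     .unstack("roi", fill_value=0))
    return counts.div(counts.sum(axis=1), axis=0)

def competitor_advantage(props: pd.DataFrame) -> pd.Series:
    """Competitor minus distractor fixation proportion, averaged over participants
    within each time bin; sustained values above zero indicate coactivation."""
    diff = props["competitor"] - props["distractor"]
    return diff.groupby(level="bin").mean()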
Subjects
Language, Multilingualism, Sign Language, Humans
ABSTRACT
Spoken words and signs both consist of structured sub-lexical units. While phonemes unfold in time in the case of the spoken signal, visual sub-lexical units such as location and handshape are produced simultaneously in signs. In the current study we investigate the role of sub-lexical units in lexical access in spoken Spanish and in Spanish Sign Language (LSE) in hearing early bimodal bilinguals and in hearing second language (L2) learners of LSE, both native speakers of Spanish, using the visual world paradigm. Experiment 1 investigated phonological competition in spoken Spanish from words sharing onset or rhyme. Experiment 2 investigated competition in LSE from signs sharing handshape or location. For Spanish, the results confirm previous findings for word recognition: onset competition comes first and is more salient than rhyme competition. For sign recognition, native bimodal bilinguals (native speakers of spoken and signed languages) showed earlier competition from location than handshape, and overall stronger competition from handshape compared to location. Hearing bimodal bilinguals who learned LSE as a second language also experienced competition from both signed parameters. However, they showed later effects for location competitors and weaker effects for handshape competitors than native signers. Our results demonstrate that the temporal dynamics of spoken words and signs impact the time course of lexical co-activation. Furthermore, age of acquisition of the signed language modulates sub-lexical processing of signs, and may reflect enhanced abilities of native signers to use early phonological cues in transition movements to constrain sign recognition.
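Comparing when location and handshape competition emerge amounts to asking, for each competitor type, at which time bin competitor fixations first reliably exceed distractor fixations. A toy divergence-point estimate along those lines is sketched below (the threshold-and-run rule and all numbers are invented for illustration; the study's actual statistics may differ):

# Toy divergence-point estimate: the earliest time bin at which the
# competitor-minus-distractor difference exceeds a threshold and stays
# above it for `run` consecutive bins. Illustrative only.
from typing import Optional, Sequence

def divergence_bin(diffs: Sequence[float], bin_starts: Sequence[int],
                   threshold: float = 0.0, run: int = 4) -> Optional[int]:
    """Return the start time (ms) of the first sustained competition effect."""
    streak = 0
    for i, diff in enumerate(diffs):
        streak = streak + 1 if diff > threshold else 0
        if streak == run:
            return bin_starts[i - (run - 1)]
    return None

# Made-up curves in which location competition emerges earlier than
# handshape competition, as reported for native signers above.
bins = list(range(0, 600, 50))
location_diff  = [0, 0, .01, .03, .05, .06, .07, .08, .08, .07, .06, .05]
handshape_diff = [0, 0, 0, 0, .01, .02, .04, .06, .09, .10, .10, .09]
print(divergence_bin(location_diff, bins))   # earlier onset
print(divergence_bin(handshape_diff, bins))  # later onset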
Subjects
Multilingualism, Visual Pattern Recognition/physiology, Psycholinguistics, Recognition (Psychology)/physiology, Sign Language, Space Perception/physiology, Speech Perception/physiology, Adult, Age Factors, Humans, Phonetics, Time Factors
ABSTRACT
This study investigated whether language control during language production in bilinguals generalizes across modalities, and to what extent the language control system is shaped by competition for the same articulators. Using a cued language-switching paradigm, we investigated whether switch costs are observed when hearing signers switch between a spoken and a signed language. The results showed an asymmetrical switch cost for bimodal bilinguals on reaction time (RT) and accuracy, with larger costs for the (dominant) spoken language. Our findings suggest important similarities in the mechanisms underlying language selection in bimodal bilinguals and unimodal bilinguals, with competition occurring at multiple levels other than phonology.
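The switch costs reported here are conventionally computed as the difference between switch trials (the response language differs from the previous trial) and repeat trials, separately for each language; an asymmetry means the cost is larger for one language than the other. A minimal sketch of that computation, using invented trial records rather than the study's data:

# Minimal sketch of per-language switch-cost computation in a cued
# language-switching task. Hypothetical trial format; not the authors' code.
from statistics import mean

def switch_costs(trials):
    """trials: ordered list of dicts with 'language' ('spoken' or 'signed')
    and 'rt' in ms. Returns mean RT(switch) - RT(repeat) per language."""
    rts = {}  # (language, trial_type) -> list of RTs
    for prev, cur in zip(trials, trials[1:]):
        trial_type = "switch" if cur["language"] != prev["language"] else "repeat"
        rts.setdefault((cur["language"], trial_type), []).append(cur["rt"])
    return {lang: mean(rts[(lang, "switch")]) - mean(rts[(lang, "repeat")])
            for lang in {t["language"] for t in trials}
            if (lang, "switch") in rts and (lang, "repeat") in rts}

# Example with made-up RTs: a larger cost for the dominant spoken language
# than for the signed language is an asymmetrical switch cost.
trials = [{"language": "spoken", "rt": 620}, {"language": "signed", "rt": 840},
          {"language": "signed", "rt": 790}, {"language": "spoken", "rt": 760},
          {"language": "spoken", "rt": 650}, {"language": "signed", "rt": 830}]
print(switch_costs(trials))  # e.g. {'spoken': 110, 'signed': 45}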