Results 1 - 12 of 12
1.
Child Dev ; 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38563146

ABSTRACT

Most language use is displaced, referring to past, future, or hypothetical events, posing the challenge of how children learn what words refer to when the referent is not physically available. One possibility is that iconic cues that imagistically evoke properties of absent referents support learning when referents are displaced. In an audio-visual corpus of caregiver-child dyads, English-speaking caregivers interacted with their children (N = 71, 24-58 months) in contexts in which the objects talked about were either familiar or unfamiliar to the child, and either physically present or displaced. The analysis of the range of vocal, manual, and looking behaviors caregivers produced suggests that caregivers used iconic cues especially in displaced contexts and for unfamiliar objects, using other cues when objects were present.

2.
Dev Sci ; 24(3): e13066, 2021 05.
Article in English | MEDLINE | ID: mdl-33231339

ABSTRACT

A key question in developmental research concerns how children learn associations between words and meanings in their early language development. Given a vast array of possible referents, how does the child know what a word refers to? We contend that onomatopoeia (e.g. knock, meow), where a word's sound evokes the sound properties associated with its meaning, are particularly useful in children's early vocabulary development, offering a link between word and sensory experience not present in arbitrary forms. We suggest that, because onomatopoeia evoke imagery of the referent, children can draw from sensory experience to easily link onomatopoeic words to meaning, both when the referent is present and when it is absent. We use two sources of data: naturalistic observations of English-speaking caregiver-child interactions from 14 up to 54 months, to establish whether these words are present early in caregivers' speech to children, and experimental data to test whether English-speaking children can learn from onomatopoeia when it is present. Our results demonstrate that onomatopoeia: (a) are most prevalent in early child-directed language and in children's early productions, (b) are learnt more easily by children compared with non-iconic forms and (c) are used by caregivers in contexts where they can support communication and facilitate word learning.


Subject(s)
Language Development , Symbolism , Child , Humans , Language , Verbal Learning , Vocabulary
3.
Cogn Sci ; 44(7): e12868, 2020 07.
Article in English | MEDLINE | ID: mdl-32619055

ABSTRACT

Successful face-to-face communication involves multiple channels, notably hand gestures in addition to speech for spoken language, and mouth patterns in addition to manual signs for sign language. In four experiments, we assess the extent to which comprehenders of British Sign Language (BSL) and English rely, respectively, on cues from the hands and the mouth in accessing meaning. We created congruent and incongruent combinations of BSL manual signs and mouthings and English speech and gesture by video manipulation and asked participants to carry out a picture-matching task. When participants were instructed to pay attention only to the primary channel, incongruent "secondary" cues still affected performance, showing that these are reliably used for comprehension. When both cues were relevant, the languages diverged: Hand gestures continued to be used in English, but mouth movements did not in BSL. Moreover, non-fluent speakers and signers varied in the use of these cues: Gestures were found to be more important for non-native than native speakers; mouth movements were found to be less important for non-fluent signers. We discuss the results in terms of the information provided by different communicative channels, which combine to provide meaningful information.


Subject(s)
Cues (Psychology) , Sign Language , Gestures , Hand , Humans , Mouth , Speech
4.
Front Psychol ; 9: 1109, 2018.
Article in English | MEDLINE | ID: mdl-30002643
5.
Dev Sci ; 21(2)2018 03.
Article in English | MEDLINE | ID: mdl-28295866

ABSTRACT

Most research on the mechanisms underlying referential mapping has assumed that learning occurs in ostensive contexts, where label and referent co-occur, and that form and meaning are linked by arbitrary convention alone. In the present study, we focus on iconicity in language, that is, resemblance relationships between form and meaning, and on non-ostensive contexts, where label and referent do not co-occur. We approach the question of language learning from the perspective of the language input. Specifically, we look at child-directed language (CDL) in British Sign Language (BSL), a language rich in iconicity due to the affordances of the visual modality. We ask whether child-directed signing exploits iconicity in the language by highlighting the similarity mapping between form and referent. We find that CDL modifications occur more often with iconic signs than with non-iconic signs. Crucially, for iconic signs, modifications are more frequent in non-ostensive contexts than in ostensive contexts. Furthermore, we find that pointing dominates in ostensive contexts, and suggest that caregivers adjust the semiotic resources recruited in CDL to context. These findings offer first evidence for a role of iconicity in the language input and suggest that iconicity may be involved in referential mapping and language learning, particularly in non-ostensive contexts.


Subject(s)
Child Language , Language Development , Sign Language , Child , Humans , Language , Learning
6.
Cogn Sci ; 41 Suppl 6: 1377-1404, 2017 May.
Article in English | MEDLINE | ID: mdl-27484253

ABSTRACT

Previous studies show that reading sentences about actions leads to specific motor activity associated with actually performing those actions. We investigate how sign language input may modulate motor activation, using British Sign Language (BSL) sentences, some of which explicitly encode direction of motion, versus written English, where motion is only implied. We find no evidence of action simulation in BSL comprehension (Experiments 1-3), but we find effects of action simulation in comprehension of written English sentences by deaf native BSL signers (Experiment 4). These results provide constraints on the nature of mental simulations involved in comprehending action sentences referring to transfer events, suggesting that the richer contextual information provided by BSL sentences versus written or spoken English may reduce the need for action simulation in comprehension, at least when the event described does not map completely onto the signer's own body.


Subject(s)
Comprehension/physiology , Deafness/psychology , Sign Language , Adult , Female , Humans , Male , Middle Aged , Young Adult
7.
Top Cogn Sci ; 7(1): 2-11, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25565249

ABSTRACT

For humans, the ability to communicate and use language is instantiated not only in the vocal modality but also in the visual modality. The main examples of this are sign languages and (co-speech) gestures. Sign languages, the natural languages of Deaf communities, use systematic and conventionalized movements of the hands, face, and body for linguistic expression. Co-speech gestures, though non-linguistic, are produced in tight semantic and temporal integration with speech and constitute an integral part of language together with speech. The articles in this issue explore and document how gestures and sign languages are similar or different and how communicative expression in the visual modality can change from being gestural to grammatical in nature through processes of conventionalization. As such, this issue contributes to our understanding of how the visual modality shapes language and the emergence of linguistic structure in newly developing systems. Studying the relationship between signs and gestures provides a new window onto the human ability to recruit multiple levels of representation (e.g., categorical, gradient, iconic, abstract) in the service of using or creating conventionalized communicative systems.


Subject(s)
Gestures , Language , Pattern Recognition, Visual/physiology , Sign Language , Humans , Language Development
8.
Spat Cogn Comput ; 15(3): 143-169, 2015 Jun 01.
Article in English | MEDLINE | ID: mdl-26981027

ABSTRACT

Sign languages express viewpoint-dependent spatial relations (e.g., left, right) iconically but must conventionalize from whose viewpoint the spatial relation is being described, the signer's or the perceiver's. In Experiment 1, ASL signers and sign-naïve gesturers expressed viewpoint-dependent relations egocentrically, but only signers successfully interpreted the descriptions non-egocentrically, suggesting that viewpoint convergence in the visual modality emerges with language conventionalization. In Experiment 2, we observed that the cost of adopting a non-egocentric viewpoint was greater for producers than for perceivers, suggesting that sign languages have converged on the most cognitively efficient means of expressing left-right spatial relations. We suggest that non-linguistic cognitive factors such as visual perspective-taking and motor embodiment may constrain viewpoint convergence in the visual-spatial modality.

9.
Top Cogn Sci ; 7(1): 36-60, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25472492

ABSTRACT

Establishing and maintaining reference is a crucial part of discourse. In spoken languages, differential linguistic devices mark referents occurring in different referential contexts, that is, introduction, maintenance, and re-introduction contexts. Speakers using gestures as well as users of sign languages have also been shown to mark referents differentially depending on the referential context. This article investigates the modality-specific contribution of the visual modality in marking referential context by providing a direct comparison between sign language (German Sign Language; DGS) and co-speech gesture with speech (German) in elicited narratives. Across all forms of expression, we find that referents in subject position are referred to with more marking material in re-introduction contexts compared to maintenance contexts. Furthermore, we find that spatial modification is used as a modality-specific strategy in both DGS and German co-speech gesture, and that the configuration of referent locations in sign space and gesture space corresponds in an iconic and consistent way to the locations of referents in the narrated event. However, we find that spatial modification is used in different ways for marking re-introduction and maintenance contexts in DGS and German co-speech gesture. The findings are discussed in relation to the unique contribution of the visual modality to reference tracking in discourse when it is used in a unimodal system with full linguistic structure (i.e., as in sign) versus in a bimodal system that is a composite of speech and gesture.


Subject(s)
Gestures , Sign Language , Speech/physiology , Female , Humans , Language , Language Development , Linear Models , Pattern Recognition, Visual/physiology , Persons With Hearing Impairments
10.
Philos Trans R Soc Lond B Biol Sci ; 369(1651): 20130292, 2014 Sep 19.
Article in English | MEDLINE | ID: mdl-25092660

ABSTRACT

Our understanding of the cognitive and neural underpinnings of language has traditionally been firmly based on spoken Indo-European languages and on language studied as speech or text. However, in face-to-face communication, language is multimodal: speech signals are invariably accompanied by visual information on the face and in manual gestures, and sign languages deploy multiple channels (hands, face and body) in utterance construction. Moreover, the narrow focus on spoken Indo-European languages has entrenched the assumption that language is composed wholly of an arbitrary system of symbols and rules. However, iconicity (i.e. resemblance between aspects of communicative form and meaning) is also present: speakers use iconic gestures when they speak; many non-Indo-European spoken languages exhibit a substantial amount of iconicity in word forms and, finally, iconicity is the norm, rather than the exception, in sign languages. This introduction provides the motivation for taking a multimodal approach to the study of language learning, processing and evolution, and discusses the broad implications of shifting our current dominant approaches and assumptions to encompass multimodal expression in both signed and spoken languages.


Subject(s)
Biological Evolution , Language , Learning/physiology , Linguistics/trends , Nonverbal Communication/physiology , Semantics , Humans
11.
Philos Trans R Soc Lond B Biol Sci ; 369(1651): 20130300, 2014 Sep 19.
Article in English | MEDLINE | ID: mdl-25092668

ABSTRACT

Iconicity, a resemblance between properties of linguistic form (both in spoken and signed languages) and meaning, has traditionally been considered to be a marginal, irrelevant phenomenon for our understanding of language processing, development and evolution. Rather, the arbitrary and symbolic nature of language has long been taken as a design feature of the human linguistic system. In this paper, we propose an alternative framework in which iconicity in face-to-face communication (spoken and signed) is a powerful vehicle for bridging between language and human sensori-motor experience, and, as such, iconicity provides a key to understanding language evolution, development and processing. In language evolution, iconicity might have played a key role in establishing displacement (the ability of language to refer beyond what is immediately present), which is core to what language does; in ontogenesis, iconicity might play a critical role in supporting referentiality (learning to map linguistic labels to objects, events, etc., in the world), which is core to vocabulary development. Finally, in language processing, iconicity could provide a mechanism to account for how language comes to be embodied (grounded in our sensory and motor systems), which is core to meaningful communication.


Subject(s)
Language Development , Language , Models, Psychological , Nonverbal Communication , Semantics , Symbolism , Cultural Evolution , Humans , Vocabulary
12.
Front Psychol ; 1: 227, 2010.
Article in English | MEDLINE | ID: mdl-21833282

ABSTRACT

Current views about language are dominated by the idea of arbitrary connections between linguistic form and meaning. However, if we look beyond the more familiar Indo-European languages and also include both spoken and signed language modalities, we find that motivated, iconic form-meaning mappings are, in fact, pervasive in language. In this paper, we review the different types of iconic mappings that characterize languages in both modalities, including the predominantly visually iconic mappings found in signed languages. Having shown that iconic mappings are present across languages, we then proceed to review evidence showing that language users (signers and speakers) exploit iconicity in language processing and language acquisition. While not discounting the presence and importance of arbitrariness in language, we put forward the idea that iconicity need also be recognized as a general property of language, which may serve the function of reducing the gap between linguistic form and conceptual representation to allow the language system to "hook up" to motor, perceptual, and affective experience.
