Results 1 - 20 of 22
1.
Dev Sci ; 27(5): e13507, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38629500

ABSTRACT

Blind adults display language-specificity in their packaging and ordering of events in speech. These differences affect the representation of events in co-speech gesture (gesturing with speech) but not in silent gesture (gesturing without speech). Here we examine when in development blind children begin to show adult-like patterns in co-speech and silent gesture. We studied speech and gestures produced by 30 blind and 30 sighted children learning Turkish, equally divided into 3 age groups: 5-6, 7-8, 9-10 years. The children were asked to describe three-dimensional spatial event scenes (e.g., running out of a house) first with speech, and then without speech using only their hands. We focused on physical motion events, which, in blind adults, elicit cross-linguistic differences in speech and co-speech gesture, but cross-linguistic similarities in silent gesture. Our results showed an effect of language on gesture when it was accompanied by speech (co-speech gesture), but not when it was used without speech (silent gesture), across both blind and sighted learners. The language-specific co-speech gesture pattern for both packaging and ordering semantic elements was present at the earliest ages we tested in both blind and sighted children. The silent gesture pattern appeared later for blind children than for sighted children for both packaging and ordering. Our findings highlight gesture as a robust and integral aspect of the language acquisition process at early ages and provide insight into when language does and does not have an effect on gesture, even in blind children who lack visual access to gesture.

RESEARCH HIGHLIGHTS:
Gestures, when produced with speech (i.e., co-speech gesture), follow language-specific patterns in event representation in both blind and sighted children.
Gestures, when produced without speech (i.e., silent gesture), do not follow language-specific patterns in event representation in either blind or sighted children.
Language-specific patterns in speech and co-speech gestures are observable at the same time in blind and sighted children.
The cross-linguistic similarities in silent gestures begin slightly later in blind children than in sighted children.


Subject(s)
Blindness , Gestures , Language Development , Speech , Humans , Child , Male , Female , Child, Preschool , Speech/physiology , Blindness/physiopathology , Vision, Ocular/physiology , Language
2.
Cogn Sci ; 47(11): e13377, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37966099

ABSTRACT

Static depiction of motion, particularly lines trailing behind a mover, has long been of interest in the psychology literature. Empirical research has demonstrated that these "motion lines" benefit motion comprehension in static images by disambiguating the direction of movement. Yet, there is no consensus on how those lines derive their meaning. In this article, we review three accounts suggesting different interpretations of what motion lines represent. While a perceptual account considers motion lines to originate from motion streaks in the primary visual cortex, metaphorical and lexical accounts propose that they are graphical conventions that must be learned, either through resemblance to sensory experiences (e.g., natural path marks) or by being mapped directly to a conceptual category of paths. To contrast these three accounts, we integrate empirical research on motion lines and their understanding. Overall, developmental, proficiency-based, and cross-cultural variation indicates that the understanding of motion lines is neither innate nor universal, thus providing less support for lines having a purely perceptual origin. Rather, we argue that the empirical findings suggest that motion lines are not iconic depictions of visual percepts but graphical conventions indexing conceptual path information, which need to be learned and encoded in a visual lexicon.


Subject(s)
Motion Perception , Humans , Learning , Motion , Movement , Empirical Research
3.
Cogn Sci ; 47(1): e13228, 2023 01.
Article in English | MEDLINE | ID: mdl-36607157

ABSTRACT

The human experience is shaped by information from different perceptual channels, but it is still debated whether and how differential experience influences language use. To address this, we compared congenitally blind, blindfolded, and sighted people's descriptions of the same motion events experienced auditorily by all participants (i.e., via sound alone) and conveyed in speech and gesture. Comparison of blind and sighted participants to blindfolded participants helped us disentangle the effects of a lifetime experience of being blind versus the task-specific effects of experiencing a motion event by sound alone. Compared to sighted people, blind people's speech focused more on path and less on manner of motion, and encoded paths in a more segmented fashion using more landmarks and path verbs. Gestures followed the speech, such that blind people pointed to landmarks more and depicted manner less than sighted people. This suggests that visual experience affects how people express spatial events in multimodal language and that blindness may enhance sensitivity to paths of motion due to changes in event construal. These findings have implications for claims that language processes are deeply rooted in our sensory experiences.


Subject(s)
Blindness , Language , Humans , Speech , Motion
4.
Front Psychol ; 13: 1010002, 2022.
Article in English | MEDLINE | ID: mdl-36312066

ABSTRACT

A core question in developmental and cognitive research concerns the way linguistic variation affects the acquisition process. Previous research on monolinguals suggests that children, but not adults, tend to regularize inconsistent input, resulting in reduced variation. Some recent claims explain regularization as a general bias linked to cognitive load. However, little is known about bilingual acquisition contexts where children naturally experience both increased variability and cognitive load. This study investigated the impact of between- and within-language variability in syntactic packaging (i.e., how semantic elements are mapped onto syntactic units) on simultaneous bilinguals' acquisition of motion event encoding. In this domain, French is considered highly variable, in contrast to low variability demonstrated by English. Based on this crosslinguistic contrast, 96 English-French bilingual children (aged 4-11 years) and 96 age-matched monolinguals of each language described 32 animated cartoons showing complex motion events. Children's variability of selected syntactic patterns was measured using the information-theoretical concept of entropy. Results indicated that bilingual children significantly reduced syntactic variation relative to monolingual peers, but only in French, the more variable language. Moreover, bilingual children converged in entropy levels across the two languages and patterned mid-way between respective monolinguals. These findings suggest that the cognitive load inherent in bilingualism is not sufficient to explain reduced linguistic variation. Instead, the asymmetric drop in entropy highlights the strong impact of crosslinguistic differences and thus underlines the importance of taking language-specific factors into account in theories of cognitive load.
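The entropy measure described above can be sketched in a few lines: Shannon entropy over the distribution of a speaker's syntactic packaging choices, where fully consistent packaging yields 0 bits and greater variability yields higher values. The following is an illustrative computation only; the pattern labels and counts are hypothetical, not the study's coding scheme or data.

```python
from collections import Counter
import math

def pattern_entropy(patterns):
    """Shannon entropy (in bits) of a list of syntactic pattern codings.

    0 bits = the speaker always uses the same pattern (fully consistent);
    higher values = more variable packaging of semantic elements.
    """
    counts = Counter(patterns)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Hypothetical codings for two speakers' event descriptions (not real data):
# a variable, "French-like" speaker vs. a consistent, "English-like" speaker.
french_like = ["V+PP", "V+V", "V+PP", "SubClause", "V+V", "V+PP"]
english_like = ["V+Sat", "V+Sat", "V+Sat", "V+Sat", "V+PP", "V+Sat"]

print(round(pattern_entropy(french_like), 3))   # more variable -> higher entropy
print(round(pattern_entropy(english_like), 3))  # more consistent -> lower entropy
```

On this toy data the variable speaker's entropy exceeds the consistent speaker's, which is the sense in which the study can compare variability across bilinguals and monolinguals on a common scale.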

5.
Front Psychol ; 13: 892346, 2022.
Article in English | MEDLINE | ID: mdl-35911036

ABSTRACT

The study examined the implications of Talmy's motion event typology and Slobin's thinking-for-speaking hypothesis for the context of Uyghur-Chinese early successive bilingualism. Uyghur and Chinese represent genetically distant languages (Turkic vs. Sino-Tibetan) that nonetheless share important framing properties in the motion domain, i.e., verb-framing. This study thus aimed to establish how this structural overlap would inform bilingual speakers' construal of motion events. Additionally, it sought to offer an "end state" perspective to a previous study on Uyghur-Chinese child bilinguals and to shed light on issues around the longevity of crosslinguistic influence. Thirty adult Uyghur-Chinese early successive bilinguals were invited to describe a set of voluntary motion events (e.g., "a man runs across the road"). Their verbalizations, alongside those from 24 monolingual Uyghur and 12 monolingual Chinese speakers, were systematically analyzed with regard to the kinds of linguistic devices used to encode key components of motion (main verb vs. other devices), the frequency with which the components are expressed together (Manner + Path) or separately (Path or Manner), and how they are syntactically packaged. The findings show that the bilinguals' thinking-for-speaking patterns are largely language-specific, with little crosslinguistic influence. A comparison of our findings with previous studies on Uyghur-Chinese child bilinguals revealed no developmental change either in the analyzed aspects of motion descriptions or in patterns of crosslinguistic influence. As such, the findings lend support to accounts that propose crosslinguistic influence to be a developmental phenomenon.

6.
Front Psychol ; 13: 878277, 2022.
Article in English | MEDLINE | ID: mdl-35795448

ABSTRACT

While understanding and expressing causal relations are universal aspects of human cognition, language users may differ in their capacity to perceive, interpret, and express events. One source of variation in descriptions of caused motion events is agentivity, which refers to the attribution of a result to the agent's action. Depending on the perspective taken, the same event may be described with agentive or non-agentive interpretations. Does language play a role in how people construe and express caused motion events? The present study investigated the use of agentive vs. non-agentive language by speakers of different languages (i.e., monolingual speakers of English and Korean, and Korean learners of English). All three groups described prototypical causal events similarly, using agentive language (active transitive sentences). However, when it came to non-prototypical causal events (where the agent was not shown in the scene), they diverged in their choice of language: English speakers favored agentive language (passive transitive sentences), whereas Korean speakers preferred non-agentive language (intransitive sentences). Korean learners of English patterned with Korean speakers, demonstrating L1 influence on their use of English. These findings highlight the effects of language on motion event construal.

7.
Cognition ; 225: 105127, 2022 08.
Article in English | MEDLINE | ID: mdl-35617850

ABSTRACT

Speakers' visual attention to events is guided by the linguistic conceptualization of information during spoken language production, in language-specific ways. Does production of language-specific co-speech gestures further guide speakers' visual attention during message preparation? Here, we examine the link between visual attention and multimodal event descriptions in Turkish. Turkish is a verb-framed language where speakers' speech and gesture show language specificity, with path of motion mostly expressed within the main verb accompanied by path gestures. Turkish-speaking adults viewed motion events while their eye movements were recorded during non-linguistic (viewing-only) and linguistic (viewing-before-describing) tasks. The relative attention allocated to path over manner was higher in the linguistic task compared to the non-linguistic task. Furthermore, the relative attention allocated to path over manner within the linguistic task was higher when speakers (a) encoded path in the main verb versus outside the verb and (b) used additional path gestures accompanying speech versus not. Results strongly suggest that speakers' visual attention is guided by language-specific event encoding not only in speech but also in gesture. This provides evidence consistent with models that propose integration of speech and gesture at the conceptualization level of language production and suggests that the links between the eye and the mouth may be extended to the eye and the hand.


Subject(s)
Concept Formation , Gestures , Adult , Eye Movements , Humans , Perception , Speech
8.
Cogn Sci ; 46(1): e13077, 2022 01.
Article in English | MEDLINE | ID: mdl-35085409

ABSTRACT

We investigate the extent to which pragmatic versus conceptual factors can affect a speaker's decision to mention or omit different components of an event. In two experiments, we demonstrate the special role of pragmatic factors related to audience design in speakers' decisions to mention conceptually "peripheral" event components, such as sources (i.e., starting points) in source-goal motion events (e.g., a baby crawling from a crib to a toybox). In particular, we found that pragmatic factors related to audience design could not only drive the decision to omit sources from mention, but could also motivate speakers to mention sources more often than needed. By contrast, speakers' decisions to talk about goals did not appear to be fundamentally driven by pragmatic factors in communication. We also manipulated the animacy of the figure in motion and found that participants in our studies treated both animate and inanimate source-goal motion events in the same way, both linguistically and in memory. We discuss the implications of our work for message generation across different communicative contexts and for future work on the topic of audience design.


Subject(s)
Communication , Language , Humans , Motion
9.
Cogn Semiot ; 15(2): 197-222, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36590029

ABSTRACT

Languages use different strategies to encode motion. Some use particles or "satellites" to describe a path of motion (Satellite-framed or S-languages like English), while others typically use the main verb to convey the path information (Verb-framed or V-languages like French). We here ask: might this linguistic variation lead to differences in the way paths are depicted in visual narratives like comics? We analyzed a corpus of 85 comics originally created by speakers of S-languages (comics from the United States, China, Germany) and V-languages (France, Japan, Korea) for both their depictions of path segments (source, route, and goal) and the visual cues signaling these paths and manner information (e.g., motion lines and postures). Panels from S-languages depicted more path segments overall, especially routes, than those from V-languages, but panels from V-languages more often isolated path segments into their own panels. Additionally, comics from S-languages depicted more motion cues than those from V-languages, and this linguistic typology also interacted with panel framing. Despite these differences across typological groups, analysis of individual countries' comics showed more nuanced variation than a simple S-V dichotomy. These findings suggest a possible influence of spoken language structure on depicting motion events in visual narratives and their sequencing.

10.
Front Psychol ; 12: 686485, 2021.
Article in English | MEDLINE | ID: mdl-34413812

ABSTRACT

Previous work on placement expressions (e.g., "she put the cup on the table") has demonstrated cross-linguistic differences in the specificity of placement expressions in the native language (L1), with some languages preferring more general, widely applicable expressions and others preferring more specific expressions based on more fine-grained distinctions. Research on second language (L2) acquisition of an additional spoken language has shown that learning the appropriate L2 placement distinctions poses a challenge for adult learners, whose L2 semantic representations can be non-target-like and have fuzzy boundaries. Unknown is whether similar effects apply to learners acquiring an L2 in a different sensory-motor modality, e.g., hearing learners of a sign language. Placement verbs in signed languages tend to be highly iconic and to exhibit transparent semantic boundaries. This may facilitate acquisition of signed placement verbs. In addition, little is known about how exposure to different semantic boundaries in placement events in a typologically different language affects lexical semantic meaning in the L1. In this study, we examined placement event descriptions (in American Sign Language (ASL) and English) in hearing L2 learners of ASL who were native speakers of English. L2 signers' ASL placement descriptions looked similar to those of two Deaf, native ASL signer controls, suggesting that the iconicity and transparency of placement distinctions in the visual modality may facilitate L2 acquisition. Nevertheless, L2 signers used a wider range of handshapes in ASL and used them less appropriately, indicating that fuzzy semantic boundaries occur in cross-modal L2 acquisition as well.
In addition, while the L2 signers' English verbal expressions were not different from those of a non-signing control group, placement distinctions expressed in co-speech gesture were marginally more ASL-like for L2 signers, suggesting that exposure to different semantic boundaries can cause changes to how placement is conceptualized in the L1 as well.

11.
Front Psychol ; 12: 625153, 2021.
Article in English | MEDLINE | ID: mdl-33859591

ABSTRACT

Syntactic templates serve as schemas, allowing speakers to describe complex events in a systematic fashion. Motion events have long served as a prime example of how different languages favor different syntactic frames, in turn biasing their speakers toward different event conceptualizations. However, there is also variability in how motion events are syntactically framed within languages. Here, we measure the consistency in event encoding in two languages, Spanish and Swedish. We test a dominant account in the literature, namely that variability within a language can be explained by specific properties of the events. This event-properties account predicts that descriptions of one and the same event should be consistent within a language, even in languages where there is overall variability in the use of syntactic frames. Spanish and Swedish speakers (N = 84) described 32 caused motion events. While the most frequent syntactic framing in each language was as expected based on typology (Spanish: verb-framed, Swedish: satellite-framed, cf. Talmy, 2000), Swedish descriptions were substantially more consistent than Spanish descriptions. Swedish speakers almost invariably encoded all events with a single syntactic frame and systematically conveyed manner of motion. Spanish descriptions, in contrast, varied much more regarding syntactic framing and expression of manner. Crucially, variability in Spanish descriptions was not mainly a function of differences between events, as predicted by the event-properties account. Rather, Spanish variability in syntactic framing was driven by speaker biases. A similar picture arose for whether Spanish descriptions expressed manner information or not: Even after accounting for the effect of syntactic choice, a large portion of the variance in Spanish manner encoding remained attributable to differences among speakers. 
The results show that consistency in motion event encoding starkly differs across languages: Some languages (like Swedish) bias their speakers toward a particular linguistic event schema much more than others (like Spanish). Implications of these findings are discussed with respect to the typology of event framing, theories on the relationship between language and thought, and speech planning. In addition, the tools employed here to quantify variability can be applied to other domains of language.

12.
Neuropsychologia ; 149: 107638, 2020 12.
Article in English | MEDLINE | ID: mdl-33007360

ABSTRACT

The expression of motion shows strong crosslinguistic variability; however, less is known about speakers' expectancies for lexicalizations of motion at the neural level. We examined event-related brain potentials (ERPs) in native English or Spanish speakers while they read grammatical sentences describing animations involving manner and path components of motion that did or did not violate language-specific patterns of expression. ERPs demonstrated different expectancies between speakers: Spanish speakers showed higher expectancies for motion verbs to encode path, and English speakers showed higher expectancies for motion verbs to encode manner followed by a secondary path expression. Interestingly, grammatical but infrequent motion expressions (manner verbs in Spanish, path verbs and secondary manner expressions in English) elicited semantic P600 effects (with or without post-N400 positivities) rather than the N400 effects typically associated with semantic processing. Overall, our findings provide the first empirical evidence for the effect of crosslinguistic variation in processing motion event descriptions at the neural level.


Subject(s)
Electroencephalography , Semantics , Evoked Potentials , Female , Humans , Language , Male , Reading
13.
Cogn Neuropsychol ; 37(5-6): 254-270, 2020.
Article in English | MEDLINE | ID: mdl-31856652

ABSTRACT

Language is assumed to affect memory by offering an additional medium of encoding visual stimuli. Given that natural languages differ, cross-linguistic differences might impact memory processes. We investigate the role of motion verbs on memory for motion events in speakers of English, which preferentially encodes manner in motion verbs (e.g., driving), and Greek, which tends to encode path of motion in verbs (e.g., entering). Participants viewed a series of motion events and we later assessed their memory of the path and manner of the original events. There were no effects of language-specific biases on memory when participants watched events in silence; both English and Greek speakers remembered paths better than manners of motion. Moreover, even when motion verbs were available (either produced by or heard by the participants), they affected memory similarly regardless of the participants' language: path verbs attenuated memory for manners of motion, but the reverse did not occur. We conclude that overt language affects motion memory, but these effects interact with underlying, shared biases in how viewers represent motion events.


Subject(s)
Language , Memory/physiology , Humans
14.
Front Psychol ; 9: 1698, 2018.
Article in English | MEDLINE | ID: mdl-30464750

ABSTRACT

This paper focuses on the patterns in the encoding of spatial motion events that play a major role in the acquisition of these types of expressions. The goal is to single out the semantic contribution of the linguistic items which surface in Chinese locative constructions. In this way, we intend to provide learners with an account of the spatial representation encoded in the Chinese language. In fact, Chinese grammar is often perceived as idiosyncratic, thus generating a frustration that turns into learned helplessness (Maier and Seligman, 1976). We will analyze Talmy's (2000a,b) framework in light of investigations such as Landau and Jackendoff (1993), Svenonius (2004, 2006, 2007), and Terzi (2010). It will be shown that in Chinese locative structures, the Axial Part information is signaled by localizers and can be specified only when the Ground is considered as an object with "axially determined parts" (Landau and Jackendoff, 1993). Thus, we will elaborate on present accounts of the localizer's function (Peyraube, 2003; Lamarre, 2007; Lin, 2013) by showing that the localizer highlights an axially determined part within a reference object, consistent with Terzi's (2010) definition of Place, and with Wu's (2015) decomposition of Place into Ground and Axial Part. Moreover, it will be shown that the preposition zài 'at' encodes a Locative type of Motion event (Talmy, 2000b); thus, it is not semantically vacuous. Other categories will be presented, such as the semantic class of locational verbs (Huang, 1987). We will indicate the contexts wherein such notions can trigger the conceptual restructuring which enables adult learners to switch from L1 "thinking for speaking" to L2 "thinking for speaking" (Slobin, 1987).
The paper is structured as follows: Section "Introduction" provides an introduction to the theme; Section "Theoretical Framework" includes a survey of the semantic and syntactic decompositions of spatial motion expressions; Section "Discussion" offers an account of the instantiation; the findings and the relevant pedagogical implications are presented in Section "Findings."

15.
Cognition ; 180: 225-237, 2018 11.
Article in English | MEDLINE | ID: mdl-30092460

ABSTRACT

Events, as fundamental units in human perception and cognition, are delimited by qualitative changes of objects over time. In the present study, we investigate the role of language in shaping event units. Given fundamental cross-linguistic differences in the concepts encoded in the verb, as in French compared to German, event unit formation was tested for motion events in a verbal task (online event description, experiment 1), as well as a non-verbal task (Newtson-test, experiment 2). In German, motion and direction are described by a single assertion, i.e. one verb encoding manner (to walk ...), in conjunction with adpositional phrases for path and direction (... over x across y toward z). In contrast, when information on path and direction is encoded in the verb, as typically in French, each path segment requires a separate assertion (head for x, cross y, approach z). Both experiments were based on short naturalistic video clips showing a figure moving through space along a path either without changing orientation/direction (control), or with changes in orientation/direction (critical). Analysis of the verbal task concerned the probability of producing more than one assertion to refer to the motion events presented in the clips; in the non-verbal event segmentation task, the analysis concerned the probability of marking an event boundary, as indicated by pressing a button. Results show that in French, compared to the German participants, the probability of producing more than one assertion was significantly higher in the critical condition (experiment 1), and the probability of identifying an event boundary was also significantly higher (experiment 2), but only in the critical condition. The findings indicate language-driven effects in event unit formation. The results are discussed in the context of theories of event cognition, thereby focusing on the role of language in the formation of cognitive structures.


Subject(s)
Multilingualism , Narration , Photic Stimulation/methods , Psycholinguistics/methods , Psychomotor Performance/physiology , Female , Humans , Language , Linguistics/methods , Male
16.
J Psycholinguist Res ; 47(3): 741-754, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29305747

ABSTRACT

Children can understand iconic co-speech gestures that characterize entities by age 3 (Stanfield et al. in J Child Lang 40(2):1-10, 2014; e.g., "I'm drinking" accompanied by tilting the hand in a C-shape to the mouth as if holding a glass). In this study, we ask whether children understand co-speech gestures that characterize events as early as they do so for entities, and if so, whether their understanding is influenced by the patterns of gesture production in their native language. We examined this question by studying native English speaking 3- to 4-year-old children and adults as they completed an iconic co-speech gesture comprehension task involving motion events across two studies. Our results showed that children understood iconic co-speech gestures about events at age 4, marking comprehension of gestures about events one year later than gestures about entities. Our findings also showed that native gesture production patterns influenced children's comprehension of gestures characterizing such events, with better comprehension for gestures that follow language-specific patterns compared to the ones that do not follow such patterns, particularly for manner of motion. Overall, these results highlight early emerging abilities in gesture comprehension about motion events.


Subject(s)
Child Development/physiology , Comprehension , Gestures , Language Development , Child, Preschool , Female , Humans , Male , Speech , Speech Perception , Young Adult
17.
Cogn Sci ; 42(3): 1001-1014, 2018 04.
Article in English | MEDLINE | ID: mdl-28481418

ABSTRACT

Sighted speakers of different languages vary systematically in how they package and order components of a motion event in speech. These differences influence how semantic elements are organized in gesture, but only when those gestures are produced with speech (co-speech gesture), not without speech (silent gesture). We ask whether the cross-linguistic similarity in silent gesture is driven by the visuospatial structure of the event. We compared 40 congenitally blind adult native speakers of English or Turkish (20/language) to 80 sighted adult speakers (40/language; half with, half without blindfolds) as they described three-dimensional motion scenes. We found an effect of language on co-speech gesture, not on silent gesture-blind speakers of both languages organized their silent gestures as sighted speakers do. Humans may have a natural semantic organization that they impose on events when conveying them in gesture without language-an organization that relies on neither visuospatial cues nor language structure.


Subject(s)
Blindness/psychology , Gestures , Language , Semantics , Speech , Adult , Female , Humans , Male , Turkey
18.
Cogn Sci ; 41(3): 814-826, 2017 04.
Article in English | MEDLINE | ID: mdl-27245931

ABSTRACT

Previous studies have shown a robust bias to express the goal path over the source path when describing events ("the bird flew into the pitcher," rather than "… out of the bucket into the pitcher"). Motivated by linguistic theory, this study manipulated the causal structure of events (specifically, making the source cause the motion of the figure) and measured the extent to which adults and 3.5- to 4-year-old English-speaking children included the goal and source in their descriptions. We found that both children's and adults' encoding of the source increased for events in which the source caused the motion of the figure compared to nearly identical events in which the source played no such causal role. However, a goal bias persisted overall for both causal and noncausal motion events. These findings suggest that although the goal bias in language is highly robust, properties of the source (such as causal agency) influence its likelihood of being encoded in language, thus shedding light on how properties of an event can influence the mapping of event components into language.


Subject(s)
Goals , Language Development , Motion , Child , Child, Preschool , Female , Humans , Language , Male , New Jersey , Psycholinguistics , Semantics
19.
Cognition ; 148: 10-8, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26707427

ABSTRACT

Languages differ in how they organize events, particularly in the types of semantic elements they express and the arrangement of those elements within a sentence. Here we ask whether these cross-linguistic differences have an impact on how events are represented nonverbally; more specifically, on how events are represented in gestures produced without speech (silent gesture), compared to gestures produced with speech (co-speech gesture). We observed speech and gesture in 40 adult native speakers of English and Turkish (N = 20 per language) asked to describe physical motion events (e.g., running down a path), a domain known to elicit distinct patterns of speech and co-speech gesture in English- and Turkish-speakers. Replicating previous work (Kita & Özyürek, 2003), we found an effect of language on gesture when it was produced with speech: co-speech gestures produced by English-speakers differed from co-speech gestures produced by Turkish-speakers. However, we found no effect of language on gesture when it was produced on its own: silent gestures produced by English-speakers were identical to those produced by Turkish-speakers in how motion elements were packaged and ordered. The findings provide evidence for a natural semantic organization that humans impose on motion events when they convey those events without language.


Subject(s)
Gestures , Language , Speech , Adult , Aged , Female , Humans , Male , Middle Aged , Semantics , Young Adult
20.
Brain Lang ; 150: 1-13, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26283001

ABSTRACT

People often use spontaneous gestures when communicating spatial information. We investigated focal brain-injured individuals to test the hypotheses that (1) naming of the motion event components manner and path (represented by verbs and prepositions in English) can be selectively impaired, and (2) gestures compensate for impaired naming. Patients with left or right hemisphere damage (LHD or RHD) and elderly control participants were asked to describe motion events (e.g., running across) depicted in brief videos. Damage to the left posterior middle frontal gyrus, left inferior frontal gyrus, and left anterior superior temporal gyrus (aSTG) produced impairments in naming paths of motion; lesions to the left caudate and adjacent white matter produced impairments in naming manners of motion. While the frequency of spontaneous gestures was low, lesions to the left aSTG significantly correlated with greater production of path gestures. These findings suggest that producing prepositions and verbs can be separately impaired, and that gesture production compensates for naming impairments when damage involves the left aSTG.


Subject(s)
Brain Injuries/physiopathology , Brain Injuries/psychology , Gestures , Language , Motion Perception/physiology , Adult , Aged , Aged, 80 and over , Female , Frontal Lobe/injuries , Frontal Lobe/physiopathology , Functional Laterality , Humans , Male , Middle Aged , Running , Temporal Lobe/injuries , Temporal Lobe/physiopathology