Results 1 - 19 of 19
1.
J Clin Exp Neuropsychol ; : 1-10, 2024 May 16.
Article in English | MEDLINE | ID: mdl-38753819

ABSTRACT

INTRODUCTION: Arranging Pictures is a new episodic memory test based on the NIH Toolbox (NIHTB) Picture Sequence Memory measure and optimized for self-administration on a personal smartphone within the Mobile Toolbox (MTB). We describe evidence from three distinct validation studies. METHOD: In Study 1, 92 participants self-administered Arranging Pictures on study-provided smartphones in the lab and were administered external measures of similar and dissimilar constructs by trained examiners to assess validity under controlled circumstances. In Study 2, 1,021 participants completed the external measures in the lab and self-administered Arranging Pictures remotely on their personal smartphones to assess validity in real-world contexts. In Study 3, 141 participants self-administered Arranging Pictures remotely twice with a two-week delay on personal iOS smartphones to assess test-retest reliability and practice effects. RESULTS: Internal consistency was good across samples (ρ_xx = .80 to .85, p < .001). Test-retest reliability was marginal (ICC = .49, p < .001), and there were significant practice effects after a two-week delay (ΔM = 3.21, 95% CI [2.56, 3.88]). As expected, correlations with convergent measures were significant and moderate to large in magnitude (ρ = .44 to .76, p < .001), while correlations with discriminant measures were small (ρ = .23 to .27, p < .05) or nonsignificant. Scores demonstrated significant negative correlations with age (ρ = -.32 to -.21, p < .001). Mean performance was slightly higher in the iOS group than in the Android group (M_iOS = 18.80, N_iOS = 635; M_Android = 17.11, N_Android = 386; t(757.73) = 4.17, p < .001), but device type did not significantly influence the psychometric properties of the measure. Indicators of potential cheating were mixed; average scores were significantly higher in the remote samples (F(2, 850) = 11.415, p < .001), but there were not significantly more perfect scores.
CONCLUSION: The MTB Arranging Pictures measure demonstrated evidence of reliability and validity when self-administered on a personal device. Future research should examine the potential for cheating in remote settings and the properties of the measure in clinical samples.

2.
Article in English | MEDLINE | ID: mdl-38414411

ABSTRACT

OBJECTIVE: We describe the development of a new computer adaptive vocabulary test, Mobile Toolbox (MTB) Word Meaning, and validity evidence from 3 studies. METHOD: Word Meaning was designed to be a multiple-choice synonym test optimized for self-administration on a personal smartphone. The items were first calibrated online in a sample of 7,525 participants to create the computer-adaptive test algorithm for the Word Meaning measure within the MTB app. In Study 1, 92 participants self-administered Word Meaning on study-provided smartphones in the lab and were administered external measures by trained examiners. In Study 2, 1,021 participants completed the external measures in the lab and self-administered Word Meaning remotely on their personal smartphones. In Study 3, 141 participants self-administered Word Meaning remotely twice with a 2-week delay on personal iPhones. RESULTS: The final bank included 1,363 items. Internal consistency was adequate to good across samples (ρ_xx = 0.78 to 0.81, p < .001). Test-retest reliability was good (ICC = 0.65, p < .001), and the mean theta score was not significantly different upon the second administration. Correlations were moderate to large with measures of similar constructs (ρ = 0.67 to 0.75, p < .001) and non-significant with measures of dissimilar constructs. Scores demonstrated small to moderate correlations with age (ρ = 0.35 to 0.45, p < .001) and education (ρ = 0.26, p < .001). CONCLUSION: The MTB Word Meaning measure demonstrated evidence of reliability and validity in three samples. Further validation studies in clinical samples are necessary.

3.
Top Cogn Sci ; 2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38284283

ABSTRACT

Decades of research have established that learners benefit when instruction includes hand gestures. This benefit is seen when learners watch an instructor gesture, as well as when they are taught or encouraged to gesture themselves. However, there is substantial individual variability with respect to this phenomenon: not all individuals benefit equally from gesture instruction. In the current paper, we explore the sources of this variability. First, we review the existing research on individual differences that do or do not predict learning from gesture instruction, including differences that are either context-dependent (linked to the particular task at hand) or context-independent (linked to the learner across multiple tasks). Next, we focus on one understudied measure of individual difference: the learner's own spontaneous gesture rate. We present data showing rates of "non-gesturers" across a number of studies, and we provide theoretical motivation for why this is a fruitful area for future research. We end by suggesting ways in which research on individual differences will help gesture researchers to further refine existing theories and develop specific predictions about targeted gesture intervention for all kinds of learners.

4.
J Exp Child Psychol ; 236: 105754, 2023 12.
Article in English | MEDLINE | ID: mdl-37544069

ABSTRACT

The language infants hear guides their visual attention; infants look more to objects when they are labeled. However, it is unclear whether labels also change the way infants attend to and encode those objects; that is, whether hearing an object label changes infants' online visual processing of that object. Here, we examined this question in the context of novel word learning, asking whether nuanced measures of visual attention, specifically fixation durations, change when 2-year-olds hear a label for a novel object (e.g., "Look at the dax") compared with when they hear a non-labeling phrase (e.g., "Look at that"). Results confirmed that children visually process objects differently when they are labeled, using longer fixations to examine labeled objects versus unlabeled objects. Children also showed robust retention of these labels on a subsequent test trial, suggesting that these longer fixations accompanied successful word learning. Moreover, when children were presented with the same objects again in a silent re-exposure phase, children's fixations were again longer when looking at the previously labeled objects. Finally, fixation durations at first exposure and silent re-exposure were correlated, indicating a persistent effect of language on visual processing. These effects of hearing labels on visual attention point to the critical interactions involved in cross-modal learning and emphasize the benefits of looking beyond aggregate measures of attention to identify cognitive learning mechanisms during infancy.


Subjects
Language, Learning, Infant, Humans, Preschool Child, Verbal Learning, Visual Perception, Language Development
5.
Front Psychol ; 13: 896049, 2022.
Article in English | MEDLINE | ID: mdl-35846705

ABSTRACT

Infants are endowed with a proclivity to acquire language, whether it is presented in the auditory or visual modality. Moreover, in the first months of life, listening to language supports fundamental cognitive capacities, including infants' facility to form object categories (e.g., dogs and bottles). Recently, we have found that for English-acquiring infants as young as 4 months of age, this precocious interface between language and cognition is sufficiently broad to include not only their native spoken language (English), but also sign language (American Sign Language, ASL). In the current study, we take this work one step further, asking how "sign-naïve" infants (hearing infants with no prior exposure to sign language) deploy their attentional and social strategies in the context of episodes involving either spoken or sign language. We adopted a now-standard categorization task, presenting 4- to 6-month-old infants with a series of exemplars from a single category (e.g., dinosaurs). Each exemplar was introduced by a woman who appeared on the screen together with the object. What varied across conditions was whether this woman introduced the exemplar by speaking (English) or signing (ASL). We coded infants' visual attentional strategies and their spontaneous vocalizations during this task. Infants' division of attention and visual switches between the woman and exemplar varied as a function of language modality. In contrast, infants' spontaneous vocalizations revealed similar patterns across languages. These results, which advance our understanding of how infants allocate attentional resources and engage with communicative partners across distinct modalities, have implications for specifying our theories of language acquisition.

6.
Dev Psychol ; 58(1): 32-42, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34881968

ABSTRACT

Parent-child communication is a rich, multimodal process. Substantial research has documented the communicative strategies in certain (predominantly White) United States families, yet we know little about these communicative strategies in Native American families. The current study addresses that gap by documenting the verbal and nonverbal behaviors used by parents and their 4-year-old children (N = 39, 25 boys) across two communities: Menominee families (low to middle income) living on tribal lands in rural Wisconsin, and non-Native, primarily White families (middle income) living in an urban area. Dyads participated in a free-play forest-diorama task designed to elicit talk and play about the natural world. Children from both communities incorporated actions and gestures freely in their talk, emphasizing the importance of considering nonverbal behaviors when evaluating what children know. In sharp contrast to the stereotype that Native American children talk very little, Menominee children talked more than their non-Native counterparts, underlining the importance of taking into account cultural context in child assessments. For children and parents across both communities, gestures were more likely than actions to be related to the content of speech and were more likely than actions to be produced simultaneously with speech. This tight coupling between speech and gesture replicates and extends prior research with predominantly White (and adult) samples. These findings not only broaden our theories of communicative interaction and development, but also provide new evidence about the role of nonverbal behaviors in informal learning contexts. (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subjects
Gestures, Nonverbal Communication, Adult, Preschool Child, Female, Humans, Language, Male, Parent-Child Relations, Parents
7.
Cognition ; 215: 104845, 2021 10.
Article in English | MEDLINE | ID: mdl-34273677

ABSTRACT

The link between language and cognition is unique to our species and emerges early in infancy. Here, we provide the first evidence that this precocious language-cognition link is not limited to spoken language, but is instead sufficiently broad to include sign language, a language presented in the visual modality. Four- to six-month-old hearing infants, never before exposed to sign language, were familiarized to a series of category exemplars, each presented by a woman who either signed in American Sign Language (ASL) while pointing and gazing toward the objects, or pointed and gazed without language (control). At test, infants viewed two images: one, a new member of the now-familiar category; and the other, a member of an entirely new category. Four-month-old infants who observed ASL distinguished between the two test objects, indicating that they had successfully formed the object category; they were as successful as age-mates who listened to their native (spoken) language. Moreover, it was specifically the linguistic elements of sign language that drove this facilitative effect: infants in the control condition, who observed the woman only pointing and gazing, failed to form object categories. Finally, the cognitive advantages of observing ASL quickly narrow in hearing infants: by 5 to 6 months, watching ASL no longer supports categorization, although listening to their native spoken language continues to do so. Together, these findings illuminate the breadth of infants' early link between language and cognition and offer insight into how it unfolds.


Subjects
Language, Sign Language, Auditory Perception, Female, Hearing, Humans, Infant, Language Development
8.
J Exp Child Psychol ; 205: 105069, 2021 05.
Article in English | MEDLINE | ID: mdl-33445006

ABSTRACT

To learn from others, children rely on cues (e.g., familiarity, confidence) to infer who around them will provide useful information. We extended this research to ask whether children will use an informant's inclination to gesture as a marker of whether or not the informant is a good person to learn from. Children (N = 459, ages 4-12 years) watched short videos in which actresses made statements accompanied by meaningful iconic gestures, beat gestures (which act as prosodic markers with speech), or no gestures. After each trial, children were asked "Who do you think would be a good teacher?" (good teacher [experimental] condition) or "Who do you think would be a good friend?" (good friend [control] condition). Results show that children do believe that someone who produces iconic gesture would make a good teacher compared with someone who does not, but this is only later in childhood and only if children have the propensity to see gesture as meaningful. The same effects were not found in the good friend condition, indicating that children's responses are not just about liking an adult who gestures more. These findings have implications for how children attend to and learn from instructional gesture.


Subjects
Comprehension, Cues (Psychology), Gestures, Individuality, Learning, Truth Disclosure, Adult, Child, Preschool Child, Female, Humans, Male
9.
Philos Trans R Soc Lond B Biol Sci ; 375(1789): 20180408, 2020 01 06.
Article in English | MEDLINE | ID: mdl-31735145

ABSTRACT

Human language has no parallel elsewhere in the animal kingdom. It is unique not only for its structural complexity but also for its inextricable interface with core cognitive capacities such as object representation, object categorization and abstract rule learning. Here, we (i) review recent evidence documenting how (and how early) language interacts with these core cognitive capacities in the mind of the human infant, and (ii) consider whether this link exists in non-human great apes, our closest genealogical cousins. Research with human infants demonstrates that well before they begin to speak, infants have already forged a link between language and core cognitive capacities. Evident by just three months of age, this language-cognition link unfolds in a rich developmental cascade, with each advance providing the foundation for subsequent, more precise and more powerful links. This link supports our species' capacity to represent and convey abstract concepts and to communicate beyond the immediate here and now. By contrast, although the communication systems of great apes are sophisticated in their own right, there is no conclusive evidence that apes establish reference, convey information declaratively or pass down communicative devices via cultural transmission. Thus, the evidence currently available reinforces the uniqueness of human language and the power of its interface to cognition. This article is part of the theme issue 'What can animal communication teach us about human language?'


Subjects
Cognition/physiology, Hominidae/physiology, Language, Animal Communication, Animals, Child Development, Communication, Humans, Infant, Language Development, Mental Processes, Speech
10.
Atten Percept Psychophys ; 81(7): 2343-2353, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31111452

ABSTRACT

Producing gesture can be a powerful tool for facilitating learning. This effect has been replicated across a variety of academic domains, including algebra, chemistry, geometry, and word learning. Yet the mechanisms underlying the effect are poorly understood. Here we address this gap using functional magnetic resonance imaging (fMRI). We examine the neural correlates underlying how children solve mathematical equivalence problems learned with the help of either a speech + gesture strategy or a speech-alone strategy. Children who learned through a speech + gesture strategy were more likely to recruit motor regions when subsequently solving problems during a scan than children who learned through speech alone. This suggests that gesture promotes learning, at least in part, because it is a type of action. In an exploratory analysis, we also found that children who learned through speech + gesture showed subthreshold activation in regions outside the typical action-learning network, corroborating behavioral findings suggesting that the mechanisms supporting learning through gesture and action are not identical. This study is one of the first to explore the neural mechanisms of learning through gesture.


Subjects
Gestures, Learning/physiology, Magnetic Resonance Imaging/methods, Mathematical Concepts, Photic Stimulation/methods, Problem Solving/physiology, Brain/diagnostic imaging, Brain/physiology, Child, Comprehension/physiology, Female, Humans, Male, Speech/physiology
11.
Dev Psychol ; 54(10): 1809-1821, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30234335

ABSTRACT

Interpreting iconic gestures can be challenging for children. Here, we explore the features and functions of iconic gestures that make them more challenging for young children to interpret than instrumental actions. In Study 1, we show that 2.5-year-olds are able to glean size information from handshape in a simple gesture, although their performance is significantly worse than 4-year-olds'. Studies 2 to 4 explore the boundary conditions of 2.5-year-olds' gesture understanding. In Study 2, 2.5-year-old children have an easier time interpreting size information in hands that reach than in hands that gesture. In Study 3, we tease apart the perceptual features and functional objectives of reaches and gestures. We created a context in which an action has the perceptual features of a reach (extending the hand toward an object) but serves the function of a gesture (the object is behind a barrier and not obtainable; the hand thus functions to represent, rather than reach for, the object). In this context, children struggle to interpret size information in the hand, suggesting that gesture's representational function (rather than its perceptual features) is what makes it hard for young children to interpret. A distance control (Study 4) in which a person holds a box in gesture space (close to the body) demonstrates that children's difficulty interpreting static gesture cannot be attributed to the physical distance between a gesture and its referent. Together, these studies provide evidence that children's struggle to interpret iconic gesture may stem from its status as representational action. (PsycINFO Database Record)


Subjects
Gestures, Motion Perception, Motor Activity, Preschool Child, Comprehension, Female, Hand, Humans, Male, Child Psychology, Random Allocation, Social Perception
12.
Dev Sci ; 21(6): e12664, 2018 11.
Article in English | MEDLINE | ID: mdl-29663574

ABSTRACT

Teaching a new concept through gestures (hand movements that accompany speech) facilitates learning above and beyond instruction through speech alone (e.g., Singer & Goldin-Meadow). However, the mechanisms underlying this phenomenon are still under investigation. Here, we use eye tracking to explore one often-proposed mechanism: gesture's ability to direct visual attention. Behaviorally, we replicate previous findings: Children perform significantly better on a posttest after learning through Speech+Gesture instruction than through Speech Alone instruction. Using eye tracking measures, we show that children who watch a math lesson with gesture do allocate their visual attention differently from children who watch a math lesson without gesture: they look more to the problem being explained, less to the instructor, and are more likely to synchronize their visual attention with information presented in the instructor's speech (i.e., follow along with speech) than children who watch the no-gesture lesson. The striking finding is that, even though these looking patterns positively predict learning outcomes, the patterns do not mediate the effects of training condition (Speech Alone vs. Speech+Gesture) on posttest success. We find instead a complex relation between gesture and visual attention in which gesture moderates the impact of visual looking patterns on learning: following along with speech predicts learning for children in the Speech+Gesture condition, but not for children in the Speech Alone condition. Gesture's beneficial effects on learning thus come not merely from its ability to guide visual attention, but also from its ability to synchronize with speech and affect what learners glean from that speech.


Subjects
Attention/physiology, Gestures, Learning, Child, Humans, Mathematics, Speech, Ocular Vision
13.
Child Dev ; 89(3): e245-e260, 2018 05.
Article in English | MEDLINE | ID: mdl-28504410

ABSTRACT

Gestures, hand movements that accompany speech, affect children's learning, memory, and thinking (e.g., Goldin-Meadow, 2003). However, it remains unknown how children distinguish gestures from other kinds of actions. In this study, 4- to 9-year-olds (n = 339) and adults (n = 50) described one of three scenes: (a) an actor moving objects, (b) an actor moving her hands in the presence of objects (but not touching them), or (c) an actor moving her hands in the absence of objects. Participants across all ages were equally able to identify actions on objects as goal directed, but the ability to identify empty-handed movements as representational actions (i.e., as gestures) increased with age and was influenced by the presence of objects, especially in older children.


Subjects
Child Development/physiology, Comprehension/physiology, Gestures, Movement/physiology, Adult, Child, Preschool Child, Female, Humans, Male, Middle Aged
14.
Learn Instr ; 50: 65-74, 2017 Aug.
Article in English | MEDLINE | ID: mdl-29051690

ABSTRACT

When teachers gesture during instruction, children retain and generalize what they are taught (Goldin-Meadow, 2014). But why does gesture have such a powerful effect on learning? Previous research shows that children learn most from a math lesson when teachers present one problem-solving strategy in speech while simultaneously presenting a different, but complementary, strategy in gesture (Singer & Goldin-Meadow, 2005). One possibility is that gesture is powerful in this context because it presents information simultaneously with speech. Alternatively, gesture may be effective simply because it involves the body, in which case the timing of information presented in speech and gesture may be less important for learning. Here we find evidence for the importance of simultaneity: 3rd grade children retain and generalize what they learn from a math lesson better when given instruction containing simultaneous speech and gesture than when given instruction containing sequential speech and gesture. Interpreting these results in the context of theories of multimodal learning, we find that gesture capitalizes on its synchrony with speech to promote learning that lasts and can be generalized.

15.
Psychon Bull Rev ; 24(3): 652-665, 2017 06.
Article in English | MEDLINE | ID: mdl-27604493

ABSTRACT

A great deal of attention has recently been paid to gesture and its effects on thinking and learning. It is well established that the hand movements that accompany speech are an integral part of communication, ubiquitous across cultures, and a unique feature of human behavior. In an attempt to understand this intriguing phenomenon, researchers have focused on pinpointing the mechanisms that underlie gesture production. One proposal, that gesture arises from simulated action (Hostetter & Alibali, Psychonomic Bulletin & Review, 15, 495-514, 2008), has opened up discussions about action, gesture, and the relation between the two. However, there is another side to understanding a phenomenon, and that is to understand its function. A phenomenon's function is its purpose rather than its precipitating cause: the why rather than the how. This paper sets forth a theoretical framework for exploring why gesture serves the functions that it does, and reviews where the current literature fits, and fails to fit, this proposal. Our framework proposes that whether or not gesture is simulated action in terms of its mechanism, it is clearly not reducible to action in terms of its function. Most notably, because gestures are abstracted representations and are not actions tied to particular events and objects, they can play a powerful role in thinking and learning beyond the particular, specifically in supporting generalization and transfer of knowledge.


Subjects
Gestures, Learning, Thinking, Comprehension, Humans, Speech
16.
Cognition ; 146: 339-348, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26513354

ABSTRACT

Theories of how adults interpret the actions of others have focused on the goals and intentions of actors engaged in object-directed actions. Recent research has challenged this assumption, and shown that movements are often interpreted as being for their own sake (Schachner & Carey, 2013). Here we postulate a third interpretation of movement: movement that represents action, but does not literally act on objects in the world. These movements are gestures. In this paper, we describe a framework for predicting when movements are likely to be seen as representations. In Study 1, adults described one of three scenes: (1) an actor moving objects, (2) an actor moving her hands in the presence of objects (but not touching them), or (3) an actor moving her hands in the absence of objects. Participants systematically described the movements as depicting an object-directed action when the actor moved objects, and favored describing the movements as depicting movement for its own sake when the actor produced the same movements in the absence of objects. However, participants favored describing the movements as representations when the actor produced the movements near, but not on, the objects. Study 2 explored two additional features, the form of an actor's hands and the presence of speech-like sounds, to test the effect of context on observers' classification of movement as representational. When movements are seen as representations, they have the power to influence communication, learning, and cognition in ways that movement for its own sake does not. By incorporating representational gesture into our framework for movement analysis, we take an important step towards developing a more cohesive understanding of action interpretation.


Subjects
Gestures, Movement/physiology, Social Perception, Visual Perception/physiology, Adult, Female, Humans, Male, Motor Activity/physiology
17.
Cognition ; 142: 138-47, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26036925

ABSTRACT

Iconic gesture is a rich source of information for conveying ideas to learners. However, in order to learn from iconic gesture, a learner must be able to interpret its iconic form, a nontrivial task for young children. Our study explores how young children interpret iconic gesture and whether they can use it to infer a previously unknown action. In Study 1, 2- and 3-year-old children were shown iconic gestures that illustrated how to operate a novel toy to achieve a target action. Children in both age groups successfully figured out the target action more often after seeing an iconic gesture demonstration than after seeing no demonstration. However, the 2-year-olds (but not the 3-year-olds) figured out fewer target actions after seeing an iconic gesture demonstration than after seeing a demonstration of an incomplete action and, in this sense, were not yet experts at interpreting gesture. Nevertheless, both age groups seemed to understand that gesture could convey information that can be used to guide their own actions, and that gesture is thus not movement for its own sake. That is, the children in both groups produced the action displayed in gesture on the object itself, rather than producing the action in the air (in other words, they rarely imitated the experimenter's gesture as it was performed). Study 2 compared 2-year-olds' performance following iconic vs. point gesture demonstrations. Iconic gestures led children to discover more target actions than point gestures, suggesting that iconic gesture does more than just focus a learner's attention: it conveys substantive information about how to solve the problem, information that is accessible to children as young as 2. The ability to learn from iconic gesture is thus in place by toddlerhood and, although still fragile, allows children to process gesture, not as meaningless movement, but as an intentional communicative representation.


Subjects
Gestures, Learning, Age Factors, Preschool Child, Female, Humans, Imitative Behavior, Male, Child Psychology
18.
J Cogn Dev ; 15(4): 539-550, 2014.
Article in English | MEDLINE | ID: mdl-25364304

ABSTRACT

By the end of the first year, infants expect spoken labels to be extended across individuals and thus seem to understand words as shared conventional forms. However, it is unknown whether infants' willingness to extend labels across individuals is constrained to familiar forms, such as spoken words, or whether infants can identify a broader range of symbols as potential conventions. The present study tested whether 12-month-old infants will extend a novel sign label to a new person. Results indicate that 12-month-olds expect signed object-label relations to extend across agents, but restrict object preferences to individuals. The results suggest that infants' expectations about conventional behaviors and linguistic forms are likely broad at 12 months. The implications of these findings for infants' early conceptions of conventional behaviors, as well as our understanding of the initial state of the learner, are considered.

19.
Psychol Sci ; 25(4): 903-10, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24503873

ABSTRACT

Previous research has shown that children benefit from gesturing during math instruction. We asked whether gesturing promotes learning because it is itself a physical action, or because it uses physical action to represent abstract ideas. To address this question, we taught third-grade children a strategy for solving mathematical-equivalence problems that was instantiated in one of three ways: (a) in a physical action children performed on objects, (b) in a concrete gesture miming that action, or (c) in an abstract gesture. All three types of hand movements helped children learn how to solve the problems on which they were trained. However, only gesture led to success on problems that required generalizing the knowledge gained. The results provide the first evidence that gesture promotes transfer of knowledge better than direct action on objects and suggest that the beneficial effects gesture has on learning may reside in the features that differentiate it from action.


Subjects
Child Development, Cognition, Gestures, Mathematics/education, Child, Female, Humans, Male