Results 1 - 20 of 173
1.
Cognition ; 250: 105855, 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38865912

ABSTRACT

People are more likely to gesture when their speech is disfluent. Why? According to an influential proposal, speakers gesture when they are disfluent because gesturing helps them to produce speech. Here, we test an alternative proposal: People may gesture when their speech is disfluent because gestures serve as a pragmatic signal, telling the listener that the speaker is having problems with speaking. To distinguish between these proposals, we tested the relationship between gestures and speech disfluencies when listeners could see speakers' gestures and when they were prevented from seeing their gestures. If gesturing helps speakers to produce words, then the relationship between gesture and disfluency should persist regardless of whether gestures can be seen. Alternatively, if gestures during disfluent speech are pragmatically motivated, then the tendency to gesture more when speech is disfluent should disappear when the speaker's gestures are invisible to the listener. Results showed that speakers were more likely to gesture when their speech was disfluent, but only when the listener could see their gestures and not when the listener was prevented from seeing them, supporting a pragmatic account of the relationship between gestures and disfluencies. People tend to gesture more when speaking is difficult, not because gesturing facilitates speech production, but rather because gestures comment on the speaker's difficulty presenting an utterance to the listener.

2.
Behav Brain Sci ; 47: e127, 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38934432

ABSTRACT

I focus here on concepts that are not part of core knowledge, such as the ability to treat people as social agents with shareable mental states. Spelke proposes that learning a language from another person might account for the development of these concepts. I suggest that homesigners, who create language rather than learn it, may be a potential counterexample to this hypothesis.


Subjects
Language, Learning, Humans, Learning/physiology, Concept Formation/physiology, Language Development
3.
Dev Sci ; : e13507, 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38629500

ABSTRACT

Blind adults display language-specificity in their packaging and ordering of events in speech. These differences affect the representation of events in co-speech gesture (gesturing with speech) but not in silent gesture (gesturing without speech). Here we examine when in development blind children begin to show adult-like patterns in co-speech and silent gesture. We studied speech and gestures produced by 30 blind and 30 sighted children learning Turkish, equally divided into 3 age groups: 5-6, 7-8, and 9-10 years. The children were asked to describe three-dimensional spatial event scenes (e.g., running out of a house), first with speech, and then without speech using only their hands. We focused on physical motion events, which, in blind adults, elicit cross-linguistic differences in speech and co-speech gesture but cross-linguistic similarities in silent gesture. Our results showed an effect of language on gesture when it was accompanied by speech (co-speech gesture), but not when it was used without speech (silent gesture), across both blind and sighted learners. The language-specific co-speech gesture pattern for both packaging and ordering semantic elements was present at the earliest ages we tested. The silent gesture pattern appeared later for blind children than for sighted children, for both packaging and ordering. Our findings highlight gesture as a robust and integral aspect of the language acquisition process from early ages and provide insight into when language does and does not have an effect on gesture, even in blind children who lack visual access to gesture. RESEARCH HIGHLIGHTS: Gestures produced with speech (co-speech gestures) follow language-specific patterns of event representation in both blind and sighted children. Gestures produced without speech (silent gestures) do not follow language-specific patterns of event representation in either blind or sighted children. Language-specific patterns in speech and co-speech gesture are observable at the same ages in blind and sighted children. The cross-linguistic similarities in silent gesture begin slightly later in blind children than in sighted children.

4.
Infancy ; 29(3): 302-326, 2024.
Article in English | MEDLINE | ID: mdl-38217508

ABSTRACT

The valid assessment of vocabulary development in dual-language-learning infants is critical to developmental science. We developed the Dual Language Learners English-Spanish (DLL-ES) Inventories to measure the vocabularies of U.S. English-Spanish DLLs. The inventories provide translation equivalents for all Spanish and English items on Communicative Development Inventory (CDI) short forms; extended inventories based on CDI long forms; and Spanish language-variety options. Item Response Theory (IRT) analyses applied to Wordbank and Web-CDI data (n = 2603, 12-18 months; n = 6722, 16-36 months; half female; 1% Asian, 3% Black, 2% Hispanic, 30% White, 64% unknown) showed near-perfect associations between DLL-ES and CDI long-form scores. Interviews with 10 Hispanic mothers of 18- to 24-month-olds (2 White, 1 Black, 7 multi-racial; 6 female) provide a proof of concept for the value of the DLL-ES for assessing the vocabularies of DLLs.
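For readers unfamiliar with the IRT machinery behind analyses like these, the sketch below shows the generic two-parameter logistic (2PL) item response function that underlies many vocabulary-scaling analyses. This is background illustration only, not the authors' model; the parameter values are invented.

    # Generic 2PL item response function (background sketch, not the
    # authors' model). theta = latent vocabulary ability, a = item
    # discrimination, b = item difficulty; all values below are invented.
    import numpy as np

    def p_produce(theta, a, b):
        """Probability that a child at ability theta produces the item."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    thetas = np.linspace(-3, 3, 7)          # a range of latent abilities
    print(p_produce(thetas, a=1.5, b=0.0))  # an item of average difficulty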


Subjects
Multilingualism, Child, Infant, Humans, Female, Vocabulary, Child Language, Language Tests, Language
5.
Open Mind (Camb) ; 7: 483-509, 2023.
Article in English | MEDLINE | ID: mdl-37637299

ABSTRACT

Laboratory studies have demonstrated beneficial effects of making comparisons on children's analogical reasoning skills. We extend this finding to an observational dataset comprising 42 children. The prevalence of specific comparisons, which identify a feature of similarity or difference, in children's spontaneous speech from 14-58 months is associated with higher scores in tests of verbal and non-verbal analogy in 6th grade. We test two pre-registered hypotheses about how parents influence children's production of specific comparisons: 1) via modelling, where parents produce specific comparisons during the sessions prior to child onset of this behaviour; 2) via responsiveness, where parents respond to their children's earliest specific comparisons in variably engaged ways. We do not find that parent modelling or responsiveness predicts children's production of specific comparisons. However, one of our pre-registered control analyses suggests that parents' global comparisons (comparisons that do not identify a specific feature of similarity or difference) may bootstrap children's later production of specific comparisons, controlling for parent IQ. We present exploratory analyses following up on this finding and suggest avenues for future confirmatory research. The results illuminate a potential route by which parents' behaviour may influence children's early spontaneous comparisons and potentially their later analogical reasoning skills.

6.
Cogn Psychol ; 145: 101592, 2023 09.
Article in English | MEDLINE | ID: mdl-37567048

ABSTRACT

How do learners learn what no and not mean when they are only presented with what is? Given its complexity, abstractness, and roles in logic, truth-functional negation might be a conceptual accomplishment. As a result, young children's gradual acquisition of negation words might be due to their undergoing a gradual conceptual change that is necessary to represent those words' logical meaning. However, it is also possible that linguistic expressions of negation take time to learn because of children's gradually increasing grasp of their language. To understand what no and not mean, children might first need to understand the rest of the sentences in which those words are used. We provide experimental evidence that conceptually equipped learners (adults) face the same acquisition challenges that children do when their access to linguistic information is restricted, which simulates how much language children understand at different points in acquisition. When watching a silenced video of naturalistic uses of negators by parents speaking to their children, adults could tell when the parent was prohibiting the child but struggled to infer that negators were used to express logical negation. However, when provided with additional information about what else the parent said, adults found it easy to guess that the parent had expressed logical negation. Though our findings do not rule out that young learners also undergo conceptual change, they show that increasing understanding of language alone, with no accompanying conceptual change, can account for the gradual acquisition of negation words.


Subjects
Language Development, Language, Child, Adult, Humans, Preschool Child, Learning, Linguistics, Logic
7.
Psychol Sci ; 34(3): 298-312, 2023 03.
Article in English | MEDLINE | ID: mdl-36608154

ABSTRACT

Languages carve up conceptual space in varying ways. For example, English uses the verb cut both for cutting with a knife and for cutting with scissors, but other languages use distinct verbs for these events. We asked whether, despite this variability, there are universal constraints on how languages categorize events involving tools (e.g., knife-cutting). We analyzed descriptions of tool events from two groups: (a) 43 hearing adult speakers of English, Spanish, and Chinese and (b) 10 deaf child homesigners ages 3 to 11 (each of whom has created a gestural language without input from a conventional language model) in five different countries (Guatemala, Nicaragua, United States, Taiwan, Turkey). We found alignment across these two groups: events that elicited tool-prominent language among the spoken-language users also elicited tool-prominent language among the homesigners. These results suggest ways of conceptualizing tool events that are so prominent as to constitute a universal constraint on how events are categorized in language.


Subjects
Cross-Cultural Comparison, Sign Language, Adult, Humans, Child, United States, Preschool Child, Language, Linguistics, Language Development, Gestures
8.
Dev Sci ; 26(3): e13335, 2023 05.
Article in English | MEDLINE | ID: mdl-36268613

ABSTRACT

Researchers have long been interested in the origins of humans' understanding of symbolic number, focusing primarily on how children learn the meanings of number words (e.g., "one", "two", etc.). However, recent evidence indicates that children learn the meanings of number gestures before learning number words. In the present set of experiments, we ask whether children's early knowledge of number gestures resembles their knowledge of nonsymbolic number. In four experiments, we show that preschool children (n = 139 in total; age M = 4.14 years, SD = 0.71, range = 2.75-6.20) do not view number gestures in the same way that they view nonsymbolic representations of quantity (i.e., arrays of shapes), which opens the door to the possibility that young children view number gestures as symbolic, as adults and older children do. A video abstract of this article can be viewed at https://youtu.be/WtVziFN1yuI. HIGHLIGHTS: Children were more accurate when enumerating briefly-presented number gestures than arrays of shapes, with a shallower decline in accuracy as quantities increased. We replicated this finding with arrays of shapes that were organized into neat, dice-like configurations (compared with the random configurations used in Experiment 1). The advantage in enumerating briefly-presented number gestures was evident before children had learned the cardinal principle. When gestures were digitally altered to pit handshape configuration against number of fingers extended, children overwhelmingly based their responses on handshape configuration.


Subjects
Gestures, Learning, Preschool Child, Humans, Child, Adolescent, Learning/physiology, Knowledge
9.
Lang Learn Dev ; 18(1): 16-40, 2022.
Article in English | MEDLINE | ID: mdl-35603228

ABSTRACT

Human languages, signed and spoken, can be characterized by the structural patterns they use to associate communicative forms with meanings. One such pattern is paradigmatic morphology, where complex words are built from the systematic use and re-use of sub-lexical units. Here, we provide evidence of emergent paradigmatic morphology akin to number inflection in homesign, a communication system developed without input from a conventional language. We study the communication systems of four deaf child homesigners (mean age 8 years, 2 months). Although these idiosyncratic systems vary from one another, we nevertheless find that all four children use handshape and movement devices productively to express cardinal and non-cardinal number information, and that their number expressions are consistent in both form and meaning. Our study shows, for the first time, that all four homesigners incorporate number devices not only into representational devices used as predicates, but also into gestures functioning as nominals, including deictic gestures. In other words, the homesigners express number by systematically combining and re-combining additive markers for number (qua inflectional morphemes) with representational and deictic gestures (qua bases). The creation of new, complex forms with predictable meanings across gesture types and linguistic functions constitutes evidence for an inflectional morphological paradigm in homesign and expands our understanding of the structural patterns of language that are, and are not, dependent on linguistic input.

10.
Proc Biol Sci ; 289(1970): 20220066, 2022 03 09.
Article in English | MEDLINE | ID: mdl-35259991

ABSTRACT

How language began is one of the oldest questions in science, but theories remain speculative due to a lack of direct evidence. Here, we report two experiments that generate empirical evidence to inform gesture-first and vocal-first theories of language origin; in each, we tested modern humans' ability to communicate a range of meanings (995 distinct words) using either gesture or non-linguistic vocalization. Experiment 1 is a cross-cultural study, with signal Producers sampled from Australia (n = 30, mean age = 32.63 years, s.d. = 12.42) and Vanuatu (n = 30, mean age = 32.40 years, s.d. = 11.76). Experiment 2 is a cross-experiential study in which Producers were either sighted (n = 10, mean age = 39.60 years, s.d. = 11.18) or severely vision-impaired (n = 10, mean age = 39.40 years, s.d. = 10.37). A group of undergraduate student Interpreters (n = 140) guessed the meaning of the signals created by the Producers. Communication success was substantially higher in the gesture modality than in the vocal modality (twice as high overall; 61.17% versus 29.04% success). This was true within cultures, across cultures, and even for the signals produced by severely vision-impaired participants. The success of gesture is attributed in part to its greater universality (i.e., similarity in form across different Producers). Our results support the hypothesis that gesture is the primary modality for language creation.


Subjects
Hominidae, Voice, Adult, Animals, Gestures, Humans, Language, Language Development
11.
J Exp Psychol Gen ; 151(6): 1252-1271, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34855443

ABSTRACT

Why do people gesture when they speak? According to one influential proposal, the Lexical Retrieval Hypothesis (LRH), gestures serve a cognitive function in speakers' minds by helping them find the right spatial words. Do gestures also help speakers find the right words when they talk about abstract concepts that are spatialized metaphorically? If so, then preventing people from gesturing should increase the rate of disfluencies during speech about both literal and metaphorical space. Here, we sought to conceptually replicate the finding that preventing speakers from gesturing increases disfluencies in speech with literal spatial content (e.g., the rocket went up), which has been interpreted as evidence for the LRH, and to extend this pattern to speech with metaphorical spatial content (e.g., my grades went up). Across three measures of speech disfluency (disfluency rate, speech rate, and rate of nonjuncture filled pauses), we found no difference in disfluency between speakers who were allowed to gesture freely and speakers who were not allowed to gesture, for any category of speech (literal spatial content, metaphorical spatial content, and no spatial content). This large dataset (7,969 phrases containing 2,075 disfluencies) provided no support for the idea that gestures help speakers find the right words, even for speech with literal spatial content. Upon reexamining studies cited as evidence for the LRH and related proposals over the past 5 decades, we conclude that there is, in fact, no reliable evidence that preventing gestures impairs speaking. Together, these findings challenge long-held beliefs about why people gesture when they speak.
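As a concrete, hypothetical illustration of the three disfluency measures named above, the sketch below computes a disfluency rate per 100 words, a speech rate in words per second, and a filled-pause rate per 100 words; the tokenization and the filled-pause inventory are assumptions, not the authors' coding scheme.

    # Hypothetical sketch of three per-phrase disfluency measures; the
    # definitions are assumed, not taken from the paper's coding manual.
    FILLED_PAUSES = {"uh", "um", "er"}  # assumed filled-pause inventory

    def disfluency_measures(words, n_disfluencies, duration_s):
        n = len(words)
        n_filled = sum(w in FILLED_PAUSES for w in words)
        return {
            "disfluency_rate": 100 * n_disfluencies / n,  # per 100 words
            "speech_rate": n / duration_s,                # words per second
            "filled_pause_rate": 100 * n_filled / n,      # per 100 words
        }

    phrase = "the rocket um went up".split()
    print(disfluency_measures(phrase, n_disfluencies=1, duration_s=2.1))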


Subjects
Gestures, Speech, Humans, Metaphor
12.
Dev Psychol ; 57(4): 519-534, 2021 Apr.
Article in English | MEDLINE | ID: mdl-34483346

ABSTRACT

Personal narrative is decontextualized talk in which individuals recount stories of personal experiences about past or future events. As an everyday discursive speech type, narrative potentially invites parents and children to explicitly link together, generalize from, and make inferences about representations, i.e., to engage in higher-order thinking talk (HOTT). Here we ask whether narratives in early parent-child interactions include proportionally more HOTT than other forms of everyday home language. Sixty-four children (31 girls; 36 White, 14 Black, 8 Hispanic, 6 mixed/other race) and their primary caregiver(s) (mean income = $61,000) were recorded in 90-minute spontaneous home interactions every 4 months from 14-58 months. Speech was transcribed and coded for narrative and HOTT. We found that parents at all visits, and children after 38 months, used more HOTT in narrative than in non-narrative talk, and more HOTT than expected by chance. At 38 and 50 months, we examined HOTT in a related but distinct form of decontextualized talk (pretend, or talk during imaginary episodes of interaction) as a control, to test whether other forms of decontextualized talk also relate to HOTT. While pretend contained more HOTT than other (non-narrative/non-pretend) talk, it generally contained less HOTT than narrative. Additionally, unlike HOTT during narrative, the amount of HOTT during pretend did not exceed the amount expected by chance, suggesting that narrative serves as a particularly rich 'breeding ground' for HOTT in parent-child interactions. These findings provide insight into the nature of narrative discourse and suggest that narrative may be used as a lever to increase children's higher-order thinking.


Subjects
Language Development, Parent-Child Relations, Female, Humans, Language, Parents
13.
eNeuro ; 8(4)2021.
Article in English | MEDLINE | ID: mdl-34341067

ABSTRACT

How does the brain anticipate information in language? When people perceive speech, low-frequency (<10 Hz) activity in the brain synchronizes with bursts of sound and visual motion. This phenomenon, called cortical stimulus-tracking, is thought to be one way that the brain predicts the timing of upcoming words, phrases, and syllables. In this study, we test whether stimulus-tracking depends on domain-general expertise or on language-specific prediction mechanisms. We go on to examine how the effects of expertise differ between frontal and sensory cortex. We recorded electroencephalography (EEG) from human participants who were experts in either sign language or ballet, and we compared stimulus-tracking between groups while participants watched videos of sign language or ballet. We measured stimulus-tracking by computing coherence between EEG recordings and visual motion in the videos. Results showed that stimulus-tracking depends on domain-general expertise, and not on language-specific prediction mechanisms. At frontal channels, fluent signers showed stronger coherence to sign language than to dance, whereas expert dancers showed stronger coherence to dance than to sign language. At occipital channels, however, the two groups of participants did not show different patterns of coherence. These results are difficult to explain by entrainment of endogenous oscillations, because neither sign language nor dance show any periodicity at the frequencies of significant expertise-dependent stimulus-tracking. These results suggest that the brain may rely on domain-general predictive mechanisms to optimize perception of temporally-predictable stimuli such as speech, sign language, and dance.
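The coherence measure described above can be illustrated with standard signal-processing tools. The sketch below is a minimal, hypothetical example, not the authors' pipeline: the sampling rate, the synthetic signals, and the analysis parameters are all assumptions.

    # Minimal sketch: spectral coherence between one EEG channel and a
    # stimulus motion signal, in the spirit of stimulus-tracking analyses.
    # All signals and parameters here are synthetic/assumed.
    import numpy as np
    from scipy.signal import coherence

    fs = 250                       # sampling rate in Hz (assumed)
    t = np.arange(0, 60, 1 / fs)   # 60 s of data
    rng = np.random.default_rng(0)

    motion = np.abs(np.sin(2 * np.pi * 2 * t))        # 2 Hz motion bursts
    eeg = 0.3 * motion + rng.standard_normal(t.size)  # EEG weakly tracks it

    f, cxy = coherence(eeg, motion, fs=fs, nperseg=2 * fs)
    low = f < 10                   # the low-frequency band discussed above
    print(np.round(f[low], 1), np.round(cxy[low], 2))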


Subjects
Electroencephalography, Speech, Attention, Brain, Humans, Periodicity
14.
Front Hum Neurosci ; 15: 650152, 2021.
Article in English | MEDLINE | ID: mdl-34408634

ABSTRACT

Children differ widely in their early language development, and this variability has important implications for later life outcomes. Parent language input is a strong experiential predictor of the variability in children's early language skills. However, little is known about the brain or cognitive mechanisms that underlie this relationship. To address this gap, we used longitudinal data spanning 15 years to examine the role of the parental language input children receive during the preschool years in the development of brain structures that support language processing during the school years. Using naturalistic parent-child interactions, we measured parental language input (amount and complexity) to children between the ages of 18 and 42 months (n = 23). We then assessed longitudinal changes in children's cortical thickness measured at five time points between 9 and 16 years of age, focusing on specific regions of interest (ROIs) that have been shown to play a role in language processing. Our results support the view that, even after accounting for important covariates such as parental intelligence quotient (IQ) and education, the amount and complexity of language input to a young child before school forecast the rate of change in cortical thickness across the 7-year period from ages 9 to 16, that is, 5½ to 12½ years after the input was measured. Examining the proximal correlates of change in brain and cognitive differences has the potential to inform targets for effective prevention and intervention strategies.

15.
Psychol Sci ; 32(8): 1227-1237, 2021 08.
Article in English | MEDLINE | ID: mdl-34240647

ABSTRACT

When we use our hands to estimate the length of a stick in the Müller-Lyer illusion, we are highly susceptible to the illusion. But when we prepare to act on sticks under the same conditions, we are significantly less susceptible. Here, we asked whether people are susceptible to illusion when they use their hands not to act on objects but to describe them in spontaneous co-speech gestures or conventional sign languages of the deaf. Thirty-two English speakers and 13 American Sign Language signers used their hands to act on, estimate the length of, and describe sticks eliciting the Müller-Lyer illusion. For both gesture and sign, the magnitude of illusion in the description task was smaller than the magnitude of illusion in the estimation task and not different from the magnitude of illusion in the action task. The mechanisms responsible for producing gesture in speech and sign thus appear to operate not on percepts involved in estimation but on percepts derived from the way we act on objects.


Subjects
Illusions, Gestures, Hand, Humans, Sign Language, Speech
16.
Cognition ; 215: 104845, 2021 10.
Article in English | MEDLINE | ID: mdl-34273677

ABSTRACT

The link between language and cognition is unique to our species and emerges early in infancy. Here, we provide the first evidence that this precocious language-cognition link is not limited to spoken language but is instead sufficiently broad to include sign language, a language presented in the visual modality. Four- to six-month-old hearing infants, never before exposed to sign language, were familiarized to a series of category exemplars, each presented by a woman who either signed in American Sign Language (ASL) while pointing and gazing toward the objects, or pointed and gazed without language (control). At test, infants viewed two images: one a new member of the now-familiar category, and the other a member of an entirely new category. Four-month-old infants who observed ASL distinguished between the two test objects, indicating that they had successfully formed the object category; they were as successful as age-mates who listened to their native (spoken) language. Moreover, it was specifically the linguistic elements of sign language that drove this facilitative effect: infants in the control condition, who observed the woman only pointing and gazing, failed to form object categories. Finally, the cognitive advantages of observing ASL narrow quickly in hearing infants: by 5 to 6 months, watching ASL no longer supports categorization, although listening to their native spoken language continues to do so. Together, these findings illuminate the breadth of infants' early link between language and cognition and offer insight into how it unfolds.


Subjects
Language, Sign Language, Auditory Perception, Female, Hearing, Humans, Infant, Language Development
17.
PLoS One ; 16(6): e0252926, 2021.
Article in English | MEDLINE | ID: mdl-34153044

ABSTRACT

Like many indigenous populations worldwide, Yucatec Maya communities are rapidly undergoing change as they become more connected with urban centers and as formal education, wage labour, and market goods become more accessible to their inhabitants. However, little is known about how these changes affect children's language input. Here, we provide the first systematic assessment of the quantity, type, source, and language of the input received by 29 Yucatec Maya infants born six years apart in communities where increased contact with urban centers has resulted in greater exposure to the dominant surrounding language, Spanish. Results show that infants from the second cohort received less directed input than infants in the first and that, when directly addressed, most of their input was in Spanish. To investigate the mechanisms driving the observed patterns, we interviewed 126 adults from the communities. Against common assumptions, we found that the reduction in Mayan input did not simply result from speakers devaluing the Maya language. Instead, changes in input could be attributed to changes in childcare practices, as well as to caregiver ethnotheories regarding the relative difficulty of acquiring each of the languages. Our study highlights the need to understand the drivers of individual behaviour in the face of socio-demographic and economic change, as this is key to determining the fate of linguistic diversity.


Subjects
Commerce/statistics & numerical data, Communication, Language Development, Language, Learning, Linguistics/methods, Speech/physiology, Adult, Child, Cohort Studies, Female, Humans, North American Indians, Infant, Male, United States, Young Adult
18.
Discourse Process ; 58(3): 213-232, 2021.
Article in English | MEDLINE | ID: mdl-34024962

ABSTRACT

In this study, adults who were naïve to organic chemistry drew stereoisomers of molecules and explained their drawings. From these explanations, we identified nine strategies. Five of the nine strategies referred to properties of the molecule that were explanatorily irrelevant to solving the problem; the remaining four referred to properties that were explanatorily relevant to the solution. For each problem, we tallied which of the nine strategies were expressed within the explanation for that problem and determined whether each strategy was expressed in speech only, in gesture only, or in both speech and gesture. After these explanations, all participants watched the experimenter deliver a two-minute training module on stereoisomers. Following the training, participants repeated the drawing+explanation task on six new problems. The number of relevant strategies that participants expressed in speech (alone or with gesture) before training did not predict their post-training scores. However, the number of relevant strategies participants expressed in gesture only before training did predict their post-training scores. Conveying relevant information about stereoisomers uniquely in gesture prior to a brief training is thus a good index of who is most likely to learn from the training. We suggest that gesture reveals explanatorily relevant implicit knowledge that reflects (and perhaps even promotes) the acquisition of new understanding.

19.
Child Dev ; 92(6): 2335-2355, 2021 11.
Article in English | MEDLINE | ID: mdl-34018614

ABSTRACT

A longitudinal study with 45 children (Hispanic, 13%; non-Hispanic, 87%) investigated whether the early production of non-referential beat and flip gestures, as opposed to referential iconic gestures, in parent-child naturalistic interactions from 14 to 58 months old predicts narrative abilities at age 5. Results revealed that only non-referential beats significantly (p < .01) predicted later narrative productions. The pragmatic functions of the children's speech that accompany these gestures were also analyzed in a representative sample of 18 parent-child dyads, revealing that beats were typically associated with biased assertions or questions. These findings show that the early use of beats predicts narrative abilities later in development, and suggest that this relation is likely due to the pragmatic-structuring function that beats reflect in early discourse.


Subjects
Gestures, Speech, Preschool Child, Humans, Infant, Longitudinal Studies, Narration, Parent-Child Relations
20.
Psychol Sci ; 32(4): 536-548, 2021 04.
Article in English | MEDLINE | ID: mdl-33720801

ABSTRACT

Early linguistic input is a powerful predictor of children's language outcomes. We investigated two novel questions about this relationship: Does the impact of language input vary over time, and does the impact of time-varying language input on child outcomes differ for vocabulary and for syntax? Using methods from epidemiology to account for baseline and time-varying confounding, we predicted 64 children's outcomes on standardized tests of vocabulary and syntax in kindergarten from their parents' vocabulary and syntax input when the children were 14 and 30 months old. For vocabulary, children whose parents provided diverse input earlier as well as later in development were predicted to have the highest outcomes. For syntax, children whose parents' input substantially increased in syntactic complexity over time were predicted to have the highest outcomes. The optimal sequence of parents' linguistic input for supporting children's language acquisition thus varies for vocabulary and for syntax.


Subjects
Language, Vocabulary, Child, Child Language, Preschool Child, Humans, Infant, Language Development, Parent-Child Relations, Parents