Results 1 - 20 of 696
1.
Proc Natl Acad Sci U S A ; 121(23): e2311425121, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38814865

ABSTRACT

Theories of language development-informed largely by studies of Western, middle-class infants-have highlighted the language that caregivers direct to children as a key driver of language learning. However, some have argued that language development unfolds similarly across environmental contexts, including those in which child-directed language is scarce. This raises the possibility that children are able to learn from other sources of language in their environments, particularly the language directed to others around them. We explore this hypothesis with infants in an Indigenous Tseltal-speaking community in Southern Mexico who are rarely spoken to, yet have the opportunity to overhear a great deal of other-directed language by virtue of being carried on their mothers' backs. Adapting a previously established gaze-tracking method for detecting early word knowledge to our field setting, we find that Tseltal infants exhibit implicit knowledge of common nouns (Exp. 1), analogous to their US peers who are frequently spoken to. Moreover, they exhibit comprehension of Tseltal honorific terms that are exclusively used to greet adults in the community (Exp. 2), representing language that could only have been learned through overhearing. In so doing, Tseltal infants demonstrate an ability to discriminate words with similar meanings and perceptually similar referents at an earlier age than has been shown among Western children. Together, these results suggest that for some infants, learning from overhearing may be an important path toward developing language.


Subject(s)
Comprehension , Language Development , Humans , Infant , Female , Male , Comprehension/physiology , Mexico , Language , Vocabulary
2.
Proc Natl Acad Sci U S A ; 120(1): e2209153119, 2023 01 03.
Article in English | MEDLINE | ID: mdl-36574655

ABSTRACT

In the second year of life, infants begin to rapidly acquire the lexicon of their native language. A key learning mechanism underlying this acceleration is syntactic bootstrapping: the use of hidden cues in grammar to facilitate vocabulary learning. How infants forge the syntactic-semantic links that underlie this mechanism, however, remains speculative. A hurdle for theories is identifying computationally light strategies that have high precision within the complexity of the linguistic signal. Here, we presented 20-mo-old infants with novel grammatical elements in a complex natural language environment and measured their resultant vocabulary expansion. We found that infants can learn and exploit a natural language syntactic-semantic link in less than 30 min. The rapid speed of acquisition of a new syntactic bootstrap indicates that even emergent syntactic-semantic links can accelerate language learning. The results suggest that infants employ a cognitive network of efficient learning strategies to self-supervise language development.


Subject(s)
Learning , Semantics , Humans , Infant , Language , Vocabulary , Linguistics , Language Development
3.
Cereb Cortex ; 34(7)2024 Jul 03.
Article in English | MEDLINE | ID: mdl-39011935

ABSTRACT

Companionship refers to being in the presence of another individual. For adults, acquiring a new language is a highly social activity that often involves learning in the context of companionship. However, the effects of companionship on new language learning remain relatively underexplored, particularly with respect to word learning. Using a within-subject design, the current study employs electroencephalography to examine how two types of companionship (monitored and co-learning) affect word learning (semantic and lexical) in a new language. Dyads of Chinese speakers of English as a second language participated in a pseudo-word-learning task during which they were placed in monitored and co-learning companionship contexts. The results showed that exposure to co-learning companionship affected the early attention stage of word learning. Moreover, in this early stage, higher representational similarity between co-learners provided additional evidence that co-learning companionship influenced attention. Observed increases in delta- and theta-band interbrain synchronization further revealed that co-learning companionship facilitated semantic access. In sum, the similar neural representations and interbrain synchronization between co-learners suggest that co-learning companionship offers important benefits for learning words in a new language.


Subject(s)
Brain , Electroencephalography , Humans , Male , Female , Young Adult , Adult , Brain/physiology , Learning/physiology , Semantics , Multilingualism , Language , Attention/physiology , Verbal Learning/physiology
4.
Neuroimage ; 294: 120649, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38759354

ABSTRACT

Neurobehavioral studies have provided evidence for the effectiveness of anodal tDCS on language production by stimulation of the left Inferior Frontal Gyrus (IFG) or of the left Temporo-Parietal Junction (TPJ). However, tDCS is currently not used in clinical practice outside of trials, because behavioral effects have been inconsistent and the underlying neural effects unclear. Here, we propose to elucidate the neural correlates of verb and noun learning and to determine whether they can be modulated with anodal high-definition (HD) tDCS stimulation. Thirty-six neurotypical participants were randomly allocated to anodal HD-tDCS over either the left IFG, the left TPJ, or sham stimulation. On day one, participants performed a naming task (pre-test). On day two, participants underwent a new-word learning task with rare nouns and verbs concurrently with HD-tDCS for 20 min. The third day consisted of a post-test of naming performance. EEG was recorded at rest and during naming on each day. Verb learning was significantly facilitated by left IFG stimulation. HD-tDCS over the left IFG enhanced functional connectivity between the left IFG and TPJ, and this correlated with improved learning. HD-tDCS over the left TPJ enabled stronger local activation of the stimulated area (as indexed by greater alpha- and beta-band power decrease) during naming, but this did not translate into better learning. Thus, tDCS can induce local activation or modulation of network interactions. Only the enhancement of network interactions, not the increase in local activation, led to robust improvement of word learning. This emphasizes the need to develop new neuromodulation methods that influence network interactions. Our study suggests that this may be achieved through behavioral activation of one area and concomitant activation of another area with HD-tDCS.


Subject(s)
Transcranial Direct Current Stimulation , Humans , Transcranial Direct Current Stimulation/methods , Female , Male , Adult , Young Adult , Electroencephalography/methods , Prefrontal Cortex/physiology , Parietal Lobe/physiology , Verbal Learning/physiology , Temporal Lobe/physiology , Learning/physiology
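The "alpha- and beta-band power decrease" this entry uses as an index of local activation is, at its core, a baseline-to-task change in band-limited spectral power. The sketch below shows that computation on a synthetic signal; the sampling rate, band edges, and simulated sinusoids are invented for illustration, and a real EEG pipeline would use proper time-frequency estimation and artifact handling rather than a plain periodogram.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean periodogram power of `signal` within the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return float(psd[mask].mean())

def power_decrease(baseline, task, fs, band):
    """Relative change in band power from baseline to task; more negative
    values index stronger desynchronization (taken as local activation)."""
    pb = band_power(baseline, fs, *band)
    pt = band_power(task, fs, *band)
    return (pt - pb) / pb

fs = 250                          # assumed sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(3)
# Baseline with strong 10 Hz alpha; "task" with attenuated alpha.
baseline = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.normal(size=t.size)
task = 0.4 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.normal(size=t.size)
alpha_change = power_decrease(baseline, task, fs, (8, 13))  # strongly negative
```

Because the task signal carries the 10 Hz component at 0.4 of the baseline amplitude, the alpha-band power drops to roughly 16% of baseline, and `alpha_change` comes out strongly negative.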
5.
Dev Sci ; 27(2): e13441, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37612893

ABSTRACT

In word learning, learners need to identify the referents of words by leveraging the fact that the same word may co-occur with different sets of objects. This raises the question: what do children remember from "in the moment" that they can use for cross-situational learning? Furthermore, do children represent pictures of familiar animals versus drawings of non-existent novel objects differently as potential referents? This study examined these questions by creating learning scenarios with only two potential referents, requiring the least amount of memory to represent all co-present objects. Across three experiments (n > 250) with 4- and 6-year-old children, children reliably selected, at test, the referent intended during learning, though learning of novel objects was better than that of familiar objects. When asked for a co-present object, children of all ages in the study performed at chance in all conditions. We discuss developmental differences in cross-situational word learning with regard to representing different stimuli as potential referents. Importantly, all children used a propose-but-verify procedure for learning novel words, even in the simplest of the learning scenarios, given repeated exposure.


Subject(s)
Learning , Verbal Learning , Child , Humans , Child, Preschool , Probability , Mental Recall
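The propose-but-verify procedure this entry invokes is simple enough to simulate directly: the learner keeps a single hypothesized referent per word, retains it only if it recurs in the current scene, and otherwise guesses anew. The sketch below is a minimal illustration with invented words and objects, not the study's materials.

```python
import random

def propose_but_verify(exposures, seed=0):
    """Single-hypothesis (propose-but-verify) cross-situational learner.

    exposures: list of (word, objects_present) trials. The learner keeps
    one hypothesized referent per word: it is retained only if it recurs
    in the current scene, otherwise a new referent is guessed at random.
    """
    rng = random.Random(seed)
    hypothesis = {}
    for word, objects in exposures:
        if word in hypothesis and hypothesis[word] in objects:
            continue                             # hypothesis verified: keep it
        hypothesis[word] = rng.choice(objects)   # propose a new referent
    return hypothesis

# Two-referent scenes, as in the study's simplest condition; "ball" is
# the intended referent because it is present on every exposure.
trials = [
    ("blicket", ["ball", "cup"]),
    ("blicket", ["ball", "shoe"]),
    ("blicket", ["ball", "hat"]),
]
learned = propose_but_verify(trials)
```

Because the learner tracks only one hypothesis at a time, a wrong early guess is corrected only when it later fails verification, which is why repeated exposure matters even in two-referent scenes.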
6.
Dev Sci ; 27(5): e13513, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38685611

ABSTRACT

Relatively little work has focused on why we are motivated to learn words. In adults, recent experiments have shown that intrinsic reward signals accompany successful word learning from context, and that the experience of reward facilitates long-term memory for words. In adolescence, developmental changes are seen in reward and motivation systems as well as in reading and language systems. Here, in the face of this developmental change, we ask whether adolescents experience reward from word learning, and how the reward and memory benefits seen in adults are modulated by age. We used a naturalistic reading paradigm, which involved extracting novel word meanings from sentence context without the need for explicit feedback. By exploring ratings of enjoyment during the learning phase, as well as recognition memory for words a day later, we assessed whether adolescents show the same reward and learning patterns as adults. We tested 345 children between the ages of 10 and 18 (N > 84 in each 2-year age band) using this paradigm. We found evidence for our first prediction: children aged 10-18 report greater enjoyment following successful word learning. However, we did not find evidence for age-related change in this developmental period, or for memory benefits. This work gives us greater insight into the process of language acquisition and sets the stage for further investigations of intrinsic reward in typical and atypical development. RESEARCH HIGHLIGHTS: We constantly learn words from context, even in the absence of explicit rewards or feedback. In adults, intrinsic reward experienced during word learning is linked to a dopaminergic circuit in the brain, which also fuels enhancements in memory for words. We find that adolescents also report enhanced reward or enjoyment when they successfully learn words from sentence context. The relationship between reward and learning is maintained between the ages of 10 and 18. Unlike in adults, we did not observe ensuing memory benefits.


Subject(s)
Reward , Humans , Adolescent , Child , Female , Male , Language Development , Learning/physiology , Reading , Verbal Learning/physiology , Motivation , Recognition, Psychology/physiology
7.
Dev Sci ; 27(5): e13510, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38597678

ABSTRACT

Although identifying the referents of single words is often cited as a key challenge for getting word learning off the ground, it overlooks the fact that young learners consistently encounter words in the context of other words. How does this company help or hinder word learning? Prior investigations into early word learning from children's real-world language input have yielded conflicting results, with some influential findings suggesting an advantage for words that keep a diverse company of other words, and others suggesting the opposite. Here, we sought to triangulate the source of this conflict, comparing different measures of diversity and approaches to controlling for correlated effects of word frequency across multiple languages. The results were striking: while different diversity measures on their own yielded conflicting results, once nonlinear relationships with word frequency were controlled, we found convergent evidence that contextual consistency supports early word learning. RESEARCH HIGHLIGHTS: The words children learn occur in a sea of other words. The company words keep ranges from highly variable to highly consistent and circumscribed. Prior findings conflict over whether variability versus consistency helps early word learning. Accounting for correlated effects of word frequency resolved the conflict across multiple languages. Results reveal convergent evidence that consistency helps early word learning.


Subject(s)
Language Development , Verbal Learning , Vocabulary , Humans , Verbal Learning/physiology , Child, Preschool , Female , Male , Learning , Child Language , Language
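The analysis described in this entry hinges on separating a word's contextual diversity from its frequency, since the two are strongly and nonlinearly related. A minimal sketch of that idea follows, on a toy corpus with an assumed one-word context window and a polynomial frequency control standing in for the paper's actual modeling.

```python
import numpy as np
from collections import defaultdict

def diversity_and_frequency(utterances, window=1):
    """For each word: token frequency and the number of distinct word types
    occurring within `window` positions of it (its contextual diversity)."""
    freq = defaultdict(int)
    neighbors = defaultdict(set)
    for utt in utterances:
        for i, w in enumerate(utt):
            freq[w] += 1
            lo, hi = max(0, i - window), min(len(utt), i + window + 1)
            neighbors[w].update(utt[lo:i] + utt[i + 1:hi])
    return dict(freq), {w: len(s) for w, s in neighbors.items()}

def frequency_residualized(diversity, freq, degree=2):
    """Residualize diversity against a polynomial in log frequency, so that
    apparent diversity effects are not frequency effects in disguise."""
    words = sorted(freq)
    x = np.log([freq[w] for w in words])
    y = np.array([diversity[w] for w in words], dtype=float)
    resid = y - np.polyval(np.polyfit(x, y, degree), x)
    return dict(zip(words, resid))

corpus = [
    ["the", "dog", "ran"],
    ["the", "dog", "barked"],
    ["the", "cat", "ran"],
    ["a", "big", "dog"],
]
freq, div = diversity_and_frequency(corpus)
resid = frequency_residualized(div, freq)
```

The residualized scores, rather than raw diversity counts, are what one would then relate to a word-learning outcome such as age of acquisition.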
8.
Dev Sci ; 27(4): e13476, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38226762

ABSTRACT

Bilingual environments present an important context for word learning. One feature of bilingual environments is the existence of translation equivalents (TEs)-words in different languages that share similar meanings. Documenting TE learning over development may give us insight into the mechanisms underlying word learning in young bilingual children. Prior studies of TE learning have often been confounded by the fact that increases in overall vocabulary size with age lead to greater opportunities for learning TEs. To address this confound, we employed an item-level analysis, which controls for the age trajectory of each item independently. We used Communicative Development Inventory data from four bilingual datasets (two English-Spanish and two English-French; total N = 419) for modeling. Results indicated that knowing a word's TE increased the likelihood of knowing that word for younger children and for TEs that are more similar phonologically. These effects were consistent across datasets, but varied across lexical categories. Thus, TEs may allow bilingual children to bootstrap their early word learning in one language using their knowledge of the other language. RESEARCH HIGHLIGHTS: Bilingual children must learn words that share a common meaning across both languages, that is, translation equivalents, like dog in English and perro in Spanish. Item-level models explored how translation equivalents affect word learning, in addition to child-level (e.g., exposure) and item-level (e.g., phonological similarity) factors. Knowing a word increased the probability of knowing its corresponding translation equivalent, particularly for younger children and for more phonologically-similar translation equivalents. These findings suggest that young bilingual children use their word knowledge in one language to bootstrap their learning of words in the other language.


Subject(s)
Language Development , Multilingualism , Verbal Learning , Vocabulary , Humans , Child, Preschool , Verbal Learning/physiology , Male , Female , Child , Learning/physiology , Child Language , Language
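The item-level result in this entry (a translation-equivalent boost that is larger for younger children) can be illustrated with a stripped-down conditional-probability version of the analysis. The age bands and counts below are invented for illustration; the published analysis fits full item-level models controlling each word's age trajectory rather than computing raw proportions.

```python
import numpy as np

def te_effect_by_age(records):
    """For each age band, compare P(word known | TE known) with
    P(word known | TE unknown) -- a raw-proportion stand-in for the
    paper's item-level regression.

    records: list of (age_band, te_known, word_known) tuples.
    """
    out = {}
    for age in sorted({r[0] for r in records}):
        sub = [r for r in records if r[0] == age]
        with_te = [known for _, te, known in sub if te]
        without_te = [known for _, te, known in sub if not te]
        out[age] = (float(np.mean(with_te)), float(np.mean(without_te)))
    return out

# Invented counts shaped like the reported pattern: the TE boost is
# larger in the younger band.
data = (
    [("18mo", 1, 1)] * 8 + [("18mo", 1, 0)] * 2 +
    [("18mo", 0, 1)] * 3 + [("18mo", 0, 0)] * 7 +
    [("30mo", 1, 1)] * 7 + [("30mo", 1, 0)] * 3 +
    [("30mo", 0, 1)] * 6 + [("30mo", 0, 0)] * 4
)
effects = te_effect_by_age(data)
```

Here the younger band shows a large gap between the TE-known and TE-unknown proportions, while the older band's gap is small, mirroring the reported interaction with age.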
9.
Dev Sci ; 27(1): e13419, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37291692

ABSTRACT

Infants experience language in rich multisensory environments. For example, they may first be exposed to the word applesauce while touching, tasting, smelling, and seeing applesauce. In three experiments using different methods we asked whether the number of distinct senses linked with the semantic features of objects would impact word recognition and learning. Specifically, in Experiment 1 we asked whether words linked with more multisensory experiences were learned earlier than words linked with fewer multisensory experiences. In Experiment 2, we asked whether 2-year-olds' known words linked with more multisensory experiences were better recognized than those linked with fewer. Finally, in Experiment 3, we taught 2-year-olds labels for novel objects that were linked with either just visual or visual and tactile experiences and asked whether this impacted their ability to learn the new label-to-object mappings. Results converge to support an account in which richer multisensory experiences better support word learning. We discuss two pathways through which rich multisensory experiences might support word learning.


Subject(s)
Language Development , Speech Perception , Infant , Humans , Child, Preschool , Touch , Verbal Learning , Language
10.
Dev Sci ; 27(2): e13442, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37612886

ABSTRACT

Psycholinguistic research on children's early language environments has revealed many potential challenges for language acquisition. One is that in many cases, referents of linguistic expressions are hard to identify without prior knowledge of the language. Likewise, the speech signal itself varies substantially in clarity, with some productions being very clear, and others being phonetically reduced, even to the point of uninterpretability. In this study, we sought to better characterize the language-learning environment of American English-learning toddlers by testing how well phonetic clarity and referential clarity align in infant-directed speech. Using an existing Human Simulation Paradigm (HSP) corpus with referential transparency measurements and adding new measures of phonetic clarity, we found that the phonetic clarity of words' first mentions significantly predicted referential clarity (how easy it was to guess the intended referent from visual information alone) at that moment. Thus, when parents' speech was especially clear, the referential semantics were also clearer. This suggests that young children could use the phonetics of speech to identify globally valuable instances that support better referential hypotheses, by homing in on clearer instances and filtering out less-clear ones. Such multimodal "gems" offer special opportunities for early word learning. RESEARCH HIGHLIGHTS: In parent-infant interaction, parents' referential intentions are sometimes clear and sometimes unclear; likewise, parents' pronunciation is sometimes clear and sometimes quite difficult to understand. We find that clearer referential instances go along with clearer phonetic instances, more so than expected by chance. Thus, there are globally valuable instances ("gems") from which children could learn about words' pronunciations and words' meanings at the same time. Homing in on clear phonetic instances and filtering out less-clear ones would help children identify these multimodal "gems" during word learning.


Subject(s)
Speech Perception , Speech , Infant , Humans , Child, Preschool , Phonetics , Language Development , Learning , Language
11.
Dev Sci ; : e13545, 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38978148

ABSTRACT

Exposure to talker variability shapes how learning unfolds in the lab, and such variability occurs in the everyday speech infants hear in daily life. Here, we asked whether aspects of talker variability in speech input are also linked to the onset of word production. We further asked whether these effects were redundant with effects of speech register (i.e., whether speech input was adult- vs. child-directed). To do so, we first extracted a set of highly common nouns from a longitudinal corpus of home recordings from North American English-learning infants. We then used the acoustic variability in how these tokens were said to predict when the children first produced these same nouns. We found that, in addition to frequency, variability in how words sound in 6- to 17-month-olds' input predicted when children first said these words. Furthermore, while the proportion of child-directed speech also predicted the month of first production, it did so alongside measurements of acoustic variability in children's real-world input. Together, these results add to a growing body of literature suggesting that variability in how words sound in the input is linked to learning both in the lab and in daily life. RESEARCH HIGHLIGHTS: Talker variability shapes learning in the lab and exists in everyday speech; we asked whether it predicts word learning in the real world. Acoustic measurements of early words in infants' input (and their frequency) predicted when infants first said those same words. Speech register also predicted when infants said words, alongside effects of talker variability. Our results provide a deeper understanding of how sources of variability inherent to children's input connect to their learning and development.
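The core analysis in this entry (predicting month of first production from token frequency and acoustic variability in the input) can be sketched as an ordinary least-squares regression. Everything below is illustrative: the words, durations, and onset months are synthetic, variability is reduced to the SD of token durations, and the actual study used richer acoustic measures drawn from corpus recordings.

```python
import numpy as np

def fit_onset_model(tokens_per_word, onset_month):
    """Least-squares regression of month of first production on log token
    frequency and acoustic variability (SD of token durations).

    tokens_per_word: {word: list of token durations (s) heard in the input}
    onset_month: {word: month the child first produced the word}
    """
    words = sorted(tokens_per_word)
    log_freq = np.log([len(tokens_per_word[w]) for w in words])
    variability = np.array([float(np.std(tokens_per_word[w])) for w in words])
    X = np.column_stack([np.ones(len(words)), log_freq, variability])
    y = np.array([onset_month[w] for w in words], dtype=float)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return dict(zip(["intercept", "log_freq", "variability"], beta))

# Synthetic input: onset months generated from known coefficients, so the
# fit should recover them (negative coefficients = earlier production).
tokens = {
    "ball": [0.30, 0.50],
    "dog": [0.40, 0.40, 0.40, 0.40],
    "milk": [0.20, 0.40, 0.60],
    "shoe": [0.55, 0.45, 0.50, 0.60, 0.40],
}
true = {"intercept": 20.0, "log_freq": -2.0, "variability": -5.0}
onsets = {
    w: true["intercept"]
    + true["log_freq"] * np.log(len(d))
    + true["variability"] * float(np.std(d))
    for w, d in tokens.items()
}
coefs = fit_onset_model(tokens, onsets)
```

Because the toy onsets are generated exactly from the stated coefficients, the regression recovers them, which is a quick sanity check on the model setup before applying it to real corpus data.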

12.
Dev Sci ; 27(1): e13424, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37322865

ABSTRACT

The speech register that adults, especially caregivers, use when interacting with infants and toddlers, that is, infant-directed speech (IDS) or baby talk, has been reported to facilitate language development throughout the early years. However, the neural mechanisms, as well as why IDS has such a facilitative effect on development, remain to be investigated. The current study uses functional near-infrared spectroscopy (fNIRS) to evaluate two alternative hypotheses of this facilitative effect: that IDS serves to enhance linguistic contrastiveness, or that it serves to attract the child's attention. Behavioral and fNIRS data were acquired from twenty-seven Cantonese-learning toddlers 15-20 months of age while their parents spoke to them in either an IDS or adult-directed speech (ADS) register in a naturalistic task in which the child learned four disyllabic pseudowords. fNIRS results showed significantly greater neural responses to the IDS than the ADS register in the left dorsolateral prefrontal cortex (L-dlPFC), but the opposite response pattern in the bilateral inferior frontal gyrus (IFG). The differences in fNIRS responses to IDS and ADS in the L-dlPFC and the left parietal cortex (L-PC) correlated significantly and positively with differences in toddlers' behavioral word-learning performance. The same fNIRS measures in the L-dlPFC and right PC (R-PC) were significantly correlated with parents' pitch range differences between the two speech conditions. Together, our results suggest that the dynamic prosody of IDS increased toddlers' attention through greater involvement of the left frontoparietal network, which facilitated word learning relative to ADS. RESEARCH HIGHLIGHTS: This study examined for the first time the neural mechanisms by which infant-directed speech (IDS) facilitates word learning in toddlers. Using fNIRS, we identified the cortical regions that were directly involved in IDS processing. Our results suggest that IDS facilitates word learning by engaging right-lateralized prosody processing and top-down attentional mechanisms in left frontoparietal networks. The language network, including the inferior frontal gyrus and temporal cortex, was not directly involved in IDS processing to support word learning.


Subject(s)
Speech Perception , Speech , Infant , Adult , Humans , Child, Preschool , Speech/physiology , Language Development , Learning , Verbal Learning , Attention , Speech Perception/physiology
13.
Epilepsy Behav ; 153: 109720, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38428174

ABSTRACT

Accelerated long-term forgetting has been studied and demonstrated in adults with epilepsy. In contrast, studies of long-term consolidation (delays > 1 day) in children with epilepsy have yielded conflicting results. Yet childhood is a period of life in which the encoding and long-term storage of new words is essential for the development of knowledge and learning. The aim of this study was therefore to investigate long-term memory consolidation skills in children with self-limited epilepsy with centro-temporal spikes (SeLECTS), using a paradigm exploring new-word encoding skills and their long-term consolidation over a one-week delay. As lexical knowledge, working memory skills, and executive/attentional skills have been shown to contribute to long-term memory and new-word learning, we added standardized measures of oral language and executive/attentional functions to explore the involvement of these cognitive skills in new-word encoding and consolidation. The results showed that children with SeLECTS needed more repetitions to encode new words and struggled to encode the phonological forms of words; when they finally reached the level of the typically developing children, they retained what they had learned but, unlike the control participants, did not show improved recall after a one-week delay. Lexical knowledge, verbal working memory skills, and phonological skills contributed to encoding and/or recall abilities, and interference sensitivity was associated with the number of phonological errors during the pseudoword encoding phase. These results are consistent with the functional model linking working memory, phonology, and vocabulary in a fronto-temporo-parietal network. As SeLECTS involves perisylvian dysfunction, the associations between impaired sequence storage (phonological working memory), phonological representation storage, and new-word learning are not surprising. This dual impairment in both encoding and long-term consolidation may result in a large learning gap between children with and without epilepsy. Whether these results reflect differences in the sleep-induced benefits required for long-term consolidation or differences in the benefits of retrieval practice between the epilepsy group and healthy children remains open. As lexical development is associated with academic achievement and comprehension, the impact of such deficits in learning new words is certainly detrimental.


Subject(s)
Epilepsy , Memory Consolidation , Child , Adult , Humans , Memory, Long-Term , Memory, Short-Term , Learning , Verbal Learning
14.
Brain Cogn ; 180: 106207, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39053199

ABSTRACT

Evidence for sequential associative word learning in the auditory domain has been identified in infants, whereas adults have shown difficulties. To better understand which factors may facilitate adult auditory associative word learning, we assessed the role of auditory expertise as a learner-related property and of stimulus order as a stimulus-related manipulation in the association of auditory objects and novel labels. In the first experiment, we tested auditorily trained musicians against athletes (a high-level control group); in the second experiment, we tested stimulus ordering, contrasting object-label versus label-object presentation. Learning was evaluated from Event-Related Potentials (ERPs) during training and subsequent testing phases using a cluster-based permutation approach, as well as from accuracy-judgement responses during the test. Results revealed, for musicians, a late positive component in the ERP during testing, but neither an N400 (400-800 ms) nor behavioral effects at test, while athletes did not show any effect of learning. Moreover, the object-label-ordering group exhibited only emerging association effects during training, while the label-object-ordering group showed a trend-level late ERP effect (800-1200 ms) during the test as well as above-chance accuracy-judgement scores. Thus, our results suggest that the learner-related property of auditory expertise and the stimulus-related manipulation of stimulus ordering modulate auditory associative word learning in adults.


Subject(s)
Association Learning , Auditory Perception , Electroencephalography , Evoked Potentials , Music , Humans , Male , Female , Adult , Young Adult , Electroencephalography/methods , Association Learning/physiology , Auditory Perception/physiology , Evoked Potentials/physiology , Acoustic Stimulation/methods , Evoked Potentials, Auditory/physiology , Verbal Learning/physiology
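The cluster-based permutation approach used to evaluate the ERP effects in this entry can be sketched in simplified 1-D form: compute pointwise paired t-values, group contiguous supra-threshold points into clusters, and compare each observed cluster's mass against a null distribution built by sign-flipping subjects. This is a toy implementation with simulated data and an assumed cluster-forming threshold, not the authors' EEG analysis (which would operate over channels and use corrected p-value conventions).

```python
import numpy as np

def cluster_permutation_test(cond_a, cond_b, threshold=2.0, n_perm=500, seed=1):
    """Simplified 1-D cluster-based permutation test on paired data.

    cond_a, cond_b: arrays of shape (n_subjects, n_timepoints).
    Returns (start, end, cluster_mass, p_value) for each observed cluster.
    """
    rng = np.random.default_rng(seed)
    diff = cond_a - cond_b
    n = diff.shape[0]

    def tvals(d):
        # Pointwise one-sample t statistic on the paired differences.
        return d.mean(0) / (d.std(0, ddof=1) / np.sqrt(n))

    def clusters(t):
        # Contiguous runs where |t| exceeds threshold; mass = sum of |t|.
        out, start = [], None
        for i, above in enumerate(np.abs(t) > threshold):
            if above and start is None:
                start = i
            elif not above and start is not None:
                out.append((start, i, float(np.abs(t[start:i]).sum())))
                start = None
        if start is not None:
            out.append((start, len(t), float(np.abs(t[start:]).sum())))
        return out

    observed = clusters(tvals(diff))
    null_max = np.zeros(n_perm)
    for p in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(n, 1))   # permute by sign-flip
        masses = [m for *_, m in clusters(tvals(diff * signs))]
        null_max[p] = max(masses) if masses else 0.0
    # Cluster p-value: share of permutations with an equal-or-larger max mass.
    return [(s, e, m, float((null_max >= m).mean())) for s, e, m in observed]

# Synthetic data: 20 subjects, 100 timepoints, an effect in samples 40-60.
data_rng = np.random.default_rng(0)
cond_b = data_rng.normal(size=(20, 100))
effect = np.zeros(100)
effect[40:60] = 1.5
cond_a = data_rng.normal(size=(20, 100)) + effect
results = cluster_permutation_test(cond_a, cond_b)
```

Controlling the family-wise error at the cluster level in this way is what licenses statements about effects in specific latency windows (such as the 400-800 ms and 800-1200 ms windows above) without a per-timepoint multiple-comparisons penalty.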
15.
J Exp Child Psychol ; 242: 105883, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38412568

ABSTRACT

Most languages of the world use lexical tones to contrast words. Thus, understanding how individuals process tones when learning new words is fundamental for a better understanding of the mechanisms underlying word learning. The current study asked how tonal information is integrated during word learning. We investigated whether variability in tonal information during learning can interfere with the learning of new words and whether this is language and age dependent. Cantonese- and French-learning 30-month-olds (N = 97) and Cantonese- and French-speaking adults (N = 50) were tested with an eye-tracking task on their ability to learn phonetically different pairs of novel words in two learning conditions: a 1-tone condition in which each object was named with a single label and a 3-tone condition in which each object was named with three different labels varying in tone. We predicted learning in all groups in the 1-tone condition. For the 3-tone condition, because tones are part of the phonological system of Cantonese but not of French, we expected the Cantonese groups to either fail (toddlers) or show lower performance than in the 1-tone condition (adults), whereas the French groups might show less sensitivity to this manipulation. The results show that all participants learned in the 1-tone condition and were sensitive to tone variation to some extent. Learning in the 3-tone condition was impeded in both groups of toddlers. We argue that tonal interference in word learning likely comes from the phonological level in the Cantonese groups and from the acoustic level in the French groups.


Subject(s)
Pitch Perception , Speech Perception , Adult , Humans , Language , Verbal Learning , Linguistics
16.
J Exp Child Psychol ; 246: 105978, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38889479

ABSTRACT

Recent studies have shown that children benefit from orthography when learning new words. This orthographic facilitation can be explained by the fact that written language acts as an anchor device due to the transient nature of spoken language. There is also a close and reciprocal relationship between spoken and written language. Second-language word learning poses specific challenges in terms of orthography-phonology mappings that do not fully overlap with first-language mappings. The current study aimed to investigate whether orthographic information facilitates second-language word learning in developing readers, namely third and fifth graders. In a first experiment French children learned 16 German words, and in a second experiment they learned 24 German words. Word learning was assessed by picture designation, spoken word recognition, and orthographic choice. In both experiments, orthographic facilitation was found in both less and more advanced readers. The theoretical and practical implications of these findings are discussed.


Subject(s)
Multilingualism , Verbal Learning , Vocabulary , Humans , Male , Female , Child , Reading , Language Development , Language
17.
J Exp Child Psychol ; 238: 105780, 2024 02.
Article in English | MEDLINE | ID: mdl-37774502

ABSTRACT

The COVID-19 pandemic has led to a major increase in digital interactions in early experience. A crucial question, given expanding virtual platforms, is whether preschoolers' active word learning behaviors extend to their interactions over video chat. When not provided with sufficient information to link new words to meanings, preschoolers drive their word learning by asking questions. In person, 5-year-olds focus their questions on unknown words compared with known words, highlighting their active word learning. Here, we investigated whether preschoolers' question-asking over video chat differs from in-person question-asking. In the study, 5-year-olds were instructed to move toys in response to known and unknown verbs on a video conferencing call (i.e., Zoom). Consistent with in-person results, video chat participants (n = 18) asked more questions about unknown words than about known words. The rate of question-asking about words across video chat and in-person formats did not differ. Differences in the types of questions asked about words indicate, however, that although video chat does not hinder preschoolers' active word learning, the use of video chat may influence how preschoolers request information about words.


Subject(s)
Pandemics , Verbal Learning , Humans , Child, Preschool
18.
J Exp Child Psychol ; 240: 105831, 2024 04.
Article in English | MEDLINE | ID: mdl-38134601

ABSTRACT

A critical indicator of spoken language knowledge is the ability to discern the finest possible distinctions that exist between words in a language-minimal pairs, for example, the distinction between the novel words beesh and peesh. Infants differentiate similar-sounding novel labels like "bih" and "dih" by 17 months of age or earlier in the context of word learning, and adult word learners readily distinguish similar-sounding words. What is unclear is the shape of learning between infancy and adulthood: Is there a nonlinear increase early in development, or is there protracted improvement as experience with spoken language amasses? Three experiments tested monolingual English-speaking children aged 3 to 6 years and young adults. Children underperformed relative to adults when learning minimal-pair words (Experiment 1), learned minimal-pair words less well than dissimilar words even when speech materials were optimized for young children (Experiment 2), and continued to underperform even when the number of word instances during learning was quadrupled (Experiment 3). Nonetheless, the youngest group readily recognized familiar minimal pairs (Experiment 3). Results are consistent with a lengthy trajectory for detailed sound pattern learning in one's native language(s), although other interpretations are possible. Suggestions for research on developmental trajectories across various age ranges are made.


Subject(s)
Phonetics , Speech Perception , Infant , Young Adult , Child , Humans , Child, Preschool , Learning , Language , Language Development , Verbal Learning
19.
J Exp Child Psychol ; 240: 105842, 2024 04.
Article in English | MEDLINE | ID: mdl-38184956

ABSTRACT

Dialogic reading promotes early language and literacy development, but high-quality interactions may be inaccessible to disadvantaged children. This study examined whether a chatbot could deliver dialogic reading support comparable to a human partner for Chinese kindergarteners. Using a 2 × 2 factorial design, 148 children (83 girls; mean age = 70.07 months, SD = 7.64) from less resourced families in Beijing, China, were randomly assigned to one of four conditions: dialogic or non-dialogic reading techniques with either a chatbot or human partner. The chatbot provided comparable dialogic support to the human partner, enhancing story comprehension and word learning. Critically, the chatbot's effect on story comprehension was moderated by children's language proficiency rather than age or reading ability. This demonstrates that chatbots can facilitate dialogic reading and highlights the importance of considering children's language skills when implementing chatbot dialogic interventions.


Subject(s)
Comprehension , Reading , Child , Female , Humans , Child, Preschool , Vocabulary , Verbal Learning , China
20.
J Exp Child Psychol ; 243: 105913, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38537422

ABSTRACT

It has been proposed that, because of their evolutionary importance, animate entities are better remembered than inanimate entities. Although a growing body of evidence supports this hypothesis, it is still unclear whether the animacy effect persists under incidental learning conditions. Furthermore, few studies have tested the robustness of this effect in young children, with conflicting results. Using an incidental learning paradigm, we investigated whether young children (4- and 5-year-olds) would be better at learning words that refer to human or animal entities rather than vehicle entities, using pictures as stimuli. A sample of 79 children played digital Memory games while associations between pictures and words were presented incidentally. Consistent with the adaptive view of memory, words associated with human and animal entities were learned better incidentally than words associated with vehicle entities. The visual complexity of the pictures did not influence this animacy effect. In addition, the more exposure children had to the pictures, the more incidental learning occurred. Overall, the results confirm the robustness of the animacy effect and show that this processing advantage can be found in an incidental learning task in children as young as 4 or 5 years. Furthermore, this is the first study to show that the effect can be obtained with pictures in children. Demonstrating the animacy effect with pictures, and not just words, is a prerequisite for an ultimate explanation of this effect in terms of survival.


Subject(s)
Verbal Learning , Humans , Male , Child, Preschool , Female , Pattern Recognition, Visual/physiology , Photic Stimulation