Results 1 - 9 of 9
1.
Proc Natl Acad Sci U S A; 118(7), 2021 Feb 09.
Article in English | MEDLINE | ID: mdl-33510040

ABSTRACT

Before they even speak, infants become attuned to the sounds of the language(s) they hear, processing native phonetic contrasts more easily than nonnative ones. For example, between 6 to 8 mo and 10 to 12 mo, infants learning American English get better at distinguishing English [ɹ] and [l], as in "rock" vs. "lock," relative to infants learning Japanese. Influential accounts of this early phonetic learning phenomenon initially proposed that infants group sounds into native vowel- and consonant-like phonetic categories, such as [ɹ] and [l] in English, through a statistical clustering mechanism dubbed "distributional learning." The feasibility of this mechanism for learning phonetic categories has been challenged, however. Here, we demonstrate that a distributional learning algorithm operating on naturalistic speech can predict early phonetic learning, as observed in Japanese and American English infants, suggesting that infants might learn through distributional learning after all. We further show, however, that, contrary to the original distributional learning proposal, our model learns units too brief and too fine-grained acoustically to correspond to phonetic categories. This challenges the influential idea that what infants learn are phonetic categories. More broadly, our work introduces a mechanism-driven approach to the study of early phonetic learning, together with a quantitative modeling framework that can handle realistic input. This allows accounts of early phonetic learning to be linked to concrete, systematic predictions regarding infants' attunement.
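A minimal sketch of the kind of distributional clustering the abstract describes, with synthetic feature frames standing in for real speech features and scikit-learn's Dirichlet-process mixture standing in for the authors' exact model; the data, parameter values, and library choice are illustrative assumptions, not the paper's setup:

```python
# Sketch: cluster short acoustic frames with a Dirichlet-process Gaussian
# mixture, then represent each frame by its posterior over the learned units.
# Synthetic data stands in for real speech frames; settings are illustrative.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)

# Pretend "MFCC" frames: 2,000 frames x 13 dimensions drawn from a few
# arbitrary Gaussian sources (real input would come from naturalistic speech).
sources = rng.normal(size=(5, 13)) * 3.0
frames = np.vstack([rng.normal(loc=s, scale=1.0, size=(400, 13)) for s in sources])

# Distributional learning: an (approximately) nonparametric clustering of
# frames, with no word or phoneme labels.
dpgmm = BayesianGaussianMixture(
    n_components=50,  # upper bound; unneeded components get tiny weights
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="diag",
    max_iter=500,
    random_state=0,
).fit(frames)

# The learned representation of a frame is its posterior over discovered units.
posteriors = dpgmm.predict_proba(frames)
effective_units = int((dpgmm.weights_ > 1e-2).sum())
print("units with non-negligible weight:", effective_units)
```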


Subjects
Language Development, Neurological Models, Natural Language Processing, Phonetics, Humans, Speech Perception, Speech Recognition Software
2.
Cogn Sci; 47(7): e13314, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37462237

ABSTRACT

In the first year of life, infants' speech perception becomes attuned to the sounds of their native language. This process of early phonetic learning has traditionally been framed as phonetic category acquisition. However, recent studies have hypothesized that the attunement may instead reflect a perceptual space learning process that does not involve categories. In this article, we explore the idea of perceptual space learning by implementing five different perceptual space learning models and testing them on three phonetic contrasts that have been tested in the infant speech perception literature. We reproduce and extend previous results showing that a perceptual space learning model that uses only distributional information about the acoustics of short time slices of speech can account for at least some crosslinguistic differences in infant perception. Moreover, we find that a second perceptual space learning model, which benefits from word-level guidance, performs equally well in capturing crosslinguistic differences in infant speech perception. These results provide support for the general idea of perceptual space learning as a theory of early phonetic learning but suggest that more fine-grained data are needed to distinguish between different formal accounts. Finally, we provide testable empirical predictions of the two most promising models and show that these are not identical, making it possible to independently evaluate each model in experiments with infants in future research.
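Models of this kind are typically compared against infant data by computing a machine discriminability score for a phonetic contrast from the model's learned representations. Below is a minimal ABX-style scorer; the random vectors stand in for real model representations, and the function name and data are invented for the example:

```python
# Sketch of an ABX-style discriminability score over model representations.
# Inputs are hypothetical: arrays of representation vectors for two phone
# categories (e.g., tokens of /r/ and /l/) produced by some trained model.
import numpy as np


def abx_score(cat_a, cat_b, metric=None):
    """Fraction of (A, B, X) triples in which X (drawn from category A) is
    closer to A than to B. 0.5 = no discrimination, 1.0 = perfect."""
    if metric is None:
        metric = lambda u, v: np.linalg.norm(u - v)
    correct, total = 0, 0
    for i, x in enumerate(cat_a):
        for j, a in enumerate(cat_a):
            if i == j:
                continue
            for b in cat_b:
                correct += metric(x, a) < metric(x, b)
                total += 1
    return correct / total


rng = np.random.default_rng(1)
r_tokens = rng.normal(loc=0.0, size=(20, 8))   # stand-in representations
l_tokens = rng.normal(loc=0.5, size=(20, 8))
print("ABX(r vs l):", round(abx_score(r_tokens, l_tokens), 3))
```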


Subjects
Language Development, Speech Perception, Humans, Infant, Phonetics, Language, Spatial Learning, Computer Simulation
3.
Open Mind (Camb); 5: 113-131, 2021.
Article in English | MEDLINE | ID: mdl-35024527

ABSTRACT

Early changes in infants' ability to perceive native and nonnative speech sound contrasts are typically attributed to their developing knowledge of phonetic categories. We critically examine this hypothesis and argue that there is little direct evidence of category knowledge in infancy. We then propose an alternative account in which infants' perception changes because they are learning a perceptual space that is appropriate to represent speech, without yet carving up that space into phonetic categories. If correct, this new account has substantial implications for understanding early language development.

4.
Cognition; 164: 116-143, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28412593

ABSTRACT

The semantic bootstrapping hypothesis proposes that children acquire their native language through exposure to sentences of the language paired with structured representations of their meaning, whose component substructures can be associated with words and syntactic structures used to express these concepts. The child's task is then to learn a language-specific grammar and lexicon based on (probably contextually ambiguous, possibly somewhat noisy) pairs of sentences and their meaning representations (logical forms). Starting from these assumptions, we develop a Bayesian probabilistic account of semantically bootstrapped first-language acquisition in the child, based on techniques from computational parsing and interpretation of unrestricted text. Our learner jointly models (a) word learning: the mapping between components of the given sentential meaning and lexical words (or phrases) of the language, and (b) syntax learning: the projection of lexical elements onto sentences by universal construction-free syntactic rules. Using an incremental learning algorithm, we apply the model to a dataset of real syntactically complex child-directed utterances and (pseudo) logical forms, the latter including contextually plausible but irrelevant distractors. Taking the Eve section of the CHILDES corpus as input, the model simulates several well-documented phenomena from the developmental literature. In particular, the model exhibits syntactic bootstrapping effects (in which previously learned constructions facilitate the learning of novel words), sudden jumps in learning without explicit parameter setting, acceleration of word-learning (the "vocabulary spurt"), an initial bias favoring the learning of nouns over verbs, and one-shot learning of words and their meanings. The learner thus demonstrates how statistical learning over structured representations can provide a unified account for these seemingly disparate phenomena.
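The full model couples word learning with construction-free syntactic rules in a joint Bayesian framework. As a much smaller illustration of just the cross-situational word-learning ingredient, the toy below spreads evidence from ambiguous (utterance, candidate-meanings) pairs across word-meaning associations; the data and scoring rule are invented for the example and are not the paper's model:

```python
# Toy cross-situational word-meaning learner: each utterance is paired with a
# set of candidate meaning symbols, including distractors, and the learner
# accumulates normalized association scores. A cartoon of the word-learning
# component only, not the paper's Bayesian CCG model.
from collections import defaultdict

# (utterance, candidate meanings) pairs; distractor meanings are mixed in.
data = [
    ("you like the doggie".split(), {"LIKE", "DOG", "MILK"}),
    ("the doggie runs".split(), {"RUN", "DOG", "CUP"}),
    ("you like the ball".split(), {"LIKE", "BALL", "CUP"}),
    ("the ball rolls".split(), {"ROLL", "BALL", "DOG"}),
]

assoc = defaultdict(lambda: defaultdict(float))
for words, meanings in data:
    for w in words:
        for m in meanings:
            # Spread one unit of evidence over the candidate meanings.
            assoc[w][m] += 1.0 / len(meanings)

for w in ("doggie", "like", "ball"):
    best = max(assoc[w], key=assoc[w].get)
    print(w, "->", best, round(assoc[w][best], 2))
```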


Subjects
Language Development, Language, Theoretical Models, Algorithms, Bayes Theorem, Computer Simulation, Humans, Semantics, Verbal Learning/physiology
5.
Top Cogn Sci; 5(3): 495-521, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23749769

ABSTRACT

The acquisition of syntactic categories is a crucial step in the process of acquiring syntax. At this stage, before a full grammar is available, only surface cues are available to the learner. Previous computational models have demonstrated that local contexts are informative for syntactic categorization. However, local contexts are affected by sentence-level structure. In this paper, we add sentence type as an observed feature to a model of syntactic category acquisition, based on experimental evidence showing that pre-syntactic children are able to distinguish sentence type using prosody and other cues. The model, a Bayesian Hidden Markov Model, allows for adding sentence type in a few different ways; we find that sentence type can aid syntactic category acquisition if it is used to characterize the differences in word order between sentence types. In these models, knowledge of sentence type permits similar gains to those found by extending the local context.
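A minimal sketch of the modeling idea, assuming that the observed sentence type conditions the HMM's transition and initial-state probabilities so that word-order differences between sentence types can be captured; the category inventory, vocabulary, and probabilities below are made up for illustration:

```python
# Sketch: a hidden Markov model over syntactic categories whose transition
# matrix depends on an observed sentence type (declarative vs. question).
# Tiny hand-set numbers, purely illustrative.
import numpy as np

states = ["NOUN", "VERB"]            # toy category inventory
vocab = {"dogs": 0, "bark": 1, "do": 2}

# Emission probabilities P(word | category), rows = states.
emission = np.array([
    [0.7, 0.1, 0.2],   # NOUN
    [0.1, 0.6, 0.3],   # VERB
])

# One transition matrix per sentence type: declaratives favor NOUN -> VERB,
# questions favor verb-initial order.
transition = {
    "declarative": np.array([[0.2, 0.8], [0.7, 0.3]]),
    "question":    np.array([[0.3, 0.7], [0.8, 0.2]]),
}
initial = {
    "declarative": np.array([0.8, 0.2]),
    "question":    np.array([0.2, 0.8]),
}


def sentence_likelihood(words, sent_type):
    """Forward algorithm conditioned on the observed sentence type."""
    A, pi = transition[sent_type], initial[sent_type]
    alpha = pi * emission[:, vocab[words[0]]]
    for w in words[1:]:
        alpha = (alpha @ A) * emission[:, vocab[w]]
    return alpha.sum()


print(sentence_likelihood(["dogs", "bark"], "declarative"))
print(sentence_likelihood(["do", "dogs", "bark"], "question"))
```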


Subjects
Language Development, Language, Bayes Theorem, Preschool Child, Comprehension, Computer Simulation, Cues (Psychology), Humans, Infant, Learning, Markov Chains
6.
Psychol Rev; 120(4): 751-78, 2013 Oct.
Article in English | MEDLINE | ID: mdl-24219848

ABSTRACT

Infants segment words from fluent speech during the same period when they are learning phonetic categories, yet accounts of phonetic category acquisition typically ignore information about the words in which sounds appear. We use a Bayesian model to illustrate how feedback from segmented words might constrain phonetic category learning by providing information about which sounds occur together in words. Simulations demonstrate that word-level information can successfully disambiguate overlapping English vowel categories. Learning patterns in the model are shown to parallel human behavior from artificial language learning tasks. These findings point to a central role for the developing lexicon in phonetic category acquisition and provide a framework for incorporating top-down constraints into models of category learning.
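A toy version of the argument, with synthetic one-dimensional "vowel" tokens and scikit-learn's mixture model standing in for the paper's Bayesian learner; all numbers and word labels are invented:

```python
# Toy illustration: two vowel categories that overlap acoustically are hard
# to separate from acoustics alone, but knowing which word frame each token
# came from (a minimal-pair lexicon) pulls the category means apart.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n = 500
vowel_a = rng.normal(0.0, 1.0, n)     # e.g., vowel tokens from "sheep"-type words
vowel_b = rng.normal(1.0, 1.0, n)     # e.g., overlapping vowel from "ship"-type words
acoustics = np.concatenate([vowel_a, vowel_b]).reshape(-1, 1)

# Acoustics-only learner: with heavy overlap, a 2-component mixture may place
# both components near the grand mean instead of recovering the categories.
gmm = GaussianMixture(n_components=2, n_init=5, random_state=0).fit(acoustics)
print("acoustics-only means:", np.sort(gmm.means_.ravel()).round(2))

# Word-level feedback: tokens grouped by the word they occurred in.
word_labels = np.array(["word_A"] * n + ["word_B"] * n)
for w in ("word_A", "word_B"):
    print(w, "mean:", round(float(acoustics[word_labels == w].mean()), 2))
```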


Subjects
Computer Simulation, Concept Formation/physiology, Language Development, Learning/physiology, Bayes Theorem, Humans, Phonetics, Vocabulary
7.
Cognition; 117(2): 107-25, 2010 Nov.
Article in English | MEDLINE | ID: mdl-20832060

ABSTRACT

The ability to discover groupings in continuous stimuli on the basis of distributional information is present across species and across perceptual modalities. We investigate the nature of the computations underlying this ability using statistical word segmentation experiments in which we vary the length of sentences, the amount of exposure, and the number of words in the languages being learned. Although the results are intuitive from the perspective of a language learner (longer sentences, less training, and a larger language all make learning more difficult), standard computational proposals fail to capture several of these results. We describe how probabilistic models of segmentation can be modified to take into account some notion of memory or resource limitations in order to provide a closer match to human performance.
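One simple way to build a resource limitation into a statistical segmenter is to let co-occurrence counts decay over exposure, as sketched below with a transitional-probability learner; this illustrates the general idea only and is not one of the specific models evaluated in the paper (the decay factor and boundary threshold are arbitrary):

```python
# Memory-limited statistical segmentation sketch: syllable-pair counts decay
# exponentially, so evidence from early exposure fades; word boundaries are
# placed where the (decayed) transitional probability dips.
from collections import defaultdict

DECAY = 0.98          # per-sentence forgetting factor (assumed)
THRESH = 0.6          # boundary threshold on transitional probability (assumed)

pair_counts = defaultdict(float)
unit_counts = defaultdict(float)


def observe(syllables):
    """Update decayed counts from one sentence of syllables."""
    for k in list(pair_counts):
        pair_counts[k] *= DECAY
    for k in list(unit_counts):
        unit_counts[k] *= DECAY
    for a, b in zip(syllables, syllables[1:]):
        pair_counts[(a, b)] += 1.0
        unit_counts[a] += 1.0
    if syllables:
        unit_counts[syllables[-1]] += 1.0


def segment(syllables):
    """Place a word boundary wherever transitional probability is low."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        tp = pair_counts[(a, b)] / unit_counts[a] if unit_counts[a] else 0.0
        if tp < THRESH:
            words.append(current)
            current = []
        current.append(b)
    words.append(current)
    return ["".join(w) for w in words]


stream = "tu pi ro go la bu pa do ti go la bu tu pi ro pa do ti".split()
observe(stream)
print(segment(stream))
```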


Subjects
Language Development, Language, Learning/physiology, Statistical Models, Acoustic Stimulation, Choice Behavior/physiology, Humans, Logistic Models, Speech Perception/physiology
8.
Cognition; 112(1): 21-54, 2009 Jul.
Article in English | MEDLINE | ID: mdl-19409539

ABSTRACT

Since the experiments of Saffran et al. [Saffran, J., Aslin, R., & Newport, E. (1996). Statistical learning in 8-month-old infants. Science, 274, 1926-1928], there has been a great deal of interest in the question of how statistical regularities in the speech stream might be used by infants to begin to identify individual words. In this work, we use computational modeling to explore the effects of different assumptions the learner might make regarding the nature of words, in particular how these assumptions affect the kinds of words that are segmented from a corpus of transcribed child-directed speech. We develop several models within a Bayesian ideal observer framework, and use them to examine the consequences of assuming either that words are independent units, or units that help to predict other units. We show through empirical and theoretical results that the assumption of independence causes the learner to undersegment the corpus, with many two- and three-word sequences (e.g. what's that, do you, in the house) misidentified as individual words. In contrast, when the learner assumes that words are predictive, the resulting segmentation is far more accurate. These results indicate that taking context into account is important for a statistical word segmentation strategy to be successful, and raise the possibility that even young infants may be able to exploit more subtle statistical patterns than have usually been considered.
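A simplified scorer in the spirit of the unigram ("words are independent units") assumption: each word token is either a reuse of an already-generated word, which is cheap when the word is frequent, or a new word paid for by a phoneme-level base distribution, so frequent collocations can be attractive to keep whole. The constants and corpus below are invented, and this is not the paper's Gibbs-sampling implementation:

```python
# Sequential CRP-style scoring of a segmented corpus under a unigram lexicon
# model with a simple phoneme-level base distribution. Illustrative only.
import math
from collections import Counter

ALPHA = 20.0        # concentration parameter (assumed)
P_STOP = 0.5        # word-end probability in the base distribution (assumed)
N_PHONES = 26       # size of the phoneme inventory (assumed)


def base_logprob(word):
    """log P0(word): geometric length, uniform phonemes."""
    return len(word) * math.log((1 - P_STOP) / N_PHONES) + math.log(P_STOP)


def segmentation_logprob(utterances):
    """Score a segmented corpus: reuse a past word or draw a new one from P0."""
    counts, total, logp = Counter(), 0, 0.0
    for utt in utterances:
        for w in utt:
            p = (counts[w] + ALPHA * math.exp(base_logprob(w))) / (total + ALPHA)
            logp += math.log(p)
            counts[w] += 1
            total += 1
    return logp


# Two analyses of the same tiny corpus: the collocation "whatsthat" kept whole
# (undersegmented) vs. split into "whats" + "that".
corpus_under = [["whatsthat"], ["whatsthat"], ["whatsthat"], ["yousee", "whatsthat"]]
corpus_split = [["whats", "that"], ["whats", "that"], ["whats", "that"],
                ["you", "see", "whats", "that"]]
print("undersegmented:", round(segmentation_logprob(corpus_under), 1))
print("split:         ", round(segmentation_logprob(corpus_split), 1))
```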


Subjects
Psychological Models, Speech Perception/physiology, Algorithms, Bayes Theorem, Statistical Data Interpretation, Humans, Language Development, Statistical Models
9.
Conf Proc IEEE Eng Med Biol Soc; 2006: 1165-8, 2006.
Article in English | MEDLINE | ID: mdl-17946029

ABSTRACT

In this work we present and apply infinite Gaussian mixture modeling, a non-parametric Bayesian method, to the problem of spike sorting. As this approach is Bayesian, it allows us to integrate prior knowledge about the problem in a principled way. Because it is non-parametric we are able to avoid model selection, a difficult problem that most current spike sorting methods do not address. We compare this approach to using penalized log likelihood to select the best from multiple finite mixture models trained by expectation maximization. We show favorable offline sorting results on real data and discuss ways to extend our model to online applications.
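A sketch of the comparison the abstract describes, using scikit-learn's Dirichlet-process ("infinite") mixture against finite mixtures fit by EM and selected by a penalized-likelihood score (BIC here), with synthetic spike features standing in for real recordings; the feature construction and settings are assumptions for illustration:

```python
# Infinite (Dirichlet-process) Gaussian mixture vs. finite mixtures + BIC
# for a spike-sorting-style clustering problem, on synthetic features.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture, GaussianMixture

rng = np.random.default_rng(0)
# Pretend spike waveform features (e.g., first two principal components)
# from three underlying units.
centers = np.array([[0.0, 0.0], [4.0, 1.0], [1.0, 5.0]])
features = np.vstack([rng.normal(c, 0.7, size=(300, 2)) for c in centers])

# Nonparametric route: set an upper bound on components and let the
# Dirichlet-process prior switch off the ones it does not need.
dp = BayesianGaussianMixture(
    n_components=15,
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500,
    random_state=0,
).fit(features)
print("DP mixture, components with weight > 0.02:", int((dp.weights_ > 0.02).sum()))

# Model-selection route: fit finite mixtures of several sizes with EM and
# keep the one with the lowest BIC.
bics = {
    k: GaussianMixture(n_components=k, n_init=3, random_state=0).fit(features).bic(features)
    for k in range(1, 8)
}
print("finite mixtures, best k by BIC:", min(bics, key=bics.get))
```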


Subjects
Action Potentials/physiology, Algorithms, Artificial Intelligence, Brain Mapping/methods, Nerve Net/physiology, Automated Pattern Recognition/methods, Animals, Bayes Theorem, Haplorhini