Results 1 - 20 of 38
1.
J Exp Child Psychol; 178: 295-316, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30448530

ABSTRACT

Early emerging biases for conspecific vocalizations are a hallmark of early development. Typically developing neonates listen to speech more than many other sounds, including non-biological non-speech sounds, but listen equally to speech and monkey calls. By 3 months of age, however, infants prefer speech over both non-biological non-speech sounds and monkey calls. We examined whether different listening preferences continue to develop along different developmental trajectories and whether listening preferences are related to developmental outcomes. Given the static preference for speech over non-biological non-speech sounds and the dynamic preference for speech over monkey calls between birth and 3 months, we examined whether 9-month-olds prefer speech over non-biological non-speech sounds (Experiment 1) and prefer speech over monkey calls (Experiment 2). We compared preferences for sounds in infants at low risk (SIBS-TD) and infants at high risk (SIBS-A) of autism spectrum disorder (ASD), a heterogeneous population who differ from typically developing infants in their preferences for speech, and examined whether listening preferences predict vocabulary and autism-like behaviors at 12 months for both groups. At 9 months, SIBS-TD listened longer to speech than to non-speech sounds and listened longer to monkey calls than to speech, whereas SIBS-A listened longer to speech than to non-speech sounds but listened equally to speech and monkey calls. SIBS-TD's preferences did not predict immediate developmental outcomes. In contrast, SIBS-A who preferred speech over non-speech or monkey calls had larger vocabularies and fewer markers of autism-like behaviors at 12 months, which could have positive developmental implications.


Subjects
Auditory Perception, Autism Spectrum Disorder/psychology, Choice Behavior, Language, Speech, Child Development, Female, Humans, Infant, Male, Speech Perception, Vocabulary
2.
J Exp Child Psychol; 173: 268-283, 2018 Sep.
Article in English | MEDLINE | ID: mdl-29772454

ABSTRACT

Speech allows humans to communicate and to navigate the social world. By 12 months, infants recognize that speech elicits appropriate responses from others. However, it is unclear how infants process dynamic communicative scenes and how their processing abilities compare with those of adults. Do infants, like adults, process communicative events while the event is occurring or only after being presented with the outcome? We examined 12-month-olds' and adults' eye movements as they watched a Communicator grasp one (target) of two objects. During the test event, the Communicator could no longer reach the objects, so she spoke or coughed to a Listener, who selected either object. Infants' and adults' patterns of looking to the actors and objects revealed that both groups immediately evaluated the Communicator's speech, but not her cough, as communicative and recognized that the Listener should select the target object only when the Communicator spoke. Furthermore, infants and adults shifted their attention between the actors and the objects in very similar ways. This suggests that 12-month-olds can quickly process communicative events as they occur with adult-like accuracy. However, differences in looking reveal that 12-month-olds process these events more slowly than adults do. This early developing processing ability may allow infants to learn language and acquire knowledge from communicative interactions.


Subjects
Attention/physiology, Communication, Language, Speech/physiology, Adult, Eye Movements/physiology, Female, Humans, Infant, Male
3.
Proc Natl Acad Sci U S A; 109(32): 12933-7, 2012 Aug 07.
Article in English | MEDLINE | ID: mdl-22826217

ABSTRACT

Much of our knowledge is acquired not from direct experience but through the speech of others. Speech allows rapid and efficient transfer of information that is otherwise not directly observable. Do infants recognize that speech, even if unfamiliar, can communicate about an important aspect of the world that cannot be directly observed: a person's intentions? Twelve-month-olds saw a person (the Communicator) attempt but fail to achieve a target action (stacking a ring on a funnel). The Communicator subsequently directed either speech or a nonspeech vocalization to another person (the Recipient) who had not observed the attempts. The Recipient either successfully stacked the ring (Intended outcome), attempted but failed to stack the ring (Observable outcome), or performed a different stacking action (Related outcome). Infants recognized that speech could communicate about unobservable intentions, looking longer at Observable and Related outcomes than the Intended outcome when the Communicator used speech. However, when the Communicator used nonspeech, infants looked equally at the three outcomes. Thus, for 12-month-olds, speech can transfer information about unobservable aspects of the world such as internal mental states, which provides preverbal infants with a tool for acquiring information beyond their immediate experience.


Subjects
Communication, Comprehension, Concept Formation, Intention, Language Development, Speech, Acoustic Stimulation, Female, Humans, Infant, Male, Psychomotor Performance/physiology
4.
Dev Sci; 17(5): 766-74, 2014 Sep.
Article in English | MEDLINE | ID: mdl-24576182

ABSTRACT

How does the brain's response to speech change over the first months of life? Although behavioral findings indicate that neonates' listening biases are sharpened over the first months of life, with a species-specific preference for speech emerging by 3 months, the neural substrates underlying this developmental change are unknown. We examined neural responses to speech compared with biological non-speech sounds in 1- to 4-month-old infants using fMRI. Infants heard speech and biological non-speech sounds, including heterospecific vocalizations and human non-speech. We observed a left-lateralized response in temporal cortex for speech compared to biological non-speech sounds, indicating that this region is highly selective for speech by the first month of life. Moreover, this brain region becomes increasingly selective for speech over the next 3 months as neural substrates become less responsive to non-speech sounds. These results reveal specific changes in neural responses during a developmental period characterized by rapid behavioral changes.


Subjects
Brain Mapping, Phonetics, Speech/physiology, Temporal Lobe/physiology, Acoustic Stimulation, Age Factors, Auditory Perception, Female, Functional Laterality, Humans, Image Processing, Computer-Assisted, Infant, Infant, Newborn, Magnetic Resonance Imaging, Male, Oxygen/blood, Temporal Lobe/blood supply
5.
Dev Sci; 17(6): 872-9, 2014 Nov.
Article in English | MEDLINE | ID: mdl-24835877

ABSTRACT

Adults and 12-month-old infants recognize that even unfamiliar speech can communicate information between third parties, suggesting that they can separate the communicative function of speech from its lexical content. But do infants recognize that speech can communicate due to their experience understanding and producing language, or do they appreciate that speech is communicative earlier, with little such experience? We examined whether 6-month-olds recognize that speech can communicate information about an object. Infants watched a Communicator selectively grasp one of two objects (target). During test, the Communicator could no longer reach the objects; she turned to a Recipient and produced speech (a nonsense word) or non-speech (coughing). Infants looked longer when the Recipient selected the non-target than the target object when the Communicator spoke but not when she coughed - unless the Recipient had previously witnessed the Communicator's selective grasping of the target object. Our results suggest that at 6 months, with a receptive vocabulary of no more than a handful of commonly used words, infants possess some abstract understanding of the communicative function of speech. This understanding may provide an early mechanism for language and knowledge acquisition.


Subjects
Communication, Comprehension/physiology, Language Development, Analysis of Variance, Eye Movements, Female, Gestures, Humans, Infant, Male, Photic Stimulation, Time Factors
6.
J Exp Child Psychol; 114(2): 173-86, 2013 Feb.
Article in English | MEDLINE | ID: mdl-22960203

ABSTRACT

Perceptual experiences in one modality are often dependent on activity from other sensory modalities. These cross-modal correspondences are also evident in language. Adults and toddlers spontaneously and consistently map particular words (e.g., 'kiki') to particular shapes (e.g., angular shapes). However, the origins of these systematic mappings are unknown. Because adults and toddlers have had significant experience with the language mappings that exist in their environment, it is unclear whether the pairings are the result of language exposure or the product of an initial proclivity. We examined whether 4-month-old infants make the same sound-shape mappings as adults and toddlers. Four-month-olds consistently distinguished between congruent and incongruent sound-shape mappings in a looking time task (Experiment 1). Furthermore, mapping was based on the combination of consonants and vowels in the words given that neither consonants (Experiment 2) nor vowels (Experiment 3) alone sufficed for mapping. Finally, we confirmed that adults also made systematic sound-shape mappings (Experiment 4); however, for adults, vowels or consonants alone sufficed. These results suggest that some sound-shape mappings precede language learning, and may in fact aid in language learning by establishing a basis for matching labels to referents and narrowing the hypothesis space for young infants.


Subjects
Association Learning, Language Development, Pattern Recognition, Visual, Phonetics, Child Psychology, Speech Perception, Symbolism, Attention, Discrimination Learning, Female, Humans, Infant, Male
7.
J Cogn Neurosci; 24(5): 1224-32, 2012 May.
Article in English | MEDLINE | ID: mdl-22360624

ABSTRACT

Processing the vocalizations of conspecifics is critical for adaptive social interaction. A species-specific voice-selective region has been identified in the right STS that responds more strongly to human vocal sounds compared with a variety of nonvocal sounds. However, the STS also activates in response to a wide range of signals used in communication, such as eye gaze, biological motion, and speech. These findings raise the possibility that the voice-selective region of the STS may be especially sensitive to vocal sounds that are communicative, rather than to all human vocal sounds. Using fMRI, we demonstrate that the voice-selective region of the STS responds more strongly to communicative vocal sounds (such as speech and laughter) compared with noncommunicative vocal sounds (such as coughing and sneezing). The implications of these results for understanding the role of the STS in voice processing and in disorders of social communication, such as autism spectrum disorder, are discussed.


Subjects
Communication, Signal Detection, Psychological/physiology, Speech Perception/physiology, Temporal Lobe/physiology, Voice, Acoustic Stimulation, Analysis of Variance, Brain Mapping, Female, Functional Laterality, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Oxygen/blood, Reaction Time, Temporal Lobe/blood supply, Young Adult
8.
Proc Natl Acad Sci U S A; 106(44): 18867-72, 2009 Nov 03.
Article in English | MEDLINE | ID: mdl-19846770

ABSTRACT

Humans speak, monkeys grunt, and ducks quack. How do we come to know which vocalizations animals produce? Here we explore this question by asking whether young infants expect humans, but not other animals, to produce speech, and further, whether infants have similarly restricted expectations about the sources of vocalizations produced by other species. Five-month-old infants matched speech, but not human nonspeech vocalizations, specifically to humans, looking longer at static human faces when human speech was played than when either rhesus monkey or duck calls were played. They also matched monkey calls to monkey faces, looking longer at static rhesus monkey faces when rhesus monkey calls were played than when either human speech or duck calls were played. However, infants failed to match duck vocalizations to duck faces, even though infants likely have more experience with ducks than monkeys. Results show that by 5 months of age, human infants generate expectations about the sources of some vocalizations, mapping human faces to speech and rhesus faces to rhesus calls. Infants' matching capacity does not appear to be based on a simple associative mechanism or restricted to their specific experiences. We discuss these findings in terms of how infants may achieve such competence, as well as its specificity and relevance to acquiring language.


Subjects
Speech, Vocalization, Animal, Acoustic Stimulation, Animals, Face, Humans, Infant, Macaca mulatta, Photic Stimulation, Time Factors, Vocalization, Animal/physiology
9.
Dev Psychol; 57(9): 1411-1422, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34929087

ABSTRACT

How do infants learn the sounds of their native language when there are many simultaneous sounds competing for their attention? Adults and children detect when speech sounds change in complex scenes better than when other sounds change. We examined whether infants have similar biases to detect when human speech changes better than nonspeech sounds including musical instruments, water, and animal calls in complex auditory scenes. We used a change deafness paradigm to examine whether 5-month-olds' change detection is biased toward certain sounds within high-level categories (e.g., biological or generated by humans) or whether change detection depends on low-level salient physical features such that detection is better for sounds with more distinct acoustic properties, such as water. In Experiment 1, 5-month-olds showed some evidence for detecting speech and music changes better than no change trials. In Experiment 2, when speech and music were compared separately with animal and water sounds, infants detected when speech and water changed, but not when music changed across scenes. Infants' change detection is both biased for certain sound categories, as they detected small speech changes better than other sounds, and affected by the size of the acoustic change, similar to young infants' attentional priorities in complex visual scenes. By 5 months, infants show some preferential processing of speech changes in complex auditory environments, which could help bootstrap the language learning process.


Subjects
Phonetics, Speech, Attention, Bias, Humans, Language Development
10.
Child Dev; 81(2): 517-27, 2010.
Article in English | MEDLINE | ID: mdl-20438457

ABSTRACT

Human neonates prefer listening to speech compared to many nonspeech sounds, suggesting that humans are born with a bias for speech. However, neonates' preference may derive from properties of speech that are not unique but instead are shared with the vocalizations of other species. To test this, thirty neonates and sixteen 3-month-olds were presented with nonsense speech and rhesus monkey vocalizations. Neonates showed no preference for speech over rhesus vocalizations but showed a preference for both these sounds over synthetic sounds. In contrast, 3-month-olds preferred speech to rhesus vocalizations. Neonates' initial biases minimally include speech and monkey vocalizations. These listening preferences are sharpened over 3 months, yielding a species-specific preference for speech, paralleling findings on infant face perception.


Subjects
Attention, Choice Behavior, Infant, Newborn/psychology, Child Psychology, Speech Perception, Acoustic Stimulation, Animals, Arousal, Auditory Perception, Female, Follow-Up Studies, Humans, Language Development, Macaca mulatta, Male, Sound Spectrography, Vocalization, Animal
11.
J Autism Dev Disord; 50(7): 2475-2490, 2020 Jul.
Article in English | MEDLINE | ID: mdl-30790192

ABSTRACT

Human infants show a robust preference for speech over many other sounds, helping them learn language and interact with others. Lacking a preference for speech may underlie some language and social-pragmatic difficulties in children with ASD. However, it is unclear how an early speech preference supports later language and social-pragmatic abilities. We show that across infants displaying and not displaying later ASD symptoms, a greater speech preference at 9 months is related to increased attention to a person when they speak at 12 months, and better expressive language at 24 months, but is not related to later social-pragmatic attention or outcomes. Understanding how an early speech preference supports language outcomes could inform targeted and individualized interventions for children with ASD.


Subjects
Attention/physiology, Autism Spectrum Disorder/diagnosis, Autism Spectrum Disorder/psychology, Child Development/physiology, Linguistics, Speech/physiology, Child, Preschool, Female, Forecasting, Humans, Infant, Longitudinal Studies, Male, Photic Stimulation/methods, Social Skills
12.
Sci Rep; 9(1): 4158, 2019 Mar 11.
Article in English | MEDLINE | ID: mdl-30858390

ABSTRACT

Colaughter-simultaneous laughter between two or more individuals-allows listeners across different cultures and languages to quickly evaluate affiliation within a social group. We examined whether infants are sensitive to acoustic information in colaughter that indicates affiliation, specifically whether they can differentiate colaughter between friends and colaughter between strangers. In the first experiment, infants who heard alternating trials of colaughter between friends and strangers listened longer to colaughter between friends. In the second experiment, we examined whether infants were sensitive to the social context that was appropriate for each type of colaughter. Infants heard colaughter between friends and colaughter between strangers preceded by a silent visual scene depicting one of two different social contexts: either two people affiliating or turning away from each other. Infants looked longer when the social scene was incongruent with the type of colaughter. By 5 months, infants preferentially listen to colaughter between friends and detect when colaughter does not match the valence of a social interaction. The ability to rapidly evaluate acoustic features in colaughter that reveal social relationships between novel individuals appears early in human infancy and might be the product of an adaptive affiliation detection system that uses vocal cues.


Subjects
Auditory Perception, Child Development, Laughter, Social Behavior, Female, Friends, Humans, Infant, Male
13.
Sci Rep; 9(1): 12203, 2019 Aug 16.
Article in English | MEDLINE | ID: mdl-31417096

ABSTRACT

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

14.
Dev Psychol; 55(5): 920-933, 2019 May.
Article in English | MEDLINE | ID: mdl-30730173

ABSTRACT

Adult humans process communicative interactions by recognizing that information is being communicated through speech (linguistic ability) and simultaneously evaluating how to respond appropriately (social-pragmatic ability). These abilities may originate in infancy. Infants understand how speech communicates in social interactions, helping them learn language and how to interact with others. Infants later diagnosed with autism spectrum disorder (ASD), who show deficits in social-pragmatic abilities, differ in how they attend to the linguistic and social-pragmatic information in their environment. Despite their interdependence, experimental measures of language and social-pragmatic attention are often studied in isolation in infancy. Thus, the extent to which language and social-pragmatic abilities are related constructs remains unknown. Understanding how related or separable language and social-pragmatic abilities are in infancy may reveal whether these abilities are supported by distinguishable developmental mechanisms. This study uses a single communicative scene to examine whether real-time linguistic and social-pragmatic attention are separable in neurotypical infants and infants later diagnosed with ASD, and whether attending to linguistic and social-pragmatic information separately predicts later language and social-pragmatic abilities 1 year later. For neurotypical 12-month-olds and 12-month-olds later diagnosed with ASD, linguistic attention was not correlated with concurrent social-pragmatic attention. Furthermore, infants' real-time attention to the linguistic and social-pragmatic aspects of the scene at 12 months predicted and distinguished language and social-pragmatic abilities at 24 months. Language and social-pragmatic attention during communication are thus separable in infancy and may follow distinguishable developmental trajectories.


Subjects
Autism Spectrum Disorder/diagnosis, Child Language, Interpersonal Relations, Linguistics, Speech Perception/physiology, Attention, Child Development, Child, Preschool, Communication, Female, Humans, Infant, Longitudinal Studies, Male, Prospective Studies
15.
Autism Res; 12(2): 249-262, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30561908

ABSTRACT

Infants look at others' faces to gather social information. Newborns look equally at human and monkey faces but prefer human faces by 1 month, helping them learn to communicate and interact with others. Infants later diagnosed with autism spectrum disorder (ASD) look at human faces less than neurotypical infants, which may underlie some deficits in social-communication later in life. Here, we asked whether infants later diagnosed with ASD differ in their preferences for both human and nonhuman primate faces compared to neurotypical infants over their first 2 years of life. We compare infants' relative looking times to human or monkey faces paired with nonface controls (Experiment 1) and infants' total looking times to pairs of human and monkey faces (Experiment 2). Across two experiments, we find that between 6 and 18 months, infants later diagnosed with ASD show a greater downturn (decrease after an initial increase) in looking at both primate faces than neurotypical infants. A decrease in attention to primate faces may partly underlie the social-communicative difficulties in children with ASD and could reveal how early perceptual experiences with faces affect development. LAY SUMMARY: Looking at faces helps infants learn to interact with others. Infants look equally at human and monkey faces at birth but prefer human faces by 1 month. Infants later diagnosed with ASD who show deficits in social-communication look at human faces less than neurotypical infants. We find that a downturn (decline after an initial increase) in attention to both human and monkey faces between 6 and 18 months may partly underlie the social-communicative difficulties in children with ASD.


Subjects
Autism Spectrum Disorder/physiopathology, Facial Recognition/physiology, Animals, Female, Humans, Infant, Longitudinal Studies, Male, Primates, Prospective Studies
16.
Cognition; 107(2): 729-42, 2008 May.
Article in English | MEDLINE | ID: mdl-17950721

ABSTRACT

A language learner trying to acquire a new word must often sift through many potential relations between particular words and their possible meanings. In principle, statistical information about the distribution of those mappings could serve as one important source of data, but little is known about whether learners can in fact track multiple word-referent mappings, and, if they do, the precision with which they can represent those statistics. To test this, two experiments contrasted a pair of possibilities: that learners encode the fine-grained statistics of mappings in the input - both high- and low-frequency mappings - or, alternatively, that only high frequency mappings are represented. Participants were briefly trained on novel word-novel object pairs combined with varying frequencies: some objects were paired with one word, other objects with multiple words with differing frequencies (ranging from 10% to 80%). Results showed that participants were exquisitely sensitive to very small statistical differences in mappings. The second experiment showed that word learners' representation of low frequency mappings is modulated as a function of the variability in the environment. Implications for Mutual Exclusivity and Bayesian accounts of word learning are discussed.


Subjects
Association Learning, Pattern Recognition, Visual, Statistics as Topic, Verbal Learning, Adult, Color Perception, Depth Perception, Discrimination Learning, Humans, Mental Recall, Motion Perception, Orientation, Probability, Psycholinguistics, Speech Perception
17.
Cognition; 173: 87-92, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29358091

ABSTRACT

Infants understand that speech in their native language allows speakers to communicate. Is this understanding limited to their native language or does it extend to non-native languages with which infants have no experience? Twelve-month-old infants saw an actor, the Communicator, repeatedly select one of two objects. When the Communicator could no longer reach the target but a Recipient could, the Communicator vocalized a nonsense phrase either in English (infants' native language), Spanish (rhythmically different), or Russian (phonotactically different), or hummed (a non-speech vocalization). Across all three languages, native and non-native, but not humming, infants looked longer when the Recipient gave the Communicator the non-target object. Although, by 12 months, infants do not readily map non-native words to objects or discriminate most non-native speech contrasts, they understand that non-native languages can transfer information to others. Understanding language as a tool for communication extends beyond infants' native language: By 12 months, infants view language as a universal mechanism for transferring and acquiring new information.


Subjects
Child Development/physiology, Comprehension/physiology, Social Perception, Speech Perception/physiology, Verbal Behavior/physiology, Female, Humans, Infant, Male
18.
Front Psychol; 9: 2326, 2018.
Article in English | MEDLINE | ID: mdl-30532728

ABSTRACT

Perceptual narrowing, or a diminished perceptual sensitivity to infrequently encountered stimuli, sometimes accompanied by an increased sensitivity to frequently encountered stimuli, has been observed in unimodal speech and visual perception, as well as in multimodal perception, leading to the suggestion that it is a fundamental feature of perceptual development. However, recent findings in unimodal face perception suggest that perceptual abilities are flexible in development. Similarly, in multimodal perception, new paradigms examining temporal dynamics, rather than standard overall looking time, also suggest that perceptual narrowing might not be obligatory. Across two experiments, we assess perceptual narrowing in unimodal visual perception using remote eye-tracking. We compare adults' looking at human faces and monkey faces of different species, and present analyses of standard overall looking time and temporal dynamics. As expected, adults discriminated between different human faces, but, unlike previous studies, they also discriminated between different monkey faces. Temporal dynamics revealed that adults more readily discriminated human compared to monkey faces, suggesting a processing advantage for conspecifics compared to other animals. Adults' success in discriminating between faces of two unfamiliar monkey species calls into question whether perceptual narrowing is an obligatory developmental process. Humans undoubtedly diminish in their ability to perceive distinctions between infrequently encountered stimuli as compared to frequently encountered stimuli; however, consistent with recent findings, this narrowing should be conceptualized as a refinement rather than a loss of abilities. Perceptual abilities for infrequently encountered stimuli may remain detectable, though weaker than perception of frequently encountered stimuli. Consistent with several other accounts, we suggest that perceptual development must be more flexible than a perceptual narrowing account posits.

19.
Cognition; 100(2): B10-20, 2006 Jun.
Article in English | MEDLINE | ID: mdl-16289066

ABSTRACT

An essential part of the human capacity for language is the ability to link conceptual or semantic representations with syntactic representations. On the basis of data from spontaneous production, it has been suggested that young children acquire such links on a verb-by-verb basis, with little in the way of a general understanding of linguistic argument structure. Here, we suggest that a receptive understanding of argument structure, including principles linking syntax and conceptual/semantic structure, appears earlier. In a forced-choice pointing task we have shown that toddlers in the third year of life can map a single scene (involving a novel causative action paired with a novel verb) onto two distinct syntactic frames (transitive and intransitive). This suggests that even before toddlers begin generalizing argument structure in their own speech, they have some representation of conceptual/semantic categories, syntactic categories, and a system that links the two.


Subjects
Concept Formation, Semantics, Child Language, Child, Preschool, Female, Humans, Linguistics, Male, Verbal Learning
20.
Schizophr Res; 86(1-3): 130-7, 2006 Sep.
Article in English | MEDLINE | ID: mdl-16806838

ABSTRACT

Thought disorder is a fundamental symptom of schizophrenia, observable as irregularities in speech. It has been associated with functional and structural abnormalities in brain regions involved in language processing, including left temporal regions, during language production tasks. We were interested in the neural correlates of thought disorder during receptive language processing, as this function is relatively preserved despite relying on the same brain regions as expressive language. Twelve patients with schizophrenia and 11 controls listened to 30-s speech samples while undergoing fMRI scanning. Thought disorder and global symptom ratings were obtained for each patient. Thought disorder but not global symptomatology correlated positively with the BOLD response in the left posterior superior temporal lobe while listening to comprehensible speech (cluster-level corrected p=.023). The pattern of brain activity associated with thought disorder during listening to comprehensible speech differs from that seen during language generation tasks, where a reduction of the leftward laterality of language has often been observed. As receptive language is spared in thought disorder, we propose that the increase in activation reflects compensatory processing allowing for normal performance.


Subjects
Auditory Perceptual Disorders/pathology, Brain Mapping, Hearing/physiology, Schizophrenia/pathology, Speech, Acoustic Stimulation/methods, Adult, Auditory Perceptual Disorders/etiology, Female, Functional Laterality, Humans, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Male, Middle Aged, Oxygen/blood, Schizophrenia/complications