Results 1 - 20 of 24
1.
Infant Behav Dev ; 76: 101959, 2024 May 22.
Article in English | MEDLINE | ID: mdl-38781790

ABSTRACT

Werker and Tees (1984) prompted decades of research attempting to detail the paths infants take towards specialisation for the sounds of their native language(s). Most of this research has examined the trajectories of monolingual children. However, it has also been proposed that bilinguals, who are exposed to greater phonetic variability than monolinguals and must learn the rules of two languages, may remain perceptually open to non-native language sounds later into life than monolinguals. Using a visual habituation paradigm, the current study tests this question by comparing 15- to 18-month-old monolingual and bilingual children's developmental trajectories for non-native phonetic consonant contrast discrimination. A novel approach to the integration of stimulus presentation software with eye-tracking software was validated for objective measurement of infant looking time. The results did not support the hypothesis of a protracted period of sensitivity to non-native phonetic contrasts in bilingual compared to monolingual infants. Implications for diversification of perceptual narrowing research and implementation of increasingly sensitive measures are discussed.
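Looking-time studies of this kind typically reduce the eye-tracking record to a habituation criterion. As a minimal sketch of the general logic (not the authors' procedure), a sliding-window criterion can be expressed in Python; the window size and the 50% threshold are illustrative assumptions:

```python
# Illustrative sliding-window habituation criterion, as commonly used in
# visual habituation paradigms. The window size and 50% criterion are
# assumptions for illustration, not this study's parameters.

def has_habituated(looking_times, window=3, criterion=0.5):
    """Return the 1-based trial index at which mean looking time over the
    most recent `window` trials drops below `criterion` times the mean of
    the first `window` trials, or None if the criterion is never met."""
    if len(looking_times) < 2 * window:
        return None
    baseline = sum(looking_times[:window]) / window
    for end in range(2 * window, len(looking_times) + 1):
        recent = looking_times[end - window:end]
        if sum(recent) / window < criterion * baseline:
            return end
    return None

# Example: looking time (seconds) per habituation trial for one infant
trials = [12.0, 10.5, 11.2, 9.0, 7.1, 5.0, 4.2, 3.9]
print(has_habituated(trials))  # → 7
```

Infants meeting such a criterion would then move to the test phase, where looking time to the non-native contrast change is compared against a control trial.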

2.
Biling (Camb Engl) ; 26(4): 835-844, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37636491

ABSTRACT

Bilingual infants rely differently than monolinguals on facial information, such as lip patterns, to differentiate their native languages. This may explain, at least in part, why young monolinguals and bilinguals show differences in social attention. For example, in the first year, bilinguals attend faster and more often to static faces over non-faces than do monolinguals (Mercure et al., 2018). However, the developmental trajectories of these differences are unknown. In this pre-registered study, data were collected from 15- to 18-month-old monolinguals (English) and bilinguals (English and another language) to test whether group differences in face-looking behaviour persist into the second year. We predicted that bilinguals would orient more rapidly and more often to static faces than monolinguals. Results supported the first but not the second hypothesis. This suggests that, even into the second year of life, toddlers' rapid visual orientation to static social stimuli is sensitive to early language experience.

3.
Brain Lang ; 244: 105304, 2023 09.
Article in English | MEDLINE | ID: mdl-37481794

ABSTRACT

From birth, we perceive speech by hearing and seeing people talk. In adults cortical representations of visual speech are processed in the putative temporal visual speech area (TVSA), but it remains unknown how these representations develop. We measured infants' cortical responses to silent visual syllables and non-communicative mouth movements using functional Near-Infrared Spectroscopy. Our results indicate that cortical specialisation for visual speech may emerge during infancy. The putative TVSA was active to both visual syllables and gurning around 5 months of age, and more active to gurning than to visual syllables around 10 months of age. Multivariate pattern analysis classification of distinct cortical responses to visual speech and gurning was successful at 10, but not at 5 months of age. These findings imply that cortical representations of visual speech change between 5 and 10 months of age, showing that the putative TVSA is initially broadly tuned and becomes selective with age.


Subject(s)
Speech Perception , Adult , Humans , Infant , Speech Perception/physiology , Acoustic Stimulation/methods , Hearing , Photic Stimulation/methods
4.
Brain Topogr ; 36(4): 459-475, 2023 07.
Article in English | MEDLINE | ID: mdl-37171657

ABSTRACT

In adults, the integration of audiovisual speech elicits specific higher (super-additive) or lower (sub-additive) cortical responses when compared to the responses to unisensory stimuli. Although there is evidence that the fronto-temporal network is active during perception of audiovisual speech in infancy, the development of fronto-temporal responses to audiovisual integration remains unknown. In the current study, 5-month-olds and 10-month-olds watched bimodal (audiovisual) and alternating unimodal (auditory + visual) syllables. In this context we use alternating unimodal to denote alternating auditory and visual syllables that are perceived as separate syllables by adults. Using fNIRS we measured responses over large cortical areas including the inferior frontal and superior temporal regions. We identified channels showing different responses to the bimodal than to the alternating unimodal condition and used multivariate pattern analysis (MVPA) to decode patterns of cortical responses to bimodal (audiovisual) and alternating unimodal (auditory + visual) speech. Results showed that in both age groups, integration elicits cortical responses consistent with both super- and sub-additive responses in the fronto-temporal cortex. The univariate analyses revealed that between 5 and 10 months the spatial distribution of these responses becomes increasingly focal. MVPA correctly classified responses at 5 months, with key input from channels located in the inferior frontal and superior temporal regions of the right hemisphere. However, MVPA classification was not successful at 10 months, suggesting a potential cortical re-organisation of audiovisual speech perception at this age. These results show the complex and non-gradual development of the cortical responses to integration of congruent audiovisual speech in infancy.
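The MVPA decoding step described above can be illustrated with a minimal leave-one-out classifier over multi-channel activation patterns. This is a sketch of the general technique on synthetic data; the nearest-centroid classifier, channel count, and condition labels are assumptions for illustration, not the study's pipeline:

```python
# Minimal MVPA sketch: leave-one-out nearest-centroid classification of
# condition labels from multi-channel response patterns. Synthetic data;
# a stand-in for infant fNIRS decoding pipelines, not this study's code.
import random

random.seed(0)
N_CHANNELS = 20

def make_pattern(mean_shift):
    # One trial: per-channel response with a small condition-specific shift
    return [random.gauss(mean_shift, 1.0) for _ in range(N_CHANNELS)]

# Two conditions, e.g. bimodal vs. alternating-unimodal syllables
trials = [(make_pattern(0.8), "bimodal") for _ in range(15)] + \
         [(make_pattern(-0.8), "unimodal") for _ in range(15)]

def centroid(patterns):
    return [sum(col) / len(col) for col in zip(*patterns)]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def loo_accuracy(trials):
    correct = 0
    for i, (x, label) in enumerate(trials):
        train = trials[:i] + trials[i + 1:]
        cents = {c: centroid([p for p, l in train if l == c])
                 for c in {l for _, l in train}}
        pred = min(cents, key=lambda c: dist2(x, cents[c]))
        correct += (pred == label)
    return correct / len(trials)

print(loo_accuracy(trials))  # well above the 0.5 chance level here
```

In practice, decoding accuracy would be compared against chance with a permutation test; failure to classify (as at 10 months above) means the patterns for the two conditions are not reliably distinguishable.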


Subject(s)
Speech Perception , Visual Perception , Adult , Humans , Infant , Visual Perception/physiology , Speech/physiology , Speech Perception/physiology , Temporal Lobe , Auditory Perception/physiology , Acoustic Stimulation , Photic Stimulation
5.
J Exp Child Psychol ; 217: 105351, 2022 05.
Article in English | MEDLINE | ID: mdl-35093667

ABSTRACT

Infants growing up in an environment where more than one language is spoken tend to reach the typical early milestones of language development. This is an impressive achievement given that they are learning two languages while receiving reduced exposure to each of these languages compared with monolingual infants. This increased variability in their linguistic environment may lead to adjustments in the way bilingual infants process visual and auditory speech. This study aimed to clarify the influence of infant bilingualism on the development of audiovisual speech integration. Using eye tracking and a McGurk paradigm, we studied face scanning patterns when 7- to 10-month-old infants were viewing articulation of audiovisually congruent and incongruent syllables. We found that monolingual infants decreased their attention to the mouth and increased their attention to the eyes of speaking faces when presented with incongruent articulation, which typically leads to the McGurk illusion during adulthood. In bilingual infants, no differences in face scanning patterns were observed between audiovisually congruent and incongruent articulation, suggesting that the increased variability in their speech experience may lead to more tolerance to articulatory inconsistencies. These results suggest that the development of audiovisual speech perception is influenced by infants' language environment.
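Face-scanning analyses of this kind typically quantify attention as the proportion of gaze samples falling within areas of interest (AOIs) such as the eyes and mouth. A minimal sketch, with AOI coordinates and the gaze-sample format assumed for illustration:

```python
# Sketch of an area-of-interest (AOI) analysis: proportion of looking to the
# eyes vs. mouth of a talking face, from eye-tracker gaze samples recorded at
# a fixed sampling rate. AOI rectangles are illustrative assumptions.

EYES_AOI = (200, 150, 440, 260)   # (x_min, y_min, x_max, y_max) in pixels
MOUTH_AOI = (260, 340, 380, 430)

def in_aoi(x, y, aoi):
    x0, y0, x1, y1 = aoi
    return x0 <= x <= x1 and y0 <= y <= y1

def aoi_proportions(samples):
    """samples: list of (x, y) gaze coordinates at a fixed sampling rate.
    Returns (prop_eyes, prop_mouth) relative to all samples."""
    eyes = sum(in_aoi(x, y, EYES_AOI) for x, y in samples)
    mouth = sum(in_aoi(x, y, MOUTH_AOI) for x, y in samples)
    total = len(samples)
    return eyes / total, mouth / total

gaze = [(300, 200)] * 60 + [(320, 400)] * 40  # 60 eye samples, 40 mouth samples
print(aoi_proportions(gaze))  # → (0.6, 0.4)
```

Condition effects like the congruency difference reported above would then be tested by comparing these proportions across congruent and incongruent trials.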


Subject(s)
Illusions , Multilingualism , Speech Perception , Adult , Humans , Infant , Mouth , Visual Perception
6.
Neurobiol Lang (Camb) ; 1(1): 9-32, 2020.
Article in English | MEDLINE | ID: mdl-32274469

ABSTRACT

Recent neuroimaging studies suggest that monolingual infants activate a left-lateralized frontotemporal brain network in response to spoken language, which is similar to the network involved in processing spoken and signed language in adulthood. However, it is unclear how brain activation to language is influenced by early experience in infancy. To address this question, we present functional near-infrared spectroscopy (fNIRS) data from 60 hearing infants (4 to 8 months of age): 19 monolingual infants exposed to English, 20 unimodal bilingual infants exposed to two spoken languages, and 21 bimodal bilingual infants exposed to English and British Sign Language (BSL). Across all infants, spoken language elicited activation in a bilateral brain network including the inferior frontal and posterior temporal areas, whereas sign language elicited activation in the right temporoparietal area. A significant difference in brain lateralization was observed between groups. Activation in the posterior temporal region was not lateralized in monolinguals and bimodal bilinguals, but right lateralized in response to both language modalities in unimodal bilinguals. This suggests that the experience of two spoken languages influences brain activation for sign language when experienced for the first time. Multivariate pattern analyses (MVPAs) could classify distributed patterns of activation within the left hemisphere for spoken and signed language in monolinguals (proportion correct = 0.68; p = 0.039) but not in unimodal or bimodal bilinguals. These results suggest that bilingual experience in infancy influences brain activation for language and that unimodal bilingual experience has greater impact on early brain lateralization than bimodal bilingual experience.

7.
Autism Res ; 12(4): 614-627, 2019 04.
Article in English | MEDLINE | ID: mdl-30801993

ABSTRACT

Autism spectrum disorder (ASD) is a common neurodevelopmental condition, and infant siblings of children with ASD are at a higher risk of developing autistic traits or an ASD diagnosis when compared to those with typically developing siblings. Reports are emerging of differences in brain anatomy and function in high-risk infants that predict later autistic behaviors, but although cerebellar and subcortical brain regions have been frequently implicated in ASD, no high-risk study has examined these regions. Therefore, in this study, we compared regional MRI volumes across the whole brain in 4-6-month-old infants with (high-risk, n = 24) and without (low-risk, n = 26) a sibling with ASD. Within the high-risk group, we also examined whether any regional differences observed were associated with autistic behaviors at 36 months. We found that high-risk infants had significantly larger cerebellar and subcortical volumes at 4-6 months of age, relative to low-risk infants, and that larger volumes in high-risk infants were linked to more repetitive behaviors at 36 months. Our preliminary observations require replication in longitudinal studies of larger samples. If correct, they suggest that early subcortical and cerebellar volumes may be predictive biomarkers for childhood repetitive behaviors. Autism Res 2019, 12: 614-627. © 2019 The Authors. Autism Research published by International Society for Autism Research and Wiley Periodicals, Inc. LAY SUMMARY: Individuals with a family history of autism spectrum disorder (ASD) are at risk of ASD and related developmental difficulties. This study revealed that 4-6-month-old infants at high risk of ASD have larger cerebellar and subcortical volumes than low-risk infants, and that larger volumes in high-risk infants are associated with more repetitive behaviors in childhood.


Subject(s)
Autism Spectrum Disorder/pathology , Brain/diagnostic imaging , Brain/pathology , Genetic Predisposition to Disease , Magnetic Resonance Imaging/methods , Stereotyped Behavior/physiology , Female , Humans , Infant , Male , Organ Size , Prospective Studies , Risk , Siblings
8.
Dev Sci ; 22(1): e12701, 2019 01.
Article in English | MEDLINE | ID: mdl-30014580

ABSTRACT

Infants as young as 2 months can integrate audio and visual aspects of speech articulation. A shift of attention from the eyes towards the mouth of talking faces occurs around 6 months of age in monolingual infants. However, it is unknown whether this pattern of attention during audiovisual speech processing is influenced by speech and language experience in infancy. The present study investigated this question by analysing audiovisual speech processing in three groups of 4- to 8-month-old infants who differed in their language experience: monolinguals, unimodal bilinguals (infants exposed to two or more spoken languages) and bimodal bilinguals (hearing infants with Deaf mothers). Eye-tracking was used to study patterns of face scanning while infants were viewing faces articulating syllables with congruent, incongruent and silent auditory tracks. Monolinguals and unimodal bilinguals increased their attention to the mouth of talking faces between 4 and 8 months, while bimodal bilinguals did not show any age difference in their scanning patterns. Moreover, older (6.6 to 8 months), but not younger, monolinguals (4 to 6.5 months) showed increased visual attention to the mouth of faces articulating audiovisually incongruent rather than congruent faces, indicating surprise or novelty. In contrast, no audiovisual congruency effect was found in unimodal or bimodal bilinguals. Results suggest that speech and language experience influences audiovisual integration in infancy. Specifically, reduced or more variable experience of audiovisual speech from the primary caregiver may lead to less sensitivity to the integration of audio and visual cues of speech articulation.


Subject(s)
Multilingualism , Speech Perception/physiology , Visual Perception , Adult , Attention , Cues , Eye Movements , Face , Female , Humans , Infant , Male , Mouth
9.
Front Psychol ; 9: 1943, 2018.
Article in English | MEDLINE | ID: mdl-30459671

ABSTRACT

Faces capture and maintain infants' attention more than other visual stimuli. The present study addresses the impact of early language experience on attention to faces in infancy. It was hypothesized that infants learning two spoken languages (unimodal bilinguals) and hearing infants of Deaf mothers learning British Sign Language and spoken English (bimodal bilinguals) would show enhanced attention to faces compared to monolinguals. The comparison between unimodal and bimodal bilinguals allowed differentiation of the effects of learning two languages, from the effects of increased visual communication in hearing infants of Deaf mothers. Data are presented for two independent samples of infants: Sample 1 included 49 infants between 7 and 10 months (26 monolinguals and 23 unimodal bilinguals), and Sample 2 included 87 infants between 4 and 8 months (32 monolinguals, 25 unimodal bilinguals, and 30 bimodal bilingual infants with a Deaf mother). Eye-tracking was used to analyze infants' visual scanning of complex arrays including a face and four other stimulus categories. Infants from 4 to 10 months (all groups combined) directed their attention to faces faster than to non-face stimuli (i.e., attention capture), directed more fixations to, and looked longer at faces than non-face stimuli (i.e., attention maintenance). Unimodal bilinguals demonstrated increased attention capture and attention maintenance by faces compared to monolinguals. Contrary to predictions, bimodal bilinguals did not differ from monolinguals in attention capture and maintenance by face stimuli. These results are discussed in relation to the language experience of each group and the close association between face processing and language development in social communication.

10.
J Neurodev Disord ; 7: 33, 2015.
Article in English | MEDLINE | ID: mdl-26451165

ABSTRACT

BACKGROUND: Autism spectrum disorder (ASD) is a common and highly heritable neurodevelopmental disorder that is likely to be the outcome of complex aetiological mechanisms. One strategy to provide insight is to study ASD within tuberous sclerosis complex (TSC), a rare disorder with a high incidence of ASD, but for which the genetic cause is determined. Individuals with ASD consistently demonstrate face processing impairments, but these have not been examined in adults with TSC using event-related potentials (ERPs) that are able to capture distinct temporal stages of processing. METHODS: Adults with TSC (n = 14), 6 of whom had a diagnosis of ASD, and control adults (n = 13) passively viewed upright and inverted human faces with direct or averted gaze, with concurrent EEG recording. Amplitude and latency of the P1 and N170 ERPs were measured. RESULTS: Individuals with TSC + ASD exhibited longer N170 latencies to faces compared to typical adults. Typical adults and adults with TSC-only exhibited longer N170 latency to inverted versus upright faces, whereas individuals with TSC + ASD did not show latency differences according to face orientation. In addition, individuals with TSC + ASD showed increased N170 latency to averted compared to direct gaze, which was not demonstrated in typical adults. A reduced lateralization was shown for the TSC + ASD group on P1 and N170 amplitude. CONCLUSIONS: The findings suggest that individuals with TSC + ASD may have similar electrophysiological abnormalities to idiopathic ASD and are suggestive of developmental delay. Identifying brain-based markers of ASD that are similar in TSC and idiopathic cases is likely to help elucidate the risk pathways to ASD.

11.
Cortex ; 71: 122-33, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26200892

ABSTRACT

Adults diagnosed with autism spectrum disorder (ASD) show a reduced sensitivity (degree of selective response) to social stimuli such as human voices. In order to determine whether this reduced sensitivity is a consequence of years of poor social interaction and communication or is present prior to significant experience, we used functional MRI to examine cortical sensitivity to auditory stimuli in infants at high familial risk for later emerging ASD (HR group, N = 15), and compared this to infants with no family history of ASD (LR group, N = 18). The infants (aged between 4 and 7 months) were presented with voice and environmental sounds while asleep in the scanner and their behaviour was also examined in the context of observed parent-infant interaction. Whereas LR infants showed early specialisation for human voice processing in right temporal and medial frontal regions, the HR infants did not. Similarly, LR infants showed stronger sensitivity than HR infants to sad vocalisations in the right fusiform gyrus and left hippocampus. Also, in the HR group only, there was an association between each infant's degree of engagement during social interaction and the degree of voice sensitivity in key cortical regions. These results suggest that at least some infants at high-risk for ASD have atypical neural responses to human voice with and without emotional valence. Further exploration of the relationship between behaviour during social interaction and voice processing may help better understand the mechanisms that lead to different outcomes in at risk populations.


Subject(s)
Auditory Perception , Autism Spectrum Disorder/psychology , Voice , Acoustic Stimulation , Adult , Brain Mapping , Emotions , Female , Hippocampus/physiopathology , Humans , Infant , Interpersonal Relations , Magnetic Resonance Imaging , Male , Mother-Child Relations , Risk , Sleep , Temporal Lobe/physiopathology
12.
Cereb Cortex ; 25(10): 3261-77, 2015 Oct.
Article in English | MEDLINE | ID: mdl-24907249

ABSTRACT

In adults, patterns of neural activation associated with perhaps the most basic language skill--overt object naming--are extensively modulated by the psycholinguistic and visual complexity of the stimuli. Do children's brains react similarly when confronted with increasing processing demands, or do they solve this problem in a different way? Here we scanned 37 children aged 7-13 and 19 young adults who performed a well-normed picture-naming task with 3 levels of difficulty. While neural organization for naming was largely similar in childhood and adulthood, adults had greater activation in all naming conditions over inferior temporal gyri and superior temporal gyri/supramarginal gyri. Manipulating naming complexity affected adults and children quite differently: neural activation, especially over the dorsolateral prefrontal cortex, showed complexity-dependent increases in adults, but complexity-dependent decreases in children. These represent fundamentally different responses to the linguistic and conceptual challenges of a simple naming task that makes no demands on literacy or metalinguistics. We discuss how these neural differences might result from different cognitive strategies used by adults and children during lexical retrieval/production as well as developmental changes in brain structure and functional connectivity.


Subject(s)
Brain/physiology , Language , Pattern Recognition, Visual/physiology , Psychomotor Performance/physiology , Adolescent , Adult , Brain Mapping , Child , Female , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Temporal Lobe/physiology , Young Adult
13.
Neuropsychologia ; 65: 102-12, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25447067

ABSTRACT

The effects of ear of presentation and competing speech on N400s to spoken words in context were examined in a dichotic sentence priming paradigm. Auditory sentence contexts with a strong or weak semantic bias were presented in isolation to the right or left ear, or with a competing signal presented in the other ear at an SNR of -12 dB. Target words were congruent or incongruent with the sentence meaning. Competing speech attenuated N400s to both congruent and incongruent targets, suggesting that the demand imposed by a competing signal disrupts the engagement of semantic comprehension processes. Bias strength affected N400 amplitudes differentially depending upon ear of presentation: weak contexts presented to the left ear/right hemisphere (le/RH) produced a more negative N400 response to targets than strong contexts, whereas no significant effect of bias strength was observed for sentences presented to the right ear/left hemisphere (re/LH). The results are consistent with a model of semantic processing in which the RH relies on integrative processing strategies in the interpretation of sentence-level meaning.


Subject(s)
Evoked Potentials/physiology , Functional Laterality/physiology , Semantics , Speech Perception/physiology , Adult , Comprehension/physiology , Electroencephalography , Female , Humans , Male , Middle Aged , Young Adult
14.
Dev Sci ; 17(1): 110-24, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24176002

ABSTRACT

Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development.
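ERP amplitude and latency effects of the kind reported here are commonly measured as the extremum of the waveform within a component-specific search window. A minimal sketch (the window bounds, sampling rate, and toy waveform are illustrative assumptions, not the study's values):

```python
# Sketch of ERP component measurement: peak amplitude and latency of an
# auditory N1 (a negative deflection) in a fixed post-stimulus search window.
# Window bounds, sampling rate, and waveform are illustrative assumptions.

def peak_in_window(erp, srate_hz, t0_ms, t1_ms, polarity=-1):
    """Return (latency_ms, amplitude) of the most extreme point of the given
    polarity between t0_ms and t1_ms. `erp` is a list of voltages starting
    at stimulus onset, sampled at srate_hz."""
    i0 = int(t0_ms * srate_hz / 1000)
    i1 = int(t1_ms * srate_hz / 1000)
    window = erp[i0:i1 + 1]
    idx = max(range(len(window)), key=lambda i: polarity * window[i])
    latency_ms = (i0 + idx) * 1000 / srate_hz
    return latency_ms, window[idx]

# Toy waveform at 250 Hz (4 ms per sample): a negative dip around 90 ms
srate = 250
erp = [0.0] * 20 + [-1.0, -2.5, -4.0, -2.0, -0.5] + [1.0] * 50
lat, amp = peak_in_window(erp, srate, 80, 160, polarity=-1)
print(lat, amp)  # → 88.0 -4.0
```

Latency shortening by congruent visual speech, as in the studies above, would appear as a smaller returned latency for audiovisual than auditory-only averages; amplitude attenuation as a less extreme returned amplitude.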


Subject(s)
Auditory Perception/physiology , Evoked Potentials/physiology , Speech Perception/physiology , Visual Perception/physiology , Acoustic Stimulation , Adult , Analysis of Variance , Child , Cues , Female , Humans , Male , Voice , Young Adult
15.
J Speech Lang Hear Res ; 56(6): 1800-12, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23926290

ABSTRACT

PURPOSE: Pronouncing a novel word for the first time requires the transformation of a newly encoded speech signal into a series of coordinated, exquisitely timed oromotor movements. Individual differences in children's ability to repeat novel nonwords are associated with vocabulary development and later literacy. Nonword repetition (NWR) is often used to test clinical populations. While phonological/auditory memory contributions to learning and pronouncing nonwords have been extensively studied, much less is known about the contribution of children's oromotor skills to this process. METHOD: Two independent cohorts of children (7-13 years [N = 40] and 6.9-7.7 years [N = 37]) were tested on a battery of linguistic and nonlinguistic tests, including NWR and oromotor tasks. RESULTS: In both cohorts, individual differences in oromotor control were a significant contributor to NWR abilities; moreover, in an omnibus analysis including experimental and standardized tasks, oromotor control predicted the most unique variance in NWR. CONCLUSION: Results indicate that nonlinguistic oromotor skills contribute to children's NWR ability and suggest that important aspects of language learning and consequent language deficits may be rooted in the ability to perform complex sensorimotor transformations.


Subject(s)
Articulation Disorders/diagnosis , Efferent Pathways/physiology , Language Development Disorders/diagnosis , Mouth/physiology , Speech Production Measurement/methods , Speech/physiology , Adolescent , Articulation Disorders/physiopathology , Child , Female , Humans , Language Development Disorders/physiopathology , Learning/physiology , Lip/innervation , Lip/physiology , Male , Mouth/innervation , Movement/physiology , Phonetics , Predictive Value of Tests , Tongue/innervation , Tongue/physiology
16.
Dev Cogn Neurosci ; 5: 71-85, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23466656

ABSTRACT

Children with autism spectrum disorder (ASD) and attention deficit hyperactivity disorder (ADHD) demonstrate face processing abnormalities that may underlie social impairment. Despite substantial overlap between ASD and ADHD, ERP markers of face and gaze processing have not been directly compared across pure and comorbid cases. Children with ASD (n=19), ADHD (n=18), comorbid ASD+ADHD (n=29) and typically developing (TD) controls (n=26) were presented with upright/inverted faces with direct/averted gaze, with concurrent recording of the P1 and N170 components. While the N170 was predominant in the right hemisphere in TD and ADHD, children with ASD (ASD/ASD+ADHD) showed a bilateral distribution. In addition, children with ASD demonstrated an altered response to gaze direction on P1 latency and no sensitivity to gaze direction on midline-N170 amplitude compared to TD and ADHD. In contrast, children with ADHD (ADHD/ASD+ADHD) exhibited a reduced face inversion effect on P1 latency compared to TD and ASD. These findings suggest that children with ASD have specific abnormalities in gaze processing and altered neural specialisation, whereas children with ADHD show abnormalities at early visual attention stages. Children with ASD+ADHD presented an additive profile, showing the deficits of both disorders. Elucidating the neural basis of the overlap between ASD and ADHD is likely to inform aetiological investigation and clinical assessment.


Subject(s)
Attention Deficit Disorder with Hyperactivity/physiopathology , Attention/physiology , Child Development Disorders, Pervasive/physiopathology , Face , Fixation, Ocular/physiology , Photic Stimulation/methods , Adolescent , Attention Deficit Disorder with Hyperactivity/diagnosis , Attention Deficit Disorder with Hyperactivity/psychology , Child , Child Development Disorders, Pervasive/diagnosis , Child Development Disorders, Pervasive/psychology , Humans , Male , Pattern Recognition, Visual/physiology , Reaction Time/physiology
17.
Curr Biol ; 22(4): 338-42, 2012 Feb 21.
Article in English | MEDLINE | ID: mdl-22285033

ABSTRACT

Autism spectrum disorders (henceforth autism) are diagnosed in around 1% of the population [1]. Familial liability confers risk for a broad spectrum of difficulties including the broader autism phenotype (BAP) [2, 3]. There are currently no reliable predictors of autism in infancy, but characteristic behaviors emerge during the second year, enabling diagnosis after this age [4, 5]. Because indicators of brain functioning may be sensitive predictors, and atypical eye contact is characteristic of the syndrome [6-9] and the BAP [10, 11], we examined whether neural sensitivity to eye gaze during infancy is associated with later autism outcomes [12, 13]. We undertook a prospective longitudinal study of infants with and without familial risk for autism. At 6-10 months, we recorded infants' event-related potentials (ERPs) in response to viewing faces with eye gaze directed toward versus away from the infant [14]. Longitudinal analyses showed that characteristics of ERP components evoked in response to dynamic eye gaze shifts during infancy were associated with autism diagnosed at 36 months. ERP responses to eye gaze may help characterize developmental processes that lead to later emerging autism. Findings also elucidate the mechanisms driving the development of the social brain in infancy.


Subject(s)
Autistic Disorder/physiopathology , Brain/physiopathology , Evoked Potentials, Visual , Fixation, Ocular , Autistic Disorder/diagnosis , Child, Preschool , Female , Humans , Infant , Longitudinal Studies , Male , Phenotype , Prospective Studies , Psychomotor Performance , Siblings
18.
Curr Biol ; 21(14): 1220-4, 2011 Jul 26.
Article in English | MEDLINE | ID: mdl-21723130

ABSTRACT

Human voices play a fundamental role in social communication, and areas of the adult "social brain" show specialization for processing voices and their emotional content (superior temporal sulcus, inferior prefrontal cortex, premotor cortical regions, amygdala, and insula). However, it is unclear when this specialization develops. Functional magnetic resonance imaging (fMRI) studies suggest that the infant temporal cortex does not differentiate speech from music or backward speech, but a prior study with functional near-infrared spectroscopy revealed preferential activation for human voices in 7-month-olds, in a more posterior location of the temporal cortex than in adults. However, the brain networks involved in processing nonspeech human vocalizations in early development are still unknown. To address this issue, in the present fMRI study, 3- to 7-month-olds were presented with adult nonspeech vocalizations (emotionally neutral, emotionally positive, and emotionally negative) and nonvocal environmental sounds. Infants displayed significant differential activation in the anterior portion of the temporal cortex, similarly to adults. Moreover, sad vocalizations modulated the activity of brain regions involved in processing affective stimuli such as the orbitofrontal cortex and insula. These results suggest remarkably early functional specialization for processing human voice and negative emotions.


Subject(s)
Speech Perception , Temporal Lobe/growth & development , Temporal Lobe/physiology , Acoustic Stimulation , Adult , Brain Mapping , Emotions , Female , Functional Laterality , Humans , Image Processing, Computer-Assisted , Infant , Magnetic Resonance Imaging , Male , Radiography , Temporal Lobe/diagnostic imaging , Voice
19.
Prog Brain Res ; 189: 195-207, 2011.
Article in English | MEDLINE | ID: mdl-21489390

ABSTRACT

Characteristic features of autism include atypical social perception and social-communication skills, and atypical visual attention, alongside rigid and repetitive thinking and behavior. Debate has focused on whether the later emergence of atypical social skills is a consequence of attention problems early in life, or, conversely, whether early social deficits have knock-on consequences for the later development of attention skills. We investigated this question based on evidence from infants at familial risk for a later diagnosis of autism by virtue of being younger siblings of children with a diagnosis. Around 9 months, at-risk siblings differed as a group from controls, both in measures of social perception and inhibitory control. We present preliminary data from an ongoing longitudinal research program, suggesting clear associations between some of these infant measures and autism-related characteristics at 3 years. We discuss the findings in terms of the emergent nature of autism as a result of complex developmental interactions among brain networks.


Subject(s)
Attention/physiology , Autistic Disorder/physiopathology , Social Behavior , Evoked Potentials, Visual/physiology , Humans , Infant , Neuropsychological Tests , Social Perception
20.
Front Hum Neurosci ; 5: 6, 2011.
Article in English | MEDLINE | ID: mdl-21283529

ABSTRACT

Previous event-related potentials research has suggested that the N170 component has a larger amplitude to faces and words than to other stimuli, but it remains unclear whether it indexes the same cognitive processes for faces and for words. The present study investigated how category-level repetition effects on the N170 differ across stimulus categories. Faces, cars, words, and non-words were presented in homogeneous (1 category) or mixed blocks (2 intermixed categories). We found a significant repetition effect of N170 amplitude for successively presented faces and cars (in homogeneous blocks), but not for words and unpronounceable consonant strings, suggesting that the N170 indexes different underlying cognitive processes for objects (including faces) and orthographic stimuli. The N170 amplitude was significantly smaller when multiple faces or multiple cars were presented in a row than when these stimuli were preceded by a stimulus of a different category. Moreover, the large N170 repetition effect for faces may be important to consider when comparing the relative N170 amplitude for different stimulus categories. Indeed, a larger N170 deflection for faces than for other stimulus categories was observed only when stimuli were preceded by a stimulus of a different category (in mixed blocks), suggesting that an enhanced N170 to faces may be more reliably observed when faces are presented within the context of some non-face stimuli.
