Results 1 - 9 of 9
1.
Dev Sci; 27(1): e13416, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37255282

ABSTRACT

The hypothesis that impoverished language experience affects complex sentence structure development around the end of early childhood was tested using a fully randomized, sentence-to-picture matching study in American Sign Language (ASL). The participants were ASL signers who had impoverished or typical access to language in early childhood. Deaf signers whose access to language was highly impoverished in early childhood (N = 11) primarily comprehended structures consisting of a single verb and argument (Subject or Object), agreeing verbs, and the spatial relation or path of semantic classifiers. They showed difficulty comprehending more complex sentence structures involving dual lexical arguments or multiple verbs. As predicted, participants with typical language access in early childhood, deaf native signers (N = 17) or hearing second-language learners (N = 10), comprehended the range of 12 ASL sentence structures, independent of the subjective iconicity or frequency of the stimulus lexical items, or length of ASL experience and performance on non-verbal cognitive tasks. The results show that language experience in early childhood is necessary for the development of complex syntax.

RESEARCH HIGHLIGHTS:
Previous research with deaf signers suggests an inflection point around the end of early childhood for sentence structure development.
Deaf signers who experienced impoverished language until the age of 9 or older comprehend several basic sentence structures but few complex structures.
Language experience in early childhood is necessary for the development of complex sentence structure.


Subject(s)
Deafness, Language, Child, Preschool, Humans, Sign Language, Semantics, Hearing
2.
J Cogn Neurosci; 34(2): 224-235, 2022 Jan 05.
Article in English | MEDLINE | ID: mdl-34964898

ABSTRACT

Areas within the left-lateralized neural network for language have been found to be sensitive to syntactic complexity in spoken and written language. Previous research has revealed that these areas are active for sign language as well, but whether they respond to syntactic complexity in sign language independently of lexical processing has yet to be established. To investigate this question, we used fMRI to image deaf native signers' comprehension of 180 sign strings in American Sign Language (ASL) with a picture-probe recognition task. The ASL strings were all six signs in length but varied across three levels of syntactic complexity: sign lists, two-word sentences, and complex sentences. Syntactic complexity significantly affected comprehension and memory, both behaviorally and neurally, by facilitating accuracy and response time on the picture-probe recognition task and eliciting a left-lateralized activation pattern in anterior and posterior superior temporal sulcus (aSTS and pSTS). Minimal or absent syntactic structure reduced picture-probe recognition and elicited activation in bilateral pSTS and occipital-temporal cortex. These results provide evidence from a sign language, ASL, that the combinatorial processing of aSTS and pSTS is supramodal in nature. The results further suggest that the neurolinguistic processing of ASL is characterized by overlapping yet separable neural systems for syntactic and lexical processing.


Subject(s)
Language, Sign Language, Brain Mapping, Comprehension, Humans, Linguistics, Magnetic Resonance Imaging, Temporal Lobe
3.
Cereb Cortex; 24(10): 2772-83, 2014 Oct.
Article in English | MEDLINE | ID: mdl-23696277

ABSTRACT

The relation between the timing of language input and development of neural organization for language processing in adulthood has been difficult to tease apart because language is ubiquitous in the environment of nearly all infants. However, within the congenitally deaf population are individuals who do not experience language until after early childhood. Here, we investigated the neural underpinnings of American Sign Language (ASL) in 2 adolescents who had no sustained language input until they were approximately 14 years old. Using anatomically constrained magnetoencephalography, we found that recently learned signed words mainly activated right superior parietal, anterior occipital, and dorsolateral prefrontal areas in these 2 individuals. This spatiotemporal activity pattern was significantly different from the left fronto-temporal pattern observed in young deaf adults who acquired ASL from birth, and from that of hearing young adults learning ASL as a second language for a similar length of time as the cases. These results provide direct evidence that the timing of language experience over human development affects the organization of neural language processing.


Subject(s)
Cerebral Cortex/physiology, Language Development, Sign Language, Adolescent, Adult, Age Factors, Critical Period (Psychological), Deafness, Female, Functional Laterality, Humans, Learning/physiology, Magnetoencephalography, Male, Semantics, Young Adult
4.
J Neurosci; 32(28): 9700-5, 2012 Jul 11.
Article in English | MEDLINE | ID: mdl-22787055

ABSTRACT

Congenitally deaf individuals receive little or no auditory input, and when raised by deaf parents, they acquire sign as their native and primary language. We asked two questions regarding how the deaf brain in humans adapts to sensory deprivation: (1) is meaning extracted and integrated from signs using the same classical left hemisphere frontotemporal network used for speech in hearing individuals, and (2) in deafness, is superior temporal cortex encompassing primary and secondary auditory regions reorganized to receive and process visual sensory information at short latencies? Using MEG constrained by individual cortical anatomy obtained with MRI, we examined an early time window associated with sensory processing and a late time window associated with lexicosemantic integration. We found that sign in deaf individuals and speech in hearing individuals activate a highly similar left frontotemporal network (including superior temporal regions surrounding auditory cortex) during lexicosemantic processing, but only speech in hearing individuals activates auditory regions during sensory processing. Thus, neural systems dedicated to processing high-level linguistic information are used for processing language regardless of modality or hearing status, and we do not find evidence for rewiring of afferent connections from visual systems to auditory cortex.


Subject(s)
Brain Mapping, Deafness, Functional Laterality/physiology, Semantics, Sign Language, Temporal Lobe/physiopathology, Adolescent, Adult, Deafness/congenital, Deafness/pathology, Deafness/physiopathology, Evoked Potentials/physiology, Female, Humans, Magnetic Fields, Magnetic Resonance Imaging, Magnetoencephalography, Male, Photic Stimulation, Time Factors, Young Adult
5.
J Exp Psychol Learn Mem Cogn; 42(12): 2002-2006, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27929337

ABSTRACT

In this reply to Salverda (2016), we address a critique of the claims made in our recent study of real-time processing of American Sign Language (ASL) signs using a novel visual world eye-tracking paradigm (Lieberman, Borovsky, Hatrak, & Mayberry, 2015). Salverda asserts that our data do not support our conclusion that native signers and late-learning signers show variable patterns of activation in the presence of phonological competitors. We provide a logical rationale for our study design and present a reanalysis of our data using a modified time window, providing additional evidence for our claim. We maintain that target fixation patterns provide an important window into real-time processing of sign language. We conclude that the use of eye-tracking methods to study real-time processing in a visually perceived language such as ASL is a promising avenue for further exploration.


Subject(s)
Sign Language, Time Perception, Humans, Language, Learning, Linguistics, United States
6.
Brain Lang; 95(2): 265-72, 2005 Nov.
Article in English | MEDLINE | ID: mdl-16246734

ABSTRACT

The nature of the representations maintained in verbal working memory is a topic of debate. Some authors argue for a modality-dependent code, tied to particular sensory or motor systems. Others argue for a modality-neutral code. Sign language affords a unique perspective because it factors out the effects of modality. In an fMRI experiment, deaf participants viewed and covertly rehearsed strings of nonsense signs; analyses focused on regions responsive in both the sensory and rehearsal phases. Compared with previous findings in hearing subjects, deaf subjects showed significantly increased involvement of parietal regions. A lesion case study indicates that this network is left-dominant. The findings support the hypothesis that linguistic working memory relies on modality-specific neural systems, although some modality-neutral systems may also be involved.


Subject(s)
Deafness/physiopathology, Magnetic Resonance Imaging, Memory, Short-Term/physiology, Parietal Lobe/physiology, Sign Language, Brain Mapping, Dominance, Cerebral/physiology, Humans, Linguistics
7.
J Exp Psychol Learn Mem Cogn; 41(4): 1130-9, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25528091

ABSTRACT

Sign language comprehension requires visual attention to the linguistic signal and visual attention to referents in the surrounding world, whereas these processes are divided between the auditory and visual modalities in spoken language comprehension. Additionally, the age of onset of first language acquisition and the quality and quantity of linguistic input for deaf individuals are highly heterogeneous, which is rarely the case for hearing learners of spoken languages. Little is known about how these modality and developmental factors affect real-time lexical processing. In this study, we ask how these factors impact real-time recognition of American Sign Language (ASL) signs using a novel adaptation of the visual world paradigm in deaf adults who learned sign from birth (Experiment 1) and in deaf adults who were late learners of ASL (Experiment 2). Results revealed that although both groups of signers demonstrated rapid, incremental processing of ASL signs, only native signers demonstrated early and robust activation of sublexical features of signs during real-time recognition. Our findings suggest that the organization of the mental lexicon into units of both form and meaning is a product of infant language learning and not of the sensory and motor modality through which the linguistic signal is sent and received.


Subject(s)
Language Development, Pattern Recognition, Visual, Recognition, Psychology, Sign Language, Adolescent, Adult, Age Factors, Deafness, Eye Movement Measurements, Eye Movements, Female, Humans, Language Tests, Male, Middle Aged, Time Factors, Young Adult
8.
Lang Learn Dev; 10(1), 2014 Jan 01.
Article in English | MEDLINE | ID: mdl-24363628

ABSTRACT

Joint attention between hearing children and their caregivers is typically achieved when the adult provides spoken, auditory linguistic input that relates to the child's current visual focus of attention. Deaf children interacting through sign language must learn to continually switch visual attention between people and objects in order to achieve the classic joint attention characteristic of young hearing children. The current study investigated the mechanisms used by sign language dyads to achieve joint attention within a single modality. Four deaf children, ages 1;9 to 3;7, were observed during naturalistic interactions with their deaf mothers. The children engaged in frequent and meaningful gaze shifts, and were highly sensitive to a range of maternal cues. Children's control of gaze in this sample was largely developed by age two. The gaze patterns observed in deaf children were not observed in a control group of hearing children, indicating that modality-specific patterns of joint attention behaviors emerge when the language of parent-infant interaction occurs in the visual mode.

9.
Front Hum Neurosci; 7: 322, 2013.
Article in English | MEDLINE | ID: mdl-23847496

ABSTRACT

We combined magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to examine how sensory modality, language type, and language proficiency interact during two fundamental stages of word processing: (1) an early word encoding stage, and (2) a later supramodal lexico-semantic stage. Adult native English speakers who were learning American Sign Language (ASL) performed a semantic task for spoken and written English words, and ASL signs. During the early time window, written words evoked responses in left ventral occipitotemporal cortex, and spoken words in left superior temporal cortex. Signed words evoked activity in right intraparietal sulcus that was marginally greater than for written words. During the later time window, all three types of words showed significant activity in the classical left fronto-temporal language network, the first demonstration of such activity in individuals with so little second-language (L2) instruction in sign. In addition, a dissociation between semantic congruity effects and overall MEG response magnitude for ASL responses suggested shallower and more effortful processing, presumably reflecting novice L2 learning. Consistent with previous research on non-dominant language processing in spoken languages, the L2 ASL learners also showed recruitment of right hemisphere and lateral occipital cortex. These results demonstrate that late lexico-semantic processing utilizes a common substrate, independent of modality, and that proficiency effects in sign language are comparable to those in spoken language.
