1.
Front Psychol ; 13: 896254, 2022.
Article in English | MEDLINE | ID: mdl-35756281

ABSTRACT

In second language research, the concept of cross-linguistic influence or transfer has frequently been used to describe the interaction between the first language (L1) and second language (L2) in the L2 acquisition process. However, less is known about the L2 acquisition of a sign language in general, and specifically about the differences in the acquisition process of L2M2 learners (learners learning a sign language for the first time) and L2M1 learners (signers learning another sign language) from a multimodal perspective. Our study explores the influence of modality knowledge on learning Swedish Sign Language through a descriptive analysis of the sign lexicon in narratives produced by L2M1 and L2M2 learners, respectively. A descriptive mixed-methods framework was used to analyze narratives of adult L2M1 (n = 9) and L2M2 learners (n = 15), with a focus on the sign lexicon, i.e., the use and distribution of sign types such as lexical signs, depicting signs (classifier predicates), fingerspelling, pointing, and gestures. The number and distribution of the signs are then compared between the groups. In addition, a comparison with a control group consisting of L1 signers (n = 9) is provided. The results suggest that L2M2 learners exhibit cross-modal cross-linguistic transfer from Swedish (through higher usage of lexical signs and fingerspelling), while L2M1 learners exhibit same-modal cross-linguistic transfer from their L1 sign languages (through higher usage of depicting signs and use of signs from the L1 sign language and international signs). The study suggests that it is harder for L2M2 learners to acquire the modality-specific lexicon, despite possible underlying gestural knowledge. Furthermore, the study suggests that L2M1 learners' access to modality-specific knowledge, overlapping with gestural knowledge and iconicity, facilitates faster L2 lexical acquisition, which is discussed from the perspective of linguistic relativity (including modality) and its role in sign L2 acquisition.
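As a rough, illustrative sketch of the descriptive comparison outlined above (an assumption, not the authors' code), the following Python snippet tallies the share of each sign type per learner group; the group labels, categories, and annotated narratives are hypothetical.

# Hypothetical sketch: compare the distribution of sign types between groups.
from collections import Counter

# Made-up annotated narratives: one list of sign-type labels per learner.
narratives = {
    "L2M1": [["lexical", "depicting", "pointing", "depicting"],
             ["lexical", "depicting", "gesture"]],
    "L2M2": [["lexical", "lexical", "fingerspelling", "pointing"],
             ["lexical", "fingerspelling", "lexical"]],
}

def type_distribution(group_narratives):
    """Pool sign tokens across a group and return each sign type's share."""
    counts = Counter(token for narrative in group_narratives for token in narrative)
    total = sum(counts.values())
    return {sign_type: n / total for sign_type, n in counts.items()}

for group, narr in narratives.items():
    dist = type_distribution(narr)
    print(group + ": " + ", ".join(f"{t}: {p:.0%}" for t, p in sorted(dist.items())))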

2.
Front Psychol ; 13: 738866, 2022.
Article in English | MEDLINE | ID: mdl-35369269

ABSTRACT

The processing of a language involves a neural language network including temporal, parietal, and frontal cortical regions. This applies to spoken as well as signed languages. Previous research suggests that spoken language proficiency is associated with resting-state functional connectivity (rsFC) between language regions and other regions of the brain. Given the similarities in neural activation for spoken and signed languages, rsFC-behavior associations should also exist for sign language tasks. In this study, we explored the associations between rsFC and two types of linguistic skills in sign language: phonological processing skill and accuracy in elicited sentence production. Fifteen adult, deaf early signers were enrolled in a resting-state functional magnetic resonance imaging (fMRI) study. In addition to fMRI data, behavioral tests of sign language phonological processing and sentence reproduction were administered. Using seed-to-voxel connectivity analysis, we investigated associations between behavioral proficiency and rsFC from language-relevant nodes: bilateral inferior frontal gyrus (IFG) and posterior superior temporal gyrus (STG). Results showed that worse sentence processing skill was associated with stronger positive rsFC between the left IFG and left sensorimotor regions. Further, sign language phonological processing skill was associated with positive rsFC from right IFG to middle frontal gyrus/frontal pole although this association could possibly be explained by domain-general cognitive functions. Our findings suggest a possible connection between rsFC and developmental language outcomes in deaf individuals.
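As a minimal sketch of how a seed-to-voxel connectivity-behavior analysis of this kind can be set up (the published analysis used dedicated neuroimaging software; the array shapes, seed definition, and behavioral scores below are simulated assumptions), in Python:

# Hypothetical sketch of a seed-to-voxel rsFC analysis with a behavioral correlate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_timepoints, n_voxels = 15, 200, 1000

# Simulated data: per-subject voxel time series, a seed time series
# (e.g., mean signal in left IFG), and one behavioral score per subject.
voxel_ts = rng.standard_normal((n_subjects, n_timepoints, n_voxels))
seed_ts = rng.standard_normal((n_subjects, n_timepoints))
behavior = rng.standard_normal(n_subjects)  # e.g., sentence reproduction accuracy

def seed_to_voxel_map(seed, voxels):
    """Correlate the seed with every voxel and Fisher z-transform the map."""
    seed_z = (seed - seed.mean()) / seed.std()
    vox_z = (voxels - voxels.mean(0)) / voxels.std(0)
    r = seed_z @ vox_z / len(seed)   # Pearson r for each voxel
    return np.arctanh(r)             # Fisher z, comparable across subjects

fc_maps = np.array([seed_to_voxel_map(seed_ts[s], voxel_ts[s])
                    for s in range(n_subjects)])

# Association between connectivity and behavior at each voxel, across subjects.
r_behav = np.array([stats.pearsonr(fc_maps[:, v], behavior)[0]
                    for v in range(n_voxels)])
print("strongest rsFC-behavior association:", round(float(r_behav.max()), 3))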

3.
Cereb Cortex ; 31(7): 3165-3176, 2021 06 10.
Article in English | MEDLINE | ID: mdl-33625498

ABSTRACT

Stimulus degradation adds to working memory load during speech processing. We investigated whether this applies to sign processing and, if so, whether the mechanism implicates secondary auditory cortex. We conducted an fMRI experiment where 16 deaf early signers (DES) and 22 hearing non-signers performed a sign-based n-back task with three load levels and stimuli presented at high and low resolution. We found decreased behavioral performance with increasing load and decreasing visual resolution, but the neurobiological mechanisms involved differed between the two manipulations and did so for both groups. Importantly, while the load manipulation was, as predicted, accompanied by activation in the frontoparietal working memory network, the resolution manipulation resulted in temporal and occipital activation. Furthermore, we found evidence of cross-modal reorganization in the secondary auditory cortex: DES had stronger activation and stronger connectivity between this and several other regions. We conclude that load and stimulus resolution have different neural underpinnings in the visual-verbal domain, which has consequences for current working memory models, and that for DES the secondary auditory cortex is involved in the binding of representations when task demands are low.


Subject(s)
Auditory Cortex/diagnostic imaging, Deafness/diagnostic imaging, Magnetic Resonance Imaging/methods, Memory, Short-Term/physiology, Sign Language, Visual Perception, Adult, Auditory Cortex/physiology, Deafness/physiopathology, Female, Humans, Male, Neuronal Plasticity/physiology, Photic Stimulation/methods, Reaction Time/physiology, Visual Perception/physiology, Young Adult
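For illustration only, the snippet below simulates the behavioral side of a 3 (memory load) x 2 (stimulus resolution) design like the one described in the abstract above; the load labels and accuracy values are invented and carry no relation to the reported results.

# Hypothetical sketch of condition means in a 3 x 2 within-subject design.
import numpy as np

rng = np.random.default_rng(1)
loads = ["1-back", "2-back", "3-back"]      # assumed load levels
resolutions = ["high", "low"]
n_subjects = 38  # 16 deaf early signers + 22 hearing non-signers, per the abstract

# Simulated accuracies: performance drops with load and with lower resolution.
base = {"1-back": 0.95, "2-back": 0.88, "3-back": 0.80}
penalty = {"high": 0.00, "low": 0.04}
accuracy = {
    (load, res): base[load] - penalty[res] + rng.normal(0, 0.02, n_subjects)
    for load in loads for res in resolutions
}

# Condition means, plus the main effect of load collapsed over resolution.
for (load, res), acc in accuracy.items():
    print(f"{load}, {res} resolution: mean accuracy = {acc.mean():.2f}")
for load in loads:
    pooled = np.concatenate([accuracy[(load, res)] for res in resolutions])
    print(f"load main effect, {load}: {pooled.mean():.2f}")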
4.
Front Psychol ; 11: 534741, 2020.
Article in English | MEDLINE | ID: mdl-33192776

ABSTRACT

Auditory cortex in congenitally deaf early sign language users reorganizes to support cognitive processing in the visual domain. However, evidence suggests that the potential benefits of this reorganization are largely unrealized. At the same time, there is growing evidence that experience of playing computer and console games improves visual cognition, in particular visuospatial attentional processes. In the present study, we investigated in a group of deaf early signers whether those who reported recently playing computer or console games (deaf gamers) had better visuospatial attentional control than those who reported not playing such games (deaf non-gamers), and whether any such effect was related to cognitive processing in the visual domain. Using a classic test of attentional control, the Eriksen Flanker task, we found that deaf gamers performed on a par with hearing controls, while the performance of deaf non-gamers was poorer. Among hearing controls there was no effect of gaming. This suggests that deaf gamers may have better visuospatial attentional control than deaf non-gamers, probably because they are less susceptible to parafoveal distractions. Future work should examine the robustness of this potential gaming benefit and whether it is associated with neural plasticity in early deaf signers, as well as whether gaming intervention can improve visuospatial cognition in deaf people.
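A hypothetical sketch of how the flanker effect (the attentional-control measure behind the comparison above) could be summarised per group; the group names follow the abstract, but the reaction times are simulated, not the study's data.

# Hypothetical sketch: flanker effect = mean RT cost of incongruent vs. congruent trials.
import numpy as np

rng = np.random.default_rng(2)

def flanker_effect(rt_congruent, rt_incongruent):
    """Mean reaction-time cost (ms) of incongruent flankers relative to congruent ones."""
    return np.mean(rt_incongruent) - np.mean(rt_congruent)

groups = {
    "deaf gamers":      (rng.normal(450, 40, 20), rng.normal(490, 45, 20)),
    "deaf non-gamers":  (rng.normal(460, 40, 20), rng.normal(530, 50, 20)),
    "hearing controls": (rng.normal(455, 40, 20), rng.normal(495, 45, 20)),
}
for name, (congruent, incongruent) in groups.items():
    print(f"{name}: flanker effect = {flanker_effect(congruent, incongruent):.0f} ms")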

5.
Front Psychol ; 10: 2463, 2019.
Article in English | MEDLINE | ID: mdl-31780988

ABSTRACT

What do spelling errors look like in children with sign language knowledge but with varied hearing backgrounds, and what strategies do these children rely on when they learn how to spell in written language? Earlier research suggests that the spelling of children with hearing loss differs because their lack of hearing requires them to rely on other strategies. In this study, we examine whether, and how, variables such as degree of hearing, sign language knowledge, and bilingualism may affect the spelling strategies of children with knowledge of Swedish Sign Language, Svenskt teckenspråk (STS), and whether these variables are mirrored in these children's spelling. The spelling process of nineteen children with STS knowledge (mean age: 10.9), with different degrees of hearing and born into deaf families, is described and compared with that of a group of fourteen hearing children without STS knowledge (mean age: 10.9). Keystroke logging was used to investigate the participants' writing process. The children's spelling behavior was further analyzed and categorized into different spelling error categories. The results indicate that many children showed exceptionally few spelling errors compared to earlier studies, which may derive from their early exposure to STS, enabling them to use a fingerspelling strategy. All of the children also demonstrated similar typing skills. The deaf children showed a tendency to rely on a visual strategy during spelling, which may result in incorrect but visually similar words, i.e., a type of spelling error not found in texts by hearing children with STS knowledge. The deaf children also showed direct transfer from STS in their spelling. Hard-of-hearing children and hearing children of deaf adults (CODAs), both with STS knowledge, used a sounding strategy rather than a visual strategy. Overall, this study suggests that the ability to hear and the ability to use sign language, together and separately, play a significant role in the spelling patterns and spelling strategies used by children with and without hearing loss.
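As a purely illustrative sketch (the authors' actual coding scheme for the keystroke logs is not reproduced here), spelling errors could be tallied by category and group as follows; all labels and counts are hypothetical.

# Hypothetical sketch: count coded spelling errors per group and category.
from collections import Counter

# Made-up coded errors: (group, error_category) pairs derived from keystroke logs.
coded_errors = [
    ("deaf", "visually_similar"), ("deaf", "visually_similar"),
    ("deaf", "STS_transfer"), ("hard_of_hearing", "sound_based"),
    ("CODA", "sound_based"), ("hearing_control", "sound_based"),
]

by_group = {}
for group, category in coded_errors:
    by_group.setdefault(group, Counter())[category] += 1

for group, counts in by_group.items():
    total = sum(counts.values())
    shares = ", ".join(f"{cat}: {n}/{total}" for cat, n in counts.items())
    print(f"{group}: {shares}")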
