ABSTRACT
The hypothesis that impoverished language experience affects complex sentence structure development around the end of early childhood was tested using a fully randomized, sentence-to-picture matching study in American Sign Language (ASL). The participants were ASL signers who had impoverished or typical access to language in early childhood. Deaf signers whose access to language was highly impoverished in early childhood (N = 11) primarily comprehended structures consisting of a single verb and argument (Subject or Object), agreeing verbs, and the spatial relation or path of semantic classifiers. They showed difficulty comprehending more complex sentence structures involving dual lexical arguments or multiple verbs. As predicted, participants with typical language access in early childhood, deaf native signers (N = 17) or hearing second-language learners (N = 10), comprehended the range of 12 ASL sentence structures, independent of the subjective iconicity or frequency of the stimulus lexical items, or length of ASL experience and performance on non-verbal cognitive tasks. The results show that language experience in early childhood is necessary for the development of complex syntax.
RESEARCH HIGHLIGHTS:
Previous research with deaf signers suggests an inflection point around the end of early childhood for sentence structure development.
Deaf signers who experienced impoverished language until the age of 9 or older comprehend several basic sentence structures but few complex structures.
Language experience in early childhood is necessary for the development of complex sentence structure.
Subjects
Deafness, Language, Preschool Child, Humans, Sign Language, Semantics, Hearing
ABSTRACT
Sign language is a natural communication method used to convey messages within the deaf community. In the study of sign language recognition through wearable sensors, data sources are limited and the data acquisition process is complex. This research aims to collect an American Sign Language dataset with a wearable inertial motion capture system and to realize the recognition and end-to-end translation of sign language sentences with deep learning models. In this work, a dataset consisting of 300 commonly used sentences is gathered from 3 volunteers. The recognition network mainly consists of three components: a convolutional neural network, a bidirectional long short-term memory network, and a connectionist temporal classification layer. The model achieves accuracy rates of 99.07% in word-level evaluation and 97.34% in sentence-level evaluation. The translation network is an encoder-decoder model based mainly on long short-term memory with global attention. The word error rate of end-to-end translation is 16.63%. The proposed method has the potential to recognize more sign language sentences with reliable inertial data from the device.
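A minimal sketch of the recognition pipeline described above (convolutional feature extractor, bidirectional LSTM, and CTC-ready outputs), written in PyTorch. The number of sensor channels, layer sizes, and vocabulary size are illustrative assumptions, not values reported in the study.

```python
import torch
import torch.nn as nn

class CnnBiLstmCtc(nn.Module):
    def __init__(self, n_channels=36, n_classes=301, hidden=128):
        super().__init__()
        # 1-D convolution over the time axis of the inertial signal
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)  # index 0 reserved for the CTC blank

    def forward(self, x):                      # x: (batch, time, channels)
        x = self.conv(x.transpose(1, 2))       # -> (batch, 64, time/2)
        x, _ = self.lstm(x.transpose(1, 2))    # -> (batch, time/2, 2*hidden)
        return self.fc(x).log_softmax(dim=-1)  # per-frame log-probabilities

model = CnnBiLstmCtc()
frames = torch.randn(4, 100, 36)   # 4 toy sequences of 100 sensor frames
log_probs = model(frames)          # (4, 50, 301), suitable as input to nn.CTCLoss
```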
Subjects
Sign Language, Wearable Electronic Devices, Humans, United States, Motion Capture, Neurons, Perception
ABSTRACT
School-based programs are an important tobacco prevention tool. Yet, existing programs are not suitable for Deaf and Hard-of-Hearing (DHH) youth. Moreover, little research has examined the use of the full range of tobacco products and related knowledge in this group. To address this gap and inform development of a school-based tobacco prevention program for this population, we conducted a pilot study among DHH middle school (MS) and high school (HS) students attending Schools for the Deaf and mainstream schools in California (n = 114). Surveys administered in American Sign Language (ASL), before and after receipt of a draft curriculum delivered by health or physical education teachers, assessed product use and tobacco knowledge. Thirty-five percent of students reported exposure to tobacco products at home, including cigarettes (19%) and e-cigarettes (15%). Tobacco knowledge at baseline was limited; 35% of students knew e-cigarettes contain nicotine, and 56% were aware vaping is prohibited on school grounds. Current product use was reported by 16% of students, most commonly e-cigarettes (12%) and cigarettes (10%); overall, 7% of students reported dual use. Use was greater among HS versus MS students. Changes in student knowledge following program delivery included increased understanding of harmful chemicals in tobacco products, including nicotine in e-cigarettes. Post-program debriefings with teachers yielded specific recommendations for modifications to better meet the educational needs of DHH students. Findings based on student and teacher feedback will guide curriculum development and inform next steps in our program of research aimed at preventing tobacco use in this vulnerable and heretofore understudied population group.
Subjects
Electronic Nicotine Delivery Systems, Persons With Hearing Impairments, Tobacco Products, Humans, Adolescent, Smoking/epidemiology, Nicotine, Pilot Projects
ABSTRACT
This case study describes the use of a syntax intervention with two deaf children who did not acquire a complete first language (L1) from birth. It looks specifically at their ability to produce subject-verb-object (SVO) sentence structure in American Sign Language (ASL) after receiving intervention. This was an exploratory case study in which investigators utilized an intervention that contained visuals to help teach SVO word order to young deaf children. Baseline data were collected over three sessions before implementation of a targeted syntax intervention and two follow-up sessions over 3-4 weeks. Both participants demonstrated improvements in their ability to produce SVO structure in ASL in 6-10 sessions. Visual analysis revealed a positive therapeutic trend that was maintained in follow-up sessions. These data provide preliminary evidence that a targeted intervention may help young deaf children with an incomplete L1 learn to produce basic word order in ASL. Results from this case study can help inform the practice of professionals working with signing deaf children who did not acquire a complete L1 from birth (e.g., speech-language pathologists, deaf mentors/coaches, ASL specialists, etc.). Future research should investigate the use of this intervention with a larger sample of deaf children.
Subjects
Language, Sign Language, Child, Humans, United States, Language Development, Learning
ABSTRACT
BACKGROUND: Deaf individuals who communicate using American Sign Language (ASL) seem to experience a range of disparities in health care, but there are few empirical data. OBJECTIVE: To examine the provision of common care practices in the emergency department (ED) to this population. METHODS: ED visits in 2018 at a U.S. academic medical center were assessed retrospectively in Deaf adults who primarily use ASL (n = 257) and hearing individuals who primarily use English, selected at random (n = 429). Logistic regression analyses adjusted for confounders compared the groups on the provision or nonprovision of four routine ED care practices (i.e., laboratories ordered, medications ordered, images ordered, placement of peripheral intravenous line [PIV]) and on ED disposition (admitted to hospital or not admitted). RESULTS: The ED encounters with Deaf ASL users were less likely to include laboratory tests being ordered: adjusted odds ratio 0.68 and 95% confidence interval 0.47-0.97. ED encounters with Deaf individuals were also less likely to include PIV placement, less likely to result in images being ordered in the ED care of ASL users of high acuity compared with English users of high acuity (but not low acuity), and less likely to result in hospital admission. CONCLUSION: Results suggest disparate provision of several types of routine ED care for adult Deaf ASL users. Limitations include the observational study design at a single site and reliance on the medical record, underscoring the need for further research into the potential reasons for disparate ED care of Deaf individuals.
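For readers unfamiliar with how such confounder-adjusted comparisons are computed, a hedged Python sketch using statsmodels follows. The data frame, column names, and covariates are hypothetical placeholders, not the study's actual variables or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic visit-level data standing in for the ED encounter records.
rng = np.random.default_rng(0)
n = 686
visits = pd.DataFrame({
    "labs_ordered": rng.integers(0, 2, n),  # 1 = labs ordered during the visit
    "deaf_asl": rng.integers(0, 2, n),      # 1 = Deaf ASL user, 0 = hearing English user
    "age": rng.integers(18, 90, n),         # example confounder
    "high_acuity": rng.integers(0, 2, n),   # example confounder (triage acuity)
})

# Logistic regression adjusted for the (hypothetical) confounders.
model = smf.logit("labs_ordered ~ deaf_asl + age + high_acuity", data=visits).fit()
odds_ratios = np.exp(model.params)          # adjusted odds ratios
conf_int = np.exp(model.conf_int())         # 95% confidence intervals
print(odds_ratios["deaf_asl"], conf_int.loc["deaf_asl"].tolist())
```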
Subjects
Emergency Medical Services, Sign Language, Adult, Humans, United States, Retrospective Studies, Emergency Treatment, Hospital Emergency Service
ABSTRACT
Historically, individuals with hearing impairments have faced neglect, lacking the necessary tools to facilitate effective communication. However, advancements in modern technology have paved the way for the development of various tools and software aimed at improving the quality of life for hearing-disabled individuals. This research paper presents a comprehensive study employing five distinct deep learning models to recognize hand gestures for the American Sign Language (ASL) alphabet. The primary objective of this study was to leverage contemporary technology to bridge the communication gap between hearing-impaired individuals and individuals with no hearing impairment. The models utilized in this research (AlexNet, ConvNeXt, EfficientNet, ResNet-50, and Vision Transformer) were trained and tested using an extensive dataset comprising over 87,000 images of ASL alphabet hand gestures. Numerous experiments were conducted, involving modifications to the architectural design parameters of the models to obtain maximum recognition accuracy. The experimental results of our study revealed that ResNet-50 achieved an exceptional accuracy rate of 99.98%, the highest among all models. EfficientNet attained an accuracy rate of 99.95%, ConvNeXt achieved 99.51% accuracy, AlexNet attained 99.50% accuracy, while Vision Transformer yielded the lowest accuracy of 88.59%.
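As an illustration of the transfer-learning setup such comparisons typically rely on, here is a hedged PyTorch/torchvision sketch of adapting a pretrained ResNet-50 to the ASL alphabet classes. The class count, input size, and optimizer settings are assumptions for illustration, not the paper's reported configuration.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

n_classes = 29  # 26 letters plus extra classes such as space/delete/nothing (assumed)

# Pretrained backbone with its classification head replaced for ASL letters.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, n_classes)

# Standard ImageNet-style preprocessing for the gesture images.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```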
Subjects
Deep Learning, Sign Language, Humans, United States, Quality of Life, Gestures, Technology
ABSTRACT
Word learning in young children requires coordinated attention between language input and the referent object. Current accounts of word learning are based on spoken language, where the association between language and objects occurs through simultaneous and multimodal perception. In contrast, deaf children acquiring American Sign Language (ASL) perceive both linguistic and non-linguistic information through the visual mode. In order to coordinate attention to language input and its referents, deaf children must allocate visual attention optimally between objects and signs. We conducted two eye-tracking experiments to investigate how young deaf children allocate attention and process referential cues in order to fast-map novel signs to novel objects. Participants were deaf children learning ASL between the ages of 17 and 71 months. In Experiment 1, participants (n = 30) were presented with a novel object and a novel sign, along with a referential cue that occurred either before or after the sign label. In Experiment 2, a new group of participants (n = 32) were presented with two novel objects and a novel sign, so that the referential cue was critical for identifying the target object. Across both experiments, participants showed evidence for fast-mapping the signs regardless of the timing of the referential cue. Individual differences in children's allocation of attention during exposure were correlated with their ability to fast-map the novel signs at test. This study provides first evidence for fast-mapping in sign language, and contributes to theoretical accounts of how word learning develops when all input occurs in the visual modality.
Subjects
Learning, Sign Language, Child, Preschool Child, Humans, Infant, Language Development, Linguistics, Verbal Learning
ABSTRACT
Objective: We sought to identify current Emergency Medical Services (EMS) practitioner comfort levels and communication strategies when caring for the Deaf American Sign Language (ASL) user. Additionally, we created and evaluated the effect of an educational intervention and visual communication tool on EMS practitioner comfort levels and communication. Methods: This was a descriptive study assessing communication barriers at baseline and after the implementation of a novel educational intervention with cross-sectional surveys conducted at three time points (pre-, immediate-post, and three months post-intervention). Descriptive statistics characterized the study sample and we quantified responses from the baseline survey and both post-intervention surveys. Results: There were 148 EMS practitioners who responded to the baseline survey. The majority of participants (74%; 109/148) previously responded to a 9-1-1 call for a Deaf patient and 24% (35/148) reported previous training regarding the Deaf community. The majority felt that important details were lost during communication (83%; 90/109), reported that the Deaf patient appeared frustrated during an encounter (72%; 78/109), and felt that communication limited patient care (67%; 73/109). When interacting with a Deaf person, the most common communication strategies included written text (90%; 98/109), friend/family member (90%; 98/109), lip reading (55%; 60/109), and spoken English (50%; 55/109). Immediately after the training, most participants reported that the educational training expanded their knowledge of Deaf culture (93%; 126/135), communication strategies to use (93%; 125/135), and common pitfalls to avoid (96%; 129/135) when caring for Deaf patients. At 3 months, all participants (100%, 79/79) reported that the educational module was helpful. Some participants (19%, 15/79) also reported using the communication tool with other non-English speaking patients. Conclusions: The majority of EMS practitioners reported difficulty communicating with Deaf ASL users and acknowledged a sense of patient frustration. Nearly all participants felt the educational training was beneficial and clinically relevant; three months later, all participants found it to still be helpful. Additionally, the communication tool may be applicable to other populations that use English as a second language.
Subjects
Emergency Medical Services, Sign Language, Communication, Communication Barriers, Cross-Sectional Studies, Humans
ABSTRACT
Complex hand gesture interactions among dynamic sign words may lead to misclassification, which affects the recognition accuracy of a ubiquitous sign language recognition system. This paper proposes to augment the feature vector of dynamic sign words with knowledge of hand dynamics as a proxy and to classify dynamic sign words using motion patterns based on the extracted feature vector. Some double-hand dynamic sign words have ambiguous or similar features along the hand motion trajectory, which leads to classification errors. Thus, the similar/ambiguous hand motion trajectory is determined based on the approximation of a probability density function over a time frame. Then, the extracted features are enhanced by transformation using maximal information correlation. These enhanced features of 3D skeletal videos captured by a Leap Motion controller are fed as a state transition pattern to a classifier for sign word classification. To evaluate the performance of the proposed method, an experiment is performed with 10 participants on 40 double-hand dynamic ASL words, which reveals 97.98% accuracy. The method is further evaluated on the challenging ASL, SHREC, and LMDHG data sets and outperforms conventional methods by 1.47%, 1.56%, and 0.37%, respectively.
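One way to make the "probability density over a time frame" idea concrete is sketched below: approximate a density over each word's hand-position samples and flag word pairs whose densities overlap heavily as similar/ambiguous. The data shapes, the Gaussian KDE, and the overlap measure are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.stats import gaussian_kde

def trajectory_density(samples):
    """samples: (3, n_frames) array of palm x/y/z positions within a time frame."""
    return gaussian_kde(samples)

def overlap_score(kde_a, kde_b, points):
    """Crude overlap estimate: pointwise minimum of the two densities relative to density A."""
    pa, pb = kde_a(points), kde_b(points)
    return float(np.minimum(pa, pb).sum() / pa.sum())

# Toy example with synthetic trajectories for two hypothetical sign words.
rng = np.random.default_rng(1)
word_a = rng.normal(0.0, 1.0, size=(3, 200))
word_b = rng.normal(0.3, 1.0, size=(3, 200))
points = rng.normal(0.0, 1.5, size=(3, 500))  # shared evaluation points
kde_a, kde_b = trajectory_density(word_a), trajectory_density(word_b)
print("overlap:", overlap_score(kde_a, kde_b, points))  # high overlap => ambiguous pair
```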
Subjects
Automated Pattern Recognition, Sign Language, Algorithms, Gestures, Hands, Humans, Motion (Physics), Automated Pattern Recognition/methods, Psychological Recognition
ABSTRACT
Most existing methods focus mainly on the extraction of shape-based, rotation-based, and motion-based features, usually neglecting the relationship between the hands and body parts, which can provide significant information for addressing the problem of similar sign words based on the backhand approach. Therefore, this paper proposes four feature-based models. The first model consists of the spatial-temporal body-part and hand relationship patterns, which are the main feature. The second model consists of the spatial-temporal finger joint angle patterns. The third model consists of the spatial-temporal 3D hand motion trajectory patterns. The fourth model consists of the spatial-temporal double-hand relationship patterns. Then, a two-layer bidirectional long short-term memory network is used as a classifier to deal with time-independent data. The performance of the method was evaluated and compared with existing works using 26 ASL letters, with an accuracy and F1-score of 97.34% and 97.36%, respectively. The method was further evaluated using 40 double-hand ASL words and achieved an accuracy and F1-score of 98.52% and 98.54%, respectively. The results demonstrated that the proposed method outperformed the existing works under consideration. However, in the analysis of 72 new ASL words, including single- and double-hand words from 10 participants, the accuracy and F1-score were approximately 96.99% and 97.00%, respectively.
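A minimal sketch of the classifier stage described above: a two-layer bidirectional LSTM over per-frame feature vectors (for example, body-part/hand relationship patterns), followed by a linear layer over sign classes. The feature dimensionality and class count are assumptions chosen only for illustration.

```python
import torch
import torch.nn as nn

class TwoLayerBiLstmClassifier(nn.Module):
    def __init__(self, feat_dim=60, hidden=128, n_classes=40):
        super().__init__()
        # Two stacked bidirectional LSTM layers over the frame sequence.
        self.lstm = nn.LSTM(feat_dim, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):               # x: (batch, n_frames, feat_dim)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])    # classify from the final time step

logits = TwoLayerBiLstmClassifier()(torch.randn(8, 50, 60))
print(logits.shape)                     # torch.Size([8, 40])
```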
Subjects
Human Body, Sign Language, Hands, Humans, Motion (Physics), United States
ABSTRACT
Deaf people who use American Sign Language (ASL) are more likely to use the emergency department (ED) than their hearing English-speaking counterparts and are also at higher risk of receiving inaccessible communication. The purpose of this study is to explore the ED communication experience of Deaf patients. A descriptive qualitative study was performed by interviewing 11 Deaf people who had used the ED in the past 2 years. Applying a descriptive thematic analysis, we developed five themes: (1) requesting communication access can be stressful, frustrating, and time-consuming; (2) perspectives and experiences with Video Remote Interpreting (VRI); (3) expectations, benefits, and drawbacks of using on-site ASL interpreters; (4) written and oral communication provides insufficient information to Deaf patients; and (5) ED staff and providers lack cultural sensitivity and awareness towards Deaf patients. Findings are discussed with respect to medical and interpreting ethics to improve ED communication for Deaf patients.
Subjects
Deafness, Persons With Hearing Impairments, Communication, Hospital Emergency Service, Humans, Sign Language, United States
ABSTRACT
Picture-naming tasks provide critical data for theories of lexical representation and retrieval and have been performed successfully in sign languages. However, the specific influences of lexical or phonological factors and stimulus properties on sign retrieval are poorly understood. To examine lexical retrieval in American Sign Language (ASL), we conducted a timed picture-naming study using 524 pictures (272 objects and 251 actions). We also compared ASL naming with previous data for spoken English for a subset of 425 pictures. Deaf ASL signers named object pictures faster and more consistently than action pictures, as previously reported for English speakers. Lexical frequency, iconicity, better name agreement, and lower phonological complexity each facilitated naming reaction times (RTs). RTs were also faster for pictures named with shorter signs (measured by average response duration). Target name agreement was higher for pictures with more iconic and shorter ASL names. The visual complexity of pictures slowed RTs and decreased target name agreement. RTs and target name agreement were correlated for ASL and English, but agreement was lower for ASL, possibly due to the English bias of the pictures. RTs were faster for ASL, which we attributed to a smaller lexicon. Overall, the results suggest that models of lexical retrieval developed for spoken languages can be adopted for signed languages, with the exception that iconicity should be included as a factor. The open-source picture-naming data set for ASL serves as an important, first-of-its-kind resource for researchers, educators, or clinicians for a variety of research, instructional, or assessment purposes.
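A hedged sketch of the kind of item-level regression such analyses rely on, relating naming RTs to the lexical and stimulus predictors mentioned above. The data frame and its columns are synthetic placeholders, not the released data set.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic item-level data standing in for the 524 pictures.
rng = np.random.default_rng(2)
n_items = 524
items = pd.DataFrame({
    "rt_ms": rng.normal(1100, 200, n_items),          # mean naming latency per picture
    "log_frequency": rng.normal(0, 1, n_items),
    "iconicity": rng.uniform(1, 7, n_items),
    "name_agreement": rng.uniform(0.3, 1.0, n_items),
    "phon_complexity": rng.integers(1, 6, n_items),
    "visual_complexity": rng.normal(0, 1, n_items),
})

# Ordinary least squares over the item-level predictors.
fit = smf.ols("rt_ms ~ log_frequency + iconicity + name_agreement"
              " + phon_complexity + visual_complexity", data=items).fit()
print(fit.summary().tables[1])
```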
Subjects
Names, Sign Language, Humans, Linguistics, Language, Reaction Time/physiology
ABSTRACT
Limited language experience in childhood is common among deaf individuals, which prior research has shown to lead to low levels of language processing. Although basic structures such as word order have been found to be resilient to conditions of sparse language input in early life, whether they are robust to conditions of extreme language delay is unknown. The sentence comprehension strategies of post-childhood, first-language (L1) learners of American Sign Language (ASL) with at least 9 years of language experience were investigated, in comparison to two control groups of learners with full access to language from birth (deaf native signers and hearing L2 learners who were native English speakers). The results of a sentence-to-picture matching experiment show that event knowledge overrides word order for post-childhood L1 learners, regardless of the animacy of the subject, while both deaf native signers and hearing L2 signers consistently rely on word order to comprehend sentences. Language inaccessibility throughout early childhood impedes the acquisition of even basic word order. Similar to the strategies used by very young children prior to the development of basic sentence structure, post-childhood L1 learners rely more on context and event knowledge to comprehend sentences. Language experience during childhood is critical to the development of basic sentence structure.
Subjects
Comprehension, Language Development Disorders, Preschool Child, Humans, Language, Learning, Sign Language
ABSTRACT
Sign language enables the deaf and hard of hearing community to convey messages and connect with society. Sign language recognition has been an important domain of research for a long time. Previously, sensor-based approaches have obtained higher accuracy than vision-based approaches. Due to the cost-effectiveness of vision-based approaches, research has also been conducted in this area despite the drop in accuracy. The purpose of this research is to recognize American Sign Language characters using hand images obtained from a web camera. In this work, the MediaPipe Hands algorithm was used to estimate hand joints from RGB images of hands obtained from a web camera, and two types of features were generated from the estimated joint coordinates for classification: the distances between the joint points, and the angles between joint vectors and the 3D axes. The classifiers used to classify the characters were a support vector machine (SVM) and a light gradient boosting machine (GBM). Three character datasets were used for recognition: the ASL Alphabet dataset, the Massey dataset, and the Finger Spelling A dataset. The results obtained were 99.39% for the Massey dataset, 87.60% for the ASL Alphabet dataset, and 98.45% for the Finger Spelling A dataset. The proposed design for automatic American Sign Language recognition is cost-effective, computationally inexpensive, does not require any special sensors or devices, and has outperformed previous studies.
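A hedged sketch of the feature construction described above: given the 21 hand-joint coordinates estimated by MediaPipe Hands, build pairwise joint distances plus angles between joint-to-joint vectors and the 3D axes, and train an SVM (a LightGBM classifier could be substituted in the same way). Landmark extraction from images is omitted; the `landmarks` array and the toy training data are placeholders.

```python
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

def hand_features(landmarks):
    """landmarks: (21, 3) array of x/y/z joint coordinates for one hand."""
    dists, angles = [], []
    for i, j in combinations(range(21), 2):
        v = landmarks[j] - landmarks[i]
        dists.append(np.linalg.norm(v))                       # joint-to-joint distance
        unit = v / (np.linalg.norm(v) + 1e-8)
        angles.extend(np.arccos(np.clip(unit, -1.0, 1.0)))    # angles to the x, y, z axes
    return np.array(dists + angles)

# Toy training run on random "hands"; real features would come from camera images.
rng = np.random.default_rng(3)
X = np.stack([hand_features(rng.random((21, 3))) for _ in range(100)])
y = rng.integers(0, 26, 100)                                   # 26 letter classes
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:5]))
```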
Subjects
Hands, Sign Language, Algorithms, Fingers, Humans, Psychological Recognition, United States
ABSTRACT
Implicit causality (IC) biases, the tendency of certain verbs to elicit re-mention of either the first-mentioned noun phrase (NP1) or the second-mentioned noun phrase (NP2) from the previous clause, are important in psycholinguistic research. Understanding IC verbs and the source of their biases in signed as well as spoken languages helps elucidate whether these phenomena are language general or specific to the spoken modality. As the first of its kind, this study investigates IC biases in American Sign Language (ASL) and provides IC bias norms for over 200 verbs, facilitating future psycholinguistic studies of ASL and comparisons of spoken versus signed languages. We investigated whether native ASL signers continued sentences with IC verbs (e.g., ASL equivalents of 'Lisa annoys Maya because ') by mentioning NP1 (i.e., Lisa) or NP2 (i.e., Maya). We found a tendency towards more NP2-biased verbs. Previous work has found that a verb's thematic roles predict bias direction: stimulus-experiencer verbs (e.g., 'annoy'), where the first argument is the stimulus (causing annoyance) and the second argument is the experiencer (experiencing annoyance), elicit more NP1 continuations. Verbs with experiencer-stimulus thematic roles (e.g., 'love') elicit more NP2 continuations. We probed whether the trend towards more NP2-biased verbs was related to an existing claim that stimulus-experiencer verbs do not exist in sign languages. We found that stimulus-experiencer structure, while permitted, is infrequent, impacting the IC bias distribution in ASL. Nevertheless, thematic roles predict IC bias in ASL, suggesting that the thematic role-IC bias relationship is stable across languages as well as modalities.
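To make the norming procedure concrete, a small hedged sketch follows showing how a per-verb IC bias can be computed from coded continuations; the toy data are invented for illustration.

```python
import pandas as pd

# Each row is one coded continuation: which noun phrase the participant re-mentioned.
coded = pd.DataFrame({
    "verb": ["ANNOY", "ANNOY", "ANNOY", "LOVE", "LOVE", "LOVE"],
    "continuation": ["NP1", "NP1", "NP2", "NP2", "NP2", "NP1"],
})

# Per-verb proportion of NP1 continuations.
bias = (coded.assign(np1=(coded["continuation"] == "NP1").astype(int))
             .groupby("verb")["np1"].mean())
print(bias)  # values above 0.5 indicate an NP1 bias, below 0.5 an NP2 bias
```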
Subjects
Language, Sign Language, Dissent and Disputes, Humans, Prejudice, Psycholinguistics, United States
ABSTRACT
Despite advances in hearing technology, a growing body of research, and early intervention protocols, deaf children largely fail to meet age-based language milestones. This gap in language acquisition points to the inconsistencies that exist between research and practice. Current research suggests that bimodal bilingual early interventions at deaf identification provide children with language foundations that can lead to more effective outcomes. Recommendations that support implementing bimodal bilingualism at deaf identification include early intervention protocols, language foundations, and the development of appropriate bimodal bilingual environments. All recommendations serve as multifaceted tools in a deaf child's repertoire as language and modality preferences develop and solidify. This versatile approach allows children to determine their own language and communication preferences.
Subjects
Early Educational Intervention/methods, Language Development, Multilingualism, Persons With Hearing Impairments/rehabilitation, Teaching/trends, Child, Early Educational Intervention/trends, Humans, Persons With Hearing Impairments/statistics & numerical data
ABSTRACT
Previous studies suggest that age of acquisition affects the outcomes of learning, especially at the morphosyntactic level. Unknown is how syntactic development is affected by increased cognitive maturity and delayed language onset. The current paper studied the early syntactic development of adolescent first language learners by examining word order patterns in American Sign Language (ASL). ASL uses a basic Subject-Verb-Object order, but also employs multiple word order variations. Child learners produce variable word order at the initial stage of acquisition, but later primarily produce canonical word order. We asked whether adolescent first language learners acquire ASL word order in a fashion parallel to child learners. We analyzed word order preference in spontaneous language samples from four adolescent L1 learners collected longitudinally from 12 months to six years of ASL exposure. Our results suggest that adolescent L1 learners go through stages similar to child native learners, although this process also appears to be prolonged.
Subjects
Deafness, Language Development, Sign Language, Adolescent, Age Factors, Cognition, Female, Humans, Language, Learning, Male
ABSTRACT
Health information about inherited forms of cancer and the role of family history in cancer risk for the American Sign Language (ASL) Deaf community, a linguistic and cultural community, needs improvement. Cancer genetic education materials available in English print format are not accessible for many sign language users because English is not their native or primary language. Per Centers for Disease Control and Prevention recommendations, the level of literacy for printed health education materials should not be higher than 6th grade level (~ 11 to 12 years old), and even with this recommendation, printed materials are still not accessible to sign language users or other nonnative English speakers. Genetic counseling is becoming an integral part of healthcare, but often ASL users are not considered when health education materials are developed. As a result, there are few genetic counseling materials available in ASL. Online tools such as video and closed captioning offer opportunities for educators and genetic counselors to provide digital access to genetic information in ASL to the Deaf community. The Deaf Genetics Project team used a bilingual approach to develop a 37-min interactive Cancer Genetics Education Module (CGEM) video in ASL with closed captions and quizzes, and demonstrated that this approach resulted in greater cancer genetic knowledge and increased intentions to obtain counseling or testing, compared to standard English text information (Palmer et al., Disability and Health Journal, 10(1):23-32, 2017). Though visually enhanced educational materials have been developed for sign language users with a multimodal/multilingual approach, little is known about design features that can accommodate a diverse audience of sign language users so that the material is engaging to a wide audience. The main objectives of this paper are to describe the development of the CGEM and to determine whether viewer demographic characteristics are associated with two measurable aspects of CGEM viewing behavior: (1) length of time spent viewing and (2) number of pause, play, and seek events. These objectives are important to address, especially for Deaf individuals, because the amount of simultaneous content (video, print) requires cross-modal cognitive processing of visual and textual materials. Technology and presentational strategies are needed that enhance, rather than interfere with, health learning in this population.
Subjects
Deafness/psychology, Genetic Counseling, Health Education/methods, Sign Language, Child, Humans, Neoplasms, Program Development, Program Evaluation, Risk
ABSTRACT
Communication barriers between healthcare providers and patients contribute to health disparities and reduce the effectiveness of health promotion messages. This is especially true of communication between providers and deaf and hard of hearing (HOH) patients, due to a lack of understanding of cultural and linguistic differences, the ineffectiveness of various means of communication, and the level of health literacy within that population. This research aimed to identify American Sign Language (ASL) interpreters' perceptions of barriers to effective communication between deaf and HOH patients and healthcare providers. We conducted a survey of ASL interpreters attending the 2015 National Symposium on Healthcare Interpreting, with an overall response rate of 25%. Results indicated a significant difference (p < 0.05) in all areas of preferred communication between providers and deaf/HOH patients as perceived by interpreters. ASL interpreters observed that patients did not understand provider instructions in nearly half of appointments. Eighty-one percent of interpreters said that providers "hardly ever" use "teach-back" methods with patients to ensure understanding. Improving health care and health promotion efforts in the deaf/HOH community depends on improving communication, health literacy, and patient empowerment, and involves holding health care organizations accountable for ensuring adequate staffing of ASL interpreters and communication resources in order to reduce health disparities in this population.
Subjects
Communication Barriers, Health Services Accessibility/standards, Persons With Hearing Impairments/statistics & numerical data, Sign Language, Deafness, Female, Health Promotion, Hearing Loss, Humans, Surveys and Questionnaires, United States
ABSTRACT
Sign languages used by deaf communities around the world possess the same structural and organizational properties as spoken languages: In particular, they are richly expressive and also tightly grammatically constrained. They therefore offer the opportunity to investigate the extent to which the neural organization for language is modality independent, as well as to identify ways in which modality influences this organization. The fact that sign languages share the visual-manual modality with a nonlinguistic symbolic communicative system-gesture-further allows us to investigate where the boundaries lie between language and symbolic communication more generally. In the present study, we had three goals: to investigate the neural processing of linguistic structure in American Sign Language (using verbs of motion classifier constructions, which may lie at the boundary between language and gesture); to determine whether we could dissociate the brain systems involved in deriving meaning from symbolic communication (including both language and gesture) from those specifically engaged by linguistically structured content (sign language); and to assess whether sign language experience influences the neural systems used for understanding nonlinguistic gesture. The results demonstrated that even sign language constructions that appear on the surface to be similar to gesture are processed within the left-lateralized frontal-temporal network used for spoken languages-supporting claims that these constructions are linguistically structured. Moreover, although nonsigners engage regions involved in human action perception to process communicative, symbolic gestures, signers instead engage parts of the language-processing network-demonstrating an influence of experience on the perception of nonlinguistic stimuli.