Results 1 - 20 of 134
1.
Behav Res Methods ; 54(5): 2502-2521, 2022 Oct.
Article in English | MEDLINE | ID: mdl-34918219

ABSTRACT

Picture-naming tasks provide critical data for theories of lexical representation and retrieval and have been performed successfully in sign languages. However, the specific influences of lexical or phonological factors and stimulus properties on sign retrieval are poorly understood. To examine lexical retrieval in American Sign Language (ASL), we conducted a timed picture-naming study using 524 pictures (272 objects and 251 actions). We also compared ASL naming with previous data for spoken English for a subset of 425 pictures. Deaf ASL signers named object pictures faster and more consistently than action pictures, as previously reported for English speakers. Lexical frequency, iconicity, better name agreement, and lower phonological complexity each facilitated naming reaction times (RTs). RTs were also faster for pictures named with shorter signs (measured by average response duration). Target name agreement was higher for pictures with more iconic and shorter ASL names. The visual complexity of pictures slowed RTs and decreased target name agreement. RTs and target name agreement were correlated for ASL and English, but agreement was lower for ASL, possibly due to the English bias of the pictures. RTs were faster for ASL, which we attributed to a smaller lexicon. Overall, the results suggest that models of lexical retrieval developed for spoken languages can be adopted for signed languages, with the exception that iconicity should be included as a factor. The open-source picture-naming data set for ASL serves as an important, first-of-its-kind resource for researchers, educators, or clinicians for a variety of research, instructional, or assessment purposes.


Subject(s)
Names , Sign Language , Humans , Linguistics , Language , Reaction Time/physiology
2.
J Deaf Stud Deaf Educ ; 27(4): 355-372, 2022 Sep 15.
Article in English | MEDLINE | ID: mdl-35775152

ABSTRACT

The lexical quality hypothesis proposes that the quality of phonological, orthographic, and semantic representations impacts reading comprehension. In Study 1, we evaluated the contributions of lexical quality to reading comprehension in 97 deaf and 98 hearing adults matched for reading ability. While phonological awareness was a strong predictor for hearing readers, for deaf readers, orthographic precision and semantic knowledge, not phonology, predicted reading comprehension (assessed by two different tests). For deaf readers, the architecture of the reading system adapts by shifting reliance from (coarse-grained) phonological representations to high-quality orthographic and semantic representations. In Study 2, we examined the contribution of American Sign Language (ASL) variables to reading comprehension in 83 deaf adults. Fingerspelling (FS) and ASL comprehension skills predicted reading comprehension. We suggest that FS might reinforce orthographic-to-semantic mappings and that sign language comprehension may serve as a linguistic basis for the development of skilled reading in deaf signers.


Subject(s)
Deafness , Sign Language , Adult , Comprehension , Humans , Reading , Semantics
3.
Hum Brain Mapp ; 42(2): 384-397, 2021 Feb 01.
Article in English | MEDLINE | ID: mdl-33098616

ABSTRACT

The neural plasticity underlying language learning is a process rather than a single event. However, the dynamics of training-induced brain reorganization have rarely been examined, especially using a multimodal magnetic resonance imaging approach, which allows us to study the relationship between functional and structural changes. We focus on sign language acquisition in hearing adults who underwent an 8-month-long course and five neuroimaging sessions. We assessed what neural changes occurred as participants learned a new language in a different modality, as reflected by task-based activity, connectivity changes, and co-occurring structural alterations. Major changes in the activity pattern appeared after just 3 months of learning, as indicated by increases in activation within the modality-independent perisylvian language network, together with increased activation in modality-dependent parieto-occipital, visuospatial and motion-sensitive regions. Despite further learning, no alterations in activation were detected during the following months. However, enhanced coupling between left-lateralized occipital and inferior frontal regions was observed as proficiency increased. Furthermore, an increase in gray matter volume was detected in the left inferior frontal gyrus, which peaked at the end of learning. Overall, these results showed the complexity and temporal distinctiveness of various aspects of brain reorganization associated with learning a new language in a different sensory modality.


Subject(s)
Brain/diagnostic imaging , Brain/physiology , Hearing/physiology , Learning/physiology , Magnetic Resonance Imaging/methods , Sign Language , Adult , Brain Mapping/methods , Female , Humans , Multimodal Imaging/methods , Neuronal Plasticity/physiology , Photic Stimulation/methods , Psychomotor Performance/physiology , Young Adult
4.
Proc Natl Acad Sci U S A ; 115(39): 9708-9713, 2018 Sep 25.
Article in English | MEDLINE | ID: mdl-30206151

ABSTRACT

A defining feature of human cognition is the ability to quickly and accurately alternate between complex behaviors. One striking example of such an ability is bilinguals' capacity to rapidly switch between languages. This switching process minimally comprises disengagement from the previous language and engagement in a new language. Previous studies have associated language switching with increased prefrontal activity. However, it is unknown how the subcomputations of language switching individually contribute to these activities, because few natural situations enable full separation of disengagement and engagement processes during switching. We recorded magnetoencephalography (MEG) from American Sign Language-English bilinguals who often sign and speak simultaneously, which allows us to dissociate engagement and disengagement. MEG data showed that turning a language "off" (switching from simultaneous to single language production) led to increased activity in the anterior cingulate cortex (ACC) and dorsolateral prefrontal cortex (dlPFC), while turning a language "on" (switching from one language to two simultaneously) did not. The distinct representational nature of these on and off processes was also supported by multivariate decoding analyses. Additionally, Granger causality analyses revealed that (i) compared with "turning on" a language, "turning off" required stronger connectivity between left and right dlPFC, and (ii) dlPFC activity predicted ACC activity, consistent with models in which the dlPFC is a top-down modulator of the ACC. These results suggest that the burden of language switching lies in disengagement from the previous language as opposed to engaging a new language and that, in the absence of motor constraints, producing two languages simultaneously is not necessarily more cognitively costly than producing one.


Subject(s)
Brain/physiology , Magnetoencephalography , Multilingualism , Sign Language , Cognition/physiology , Executive Function/physiology , Humans , Language
5.
Proc Natl Acad Sci U S A ; 115(45): 11369-11376, 2018 Nov 06.
Article in English | MEDLINE | ID: mdl-30397135

ABSTRACT

Is there a universal hierarchy of the senses, such that some senses (e.g., vision) are more accessible to consciousness and linguistic description than others (e.g., smell)? The long-standing presumption in Western thought has been that vision and audition are more objective than the other senses, serving as the basis of knowledge and understanding, whereas touch, taste, and smell are crude and of little value. This predicts that humans ought to be better at communicating about sight and hearing than the other senses, and decades of work based on English and related languages certainly suggests this is true. However, how well does this reflect the diversity of languages and communities worldwide? To test whether there is a universal hierarchy of the senses, stimuli from the five basic senses were used to elicit descriptions in 20 diverse languages, including 3 unrelated sign languages. We found that languages differ fundamentally in which sensory domains they linguistically code systematically, and how they do so. The tendency for better coding in some domains can be explained in part by cultural preoccupations. Although languages seem free to elaborate specific sensory domains, some general tendencies emerge: for example, with some exceptions, smell is poorly coded. The surprise is that, despite the gradual phylogenetic accumulation of the senses, and the imbalances in the neural tissue dedicated to them, no single hierarchy of the senses imposes itself upon language.


Subject(s)
Auditory Perception/physiology , Language , Olfactory Perception/physiology , Psycholinguistics , Taste Perception/physiology , Touch Perception/physiology , Visual Perception/physiology , Africa , Asia , Cross-Cultural Comparison , Cultural Diversity , Humans , Latin America , Phonetics , Semantics , Sign Language
6.
J Deaf Stud Deaf Educ ; 26(2): 263-277, 2021 Mar 17.
Article in English | MEDLINE | ID: mdl-33598676

ABSTRACT

ASL-LEX is a publicly available, large-scale lexical database for American Sign Language (ASL). We report on the expanded database (ASL-LEX 2.0) that contains 2,723 ASL signs. For each sign, ASL-LEX now includes a more detailed phonological description, phonological density and complexity measures, frequency ratings (from deaf signers), iconicity ratings (from hearing non-signers and deaf signers), transparency ("guessability") ratings (from non-signers), sign and videoclip durations, lexical class, and more. We document the steps used to create ASL-LEX 2.0, describe the distributional characteristics of sign properties across the lexicon, and examine the relationships among lexical and phonological properties of signs. Correlation analyses revealed that frequent signs were less iconic and phonologically simpler than infrequent signs, and that iconic signs tended to be phonologically simpler than less iconic signs. The complete ASL-LEX dataset and supplementary materials are available at https://osf.io/zpha4/, and an interactive visualization of the entire lexicon can be accessed on the ASL-LEX page: http://asl-lex.org/.


Subject(s)
Deafness , Sign Language , Hearing , Humans , Linguistics , Semantics , United States
7.
J Cogn Neurosci ; 32(6): 1079-1091, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32027582

ABSTRACT

A domain-general monitoring mechanism is proposed to be involved in overt speech monitoring. This mechanism is reflected in a medial frontal component, the error negativity (Ne), present in both errors and correct trials (Ne-like wave) but larger in errors than correct trials. In overt speech production, this negativity starts to rise before speech onset and is therefore associated with inner speech monitoring. Here, we investigate whether the same monitoring mechanism is involved in sign language production. Twenty deaf signers (American Sign Language [ASL] dominant) and 16 hearing signers (English dominant) participated in a picture-word interference paradigm in ASL. As in previous studies, ASL naming latencies were measured using the keyboard release time. EEG results revealed a medial frontal negativity peaking within 15 msec after keyboard release in the deaf signers. This negativity was larger in errors than correct trials, as previously observed in spoken language production. No clear negativity was present in the hearing signers. In addition, the slope of the Ne was correlated with ASL proficiency (measured by the ASL Sentence Repetition Task) across signers. Our results indicate that a similar medial frontal mechanism is engaged in preoutput language monitoring in sign and spoken language production. These results suggest that the monitoring mechanism reflected by the Ne/Ne-like wave is independent of output modality (i.e., spoken or signed) and likely monitors prearticulatory representations of language. Differences between groups may be linked to several factors including differences in language proficiency or more variable lexical access to motor programming latencies for hearing than deaf signers.


Subject(s)
Deafness/physiopathology , Evoked Potentials/physiology , Executive Function/physiology , Pattern Recognition, Visual/physiology , Prefrontal Cortex/physiology , Psycholinguistics , Sign Language , Speech/physiology , Adult , Electroencephalography , Female , Humans , Male , Middle Aged , Psychomotor Performance/physiology
8.
Proc Natl Acad Sci U S A ; 119(28): e2208884119, 2022 Jul 12.
Article in English | MEDLINE | ID: mdl-35767673

Subject(s)
Language
9.
J Deaf Stud Deaf Educ ; 25(4): 447-456, 2020 Sep 10.
Article in English | MEDLINE | ID: mdl-32476020

ABSTRACT

As spatial languages, sign languages rely on spatial cognitive processes that are not involved for spoken languages. Interlocutors have different visual perspectives of the signer's hands, requiring a mental transformation for successful communication about spatial scenes. It is unknown whether visual-spatial perspective-taking (VSPT) or mental rotation (MR) abilities support signers' comprehension of perspective-dependent American Sign Language (ASL) structures. A total of 33 deaf adult ASL signers completed tasks examining nonlinguistic VSPT ability, MR ability, general ASL proficiency (ASL-Sentence Reproduction Task [ASL-SRT]), and an ASL comprehension test involving perspective-dependent classifier constructions (the ASL Spatial Perspective Comprehension Test [ASPCT]). Scores on the linguistic (ASPCT) and VSPT tasks positively correlated with each other and both correlated with MR ability; however, VSPT abilities predicted linguistic perspective-taking better than did MR ability. ASL-SRT scores correlated with ASPCT accuracy (as both require ASL proficiency) but not with VSPT scores. Therefore, the ability to comprehend perspective-dependent ASL classifier constructions relates to ASL proficiency and to nonlinguistic VSPT and MR abilities.


Subject(s)
Sign Language , Adult , Comprehension , Female , Humans , Male , Middle Aged , Neuropsychological Tests , Space Perception , Visual Perception , Young Adult
10.
J Deaf Stud Deaf Educ ; 24(3): 201-213, 2019 Jul 01.
Article in English | MEDLINE | ID: mdl-30882873

ABSTRACT

Social abilities relate to performance on visual-spatial perspective-taking (VSPT) tasks for hearing nonsigners but may relate differently to VSPT abilities for deaf signers because of their distinct linguistic and social experiences. This research investigated whether deaf adults approach VSPT tasks nonsocially (as previously suggested for deaf children) or socially (as seen for hearing adults). Adult hearing nonsigners (n = 45) and deaf signers (n = 44) performed a nonlinguistic VSPT task, mental rotation and spatial orientation tasks, and completed a questionnaire measuring social abilities and degree of socialness. No group differences were observed for any of the spatial tasks. Hearing nonsigners with better social abilities performed better on the VSPT task but deaf signers who were less social performed better on the VSPT task. Therefore, social abilities and VSPT skill relate differently for deaf and hearing individuals, possibly due to differences in communication modality and/or sociocultural experiences.


Subject(s)
Deafness/psychology , Hearing/physiology , Sign Language , Social Skills , Spatial Processing/physiology , Visual Perception/physiology , Adult , Female , Humans , Male , Neuropsychological Tests , Surveys and Questionnaires , Young Adult
11.
J Deaf Stud Deaf Educ ; 24(3): 214-222, 2019 Jul 01.
Article in English | MEDLINE | ID: mdl-30856254

ABSTRACT

In ASL spatial classifier expressions, the location of the hands in signing space depicts the relative position of described objects. When objects are physically present, the arrangement of the hands maps to the observed position of objects in the world (Shared Space). For non-present objects, interlocutors must perform a mental transformation to take the signer's perspective ("Signer Space"). The ASL Spatial Perspective Comprehension Test (ASPCT) was developed to assess the comprehension of locative expressions produced in both Shared and Signer Space, viewed from both canonical Face-to-face and 90° offset viewing angles. Deaf signers (N = 38) performed better with Shared than Signer Space. Viewing angle only impacted Signer Space comprehension (90° offset better than 180° Face-to-face). ASPCT performance correlated positively with both nonverbal intelligence and ASL proficiency. These findings indicate that the mental transformation required to understand a signer's perspective is not automatic, takes time, and is cognitively demanding.


Subject(s)
Comprehension/physiology , Deafness/psychology , Sign Language , Spatial Navigation/physiology , Adult , Analysis of Variance , Female , Humans , Male , Photic Stimulation , Reaction Time/physiology , Video Recording
12.
J Deaf Stud Deaf Educ ; 23(4): 399-407, 2018 Oct 01.
Article in English | MEDLINE | ID: mdl-29733368

ABSTRACT

This study investigated the impact of language modality and age of acquisition on semantic fluency in American Sign Language (ASL) and English. Experiment 1 compared semantic fluency performance (e.g., name as many animals as possible in 1 min) for deaf native and early ASL signers and hearing monolingual English speakers. The results showed similar fluency scores in both modalities when fingerspelled responses were included for ASL. Experiment 2 compared ASL and English fluency scores in hearing native and late ASL-English bilinguals. Semantic fluency scores were higher in English (the dominant language) than ASL (the non-dominant language), regardless of age of ASL acquisition. Fingerspelling was relatively common in all groups of signers and was used primarily for low-frequency items. We conclude that semantic fluency is sensitive to language dominance and that performance can be compared across the spoken and signed modality, but fingerspelled responses should be included in ASL fluency scores.


Subject(s)
Sign Language , Adult , Aptitude , Female , Humans , Language , Male , Multilingualism , Persons With Hearing Impairments , Semantics
13.
Hum Brain Mapp ; 38(8): 4109-4124, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28513102

ABSTRACT

Bilingual experience can delay cognitive decline during aging. A general hypothesis is that the executive control system of bilinguals faces an increased load due to controlling two languages, and this increased load results in a more "tuned brain" that eventually creates a neural reserve. Here we explored whether such a neuroprotective effect is independent of language modality, i.e., not limited to bilinguals who speak two languages but also occurs for bilinguals who use a spoken and a signed language. We addressed this issue by comparing bimodal bilinguals to monolinguals in order to detect age-induced structural brain changes and to determine whether we can detect the same beneficial effects on brain structure, in terms of preservation of gray matter volume (GMV), for bimodal bilinguals as has been reported for unimodal bilinguals. Our GMV analyses revealed a significant interaction effect of age × group in the bilateral anterior temporal lobes, left hippocampus/amygdala, and left insula where bimodal bilinguals showed slight GMV increases while monolinguals showed significant age-induced GMV decreases. We further found through cortical surface-based measurements that this effect was present for surface area and not for cortical thickness. Moreover, to further explore the hypothesis that overall bilingualism provides neuroprotection, we carried out a direct comparison of GMV, extracted from the brain regions reported above, between bimodal bilinguals, unimodal bilinguals, and monolinguals. Bilinguals, regardless of language modality, exhibited higher GMV compared to monolinguals. This finding highlights the general beneficial effects provided by experience handling two language systems, whether signed or spoken.


Subject(s)
Aging/pathology , Aging/psychology , Brain/diagnostic imaging , Multilingualism , Neuroprotection/physiology , Adult , Aged , Brain/pathology , Female , Gray Matter/diagnostic imaging , Gray Matter/pathology , Humans , Image Processing, Computer-Assisted , Linear Models , Male , Middle Aged , Organ Size , Sign Language
14.
Behav Brain Sci ; 40: e54, 2017 Jan.
Article in English | MEDLINE | ID: mdl-29342516

ABSTRACT

Linguistic and psycholinguistic tests will be more useful than motion capture technology in calibrating the borders between sign and gesture. The analogy between motion capture (mocap) technology and the spectrograph is flawed because only vocal articulators are hidden. Although information about gradience and variability will be obtained, the technology provides less information about linguistic constraints and categories. Better models are needed to account for differences between co-speech and co-sign gesture (e.g., different degrees of optionality, existence of beat gestures).


Subject(s)
Gestures , Sign Language , Humans , Language , Psycholinguistics , Speech
15.
Behav Res Methods ; 49(2): 784-801, 2017 Apr.
Article in English | MEDLINE | ID: mdl-27193158

ABSTRACT

ASL-LEX is a lexical database that catalogues information about nearly 1,000 signs in American Sign Language (ASL). It includes the following information: subjective frequency ratings from 25-31 deaf signers, iconicity ratings from 21-37 hearing non-signers, videoclip duration, sign length (onset and offset), grammatical class, and whether the sign is initialized, a fingerspelled loan sign, or a compound. Information about English translations is available for a subset of signs (e.g., alternate translations, translation consistency). In addition, phonological properties (sign type, selected fingers, flexion, major and minor location, and movement) were coded and used to generate sub-lexical frequency and neighborhood density estimates. ASL-LEX is intended for use by researchers, educators, and students who are interested in the properties of the ASL lexicon. An interactive website where the database can be browsed and downloaded is available at http://asl-lex.org.


Subject(s)
Databases, Factual , Sign Language , Adult , Female , Humans , Language , Male , Translations , United States , Young Adult
16.
J Deaf Stud Deaf Educ ; 22(4): 402-403, 2017 Oct 01.
Article in English | MEDLINE | ID: mdl-28961873

ABSTRACT

This issue begins the inclusion of a series of articles on multimodal, multilingual communication development. This special section is intended to run for two or three issues, with two or three contributions in each issue.


Subject(s)
Education of Hearing Disabled , Multilingualism , Child , Humans , Language
17.
J Deaf Stud Deaf Educ ; 22(1): 72-87, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27789552

ABSTRACT

We conducted three immediate serial recall experiments that manipulated type of stimulus presentation (printed or fingerspelled words) and word similarity (speech-based or manual). Matched deaf American Sign Language signers and hearing non-signers participated (mean reading age = 14-15 years). Speech-based similarity effects were found for both stimulus types indicating that deaf signers recoded both printed and fingerspelled words into a speech-based phonological code. A manual similarity effect was not observed for printed words indicating that print was not recoded into fingerspelling (FS). A manual similarity effect was observed for fingerspelled words when similarity was based on joint angles rather than on handshape compactness. However, a follow-up experiment suggested that the manual similarity effect was due to perceptual confusion at encoding. Overall, these findings suggest that FS is strongly linked to English phonology for deaf adult signers who are relatively skilled readers. This link between fingerspelled words and English phonology allows for the use of a more efficient speech-based code for retaining fingerspelled words in short-term memory and may strengthen the representation of English vocabulary.


Subject(s)
Deafness/psychology , Memory, Short-Term/physiology , Sign Language , Adolescent , Adult , Awareness/physiology , Case-Control Studies , Female , Fingers , Humans , Male , Middle Aged , Psychological Tests , Reading , Recognition, Psychology/physiology , Speech/physiology , Young Adult
18.
Behav Brain Sci ; 39: e70, 2016 Jan.
Article in English | MEDLINE | ID: mdl-27562833

ABSTRACT

Signed and spoken languages emerge, change, are acquired, and are processed under distinct perceptual, motor, and memory constraints. Therefore, the Now-or-Never bottleneck has different ramifications for these languages, which are highlighted in this commentary. The extent to which typological differences in linguistic structure can be traced to processing differences provides unique evidence for the claim that structure is processing.


Subject(s)
Language , Sign Language , Humans , Linguistics
19.
J Deaf Stud Deaf Educ ; 21(2): 213-21, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26657077

ABSTRACT

Semantic and lexical decision tasks were used to investigate the mechanisms underlying code-blend facilitation: the finding that hearing bimodal bilinguals comprehend signs in American Sign Language (ASL) and spoken English words more quickly when they are presented together simultaneously than when each is presented alone. More robust facilitation effects were observed for semantic decision than for lexical decision, suggesting that lexical integration of signs and words within a code-blend occurs primarily at the semantic level, rather than at the level of form. Early bilinguals exhibited greater facilitation effects than late bilinguals for English (the dominant language) in the semantic decision task, possibly because early bilinguals are better able to process early visual cues from ASL signs and use these to constrain English word recognition. Comprehension facilitation via semantic integration of words and signs is consistent with co-speech gesture research demonstrating facilitative effects of gesture integration on language comprehension.


Subject(s)
Comprehension/physiology , Hearing Loss/rehabilitation , Multilingualism , Semantics , Sign Language , Speech Perception/physiology , Adolescent , Adult , Female , Gestures , Humans , Male , Reaction Time , Young Adult
20.
J Deaf Stud Deaf Educ ; 21(1): 64-9, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26590608

ABSTRACT

The American Sign Language Comprehension Test (ASL-CT) is a 30-item multiple-choice test that measures ASL receptive skills and is administered through a website. This article describes the development and psychometric properties of the test based on a sample of 80 college students including deaf native signers, hearing native signers, deaf non-native signers, and hearing ASL students. The results revealed that the ASL-CT has good internal reliability (α = 0.834). Discriminant validity was established by demonstrating that deaf native signers performed significantly better than deaf non-native signers and hearing native signers. Concurrent validity was established by demonstrating that test results positively correlated with another measure of ASL ability (r = .715) and that hearing ASL students' performance positively correlated with the level of ASL courses they were taking (r = .726). Researchers can use the ASL-CT to characterize an individual's ASL comprehension skills, to establish a minimal skill level as an inclusion criterion for a study, to group study participants by ASL skill (e.g., proficient vs. nonproficient), or to provide a measure of ASL skill as a dependent variable.


Subject(s)
Comprehension , Hearing Loss/rehabilitation , Language Tests/standards , Persons With Hearing Impairments/rehabilitation , Psychometrics/methods , Sign Language , Adolescent , Empirical Research , Humans , Reproducibility of Results , Students , United States , Young Adult