Results 1 - 20 of 100
1.
Hear Res ; 451: 109074, 2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39018768

ABSTRACT

Many children with profound hearing loss have received cochlear implants (CI) to help restore some sense of hearing. There is, however, limited research on long-term neurocognitive outcomes in young adults who have grown up hearing through a CI. This study compared the cognitive outcomes of early-implanted (n = 20) and late-implanted (n = 21) young adult CI users, and typically hearing (TH) controls (n = 56), all of whom were enrolled in college. Cognitive fluidity, nonverbal intelligence, and American Sign Language (ASL) comprehension were assessed, revealing no significant differences in cognition and nonverbal intelligence between the early- and late-implanted groups. However, the late-implanted group showed significantly higher ASL comprehension. Although young adult CI users showed significantly lower scores on a working memory and processing speed task than TH age-matched controls, there were no significant differences in tasks involving executive function shifting, inhibitory control, and episodic memory between young adult CI and young adult TH participants. In an exploratory analysis of a subset of CI participants (n = 17) in whom we were able to examine crossmodal plasticity, we saw greater evidence of crossmodal recruitment from the visual system in late-implanted compared with early-implanted CI young adults. However, cortical visual evoked potential latency biomarkers of crossmodal plasticity were not correlated with cognitive measures or ASL comprehension. The results suggest that in the late-implanted CI users, early access to sign language may have served as a scaffold for appropriate cognitive development, while in the early-implanted group, early access to oral language benefited cognitive development. Furthermore, our results suggest that the persistence of crossmodal neuroplasticity into adulthood does not necessarily impact cognitive development. In conclusion, early access to language - spoken or signed - may be important for cognitive development, with no observable effect of crossmodal plasticity on cognitive outcomes.


Subject(s)
Cochlear Implantation , Cochlear Implants , Cognition , Comprehension , Neuronal Plasticity , Persons With Hearing Impairments , Humans , Male , Young Adult , Female , Cochlear Implantation/instrumentation , Persons With Hearing Impairments/psychology , Persons With Hearing Impairments/rehabilitation , Adult , Case-Control Studies , Adolescent , Time Factors , Age Factors , Neuropsychological Tests , Memory, Short-Term , Executive Function , Treatment Outcome , Hearing , Correction of Hearing Impairment/instrumentation
2.
Front Artif Intell ; 7: 1297347, 2024.
Article in English | MEDLINE | ID: mdl-38957453

ABSTRACT

Addressing the increasing demand for accessible sign language learning tools, this paper introduces an innovative Machine Learning-Driven Web Application dedicated to Sign Language Learning. This web application represents a significant advancement in sign language education. Unlike traditional approaches, the application's unique methodology involves assigning users different words to spell. Users are tasked with signing each letter of the word, earning a point upon correctly signing the entire word. The paper delves into the development, features, and the machine learning framework underlying the application. Developed using HTML, CSS, JavaScript, and Flask, the web application seamlessly accesses the user's webcam for a live video feed, displaying the model's predictions on-screen to facilitate interactive practice sessions. The primary aim is to provide a learning platform for those who are not familiar with sign language, offering them the opportunity to acquire this essential skill and fostering inclusivity in the digital age.
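
The paper does not publish its implementation, but a minimal sketch of the Flask prediction endpoint it describes might look like the following; the route name, JSON payload format, and the predict_letter() model wrapper are assumptions for illustration. The browser-side JavaScript would capture a webcam frame, encode it as base64, and POST it to this endpoint, then display the returned letter on screen.

    # Minimal sketch of a Flask prediction endpoint, assuming the browser posts
    # JSON of the form {"frame": "<base64-encoded JPEG>"}. predict_letter() is a
    # placeholder for the trained hand-shape classifier, which is not published.
    import base64
    import io

    from flask import Flask, jsonify, request
    from PIL import Image

    app = Flask(__name__)

    def predict_letter(image: Image.Image) -> str:
        """Placeholder for the trained ASL letter classifier."""
        return "A"  # a real model would return the predicted letter

    @app.route("/predict", methods=["POST"])
    def predict():
        data = request.get_json(force=True)
        frame_bytes = base64.b64decode(data["frame"])
        image = Image.open(io.BytesIO(frame_bytes)).convert("RGB")
        return jsonify({"letter": predict_letter(image)})

    if __name__ == "__main__":
        app.run(debug=True)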

3.
Neurobiol Lang (Camb) ; 5(2): 553-588, 2024.
Article in English | MEDLINE | ID: mdl-38939730

ABSTRACT

We examined the impact of exposure to a signed language (American Sign Language, or ASL) at different ages on the neural systems that support spoken language phonemic discrimination in deaf individuals with cochlear implants (CIs). Deaf CI users (N = 18, age = 18-24 yrs) who were exposed to a signed language at different ages and hearing individuals (N = 18, age = 18-21 yrs) completed a phonemic discrimination task in a spoken native (English) and non-native (Hindi) language while undergoing functional near-infrared spectroscopy neuroimaging. Behaviorally, deaf CI users who received a CI early versus later in life showed better English phonemic discrimination, although phonemic discrimination was poor relative to that of hearing individuals. Importantly, the age of exposure to ASL was not related to phonemic discrimination. Neurally, early-life language exposure, irrespective of modality, was associated with greater neural activation of left-hemisphere language areas critically involved in phonological processing during the phonemic discrimination task in deaf CI users. In particular, early exposure to ASL was associated with increased activation in the left hemisphere's classic language regions for native versus non-native language phonemic contrasts for deaf CI users who received a CI later in life. For deaf CI users who received a CI early in life, the age of exposure to ASL was not related to neural activation during phonemic discrimination. Together, the findings suggest that early signed language exposure does not negatively impact spoken language processing in deaf CI users, but may instead potentially offset the negative effects of language deprivation that deaf children without any signed language exposure experience prior to implantation. This empirical evidence aligns with and lends support to recent perspectives regarding the impact of ASL exposure in the context of CI usage.

4.
MedEdPORTAL ; 20: 11396, 2024.
Article in English | MEDLINE | ID: mdl-38722734

ABSTRACT

Introduction: People with disabilities and those with non-English language preferences have worse health outcomes than their counterparts due to barriers to communication and poor continuity of care. As members of both groups, people who are Deaf users of American Sign Language have compounded health disparities. Provider discomfort with these specific demographics is a contributing factor, often stemming from insufficient training in medical programs. To help address these health disparities, we created a session on disability, language, and communication for undergraduate medical students. Methods: This 2-hour session was developed as a part of a 2020 curriculum shift for a total of 404 second-year medical student participants. We utilized a retrospective postsession survey to analyze learning objective achievement through a comparison of medians using the Wilcoxon signed rank test (α = .05) for the first 2 years of course implementation. Results: When assessing 158 students' self-perceived abilities to perform each of the learning objectives, students reported significantly higher confidence after the session compared to their retrospective presession confidence for all four learning objectives (all ps < .001). Responses signifying learning objective achievement (scores of 4, probably yes, or 5, definitely yes), when averaged across the first 2 years of implementation, increased from 73% before the session to 98% after the session. Discussion: Our evaluation suggests that medical students could benefit from increased educational initiatives on disability culture and health disparities caused by barriers to communication, to strengthen cultural humility, the delivery of health care, and, ultimately, health equity.
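
As a rough illustration of the statistical comparison described above (medians compared with the Wilcoxon signed rank test at α = .05), the following Python sketch uses scipy; the Likert-style confidence ratings are invented for illustration and are not the study's data.

    # Hedged sketch of a retrospective pre/post comparison with the Wilcoxon
    # signed-rank test. The paired 1-5 ratings below are hypothetical.
    from scipy.stats import wilcoxon

    pre  = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3]   # retrospective pre-session confidence
    post = [5, 4, 5, 4, 4, 5, 5, 4, 4, 5]   # post-session confidence

    stat, p_value = wilcoxon(pre, post)
    print(f"W = {stat:.1f}, p = {p_value:.4f}")  # significant at alpha = .05 if p < .05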


Subject(s)
Curriculum , Decision Making, Shared , Disabled Persons , Education, Medical, Undergraduate , Students, Medical , Humans , Students, Medical/psychology , Students, Medical/statistics & numerical data , Retrospective Studies , Education, Medical, Undergraduate/methods , Communication Barriers , Surveys and Questionnaires , Male , Female , Sign Language , Language
6.
J Appl Behav Anal ; 57(3): 657-667, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38742862

ABSTRACT

Multiple-baseline-across-word-sets designs were used to determine whether a computer-based intervention would enhance accurate word signing with four participants. Each participant was a hearing college student with a reading disorder. Learning trials included 3 s to observe printed words on the screen and a video model performing the sign twice (i.e., simultaneous prompting), 3 s to make the sign, 3 s to observe the same clip, and 3 s to make the sign again. For each participant and word set, no words were accurately signed during baseline. After the intervention, all four participants increased their accurate word signing across all three word sets, providing 12 demonstrations of experimental control. For each participant, accurate word signing was maintained. Applications of efficient, technology-based simultaneous-prompting interventions for enhancing American Sign Language learning are discussed, along with future research designed to investigate causal mechanisms and optimize intervention effects.


Subject(s)
Dyslexia , Sign Language , Humans , Male , Dyslexia/rehabilitation , Dyslexia/therapy , Female , Computer-Assisted Instruction/methods , Young Adult , Learning , Students/psychology
7.
Sensors (Basel) ; 24(2)2024 Jan 11.
Article in English | MEDLINE | ID: mdl-38257544

ABSTRACT

Sign language is a natural communication method used to convey messages within the deaf community. In the study of sign language recognition with wearable sensors, data sources are limited and the data acquisition process is complex. This research aims to collect an American Sign Language dataset with a wearable inertial motion capture system and to realize recognition and end-to-end translation of sign language sentences with deep learning models. In this work, a dataset of 300 commonly used sentences was gathered from 3 volunteers. The recognition network consists of three main components: a convolutional neural network, a bi-directional long short-term memory network, and a connectionist temporal classification layer. The model achieves accuracy rates of 99.07% in word-level evaluation and 97.34% in sentence-level evaluation. The translation network is an encoder-decoder model based on long short-term memory with global attention. The word error rate of end-to-end translation is 16.63%. The proposed method has the potential to recognize more sign language sentences given reliable inertial data from the device.
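
The paper's exact architecture and hyperparameters are not reproduced here, but a hedged PyTorch sketch of the described recognition network (convolutional feature extraction, bi-directional LSTM, and a connectionist temporal classification output) might look like the following; the channel counts, kernel sizes, hidden size, and vocabulary size are assumptions.

    # Hedged sketch of a CNN + BiLSTM + CTC recognizer for inertial-sensor
    # sequences. Sizes are illustrative, not the paper's settings.
    import torch.nn as nn

    class SignRecognizer(nn.Module):
        def __init__(self, n_channels=36, n_classes=301, hidden=128):
            super().__init__()
            # 1-D convolutions over the inertial-sensor time series
            self.cnn = nn.Sequential(
                nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            )
            # Bi-directional LSTM over the extracted feature sequence
            self.bilstm = nn.LSTM(128, hidden, batch_first=True, bidirectional=True)
            # Per-frame scores over the sign vocabulary plus the CTC blank label
            self.fc = nn.Linear(2 * hidden, n_classes)

        def forward(self, x):                    # x: (batch, time, channels)
            feats = self.cnn(x.transpose(1, 2))  # -> (batch, 128, time)
            seq, _ = self.bilstm(feats.transpose(1, 2))
            return self.fc(seq).log_softmax(-1)  # log-probs for nn.CTCLoss

    model = SignRecognizer()
    ctc_loss = nn.CTCLoss(blank=0)  # trained against target label sequences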


Subject(s)
Sign Language , Wearable Electronic Devices , Humans , United States , Motion Capture , Neurons , Perception
8.
Appl Linguist Rev ; 15(1): 309-333, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38221976

ABSTRACT

Hearing parents with deaf children face difficult decisions about what language(s) to use with their child. Sign languages such as American Sign Language (ASL) are fully accessible to deaf children, yet most hearing parents are not proficient in ASL prior to having a deaf child. Parents are often discouraged from learning ASL based in part on an assumption that it will be too difficult, yet there is little evidence supporting this claim. In this mixed-methods study, we surveyed hearing parents of deaf children (n = 100) who had learned ASL about their experiences. In their survey responses, parents identified a range of resources that supported their ASL learning as well as frequent barriers. Parents identified strongly with belief statements indicating the importance of ASL and affirmed that learning ASL is attainable for hearing parents. We discuss the implications of this study for parents who are considering ASL as a language choice and for the professionals who guide them.

9.
J Deaf Stud Deaf Educ ; 29(2): 105-114, 2024 Mar 17.
Article in English | MEDLINE | ID: mdl-37973400

ABSTRACT

This case study describes the use of a syntax intervention with two deaf children who did not acquire a complete first language (L1) from birth. It looks specifically at their ability to produce subject-verb-object (SVO) sentence structure in American Sign Language (ASL) after receiving intervention. This was an exploratory case study in which investigators utilized an intervention that contained visuals to help teach SVO word order to young deaf children. Baseline data were collected over three sessions before implementation of a targeted syntax intervention and two follow-up sessions over 3-4 weeks. Both participants demonstrated improvements in their ability to produce SVO structure in ASL in 6-10 sessions. Visual analysis revealed a positive therapeutic trend that was maintained in follow-up sessions. These data provide preliminary evidence that a targeted intervention may help young deaf children with an incomplete L1 learn to produce basic word order in ASL. Results from this case study can help inform the practice of professionals working with signing deaf children who did not acquire a complete L1 from birth (e.g., speech-language pathologists, deaf mentors/coaches, ASL specialists, etc.). Future research should investigate the use of this intervention with a larger sample of deaf children.


Subject(s)
Language , Sign Language , Child , Humans , United States , Language Development , Learning
10.
Health Promot Pract ; 25(1): 65-76, 2024 Jan.
Article in English | MEDLINE | ID: mdl-36760068

ABSTRACT

School-based programs are an important tobacco prevention tool. Yet, existing programs are not suitable for Deaf and Hard-of-Hearing (DHH) youth. Moreover, little research has examined the use of the full range of tobacco products and related knowledge in this group. To address this gap and inform development of a school-based tobacco prevention program for this population, we conducted a pilot study among DHH middle school (MS) and high school (HS) students attending Schools for the Deaf and mainstream schools in California (n = 114). Surveys administered in American Sign Language (ASL), before and after receipt of a draft curriculum delivered by health or physical education teachers, assessed product use and tobacco knowledge. Thirty-five percent of students reported exposure to tobacco products at home, including cigarettes (19%) and e-cigarettes (15%). Tobacco knowledge at baseline was limited; 35% of students knew e-cigarettes contain nicotine, and 56% were aware vaping is prohibited on school grounds. Current product use was reported by 16% of students, most commonly e-cigarettes (12%) and cigarettes (10%); overall, 7% of students reported dual use. Use was greater among HS versus MS students. Changes in student knowledge following program delivery included increased understanding of harmful chemicals in tobacco products, including nicotine in e-cigarettes. Post-program debriefings with teachers yielded specific recommendations for modifications to better meet the educational needs of DHH students. Findings based on student and teacher feedback will guide curriculum development and inform next steps in our program of research aimed to prevent tobacco use in this vulnerable and heretofore understudied population group.


Subject(s)
Electronic Nicotine Delivery Systems , Persons With Hearing Impairments , Tobacco Products , Humans , Adolescent , Smoking/epidemiology , Nicotine , Pilot Projects
11.
Dev Sci ; 27(1): e13416, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37255282

ABSTRACT

The hypothesis that impoverished language experience affects complex sentence structure development around the end of early childhood was tested using a fully randomized, sentence-to-picture matching study in American Sign Language (ASL). The participants were ASL signers who had impoverished or typical access to language in early childhood. Deaf signers whose access to language was highly impoverished in early childhood (N = 11) primarily comprehended structures consisting of a single verb and argument (Subject or Object), agreeing verbs, and the spatial relation or path of semantic classifiers. They showed difficulty comprehending more complex sentence structures involving dual lexical arguments or multiple verbs. As predicted, participants with typical language access in early childhood, deaf native signers (N = 17) or hearing second-language learners (N = 10), comprehended the range of 12 ASL sentence structures, independent of the subjective iconicity or frequency of the stimulus lexical items, or length of ASL experience and performance on non-verbal cognitive tasks. The results show that language experience in early childhood is necessary for the development of complex syntax. RESEARCH HIGHLIGHTS: Previous research with deaf signers suggests an inflection point around the end of early childhood for sentence structure development. Deaf signers who experienced impoverished language until the age of 9 or older comprehend several basic sentence structures but few complex structures. Language experience in early childhood is necessary for the development of complex sentence structure.


Subject(s)
Deafness , Language , Child, Preschool , Humans , Sign Language , Semantics , Hearing
12.
Sensors (Basel) ; 23(18)2023 Sep 19.
Article in English | MEDLINE | ID: mdl-37766026

ABSTRACT

Historically, individuals with hearing impairments have faced neglect, lacking the necessary tools to facilitate effective communication. However, advancements in modern technology have paved the way for the development of various tools and software aimed at improving the quality of life for hearing-disabled individuals. This research paper presents a comprehensive study employing five distinct deep learning models to recognize hand gestures for the American Sign Language (ASL) alphabet. The primary objective of this study was to leverage contemporary technology to bridge the communication gap between hearing-impaired individuals and individuals with no hearing impairment. The models utilized in this research (AlexNet, ConvNeXt, EfficientNet, ResNet-50, and Vision Transformer) were trained and tested using an extensive dataset comprising over 87,000 images of ASL alphabet hand gestures. Numerous experiments were conducted, involving modifications to the architectural design parameters of the models to obtain maximum recognition accuracy. The experimental results of our study revealed that ResNet-50 achieved an exceptional accuracy rate of 99.98%, the highest among all models. EfficientNet attained an accuracy rate of 99.95%, ConvNeXt achieved 99.51% accuracy, AlexNet attained 99.50% accuracy, while Vision Transformer yielded the lowest accuracy, 88.59%.
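
The paper's training code is not available, but a hedged torchvision sketch of fine-tuning one of the listed models (ResNet-50) on an ASL alphabet image folder might look like the following; the dataset path, class count, and training settings are assumptions for illustration.

    # Hedged sketch of fine-tuning ResNet-50 for ASL alphabet recognition.
    # "asl_alphabet_train" and NUM_CLASSES are hypothetical placeholders.
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    NUM_CLASSES = 29  # e.g., A-Z plus space/delete/nothing in common ASL alphabet datasets

    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    train_set = datasets.ImageFolder("asl_alphabet_train", transform=transform)
    loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace the classifier head

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()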


Subject(s)
Deep Learning , Sign Language , Humans , United States , Quality of Life , Gestures , Technology
13.
Neurobiol Lang (Camb) ; 4(2): 361-381, 2023.
Article in English | MEDLINE | ID: mdl-37546690

ABSTRACT

Letter recognition plays an important role in reading and follows different phases of processing, from early visual feature detection to the access of abstract letter representations. Deaf ASL-English bilinguals experience orthography in two forms: English letters and fingerspelling. However, the neurobiological nature of fingerspelling representations, and the relationship between the two orthographies, remains unexplored. We examined the temporal dynamics of single English letter and ASL fingerspelling font processing in an unmasked priming paradigm with centrally presented targets for 200 ms preceded by 100 ms primes. Event-related brain potentials were recorded while participants performed a probe detection task. Experiment 1 examined English letter-to-letter priming in deaf signers and hearing non-signers. We found that English letter recognition is similar for deaf and hearing readers, extending previous findings with hearing readers to unmasked presentations. Experiment 2 examined priming effects between English letters and ASL fingerspelling fonts in deaf signers only. We found that fingerspelling fonts primed both fingerspelling fonts and English letters, but English letters did not prime fingerspelling fonts, indicating a priming asymmetry between letters and fingerspelling fonts. We also found an N400-like priming effect when the primes were fingerspelling fonts which might reflect strategic access to the lexical names of letters. The studies suggest that deaf ASL-English bilinguals process English letters and ASL fingerspelling differently and that the two systems may have distinct neural representations. However, the fact that fingerspelling fonts can prime English letters suggests that the two orthographies may share abstract representations to some extent.

14.
J Emerg Med ; 65(3): e163-e171, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37640633

ABSTRACT

BACKGROUND: Deaf individuals who communicate using American Sign Language (ASL) seem to experience a range of disparities in health care, but there are few empirical data. OBJECTIVE: To examine the provision of common care practices in the emergency department (ED) to this population. METHODS: ED visits in 2018 at a U.S. academic medical center were assessed retrospectively in Deaf adults who primarily use ASL (n = 257) and hearing individuals who primarily use English, selected at random (n = 429). Logistic regression analyses, adjusted for confounders, compared the groups on the provision or nonprovision of four routine ED care practices (i.e., laboratory tests ordered, medications ordered, images ordered, and placement of a peripheral intravenous line [PIV]) and on ED disposition (admitted to hospital or not admitted). RESULTS: ED encounters with Deaf ASL users were less likely to include laboratory tests being ordered (adjusted odds ratio 0.68; 95% confidence interval 0.47-0.97). ED encounters with Deaf individuals were also less likely to include PIV placement, less likely to result in images being ordered for ASL users of high acuity compared with English users of high acuity (but not low acuity), and less likely to result in hospital admission. CONCLUSION: Results suggest disparate provision of several types of routine ED care for adult Deaf ASL users. Limitations include the observational design at a single site and reliance on the medical record, underscoring the need for further research into the potential reasons for disparate ED care for Deaf individuals.
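
As an illustration of the kind of confounder-adjusted odds-ratio analysis reported above, the following Python sketch fits a logistic regression with statsmodels; the variable names and synthetic records are invented for illustration and bear no relation to the study's data or results.

    # Hedged sketch of an adjusted-odds-ratio analysis. All variables and
    # values are hypothetical stand-ins for the study's measures.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 400
    df = pd.DataFrame({
        "labs_ordered": rng.integers(0, 2, n),   # outcome: laboratory tests ordered
        "deaf_asl_user": rng.integers(0, 2, n),  # exposure group
        "age": rng.integers(18, 90, n),          # example confounders
        "high_acuity": rng.integers(0, 2, n),
    })

    result = smf.logit("labs_ordered ~ deaf_asl_user + age + high_acuity", data=df).fit()
    odds_ratios = np.exp(result.params)      # adjusted odds ratios
    conf_int = np.exp(result.conf_int())     # 95% confidence intervals
    print(odds_ratios["deaf_asl_user"], conf_int.loc["deaf_asl_user"].tolist())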


Subject(s)
Emergency Medical Services , Sign Language , Adult , Humans , United States , Retrospective Studies , Emergency Treatment , Emergency Service, Hospital
15.
Front Nutr ; 10: 1125075, 2023.
Article in English | MEDLINE | ID: mdl-37090777

ABSTRACT

Deaf and Hard of Hearing (DHH) patients are at high risk of developing chronic illness and, when they do, are at higher risk of poor outcomes than their hearing counterparts. Rochester Lifestyle Medicine Institute adapted its online, Zoom-based, medically facilitated 15-Day Whole-Food Plant-Based (WFPB) Jumpstart program to give DHH participants the knowledge, skills, and support to make dietary changes that improve their health. Adaptations included having a medical provider present who is fluent in American Sign Language (ASL), is board-certified in Lifestyle Medicine, and has a Master of Science in Deaf Education; spotlighting participants when asking a question during the Q&A session; using ASL interpreters; utilizing closed captioning/automatic transcription during all Zoom meetings; and employing a Success Specialist to provide outreach via text and email throughout the program. Participants had significant positive changes in their eating pattern. They reported improvements in biometric measures as well as in how they were feeling. They all reported that they planned to continue to eat a more WFPB diet than they did prior to Jumpstart. All either agreed or strongly agreed that they learned important information, were confident that they knew the best eating pattern for health, and gained the skills they needed to make changes. Although this was a small pilot program, it suggests that this model can be used to provide education and support for behavior change that will lead to improved health in the DHH community.

16.
Acta Psychol (Amst) ; 236: 103923, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37087958

ABSTRACT

For sign languages, transitional movements of the hands are fully visible and may be used to predict upcoming linguistic input. We investigated whether and how deaf signers and hearing nonsigners use transitional information to detect a target item in a string of either pseudosigns or grooming gestures, as well as whether motor imagery ability was related to this skill. Transitional information between items was either intact (Normal videos), digitally altered such that the hands were selectively blurred (Blurred videos), or edited to only show the frame prior to the transition which was frozen for the entire transition period, removing all transitional information (Static videos). For both pseudosigns and gestures, signers and nonsigners had faster target detection times for Blurred than Static videos, indicating similar use of movement transition cues. For linguistic stimuli (pseudosigns), only signers made use of transitional handshape information, as evidenced by faster target detection times for Normal than Blurred videos. This result indicates that signers can use their linguistic knowledge to interpret transitional handshapes to predict the upcoming signal. Signers and nonsigners did not differ in motor imagery abilities, but only non-signers exhibited evidence of using motor imagery as a prediction strategy. Overall, these results suggest that signers use transitional movement and handshape cues to facilitate sign recognition.


Subject(s)
Gestures , Hearing , Humans , Cues , Linguistics , Sign Language , Perception
17.
Multimed Tools Appl ; : 1-31, 2023 Jan 31.
Article in English | MEDLINE | ID: mdl-36743996

ABSTRACT

In recent years, researchers have focused on developing human-computer interfaces that are fast, intuitive, and allow direct interaction with the computing environment. One of the most natural ways of communicating is through hand gestures. Many systems have been developed to recognize hand gestures using vision-based techniques, but these systems are highly affected by acquisition constraints such as resolution, noise, lighting conditions, hand shape, and pose. To improve performance under such constraints, we propose a static and dynamic hand gesture recognition system that uses the Dual-Tree Complex Wavelet Transform to produce an approximation image with less noise and redundancy. The Histogram of Oriented Gradients is then applied to the resulting image to extract relevant information and produce a compact feature vector. For classification, we compare the performance of three artificial neural networks, namely MLP, PNN, and RBNN; Random Decision Forest and SVM classifiers are also used to improve the performance of our system. Experimental evaluation is performed on four datasets composed of alphabet signs and dynamic gestures. The results demonstrate the effectiveness of the combined features, with recognition rates comparable to the state of the art.
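
A hedged sketch of the described pipeline (DT-CWT approximation image, HOG features, SVM classifier) using the dtcwt, scikit-image, and scikit-learn packages is shown below; the decomposition level, image size, HOG parameters, and classifier settings are assumptions, and the MLP/PNN/RBNN variants are not included.

    # Hedged sketch of a DT-CWT + HOG + SVM gesture classifier. Parameter
    # choices are illustrative, not the paper's configuration.
    import dtcwt
    import numpy as np
    from skimage.feature import hog
    from skimage.transform import resize
    from sklearn.svm import SVC

    def gesture_features(gray_image: np.ndarray) -> np.ndarray:
        # Dual-Tree Complex Wavelet Transform: keep the low-pass approximation,
        # which suppresses noise and redundancy in the original image.
        pyramid = dtcwt.Transform2d().forward(gray_image, nlevels=2)
        approx = resize(pyramid.lowpass, (64, 64))
        # Histogram of Oriented Gradients on the approximation image.
        return hog(approx, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2))

    def train_classifier(train_images, train_labels):
        # train_images: grayscale gesture images; train_labels: sign labels.
        features = np.stack([gesture_features(img) for img in train_images])
        clf = SVC(kernel="rbf", C=10.0)
        return clf.fit(features, train_labels)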

18.
Neuropsychologia ; 183: 108516, 2023 May 3.
Article in English | MEDLINE | ID: mdl-36796720

ABSTRACT

Prior research has found that iconicity facilitates sign production in picture-naming paradigms and has effects on ERP components. These findings may be explained by two separate hypotheses: (1) a task-specific hypothesis that suggests these effects occur because visual features of the iconic sign form can map onto the visual features of the pictures, and (2) a semantic feature hypothesis that suggests that the retrieval of iconic signs results in greater semantic activation due to the robust representation of sensory-motor semantic features compared to non-iconic signs. To test these two hypotheses, iconic and non-iconic American Sign Language (ASL) signs were elicited from deaf native/early signers using a picture-naming task and an English-to-ASL translation task, while electrophysiological recordings were made. Behavioral facilitation (faster response times) and reduced negativities were observed for iconic signs (both prior to and within the N400 time window), but only in the picture-naming task. No ERP or behavioral differences were found between iconic and non-iconic signs in the translation task. This pattern of results supports the task-specific hypothesis and provides evidence that iconicity only facilitates sign production when the eliciting stimulus and the form of the sign can visually overlap (a picture-sign alignment effect).


Subject(s)
Electrophysiology , Evoked Potentials , Models, Neurological , Sign Language , Translations , United States , Reaction Time , Photic Stimulation , Semantics , Humans , Deafness/physiopathology , Male , Female , Adult , Analysis of Variance
19.
Med Sci Educ ; 33(1): 11-13, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36713277

ABSTRACT

Language- and culture-concordant healthcare providers improve health outcomes for deaf patients, yet training opportunities are lacking. The Deaf Health Pathway was developed to train medical students on cultural humility and communication in American Sign Language to better connect with deaf community members and bridge the gap in their healthcare.

20.
Lang Cogn ; 14(4): 622-644, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36426211

ABSTRACT

Across sign languages, nouns can be derived from verbs through morphophonological changes in movement by (1) movement reduplication and size reduction or (2) size reduction alone. We asked whether these cross-linguistic similarities arise from cognitive biases in how humans construe objects and actions. We tested nonsigners' sensitivity to differences in noun-verb pairs in American Sign Language (ASL) by asking MTurk workers to match images of actions and objects to videos of ASL noun-verb pairs. Experiment 1a's match-to-sample paradigm revealed that nonsigners interpreted all signs, regardless of lexical class, as actions. The remaining experiments used a forced-matching procedure to avoid this bias. Counter to our predictions, nonsigners associated reduplicated movement with actions, not objects (inverting the sign language pattern), and exhibited a minimal bias to associate large movements with actions (as found in sign languages). Whether signs had pantomimic iconicity did not alter nonsigners' judgments. We speculate that the morphophonological distinctions in noun-verb pairs observed in sign languages did not emerge as a result of cognitive biases, but rather as a result of the linguistic pressures of a growing lexicon and the use of space for verbal morphology. Such pressures may override an initial bias to map reduplicated movement to actions, but nevertheless reflect new iconic mappings shaped by linguistic and cognitive experiences.
