ABSTRACT
Disability is an important and often overlooked component of diversity. Individuals with disabilities bring a rare perspective to science, technology, engineering, mathematics, and medicine (STEMM) because of their unique experiences approaching complex issues related to health and disability, navigating the healthcare system, creatively solving problems unfamiliar to many individuals without disabilities, managing time and resources that are limited by physical or mental constraints, and advocating for themselves and others in the disabled community. Yet, individuals with disabilities are underrepresented in STEMM. Professional organizations can address this underrepresentation by recruiting individuals with disabilities for leadership opportunities, easing financial burdens, providing equal access, fostering peer-mentor groups, and establishing a culture of equity and inclusion spanning all facets of diversity. We are a group of deaf and hard-of-hearing (D/HH) engineers, scientists, and clinicians, most of whom are active in clinical practice and/or auditory research. We have worked within our professional societies to improve access and inclusion for D/HH individuals and others with disabilities. We describe how different models of disability inform our understanding of disability as a form of diversity. We address heterogeneity within disabled communities, including intersectionality between disability and other forms of diversity. We highlight how the Association for Research in Otolaryngology has supported our efforts to reduce ableism and promote access and inclusion for D/HH individuals. We also discuss future directions and challenges. The tools and approaches discussed here can be applied by other professional organizations to include individuals with all forms of diversity in STEMM.
ABSTRACT
The basis for individual differences in the degree to which visual speech input enhances comprehension of acoustically degraded speech is largely unknown. Previous research indicates that fine facial detail is not critical for visual enhancement when auditory information is available; however, these studies did not examine individual differences in ability to make use of fine facial detail in relation to audiovisual speech perception ability. Here, we compare participants based on their ability to benefit from visual speech information in the presence of an auditory signal degraded with noise, modulating the resolution of the visual signal through low-pass spatial frequency filtering and monitoring gaze behavior. Participants who benefited most from the addition of visual information (high visual gain) were more adversely affected by the removal of high spatial frequency information, compared to participants with low visual gain, for materials with both poor and rich contextual cues (i.e., words and sentences, respectively). Differences as a function of gaze behavior between participants with the highest and lowest visual gains were observed only for words, with participants with the highest visual gain fixating longer on the mouth region. Our results indicate that the individual variance in audiovisual speech in noise performance can be accounted for, in part, by better use of fine facial detail information extracted from the visual signal and increased fixation on mouth regions for short stimuli. Thus, for some, audiovisual speech perception may suffer when the visual input (in addition to the auditory signal) is less than perfect.
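The low-pass spatial frequency filtering described above can be sketched as a mask applied in the 2-D Fourier domain of each video frame. This is a minimal illustration under stated assumptions, not the study's stimulus pipeline: the function name, the cutoff expressed in cycles per degree of visual angle, and the viewing-width parameter are all hypothetical choices for the sketch.

```python
import numpy as np

def lowpass_spatial_filter(image, cutoff_cpd, viewing_width_deg):
    """Low-pass filter a grayscale image in the spatial-frequency domain.

    cutoff_cpd: cutoff in cycles per degree of visual angle (hypothetical
    parameter; the abstract does not report the exact cutoffs used).
    viewing_width_deg: horizontal extent of the image in degrees of visual
    angle, used to convert cycles/pixel to cycles/degree.
    """
    h, w = image.shape
    fy = np.fft.fftfreq(h)          # vertical frequencies, cycles/pixel
    fx = np.fft.fftfreq(w)          # horizontal frequencies, cycles/pixel
    px_per_deg = w / viewing_width_deg
    # radial spatial frequency of every FFT bin, in cycles/degree
    radius = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2) * px_per_deg
    mask = (radius <= cutoff_cpd).astype(float)
    spectrum = np.fft.fft2(image)
    # zero out frequencies above the cutoff and return to the image domain
    return np.real(np.fft.ifft2(spectrum * mask))
```

Lowering the cutoff removes fine facial detail (high spatial frequencies) while preserving coarse configuration, which is the manipulation the study relates to individual differences in visual gain.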
Subjects
Acoustic Stimulation/methods, Photic Stimulation/methods, Speech Perception, Visual Perception, Adolescent, Adult, Comprehension, Cues (Psychology), Female, Ocular Fixation, Humans, Individuality, Male, Noise, Spatial Processing, Young Adult

ABSTRACT
Understanding speech in the presence of background sound can be challenging for older adults. Speech comprehension in noise appears to depend on working memory and executive-control processes (e.g., Heald and Nusbaum, 2014), and their augmentation through training may have rehabilitative potential for age-related hearing loss. We examined the efficacy of adaptive working-memory training (Cogmed; Klingberg et al., 2002) in 24 older adults, assessing generalization to other working-memory tasks (near transfer) and to other cognitive domains (far transfer) using a cognitive test battery, including the Reading Span test, sensitive to working memory (e.g., Daneman and Carpenter, 1980). We also assessed far transfer to speech-in-noise performance, including a closed-set sentence task (Kidd et al., 2008). To examine the effect of cognitive training on the benefit obtained from semantic context, we also assessed transfer to open-set sentences; half were semantically coherent (high-context) and half were semantically anomalous (low-context). Subjects completed 25 sessions (0.5-1 h each; 5 sessions/week) each of adaptive working-memory training and placebo training over 10 weeks in a crossover design. Subjects' scores on the adaptive working-memory training tasks improved as a result of training. However, training did not transfer to other working-memory tasks, nor to tasks recruiting other cognitive domains. We did not observe any training-related improvement in speech-in-noise performance. Measures of working memory correlated with the intelligibility of low-context, but not high-context, sentences, suggesting that sentence context may reduce the load on working memory. The Reading Span test correlated significantly only with a test of visual episodic memory, suggesting that the Reading Span test is not a pure test of working memory, as is commonly assumed.
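Constructing speech-in-noise test materials of the kind described above typically involves mixing a speech signal with a masker at a specified signal-to-noise ratio (SNR). The sketch below shows the standard power-ratio calculation; the function name and the use of a generic noise array are assumptions for illustration, not details from the study.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`,
    then return the mixture.

    SNR (dB) = 10 * log10(P_speech / P_noise), with power measured as
    the mean squared amplitude of each signal.
    """
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # noise power needed to achieve the requested SNR
    target_p_noise = p_speech / (10 ** (snr_db / 10))
    scaled_noise = noise * np.sqrt(target_p_noise / p_noise)
    return speech + scaled_noise
```

Lower (more negative) SNR values make the task harder, which is how speech-in-noise tests probe listening under degraded conditions.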
ABSTRACT
Accumulating evidence points to a link between age-related hearing loss and cognitive decline, but their relationship is not clear. Does one cause the other, or does some third factor produce both? The answer has critical implications for prevention, rehabilitation, and health policy, but has been difficult to establish for several reasons: determining a causal relationship in natural, correlational samples is problematic, and hearing and cognition are difficult to measure independently. Here, we critically review the evidence for a link between hearing loss and cognitive decline. We conclude that the evidence is convincing, but that the effects are small when hearing is measured audiometrically. We review four different directional hypotheses that have been offered as explanations for such a link, and conclude that no single hypothesis is sufficient. We introduce a framework that highlights that hearing and cognition rely on shared neurocognitive resources and relate to each other in several different ways. We also discuss interventions for sensory and cognitive decline that may permit more causal inferences.
Subjects
Aging/pathology, Aging/psychology, Cognitive Disorders/etiology, Hearing Loss/etiology, Cognitive Disorders/pathology, Hearing Loss/pathology, Humans

ABSTRACT
Following cochlear implantation, hearing-impaired listeners must adapt to speech as heard through their prosthesis. Visual speech information (VSI; the lip and facial movements of speech) is typically available in everyday conversation. Here, we investigate whether learning to understand a popular auditory simulation of speech as transduced by a cochlear implant (noise-vocoded [NV] speech) is enhanced by the provision of VSI. Experiment 1 demonstrates that provision of VSI concurrently with a clear auditory form of an utterance as feedback after each NV utterance during training does not enhance learning over clear auditory feedback alone, suggesting that VSI does not play a special role in retuning of perceptual representations of speech. Experiment 2 demonstrates that provision of VSI concurrently with NV speech (a simulation of typical real-world experience) facilitates perceptual learning of NV speech, but only when an NV-only repetition of each utterance is presented after the composite NV/VSI form during training. Experiment 3 shows that this more efficient learning of NV speech is probably due to the additional listening effort required to comprehend the utterance when clear feedback is never provided and is not specifically due to the provision of VSI. Our results suggest that rehabilitation after cochlear implantation does not necessarily require naturalistic audiovisual input, but may be most effective when (a) training utterances are relatively intelligible (approximately 85% of words reported correctly during effortful listening), and (b) the individual has the opportunity to map what they know of an utterance's linguistic content onto the degraded form.
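Noise-vocoded (NV) speech, the cochlear-implant simulation described above, replaces the spectral fine structure in each frequency band with noise while preserving that band's amplitude envelope. Below is a minimal numpy-only sketch of the idea; the band count, frequency edges, 30 Hz envelope cutoff, and function name are illustrative assumptions, not the authors' exact stimulus parameters.

```python
import numpy as np

def noise_vocode(signal, fs, n_bands=4, fmin=100.0, fmax=8000.0):
    """Simple FFT-based noise vocoder (a sketch, not the study's pipeline).

    Splits `signal` into n_bands logarithmically spaced bands, extracts each
    band's amplitude envelope, and uses it to modulate band-limited noise.
    """
    n = len(signal)
    edges = np.geomspace(fmin, fmax, n_bands + 1)   # band boundaries in Hz
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectrum = np.fft.rfft(signal)
    rng = np.random.default_rng(0)
    noise_spec = np.fft.rfft(rng.standard_normal(n))
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(spectrum * band_mask, n=n)
        # crude envelope: rectify, then low-pass below 30 Hz in the FFT domain
        env = np.fft.irfft(np.fft.rfft(np.abs(band)) * (freqs < 30.0), n=n)
        env = np.clip(env, 0.0, None)
        # band-limited noise carrier, modulated by the envelope
        carrier = np.fft.irfft(noise_spec * band_mask, n=n)
        out += env * carrier
    return out
```

With few bands the result is hard to understand at first but becomes intelligible with exposure, which is the perceptual-learning phenomenon the experiments above exploit.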