ABSTRACT
Both age-related hearing loss (ARHL) and age-related loss of vestibular function (ARVL) are prevalent conditions with deleterious consequences for health and quality of life. The auditory and vestibular systems rely on a shared sensory organ, the inner ear, and, like other sensory organs, the inner ear is susceptible to the effects of aging; age-related changes in the inner ear are therefore key contributors to both conditions. Despite involving the same sensory structure, ARHL and ARVL are often considered separately. Careful examination of their shared and distinct pathophysiology in the auditory and vestibular end organs of the inner ear can yield insight essential for the development of improved diagnostics and treatments for both conditions. To this end, this review first compares the prevalence patterns of ARHL and ARVL. Next, the normal and age-related changes in the structure and function of the auditory and vestibular end organs are compared. The contributions of several molecular mechanisms, notably inflammaging, oxidative stress, and genetic factors, are then evaluated as possible common culprits linking pathophysiology in the cochlea and the vestibular end organs in ARHL and ARVL. This comparison reveals that the patterns of pathophysiology show similarities but also differences, both between the cochlea and the vestibular end organs and among the vestibular end organs themselves. Future progress will depend on the development and application of new research strategies and on the integrated investigation of ARHL and ARVL in both clinical populations and animal models.
ABSTRACT
Speech comprehension is often thought of as an entirely auditory process, but both normal-hearing and hearing-impaired individuals sometimes use visual attention to disambiguate speech, particularly when it is difficult to hear. Many studies have investigated how visual attention, or the lack thereof, affects the perception of simple speech sounds such as isolated consonants, but there is a gap in the literature concerning visual attention during natural speech comprehension. This gap needs to be addressed, because individuals process the sounds and words of everyday speech differently than they process isolated elements presented without competing sound sources or noise. Moreover, further research is needed on patterns of eye movements during speech comprehension, especially in the presence of noise, as such an investigation would clarify how people strategically use visual information while processing speech. To this end, we conducted an experiment that tracked eye-gaze behavior during a series of listening tasks as a function of the number of speakers, background noise intensity, and the presence or absence of simulated hearing impairment. Our specific aim was to discover how individuals adapt their oculomotor behavior to compensate for the difficulty of the listening scenario, such as when listening in noisy environments or experiencing simulated hearing loss. Speech comprehension difficulty was manipulated by simulating hearing loss and by varying background noise intensity. Results showed that eye movements were affected by the number of speakers, simulated hearing impairment, and the presence of noise. Further, differing levels of signal-to-noise ratio (SNR) led to changes in eye-gaze behavior. Most notably, we found that the addition of visual information (i.e., video plus audio vs. audio alone) enhanced speech comprehension, highlighting the strategic use of visual information during this process.