Results 1 - 20 of 2,857
1.
J Neural Eng ; 21(3)2024 May 22.
Article in English | MEDLINE | ID: mdl-38729132

ABSTRACT

Objective: This study develops a deep learning (DL) method for fast auditory attention decoding (AAD) using electroencephalography (EEG) from listeners with hearing impairment (HI). It addresses three classification tasks: differentiating noise from speech-in-noise, classifying the direction of attended speech (left vs. right), and identifying the activation status of hearing aid noise reduction algorithms (OFF vs. ON). These tasks contribute to our understanding of how hearing technology influences auditory processing in the hearing-impaired population. Approach: Deep convolutional neural network (DCNN) models were designed for each task. Two training strategies were employed to clarify the impact of data splitting on AAD tasks: inter-trial, where the testing set used classification windows from trials that the training set had not seen, and intra-trial, where the testing set used unseen classification windows from trials whose other segments were seen during training. The models were evaluated on EEG data from 31 participants with HI listening to competing talkers amidst background noise. Main results: Using 1 s classification windows, the DCNN models achieved accuracy (ACC) of 69.8%, 73.3%, and 82.9% and area under the curve (AUC) of 77.2%, 80.6%, and 92.1% for the three tasks, respectively, with the inter-trial strategy. With the intra-trial strategy, they achieved ACC of 87.9%, 80.1%, and 97.5%, along with AUC of 94.6%, 89.1%, and 99.8%. Our DCNN models show good performance on short 1 s EEG samples, making them suitable for real-world applications. Conclusion: Our DCNN models successfully addressed three tasks with short 1 s EEG windows from participants with HI, showcasing their potential. While the inter-trial strategy demonstrated promise for assessing AAD, the intra-trial approach yielded inflated results, underscoring the importance of proper data splitting in EEG-based AAD tasks. Significance: Our findings showcase the promising potential of EEG-based tools for assessing auditory attention in clinical contexts and advancing hearing technology, while also promoting further exploration of alternative DL architectures and their potential constraints.
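
The distinction between the two training strategies is the methodological crux of this abstract. The sketch below is not the authors' code; the array shapes, sampling rate, and 50/50 window split are illustrative assumptions. It shows how 1 s EEG classification windows end up in the training and testing sets under inter-trial versus intra-trial splitting, and why the latter can leak trial-specific information and inflate scores.

    # Minimal sketch of the two data-splitting strategies described above.
    # Synthetic shapes (n_trials, n_channels, sampling rate) are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 64                                   # assumed sampling rate (Hz)
    n_trials, n_channels, trial_sec = 20, 16, 10
    eeg = rng.standard_normal((n_trials, n_channels, trial_sec * fs))

    def windows(trial, win_sec=1):
        """Cut one trial into non-overlapping 1 s classification windows."""
        n = trial.shape[-1] // (win_sec * fs)
        return np.stack(np.split(trial[..., :n * win_sec * fs], n, axis=-1))

    # Inter-trial: test windows come only from trials never seen during training.
    train_trials, test_trials = np.arange(0, 15), np.arange(15, 20)
    inter_train = np.concatenate([windows(eeg[t]) for t in train_trials])
    inter_test = np.concatenate([windows(eeg[t]) for t in test_trials])

    # Intra-trial: every trial contributes windows to both sets, so a model can
    # exploit trial-specific statistics, which is the inflation the authors warn about.
    intra_train, intra_test = [], []
    for t in range(n_trials):
        w = windows(eeg[t])
        intra_train.append(w[: len(w) // 2])
        intra_test.append(w[len(w) // 2:])
    intra_train, intra_test = np.concatenate(intra_train), np.concatenate(intra_test)

    print(inter_train.shape, inter_test.shape, intra_train.shape, intra_test.shape)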


Subject(s)
Attention , Auditory Perception , Deep Learning , Electroencephalography , Hearing Loss , Humans , Attention/physiology , Female , Electroencephalography/methods , Male , Middle Aged , Hearing Loss/physiopathology , Hearing Loss/rehabilitation , Hearing Loss/diagnosis , Aged , Auditory Perception/physiology , Noise , Adult , Hearing Aids , Speech Perception/physiology , Neural Networks, Computer
2.
Trends Hear ; 28: 23312165241252240, 2024.
Article in English | MEDLINE | ID: mdl-38715410

ABSTRACT

In recent years, tools for early detection of irreversible trauma to the basilar membrane during hearing preservation cochlear implant (CI) surgery were established in several clinics. A link with the degree of postoperative hearing preservation in patients was investigated, but patient populations were usually small. Therefore, this study's aim was to analyze data from intraoperative extracochlear electrocochleography (ECochG) recordings for a larger group. During hearing preservation CI surgery, extracochlear recordings were made before, during, and after CI electrode insertion using a cotton wick electrode placed at the promontory. Before and after insertion, amplitudes and stimulus response thresholds were recorded at 250, 500, and 1000 Hz. During insertion, response amplitudes were recorded at one frequency and one stimulus level. Data from 121 patient ears were analyzed. The key benefit of extracochlear recordings is that they can be performed before, during, and after CI electrode insertion. However, extracochlear ECochG threshold changes before and after CI insertion were relatively small and did not independently correlate well with hearing preservation, although at 250 Hz they added some significant information. Some tendencies, although no significant relationships, were detected between amplitude behavior and hearing preservation. Rising amplitudes seem favorable and falling amplitudes disadvantageous, but constant amplitudes do not appear to allow stringent predictions. Extracochlear ECochG measurements seem to only partially realize expected benefits. The questions now are: do gains justify the effort, and do other procedures or possible combinations lead to greater benefits for patients?


Subject(s)
Audiometry, Evoked Response , Auditory Threshold , Cochlea , Cochlear Implantation , Cochlear Implants , Hearing , Humans , Audiometry, Evoked Response/methods , Retrospective Studies , Cochlear Implantation/instrumentation , Female , Middle Aged , Male , Aged , Adult , Hearing/physiology , Cochlea/surgery , Cochlea/physiopathology , Treatment Outcome , Adolescent , Predictive Value of Tests , Young Adult , Child , Audiometry, Pure-Tone , Aged, 80 and over , Child, Preschool , Hearing Loss/diagnosis , Hearing Loss/physiopathology , Hearing Loss/surgery , Hearing Loss/rehabilitation
3.
Eur J Neurosci ; 59(9): 2373-2390, 2024 May.
Article in English | MEDLINE | ID: mdl-38303554

ABSTRACT

Humans have the remarkable ability to integrate information from different senses, which greatly facilitates the detection, localization and identification of events in the environment. About 466 million people worldwide suffer from hearing loss. Yet, the impact of hearing loss on how the senses work together is rarely investigated. Here, we investigate how a common sensory impairment, asymmetric conductive hearing loss (AHL), alters the way our senses interact by examining human orienting behaviour with normal hearing (NH) and acute AHL. This type of hearing loss disrupts auditory localization. We hypothesized that this creates a conflict between auditory and visual spatial estimates and alters how auditory and visual inputs are integrated to facilitate multisensory spatial perception. We analysed the spatial and temporal properties of saccades to auditory, visual and audiovisual stimuli before and after plugging the right ear of participants. Both spatial and temporal aspects of multisensory integration were affected by AHL. Compared with NH, AHL caused participants to make slow, inaccurate and imprecise saccades towards auditory targets. Surprisingly, increased weight on visual input resulted in accurate audiovisual localization with AHL. This came at a cost: saccade latencies for audiovisual targets increased significantly. The larger the auditory localization errors, the less participants were able to benefit from audiovisual integration in terms of saccade latency. Our results indicate that observers immediately change sensory weights to effectively deal with acute AHL and preserve audiovisual accuracy in a way that cannot be fully explained by statistical models of optimal cue integration.
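
The "statistical models of optimal cue integration" mentioned here refer to the standard reliability-weighted (maximum-likelihood) combination of auditory and visual estimates. A minimal sketch of that benchmark follows; the localization values and variances are invented purely for illustration.

    # Standard reliability-weighted (maximum-likelihood) cue combination, the
    # benchmark model the authors compare against. Numbers below are made up.
    def optimal_integration(x_aud, var_aud, x_vis, var_vis):
        """Return the MLE audiovisual estimate and its predicted variance."""
        w_aud = (1 / var_aud) / (1 / var_aud + 1 / var_vis)
        w_vis = 1 - w_aud
        x_av = w_aud * x_aud + w_vis * x_vis
        var_av = (var_aud * var_vis) / (var_aud + var_vis)
        return x_av, var_av

    # Plugging one ear degrades auditory reliability, so the model predicts the
    # combined estimate should shift toward the visual estimate (w_vis grows).
    print(optimal_integration(x_aud=10.0, var_aud=4.0, x_vis=12.0, var_vis=1.0))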


Subject(s)
Sound Localization , Visual Perception , Humans , Female , Adult , Male , Visual Perception/physiology , Sound Localization/physiology , Young Adult , Saccades/physiology , Auditory Perception/physiology , Hearing Loss/physiopathology , Photic Stimulation/methods , Acoustic Stimulation/methods , Space Perception/physiology
4.
Nat Rev Nephrol ; 20(5): 295-312, 2024 May.
Article in English | MEDLINE | ID: mdl-38287134

ABSTRACT

Hearing loss affects nearly 1.6 billion people and is the third-leading cause of disability worldwide. Chronic kidney disease (CKD) is also a common condition that is associated with adverse clinical outcomes and high health-care costs. From a developmental perspective, the structures responsible for hearing have a common morphogenetic origin with the kidney, and genetic abnormalities that cause familial forms of hearing loss can also lead to kidney disease. On a cellular level, normal kidney and cochlea function both depend on cilial activities at the apical surface, and kidney tubular cells and sensory epithelial cells of the inner ear use similar transport mechanisms to modify luminal fluid. The two organs also share the same collagen IV basement membrane network. Thus, strong developmental and physiological links exist between hearing and kidney function. These theoretical considerations are supported by epidemiological data demonstrating that CKD is associated with a graded and independent excess risk of sensorineural hearing loss. In addition to developmental and physiological links between kidney and cochlear function, hearing loss in patients with CKD may be driven by specific medications or treatments, including haemodialysis. The associations between these two common conditions are not commonly appreciated, yet have important implications for research and clinical practice.


Subject(s)
Renal Insufficiency, Chronic , Humans , Renal Insufficiency, Chronic/physiopathology , Renal Insufficiency, Chronic/complications , Hearing Loss/etiology , Hearing Loss/physiopathology , Hearing Loss, Sensorineural/etiology , Hearing Loss, Sensorineural/physiopathology
5.
JAMA Otolaryngol Head Neck Surg ; 149(7): 571-578, 2023 07 01.
Article in English | MEDLINE | ID: mdl-37166823

ABSTRACT

Importance: Hearing loss is the most important modifiable risk factor for cognitive impairment; however, the association of hearing loss with anatomical and functional connectivity is not fully understood. This association may be elucidated by evaluating the findings of newer imaging technologies. Objectives: To evaluate the association of hearing loss with anatomical and functional connectivity in patients with mild cognitive impairment (MCI) by using multimodal imaging technology. Design, Setting, and Participants: This was a prospective cross-sectional study of patients with MCI under the care of a neurology clinic at the Soonchunhyang University Bucheon Hospital (Republic of Korea) from April to September 2021. Data were analyzed from April 1 to June 30, 2022. Main Outcomes and Measures: Pure tone averages (PTA) and word recognition scores were used to measure hearing acuity. Magnetic resonance imaging (MRI) and positron emission tomography scans of the brain were used to assess functional and anatomical connectivity. Results of diffusion MRI, voxel- and surface-based morphometric imaging, and global brain amyloid standardized uptake ratio were analyzed. Neuroimaging parameters of patients with MCI plus hearing loss were compared with those of patients with MCI and no hearing loss. Correlation analyses among neuroimaging parameters, PTA, and word recognition scores were performed. Results: Of 48 patients with MCI, 30 (62.5%) had hearing loss (PTA >25 dB) and 18 (37.5%) did not (PTA ≤25 dB). Median (IQR) age was 73.5 (69.0-78.0) years in the group with hearing loss and 75.0 (65.0-78.0) years in the group with normal hearing; there were 20 (66.7%) and 14 (77.8%) women in each group, respectively. The group with MCI plus hearing loss demonstrated decreased functional connectivity between the bilateral insular and anterior divisions of the cingulate cortex, and decreased fractional anisotropy in the bilateral fornix, corpus callosum forceps major and tapetum, left parahippocampal cingulum, and left superior thalamic radiation. Fractional anisotropy in the corpus callosum forceps major and bilateral parahippocampal cingulum negatively correlated with the severity of hearing loss shown by PTA testing. The 2 groups were not significantly different in global β-amyloid uptake, gray matter volume, and cortical thickness. Conclusion and Relevance: The findings of this prospective cross-sectional study suggest that alterations in the salience network may contribute to the neural basis of cognitive impairment associated with hearing loss in patients who are on the Alzheimer disease continuum.


Subject(s)
Alzheimer Disease , Cognitive Dysfunction , Deafness , Hearing Loss , Humans , Female , Aged , Male , Cross-Sectional Studies , Prospective Studies , Neuropsychological Tests , Brain/diagnostic imaging , Magnetic Resonance Imaging/methods , Alzheimer Disease/diagnosis , Hearing Loss/physiopathology , Deafness/physiopathology
6.
J Biol Chem ; 299(5): 104631, 2023 05.
Article in English | MEDLINE | ID: mdl-36963494

ABSTRACT

For decades, sarcomeric myosin heavy chain proteins were assumed to be restricted to striated muscle where they function as molecular motors that contract muscle. However, MYH7b, an evolutionarily ancient member of this myosin family, has been detected in mammalian nonmuscle tissues, and mutations in MYH7b are linked to hereditary hearing loss in compound heterozygous patients. These mutations are the first associated with hearing loss rather than a muscle pathology, and because there are no homologous mutations in other myosin isoforms, their functional effects were unknown. We generated recombinant human MYH7b harboring the D515N or R1651Q hearing loss-associated mutation and studied their effects on motor activity and structural and assembly properties, respectively. The D515N mutation had no effect on steady-state actin-activated ATPase rate or load-dependent detachment kinetics but increased actin sliding velocity because of an increased displacement during the myosin working stroke. Furthermore, we found that the D515N mutation caused an increase in the proportion of myosin heads that occupy the disordered-relaxed state, meaning more myosin heads are available to interact with actin. Although we found no impact of the R1651Q mutation on myosin rod secondary structure or solubility, we observed a striking aggregation phenotype when this mutation was introduced into nonmuscle cells. Our results suggest that each mutation independently affects MYH7b function and structure. Together, these results provide the foundation for further study of a role for MYH7b outside the sarcomere.


Subject(s)
Hearing Loss , Myosin Heavy Chains , Animals , Humans , Mice , Actins/metabolism , Cell Line , Chlorocebus aethiops , COS Cells , Hearing Loss/genetics , Hearing Loss/physiopathology , Kinetics , Mutation , Myosin Heavy Chains/genetics , Myosin Heavy Chains/metabolism , Protein Aggregates/genetics , Recombinant Proteins/genetics , Recombinant Proteins/metabolism
7.
Trends Hear ; 27: 23312165231151468, 2023.
Article in English | MEDLINE | ID: mdl-36946195

ABSTRACT

Electroencephalography could serve as an objective tool to evaluate hearing aid benefit in infants who are developmentally unable to participate in hearing tests. We investigated whether speech-evoked envelope following responses (EFRs), a type of electroencephalography-based measure, could predict improved audibility with the use of a hearing aid in children with mild-to-severe permanent, mainly sensorineural, hearing loss. In 18 children, EFRs were elicited by six male-spoken band-limited phonemic stimuli (the first formants of /u/ and /i/, the second and higher formants of /u/ and /i/, and the fricatives /s/ and /ʃ/) presented together as /suʃi/. EFRs were recorded between the vertex and nape, when /suʃi/ was presented at 55, 65, and 75 dB SPL using insert earphones in unaided conditions and individually fit hearing aids in aided conditions. EFR amplitude and detectability improved with the use of a hearing aid, and the degree of improvement in EFR amplitude was dependent on the extent of change in behavioral thresholds between unaided and aided conditions. EFR detectability was primarily influenced by audibility; higher sensation level stimuli had an increased probability of detection. Overall EFR sensitivity in predicting audibility was significantly higher in aided (82.1%) than unaided conditions (66.5%) and did not vary as a function of stimulus or frequency. EFR specificity in ascertaining inaudibility was 90.8%. Aided improvement in EFR detectability was a significant predictor of hearing aid-facilitated change in speech discrimination accuracy. Results suggest that speech-evoked EFRs could be a useful objective tool in predicting hearing aid benefit in children with hearing loss.


Subject(s)
Hearing Aids , Hearing Loss , Speech Perception , Adolescent , Child , Female , Humans , Male , Evoked Potentials, Auditory , Hearing Loss/physiopathology , Hearing Loss/therapy , Speech Perception/physiology , Speech/physiology
8.
J Speech Lang Hear Res ; 65(7): 2709-2719, 2022 07 18.
Article in English | MEDLINE | ID: mdl-35728021

ABSTRACT

PURPOSE: The effect of onset asynchrony on dichotic vowel segregation and identification in normal-hearing (NH) and hearing-impaired (HI) listeners was examined. We hypothesized that fusion would decrease and identification performance would improve with increasing onset asynchrony. Additionally, we hypothesized that HI listeners would gain more benefit from onset asynchrony. METHOD: A total of 18 adult subjects (nine NH, nine HI) participated. Testing included dichotic presentation of synthetic vowels, /i/, /u/, /a/, and /ae/. Vowel pairs were presented with the same or different fundamental frequency (fo; fo = 106.9, 151.2, or 201.8 Hz) across the two ears and one onset asynchrony of 0, 1, 2, 4, 10, or 20 ms throughout a block (one block = 80 runs). Subjects identified the one or two vowels that they perceived on a touchscreen. Subjects were not informed that two vowels were always presented or that there was onset asynchrony. RESULTS: The effect of onset asynchrony on fusion and vowel identification was greatest in both groups when Δfo = 0 Hz. Mean fusion scores across increasing onset asynchronies differed significantly between the two groups, with HI listeners exhibiting less fusion across pooled Δfo. There was no significant difference with identification performance. CONCLUSIONS: As onset asynchrony increased, dichotic vowel fusion decreased and identification performance improved. Onset asynchrony exerted a greater effect on fusion and identification of vowels when Δfo = 0, especially in HI listeners. Therefore, the temporal cue promotes segregation in both groups of listeners, especially in HI listeners when the fo cue was unavailable.


Subject(s)
Cues , Hearing Loss , Hearing , Speech Perception , Adult , Hearing/physiology , Hearing Loss/physiopathology , Humans , Speech Perception/physiology
9.
J Speech Lang Hear Res ; 65(6): 2343-2363, 2022 06 08.
Article in English | MEDLINE | ID: mdl-35623338

ABSTRACT

PURPOSE: Growing evidence suggests that fatigue associated with listening difficulties is particularly problematic for children with hearing loss (CHL). However, sensitive, reliable, and valid measures of listening-related fatigue do not exist. To address this gap, this article describes the development, psychometric evaluation, and preliminary validation of a suite of scales designed to assess listening-related fatigue in CHL: the pediatric versions of the Vanderbilt Fatigue Scale (VFS-Peds). METHOD: Test development employed best practices, including operationalizing the construct of listening-related fatigue from the perspective of target respondents (i.e., children, their parents, and teachers). Test items were developed based on input from these groups. Dimensionality was evaluated using exploratory factor analyses (EFAs). Item response theory (IRT) and differential item functioning (DIF) analyses were used to identify high-quality items, which were further evaluated and refined to create the final versions of the VFS-Peds. RESULTS: The VFS-Peds is appropriate for use with children aged 6-17 years and consists of child self-report (VFS-C), parent proxy-report (VFS-P), and teacher proxy-report (VFS-T) scales. EFA of child self-report and teacher proxy data suggested that listening-related fatigue was unidimensional in nature. In contrast, parent data suggested a multidimensional construct, composed of mental (cognitive, social, and emotional) and physical domains. IRT analyses suggested that items were of good quality, with high information and good discriminability. DIF analyses revealed the scales provided a comparable measure of fatigue regardless of the child's gender, age, or hearing status. Test information was acceptable over a wide range of fatigue severities and all scales yielded acceptable reliability and validity. CONCLUSIONS: This article describes the development, psychometric evaluation, and validation of the VFS-Peds. Results suggest that the VFS-Peds provide a sensitive, reliable, and valid measure of listening-related fatigue in children that may be appropriate for clinical use. Such scales could be used to identify those children most affected by listening-related fatigue, and given their apparent sensitivity, the scales may also be useful for examining the effectiveness of potential interventions targeting listening-related fatigue in children. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.19836154.


Subject(s)
Auditory Perception , Hearing Loss , Mental Fatigue , Surveys and Questionnaires , Adolescent , Auditory Perception/physiology , Child , Hearing Loss/physiopathology , Humans , Mental Fatigue/diagnosis , Parents , Proxy , Psychometrics , Reproducibility of Results , School Teachers
10.
Sci Rep ; 12(1): 3083, 2022 02 23.
Article in English | MEDLINE | ID: mdl-35197556

ABSTRACT

Although significant progress has been made in understanding outcomes following cochlear implantation, predicting performance remains a challenge. Duration of hearing loss, age at implantation, and electrode positioning within the cochlea together explain ~ 25% of the variability in speech-perception scores in quiet using the cochlear implant (CI). Electrocochleography (ECochG) responses, prior to implantation, account for 47% of the variance in the same speech-perception measures. No study to date has explored CI performance in noise, a more realistic measure of natural listening. This study aimed to (1) validate ECochG total response (ECochG-TR) as a predictor of performance in quiet and (2) evaluate whether ECochG-TR explained variability in noise performance. Thirty-five adult CI recipients were enrolled with outcomes assessed at 3-months post-implantation. The results confirm previous studies showing a strong correlation of ECochG-TR with speech-perception in quiet (r = 0.77). ECochG-TR independently explained 34% of the variability in noise performance. Multivariate modeling using ECochG-TR and Montreal Cognitive Assessment (MoCA) scores explained 60% of the variability in speech-perception in noise. Thus, ECochG-TR, a measure of the cochlear substrate prior to implantation, is necessary but not sufficient for explaining performance in noise. Rather, a cognitive measure is also needed to improve prediction of noise performance.
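
The analysis hinges on comparing explained variance (R²) between a single-predictor model (ECochG-TR alone) and a two-predictor model (ECochG-TR plus MoCA). A minimal sketch of that comparison on simulated data is shown below; the variable names, effect sizes, and noise level are assumptions, not the study's values.

    # Comparing explained variance (R^2) between a single-predictor and a
    # two-predictor linear model. All values below are simulated, not study data.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    n = 35
    ecochg_tr = rng.normal(0.0, 1.0, n)       # stand-in for ECochG total response
    moca = rng.normal(0.0, 1.0, n)            # stand-in for MoCA cognitive score
    speech_in_noise = 0.6 * ecochg_tr + 0.5 * moca + rng.normal(0.0, 0.6, n)

    X_single = ecochg_tr[:, None]
    X_both = np.column_stack([ecochg_tr, moca])
    r2_single = LinearRegression().fit(X_single, speech_in_noise).score(X_single, speech_in_noise)
    r2_both = LinearRegression().fit(X_both, speech_in_noise).score(X_both, speech_in_noise)
    print(f"R^2 ECochG-TR alone: {r2_single:.2f}; with MoCA added: {r2_both:.2f}")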


Subject(s)
Audiometry, Evoked Response , Cochlear Implantation , Cochlear Implants , Cognition/physiology , Hearing Loss/psychology , Hearing Loss/surgery , Noise , Speech Perception/physiology , Adult , Age Factors , Audiometry , Female , Hearing Loss/diagnosis , Hearing Loss/physiopathology , Humans , Male , Treatment Outcome
11.
Neurosci Lett ; 772: 136493, 2022 02 16.
Article in English | MEDLINE | ID: mdl-35114332

ABSTRACT

Autophagy plays a pathogenic role in neurodegenerative disease. However, the involvement of autophagy in the pathogenesis of age-related hearing loss (ARHL) remains obscure. Naturally aged C57BL/6J mice were used to identify the role of autophagy in ARHL, and rapamycin, a mammalian target of rapamycin (mTOR) inhibitor, was administered for 34 weeks to explore the potential therapeutic effect of rapamycin in ARHL. We found that the number of autophagosomes and the expression of microtubule-associated protein 1 light chain 3B (LC3B) decreased as the mice aged. The expression of autophagy-related (Atg) proteins, including Beclin1 and Atg5, and the ratio of LC3-II/I was reduced in aged mice, while mTOR activity in aged mice gradually increased. Rapamycin improved the auditory brainstem response (ABR) threshold (at 8, 12, and 24 kHz). Further exploration demonstrated that spiral ganglion neuron (SGN) density was enhanced in response to administration of rapamycin. The rate of apoptosis in the basal turn SGNs was decreased, whereas autophagy activity was increased in the experimental group. Meanwhile, mTOR activity in the experimental group was decreased. Our findings indicate that age-related deficiency in autophagy may lead to increased apoptosis of aged SGNs. Rapamycin enhances autophagy of SGNs by inhibiting mTOR activation, resulting in amelioration of ARHL. Therapeutic strategy targeting autophagy may provide a potential approach for treating ARHL.


Subject(s)
Aging/pathology , Autophagy , Hearing Loss/drug therapy , Sirolimus/pharmacology , Spiral Ganglion/drug effects , Aging/metabolism , Animals , Autophagy-Related Protein 5/metabolism , Beclin-1/metabolism , Evoked Potentials, Auditory, Brain Stem , Hearing Loss/metabolism , Hearing Loss/physiopathology , Male , Mice , Mice, Inbred C57BL , Microtubule-Associated Proteins/metabolism , Sirolimus/therapeutic use , Spiral Ganglion/metabolism , Spiral Ganglion/physiopathology , TOR Serine-Threonine Kinases/metabolism
12.
PLoS One ; 17(2): e0263516, 2022.
Article in English | MEDLINE | ID: mdl-35134072

ABSTRACT

The ability to determine a sound's location is critical in everyday life. However, sound source localization is severely compromised for patients with hearing loss who receive bilateral cochlear implants (BiCIs). Several patient factors relate to poorer performance in listeners with BiCIs, associated with auditory deprivation, experience, and age. Critically, characteristic errors are made by patients with BiCIs (e.g., medial responses at lateral target locations), and the relationship between patient factors and the type of errors made by patients has seldom been investigated across individuals. In the present study, several different types of analysis were used to understand localization errors and their relationship with patient-dependent factors (selected based on their robustness of prediction). Binaural hearing experience is required for developing accurate localization skills, auditory deprivation is associated with degradation of the auditory periphery, and aging leads to poorer temporal resolution. Therefore, it was hypothesized that earlier onsets of deafness would be associated with poorer localization acuity and longer periods without BiCI stimulation or older age would lead to greater amounts of variability in localization responses. A novel machine learning approach was introduced to characterize the types of errors made by listeners with BiCIs, making them simple to interpret and generalizable to everyday experience. Sound localization performance was measured in 48 listeners with BiCIs using pink noise trains presented in free-field. Our results suggest that older age at testing and earlier onset of deafness are associated with greater average error, particularly for sound sources near the center of the head, consistent with previous research. The machine learning analysis revealed that variability of localization responses tended to be greater for individuals with earlier compared to later onsets of deafness. These results suggest that early bilateral hearing is essential for best sound source localization outcomes in listeners with BiCIs.


Subject(s)
Hearing Loss, Bilateral/physiopathology , Sound Localization/physiology , Acoustic Stimulation/methods , Adult , Age Factors , Age of Onset , Aged , Aged, 80 and over , Auditory Perception/physiology , Cochlear Implantation/methods , Cochlear Implants/adverse effects , Cues , Deafness/physiopathology , Female , Hearing/physiology , Hearing Loss/physiopathology , Hearing Tests , Humans , Male , Middle Aged , Sound
13.
Sci Rep ; 12(1): 402, 2022 01 10.
Article in English | MEDLINE | ID: mdl-35013422

ABSTRACT

There is a lack of studies assessing how hearing impairment relates to reproductive outcomes. We examined whether childhood hearing impairment (HI) affects reproductive patterns based on longitudinal Norwegian population level data for birth cohorts 1940-1980. We used Poisson regression to estimate the association between the number of children ever born and HI. The association with childlessness is estimated by a logit model. As a robustness check, we also estimated family fixed effects Poisson and logit models. Hearing was assessed at ages 7, 10 and 13, and reproduction was observed at adult ages until 2014. Air conduction hearing threshold levels were obtained by pure-tone audiometry at eight frequencies from 0.25 to 8 kHz. Fertility data were collected from Norwegian administrative registers. The combined dataset size was N = 50,022. Our analyses reveal that HI in childhood is associated with lower fertility in adulthood, especially for men. The proportion of childless individuals among those with childhood HI was almost twice as large as that of individuals with normal childhood hearing (20.8% vs. 10.7%). The negative association is robust to the inclusion of family fixed effects in the model that allow to control for the unobserved heterogeneity that are shared between siblings, including factors related to the upbringing and parent characteristics. Less family support in later life could add to the health challenges faced by those with HI. More attention should be given to how fertility relates to HI.
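
The estimation strategy pairs a Poisson model for the number of children ever born with a logit model for childlessness. The sketch below illustrates that setup on simulated data using statsmodels; the covariates and effect sizes are placeholders, not the Norwegian register variables.

    # Sketch of the two estimators described above: Poisson for number of children,
    # logit for childlessness. Simulated data; effect sizes are arbitrary assumptions.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 5000
    hi = rng.binomial(1, 0.05, n)                    # childhood hearing impairment
    male = rng.binomial(1, 0.5, n)
    lam = np.exp(0.8 - 0.15 * hi - 0.05 * male)      # assumed fertility effect
    children = rng.poisson(lam)
    df = pd.DataFrame({"children": children, "hi": hi, "male": male,
                       "childless": (children == 0).astype(int)})

    poisson_fit = smf.poisson("children ~ hi + male", data=df).fit(disp=False)
    logit_fit = smf.logit("childless ~ hi + male", data=df).fit(disp=False)
    print(poisson_fit.params["hi"], logit_fit.params["hi"])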


Subject(s)
Fertility , Hearing Loss/epidemiology , Hearing , Infertility, Female/epidemiology , Infertility, Male/epidemiology , Persons With Hearing Impairments , Reproduction , Adolescent , Age Factors , Aged , Audiometry, Pure-Tone , Auditory Threshold , Child , Family Characteristics , Female , Hearing Loss/diagnosis , Hearing Loss/physiopathology , Humans , Infertility, Female/diagnosis , Infertility, Female/physiopathology , Infertility, Male/diagnosis , Infertility, Male/physiopathology , Longitudinal Studies , Male , Middle Aged , Norway/epidemiology , Reproductive Behavior , Risk Assessment , Risk Factors , Sex Factors , Time Factors
14.
Sci Rep ; 12(1): 301, 2022 01 07.
Article in English | MEDLINE | ID: mdl-34997062

ABSTRACT

Hearing loss is a heterogeneous disorder. Identification of causative mutations is demanding due to genetic heterogeneity. In this study, we investigated the genetic cause of sensorineural hearing loss in patients with severe/profound deafness. After the exclusion of GJB2-GJB6 mutations, we performed whole exome sequencing in 32 unrelated Argentinean families. Mutations were detected in 16 known deafness genes in 20 patients: ACTG1, ADGRV1 (GPR98), CDH23, COL4A3, COL4A5, DFNA5 (GSDME), EYA4, LARS2, LOXHD1, MITF, MYO6, MYO7A, TECTA, TMPRSS3, USH2A and WFS1. Notably, 11 variants affecting 9 different non-GJB2 genes proved to be novel: c.12829C>T, p.(Arg4277*) in ADGRV1; c.337del, p.(Asp109*) and c.3352del, p.(Gly1118Alafs*7) in CDH23; c.3500G>A, p.(Gly1167Glu) in COL4A3; c.1183C>T, p.(Pro395Ser) and c.1759C>T, p.(Pro587Ser) in COL4A5; c.580+2T>C in EYA4; c.1481dup, p.(Leu495Profs*31) in LARS2; c.1939T>C, p.(Phe647Leu) in MYO6; c.733C>T, p.(Gln245*) in MYO7A; and c.242C>G, p.(Ser81*) in TMPRSS3. To predict the effect of these variants, novel protein modeling and protein stability analysis were employed. These results highlight the value of whole exome sequencing for identifying candidate variants, as well as bioinformatic strategies to infer their pathogenicity.


Subject(s)
Hearing Loss/genetics , Hearing/genetics , Mutation , Adolescent , Adult , Child , Female , Genetic Association Studies , Genetic Predisposition to Disease , Genotyping Techniques , Hearing Loss/diagnosis , Hearing Loss/metabolism , Hearing Loss/physiopathology , Heredity , Humans , Infant , Male , Models, Molecular , Pedigree , Phenotype , Protein Conformation , Structure-Activity Relationship , Exome Sequencing , Young Adult
15.
Am J Otolaryngol ; 43(1): 103200, 2022.
Article in English | MEDLINE | ID: mdl-34600410

ABSTRACT

PURPOSE: Managing hearing health in older adults has become a public health imperative, and cochlear implantation is now the standard of care for aural rehabilitation when hearing aids no longer provide sufficient benefit. The aim of our study was to compare speech performance in cochlear implant patients ≥80 years of age (Very Elderly) to a younger elderly cohort between ages 65-79 years (Less Elderly). MATERIALS AND METHODS: Data were collected from 53 patients ≥80 years of age and 92 patients age 65-79 years who underwent cochlear implantation by the senior author between April 1, 2017 and May 12, 2020. The primary outcome measure compared preoperative AzBio Quiet scores to 6-month post-activation AzBio Quiet results for both cohorts. RESULTS: Very Elderly patients progressed from an average AzBio Quiet score of 22% preoperatively to a score of 45% in the implanted ear at 6-months post-activation (p < 0.001) while the Less Elderly progressed from an average score of 27% preoperatively to 60% at 6-months (p < 0.001). Improvements in speech intelligibility were statistically significant within each of these cohorts (p < 0.001). Comparative statistics using independent samples t-test and evaluation of effect size using the Hedges' g statistic demonstrated a significant difference for average improvement of AzBio in quiet scores between groups with a medium effect size (p = 0.03, g = 0.35). However, when the very oldest patients (90+ years) were removed, the statistical difference between groups disappeared (p = 0.09). CONCLUSIONS: When assessing CI performance, those over age 65 are typically compared to younger patients; however, this manuscript further stratifies audiometric outcomes for older CI recipients in a single-surgeon, high-volume practice. Our data indicates that for speech intelligibility, patients between age 65-79 perform similarly to CI recipients 80-90 years of age and should not be dismissed as potential cochlear implant candidates.
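
The between-cohort comparison rests on an independent-samples t-test plus Hedges' g for the effect size. A short sketch of how such a comparison might be computed is given below, on simulated AzBio gain scores rather than the study's data.

    # Independent-samples t-test and Hedges' g with small-sample correction.
    # The gain scores below are simulated, not the study's measurements.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    gain_very_elderly = rng.normal(23, 18, 53)   # assumed 6-month AzBio gains (%)
    gain_less_elderly = rng.normal(33, 20, 92)

    t, p = stats.ttest_ind(gain_less_elderly, gain_very_elderly, equal_var=True)

    def hedges_g(a, b):
        n1, n2 = len(a), len(b)
        pooled_sd = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1))
                            / (n1 + n2 - 2))
        d = (a.mean() - b.mean()) / pooled_sd
        correction = 1 - 3 / (4 * (n1 + n2) - 9)   # small-sample bias correction
        return d * correction

    g = hedges_g(gain_less_elderly, gain_very_elderly)
    print(f"t = {t:.2f}, p = {p:.3f}, g = {g:.2f}")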


Subject(s)
Cochlear Implantation , Cochlear Implants , Hearing Loss/physiopathology , Hearing Loss/rehabilitation , Speech Intelligibility , Age Factors , Aged , Aged, 80 and over , Audiometry , Cohort Studies , Female , Humans , Male , Treatment Outcome
16.
Hum Brain Mapp ; 43(2): 633-646, 2022 02 01.
Article in English | MEDLINE | ID: mdl-34609038

ABSTRACT

Neuromodulation treatment effect size for bothersome tinnitus may be larger and more predictable by adopting a target selection approach guided by personalized striatal networks or functional connectivity maps. Several corticostriatal mechanisms are likely to play a role in tinnitus, including the dorsal/ventral striatum and the putamen. We examined whether significant tinnitus treatment response by deep brain stimulation (DBS) of the caudate nucleus may be related to striatal network increased functional connectivity with tinnitus networks that involve the auditory cortex or ventral cerebellum. The first study was a cross-sectional 2-by-2 factorial design (tinnitus, no tinnitus; hearing loss, normal hearing, n = 68) to define cohort level abnormal functional connectivity maps using high-field 7.0 T resting-state fMRI. The second study was a pilot case-control series (n = 2) to examine whether tinnitus modulation response to caudate tail subdivision stimulation would be contingent on individual level striatal connectivity map relationships with tinnitus networks. Resting-state fMRI identified five caudate subdivisions with abnormal cohort level functional connectivity maps. Of those, two connectivity maps exhibited increased connectivity with tinnitus networks-dorsal caudate head with Heschl's gyrus and caudate tail with the ventral cerebellum. DBS of the caudate tail in the case-series responder resulted in dramatic reductions in tinnitus severity and loudness, in contrast to the nonresponder who showed no tinnitus modulation. The individual level connectivity map of the responder was in alignment with the cohort expectation connectivity map, where the caudate tail exhibited increased connectivity with tinnitus networks, whereas the nonresponder individual level connectivity map did not.


Subject(s)
Auditory Cortex/physiopathology , Caudate Nucleus/physiopathology , Cerebellum/physiopathology , Connectome , Deep Brain Stimulation , Hearing Loss/physiopathology , Nerve Net/physiopathology , Tinnitus/physiopathology , Tinnitus/therapy , Adult , Aged , Auditory Cortex/diagnostic imaging , Case-Control Studies , Caudate Nucleus/diagnostic imaging , Cerebellum/diagnostic imaging , Cross-Sectional Studies , Female , Hearing Loss/diagnostic imaging , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Nerve Net/diagnostic imaging , Tinnitus/diagnostic imaging
17.
Otolaryngol Head Neck Surg ; 166(1): 171-178, 2022 01.
Article in English | MEDLINE | ID: mdl-34032520

ABSTRACT

OBJECTIVE: To use an automated speech-processing technology to identify patterns in sound environments and language output for deaf or hard-of-hearing infants and toddlers. STUDY DESIGN: Observational study based on a convenience sample. SETTING: Home observation conducted by tertiary children's hospital. METHODS: The system analyzed 115 naturalistic recordings of 28 children <3.5 years old. Hearing ability was stratified into groups by access to sound. Outcomes were compared across hearing groups, and multivariable linear regression was used to test associations. RESULTS: There was a significant difference in age-adjusted child vocalizations (P = .042), conversational turns (P = .022), and language development scores (P = .05) between hearing groups but no significant difference in adult words (P = .11). Conversational turns were positively associated with each language development measure, while adult words were not. For each hour of electronic media, there were significant reductions in child vocalizations (β = -0.47; 95% CI, -0.71 to -0.19), conversational turns (β = -0.45; 95% CI, -0.65 to -0.22), and language development (β = -0.37; 95% CI, -0.61 to -0.15). CONCLUSIONS: Conversational turn scores differ among hearing groups and are positively associated with language development outcomes. Electronic media is associated with reduced discernible adult speech, child vocalizations, conversational turns, and language development scores. This effect was larger in children who are deaf or hard of hearing as compared with other reports in typically hearing populations. These findings underscore the need to optimize early language environments and limit electronic noise exposure in children who are deaf or hard of hearing.


Subject(s)
Hearing Loss/psychology , Language Development , Verbal Behavior/physiology , Adult , Child, Preschool , Female , Hearing Loss/physiopathology , Humans , Infant , Male , Sound Recordings , Speech Production Measurement , Television
18.
J Gerontol B Psychol Sci Soc Sci ; 77(1): 10-17, 2022 01 12.
Article in English | MEDLINE | ID: mdl-33606882

ABSTRACT

OBJECTIVES: Frequent social contact benefits cognition in later life although evidence is lacking on the potential relevance of the modes chosen by older adults, including those living with hearing loss, for interacting with others in their social network. METHOD: 11,418 participants in the English Longitudinal Study of Ageing provided baseline information on hearing status and social contact mode and frequency of use. Multilevel growth curve models compared episodic memory (immediate and delayed recall) at baseline and longitudinally in participants who interacted frequently (offline only or offline and online combined), compared to infrequently, with others in their social network. RESULTS: Frequent offline (B = 0.23; SE = 0.09) and combined offline and online (B = 0.71; SE = 0.09) social interactions predicted better episodic memory after adjustment for multiple confounders. We observed positive, longitudinal associations between combined offline and online interactions and episodic memory in participants without hearing loss (B = 0.50, SE = 0.11) but not with strictly offline interactions (B = 0.01, SE = 0.11). In those with hearing loss, episodic memory was positively related to both modes of engagement (offline only: B = 0.79, SE = 0.20; combined online and offline: B = 1.27, SE = 0.20). Sensitivity analyses confirmed the robustness of these findings. DISCUSSION: Supplementing conventional social interactions with online communication modes may help older adults, especially those living with hearing loss, sustain, and benefit cognitively from, personal relationships.
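
Multilevel growth curve models of this kind can be sketched as a mixed-effects regression that allows each participant a random intercept and slope over waves. The example below uses simulated data and placeholder variable names; it is not the ELSA analysis itself.

    # Minimal growth-curve sketch: episodic memory across waves with a random
    # intercept and slope per participant. Simulated data, placeholder names.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(4)
    n_people, n_waves = 300, 4
    pid = np.repeat(np.arange(n_people), n_waves)
    wave = np.tile(np.arange(n_waves), n_people)
    combined = np.repeat(rng.binomial(1, 0.4, n_people), n_waves)  # offline+online contact
    intercepts = np.repeat(rng.normal(10, 2, n_people), n_waves)
    memory = (intercepts + 0.3 * combined + 0.1 * combined * wave
              - 0.2 * wave + rng.normal(0, 1, n_people * n_waves))
    df = pd.DataFrame({"pid": pid, "wave": wave, "combined": combined, "memory": memory})

    model = smf.mixedlm("memory ~ wave * combined", data=df,
                        groups=df["pid"], re_formula="~wave")
    print(model.fit(reml=True).summary())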


Subject(s)
Aging/physiology , Hearing Loss/physiopathology , Memory, Episodic , Mental Recall/physiology , Social Interaction , Social Networking , Aged , Aged, 80 and over , England , Female , Humans , Longitudinal Studies , Male , Middle Aged , Online Social Networking
19.
Audiol., Commun. res ; 27: e2661, 2022. tab
Article in Portuguese | LILACS | ID: biblio-1420255

ABSTRACT

Purpose: To verify the correlation of the different pure-tone averages (three-, four-, and eight-frequency) with the Speech Recognition Percentage Index (IPRF) and with hearing handicap. Methods: Fifty-six subjects with descending audiometric configurations participated, distributed into two groups: Group 1 (G1), 28 subjects with a three-frequency average of 25 dB HL or better, and Group 2 (G2), 28 subjects with a three-frequency average worse than 25 dB HL, matched for sex and age (p = 0.544). All underwent pure-tone threshold audiometry, the IPRF with a recorded monosyllabic word list, acoustic immittance measures, and the Hearing Handicap Inventory for Adults questionnaire. Correlations of the three-frequency (M3), four-frequency (M4), and eight-frequency (M8) averages with the IPRF and with hearing handicap were assessed using Spearman's correlation test, with significance set at <0.05 (5%). Results: The IPRF correlated significantly with M8 in G1, and with M4 and M8 in G2. A trend toward significance was observed in both G1 and G2 for the correlation between M8 and hearing handicap, suggesting that analyzing all eight audiogram frequencies (including those above 4000 Hz) allows a fuller understanding of the patient's hearing handicap. Conclusion: The IPRF correlated significantly with M8 in both groups, indicating that IPRF performance decreases as the eight-frequency average increases. M8 best reflected the hearing handicap caused by the hearing loss in G1.
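
The core analysis is a set of Spearman correlations between each pure-tone average (M3, M4, M8) and the IPRF. A brief sketch on simulated audiograms follows; the frequencies included in each average and all numeric values are assumptions for illustration.

    # Spearman correlations between pure-tone averages and a speech score (IPRF).
    # Frequencies averaged and all simulated numbers are assumptions.
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(5)
    n = 28                                            # one group's size
    freqs = np.array([250, 500, 1000, 2000, 3000, 4000, 6000, 8000])
    thresholds = np.cumsum(rng.normal(3, 2, (n, freqs.size)), axis=1) + 10  # sloping loss

    m3 = thresholds[:, [1, 2, 3]].mean(axis=1)        # assumed 500/1000/2000 Hz
    m4 = thresholds[:, [1, 2, 3, 5]].mean(axis=1)     # assumed + 4000 Hz
    m8 = thresholds.mean(axis=1)                      # all eight frequencies
    iprf = np.clip(100 - 0.8 * m8 + rng.normal(0, 5, n), 0, 100)

    for name, avg in (("M3", m3), ("M4", m4), ("M8", m8)):
        rho, p = spearmanr(avg, iprf)
        print(f"{name}: rho = {rho:.2f}, p = {p:.3f}")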


Subject(s)
Humans , Male , Female , Adolescent , Adult , Middle Aged , Audiometry, Pure-Tone/methods , Auditory Perception , Speech Acoustics , Voice Recognition , Hearing Loss/physiopathology
20.
PLoS One ; 16(12): e0261433, 2021.
Article in English | MEDLINE | ID: mdl-34972151

ABSTRACT

Diagnostic tests for hearing impairment not only determine the presence (or absence) of hearing loss, but also evaluate its degree and type, and provide physicians with essential data for future treatment and rehabilitation. Therefore, accurately measuring hearing loss conditions is very important for proper patient understanding and treatment. In current-day practice, to quantify the level of hearing loss, physicians exploit specialized test scores such as pure-tone audiometry (PTA) thresholds and speech discrimination scores (SDS) as quantitative metrics in examining a patient's auditory function. However, given that these metrics can be easily affected by various human factors, including intentional (or accidental) patient intervention, there is a need to cross-validate the accuracy of each metric. By understanding a "normal" relationship between the SDS and PTA, physicians can reveal the need for re-testing, additional testing in different dimensions, and also potential malingering cases. For this purpose, in this work, we propose a prediction model for estimating the SDS of a patient from PTA thresholds via a Random Forest-based machine learning approach, to overcome the limitations of conventional statistical (or even manual) methods. For designing and evaluating the Random Forest-based prediction model, we collected a large-scale dataset from 12,697 subjects, and report an SDS level prediction accuracy of 95.05% and 96.64% for the left and right ears, respectively. We also present comparisons with other widely used machine learning algorithms (e.g., Support Vector Machine, Multi-layer Perceptron) to show the effectiveness of our proposed Random Forest-based approach. The results of this study demonstrate the feasibility of a practically applicable screening tool for identifying patient-intended malingering in hearing loss-related tests.
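
A hedged sketch of the modeling setup described above, predicting a discretized SDS level from per-frequency PTA thresholds with a Random Forest, is shown below. The data are simulated and the four-level SDS binning is an assumption, not the paper's protocol.

    # Random Forest predicting an SDS level from PTA thresholds. Simulated data;
    # the relationship between thresholds and SDS and the binning are assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(6)
    n = 2000
    n_freqs = 8                                                      # 250 Hz to 8 kHz
    pta = np.clip(rng.normal(40, 20, (n, n_freqs)), 0, 110)          # thresholds (dB HL)
    sds = np.clip(100 - 1.1 * pta[:, 1:5].mean(axis=1) + rng.normal(0, 8, n), 0, 100)
    sds_level = np.digitize(sds, bins=[25, 50, 75])                  # 4 assumed levels

    X_tr, X_te, y_tr, y_te = train_test_split(pta, sds_level, test_size=0.2, random_state=0)
    rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", accuracy_score(y_te, rf.predict(X_te)))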


Subject(s)
Audiometry, Pure-Tone/methods , Discrimination Learning , Machine Learning , Speech Perception , Adolescent , Adult , Aged , Aged, 80 and over , Algorithms , Auditory Threshold , Child , Child, Preschool , Computational Biology , Female , Hearing , Hearing Loss/physiopathology , Humans , Infant , Infant, Newborn , Male , Middle Aged , Models, Statistical , Neural Networks, Computer , Reproducibility of Results , Republic of Korea , Speech Reception Threshold Test , Young Adult