Results 1 - 20 of 84
1.
J Neurosci ; 42(8): 1477-1490, 2022 02 23.
Article in English | MEDLINE | ID: mdl-34983817

ABSTRACT

Listeners with sensorineural hearing loss (SNHL) struggle to understand speech, especially in noise, despite audibility compensation. These real-world suprathreshold deficits are hypothesized to arise from degraded frequency tuning and reduced temporal-coding precision; however, peripheral neurophysiological studies testing these hypotheses have been largely limited to in-quiet artificial vowels. Here, we measured single auditory-nerve-fiber responses to a connected speech sentence in noise from anesthetized male chinchillas with normal hearing (NH) or noise-induced hearing loss (NIHL). Our results demonstrated that temporal precision was not degraded following acoustic trauma, and furthermore that sharpness of cochlear frequency tuning was not the major factor affecting impaired peripheral coding of connected speech in noise. Rather, the loss of cochlear tonotopy, a hallmark of NH, contributed the most to both consonant-coding and vowel-coding degradations. Because distorted tonotopy varies in degree across etiologies (e.g., noise exposure, age), these results have important implications for understanding and treating individual differences in speech perception for people suffering from SNHL.

SIGNIFICANCE STATEMENT Difficulty understanding speech in noise is the primary complaint in audiology clinics and can leave people with sensorineural hearing loss (SNHL) suffering from communication difficulties that affect their professional, social, and family lives, as well as their mental health. We measured single-neuron responses from a preclinical SNHL animal model to characterize salient neural-coding deficits for naturally spoken speech in noise. We found the major mechanism affecting neural coding was not a commonly assumed factor, but rather a disruption of tonotopicity, the systematic mapping of acoustic frequency to cochlear place that is a hallmark of normal hearing.
Because the degree of distorted tonotopy varies across hearing-loss etiologies, these results have important implications for precision audiology approaches to diagnosis and treatment of SNHL.


Subject(s)
Hearing Loss, Noise-Induced , Hearing Loss, Sensorineural , Speech Perception , Acoustic Stimulation/methods , Animals , Auditory Threshold/physiology , Hearing Loss, Sensorineural/etiology , Humans , Male , Noise , Speech , Speech Perception/physiology
2.
J Neurosci ; 42(2): 240-254, 2022 01 12.
Article in English | MEDLINE | ID: mdl-34764159

ABSTRACT

Temporal coherence of sound fluctuations across spectral channels is thought to aid auditory grouping and scene segregation. Although prior studies on the neural bases of temporal-coherence processing focused mostly on cortical contributions, neurophysiological evidence suggests that temporal-coherence-based scene analysis may start as early as the cochlear nucleus (i.e., the first auditory region supporting cross-channel processing over a wide frequency range). Accordingly, we hypothesized that aspects of temporal-coherence processing that could be realized in early auditory areas may shape speech understanding in noise. We then explored whether physiologically plausible computational models could account for results from a behavioral experiment that measured consonant categorization in different masking conditions. We tested whether within-channel masking of target-speech modulations predicted consonant confusions across the different conditions and whether predictions were improved by adding across-channel temporal-coherence processing mirroring the computations known to exist in the cochlear nucleus. Consonant confusions provide a rich characterization of error patterns in speech categorization, and are thus crucial for rigorously testing models of speech perception; however, to the best of our knowledge, they have not been used in prior studies of scene analysis. We find that within-channel modulation masking can reasonably account for category confusions, but that it fails when temporal fine structure cues are unavailable. However, the addition of across-channel temporal-coherence processing significantly improves confusion predictions across all tested conditions. 
Our results suggest that temporal-coherence processing strongly shapes speech understanding in noise and that physiological computations that exist early along the auditory pathway may contribute to this process.

SIGNIFICANCE STATEMENT Temporal coherence of sound fluctuations across distinct frequency channels is thought to be important for auditory scene analysis. Prior studies on the neural bases of temporal-coherence processing focused mostly on cortical contributions, and it was unknown whether speech understanding in noise may be shaped by across-channel processing that exists in earlier auditory areas. Using physiologically plausible computational modeling to predict consonant confusions across different listening conditions, we find that across-channel temporal coherence contributes significantly to scene analysis and speech perception and that such processing may arise in the auditory pathway as early as the brainstem. By virtue of providing a richer characterization of error patterns not obtainable with just intelligibility scores, consonant confusions yield unique insight into scene analysis mechanisms.
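One physiologically simple form of across-channel temporal-coherence processing is correlating envelope fluctuations between frequency channels: channels driven by the same source fluctuate together, while an independent masker does not. The sketch below is a minimal illustration of that idea with synthetic envelopes; the function name and signals are assumptions for illustration, not the model used in the study.

```python
import math

def envelope_correlation(env_a, env_b):
    """Zero-lag Pearson correlation between two channel envelopes.

    A simple stand-in for across-channel temporal-coherence processing:
    envelopes that fluctuate together (same source) correlate highly,
    while independently fluctuating channels do not.
    """
    n = len(env_a)
    ma = sum(env_a) / n
    mb = sum(env_b) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(env_a, env_b))
    va = sum((a - ma) ** 2 for a in env_a)
    vb = sum((b - mb) ** 2 for b in env_b)
    return cov / math.sqrt(va * vb)

# Two channels driven by the same 4-Hz modulator are coherent;
# a channel with an independent 7-Hz modulator is not.
fs = 1000
t = [i / fs for i in range(fs)]
common = [1 + 0.8 * math.sin(2 * math.pi * 4 * x) for x in t]
other = [1 + 0.8 * math.sin(2 * math.pi * 7 * x + 1.3) for x in t]

print(round(envelope_correlation(common, common), 6))   # -> 1.0
print(abs(envelope_correlation(common, other)) < 0.2)   # -> True
```

A full model would, of course, operate on envelopes extracted from cochlear filterbank outputs rather than on synthetic modulators.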


Subject(s)
Auditory Pathways/physiology , Auditory Perception/physiology , Cochlea/physiology , Speech/physiology , Acoustic Stimulation , Auditory Threshold/physiology , Humans , Models, Neurological , Perceptual Masking
3.
Angew Chem Int Ed Engl ; 62(38): e202308680, 2023 Sep 18.
Article in English | MEDLINE | ID: mdl-37515484

ABSTRACT

We describe a unique catalytic system with an efficient coupling of Ti- and Cr-catalysis in a reaction network that allows the use of [BH4]- as a stoichiometric hydrogen atom and electron donor in catalytic radical chemistry. The key feature is a relay hydrogen atom transfer from [BH4]- to Cr generating the active catalysts under mild conditions. This enables epoxide reductions, regiodivergent epoxide openings, and radical cyclizations that are not possible with cooperative catalysis with radicals or by epoxide reductions via Meinwald rearrangement and ensuing carbonyl reduction. No typical SN2-type reactivity of [BH4]- salts is observed.

4.
PLoS Comput Biol ; 17(2): e1008155, 2021 02.
Article in English | MEDLINE | ID: mdl-33617548

ABSTRACT

Significant scientific and translational questions remain in auditory neuroscience surrounding the neural correlates of perception. Relating perceptual and neural data collected from humans can be useful; however, human-based neural data are typically limited to evoked far-field responses, which lack anatomical and physiological specificity. Laboratory-controlled preclinical animal models offer the advantage of comparing single-unit and evoked responses from the same animals. This ability provides opportunities to develop invaluable insight into proper interpretations of evoked responses, which benefits both basic-science studies of neural mechanisms and translational applications, e.g., diagnostic development. However, these comparisons have been limited by a disconnect between the types of spectrotemporal analyses used with single-unit spike trains and evoked responses, which arises because these response types are fundamentally different (point-process versus continuous-valued signals) even though the responses themselves are related. Here, we describe a unifying framework to study temporal coding of complex sounds that allows spike-train and evoked-response data to be analyzed and compared using the same advanced signal-processing techniques. The framework uses a set of peristimulus-time histograms computed from single-unit spike trains in response to polarity-alternating stimuli to allow advanced spectral analyses of both slow (envelope) and rapid (temporal fine structure) response components.
Demonstrated benefits include: (1) novel spectrally specific temporal-coding measures that are less confounded by distortions due to hair-cell transduction, synaptic rectification, and neural stochasticity compared to previous metrics, e.g., the correlogram peak-height, (2) spectrally specific analyses of spike-train modulation coding (magnitude and phase), which can be directly compared to modern perceptually based models of speech intelligibility (e.g., that depend on modulation filter banks), and (3) superior spectral resolution in analyzing the neural representation of nonstationary sounds, such as speech and music. This unifying framework significantly expands the potential of preclinical animal models to advance our understanding of the physiological correlates of perceptual deficits in real-world listening following sensorineural hearing loss.
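One standard way polarity-alternating PSTHs are combined — not necessarily the exact computation in this framework — is to sum the responses to the two polarities to emphasize envelope (ENV) coding and subtract them to emphasize temporal-fine-structure (TFS) coding, since the envelope is unchanged by polarity inversion while fine-structure responses flip sign. A toy sketch (hypothetical spike counts, illustrative function name):

```python
def env_tfs_psths(psth_pos, psth_neg):
    """Split temporal coding into ENV and TFS components.

    psth_pos / psth_neg: peristimulus-time histograms (spike counts per
    bin) for a stimulus at positive and inverted polarity. Envelope-driven
    responses survive the half-sum; fine-structure-driven responses flip
    with polarity and so survive the half-difference.
    """
    env = [(p + n) / 2 for p, n in zip(psth_pos, psth_neg)]
    tfs = [(p - n) / 2 for p, n in zip(psth_pos, psth_neg)]
    return env, tfs

# Toy fiber: phase-locked spikes move to the opposite half-cycles when
# the stimulus polarity is inverted.
psth_pos = [4, 0, 4, 0, 4, 0]
psth_neg = [0, 4, 0, 4, 0, 4]
env, tfs = env_tfs_psths(psth_pos, psth_neg)
print(env)  # [2.0, 2.0, 2.0, 2.0, 2.0, 2.0] -> flat: no envelope fluctuation
print(tfs)  # [2.0, -2.0, 2.0, -2.0, 2.0, -2.0] -> oscillates with the fine structure
```

Spectral analyses (e.g., of the ENV and TFS sequences) can then be applied identically to spike-derived and evoked-response signals.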


Subject(s)
Auditory Perception/physiology , Evoked Potentials, Auditory/physiology , Models, Neurological , Acoustic Stimulation , Animals , Chinchilla/physiology , Cochlear Nerve/physiology , Computational Biology , Disease Models, Animal , Hearing Loss, Sensorineural/physiopathology , Hearing Loss, Sensorineural/psychology , Humans , Models, Animal , Nonlinear Dynamics , Psychoacoustics , Sound , Spatio-Temporal Analysis , Speech Intelligibility/physiology , Speech Perception/physiology , Translational Research, Biomedical
5.
J Math Biol ; 86(1): 11, 2022 12 07.
Article in English | MEDLINE | ID: mdl-36478092

ABSTRACT

Recent progress in nanotechnology-enabled sensors that can be placed inside of living plants has shown that it is possible to relay and record real-time chemical signaling stimulated by various abiotic and biotic stresses. The mathematical form of the resulting local reactive oxygen species (ROS) wave released upon mechanical perturbation of plant leaves appears to be conserved across a large number of species, and produces a distinct waveform from other stresses including light, heat and pathogen-associated molecular pattern (PAMP)-induced stresses. Herein, we develop a quantitative theory of the local ROS signaling waveform resulting from mechanical stress in planta. We show that nonlinear, autocatalytic production and Fickian diffusion of H2O2 followed by first order decay well describes the spatial and temporal properties of the waveform. The reaction-diffusion system is analyzed in terms of a new approximate solution that we introduce for such problems based on a single term logistic function ansatz. The theory is able to describe experimental ROS waveforms and degradation dynamics such that species-dependent dimensionless wave velocities are revealed, corresponding to subtle changes in higher moments of the waveform through an apparently conserved signaling mechanism overall. This theory has utility in potentially decoding other stress signaling waveforms for light, heat and PAMP-induced stresses that are similarly under investigation. The approximate solution may also find use in applied agricultural sensing, facilitating the connection between measured waveform and plant physiology.
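The model ingredients named above — autocatalytic (logistic) production, Fickian diffusion, and first-order decay — can be simulated directly with an explicit finite-difference scheme. The sketch below uses hypothetical, dimensionless parameters purely to show that this reaction-diffusion system produces a front travelling at roughly constant speed, consistent with a travelling-wave ansatz; it is not the paper's fitted model.

```python
def step(c, D, k, gamma, dx, dt):
    """One explicit Euler step of
    dc/dt = D * d2c/dx2 + k * c * (1 - c) - gamma * c
    (concentration scaled so the carrying capacity is 1), with
    no-flux boundaries implemented by mirroring the end points."""
    n = len(c)
    new = c[:]
    for i in range(n):
        left = c[i - 1] if i > 0 else c[1]
        right = c[i + 1] if i < n - 1 else c[n - 2]
        lap = (left - 2 * c[i] + right) / dx ** 2
        new[i] = c[i] + dt * (D * lap + k * c[i] * (1 - c[i]) - gamma * c[i])
    return new

# Hypothetical parameters; a 'wound' at the left end launches the wave.
D, k, gamma, dx, dt = 1.0, 1.0, 0.1, 0.5, 0.01
c = [1.0 if i < 3 else 0.0 for i in range(200)]

def front(c):
    """Rightmost grid point where the wave has clearly arrived."""
    return max(i for i, v in enumerate(c) if v > 0.5)

positions = []
for t in range(3000):
    c = step(c, D, k, gamma, dx, dt)
    if t % 1000 == 999:
        positions.append(front(c))
print(positions)  # front position advances roughly linearly in time
```

The roughly equal spacing of the recorded front positions reflects the constant wave speed expected for pulled fronts in Fisher-type systems.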


Subject(s)
Hydrogen Peroxide , Stress, Mechanical
6.
Clin Chem ; 67(2): 415-424, 2021 01 30.
Article in English | MEDLINE | ID: mdl-33098427

ABSTRACT

BACKGROUND: Rapid, reliable, and widespread testing is required to curtail the ongoing COVID-19 pandemic. Current gold-standard nucleic acid tests are hampered by supply shortages in critical reagents including nasal swabs, RNA extraction kits, personal protective equipment, instrumentation, and labor. METHODS: To overcome these challenges, we developed a rapid colorimetric assay using reverse-transcription loop-mediated isothermal amplification (RT-LAMP) optimized on human saliva samples without an RNA purification step. We describe the optimization of saliva pretreatment protocols to enable analytically sensitive viral detection by RT-LAMP. We optimized the RT-LAMP reaction conditions and implemented high-throughput unbiased methods for assay interpretation. We tested whether saliva pretreatment could also enable viral detection by conventional reverse-transcription quantitative polymerase chain reaction (RT-qPCR). Finally, we validated these assays on clinical samples. RESULTS: The optimized saliva pretreatment protocol enabled analytically sensitive extraction-free detection of SARS-CoV-2 from saliva by colorimetric RT-LAMP or RT-qPCR. In simulated samples, the optimized RT-LAMP assay had a limit of detection of 59 (95% confidence interval: 44-104) particle copies per reaction. We highlighted the flexibility of LAMP assay implementation using 3 readouts: naked-eye colorimetry, spectrophotometry, and real-time fluorescence. In a set of 30 clinical saliva samples, colorimetric RT-LAMP and RT-qPCR assays performed directly on pretreated saliva samples without RNA extraction had accuracies greater than 90%. CONCLUSIONS: Rapid and extraction-free detection of SARS-CoV-2 from saliva by colorimetric RT-LAMP is a simple, sensitive, and cost-effective approach with broad potential to expand diagnostic testing for the virus causing COVID-19.
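For intuition about how a 95% limit of detection (with a confidence interval) emerges from a dilution series, the sketch below fits a generic single-hit Poisson detection model, p(c) = 1 - exp(-c/c0), to hypothetical replicate counts and solves for the copy number giving 95% detection. All numbers are invented for illustration; this is not the statistical procedure used in the study.

```python
import math

def fit_c0(copies, hits, trials):
    """Maximum-likelihood fit of p(c) = 1 - exp(-c / c0) over a grid of
    c0 values (a generic illustration of LOD estimation)."""
    def loglik(c0):
        ll = 0.0
        for c, h, n in zip(copies, hits, trials):
            p = 1 - math.exp(-c / c0)
            p = min(max(p, 1e-12), 1 - 1e-12)
            ll += h * math.log(p) + (n - h) * math.log(1 - p)
        return ll
    grid = [0.5 + 0.5 * i for i in range(200)]
    return max(grid, key=loglik)

# Hypothetical dilution series: copies per reaction, detections out of 20.
copies = [5, 10, 20, 40, 80]
hits   = [4, 8, 13, 17, 20]
trials = [20, 20, 20, 20, 20]
c0 = fit_c0(copies, hits, trials)
lod95 = -c0 * math.log(0.05)   # copy number giving 95% detection probability
print(round(c0, 1), round(lod95, 1))
```

A confidence interval on the LOD would follow from profiling the likelihood or bootstrapping the replicates, which is omitted here for brevity.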


Subject(s)
COVID-19 Nucleic Acid Testing/methods , COVID-19/diagnosis , Nucleic Acid Amplification Techniques/methods , RNA, Viral/analysis , SARS-CoV-2/isolation & purification , Saliva/virology , COVID-19/epidemiology , Colorimetry/methods , Endopeptidase K/chemistry , Humans , Limit of Detection , Pandemics , Point-of-Care Testing , SARS-CoV-2/chemistry
7.
J Acoust Soc Am ; 150(5): 3581, 2021 11.
Article in English | MEDLINE | ID: mdl-34852572

ABSTRACT

A difference in fundamental frequency (F0) between two vowels is an important segregation cue prior to identifying concurrent vowels. To understand the effects of this cue on identification due to age and hearing loss, Chintanpalli, Ahlstrom, and Dubno [(2016). J. Acoust. Soc. Am. 140, 4142-4153] collected concurrent vowel scores across F0 differences for younger adults with normal hearing (YNH), older adults with normal hearing (ONH), and older adults with hearing loss (OHI). The current modeling study predicts these concurrent vowel scores to understand age and hearing loss effects. The YNH model cascaded the temporal responses of an auditory-nerve model from Bruce, Erfani, and Zilany [(2018). Hear. Res. 360, 40-45] with a modified F0-guided segregation algorithm from Meddis and Hewitt [(1992). J. Acoust. Soc. Am. 91, 233-245] to predict concurrent vowel scores. The ONH model included endocochlear-potential loss, while the OHI model also included hair cell damage; however, both models incorporated cochlear synaptopathy, with a larger effect for OHI. Compared with the YNH model, concurrent vowel scores were reduced across F0 differences for ONH and OHI models, with the lowest scores for OHI. These patterns successfully captured the age and hearing loss effects in the concurrent-vowel data. The predictions suggest that the inability to utilize an F0-guided segregation cue, resulting from peripheral changes, may reduce scores for ONH and OHI listeners.


Subject(s)
Deafness , Hearing Loss , Speech Perception , Aged , Cochlear Nerve , Hearing , Hearing Loss/diagnosis , Humans
8.
J Acoust Soc Am ; 150(4): 2664, 2021 10.
Article in English | MEDLINE | ID: mdl-34717498

ABSTRACT

To understand the mechanisms of speech perception in everyday listening environments, it is important to elucidate the relative contributions of different acoustic cues in transmitting phonetic content. Previous studies suggest that the envelope of speech in different frequency bands conveys most speech content, while the temporal fine structure (TFS) can aid in segregating target speech from background noise. However, the role of TFS in conveying phonetic content beyond what envelopes convey for intact speech in complex acoustic scenes is poorly understood. The present study addressed this question using online psychophysical experiments to measure the identification of consonants in multi-talker babble for intelligibility-matched intact and 64-channel envelope-vocoded stimuli. Consonant confusion patterns revealed that listeners had a greater tendency in the vocoded (versus intact) condition to be biased toward reporting that they heard an unvoiced consonant, despite envelope and place cues being largely preserved. This result was replicated when babble instances were varied across independent experiments, suggesting that TFS conveys voicing information beyond what is conveyed by envelopes for intact speech in babble. Given that multi-talker babble is a masker that is ubiquitous in everyday environments, this finding has implications for the design of assistive listening devices such as cochlear implants.
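The vocoding manipulation central to this experiment — preserving a band's envelope while replacing its temporal fine structure — can be sketched for a single analysis band. This is a deliberately crude version (half-wave rectification plus a one-pole lowpass instead of a Hilbert envelope, a broadband noise carrier instead of a band-limited one), with names and parameters chosen only for illustration.

```python
import math, random

def envelope(x, fs, cutoff=30.0):
    """Half-wave rectification followed by a one-pole lowpass: a crude
    envelope extractor (real vocoders typically use Hilbert envelopes)."""
    alpha = math.exp(-2 * math.pi * cutoff / fs)
    env, state = [], 0.0
    for v in x:
        state = alpha * state + (1 - alpha) * max(v, 0.0)
        env.append(state)
    return env

def vocode_band(band, fs, seed=0):
    """Replace a band's temporal fine structure with a noise carrier
    while keeping its envelope - the manipulation that removes TFS cues."""
    rng = random.Random(seed)
    env = envelope(band, fs)
    return [e * rng.uniform(-1, 1) for e in env]

fs = 8000
t = [i / fs for i in range(fs)]
# A 500-Hz tone amplitude-modulated at 4 Hz stands in for one analysis band.
band = [(1 + math.sin(2 * math.pi * 4 * x)) * math.sin(2 * math.pi * 500 * x)
        for x in t]
voc = vocode_band(band, fs)
# The 4-Hz envelope survives in voc; the 500-Hz fine structure is replaced.
```

With 64 such channels, as in the study, the summed bands approximate the original envelopes closely while discarding TFS.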


Subject(s)
Cochlear Implants , Speech Perception , Acoustic Stimulation , Noise/adverse effects , Perceptual Masking , Phonetics , Speech , Speech Intelligibility
9.
J Acoust Soc Am ; 150(3): 2230, 2021 09.
Article in English | MEDLINE | ID: mdl-34598642

ABSTRACT

A fundamental question in the neuroscience of everyday communication is how scene acoustics shape the neural processing of attended speech sounds and in turn impact speech intelligibility. While it is well known that the temporal envelopes in target speech are important for intelligibility, how the neural encoding of target-speech envelopes is influenced by background sounds or other acoustic features of the scene is unknown. Here, we combine human electroencephalography with simultaneous intelligibility measurements to address this key gap. We find that the neural envelope-domain signal-to-noise ratio in target-speech encoding, which is shaped by masker modulations, predicts intelligibility over a range of strategically chosen realistic listening conditions unseen by the predictive model. This provides neurophysiological evidence for modulation masking. Moreover, using high-resolution vocoding to carefully control peripheral envelopes, we show that target-envelope coding fidelity in the brain depends not only on envelopes conveyed by the cochlea, but also on the temporal fine structure (TFS), which supports scene segregation. Our results are consistent with the notion that temporal coherence of sound elements across envelopes and/or TFS influences scene analysis and attentive selection of a target sound. Our findings also inform speech-intelligibility models and technologies attempting to improve real-world speech communication.


Subject(s)
Speech Intelligibility , Speech Perception , Acoustic Stimulation , Acoustics , Auditory Perception , Humans , Perceptual Masking , Signal-To-Noise Ratio
10.
J Neurosci ; 39(35): 6879-6887, 2019 08 28.
Article in English | MEDLINE | ID: mdl-31285299

ABSTRACT

Speech intelligibility can vary dramatically between individuals with similar clinically defined severity of hearing loss based on the audiogram. These perceptual differences, despite equal audiometric-threshold elevation, are often assumed to reflect central-processing variations. Here, we compared peripheral-processing in auditory nerve (AN) fibers of male chinchillas between two prevalent hearing loss etiologies: metabolic hearing loss (MHL) and noise-induced hearing loss (NIHL). MHL results from age-related reduction of the endocochlear potential due to atrophy of the stria vascularis. MHL in the present study was induced using furosemide, which provides a validated model of age-related MHL in young animals by reversibly inhibiting the endocochlear potential. Effects of MHL on peripheral processing were assessed using Wiener-kernel (system identification) analyses of single AN fiber responses to broadband noise, for direct comparison to previously published AN responses from animals with NIHL. Wiener-kernel analyses show that even mild NIHL causes grossly abnormal coding of low-frequency stimulus components. In contrast, for MHL the same abnormal coding was only observed with moderate to severe loss. For equal sensitivity loss, coding impairment was substantially less severe with MHL than with NIHL, probably due to greater preservation of the tip-to-tail ratio of cochlear frequency tuning with MHL compared with NIHL rather than different intrinsic AN properties. Differences in peripheral neural coding between these two pathologies-the more severe of which, NIHL, is preventable-likely contribute to individual speech perception differences. 
Our results underscore the need to minimize noise overexposure and for strategies to personalize diagnosis and treatment for individuals with sensorineural hearing loss.

SIGNIFICANCE STATEMENT Differences in speech perception ability between individuals with similar clinically defined severity of hearing loss are often assumed to reflect central neural-processing differences. Here, we demonstrate for the first time that peripheral neural processing of complex sounds differs dramatically between the two most common etiologies of hearing loss. Greater processing impairment with noise-induced compared with an age-related (metabolic) hearing loss etiology may explain heightened speech perception difficulties in people overexposed to loud environments. These results highlight the need for public policies to prevent noise-induced hearing loss, an entirely avoidable hearing loss etiology, and for personalized strategies to diagnose and treat sensorineural hearing loss.
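In its simplest first-order form, a Wiener-kernel analysis of responses to Gaussian noise reduces to the spike-triggered average (revcor): averaging the noise segments preceding each spike recovers the fiber's effective linear filter. The toy sketch below demonstrates this with a hypothetical "fiber" whose spiking depends on the stimulus a fixed latency earlier; it is an illustration of the technique, not the study's second-order analysis.

```python
import random

def spike_triggered_average(stim, spikes, m):
    """First-order Wiener-kernel estimate: average the m stimulus
    samples preceding each spike (the revcor / STA)."""
    sta = [0.0] * m
    count = 0
    for t in spikes:
        if t >= m:
            for j in range(m):
                sta[j] += stim[t - j]
            count += 1
    return [s / count for s in sta]

rng = random.Random(1)
stim = [rng.gauss(0, 1) for _ in range(20000)]
# Toy fiber: spikes preferentially when the stimulus 5 samples earlier
# was large (a stand-in for a fiber's latency and tuning).
spikes = [t for t in range(5, len(stim))
          if stim[t - 5] > 1.0 and rng.random() < 0.5]
sta = spike_triggered_average(stim, spikes, 10)
print(max(range(10), key=lambda j: sta[j]))  # -> 5: the kernel peaks at the latency
```

Changes in where and how sharply such kernels peak across characteristic frequency are what reveal the tonotopic coding distortions described above.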


Subject(s)
Auditory Perception/physiology , Cochlear Nerve/physiopathology , Hearing Loss, Noise-Induced/physiopathology , Hearing Loss, Sensorineural/physiopathology , Hearing/physiology , Animals , Auditory Threshold , Chinchilla , Disease Models, Animal , Furosemide , Hearing Loss, Sensorineural/chemically induced , Hearing Loss, Sensorineural/etiology , Male
11.
J Acoust Soc Am ; 146(5): 3710, 2019 11.
Article in English | MEDLINE | ID: mdl-31795699

ABSTRACT

The chinchilla animal model for noise-induced hearing loss has an extensive history spanning more than 50 years. Many behavioral, anatomical, and physiological characteristics of the chinchilla make it a valuable animal model for hearing science. These include similarities with human hearing frequency and intensity sensitivity, the ability to be trained behaviorally with acoustic stimuli relevant to human hearing, a docile nature that allows many physiological measures to be made in an awake state, physiological robustness that allows for data to be collected from all levels of the auditory system, and the ability to model various types of conductive and sensorineural hearing losses that mimic pathologies observed in humans. Given these attributes, chinchillas have been used repeatedly to study anatomical, physiological, and behavioral effects of continuous and impulse noise exposures that produce either temporary or permanent threshold shifts. Based on the mechanistic insights from noise-exposure studies, chinchillas have also been used in pre-clinical drug studies for the prevention and rescue of noise-induced hearing loss. This review paper highlights the role of the chinchilla model in hearing science, its important contributions, and its advantages and limitations.


Subject(s)
Chinchilla/physiology , Disease Models, Animal , Hearing Loss, Noise-Induced/physiopathology , Animals , Behavior, Animal , Hearing , Hearing Loss, Noise-Induced/etiology , Hearing Loss, Noise-Induced/pathology , Humans , Species Specificity
12.
J Acoust Soc Am ; 143(3): 1287, 2018 03.
Article in English | MEDLINE | ID: mdl-29604696

ABSTRACT

Sensitivity to interaural time differences (ITDs) in envelope and temporal fine structure (TFS) of amplitude-modulated (AM) tones was assessed for young and older subjects, all with clinically normal hearing at the carrier frequencies of 250 and 500 Hz. Some subjects had hearing loss at higher frequencies. In experiment 1, thresholds for detecting changes in ITD were measured when the ITD was present in the TFS alone (ITDTFS), the envelope alone (ITDENV), or both (ITDTFS/ENV). Thresholds tended to be higher for the older than for the young subjects. ITDENV thresholds were much higher than ITDTFS thresholds, while ITDTFS/ENV thresholds were similar to ITDTFS thresholds. ITDTFS thresholds were lower than ITD thresholds obtained with an unmodulated pure tone, indicating that uninformative AM can improve ITDTFS discrimination. In experiment 2, equally detectable values of ITDTFS and ITDENV were combined so as to give consistent or inconsistent lateralization. There were large individual differences, but several subjects gave scores that were much higher than would be expected from the optimal combination of independent sources of information, even for the inconsistent condition. It is suggested that ITDTFS and ITDENV cues are processed partly independently, but that both cues influence lateralization judgments, even when one cue is uninformative.
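The "optimal combination of independent sources of information" benchmark referred to above is conventionally computed in signal detection theory by adding sensitivities (d') in quadrature. A minimal sketch with illustrative d' values (not the study's data):

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def pc_2afc(d):
    """Proportion correct in a 2AFC task for sensitivity d'."""
    return phi(d / math.sqrt(2))

# Two equally detectable cues (d' = 1 each), e.g., ITD-TFS and ITD-ENV.
# If independent and combined optimally, sensitivities add in quadrature.
d_tfs, d_env = 1.0, 1.0
d_comb = math.sqrt(d_tfs ** 2 + d_env ** 2)
print(round(pc_2afc(d_tfs), 3))   # -> 0.76  (single cue)
print(round(pc_2afc(d_comb), 3))  # -> 0.841 (optimal-combination benchmark)
```

Scores reliably above the second value would exceed this benchmark, which is the pattern several subjects showed in experiment 2.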


Subject(s)
Auditory Perception , Acoustic Stimulation , Adult , Age Factors , Aged , Auditory Threshold , Cochlea/physiology , Hearing Loss/physiopathology , Humans , Middle Aged , Time Perception , Young Adult
13.
Acta Acust United Acust ; 104(5): 922-925, 2018.
Article in English | MEDLINE | ID: mdl-30369861

ABSTRACT

When presented with two vowels simultaneously, humans are often able to identify the constituent vowels. Computational models exist that simulate this ability; however, they predict listener confusions poorly, particularly when the two vowels have the same fundamental frequency. Presented here is a model that is uniquely able to predict the combined representation of concurrent vowels and that accounts for listeners' systematic perceptual decisions with high accuracy.

14.
J Neurosci ; 36(7): 2227-37, 2016 Feb 17.
Article in English | MEDLINE | ID: mdl-26888932

ABSTRACT

People with cochlear hearing loss have substantial difficulty understanding speech in real-world listening environments (e.g., restaurants), even with amplification from a modern digital hearing aid. Unfortunately, a disconnect remains between human perceptual studies implicating diminished sensitivity to fast acoustic temporal fine structure (TFS) and animal studies showing minimal changes in neural coding of TFS or slower envelope (ENV) structure. Here, we used general system-identification (Wiener kernel) analyses of chinchilla auditory nerve fiber responses to Gaussian noise to reveal pronounced distortions in tonotopic coding of TFS and ENV following permanent, noise-induced hearing loss. In basal fibers with characteristic frequencies (CFs) >1.5 kHz, hearing loss introduced robust nontonotopic coding (i.e., at the wrong cochlear place) of low-frequency TFS, while ENV responses typically remained at CF. As a consequence, the highest dominant frequency of TFS coding in response to Gaussian noise was 2.4 kHz in noise-overexposed fibers compared with 4.5 kHz in control fibers. Coding of ENV also became nontonotopic in more pronounced cases of cochlear damage. In apical fibers, more classical hearing-loss effects were observed, i.e., broadened tuning without a significant shift in best frequency. Because these distortions and dissociations of TFS/ENV disrupt tonotopicity, a fundamental principle of auditory processing necessary for robust signal coding in background noise, these results have important implications for understanding communication difficulties faced by people with hearing loss. Further, hearing aids may benefit from distinct amplification strategies for apical and basal cochlear regions to address fundamentally different coding deficits. SIGNIFICANCE STATEMENT: Speech-perception problems associated with noise overexposure are pervasive in today's society, even with modern digital hearing aids. 
Unfortunately, the underlying physiological deficits in neural coding remain unclear. Here, we used innovative system-identification analyses of auditory nerve fiber responses to Gaussian noise to uncover pronounced distortions in coding of rapidly varying acoustic temporal fine structure and slower envelope cues following noise trauma. Because these distortions degrade and diminish the tonotopic representation of temporal acoustic features, a fundamental principle of auditory processing, the results represent a critical advancement in our understanding of the physiological bases of communication disorders. The detailed knowledge provided by this work will help guide the design of signal-processing strategies aimed at alleviating everyday communication problems for people with hearing loss.


Subject(s)
Hearing Loss, Noise-Induced/physiopathology , Acoustic Stimulation , Animals , Chinchilla , Cochlea/injuries , Cochlear Nerve , Hearing Loss, Sensorineural , Male , Nerve Fibers
15.
Hum Mol Genet ; 24(1): 1-8, 2015 Jan 01.
Article in English | MEDLINE | ID: mdl-25113746

ABSTRACT

Neurofibromatosis type 2 (NF2) is an autosomal dominant genetic disorder resulting from germline mutations in the NF2 gene. Bilateral vestibular schwannomas, tumors on cranial nerve VIII, are pathognomonic for NF2 disease. Furthermore, schwannomas also commonly develop in other cranial nerves, dorsal root ganglia and peripheral nerves. These tumors are a major cause of morbidity and mortality, and medical therapies to treat them are limited. Animal models that accurately recapitulate the full anatomical spectrum of human NF2-related schwannomas, including the characteristic functional deficits in hearing and balance associated with cranial nerve VIII tumors, would allow systematic evaluation of experimental therapeutics prior to clinical use. Here, we present a genetically engineered NF2 mouse model generated through excision of the Nf2 gene driven by Cre expression under control of a tissue-restricted 3.9 kb Periostin promoter element. By 10 months of age, 100% of Postn-Cre; Nf2(flox/flox) mice develop spinal, peripheral and cranial nerve tumors histologically identical to human schwannomas. In addition, the development of cranial nerve VIII tumors correlates with functional impairments in hearing and balance, as measured by auditory brainstem response and vestibular testing. Overall, the Postn-Cre; Nf2(flox/flox) tumor model provides a novel tool for future mechanistic and therapeutic studies of NF2-associated schwannomas.


Subject(s)
Cell Adhesion Molecules/genetics , Ganglia, Spinal/pathology , Neurofibromatosis 2/genetics , Neurofibromin 2/genetics , Neuroma, Acoustic/physiopathology , Vestibulocochlear Nerve/pathology , Animals , Disease Models, Animal , Exons , Hearing , Humans , Mice , Mice, Transgenic , Mutation , Neurofibromatosis 2/complications , Neurofibromatosis 2/physiopathology , Neuroma, Acoustic/genetics , Neuroma, Acoustic/pathology
16.
Adv Exp Med Biol ; 894: 285-295, 2016.
Article in English | MEDLINE | ID: mdl-27080669

ABSTRACT

The compressive nonlinearity of cochlear signal transduction, reflecting outer-hair-cell function, manifests as suppressive spectral interactions; e.g., two-tone suppression. Moreover, for broadband sounds, there are multiple interactions between frequency components. These frequency-dependent nonlinearities are important for neural coding of complex sounds, such as speech. Acoustic-trauma-induced outer-hair-cell damage is associated with loss of nonlinearity, which auditory prostheses attempt to restore with, e.g., "multi-channel dynamic compression" algorithms. Neurophysiological data on suppression in hearing-impaired (HI) mammals are limited. We present data on firing-rate suppression measured in auditory-nerve-fiber responses in a chinchilla model of noise-induced hearing loss, and in normal-hearing (NH) controls at equal sensation level. Hearing-impaired animals had elevated single-fiber excitatory thresholds (by ~20-40 dB), broadened frequency tuning, and reduced-magnitude distortion-product otoacoustic emissions, consistent with mixed inner- and outer-hair-cell pathology. We characterized suppression using two approaches: adaptive tracking of two-tone-suppression threshold (62 NH and 35 HI fibers), and Wiener-kernel analyses of responses to broadband noise (91 NH and 148 HI fibers). Suppression-threshold tuning curves showed sensitive low-side suppression for NH and HI animals. High-side suppression thresholds were elevated in HI animals, to the same extent as excitatory thresholds. We factored second-order Wiener kernels into excitatory and suppressive sub-kernels to quantify the relative strength of suppression. We found a small decrease in suppression in HI fibers, which correlated with broadened tuning. These data will help guide novel amplification strategies, particularly for complex listening situations (e.g., speech in noise), in which current hearing aids struggle to restore intelligibility.


Subject(s)
Cochlear Nerve/physiology , Hearing Loss, Noise-Induced/physiopathology , Nerve Fibers/physiology , Animals , Auditory Threshold , Chinchilla
17.
J Neurosci ; 34(36): 12145-54, 2014 Sep 03.
Article in English | MEDLINE | ID: mdl-25186758

ABSTRACT

The dichotomy between acoustic temporal envelope (ENV) and temporal fine structure (TFS) cues has stimulated numerous studies over the past decade to understand the relative roles of acoustic ENV and TFS in human speech perception. Such acoustic temporal speech cues produce distinct neural discharge patterns at the level of the auditory nerve, yet little is known about the central neural mechanisms underlying the dichotomy in speech perception between neural ENV and TFS cues. We explored the question of how the peripheral auditory system encodes neural ENV and TFS cues in steady or fluctuating background noise, and how the central auditory system combines these forms of neural information for speech identification. We sought to address this question by (1) measuring sentence identification in background noise for human subjects as a function of the degree of available acoustic TFS information and (2) examining the optimal combination of neural ENV and TFS cues to explain human speech perception performance using computational models of the peripheral auditory system and central neural observers. Speech-identification performance by human subjects decreased as the acoustic TFS information was degraded in the speech signals. The model predictions best matched human performance when a greater emphasis was placed on neural ENV coding rather than neural TFS. However, neural TFS cues were necessary to account for the full effect of background-noise modulations on human speech-identification performance.
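A central observer that weights ENV cues more heavily than TFS cues, as the modeling result above suggests, can be illustrated with a toy combination rule. The quadratic form and the default weight below are illustrative assumptions for a d-prime-like sensitivity index, not the paper's actual model.

```python
import numpy as np

def combined_sensitivity(d_env, d_tfs, w_env=0.8):
    """Toy central-observer combination of neural ENV and TFS cue
    sensitivities (d-prime-like indices). The quadratic-combination rule
    and the weight value are assumptions for illustration only."""
    return np.sqrt(w_env * d_env**2 + (1.0 - w_env) * d_tfs**2)
```

With `w_env` near 1, predictions are dominated by ENV coding, yet any nonzero TFS weight still lets TFS cues modulate performance in fluctuating noise, mirroring the abstract's two findings qualitatively.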


Subject(s)
Auditory Pathways/physiology , Cues , Models, Neurological , Speech Perception , Adult , Female , Humans , Male , Noise
18.
BMC Genomics ; 16: 710, 2015 Sep 18.
Article in English | MEDLINE | ID: mdl-26385698

ABSTRACT

BACKGROUND: The arrival of RNA-seq as a high-throughput method competitive with the established microarray technologies has necessarily driven a need for comparative evaluation. To date, cross-platform comparisons of these technologies have analyzed relatively few platforms and were typically oriented toward gene-name annotation. Here, we present a more extensive and yet precise assessment to elucidate differences and similarities in performance across numerous aspects, including dynamic range, fidelity of raw signal and fold-change with sample titration, and concordance with qRT-PCR (TaqMan). To ensure that these results were not confounded by incompatible comparisons, we introduce the concept of a probe-mapping-directed "transcript pattern". A transcript pattern identifies probe(set)s across platforms that target a common set of transcripts for a specific gene. Thus, three levels of data were examined: entire data sets, data derived from a subset of 15,442 RefSeq genes common across platforms, and data derived from the transcript-pattern-defined subset of 7,034 RefSeq genes. RESULTS: In general, there were substantial core similarities between all six platforms evaluated; but, to varying degrees, the two RNA-seq protocols outperformed three of the four microarray platforms in most categories. Notably, the fourth microarray platform, Agilent with a modified protocol, was comparable, or marginally superior, to the RNA-seq protocols within these same assessments, especially with regard to fold-change evaluation. Furthermore, these three platforms (Agilent and the two RNA-seq methods) demonstrated over 80% fold-change concordance with the gold-standard qRT-PCR (TaqMan). CONCLUSIONS: This study suggests that microarrays can perform on nearly equal footing with RNA-seq, in certain key features, specifically when the dynamic range is comparable. Furthermore, the concept of a transcript pattern has been introduced that may minimize potential confounding factors of multi-platform comparison and may be useful for similar evaluations.
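The transcript-pattern idea above amounts to grouping probe(set)s by the exact set of transcripts they target, so that cross-platform comparisons only pair measurements of the same transcript set. A minimal sketch, with an assumed input mapping and illustrative names:

```python
from collections import defaultdict

def transcript_patterns(probe_targets):
    """Group probe(set)s that target an identical set of transcripts,
    approximating the 'transcript pattern' concept from the abstract.
    probe_targets: {(platform, probe_id): set_of_transcript_ids}.
    The input shape and names are assumptions for illustration."""
    patterns = defaultdict(list)
    for probe, transcripts in probe_targets.items():
        # frozenset makes the transcript set hashable and order-free,
        # so probes on different platforms land in the same bucket.
        patterns[frozenset(transcripts)].append(probe)
    return dict(patterns)
```

Downstream, one would keep only patterns represented on every platform of interest, yielding directly comparable probe groups rather than gene-name matches.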


Subject(s)
Gene Expression Profiling/instrumentation , RNA/genetics , Oligonucleotide Array Sequence Analysis , RNA/chemistry , Reproducibility of Results
19.
bioRxiv ; 2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38586037

ABSTRACT

Hearing-impaired listeners struggle to understand speech in noise, even when using cochlear implants (CIs) or hearing aids. Successful listening in noisy environments depends on the brain's ability to organize a mixture of sound sources into distinct perceptual streams (i.e., source segregation). In normal-hearing listeners, temporal coherence of sound fluctuations across frequency channels supports this process by promoting grouping of elements belonging to a single acoustic source. We hypothesized that reduced spectral resolution-a hallmark of both electric/CI (from current spread) and acoustic (from broadened tuning) hearing with sensorineural hearing loss-degrades segregation based on temporal coherence. This is because reduced frequency resolution decreases the likelihood that a single sound source dominates the activity driving any specific channel; concomitantly, it increases the correlation in activity across channels. Consistent with our hypothesis, predictions from a physiologically plausible model of temporal-coherence-based segregation suggest that CI current spread reduces comodulation masking release (CMR; a correlate of temporal-coherence processing) and speech intelligibility in noise. These predictions are consistent with our behavioral data with simulated CI listening. Our model also predicts smaller CMR with increasing levels of outer-hair-cell damage. These results suggest that reduced spectral resolution relative to normal hearing impairs temporal-coherence-based segregation and speech-in-noise outcomes.
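The core claim above, that spreading excitation across channels raises the correlation between neighboring channels' envelopes, can be demonstrated numerically. The moving-average spread kernel below is a stand-in for CI current spread or broadened cochlear tuning, not the paper's physiological model; all names are illustrative.

```python
import numpy as np

def cross_channel_correlation(channel_env, spread=0):
    """Mean correlation between adjacent channels' envelopes, before or
    after smearing activity across channels. channel_env is an
    (n_channels, n_time) array; spread is the half-width (in channels)
    of a simple moving-average kernel, an assumption standing in for
    current spread or broadened tuning."""
    env = channel_env.astype(float)
    if spread > 0:
        kernel = np.ones(2 * spread + 1) / (2 * spread + 1)
        # Convolve across the channel axis at each time point.
        env = np.apply_along_axis(
            lambda c: np.convolve(c, kernel, mode="same"), 0, env)
    c = np.corrcoef(env)
    return np.mean(np.diag(c, k=1))
```

Even for statistically independent channel envelopes, any nonzero spread pushes the adjacent-channel correlation upward, which is the condition the abstract argues degrades temporal-coherence-based segregation.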

20.
J Assoc Res Otolaryngol ; 25(1): 35-51, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38278969

ABSTRACT

PURPOSE: Frequency selectivity is a fundamental property of the peripheral auditory system; however, the invasiveness of auditory nerve (AN) experiments limits its study in the human ear. Compound action potentials (CAPs) associated with forward masking have been suggested as an alternative to assess cochlear frequency selectivity. Previous methods relied on an empirical comparison of AN and CAP tuning curves in animal models, arguably not taking full advantage of the information contained in forward-masked CAP waveforms. METHODS: To improve the estimation of cochlear frequency selectivity based on the CAP, we introduce a convolution model to fit forward-masked CAP waveforms. The model generates masking patterns that, when convolved with a unitary response, can predict the masking of the CAP waveform induced by Gaussian noise maskers. Model parameters, including those characterizing frequency selectivity, are fine-tuned by minimizing waveform prediction errors across numerous masking conditions, yielding robust estimates. RESULTS: The method was applied to click-evoked CAPs at the round window of anesthetized chinchillas using notched-noise maskers with various notch widths and attenuations. The estimated quality factor Q10 as a function of center frequency is shown to closely match the average quality factor obtained from AN fiber tuning curves, without the need for an empirical correction factor. CONCLUSION: This study establishes a moderately invasive method for estimating cochlear frequency selectivity with potential applicability to other animal species or humans. Beyond the estimation of frequency selectivity, the proposed model proved to be remarkably accurate in fitting forward-masked CAP responses and could be extended to study more complex aspects of cochlear signal processing (e.g., compressive nonlinearities).
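The convolution model above predicts the CAP as a neural excitation (masking) pattern convolved with a unitary response. A minimal sketch, in which the masker attenuates a single pooled excitation pattern multiplicatively; the pooling, the multiplicative masking term, and all parameter names are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def predict_masked_cap(excitation, masking, unitary_response, dt=1e-5):
    """Sketch of a forward-masked CAP prediction: a masker attenuates
    the excitation pattern, and the residual excitation is convolved
    with a unitary response to yield the CAP waveform. Multiplicative
    masking and the parameterization are illustrative assumptions."""
    residual = excitation * (1.0 - masking)  # masked excitation pattern
    cap = np.convolve(residual, unitary_response)[: len(excitation)]
    return cap * dt
```

Fitting such a model across many notched-noise masking conditions, as the abstract describes, then amounts to adjusting the parameters that shape the masking pattern (including frequency-selectivity parameters) to minimize the waveform prediction error.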


Subject(s)
Cochlea , Cochlear Nerve , Animals , Humans , Action Potentials , Round Window, Ear , Chinchilla