Results 1 - 20 of 55
1.
bioRxiv; 2024 May 30.
Article in English | MEDLINE | ID: mdl-38853938

ABSTRACT

Parvalbumin-expressing inhibitory neurons (PVNs) stabilize cortical network activity, generate gamma rhythms, and regulate experience-dependent plasticity. Here, we observed that activation or inactivation of PVNs functioned like a volume knob in the mouse auditory cortex (ACtx), turning neural and behavioral classification of sound level up or down over a 20 dB range. PVN loudness adjustments were "sticky", such that a single bout of 40 Hz PVN stimulation sustainably suppressed ACtx sound responsiveness, potentiated feedforward inhibition, and behaviorally desensitized mice to loudness. Sensory hypersensitivity is a cardinal feature of autism, aging, and peripheral neuropathy, prompting us to ask whether PVN stimulation can persistently desensitize mice with ACtx hyperactivity, PVN hypofunction, and loudness hypersensitivity triggered by cochlear sensorineural damage. We found that a single 16-minute session of 40 Hz PVN stimulation restored normal loudness perception for one week, showing that perceptual deficits triggered by irreversible peripheral injuries can be reversed through targeted cortical circuit interventions.

2.
Curr Biol; 34(8): 1605-1620.e5, 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38492568

ABSTRACT

Sound elicits rapid movements of muscles in the face, ears, and eyes that protect the body from injury and trigger brain-wide internal state changes. Here, we performed quantitative facial videography from mice resting atop a piezoelectric force plate and observed that broadband sounds elicited rapid and stereotyped facial twitches. Facial motion energy (FME) adjacent to the whisker array was 30 dB more sensitive than the acoustic startle reflex and offered greater inter-trial and inter-animal reliability than sound-evoked pupil dilations or movement of other facial and body regions. FME tracked the low-frequency envelope of broadband sounds, providing a means to study behavioral discrimination of complex auditory stimuli, such as speech phonemes in noise. Approximately 25% of layer 5-6 units in the auditory cortex (ACtx) exhibited firing rate changes during facial movements. However, FME facilitation during ACtx photoinhibition indicated that sound-evoked facial movements were mediated by a midbrain pathway and modulated by descending corticofugal input. FME and auditory brainstem response (ABR) thresholds were closely aligned after noise-induced sensorineural hearing loss, yet FME growth slopes were disproportionately steep at spared frequencies, reflecting a central plasticity that matched commensurate changes in ABR wave 4. Sound-evoked facial movements were also hypersensitive in Ptchd1 knockout mice, highlighting the use of FME for identifying sensory hyper-reactivity phenotypes after adult-onset hyperacusis and inherited deficiencies in autism risk genes. These findings present a sensitive and integrative measure of hearing while also highlighting that even low-intensity broadband sounds can elicit a complex mixture of auditory, motor, and reafferent somatosensory neural activity.
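
As an illustration of how a facial motion energy signal of this kind can be computed from videography, here is a minimal sketch using frame differencing over a fixed region of interest. This is an assumption about the general approach, not the authors' published pipeline; the function name, ROI coordinates, and synthetic video are hypothetical.

```python
import numpy as np

def facial_motion_energy(frames, roi):
    """Facial motion energy (FME) as the mean absolute frame-to-frame
    intensity change within a region of interest.

    frames : ndarray, shape (n_frames, height, width), grayscale video
    roi    : (row_start, row_stop, col_start, col_stop) bounding box
    Returns an (n_frames - 1,) FME time series.
    """
    r0, r1, c0, c1 = roi
    patch = frames[:, r0:r1, c0:c1].astype(float)
    # Absolute pixel-wise difference between consecutive frames,
    # averaged over the ROI, gives one motion-energy value per frame pair.
    return np.abs(np.diff(patch, axis=0)).mean(axis=(1, 2))

# Example with synthetic video: 300 frames of 240x320 noise.
rng = np.random.default_rng(0)
video = rng.integers(0, 255, size=(300, 240, 320))
fme = facial_motion_energy(video, roi=(100, 160, 120, 200))  # hypothetical whisker-pad ROI
print(fme.shape)  # (299,)
```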


Subject(s)
Hearing; Animals; Mice; Male; Hearing/physiology; Sound; Acoustic Stimulation; Female; Auditory Cortex/physiology; Mice, Inbred C57BL; Movement; Evoked Potentials, Auditory, Brain Stem
3.
bioRxiv; 2023 Aug 15.
Article in English | MEDLINE | ID: mdl-37645975

ABSTRACT

Optimal speech perception in noise requires successful separation of the target speech stream from multiple competing background speech streams. The ability to segregate these competing speech streams depends on the fidelity of bottom-up neural representations of sensory information in the auditory system and top-down influences of effortful listening. Here, we use objective neurophysiological measures of bottom-up temporal processing, envelope-following responses (EFRs) to amplitude-modulated tones, and investigate their interactions with pupil-indexed listening effort, as they relate to performance on the Quick Speech-in-Noise (QuickSIN) test in young adult listeners with clinically normal hearing thresholds. We developed an approach using ear-canal electrodes and adjusting electrode montages for modulation rate ranges, which extended the range of reliable EFR measurements as high as 1024 Hz. Pupillary responses revealed changes in listening effort at the two most difficult signal-to-noise ratios (SNRs), but behavioral deficits at the hardest SNR only. Neither pupil-indexed listening effort nor the slope of the EFR decay function independently related to QuickSIN performance. However, a linear model using the combination of EFR and pupil metrics significantly explained variance in QuickSIN performance. These results suggest a synergistic interaction between bottom-up sensory coding and top-down measures of listening effort as they relate to speech perception in noise. These findings can inform the development of next-generation tests for hearing deficits in listeners with normal hearing thresholds that incorporate a multi-dimensional approach to understanding speech intelligibility deficits.
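
The combined model described here is, in spirit, an ordinary multiple regression with an EFR metric and a pupil metric as joint predictors of QuickSIN scores. A minimal sketch with synthetic data follows; the predictor names and effect sizes are hypothetical, and the study's actual metrics differ in detail.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 30                                  # hypothetical number of listeners
efr_slope = rng.normal(size=n)          # e.g., slope of the EFR decay function
pupil_effort = rng.normal(size=n)       # e.g., pupil-indexed listening effort
quicksin = 2.0 + 0.8 * efr_slope + 0.6 * pupil_effort + rng.normal(scale=0.5, size=n)

# Neither predictor alone may reach significance, but the joint linear
# model can still explain significant variance, as the abstract reports.
X = sm.add_constant(np.column_stack([efr_slope, pupil_effort]))
fit = sm.OLS(quicksin, X).fit()
print(fit.rsquared, fit.f_pvalue)
```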

4.
J Neurophysiol; 129(4): 872-893, 2023 Apr 1.
Article in English | MEDLINE | ID: mdl-36921210

ABSTRACT

Dichotic pitches such as the Huggins pitch (HP) and the binaural edge pitch (BEP) are perceptual illusions whereby binaural noise that exhibits abrupt changes in interaural phase differences (IPDs) across frequency creates a tonelike pitch percept when presented to both ears, even though it does not produce a pitch when presented monaurally. At the perceptual and cortical levels, dichotic pitches behave as if an actual tone had been presented to the ears, yet investigations of neural correlates of dichotic pitch in single-unit responses at subcortical levels are lacking. We tested for cues to HP and BEP in the responses of binaural neurons in the auditory midbrain of anesthetized cats by varying the expected pitch frequency around each neuron's best frequency (BF). Neuronal firing rates showed specific features (peaks, troughs, or edges) when the pitch frequency crossed the BF, and the type of feature was consistent with a well-established model of binaural processing comprising frequency tuning, internal delays, and firing rates sensitive to interaural correlation. A Jeffress-like neural population model, in which the behavior of individual neurons was governed by the cross-correlation model and the neurons were independently distributed along BF and best IPD, predicted trends in human psychophysical HP detection, but only when the model incorporated physiological BF and best IPD distributions. These results demonstrate the existence of a rate-place code for HP and BEP in the auditory midbrain and provide a firm physiological basis for models of dichotic pitches.

NEW & NOTEWORTHY Dichotic pitches are perceptual illusions created centrally through binaural interactions that offer an opportunity to test theories of pitch and binaural hearing. Here we show that binaural neurons in the auditory midbrain encode the frequency of two salient types of dichotic pitches via specific features in the pattern of firing rates along the tonotopic axis. This is the first combined single-unit and modeling study of responses of auditory neurons to stimuli evoking a dichotic pitch.
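
The cross-correlation model referred to here can be caricatured in a few lines: band-pass filter each ear's waveform near the neuron's BF, apply an internal delay to one side, and map the resulting interaural correlation to firing rate. The sketch below illustrates that scheme only; the filter bandwidth, rate mapping, and maximum rate are assumptions, not the authors' fitted parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def model_rate(left, right, fs, bf, best_delay_s, bw=0.4):
    """Schematic binaural cross-correlation unit: narrowband filtering
    at the best frequency (BF), an internal delay, and a firing rate
    that grows with interaural correlation at that delay."""
    sos = butter(4, [bf * (1 - bw / 2), bf * (1 + bw / 2)],
                 btype="bandpass", fs=fs, output="sos")
    l = sosfiltfilt(sos, left)
    r = sosfiltfilt(sos, right)
    shift = int(round(best_delay_s * fs))  # internal delay in samples
    r = np.roll(r, shift)
    rho = np.corrcoef(l, r)[0, 1]          # interaural correlation
    return 50.0 * (1 + rho) / 2            # assumed maximum rate of 50 spikes/s

# Diotic noise (identical at both ears) drives a zero-best-delay unit maximally.
fs = 48000
rng = np.random.default_rng(2)
noise = rng.normal(size=fs)
print(model_rate(noise, noise, fs, bf=600.0, best_delay_s=0.0))
```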


Subject(s)
Illusions; Pitch Perception; Humans; Pitch Perception/physiology; Noise; Hearing; Mesencephalon; Acoustic Stimulation/methods
5.
Elife; 11, 2022 Sep 16.
Article in English | MEDLINE | ID: mdl-36111669

ABSTRACT

Neurons in sensory cortex exhibit a remarkable capacity to maintain stable firing rates despite large fluctuations in afferent activity levels. However, sudden peripheral deafferentation in adulthood can trigger an excessive, non-homeostatic cortical compensatory response that may underlie perceptual disorders including sensory hypersensitivity, phantom limb pain, and tinnitus. Here, we show that mice with noise-induced damage of the high-frequency cochlear base were behaviorally hypersensitive to spared mid-frequency tones and to direct optogenetic stimulation of auditory thalamocortical neurons. Chronic two-photon calcium imaging from auditory cortex (ACtx) pyramidal neurons (PyrNs) revealed an initial stage of spatially diffuse hyperactivity, hyper-correlation, and auditory hyperresponsivity that consolidated around deafferented map regions three or more days after acoustic trauma. Deafferented PyrN ensembles also displayed hypersensitive decoding of spared mid-frequency tones that mirrored behavioral hypersensitivity, suggesting that non-homeostatic regulation of cortical sound intensity coding following sensorineural loss may be an underlying source of auditory hypersensitivity. Excess cortical response gain after acoustic trauma was expressed heterogeneously among individual PyrNs, yet 40% of this variability could be accounted for by each cell's baseline response properties prior to acoustic trauma. PyrNs with initially high spontaneous activity and gradual monotonic intensity growth functions were more likely to exhibit non-homeostatic excess gain after acoustic trauma. This suggests that while cortical gain changes are triggered by reduced bottom-up afferent input, their subsequent stabilization is also shaped by their local circuit milieu, where indicators of reduced inhibition can presage pathological hyperactivity following sensorineural hearing loss.


Subject(s)
Auditory Cortex; Hearing Loss, Noise-Induced; Tinnitus; Acoustic Stimulation; Animals; Calcium; Cochlea; Mice; Noise
6.
NPJ Digit Med; 5(1): 127, 2022 Aug 29.
Article in English | MEDLINE | ID: mdl-36038708

ABSTRACT

Tinnitus, or ringing in the ears, is a prevalent condition that imposes a substantial health and financial burden on the patient and on society. The diagnosis of tinnitus, like that of pain, relies on patient self-report, which can complicate the distinction between actual and fraudulent claims. Here, we combined tablet-based self-directed hearing assessments with neural network classifiers to automatically differentiate participants with tinnitus (N = 24) from a malingering cohort instructed to feign an imagined tinnitus percept (N = 28). We identified clear differences between the groups, both in their overt reporting of tinnitus features and in covert differences in their fingertip movement trajectories on the tablet surface as they performed the reporting assay. Using only 10 min of data, we achieved 81% accuracy classifying patients and malingerers (ROC AUC = 0.88) with leave-one-out cross-validation. Quantitative, automated measurements of tinnitus salience could improve clinical outcome assays and more accurately determine tinnitus incidence.
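
A minimal sketch of the evaluation scheme described (leave-one-out cross-validation with an ROC-AUC summary) on synthetic data. The eight features and the group effect size are hypothetical; the abstract specifies a neural network classifier but not its architecture, so the small MLP here is an assumption.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
# Hypothetical per-participant features, e.g. reported tinnitus attributes
# and fingertip-trajectory statistics; 24 tinnitus (1) vs 28 malingerers (0).
X = rng.normal(size=(52, 8))
y = np.array([1] * 24 + [0] * 28)
X[y == 1] += 0.8                     # synthetic group difference

# Leave-one-out: train on all participants but one, score the held-out one.
scores = np.zeros(len(y))
for train, test in LeaveOneOut().split(X):
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(X[train], y[train])
    scores[test] = clf.predict_proba(X[test])[:, 1]

print("AUC:", roc_auc_score(y, scores))
```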

7.
JASA Express Lett; 2(6): 064403, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35719240

ABSTRACT

In animal models, cochlear neural degeneration (CND) is associated with excess central gain and hyperacusis, but a compelling link between reduced cochlear neural inputs and heightened loudness perception in humans remains elusive. The present study examined whether greater estimated cochlear neural degeneration (eCND) in human participants with normal hearing thresholds is associated with heightened loudness perception and sound aversion. Results demonstrated that loudness perception was heightened in ears with greater eCND and in subjects who self-reported loudness aversion via a hyperacusis questionnaire. These findings suggest that CND may be a potential trigger for loudness hypersensitivity.

8.
J Neurophysiol; 127(1): 290-312, 2022 Jan 1.
Article in English | MEDLINE | ID: mdl-34879207

ABSTRACT

The pitch of harmonic complex tones (HCTs) common in speech, music, and animal vocalizations plays a key role in the perceptual organization of sound. Unraveling the neural mechanisms of pitch perception requires animal models, but little is known about complex pitch perception by animals, and some species appear to use different pitch mechanisms than humans. Here, we tested rabbits' ability to discriminate the fundamental frequency (F0) of HCTs with missing fundamentals, using a behavioral paradigm inspired by foraging behavior in which rabbits learned to harness a spatial gradient in F0 to find the location of a virtual target within a room for a food reward. Rabbits were initially trained to discriminate HCTs with F0s in the range 400-800 Hz and with harmonics covering a wide frequency range (800-16,000 Hz) and were then tested with stimuli differing in spectral composition to test the role of harmonic resolvability (experiment 1), in F0 range (experiment 2), or in both F0 and spectral content (experiment 3). Together, these experiments show that rabbits can discriminate HCTs over a wide F0 range (200-1,600 Hz) encompassing the range of conspecific vocalizations and can use either the spectral pattern of harmonics resolved by the cochlea for higher F0s or temporal envelope cues resulting from interaction between unresolved harmonics for lower F0s. The qualitative similarity of these results to human performance supports the use of rabbits as an animal model for studies of pitch mechanisms, provided that species differences in cochlear frequency selectivity and the F0 range of vocalizations are taken into account.

NEW & NOTEWORTHY Understanding the neural mechanisms of pitch perception requires experiments in animal models, but little is known about pitch perception by animals. Here we show that rabbits, a popular animal in auditory neuroscience, can discriminate complex sounds differing in pitch using either spectral cues or temporal cues. The results suggest that the role of spectral cues in pitch perception by animals may have been underestimated by predominantly testing low frequencies in the range of the human voice.


Subject(s)
Behavior, Animal/physiology; Cues; Discrimination, Psychological/physiology; Pitch Perception/physiology; Spatial Processing/physiology; Time Perception/physiology; Animals; Rabbits; Vocalization, Animal/physiology
9.
Elife; 10, 2021 Oct 19.
Article in English | MEDLINE | ID: mdl-34665127

ABSTRACT

Excess noise damages sensory hair cells, resulting in loss of synaptic connections with auditory nerves and, in some cases, hair-cell death. The cellular mechanisms underlying mechanically induced hair-cell damage and subsequent repair are not completely understood. Hair cells in neuromasts of larval zebrafish are structurally and functionally comparable to mammalian hair cells but undergo robust regeneration following ototoxic damage. We therefore developed a model for mechanically induced hair-cell damage in this highly tractable system. Free-swimming larvae exposed to a strong water wave stimulus for 2 hr displayed mechanical injury to neuromasts, including afferent neurite retraction, damaged hair bundles, and reduced mechanotransduction. Synapse loss was observed in apparently intact exposed neuromasts, and this loss was exacerbated by inhibiting glutamate uptake. Mechanical damage also elicited an inflammatory response and macrophage recruitment. Remarkably, neuromast hair-cell morphology and mechanotransduction recovered within hours following exposure, suggesting that severely damaged neuromasts undergo repair. Our results indicate functional changes and synapse loss in mechanically damaged lateral-line neuromasts that share key features of damage observed in the noise-exposed mammalian ear. Yet, unlike in the mammalian ear, mechanical damage to neuromasts is rapidly reversible.


Subject(s)
Lateral Line System/injuries; Mechanoreceptors/physiology; Mechanotransduction, Cellular; Synapses/physiology; Zebrafish/injuries; Animals; Biomechanical Phenomena; Hair Cells, Auditory/physiology; Lateral Line System/physiology; Zebrafish/physiology
10.
J Acoust Soc Am; 150(4): 2492, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34717457

ABSTRACT

In recent electrocochleographic studies, the amplitude of the summating potential (SP) was an important predictor of performance on word recognition in difficult listening environments among normal-hearing listeners; paradoxically, the SP was largest in those with the worst scores. The SP has traditionally been extracted by visual inspection, a technique prone to subjectivity and error. Here, we assess the utility of a fitting algorithm [Kamerer, Neely, and Rasetshwane (2020). J Acoust Soc Am 147, 25-31] using a summed-Gaussian model to objectify and improve SP identification. Results show that SPs extracted by visual inspection correlate better with word scores than those from the model fits. We also use the fast Fourier transform to decompose these evoked responses into their spectral components to gain insight into the cellular generators of the SP. We find a component at 310 Hz associated with word-identification tasks that correlates with SP amplitude. This component is absent in patients with genetic mutations affecting synaptic transmission and may reflect a contribution from excitatory postsynaptic potentials in auditory nerve fibers.
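
A minimal sketch of the spectral-decomposition step: take the FFT of an averaged evoked response and read out the magnitude of the bin nearest 310 Hz. The sampling rate, window length, and synthetic waveform are assumptions for illustration.

```python
import numpy as np

fs = 10000                       # assumed sampling rate (Hz)
t = np.arange(0, 0.1, 1 / fs)    # assumed 100 ms analysis window (10 Hz resolution)
# Synthetic averaged evoked response: a 310 Hz component plus noise.
resp = 0.2 * np.sin(2 * np.pi * 310 * t) \
       + 0.05 * np.random.default_rng(4).normal(size=t.size)

# Window, transform, and pick the bin nearest the component of interest.
spectrum = np.fft.rfft(resp * np.hanning(t.size))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
k = np.argmin(np.abs(freqs - 310))
print(freqs[k], np.abs(spectrum[k]))
```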


Subject(s)
Audiometry, Evoked Response; Hearing Tests; Fourier Analysis; Humans
11.
Front Neurosci; 15: 666627, 2021.
Article in English | MEDLINE | ID: mdl-34305516

ABSTRACT

The massive network of descending corticofugal projections has long been recognized by anatomists, but its functional contributions to sound processing and auditory-guided behaviors remain a mystery. Most efforts to characterize the auditory corticofugal system have been inductive, wherein function is inferred from a few studies employing a wide range of methods to manipulate varying limbs of the descending system in a variety of species and preparations. An alternative approach, which we focus on here, is to first establish auditory-guided behaviors that reflect the contribution of top-down influences on auditory perception. To this end, we postulate that auditory corticofugal systems may contribute to active listening behaviors in which the timing of bottom-up sound cues can be predicted from top-down signals arising from cross-modal cues, temporal integration, or self-initiated movements. Here, we describe a behavioral framework for investigating how auditory perceptual performance is enhanced when subjects can anticipate the timing of upcoming target sounds. Our first paradigm, studied in both human subjects and mice, reports species-specific differences in visually cued expectation of sound onset in a signal-in-noise detection task. A second paradigm, performed in mice, reveals the benefits of temporal regularity as a perceptual grouping cue when detecting repeating target tones in complex background noise. A final behavioral approach demonstrates significant improvements in frequency discrimination threshold and perceptual sensitivity when auditory targets are presented at a predictable temporal interval following motor self-initiation of the trial. Collectively, these three behavioral approaches identify paradigms to study top-down influences on sound perception that are amenable to head-fixed preparations in genetically tractable animals, where it is possible to monitor and manipulate particular nodes of the descending auditory pathway with unparalleled precision.

12.
J Assoc Res Otolaryngol; 22(3): 319-347, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33891217

ABSTRACT

Although pitch is closely related to temporal periodicity, stimuli with a degree of temporal irregularity can evoke a pitch sensation in human listeners. However, the neural mechanisms underlying pitch perception for irregular sounds are poorly understood. Here, we recorded responses of single units in the inferior colliculus (IC) of normal-hearing (NH) rabbits to acoustic pulse trains with different amounts of random jitter in the inter-pulse intervals and compared them with responses to electric pulse trains delivered through a cochlear implant (CI) in a different group of rabbits. In both NH and CI animals, many IC neurons demonstrated tuning of firing rate to the average pulse rate (APR) that was robust against temporal jitter, although jitter tended to increase the firing rates for APRs ≥ 1280 Hz. Strength and limiting frequency of spike synchronization to stimulus pulses were also comparable between periodic and irregular pulse trains, although there was a slight increase in synchronization at high APRs with CI stimulation. There were clear differences between CI and NH animals in both the range of APRs over which firing rate tuning was observed and the prevalence of synchronized responses. These results suggest that the pitches of regular and irregular pulse trains are coded differently by IC neurons depending on the APR, the degree of irregularity, and the mode of stimulation. In particular, the temporal pitch produced by periodic pulse trains lacking spectral cues may be based on a rate code rather than a temporal code at higher APRs.
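
Spike synchronization to pulse trains of this kind is conventionally quantified with vector strength; the abstract does not name its exact metric, so the formulation below is an assumption. A worked sketch:

```python
import numpy as np

def vector_strength(spike_times, period):
    """Vector strength: magnitude of the mean resultant of spike phases
    relative to the stimulus period; 1 = perfect phase locking, 0 = none."""
    phases = 2 * np.pi * (np.asarray(spike_times) % period) / period
    return np.abs(np.mean(np.exp(1j * phases)))

# Spikes locked near the pulses of a 160 Hz train (6.25 ms period) give
# high vector strength; uniformly random spikes give a value near 0.
period = 1 / 160
rng = np.random.default_rng(5)
locked = np.arange(100) * period + rng.normal(scale=0.0003, size=100)
random = rng.uniform(0, 100 * period, size=100)
print(vector_strength(locked, period), vector_strength(random, period))
```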


Subject(s)
Cochlear Implantation; Cochlear Implants; Pitch Perception; Animals; Hearing; Mesencephalon; Rabbits
13.
J Neurophysiol; 125(4): 1213-1222, 2021 Apr 1.
Article in English | MEDLINE | ID: mdl-33656936

ABSTRACT

Permanent threshold elevation after noise exposure or aging is caused by loss of sensory cells; however, animal studies show that hair cell loss is often preceded by degeneration of the synapses between sensory cells and auditory nerve fibers. Silencing these neurons is likely to degrade auditory processing and may contribute to difficulties understanding speech in noisy backgrounds. Reduction of suprathreshold auditory brainstem response (ABR) amplitudes can be used to quantify synaptopathy in inbred mice. However, ABR amplitudes are highly variable in humans and thus more challenging to use. Since noise-induced neuropathy preferentially targets fibers with high thresholds and low spontaneous rates, and because phase locking to temporal envelopes is particularly strong in these fibers, measuring envelope-following responses (EFRs) might be a more robust measure of cochlear synaptopathy. A recent auditory model further suggests that modulation of carrier tones with rectangular envelopes should be less sensitive to cochlear amplifier dysfunction and, therefore, a better metric of cochlear neural damage than sinusoidal amplitude modulation. In this study, we measure performance scores on a variety of difficult word-recognition tasks among listeners with normal audiograms and assess correlations with EFR magnitudes to rectangular versus sinusoidal modulation. Higher harmonics of EFR magnitudes evoked by a rectangular-envelope stimulus were significantly correlated with word scores, whereas those evoked by sinusoidally modulated tones were not. These results support previous reports that individual differences in synaptopathy may be a source of speech recognition variability despite the presence of normal thresholds at standard audiometric frequencies.

NEW & NOTEWORTHY Recent studies suggest that millions of people may be at risk of permanent impairment from cochlear synaptopathy, the age-related and noise-induced degeneration of neural connections in the inner ear. This study examines electrophysiological responses to stimuli designed to improve detection of neural damage in subjects with normal hearing sensitivity. The resultant correlations with word recognition performance are consistent with a contribution of cochlear neural damage to deficits in hearing-in-noise abilities.
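
To make the stimulus contrast concrete, the sketch below generates a carrier tone under sinusoidal versus rectangular (50% duty cycle) amplitude modulation; the carrier frequency, modulation rate, and depth are hypothetical values, not the study's parameters. The rectangular envelope places energy at odd harmonics of the modulation rate, which is why higher harmonics of the EFR are informative for that stimulus.

```python
import numpy as np
from scipy.signal import square

fs = 44100
t = np.arange(0, 1.0, 1 / fs)
fc, fm = 4000.0, 110.0          # hypothetical carrier and modulation rates
carrier = np.sin(2 * np.pi * fc * t)

# Sinusoidal amplitude modulation (100% depth).
sam = (1 + np.sin(2 * np.pi * fm * t)) / 2 * carrier
# Rectangular-envelope modulation: sharp envelope edges spread energy
# across higher harmonics of fm.
env = (1 + square(2 * np.pi * fm * t)) / 2
ram = env * carrier

# Envelope spectrum: the rectangular envelope has no component at 2*fm
# but substantial energy at 3*fm (odd harmonics of a 50% duty square wave).
mag = np.abs(np.fft.rfft(env)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for h in (1, 2, 3):
    k = np.argmin(np.abs(freqs - h * fm))
    print(f"{h * fm:.0f} Hz: {mag[k]:.3f}")
```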


Subject(s)
Aging/physiology; Audiometry; Auditory Threshold/physiology; Cochlea/physiology; Cochlear Nerve/physiology; Speech Perception/physiology; Acoustic Stimulation; Adolescent; Adult; Age Factors; Female; Humans; Male; Middle Aged; Noise; Recognition, Psychology/physiology; Young Adult
14.
Curr Biol; 31(8): 1762-1770.e4, 2021 Apr 26.
Article in English | MEDLINE | ID: mdl-33609455

ABSTRACT

In sensory systems, representational features of increasing complexity emerge at successive stages of processing. In the mammalian auditory pathway, the clearest change from brainstem to cortex is defined by what is lost, not by what is gained, in that high-fidelity temporal coding becomes increasingly restricted to slower acoustic modulation rates.1,2 Here, we explore the idea that sluggish temporal processing is more than just an inability to process quickly and instead reflects an emergent specialization for encoding sound features that unfold on very slow timescales.3,4 We performed simultaneous single-unit ensemble recordings from three hierarchical stages of auditory processing in awake mice: the inferior colliculus (IC), the medial geniculate body of the thalamus (MGB), and the primary auditory cortex (A1). As expected, temporal coding of brief local intervals (0.001-0.1 s) separating consecutive noise bursts was robust in the IC and declined across the MGB and A1. By contrast, slowly developing (∼1 s period) global rhythmic patterns of inter-burst interval sequences strongly modulated A1 spiking, were weakly captured by MGB neurons, and not at all by IC neurons. Shifts in stimulus regularity were not represented by changes in A1 spike rates but rather in how the spikes were arranged in time. These findings show that low-level auditory neurons with fast timescales encode isolated sound features but not the longer gestalt, while the extended timescales in higher-level areas can facilitate sensitivity to slower contextual changes in the sensory environment.


Subject(s)
Inferior Colliculi; Acoustic Stimulation; Animals; Auditory Cortex; Auditory Pathways; Auditory Perception; Geniculate Bodies; Mice
15.
Curr Biol; 31(2): 310-321.e5, 2021 Jan 25.
Article in English | MEDLINE | ID: mdl-33157020

ABSTRACT

Corticothalamic (CT) neurons comprise the largest component of the descending sensory corticofugal pathway, but their contributions to brain function and behavior remain an unsolved mystery. To address the hypothesis that layer 6 (L6) CTs may be activated by extra-sensory inputs prior to anticipated sounds, we performed optogenetically targeted single-unit recordings and two-photon imaging of Ntsr1-Cre+ L6 CT neurons in the primary auditory cortex (A1) while mice were engaged in an active listening task. We found that L6 CTs and other L6 units began spiking hundreds of milliseconds prior to orofacial movements linked to sound presentation and reward, but not to other movements such as locomotion, which were not linked to an explicit behavioral task. Rabies tracing of monosynaptic inputs to A1 L6 CT neurons revealed a narrow strip of cholinergic and non-cholinergic projection neurons in the external globus pallidus, suggesting a potential source of motor-related input. These findings identify new pathways and local circuits for motor modulation of sound processing and suggest a new role for CT neurons in active sensing.


Subject(s)
Auditory Cortex/physiology; Auditory Perception/physiology; Movement/physiology; Thalamus/physiology; Acoustic Stimulation; Animals; Auditory Cortex/cytology; Globus Pallidus/physiology; Intravital Microscopy; Male; Mice; Neural Pathways/physiology; Neurons/physiology; Optical Imaging; Reward; Stereotaxic Techniques; Thalamus/cytology
16.
Otol Neurotol; 41(9): e1167-e1173, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32925865

ABSTRACT

OBJECTIVE: Patients with chronic, subjective tinnitus are often administered a battery of audiometric tests to characterize their tinnitus percept. Even a comprehensive battery, if applied just once, cannot capture fluctuations in tinnitus strength or quality over time. Moreover, subjects experience a learning curve when reporting the detailed characteristics of their tinnitus percept, such that a single assessment will reflect a lack of familiarity with test requirements. We addressed these challenges by programming an automated software platform for at-home tinnitus characterization over a 2-week period. STUDY DESIGN: Prospective case series. SETTING: Tertiary referral center, patients' homes. INTERVENTIONS: Following an initial clinic visit, 25 subjects with chronic subjective tinnitus returned home with a tablet computer and calibrated headphones to complete questionnaires, hearing tests, and tinnitus psychoacoustic testing. We repeatedly characterized loudness discomfort levels and tinnitus matching over a 2-week period. MAIN OUTCOME MEASURES: Primary outcomes included intrasubject variability in loudness discomfort levels, tinnitus intensity, and tinnitus acoustic matching over the course of testing. RESULTS: Within-subject variability for all outcome measures could be reduced by approximately 25 to 50% by excluding initial measurements and by focusing only on tinnitus matching attempts where subjects report high confidence in the accuracy of their ratings. CONCLUSIONS: Tinnitus self-report is inherently variable but can converge on reliable values with extended testing. Repeated, self-directed tinnitus assessments may have implications for identifying malingerers. Further, these findings suggest that extending the baseline phase of tinnitus characterizations will increase the statistical power for future studies focused on tinnitus interventions.


Subject(s)
Tinnitus; Audiometry; Humans; Loudness Perception; Prospective Studies; Psychoacoustics; Tinnitus/diagnosis
17.
J Neurophysiol; 124(2): 418-431, 2020 Aug 1.
Article in English | MEDLINE | ID: mdl-32639924

ABSTRACT

Hearing loss caused by noise exposure, ototoxic drugs, or aging results from the loss of sensory cells, as reflected in audiometric threshold elevation. Animal studies show that loss of hair cells can be preceded by loss of auditory-nerve peripheral synapses, which likely degrades auditory processing. While this condition, known as cochlear synaptopathy, can be diagnosed in mice by a reduction of suprathreshold cochlear neural responses, its diagnosis in humans remains challenging. To look for evidence of cochlear nerve damage in normal-hearing subjects, we measured their word recognition performance in difficult listening environments and compared it to cochlear function as assessed by otoacoustic emissions and click-evoked electrocochleography. Several electrocochleographic markers were correlated with word scores, whereas distortion product otoacoustic emissions were not. Specifically, the summating potential (SP) was larger and the cochlear nerve action potential (AP) was smaller in those with the worst word scores. Adding a forward masker or increasing stimulus rate reduced the SP in the worst performers, suggesting that this potential includes postsynaptic components as well as hair cell receptor potentials. Results suggest that some of the variance in word scores among listeners with normal audiometric thresholds arises from cochlear neural damage.

NEW & NOTEWORTHY Recent animal studies suggest that millions of people may be at risk of permanent impairment from cochlear synaptopathy, the age-related and noise-induced degeneration of neural connections in the inner ear that "hides" behind a normal audiogram. This study examines electrophysiological responses to clicks in a large cohort of subjects with normal hearing sensitivity. The resultant correlations with word recognition performance are consistent with an important contribution of cochlear neural damage to deficits in hearing-in-noise abilities.


Subject(s)
Action Potentials/physiology; Cochlear Nerve/physiology; Hair Cells, Auditory/physiology; Perceptual Masking/physiology; Speech Perception/physiology; Adolescent; Adult; Audiometry, Evoked Response; Cochlear Nerve/physiopathology; Hearing Loss/physiopathology; Humans; Middle Aged; Noise; Recognition, Psychology/physiology; Young Adult
18.
J Neurophysiol; 123(5): 1791-1807, 2020 May 1.
Article in English | MEDLINE | ID: mdl-32186439

ABSTRACT

The horizontal direction of a sound source (i.e., azimuth) is perceptually determined in a frequency-dependent manner: low- and high-frequency sounds are localized via differences in the arrival time and intensity, respectively, of the sound at the two ears, called interaural time and level differences (ITDs and ILDs). In the central auditory system, these binaural cues to direction are thought to be separately encoded by neurons tuned to low and high characteristic frequencies (CFs). However, at high sound levels a neuron often responds to frequencies far from its CF, raising the possibility that individual neurons may encode the azimuths of both low- and high-frequency sounds using both binaural cues. We tested this possibility by measuring auditory-driven single-unit responses in the central nucleus of the inferior colliculus (ICC) of unanesthetized female Dutch Belted rabbits with a multitetrode drive. At 70 dB SPL, ICC neurons across the cochleotopic map transmitted information in their firing rates about the direction of both low- and high-frequency noise stimuli. We independently manipulated ITD and ILD cues in virtual acoustic space and found that sensitivity to ITD and ILD, respectively, shaped the directional sensitivity of ICC neurons to low-pass (<1.5 kHz) and high-pass (>3 kHz) stimuli, regardless of the neuron's CF. We also found evidence that high-CF neurons transmit information about both the fine-structure and envelope ITD of low-frequency sound. Our results indicate that at conversational sound levels the majority of the cochleotopic map is engaged in transmitting directional information, even for sources with narrowband spectra.

NEW & NOTEWORTHY A "division of labor" has previously been assumed in which the directions of low- and high-frequency sound sources are thought to be encoded by neurons preferentially sensitive to low and high frequencies, respectively. Contrary to this, we found that auditory midbrain neurons encode the directions of both low- and high-frequency sounds regardless of their preferred frequencies. Neural responses were shaped by different sound localization cues depending on the stimulus spectrum, even within the same neuron.
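
As a schematic of how ITD and ILD cues can be manipulated independently, the sketch below imposes a whole-waveform delay and a broadband gain on a noise token. The study used virtual acoustic space, so this is a simplification of that approach; the function name and parameter values are hypothetical.

```python
import numpy as np

def apply_itd_ild(mono, fs, itd_s, ild_db):
    """Impose an interaural time difference (whole-waveform delay) and an
    interaural level difference (broadband gain) on a mono signal.
    Positive itd_s / ild_db make the left ear leading / louder."""
    shift = int(round(abs(itd_s) * fs))
    left, right = mono.copy(), mono.copy()
    if itd_s > 0:
        right = np.concatenate([np.zeros(shift), right[:-shift or None]])
    elif itd_s < 0:
        left = np.concatenate([np.zeros(shift), left[:-shift or None]])
    # Split the level difference symmetrically across the two ears.
    gain = 10 ** (ild_db / 20)
    return left * np.sqrt(gain), right / np.sqrt(gain)

fs = 48000
noise = np.random.default_rng(6).normal(size=fs // 10)     # 100 ms noise burst
l, r = apply_itd_ild(noise, fs, itd_s=300e-6, ild_db=6.0)  # 300 µs ITD, 6 dB ILD
```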


Subject(s)
Electrophysiological Phenomena/physiology; Inferior Colliculi/physiology; Neurons/physiology; Sound Localization/physiology; Acoustic Stimulation; Animals; Cues; Female; Rabbits; Time Factors
19.
Elife; 9, 2020 Jan 21.
Article in English | MEDLINE | ID: mdl-31961322

ABSTRACT

In social settings, speech waveforms from nearby speakers mix together in our ear canals. Normally, the brain unmixes the attended speech stream from the chorus of background speakers using a combination of fast temporal processing and cognitive active listening mechanisms. Of >100,000 patient records, ~10% of adults visited our clinic because of reduced hearing, only to learn that their hearing was clinically normal and should not cause communication difficulties. We found that multi-talker speech intelligibility thresholds varied widely in normal-hearing adults but could be predicted from neural phase locking to frequency modulation (FM) cues measured with ear-canal EEG recordings. Combining neural temporal fine structure processing, pupil-indexed listening effort, and behavioral FM thresholds accounted for 78% of the variability in multi-talker speech intelligibility. The disordered bottom-up and top-down markers of poor multi-talker speech perception identified here could inform the design of next-generation clinical tests for hidden hearing disorders.


Subject(s)
Hearing; Speech Intelligibility; Speech Perception; Adult; Female; Hearing Loss/physiopathology; Humans; Male
20.
Ear Hear; 41(1): 25-38, 2020.
Article in English | MEDLINE | ID: mdl-31584501

ABSTRACT

OBJECTIVES: Permanent threshold elevation after noise exposure, ototoxic drugs, or aging is caused by loss of sensory cells; however, animal studies show that hair cell loss is often preceded by degeneration of synapses between sensory cells and auditory nerve fibers. The silencing of these neurons, especially those with high thresholds and low spontaneous rates, degrades auditory processing and may contribute to difficulties in understanding speech in noise. Although cochlear synaptopathy can be diagnosed in animals by measuring suprathreshold auditory brainstem responses, its diagnosis in humans remains a challenge. In mice, cochlear synaptopathy is also correlated with measures of middle ear muscle (MEM) reflex strength, possibly because the missing high-threshold neurons are important drivers of this reflex. The authors hypothesized that measures of the MEM reflex might be better than other assays of peripheral function in predicting hearing difficulties in difficult listening environments in human subjects. DESIGN: The authors recruited 165 normal-hearing healthy subjects, between 18 and 63 years of age, with no history of ear or hearing problems, no history of neurologic disorders, and unremarkable otoscopic examinations. Word recognition in quiet and in difficult listening situations was measured in four ways: using isolated words from the Northwestern University auditory test number six corpus with either (a) a 0 dB signal-to-noise ratio, (b) 45% time compression with reverberation, or (c) 65% time compression with reverberation, and (d) with a modified version of the QuickSIN. Audiometric thresholds were assessed at standard and extended high frequencies. Outer hair cell function was assessed by distortion product otoacoustic emissions (DPOAEs). Middle ear function and reflexes were assessed using three methods: the acoustic reflex threshold as measured clinically, wideband tympanometry as measured clinically, and a custom wideband method that uses a pair of click probes flanking an ipsilateral noise elicitor. Other aspects of peripheral auditory function were assessed by measuring click-evoked gross potentials, that is, the summating potential (SP) and action potential (AP), from ear canal electrodes. RESULTS: After adjusting for age and sex, word recognition scores were uncorrelated with audiometric or DPOAE thresholds, at either standard or extended high frequencies. MEM reflex thresholds were significantly correlated with scores on isolated word recognition, but not with the modified version of the QuickSIN. The highest pairwise correlations were seen using the custom assay. AP measures were correlated with some of the word scores, but not as highly as seen for the custom MEM assay, and only if amplitude was measured from SP peak to AP peak, rather than from baseline to AP peak. The highest pairwise correlations with word scores, on all four tests, were seen with the SP/AP ratio, followed closely by SP itself. When all predictor variables were combined in a stepwise multivariate regression, SP/AP dominated models for all four word score outcomes. MEM measures only enhanced the adjusted r values for the 45% time compression test. The only other predictors that enhanced model performance (and only for two outcome measures) were measures of interaural threshold asymmetry.
CONCLUSIONS: Results suggest that, among normal-hearing subjects, there is a significant peripheral contribution to diminished hearing performance in difficult listening environments that is not captured by either threshold audiometry or DPOAEs. The significant univariate correlations between word scores and either SP/AP, SP, MEM reflex thresholds, or AP amplitudes (in that order) are consistent with a type of primary neural degeneration. However, interpretation is clouded by uncertainty as to the mix of pre- and postsynaptic contributions to the click-evoked SP. None of the assays presented here has the sensitivity to diagnose neural degeneration on a case-by-case basis; however, these tests may be useful in longitudinal studies to track accumulation of neural degeneration in individual subjects.
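
For concreteness, here is a sketch of how SP and AP amplitudes and the SP/AP ratio might be read off an averaged click-evoked waveform once the SP and AP latencies have been marked. The synthetic waveform, latencies, and sampling rate are all hypothetical; the study's actual peak-picking procedure is not reproduced here. Both amplitude conventions mentioned in the abstract (baseline-to-peak and SP-peak-to-AP-peak) are shown.

```python
import numpy as np

fs = 20000
t = np.arange(0, 0.01, 1 / fs)                  # 10 ms epoch
# Synthetic electrocochleographic response: a small SP shoulder (~1.0 ms)
# followed by a larger AP peak (~1.5 ms), both negative-going here.
wave = -0.2 * np.exp(-((t - 0.0010) / 0.0004) ** 2) \
       - 1.0 * np.exp(-((t - 0.0015) / 0.0003) ** 2)

baseline = wave[t < 0.0005].mean()              # pre-response baseline
sp = wave[np.argmin(np.abs(t - 0.0010))]        # value at marked SP latency
ap = wave[np.argmin(np.abs(t - 0.0015))]        # value at marked AP latency

ap_from_baseline = abs(ap - baseline)           # baseline-to-AP-peak convention
ap_from_sp = abs(ap - sp)                       # SP-peak-to-AP-peak convention
print("SP/AP ratio:", abs(sp - baseline) / ap_from_baseline,
      "AP (from SP):", ap_from_sp)
```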


Subject(s)
Evoked Potentials, Auditory, Brain Stem; Hearing; Acoustic Impedance Tests; Animals; Auditory Threshold; Ear, Middle; Mice; Muscles; Otoacoustic Emissions, Spontaneous; Reflex, Acoustic