Results 1 - 6 of 6
1.
Int Tinnitus J ; 27(2): 253-258, 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38507642

ABSTRACT

This case study explores the connection between COVID-19 vaccination and tinnitus and hyperacusis, considering their onset and exacerbation after vaccination. The subject is a 47-year-old woman with a history of bilateral tinnitus whose hearing history was tracked from 2014 to 2023. In 2021, following the second dose of a COVID-19 vaccine, she experienced an intense episode of tinnitus distinct from her previous experiences. Symptoms included sudden-onset hyperacusis, a pronounced "roar"-type tinnitus, and a sudden decline in hearing. Audiometric results showed poorer low-frequency thresholds and lower speech scores in the left ear. This escalation significantly affected speech understanding in group settings and noisy environments. Tinnitus and hyperacusis severity gradually improved, but the subject continues to have greater difficulty with speech understanding. Her journey involved visits to specialists, multiple tests including neuroimaging, naturopath consultations, and anxiety medication. The case emphasizes the importance of healthcare practitioners recognizing and documenting these issues and the need for timely multidisciplinary intervention and support. Further research is necessary to better understand the relationship between COVID-19, vaccination, and auditory symptoms.


Subject(s)
COVID-19, Tinnitus, Female, Humans, Middle Aged, Tinnitus/etiology, Tinnitus/diagnosis, Hyperacusis/diagnosis, Hyperacusis/etiology, COVID-19 Vaccines/adverse effects, COVID-19/prevention & control, Hearing
2.
Front Pharmacol ; 15: 1374320, 2024.
Article in English | MEDLINE | ID: mdl-38841369

ABSTRACT

Cases of tinnitus have been reported following administration of COVID-19 vaccines. The aim of this study was to characterize COVID-19 vaccination-related tinnitus, to assess whether there is a causal relationship, and to examine potential risk factors for COVID-19 vaccination-related tinnitus. We analyzed a survey of 398 cases of COVID-19 vaccination-related tinnitus, and 699,839 COVID-19 vaccine-related reports in the Vaccine Adverse Event Reporting System (VAERS) database, retrieved on 4 December 2021. We found that following COVID-19 vaccination, 1) tinnitus report frequencies for the Pfizer, Moderna and Janssen vaccines in VAERS are 47, 51 and 70 cases per million full vaccinations; 2) symptom onset was often rapid; 3) more women than men reported tinnitus, and the sex difference increased with age; 4) for 2-dose vaccines, the frequency of tinnitus was higher following the first dose than the second dose; 5) for 2-dose vaccines, the chance of worsening tinnitus symptoms after the second dose was approximately 50%; 6) tinnitus was correlated with other neurological and psychiatric symptoms; 7) pre-existing metabolic syndromes were correlated with the severity of the reported tinnitus. These findings suggest that COVID-19 vaccination increases the risk of tinnitus, and that metabolic disorders are a risk factor for COVID-19 vaccination-related tinnitus.
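The per-million report frequencies cited above follow from simple rate arithmetic. As a minimal sketch, the counts and denominators below are illustrative placeholders, not the study's actual VAERS tallies:

```python
def reports_per_million(tinnitus_reports, full_vaccinations):
    """Tinnitus reports per one million completed vaccinations."""
    return tinnitus_reports / full_vaccinations * 1_000_000

# Illustrative: 4,700 reports among 100 million fully vaccinated
# yields a frequency of 47 per million, matching the scale reported above.
print(reports_per_million(4_700, 100_000_000))  # -> 47.0
```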

3.
Trends Hear ; 25: 2331216520980968, 2021.
Article in English | MEDLINE | ID: mdl-33749410

ABSTRACT

Hearing aids classify acoustic environments into multiple, generic classes for the purpose of guiding signal processing. Information about environmental classification is made available to the clinician for fitting, counseling, and troubleshooting purposes. The goal of this study was to better inform scientists and clinicians about the nature of that information by comparing the classification schemes of five premium hearing instruments in a wide range of acoustic scenes, including scenes that vary in signal-to-noise ratio and overall level (dB SPL). Twenty-eight acoustic scenes representing various prototypical environments were presented to five premium devices mounted on an acoustic manikin. Classification measures were recorded from the brand-specific fitting software and then recategorized with generic labels to conceal each device's manufacturer: (a) Speech in Quiet, (b) Speech in Noise, (c) Noise, and (d) Music. Twelve normal-hearing listeners also classified each scene. The results revealed a variety of similarities and differences among the five devices and the human subjects. Whereas some devices were highly dependent on overall input level, others were influenced markedly by signal-to-noise ratio. Differences between human and hearing aid classification were evident for several speech and music scenes. Environmental classification is the heart of the signal processing strategy for any given device, providing key input to subsequent decision-making. Comprehensive assessment of environmental classification is essential when considering the cost of signal processing errors, the potential impact on typical wearers, and the information that is available for use by clinicians. The magnitude of the differences among devices is remarkable and worth noting.


Subject(s)
Hearing Aids, Sensorineural Hearing Loss, Speech Perception, Acoustic Stimulation, Hearing, Humans, Noise
4.
Brain Res ; 1714: 182-192, 2019 07 01.
Article in English | MEDLINE | ID: mdl-30796895

ABSTRACT

When two voices compete, listeners can segregate and identify concurrent speech sounds using pitch (fundamental frequency, F0) and timbre (harmonic) cues. Speech perception is also hindered by the signal-to-noise ratio (SNR). How clean and degraded concurrent speech sounds are represented at early, pre-attentive stages of the auditory system is not well understood. To this end, we measured scalp-recorded frequency-following responses (FFRs) with EEG while human listeners heard two concurrently presented, steady-state (time-invariant) vowels whose F0s differed by zero or four semitones (ST), presented diotically in either clean (no noise) or noise-degraded (+5 dB SNR) conditions. Listeners also performed a speeded double-vowel identification task in which they were required to identify both vowels correctly. Behavioral results showed that speech identification accuracy increased with the F0 difference between vowels, and this perceptual F0 benefit was larger for clean than for noise-degraded (+5 dB SNR) stimuli. Neurophysiological data demonstrated more robust FFR F0 amplitudes for single than for double vowels and considerably weaker responses in noise. F0 amplitudes showed speech-on-speech masking effects, along with non-linear constructive interference at 0 ST and suppression effects at 4 ST. Correlations showed that FFR F0 amplitudes failed to predict listeners' identification accuracy. In contrast, FFR F1 amplitudes were associated with faster reaction times, although this correlation was limited to noise conditions. The limited number of brain-behavior associations suggests subcortical activity mainly reflects exogenous processing rather than perceptual correlates of concurrent speech perception. Collectively, our results demonstrate that FFRs reflect pre-attentive coding of concurrent auditory stimuli that only weakly predicts the success of identifying concurrent speech.


Subject(s)
Brain Stem/physiology, Speech Perception/physiology, Speech/physiology, Acoustic Stimulation/methods, Adult, Auditory Perception/physiology, Cues, Electroencephalography/methods, Female, Hearing/physiology, Humans, Male, Noise, Phonetics, Pitch Perception/physiology, Reaction Time/physiology, Young Adult
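The zero- and four-semitone F0 separations used in these double-vowel paradigms map onto frequency ratios via the equal-tempered scale, where each semitone multiplies frequency by 2^(1/12). A minimal sketch; the 100 Hz base F0 is an illustrative value, not necessarily the studies' stimulus F0:

```python
def shift_semitones(f0_hz, semitones):
    """Frequency `semitones` above `f0_hz` on the equal-tempered scale."""
    return f0_hz * 2 ** (semitones / 12)

# A 4-ST separation corresponds to a ratio of 2**(4/12) ~= 1.26,
# so a 100 Hz vowel paired with a 4-ST-higher vowel at ~126 Hz.
print(round(shift_semitones(100.0, 4), 2))  # -> 125.99
```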
5.
Hear Res ; 361: 92-102, 2018 04.
Article in English | MEDLINE | ID: mdl-29398142

ABSTRACT

Parsing simultaneous speech requires that listeners use pitch-guided segregation, which can be affected by the signal-to-noise ratio (SNR) of the auditory scene. The interaction of these two cues may occur at multiple levels within the cortex. The aims of the current study were to assess the correspondence between oscillatory brain rhythms and behavior, and to determine how listeners exploit pitch and SNR cues to successfully segregate concurrent speech. We recorded electrical brain activity while participants heard double-vowel stimuli whose fundamental frequencies (F0s) differed by zero or four semitones (STs) and were presented in either clean or noise-degraded (+5 dB SNR) conditions. We found that behavioral identification was more accurate for vowel mixtures with larger pitch separations, but the F0 benefit interacted with noise. Time-frequency analysis decomposed the EEG into different spectrotemporal frequency bands. Low-frequency (θ, β) responses were elevated when speech did not contain pitch cues (0 ST > 4 ST) or was noisy, suggesting a correlate of increased listening effort and/or memory demands. In contrast, γ power increments were observed for changes in both pitch (0 ST > 4 ST) and SNR (clean > noise), suggesting that high-frequency bands carry information related to acoustic features and the quality of speech representations. Brain-behavior associations corroborated these effects; modulations in low-frequency rhythms predicted the speed of listeners' perceptual decisions, with higher bands predicting identification accuracy. Results are consistent with the notion that neural oscillations reflect both automatic (pre-perceptual) and controlled (post-perceptual) mechanisms of speech processing that are largely divisible into the high- and low-frequency bands of human brain rhythms.


Subject(s)
Brain Waves, Cerebral Cortex/physiology, Noise/adverse effects, Perceptual Masking, Pitch Perception, Speech Acoustics, Speech Perception, Acoustic Stimulation, Adult, Auditory Threshold, Brain Mapping/methods, Cues, Electroencephalography, Female, Humans, Male, Young Adult
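Separating EEG power into the frequency bands named in this abstract (θ, β, γ) can be sketched with a simple spectral split. The band edges below are conventional approximations, and the study's actual analysis (a wavelet-based time-frequency decomposition) differs from this plain FFT sketch:

```python
import numpy as np

# Conventional band edges in Hz; the study's exact definitions may differ.
BANDS = {"theta": (4, 8), "beta": (13, 30), "gamma": (30, 80)}

def band_power(signal, fs):
    """Total spectral power per band of a real-valued signal sampled at fs Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return {name: power[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

# Sanity check: a pure 6 Hz sinusoid lands almost entirely in the theta band.
fs = 256
t = np.arange(fs * 2) / fs  # 2 s of samples
powers = band_power(np.sin(2 * np.pi * 6 * t), fs)
```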
6.
Hear Res ; 351: 34-44, 2017 08.
Article in English | MEDLINE | ID: mdl-28578876

ABSTRACT

Behavioral studies reveal that listeners exploit intrinsic differences in voice fundamental frequency (F0) to segregate concurrent speech sounds, the so-called "F0-benefit." A more favorable signal-to-noise ratio (SNR) in the environment, an extrinsic acoustic factor, similarly benefits the parsing of simultaneous speech. Here, we examined the neurobiological substrates of these two cues in the perceptual segregation of concurrent speech mixtures. We recorded event-related brain potentials (ERPs) while listeners performed a speeded double-vowel identification task. Listeners heard two concurrent vowels whose F0 differed by zero or four semitones, presented in either clean (no noise) or noise-degraded (+5 dB SNR) conditions. Behaviorally, listeners were more accurate in correctly identifying both vowels for larger F0 separations, but the F0-benefit was more pronounced at more favorable SNRs (i.e., a pitch × SNR interaction). Analysis of the ERPs revealed that only the P2 wave (∼200 ms) showed a similar F0 × SNR interaction as behavior and was correlated with listeners' perceptual F0-benefit. Neural classifiers applied to the ERPs further suggested that speech sounds are segregated neurally within 200 ms based on SNR, whereas segregation based on pitch occurs later in time (400-700 ms). The earlier timing of extrinsic SNR-based compared to intrinsic F0-based segregation implies that the cortical extraction of speech from noise is more efficient than differentiating speech based on pitch cues alone, which may recruit additional cortical processes. Findings indicate that noise and pitch differences interact relatively early in the cerebral cortex and that the brain arrives at the identities of concurrent speech mixtures as early as ∼200 ms.


Subject(s)
Auditory Cortex/physiology, Cues, Noise/adverse effects, Perceptual Masking, Pitch Perception, Speech Acoustics, Speech Perception, Voice Quality, Acoustic Stimulation, Adult, Speech Audiometry, Electroencephalography, Auditory Evoked Potentials, Female, Humans, Male, Reaction Time, Psychological Signal Detection, Time Factors, Young Adult