Results 1 - 4 of 4
1.
Int Tinnitus J ; 27(2): 253-258, 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38507642

ABSTRACT

This case study explores the connection between COVID-19 vaccination and tinnitus and hyperacusis, considering their onset and exacerbation post-vaccination. The subject is a 47-year-old woman with a history of bilateral tinnitus whose hearing history was tracked from 2014 to 2023. An intense episode of tinnitus, distinct from previous experiences, occurred in 2021 after the second dose of COVID-19 vaccination. Symptoms manifested as sudden-onset hyperacusis, pronounced "roar"-type tinnitus, and a sudden decline in hearing. Audiometric results showed reduced thresholds at low frequencies and lower speech scores in the left ear. This escalation significantly affected speech understanding in group settings and noisy environments. Tinnitus and hyperacusis severity gradually improved, but the subject continues to have greater difficulty with speech understanding. The subject's journey involved visits to specialists, multiple tests including neuroimaging, naturopath consultations, and anxiety medication. The case emphasizes the importance of healthcare practitioners recognizing and documenting these issues and the need for timely multidisciplinary intervention and support. Further research is necessary to better understand the relationship between COVID-19, vaccination, and auditory symptoms.


Subject(s)
COVID-19 , Tinnitus , Female , Humans , Middle Aged , Tinnitus/etiology , Tinnitus/diagnosis , Hyperacusis/diagnosis , Hyperacusis/etiology , COVID-19 Vaccines/adverse effects , COVID-19/prevention & control , Hearing
2.
Brain Res ; 1714: 182-192, 2019 07 01.
Article in English | MEDLINE | ID: mdl-30796895

ABSTRACT

When two voices compete, listeners can segregate and identify concurrent speech sounds using pitch (fundamental frequency, F0) and timbre (harmonic) cues. Speech perception is also hindered by the signal-to-noise ratio (SNR). How clear and degraded concurrent speech sounds are represented at early, pre-attentive stages of the auditory system is not well understood. To this end, we measured scalp-recorded frequency-following responses (FFRs) via EEG while human listeners heard two concurrently presented, steady-state (time-invariant) vowels whose F0 differed by zero or four semitones (ST), presented diotically in either clean (no noise) or noise-degraded (+5 dB SNR) conditions. Listeners also performed a speeded double-vowel identification task in which they were required to identify both vowels correctly. Behavioral results showed that speech identification accuracy increased with F0 differences between vowels, and this perceptual F0 benefit was larger for clean compared to noise-degraded (+5 dB SNR) stimuli. Neurophysiological data demonstrated more robust FFR F0 amplitudes for single compared to double vowels and considerably weaker responses in noise. F0 amplitudes showed speech-on-speech masking effects, along with non-linear constructive interference at 0 ST and suppression effects at 4 ST. Correlations showed that FFR F0 amplitudes failed to predict listeners' identification accuracy. In contrast, FFR F1 amplitudes were associated with faster reaction times, although this correlation was limited to the noise conditions. The limited number of brain-behavior associations suggests subcortical activity mainly reflects exogenous processing rather than perceptual correlates of concurrent speech perception. Collectively, our results demonstrate that FFRs reflect pre-attentive coding of concurrent auditory stimuli that only weakly predicts the success of identifying concurrent speech.
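As an illustration of the kind of measure described above, the sketch below estimates an FFR "F0 amplitude" as the spectral peak near each vowel's fundamental in an averaged response. It is a minimal sketch only: the sampling rate, epoch duration, and 100 Hz base F0 (with its four-semitone counterpart) are assumptions for the example, not parameters reported in the study, and the data are synthetic.

import numpy as np

def ffr_f0_amplitude(ffr, fs, f0, bw=5.0):
    # Spectral amplitude of an averaged FFR at (or near) a target F0.
    # ffr: 1-D time-domain average; fs: sampling rate (Hz); f0: target F0 (Hz);
    # bw: half-width (Hz) of the search window around f0.
    n = len(ffr)
    spectrum = np.abs(np.fft.rfft(ffr * np.hanning(n))) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    window = (freqs >= f0 - bw) & (freqs <= f0 + bw)
    return spectrum[window].max()

# Synthetic stand-in for an averaged FFR: two F0s (0 vs. 4 semitones apart) plus noise.
fs = 10_000                                   # Hz (assumed)
t = np.arange(0, 0.2, 1 / fs)                 # 200-ms steady-state segment (assumed)
f0_low = 100.0                                # Hz (assumed base F0)
f0_high = f0_low * 2 ** (4 / 12)              # four semitones higher (~126 Hz)
ffr = (np.sin(2 * np.pi * f0_low * t)
       + 0.8 * np.sin(2 * np.pi * f0_high * t)
       + 0.5 * np.random.randn(t.size))

for f0 in (f0_low, f0_high):
    print(f"F0 = {f0:6.1f} Hz -> amplitude {ffr_f0_amplitude(ffr, fs, f0):.3f}")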


Subject(s)
Brain Stem/physiology , Speech Perception/physiology , Speech/physiology , Acoustic Stimulation/methods , Adult , Auditory Perception/physiology , Cues , Electroencephalography/methods , Female , Hearing/physiology , Humans , Male , Noise , Phonetics , Pitch Perception/physiology , Reaction Time/physiology , Young Adult
3.
Hear Res ; 361: 92-102, 2018 04.
Article in English | MEDLINE | ID: mdl-29398142

ABSTRACT

Parsing simultaneous speech requires that listeners use pitch-guided segregation, which can be affected by the signal-to-noise ratio (SNR) of the auditory scene. The interaction of these two cues may occur at multiple levels within the cortex. The aims of the current study were to assess the correspondence between oscillatory brain rhythms and behavior, and to determine how listeners exploit pitch and SNR cues to successfully segregate concurrent speech. We recorded electrical brain activity while participants heard double-vowel stimuli whose fundamental frequencies (F0s) differed by zero or four semitones (STs), presented in either clean or noise-degraded (+5 dB SNR) conditions. We found that behavioral identification was more accurate for vowel mixtures with larger pitch separations, but the F0 benefit interacted with noise. Time-frequency analysis decomposed the EEG into different spectrotemporal frequency bands. Low-frequency (θ, β) responses were elevated when speech did not contain pitch cues (0 ST > 4 ST) or was noisy, suggesting a correlate of increased listening effort and/or memory demands. In contrast, γ-power increments were observed for changes in both pitch (0 ST > 4 ST) and SNR (clean > noise), suggesting that high-frequency bands carry information related to acoustic features and the quality of speech representations. Brain-behavior associations corroborated these effects; modulations in low-frequency rhythms predicted the speed of listeners' perceptual decisions, while higher bands predicted identification accuracy. Results are consistent with the notion that neural oscillations reflect both automatic (pre-perceptual) and controlled (post-perceptual) mechanisms of speech processing that are largely divisible into the high- and low-frequency bands of human brain rhythms.
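The abstract above describes decomposing the EEG into θ, β, and γ bands. The sketch below shows one simple, generic way to get per-band power, using band-pass filtering plus a Hilbert envelope rather than the time-frequency (wavelet-style) decomposition the study itself used; the band limits, sampling rate, and random data are assumptions for illustration, not the authors' pipeline.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

BANDS = {"theta": (4, 8), "beta": (13, 30), "gamma": (30, 80)}   # Hz, conventional limits (assumed)

def band_power(eeg, fs, lo, hi, order=4):
    # Mean envelope power of `eeg` in the [lo, hi] Hz band (band-pass + Hilbert).
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    envelope = np.abs(hilbert(filtfilt(b, a, eeg)))
    return float(np.mean(envelope ** 2))

# Placeholder single-channel epoch (1 s at 500 Hz); real data would be epoched around the vowels.
fs = 500
eeg = np.random.randn(fs)

for name, (lo, hi) in BANDS.items():
    print(f"{name:>5}: {band_power(eeg, fs, lo, hi):.4f}")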


Subject(s)
Brain Waves , Cerebral Cortex/physiology , Noise/adverse effects , Perceptual Masking , Pitch Perception , Speech Acoustics , Speech Perception , Acoustic Stimulation , Adult , Auditory Threshold , Brain Mapping/methods , Cues , Electroencephalography , Female , Humans , Male , Young Adult
4.
Hear Res ; 351: 34-44, 2017 08.
Article in English | MEDLINE | ID: mdl-28578876

ABSTRACT

Behavioral studies reveal that listeners exploit intrinsic differences in voice fundamental frequency (F0) to segregate concurrent speech sounds, the so-called "F0 benefit." A more favorable signal-to-noise ratio (SNR) in the environment, an extrinsic acoustic factor, similarly benefits the parsing of simultaneous speech. Here, we examined the neurobiological substrates of these two cues in the perceptual segregation of concurrent speech mixtures. We recorded event-related brain potentials (ERPs) while listeners performed a speeded double-vowel identification task. Listeners heard two concurrent vowels whose F0 differed by zero or four semitones, presented in either clean (no noise) or noise-degraded (+5 dB SNR) conditions. Behaviorally, listeners were more accurate in correctly identifying both vowels for larger F0 separations, but the F0 benefit was more pronounced at more favorable SNRs (i.e., a pitch × SNR interaction). Analysis of the ERPs revealed that only the P2 wave (∼200 ms) showed a similar F0 × SNR interaction as behavior and was correlated with listeners' perceptual F0 benefit. Neural classifiers applied to the ERPs further suggested that speech sounds are segregated neurally within 200 ms based on SNR, whereas segregation based on pitch occurs later in time (400-700 ms). The earlier timing of extrinsic SNR-based compared to intrinsic F0-based segregation implies that the cortical extraction of speech from noise is more efficient than differentiating speech based on pitch cues alone, which may recruit additional cortical processes. Findings indicate that noise and pitch differences interact relatively early in the cerebral cortex and that the brain arrives at the identities of concurrent speech mixtures as early as ∼200 ms.
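The abstract mentions neural classifiers applied to the ERPs to estimate when SNR- and pitch-based segregation emerge. The sketch below shows one generic way such time-resolved decoding is often set up, training a classifier on the multichannel pattern at each time sample; the logistic-regression pipeline, trial/channel counts, and random data are assumptions for illustration, not the authors' actual classifier or data.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def time_resolved_decoding(epochs, labels, cv=5):
    # Cross-validated decoding of a binary condition (e.g., clean vs. noise)
    # from the multichannel ERP pattern at each time sample.
    # epochs: (n_trials, n_channels, n_times); labels: (n_trials,)
    n_trials, n_channels, n_times = epochs.shape
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = np.empty(n_times)
    for t in range(n_times):
        scores[t] = cross_val_score(clf, epochs[:, :, t], labels, cv=cv).mean()
    return scores

# Placeholder data: 80 trials, 32 channels, 100 time samples (random, not real ERPs).
rng = np.random.default_rng(0)
epochs = rng.standard_normal((80, 32, 100))
labels = rng.integers(0, 2, size=80)
accuracy = time_resolved_decoding(epochs, labels)
print("peak decoding accuracy:", round(accuracy.max(), 2))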


Subject(s)
Auditory Cortex/physiology , Cues , Noise/adverse effects , Perceptual Masking , Pitch Perception , Speech Acoustics , Speech Perception , Voice Quality , Acoustic Stimulation , Adult , Audiometry, Speech , Electroencephalography , Evoked Potentials, Auditory , Female , Humans , Male , Reaction Time , Signal Detection, Psychological , Time Factors , Young Adult