Results 1 - 20 of 1,214
1.
Neuroimage ; 299: 120796, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39153523

ABSTRACT

PURPOSE: In this study, the objectification of the subjective perception of loudness was investigated using electroencephalography (EEG). In particular, the emergence of objective markers in the domain of the acoustic discomfort threshold was examined. METHODS: A cohort of 27 adults with normal hearing, aged between 18 and 30, participated in the study. The participants were presented with 500-ms noise stimuli via in-ear headphones. The acoustic signals were presented at sound levels of 55, 65, 75, 85, and 95 dB. After each stimulus, the subjects provided their subjective assessment of the perceived loudness using a colored scale on a touchscreen. EEG signals were recorded, and event-related potentials (ERPs) locked to sound onset were subsequently analyzed. RESULTS: Our findings reveal a linear dependency of the N100 component on both the sound level and the subjective loudness categorization of the sound. Additionally, the data demonstrated a nonlinear relationship of the P300 potential with the sound level as well as with the subjective loudness rating. The P300 potential was elicited exclusively when the stimuli had been subjectively rated as "very loud" or "too loud". CONCLUSION: The findings of the present study suggest that the subjective uncomfortable loudness level can be identified from objective neural parameters.
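A minimal sketch of the kind of analysis described here: fitting a line to the grand-average N100 amplitude across the five sound levels. The arrays are simulated placeholders, not the study's data, and the variable names are hypothetical.

# Minimal sketch (not the authors' pipeline): test a linear relation between
# sound level and N100 amplitude using per-subject ERP measures.
import numpy as np
from scipy.stats import linregress

levels = np.array([55, 65, 75, 85, 95])                      # dB, as in the study
rng = np.random.default_rng(0)
n100_amp = -(2.0 + 0.05 * (levels - 55)) + rng.normal(0, 0.3, (27, 5))  # placeholder amplitudes (uV)

mean_amp = n100_amp.mean(axis=0)                             # grand-average N100 per level
fit = linregress(levels, mean_amp)
print(f"slope = {fit.slope:.3f} uV/dB, r = {fit.rvalue:.2f}, p = {fit.pvalue:.3g}")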


Subject(s)
Electroencephalography , Loudness Perception , Humans , Adult , Male , Female , Electroencephalography/methods , Young Adult , Loudness Perception/physiology , Adolescent , Event-Related Potentials, P300/physiology , Acoustic Stimulation , Evoked Potentials, Auditory/physiology , Brain/physiology , Evoked Potentials/physiology
2.
Perception ; 53(7): 450-464, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38778785

ABSTRACT

This study aimed to investigate the perception of loudness in response to changes in fundamental frequency (F0) in spoken sounds, as well as the influence of linguistic background on this perceptual process. The results revealed that participants perceived changes in F0 to have accompanying changes in loudness, with a trend of lower F0 sounds being perceived as louder than higher F0 sounds. This finding contrasts with previous studies on pure tones, where increases in frequency typically led to increases in loudness. Furthermore, the study examined differences between two distinct groups of participants: Chinese-speaking and English-speaking individuals. It was observed that English-speaking participants exhibited a greater sensitivity to minor intensity changes compared to Chinese-speaking participants. This discrepancy in sensitivity suggests that linguistic background may play a significant role in shaping the perception of loudness in spoken sound. The study's findings contribute to our understanding of how F0 variations are perceived in terms of loudness, and highlight the potential impact of language experience on this perceptual process.


Subject(s)
Loudness Perception , Speech Perception , Humans , Male , Female , Loudness Perception/physiology , Adult , Young Adult , Speech Perception/physiology , Speech Acoustics , Language
3.
Psychol Res ; 88(5): 1602-1615, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38720089

ABSTRACT

For the auditory dimensions loudness and pitch, a vertical SARC effect (Spatial Association of Response Codes) exists: when responding to loud (high) tones, participants are faster with top-sided responses than with bottom-sided responses, and vice versa for soft (low) tones. These effects are typically explained by two different spatial representations for the two dimensions, with pitch being represented on a helix structure and loudness being represented as a spatially associated magnitude. Prior studies show inconsistent results regarding whether two SARC effects can occur at the same time and whether SARC effects interact with each other. This study therefore aimed to investigate the interrelation between the SARC effect for pitch and the SARC effect for loudness in a timbre discrimination task. Participants (N = 36) heard one tone per trial and had to decide whether the presented tone was a violin tone or an organ tone by pressing a top-sided or bottom-sided response key. Loudness and pitch were varied orthogonally. We tested the occurrence of SARC effects for pitch and loudness as well as their potential interaction by conducting a multiple linear regression with the difference in reaction time (dRT) as the dependent variable and loudness and pitch as predictors. Frequentist and Bayesian analyses revealed that the regression coefficients of pitch and loudness were smaller than zero, indicating the simultaneous occurrence of SARC effects for both dimensions. In contrast, the interaction coefficient did not differ from zero, indicating an additive effect of both predictors.
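A sketch of the regression idea described in this abstract: dRT regressed on loudness, pitch, and their interaction. The data frame below is simulated for illustration only; the coding of the predictors is an assumption.

# Sketch of the reported analysis idea: regress the reaction-time difference
# (dRT = RT_bottom - RT_top) on loudness, pitch, and their interaction.
# Simulated placeholder data, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 36 * 16
df = pd.DataFrame({
    "loudness": rng.choice([-1.0, 1.0], n),   # coded soft / loud
    "pitch": rng.choice([-1.0, 1.0], n),      # coded low / high
})
df["dRT"] = -20 * df["loudness"] - 15 * df["pitch"] + rng.normal(0, 40, n)

model = smf.ols("dRT ~ loudness * pitch", data=df).fit()
print(model.params)   # negative main effects ~ SARC effects; interaction term ~ additivity check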


Subject(s)
Loudness Perception , Pitch Perception , Humans , Male , Female , Adult , Loudness Perception/physiology , Young Adult , Pitch Perception/physiology , Reaction Time/physiology , Acoustic Stimulation
4.
J Acoust Soc Am ; 156(2): 989-1003, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39136635

ABSTRACT

To improve the prediction accuracy of the sound quality of vehicle interior noise, a novel sound quality prediction model was proposed based on physiologically predicted metrics, i.e., loudness, sharpness, and roughness. First, a human-ear sound transmission model was constructed by combining an outer- and middle-ear finite element model with a cochlear transmission line model. This model converted external input noise into the cochlear basilar membrane response. Second, physiological perception models of loudness, sharpness, and roughness were constructed by transforming the basilar membrane response into sound perception related to neuronal firing. Finally, taking the loudness, sharpness, and roughness calculated by the physiological model and the subjective evaluation values of vehicle interior noise as parameters, a sound quality prediction model was constructed using the TabNet model. The results demonstrate that the loudness, sharpness, and roughness computed by the human-ear physiological model exhibit a stronger correlation with the subjective evaluation of sound quality annoyance than traditional psychoacoustic parameters. Furthermore, the average error percentage of sound quality prediction based on the physiological model is only 3.81%, which is lower than that based on traditional psychoacoustic parameters.
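A stand-in sketch of the final prediction step: mapping the three physiological metrics to an annoyance rating. The paper uses a TabNet model; here a generic scikit-learn gradient-boosting regressor is substituted purely for illustration, and all data and value ranges are invented placeholders.

# Stand-in sketch: predict subjective annoyance from (loudness, sharpness, roughness).
# A generic regressor replaces the paper's TabNet model; fake data throughout.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(2)
X = rng.uniform([2, 0.5, 0.01], [20, 3.0, 0.3], size=(200, 3))   # loudness, sharpness, roughness (placeholder units)
y = 1.5 * X[:, 0] + 4.0 * X[:, 1] + 30.0 * X[:, 2] + rng.normal(0, 1.0, 200)  # fake annoyance score

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print(f"mean error: {100 * mean_absolute_percentage_error(y_te, model.predict(X_te)):.1f}%")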


Subject(s)
Loudness Perception , Noise, Transportation , Psychoacoustics , Humans , Loudness Perception/physiology , Acoustic Stimulation/methods , Finite Element Analysis , Models, Biological , Automobiles , Basilar Membrane/physiology , Cochlea/physiology , Auditory Perception/physiology , Noise , Ear, Middle/physiology , Computer Simulation
5.
PLoS Comput Biol ; 17(8): e1009251, 2021 08.
Article in English | MEDLINE | ID: mdl-34339409

ABSTRACT

In the auditory system, tonotopy is postulated to be the substrate for a place code, where sound frequency is encoded by the location of the neurons that fire during the stimulus. Though conceptually simple, the computations that allow for the representation of intensity and complex sounds are poorly understood. Here, a mathematical framework is developed in order to define clearly the conditions that support a place code. To accommodate both frequency and intensity information, the neural network is described as a space with elements that represent individual neurons and clusters of neurons. A mapping is then constructed from acoustic space to neural space so that frequency and intensity are encoded, respectively, by the location and size of the clusters. Algebraic operations (addition and multiplication) are derived to elucidate the rules for representing, assembling, and modulating multi-frequency sound in networks. The resulting outcomes of these operations are consistent with network simulations as well as with electrophysiological and psychophysical data. The analyses show how both frequency and intensity can be encoded with a pure place code, without the need for rate or temporal coding schemes. The algebraic operations are used to describe loudness summation and suggest a mechanism for the critical band. The mathematical approach complements experimental and computational approaches and provides a foundation for interpreting data and constructing models.
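A toy illustration of the core mapping described here, not the paper's formal construction: frequency sets a cluster's location on a one-dimensional tonotopic axis, intensity sets its size, and a multi-frequency sound is built by taking the union of clusters. The frequency range and the size rule are assumptions.

# Toy place-code sketch: cluster location encodes frequency, cluster size encodes intensity.
import numpy as np

N_NEURONS = 1000                       # neurons indexed along a 1-D tonotopic axis
F_MIN, F_MAX = 125.0, 8000.0           # assumed frequency range (Hz)

def cluster(freq_hz, level_db):
    """Return the set of active neuron indices for a single tone."""
    center = int(N_NEURONS * np.log2(freq_hz / F_MIN) / np.log2(F_MAX / F_MIN))
    half_width = int(level_db / 2)     # toy rule: louder tone -> larger cluster
    return set(range(max(0, center - half_width), min(N_NEURONS, center + half_width + 1)))

def sound(components):
    """'Addition' of tones: the union of their clusters."""
    active = set()
    for freq_hz, level_db in components:
        active |= cluster(freq_hz, level_db)
    return active

print(len(cluster(1000, 40)), len(cluster(1000, 70)))   # intensity -> cluster size
print(len(sound([(500, 60), (2000, 60)])))              # a two-tone complex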


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Models, Neurological , Acoustic Stimulation , Animals , Auditory Pathways/physiology , Computational Biology , Computer Simulation , Evoked Potentials, Auditory/physiology , Humans , Loudness Perception/physiology , Nerve Net/physiology , Neural Networks, Computer , Pitch Perception/physiology , Synaptic Transmission/physiology
6.
Audiol Neurootol ; 27(6): 469-477, 2022.
Article in English | MEDLINE | ID: mdl-36007501

ABSTRACT

INTRODUCTION: The mechanism common to tinnitus, hyperacusis, and loudness perception is hypothesized to be related to central gain. Although central gain increases with attempts to compensate for hearing loss, reduced input can also be observed in people with clinically normal hearing. This study aimed to evaluate the loudness growth function of tinnitus patients with and without hyperacusis using behavioural and electrophysiological methods. METHODS: The study comprised three groups with a total of 60 subjects with clinically normal hearing: a control group (10 men and 10 women; mean age 39.8, SD 11.8 years), a tinnitus group (10 men and 10 women; mean age 40.9, SD 12.2 years), and a hyperacusis group who also had tinnitus (7 men and 13 women; mean age 38.7, SD 14.6 years). Loudness discomfort levels (LDLs), categorical loudness scaling (CLS), and cortical auditory evoked potentials were used to evaluate loudness growth. N1-P2 component amplitudes and latencies were measured. RESULTS: LDL results at 500, 1,000, 2,000, 4,000, and 8,000 Hz showed a significant difference between the hyperacusis group and the other two groups (p < 0.001). In the loudness scaling test performed with 500 Hz and 2,000 Hz narrow-band noise (NBN) stimuli, a significant difference was observed between the hyperacusis group and the other two groups in the "medium," "loud," and "very loud" categories (p < 0.001). In the cortical examination performed with 500 Hz and 2,000 Hz NBN stimuli at 40, 60, and 80 dB nHL, no significant difference was observed between the groups in N1 latency, P2 latency, or N1-P2 peak-to-peak amplitude. CONCLUSION: Although the hyperacusis group differed significantly from the other groups in the behavioural tests, the same cannot be said for the electrophysiological tests. In our attempt to differentiate tinnitus and hyperacusis via the loudness growth function, the N1 and P2 responses did not prove to be suitable measures. However, it appears to be beneficial to use CLS in addition to LDLs in behavioural testing.


Subject(s)
Hyperacusis , Tinnitus , Male , Humans , Female , Adult , Hearing , Loudness Perception/physiology , Hearing Tests
7.
Med Sci Monit ; 28: e936373, 2022 Apr 09.
Article in English | MEDLINE | ID: mdl-35396343

ABSTRACT

Loudness recruitment, defined as an abnormally rapid growth of perceived loudness with sound intensity, is a common symptom of hearing loss induced by cochlear lesions. It is distinct from hyperacusis, which is defined as "abnormal intolerance to regular noises" or "extreme amplification of sounds that are comfortable to the average individual". Although both are characterized by abnormally high sound amplification, their mechanisms are distinct. Damage to the outer hair cells alters the nonlinear characteristics of the basilar membrane, resulting in aberrant auditory nerve responses that may be connected to loudness recruitment. In contrast, hyperacusis is an aberrant condition characterized by maladaptation of the central auditory system. Peripheral injury can produce fluctuations in loudness recruitment, but this is not always the source of hyperacusis. Hyperacusis can also be accompanied by aversion to sound and fear of sound stimuli, in which the limbic system may play a critical role. This brief review aims to present the current status of the neurobiological mechanisms that distinguish loudness recruitment from hyperacusis.


Subject(s)
Hearing Loss , Hyperacusis , Acoustic Stimulation , Cochlear Nerve , Humans , Loudness Perception/physiology
8.
Hum Brain Mapp ; 42(6): 1742-1757, 2021 04 15.
Article in English | MEDLINE | ID: mdl-33544429

ABSTRACT

Psychoacoustic research suggests that judgments of perceived loudness change differ significantly between sounds with continuous increases and decreases of acoustic intensity, often referred to as "up-ramps" and "down-ramps." The magnitude and direction of this difference, in turn, appears to depend on focused attention and the specific task performed by the listeners. This has led to the suspicion that cognitive processes play an important role in the development of the observed context effects. The present study addressed this issue by exploring neural correlates of context-dependent loudness judgments. Normal hearing listeners continuously judged the loudness of complex-tone sequences which slowly changed in level over time while auditory fMRI was performed. Regression models that included information either about presented sound levels or about individual loudness judgments were used to predict activation throughout the brain. Our psychoacoustical data confirmed robust effects of the direction of intensity change on loudness judgments. Specifically, stimuli were judged softer when following a down-ramp, and louder in the context of an up-ramp. Levels and loudness estimates significantly predicted activation in several brain areas, including auditory cortex. However, only activation in nonauditory regions was more accurately predicted by context-dependent loudness estimates as compared with sound levels, particularly in the orbitofrontal cortex and medial temporal areas. These findings support the idea that cognitive aspects contribute to the generation of context effects with respect to continuous loudness judgments.
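A minimal sketch of the model-comparison idea in this abstract: predict a region's activation time course either from presented sound level or from the listener's loudness judgments and compare the fits. The signals are simulated placeholders; a real analysis would use an fMRI GLM with HRF convolution rather than this bare regression.

# Sketch: compare level-based vs. judgment-based regressors for a fake activation time course.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
t = 300
level = np.cumsum(rng.normal(0, 1, t))                  # slowly varying sound level (a.u.)
judged = level + 5 * np.sin(np.arange(t) / 30)          # context-dependent loudness judgment
bold = 0.8 * judged + rng.normal(0, 2, t)               # fake activation driven by judged loudness

for name, x in [("sound level", level), ("loudness judgment", judged)]:
    r2 = LinearRegression().fit(x[:, None], bold).score(x[:, None], bold)
    print(f"{name}: R^2 = {r2:.2f}")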


Subject(s)
Loudness Perception/physiology , Prefrontal Cortex/physiology , Psychoacoustics , Temporal Lobe/physiology , Adolescent , Adult , Auditory Cortex/diagnostic imaging , Auditory Cortex/physiology , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Prefrontal Cortex/diagnostic imaging , Temporal Lobe/diagnostic imaging , Young Adult
9.
Neuroimage ; 213: 116733, 2020 06.
Article in English | MEDLINE | ID: mdl-32169543

ABSTRACT

Loudness dependence of auditory evoked potentials (LDAEP) has long been considered to reflect central basal serotonin transmission. However, the relationship between LDAEP and individual serotonin receptors and transporters has not been fully explored in humans and may involve other neurotransmitter systems. To examine LDAEP's relationship with the serotonin system, we performed PET using serotonin-1A (5-HT1A) imaging via [11C]CUMI-101 and serotonin transporter (5-HTT) imaging via [11C]DASB on a mixed sample of healthy controls (n = 4: 4 females, 0 males) and patients with unipolar (MDD, n = 11: 4 females, 7 males) and bipolar depression (BD, n = 8: 4 females, 4 males). On these same participants, we also performed electroencephalography (EEG) within a week of PET scanning, using 1000 Hz tones of varying intensity to evoke LDAEP. We then evaluated the relationship between LDAEP and 5-HT1A or 5-HTT binding in both the raphe (5-HT1A)/midbrain (5-HTT) areas and in the temporal cortex. We found that LDAEP was correlated significantly positively with 5-HT1A and significantly negatively with 5-HTT in the temporal cortex (p < 0.05), but not correlated with either in the midbrain or raphe. In males only, exploratory analysis showed multiple regions throughout the brain in which LDAEP significantly correlated with 5-HT1A; we did not find this with 5-HTT. This multimodal study partially validates preclinical models of a serotonergic influence on LDAEP. Replication in larger samples is necessary to further clarify our understanding of the role of serotonin in the perception of auditory tones.
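A sketch of the correlation analysis behind this abstract. LDAEP is commonly quantified as the slope of the N1/P2 amplitude against stimulus intensity; the slope per subject is then correlated with a regional binding measure. All arrays, intensities, and effect sizes below are simulated assumptions.

# Sketch: per-subject LDAEP slope correlated with a (fake) regional binding potential.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
intensities = np.array([60, 70, 80, 90, 100])                 # dB SPL (assumed)
n_subj = 23
erp_amp = rng.normal(5, 1, (n_subj, 1)) + rng.normal(0.08, 0.03, (n_subj, 1)) * intensities
ldaep = np.polyfit(intensities, erp_amp.T, 1)[0]              # slope per subject
binding = 2.0 + 3.0 * ldaep + rng.normal(0, 0.3, n_subj)      # fake temporal-cortex 5-HT1A binding

r, p = pearsonr(ldaep, binding)
print(f"r = {r:.2f}, p = {p:.3g}")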


Subject(s)
Brain/physiology , Evoked Potentials, Auditory/physiology , Loudness Perception/physiology , Serotonin Plasma Membrane Transport Proteins/metabolism , Serotonin/metabolism , Adolescent , Adult , Aged , Bipolar Disorder , Electroencephalography , Female , Humans , Male , Middle Aged , Positron-Emission Tomography , Young Adult
10.
J Acoust Soc Am ; 145(5): 3208, 2019 05.
Article in English | MEDLINE | ID: mdl-31153337

ABSTRACT

The aim of this study is to explore the performance of binaural and monaural recordings in soundscape evaluation. Twelve sites with different acoustic scenarios were chosen, where binaural and monaural recordings were made simultaneously. Nine soundscape indicators were assessed by residents in a laboratory-based auditory test. The results showed that the two recording methods agree well on most soundscape evaluation indicators, including overall impression, acoustic comfort, pleasantness, annoyance, eventfulness, and loudness. The two recording methods were found to be correlated with different indicators in a similar way. For most sites, the two recording methods were significantly correlated, except for directionality. For both recording methods, the A-weighted sound pressure level was found to have a weak impact on soundscape evaluation. Reverberation time significantly affects reverberance through binaural recordings. Overall, for most soundscape indicators it is feasible to use either recording method, although when "realism," "reverberance," and "directivity" are involved in the evaluation, binaural recordings render the corresponding perception more consistently than monaural recordings.


Subject(s)
Auditory Threshold/physiology , Loudness Perception/physiology , Sound , Acoustic Stimulation/methods , Acoustics , Hearing Tests/methods , Humans
11.
J Acoust Soc Am ; 145(6): 3586, 2019 06.
Article in English | MEDLINE | ID: mdl-31255128

ABSTRACT

Contributions of individual frequency bands to judgments of total loudness can be assessed by varying the level of each band independently from one presentation to the next and determining the relation between the change in level of each band and the loudness judgment. In a previous study, measures of perceptual weight obtained in this way for noise stimuli consisting of 15 bands showed greater weight associated with the highest and lowest bands than loudness models would predict. This was true even for noise with the long-term average speech spectrum, where the highest band contained little energy. One explanation is that listeners were basing decisions on some attribute other than loudness. The current study replicated earlier results for noise stimuli and included conditions using 15 tones located at the center frequencies of the noise bands. Although the two types of stimuli sound very different, the patterns of perceptual weight were nearly identical, suggesting that both sets of results are based on loudness judgments and that the edge bands play an important role in those judgments. The importance of the highest band was confirmed in a loudness-matching task involving all combinations of noise and tonal stimuli.
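The standard way to obtain such perceptual weights is to jitter each band's level independently on every trial and regress the listener's trial-by-trial "louder/softer" decisions on those per-band level changes; the regression coefficients are the weights. The sketch below uses invented 15-band data and an illustrative edge-band emphasis.

# Sketch of perceptual-weight estimation from trial-by-trial level jitter (fake data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n_trials, n_bands = 2000, 15
delta_level = rng.normal(0, 2, (n_trials, n_bands))        # per-band level jitter (dB)
true_w = np.ones(n_bands)
true_w[[0, -1]] = 2.5                                       # edge bands weighted more (illustrative)
decision = (delta_level @ true_w + rng.normal(0, 4, n_trials)) > 0   # simulated "louder" responses

weights = LogisticRegression().fit(delta_level, decision).coef_[0]
print(np.round(weights / weights.sum(), 3))                 # normalized perceptual weights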


Subject(s)
Auditory Perception/physiology , Auditory Threshold/physiology , Loudness Perception/physiology , Perceptual Masking , Acoustic Stimulation/methods , Adult , Humans , Male , Noise , Sound
12.
J Neurophysiol ; 120(3): 920-925, 2018 09 01.
Article in English | MEDLINE | ID: mdl-29742032

ABSTRACT

Loud sounds have been demonstrated to increase motor cortex excitability when transcranial magnetic stimulation (TMS) is synchronized with the auditory evoked N100 potential measured with electroencephalography (EEG). The N100 potential is generated by an afferent response to sound onset and feature analysis, and upon a novel sound it is also related to the arousal reaction. The arousal reaction is known to originate from the ascending reticular activating system of the brain stem and to modulate neuronal activity throughout the central nervous system. In this study we investigated the difference in motor evoked potentials (MEPs) when deviant and novelty stimuli were randomly interspersed in a train of standard tones. Twelve healthy subjects participated in this study. Three types of sound stimuli were used: 1) standard stimuli (800 Hz), 2) deviant stimuli (560 Hz), and 3) novelty stimuli (12 different sounds). In each stimulus sequence, 600 stimuli were given. Of these, 90 were deviant stimuli randomly placed between the standard stimuli. Each of the 12 novel sounds was presented once in pseudorandomized order. TMS was randomly mixed with the sound stimuli so that it was either synchronized with the individual N100 or trailed the sound onset by 200 ms. All sounds elicited an increase in motor cortex excitability. The type of sound had no significant effect. We also found that TMS delivered 200 ms after sound onset caused a significant increase in MEPs. This contradicted our hypothesis that MEP amplitudes to TMS synchronized with the N100 would be greater than those to TMS delivered 200 ms after a sound, and it remains unexplained. NEW & NOTEWORTHY We demonstrated modulation of motor cortical excitability by a parallel auditory stimulus by combining navigated transcranial magnetic stimulation (TMS) with auditory stimuli. TMS was synchronized with auditory evoked potentials considered to be generated by the unconscious attention-call process in the auditory system.


Subject(s)
Acoustic Stimulation/psychology , Evoked Potentials, Auditory/physiology , Evoked Potentials, Motor/physiology , Loudness Perception/physiology , Motor Cortex/physiology , Adult , Arousal/physiology , Electroencephalography , Female , Finland , Hospitals, University , Humans , Linear Models , Male , Transcranial Magnetic Stimulation , Young Adult
13.
Eur J Neurosci ; 48(2): 1743-1764, 2018 07.
Article in English | MEDLINE | ID: mdl-29888410

ABSTRACT

Tinnitus is the perception of a phantom sound characterized behaviorally by a loudness and a distress component. Although a wealth of information is available about the relationship between these behavioral correlates and changes in static functional connectivity, their relationship with dynamic changes in network connectivity remains unexplored. The aim of this study was thus to investigate changes in the flexibility and stability of temporal variability in tinnitus and their relation to loudness and distress using continuous resting-state EEG. We observe an increase in temporal variability at the whole-brain level in tinnitus, which is spatiotemporally distributed at the nodal level. Behaviorally, we observe changes in the relationship between temporal variability and loudness and distress depending on the amount of distress experienced. In patients with low distress, there is no relationship between temporal variability and loudness or distress, demonstrating resilience in the dynamic connectivity of the brain. However, patients with high distress exhibit a direct relationship with increasing loudness in the primary auditory cortex and parahippocampus, and an inverse relationship with increasing distress in the parahippocampus. In tinnitus, the specific sensory (loudness) component related to increased temporal variability possibly reflects a Bayesian search for updating deafferentation-based missing information. On the other hand, the decreased temporal variability related to the nonspecific distress component possibly reflects more hard-wired or less adaptive contextual processing. Therefore, our findings may reveal a way to understand changes in network dynamics not just in tinnitus but also in other brain disorders.
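One common way to quantify nodal temporal variability of the kind discussed here is to compute sliding-window connectivity matrices and measure how much each node's connectivity profile changes across windows. The sketch below uses fake multichannel data and is not necessarily the exact metric used in the study.

# Sketch: nodal temporal variability from sliding-window connectivity (fake EEG/source data).
import numpy as np

rng = np.random.default_rng(6)
n_chan, n_samples, win, step = 19, 5000, 500, 250
eeg = rng.normal(0, 1, (n_chan, n_samples))                       # placeholder signals

profiles = []
for start in range(0, n_samples - win + 1, step):
    profiles.append(np.corrcoef(eeg[:, start:start + win]))       # windowed connectivity matrix
profiles = np.stack(profiles)                                     # shape: (windows, chan, chan)

# Variability = 1 - mean correlation of a node's connectivity profile across window pairs
variability = np.empty(n_chan)
for node in range(n_chan):
    p = np.delete(profiles[:, node, :], node, axis=1)             # drop the self-connection
    corr = np.corrcoef(p)                                         # windows x windows similarity
    variability[node] = 1 - corr[np.triu_indices_from(corr, k=1)].mean()
print(np.round(variability, 3))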


Subject(s)
Anxiety/physiopathology , Brain Waves/physiology , Cerebral Cortex/physiopathology , Connectome/methods , Electroencephalography/methods , Loudness Perception/physiology , Nerve Net/physiopathology , Tinnitus/physiopathology , Adult , Auditory Cortex/diagnostic imaging , Auditory Cortex/physiopathology , Cerebral Cortex/diagnostic imaging , Cortical Synchronization/physiology , Female , Humans , Male , Middle Aged , Nerve Net/diagnostic imaging , Parahippocampal Gyrus/diagnostic imaging , Parahippocampal Gyrus/physiopathology , Tinnitus/diagnostic imaging
14.
J Acoust Soc Am ; 144(3): EL236, 2018 09.
Article in English | MEDLINE | ID: mdl-30424669

ABSTRACT

Normal-hearing listeners judged loudness differences between two complex speech sounds, one consisting of "n" consonant-vowel (CV) pairs, each spoken by a different talker, and one consisting of "2n" CV pairs. When n was less than four, listeners' judgments of loudness differences between the two sounds were based on the level of the individual CVs within each sound, not the overall level of the sounds. When n was four or more, listeners' judgments of loudness differences between the two sounds were based on the overall level of the two sounds consisting of n or 2n CVs.


Subject(s)
Acoustic Stimulation/methods , Loudness Perception/physiology , Speech Intelligibility/physiology , Speech Perception/physiology , Adult , Auditory Perception/physiology , Female , Humans , Male , Phonetics , Young Adult
15.
J Acoust Soc Am ; 143(5): 2697, 2018 05.
Article in English | MEDLINE | ID: mdl-29857716

ABSTRACT

This paper develops a model to predict whether listeners would be likely to complain due to annoyance when exposed to a noise signal with a prominent tone, such as those commonly produced by heating, ventilation, and air-conditioning systems. Twenty participants completed digit span tasks in a controlled laboratory while exposed to noise signals containing tones ranging from 125 to 1000 Hz at differing levels and with differing overall loudness. After completing the digit span tasks under each noise signal, from which task accuracy and speed of completion were captured, subjects were asked to rate their level of annoyance and to indicate the likelihood of complaining about the noise. Results show that greater tonality in noise has a statistically significant effect on task performance, increasing the time it takes participants to complete the digit span task; no statistically significant effects were found on task accuracy. A logistic regression model was developed to relate the subjective annoyance responses to two noise metrics, stationary Loudness and Tonal Audibility, selected for the model because of their high correlations with annoyance responses. The percentage-of-complaints model showed better performance and reliability than models based on the percentage of highly annoyed or annoyed listeners.
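A sketch of the described model form: a logistic regression of a binary "would complain" response on stationary Loudness and Tonal Audibility. The data, units, and coefficients below are simulated placeholders, not the study's measurements.

# Sketch: logistic regression of complaint likelihood on Loudness and Tonal Audibility (fake data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 400
loudness = rng.uniform(4, 16, n)             # sone (placeholder range)
tonal_audibility = rng.uniform(0, 15, n)     # dB (placeholder range)
logit = -8 + 0.4 * loudness + 0.3 * tonal_audibility
complain = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(float)

X = sm.add_constant(np.column_stack([loudness, tonal_audibility]))
fit = sm.Logit(complain, X).fit(disp=False)
print(fit.params)                             # intercept, Loudness, Tonal Audibility coefficients
print(fit.predict([[1, 12.0, 8.0]]))          # predicted complaint probability for one condition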


Subject(s)
Loudness Perception/physiology , Noise/adverse effects , Psychomotor Performance/physiology , Adult , Female , Forecasting , Humans , Male , Young Adult
16.
J Acoust Soc Am ; 143(5): 2994, 2018 05.
Article in English | MEDLINE | ID: mdl-29857738

ABSTRACT

Loudness depends on both the intensity and spectrum of a sound. Listeners with normal hearing perceive a broadband sound as being louder than an equal-level narrowband sound because loudness grows nonlinearly with level and is then summed across frequency bands. This difference in loudness as a function of bandwidth is reduced in listeners with sensorineural hearing loss (SNHL). Suppression, the reduction in the cochlear response to one sound by the simultaneous presentation of another sound, is also reduced in listeners with SNHL. Hearing-aid gain that is based on loudness measurements with pure tones may fail to restore normal loudness growth for broadband sounds. This study investigated whether hearing-aid amplification that mimics suppression can improve loudness summation for listeners with SNHL. Estimates of loudness summation were obtained using measurements of categorical loudness scaling (CLS). Stimuli were bandpass-filtered noises centered at 2 kHz with bandwidths in the range of 0.1-6.4 kHz. Gain was selected to restore normal loudness based on CLS measurements with pure tones. Gain that accounts for both compression and suppression resulted in better restoration of loudness summation, compared to compression alone. However, restoration was imperfect, suggesting that additional refinements to the signal processing and gain-prescription algorithms are needed.


Subject(s)
Acoustic Stimulation/methods , Hearing Aids , Hearing Loss/physiopathology , Hearing Loss/therapy , Loudness Perception/physiology , Adult , Aged , Aged, 80 and over , Auditory Perception/physiology , Female , Hearing Loss/diagnosis , Humans , Male , Middle Aged , Young Adult
17.
J Acoust Soc Am ; 143(4): 2119, 2018 04.
Article in English | MEDLINE | ID: mdl-29716301

ABSTRACT

Differences in individual listening patterns are reported for a dichotic sample discrimination task. Seven tones were drawn on each trial from normal distributions with means of 1000 or 1100 Hz. Even-numbered tones (2, 4, and 6) and odd-numbered tones (1, 3, 5, and 7) were drawn from distributions with standard deviations of 50 Hz and 200 Hz, respectively. Task difficulty was manipulated by presenting odd and even tones at different intensities. In easy conditions, high- and low-informative tones were presented at 70 dB and 50 dB, respectively. In difficult conditions, high-informative and low-informative tones were presented at 50 dB and 70 dB, respectively. Participants judged whether the sample was drawn from the high- or low-mean distribution. Decision weights, efficiency, and sensitivity showed a range of abilities to attend to the high-informative tones, with d' ranging from 0.7 to 2.4. Most listeners showed a left-ear advantage, while no listeners showed a right-ear advantage. Some listeners, but not all, showed no loudness dominance effect, retaining the ability to selectively attend to quiet tones in difficult conditions. These findings show that, for some listeners, an attentional strategy in dichotic listening can overcome the loudness dominance effect.


Subject(s)
Attention/physiology , Auditory Perception/physiology , Dichotic Listening Tests/methods , Discrimination, Psychological/physiology , Functional Laterality , Loudness Perception/physiology , Acoustic Stimulation , Adolescent , Adult , Bayes Theorem , Humans , Young Adult
18.
J Acoust Soc Am ; 144(5): 2751, 2018 11.
Article in English | MEDLINE | ID: mdl-30522299

ABSTRACT

The symmetric biphasic pulses used in contemporary cochlear implants (CIs) consist of both cathodic and anodic currents, which may stimulate different sites on spiral ganglion neurons and, potentially, interact with each other. The effect of the order of anodic and cathodic stimulation on loudness at short inter-pulse intervals (IPIs; 0-800 µs) is investigated. Pairs of opposite-polarity pseudomonophasic (PS) pulses were used, and the amplitude of each pulse was manipulated independently. In experiment 1 the two PS pulses differed in their current level so as to elicit the same loudness when presented separately. Six users of the Advanced Bionics CI (Valencia, CA) loudness-ranked trains of the pulse pairs using a midpoint-comparison procedure. Stimuli with anodic-leading polarity were louder than those with cathodic-leading polarity for IPIs shorter than 400 µs. This effect was small, about 0.3 dB, but consistent across listeners. When the same procedure was repeated with both PS pulses having the same current level (experiment 2), anodic-leading stimuli were still louder than cathodic-leading stimuli at very short intervals. However, when symmetric biphasic pulses were used (experiment 3), the effect disappeared at short intervals and reversed at long intervals. Possible peripheral sources of such polarity interactions are discussed.


Subject(s)
Auditory Perception/physiology , Cochlear Implants/adverse effects , Loudness Perception/physiology , Spiral Ganglion/physiopathology , Acoustic Stimulation , Aged , Cochlear Implantation/methods , Cochlear Implants/statistics & numerical data , Electric Stimulation/adverse effects , Electrodes, Implanted/standards , Humans , Middle Aged , Pitch Discrimination/physiology , Prosthesis Design , Spiral Ganglion/surgery
19.
Proc Natl Acad Sci U S A ; 111(22): E2339-48, 2014 Jun 03.
Article in English | MEDLINE | ID: mdl-24843153

ABSTRACT

Neurons in the medial superior olive (MSO) and lateral superior olive (LSO) of the auditory brainstem code for sound-source location in the horizontal plane, extracting interaural time differences (ITDs) from the stimulus fine structure and interaural level differences (ILDs) from the stimulus envelope. Here, we demonstrate a postsynaptic gradient in temporal processing properties across the presumed tonotopic axis; neurons in the MSO and the low-frequency limb of the LSO exhibit fast intrinsic electrical resonances and low input impedances, consistent with their processing of ITDs in the temporal fine structure. Neurons in the high-frequency limb of the LSO show low-pass electrical properties, indicating they are better suited to extracting information from the slower, modulated envelopes of sounds. Using a modeling approach, we assess ITD and ILD sensitivity of the neural filters to natural sounds, demonstrating that the transformation in temporal processing along the tonotopic axis contributes to efficient extraction of auditory spatial cues.


Subject(s)
Auditory Pathways/physiology , Cochlear Implants , Models, Neurological , Olivary Nucleus/physiology , Sound Localization/physiology , Acoustic Stimulation , Animals , Auditory Pathways/cytology , Cues , Guinea Pigs , Loudness Perception/physiology , Noise , Olivary Nucleus/cytology , Patch-Clamp Techniques , Rats , Space Perception/physiology
20.
Neuroimage ; 124(Pt A): 906-917, 2016 Jan 01.
Article in English | MEDLINE | ID: mdl-26436490

ABSTRACT

The underlying mechanism of how the human brain solves the cocktail party problem is largely unknown. Recent neuroimaging studies, however, suggest salient temporal correlations between the auditory neural response and the attended auditory object. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects, we propose a decoding approach for tracking the attentional state while subjects are selectively listening to one of the two speech streams embedded in a competing-speaker environment. We develop a biophysically-inspired state-space model to account for the modulation of the neural response with respect to the attentional state of the listener. The constructed decoder is based on a maximum a posteriori (MAP) estimate of the state parameters via the Expectation Maximization (EM) algorithm. Using only the envelope of the two speech streams as covariates, the proposed decoder enables us to track the attentional state of the listener with a temporal resolution of the order of seconds, together with statistical confidence intervals. We evaluate the performance of the proposed model using numerical simulations and experimentally measured evoked MEG responses from the human brain. Our analysis reveals considerable performance gains provided by the state-space model in terms of temporal resolution, computational complexity and decoding accuracy.
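A much-simplified stand-in for the paper's state-space (MAP/EM) decoder, meant only to show the tracking idea: in sliding windows, compare how well each speaker's envelope correlates with the neural response, then smooth that evidence over time. The signals, sampling rate, window length, and smoothing constant are all invented assumptions, and this is not the authors' method.

# Simplified attention-tracking sketch with fake envelopes and a fake neural response.
import numpy as np

rng = np.random.default_rng(8)
fs, dur = 100, 60                                        # 100 Hz envelopes, 60 s
n = fs * dur
env1, env2 = rng.gamma(2, 1, n), rng.gamma(2, 1, n)      # two speech envelopes
attended = np.where(np.arange(n) < n // 2, env1, env2)   # listener switches attention at 30 s
neural = attended + rng.normal(0, 1.5, n)                # fake MEG-derived response

win = 5 * fs
state = 0.0
for start in range(0, n - win + 1, win):
    seg = slice(start, start + win)
    r1 = np.corrcoef(neural[seg], env1[seg])[0, 1]
    r2 = np.corrcoef(neural[seg], env2[seg])[0, 1]
    state = 0.7 * state + 0.3 * (r1 - r2)                # exponential smoothing of the evidence
    print(f"{start / fs:4.0f}s  attending speaker {'1' if state > 0 else '2'} (evidence {state:+.2f})")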


Subject(s)
Attention/physiology , Loudness Perception/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Algorithms , Auditory Perception/physiology , Environment , Female , Humans , Magnetoencephalography , Male , Models, Neurological , Young Adult