Results 1 - 20 of 185
1.
J Acoust Soc Am ; 156(1): 326-340, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38990035

ABSTRACT

Humans are adept at identifying spectral patterns, such as vowels, in different rooms, at different sound levels, or produced by different talkers. How this feat is achieved remains poorly understood. Two psychoacoustic analogs of spectral pattern recognition are spectral profile analysis and spectrotemporal ripple direction discrimination. This study tested whether pattern-recognition abilities observed previously at low frequencies are also observed at extended high frequencies. At low frequencies (center frequency ∼500 Hz), listeners were able to achieve accurate profile-analysis thresholds, consistent with prior literature. However, at extended high frequencies (center frequency ∼10 kHz), listeners' profile-analysis thresholds were either unmeasurable or could not be distinguished from performance based on overall loudness cues. A similar pattern of results was observed with spectral ripple discrimination, where performance was again considerably better at low than at high frequencies. Collectively, these results suggest a severe deficit in listeners' ability to analyze patterns of intensity across frequency in the extended high-frequency region that cannot be accounted for by cochlear frequency selectivity. One interpretation is that the auditory system is not optimized to analyze such fine-grained across-frequency profiles at extended high frequencies, as they are not typically informative for everyday sounds.


Subject(s)
Acoustic Stimulation; Auditory Threshold; Psychoacoustics; Humans; Young Adult; Female; Male; Adult; Cues; Speech Perception/physiology; Sound Spectrography; Loudness Perception; Pattern Recognition, Physiological
2.
J Acoust Soc Am ; 155(6): R11-R12, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38829158

ABSTRACT

The Reflections series takes a look back on historical articles from The Journal of the Acoustical Society of America that have had a significant impact on the science and practice of acoustics.


Subject(s)
Acoustics; Cochlea; History, 20th Century; Humans; Cochlea/physiology; Animals; Sound
3.
Biology (Basel) ; 12(12)2023 Dec 13.
Article in English | MEDLINE | ID: mdl-38132348

ABSTRACT

Harmonic complex tones are easier to detect in noise than inharmonic complex tones, providing a potential perceptual advantage in complex auditory environments. Here, we explored whether the harmonic advantage extends to other auditory tasks that are important for navigating a noisy auditory environment, such as amplitude- and frequency-modulation detection. Sixty young normal-hearing listeners were tested, divided into two equal groups with and without musical training. Consistent with earlier studies, harmonic tones were easier to detect in noise than inharmonic tones, with a signal-to-noise ratio (SNR) advantage of about 2.5 dB, and the pitch discrimination of the harmonic tones was more accurate than that of inharmonic tones, even after differences in audibility were accounted for. In contrast, neither amplitude- nor frequency-modulation detection was superior with harmonic tones once differences in audibility were accounted for. Musical training was associated with better performance only in pitch-discrimination and frequency-modulation-detection tasks. The results confirm a detection and pitch-perception advantage for harmonic tones but reveal that the harmonic benefits do not extend to suprathreshold tasks that do not rely on extracting the fundamental frequency. A general theory is proposed that may account for the effects of both noise and memory on pitch-discrimination differences between harmonic and inharmonic tones.

4.
J Acoust Soc Am ; 154(6): 3821-3832, 2023 12 01.
Article in English | MEDLINE | ID: mdl-38109406

ABSTRACT

Auditory enhancement is a spectral contrast aftereffect that can facilitate the detection of novel events in an ongoing background. A single-interval paradigm combined with roved frequency content between trials can yield as much as 20 dB enhancement in young normal-hearing listeners. This study compared such enhancement in 15 listeners with sensorineural hearing loss with that in 15 age-matched adults and 15 young adults with normal audiograms. All groups were presented with stimulus levels of 70 dB sound pressure level (SPL) per component. The two groups with normal hearing were also tested at 45 dB SPL per component. The hearing-impaired listeners showed very little enhancement overall. However, when tested at the same high (70-dB) level, both young and age-matched normal-hearing listeners also showed substantially reduced enhancement, relative to that found at 45 dB SPL. Some differences in enhancement emerged between young and older normal-hearing listeners at the lower sound level. The results suggest that enhancement is highly level-dependent and may also decrease somewhat with age or slight hearing loss. Implications for hearing-impaired listeners may include a poorer ability to adapt to real-world acoustic variability, due in part to the higher levels at which sound must be presented to be audible.


Subject(s)
Deafness; Hearing Loss, Sensorineural; Speech Perception; Young Adult; Humans; Acoustic Stimulation; Hearing Loss, Sensorineural/diagnosis; Sound; Audiometry, Pure-Tone; Auditory Threshold
5.
JASA Express Lett ; 3(9)2023 09 01.
Article in English | MEDLINE | ID: mdl-37747320

ABSTRACT

The Mini Profile of Music Perception Skills (Mini-PROMS) is a rapid performance-based measure of musical perceptual competence. The present study was designed to determine the optimal way to evaluate and score the Mini-PROMS results. Two traditional methods for scoring the Mini-PROMS, the weighted composite score and the parametric sensitivity index (d'), were compared with nonparametric alternatives, also derived from signal detection theory. Performance estimates using the traditional methods were found to depend on response bias (e.g., confidence), making them suboptimal. The simple nonparametric alternatives provided unbiased and reliable performance estimates from the Mini-PROMS and are therefore recommended instead.
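The contrast the study draws is between the parametric index d' (which assumes equal-variance Gaussian evidence distributions) and nonparametric indices from signal detection theory. A minimal sketch of the two classes of estimator, using A' as one common nonparametric alternative (the abstract does not specify which nonparametric indices were used, and the function names here are illustrative):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Parametric sensitivity index: z(H) - z(F).

    Assumes equal-variance Gaussian evidence distributions; rates must be
    strictly between 0 and 1 (corrected for extreme values in practice).
    """
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def a_prime(hit_rate, fa_rate):
    """A', a common nonparametric sensitivity index (Pollack & Norman, 1964).

    Estimates the area under the ROC curve without assuming a distributional
    form, making it less dependent on response bias.
    """
    h, f = hit_rate, fa_rate
    if h >= f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))
```

For example, a hit rate of 0.8 with a false-alarm rate of 0.2 yields d' ≈ 1.68 and A' ≈ 0.875; shifting the listener's criterion changes the two estimates in different ways, which is the bias sensitivity at issue.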


Subject(s)
Drama; Music; Bias; Perception
6.
J Neurosci ; 43(20): 3687-3695, 2023 05 17.
Article in English | MEDLINE | ID: mdl-37028932

ABSTRACT

Modulations in both amplitude and frequency are prevalent in natural sounds and are critical in defining their properties. Humans are exquisitely sensitive to frequency modulation (FM) at the slow modulation rates and low carrier frequencies that are common in speech and music. This enhanced sensitivity to slow-rate and low-frequency FM has been widely believed to reflect precise, stimulus-driven phase locking to temporal fine structure in the auditory nerve. At faster modulation rates and/or higher carrier frequencies, FM is instead thought to be coded by coarser frequency-to-place mapping, where FM is converted to amplitude modulation (AM) via cochlear filtering. Here, we show that patterns of human FM perception that have classically been explained by limits in peripheral temporal coding are instead better accounted for by constraints in the central processing of fundamental frequency (F0) or pitch. We measured FM detection in male and female humans using harmonic complex tones with an F0 within the range of musical pitch but with resolved harmonic components that were all above the putative limits of temporal phase locking (>8 kHz). Listeners were more sensitive to slow than fast FM rates, even though all components were beyond the limits of phase locking. In contrast, AM sensitivity remained better at faster than slower rates, regardless of carrier frequency. These findings demonstrate that classic trends in human FM sensitivity, previously attributed to auditory nerve phase locking, may instead reflect the constraints of a unitary code that operates at a more central level of processing.

SIGNIFICANCE STATEMENT: Natural sounds involve dynamic frequency and amplitude fluctuations. Humans are particularly sensitive to frequency modulation (FM) at slow rates and low carrier frequencies, which are prevalent in speech and music. This sensitivity has been ascribed to encoding of stimulus temporal fine structure (TFS) via phase-locked auditory nerve activity. To test this long-standing theory, we measured FM sensitivity using complex tones with a low F0 but only high-frequency harmonics beyond the limits of phase locking. Dissociating the F0 from TFS showed that FM sensitivity is limited not by peripheral encoding of TFS but rather by central processing of F0, or pitch. The results suggest a unitary code for FM detection limited by more central constraints.


Subject(s)
Cochlear Nerve; Music; Male; Humans; Female; Cochlear Nerve/physiology; Cochlea/physiology; Sound; Speech; Acoustic Stimulation
7.
J Cogn Neurosci ; 35(5): 765-780, 2023 05 01.
Article in English | MEDLINE | ID: mdl-36802367

ABSTRACT

Congenital amusia is a neurodevelopmental disorder characterized by difficulties in the perception and production of music, including the perception of consonance and dissonance, or the judgment of certain combinations of pitches as more pleasant than others. Two perceptual cues for dissonance are inharmonicity (the lack of a common fundamental frequency between components) and beating (amplitude fluctuations produced by close, interacting frequency components). Amusic individuals have previously been reported to be insensitive to inharmonicity, but to exhibit normal sensitivity to beats. In the present study, we measured adaptive discrimination thresholds in amusic participants and found elevated thresholds for both cues. We recorded EEG and measured the MMN in evoked potentials to consonance and dissonance deviants in an oddball paradigm. The amplitude of the MMN response was similar overall for amusic and control participants; however, in controls, there was a tendency toward larger MMNs for inharmonicity than for beating cues, whereas the opposite tendency was observed for the amusic participants. These findings suggest that initial encoding of consonance cues may be intact in amusia despite impaired behavioral performance, but that the relative weight of nonspectral (beating) cues may be increased for amusic individuals.


Subject(s)
Cues; Music; Humans; Acoustic Stimulation; Brain; Perception; Pitch Perception/physiology
8.
bioRxiv ; 2023 Dec 23.
Article in English | MEDLINE | ID: mdl-38187767

ABSTRACT

Objective: Cochlear implants (CIs) are auditory prostheses for individuals with severe to profound hearing loss, offering substantial but incomplete restoration of hearing function by stimulating the auditory nerve using electrodes. However, progress in CI performance and innovation has been constrained by the inability to rapidly test multiple sound processing strategies. Current research interfaces provided by major CI manufacturers have limitations in supporting a wide range of auditory experiments due to portability, programming difficulties, and the lack of direct comparison between sound processing algorithms. To address these limitations, we present the CompHEAR research platform, designed specifically for the Cochlear Implant Hackathon, enabling researchers to conduct diverse auditory experiments on a large scale. Study Design: Quasi-experimental. Setting: Virtual. Methods: CompHEAR is an open-source, user-friendly platform which offers flexibility and ease of customization, allowing researchers to set up a broad set of auditory experiments. CompHEAR employs a vocoder to simulate novel sound coding strategies for CIs. It facilitates even distribution of listening tasks among participants and delivers real-time metrics for evaluation. The software architecture underlies the platform's flexibility in experimental design and its wide range of applications in sound processing research. Results: Performance testing of the CompHEAR platform ensured that it could support at least 10,000 concurrent users. The CompHEAR platform was successfully implemented during the COVID-19 pandemic and enabled global collaboration for the CI Hackathon (www.cihackathon.com). Conclusion: The CompHEAR platform is a useful research tool that permits comparing diverse signal processing strategies across a variety of auditory tasks with crowdsourced judging. Its versatility, scalability, and ease of use can enable further research with the goal of promoting advancements in cochlear implant performance and improved patient outcomes.

9.
Curr Res Neurobiol ; 3: 100061, 2022.
Article in English | MEDLINE | ID: mdl-36386860

ABSTRACT

The auditory steady-state response (ASSR) has been traditionally recorded with few electrodes and is often measured as the voltage difference between mastoid and vertex electrodes (vertical montage). As high-density EEG recording systems have gained popularity, multi-channel analysis methods have been developed to integrate the ASSR signal across channels. The phases of ASSR across electrodes can be affected by factors including the stimulus modulation rate and re-referencing strategy, which will in turn affect the estimated ASSR strength. To explore the relationship between the classical vertical-montage ASSR and whole-scalp ASSR, we applied these two techniques to the same data to estimate the strength of ASSRs evoked by tones with sinusoidal amplitude modulation rates of around 40, 100, and 200 Hz. The whole-scalp methods evaluated in our study, with either linked-mastoid or common-average reference, included ones that assume equal phase across all channels, as well as ones that allow for different phase relationships. The performance of simple averaging was compared to that of more complex methods involving principal component analysis. Overall, the root-mean-square of the phase locking values (PLVs) across all channels provided the most efficient method to detect ASSR across the range of modulation rates tested here.
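The best-performing metric, the root-mean-square of per-channel phase-locking values, can be sketched as follows. This is a simplified illustration, not the study's actual analysis pipeline; the array layout and function name are assumptions.

```python
import numpy as np

def plv_rms(epochs, fs, f_mod):
    """RMS across channels of the per-channel phase-locking value (PLV)
    at the stimulus modulation rate.

    epochs : array, shape (n_trials, n_channels, n_samples)
    fs     : sampling rate in Hz
    f_mod  : modulation rate in Hz (assumed to fall on an FFT bin)
    """
    n_samples = epochs.shape[-1]
    spectra = np.fft.rfft(epochs, axis=-1)
    k = int(round(f_mod * n_samples / fs))            # FFT bin nearest f_mod
    phases = np.angle(spectra[..., k])                # (n_trials, n_channels)
    # PLV per channel: magnitude of the mean unit phasor across trials
    plv = np.abs(np.exp(1j * phases).mean(axis=0))
    return float(np.sqrt((plv ** 2).mean()))          # RMS across channels
```

A channel whose response phase is consistent across trials contributes a PLV near 1; channels with random phase contribute values near zero, so the RMS pools evidence across the scalp without requiring a common phase across channels.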

10.
J Acoust Soc Am ; 151(4): 2414, 2022 04.
Article in English | MEDLINE | ID: mdl-35461511

ABSTRACT

Absolute pitch (AP) possessors can identify musical notes without an external reference. Most AP studies have used musical instruments and pure tones for testing, rather than the human voice. However, the voice is crucial for human communication in both speech and music, and evidence for voice-specific neural processing mechanisms and brain regions suggests that AP processing of voice may be different. Here, musicians with AP or relative pitch (RP) completed online AP or RP note-naming tasks, respectively. Four synthetic sound categories were tested: voice, viola, simplified voice, and simplified viola. Simplified sounds had the same long-term spectral information but no temporal fluctuations (such as vibrato). The AP group was less accurate in judging the note names for voice than for viola in both the original and simplified conditions. A smaller, marginally significant effect was observed in the RP group. A voice disadvantage effect was also observed in a simple pitch discrimination task, even with simplified stimuli. To reconcile these results with voice-advantage effects in other domains, it is proposed that voices are processed in a way that facilitates voice- or speech-relevant features at the expense of features that are less relevant to voice processing, such as fine-grained pitch information.


Subject(s)
Music; Speech Perception; Voice; Humans; Judgment; Pitch Discrimination; Pitch Perception
11.
Hear Res ; 420: 108500, 2022 07.
Article in English | MEDLINE | ID: mdl-35405591

ABSTRACT

Behavioral forward-masking thresholds with a spectrally notched-noise masker and a fixed low-level probe tone have been shown to provide accurate estimates of cochlear tuning. Estimates using simultaneous masking are similar but generally broader, presumably due to nonlinear cochlear suppression effects. So far, estimates with forward masking have been limited to frequencies of 1 kHz and above. This study used spectrally notched noise under forward and simultaneous masking to estimate frequency selectivity between 200 and 1000 Hz for young adult listeners with normal hearing. Estimates of filter tuning at 1000 Hz were in agreement with previous studies. Estimated tuning broadened below 1000 Hz, with the filter quality factor based on the equivalent rectangular bandwidth (QERB) decreasing more rapidly with decreasing frequency than predicted by previous equations, in line with earlier predictions based on otoacoustic-emission latencies. Estimates from simultaneous masking remained broader than those from forward masking by approximately the same ratio. The new data provide a way to compare human cochlear tuning estimates with auditory-nerve tuning curves from other species across most of the auditory frequency range.


Subject(s)
Cochlea; Perceptual Masking; Auditory Threshold; Cochlea/physiology; Cochlear Nerve; Humans; Noise/adverse effects; Perceptual Masking/physiology; Young Adult
12.
PLoS Comput Biol ; 18(3): e1009889, 2022 03.
Article in English | MEDLINE | ID: mdl-35239639

ABSTRACT

Accurate pitch perception of harmonic complex tones is widely believed to rely on temporal fine structure information conveyed by the precise phase-locked responses of auditory-nerve fibers. However, accurate pitch perception remains possible even when spectrally resolved harmonics are presented at frequencies beyond the putative limits of neural phase locking, and it is unclear whether residual temporal information, or a coarser rate-place code, underlies this ability. We addressed this question by measuring human pitch discrimination at low and high frequencies for harmonic complex tones, presented either in isolation or in the presence of concurrent complex-tone maskers. We found that concurrent complex-tone maskers impaired performance at both low and high frequencies, although the impairment introduced by adding maskers at high frequencies relative to low frequencies differed between the tested masker types. We then combined simulated auditory-nerve responses to our stimuli with ideal-observer analysis to quantify the extent to which performance was limited by peripheral factors. We found that the worsening of both frequency discrimination and F0 discrimination at high frequencies could be well accounted for (in relative terms) by optimal decoding of all available information at the level of the auditory nerve. A Python package is provided to reproduce these results, and to simulate responses to acoustic stimuli from the three previously published models of the human auditory nerve used in our analyses.


Subject(s)
Cochlear Nerve; Pitch Perception; Acoustic Stimulation/methods; Cochlear Nerve/physiology; Humans; Pitch Perception/physiology
13.
JASA Express Lett ; 2(8)2022 08 01.
Article in English | MEDLINE | ID: mdl-37311192

ABSTRACT

At very high frequencies, fundamental-frequency difference limens (F0DLs) for five-component harmonic complex tones can be better than predicted by optimal integration of information, assuming performance is limited by noise at the peripheral level, but are in line with predictions based on more central sources of noise. This study investigates whether there is a minimum number of harmonic components needed for such super-optimal integration effects, and whether harmonic range or inharmonicity affects them. Results show super-optimal integration even with as few as two components, and for most combinations of consecutive harmonic, but not inharmonic, components.
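The optimal-integration benchmark against which "super-optimal" performance is judged is commonly the quadratic summation of independent observations: sensitivities (reciprocal thresholds) add in quadrature, so N equally informative components should improve the combined threshold by a factor of √N. A minimal sketch of that standard prediction (not the paper's exact model):

```python
import math

def optimal_combined_threshold(component_thresholds):
    """Predicted threshold when independent cues are combined optimally.

    Under the standard independent-noise model, sensitivity is the
    reciprocal of threshold and sensitivities add in quadrature.
    """
    return 1.0 / math.sqrt(sum((1.0 / t) ** 2 for t in component_thresholds))
```

With five equally discriminable components, this predicts a √5 ≈ 2.2-fold improvement over the single-component threshold; measured F0DLs better than that bound are "super-optimal" with respect to peripheral noise.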


Subject(s)
Caffeine; Pitch Discrimination; Differential Threshold; Niacinamide
14.
Article in English | MEDLINE | ID: mdl-37465203

ABSTRACT

Humans and many other animals can hear a wide range of sounds. We can hear low and high notes and both quiet and loud sounds. We are also very good at telling the difference between sounds that are similar, like the speech sounds "argh" and "ah," and picking apart sounds that are mixed together, like when an orchestra is playing. But how do human hearing abilities compare to those of other animals? In this article, we discover how the inner ear determines hearing abilities. Many other mammals can hear very high notes that we cannot, and some can hear quiet sounds that we cannot. However, humans may be better than any other species at distinguishing similar sounds. We know this because, milliseconds after the sounds around us go into our ears, other sounds come out: sounds that are actually produced by those same ears!

15.
Ear Hear ; 43(2): 310-322, 2022.
Article in English | MEDLINE | ID: mdl-34291758

ABSTRACT

OBJECTIVES: This study tested whether speech perception and spatial acuity improved in people with single-sided deafness and a cochlear implant (SSD+CI) when the frequency allocation table (FAT) of the CI was adjusted to optimize frequency-dependent sensitivity to binaural disparities. DESIGN: Nine SSD+CI listeners with at least 6 months of CI listening experience participated. Individual experimental FATs were created to best match the frequency-to-place mapping across ears using either sensitivity to binaural temporal-envelope disparities or estimated insertion depth. Spatial localization ability was measured, along with speech perception in spatially collocated or separated noise, first with the clinical FATs and then with the experimental FATs acutely and at 2-month intervals for 6 months. Listeners then returned to the clinical FATs and were retested acutely and after 1 month to control for long-term learning effects. RESULTS: The experimental FAT varied between listeners, differing by an average of 0.15 octaves from the clinical FAT. No significant differences in performance were observed in any of the measures between the experimental FAT after 6 months and the clinical FAT one month later, and no clear relationship was found between the size of the frequency-allocation shift and perceptual changes. CONCLUSION: Adjusting the FAT to optimize sensitivity to interaural temporal-envelope disparities did not improve localization or speech perception. The clinical frequency-to-place alignment may already be sufficient, given the inherently poor spectral resolution of CIs. Alternatively, other factors, such as temporal misalignment between the two ears, may need to be addressed before any benefits of spectral alignment can be observed.


Subject(s)
Cochlear Implantation; Cochlear Implants; Deafness; Speech Perception; Deafness/surgery; Hearing; Humans
16.
J Neurosci ; 42(3): 416-434, 2022 01 19.
Article in English | MEDLINE | ID: mdl-34799415

ABSTRACT

Frequency-to-place mapping, or tonotopy, is a fundamental organizing principle throughout the auditory system, from the earliest stages of auditory processing in the cochlea to subcortical and cortical regions. Although cortical maps are referred to as tonotopic, it is unclear whether they simply reflect a mapping of physical frequency inherited from the cochlea, a computation of pitch based on the fundamental frequency, or a mixture of these two features. We used high-resolution functional magnetic resonance imaging (fMRI) to measure BOLD responses as male and female human participants listened to pure tones that varied in frequency or complex tones that varied in either spectral content (brightness) or fundamental frequency (pitch). Our results reveal evidence for pitch tuning in bilateral regions that partially overlap with the traditional tonotopic maps of spectral content. In general, primary regions within Heschl's gyri (HGs) exhibited more tuning to spectral content, whereas areas surrounding HGs exhibited more tuning to pitch.

SIGNIFICANCE STATEMENT: Tonotopy, an orderly mapping of frequency, is observed throughout the auditory system. However, it is not known whether the tonotopy observed in the cortex simply reflects the frequency spectrum (as in the ear) or instead represents the higher-level feature of fundamental frequency, or pitch. Using carefully controlled stimuli and high-resolution functional magnetic resonance imaging (fMRI), we separated these features to study their cortical representations. Our results suggest that tonotopy in primary cortical regions is driven predominantly by frequency, but also reveal evidence for tuning to pitch in regions that partially overlap with the tonotopic gradients but extend into nonprimary cortical areas. In addition to resolving ambiguities surrounding cortical tonotopy, our findings provide evidence that selectivity for pitch is distributed bilaterally throughout auditory cortex.


Subject(s)
Auditory Cortex/diagnostic imaging; Auditory Perception/physiology; Pitch Perception/physiology; Acoustic Stimulation; Adult; Auditory Cortex/physiology; Brain Mapping; Female; Humans; Magnetic Resonance Imaging; Male; Pitch Discrimination/physiology; Young Adult
17.
Front Neurosci ; 16: 1074752, 2022.
Article in English | MEDLINE | ID: mdl-36699531

ABSTRACT

Pitch is a fundamental aspect of auditory perception that plays an important role in our ability to understand speech, appreciate music, and attend to one sound while ignoring others. The questions surrounding how pitch is represented in the auditory system, and how our percept relates to the underlying acoustic waveform, have been a topic of inquiry and debate for well over a century. New findings and technological innovations have led to challenges of some long-standing assumptions and have raised new questions. This article reviews some recent developments in the study of pitch coding and perception and focuses on the topic of how pitch information is extracted from peripheral representations based on frequency-to-place mapping (tonotopy), stimulus-driven auditory-nerve spike timing (phase locking), or a combination of both. Although a definitive resolution has proved elusive, the answers to these questions have potentially important implications for mitigating the effects of hearing loss via devices such as cochlear implants.

18.
J Assoc Res Otolaryngol ; 22(6): 693-702, 2021 12.
Article in English | MEDLINE | ID: mdl-34519951

ABSTRACT

Adult listeners perceive pitch with fine precision, with many adults capable of discriminating less than a 1% change in fundamental frequency (F0). Although there is variability across individuals, this precise pitch perception is an ability ascribed to cortical functions that are also important for speech and music perception. Infants display neural immaturity in the auditory cortex, suggesting that pitch discrimination may improve throughout infancy. In two experiments, we tested the limits of F0 (pitch) and spectral centroid (timbre) perception in 66 infants and 31 adults. Contrary to expectations, we found that infants at both 3 and 7 months were able to reliably detect small changes in F0 in the presence of random variations in spectral content, and vice versa, to the extent that their performance matched that of adults with musical training and exceeded that of adults without musical training. The results indicate high fidelity of F0 and spectral-envelope coding in infants, implying that fully mature cortical processing is not necessary for accurate discrimination of these features. The surprising difference in performance between infants and musically untrained adults may reflect a developmental trajectory for learning natural statistical covariations between pitch and timbre that improves coding efficiency but results in degraded performance in adults without musical training when expectations for such covariations are violated.


Subject(s)
Auditory Cortex/physiology; Auditory Perception/physiology; Pitch Discrimination; Pitch Perception/physiology; Timbre Perception; Child; Female; Humans; Infant; Male; Music
19.
Proc Natl Acad Sci U S A ; 118(29)2021 07 20.
Article in English | MEDLINE | ID: mdl-34266949

ABSTRACT

The perception of sensory events can be enhanced or suppressed by the surrounding spatial and temporal context in ways that facilitate the detection of novel objects and contribute to the perceptual constancy of those objects under variable conditions. In the auditory system, the phenomenon known as auditory enhancement reflects a general principle of contrast enhancement, in which a target sound embedded within a background sound becomes perceptually more salient if the background is presented first by itself. This effect is highly robust, producing an effective enhancement of the target of up to 25 dB (more than two orders of magnitude in intensity), depending on the task. Despite the importance of the effect, neural correlates of auditory contrast enhancement have yet to be identified in humans. Here, we used the auditory steady-state response to probe the neural representation of a target sound under conditions of enhancement. The probe was simultaneously modulated in amplitude with two modulation frequencies to distinguish cortical from subcortical responses. We found robust correlates for neural enhancement in the auditory cortical, but not subcortical, responses. Our findings provide empirical support for a previously unverified theory of auditory enhancement based on neural adaptation of inhibition and point to approaches for improving sensory prostheses for hearing loss, such as hearing aids and cochlear implants.


Subject(s)
Auditory Cortex/physiology; Auditory Perception; Acoustic Stimulation; Adolescent; Adult; Auditory Threshold; Behavior; Electroencephalography; Female; Hearing; Humans; Male; Sound; Young Adult
20.
PLoS One ; 16(4): e0249654, 2021.
Article in English | MEDLINE | ID: mdl-33826663

ABSTRACT

Differences in fundamental frequency (F0) or pitch between competing voices facilitate our ability to segregate a target voice from interferers, thereby enhancing speech intelligibility. Although lower-numbered harmonics elicit a stronger and more accurate pitch sensation than higher-numbered harmonics, it is unclear whether the stronger pitch leads to an increased benefit of pitch differences when segregating competing talkers. To answer this question, sentence recognition was tested in young normal-hearing listeners in the presence of a single competing talker. The stimuli were presented in a broadband condition or were highpass or lowpass filtered to manipulate the pitch accuracy of the voicing, while maintaining roughly equal speech intelligibility in the highpass and lowpass regions. Performance was measured with average F0 differences (ΔF0) between the target and single-talker masker of 0, 2, and 4 semitones. Pitch discrimination abilities were also measured to confirm that the lowpass-filtered stimuli elicited greater pitch accuracy than the highpass-filtered stimuli. No interaction was found between filter type and ΔF0 in the sentence recognition task, suggesting little or no effect of harmonic rank or pitch accuracy on the ability to use F0 to segregate natural voices, even when the average ΔF0 is relatively small. The results suggest that listeners are able to obtain some benefit of pitch differences between competing voices, even when pitch salience and accuracy are low.
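For reference, the semitone separations used here map onto frequency ratios via the equal-tempered relation ratio = 2^(n/12); a small illustrative helper (the function name is ours, not the study's):

```python
def semitones_to_ratio(n_semitones):
    """Frequency ratio corresponding to a difference of n semitones
    on the equal-tempered scale: 2 ** (n / 12)."""
    return 2.0 ** (n_semitones / 12.0)
```

So the 2- and 4-semitone ΔF0 conditions correspond to F0 differences of roughly 12% and 26%, respectively.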


Subject(s)
Speech Intelligibility/physiology; Speech Perception/physiology; Adult; Auditory Perception/physiology; Cochlear Implants; Female; Humans; Male; Noise; Perceptual Masking/physiology; Pitch Discrimination/physiology; Recognition, Psychology/physiology; Young Adult