Results 1 - 20 of 109
1.
Int Arch Otorhinolaryngol ; 28(3): e415-e423, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38974630

ABSTRACT

Introduction: When cases of idiopathic sudden sensorineural hearing loss (SSNHL) are treated successfully, most clinicians assume that auditory processing is normal and symmetric. This assumption is based on the patients' recovered detection ability, but auditory processing involves much more than detection alone. Since certain studies have suggested a possible involvement of the central auditory system during the acute phase of sudden hearing loss, the present study hypothesized that auditory processing would be asymmetric in people who have experienced sudden hearing loss. Objective: To assess the physiologic and electrophysiological conditions of the cochlea and central auditory system, as well as behavioral discrimination of three primary aspects of sound (intensity, frequency, and time), in subjects with normal ears and ears treated successfully for SSNHL. Methods: The study included 19 SSNHL patients whose normal and treated ears were assessed for otoacoustic emissions, speech auditory brainstem response, intensity and pitch discrimination, and temporal resolution in a within-subject design. Results: Otoacoustic emissions were poorer in the treated ears than in the normal ears. Ear- and sex-dependent differences were observed for otoacoustic emissions and pitch discrimination. Conclusion: The asymmetrical processing observed in the present study was not consistent with the hearing threshold values, suggesting that the central auditory system may be affected regardless of the status of peripheral hearing. Further experiments with larger samples, different recovery scenarios after treatment, and other assessments are required.

2.
Hear Res ; 448: 109026, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38776706

ABSTRACT

Cochlear implants are medical devices that have restored hearing to approximately one million people around the world. Outcomes are impressive and most recipients attain excellent speech comprehension in quiet without relying on lip-reading cues, but pitch resolution is poor compared to normal hearing. Amplitude modulation of electrical stimulation is a primary cue for pitch perception in cochlear implant users. The experiments described in this article focus on the relationship between sensitivity to amplitude modulations and pitch resolution based on changes in the frequency of amplitude modulations. In the first experiment, modulation sensitivity and pitch resolution were measured in adults with no known hearing loss and in cochlear implant users with sounds presented to and processed by their clinical devices. Stimuli were amplitude-modulated sinusoids and amplitude-modulated narrow-band noises. Modulation detection and modulation frequency discrimination were measured for modulation frequencies centered on 110, 220, and 440 Hz. Pitch resolution based on changes in modulation frequency was measured for modulation depths of 25 %, 50 %, and 100 %, and for a half-wave rectified modulator. Results revealed a strong linear relationship between modulation sensitivity and pitch resolution for cochlear implant users and peers with no known hearing loss. In the second experiment, cochlear implant users took part in analogous procedures of modulation sensitivity and pitch resolution, but bypassing clinical sound processing using single-electrode stimulation. Results indicated that modulation sensitivity and pitch resolution were better conveyed by single-electrode stimulation than by clinical processors. Results at 440 Hz were worse overall and also not well conveyed by clinical sound processing, so it remains unclear whether the 300 Hz perceptual limit described in the literature is a technological or biological limitation.
These results highlight modulation depth and sensitivity as critical factors for pitch resolution in cochlear implant users and characterize the relationship that should inform the design of modulation enhancement algorithms for cochlear implants.
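As a rough sketch of the kind of stimulus described above (an amplitude-modulated sinusoid with a given modulation depth, optionally using a half-wave-rectified modulator), with all parameter values illustrative rather than the authors':

```python
import numpy as np

def am_tone(carrier_hz, mod_hz, depth, dur=0.5, fs=44100, half_wave=False):
    """Amplitude-modulated sinusoid sketch. `depth` is the modulation
    depth (0.0-1.0); `half_wave` swaps in a half-wave-rectified modulator
    in place of the full sinusoidal one. Defaults are illustrative."""
    t = np.arange(int(dur * fs)) / fs
    mod = np.sin(2 * np.pi * mod_hz * t)
    if half_wave:
        mod = np.maximum(mod, 0.0)          # half-wave rectification
    envelope = 1.0 + depth * mod            # classic 1 + m*sin(...) AM envelope
    return envelope * np.sin(2 * np.pi * carrier_hz * t)
```

Varying `mod_hz` around 110, 220, or 440 Hz while holding the carrier fixed isolates the modulation-frequency cue that the pitch-resolution tasks probe.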


Subject(s)
Acoustic Stimulation , Cochlear Implantation , Cochlear Implants , Electric Stimulation , Pitch Perception , Humans , Middle Aged , Adult , Aged , Male , Female , Cochlear Implantation/instrumentation , Persons With Hearing Impairments/psychology , Persons With Hearing Impairments/rehabilitation , Cues , Young Adult , Speech Perception , Pitch Discrimination , Auditory Threshold , Correction of Hearing Impairment/instrumentation , Hearing
3.
Behav Sci (Basel) ; 14(2)2024 Feb 19.
Article in English | MEDLINE | ID: mdl-38392498

ABSTRACT

Attentional blink (AB) is a phenomenon in which the perception of a second target is impaired when it appears within 200-500 ms after the first target. Sound affects the AB, and an asymmetry appears during audiovisual integration, but it is not known whether this is related to the tonal representation of sound. The aim of the present study was to investigate the effect of audiovisual asymmetry on the attentional blink and whether the presentation of pitch improves the ability to detect a target during an AB accompanied by audiovisual asymmetry. The results showed that as the lag increased, the subjects' target recognition improved, and pitch produced further improvements. These improvements exhibited a significant asymmetry across audiovisual channels. Our findings could contribute to better utilization of audiovisual integration resources to counteract attentional transients and declines in auditory recognition, which could be useful in areas such as driving and education.

4.
Hum Brain Mapp ; 45(2): e26583, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38339902

ABSTRACT

Although it has been established that cross-modal activations occur in the occipital cortex during auditory processing among congenitally and early blind listeners, it remains uncertain whether these activations in various occipital regions reflect sensory analysis of specific sound properties, non-perceptual cognitive operations associated with active tasks, or the interplay between sensory analysis and cognitive operations. This fMRI study aimed to investigate cross-modal responses in occipital regions, specifically V5/MT and V1, during passive and active pitch perception by early blind individuals compared to sighted individuals. The data showed that V5/MT was responsive to pitch during passive perception, and its activations increased with task complexity. By contrast, widespread occipital regions, including V1, were only recruited during two active perception tasks, and their activations were also modulated by task complexity. These fMRI results from blind individuals suggest that while V5/MT activations are both stimulus-responsive and task-modulated, activations in other occipital regions, including V1, are dependent on the task, indicating similarities and differences between various visual areas during auditory processing.


Subject(s)
Occipital Lobe , Pitch Perception , Humans , Occipital Lobe/diagnostic imaging , Pitch Perception/physiology , Auditory Perception/physiology , Blindness/diagnostic imaging , Magnetic Resonance Imaging/methods , Brain Mapping/methods
5.
Eur Arch Otorhinolaryngol ; 281(7): 3475-3482, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38194096

ABSTRACT

PURPOSE: This study aimed to investigate the effects of low frequency (LF) pitch perception on speech-in-noise and music perception performance by children with cochlear implants (CIC) and typical hearing (THC). Moreover, the relationships between speech-in-noise and music perception as well as the effects of demographic and audiological factors on present research outcomes were studied. METHODS: The sample consisted of 22 CIC and 20 THC (7-10 years). Harmonic intonation (HI) and disharmonic intonation (DI) tests were used to assess LF pitch perception. Speech perception in quiet (WRSq)/noise (WRSn + 10) were tested with the Italian bisyllabic words for pediatric populations. The Gordon test was used to evaluate music perception (rhythm, melody, harmony, and overall). RESULTS: CIC/THC performance comparisons for LF pitch, speech-in-noise, and all music measures except harmony revealed statistically significant differences with large effect sizes. For the CI group, HI showed statistically significant correlations with melody discrimination. Melody/total Gordon scores were significantly correlated with WRSn + 10. For the overall group, HI/DI showed significant correlations with all music perception measures and WRSn + 10. Hearing thresholds showed significant effects on HI/DI scores. Hearing thresholds and WRSn + 10 scores were significantly correlated; both revealed significant effects on all music perception scores. CI age had significant effects on WRSn + 10, harmony, and total Gordon scores (p < 0.05). CONCLUSION: Such findings confirmed the significant effects of LF pitch perception on complex listening performance. Significant speech-in-noise and music perception correlations were as promising as results from recent studies indicating significant positive effects of music training on speech-in-noise recognition in CIC.


Subject(s)
Cochlear Implants , Music , Noise , Pitch Perception , Speech Perception , Humans , Child , Male , Female , Speech Perception/physiology , Pitch Perception/physiology , Cochlear Implantation
6.
Q J Exp Psychol (Hove) ; : 17470218231211722, 2023 Nov 25.
Article in English | MEDLINE | ID: mdl-37873972

ABSTRACT

While the literature amply documents how aging influences segmental properties of speech, less is known about its influence on suprasegmental properties such as lexical tones. In addition, foreign language learning is increasingly endorsed as a potential intervention to boost cognitive reserve and overall well-being in older adults. Empirical studies on young learners of lexical tones are plentiful compared with those on older learners, for whom the challenges in this domain may differ due to aging and other learner-internal factors. This review consolidates behavioural and neuroscientific research related to lexical tone, speech perception, factors characterising learner groups, and other variables that influence lexical tone perception and learning in older adults. Factors commonly identified as influencing tone learning in younger adult populations, such as musical experience, language background, and motivation to learn a new language, are discussed in relation to older learner groups, and recommendations to boost lexical tone learning in older age are provided based on existing studies.

7.
Brain Sci ; 13(10)2023 Sep 29.
Article in English | MEDLINE | ID: mdl-37891759

ABSTRACT

Music is a complex phenomenon that implicates multiple brain areas and neural connections. Centuries ago, music was already recognized as an effective means of enriching psychological well-being and even of treating multiple pathologies. Modern research, using neuroimaging and especially magnetic resonance imaging, opens a new avenue for understanding music perception and its underlying neurological mechanisms. Multiple brain areas have been identified over the last decades as important for music processing, and further analyses in neuropsychology have uncovered their involvement in emotional and cognitive activities. Music listening improves cognitive functions such as memory and attention span and can augment behavior. In rehabilitation, music-based therapies have a high rate of success in treating depression and anxiety, and they are applied even in neurological disorders, for example to help patients regain bodily integrity after a stroke. Our review focuses on the neurological and psychological implications of music and presents the significant clinical relevance of music-based therapies.

8.
PeerJ ; 11: e16053, 2023.
Article in English | MEDLINE | ID: mdl-37727688

ABSTRACT

Background: Most studies on pitch shift provoked by hearing loss have been conducted using pure tones. However, many sounds encountered in everyday life are harmonic complex tones. In the present study, psychoacoustic experiments using complex tones were performed on healthy participants, and the possible mechanisms that cause pitch shift due to hearing loss are discussed. Methods: Two experiments were performed in this study. In experiment 1, two tones were presented, and the participants were asked to select the tone that was higher in pitch. Partials with frequencies less than 250, 500, 750, or 1,000 Hz were eliminated from the harmonic complex tones and used as test tones to simulate low-tone hearing loss. Each tone pair was constructed such that the tone with a lower fundamental frequency (F0) was higher in terms of the frequency of the lowest partial. Furthermore, partials whose frequencies were greater than 1,300 or 1,600 Hz were also eliminated from these test tones to simulate high-tone hearing loss or modified sounds that patients may hear in everyday life. When a tone with a lower F0 was perceived as higher in pitch, it was considered a pitch shift from the expected tone. In experiment 2, tonal sequences were constructed to create a passage of the song "Lightly Row." Similar to experiment 1, partials of harmonic complex tones were eliminated from the tones. After listening to these tonal sequences, the participants were asked if the sequences sounded correct based on the melody or off-key. Results: The results showed that the pitch shifts and the melody sounds off-key when lower partials are eliminated from complex tones, especially when a greater number of high-frequency components are eliminated.
Conclusion: Considering that these experiments were performed on healthy participants, the results suggest that the pitch shifts from the expected tone when patients with hearing loss hear certain complex tones, regardless of the underlying etiology of the hearing loss.
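The partial-elimination manipulation described in the Methods can be sketched as follows; the cutoff values, harmonic count, and duration below are illustrative, not the study's exact stimulus code:

```python
import numpy as np

def filtered_harmonic_complex(f0, low_cut=None, high_cut=None, n_harm=30,
                              dur=0.5, fs=44100):
    """Harmonic complex tone with partials outside [low_cut, high_cut] Hz
    eliminated, mimicking the simulated low-/high-tone hearing loss stimuli.
    Returns the waveform and the list of retained partial frequencies."""
    t = np.arange(int(dur * fs)) / fs
    tone = np.zeros_like(t)
    kept = []
    for k in range(1, n_harm + 1):
        f = k * f0
        if low_cut is not None and f < low_cut:
            continue                        # drop low partials (low-tone loss)
        if high_cut is not None and f > high_cut:
            continue                        # drop high partials (high-tone loss)
        kept.append(f)
        tone += np.sin(2 * np.pi * f * t)
    return tone, kept
```

For example, with `f0=200`, `low_cut=500`, and `high_cut=1600`, only the partials at 600-1600 Hz survive, so the pitch must be inferred from an incomplete harmonic series.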


Subject(s)
Deafness , Hearing Loss , Humans , Hearing , Computer Simulation , Niacinamide
9.
Eur J Neurosci ; 58(7): 3686-3704, 2023 10.
Article in English | MEDLINE | ID: mdl-37752605

ABSTRACT

Human listeners prefer octave intervals slightly above the exact 2:1 frequency ratio. To study the neural underpinnings of this subjective preference, called the octave enlargement phenomenon, we compared neural responses between exact, slightly enlarged, oversized, and compressed octaves (or their multiples). The first experiment (n = 20) focused on the N1 and P2 event-related potentials (ERPs) elicited in EEG 50-250 ms after the second tone onset during passive listening of one-octave intervals. In the second experiment (n = 20), which used four-octave intervals, musician participants actively rated the different octave types as 'low', 'good', and 'high'. The preferred slightly enlarged octave was individually determined prior to the second experiment. In both experiments, N1-P2 peak-to-peak amplitudes attenuated for the exact and slightly enlarged octave intervals compared with compressed and oversized intervals, suggesting overlapping neural representations of tones an octave (or its multiples) apart. While there were no differences between the N1-P2 amplitudes to the exact and preferred enlarged octaves, ERP amplitudes differed after 500 ms from onset of the second tone of the pair. In the multivariate pattern analysis (MVPA) of the second experiment, the different octave types were distinguishable (spatial classification across EEG channels) 200 ms after second tone onset. Temporal classification within channels suggested two separate discrimination processes peaking around 300 and 700 ms. These findings appear to be related to active listening, as no multivariate results were found in the first, passive listening experiment. The present results suggest that the subjectively preferred octave size is resolved at the late stages of auditory processing.
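Octave sizes like those at issue here are conveniently expressed in cents, where an exact 2:1 octave is 1200 cents; a small worked example (the frequency values are illustrative, not the study's stimuli):

```python
import math

def cents(f2, f1):
    """Interval size in cents between two frequencies
    (1200 cents = an exact 2:1 octave)."""
    return 1200 * math.log2(f2 / f1)

# An exact octave above 440 Hz is 880 Hz (1200 cents). A slightly
# "enlarged" octave, as listeners tend to prefer, lies a few cents above:
exact = cents(880, 440)      # 1200.0
enlarged = cents(890, 440)   # about 1219.6 cents, i.e. roughly 20 cents sharp
```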


Subject(s)
Evoked Potentials , Music , Humans , Psychoacoustics , Electroencephalography , Auditory Perception/physiology , Evoked Potentials, Auditory/physiology , Acoustic Stimulation
10.
Phonetica ; 80(5): 357-392, 2023 10 26.
Article in English | MEDLINE | ID: mdl-37534609

ABSTRACT

In previous studies comparing the intonation of questions and statements in German, greater f0 excursions of phrase-final rises have been associated with questions in both read speech and spontaneous speech. This holds for production studies as well as perception studies. However, a major open question is whether these differences are perceived categorically or continuously. Furthermore, we ask whether the differences in f0 scaling correspond to categorical linguistic functions or rather an attitudinal continuum. We conducted three different perception experiments: a classical categorical perception task, an imitation task, and a semantic evaluation task. The results suggest that f0 scaling in phrase-final rises is perceived as a phonetic continuum rather than in phonological categories. Furthermore, the gradual increase of the final rise is associated with a gradual increase in perceived questioning. Lastly, the phonetic cues to this degree of questioning are distinct from those to the other investigated meanings, surprise and uncertainty. Accordingly, this study supports the assumption that questioning constitutes an attitudinal meaning in its own right.


Subject(s)
Language , Speech Perception , Humans , Semantics , Speech , Phonetics , Speech Acoustics
11.
Atten Percept Psychophys ; 85(6): 2083-2099, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37479873

ABSTRACT

Temporal envelope fluctuations of natural sounds convey information critical to speech and music processing. In particular, musical pitch perception is assumed to be primarily underpinned by temporal envelope encoding. While increasing evidence demonstrates the importance of carrier fine structure for complex pitch perception, how carrier spectral information affects musical pitch perception is less clear. Here, transposed tones designed to convey identical envelope information across different carriers were used to assess the effects of carrier spectral composition on pitch discrimination and on musical-interval and melody identification. Results showed that pitch discrimination thresholds became lower (better) with increasing carrier frequencies from 1 to 10 kHz, with performance comparable to that for pure sinusoids. Musical intervals and melodies defined by the periodicity of sine or harmonic-complex envelopes were identified with greater than 85% accuracy across carriers, even on a 10-kHz carrier. Moreover, interval and melody identification performance improved with increasing carrier frequency up to 6 kHz. The findings suggest a perceptual enhancement of temporal envelope information with increasing carrier spectral region in musical pitch processing, at least for frequencies up to 6 kHz. For carriers in the extended high-frequency region (8-20 kHz), the use of temporal envelope information in musical pitch processing may vary depending on task requirements. Collectively, these results indicate that the contribution of temporal envelope information to musical pitch perception is greater than previously considered, with ecological implications.


Subject(s)
Music , Humans , Acoustic Stimulation/methods , Pitch Perception , Pitch Discrimination
12.
Curr Biol ; 33(8): 1472-1486.e12, 2023 04 24.
Article in English | MEDLINE | ID: mdl-36958332

ABSTRACT

Speech and song have been transmitted orally for countless human generations, changing over time under the influence of biological, cognitive, and cultural pressures. Cross-cultural regularities and diversities in human song are thought to emerge from this transmission process, but testing how underlying mechanisms contribute to musical structures remains a key challenge. Here, we introduce an automatic online pipeline that streamlines large-scale cultural transmission experiments using a sophisticated and naturalistic modality: singing. We quantify the evolution of 3,424 melodies orally transmitted across 1,797 participants in the United States and India. This approach produces a high-resolution characterization of how oral transmission shapes melody, revealing the emergence of structures that are consistent with widespread musical features observed cross-culturally (small pitch sets, small pitch intervals, and arch-shaped melodic contours). We show how the emergence of these structures is constrained by individual biases in our participants (vocal constraints, working memory, and cultural exposure), which determine the size, shape, and complexity of evolving melodies. However, their ultimate effect on population-level structures depends on social dynamics taking place during cultural transmission. When participants recursively imitate their own productions (individual transmission), musical structures evolve slowly and heterogeneously, reflecting idiosyncratic musical biases. When participants instead imitate others' productions (social transmission), melodies rapidly shift toward homogeneous structures, reflecting shared structural biases that may underpin cross-cultural variation. These results provide the first quantitative characterization of the rich collection of biases that oral transmission imposes on music evolution, giving us a new understanding of how human song structures emerge via cultural transmission.


Subject(s)
Music , Singing , Voice , Humans , Memory, Short-Term , Speech
13.
Neuropsychologia ; 183: 108540, 2023 05 03.
Article in English | MEDLINE | ID: mdl-36913989

ABSTRACT

BACKGROUND: Acquired prosopagnosia is often associated with other deficits such as dyschromatopsia and topographagnosia, from damage to adjacent perceptual networks. A recent study showed that some subjects with developmental prosopagnosia also have congenital amusia, but problems with music perception have not been described with the acquired variant. OBJECTIVE: Our goal was to determine if music perception was also impaired in subjects with acquired prosopagnosia, and if so, its anatomic correlate. METHOD: We studied eight subjects with acquired prosopagnosia, all of whom had extensive neuropsychological and neuroimaging testing. They performed a battery of tests evaluating pitch and rhythm processing, including the Montréal Battery for the Evaluation of Amusia. RESULTS: At the group level, subjects with anterior temporal lesions were impaired in pitch perception relative to the control group, but not those with occipitotemporal lesions. Three of eight subjects with acquired prosopagnosia had impaired musical pitch perception while rhythm perception was spared. Two of the three also showed reduced musical memory. These three reported alterations in their emotional experience of music: one reported music anhedonia and aversion, while the remaining two had changes consistent with musicophilia. The lesions of these three subjects affected the right or bilateral temporal poles as well as the right amygdala and insula. None of the three prosopagnosic subjects with lesions limited to the inferior occipitotemporal cortex exhibited impaired pitch perception or musical memory, or reported changes in music appreciation. 
CONCLUSION: Together with the results of our previous studies of voice recognition, these findings indicate an anterior ventral syndrome that can include the amnestic variant of prosopagnosia, phonagnosia, and various alterations in music perception, including acquired amusia, reduced musical memory, and subjective reports of altered emotional experience of music.


Subject(s)
Auditory Perceptual Disorders , Music , Prosopagnosia , Humans , Prosopagnosia/psychology , Temporal Lobe/pathology , Auditory Perceptual Disorders/diagnostic imaging , Auditory Perceptual Disorders/etiology , Perception , Pitch Perception
14.
J Voice ; 2023 Jan 31.
Article in English | MEDLINE | ID: mdl-36732108

ABSTRACT

OBJECTIVES: Pitch perception distortion (PPD) is a novel term describing a phenomenon in which an amplified, accompanied singer's perception of their sung pitch relative to band or accompaniment becomes ambiguous, leading to one of two conditions: a) the singer believes they are out of tune with the accompaniment, but are in tune as perceived by a listener, or b) the singer believes they are in tune with the accompaniment, but are not. This pilot study aims to investigate the existence and incidence of PPD among amplified, accompanied performers and identify associated variables. DESIGN/METHODS: 115 singers were recruited to participate in an online survey, which collected information on musical training, performance environment, and PPD experience. RESULTS: Reported PPD incidence was 68%, with 92% of respondents indicating that PPD occurred rarely. The factors reported as most associated with PPD experiences included loud stage volume, poor song familiarity, singing outside one's habitual pitch range, and singing loudly. Contrary to previous studies and our hypotheses, no association was found between modality of auditory feedback (e.g., in-ears versus floor monitors) and incidence of PPD. Additionally, higher levels of training were found to be associated with higher incidence of PPD. CONCLUSIONS: The reported incidence supports that PPD exists beyond chance and anecdotal experience. In light of the highly trained sample, the data suggest that pitch accuracy in accompanied, amplified performance may be more associated with aural environment-specifically loud stage volume-and a highly trained singer's tuning strategy in response to that environment rather than a singer's mastery of vocal intonation skills in isolation. Loud stage volume was implicated as a primary factor associated with PPD, which may be related to the stapedius reflex. 
Future investigations will target attempted elicitation of PPD in trained singers after establishing baseline auditory reflex thresholds and objective measurements of intonation accuracy.

15.
J Voice ; 2023 Feb 06.
Article in English | MEDLINE | ID: mdl-36754684

ABSTRACT

PURPOSE: The purpose of this study was to investigate the relationship between pitch discrimination and fundamental frequency (fo) variation in running speech, with consideration of factors such as singing status and vocal hyperfunction (VH). METHOD: Female speakers (18-69 years) with typical voices (26 non-singers; 27 singers) and speakers with VH (22 non-singers; 30 singers) completed a pitch discrimination task and read the Rainbow Passage. The pitch discrimination task was a two-alternative forced choice procedure, in which participants determined whether tokens were the same or different. Tokens were a prerecorded sustained /ɑ/ of the participant's own voice and a pitch-shifted version of their sustained /ɑ/, such that the difference in fo was adaptively modified. Pitch discrimination and Rainbow Passage fo variation were calculated for each participant and compared via Pearson's correlations for each group. RESULTS: A significant strong correlation was found between pitch discrimination and fo variation for non-singers with typical voices. No significant correlations were found for the other three groups, with notable restrictions in the ranges of discrimination for both singer-groups and in the range of fo variation values for non-singers with VH. CONCLUSIONS: Speakers with worse pitch discrimination may increase their fo variation to produce self-salient intonational changes, which is in contrast to previous findings from articulatory investigations. The erosion of this relationship in groups with singing training and/or with VH may be explained by the known influence of musical training on pitch discrimination or the biomechanical changes associated with VH restricting speakers' abilities to change their fo.
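Adaptive two-alternative forced-choice procedures of the kind described in the Method are often run as a 2-down/1-up staircase; the sketch below is a generic illustration rather than the study's actual protocol, and every default value is an assumption:

```python
def staircase_2down1up(respond, start=50.0, step=2.0, floor=0.1, n_trials=60):
    """Minimal 2-down/1-up adaptive staircase (converges near 70.7% correct).
    `respond(delta)` should return True when the listener correctly
    discriminates a pitch difference of `delta` (e.g., in cents)."""
    delta, correct_streak, track = start, 0, []
    for _ in range(n_trials):
        track.append(delta)
        if respond(delta):
            correct_streak += 1
            if correct_streak == 2:         # two correct in a row -> harder
                delta = max(delta / step, floor)
                correct_streak = 0
        else:                               # one wrong -> easier
            delta = delta * step
            correct_streak = 0
    return track
```

Averaging the `delta` values at the final reversals of `track` yields the discrimination threshold estimate for that listener.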

16.
Neurosci Biobehav Rev ; 145: 105007, 2023 02.
Article in English | MEDLINE | ID: mdl-36535375

ABSTRACT

Listening to musical melodies is a complex task that engages perceptual and memory-related processes. The processes underlying melody cognition happen simultaneously on different timescales, ranging from milliseconds to minutes. Although attempts have been made, research on melody perception is yet to produce a unified framework of how melody processing is achieved in the brain. This may in part be due to the difficulty of integrating concepts such as perception, attention and memory, which pertain to different temporal scales. Recent theories on brain processing, which hold prediction as a fundamental principle, offer potential solutions to this problem and may provide a unifying framework for explaining the neural processes that enable melody perception on multiple temporal levels. In this article, we review empirical evidence for predictive coding on the levels of pitch formation, basic pitch-related auditory patterns, more complex regularity processing extracted from basic patterns and long-term expectations related to musical syntax. We also identify areas that would benefit from further inquiry and suggest future directions in research on musical melody perception.


Subject(s)
Music , Humans , Pitch Perception , Auditory Perception , Brain , Cognition , Acoustic Stimulation
17.
Cognition ; 232: 105327, 2023 03.
Article in English | MEDLINE | ID: mdl-36495710

ABSTRACT

Information in speech and music is often conveyed through changes in fundamental frequency (f0), perceived by humans as "relative pitch". Relative pitch judgments are complicated by two facts. First, sounds can simultaneously vary in timbre due to filtering imposed by a vocal tract or instrument body. Second, relative pitch can be extracted in two ways: by measuring changes in constituent frequency components from one sound to another, or by estimating the f0 of each sound and comparing the estimates. We examined the effects of timbral differences on relative pitch judgments, and whether any invariance to timbre depends on whether judgments are based on constituent frequencies or their f0. Listeners performed up/down and interval discrimination tasks with pairs of spoken vowels, instrument notes, or synthetic tones, synthesized to be either harmonic or inharmonic. Inharmonic sounds lack a well-defined f0, such that relative pitch must be extracted from changes in individual frequencies. Pitch judgments were less accurate when vowels/instruments were different compared to when they were the same, and were biased by the associated timbre differences. However, this bias was similar for harmonic and inharmonic sounds, and was observed even in conditions where judgments of harmonic sounds were based on f0 representations. Relative pitch judgments are thus not invariant to timbre, even when timbral variation is naturalistic, and when such judgments are based on representations of f0.


Subject(s)
Music , Pitch Perception , Humans , Pitch Discrimination , Acoustic Stimulation
18.
J Assoc Res Otolaryngol ; 24(1): 47-65, 2023 02.
Article in English | MEDLINE | ID: mdl-36471208

ABSTRACT

To obtain combined behavioural and electrophysiological measures of pitch perception, we presented harmonic complexes, bandpass filtered to contain only high-numbered harmonics, to normal-hearing listeners. These stimuli resemble bandlimited pulse trains and convey pitch using a purely temporal code. A core set of conditions consisted of six stimuli with baseline pulse rates of 94, 188 and 280 pps, filtered into a HIGH (3365-4755 Hz) or VHIGH (7800-10,800 Hz) region, alternating with a 36% higher pulse rate. Brainstem and cortical processing were measured using the frequency following response (FFR) and auditory change complex (ACC), respectively. Behavioural rate change difference limens (DLs) were measured by requiring participants to discriminate between a stimulus that changed rate twice (up-down or down-up) during its 750-ms presentation from a constant-rate pulse train. FFRs revealed robust brainstem phase locking whose amplitude decreased with increasing rate. Moderate-sized but reliable ACCs were obtained in response to changes in purely temporal pitch and, like the psychophysical DLs, did not depend consistently on the direction of rate change or on the pulse rate for baseline rates between 94 and 280 pps. ACCs were larger and DLs lower for stimuli in the HIGH than in the VHIGH region. We argue that the ACC may be a useful surrogate for behavioural measures of rate discrimination, both for normal-hearing listeners and for cochlear-implant users. We also showed that rate DLs increased markedly when the baseline rate was reduced to 48 pps, and compared the behavioural and electrophysiological findings to recent cat data obtained with similar stimuli and methods.
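The rate-change stimuli can be sketched as concatenated pulse-train segments; the equal-thirds segment layout and the orientation of the "down-up" pattern here are assumptions for illustration, not the authors' implementation (and no bandpass filtering is applied):

```python
import numpy as np

def pulse_train(rate_pps, dur, fs=44100):
    """Unit-impulse train at the given pulse rate (pulses per second)."""
    n = int(dur * fs)
    period = int(round(fs / rate_pps))      # samples between pulses
    x = np.zeros(n)
    x[np.arange(0, n, period)] = 1.0
    return x

def rate_change_train(base_pps, change=0.36, dur=0.75, fs=44100, up_down=True):
    """750-ms train whose middle third runs 36% faster (up-down) or whose
    outer thirds do (down-up), versus a constant-rate comparison train."""
    hi = base_pps * (1 + change)
    rates = [base_pps, hi, base_pps] if up_down else [hi, base_pps, hi]
    return np.concatenate([pulse_train(r, dur / 3, fs) for r in rates])
```

A constant-rate comparison stimulus is then simply `pulse_train(base_pps, 0.75)`.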


Subject(s)
Cochlear Implantation , Cochlear Implants , Pitch Perception/physiology , Cochlear Implantation/methods , Brain Stem , Hearing , Pitch Discrimination/physiology
19.
Psychophysiology ; 60(2): e14170, 2023 02.
Article in English | MEDLINE | ID: mdl-36094011

ABSTRACT

Absolute pitch (AP) refers to the naming of musical tones without an external reference. The influential two-component model states that AP is limited only by the late-emerging pitch labeling process and not by the earlier perceptual and memory processes. Over the years, however, support for this model at the neural level has been mixed, with various methodological limitations. Here, the electroencephalography responses of 27 AP possessors and 27 non-AP possessors were recorded. During both name verification and passive listening, event-related potential analyses showed a difference between AP and non-AP possessors at about 200 ms in their response toward tones compared with noise stimuli. Multivariate pattern analyses suggested that pitch naming was subserved by a series of transient processes for the first 250 ms, followed by a stage-like process for both AP and non-AP possessors with no group differences between them. These findings are inconsistent with the predictions of the two-component model, and instead suggest the existence of an early perceptual locus of AP.


Subject(s)
Auditory Perception , Music , Humans , Auditory Perception/physiology , Memory , Electroencephalography , Multivariate Analysis , Acoustic Stimulation
20.
Front Neurosci ; 16: 1006185, 2022.
Article in English | MEDLINE | ID: mdl-36161171

ABSTRACT

Both hearing and touch are sensitive to the frequency of mechanical oscillations: sound waves and tactile vibrations, respectively. The mounting evidence of parallels in temporal frequency processing between the two sensory systems led us to directly address the question of perceptual frequency equivalence between touch and hearing using stimuli of simple and more complex temporal features. In a cross-modal psychophysical paradigm, subjects compared the perceived frequency of pulsatile mechanical vibrations to that elicited by pulsatile acoustic (click) trains, and vice versa. Non-invasive pulsatile stimulation designed to excite a fixed population of afferents was used to induce desired temporal spike trains at frequencies spanning flutter up to vibratory hum (>50 Hz). The cross-modal perceived frequency for regular test pulse trains of either modality was a close match to the presented stimulus physical frequency up to 100 Hz. We then tested whether the recently discovered "burst gap" temporal code for frequency, that is shared by the two senses, renders an equivalent cross-modal frequency perception. When subjects compared trains comprising pairs of pulses (bursts) in one modality against regular trains in the other, the cross-sensory equivalent perceptual frequency best corresponded to the silent interval between the successive bursts in both auditory and tactile test stimuli. These findings suggest that identical acoustic and vibrotactile pulse trains, regardless of pattern, elicit equivalent frequencies, and imply analogous temporal frequency computation strategies in both modalities. This perceptual correspondence raises the possibility of employing a cross-modal comparison as a robust standard to overcome the prevailing methodological limitations in psychophysical investigations and strongly encourages cross-modal approaches for transmitting sensory information such as translating pitch into a similar pattern of vibration on the skin.
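The "burst gap" code described above can be made concrete with a small sketch: build a train of two-pulse bursts, then predict the perceived frequency from the silent interval between the last pulse of one burst and the first pulse of the next. Function names, the gap threshold, and all parameter values here are illustrative assumptions, not the study's analysis code.

```python
import numpy as np

def burst_train_times(burst_rate_hz, intra_burst_gap_s, pulses_per_burst=2, dur=1.0):
    """Pulse times for a train of bursts: each burst is a group of closely
    spaced pulses, and bursts repeat at burst_rate_hz."""
    onsets = np.arange(0.0, dur, 1.0 / burst_rate_hz)
    times = np.concatenate([onsets + k * intra_burst_gap_s
                            for k in range(pulses_per_burst)])
    return np.sort(times)

def burst_gap_frequency(times, gap_threshold_s):
    """Under a burst-gap code, perceived frequency tracks the silent interval
    between bursts, not the overall pulse rate. Intervals shorter than the
    threshold are treated as within-burst and discarded."""
    ipis = np.diff(np.sort(times))
    gaps = ipis[ipis > gap_threshold_s]  # keep only inter-burst (silent) intervals
    return 1.0 / gaps.mean()

t = burst_train_times(burst_rate_hz=30.0, intra_burst_gap_s=0.005)
f = burst_gap_frequency(t, gap_threshold_s=0.01)
# silent gap = 1/30 - 0.005 s, so the predicted percept is ~35.3 Hz,
# higher than the 30 Hz burst rate and well below the 60 Hz pulse rate
```

Note how the gap-based prediction diverges from both the burst rate and the total pulse rate, which is what makes the burst-gap code testable against regular trains in the cross-modal comparison.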
