Results 1 - 20 of 5,049
1.
Cell ; 184(18): 4626-4639.e13, 2021 09 02.
Article in English | MEDLINE | ID: mdl-34411517

ABSTRACT

Speech perception is thought to rely on a cortical feedforward serial transformation of acoustic into linguistic representations. Using intracranial recordings across the entire human auditory cortex, electrocortical stimulation, and surgical ablation, we show that cortical processing across areas is not consistent with a serial hierarchical organization. Instead, response latency and receptive field analyses demonstrate parallel and distinct information processing in the primary and nonprimary auditory cortices. This functional dissociation was also observed with stimulation: stimulating the primary auditory cortex evoked auditory hallucinations but did not distort or interfere with speech perception, whereas stimulation of nonprimary cortex in the superior temporal gyrus had the opposite effect. Ablation of the primary auditory cortex did not affect speech perception. These results establish a distributed functional organization of parallel information processing throughout the human auditory cortex and demonstrate an essential, independent role for nonprimary auditory cortex in speech processing.


Subject(s)
Auditory Cortex/physiology , Speech/physiology , Audiometry, Pure-Tone , Electrodes , Electronic Data Processing , Humans , Phonetics , Pitch Perception , Reaction Time/physiology , Temporal Lobe/physiology
2.
Proc Natl Acad Sci U S A ; 120(5): e2216146120, 2023 01 31.
Article in English | MEDLINE | ID: mdl-36693091

ABSTRACT

Some people, entirely untrained in music, can listen to a song and replicate it on a piano with unnerving accuracy. What enables some to "hear" music so much better than others? Long-standing research confirms that part of the answer is undoubtedly neurological and can be improved with training. However, are there structural, physical, or engineering attributes of the human hearing apparatus (i.e., the hair cells of the inner ear) that render one human innately superior to another in the ability to perceive music? In this work, we investigate a physics-based model of the electromechanics of the hair cells in the inner ear to understand why a person might be physiologically better poised to distinguish musical sounds. A key feature of the model is that we avoid a "black-box" systems-type approach. All parameters are well-defined physical quantities, including membrane thickness, bending modulus, electromechanical properties, and geometrical features, among others. Using the two-tone interference problem as a proxy for musical perception, our model allows us to establish the basis for exploring the effect of external factors such as medicine or environment. As an example of the insights we obtain, we conclude that a reduction in the bending modulus of the cell membranes (which may, for instance, be caused by the use of a certain class of analgesic drugs) or an increase in the flexoelectricity of the hair cell membrane can interfere with the perception of two-tone excitation.


Subject(s)
Music , Speech Perception , Humans , Auditory Perception , Hearing , Physics , Speech Perception/physiology , Pitch Perception/physiology
3.
Cereb Cortex ; 34(4)2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38566511

ABSTRACT

This study investigates neural processes in infant speech processing, with a focus on left frontal brain regions and hemispheric lateralization in Mandarin-speaking infants' acquisition of native tonal categories. We tested 2- to 6-month-old Mandarin learners to explore age-related improvements in tone discrimination, the role of inferior frontal regions in abstract speech category representation, and left hemisphere lateralization during tone processing. Using a block design, we presented four Mandarin tones via [ta] and measured oxygenated hemoglobin concentration with functional near-infrared spectroscopy. Results showed age-related improvements in tone discrimination, greater involvement of frontal regions in older infants, indicating the development of abstract tonal representations, and increased bilateral activation mirroring that of native adult Mandarin speakers. These findings contribute to our broader understanding of the relationship between native speech acquisition and infant brain development during the critical period of early language learning.


Subject(s)
Speech Perception , Speech , Adult , Infant , Humans , Aged , Speech Perception/physiology , Pitch Perception/physiology , Language Development , Brain/diagnostic imaging , Brain/physiology
4.
J Cogn Neurosci ; 36(6): 1099-1122, 2024 06 01.
Article in English | MEDLINE | ID: mdl-38358004

ABSTRACT

This article investigates the processing of intonational rises and falls when presented unexpectedly in a stream of repetitive auditory stimuli. It examines the neurophysiological correlates (ERPs) of attention to these unexpected stimuli through the use of an oddball paradigm where sequences of repetitive stimuli are occasionally interspersed with a deviant stimulus, allowing for elicitation of an MMN. Whereas previous oddball studies on attention toward unexpected sounds involving pitch rises were conducted on nonlinguistic stimuli, the present study uses as stimuli lexical items in German with naturalistic intonation contours. Results indicate that rising intonation plays a special role in attention orienting at a pre-attentive processing stage, whereas contextual meaning (here a list of items) is essential for activating attentional resources at a conscious processing stage. This is reflected in the activation of distinct brain responses: Rising intonation evokes the largest MMN, whereas falling intonation elicits a less pronounced MMN followed by a P3 (reflecting a conscious processing stage). Subsequently, we also find a complex interplay between the phonological status (i.e., accent/head marking vs. boundary/edge marking) and the direction of pitch change in their contribution to attention orienting: Attention is not oriented necessarily toward a specific position in prosodic structure (head or edge). Rather, we find that the intonation contour itself and the appropriateness of the contour in the linguistic context are the primary cues to two core mechanisms of attention orienting, pre-attentive and conscious orientation respectively, whereas the phonological status of the pitch event plays only a supplementary role.


Subject(s)
Acoustic Stimulation , Attention , Electroencephalography , Evoked Potentials, Auditory , Humans , Female , Male , Attention/physiology , Adult , Young Adult , Evoked Potentials, Auditory/physiology , Orientation/physiology , Germany , Language , Reaction Time/physiology , Pitch Perception/physiology , Speech Perception/physiology , Auditory Perception/physiology
5.
Hum Brain Mapp ; 45(2): e26583, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38339902

ABSTRACT

Although it has been established that cross-modal activations occur in the occipital cortex during auditory processing among congenitally and early blind listeners, it remains uncertain whether these activations in various occipital regions reflect sensory analysis of specific sound properties, non-perceptual cognitive operations associated with active tasks, or the interplay between sensory analysis and cognitive operations. This fMRI study aimed to investigate cross-modal responses in occipital regions, specifically V5/MT and V1, during passive and active pitch perception by early blind individuals compared to sighted individuals. The data showed that V5/MT was responsive to pitch during passive perception, and its activations increased with task complexity. By contrast, widespread occipital regions, including V1, were only recruited during two active perception tasks, and their activations were also modulated by task complexity. These fMRI results from blind individuals suggest that while V5/MT activations are both stimulus-responsive and task-modulated, activations in other occipital regions, including V1, are dependent on the task, indicating similarities and differences between various visual areas during auditory processing.


Subject(s)
Occipital Lobe , Pitch Perception , Humans , Occipital Lobe/diagnostic imaging , Pitch Perception/physiology , Auditory Perception/physiology , Blindness/diagnostic imaging , Magnetic Resonance Imaging/methods , Brain Mapping/methods
6.
Cerebellum ; 23(1): 172-180, 2024 Feb.
Article in English | MEDLINE | ID: mdl-36715818

ABSTRACT

Brainstem degeneration is a prominent feature of spinocerebellar ataxia type 3 (SCA3), involving structures that execute binaural synchronization with microsecond precision. As a consequence, auditory processing may deteriorate during the course of disease. We tested whether the binaural "Huggins pitch" effect is suitable to study the temporal precision of brainstem functioning in SCA3 mutation carriers. We expected that they would have difficulties perceiving Huggins pitch at high frequencies, and that they would show attenuated neuromagnetic responses to Huggins pitch. The upper limit of Huggins pitch perception was psychoacoustically determined in 18 pre-ataxic and ataxic SCA3 mutation carriers and in 18 age-matched healthy controls. Moreover, the cortical N100 response following Huggins pitch onset was acquired by means of magnetoencephalography (MEG). MEG recordings were analyzed using dipole source modeling and comprised a monaural pitch condition and a no-pitch condition with simple binaural correlation changes. Compared with age-matched controls, ataxic but not pre-ataxic SCA3 mutation carriers had significantly lower frequency limits up to which Huggins pitch could be heard. Listeners with lower frequency limits also showed diminished MEG responses to Huggins pitch, but not in the two control conditions. Huggins pitch is a promising tool to assess brainstem functioning in ataxic SCA3 patients. Future studies should refine the psychophysiological setup to capture possible performance decrements also in pre-ataxic mutation carriers. Longitudinal observations will be needed to prove the potential of the assessment of Huggins pitch as a biomarker to track brainstem functioning during the disease course in SCA3.
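The dichotic "Huggins pitch" stimulus described above can be synthesized digitally: identical noise is sent to both ears, except for a narrow band whose interaural phase is inverted, which evokes a faint pitch at that band's frequency. The following is a minimal sketch; the pitch frequency, bandwidth, and duration are illustrative values, not the parameters used in the study.

```python
import numpy as np

def huggins_pitch(f_pitch=600.0, fs=44100, dur=1.0, bw=0.16):
    """Generate a stereo Huggins-pitch stimulus: both channels carry the
    same noise except for a narrow band around f_pitch whose phase is
    shifted by pi in the right channel. Parameter values are illustrative."""
    n = int(fs * dur)
    noise = np.random.randn(n)
    spec = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    # narrow band (relative bandwidth bw) centered on the pitch frequency
    band = (freqs > f_pitch * (1 - bw / 2)) & (freqs < f_pitch * (1 + bw / 2))
    spec_right = spec.copy()
    spec_right[band] *= -1          # pi interaural phase shift in the band
    left = np.fft.irfft(spec, n)
    right = np.fft.irfft(spec_right, n)
    stereo = np.stack([left, right], axis=1)
    return stereo / np.max(np.abs(stereo))  # normalize to +/-1
```

Raising `f_pitch` toward the upper limit of Huggins pitch perception is, in this framing, what the psychoacoustic procedure in the study probes.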


Subject(s)
Machado-Joseph Disease , Humans , Machado-Joseph Disease/genetics , Hearing , Pitch Perception/physiology , Magnetoencephalography , Mutation/genetics
7.
Anim Cogn ; 27(1): 38, 2024 May 16.
Article in English | MEDLINE | ID: mdl-38750339

ABSTRACT

This study investigates the musical perception skills of dogs through playback experiments. Dogs were trained to distinguish between two different target locations based on a sequence of four ascending or descending notes. A total of 16 dogs of different breeds, ages, and sexes, all with at least basic training, were recruited for the study. Dogs received training from their respective owners in a suitable environment within their familiar home settings. The training sequence consisted of notes [Do-Mi-Sol#-Do (C7-E7-G7#-C8; frequencies in Hz: 2093, 2639, 3322, 4186)] digitally generated as pure sinusoidal tones. The training protocol comprised 3 sequential training levels, with each level consisting of 4 sessions with a minimum of 10 trials per session. In the test phase, the sequence was transposed to evaluate whether dogs used relative pitch when identifying the sequences. A correct response by the dog was recorded as 1, while an incorrect response, occurring when the dog chose the opposite zone of the bowl, was marked as 0. Statistical analyses were performed using a binomial test. Among the 16 dogs, only two consistently performed above the chance level, demonstrating the ability to recognize relative pitch, even with transposed sequences. This study suggests that dogs may have the ability to attend to relative pitch, a critical aspect of human musicality.
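The binomial test used to decide whether a dog performed above chance can be sketched directly from first principles. The trial counts below are hypothetical, chosen only to illustrate the computation; the study's per-dog trial totals are not given here.

```python
from math import comb

def binom_p_above_chance(successes, trials, p=0.5):
    """One-sided binomial test: probability of observing at least
    `successes` correct trials if the animal responds at chance (p)."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

# e.g. a dog correct on 16 of 20 two-alternative trials (hypothetical)
p_val = binom_p_above_chance(16, 20)  # ~0.0059, well below 0.05
```

A dog scoring at this level would be classified as performing above chance, as only two of the 16 dogs did.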


Subject(s)
Music , Dogs , Animals , Male , Female , Auditory Perception , Pitch Perception , Acoustic Stimulation
8.
PLoS Comput Biol ; 19(1): e1010307, 2023 01.
Article in English | MEDLINE | ID: mdl-36634121

ABSTRACT

Changes in the frequency content of sounds over time are arguably the most basic form of information about the behavior of sound-emitting objects. In perceptual studies, such changes have mostly been investigated separately, as aspects of either pitch or timbre. Here, we propose a unitary account of "up" and "down" subjective judgments of frequency change, based on a model combining auditory correlates of acoustic cues in a sound-specific and listener-specific manner. To do so, we introduce a generalized version of so-called Shepard tones, allowing symmetric manipulations of spectral information on a fine scale, usually associated with pitch (spectral fine structure, SFS), and on a coarse scale, usually associated with timbre (spectral envelope, SE). In a series of behavioral experiments, listeners reported "up" or "down" shifts across pairs of generalized Shepard tones that differed in SFS, in SE, or in both. We observed the classic properties of Shepard tones for either SFS or SE shifts: subjective judgments followed the smallest log-frequency change direction, with cases of ambiguity and circularity. Interestingly, when both SFS and SE changes were applied concurrently (synergistically or antagonistically), we observed a trade-off between cues. Listeners were encouraged to report when they perceived "both" directions of change concurrently, but this rarely happened, suggesting a unitary percept. A computational model could accurately fit the behavioral data by combining different cues reflecting frequency changes after auditory filtering. The model revealed that cue weighting depended on the nature of the sound. When presented with harmonic sounds, listeners put more weight on SFS-related cues, whereas inharmonic sounds led to more weight on SE-related cues. Moreover, these stimulus-based factors were modulated by inter-individual differences, revealing variability across listeners in the detailed recipe for "up" and "down" judgments.
We argue that frequency changes are tracked perceptually via the adaptive combination of a diverse set of cues, in a manner that is in fact similar to the derivation of other basic auditory dimensions such as spatial location.
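The two manipulated dimensions, spectral fine structure (octave-spaced partials) and spectral envelope (a coarse weighting over frequency), can be seen in a minimal Shepard-tone synthesizer. This sketch uses a log-frequency Gaussian envelope; the specific envelope shape and parameter values are illustrative assumptions, not the study's stimulus recipe.

```python
import numpy as np

def shepard_tone(base_f=440.0, env_center=960.0, env_sigma_oct=1.0,
                 fs=44100, dur=0.5):
    """Shepard-style tone: octave-spaced partials (the SFS) weighted by a
    log-frequency Gaussian envelope (the SE). Shifting base_f moves the
    fine structure; shifting env_center moves the envelope independently.
    All parameter values are illustrative."""
    t = np.arange(int(fs * dur)) / fs
    sig = np.zeros_like(t)
    for k in range(-6, 7):                 # partials at base_f * 2^k
        f = base_f * 2.0 ** k
        if 20.0 < f < fs / 2:
            # Gaussian weight in octaves relative to the envelope center
            w = np.exp(-0.5 * (np.log2(f / env_center) / env_sigma_oct) ** 2)
            sig += w * np.sin(2 * np.pi * f * t)
    return sig / np.max(np.abs(sig))       # normalize to +/-1
```

Pairs of such tones differing in `base_f` alone, `env_center` alone, or both correspond to the SFS-only, SE-only, and combined shift conditions described above.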


Subject(s)
Acoustics , Auditory Perception , Acoustic Stimulation , Cues , Judgment , Pitch Perception
9.
Exp Brain Res ; 242(1): 225-239, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37999725

ABSTRACT

The present study examined opposing and following vocal responses to altered auditory feedback (AAF) to determine how damage to left-hemisphere brain networks impairs the internal forward model and feedback mechanisms in post-stroke aphasia. Forty-nine subjects with aphasia and sixty age-matched controls performed speech vowel production tasks while their auditory feedback was altered using randomized ± 100 cents upward and downward pitch-shift stimuli. Data analysis revealed that when vocal responses were averaged across all trials (i.e., opposing and following), the overall magnitude of vocal compensation was significantly reduced in the aphasia group compared with controls. In addition, when vocal responses were analyzed separately for opposing and following trials, subjects in the aphasia group showed a significantly lower percentage of opposing and higher percentage of following vocal response trials compared with controls, particularly for the upward pitch-shift stimuli. However, there was no significant difference in the magnitude of opposing and following vocal responses between the two groups. These findings further support previous evidence on the impairment of vocal sensorimotor control in aphasia and provide new insights into the distinctive impact of left-hemisphere stroke on the internal forward model and feedback mechanisms. In this context, we propose that the lower percentage of opposing responses in aphasia may be accounted for by deficits in feedback-dependent mechanisms of audio-vocal integration and motor control. In addition, the higher percentage of following responses may reflect aberrantly increased reliance of the speech system on the internal forward model for generating sensory predictions during vocal error detection and motor control.
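The ±100-cent perturbations used in altered-auditory-feedback paradigms map onto frequency ratios via the standard cents formula, ratio = 2^(cents/1200). A quick sketch of the conversion (the formula is standard; the example values are simply the study's shift sizes):

```python
def cents_to_ratio(cents):
    """Frequency ratio for a pitch shift given in cents
    (100 cents = 1 equal-tempered semitone)."""
    return 2.0 ** (cents / 1200.0)

up = cents_to_ratio(100)     # ~1.0595: +100 cents raises F0 by ~5.9%
down = cents_to_ratio(-100)  # ~0.9439: -100 cents lowers F0 by ~5.6%
```

So a speaker producing a 200 Hz vowel would hear roughly 212 Hz (upward shift) or 189 Hz (downward shift) in the altered feedback.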


Subject(s)
Aphasia , Voice , Humans , Feedback , Pitch Perception/physiology , Voice/physiology , Speech/physiology , Feedback, Sensory/physiology , Aphasia/etiology
10.
Dev Sci ; 27(3): e13459, 2024 May.
Article in English | MEDLINE | ID: mdl-37987377

ABSTRACT

We report the findings of a multi-language and multi-lab investigation of young infants' ability to discriminate lexical tones as a function of their native language, age and language experience, as well as of tone properties. Given the high prevalence of lexical tones across human languages, understanding lexical tone acquisition is fundamental for comprehensive theories of language learning. While there are some similarities between the developmental course of lexical tone perception and that of vowels and consonants, findings for lexical tones tend to vary greatly across different laboratories. To reconcile these differences and to assess the developmental trajectory of native and non-native perception of tone contrasts, this study employed a single experimental paradigm with the same two pairs of Cantonese tone contrasts (perceptually similar vs. distinct) across 13 laboratories in Asia-Pacific, Europe and North America testing 5-, 10- and 17-month-old monolingual (tone, pitch-accent, non-tone) and bilingual (tone/non-tone, non-tone/non-tone) infants. Across the age range and language backgrounds, infants who were not exposed to Cantonese showed robust discrimination of the two non-native lexical tone contrasts. Contrary to this overall finding, the statistical model assessing native discrimination by Cantonese-learning infants failed to yield significant effects. These findings indicate that lexical tone sensitivity is maintained from 5 to 17 months in infants acquiring tone and non-tone languages, challenging the generalisability of the existing theoretical accounts of perceptual narrowing in the first months of life.

RESEARCH HIGHLIGHTS:
This is a multi-language and multi-lab investigation of young infants' ability to discriminate lexical tones.
This study included data from 13 laboratories testing 5-, 10-, and 17-month-old monolingual (tone, pitch-accent, non-tone) and bilingual (tone/non-tone, non-tone/non-tone) infants.
Overall, infants discriminated a perceptually similar and a distinct non-native tone contrast, although there was no evidence of a native tone-language advantage in discrimination.
These results demonstrate maintenance of tone discrimination throughout development.


Subject(s)
Pitch Perception , Speech Perception , Infant , Humans , Laboratories , Phonetics , Timbre Perception
11.
Cereb Cortex ; 33(10): 6465-6473, 2023 05 09.
Article in English | MEDLINE | ID: mdl-36702477

ABSTRACT

Absolute pitch (AP) is the ability to rapidly label pitch without an external reference. The speed of AP labeling may be related to faster sensory processing. We compared time needed for auditory processing in AP musicians, non-AP musicians, and nonmusicians (NM) using high-density electroencephalographic recording. Participants responded to pure tones and sung voice. Stimuli evoked a negative deflection peaking at ~100 ms (N1) post-stimulus onset, followed by a positive deflection peaking at ~200 ms (P2). N1 latency was shortest in AP, intermediate in non-AP musicians, and longest in NM. Source analyses showed decreased auditory cortex and increased frontal cortex contributions to N1 for complex tones compared with pure tones. Compared with NM, AP musicians had weaker source currents in left auditory cortex but stronger currents in left inferior frontal gyrus (IFG) during N1, and stronger currents in left IFG during P2. Compared with non-AP musicians, AP musicians exhibited stronger source currents in right insula and left IFG during N1, and stronger currents in left IFG during P2. Non-AP musicians had stronger N1 currents in right auditory cortex than nonmusicians. Currents in left IFG and left auditory cortex were correlated to response times exclusively in AP. Findings suggest a left frontotemporal network supports rapid pitch labeling in AP.


Subject(s)
Music , Pitch Perception , Humans , Pitch Perception/physiology , Auditory Perception , Prefrontal Cortex , Reaction Time/physiology , Electroencephalography , Acoustic Stimulation , Pitch Discrimination/physiology , Evoked Potentials, Auditory/physiology
12.
Cereb Cortex ; 33(14): 9105-9116, 2023 07 05.
Article in English | MEDLINE | ID: mdl-37246155

ABSTRACT

The perception of pitch is a fundamental percept mediated by the auditory system, requiring the abstraction of stimulus properties related to the spectro-temporal structure of sound. Despite its importance, there is still debate as to the precise areas responsible for its encoding, which may be due to species differences or to differences in the recording measures and choices of stimuli used in previous studies. Moreover, it was unknown whether the human brain contains pitch neurons and how distributed such neurons might be. Here, we present the first study to measure multiunit neural activity in response to pitch stimuli in the auditory cortex of intracranially implanted humans. The stimulus sets were regular-interval noise, whose pitch strength is related to the temporal regularity and whose pitch value is determined by the repetition rate, and harmonic complexes. Specifically, we demonstrate reliable responses to these different pitch-inducing paradigms that are distributed throughout Heschl's gyrus, rather than being localized to a particular region, and this finding was evident regardless of the stimulus presented. These data provide a bridge across animal and human studies and aid our understanding of the processing of a critical percept associated with acoustic stimuli.
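Regular-interval noise of the kind described above is conventionally built by repeatedly delaying a noise and adding it back to itself, so that temporal regularity (and hence pitch strength) grows with the number of iterations while the pitch value is set by the delay. A minimal sketch, with illustrative parameter values rather than the study's:

```python
import numpy as np

def regular_interval_noise(f0=125.0, n_iter=8, fs=44100, dur=0.5):
    """Regular-interval (iterated rippled) noise via delay-and-add:
    delay = 1/f0 sets the pitch value; more iterations increase temporal
    regularity and thus pitch strength. Values are illustrative."""
    d = int(round(fs / f0))            # delay in samples
    x = np.random.randn(int(fs * dur))
    for _ in range(n_iter):
        x[d:] += x[:-d]                # delay-and-add iteration
    return x / np.max(np.abs(x))       # normalize to +/-1
```

Varying `n_iter` manipulates pitch strength at a fixed `f0`, which is the kind of parametric control such paradigms exploit.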


Subject(s)
Auditory Cortex , Animals , Humans , Auditory Cortex/physiology , Pitch Perception/physiology , Acoustic Stimulation , Brain Mapping , Evoked Potentials, Auditory/physiology , Auditory Perception
13.
Cereb Cortex ; 33(9): 5625-5635, 2023 04 25.
Article in English | MEDLINE | ID: mdl-36376991

ABSTRACT

Current models of speech motor control propose a role for the left inferior frontal gyrus (IFG) in feedforward control of speech production. There is evidence, however, that has implicated the functional relevance of the left IFG for the neuromotor processing of vocal feedback errors. The present event-related potential (ERP) study examined whether the left IFG is causally linked to auditory feedback control of vocal production with high-definition transcranial alternating current stimulation (HD-tACS). After receiving active or sham HD-tACS over the left IFG at 6 or 70 Hz, 20 healthy adults vocalized the vowel sounds while hearing their voice unexpectedly pitch-shifted by ±200 cents. The results showed that 6 or 70 Hz HD-tACS over the left IFG led to larger magnitudes and longer latencies of vocal compensations for pitch perturbations paralleled by larger ERP P2 responses than sham HD-tACS. Moreover, there was a lack of frequency specificity that showed no significant differences between 6 and 70 Hz HD-tACS. These findings provide first causal evidence linking the left IFG to vocal pitch regulation, suggesting that the left IFG is an important part of the feedback control network that mediates vocal compensations for auditory feedback errors.


Subject(s)
Electroencephalography , Transcranial Direct Current Stimulation , Adult , Humans , Feedback , Pitch Perception/physiology , Acoustic Stimulation , Prefrontal Cortex , Feedback, Sensory/physiology
14.
J Exp Child Psychol ; 242: 105883, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38412568

ABSTRACT

Most languages of the world use lexical tones to contrast words. Thus, understanding how individuals process tones when learning new words is fundamental for a better understanding of the mechanisms underlying word learning. The current study asked how tonal information is integrated during word learning. We investigated whether variability in tonal information during learning can interfere with the learning of new words and whether this is language and age dependent. Cantonese- and French-learning 30-month-olds (N = 97) and Cantonese- and French-speaking adults (N = 50) were tested with an eye-tracking task on their ability to learn phonetically different pairs of novel words in two learning conditions: a 1-tone condition in which each object was named with a single label and a 3-tone condition in which each object was named with three different labels varying in tone. We predicted learning in all groups in the 1-tone condition. For the 3-tone condition, because tones are part of the phonological system of Cantonese but not of French, we expected the Cantonese groups to either fail (toddlers) or show lower performance than in the 1-tone condition (adults), whereas the French groups might show less sensitivity to this manipulation. The results show that all participants learned in the 1-tone condition and were sensitive to tone variation to some extent. Learning in the 3-tone condition was impeded in both groups of toddlers. We argue that tonal interference in word learning likely comes from the phonological level in the Cantonese groups and from the acoustic level in the French groups.


Subject(s)
Pitch Perception , Speech Perception , Adult , Humans , Language , Verbal Learning , Linguistics
15.
J Exp Child Psychol ; 248: 106046, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39241321

ABSTRACT

Learning in the everyday environment often requires the flexible integration of relevant multisensory information. Previous research has demonstrated preverbal infants' capacity to extract an abstract rule from audiovisual temporal sequences matched in temporal synchrony. Interestingly, this capacity was recently reported to be modulated by crossmodal correspondence beyond spatiotemporal matching (e.g., consistent facial emotional expressions or articulatory mouth movements matched with sound). To investigate whether such modulatory influence applies to non-social and non-communicative stimuli, we conducted a critical test using audiovisual stimuli free of social information: visually upward (and downward) moving objects paired with a congruent tone of ascending or incongruent (descending) pitch. East Asian infants (8-10 months old) from a metropolitan area in Asia demonstrated successful abstract rule learning in the congruent audiovisual condition and demonstrated weaker learning in the incongruent condition. This implies that preverbal infants use crossmodal dynamic pitch-height correspondence to integrate multisensory information before rule extraction. This result confirms that preverbal infants are ready to use non-social non-communicative information in serving cognitive functions such as rule extraction in a multisensory context.


Subject(s)
Pitch Perception , Humans , Infant , Male , Female , Pitch Perception/physiology , Visual Perception/physiology , Learning/physiology , Child Development/physiology , Communication , Photic Stimulation , Acoustic Stimulation
16.
Psychol Res ; 88(5): 1602-1615, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38720089

ABSTRACT

For the auditory dimensions loudness and pitch, a vertical SARC effect (Spatial Association of Response Codes) exists: when responding to loud (high) tones, participants are faster with top-sided responses than with bottom-sided responses, and vice versa for soft (low) tones. These effects are typically explained by two different spatial representations for the two dimensions, with pitch being represented on a helix structure and loudness being represented as a spatially associated magnitude. Prior studies show inconsistent results regarding whether two SARC effects can occur at the same time and whether SARC effects interact with each other. Therefore, this study aimed to investigate the interrelation between the SARC effect for pitch and the SARC effect for loudness in a timbre discrimination task. Participants (N = 36) heard one tone per trial and had to decide whether the presented tone was a violin tone or an organ tone by pressing a top-sided or bottom-sided response key. Loudness and pitch were varied orthogonally. We tested for the occurrence of SARC effects for pitch and loudness, as well as their potential interaction, by conducting a multiple linear regression with the difference in reaction time (dRT) as the dependent variable and loudness and pitch as predictors. Frequentist and Bayesian analyses revealed that the regression coefficients of pitch and loudness were smaller than zero, indicating the simultaneous occurrence of SARC effects for both dimensions. In contrast, the interaction coefficient was not different from zero, indicating an additive effect of both predictors.
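The regression described above, dRT predicted by pitch, loudness, and their interaction, can be sketched with ordinary least squares. The data below are synthetic, constructed only to show the design (negative pitch and loudness slopes, no interaction); they are not the study's measurements.

```python
import numpy as np

def fit_sarc_model(pitch, loudness, dRT):
    """OLS fit of dRT ~ intercept + pitch + loudness + pitch*loudness.
    Returns [intercept, b_pitch, b_loudness, b_interaction]."""
    X = np.column_stack([np.ones_like(pitch), pitch, loudness,
                         pitch * loudness])
    beta, *_ = np.linalg.lstsq(X, dRT, rcond=None)
    return beta

# synthetic data mimicking the reported pattern: two negative main
# effects (SARC effects for pitch and loudness) and no interaction
rng = np.random.default_rng(0)
pitch = rng.uniform(-1, 1, 200)
loud = rng.uniform(-1, 1, 200)
dRT = -5.0 * pitch - 3.0 * loud + rng.normal(0, 1, 200)
beta = fit_sarc_model(pitch, loud, dRT)
```

With such data, `beta[1]` and `beta[2]` come out clearly negative while `beta[3]` hovers near zero, mirroring the additive pattern the study reports.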


Subject(s)
Loudness Perception , Pitch Perception , Humans , Male , Female , Adult , Loudness Perception/physiology , Young Adult , Pitch Perception/physiology , Reaction Time/physiology , Acoustic Stimulation
17.
Mem Cognit ; 52(5): 1142-1151, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38347258

ABSTRACT

Most individuals, regardless of formal musical training, have long-term absolute pitch memory (APM) for familiar musical recordings, though with varying levels of accuracy. The present study followed up on recent evidence suggesting an association between singing accuracy and APM (Halpern & Pfordresher, 2022, Attention, Perception, & Psychophysics, 84(1), 260-269), as well as tonal short-term memory (STM) and APM (Van Hedger et al., 2018, Quarterly Journal of Experimental Psychology, 71(4), 879-891). Participants from three research sites (n = 108) completed a battery of tasks including APM, tonal STM, singing accuracy, and self-reported auditory imagery. Both tonal STM and singing accuracy predicted APM, replicating prior results. Tonal STM also predicted singing accuracy, music training, and auditory imagery. Further tests suggested that the association between APM and singing accuracy was fully mediated by tonal STM. This pattern comports well with models of vocal pitch matching that include STM for pitch as a mechanism for sensorimotor translation.
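The "fully mediated" claim above rests on a standard product-of-coefficients mediation analysis: the indirect effect is the product of the predictor-to-mediator slope (a) and the mediator-to-outcome slope controlling for the predictor (b), while the remaining direct effect is c'. A minimal sketch on synthetic stand-in data (not the study's measurements), with x, m, and y standing in for singing accuracy, tonal STM, and APM:

```python
import numpy as np

def mediation_paths(x, m, y):
    """Product-of-coefficients mediation: returns (indirect effect a*b,
    direct effect c') from two OLS regressions."""
    ones = np.ones_like(x)
    # a: slope of mediator m on predictor x
    a = np.linalg.lstsq(np.column_stack([ones, x]), m, rcond=None)[0][1]
    # c' and b: slopes of outcome y on x and m jointly
    coefs = np.linalg.lstsq(np.column_stack([ones, x, m]), y, rcond=None)[0]
    c_prime, b = coefs[1], coefs[2]
    return a * b, c_prime

# synthetic full mediation: x influences y only through m
rng = np.random.default_rng(1)
x = rng.normal(size=300)
m = 0.8 * x + rng.normal(scale=0.5, size=300)
y = 0.7 * m + rng.normal(scale=0.5, size=300)
indirect, direct = mediation_paths(x, m, y)
```

In a fully mediated pattern like this one, the indirect effect is substantial while the direct effect is near zero, which is the shape of the APM-singing-STM result reported above.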


Subject(s)
Memory, Short-Term , Music , Pitch Perception , Humans , Pitch Perception/physiology , Adult , Young Adult , Male , Female , Memory, Short-Term/physiology , Singing/physiology , Memory, Long-Term/physiology , Imagination/physiology , Adolescent
18.
Proc Natl Acad Sci U S A ; 118(36)2021 09 07.
Article in English | MEDLINE | ID: mdl-34475209

ABSTRACT

Adults can learn to identify nonnative speech sounds with training, albeit with substantial variability in learning behavior. Increases in behavioral accuracy are associated with increased separability for sound representations in cortical speech areas. However, it remains unclear whether individual auditory neural populations all show the same types of changes with learning, or whether there are heterogeneous encoding patterns. Here, we used high-resolution direct neural recordings to examine local population response patterns, while native English listeners learned to recognize unfamiliar vocal pitch patterns in Mandarin Chinese tones. We found a distributed set of neural populations in bilateral superior temporal gyrus and ventrolateral frontal cortex, where the encoding of Mandarin tones changed throughout training as a function of trial-by-trial accuracy ("learning effect"), including both increases and decreases in the separability of tones. These populations were distinct from populations that showed changes as a function of exposure to the stimuli regardless of trial-by-trial accuracy. These learning effects were driven in part by more variable neural responses to repeated presentations of acoustically identical stimuli. Finally, learning effects could be predicted from speech-evoked activity even before training, suggesting that intrinsic properties of these populations make them amenable to behavior-related changes. Together, these results demonstrate that nonnative speech sound learning involves a wide array of changes in neural representations across a distributed set of brain regions.


Subject(s)
Frontal Lobe/physiology , Learning/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Brain/physiology , Evoked Potentials, Auditory/physiology , Female , Humans , Language , Male , Middle Aged , Phonetics , Pitch Perception/physiology , Speech/physiology , Speech Acoustics , Temporal Lobe/physiology
19.
Eur Arch Otorhinolaryngol ; 281(7): 3475-3482, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38194096

ABSTRACT

PURPOSE: This study aimed to investigate the effects of low frequency (LF) pitch perception on speech-in-noise and music perception performance by children with cochlear implants (CIC) and typical hearing (THC). Moreover, the relationships between speech-in-noise and music perception as well as the effects of demographic and audiological factors on present research outcomes were studied. METHODS: The sample consisted of 22 CIC and 20 THC (7-10 years). Harmonic intonation (HI) and disharmonic intonation (DI) tests were used to assess LF pitch perception. Speech perception in quiet (WRSq)/noise (WRSn + 10) were tested with the Italian bisyllabic words for pediatric populations. The Gordon test was used to evaluate music perception (rhythm, melody, harmony, and overall). RESULTS: CIC/THC performance comparisons for LF pitch, speech-in-noise, and all music measures except harmony revealed statistically significant differences with large effect sizes. For the CI group, HI showed statistically significant correlations with melody discrimination. Melody/total Gordon scores were significantly correlated with WRSn + 10. For the overall group, HI/DI showed significant correlations with all music perception measures and WRSn + 10. Hearing thresholds showed significant effects on HI/DI scores. Hearing thresholds and WRSn + 10 scores were significantly correlated; both revealed significant effects on all music perception scores. CI age had significant effects on WRSn + 10, harmony, and total Gordon scores (p < 0.05). CONCLUSION: Such findings confirmed the significant effects of LF pitch perception on complex listening performance. Significant speech-in-noise and music perception correlations were as promising as results from recent studies indicating significant positive effects of music training on speech-in-noise recognition in CIC.


Subject(s)
Cochlear Implants , Music , Noise , Pitch Perception , Speech Perception , Humans , Child , Male , Female , Speech Perception/physiology , Pitch Perception/physiology , Cochlear Implantation
20.
J Acoust Soc Am ; 155(2): 1451-1468, 2024 02 01.
Article in English | MEDLINE | ID: mdl-38364045

ABSTRACT

Theoretical accounts posit a close link between speech perception and production, but empirical findings on this relationship are mixed. To explain this apparent contradiction, a proposed view is that a perception-production relationship should be established through the use of critical perceptual cues. This study examines this view by using Mandarin tones as a test case because the perceptual cues for Mandarin tones consist of perceptually critical pitch direction and noncritical pitch height cues. The defining features of critical and noncritical perceptual cues and the perception-production relationship of each cue for each tone were investigated. The perceptual stimuli in the perception experiment were created by varying one critical and one noncritical perceptual cue orthogonally. The cues for tones produced by the same group of native Mandarin participants were measured. This study found that the critical status of perceptual cues primarily influenced within-category and between-category perception for nearly all tones. Using cross-domain bidirectional statistical modelling, a perception-production link was found for the critical perceptual cue only. A stronger link was obtained when within-category and between-category perception data were included in the models as compared to using between-category perception data alone, suggesting a phonetically and phonologically driven perception-production relationship.
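The "cross-domain bidirectional statistical modelling" mentioned here can be sketched as fitting regressions in both directions between per-speaker perceptual cue weights and produced cue values. The sketch below uses simulated pitch-direction data; the variable names, sample size, and the simulated perception-production link are illustrative assumptions, not the study's data or method.

```python
# Minimal sketch of bidirectional perception-production modelling:
# regress produced cue values on perceptual cue weights and vice versa.
# Simulated data; the strength of the link is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(1)
n = 40  # hypothetical number of native speakers

# Per-speaker cue use for the critical cue (pitch direction):
# perceptual weight and produced pitch-direction slope, simulated as linked.
percep_direction = rng.normal(size=n)
prod_direction = 0.6 * percep_direction + rng.normal(scale=0.8, size=n)

def r_squared(x, y):
    """R^2 of the simple linear regression y ~ x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - resid.var() / y.var()

# Fit in both directions; a genuine cross-domain link shows up either way.
r2_p_to_q = r_squared(percep_direction, prod_direction)
r2_q_to_p = r_squared(prod_direction, percep_direction)
print(f"perception -> production R^2: {r2_p_to_q:.2f}")
print(f"production -> perception R^2: {r2_q_to_p:.2f}")
```

For a noncritical cue (pitch height), the abstract's finding corresponds to this shared variance being absent or negligible in both directions.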


Subject(s)
Pitch Perception , Speech Perception , Humans , Cues , Phonetics , Timbre Perception