Results 1 - 6 of 6
1.
Atten Percept Psychophys; 85(1): 234-243, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36380148

ABSTRACT

The ability to recognize emotion in speech is a critical skill for social communication. Motivated by previous work showing that vocal emotion recognition accuracy varies with musical ability, the current study addressed this relationship using a behavioral measure of musical ability (i.e., singing) that relies on the same effector system used for vocal prosody production. Participants completed a musical production task that involved singing four-note novel melodies. To measure pitch perception, we used a simple pitch discrimination task in which participants indicated whether a target pitch was higher or lower than a comparison pitch. We also used self-report measures to assess language and musical background. We report that singing ability, but not self-reported musical experience or pitch discrimination ability, was a unique predictor of vocal emotion recognition accuracy. These results support a relationship between processes involved in vocal production and vocal perception, and suggest that sensorimotor processing of the vocal system is recruited for processing vocal prosody.


Subject(s)
Music, Singing, Speech Perception, Humans, Speech, Pitch Perception, Emotions
2.
J Exp Psychol Hum Percept Perform; 49(10): 1296-1309, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37561528

ABSTRACT

Vocal imitation plays a critical role in the development and use of both language and music. Previous studies have reported more accurate imitation of sung pitch than of spoken pitch, which might be attributed to structural differences in the acoustic signals and/or distinct mental representations of pitch patterns across speech and music. The current study investigates the interaction between bottom-up (i.e., acoustic structure) and top-down (i.e., participants' language and musical background) factors in pitch imitation by comparing speech and song imitation accuracy across four groups: English and Mandarin speakers with or without musical training. Participants imitated pitch sequences characteristic of either song or speech, derived from pitch patterns in English and Mandarin spoken sentences. Overall, song imitation was more accurate than speech imitation, and this advantage was larger for English than for Mandarin pitch sequences, regardless of participants' musical and language experience. This effect likely reflects the perceptual salience of linguistic tones in Mandarin relative to English speech. Music and language knowledge were associated with optimal imitation of different acoustic features. Musicians were more accurate than nonmusicians at matching absolute pitch across syllables and musical notes. By contrast, Mandarin speakers were more accurate than English speakers at imitating fine-grained pitch changes within and across pitch events. These results suggest that different top-down factors (i.e., language and musical background) influence pitch imitation ability for different dimensions of bottom-up features (i.e., absolute pitch and relative pitch patterns).


Subject(s)
Music, Speech Perception, Humans, Speech, Pitch Perception, Imitative Behavior, Language, Acoustic Stimulation
3.
Front Psychol; 12: 611867, 2021.
Article in English | MEDLINE | ID: mdl-34135799

ABSTRACT

Individuals typically produce auditory sequences, such as speech or music, at a consistent spontaneous rate or tempo. We addressed whether spontaneous rates would show patterns of convergence across the domains of music and language production when the same participants spoke sentences and performed melodic phrases on a piano. Although timing plays a critical role in both domains, different communicative and motor constraints apply in each case, and so it is not clear whether music and speech would display similar timing mechanisms. We report the results of two experiments in which adult participants produced sequences from memory at a comfortable spontaneous (uncued) rate. In Experiment 1, monolingual pianists in Buffalo, New York, engaged in three production tasks: speaking sentences from memory, performing short melodies from memory, and tapping isochronously. In Experiment 2, English-French bilingual pianists in Montréal, Canada, produced melodies on a piano as in Experiment 1, and spoke short rhythmically structured phrases repeatedly. Both experiments led to the same pattern of results. Participants exhibited consistent spontaneous rates within each task: people who produced one spoken phrase rapidly were likely to produce another spoken phrase rapidly, and this consistency across stimuli was also found for performance of different musical melodies. In general, spontaneous rates across speech and music tasks were not correlated, whereas rates of tapping and music were correlated. Speech rates (for syllables) were faster than music rates (for tones), and speech showed a smaller range of spontaneous rates across individuals than did music or tapping. Taken together, these results suggest that spontaneous rate reflects cumulative influences of endogenous rhythms (in consistent self-generated rates within domain), peripheral motor constraints (in finger movements across tapping and music), and communicative goals based on the cultural transmission of auditory information (slower rates for to-be-synchronized music than for speech).

4.
Front Hum Neurosci; 14: 313, 2020.
Article in English | MEDLINE | ID: mdl-32973476

ABSTRACT

[This corrects the article DOI: 10.3389/fnhum.2019.00390.]

5.
Atten Percept Psychophys; 81(7): 2473-2481, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31286436

ABSTRACT

Vocal imitation guides both music and language development. Despite the developmental significance of this behavior, a sizable minority of individuals are inaccurate at vocal pitch imitation. Although previous research suggested that inaccurate pitch imitation results from deficient sensorimotor associations between pitch perception and vocal motor planning, the cognitive processes involved in sensorimotor translation are not clearly defined. In the present study, we investigated the roles of basic cognitive processes in the vocal imitation of pitch, as well as the degree to which these processes rely on pitch-specific resources. Participants completed a battery of pitch and verbal tasks measuring pitch perception, pitch and verbal auditory imagery, pitch and verbal auditory short-term memory, and pitch imitation ability. Information on participants' musical background was also collected. Pitch imagery, pitch short-term memory, pitch discrimination ability, and musical experience were unique predictors of pitch imitation ability. Furthermore, pitch imagery partially mediated the relationship between pitch short-term memory and pitch imitation ability. These results indicate that vocal imitation recruits cognitive processes that rely on at least partially separate neural resources for pitch and verbal representations.


Subject(s)
Acoustic Stimulation/methods, Imitative Behavior/physiology, Music, Pitch Discrimination/physiology, Pitch Perception/physiology, Acoustic Stimulation/psychology, Adult, Female, Humans, Male, Memory, Short-Term/physiology, Music/psychology
6.
Front Hum Neurosci; 13: 390, 2019.
Article in English | MEDLINE | ID: mdl-31798430

ABSTRACT

Phonological awareness skills in children with reading difficulty (RD) may reflect impaired automatic integration of orthographic and phonological representations. However, little is known about the neural mechanisms underlying phonological awareness in children with RD. Eighteen children with RD, ages 9-13, participated in a functional magnetic resonance imaging (fMRI) study designed to assess the relationship of two constructs of phonological awareness (phoneme synthesis and phoneme analysis) with crossmodal rhyme judgment. Participants completed a rhyme judgment task presented in two modality conditions: unimodal (auditory-only) and crossmodal (audiovisual). Measures of phonological awareness were correlated with unimodal, but not crossmodal, lexical processing. Moreover, these relationships were found only in unisensory brain regions, and not in multisensory brain areas. The results of this study suggest that children with RD rely on unimodal representations and unisensory brain areas, and they provide insight into the role of phonemic awareness in mapping between auditory and visual modalities during literacy acquisition.
