Results 1 - 7 of 7
1.
Front Psychol; 8: 2080, 2017.
Article in English | MEDLINE | ID: mdl-29238318

ABSTRACT

The extent to which auditory experience can shape general auditory perceptual abilities is still under debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement of perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested on frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, indicating enhanced perceptual skills for the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested on the DLF and DLT tasks, in which musicians had been found superior. Results showed superior abilities on the DLF task for guitar players, but no difference between the groups on DLT, demonstrating some dependency of auditory learning on the specific area of expertise. A third experiment then tested a possible influence of the vowel density of the native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels) and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels) were tested on a formant discrimination task, the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant discrimination, demonstrating highly specific effects of auditory linguistic experience as well. Overall, the results suggest that auditory superiority is associated with the specific auditory exposure.
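The abstract reports discrimination thresholds (DLF, DLS, DLT) without describing the psychophysical procedure behind them. Thresholds of this kind are typically estimated with an adaptive staircase; the sketch below is a minimal, hypothetical two-down/one-up staircase for a frequency-discrimination (DLF) task. The starting difference, step factor, reversal count, and simulated listener are assumptions for illustration, not details taken from the study.

```python
import random
import statistics

def simulated_listener(delta_hz, true_jnd_hz=4.0):
    """Hypothetical observer: answers correctly more often as the
    frequency difference grows relative to an assumed true JND."""
    p_correct = 0.5 + 0.5 * min(delta_hz / (2 * true_jnd_hz), 1.0)
    return random.random() < p_correct

def staircase_dlf(start_delta=20.0, min_delta=0.25, n_reversals=8):
    """Two-down/one-up adaptive staircase; converges near the 70.7%-correct
    point. Returns a frequency-discrimination threshold estimate in Hz."""
    delta, correct_streak, direction = start_delta, 0, -1
    reversals = []
    while len(reversals) < n_reversals:
        if simulated_listener(delta):
            correct_streak += 1
            if correct_streak == 2:           # two correct in a row -> harder
                correct_streak = 0
                if direction == +1:           # direction change = reversal
                    reversals.append(delta)
                direction = -1
                delta = max(delta / 1.5, min_delta)
        else:                                  # one error -> easier
            correct_streak = 0
            if direction == -1:
                reversals.append(delta)
            direction = +1
            delta = min(delta * 1.5, start_delta)
    return statistics.mean(reversals[2:])      # discard the earliest reversals

if __name__ == "__main__":
    random.seed(1)
    print(f"Estimated DLF: {staircase_dlf():.2f} Hz")
```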

2.
J Exp Psychol Hum Percept Perform; 43(3): 487-498, 2017 Mar.
Article in English | MEDLINE | ID: mdl-27918184

ABSTRACT

Melody recognition is an online process of evaluating incoming information and comparing it to an existing internal corpus, thereby reducing prediction error. The predictive-coding model postulates top-down control of sensory processing accompanying the reduction in prediction error. To investigate the relevance of this model to melody processing, the current study examined early magnetoencephalography (MEG) auditory responses to familiar and unfamiliar melodies in 25 participants. The familiar melodies followed and primed an octave-scrambled version of the same melody. The retrograde versions of these melodies served as the unfamiliar control condition. Octave-transposed melodies were included to examine the influence of pitch representation (pitch-height vs. pitch-chroma) on brain responses during melody recognition. Results demonstrate a reduction of the M100 auditory response to familiar, as compared with unfamiliar, melodies regardless of their form of presentation (condensed vs. octave-scrambled). This trend appeared to begin after the third tone of the melody. An additional behavioral study with the same melody corpus showed a similar trend, namely a significant difference between familiarity ratings for familiar and unfamiliar melodies beginning with the third tone of the melody. These results may indicate a top-down inhibition of early auditory responses to melodies that is influenced by pitch representation.
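The familiarity effect described here reduces, analytically, to comparing an early evoked-response amplitude between familiar and unfamiliar conditions at each tone position. The snippet below is only an illustrative sketch with simulated amplitudes (not the study's MEG data); it assumes 25 participants and an arbitrary eight-tone melody length, and simply runs a paired comparison per tone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_participants, n_tones = 25, 8      # 25 participants per the abstract; 8 tones assumed

# Simulated M100 amplitudes (arbitrary units). Familiar melodies are given a
# small attenuation from the third tone onward, purely to illustrate the analysis.
unfamiliar = rng.normal(loc=1.0, scale=0.2, size=(n_participants, n_tones))
familiar = unfamiliar + rng.normal(scale=0.05, size=(n_participants, n_tones))
familiar[:, 2:] -= 0.15              # attenuation beginning at tone 3

for tone in range(n_tones):
    t, p = stats.ttest_rel(familiar[:, tone], unfamiliar[:, tone])
    print(f"tone {tone + 1}: t({n_participants - 1}) = {t:6.2f}, p = {p:.3f}")
```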


Subject(s)
Auditory Perception/physiology; Evoked Potentials, Auditory/physiology; Music; Recognition, Psychology/physiology; Adult; Female; Humans; Magnetoencephalography; Male; Pitch Perception/physiology; Young Adult
3.
Front Hum Neurosci; 10: 255, 2016.
Article in English | MEDLINE | ID: mdl-27375455

ABSTRACT

Humans and oscine songbirds share the rare capacity for vocal learning. Songbirds can acquire songs and calls of various rhythms through imitation. In several species, birds can even coordinate the timing of their vocalizations with other individuals in duets that are synchronized with millisecond accuracy. It is not known, however, whether songbirds can perceive rhythms holistically or whether they are capable of spontaneous entrainment to complex rhythms in a manner similar to humans. Here we review emerging evidence from studies of rhythm generation and vocal coordination in songbirds and humans. In particular, recently developed experimental methods have revealed neural mechanisms underlying the temporal structure of song and have allowed us to test birds' abilities to predict the timing of rhythmic social signals. Surprisingly, zebra finches can readily learn to anticipate the calls of a "vocal robot" partner and alter the timing of their answers to avoid jamming, even in reference to complex rhythmic patterns. This capacity resembles, to some extent, the human predictive motor response to an external beat. In songbirds, it is driven, at least in part, by the forebrain song system, which controls song timing and is essential for vocal learning. Building upon previous evidence for spontaneous entrainment in human and non-human vocal learners, we propose a comparative framework for future studies aimed at identifying shared mechanisms of rhythm production and perception across songbirds and humans.

4.
Curr Biol; 26(3): 309-18, 2016 Feb 08.
Article in English | MEDLINE | ID: mdl-26774786

ABSTRACT

The dichotomy between vocal learners and non-learners is a fundamental distinction in the study of animal communication. Male zebra finches (Taeniopygia guttata) are vocal learners that acquire a song resembling their tutors', whereas females can produce only innate calls. The acoustic structure of short calls, produced by both males and females, is not learned. However, these calls can be precisely coordinated across individuals. To examine how birds learn to synchronize their calls, we developed a vocal robot that exchanges calls with a partner bird. Because birds answer the robot with stereotyped latencies, we could program it to disrupt each bird's responses by producing calls likely to coincide with the bird's. Within minutes, the birds learned to avoid this disruptive masking (jamming) by adjusting the timing of their responses. Notably, females exhibited greater adaptive timing plasticity than males. Further, when challenged with complex rhythms containing jamming elements, birds dynamically adjusted the timing of their calls in anticipation of jamming. Blocking the cortical output of the song system dramatically reduced the precision of birds' response timing and abolished their ability to avoid jamming. Surprisingly, we observed this effect in both males and females, indicating that the female song system is functional rather than vestigial. We suggest that descending forebrain projections, including the song-production pathway, function as a general-purpose sensorimotor communication system. In the case of calls, this system enables plasticity in vocal timing to facilitate social interactions, whereas in the case of songs, plasticity extends to developmental changes in vocal structure.
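The jamming manipulation rests on a simple scheduling rule: because the bird answers the robot's call with a stereotyped latency, the robot can time a masking call to land on the predicted answer. The class below is a hypothetical sketch of that logic, not the actual robot software; the history size, latency values, and method names are invented for illustration.

```python
from statistics import mean

class JammingRobot:
    """Hypothetical scheduler: predicts the partner bird's answer time from
    its recent response latencies and places a masking call there."""

    def __init__(self, history_size=10):
        self.latencies = []               # seconds from robot call to bird answer
        self.history_size = history_size

    def record_answer(self, robot_call_time, bird_answer_time):
        self.latencies.append(bird_answer_time - robot_call_time)
        self.latencies = self.latencies[-self.history_size:]

    def next_jamming_time(self, next_robot_call_time):
        """Schedule a call likely to coincide with the bird's next answer."""
        if len(self.latencies) < 3:
            return None                   # not enough history to predict yet
        return next_robot_call_time + mean(self.latencies)

# Example: after observing stereotyped ~0.2 s answer latencies, the robot
# plans a masking call 0.2 s after its own next call (so at ~10.2 s here).
robot = JammingRobot()
for t in range(5):
    robot.record_answer(robot_call_time=t * 2.0, bird_answer_time=t * 2.0 + 0.2)
print(robot.next_jamming_time(next_robot_call_time=10.0))
```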


Subject(s)
Finches/physiology; Learning; Prosencephalon/physiology; Vocalization, Animal; Animals; Female; Male; Reaction Time
5.
Autism Res; 8(2): 153-63, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25428545

ABSTRACT

Prosody is an important tool of human communication, carrying both affective and pragmatic messages in speech. Prosody recognition relies on processing of acoustic cues, such as the fundamental frequency of the voice signal, and their interpretation according to acquired socioemotional scripts. Individuals with autism spectrum disorders (ASD) show deficiencies in affective prosody recognition. These deficiencies have mostly been associated with general difficulties in emotion recognition. The current study explored an additional association between affective prosody recognition in ASD and auditory perceptual abilities. Twenty high-functioning male adults with ASD and 32 typically developing male adults, matched on age and verbal abilities, undertook a battery of auditory tasks. These included affective and pragmatic prosody recognition tasks, two psychoacoustic tasks (pitch direction recognition and pitch discrimination), and a facial emotion recognition task representing nonvocal emotion recognition. Compared with controls, the ASD group demonstrated poorer performance on both vocal and facial emotion recognition, but not on pragmatic prosody recognition or on any of the psychoacoustic tasks. Both groups showed strong associations between psychoacoustic abilities and prosody recognition, both affective and pragmatic, although these associations were more pronounced in the ASD group. Facial emotion recognition predicted vocal emotion recognition in the ASD group only. These findings suggest that auditory perceptual abilities, alongside general emotion recognition abilities, play a significant role in affective prosody recognition in ASD.
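The claim that the psychoacoustic-prosody associations were more pronounced in the ASD group is, statistically, a comparison of correlation strength across two independent groups. One standard way to test such a difference is Fisher's r-to-z transform, sketched below; the correlation values are placeholders, and only the group sizes (20 ASD, 32 controls) come from the abstract.

```python
import math
from scipy import stats

def compare_correlations(r1, n1, r2, n2):
    """Fisher r-to-z test for the difference between two independent
    correlations (e.g., ASD group vs. control group)."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    p = 2.0 * (1.0 - stats.norm.cdf(abs(z)))
    return z, p

# Placeholder correlations between pitch-discrimination threshold and
# affective prosody score in each group; illustrative values only.
z, p = compare_correlations(r1=-0.65, n1=20, r2=-0.40, n2=32)
print(f"z = {z:.2f}, p = {p:.3f}")
```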


Subject(s)
Autism Spectrum Disorder/physiopathology; Emotions/physiology; Pitch Discrimination/physiology; Psychoacoustics; Recognition, Psychology/physiology; Speech Perception/physiology; Adult; Cognition/physiology; Cues; Facial Expression; Humans; Male; Neuropsychological Tests; Social Behavior; Speech; Young Adult
6.
Atten Percept Psychophys; 75(8): 1799-810, 2013 Nov.
Article in English | MEDLINE | ID: mdl-23893469

ABSTRACT

Prosodic attributes of speech, such as intonation, influence our ability to recognize, comprehend, and produce affect, as well as semantic and pragmatic meaning, in vocal utterances. The present study examines associations between auditory perceptual abilities and the perception of prosody, both pragmatic and affective, an association that has not been examined previously. Ninety-seven participants (49 female, 48 male) with normal hearing thresholds took part in two experiments involving both prosody recognition and psychoacoustic tasks. The prosody recognition tasks included a vocal emotion recognition task and a focus perception task requiring recognition of an accented word in a spoken sentence. The psychoacoustic tasks included a task requiring pitch discrimination and three tasks also requiring pitch direction recognition (i.e., high/low, rising/falling, or changing/steady pitch). Results demonstrate that psychoacoustic thresholds can predict 31% and 38% of the variance in affective and pragmatic prosody recognition scores, respectively. The psychoacoustic tasks requiring pitch direction recognition were the only significant predictors of prosody recognition scores. These findings contribute to a better understanding of the mechanisms underlying prosody recognition and may inform the assessment and rehabilitation of individuals with deficient prosodic perception.
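The 31% and 38% figures correspond to variance explained (R²) when prosody recognition scores are regressed on psychoacoustic thresholds. The sketch below shows how such an R² is computed from an ordinary least-squares fit on simulated data; the number of predictors, effect sizes, and noise level are assumptions for illustration only, and only the sample size is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 97                                    # sample size reported in the abstract

# Simulated psychoacoustic thresholds (columns = tasks; lower = better) and a
# prosody score partly driven by the pitch-direction tasks. Effect sizes and
# noise are arbitrary choices for illustration.
thresholds = rng.normal(size=(n, 4))
prosody = (-0.5 * thresholds[:, 1] - 0.4 * thresholds[:, 2]
           + rng.normal(scale=1.0, size=n))

X = np.column_stack([np.ones(n), thresholds])     # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, prosody, rcond=None)
predicted = X @ beta
ss_res = np.sum((prosody - predicted) ** 2)
ss_tot = np.sum((prosody - prosody.mean()) ** 2)
print(f"R^2 = {1 - ss_res / ss_tot:.2f}")         # proportion of variance explained
```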


Subject(s)
Emotions/physiology; Pitch Discrimination; Psychoacoustics; Recognition, Psychology; Speech Perception/physiology; Voice; Adult; Female; Humans; Male; Young Adult
7.
Front Syst Neurosci; 7: 35, 2013.
Article in English | MEDLINE | ID: mdl-23914159