Results 1 - 20 of 33
1.
J Autism Dev Disord ; 2023 May 04.
Article in English | MEDLINE | ID: mdl-37140745

ABSTRACT

PURPOSE: Processing real-world sounds requires acoustic and higher-order semantic information. We tested the theory that individuals with autism spectrum disorder (ASD) show enhanced processing of acoustic features and impaired processing of semantic information. METHODS: We used a change deafness task that required detection of speech and non-speech auditory objects being replaced and a speech-in-noise task using spoken sentences that must be comprehended in the presence of background speech to examine the extent to which 7- to 15-year-old children with ASD (n = 27) rely on acoustic and semantic information, compared to age-matched (n = 27) and IQ-matched (n = 27) groups of typically developing (TD) children. Within a larger group of 7- to 15-year-old TD children (n = 105), we correlated IQ, ASD symptoms, and the use of acoustic and semantic information. RESULTS: Children with ASD performed worse overall at the change deafness task relative to the age-matched TD controls, but they did not differ from IQ-matched controls. All groups utilized acoustic and semantic information similarly and displayed an attentional bias towards changes that involved the human voice. Similarly, for the speech-in-noise task, age-matched, but not IQ-matched, TD controls performed better overall than the ASD group. However, all groups used semantic context to a similar degree. Among TD children, neither IQ nor the presence of ASD symptoms predicted the use of acoustic or semantic information. CONCLUSION: Children with and without ASD used acoustic and semantic information similarly during auditory change deafness and speech-in-noise tasks.

2.
Dev Psychol ; 59(5): 829-844, 2023 May.
Article in English | MEDLINE | ID: mdl-36951723

ABSTRACT

Sensitivity to auditory rhythmic structures in music and language is evident as early as infancy, but performance on beat perception tasks is often well below adult levels and improves gradually with age. While some research has suggested the ability to perceive musical beat develops early, even in infancy, it remains unclear whether adult-like perception of musical beat is present in children. The capacity to sustain an internal sense of the beat is critical for various rhythmic musical behaviors, yet very little is known about the development of this ability. In this study, 223 participants ranging in age from 4 to 23 years from the Las Vegas, Nevada, community completed a musical beat discrimination task, during which they first listened to a strongly metrical musical excerpt and then attempted to sustain their perception of the musical beat while listening to a repeated, beat-ambiguous rhythm for up to 14.4 s. They then indicated whether a drum probe matched or did not match the beat. Results suggested that the ability to identify the matching probe improved throughout middle childhood (8-9 years) and did not reach adult-like levels until adolescence (12-14 years). Furthermore, scores on the beat perception task were positively related to phonological processing, after accounting for age, short-term memory, and music and dance training. This study lends further support to the notion that children's capacity for beat perception is not fully developed until adolescence and suggests we should reconsider assumptions of musical beat mastery by infants and young children. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
Music, Adult, Adolescent, Humans, Child, Child, Preschool, Young Adult, Auditory Perception, Linguistics, Language
3.
Dev Sci ; 26(5): e13346, 2023 09.
Article in English | MEDLINE | ID: mdl-36419407

ABSTRACT

Music and language are two fundamental forms of human communication. Many studies examine the development of music- and language-specific knowledge, but few studies compare how listeners know they are listening to music or language. Although we readily differentiate these domains, how we distinguish music and language, and especially speech and song, is not obvious. In two studies, we asked how listeners categorize speech and song. Study 1 used online survey data to illustrate that 4- to 17-year-olds and adults have verbalizable distinctions for speech and song. At all ages, listeners described speech and song differences based on acoustic features, but compared with older children, 4- to 7-year-olds more often used volume to describe differences, suggesting that they are still learning to identify the features most useful for differentiating speech from song. Study 2 used a perceptual categorization task to demonstrate that 4- to 8-year-olds and adults readily categorize speech and song, but this ability improves with age, especially for identifying song. Despite generally rating song as more speech-like, 4- and 6-year-olds rated ambiguous speech-song stimuli as more song-like than 8-year-olds and adults did. Four acoustic features predicted song ratings: F0 instability, utterance duration, harmonicity, and spectral flux. However, 4- and 6-year-olds' song ratings were better predicted by F0 instability than by harmonicity and utterance duration. These studies characterize how children develop conceptual and perceptual understandings of speech and song and suggest that children under age 8 are still learning which features are important for categorizing utterances as speech or song. RESEARCH HIGHLIGHTS: Children and adults conceptually and perceptually categorize speech and song from age 4. Listeners use F0 instability, harmonicity, spectral flux, and utterance duration to determine whether vocal stimuli sound like song. Acoustic cue weighting changes with age, becoming adult-like at age 8 for perceptual categorization and at age 12 for conceptual differentiation. Young children are still learning to categorize speech and song, which leaves open the possibility that music- and language-specific skills are not so domain-specific.


Subject(s)
Music, Speech Perception, Voice, Adult, Child, Humans, Adolescent, Child, Preschool, Speech, Auditory Perception, Learning
4.
Front Psychol ; 13: 998321, 2022.
Article in English | MEDLINE | ID: mdl-36467160

ABSTRACT

Listening to groovy music is an enjoyable experience and a common human behavior in some cultures. Specifically, many listeners agree that songs they find to be more familiar and pleasurable are more likely to induce the experience of musical groove. While the pleasurable and dance-inducing effects of musical groove are omnipresent, we know less about how subjective feelings toward music, individual musical or dance experiences, or more objective musical perception abilities are correlated with the way we experience groove. Therefore, the present study aimed to evaluate how musical and dance sophistication relates to musical groove perception. One hundred twenty-four participants completed an online study during which they rated 20 songs, considered high- or low-groove, and completed the Goldsmiths Musical Sophistication Index, the Goldsmiths Dance Sophistication Index, the Beat and Meter Sensitivity Task, and a modified short version of the Profile for Music Perception Skills. Our results reveal that measures of perceptual abilities, musical training, and social dancing predicted the difference in groove rating between high- and low-groove music. Overall, these findings support the notion that listeners' individual experiences and predispositions may shape their perception of musical groove, although other causal directions are also possible. This research helps elucidate the correlates and possible causes of musical groove perception in a wide range of listeners.

5.
Front Neurosci ; 16: 924806, 2022.
Article in English | MEDLINE | ID: mdl-36213735

ABSTRACT

Misophonia can be characterized both as a condition and as a negative affective experience. Misophonia is described as feeling irritation or disgust in response to hearing certain sounds, such as eating, drinking, gulping, and breathing. Although the earliest misophonic experiences are often described as occurring during childhood, relatively little is known about the developmental pathways that lead to individual variation in these experiences. This literature review discusses evidence of misophonic reactions during childhood and explores the possibility that early heightened sensitivities to both positive and negative sounds, such as to music, might indicate a vulnerability for misophonia and misophonic reactions. We will review when misophonia may develop, how it is distinguished from other auditory conditions (e.g., hyperacusis, phonophobia, or tinnitus), and how it relates to developmental disorders (e.g., autism spectrum disorder or Williams syndrome). Finally, we explore the possibility that children with heightened musicality could be more likely to experience misophonic reactions and develop misophonia.

6.
Psychophysiology ; 59(2): e13963, 2022 02.
Article in English | MEDLINE | ID: mdl-34743347

ABSTRACT

Synchronization of movement to music is a seemingly universal human capacity that depends on sustained beat perception. Previous research has suggested that listeners' conscious perception of the musical structure (e.g., beat and meter) might be reflected in neural responses that follow the frequency of the beat. However, the extent to which these neural responses directly reflect concurrent, listener-reported perception of musical beat, as opposed to stimulus-driven activity, is understudied. We investigated whether steady-state evoked potentials (SSEPs), measured using electroencephalography (EEG), reflect conscious perception of beat by holding the stimulus constant while contextually manipulating listeners' perception and measuring perceptual responses on every trial. Listeners with minimal music training heard a musical excerpt that strongly supported one of two beat patterns (context phase), followed by a rhythm consistent with either beat pattern (ambiguous phase). During the final phase, listeners indicated whether or not a superimposed drum matched the perceived beat (probe phase). Participants were more likely to indicate that the probe matched the music when that probe matched the original context, suggesting an ability to maintain the beat percept through the ambiguous phase. Likewise, we observed that the spectral amplitude during the ambiguous phase was higher at frequencies that matched the beat of the preceding context. Exploratory analyses investigated whether EEG amplitude at the beat-related SSEP frequencies predicted performance on the beat induction task on a single-trial basis, but were inconclusive. Our findings substantiate the claim that auditory SSEPs reflect conscious perception of musical beat and not just stimulus features.
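The frequency-tagging logic behind this kind of SSEP analysis can be sketched in a few lines: compute the EEG amplitude spectrum and read out the bin at the beat frequency. The sketch below uses synthetic data, not the study's pipeline; the sampling rate, beat frequency, and signal-to-noise level are all assumed values for illustration.

```python
import numpy as np

fs = 250.0          # sampling rate in Hz (assumed)
beat_freq = 2.0     # hypothetical beat frequency in Hz (120 BPM)
duration = 10.0     # seconds; an integer number of beat cycles
t = np.arange(0, duration, 1 / fs)

# Synthetic "EEG": a small oscillation at the beat frequency plus noise.
rng = np.random.default_rng(0)
signal = 0.5 * np.sin(2 * np.pi * beat_freq * t) + rng.normal(0, 1, t.size)

# Single-sided amplitude spectrum; because the window holds a whole
# number of beat cycles, the beat frequency falls exactly on an FFT bin.
spectrum = np.abs(np.fft.rfft(signal)) * 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
beat_bin = np.argmin(np.abs(freqs - beat_freq))
print(f"amplitude at {freqs[beat_bin]:.1f} Hz: {spectrum[beat_bin]:.3f}")
```

In a real analysis, amplitude at the beat-related bins during the ambiguous phase would be compared across the two induced-beat contexts while the stimulus itself stays constant.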


Subject(s)
Auditory Perception/physiology, Evoked Potentials/physiology, Music, Time Perception/physiology, Adult, Electroencephalography, Female, Humans, Male, Young Adult
7.
J Exp Psychol Hum Percept Perform ; 47(11): 1516-1542, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34843358

ABSTRACT

Auditory perception of time is superior to visual perception, both for simple intervals and beat-based musical rhythms. To what extent does this auditory advantage characterize perception of different hierarchical levels of musical meter, and how is it related to lifelong experience with music? We paired musical excerpts with auditory and visual metronomes that matched or mismatched the musical meter at the beat level (faster) and measure level (slower) and obtained fit ratings from adults and children (5-10 years). Adults exhibited an auditory advantage in this task for the beat level, but not for the measure level. Children also displayed an auditory advantage that increased with age for the beat level. In both modalities, their overall sensitivity to beat increased with age, but they were not sensitive to measure-level matching at any age. More musical training was related to enhanced sensitivity in both auditory and visual modalities for measure-level matching in adults and beat-level matching in children. These findings provide evidence for auditory superiority of beat perception across development, and they suggest that beat and meter perception develop quite gradually and rely on lifelong acquisition of musical knowledge. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subject(s)
Music, Acoustic Stimulation, Adult, Auditory Perception, Child, Humans, Visual Perception
8.
Behav Brain Sci ; 44: e74, 2021 09 30.
Article in English | MEDLINE | ID: mdl-34588027

ABSTRACT

Both target papers cite evidence from infancy and early childhood to support the notion of human musicality as a somewhat static suite of capacities; however, in our view they do not adequately acknowledge the critical role of developmental timing, the acquisition process, or the dynamics of social learning, especially during later periods of development such as middle childhood.


Subject(s)
Music, Biological Evolution, Child, Child, Preschool, Humans
9.
J Exp Psychol Gen ; 150(2): 314-339, 2021 Feb.
Article in English | MEDLINE | ID: mdl-32852978

ABSTRACT

Most music is temporally organized within a metrical hierarchy, having nested periodic patterns that give rise to the experience of stronger (downbeat) and weaker (upbeat) events. Musical meter presumably makes it possible to dance, sing, and play instruments in synchrony with others. It is nevertheless unclear whether or not listeners perceive multiple levels of periodicity simultaneously, and if they do, when and how they learn to do this. We tested children, adolescents, and musically trained and untrained adults with a new meter perception task. We presented excerpts of human-performed music paired with metronomes that matched or mismatched the metrical structure of the music at 2 hierarchical levels (beat and measure), and asked listeners to provide a rating of fit of metronome and music. Fit ratings suggested that adults with and without musical training were sensitive to both levels of meter simultaneously, but ratings were more strongly influenced by beat-level than by measure-level synchrony. Sensitivity to two simultaneous levels of meter was not evident in children or adolescents. Sensitivity to the beat alone was apparent in the youngest children and increased with age, whereas sensitivity to the measure alone was not present in younger children (5- to 8-year-olds). These findings suggest a prolonged period of development and refinement of hierarchical beat perception and surprisingly weak overall ability to attend to 2 beat levels at the same time across all ages. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subject(s)
Auditory Perception/physiology, Music, Acoustic Stimulation, Adolescent, Adult, Female, Humans, Learning/physiology, Male, Middle Aged, Young Adult
10.
J Exp Child Psychol ; 159: 159-174, 2017 07.
Article in English | MEDLINE | ID: mdl-28288412

ABSTRACT

Movement to music is a universal human behavior, yet little is known about how observers perceive audiovisual synchrony in complex musical displays, such as a person dancing to music, particularly during infancy and childhood. In the current study, we investigated how perception of musical audiovisual synchrony develops over the first year of life. We habituated infants to a video of a person dancing to music and subsequently presented videos in which the visual track was matched (synchronous) or mismatched (asynchronous) with the audio track. In a visual-only control condition, we presented the same visual stimuli with no sound. In Experiment 1, we found that older infants (8-12 months) exhibited a novelty preference for the mismatched movie when both auditory and visual information were available and showed no preference when only visual information was available. By contrast, younger infants (5-8 months) in Experiment 2 did not discriminate matching from mismatching stimuli. This suggests that the ability to perceive musical audiovisual synchrony may develop during the second half of the first year of life.


Subject(s)
Auditory Perception, Dancing/psychology, Discrimination, Psychological, Motion Perception, Music/psychology, Child Psychology, Visual Perception, Age Factors, Attention, Female, Humans, Infant, Male, Time Perception
11.
Infancy ; 22(4): 421-435, 2017.
Article in English | MEDLINE | ID: mdl-31772509

ABSTRACT

The ideal of scientific progress is that we accumulate measurements and integrate these into theory, but recent discussion of replicability issues has cast doubt on whether psychological research conforms to this model. Developmental research-especially with infant participants-also has discipline-specific replicability challenges, including small samples and limited measurement methods. Inspired by collaborative replication efforts in cognitive and social psychology, we describe a proposal for assessing and promoting replicability in infancy research: large-scale, multi-laboratory replication efforts aiming for a more precise understanding of key developmental phenomena. The ManyBabies project, our instantiation of this proposal, will not only help us estimate how robust and replicable these phenomena are, but also gain new theoretical insights into how they vary across ages, linguistic communities, and measurement methods. This project has the potential for a variety of positive outcomes, including less-biased estimates of theoretically important effects, estimates of variability that can be used for later study planning, and a series of best-practices blueprints for future infancy research.

12.
Dev Psychol ; 52(11): 1867-1877, 2016 11.
Article in English | MEDLINE | ID: mdl-27786530

ABSTRACT

Children interact with and learn about all types of sound sources, including dogs, bells, trains, and human beings. Although it is clear that knowledge of semantic categories for everyday sights and sounds develops during childhood, there are very few studies examining how children use this knowledge to make sense of auditory scenes. We used a change deafness paradigm and an object-encoding task to investigate how children (6, 8, and 10 years of age) and adults process auditory scenes composed of everyday sounds (e.g., human voices, animal calls, environmental sounds, and musical instruments). Results indicated that although change deafness was present and robust at all ages, listeners improved at detecting changes with age. All listeners were less sensitive to changes within the same semantic category than to small acoustic changes, suggesting that, regardless of age, listeners relied heavily on semantic category knowledge to detect changes. Furthermore, all listeners showed less change deafness when they correctly encoded change-relevant objects (i.e., when they remembered hearing the changing object during the task). Finally, we found that all listeners were better at encoding human voices and were more sensitive to detecting changes involving the human voice. Despite poorer overall performance compared with adults, children detect changes in complex auditory scenes much like adults, using high-level knowledge about auditory objects to guide processing, with special attention to the human voice. (PsycINFO Database Record (c) 2016 APA, all rights reserved).


Subject(s)
Auditory Perception/physiology, Child Development/physiology, Knowledge, Semantics, Signal Detection, Psychological/physiology, Acoustic Stimulation, Age Factors, Analysis of Variance, Child, Female, Humans, Male, Psychoacoustics, Statistics as Topic
13.
Front Psychol ; 7: 939, 2016.
Article in English | MEDLINE | ID: mdl-27445907

ABSTRACT

The available evidence indicates that the music of a culture reflects the speech rhythm of the prevailing language. The normalized pairwise variability index (nPVI) is a measure of durational contrast between successive events that can be applied to vowels in speech and to notes in music. Music-language parallels may have implications for the acquisition of language and music, but it is unclear whether native-language rhythms are reflected in children's songs. In general, children's songs exhibit greater rhythmic regularity than adults' songs, in line with their caregiving goals and frequent coordination with rhythmic movement. Accordingly, one might expect lower nPVI values (i.e., lower variability) for such songs regardless of culture. In addition to their caregiving goals, children's songs may serve an intuitive didactic function by modeling culturally relevant content and structure for music and language. One might therefore expect pronounced rhythmic parallels between children's songs and language of origin. To evaluate these predictions, we analyzed a corpus of 269 English and French songs from folk and children's music anthologies. As in prior work, nPVI values were significantly higher for English than for French children's songs. For folk songs (i.e., songs not for children), the difference in nPVI for English and French songs was small and in the expected direction but non-significant. We subsequently collected ratings from American and French monolingual and bilingual adults, who rated their familiarity with each song, how much they liked it, and whether or not they thought it was a children's song. Listeners gave higher familiarity and liking ratings to songs from their own culture, and they gave higher familiarity and preference ratings to children's songs than to other songs. Although higher child-directedness ratings were given to children's than to folk songs, French listeners drove this effect, and their ratings were uniquely predicted by nPVI. 
Together, these findings suggest that language-based rhythmic structures are evident in children's songs, and that listeners expect exaggerated language-based rhythms in children's songs. The implications of these findings for enculturation processes and for the acquisition of music and language are discussed.
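The nPVI used in the study above is straightforward to compute: for each pair of successive durations, take the absolute difference divided by the pair's mean, then average across pairs and scale by 100. A minimal sketch; the duration values below are invented for illustration, not drawn from the corpus.

```python
import statistics

def npvi(durations):
    """Normalized pairwise variability index (nPVI) for a sequence of
    durations (e.g., vowel durations in speech or note durations in
    music). Higher values indicate greater durational contrast between
    successive events."""
    if len(durations) < 2:
        raise ValueError("need at least two durations")
    # Pairwise contrast: |d1 - d2| normalized by the pair's mean.
    contrasts = [
        abs(d1 - d2) / ((d1 + d2) / 2)
        for d1, d2 in zip(durations, durations[1:])
    ]
    # Average over the m - 1 pairs and scale by 100.
    return 100 * statistics.mean(contrasts)

# A perfectly regular sequence has nPVI = 0; strict long-short
# alternation yields a high value.
print(npvi([1, 1, 1, 1]))   # 0.0
print(npvi([2, 1, 2, 1]))   # ≈ 66.67
```

On this measure, the stress-timed rhythm of English (more contrast between successive vowels) yields higher values than the syllable-timed rhythm of French, which is the pattern the study reports for children's songs.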

14.
Psychon Bull Rev ; 23(5): 1553-1558, 2016 10.
Article in English | MEDLINE | ID: mdl-26732385

ABSTRACT

Numerous studies have shown that formal musical training is associated with sensory, motor, and cognitive advantages in individuals of various ages. However, the nature of the observed differences between musicians and nonmusicians is poorly understood, and little is known about the listening skills of individuals who engage in alternative types of everyday musical activities. Here, we show that people who have frequently played music video games outperform nonmusician controls on a battery of music perception tests. These findings reveal that enhanced musical aptitude can be found among individuals who play music video games, raising the possibility that music video games could enhance music perception skills in individuals across a broad spectrum of society who are otherwise unable to invest the time and/or money required to learn a musical instrument.


Subject(s)
Aptitude, Auditory Perception, Music/psychology, Video Games/psychology, Adolescent, Adult, Aged, Female, Humans, Learning, Male, Middle Aged, Personality, Young Adult
15.
Cognition ; 143: 135-40, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26151370

ABSTRACT

Few studies comparing music and language processing have adequately controlled for low-level acoustical differences, making it unclear whether differences in music and language processing arise from domain-specific knowledge, acoustic characteristics, or both. We controlled acoustic characteristics by using the speech-to-song illusion, which often results in a perceptual transformation to song after several repetitions of an utterance. Participants performed a same-different pitch discrimination task for the initial repetition (heard as speech) and the final repetition (heard as song). Better detection was observed for pitch changes that violated rather than conformed to Western musical scale structure, but only when utterances transformed to song, indicating that music-specific pitch representations were activated and influenced perception. This shows that music-specific processes can be activated when an utterance is heard as song, suggesting that the high-level status of a stimulus as either language or music can be behaviorally dissociated from low-level acoustic factors.


Subject(s)
Music, Pitch Discrimination/physiology, Speech Perception/physiology, Speech/physiology, Adolescent, Adult, Female, Humans, Knowledge, Male, Middle Aged, Young Adult
16.
J Exp Psychol Gen ; 144(2): e43-9, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25688906

ABSTRACT

Speech and song are readily differentiated from each other in everyday communication, yet sometimes listeners who have formal music training will hear a spoken utterance transform from speech to song when it is repeated (Deutsch, Henthorn, & Lapidis, 2011). It remains unclear whether music training is required to perceive this illusory transformation or whether implicit knowledge of musical structure is sufficient. The current study replicates Deutsch et al.'s findings with musicians and demonstrates the generalizability of this auditory illusion to casual music listeners with no formal training. We confirm that the illusory transformation is disrupted when the pitch height of each repetition of the utterance is transposed, and we find that raising the pitch height has a different effect on listeners' ratings than does lowering it. Auditory illusions such as this may offer unique opportunities to compare domain-specific and domain-general processing in the brain while holding acoustic characteristics constant.


Subject(s)
Auditory Perception/physiology, Illusions/psychology, Music/psychology, Speech, Adolescent, Adult, Female, Humans, Male, Middle Aged, Pitch Perception/physiology, Young Adult
17.
PLoS One ; 9(7): e102962, 2014.
Article in English | MEDLINE | ID: mdl-25075514

ABSTRACT

Musical meters vary considerably across cultures, yet relatively little is known about how culture-specific experience influences metrical processing. In Experiment 1, we compared American and Indian listeners' synchronous tapping to slow sequences. Inter-tone intervals contained silence or to-be-ignored rhythms that were designed to induce a simple meter (familiar to Americans and Indians) or a complex meter (familiar only to Indians). A subset of trials contained an abrupt switch from one rhythm to another to assess the disruptive effects of contradicting the initially implied meter. In the unfilled condition, both groups tapped earlier than the target and showed large tap-tone asynchronies (measured in relative phase). When inter-tone intervals were filled with simple-meter rhythms, American listeners tapped later than targets, but their asynchronies were smaller and declined more rapidly. Likewise, asynchronies rose sharply following a switch away from simple-meter but not from complex-meter rhythm. By contrast, Indian listeners performed similarly across all rhythm types, with asynchronies rapidly declining over the course of complex- and simple-meter trials. For these listeners, a switch from either simple or complex meter increased asynchronies. Experiment 2 tested American listeners but doubled the duration of the synchronization phase prior to (and after) the switch. Here, compared with simple meters, complex-meter rhythms elicited larger asynchronies that declined at a slower rate; however, asynchronies increased after the switch in all conditions. Our results provide evidence that ease of meter processing depends to a great extent on the amount of experience with specific meters.
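Expressing tap-tone asynchrony as relative phase, as in the tapping analyses above, amounts to dividing each asynchrony by the inter-onset interval and wrapping to the nearest beat. A sketch under assumptions: the tap and tone times below are invented, and the study's exact computation may differ.

```python
def relative_phase(tap_times, tone_times, ioi):
    """Tap-tone asynchrony as relative phase, in cycles of the
    inter-onset interval (ioi). Values are wrapped to roughly
    [-0.5, 0.5]; negative values mean the tap preceded the tone."""
    phases = []
    for tap, tone in zip(tap_times, tone_times):
        phase = (tap - tone) / ioi
        phase -= round(phase)  # wrap to the nearest beat
        phases.append(phase)
    return phases

# Taps 100 ms early against tones spaced 800 ms apart:
# each tap is 1/8 of a cycle early, i.e., relative phase ≈ -0.125.
print(relative_phase([0.7, 1.5, 2.3], [0.8, 1.6, 2.4], ioi=0.8))
```

Phase units make asynchronies comparable across tempi, which is why tapping studies often report them instead of raw millisecond offsets.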


Subject(s)
Cross-Cultural Comparison, Music/psychology, Psychomotor Performance, Auditory Perception, Female, Humans, India, Male, Nevada, Young Adult
18.
Front Syst Neurosci ; 7: 48, 2013 Sep 03.
Article in English | MEDLINE | ID: mdl-24027502

ABSTRACT

The musical brain is built over time through experience with a multitude of sounds in the auditory environment. However, learning the melodies, timbres, and rhythms unique to the music and language of one's culture begins in the womb, during the third trimester of gestation. We review evidence that the intrauterine auditory environment plays a key role in shaping later auditory development and musical preferences. We describe evidence that externally and internally generated sounds influence the developing fetus, and argue that such prenatal auditory experience may set the trajectory for the development of the musical mind.

19.
Ann N Y Acad Sci ; 1252: 92-9, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22524345

ABSTRACT

Rhythm and meter are fundamental components of music that are universal yet also culture specific. Although simple, isochronous meters are preferred and more readily discriminated than highly complex, nonisochronous meters, moderately complex nonisochronous meters do not pose a problem for listeners who are exposed to them from a young age. The present work uses a behavioral task to examine the ease with which listeners of various ages acquire knowledge of unfamiliar metrical structures from passive exposure. We examined perception of familiar (Western) rhythms with an isochronous meter and unfamiliar (Balkan) rhythms with a nonisochronous meter. We compared discrimination by American children (5 to 11 years) and adults before and after a 2-week period of at-home listening to nonisochronous-meter music from Bulgaria. During the first session, listeners of all ages discriminated isochronous melodies more accurately than nonisochronous melodies. Across sessions, this asymmetry declined for young children but not for older children and adults.


Subject(s)
Auditory Perception/physiology, Music, Adolescent, Adult, Age Factors, Child, Child, Preschool, Cultural Characteristics, Female, Humans, Learning, Male, Neurosciences, Young Adult
20.
J Exp Psychol Hum Percept Perform ; 38(3): 543-8, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22352419

ABSTRACT

Despite the ubiquity of dancing and synchronized movement to music, relatively few studies have examined cognitive representations of musical rhythm and meter among listeners from contrasting cultures. We aimed to disentangle the contributions of culture-general and culture-specific influences by examining American and Turkish listeners' detection of temporal disruptions (ranging from 50 to 250 ms) to three types of stimuli: simple rhythms found in both American and Turkish music, complex rhythms found only in Turkish music, and highly complex rhythms that are rare in all cultures. Americans were most accurate when detecting disruptions to the simple rhythm. However, they performed less accurately but comparably in both the complex and highly complex conditions. By contrast, Turkish participants performed accurately and indistinguishably in both simple and complex conditions. However, they performed less accurately in the unfamiliar, highly complex condition. Together, these experiments implicate a crucial role of culture-specific listening experience and acquired musical knowledge in rhythmic pattern perception.


Subject(s)
Auditory Perception, Cross-Cultural Comparison, Music/psychology, Recognition, Psychology, Adolescent, Adult, Humans, Male, Middle Aged, Pattern Recognition, Physiological, Turkey, United States, Young Adult