Results 1 - 20 of 38
1.
Philos Trans R Soc Lond B Biol Sci ; 379(1908): 20230253, 2024 Aug 26.
Article in English | MEDLINE | ID: mdl-39005036

ABSTRACT

Misophonic experiences are common in the general population, and they may shed light on everyday emotional reactions to multi-modal stimuli. We performed an online study of a non-clinical sample to understand the extent to which adults who have misophonic reactions are generally reactive to a range of audio-visual emotion-inducing stimuli. We also hypothesized that musicality might be predictive of one's emotional reactions to these stimuli, because music is an activity that involves strong connections between sensory processing and meaningful emotional experiences. Participants completed self-report scales of misophonia and musicality. They also watched videos intended to induce misophonia, autonomous sensory meridian response (ASMR), and musical chills, and were asked to click a button whenever they had any emotional reaction to the video. They also rated the emotional valence and arousal of each video. Reactions to misophonia videos were predicted by reactions to ASMR and chills videos, which could indicate that the frequency with which individuals experience emotional responses varies similarly across both negative and positive emotional contexts. Musicality scores were not correlated with measures of misophonia. These findings could reflect a general phenotype of stronger emotional reactivity to meaningful sensory inputs. This article is part of the theme issue 'Sensing and feeling: an integrative approach to sensory processing and emotional experience'.


Subject(s)
Emotions; Music; Humans; Adult; Female; Male; Music/psychology; Young Adult; Middle Aged; Adolescent; Auditory Perception; Arousal/physiology
2.
Skeletal Radiol ; 2024 May 02.
Article in English | MEDLINE | ID: mdl-38695875

ABSTRACT

PURPOSE: We wished to evaluate whether an open-source artificial intelligence (AI) algorithm ( https://www.childfx.com ) could improve the performance of (1) subspecialized musculoskeletal radiologists, (2) radiology residents, and (3) pediatric residents in detecting pediatric and young adult upper extremity fractures. MATERIALS AND METHODS: A set of evaluation radiographs drawn from throughout the upper extremity (elbow, hand/finger, humerus/shoulder/clavicle, and wrist/forearm) from 240 unique patients at a single hospital was constructed (mean age 11.3 years, range 0-22 years, 37.9% female). Two fellowship-trained musculoskeletal radiologists, three radiology residents, and two pediatric residents were recruited as readers. Each reader interpreted each case first without AI assistance and then, 3-4 weeks later, with AI assistance, recording whether and where a fracture was present. RESULTS: Access to AI significantly improved the area under the receiver operating characteristic curve (AUC) for identifying fracture among radiology residents (0.768 [0.730-0.806] without AI to 0.876 [0.845-0.908] with AI, P < 0.001) and pediatric residents (0.706 [0.659-0.753] without AI to 0.844 [0.805-0.883] with AI, P < 0.001). There was no evidence of improvement for subspecialized musculoskeletal radiology attendings (AUC 0.867 [0.832-0.902] to 0.890 [0.856-0.924], P = 0.093), and no evidence of a difference between overall resident AUC with AI and subspecialist AUC without AI (resident with AI 0.863, attending without AI 0.867, P = 0.856). Overall physician radiograph interpretation time was significantly lower with AI (38.9 s with AI vs. 52.1 s without AI, P = 0.030). CONCLUSION: An openly accessible AI model significantly improved radiology and pediatric resident accuracy in detecting pediatric upper extremity fractures.
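The AUC figures above can be illustrated with the rank-based (Mann-Whitney) formulation: AUC is the probability that a randomly chosen fracture case receives a higher reader confidence score than a randomly chosen non-fracture case, with ties counting half. A minimal sketch, using invented scores rather than the study's reader data:

```python
import numpy as np

def auc_from_scores(fracture_scores, normal_scores):
    """Rank-based (Mann-Whitney) AUC: probability that a fracture case
    is scored above a non-fracture case, counting ties as 1/2."""
    pos = np.asarray(fracture_scores, dtype=float)
    neg = np.asarray(normal_scores, dtype=float)
    wins = (pos[:, None] > neg[None, :]).sum()   # pairwise comparisons
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)

# Illustrative confidence scores (not the study's data)
print(auc_from_scores([0.9, 0.8, 0.6], [0.2, 0.6, 0.1]))  # ≈ 0.944
```

This pairwise form is equivalent to integrating the empirical ROC curve, which is why reader studies can report AUC directly from graded confidence ratings.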

3.
Trends Cogn Sci ; 28(6): 487-488, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38664158

ABSTRACT

Jacoby and colleagues used an iterative rhythm reproduction paradigm with listeners from around the world to provide evidence both for rhythm universals (simple-integer ratios such as 1:1 and 2:1) and for cross-cultural variation in specific rhythmic categories that can be linked to local music traditions in different regions of the world.


Subject(s)
Music; Periodicity; Humans; Culture; Cross-Cultural Comparison; Auditory Perception/physiology
4.
Cognition ; 242: 105634, 2024 01.
Article in English | MEDLINE | ID: mdl-37820488

ABSTRACT

Both humans and non-humans (e.g., birds and primates) preferentially produce and perceive auditory rhythms with simple integer ratios. In addition, these preferences (biases) tend to reflect the specific integer-ratio rhythms that are common in one's cultural listening experience. To better understand the developmental trajectory of these biases, we estimated children's rhythm biases across the entire production space of three-interval rhythms with simple integer ratios (e.g., combinations of 1, 2, and 3). North American children aged 6-11 years completed an iterative rhythm production task, in which they attempted to tap in synchrony with repeating three-interval rhythms chosen randomly from the space. For each rhythm, the child's produced rhythm was presented back to them as the next stimulus, and after 5 such iterations we used their final reproductions to estimate their rhythmic biases, or priors. Results suggest that regardless of the initial rhythm, after 5 iterations children's tapping converged on rhythms with (nearly) simple integer ratios, indicating that, like adults, their rhythmic priors consist of rhythms with simple integer ratios. Furthermore, the relative weights (the prominence of different rhythmic priors) observed in children were highly correlated with those of adults. However, we also observed some age-related changes, especially for the ratio types that vary most across cultures. In an additional rhythm perception task, children were better at detecting rhythmic disruptions to a culturally familiar rhythm (in 4/4 meter with a 2:1:1 ratio pattern) than to a culturally unfamiliar rhythm (in 7/8 meter with 3:2:2 ratios), and performance in this task was correlated with tapping variability in the iterative task. Taken together, our findings provide evidence that children as young as 6 years old exhibit simple integer-ratio categorical rhythm priors in their rhythm production that closely resemble those of adults in the same culture.
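The idea of a three-interval rhythm belonging to a simple-integer-ratio category can be made concrete: each rhythm is a point on the simplex of interval proportions, and its nearest simple-integer category can be found by exhaustive search. A hedged sketch (the category inventory and squared-error distance are illustrative conventions, not the paper's estimation procedure):

```python
from itertools import product

def nearest_ratio_category(intervals, max_int=3):
    """Find the simple-integer ratio (a:b:c), with a, b, c in 1..max_int,
    whose interval proportions are closest (squared error) to the rhythm's."""
    total = sum(intervals)
    props = [d / total for d in intervals]
    best, best_dist = None, float("inf")
    for ratio in product(range(1, max_int + 1), repeat=3):
        rtotal = sum(ratio)
        dist = sum((p - r / rtotal) ** 2 for p, r in zip(props, ratio))
        if dist < best_dist:
            best, best_dist = ratio, dist
    return best

# A tapped rhythm of 510, 240, and 250 ms falls nearest the 2:1:1 category
print(nearest_ratio_category([510, 240, 250]))  # → (2, 1, 1)
```

Iterated reproduction can then be read as repeatedly re-sampling a rhythm through a listener's motor/perceptual system until it settles near one of these category attractors.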


Subject(s)
Auditory Perception; Music; Adult; Animals; Humans; Language; Acoustic Stimulation/methods
5.
J Autism Dev Disord ; 2023 May 04.
Article in English | MEDLINE | ID: mdl-37140745

ABSTRACT

PURPOSE: Processing real-world sounds requires both acoustic and higher-order semantic information. We tested the theory that individuals with autism spectrum disorder (ASD) show enhanced processing of acoustic features and impaired processing of semantic information. METHODS: We used a change deafness task, which required detecting when speech and non-speech auditory objects were replaced, and a speech-in-noise task, in which spoken sentences had to be comprehended in the presence of background speech, to examine the extent to which 7- to 15-year-old children with ASD (n = 27) rely on acoustic and semantic information, compared to age-matched (n = 27) and IQ-matched (n = 27) groups of typically developing (TD) children. Within a larger group of 7- to 15-year-old TD children (n = 105), we correlated IQ, ASD symptoms, and the use of acoustic and semantic information. RESULTS: Children with ASD performed worse overall on the change deafness task than the age-matched TD controls, but they did not differ from IQ-matched controls. All groups used acoustic and semantic information similarly and displayed an attentional bias towards changes involving the human voice. Similarly, for the speech-in-noise task, age-matched, but not IQ-matched, TD controls performed better overall than the ASD group. However, all groups used semantic context to a similar degree. Among TD children, neither IQ nor the presence of ASD symptoms predicted the use of acoustic or semantic information. CONCLUSION: Children with and without ASD used acoustic and semantic information similarly during auditory change deafness and speech-in-noise tasks.

6.
Dev Psychol ; 59(5): 829-844, 2023 May.
Article in English | MEDLINE | ID: mdl-36951723

ABSTRACT

Sensitivity to auditory rhythmic structures in music and language is evident as early as infancy, but performance on beat perception tasks is often well below adult levels and improves gradually with age. While some research has suggested the ability to perceive musical beat develops early, even in infancy, it remains unclear whether adult-like perception of musical beat is present in children. The capacity to sustain an internal sense of the beat is critical for various rhythmic musical behaviors, yet very little is known about the development of this ability. In this study, 223 participants ranging in age from 4 to 23 years from the Las Vegas, Nevada, community completed a musical beat discrimination task, during which they first listened to a strongly metrical musical excerpt and then attempted to sustain their perception of the musical beat while listening to a repeated, beat-ambiguous rhythm for up to 14.4 s. They then indicated whether a drum probe matched or did not match the beat. Results suggested that the ability to identify the matching probe improved throughout middle childhood (8-9 years) and did not reach adult-like levels until adolescence (12-14 years). Furthermore, scores on the beat perception task were positively related to phonological processing, after accounting for age, short-term memory, and music and dance training. This study lends further support to the notion that children's capacity for beat perception is not fully developed until adolescence and suggests we should reconsider assumptions of musical beat mastery by infants and young children. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
Music; Adult; Adolescent; Humans; Child; Child, Preschool; Young Adult; Auditory Perception; Linguistics; Language
7.
Dev Sci ; 26(5): e13346, 2023 09.
Article in English | MEDLINE | ID: mdl-36419407

ABSTRACT

Music and language are two fundamental forms of human communication. Many studies examine the development of music- and language-specific knowledge, but few compare how listeners know they are listening to music or language. Although we readily differentiate these domains, how we distinguish music from language, and especially speech from song, is not obvious. In two studies, we asked how listeners categorize speech and song. Study 1 used online survey data to show that 4- to 17-year-olds and adults can verbalize distinctions between speech and song. At all ages, listeners described speech-song differences in terms of acoustic features, but compared with older children, 4- to 7-year-olds more often cited volume, suggesting that they are still learning which features are most useful for differentiating speech from song. Study 2 used a perceptual categorization task to demonstrate that 4- to 8-year-olds and adults readily categorize speech and song, but that this ability improves with age, especially for identifying song. Despite generally rating song as more speech-like, 4- and 6-year-olds rated ambiguous speech-song stimuli as more song-like than did 8-year-olds and adults. Four acoustic features predicted song ratings: F0 instability, utterance duration, harmonicity, and spectral flux. However, 4- and 6-year-olds' song ratings were better predicted by F0 instability than by harmonicity and utterance duration. These studies characterize how children develop conceptual and perceptual understandings of speech and song and suggest that children under age 8 are still learning which features are important for categorizing utterances as speech or song. RESEARCH HIGHLIGHTS: Children and adults conceptually and perceptually categorize speech and song from age 4. Listeners use F0 instability, harmonicity, spectral flux, and utterance duration to determine whether vocal stimuli sound like song. Acoustic cue weighting changes with age, becoming adult-like at age 8 for perceptual categorization and at age 12 for conceptual differentiation. Young children are still learning to categorize speech and song, which leaves open the possibility that music- and language-specific skills are not so domain-specific.


Subject(s)
Music; Speech Perception; Voice; Adult; Child; Humans; Adolescent; Child, Preschool; Speech; Auditory Perception; Learning
8.
Front Psychol ; 13: 998321, 2022.
Article in English | MEDLINE | ID: mdl-36467160

ABSTRACT

Listening to groovy music is an enjoyable experience and a common human behavior in some cultures. Specifically, many listeners agree that songs they find more familiar and pleasurable are more likely to induce the experience of musical groove. While the pleasurable and dance-inducing effects of musical groove are widespread, we know less about how subjective feelings toward music, individual musical or dance experience, or more objective music perception abilities correlate with the way we experience groove. The present study therefore aimed to evaluate how musical and dance sophistication relates to the perception of musical groove. One hundred twenty-four participants completed an online study during which they rated 20 songs, considered high- or low-groove, and completed the Goldsmiths Musical Sophistication Index, the Goldsmiths Dance Sophistication Index, the Beat and Meter Sensitivity Task, and a modified short version of the Profile of Music Perception Skills. Our results reveal that measures of perceptual abilities, musical training, and social dancing predicted the difference in groove ratings between high- and low-groove music. Overall, these findings support the notion that listeners' individual experiences and predispositions may shape their perception of musical groove, although other causal directions are also possible. This research helps elucidate the correlates and possible causes of musical groove perception in a wide range of listeners.

9.
Front Neurosci ; 16: 924806, 2022.
Article in English | MEDLINE | ID: mdl-36213735

ABSTRACT

Misophonia can be characterized both as a condition and as a negative affective experience. It is described as feeling irritation or disgust in response to hearing certain sounds, such as eating, drinking, gulping, and breathing. Although the earliest misophonic experiences are often described as occurring during childhood, relatively little is known about the developmental pathways that lead to individual variation in these experiences. This literature review discusses evidence of misophonic reactions during childhood and explores the possibility that early heightened sensitivities to both positive and negative sounds, such as music, might indicate a vulnerability for misophonia and misophonic reactions. We review when misophonia may develop, how it is distinguished from other auditory conditions (e.g., hyperacusis, phonophobia, or tinnitus), and how it relates to developmental disorders (e.g., autism spectrum disorder or Williams syndrome). Finally, we explore the possibility that children with heightened musicality could be more likely to experience misophonic reactions and develop misophonia.

10.
Psychophysiology ; 59(2): e13963, 2022 02.
Article in English | MEDLINE | ID: mdl-34743347

ABSTRACT

Synchronization of movement to music is a seemingly universal human capacity that depends on sustained beat perception. Previous research has suggested that listeners' conscious perception of musical structure (e.g., beat and meter) might be reflected in neural responses that follow the frequency of the beat. However, the extent to which these neural responses directly reflect concurrent, listener-reported perception of musical beat, rather than stimulus-driven activity, is understudied. We investigated whether steady-state evoked potentials (SSEPs), measured using electroencephalography (EEG), reflect conscious perception of beat by holding the stimulus constant while contextually manipulating listeners' perception and measuring perceptual responses on every trial. Listeners with minimal music training heard a musical excerpt that strongly supported one of two beat patterns (context phase), followed by a rhythm consistent with either beat pattern (ambiguous phase). During the final phase, listeners indicated whether or not a superimposed drum matched the perceived beat (probe phase). Participants were more likely to indicate that the probe matched the music when that probe matched the original context, suggesting an ability to maintain the beat percept through the ambiguous phase. Likewise, we observed that spectral amplitude during the ambiguous phase was higher at frequencies that matched the beat of the preceding context. Exploratory analyses investigated whether EEG amplitude at the beat-related SSEP frequencies predicted performance on the beat induction task on a single-trial basis, but the results were inconclusive. Our findings substantiate the claim that auditory SSEPs reflect conscious perception of musical beat and not just stimulus features.
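The beat-frequency analysis described above can be sketched with a discrete Fourier transform: the single-sided amplitude spectrum of the EEG is read out at the bin nearest a candidate beat frequency. A minimal illustration on a synthetic signal (the 250 Hz sampling rate, 10 s duration, and 2.4 Hz beat frequency are assumptions for the example, not the study's parameters):

```python
import numpy as np

def amplitude_at_frequency(signal, fs, freq):
    """Single-sided FFT amplitude of `signal` at the bin nearest `freq` (Hz)."""
    n = len(signal)
    amps = np.abs(np.fft.rfft(signal)) * 2 / n   # scaled so a unit sine reads ~1
    bins = np.fft.rfftfreq(n, d=1 / fs)
    return amps[int(np.argmin(np.abs(bins - freq)))]

# Synthetic "EEG": a unit-amplitude 2.4 Hz oscillation, 250 Hz sampling, 10 s
fs = 250
t = np.arange(10 * fs) / fs
signal = np.sin(2 * np.pi * 2.4 * t)
print(round(amplitude_at_frequency(signal, fs, 2.4), 3))  # → 1.0
```

Comparing this readout at the two candidate beat frequencies, trial by trial, is one simple way to quantify whether the neural response tracks the perceived rather than the ambiguous stimulus beat.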


Subject(s)
Auditory Perception/physiology; Evoked Potentials/physiology; Music; Time Perception/physiology; Adult; Electroencephalography; Female; Humans; Male; Young Adult
11.
J Exp Psychol Hum Percept Perform ; 47(11): 1516-1542, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34843358

ABSTRACT

Auditory perception of time is superior to visual perception, both for simple intervals and for beat-based musical rhythms. To what extent does this auditory advantage characterize perception of different hierarchical levels of musical meter, and how is it related to lifelong experience with music? We paired musical excerpts with auditory and visual metronomes that matched or mismatched the musical meter at the beat level (faster) and the measure level (slower), and obtained fit ratings from adults and children (5-10 years). Adults exhibited an auditory advantage in this task at the beat level, but not at the measure level. Children also displayed an auditory advantage at the beat level that increased with age. In both modalities, their overall sensitivity to the beat increased with age, but they were not sensitive to measure-level matching at any age. More musical training was related to enhanced sensitivity in both auditory and visual modalities for measure-level matching in adults and beat-level matching in children. These findings provide evidence for auditory superiority of beat perception across development, and they suggest that beat and meter perception develop quite gradually and rely on lifelong acquisition of musical knowledge.


Subject(s)
Music; Acoustic Stimulation; Adult; Auditory Perception; Child; Humans; Visual Perception
12.
Behav Brain Sci ; 44: e74, 2021 09 30.
Article in English | MEDLINE | ID: mdl-34588027

ABSTRACT

Both target papers cite evidence from infancy and early childhood to support the notion of human musicality as a somewhat static suite of capacities; however, in our view they do not adequately acknowledge the critical role of developmental timing, the acquisition process, or the dynamics of social learning, especially during later periods of development such as middle childhood.


Subject(s)
Music; Biological Evolution; Child; Child, Preschool; Humans
13.
J Exp Psychol Gen ; 150(2): 314-339, 2021 Feb.
Article in English | MEDLINE | ID: mdl-32852978

ABSTRACT

Most music is temporally organized within a metrical hierarchy, having nested periodic patterns that give rise to the experience of stronger (downbeat) and weaker (upbeat) events. Musical meter presumably makes it possible to dance, sing, and play instruments in synchrony with others. It is nevertheless unclear whether listeners perceive multiple levels of periodicity simultaneously, and if they do, when and how they learn to do this. We tested children, adolescents, and musically trained and untrained adults with a new meter perception task. We presented excerpts of human-performed music paired with metronomes that matched or mismatched the metrical structure of the music at two hierarchical levels (beat and measure), and asked listeners to rate how well the metronome fit the music. Fit ratings suggested that adults with and without musical training were sensitive to both levels of meter simultaneously, but ratings were more strongly influenced by beat-level than by measure-level synchrony. Sensitivity to two simultaneous levels of meter was not evident in children or adolescents. Sensitivity to the beat alone was apparent in the youngest children and increased with age, whereas sensitivity to the measure alone was not present in younger children (5- to 8-year-olds). These findings suggest a prolonged period of development and refinement of hierarchical beat perception and a surprisingly weak overall ability to attend to two beat levels at the same time across all ages.


Subject(s)
Auditory Perception/physiology; Music; Acoustic Stimulation; Adolescent; Adult; Female; Humans; Learning/physiology; Male; Middle Aged; Young Adult
14.
Music Percept ; 37(3): 185-195, 2020 Feb.
Article in English | MEDLINE | ID: mdl-36936548

ABSTRACT

Many foundational questions in the psychology of music require cross-cultural approaches, yet the vast majority of work in the field to date has been conducted with Western participants and Western music. For cross-cultural research to thrive, it will require collaboration between people from different disciplinary backgrounds, as well as strategies for overcoming differences in assumptions, methods, and terminology. This position paper surveys the current state of the field and offers a number of concrete recommendations focused on issues involving ethics, empirical methods, and definitions of "music" and "culture."

15.
J Exp Child Psychol ; 159: 159-174, 2017 07.
Article in English | MEDLINE | ID: mdl-28288412

ABSTRACT

Movement to music is a universal human behavior, yet little is known about how observers perceive audiovisual synchrony in complex musical displays, such as a person dancing to music, particularly during infancy and childhood. In the current study, we investigated how perception of musical audiovisual synchrony develops over the first year of life. We habituated infants to a video of a person dancing to music and subsequently presented videos in which the visual track was matched (synchronous) or mismatched (asynchronous) with the audio track. In a visual-only control condition, we presented the same visual stimuli with no sound. In Experiment 1, we found that older infants (8-12 months) exhibited a novelty preference for the mismatched movie when both auditory and visual information were available and showed no preference when only visual information was available. By contrast, younger infants (5-8 months) in Experiment 2 did not discriminate matching from mismatching stimuli. This suggests that the ability to perceive musical audiovisual synchrony may develop during the second half of the first year of life.


Subject(s)
Auditory Perception; Dancing/psychology; Discrimination, Psychological; Motion Perception; Music/psychology; Child Psychology; Visual Perception; Age Factors; Attention; Female; Humans; Infant; Male; Time Perception
16.
Infancy ; 22(4): 421-435, 2017.
Article in English | MEDLINE | ID: mdl-31772509

ABSTRACT

The ideal of scientific progress is that we accumulate measurements and integrate them into theory, but recent discussion of replicability issues has cast doubt on whether psychological research conforms to this model. Developmental research, especially with infant participants, also has discipline-specific replicability challenges, including small samples and limited measurement methods. Inspired by collaborative replication efforts in cognitive and social psychology, we describe a proposal for assessing and promoting replicability in infancy research: large-scale, multi-laboratory replication efforts aiming for a more precise understanding of key developmental phenomena. The ManyBabies project, our instantiation of this proposal, will not only help us estimate how robust and replicable these phenomena are, but also yield new theoretical insights into how they vary across ages, linguistic communities, and measurement methods. This project has the potential for a variety of positive outcomes, including less-biased estimates of theoretically important effects, estimates of variability that can be used for later study planning, and a series of best-practices blueprints for future infancy research.

17.
Dev Psychol ; 52(11): 1867-1877, 2016 11.
Article in English | MEDLINE | ID: mdl-27786530

ABSTRACT

Children interact with and learn about all types of sound sources, including dogs, bells, trains, and human beings. Although it is clear that knowledge of semantic categories for everyday sights and sounds develops during childhood, very few studies have examined how children use this knowledge to make sense of auditory scenes. We used a change deafness paradigm and an object-encoding task to investigate how children (6, 8, and 10 years of age) and adults process auditory scenes composed of everyday sounds (e.g., human voices, animal calls, environmental sounds, and musical instruments). Results indicated that although change deafness was present and robust at all ages, listeners improved at detecting changes with age. All listeners were less sensitive to changes within the same semantic category than to small acoustic changes, suggesting that, regardless of age, listeners relied heavily on semantic category knowledge to detect changes. Furthermore, all listeners showed less change deafness when they correctly encoded change-relevant objects (i.e., when they remembered hearing the changing object during the task). Finally, we found that all listeners were better at encoding human voices and were more sensitive to changes involving the human voice. Despite poorer overall performance compared with adults, children detect changes in complex auditory scenes much like adults, using high-level knowledge about auditory objects to guide processing, with special attention to the human voice.


Subject(s)
Auditory Perception/physiology; Child Development/physiology; Knowledge; Semantics; Signal Detection, Psychological/physiology; Acoustic Stimulation; Age Factors; Analysis of Variance; Child; Female; Humans; Male; Psychoacoustics; Statistics as Topic
18.
Front Psychol ; 7: 939, 2016.
Article in English | MEDLINE | ID: mdl-27445907

ABSTRACT

The available evidence indicates that the music of a culture reflects the speech rhythm of the prevailing language. The normalized pairwise variability index (nPVI) is a measure of durational contrast between successive events that can be applied to vowels in speech and to notes in music. Music-language parallels may have implications for the acquisition of language and music, but it is unclear whether native-language rhythms are reflected in children's songs. In general, children's songs exhibit greater rhythmic regularity than adults' songs, in line with their caregiving goals and frequent coordination with rhythmic movement. Accordingly, one might expect lower nPVI values (i.e., lower variability) for such songs regardless of culture. In addition to their caregiving goals, children's songs may serve an intuitive didactic function by modeling culturally relevant content and structure for music and language. One might therefore expect pronounced rhythmic parallels between children's songs and language of origin. To evaluate these predictions, we analyzed a corpus of 269 English and French songs from folk and children's music anthologies. As in prior work, nPVI values were significantly higher for English than for French children's songs. For folk songs (i.e., songs not for children), the difference in nPVI for English and French songs was small and in the expected direction but non-significant. We subsequently collected ratings from American and French monolingual and bilingual adults, who rated their familiarity with each song, how much they liked it, and whether or not they thought it was a children's song. Listeners gave higher familiarity and liking ratings to songs from their own culture, and they gave higher familiarity and preference ratings to children's songs than to other songs. Although higher child-directedness ratings were given to children's than to folk songs, French listeners drove this effect, and their ratings were uniquely predicted by nPVI. Together, these findings suggest that language-based rhythmic structures are evident in children's songs, and that listeners expect exaggerated language-based rhythms in children's songs. The implications of these findings for enculturation processes and for the acquisition of music and language are discussed.
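The nPVI used above has a standard closed form: for successive durations, it is the mean of the absolute difference of each adjacent pair, normalized by the pair's mean, and scaled by 100. A minimal sketch of that computation (the example duration sequences are invented, not drawn from the song corpus):

```python
def npvi(durations):
    """Normalized pairwise variability index: 100 * mean over successive
    pairs of |a - b| / ((a + b) / 2). Higher = more durational contrast."""
    pairs = list(zip(durations, durations[1:]))
    return 100 * sum(abs(a - b) / ((a + b) / 2) for a, b in pairs) / len(pairs)

print(npvi([1, 1, 1, 1]))            # perfectly even rhythm → 0.0
print(round(npvi([2, 1, 2, 1]), 2))  # alternating long-short → 66.67
```

Because each pairwise term is normalized by the local mean duration, nPVI is insensitive to overall tempo, which is what allows comparisons between vowel durations in speech and note durations in song.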

19.
Psychon Bull Rev ; 23(5): 1553-1558, 2016 10.
Article in English | MEDLINE | ID: mdl-26732385

ABSTRACT

Numerous studies have shown that formal musical training is associated with sensory, motor, and cognitive advantages in individuals of various ages. However, the nature of the observed differences between musicians and nonmusicians is poorly understood, and little is known about the listening skills of individuals who engage in alternative types of everyday musical activities. Here, we show that people who have frequently played music video games outperform nonmusician controls on a battery of music perception tests. These findings reveal that enhanced musical aptitude can be found among individuals who play music video games, raising the possibility that music video games could potentially enhance music perception skills in segments of society that are otherwise unable to invest the time and/or money required to learn a musical instrument.


Subject(s)
Aptitude; Auditory Perception; Music/psychology; Video Games/psychology; Adolescent; Adult; Aged; Female; Humans; Learning; Male; Middle Aged; Personality; Young Adult
20.
Cognition ; 143: 135-40, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26151370

ABSTRACT

Few studies comparing music and language processing have adequately controlled for low-level acoustical differences, making it unclear whether differences in music and language processing arise from domain-specific knowledge, acoustic characteristics, or both. We controlled acoustic characteristics by using the speech-to-song illusion, which often results in a perceptual transformation to song after several repetitions of an utterance. Participants performed a same-different pitch discrimination task for the initial repetition (heard as speech) and the final repetition (heard as song). Better detection was observed for pitch changes that violated rather than conformed to Western musical scale structure, but only when utterances transformed to song, indicating that music-specific pitch representations were activated and influenced perception. This shows that music-specific processes can be activated when an utterance is heard as song, suggesting that the high-level status of a stimulus as either language or music can be behaviorally dissociated from low-level acoustic factors.


Subject(s)
Music; Pitch Discrimination/physiology; Speech Perception/physiology; Speech/physiology; Adolescent; Adult; Female; Humans; Knowledge; Male; Middle Aged; Young Adult