Results 1 - 5 of 5
1.
Front Neurosci ; 15: 588914, 2021.
Article in English | MEDLINE | ID: mdl-33584187

ABSTRACT

Attentional limits make it difficult to comprehend concurrent speech streams. However, multiple musical streams are processed comparatively easily. Coherence may be a key difference between music and stimuli like speech, which does not rely on the integration of multiple streams for comprehension. The musical organization between melodies in a composition may provide a cognitive scaffold for overcoming attentional limitations when perceiving multiple lines of music concurrently. We investigated how listeners attend to multi-voiced music, examining biological indices associated with processing structured versus unstructured music. We predicted that musical structure provides coherence across distinct musical lines, allowing listeners to attend to simultaneous melodies, and that a lack of organization causes simultaneous melodies to be heard as separate streams. Musician participants attended to melodies in a Coherent condition featuring flute duets and a Jumbled condition in which those duets were manipulated to eliminate coherence between the parts. Auditory-evoked cortical potentials were recorded in response to a tone probe. Analysis focused on the N100 response, which is generated primarily within the auditory cortex and is larger for attended than for ignored stimuli. Results suggest that participants did not attend to one line over the other when listening to Coherent music, instead perceptually integrating the streams. For the Jumbled music, by contrast, effects indicate that participants attended to one line while ignoring the other, abandoning integration. Our findings lend support to the theory that musical organization aids attention when perceiving multi-voiced music.

2.
Ear Hear ; 41(5): 1372-1382, 2020.
Article in English | MEDLINE | ID: mdl-32149924

ABSTRACT

OBJECTIVES: Cochlear implants (CIs) are remarkable in allowing individuals with severe to profound hearing loss to perceive speech. Despite these gains in speech understanding, however, CI users often struggle to perceive elements such as vocal emotion and prosody, as CIs are unable to transmit the spectro-temporal detail needed to decode affective cues. This issue becomes particularly important for children with CIs, but little is known about their emotional development. In a previous study, pediatric CI users showed deficits in voice emotion recognition with child-directed stimuli featuring exaggerated prosody. However, the large intersubject variability and differential developmental trajectory known in this population prompted us to question the extent to which exaggerated prosody would facilitate performance in this task. Thus, the authors revisited the question with both adult-directed and child-directed stimuli. DESIGN: Vocal emotion recognition was measured using both child-directed (CDS) and adult-directed (ADS) speech conditions. Pediatric CI users, aged 7 to 19 years, with no cognitive or visual impairments and who communicated orally with English as the primary language participated in the experiment (n = 27). Stimuli comprised 12 sentences selected from the HINT database. The sentences were spoken by male and female talkers in a CDS or ADS manner, in each of the five target emotions (happy, sad, neutral, scared, and angry). The chosen sentences were semantically emotion-neutral. Percent correct emotion recognition scores were analyzed for each participant in each condition (CDS vs. ADS). Children also completed cognitive tests of nonverbal IQ and receptive vocabulary, while parents completed questionnaires of CI and hearing history. It was predicted that the reduced prosodic variations found in the ADS condition would result in lower vocal emotion recognition scores compared with the CDS condition. Moreover, it was hypothesized that cognitive factors, perceptual sensitivity to complex pitch changes, and elements of each child's hearing history may serve as predictors of performance on vocal emotion recognition. RESULTS: Consistent with our hypothesis, pediatric CI users scored higher on CDS than on ADS speech stimuli, suggesting that speaking with an exaggerated prosody, akin to "motherese", may be a viable way to convey emotional content. Significant talker effects were also observed, in that higher scores were found for the female talker in both conditions. Multiple regression analysis showed that nonverbal IQ was a significant predictor of CDS emotion recognition scores, while years of CI use was a significant predictor of ADS scores. Confusion matrix analyses revealed a dependence of results on specific emotions: for the CDS condition's female talker, participants had high sensitivity (d' scores) to happy sentences and low sensitivity to neutral sentences, while for the ADS condition, low sensitivity was found for scared sentences. CONCLUSIONS: In general, participants had higher vocal emotion recognition in the CDS condition, which also had more variability in pitch and intensity and thus more exaggerated prosody than the ADS condition. Results suggest that pediatric CI users struggle with vocal emotion perception in general, particularly with adult-directed speech. The authors believe these results have broad implications for understanding how CI users perceive emotions, both from an auditory communication standpoint and from a socio-developmental perspective.
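The confusion-matrix analysis above reports sensitivity as d'. As a minimal illustration (not the authors' analysis code), d' can be computed from a hit rate and a false-alarm rate with the standard signal-detection formula d' = z(H) - z(FA), where z is the inverse of the standard normal CDF; in practice, rates of exactly 0 or 1 must first be corrected (e.g. with a log-linear adjustment):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity index d' = z(H) - z(FA), via the inverse normal CDF.

    Assumes both rates are strictly between 0 and 1; extreme rates
    (0 or 1) need a correction before calling this function.
    """
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical example: 80% hits vs. 20% false alarms for one emotion
print(round(d_prime(0.80, 0.20), 2))  # → 1.68
```

Higher d' means the listener distinguishes the target emotion from the others more reliably; d' = 0 indicates chance-level sensitivity.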


Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Adolescent, Adult, Child, Emotions, Female, Humans, Male, Speech, Young Adult
3.
Neuroimage ; 209: 116496, 2020 04 01.
Article in English | MEDLINE | ID: mdl-31899286

ABSTRACT

Improvisation is sometimes described as instant composition and offers a glimpse into real-time musical creativity. Over the last decade, researchers have built up our understanding of the core neural activity patterns associated with musical improvisation by investigating cohorts of professional musicians. However, since creative behavior calls on the unique individuality of an artist, averaging data across musicians may dilute important aspects of the creative process. By performing case study investigations of world-class artists, we may gain insight into their unique creative abilities and achieve a deeper understanding of the biological basis of musical creativity. In this experiment, functional magnetic resonance imaging and functional connectivity were used to study the neural correlates of improvisation in the famed Classical music performer and improviser Gabriela Montero (GM). GM completed two control tasks of varying musical complexity: for the Scale condition she repeatedly played a chromatic scale, and for the Memory condition she performed a given composition from memory. For the experimental Improvisation condition, she performed improvisations. Thus, we were able to compare the neural activity that underlies a generative musical task like improvisation with 'rote' musical tasks of playing pre-learned and pre-memorized music. In GM, improvisation was largely associated with activation of auditory, frontal/cognitive, motor, parietal, occipital, and limbic areas, suggesting that improvisation is a multimodal activity for her. Functional connectivity analysis suggests that the visual network, default mode network, and subcortical networks are involved in improvisation as well. While these findings should not be generalized to other samples or populations, the results shed light on the brain activity that underlies GM's unique ability to perform Classical-style musical improvisations.


Subjects
Cerebral Cortex/physiology, Connectome, Creativity, Limbic System/physiology, Music, Nerve Net/physiology, Psychomotor Performance/physiology, Cerebral Cortex/diagnostic imaging, Female, Humans, Limbic System/diagnostic imaging, Magnetic Resonance Imaging, Middle Aged, Nerve Net/diagnostic imaging
4.
PLoS One ; 10(8): e0134725, 2015.
Article in English | MEDLINE | ID: mdl-26308092

ABSTRACT

Dance and music often co-occur, as evidenced when viewing choreographed dances or singers moving while performing. This study investigated how the viewing of dance motions shapes sound perception. Previous research has shown that dance reflects the temporal structure of its accompanying music, communicating musical meter (i.e. a hierarchical organization of beats) via coordinated movement patterns that indicate where strong and weak beats occur. Experiments here investigated the effects of dance cues on meter perception, hypothesizing that dance could embody the musical meter, thereby shaping participant reaction times (RTs) to sound targets occurring at different metrical positions. In experiment 1, participants viewed a video with dance choreography indicating 4/4 meter (dance condition) or a series of color changes repeated in sequences of four to indicate 4/4 meter (picture condition). A sound track accompanied these videos, and participants reacted to timbre targets at different metrical positions. Participants had the slowest RTs at the strongest beats in the dance condition only. In experiment 2, participants viewed the choreography of the horse-riding dance from Psy's "Gangnam Style" in order to examine how a familiar dance might affect meter perception. Moreover, participants in this experiment were divided into a group with experience dancing this choreography and a group without experience. Results again showed slower RTs to stronger metrical positions, and the group with experience demonstrated a more refined perception of metrical hierarchy. Results likely stem from the temporally selective division of attention between auditory and visual domains. This study has implications for understanding 1) the impact of splitting attention among different sensory modalities and 2) the impact of embodiment on the perception of musical meter. Viewing dance may interfere with sound processing, particularly at critical metrical positions, but embodied familiarity with dance choreography may facilitate meter awareness. Results shed light on the processing of multimedia environments.


Subjects
Auditory Perception, Dance/physiology, Music, Female, Humans, Male, Movement, Reaction Time, Young Adult
5.
Front Psychol ; 4: 713, 2013.
Article in English | MEDLINE | ID: mdl-24137142

ABSTRACT

What makes a musician? In this review, we discuss innate and experience-dependent factors that mold the musician brain, in addition to presenting new data in children indicating that some neural enhancements in musicians unfold with continued training over development. We begin by addressing effects of training on musical expertise, presenting neural, perceptual, and cognitive evidence to support the claim that musicians are shaped by their musical training regimes. For example, many musician advantages in the neural encoding of sound, auditory perception, and auditory-cognitive skills correlate with the extent of musical training, are not observed in young children just initiating musical training, and differ based on the type of training pursued. Even amidst innate characteristics that contribute to the biological building blocks that make up the musician, musicians demonstrate further training-related enhancements through extensive education and practice. We conclude by reviewing evidence from neurobiological and epigenetic approaches to frame biological markers of musicianship in the context of interactions between genetic and experience-related factors.
