Results 1 - 6 of 6
1.
Neuroimage; 185: 96-101, 2019 Jan 15.
Article in English | MEDLINE | ID: mdl-30336253

ABSTRACT

Neural activity phase-locks to rhythm in both music and speech. However, the literature currently lacks a direct test of whether cortical tracking of rhythmic structure is comparable across domains. Moreover, although musical training improves multiple aspects of music and speech perception, the relationship between musical training and cortical tracking of rhythm has not been compared directly across domains. We recorded electroencephalograms (EEG) from 28 participants (14 female) with a range of musical training who listened to melodies and sentences with identical rhythmic structure. We compared cerebral-acoustic coherence (CACoh) between the EEG signal and single-trial stimulus envelopes (as a measure of cortical entrainment) across domains and correlated years of musical training with CACoh. We hypothesized that neural activity would be comparably phase-locked across domains, and that the amount of musical training would be associated with increasingly strong phase locking in both domains. We found that participants with only a few years of musical training had a comparable cortical response to music and speech rhythm, partially supporting the hypothesis. However, the cortical response to music rhythm increased with years of musical training while the response to speech rhythm did not, leading to an overall greater cortical response to music rhythm across all participants. We suggest that task demands shaped the asymmetric cortical tracking across domains.
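As an illustration of the core measure, the sketch below computes magnitude-squared coherence between one EEG channel and a stimulus amplitude envelope in Python; the sampling rate, frequency band, and placeholder data are assumptions for illustration, not the study's actual pipeline or parameters.

```python
# Minimal sketch: coherence between one EEG channel and a stimulus envelope.
# Sampling rate, frequency band, and data are illustrative assumptions.
import numpy as np
from scipy.signal import hilbert, coherence

fs = 250                                     # assumed common sampling rate (Hz)
rng = np.random.default_rng(0)
eeg = rng.standard_normal(fs * 30)           # placeholder: 30 s of one EEG channel
stimulus = rng.standard_normal(fs * 30)      # placeholder: 30 s of the acoustic waveform

# Amplitude envelope of the stimulus via the Hilbert transform
envelope = np.abs(hilbert(stimulus))

# Magnitude-squared coherence between EEG and envelope
f, cacoh = coherence(eeg, envelope, fs=fs, nperseg=fs * 4)

# Average coherence in a low-frequency band where speech/music rhythm lives (assumed 1-4 Hz)
band = (f >= 1) & (f <= 4)
print(f"Mean coherence 1-4 Hz: {cacoh[band].mean():.3f}")
```

Across participants, the band-averaged coherence could then be correlated with years of musical training (e.g., with a Pearson correlation), which is the kind of brain-behavior relationship the abstract describes.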


Subjects
Cerebral Cortex/physiology, Music, Pitch Perception/physiology, Speech Perception/physiology, Adult, Brain Mapping/methods, Electroencephalography/methods, Female, Humans, Male, Young Adult
2.
Sci Rep; 14(1): 1135, 2024 Jan 11.
Article in English | MEDLINE | ID: mdl-38212632

ABSTRACT

Humans can easily extract the rhythm of a complex sound, like music, and move to its regular beat, as in dance. These abilities are modulated by musical training and vary significantly in untrained individuals. The causes of this variability are multidimensional and typically hard to grasp in single tasks. To date, we lack a comprehensive model capturing the rhythmic fingerprints of both musicians and non-musicians. Here we harnessed machine learning to extract a parsimonious model of rhythmic abilities, based on behavioral testing (with perceptual and motor tasks) of individuals with and without formal musical training (n = 79). We demonstrate that variability in rhythmic abilities and their link with formal and informal music experience can be successfully captured by profiles including a minimal set of behavioral measures. These findings highlight that machine learning techniques can be employed successfully to distill profiles of rhythmic abilities, and ultimately shed light on individual variability and its relationship with both formal musical training and informal musical experiences.
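The abstract does not name the specific algorithm, so the sketch below illustrates one plausible approach: a cross-validated, L1-regularized logistic regression that retains only the behavioral measures needed to separate formally trained from untrained participants. The feature names and data are hypothetical placeholders, not the study's variables.

```python
# Sketch of one way a minimal behavioral profile could be distilled:
# sparse (L1) logistic regression keeps only the task measures that
# help separate musicians from non-musicians. Features and data are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 79                                   # sample size reported in the abstract
feature_names = ["tapping_consistency", "beat_alignment", "rhythm_discrimination",
                 "tempo_adaptation", "meter_perception"]   # hypothetical task scores
X = rng.standard_normal((n, len(feature_names)))
y = rng.integers(0, 2, n)                # 1 = formally trained, 0 = untrained (placeholder)

model = make_pipeline(
    StandardScaler(),
    LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=10, cv=5),
)
model.fit(X, y)

coefs = model.named_steps["logisticregressioncv"].coef_.ravel()
kept = [name for name, c in zip(feature_names, coefs) if abs(c) > 1e-6]
print("Measures retained in the sparse profile:", kept)
```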


Subjects
Dance, Music, Humans, Auditory Perception, Sound
3.
JMIR Res Protoc; 12: e40034, 2023 Mar 10.
Article in English | MEDLINE | ID: mdl-36897643

ABSTRACT

BACKGROUND: Postoperative patients who previously engaged in the live musical intervention Meaningful Music in Healthcare reported significantly lower pain perception than patients without the intervention. This encouraging finding indicates a potential for postsurgical musical interventions to have a place in standard care as therapeutic pain relief. However, live music is logistically complex in hospital settings, and previous studies have reported that the more cost-effective alternative of recorded music serves a similar pain-reducing function in postsurgical patients. Moreover, little is known about the potential underlying physiological mechanisms that may be responsible for the reduced pain perceived by patients after the live music intervention. OBJECTIVE: The primary objective is to determine whether a live music intervention can significantly lower perceived postoperative pain compared to a recorded music intervention and a do-nothing control. The secondary objective is to explore the neuroinflammatory underpinnings of postoperative pain and the potential role of a music intervention in mitigating neuroinflammation. METHODS: This intervention study will compare subjective postsurgical pain ratings among 3 groups: live music intervention, recorded music intervention, and standard care control. The design will take the form of an on-off nonrandomized controlled trial. Adult patients undergoing elective surgery will be invited to participate. The intervention is a daily music session of up to 30 minutes for a maximum of 5 days. The live music intervention group is visited by professional musicians once a day for 15 minutes and will be asked to interact. The recorded music active control group receives 15 minutes of preselected music over headphones. The do-nothing group receives typical postsurgical care that does not include music. RESULTS: At study completion, we will have an empirical indication of whether live music or recorded music has a significant impact on postoperative perceived pain. We hypothesize that the live music intervention will have more impact than recorded music, but that both will reduce perceived pain more than care as usual. We will moreover have preliminary evidence of the physiological underpinnings responsible for reduced perceived pain during a music intervention, from which hypotheses for future research may be derived. CONCLUSIONS: Live music can provide relief from pain experienced by patients recovering from surgery; however, it is not known to what degree live music improves the patients' pain experience compared with the logistically simpler alternative of recorded music. Upon completion, this study will be able to statistically compare live versus recorded music. It will moreover provide insight into the neurophysiological mechanisms involved in reduced pain perception as a result of postoperative music listening. TRIAL REGISTRATION: The Netherlands Central Commission on Human Research NL76900.042.21; https://www.toetsingonline.nl/to/ccmo_search.nsf/fABRpop?readform&unids=F2CA4A88E6040A45C1258791001AEA44. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): PRR1-10.2196/40034.

4.
Trends Hear; 27: 23312165221141142, 2023.
Article in English | MEDLINE | ID: mdl-36628512

ABSTRACT

While previous research investigating music emotion perception of cochlear implant (CI) users observed that temporal cues informing tempo largely convey emotional arousal (relaxing/stimulating), it remains unclear how other properties of the temporal content may contribute to the transmission of arousal features. Moreover, while detailed spectral information related to pitch and harmony in music (often not well perceived by CI users) reportedly conveys emotional valence (positive, negative), it remains unclear how the quality of spectral content contributes to valence perception. Therefore, the current study used vocoders to vary the temporal and spectral content of music and tested music emotion categorization (joy, fear, serenity, sadness) in 23 normal-hearing participants. Vocoders were varied with two carriers (sinewave or noise; primarily modulating temporal information) and two filter orders (low or high; primarily modulating spectral information). Results indicated that emotion categorization was above chance in vocoded excerpts but poorer than in a non-vocoded control condition. Among vocoded conditions, better temporal content (sinewave carriers) improved emotion categorization with a large effect, while better spectral content (high filter order) improved it with a small effect. Arousal features were comparably transmitted in non-vocoded and vocoded conditions, indicating that even lower-quality temporal content successfully conveyed emotional arousal. Valence feature transmission declined steeply in vocoded conditions, revealing that valence perception was difficult with both lower- and higher-quality spectral content. The reliance on arousal information for emotion categorization of vocoded music suggests that efforts to refine temporal cues in the signal delivered to CI users may immediately benefit their music emotion perception.
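For readers unfamiliar with the manipulation, the sketch below implements a generic channel vocoder of the kind described: the signal is split into bands, each band's temporal envelope is extracted and re-imposed on either a sine or a noise carrier, and the band filter order controls how much spectral detail survives. The channel count, band edges, and filter orders are illustrative assumptions, not the study's parameters.

```python
# Generic channel vocoder sketch: band-split, extract envelopes, remodulate a carrier.
# Channel count, band edges, and filter orders are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def vocode(x, fs, edges, carrier="sine", order=4):
    """Vocode x with bands defined by edges (Hz); higher order = sharper spectral detail."""
    rng = np.random.default_rng(0)
    out = np.zeros_like(x, dtype=float)
    t = np.arange(len(x)) / fs
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))                            # temporal envelope of this band
        if carrier == "sine":
            c = np.sin(2 * np.pi * np.sqrt(lo * hi) * t)       # tone at geometric centre frequency
        else:
            c = sosfiltfilt(sos, rng.standard_normal(len(x)))  # band-limited noise carrier
        out += env * c
    return out

fs = 16000
x = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)               # placeholder 1 s input signal
edges = np.geomspace(100, 7000, 9)                             # 8 assumed analysis channels
y = vocode(x, fs, edges, carrier="sine", order=4)
```

In this framing, switching `carrier` between "sine" and "noise" mainly changes how faithfully temporal fine detail is carried, while raising or lowering `order` mainly changes spectral resolution, mirroring the two factors crossed in the study.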


Subjects
Cochlear Implantation, Cochlear Implants, Music, Humans, Auditory Perception, Emotions
5.
Front Aging Neurosci; 14: 806439, 2022.
Article in English | MEDLINE | ID: mdl-35645774

ABSTRACT

During the normal course of aging, perception of speech-on-speech or "cocktail party" speech and use of working memory (WM) abilities change. Musical training, which is a complex activity that integrates multiple sensory modalities and higher-order cognitive functions, reportedly benefits both WM performance and speech-on-speech perception in older adults. This mini-review explores the relationship between musical training, WM and speech-on-speech perception in older age (> 65 years) through the lens of the Ease of Language Understanding (ELU) model. Linking neural-oscillation literature associating speech-on-speech perception and WM with alpha-theta oscillatory activity, we propose that two stages of speech-on-speech processing in the ELU are underpinned by WM-related alpha-theta oscillatory activity, and that effects of musical training on speech-on-speech perception may be reflected in these frequency bands among older adults.

6.
Front Psychol; 10: 1357, 2019.
Article in English | MEDLINE | ID: mdl-31275197

ABSTRACT

Relative clauses modify a preceding element, but as this element can be flexibly located, the point of attachment is sometimes ambiguous. Preference for this attachment can vary within languages such as German, yet explanations for differences in attachment preference related to cognitive strategies or constraints have been conflicting in the current literature. The present study aimed to assess relative clause attachment preferences among German listeners and whether these preferences could be explained by strategy or by individual differences in working memory or musical rhythm ability. We performed a sentence completion experiment, conducted post hoc interviews, and measured working memory and rhythm abilities with diagnostic tests. German listeners had no homogeneous attachment preference, although participants consistently completed individual sentences across trials according to the general preference that they reported offline. Differences in attachment preference were moreover not linked to individual differences in either working memory or musical rhythm ability. However, the pragmatic content of individual sentences sometimes overrode the general syntactic preference in participants with lower rhythm ability. Our study makes an important contribution to the field of psycholinguistics by validating offline self-reports as a reliable diagnostic for an individual's online relative clause attachment preference. The link between pragmatic strategy and rhythm ability is an interesting direction for future research.
