Results 1 - 20 of 207
1.
Cereb Cortex; 34(4), 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38679480

ABSTRACT

Existing neuroimaging studies on neural correlates of musical familiarity often employ a familiar vs. unfamiliar contrast analysis. This singular analytical approach reveals associations between explicit musical memory and musical familiarity. However, is the neural activity associated with musical familiarity solely related to explicit musical memory, or could it also be related to implicit musical memory? To address this, we presented 130 song excerpts of varying familiarity to 21 participants. While acquiring their brain activity using functional magnetic resonance imaging (fMRI), we asked the participants to rate the familiarity of each song on a five-point scale. To comprehensively analyze the neural correlates of musical familiarity, we examined it from four perspectives: the intensity of local neural activity, patterns of local neural activity, global neural activity patterns, and functional connectivity. The results from these four approaches were consistent and revealed that musical familiarity is related to the activity of both explicit and implicit musical memory networks. Our findings suggest that: (1) musical familiarity is also associated with implicit musical memory, and (2) there is a cooperative and competitive interaction between the two types of musical memory in the perception of music.
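Of the four approaches, functional connectivity is the most readily illustrated: it is commonly estimated as the Pearson correlation between regional time series. A minimal pure-Python sketch with invented ROI values (the data and variable names are hypothetical, not taken from the study):

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Hypothetical BOLD time series for two regions of interest
roi_a = [0.1, 0.4, 0.3, 0.8, 0.6, 0.9]
roi_b = [0.2, 0.5, 0.2, 0.9, 0.5, 1.0]
connectivity = pearson(roi_a, roi_b)  # high positive coupling
```

In practice such correlations are computed for every pair of regions to form a connectivity matrix.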


Subjects
Brain Mapping; Brain; Magnetic Resonance Imaging; Music; Recognition, Psychology; Humans; Music/psychology; Recognition, Psychology/physiology; Male; Female; Young Adult; Adult; Brain/physiology; Brain/diagnostic imaging; Brain Mapping/methods; Auditory Perception/physiology; Acoustic Stimulation/methods
2.
Hum Brain Mapp; 45(10): e26724, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39001584

ABSTRACT

Music is ubiquitous, both in its instrumental and vocal forms. While speech perception at birth has been at the core of an extensive corpus of research, the origins of the ability to discriminate instrumental or vocal melodies are still not well investigated. In previous studies comparing vocal and musical perception, the vocal stimuli were mainly related to speaking, including language, and not to the non-language singing voice. In the present study, to better compare a melodic instrumental line with the voice, we used singing as the comparison stimulus, reducing the dissimilarities between the two stimuli as much as possible and separating language perception from vocal musical perception. Forty-five newborns were scanned, 10 full-term infants and 35 preterm infants at term-equivalent age (mean gestational age at test = 40.17 weeks, SD = 0.44), using functional magnetic resonance imaging while they listened to five melodies played by a musical instrument (flute) or sung by a female voice. To examine dynamic task-based effective connectivity, we employed a psychophysiological interaction of co-activation patterns (PPI-CAPs) analysis, using the auditory cortices as seed region, to investigate moment-to-moment changes in task-driven modulation of cortical activity during the fMRI task. Our findings reveal condition-specific, dynamically occurring patterns of co-activation (PPI-CAPs). During the vocal condition, the auditory cortex co-activates with the sensorimotor and salience networks, while during the instrumental condition, it co-activates with the visual cortex and the superior frontal cortex. These results show that the vocal stimulus elicits sensorimotor aspects of auditory perception and is processed as the more salient stimulus, whereas the instrumental condition engages higher-order cognitive and visuo-spatial networks. Common neural signatures for both auditory stimuli were found in the precuneus and posterior cingulate gyrus.
Finally, this study adds knowledge on the dynamic brain connectivity underlying newborns' capability for early and specialized auditory processing, highlighting the relevance of dynamic approaches to studying brain function in newborn populations.


Subjects
Auditory Perception; Magnetic Resonance Imaging; Music; Humans; Female; Male; Auditory Perception/physiology; Infant, Newborn; Singing/physiology; Infant, Premature/physiology; Brain Mapping; Acoustic Stimulation; Brain/physiology; Brain/diagnostic imaging; Voice/physiology
3.
Exp Brain Res; 242(9): 2207-2217, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39012473

ABSTRACT

Music is based on various regularities, ranging from the repetition of physical sounds to theoretically organized harmony and counterpoint. How are multidimensional regularities processed when we listen to music? The present study focuses on the redundant signals effect (RSE) as a novel approach to untangling the relationship between these regularities in music. The RSE refers to the occurrence of a shorter reaction time (RT) when two or three signals are presented simultaneously than when only one of these signals is presented, and provides evidence that these signals are processed concurrently. In two experiments, chords that deviated from tonal (harmonic) and acoustic (intensity and timbre) regularities were presented occasionally in the final position of short chord sequences. The participants were asked to detect all deviant chords while withholding their responses to non-deviant chords (i.e., a Go/NoGo task). RSEs were observed in all double- and triple-deviant combinations, reflecting concurrent processing of multidimensional regularities. Further analyses suggested evidence of coactivation by separate perceptual modules in the combination of tonal and acoustic deviants, but not in the combination of two acoustic deviants. These results imply that tonal and acoustic regularities are distinct enough to be processed as discrete pieces of information. Examining the processes underlying the RSE may help elucidate how the multiple dimensions of regularity in music are processed in relation to one another.
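At its core, the RSE is a comparison of mean reaction times across single- and multiple-deviant conditions. A minimal sketch with invented RT values (not the study's data):

```python
def mean_rt(rts):
    """Mean reaction time of a list of trials (ms)."""
    return sum(rts) / len(rts)

# Hypothetical reaction times (ms) for single- and double-deviant trials
rt_tonal_only = [520, 540, 510, 560]
rt_timbre_only = [530, 550, 525, 545]
rt_double = [470, 490, 465, 485]  # tonal + timbre deviant together

# A redundant signals effect: faster responses to the combined deviant
# than to either deviant presented alone
rse_present = mean_rt(rt_double) < min(mean_rt(rt_tonal_only),
                                       mean_rt(rt_timbre_only))
```

Distinguishing coactivation from a mere statistical race additionally requires testing the full RT distributions (e.g., against the race-model inequality), not just the means.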


Subjects
Acoustic Stimulation; Auditory Perception; Music; Reaction Time; Humans; Female; Male; Reaction Time/physiology; Young Adult; Adult; Acoustic Stimulation/methods; Auditory Perception/physiology
4.
Dev Sci; 27(5): e13519, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38679927

ABSTRACT

The present longitudinal study investigated the hypothesis that early musical skills (as measured by melodic and rhythmic perception and memory) predict later literacy development via a mediating effect of phonology. We examined 130 French-speaking children, 31 of whom had a familial risk for developmental dyslexia (DD). Their abilities in the three domains were assessed longitudinally with a comprehensive battery of behavioral tests in kindergarten, first grade, and second grade. Using a structural equation modeling approach, we examined potential longitudinal effects from music to literacy via phonology. We then investigated how familial risk for DD may influence these relationships by testing whether atypical music processing is a risk factor for DD. Results showed that children with a familial risk for DD consistently underperformed children without familial risk in music, phonology, and literacy. A small effect of musical ability on literacy via phonology was observed, but it may have been induced by differences in stability across domains over time. Furthermore, early musical skills did not add significant predictive power for later literacy difficulties beyond phonological skills and family risk status. These findings are consistent with the idea that certain key auditory skills are shared between music and speech processing, and between DD and congenital amusia. However, they do not support the notion that music perception and memory skills can serve as a reliable early marker of DD, nor as a valuable target for reading remediation. RESEARCH HIGHLIGHTS: Music, phonology, and literacy skills of 130 children, 31 of whom had a familial risk for dyslexia, were examined longitudinally. Children with a familial risk for dyslexia consistently underperformed children without familial risk in musical, phonological, and literacy skills.
Structural equation models showed a small effect of musical ability in kindergarten on literacy in second grade, via phonology in first grade. However, early musical skills did not add significant predictive power for later literacy difficulties beyond phonological skills and family risk status.
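The mediated path can be illustrated with ordinary least-squares slopes: in the simplest mediation setup, the indirect effect of music on literacy is the product of the music-to-phonology path and the phonology-to-literacy path. A toy sketch with invented scores (a full SEM, as used in the study, would additionally adjust each path for covariates and model latent variables):

```python
def slope(x, y):
    """OLS slope of y regressed on x (single predictor)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

# Hypothetical standardized scores for five children
music     = [0.0, 1.0, 2.0, 3.0, 4.0]   # kindergarten musical skills
phonology = [0.1, 0.9, 2.1, 2.9, 4.0]   # first-grade phonology
literacy  = [0.2, 1.1, 1.9, 3.1, 3.9]   # second-grade literacy

a = slope(music, phonology)      # music -> phonology path
b = slope(phonology, literacy)   # phonology -> literacy path
indirect_effect = a * b          # mediated (indirect) effect of music on literacy
```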


Subjects
Dyslexia; Music; Humans; Dyslexia/genetics; Dyslexia/physiopathology; Longitudinal Studies; Child; Male; Female; Risk Factors; Reading; Child, Preschool; Auditory Perception/physiology
5.
Eur Arch Otorhinolaryngol; 281(7): 3475-3482, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38194096

ABSTRACT

PURPOSE: This study aimed to investigate the effects of low frequency (LF) pitch perception on speech-in-noise and music perception performance by children with cochlear implants (CIC) and typical hearing (THC). Moreover, the relationships between speech-in-noise and music perception as well as the effects of demographic and audiological factors on present research outcomes were studied. METHODS: The sample consisted of 22 CIC and 20 THC (7-10 years). Harmonic intonation (HI) and disharmonic intonation (DI) tests were used to assess LF pitch perception. Speech perception in quiet (WRSq)/noise (WRSn + 10) were tested with the Italian bisyllabic words for pediatric populations. The Gordon test was used to evaluate music perception (rhythm, melody, harmony, and overall). RESULTS: CIC/THC performance comparisons for LF pitch, speech-in-noise, and all music measures except harmony revealed statistically significant differences with large effect sizes. For the CI group, HI showed statistically significant correlations with melody discrimination. Melody/total Gordon scores were significantly correlated with WRSn + 10. For the overall group, HI/DI showed significant correlations with all music perception measures and WRSn + 10. Hearing thresholds showed significant effects on HI/DI scores. Hearing thresholds and WRSn + 10 scores were significantly correlated; both revealed significant effects on all music perception scores. CI age had significant effects on WRSn + 10, harmony, and total Gordon scores (p < 0.05). CONCLUSION: Such findings confirmed the significant effects of LF pitch perception on complex listening performance. Significant speech-in-noise and music perception correlations were as promising as results from recent studies indicating significant positive effects of music training on speech-in-noise recognition in CIC.


Subjects
Cochlear Implants; Music; Noise; Pitch Perception; Speech Perception; Humans; Child; Male; Female; Speech Perception/physiology; Pitch Perception/physiology; Cochlear Implantation
6.
Behav Res Methods; 56(3): 1968-1983, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37221344

ABSTRACT

We describe the development and validation of a test battery to assess musical ability that taps into a broad range of music perception skills and can be administered in 10 minutes or less. In Study 1, we derived four very brief versions from the Profile of Music Perception Skills (PROMS) and examined their properties in a sample of 280 participants. In Study 2 (N = 109), we administered the version retained from Study 1, termed Micro-PROMS, together with the full-length PROMS, finding a short-to-long-form correlation of r = .72. In Study 3 (N = 198), we removed redundant trials and examined test-retest reliability as well as convergent, discriminant, and criterion validity. Results showed adequate internal consistency (mean omega = .73) and test-retest reliability (ICC = .83). Findings supported convergent validity of the Micro-PROMS (r = .59 with the MET, p < .01) as well as discriminant validity with short-term and working memory (r ≲ .20). Criterion-related validity was evidenced by significant correlations of the Micro-PROMS with external indicators of musical proficiency (mean r = .37, ps < .01), and with Gold-MSI General Musical Sophistication (r = .51, p < .01). By virtue of its brevity, psychometric qualities, and suitability for online administration, the battery fills a gap in the tools available to objectively assess musical ability.
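Test-retest reliability can be approximated by correlating the same participants' scores across two sessions; the paper reports an ICC, which additionally penalizes systematic mean shifts between sessions. A minimal sketch with invented scores:

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Hypothetical Micro-PROMS total scores for six participants
session1 = [21, 25, 30, 18, 27, 22]  # first administration
session2 = [20, 26, 29, 17, 28, 23]  # retest, same participants
test_retest_r = pearson(session1, session2)
```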


Subjects
Music; Humans; Reproducibility of Results; Data Accuracy; Psychometrics; Test Taking Skills
7.
Dev Sci; 26(5): e13378, 2023 Sep.
Article in English | MEDLINE | ID: mdl-36876849

ABSTRACT

This study investigates infants' enculturation to music in a bicultural musical environment. We tested 49 12- to 30-month-old Korean infants on their preference for Korean or Western traditional songs played by haegeum and cello. Korean infants have access to both Korean and Western music in their environment, as captured by a survey of infants' daily exposure to music at home. Our results show that infants with less daily exposure to any kind of music at home listened longer to all music types. The infants' overall listening time did not differ between Korean and Western music and instruments. Rather, those with high exposure to Western music listened longer to Korean music played on the haegeum. Moreover, older toddlers (aged 24-30 months) maintained a longer interest in songs of an origin with which they were less familiar, indicating an emerging orientation towards novelty. The early orientation of Korean infants toward the novel experience of music listening is likely driven by perceptual curiosity, which prompts exploratory behavior that diminishes with continued exposure. On the other hand, older infants' orientation towards novel stimuli is led by epistemic curiosity, which motivates an infant to acquire new knowledge. Korean infants' lack of differential listening likely reflects their protracted period of enculturation to ambient music due to complex input. Further, older infants' novelty-orientation is consistent with findings on bilingual infants' orientation towards novel information. Additional analysis showed a long-term effect of music exposure on infants' vocabulary development. A video abstract of this article can be viewed at https://www.youtube.com/watch?v=Kllt0KA1tJk
RESEARCH HIGHLIGHTS: Korean infants showed novelty-oriented attention to music such that infants with less daily exposure to music at home showed longer listening times to music.
12- to 30-month-old Korean infants did not show differential listening to Korean versus Western music or instruments, suggesting a protracted period of perceptual openness. 24- to 30-month-old Korean toddlers' listening behavior indicated emerging novelty-preference, exhibiting delayed enculturation to ambient music compared to Western infants reported in earlier research. 18-month-old Korean infants with a greater weekly exposure to music had higher CDI scores a year later, consistent with the well-known music-to-language transfer effect.


Subjects
Music; Humans; Infant; Child, Preschool; Auditory Perception/physiology; Child Development/physiology; Language; Republic of Korea
8.
Cogn Emot; 37(2): 284-302, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36592153

ABSTRACT

The Musical Emotion Discrimination Task (MEDT) is a short, non-adaptive test of the ability to discriminate emotions in music. Test-takers hear two performances of the same melody, both played by the same performer but each trying to communicate a different basic emotion, and are asked to determine which one is "happier", for example. The goal of the current study was to construct a new version of the MEDT using a larger set of shorter, more diverse music clips and an adaptive framework to expand the ability range for which the test can deliver measurements. The first study analysed responses from a large sample of participants (N = 624) to determine how musical features contributed to item difficulty, which resulted in a quantitative model of musical emotion discrimination ability rooted in Item Response Theory (IRT). This model informed the construction of the adaptive MEDT. A second study contributed preliminary evidence for the validity and reliability of the adaptive MEDT, and demonstrated that the new version of the test is suitable for a wider range of abilities. This paper therefore presents the first adaptive musical emotion discrimination test, a new resource for investigating emotion processing which is freely available for research use.
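The adaptive logic that IRT enables can be sketched with a Rasch (1PL) model: each item has a difficulty, and a simple adaptive rule presents next the item whose difficulty lies closest to the current ability estimate, where the success probability is near 0.5 and the response is most informative. A toy sketch (the difficulties and selection rule are illustrative, not the MEDT's actual parameters):

```python
import math

def p_correct(ability, difficulty):
    """Rasch (1PL) probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def next_item(ability, difficulties):
    """Adaptive step: pick the item whose difficulty is closest to the
    current ability estimate (success probability nearest 0.5)."""
    return min(difficulties, key=lambda d: abs(d - ability))

item_bank = [-2.0, -1.0, 0.0, 1.5, 2.5]   # hypothetical item difficulties
chosen = next_item(0.8, item_bank)        # picks 1.5 (smallest gap to 0.8)
```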


Subjects
Music; Humans; Music/psychology; Auditory Perception/physiology; Reproducibility of Results; Emotions/physiology; Happiness
9.
Behav Res Methods; 2023 Nov 13.
Article in English | MEDLINE | ID: mdl-37957432

ABSTRACT

Auditory scene analysis (ASA) is the process through which the auditory system makes sense of complex acoustic environments by organising sound mixtures into meaningful events and streams. Although music psychology has acknowledged the fundamental role of ASA in shaping music perception, no efficient test to quantify listeners' ASA abilities in realistic musical scenarios has yet been published. This study presents a new tool for testing ASA abilities in the context of music, suitable for both normal-hearing (NH) and hearing-impaired (HI) individuals: the adaptive Musical Scene Analysis (MSA) test. The test uses a simple 'yes-no' task paradigm to determine whether the sound from a single target instrument is heard in a mixture of popular music. During the online calibration phase, 525 NH and 131 HI listeners were recruited. The level ratio between the target instrument and the mixture, choice of target instrument, and number of instruments in the mixture were found to be important factors affecting item difficulty, whereas the influence of the stereo width (induced by inter-aural level differences) only had a minor effect. Based on a Bayesian logistic mixed-effects model, an adaptive version of the MSA test was developed. In a subsequent validation experiment with 74 listeners (20 HI), MSA scores showed acceptable test-retest reliability and moderate correlations with other music-related tests, pure-tone-average audiograms, age, musical sophistication, and working memory capacities. The MSA test is a user-friendly and efficient open-source tool for evaluating musical ASA abilities and is suitable for profiling the effects of hearing impairment on music perception.
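Performance on a yes-no detection task like this is often summarized with the signal-detection index d' (z of the hit rate minus z of the false-alarm rate). The study itself models item difficulty with a Bayesian logistic mixed-effects model, so the following is only a complementary sketch, with invented rates:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index for a yes-no task: z(hits) - z(false alarms)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical hit and false-alarm rates for one listener
sensitivity = d_prime(hit_rate=0.85, fa_rate=0.20)
```

A listener responding at chance (hit rate equal to false-alarm rate) gets d' = 0.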

10.
Int J Psychol; 58(5): 465-475, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37248624

ABSTRACT

Musical stimuli are widely used in emotion research and intervention studies. However, reviews have repeatedly noted that a lack of pre-evaluated musical stimuli is stalling progress in our understanding of specific effects of varying music. Musical stimuli vary along a plethora of dimensions; of particular interest are emotional valence and tempo. Thus, we aimed to evaluate the emotional valence of a set of slow and fast musical stimuli. N = 102 participants (mean age: 39.95, SD: 13.60, 61% female) rated the perceived emotional valence of 20 fast (>110 beats per minute [bpm]) and 20 slow (<90 bpm) stimuli. Moreover, we collected reports of subjective arousal for each stimulus to explore arousal's association with tempo and valence. Finally, participants completed questionnaires on demographics, mood (Profile of Mood States), personality (Ten-Item Personality Inventory), musical sophistication (Gold-MSI), and sound preferences and hearing habits (Sound Preference and Hearing Habits Questionnaire). Using mixed-effect model estimates, we identified 19 stimuli that participants rated as having positive valence and 16 stimuli rated as having negative valence. Higher age predicted more positive valence ratings across stimuli. Higher tempo and more extreme valence ratings were each associated with higher arousal. Higher educational attainment was also associated with higher arousal reports. These pre-evaluated stimuli can be used in future music research.


Subjects
Music; Humans; Female; Adult; Male; Music/psychology; Emotions; Arousal; Affect; Perception; Auditory Perception
11.
Adv Exp Med Biol; 1378: 195-212, 2022.
Article in English | MEDLINE | ID: mdl-35902473

ABSTRACT

The cerebellum is involved in almost all cognitive functions related to music perception and music production, as shown by functional imaging and related techniques. In addition, lesion studies (i.e., of patients with cerebellar infarction or tumour) provide further evidence of this involvement. Distinct parts of the cerebellum have been identified for different aspects of these processing tasks, along with their individual connections to the cerebral cortex and the basal ganglia. It has been shown, for example, that cerebellar disorders impair music perception, particularly in melody-comparison and metre tasks. Initial research efforts are attempting to apply the current knowledge of the cerebellum's role in music perception to therapeutic approaches in degenerative disorders such as Alzheimer's disease.


Subjects
Brain Ischemia; Music; Basal Ganglia; Brain Ischemia/pathology; Cerebellum/pathology; Cognition; Humans
12.
Eur Arch Otorhinolaryngol; 279(8): 3821-3829, 2022 Aug.
Article in English | MEDLINE | ID: mdl-34596714

ABSTRACT

OBJECTIVE: The goal of this study was to investigate the performance correlations between music perception and speech intelligibility in noise by Italian-speaking cochlear implant (CI) users. MATERIALS AND METHODS: Twenty postlingually deafened adults with unilateral CIs (mean age 65 years, range 46-92 years) were tested with a music quality questionnaire using three passages of music from Classical Music, Jazz, and Soul. Speech recognition in noise was assessed using two newly developed adaptive tests in Italian: the Sentence Test with Adaptive Randomized Roving levels (STARR) and Matrix tests. RESULTS: Median quality ratings for Classical, Jazz and Soul music were 63%, 58% and 58%, respectively. Median SRTs for the STARR and Matrix tests were 14.3 dB and 7.6 dB, respectively. STARR performance was significantly correlated with Classical music ratings (rs = - 0.49, p = 0.029), whereas Matrix performance was significantly correlated with both Classical (rs = - 0.48, p = 0.031) and Jazz music ratings (rs = - 0.56, p = 0.011). CONCLUSION: Speech in competing noise and music are both naturally present in everyday listening environments. Recent speech perception tests based on an adaptive paradigm and sentence materials, used alongside music quality measures, may be representative of everyday performance in CI users. The present data contribute to cross-language studies and suggest that improving music perception in CI users may yield everyday benefits in speech perception in noise and may hence enhance the quality of listening for CI users.


Subjects
Cochlear Implantation; Cochlear Implants; Music; Speech Perception; Adult; Aged; Aged, 80 and over; Humans; Language; Middle Aged; Speech Intelligibility
13.
Int J Audiol; 61(12): 1045-1053, 2022 Dec.
Article in English | MEDLINE | ID: mdl-34894993

ABSTRACT

OBJECTIVE: This study aimed to adapt a sound quality measurement method, CI-MUSHRA (multiple stimuli with hidden reference and anchor for cochlear implant users), to the Turkish language. The effect of low-frequency information and non-native musical stimuli on sound quality perception was investigated. DESIGN: Subjects completed the Turkish version of the MUSHRA test, called TR-MUSHRA, and the original CI-MUSHRA test. Participants also completed the Turkish monosyllabic word recognition test and the spectral-temporally modulated ripple test (SMRT). STUDY SAMPLE: Nineteen cochlear implant (CI) users and 16 normal-hearing (NH) adults were included. RESULTS: In both tests, CI users were unable to detect sound quality differences between the original stimuli and stimuli with low-frequency information omitted up to 600 Hz. There was no significant main effect of test version on sound quality ratings for the two groups. No significant correlation was found between mean sound quality scores, SMRT, and speech recognition in quiet and noise conditions. CONCLUSIONS: Our study suggests that CI users perform poorly in discriminating high-pass filtered musical sounds regardless of the language of the musical stimuli. The TR-MUSHRA can be used as a reliable research tool to evaluate perceived sound quality.


Subjects
Cochlear Implantation; Cochlear Implants; Music; Speech Perception; Adult; Humans; Sound; Hearing
14.
J Neurosci; 40(10): 2108-2118, 2020 Mar 04.
Article in English | MEDLINE | ID: mdl-32001611

ABSTRACT

In tonal music, continuous acoustic waveforms are mapped onto discrete, hierarchically arranged, internal representations of pitch. To examine the neural dynamics underlying this transformation, we presented male and female human listeners with tones embedded within a Western tonal context while recording their cortical activity using magnetoencephalography. Machine learning classifiers were then trained to decode different tones from their underlying neural activation patterns at each peristimulus time sample, providing a dynamic measure of their dissimilarity in cortex. Comparing the time-varying dissimilarity between tones with the predictions of acoustic and perceptual models, we observed a temporal evolution in the brain's representational structure. Whereas initial dissimilarities mirrored their fundamental-frequency separation, dissimilarities beyond 200 ms reflected the perceptual status of each tone within the tonal hierarchy of Western music. These effects occurred regardless of stimulus regularities within the context or whether listeners were engaged in a task requiring explicit pitch analysis. Lastly, patterns of cortical activity that discriminated between tones became increasingly stable in time as the information coded by those patterns transitioned from low-to-high level properties. Current results reveal the dynamics with which the complex perceptual structure of Western tonal music emerges in cortex at the timescale of an individual tone.

SIGNIFICANCE STATEMENT: Little is understood about how the brain transforms an acoustic waveform into the complex perceptual structure of musical pitch. Applying neural decoding techniques to the cortical activity of human subjects engaged in music listening, we measured the dynamics of information processing in the brain on a moment-to-moment basis as subjects heard each tone. In the first 200 ms after onset, transient patterns of neural activity coded the fundamental frequency of tones.
Subsequently, a period emerged during which more temporally stable activation patterns coded the perceptual status of each tone within the "tonal hierarchy" of Western music. Our results provide a crucial link between the complex perceptual structure of tonal music and the underlying neural dynamics from which it emerges.


Subjects
Cerebral Cortex/physiology; Models, Neurological; Pitch Perception/physiology; Adult; Female; Humans; Machine Learning; Magnetoencephalography; Male
15.
Brain Cogn; 148: 105660, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33421942

ABSTRACT

Frontotemporal dementia (FTD) is a neurodegenerative disease that presents with profound changes in social cognition. Music might be a sensitive probe for social cognition abilities, but underlying neurobiological substrates are unclear. We performed a meta-analysis of voxel-based morphometry studies in FTD patients and functional MRI studies for music perception and social cognition tasks in cognitively normal controls to identify robust patterns of atrophy (FTD) or activation (music perception or social cognition). Conjunction analyses were performed to identify overlapping brain regions. In total 303 articles were included: 53 for FTD (n = 1153 patients, 42.5% female; 1337 controls, 53.8% female), 28 for music perception (n = 540, 51.8% female) and 222 for social cognition in controls (n = 5664, 50.2% female). We observed considerable overlap in atrophy patterns associated with FTD, and functional activation associated with music perception and social cognition, mostly encompassing the ventral language network. We further observed overlap across all three modalities in mesolimbic, basal forebrain and striatal regions. The results of our meta-analysis suggest that music perception and social cognition share neurobiological circuits that are affected in FTD. This supports the idea that music might be a sensitive probe for social cognition abilities with implications for diagnosis and monitoring.


Subjects
Frontotemporal Dementia; Music; Neurodegenerative Diseases; Atrophy; Female; Humans; Magnetic Resonance Imaging; Male; Social Cognition
16.
Audiol Neurootol; 26(6): 389-413, 2021.
Article in English | MEDLINE | ID: mdl-33878756

ABSTRACT

BACKGROUND: Although many clinicians have attempted music training for hearing-impaired children, no specific effects have yet been reported for individual music components. This paper seeks to identify the specific music components that help improve speech perception in children with cochlear implants (CI) and the effective training periods and methods needed for each component. METHOD: A search of five electronic databases (ScienceDirect, Scopus, PubMed, CINAHL, and Web of Science) initially yielded 1,638 articles. After the screening and eligibility assessment stage based on the Participants, Intervention, Comparisons, Outcome, and Study Design (PICOS) inclusion criteria, 18 of 1,449 articles were chosen. RESULTS: Eighteen studies were analyzed in the systematic review and 14 (209 participants) in the meta-analysis. No publication bias was detected based on an Egger's regression result, even though the funnel plot was asymmetrical. The meta-analysis revealed that the largest improvement after music training was seen for rhythm perception, followed by the perception of pitch and harmony, with the smallest for timbre perception. The duration of training affected rhythm, pitch, and harmony perception but not timbre. Interestingly, musical activities such as singing produced the biggest effect size, implying that children with CI obtained the greatest benefit from music training by singing, followed by playing an instrument, and the smallest benefit by only listening to musical stimuli. Significant improvement in pitch perception helped enhance prosody perception. CONCLUSION: Music training can improve the music perception of children with CI and enhance their speech prosody. Longer training durations provided the largest training effects on the children's perception.
The children with CI learned rhythm and pitch better than harmony and timbre. These results support past findings that music training can improve both rhythm and pitch perception, and that it also aids the development of prosody perception.


Subjects
Cochlear Implantation; Cochlear Implants; Music; Speech Perception; Humans; Pitch Perception
17.
Neuroimage; 214: 116559, 2020 Jul 01.
Article in English | MEDLINE | ID: mdl-31978543

ABSTRACT

The brain activity of multiple subjects has been shown to synchronize during salient moments of natural stimuli, suggesting that correlation of neural responses indexes a brain state operationally termed 'engagement'. While past electroencephalography (EEG) studies have considered both auditory and visual stimuli, the extent to which these results generalize to music-a temporally structured stimulus for which the brain has evolved specialized circuitry-is less understood. Here we investigated neural correlation during natural music listening by recording EEG responses from N=48 adult listeners as they heard real-world musical works, some of which were temporally disrupted through shuffling of short-term segments (measures), reversal, or randomization of phase spectra. We measured correlation between multiple neural responses (inter-subject correlation) and between neural responses and stimulus envelope fluctuations (stimulus-response correlation) in the time and frequency domains. Stimuli retaining basic musical features, such as rhythm and melody, elicited significantly higher behavioral ratings and neural correlation than did phase-scrambled controls. However, while unedited songs were self-reported as most pleasant, time-domain correlations were highest during measure-shuffled versions. Frequency-domain measures of correlation (coherence) peaked at frequencies related to the musical beat, although the magnitudes of these spectral peaks did not explain the observed temporal correlations. Our findings show that natural music evokes significant inter-subject and stimulus-response correlations, and suggest that the neural correlates of musical 'engagement' may be distinct from those of enjoyment.
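Inter-subject correlation can be computed as the mean pairwise Pearson correlation between subjects' response time series. A pure-Python sketch with invented data (the study itself correlated component projections of multichannel EEG, not raw single series):

```python
from itertools import combinations

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def inter_subject_correlation(responses):
    """Mean pairwise Pearson correlation across subjects' time series."""
    pairs = list(combinations(responses, 2))
    return sum(pearson(a, b) for a, b in pairs) / len(pairs)

# Hypothetical response time series for three listeners
subjects = [
    [0.0, 1.0, 0.5, -0.5, 0.2],
    [0.1, 0.9, 0.6, -0.4, 0.1],
    [-0.1, 1.1, 0.4, -0.6, 0.3],
]
isc = inter_subject_correlation(subjects)
```

Stimulus-response correlation follows the same recipe, correlating each subject's series with the stimulus envelope instead of with other subjects.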


Assuntos
Percepção Auditiva/fisiologia , Encéfalo/fisiologia , Música , Estimulação Acústica/métodos , Adolescente , Adulto , Mapeamento Encefálico/métodos , Eletroencefalografia/métodos , Feminino , Humanos , Masculino , Adulto Jovem
18.
Exp Brain Res ; 238(10): 2279-2291, 2020 Oct.
Artigo em Inglês | MEDLINE | ID: mdl-32725358

RESUMO

Finger-tapping tasks have been widely adopted to investigate auditory-motor synchronization, i.e., the coupling of movement with an external auditory rhythm. However, the discrete nature of these movements usually limits their application to the study of beat perception in the context of isochronous rhythms. The purpose of the present pilot study was to test an innovative task that allows the investigation of bodily responses to complex, non-isochronous rhythms. A conductor's baton was provided to 16 healthy subjects, divided into 2 groups depending on the years of musical training they had received (musicians or non-musicians). Ad hoc-created melodies, including notes of different durations, were played to the subjects. Each subject was asked to move the baton up and down according to the changes in pitch contour. Software for video analysis and modelling (Tracker®) was used to track the movement of the baton tip. The main parameters used for the analysis were the velocity peaks in the vertical axis. In the musician group, the number of velocity peaks exactly matched the number of notes, while in the non-musician group, the number of velocity peaks exceeded the number of notes. An exploratory data analysis using Poincaré plots suggested a greater degree of coupling between hand-arm movements and melody in musicians, with both isochronous and non-isochronous rhythms. The calculated root mean square error (RMSE) between the note onset times and the velocity peaks, and the analysis of the distribution of velocity peaks in relation to note onset times, confirmed the effect of musical training. Notwithstanding the small number of participants, these results suggest that this novel behavioural task could be used to investigate auditory-motor coupling in the context of music in an ecologically valid setting. Furthermore, the task may be used for rhythm training and rehabilitation in neurological patients with movement disorders.
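The two core measurements in this abstract — detecting vertical-velocity peaks of the baton tip and computing the RMSE between note onsets and velocity peaks — can be sketched as below. This is a minimal illustration, not the study's Tracker-based pipeline; the sampling rate, simulated trajectory, and peak rule are assumptions.

```python
import numpy as np

def velocity_peak_times(y, fs):
    """Times of local maxima in the vertical velocity of a tracked point.

    y: vertical position samples (e.g. baton tip), sampled at fs Hz.
    Uses a simple strict local-maximum rule on positive velocity;
    the study obtained trajectories from video with Tracker software.
    """
    v = np.gradient(y) * fs                                   # vertical velocity
    is_peak = (v[1:-1] > v[:-2]) & (v[1:-1] > v[2:]) & (v[1:-1] > 0)
    return (np.where(is_peak)[0] + 1) / fs

def onset_peak_rmse(onsets, peak_times):
    """RMSE between each note onset and its nearest velocity peak."""
    nearest = np.array([peak_times[np.argmin(np.abs(peak_times - t))]
                        for t in onsets])
    return float(np.sqrt(np.mean((nearest - onsets) ** 2)))

# Example: a baton moving sinusoidally at 1 Hz for 4 s, sampled at 100 Hz.
# Upward-velocity peaks occur at t = 0.25, 1.25, 2.25, 3.25 s.
fs = 100
t = np.arange(0, 4, 1 / fs)
y = -np.cos(2 * np.pi * 1.0 * t)
pk = velocity_peak_times(y, fs)
rmse = onset_peak_rmse(np.array([0.25, 1.25, 2.25, 3.25]), pk)
```

In the simulated case the onsets coincide with the velocity peaks, so the RMSE is near zero; a non-musician-like pattern of extra peaks would inflate both the peak count and the error.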


Assuntos
Música , Estimulação Acústica , Humanos , Movimento , Projetos Piloto
19.
Brain ; 142(7): 1973-1987, 2019 07 01.
Artigo em Inglês | MEDLINE | ID: mdl-31074775

RESUMO

Focal epilepsy is a unilateral brain network disorder, providing an ideal neuropathological model with which to study the effects of focal neural disruption on a range of cognitive processes. While language and memory functions have been extensively investigated in focal epilepsy, music cognition has received less attention, particularly in patients with music training or expertise. This represents a critical gap in the literature. A better understanding of the effects of epilepsy on music cognition may provide greater insight into the mechanisms behind disease- and training-related neuroplasticity, which may have implications for clinical practice. In this cross-sectional study, we comprehensively profiled music and non-music cognition in 107 participants; musicians with focal epilepsy (n = 35), non-musicians with focal epilepsy (n = 39), and healthy control musicians and non-musicians (n = 33). Parametric group comparisons revealed a specific impairment in verbal cognition in non-musicians with epilepsy but not musicians with epilepsy, compared to healthy musicians and non-musicians (P = 0.029). This suggests a possible neuroprotective effect of music training against the cognitive sequelae of focal epilepsy, and implicates potential training-related cognitive transfer that may be underpinned by enhancement of auditory processes primarily supported by temporo-frontal networks. Furthermore, our results showed that musicians with an earlier age of onset of music training performed better on a composite score of melodic learning and memory compared to non-musicians (P = 0.037), while late-onset musicians did not differ from non-musicians. For most composite scores of music cognition, although no significant group differences were observed, a similar trend was apparent. 
We discuss these key findings in the context of a proposed model of three interacting dimensions (disease status, music expertise, and cognitive domain), and their implications for clinical practice, music education, and music neuroscience research.


Assuntos
Percepção Auditiva/fisiologia , Cognição/fisiologia , Epilepsias Parciais/fisiopatologia , Música/psicologia , Fármacos Neuroprotetores , Comportamento Verbal/fisiologia , Estimulação Acústica , Adulto , Fatores Etários , Estudos de Casos e Controles , Estudos Transversais , Feminino , Humanos , Masculino , Pessoa de Meia-Idade , Adulto Jovem
20.
Sensors (Basel) ; 20(16)2020 Aug 13.
Artigo em Inglês | MEDLINE | ID: mdl-32823519

RESUMO

Music has been shown to be capable of improving runners' performance in treadmill and laboratory-based experiments. This paper evaluates a generative music system, namely HEARTBEATS, designed to create biosignal-synchronous music in real time according to an individual athlete's heart rate or cadence (steps per minute). The tempo, melody, and timbral features of the generated music are modulated according to biosensor input from each runner, using a combination of PPG (photoplethysmography) and GPS (Global Positioning System) from a wearable sensor synchronized via Bluetooth. We compare the relative performance of athletes listening to music with heart-rate- and cadence-synchronous tempos, across a randomized trial (N = 54) on a trail course with 76 ft of elevation. Participants were instructed to continue until their self-reported perceived effort exceeded 18 on the Borg rating of perceived exertion. We found that cadence-synchronous music improved performance and decreased perceived effort in male runners. For female runners, cadence-synchronous music improved performance, whereas heart-rate-synchronous music significantly reduced perceived effort and allowed them to run the longest of all groups tested. This work has implications for the future design and implementation of novel portable music systems and for music-assisted coaching.
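The core control loop described here — driving musical tempo from a noisy heart-rate or cadence signal — can be sketched as an exponential smoother with a clamped output range. The smoothing factor and tempo bounds below are illustrative assumptions, not parameters of the HEARTBEATS system.

```python
class TempoFollower:
    """Smooth noisy PPG/cadence readings and emit a clamped playback tempo.

    A minimal sketch of biosignal-synchronous tempo control: raw readings
    (in beats or steps per minute) are exponentially smoothed, then
    clamped to a musically usable BPM range.
    """

    def __init__(self, alpha=0.2, lo=60.0, hi=180.0):
        self.alpha = alpha        # smoothing factor (assumed value)
        self.lo, self.hi = lo, hi # tempo bounds in BPM (assumed values)
        self._ema = None

    def update(self, reading):
        """Feed one heart-rate or cadence reading; return the tempo (BPM)
        the music generator should play at."""
        if self._ema is None:
            self._ema = float(reading)
        else:
            self._ema += self.alpha * (reading - self._ema)
        return max(self.lo, min(self.hi, self._ema))

# Example: a runner's heart rate climbing from 90 to 150 bpm (hypothetical)
follower = TempoFollower()
readings = [90, 95, 110, 130, 150, 150, 150]
tempos = [follower.update(r) for r in readings]
```

Smoothing keeps the music from jittering with every beat-to-beat fluctuation, while the clamp prevents a sensor glitch from producing an unplayable tempo.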


Assuntos
Monitorização Fisiológica , Música , Corrida , Atletas , Teste de Esforço , Feminino , Frequência Cardíaca , Humanos , Masculino , Dispositivos Eletrônicos Vestíveis