Results 1-20 of 18,368
1.
Sci Rep; 14(1): 21313, 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39266561

ABSTRACT

Extensive research with musicians has shown that instrumental musical training can have a profound impact on how acoustic features are processed in the brain. However, less is known about the influence of singing training on neural activity during voice perception, particularly in response to salient acoustic features, such as the vocal vibrato in operatic singing. To address this gap, the present study employed functional magnetic resonance imaging (fMRI) to measure brain responses in trained opera singers and musically untrained controls listening to recordings of opera singers performing in two distinct styles: a full operatic voice with vibrato, and a straight voice without vibrato. Results indicated that for opera singers, perception of operatic voice led to differential fMRI activations in bilateral auditory cortical regions and the default mode network. In contrast, musically untrained controls exhibited differences only in bilateral auditory cortex. These results suggest that operatic singing training triggers experience-dependent neural changes in the brain that activate self-referential networks, possibly through embodiment of acoustic features associated with one's own singing style.
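
The operatic-vs-straight contrast described above is a standard first-level fMRI analysis that can be sketched with nilearn. The snippet below is a generic illustration rather than the authors' pipeline; the file name, TR, smoothing kernel, and event timings are placeholder assumptions.

```python
# Minimal sketch of a condition contrast (vibrato vs. straight voice) in nilearn.
# "sub01_bold.nii.gz" and all timing values are hypothetical placeholders.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

events = pd.DataFrame({
    "onset":      [0.0, 12.0, 24.0, 36.0],        # stimulus onsets (s)
    "duration":   [8.0, 8.0, 8.0, 8.0],
    "trial_type": ["operatic", "straight", "operatic", "straight"],
})

model = FirstLevelModel(t_r=2.0, hrf_model="spm", smoothing_fwhm=6.0)
model = model.fit("sub01_bold.nii.gz", events=events)

# z-map for voxels responding more to the operatic (vibrato) voice
z_map = model.compute_contrast("operatic - straight", output_type="z_score")
z_map.to_filename("operatic_vs_straight_zmap.nii.gz")
```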


Subjects
Magnetic Resonance Imaging, Singing, Humans, Singing/physiology, Male, Female, Adult, Young Adult, Auditory Perception/physiology, Music, Default Mode Network/physiology, Auditory Cortex/physiology, Auditory Cortex/diagnostic imaging, Voice/physiology, Brain Mapping, Brain/physiology, Brain/diagnostic imaging
2.
Sensors (Basel); 24(17), 2024 Aug 31.
Article in English | MEDLINE | ID: mdl-39275592

ABSTRACT

Most existing intelligent editing tools for music and video rely on cross-modal matching based on affective consistency or on the similarity of feature representations. However, these methods do not transfer well to complex audiovisual matching scenarios: ambiguous matching rules and confounding factors lead to low matching accuracy and suboptimal perceptual effects for audiences. To address these limitations, this paper considers both the similarity and the integration of affective distributions in artistic audiovisual works combining film and television video with music. Building on rich emotional perception elements, we propose a hybrid matching model based on canonical correlation analysis (CCA) of features and fine-grained affective similarity. The model refines kernel CCA (KCCA) fusion features by analyzing both matched and unmatched music-video pairs. It then employs XGBoost to predict relevance and computes similarity from fine-grained affective semantic distance as well as affective factor distance. The final matching prediction is obtained through weight allocation. Experimental results on a self-built dataset demonstrate that the proposed affective matching model balances feature parameters and affective semantic cognition, yielding relatively high prediction accuracy and a better subjective experience of audiovisual association. This work helps clarify the affective association mechanisms of audiovisual objects from a sensory perspective and improve related intelligent tools, thereby offering a novel technical approach to retrieval and matching in music-video editing.
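
A minimal sketch of the hybrid matching idea follows: canonical correlation features fused with an affective-distance term and scored by XGBoost. Linear CCA from scikit-learn stands in for the paper's KCCA (scikit-learn ships no kernel CCA), all data is synthetic, and the 0.6/0.4 weight allocation is an assumption.

```python
# Sketch: CCA projections + affective distance, fused by XGBoost, then a
# weighted blend of relevance and similarity. Synthetic stand-in data only.
import numpy as np
from sklearn.cross_decomposition import CCA
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n, d_music, d_video = 400, 32, 48
music = rng.normal(size=(n, d_music))          # music feature vectors
video = rng.normal(size=(n, d_video))          # video feature vectors
matched = rng.integers(0, 2, size=n)           # 1 = matched pair, 0 = unmatched

cca = CCA(n_components=8).fit(music, video)
m_c, v_c = cca.transform(music, video)         # canonical projections

affect_dist = np.linalg.norm(m_c - v_c, axis=1, keepdims=True)  # crude affective distance
features = np.hstack([m_c, v_c, affect_dist])

clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(features, matched)

relevance = clf.predict_proba(features)[:, 1]
similarity = 1.0 / (1.0 + affect_dist.ravel())
w = 0.6                                        # weight allocation (assumed)
match_score = w * relevance + (1 - w) * similarity
```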


Subjects
Emotions, Music, Humans, Emotions/physiology, Algorithms
3.
Sensors (Basel); 24(17), 2024 Sep 5.
Article in English | MEDLINE | ID: mdl-39275694

ABSTRACT

Over the last few decades, a growing number of studies have used wearable technologies, such as inertial and pressure sensors, to investigate various domains of music experience, from performance to education. In this paper, we systematically review this body of literature using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) method. The initial search yielded a total of 359 records. After removing duplicates and screening for content, 23 records were deemed fully eligible for further analysis. Studies were grouped into four categories based on their main objective, namely performance-oriented systems, measuring physiological parameters, gesture recognition, and sensory mapping. The reviewed literature demonstrated the various ways in which wearable systems impact musical contexts, from the design of multi-sensory instruments to systems monitoring key learning parameters. Limitations also emerged, mostly related to the technology's comfort and usability, and directions for future research in wearables and music are outlined.


Subjects
Music, Wearable Electronic Devices, Humans
4.
JMIR Res Protoc; 13: e55738, 2024 Sep 13.
Article in English | MEDLINE | ID: mdl-39269750

ABSTRACT

BACKGROUND: The practice of dental surgery requires several distinct skills, including mental rotation of objects, precision of movement with good hand-eye coordination, and speed of technical movement. Learning these skills begins during the preclinical phase of dental student training. Playing a musical instrument or video games seems to promote the early development of these skills; however, studies specifically addressing this question in the field of dental education are lacking. OBJECTIVE: The main aim of this study is to evaluate whether the ability to mentally represent a volume in 3D, the precision of right- and left-hand gestures, and the speed of gesture execution are better at baseline, or progress faster, in players (of video games, music, or both). METHODS: A prospective, monocentric, controlled, longitudinal study will be conducted from September 2023 until April 2025 at the Faculty of Dental Surgery of Nantes. Participants were recruited among students before they started their preclinical training. Different tests will be used, such as Vandenberg and Kuse's mental rotation test, the modified Precision Manual Dexterity (PMD) test, and the performance of a pulpotomy on a permanent tooth. This protocol was approved by the Ethics, Deontology, and Scientific Integrity Committee of Nantes University (institutional review board approval number IORG0011023). RESULTS: A total of 86 second-year dental surgery students were enrolled in the study in September 2023. They will take part in 4 iterations of the study, the last of which will take place in April 2025. CONCLUSIONS: Playing video games, a musical instrument, or both could be a potential tool for initiating or facilitating the learning of certain technical skills in dental surgery. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/55738.


Subjects
Music, Dental Students, Video Games, Humans, Dental Students/psychology, Prospective Studies, Music/psychology, Longitudinal Studies, Dental Education/methods, Clinical Competence, Female, Male
6.
Sci Rep; 14(1): 20923, 2024 Sep 9.
Article in English | MEDLINE | ID: mdl-39251764

ABSTRACT

Does congruence between auditory and visual modalities affect aesthetic experience? While cross-modal correspondences between vision and hearing are well documented, previous studies show conflicting results regarding whether audiovisual correspondence affects subjective aesthetic experience. Here, in collaboration with the Kentler International Drawing Space (NYC, USA), we depart from previous research by using music specifically composed to pair with visual art in the professionally curated Music as Image and Metaphor exhibition. Our pre-registered online experiment consisted of 4 conditions: Audio, Visual, Audiovisual-Intended (artist-intended pairing of art/music), and Audiovisual-Random (random shuffling). Participants (N = 201) were presented with 16 pieces and could click to proceed to the next piece whenever they liked. We used time spent as an implicit index of aesthetic interest. Additionally, after each piece, participants were asked about their subjective experience (e.g., feeling moved). We found that participants spent significantly more time with Audio, followed by Audiovisual, followed by Visual pieces; however, they felt most moved in the Audiovisual (bi-modal) conditions. Ratings of audiovisual correspondence were significantly higher for the Audiovisual-Intended than for the Audiovisual-Random condition; interestingly, though, there were no significant differences between the intended and random conditions on any other subjective rating scale, or for time spent. Collectively, these results call into question the relationship between cross-modal correspondence and aesthetic appreciation, and they complicate the use of time spent as an implicit measure of aesthetic experience.


Subjects
Auditory Perception, Esthetics, Music, Visual Perception, Humans, Music/psychology, Female, Esthetics/psychology, Male, Adult, Visual Perception/physiology, Auditory Perception/physiology, Young Adult, Art, Photic Stimulation, Acoustic Stimulation, Adolescent
7.
Pan Afr Med J; 48: 24, 2024.
Article in English | MEDLINE | ID: mdl-39220561

ABSTRACT

Introduction: the objective of the study was to identify the microstate map topographies and their parameters generated during the resting state and while listening to the North Indian classical music raag 'Raag Bilawal'. It was hypothesized that the two conditions would differ in the microstate parameters, i.e., mean duration, global explained variance (GEV), and time coverage. Methods: a 128-channel electroencephalogram (EEG) was recorded from 12 Indian subjects (average age 26.1 ± 1.4 years) while resting and while listening to music, and an EEG microstate analysis was applied. The microstate parameters (mean duration, GEV, and time coverage) were investigated and compared between the two conditions. Results: seven microstate maps, four canonical and three novel, represented both the resting state and the music-listening condition. No statistically significant difference was found between the two conditions for time coverage or mean duration. The p values for map-1 through map-7 were 0.4, 0.6, 0.97, 0.34, 0.32, 0.69, and 0.29, respectively, for mean duration, and 0.92, 0.92, 0.96, 0.64, 0.78, 0.38, and 0.76, respectively, for time coverage. Map-1, map-4, and map-7 were the three novel maps found in our study. Conclusion: similarities in the stability and predominance of maps, with little vulnerability, exist in both conditions, indicating that phonological, visual, and dorsal attention networks may be activated both at rest and while listening to music.
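
The three microstate parameters compared here (mean duration, time coverage, GEV) have straightforward definitions once each EEG sample has been assigned a microstate label. A minimal sketch on synthetic labels, assuming a 250 Hz sampling rate:

```python
# Microstate parameters from a per-sample label sequence; synthetic data only.
import numpy as np

fs = 250                                   # sampling rate (Hz), assumed
rng = np.random.default_rng(1)
labels = rng.integers(0, 7, size=5000)     # microstate label per EEG sample (7 maps)
gfp = rng.uniform(0.5, 2.0, size=5000)     # global field power per sample
corr = rng.uniform(0.6, 1.0, size=5000)    # |spatial corr.| of sample with its map

def segment_durations(labels, fs):
    """Mean duration (s) of contiguous runs of each label."""
    change = np.flatnonzero(np.diff(labels)) + 1
    bounds = np.concatenate(([0], change, [len(labels)]))
    runs = {k: [] for k in np.unique(labels)}
    for a, b in zip(bounds[:-1], bounds[1:]):
        runs[labels[a]].append((b - a) / fs)
    return {k: float(np.mean(v)) for k, v in runs.items()}

mean_duration = segment_durations(labels, fs)
time_coverage = {k: float(np.mean(labels == k)) for k in np.unique(labels)}
# GEV: share of GFP-weighted variance explained by each map
gev = {k: float(np.sum((gfp[labels == k] * corr[labels == k]) ** 2) / np.sum(gfp ** 2))
       for k in np.unique(labels)}
```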


Subjects
Electroencephalography, Music, Humans, Adult, India, Male, Female, Young Adult, Auditory Perception/physiology, Time Factors, Brain/physiology
10.
Brain Behav; 14(9): e70007, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39236096

ABSTRACT

INTRODUCTION: Recent advances in artificial intelligence (AI) have been substantial. We investigated the effectiveness of an online meeting in which cognitively normal older adults (the otokai) used a music-generative AI that transforms text to music (Music Trinity Generative Algorithm-Human Refined [MusicTGA-HR]). METHODS: One hundred eighteen community-dwelling, cognitively normal older adults were recruited through the internet (64 men, 54 women; mean age: 69.4 ± 4.4 years). Using MusicTGA-HR, the participants chose music that they thought was the most suitable for a given theme. We established 11 classes of 7-10 members, each with one instructor. Each class held an online meeting once a week, at which each participant presented the music they had chosen and the other participants and the instructor commented on it. Neuropsychological assessments were performed before and after the 6-month intervention, and the results were statistically compared. RESULTS: Category and letter word fluencies (WFs) improved significantly (category WF: p = .003; letter WF: p = .036), and the time to complete the Trail-Making Test-B was also significantly shortened (p = .039). The Brain Assessment, an online cognitive test we developed, showed significant improvement in memory for numbers (p < .001). CONCLUSION: The online otokai meetings using music-generative AI improved frontal lobe function and memory in independent, cognitively normal older adults.


Subjects
Artificial Intelligence, Frontal Lobe, Music, Humans, Aged, Female, Male, Frontal Lobe/physiology, Neuropsychological Tests, Middle Aged
11.
Hum Exp Toxicol; 43: 9603271241282584, 2024.
Article in English | MEDLINE | ID: mdl-39240701

ABSTRACT

OBJECTIVE: Environmental factors such as noise and music can significantly affect physiological responses, including inflammation. This study explored how noise and music affect lipopolysaccharide (LPS)-induced inflammation, with a focus on systemic and organ-specific responses. MATERIALS AND METHODS: Twenty-four Wistar rats were divided into four groups (n = 6 per group): a control group, an LPS group, a noise-exposed group, and a music-exposed group. All rats except those in the control group received 10 mg/kg LPS intraperitoneally. Rats in the noise-exposed group were exposed to 95 dB noise, and rats in the music-exposed group listened to Mozart's K. 448 (65-75 dB), for 1 h daily over 7 days. An enzyme-linked immunosorbent assay was used to measure the levels of the inflammatory cytokines tumor necrosis factor-α (TNF-α) and interleukin-1β (IL-1β) in serum and in lung, liver, and kidney tissues. Western blotting was used to examine the phosphorylation of nuclear factor-κB (NF-κB) p65 in these tissues. RESULTS: Compared with the control group, LPS-induced sepsis rats displayed a significant increase in TNF-α and IL-1β levels in serum and in lung, liver, and kidney tissues, as well as a marked elevation of p-NF-κB p65 protein expression in lung, liver, and kidney tissues. Noise exposure further amplified these inflammatory markers, while music exposure reduced them. CONCLUSION: Noise exposure exacerbates inflammation during sepsis by activating the NF-κB pathway and up-regulating inflammatory markers. Conversely, music exposure inhibits NF-κB signaling, indicating a potential therapeutic effect in reducing inflammation.


Subjects
Lipopolysaccharides, Music, Noise, Wistar Rats, Sepsis, Animals, Lipopolysaccharides/toxicity, Sepsis/immunology, Sepsis/complications, Noise/adverse effects, Male, Interleukin-1beta/blood, Tumor Necrosis Factor-alpha/blood, Tumor Necrosis Factor-alpha/metabolism, Lung/immunology, Lung/metabolism, Inflammation, Liver/metabolism, Rats, Kidney/metabolism, NF-kappa B/metabolism, Cytokines/blood, Cytokines/metabolism
12.
JASA Express Lett; 4(9), 2024 Sep 1.
Article in English | MEDLINE | ID: mdl-39235328

ABSTRACT

Music is an important signal class for hearing aids, and musical genre is often used as a descriptor for stimulus selection. However, little research has systematically investigated the acoustical properties of musical genres with respect to hearing aid amplification. Here, extracts from a combination of two comprehensive music databases were acoustically analyzed. Considerable overlap in acoustic descriptor space between genres emerged. By simulating hearing aid processing, it was shown that effects of amplification regarding dynamic range compression and spectral weighting differed across musical genres, underlining the critical role of systematic stimulus selection for research on music and hearing aids.
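
A simple broadband dynamic range compressor, of the kind simulated when studying hearing aid amplification, illustrates why genre-dependent acoustics matter: the gain applied depends on the signal's level statistics. The sketch below is generic; the threshold, ratio, and time constants are illustrative assumptions, not values from the study.

```python
# Minimal broadband dynamic range compressor with attack/release smoothing.
import numpy as np

def compress(x, fs, threshold_db=-40.0, ratio=3.0, attack_ms=5.0, release_ms=50.0):
    """Apply a static compression curve to a smoothed level estimate."""
    eps = 1e-12
    level_db = 20 * np.log10(np.abs(x) + eps)
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = np.empty_like(x)
    prev = level_db[0]
    for i, l in enumerate(level_db):          # one-pole attack/release smoothing
        a = a_att if l > prev else a_rel
        prev = a * prev + (1 - a) * l
        env[i] = prev
    over = np.maximum(env - threshold_db, 0.0)
    gain_db = -over * (1.0 - 1.0 / ratio)     # gain reduction above threshold
    return x * 10 ** (gain_db / 20.0)

fs = 16000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 440 * t) * (t > 0.5)   # tone-burst test signal
y = compress(x, fs)
```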


Subjects
Hearing Aids, Music, Music/psychology, Humans, Acoustic Stimulation/methods, Acoustics
13.
PLoS One; 19(9): e0309601, 2024.
Article in English | MEDLINE | ID: mdl-39226287

ABSTRACT

Professionals and academics have recently placed greater emphasis on understanding the factors that improve students' psychological well-being. Students frequently face major obstacles as a result of the rigorous nature of academic life, which can lead to problems such as stress, depression, and other mental health concerns. These problems affect not only academic achievement but also students' future aspirations. This study examined the effects of music learning on students' academic performance and psychological well-being, as well as the mediating roles of self-efficacy and self-esteem. Data were collected from 326 students at Chinese universities, and structural equation modeling was applied for the empirical analysis. The findings show that music education improves students' psychological well-being, which in turn improves their academic performance; psychological health is thus a major factor in improving academic performance. Self-efficacy and self-esteem significantly mediate the relationship between music education and mental well-being. To improve students' psychological health, it is suggested that policy makers consider integrating music education into academic settings.
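
The mediation logic tested here (music learning, through a mediator such as self-efficacy, to academic performance) can be sketched as a bootstrapped indirect effect. The paper itself uses structural equation modeling, so the regression-based sketch below is a simplified stand-in on synthetic data.

```python
# Bootstrapped indirect effect a*b for a single-mediator model; synthetic data.
import numpy as np

rng = np.random.default_rng(42)
n = 326
music = rng.normal(size=n)                       # music-learning exposure
self_eff = 0.5 * music + rng.normal(size=n)      # mediator
perf = 0.4 * self_eff + 0.1 * music + rng.normal(size=n)

def slope(x, y):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    a = slope(music[idx], self_eff[idx])         # path a: X -> M
    # path b: M -> Y, controlling for X
    X = np.column_stack([np.ones(n), music[idx], self_eff[idx]])
    b = np.linalg.lstsq(X, perf[idx], rcond=None)[0][2]
    boot.append(a * b)

lo, hi = np.percentile(boot, [2.5, 97.5])        # 95% CI for the indirect effect
print(f"indirect effect CI: [{lo:.3f}, {hi:.3f}]")
```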


Subjects
Learning, Music, Self Concept, Self Efficacy, Students, Humans, Male, Female, Students/psychology, Music/psychology, Young Adult, Universities, Adult, Academic Performance/psychology, Mental Health, Adolescent
14.
Hear Res; 452: 109105, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39216335

ABSTRACT

Cochlear implant (CI) users experience diminished music enjoyment due to the technical limitations of the CI. Nonetheless, behavioral studies have reported that rhythmic features are well-transmitted through the CI. Still, the gradual improvement of rhythm perception after the CI switch-on has not yet been determined using neurophysiological measures. To fill this gap, we here reanalyzed the electroencephalographic responses of participants from two previous mismatch negativity studies. These studies included eight recently implanted CI users measured twice, within the first six weeks after CI switch-on and approximately three months later; thirteen experienced CI users with a median experience of 7 years; and fourteen normally hearing (NH) controls. All participants listened to a repetitive four-tone pattern (known in music as Alberti bass) for 35 min. Applying frequency tagging, we aimed to estimate the neural activity synchronized to the periodicities of the Alberti bass. We hypothesized that longer experience with the CI would be reflected in stronger frequency-tagged neural responses approaching the responses of NH controls. We found an increase in the frequency-tagged amplitudes after only 3 months of CI use. This increase in neural synchronization may reflect an early adaptation to the CI stimulation. Moreover, the frequency-tagged amplitudes of experienced CI users were significantly greater than those of recently implanted CI users, but still smaller than those of NH controls. The frequency-tagged neural responses did not just reflect spectrotemporal changes in the stimuli (i.e., intensity or spectral content fluctuating over time), but also showed non-linear transformations that seemed to enhance relevant periodicities of the Alberti bass. Our findings provide neurophysiological evidence indicating a gradual adaptation to the CI, which is noticeable already after three months, resulting in close to NH brain processing of spectrotemporal features of musical rhythms after extended CI use.
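
Frequency tagging boils down to reading the amplitude spectrum of the EEG at the stimulation periodicities. A minimal sketch, with an assumed sampling rate and tone rate and a synthetic signal standing in for the recordings:

```python
# Estimate response amplitude at the periodicities of a repeating 4-tone pattern.
import numpy as np

fs = 512                                   # EEG sampling rate (Hz), assumed
tone_rate = 4.0                            # tones per second, assumed
pattern_rate = tone_rate / 4               # one 4-tone pattern per second
dur = 60                                   # seconds of (synthetic) signal
t = np.arange(dur * fs) / fs
rng = np.random.default_rng(3)
eeg = (0.8 * np.sin(2 * np.pi * pattern_rate * t)
       + 0.5 * np.sin(2 * np.pi * tone_rate * t)
       + rng.normal(scale=2.0, size=t.size))

amp = np.abs(np.fft.rfft(eeg)) / t.size * 2      # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def tagged_amplitude(f):
    """Amplitude at f minus the mean of neighboring bins (noise correction)."""
    i = np.argmin(np.abs(freqs - f))
    neighbors = np.r_[amp[i - 5:i - 1], amp[i + 2:i + 6]]
    return amp[i] - neighbors.mean()

print(tagged_amplitude(pattern_rate), tagged_amplitude(tone_rate))
```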


Subjects
Acoustic Stimulation, Cochlear Implantation, Cochlear Implants, Electroencephalography, Music, Humans, Female, Male, Middle Aged, Adult, Cochlear Implantation/instrumentation, Time Factors, Case-Control Studies, Auditory Evoked Potentials, Persons With Hearing Impairments/psychology, Persons With Hearing Impairments/rehabilitation, Aged, Auditory Perception, Physiological Adaptation, Pitch Perception
15.
Radiography (Lond); 30(5): 1451-1454, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39138028

ABSTRACT

BACKGROUND: Magnetic Resonance Imaging (MRI) continues to be an imaging technique that causes anxiety and concern for many patients. One longstanding approach that helps many patients is listening to music as a distraction during a scan. However, despite improvements in scanner technology itself, the means of providing music in this way has hardly changed, so there may be an opportunity to use emerging audio technologies to extend the positive effect beyond simple distraction alone. OBJECTIVES: The aim of this technical note is to introduce spatial audio and its potential application in the MRI setting to create a sense of space within the confined environment of a scanner. FINDINGS: Initial feedback from a few patients indicates that spatial audio music may indeed enhance the perception of space within the MRI scanner, potentially mitigating feelings of claustrophobia and anxiety. CONCLUSION: Spatial audio could enhance the sense of space felt within an MRI scanner, but research is needed to establish whether it has an improved effect compared with conventional audio. IMPLICATIONS FOR PRACTICE: This technical note sets the stage for further exploration of spatial audio technology within the MRI setting to help adjust cognitive appraisal of, and concern over, its confined nature. Providing music through this means could be another approach to help patients manage a scan and improve their experience of it.


Subjects
Magnetic Resonance Imaging, Music, Humans, Magnetic Resonance Imaging/methods, Anxiety/prevention & control, Space Perception
16.
Behav Res Methods; 56(7): 8038-8056, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39103597

ABSTRACT

Early home musical environments can significantly impact sensory, cognitive, and socioemotional development. While longitudinal studies may be resource-intensive, retrospective reports are a relatively quick and inexpensive way to examine associations between early home musical environments and adult outcomes. We present the Music@Home-Retrospective scale, derived partly from the Music@Home-Preschool scale (Politimou et al., 2018), to retrospectively assess the childhood home musical environment. In two studies (total n = 578), we conducted an exploratory factor analysis (Study 1) and a confirmatory factor analysis (Study 2) on items, many adapted from the Music@Home-Preschool scale. This revealed a 20-item solution with five subscales. Items retained for three subscales (Caregiver Beliefs, Caregiver Initiation of Singing, Child Engagement with Music) load identically to three subscales in the Music@Home-Preschool scale. We also identified two additional dimensions of the childhood home musical environment. The Attitude Toward Childhood Home Musical Environment subscale captures participants' current adult attitudes toward their childhood home musical environment, and the Social Listening Contexts subscale indexes the degree to which participants listened to music at home with others (i.e., friends, siblings, and caregivers). Music@Home-Retrospective scores were related to adult self-reports of musicality, performance on a melodic perception task, and self-reports of well-being, demonstrating the scale's utility in measuring the early home musical environment. The Music@Home-Retrospective scale is freely available to enable future investigations of how the early home musical environment relates to adult cognition, affect, and behavior.
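
The exploratory step (Study 1) can be sketched with the factor_analyzer package: fit a five-factor solution over 20 items and inspect the loading matrix. The synthetic responses below only stand in for the real questionnaire data.

```python
# Exploratory factor analysis sketch: 20 items, 5 factors, oblique rotation.
import numpy as np
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(7)
n_resp, n_items, n_factors = 300, 20, 5
latent = rng.normal(size=(n_resp, n_factors))
loading = rng.uniform(0.4, 0.9, size=(n_factors, n_items))
items = latent @ loading + rng.normal(scale=0.5, size=(n_resp, n_items))

fa = FactorAnalyzer(n_factors=n_factors, rotation="oblimin")
fa.fit(items)
print(fa.loadings_.round(2))        # item-by-factor loading matrix
print(fa.get_factor_variance()[1])  # proportion of variance per factor
```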


Subjects
Music, Humans, Female, Male, Adult, Retrospective Studies, Preschool Child, Young Adult, Factor Analysis, Child, Caregivers/psychology, Adolescent, Surveys and Questionnaires, Auditory Perception/physiology
17.
J Neurosci; 44(37), 2024 Sep 11.
Article in English | MEDLINE | ID: mdl-39160067

ABSTRACT

During infancy and adolescence, language develops from predominantly interhemispheric control, through the corpus callosum (CC), to predominantly intrahemispheric control, mainly subserved by the left arcuate fasciculus (AF). Using multimodal neuroimaging, we demonstrate that human left-handers (both male and female) with an atypical language lateralization show a rightward participation of language areas, from the auditory cortex to the inferior frontal cortex, when contrasting speech with tone perception, together with enhanced interhemispheric anatomical and functional connectivity. Crucially, musicianship determines two different structural pathways to this outcome. Nonmusicians present a relation between atypical lateralization and intrahemispheric underdevelopment across the anterior AF, hinting at a dysregulation of the ontogenetic shift from an interhemispheric to an intrahemispheric brain. Musicians reveal an alternative pathway related to interhemispheric overdevelopment across the posterior CC and the auditory cortex. We discuss the heterogeneity in reaching atypical language lateralization and the relevance of early musical training in altering the normal development of language cognitive functions.
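
Atypical lateralization of the kind typed here is conventionally quantified with a laterality index, LI = (L - R) / (L + R), computed over suprathreshold activity in homologous regions. The sketch below is a generic illustration, not the authors' exact pipeline; the z threshold and ROI values are assumptions.

```python
# Generic laterality index over suprathreshold activation; synthetic z-scores.
import numpy as np

def laterality_index(left_vals, right_vals, z_thresh=3.1):
    """LI in [-1, 1]; positive = leftward, negative = rightward dominance."""
    L = np.sum(left_vals[left_vals > z_thresh])
    R = np.sum(right_vals[right_vals > z_thresh])
    return (L - R) / (L + R)

rng = np.random.default_rng(5)
left = rng.normal(3.0, 1.0, size=2000)    # z-scores, left-hemisphere ROI
right = rng.normal(2.5, 1.0, size=2000)   # z-scores, right-hemisphere ROI
li = laterality_index(left, right)
print(f"LI = {li:.2f}")                   # |LI| < 0.2 is often treated as bilateral
```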


Subjects
Functional Laterality, Music, Humans, Male, Female, Music/psychology, Adult, Functional Laterality/physiology, Young Adult, Language, Neural Pathways/physiology, Auditory Cortex/physiology, Auditory Cortex/diagnostic imaging, Corpus Callosum/physiology, Corpus Callosum/diagnostic imaging, Magnetic Resonance Imaging, Adolescent, Brain Mapping
18.
Proc Natl Acad Sci U S A; 121(36): e2319459121, 2024 Sep 3.
Article in English | MEDLINE | ID: mdl-39186645

ABSTRACT

The perception of musical phrase boundaries is a critical aspect of human musical experience: it allows us to organize, understand, derive pleasure from, and remember music. Identifying boundaries is a prerequisite for segmenting music into meaningful chunks, facilitating efficient processing and storage while providing an enjoyable, fulfilling listening experience through the anticipation of upcoming musical events. Expanding on Sridharan et al.'s [Neuron 55, 521-532 (2007)] work on coarse musical boundaries between symphonic movements, we examined finer-grained boundaries. We measured the fMRI responses of 18 musicians and 18 nonmusicians during music listening. Using a general linear model, independent component analysis, and Granger causality analysis, we observed heightened auditory integration in anticipation of musical boundaries, and an extensive decrease within the fronto-temporal-parietal network during and immediately after boundaries. Notably, responses were modulated by musicianship. These findings uncover the intricate interplay between musical structure, expertise, and cognitive processing, advancing our knowledge of how the brain makes sense of music.
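
The Granger-causality step, testing whether activity in one region forecasts activity in another around boundaries, can be sketched with statsmodels; the two "ROI" time series below are synthetic and only illustrate the direction of the test.

```python
# Granger causality sketch: does the auditory series forecast the frontal one?
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(11)
n = 500
auditory = rng.normal(size=n)
frontal = np.empty(n)
frontal[0] = rng.normal()
for t in range(1, n):                     # frontal lags auditory by one sample
    frontal[t] = 0.6 * auditory[t - 1] + rng.normal(scale=0.5)

# Column order matters: tests whether column 2 Granger-causes column 1.
data = np.column_stack([frontal, auditory])
results = grangercausalitytests(data, maxlag=3)
```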


Subjects
Auditory Perception, Brain, Magnetic Resonance Imaging, Music, Humans, Music/psychology, Auditory Perception/physiology, Male, Adult, Female, Brain/physiology, Brain/diagnostic imaging, Brain Mapping/methods, Young Adult, Acoustic Stimulation
19.
J Med Internet Res; 26: e54186, 2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39190917

ABSTRACT

BACKGROUND: Music has long been identified as a nonpharmacological tool that can provide benefits for people with dementia, and there is considerable interest in designing technologies to support the use of music in dementia care. However, to ensure that music technologies are appropriately designed for supporting caregivers and people living with dementia, there remains a need to better understand how music is currently used in everyday dementia care at home. OBJECTIVE: This study aims to understand how people living with dementia and their caregivers use music and music technologies in everyday caring, as well as the challenges they experience using music and technology. METHODS: This study used a mixed methods design. First, a survey was administered to 13 people living with dementia and 64 caregivers to understand their use of music and technology. Subsequently, 18 survey respondents (family caregivers: n=12, 67%; people living with dementia: n=6, 33%) participated in focus groups regarding their experiences of using music and technology in care. Interview transcripts were analyzed using reflexive thematic analysis. RESULTS: Most of the survey respondents (people living with dementia: 9/13, 69%; family caregivers: 47/63, 75%) reported using music often or very often in their daily lives. Participants reported a range of technologies used for listening to music, such as CDs, radio, and streaming services. Focus groups highlighted the benefits and challenges of using music and music technologies in everyday care. Participants identified using music and music technologies to regulate mood, provide joy, facilitate social interaction and connection, encourage reminiscence, provide continuity of music use before and after the dementia diagnosis, and make caregiving easier. The challenges of using music technology in everyday caring included difficulties with staying up to date with evolving technology and low self-efficacy with technology for people living with dementia. CONCLUSIONS: This study shows that people with a dementia diagnosis and their caregivers already use music and music technologies to support their everyday care needs. The results suggest opportunities to design technologies that enable easier access to music and to support people living with dementia with recreational and therapeutic music listening as well as music-based activities.


Subjects
Caregivers, Dementia, Focus Groups, Music, Humans, Dementia/psychology, Caregivers/psychology, Music/psychology, Female, Male, Aged, Surveys and Questionnaires, Music Therapy/methods, Middle Aged, Aged 80 and Over
20.
Cereb Cortex; 34(8), 2024 Aug 1.
Article in English | MEDLINE | ID: mdl-39110413

ABSTRACT

Music is a non-verbal human language, built on logical, hierarchical structures, that offers excellent opportunities to explore how the brain processes complex spatiotemporal auditory sequences. Using the high temporal resolution of magnetoencephalography, we investigated the unfolding brain dynamics of 70 participants during the recognition of previously memorized musical sequences compared to novel sequences matched in terms of entropy and information content. Measures of both whole-brain activity and functional connectivity revealed a widespread brain network underlying the recognition of the memorized auditory sequences, which comprised primary auditory cortex, superior temporal gyrus, insula, frontal operculum, cingulate gyrus, orbitofrontal cortex, basal ganglia, thalamus, and hippocampus. Furthermore, while the auditory cortex responded mainly to the first tones of the sequences, the activity of higher-order brain areas such as the cingulate gyrus, frontal operculum, hippocampus, and orbitofrontal cortex largely increased over time during the recognition of the memorized versus novel musical sequences. In conclusion, using a wide range of analytical techniques spanning from decoding to functional connectivity and building on previous works, our study provided new insights into the spatiotemporal whole-brain mechanisms for conscious recognition of auditory sequences.


Subjects
Auditory Perception, Brain, Magnetoencephalography, Music, Humans, Male, Female, Adult, Magnetoencephalography/methods, Auditory Perception/physiology, Young Adult, Brain/physiology, Recognition (Psychology)/physiology, Brain Mapping/methods, Nerve Net/physiology, Nerve Net/diagnostic imaging, Acoustic Stimulation/methods