Results 1 - 20 of 51
1.
J Exp Child Psychol; 191: 104711, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31770684

ABSTRACT

Effects of music on language processing have been reported separately for syntax and for semantics. Previous studies have shown that regular musical rhythms can facilitate syntax processing and that semantic features of musical excerpts can influence semantic processing of words. It remains unclear whether musical parameters, such as rhythm and sound texture, may specifically influence different components of linguistic processing. In the current study, two types of musical sequences (one focusing on rhythm and the other focusing on sound texture) were presented to children who were requested to perform a syntax task or a semantic task thereafter. The results revealed that rhythmic and textural musical sequences differently influence syntax and semantic processing. For grammaticality judgments, children's performance was better after regular rhythmic sequences than after textural sound sequences. In the semantic evocation task, children produced more numerous and more varied concepts after textural sound sequences than after regular rhythmic sequences. These results suggest that rhythm boosts the perceptual and cognitive sequencing required for syntax processing, whereas texture promotes verbalization and concept activation in verbal production. The findings have implications for the interpretation of musical priming effects and are discussed in the frameworks of dynamic attending and conceptual processing.


Subjects
Auditory Perception/physiology, Music, Psycholinguistics, Child, Female, Humans, Language Tests, Male, Semantics
2.
Pain Manag Nurs; 16(5): 664-71, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26163741

ABSTRACT

In fibromyalgia, pain symptoms such as hyperalgesia and allodynia are associated with fatigue. Mechanisms underlying such symptoms can be modulated by listening to pleasant music. We expected that listening to music, because of its emotional impact, would have a greater modulating effect on the perception of pain and fatigue in patients with fibromyalgia than listening to nonmusical sounds. To investigate this hypothesis, we carried out a 4-week study in which patients with fibromyalgia listened to either preselected musical pieces or environmental sounds when they experienced pain in active (while carrying out a physical activity) or passive (at rest) situations. Concomitant changes of pain and fatigue levels were evaluated. When patients listened to music or environmental sounds at rest, pain and fatigue levels were significantly reduced after 20 minutes of listening, with no difference in effect magnitude between the two stimuli. This improvement persisted 10 minutes after the end of the listening session. In active situations, pain did not increase in the presence of either stimulus. Contrary to our expectations, music and environmental sounds produced a similar relieving effect on pain and fatigue, with no benefit gained by listening to pleasant music over environmental sounds.


Subjects
Fatigue/therapy, Fibromyalgia/therapy, Music Therapy/methods, Pain Management, Pain, Sound, Adult, Environment, Exercise, Female, Humans, Middle Aged, Motor Activity, Pain Measurement, Rest, Sensory Art Therapies/methods, Treatment Outcome
3.
Neuropsychol Rehabil; 24(6): 894-917, 2014.
Article in English | MEDLINE | ID: mdl-24881953

ABSTRACT

Previous studies have suggested that presenting to-be-memorised lyrics in a singing mode, instead of a speaking mode, may facilitate learning and retention in normal adults. In this study, seven healthy older adults and eight participants with mild Alzheimer's disease (AD) learned and memorised lyrics that were either sung or spoken. We measured the percentage of words recalled from these lyrics immediately and after 10 minutes. Moreover, in AD participants, we tested the effect of successive learning episodes for one spoken and one sung excerpt, as well as long-term retention after a four-week delay. The sung condition did not influence immediate lyrics recall but increased delayed recall for both groups. In AD, learning slopes for sung and spoken lyrics did not show a significant difference across successive learning episodes. However, sung lyrics showed a slight advantage over spoken ones after a four-week delay. These results suggest that singing may increase the load of initial learning but improve long-term retention of newly acquired verbal information. We further propose some recommendations on how to maximise these effects and make them relevant for therapeutic applications.


Subjects
Aging/psychology, Alzheimer Disease/psychology, Mental Recall, Singing, Aged, Aged, 80 and over, Humans, Male
4.
J Clin Med; 11(15), 2022 Jul 29.
Article in English | MEDLINE | ID: mdl-35956042

ABSTRACT

The goal of this study was to evaluate music perception in cochlear implantees with two different sound processing strategies. Methods: Twenty-one patients with unilateral or bilateral cochlear implants (Oticon Medical®) were included. A music trial evaluated emotions (sad versus happy, based on tempo and/or minor versus major mode) with three tests of increasing difficulty. This was followed by a test evaluating the perception of musical dissonances (marked out of 10). A novel sound processing strategy reducing spectral distortions (CrystalisXDP, Oticon Medical) was compared to the standard strategy (main peak interleaved sampling). Each strategy was used for one week before the music trial. Results: The total music score was higher with CrystalisXDP than with the standard strategy. Nine patients (21%) categorized music above chance level (>5) only on test 3, which was based on mode, with either of the strategies; in this group, CrystalisXDP improved performance. For dissonance detection, 17 patients (40%) scored above chance level with either of the strategies; in this group, CrystalisXDP did not improve performance. Conclusions: CrystalisXDP, which enhances spectral cues, seemed to improve the categorization of happy versus sad music. Spectral cues could contribute to musical emotions in cochlear implantees and improve the quality of musical perception.

5.
Atten Percept Psychophys; 84(4): 1370-1392, 2022 May.
Article in English | MEDLINE | ID: mdl-35437703

ABSTRACT

Humans have a remarkable capacity for perceiving and producing rhythm. Rhythmic competence is often viewed as a single concept, with participants who perform more or less accurately on a single rhythm task. However, research is revealing numerous sub-processes and competencies involved in rhythm perception and production, which can be selectively impaired or enhanced. To investigate whether different patterns of performance emerge across tasks and individuals, we measured performance across a range of rhythm tasks from different test batteries. Distinct performance patterns could potentially reveal separable rhythmic competencies that may draw on distinct neural mechanisms. Participants completed nine rhythm perception and production tasks selected from the Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA), the Beat Alignment Test (BAT), the Beat-Based Advantage task (BBA), and two tasks from the Burgundy best Musical Aptitude Test (BbMAT). Principal component analyses revealed clear separation of task performance along three main dimensions: production, beat-based rhythm perception, and sequence memory-based rhythm perception. Hierarchical cluster analyses supported these results, revealing clusters of participants who performed selectively more or less accurately along different dimensions. The current results support the hypothesis of divergence of rhythmic skills. Based on these results, we provide guidelines towards a comprehensive testing of rhythm abilities, including at least three short tasks measuring: (1) rhythm production (e.g., tapping to metronome/music), (2) beat-based rhythm perception (e.g., BAT), and (3) sequence memory-based rhythm processing (e.g., BBA). Implications for underlying neural mechanisms, future research, and potential directions for rehabilitation and training programs are discussed.
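As an illustration of the analysis pipeline described in this abstract (principal component analysis of rhythm-task scores followed by hierarchical clustering of participants), here is a minimal sketch. The simulated score matrix, number of components, and clustering parameters are placeholder assumptions, not the BAASTA/BAT/BBA/BbMAT data or the authors' actual code.

```python
# Illustrative sketch: PCA + hierarchical clustering of rhythm-task scores.
# All data are simulated placeholders; parameter choices are arbitrary.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_participants, n_tasks = 40, 9                       # e.g., 9 rhythm tasks
scores = rng.normal(size=(n_participants, n_tasks))   # rows: participants, cols: tasks

# Standardize each task, then extract three principal components
z = StandardScaler().fit_transform(scores)
pca = PCA(n_components=3)
components = pca.fit_transform(z)                     # participant positions on 3 dimensions
print("Variance explained:", pca.explained_variance_ratio_)

# Hierarchical clustering of participants in the reduced space
tree = linkage(components, method="ward")
clusters = fcluster(tree, t=3, criterion="maxclust")
print("Cluster sizes:", np.bincount(clusters)[1:])
```

In a real analysis, the component loadings would be inspected to interpret the dimensions (e.g., production vs. beat-based vs. memory-based perception), and cluster membership would be compared across those dimensions.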


Subjects
Auditory Perception, Music, Humans, Memory, Task Performance and Analysis
6.
Percept Mot Skills; 112(3): 737-48, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21853763

ABSTRACT

The aim of this study was to identify the psycho-musical factors that govern time evaluation in Western music from baroque, classic, romantic, and modern repertoires. The excerpts were previously found to represent variability in musical properties and to induce four main categories of emotions. Forty-eight participants (musicians and nonmusicians) freely listened to 16 musical excerpts (lasting 20 sec. each) and grouped those that seemed to have the same duration. Then, participants associated each group of excerpts with one of a set of sine-wave tones varying in duration from 16 to 24 sec. Multidimensional scaling analysis generated a two-dimensional solution for these time judgments. Musical excerpts with high arousal produced an overestimation of time, and affective valence had little influence on time perception. Duration was also overestimated when tempo and loudness, and to a lesser extent timbre density, were higher. In contrast, musical tension had little influence.


Subjects
Auditory Perception, Music, Time Perception, Adult, Arousal, Attention, Discrimination Learning, Emotions, Female, Humans, Illusions, Judgment, Male, Pitch Perception, Professional Competence, Psychoacoustics, Young Adult
7.
Front Psychol; 12: 751248, 2021.
Article in English | MEDLINE | ID: mdl-34925155

ABSTRACT

Multimodal perception is a key factor in obtaining a rich and meaningful representation of the world. However, how individual stimuli combine to determine the overall percept remains a matter of research. The present work investigates the effect of sound on the bimodal perception of motion. A visual moving target was presented to the participants, associated with a concurrent sound, in a time reproduction task. Particular attention was paid to the structure of both the auditory and the visual stimuli. Four different laws of motion were tested for the visual motion, one of which was biological. Nine different sound profiles were tested, from a simple constant sound to more variable and complex pitch profiles, always presented synchronously with motion. Participants' responses show that constant sounds produce the worst duration estimation performance, even worse than the silent condition; more complex sounds, instead, guarantee significantly better performance. The structure of the visual stimulus and that of the auditory stimulus appear to influence performance independently. Biological motion provides the best performance, while motion with a constant-velocity profile provides the worst. Results clearly show that a concurrent sound influences the unified perception of motion; the type and magnitude of the bias depend on the structure of the sound stimulus. Contrary to expectations, the best performance is not generated by the simplest stimuli, but rather by more complex stimuli that are richer in information.

8.
Front Neurosci; 15: 558421, 2021.
Article in English | MEDLINE | ID: mdl-34025335

ABSTRACT

Introduction: The objective of our study was to evaluate musical perception and its relation to quality of life in patients with bimodal binaural auditory stimulation. Materials and Methods: Nineteen adult patients with a cochlear implant (CI) for a minimum of 6 months and moderate to severe contralateral hearing loss with a hearing aid (HA), and 21 normal-hearing adults, were included in this prospective, cross-sectional study. Pure-tone and speech audiometry, a musical test evaluating sound perception characteristics and musical listening abilities, the Munich questionnaire for musical habits, and the APHAB questionnaire were recorded. Performance in the musical perception test with HA, CI, and HA + CI, and potential correlations between the music test, audiometry, and the questionnaires, were investigated. Results: Bimodal stimulation improved musical perception on several features (sound brightness, roughness, and clarity) in comparison with unimodal hearing, but the CI did not add to HA performance for texture, polyphony, or musical emotion, and even appeared to interfere negatively with pitch perception with the HA. Musical perception performance (sound clarity, instrument recognition) appeared to be correlated with hearing-related quality of life (APHAB RV and EC subdomains) but not with speech performance, suggesting that the exploration of musical perception complements speech understanding evaluation to better describe everyday hearing handicap. Conclusion: Testing musical sound perception provides important information on hearing performance as a complement to speech audiometry and appears to be related to hearing-related quality of life.

9.
Front Hum Neurosci; 14: 216, 2020.
Article in English | MEDLINE | ID: mdl-32670038

ABSTRACT

Past empirical studies have suggested that older adults preferentially use gaze-based mood regulation to lessen their negative experiences while watching an emotional scene. This preference for a less cognitively demanding regulatory strategy leaves open the question of whether the effortful processing of a more cognitively demanding reappraisal task is really spared from the general age-related decline. Because it does not allow perceptual attention to be redirected away from the emotional source, music provides an ideal way to address this question. The goal of our study was to examine the affective, behavioral, physiological, and cognitive outcomes of positive and detached reappraisal in response to negative musical emotion in younger and older adults. Participants first simply listened to a series of threatening musical excerpts and were then instructed to either positively reappraise or to detach themselves from the emotion elicited by music. Findings showed that, when instructed to simply listen to threatening music, older adults reported a more positive feeling, associated with a smaller skin conductance level (SCL), in comparison with their younger counterparts. When implementing positive and detached reappraisal, participants showed more positive and more aroused emotional experiences, whatever the age group. We also found that the instruction to intentionally reappraise negative emotions resulted in a lower cognitive cost for older adults in comparison with younger adults. Taken together, these data suggest that, compared to younger adults, older adults engage in spontaneous downregulation of negative affect and successfully implement downregulation instructions. This extends previous findings and provides compelling evidence that, even when auditory attention cannot be redirected away from the emotional source, older adults are still more effective at regulating emotions. Taking into account the age-associated decline in executive functioning, our results suggest that the working memory task could have distracted older adults from the reminiscences of the threat-evoking music, thus resulting in an emotional downregulation. Hence, even if they were instructed to implement reappraisal strategies, older adults might prefer distraction over engagement in reappraisal. This is congruent with the idea that, as they get older, people are more likely to be distracted from a negative source of emotion to maintain their well-being.

10.
Cortex; 130: 78-93, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32645502

ABSTRACT

Regarding the hemispheric laterality of emotion processing in the brain, two competing hypotheses are still debated. The first hypothesis suggests a greater involvement of the right hemisphere in emotion perception, whereas the second hypothesis suggests a different involvement of each hemisphere as a function of the valence of the emotion. These hypotheses are based on findings for facial and prosodic emotion perception. Investigating emotion perception for other stimuli, such as music, should provide further insight and potentially help to disentangle these two hypotheses. The present study investigated musical emotion perception in patients with unilateral right brain damage (RBD, n = 16) or left brain damage (LBD, n = 16), as well as in matched healthy comparison participants (n = 28). The experimental task required explicit recognition of musical emotions as well as ratings on the perceived intensity of the emotion. Compared to matched comparison participants, musical emotion recognition was impaired only in LBD participants, suggesting a potential specificity of the left hemisphere for explicit emotion recognition in musical material. In contrast, intensity ratings of musical emotions revealed that RBD patients underestimated the intensity of negative emotions compared to positive emotions, while LBD patients and comparisons did not show this pattern. To control for a potential generalized emotion deficit for other types of stimuli, we also tested facial emotion recognition in the same patients and their matched healthy comparisons. This revealed that emotion recognition after brain damage might depend on the stimulus category or modality used. These results are in line with the hypothesis of a deficit of emotion perception depending on lesion laterality and valence in brain-damaged participants. The present findings provide critical information to disentangle the currently debated competing hypotheses and thus allow for a better characterization of the involvement of each hemisphere in explicit emotion recognition and perceived intensity.


Subjects
Music, Cerebral Cortex, Emotions, Facial Expression, Functional Laterality, Humans, Recognition (Psychology)
11.
Neuropsychology; 32(7): 880-894, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30047757

ABSTRACT

OBJECTIVE: To further our understanding of the role of perceptual processes in musical emotions, we investigated individuals with congenital amusia, a neurodevelopmental disorder that alters pitch processing. METHOD: Amusic and matched control participants were studied for emotion recognition and emotion intensity ratings of both musical excerpts and faces. RESULTS: Emotion recognition was found to be impaired in amusic participants relative to controls for the musical stimuli only. This impairment suggests that perceptual deficits in music processing reduce amusics' access to a verbal and conscious representation of musical emotions. Nevertheless, amusics' performance for emotion recognition was above chance level, and multidimensional scaling (MDS) analyses revealed that their categorization of musical pieces was based on similar representation spaces of emotions as for control participants. The emotion intensity ratings, nonverbal and possibly more implicit than the categorization task, seemed to be intact in amusic participants. CONCLUSIONS: These findings reveal that pitch deficits can hinder the recognition of emotions conveyed by musical pieces, while also highlighting the (at least partial) dissociation between emotion recognition and emotion intensity evaluation. Our study thus sheds light on the complex interactions between perceptual and emotional networks in the brain, by showing that impaired central auditory processing partially alters musical emotion processing.


Subjects
Auditory Perceptual Disorders/psychology, Emotions, Music/psychology, Recognition (Psychology), Adult, Facial Expression, Facial Recognition, Female, Humans, Male, Middle Aged, Pitch Perception, Psychomotor Performance, Social Perception, Young Adult
12.
Cogn Neuropsychol; 24(6): 603-22, 2007 Sep.
Article in English | MEDLINE | ID: mdl-18416511

ABSTRACT

Our study used an implicit method (a priming paradigm) to investigate whether I.R., a brain-damaged patient exhibiting severe amusia, implicitly processes musical structures. The task consisted of identifying one of two phonemes (Experiment 1) or timbres (Experiment 2) on the last chord of eight-chord sequences (i.e., the target). The targets were harmonically related or less related to the prior chords. I.R. displayed harmonic priming effects: phoneme and timbre identification was faster for related than for less related targets (Experiments 1 and 2). However, I.R.'s explicit judgements of completion for the same sequences did not differ between related and less related contexts (Experiment 3). Her impaired performance in explicit judgements was not due to general difficulties with task demands, since she performed like controls for completion judgements on spoken sentences (Experiment 4). The findings indicate that implicit knowledge of musical structures might remain intact and accessible, even when explicit judgements and overt recognition have been lost.
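As a simple illustration of how a harmonic priming effect of this kind is typically quantified (identification times for harmonically related versus less related targets), here is a minimal sketch. The reaction-time values and the paired comparison below are hypothetical assumptions for illustration, not the patient data or the analysis reported in this study.

```python
# Illustrative sketch: a priming effect as the RT difference between
# less related and related targets. All values are made up.
import numpy as np
from scipy import stats

# Mean identification times (ms) per item, one array per harmonic context
rt_related = np.array([612, 598, 634, 620, 605, 641, 589, 617])
rt_less_related = np.array([655, 639, 660, 648, 631, 672, 644, 659])

priming_effect = rt_less_related.mean() - rt_related.mean()
t, p = stats.ttest_rel(rt_less_related, rt_related)  # paired comparison across items
print(f"Priming effect: {priming_effect:.1f} ms (t = {t:.2f}, p = {p:.3f})")
```

A positive difference indicates faster identification on harmonically related targets, i.e., facilitation by the musical context.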


Subjects
Auditory Perceptual Disorders/diagnosis, Music, Auditory Perceptual Disorders/physiopathology, Female, Humans, Judgment, Middle Aged, Phonetics, Recognition (Psychology), Temporal Lobe/diagnostic imaging, Temporal Lobe/physiopathology, Tomography, X-Ray Computed
13.
Neuropsychologia; 85: 10-8, 2016 May.
Article in English | MEDLINE | ID: mdl-26944873

ABSTRACT

Congenital amusia is a neurodevelopmental disorder of music perception and production, which has been attributed to a major deficit in pitch processing. While most studies and diagnostic tests have used explicit investigation methods, recent studies using implicit investigation approaches have revealed some unimpaired pitch structure processing in congenital amusia. The present study investigated amusic individuals' processing of tonal structures (i.e., musical structures respecting the Western tonal system) via three different questions. Amusic participants and their matched controls judged tonal versions (original musical excerpts) and atonal versions (with manipulated pitch content to remove tonal structures) of 12 musical pieces. For each piece, participants answered three questions that required judgments from different perspectives: an explicit structural one, a personal emotional one, and a more social one (judging the perception of others). Results revealed that amusic individuals' judgments differed between tonal and atonal versions. However, the question type influenced the extent of the revealed structure processing: while amusic individuals were impaired for the question requiring explicit structural judgments, they performed as well as their matched controls for the two other questions. Together with other recent studies, these findings suggest that congenital amusia might be related to a disorder of conscious access to music processing rather than of music processing per se.


Subjects
Auditory Perception/physiology, Auditory Perceptual Disorders/physiopathology, Consciousness/physiology, Discrimination (Psychology)/physiology, Music, Acoustic Stimulation, Adult, Case-Control Studies, Female, Humans, Judgment, Male, Middle Aged, Reaction Time/physiology, Statistics as Topic, Young Adult
14.
Hear Res; 337: 89-95, 2016 Jul.
Article in English | MEDLINE | ID: mdl-27240480

ABSTRACT

UNLABELLED: While the positive benefits of pediatric cochlear implantation on language perception skills are now proven, the heterogeneity of outcomes remains high. Understanding this heterogeneity, and possible strategies to minimize it, is of utmost importance. Our aim here is to test the effects of an auditory training strategy, "Sound in Hands", which uses playful tasks grounded in the theoretical and empirical findings of the cognitive sciences. Indeed, several basic auditory operations, such as auditory scene analysis (ASA), are not trained in the usual therapeutic interventions for deaf children. However, as they constitute a fundamental basis of auditory cognition, their development should yield general benefits in auditory processing and in turn enhance speech perception. The purpose of the present study was to determine whether cochlear-implanted children could improve auditory performance on trained tasks and whether they could develop a transfer of learning to a phonetic discrimination test. MATERIAL AND METHODS: Nineteen prelingually deaf children with a unilateral cochlear implant and no additional handicap (4-10 years old) were recruited. The four main auditory cognitive processes (identification, discrimination, ASA, and auditory memory) were stimulated and trained in the experimental group (EG) using Sound in Hands. The EG followed 20 weekly training sessions of 30 min, and the untrained group served as the control group (CG). Measures were taken for both groups at two time points: before training (T1) and after training (T2). RESULTS: The EG showed a significant improvement in the identification, discrimination, and auditory memory tasks. The improvement in the ASA task did not reach significance. The CG did not show any significant improvement in any of the tasks assessed. Most importantly, improvement was visible in the phonetic discrimination test for the EG only. Moreover, younger children benefited more from the auditory training program to develop their phonetic abilities than older children, supporting the idea that rehabilitative care is most efficient when it takes place early in childhood. These results are important for pinpointing the auditory deficits in CI children and for gaining a better understanding of the links between basic auditory skills and speech perception, which will in turn allow more efficient rehabilitative programs.


Subjects
Auditory Perception, Cochlear Implants, Deafness/rehabilitation, Deafness/surgery, Speech Perception, Adolescent, Adult, Child, Child, Preschool, Cochlear Implantation, Cognition, Female, Humans, Language Development, Learning, Male, Middle Aged
15.
Cognition; 94(3): B67-78, 2005 Jan.
Article in English | MEDLINE | ID: mdl-15617668

ABSTRACT

It has been shown that harmonic structure may influence the processing of phonemes whatever the extent of participants' musical expertise [Bigand, E., Tillmann, B., Poulin, B., D'Adamo, D. A., & Madurell, F. (2001). The effect of harmonic context on phoneme monitoring in vocal music. Cognition, 81, B11-B20]. The present study goes a step further by investigating how musical harmony may potentially interfere with the processing of words in vocal music. Eight-chord sung sentences were presented, their last word being either semantically related (La girafe a un très grand cou, The giraffe has a very long neck) or unrelated to the previous linguistic context (La girafe a un très grand pied, The giraffe has a very long foot). The target word was sung on a chord that acted either as a referential tonic chord or as a congruent but less referential subdominant chord. Participants performed a lexical decision task on the target word. A significant interaction was observed between semantic and harmonic relatedness, suggesting that music modulates semantic priming in vocal music. Following Jones' dynamic attending theory, we argue that music can modulate semantic priming in vocal music by modifying the allocation of the attentional resources necessary for linguistic computation.


Subjects
Music, Phonation, Semantics, Humans, Reaction Time
16.
Ann N Y Acad Sci; 1060: 443-5, 2005 Dec.
Article in English | MEDLINE | ID: mdl-16597799

ABSTRACT

To investigate whether key processing precedes the appraisal of valence in music, participants listened to pairs of clips of the same or different valence, played either in the same key or one semitone apart. They judged whether the second clip expressed the same emotion as the first one. Our predictions were confirmed: response times were shorter when both clips were played in the same key than when they were played one semitone apart.


Subjects
Emotions, Music, Acoustic Stimulation, Arousal, Auditory Perception, Humans, Pitch Perception, Psychoacoustics
17.
Ann N Y Acad Sci; 1060: 429-37, 2005 Dec.
Article in English | MEDLINE | ID: mdl-16597797

ABSTRACT

Two empirical studies investigated the time course of emotional responses to music. In the first one, musically trained and untrained listeners were required to listen to 27 musical excerpts and to group those that conveyed a similar emotional meaning. In one condition, the excerpts were 25 seconds long on average. In the other condition, excerpts were as short as 1 second. The groupings were then transformed into a matrix of emotional dissimilarity that was analyzed with multidimensional scaling (MDS). We compared the outcome of these analyses for the 25-s and 1-s duration conditions. In the second study, we presented musical excerpts of increasing duration, varying from 250 ms to 20 seconds. Participants were requested to evaluate on a subjective scale how "moving" each excerpt was. On the basis of the responses given for the longer duration, excerpts were then sorted into two groups: highly moving and weakly (or less) moving. The main purpose of the analysis was to identify the point in time at which these two categories of excerpts started to be differentiated by participants. Both studies provide consistent findings that less than 1 s of music is enough to instill elaborated emotional responses in listeners.
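To make the grouping-to-MDS pipeline concrete, here is a minimal sketch: listeners' groupings are converted into a co-occurrence matrix, turned into dissimilarities, and embedded in two dimensions. The number of listeners, the simulated groupings, and the MDS settings are placeholder assumptions, not the study's data or code.

```python
# Illustrative sketch: from listeners' groupings of excerpts to an MDS solution.
# Groupings are simulated; a real analysis would use the listeners' actual data.
import numpy as np
from sklearn.manifold import MDS

n_excerpts, n_listeners = 27, 20
rng = np.random.default_rng(1)
# Each listener assigns every excerpt to one of four emotion groups (toy data)
groupings = rng.integers(0, 4, size=(n_listeners, n_excerpts))

# Co-occurrence: how often two excerpts were placed in the same group
cooc = np.zeros((n_excerpts, n_excerpts))
for g in groupings:
    cooc += (g[:, None] == g[None, :]).astype(float)
dissimilarity = 1.0 - cooc / n_listeners   # 0 = always grouped together

# Two-dimensional embedding of the emotional dissimilarity matrix
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)
print(coords.shape)  # (27, 2): one point per excerpt
```

Comparing such embeddings across the 25-s and 1-s conditions would show whether the emotional structure recovered from very short excerpts resembles that obtained from longer ones.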


Subjects
Emotions, Music, Psychoacoustics, Acoustic Stimulation, Auditory Perception, Humans, Pitch Discrimination, Time Factors
18.
Front Aging Neurosci; 7: 11, 2015.
Article in English | MEDLINE | ID: mdl-25741278

ABSTRACT

When presented with emotional visual scenes, older adults have been found to be as capable as younger adults of regulating emotional expression, corroborating the view that emotion regulation skills are maintained or even improved in later adulthood. However, the possibility that gaze direction might help achieve an emotion control goal has not been taken into account, raising the question of whether the effortful processing of expressive regulation is really spared from the general age-related decline. Since it does not allow perceptual attention to be redirected away from the emotional source, music provides a useful way to address this question. In the present study, affective, behavioral, and physiological consequences of free expression of emotion, expressive suppression, and expressive enhancement were measured in 31 younger and 30 older adults while they listened to positive and negative musical excerpts. The main results indicated that, compared to younger adults, older adults reported experiencing less emotional intensity in response to negative music during the free expression of emotion condition. No age difference was found in the ability to amplify or reduce emotional expressions. However, an age-related decline in the ability to reduce the intensity of the emotional state and an age-related increase in physiological reactivity were found when participants were instructed to suppress negative expression. Taken together, the current data support previous findings suggesting an age-related change in response to music. They also corroborate the observation that older adults are as efficient as younger adults at controlling behavioral expression. But most importantly, they suggest that when faced with auditory sources of negative emotion, older age does not always confer a better ability to regulate emotions.

19.
Behav Neurol; 2015: 707625, 2015.
Article in English | MEDLINE | ID: mdl-26508813

ABSTRACT

Music can be thought of as a complex stimulus able to enrich the encoding of an event, thus boosting its subsequent retrieval. However, several findings suggest that music can also interfere with memory performance. A better understanding of the behavioral and neural processes involved can substantially improve knowledge and shed new light on the most efficient music-based interventions. Based on functional near-infrared spectroscopy (fNIRS) studies on music, episodic encoding, and the dorsolateral prefrontal cortex (PFC), this work aims to extend previous findings by monitoring the entire lateral PFC during both encoding and retrieval of verbal material. Nineteen participants were asked to encode lists of words presented with either background music or silence and were subsequently tested in a free recall task. Meanwhile, their PFC was monitored using a 48-channel fNIRS system. Behavioral results showed greater chunking of words under the music condition, suggesting the employment of associative strategies for items encoded with music. fNIRS results showed that music provided a less demanding way of modulating both episodic encoding and retrieval, with generally decreased prefrontal activity in the music versus silence condition. This suggests that music-related memory processes rely on specific neural mechanisms and that music can positively influence both episodic encoding and retrieval of verbal information.


Subjects
Mental Recall/physiology, Music, Prefrontal Cortex/physiology, Acoustic Stimulation, Adolescent, Adult, Brain Mapping, Female, Functional Neuroimaging, Humans, Male, Spectroscopy, Near-Infrared, Young Adult
20.
Front Psychol; 6: 1316, 2015.
Article in English | MEDLINE | ID: mdl-26388818

ABSTRACT

Learning new words is an increasingly common necessity in everyday life. External factors are claimed to facilitate this task; among them, music and social interaction are particularly debated. Due to their influence on the learner's temporal behavior, these stimuli are able to drive the learner's attention to the correct referent of new words at the correct point in time. However, do music and social interaction impact learning behavior in the same way? The current study aims to answer this question. Native German speakers (N = 80) were requested to learn new words (pseudo-words) during a contextual learning game. This learning task was performed alone with a computer or with a partner, with or without music. Results showed that music and social interaction had a different impact on the learner's behavior: participants tended to temporally coordinate their behavior more with a partner than with music, and in both cases more than with a computer. However, when both music and social interaction were present, this temporal coordination was hindered. These results suggest that while music and social interaction do influence participants' learning behavior, they have a different impact. Moreover, the impaired behavior when both music and a partner are present suggests that different mechanisms are employed to coordinate with the two types of stimuli. Whether one or the other approach is more efficient for word learning, however, is a question requiring further investigation, as no differences were observed between conditions in a retrieval phase that took place immediately after the learning session. This study contributes to the literature on word learning in adults by investigating two possible facilitating factors, and has important implications for situations such as music therapy, in which music and social interaction are present at the same time.
