Results 1 - 4 of 4
1.
Radiology ; 310(2): e231143, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38349241

ABSTRACT

Background: Cognitive behavioral therapy (CBT) is the current standard treatment for chronic severe tinnitus; however, preliminary evidence suggests that real-time functional MRI (fMRI) neurofeedback therapy may be more effective.

Purpose: To compare the efficacy of real-time fMRI neurofeedback against CBT for reducing chronic tinnitus distress.

Materials and Methods: In this prospective controlled trial, participants with chronic severe tinnitus were randomized from December 2017 to December 2021 to receive either CBT (CBT group) for 10 weekly group sessions or real-time fMRI neurofeedback (fMRI group) individually during 15 weekly sessions. Change in the Tinnitus Handicap Inventory (THI) score (range, 0-100) from baseline to 6 or 12 months was assessed. Secondary outcomes included four quality-of-life questionnaires (Beck Depression Inventory, Pittsburgh Sleep Quality Index, State-Trait Anxiety Inventory, and World Health Organization Disability Assessment Schedule). Questionnaire scores between treatment groups and between time points were assessed using repeated measures analysis of variance and the nonparametric Wilcoxon signed rank test.

Results: The fMRI group included 21 participants (mean age, 49 years ± 11.4 [SD]; 16 male participants) and the CBT group included 22 participants (mean age, 53.6 years ± 8.8; 16 male participants). The fMRI group showed a greater reduction in THI scores compared with the CBT group at both 6 months (mean score change, -28.21 points ± 18.66 vs -12.09 points ± 18.86; P = .005) and 12 months (mean score change, -30 points ± 25.44 vs -4 points ± 17.2; P = .01). Compared with baseline, the fMRI group showed improved sleep (mean score, 8.62 points ± 4.59 vs 7.25 points ± 3.61; P = .006) and trait anxiety (mean score, 44 points ± 11.5 vs 39.84 points ± 10.5; P = .02) at 1 month and improved depression (mean score, 13.71 points ± 9.27 vs 6.53 points ± 5.17; P = .01) and general functioning (mean score, 24.91 points ± 17.05 vs 13.06 points ± 10.1; P = .01) at 6 months. No difference in these metrics over time was observed for the CBT group (P value range, .14 to >.99).

Conclusion: Real-time fMRI neurofeedback therapy led to a greater reduction in tinnitus distress than the current standard treatment of CBT.

ClinicalTrials.gov registration no.: NCT05737888; Swiss Ethics registration no.: BASEC2017-00813. © RSNA, 2024. Supplemental material is available for this article.


Subjects
Cognitive Behavioral Therapy, Neurofeedback, Tinnitus, Humans, Male, Middle Aged, Prospective Studies, Tinnitus/diagnostic imaging, Tinnitus/therapy, Magnetic Resonance Imaging
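The within-group comparisons described in the abstract above (change from baseline, assessed with the nonparametric Wilcoxon signed rank test) can be sketched on simulated data. The scores below are hypothetical, not the trial's, and `scipy.stats.wilcoxon` stands in for whatever statistical software the authors actually used.

```python
# Sketch of a paired Wilcoxon signed-rank comparison on simulated
# questionnaire scores (hypothetical data, not the trial's).
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n = 21  # group size matching the fMRI arm in the abstract

baseline = rng.normal(60, 10, size=n)             # simulated THI scores at baseline
followup = baseline - rng.normal(28, 15, size=n)  # simulated scores after treatment

# Paired nonparametric test of the within-group change from baseline
stat, p = wilcoxon(baseline, followup)
print(f"W = {stat:.1f}, p = {p:.4f}")
```

With a simulated mean reduction this large relative to its spread, the test reports a small p value, mirroring the kind of within-group effect the abstract describes for the fMRI arm.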
2.
J Acoust Soc Am ; 142(4): 1805, 2017 Oct.
Article in English | MEDLINE | ID: mdl-29092548

ABSTRACT

There has been little research on the acoustic correlates of emotional expression in the singing voice. In this study, two pertinent questions are addressed: How does a singer's emotional interpretation of a musical piece affect acoustic parameters in the sung vocalizations? Are these patterns specific enough to allow statistical discrimination of the intended expressive targets? Eight professional opera singers were asked to sing the musical scale upward and downward (using meaningless content) to express different emotions, as if on stage. The studio recordings were acoustically analyzed with a standard set of parameters. The results show robust vocal signatures for the emotions studied. Overall, there is a major contrast between sadness and tenderness on the one hand and anger, joy, and pride on the other, driven by low vs. high levels of loudness and vocal dynamics, high perturbation variation, and a tendency toward high low-frequency energy. This pattern can be explained by the high power and arousal characteristics of the emotions with high levels on these components. A multiple discriminant analysis yields classification accuracy greatly exceeding chance level, confirming the reliability of the acoustic patterns.


Subjects
Acoustics, Emotions, Singing, Female, Humans, Male, Multivariate Analysis, Sound Spectrography, Voice
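The multiple discriminant analysis mentioned in the abstract above can be illustrated with a minimal sketch. The acoustic features, class structure, and group means below are all simulated for illustration, and scikit-learn's `LinearDiscriminantAnalysis` stands in for the study's actual analysis.

```python
# Sketch of discriminant-based classification of emotion categories
# from acoustic features (synthetic data, not the study's recordings).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n = 40  # simulated vocalizations per category group

# Hypothetical features per vocalization: [loudness (dB), jitter, low-frequency energy ratio]
sad_tender = rng.normal([55.0, 0.02, 0.6], [2.0, 0.005, 0.05], size=(n, 3))
anger_joy = rng.normal([70.0, 0.05, 0.4], [2.0, 0.005, 0.05], size=(n, 3))

X = np.vstack([sad_tender, anger_joy])
y = np.array([0] * n + [1] * n)  # 0 = low-arousal, 1 = high-arousal emotions

lda = LinearDiscriminantAnalysis().fit(X, y)
acc = lda.score(X, y)  # accuracy on the fitted data; chance level here is 0.5
print(f"classification accuracy: {acc:.2f}")
```

Because the simulated groups are well separated on the loudness dimension, accuracy far exceeds the 0.5 chance level, which is the qualitative result the abstract reports for its acoustic patterns.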
3.
Affect Sci ; 1(4): 208-224, 2020.
Article in English | MEDLINE | ID: mdl-33283200

ABSTRACT

Appraisal theories suggest that valence appraisal should be differentiated into micro-valences, such as intrinsic pleasantness and goal-/need-related appraisals. In contrast to a macro-valence approach, this dissociation explains, among other things, the emergence of mixed or blended emotions. Here, we extend earlier research showing that these valence types can be empirically dissociated. We examine the timing and response patterns of these two micro-valences by measuring facial muscle activity changes (electromyography, EMG) over the brow and cheek regions. In addition, we explore the effects of sensory stimulus modality (vision, audition, and olfaction) on these patterns. The two micro-valences were manipulated in a social judgment task: first, intrinsic un/pleasantness (IP) was manipulated by exposing participants to appropriate stimuli presented in different sensory domains; this was followed by a goal conduciveness/obstruction (GC) manipulation consisting of feedback on participants' judgments that was congruent or incongruent with their task-related goal. The results show significantly different EMG responses and timing patterns for the two types of micro-valence, confirming the prediction that they are independent, consecutive parts of the appraisal process. Moreover, the lack of interaction effects with sensory stimulus modality suggests high generalizability of the underlying appraisal mechanisms across perception channels.

4.
J Pers Soc Psychol ; 114(3): 358-379, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29461080

ABSTRACT

Although research on facial emotion recognition abounds, there has been little attention to the nature of the underlying mechanisms. In this article, using a "reverse engineering" approach, we suggest that emotion inference from facial expression mirrors the expression process. As a strong case can be made for an appraisal theory account of emotional expression, which holds that appraisal results directly determine the nature of facial muscle actions, we claim that observers first detect specific appraisals from different facial muscle actions and then use implicit inference rules to categorize and name specific emotions. We report three experiments in which, guided by theoretical predictions and past empirical evidence, we systematically manipulated specific facial action units, individually and in different configurations, via synthesized avatar expressions. Large, diverse groups of participants judged the resulting videos for the underlying appraisals and/or the ensuing emotions. The results confirm that participants can infer the targeted appraisals and emotions from synthesized facial actions based on appraisal predictions. We also report evidence that the ability to correctly interpret the synthesized stimuli is highly correlated with emotion recognition ability as part of emotional competence. We conclude by highlighting the importance of adopting a theory-based experimental approach in future research, focusing on the dynamic unfolding of facial expressions of emotion.


Subjects
Emotions/physiology, Facial Expression, Facial Muscles/physiology, Facial Recognition/physiology, Social Perception, Adolescent, Adult, Female, Humans, Male, Young Adult