1.
Distúrbios Comun. (Online) ; 36(1): e65819, 17/06/2024.
Article in English, Portuguese | LILACS | ID: biblio-1563122

ABSTRACT

Introduction: The voice is an indicator of emotional states, influenced by factors such as vagal tone, breathing, and heart rate variability. This study explores these factors and their relationship with emotional regulation, and examines meditative practice as a self-regulation technique. Purpose: To investigate differences in vocal characteristics and heart rate variability in experienced (EM) and novice (NM) meditators before and after a meditation practice, and in non-meditators (control group, CG) before and after a control test. Methods: A 3 x 2 quasi-factorial study. Three groups were evaluated (experienced meditators, EM; novice meditators, NM; and a control group of non-meditators, CG) at two points in the experimental manipulation: before and after a meditation session for the meditators, and before and after a word-search task for the control group. The fundamental frequency, jitter, shimmer, harmonics-to-noise ratio, and the first (F1), second (F2), and third (F3) formants of the vowel [a]; heart rate variability (SDNN, RMSSD, LF/HF, SD1, and SD2); and state anxiety and vocal self-perception were measured before and after the intervention. Results: The EM group achieved optimal vocal tract relaxation. The NM and CG groups showed changes in F1. Long-term meditative practice was associated with larger differences in F3 and in the heart rate variability indices SDNN and SD2. Conclusion: The results suggest that meditation practice influences vocal expression and emotional reactions, and that experience in meditation favors this relationship. (AU)
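
The heart rate variability indices named in this abstract (SDNN, RMSSD, SD1, SD2) follow standard definitions; LF/HF additionally requires a spectral analysis of the RR series. Purely as an illustration of what these indices measure, and not as the authors' analysis pipeline, the Python sketch below computes them from an invented series of RR intervals in milliseconds.

    # Illustrative sketch of the standard HRV indices named in the abstract.
    # The rr_ms values are invented example data, not study data; LF/HF is
    # omitted because it requires a power-spectral analysis of the RR series.
    import numpy as np

    rr_ms = np.array([812.0, 798.0, 845.0, 830.0, 801.0, 795.0, 860.0, 842.0])  # RR intervals (ms)

    sdnn = np.std(rr_ms, ddof=1)            # SDNN: standard deviation of all RR intervals
    diff_rr = np.diff(rr_ms)                # successive RR differences
    rmssd = np.sqrt(np.mean(diff_rr ** 2))  # RMSSD: root mean square of successive differences

    # Poincare-plot descriptors: SD1 (short-term) and SD2 (long-term) variability
    sd1 = np.sqrt(0.5) * np.std(diff_rr, ddof=1)
    sd2 = np.sqrt(2 * sdnn ** 2 - sd1 ** 2)

    print(f"SDNN={sdnn:.1f} ms  RMSSD={rmssd:.1f} ms  SD1={sd1:.1f} ms  SD2={sd2:.1f} ms")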


Subjects
Humans, Male, Female, Adult, Voice, Meditation, Emotional Regulation, Controlled Before-After Studies, Voice Recognition/physiology
2.
Neuroimage ; 263: 119647, 2022 11.
Article in English | MEDLINE | ID: mdl-36162634

ABSTRACT

Recognising a speaker's identity by the sound of their voice is important for successful interaction. This skill depends on our ability to discriminate minute variations in the acoustics of the vocal signal, and performance on voice identity assessments varies widely across the population. The neural underpinnings of this ability and of its individual differences, however, remain poorly understood. Here we provide critical tests of a theoretical framework for the neural processing stages of voice identity and address how individual differences in identity discrimination mediate activation in this neural network. We scanned 40 individuals on an fMRI adaptation task involving voices drawn from morphed continua between two personally familiar identities. Analyses dissociated neuronal effects induced by repetition of acoustically similar morphs from those induced by a switch in perceived identity. Activation in temporal voice-sensitive areas decreased with increasing acoustic similarity between consecutive stimuli. This repetition suppression effect was mediated by performance on an independent voice assessment, highlighting an important functional role of adaptive coding in voice expertise. Bilateral anterior insulae and medial frontal gyri responded to a switch in perceived voice identity compared with an acoustically equidistant switch within the same identity. Our results support a multistep model of voice identity perception.
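
The design described here dissociates two trial-wise effects: adaptation to acoustic similarity between consecutive morphs, and a response to a switch in perceived identity. As a rough sketch of how such predictors can be derived from a stimulus sequence (not the authors' actual analysis; the morph sequence is invented and the 50% category boundary is an assumption), consider:

    # Illustrative sketch: deriving adaptation-style predictors from a morph sequence.
    # morph_pos (0.0 = identity A, 1.0 = identity B) is invented example data;
    # using 0.5 as the perceived-identity boundary is an assumption of this sketch.
    import numpy as np

    morph_pos = np.array([0.1, 0.3, 0.3, 0.7, 0.9, 0.4, 0.6, 0.2])

    # Predictor 1: acoustic distance between consecutive stimuli (repetition suppression)
    acoustic_distance = np.abs(np.diff(morph_pos))

    # Predictor 2: switch in perceived identity across the assumed category boundary
    perceived_identity = (morph_pos > 0.5).astype(int)
    identity_switch = (np.diff(perceived_identity) != 0).astype(int)

    for trial, (dist, switch) in enumerate(zip(acoustic_distance, identity_switch), start=2):
        print(f"trial {trial}: distance to previous = {dist:.1f}, identity switch = {switch}")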


Subjects
Acoustics, Central Auditory Diseases, Cognition, Voice Recognition, Humans, Acoustic Stimulation, Cognition/physiology, Magnetic Resonance Imaging, Prefrontal Cortex/physiology, Voice Recognition/physiology, Central Auditory Diseases/physiopathology, Male, Female, Adolescent, Young Adult, Adult, Nerve Net/physiology
3.
J Neurosci ; 41(33): 7136-7147, 2021 08 18.
Article in English | MEDLINE | ID: mdl-34244362

ABSTRACT

Recognizing speech in background noise is a strenuous daily activity, yet most humans can master it. An explanation of how the human brain deals with such sensory uncertainty during speech recognition is still missing. Previous work has shown that recognition of speech without background noise involves modulation of the auditory thalamus (medial geniculate body; MGB): the left MGB responds more strongly during speech recognition tasks, which require tracking fast-varying stimulus properties, than during tasks that rely on relatively constant stimulus properties (e.g., speaker identity tasks), despite identical stimulus input. Here, we tested the hypotheses that (1) this task-dependent modulation for speech recognition increases with the sensory uncertainty in the speech signal, i.e., the amount of background noise; and (2) this increase is present in the ventral MGB, which corresponds to the primary sensory part of the auditory thalamus. In accordance with these hypotheses, using ultra-high-resolution functional magnetic resonance imaging (fMRI) in male and female human participants, we show that the task-dependent modulation of the left ventral MGB (vMGB) for speech is particularly strong when recognizing speech in noisy listening conditions, in contrast to situations in which the speech signal is clear. The results imply that speech-in-noise recognition is supported by modifications at the level of the subcortical sensory pathway that provides driving input to the auditory cortex. SIGNIFICANCE STATEMENT: Speech recognition in noisy environments is a challenging everyday task. One reason humans can master it is the recruitment of additional cognitive resources, reflected in the engagement of non-language areas of the cerebral cortex. Here, we show that modulation of the primary sensory pathway is also specifically involved in speech-in-noise recognition. We found that the left primary sensory thalamus (ventral medial geniculate body, vMGB) is more strongly engaged when recognizing speech, as opposed to performing a control task (speaker identity recognition), in background noise than when the noise is absent. This finding implies that the brain optimizes sensory processing in subcortical sensory pathway structures in a task-specific manner to deal with speech recognition in noisy environments.
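
The hypothesis tested here amounts to an interaction in a 2 x 2 design (task: speech recognition vs. speaker identity; listening condition: noise vs. clear): the task-dependent vMGB modulation should be larger in noise. The sketch below illustrates that interaction test with invented response values; it is not the study's analysis code, and the participant count and effect sizes are made up.

    # Illustrative sketch of the 2 x 2 interaction implied by the abstract:
    # (speech - speaker) in noise versus (speech - speaker) in clear speech.
    # All response values are invented example numbers, not study data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 17  # hypothetical number of participants

    speech_noise  = rng.normal(1.0, 0.3, n)   # vMGB response estimates (arbitrary units)
    speaker_noise = rng.normal(0.6, 0.3, n)
    speech_clear  = rng.normal(0.8, 0.3, n)
    speaker_clear = rng.normal(0.7, 0.3, n)

    interaction = (speech_noise - speaker_noise) - (speech_clear - speaker_clear)
    t, p = stats.ttest_1samp(interaction, 0.0)
    print(f"task x noise interaction: t({n - 1}) = {t:.2f}, p = {p:.3f}")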


Subjects
Brain Mapping, Geniculate Bodies/physiology, Inferior Colliculi/physiology, Noise, Speech Perception/physiology, Thalamus/physiology, Adult, Female, Humans, Magnetic Resonance Imaging, Male, Neurological Models, Phonetics, Pilot Projects, Reaction Time, Signal-to-Noise Ratio, Uncertainty, Voice Recognition/physiology
4.
Neuroreport ; 32(10): 858-863, 2021 07 07.
Article in English | MEDLINE | ID: mdl-34029292

ABSTRACT

People rely on multimodal emotional interactions to live in a social environment. Several studies using dynamic facial expressions and emotional voices have reported that multimodal emotional incongruency evokes an early sensory component of event-related potentials (ERPs), while others have found a late cognitive component. How these two sets of findings fit together remains unclear. We speculate that it is semantic analysis within a multimodal integration framework that evokes the late ERP component. An electrophysiological experiment was conducted using emotionally congruent or incongruent dynamic faces and natural voices to promote semantic analysis. To investigate top-down modulation of the ERP components, attention was manipulated via two tasks that directed participants to attend to facial versus vocal expressions. Our results revealed interactions between facial and vocal emotional expressions, manifested as modulation of the auditory N400 amplitude, but not of the N1 and P2 amplitudes, for incongruent emotional face-voice combinations, and only in the face-attentive task. A late positive potential over occipital sites emerged only during the voice-attentive task. Overall, these findings support the idea that semantic analysis is a key factor in evoking the late cognitive component. The task effect on these ERPs suggests that top-down attention alters not only the amplitude of an ERP component but also which component is evoked. Our results point to a principle of emotional face-voice processing in the brain that may underlie complex audiovisual interactions in everyday communication.
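
The N400 result reported here is based on mean amplitudes in a fixed post-stimulus window compared between congruent and incongruent face-voice trials. As a minimal illustration of that kind of measure (not the authors' pipeline; the epochs are simulated, and the 300-500 ms window, sampling rate, and injected effect are assumptions of the example):

    # Illustrative sketch: N400 mean-amplitude comparison between congruent and
    # incongruent face-voice trials on a single simulated EEG channel.
    # The 300-500 ms window, sampling rate, and injected effect are assumptions.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    sfreq = 500                                # sampling rate in Hz (assumed)
    times = np.arange(-0.2, 0.8, 1 / sfreq)    # epoch from -200 to 800 ms
    n_trials = 40

    # Simulated single-channel epochs (trials x samples), in microvolts
    congruent = rng.normal(0.0, 2.0, (n_trials, times.size))
    incongruent = rng.normal(0.0, 2.0, (n_trials, times.size))
    window = (times >= 0.3) & (times <= 0.5)
    incongruent[:, window] -= 1.5              # inject a more negative "N400" on incongruent trials

    n400_congruent = congruent[:, window].mean(axis=1)
    n400_incongruent = incongruent[:, window].mean(axis=1)

    t, p = stats.ttest_rel(n400_incongruent, n400_congruent)
    print(f"N400 mean amplitude, incongruent vs. congruent: t = {t:.2f}, p = {p:.3f}")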


Subjects
Emotions/physiology, Evoked Potentials/physiology, Facial Expression, Facial Recognition/physiology, Occipital Lobe/physiology, Voice Recognition/physiology, Acoustic Stimulation/methods, Adolescent, Adult, Electroencephalography/methods, Female, Humans, Male, Photic Stimulation/methods, Psychomotor Performance/physiology, Random Allocation, Video Recording/methods, Young Adult