1.
Behav Res Methods; 50(1): 323-343, 2018 Feb.
Article in English | MEDLINE | ID: mdl-28374144

ABSTRACT

We present an open-source software platform that transforms the emotional cues expressed by speech signals using audio effects such as pitch shifting, inflection, vibrato, and filtering. The emotional transformations can be applied to any audio file, but they can also run in real time on live microphone input, with less than 20 ms of latency. We anticipate that this tool will be useful for the study of emotions in psychology and neuroscience, because it enables a high level of control over the acoustic and emotional content of experimental stimuli in a variety of laboratory situations, including real-time social situations. We also present the results of a series of validation experiments designed to position the tool against several methodological requirements: transformed emotions should be recognized at above-chance levels, remain valid across several languages (French, English, Swedish, and Japanese), and sound about as natural as unprocessed speech.
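The platform itself is not excerpted here, but the kind of transformation it describes is easy to sketch. The following Python snippet, which assumes librosa and soundfile are installed and uses placeholder file names, applies a global pitch shift and a crude sinusoidal vibrato; the parameter values are illustrative, not the platform's defaults.

    import numpy as np
    import librosa
    import soundfile as sf

    # Load any mono recording; "input.wav" is a placeholder name.
    y, sr = librosa.load("input.wav", sr=None, mono=True)

    # Global pitch shift of +1 semitone, a cue often used for positive affect.
    y = librosa.effects.pitch_shift(y, sr=sr, n_steps=1.0)

    # Crude vibrato: read the signal along a sinusoidally warped time axis.
    rate_hz, depth_s = 8.5, 0.0005          # ~8.5 Hz modulation, small depth
    t = np.arange(len(y)) / sr
    warped = np.clip((t + depth_s * np.sin(2 * np.pi * rate_hz * t)) * sr,
                     0, len(y) - 1)
    y = np.interp(warped, np.arange(len(y)), y)

    sf.write("output.wav", y, sr)

A real-time version, as described above, would process short buffers from the microphone instead of whole files in order to stay under the 20-ms latency budget.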


Subject(s)
Cues , Emotions , Interpersonal Relations , Nonverbal Communication/psychology , Speech , Verbal Behavior , Computer Simulation , Female , Humans , Language , Male , Speech Perception
3.
Clin Neurophysiol; 135: 154-161, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35093702

ABSTRACT

OBJECTIVE: The acoustic characteristics of stimuli influence the characteristics of the corresponding evoked potentials in healthy subjects. Own-name stimuli are used in clinical practice to assess the level of consciousness in intensive care units, but the influence of the acoustic variability of these stimuli has never been evaluated. Here, we explored the influence of this variability on the characteristics of the subject's own name (SON) P300. METHODS: We retrospectively analyzed 251 disorders-of-consciousness patients from Lyon and Paris hospitals who underwent an "own-name protocol". A reverse-correlation analysis was performed to test for an association between the acoustic properties of the own-name stimuli used and the characteristics of the observed P300 wave. RESULTS: Own names pronounced with increasing pitch prosody elicited P300 responses 66 ms earlier than own names pronounced with decreasing prosody [95% CI: 6.36-125.9 ms]. CONCLUSIONS: The speech prosody of the stimuli in the "own-name protocol" is associated with differences in P300 latency among patients for whom these responses were observed. Further investigations are needed to confirm these results. SIGNIFICANCE: The speech prosody of the stimuli in the "own-name protocol" is a non-negligible parameter associated with P300 latency differences, and should be standardized in SON P300 studies.
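As a rough illustration of how a stimulus might be labeled as having increasing or decreasing pitch prosody before such an analysis, the following Python sketch fits a line to an f0 contour. The pyin settings, file name, and linear-slope criterion are assumptions for illustration, not the study's actual procedure.

    import numpy as np
    import librosa

    # "own_name.wav" is a placeholder for one recorded own-name stimulus.
    y, sr = librosa.load("own_name.wav", sr=None)

    # Track f0 with probabilistic YIN; unvoiced frames come back as NaN.
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                 fmax=librosa.note_to_hz("C6"), sr=sr)
    times = librosa.times_like(f0, sr=sr)
    ok = ~np.isnan(f0)

    # The sign of a linear fit over the voiced contour labels the prosody.
    slope = np.polyfit(times[ok], f0[ok], 1)[0]
    print(f"f0 slope = {slope:.1f} Hz/s ->",
          "increasing" if slope > 0 else "decreasing", "prosody")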


Subject(s)
Coma/physiopathology , Electroencephalography/methods , Event-Related Potentials, P300 , Speech Perception , Coma/diagnosis , Electroencephalography/standards , Female , Humans , Male , Semantics , Speech Acoustics
4.
Behav Processes; 172: 104042, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31926279

ABSTRACT

Many animal vocalizations contain nonlinear acoustic phenomena as a consequence of physiological arousal. In humans, nonlinear features are processed early in the auditory system, and are used to efficiently detect alarm calls and other urgent signals. Yet, high-level emotional and semantic contextual factors likely guide the perception and evaluation of roughness features in vocal sounds. Here we examined the relationship between perceived vocal arousal and auditory context. We presented listeners with nonverbal vocalizations (yells of a single vowel) at varying levels of portrayed vocal arousal, in two musical contexts (clean guitar, distorted guitar) and one non-musical context (modulated noise). As predicted, vocalizations with higher levels of portrayed vocal arousal were judged as more negative and more emotionally aroused than the same voices produced with low vocal arousal. Moreover, both the perceived valence and emotional arousal of vocalizations were significantly affected by both musical and non-musical contexts. These results show the importance of auditory context in judging emotional arousal and valence in voices and music, and suggest that nonlinear features in music are processed similarly to communicative vocal signals.
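Of the three contexts, the modulated noise is simple enough to sketch. The Python snippet below generates amplitude-modulated white noise; the modulation rate, depth, and duration are assumptions for illustration, not the study's actual parameters.

    import numpy as np
    import soundfile as sf

    sr, dur = 44100, 3.0                    # sample rate (Hz), duration (s)
    t = np.arange(int(sr * dur)) / sr

    # White noise under a slow sinusoidal amplitude envelope (4 Hz here).
    noise = np.random.default_rng(0).standard_normal(len(t))
    envelope = 0.5 * (1.0 + np.sin(2 * np.pi * 4.0 * t))
    sf.write("modulated_noise.wav",
             (0.2 * envelope * noise).astype(np.float32), sr)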


Subject(s)
Auditory Perception , Emotions , Music , Voice , Adult , Animals , Arousal , Female , Humans , Language , Male , Noise , Young Adult
5.
PLoS One; 14(4): e0205943, 2019.
Article in English | MEDLINE | ID: mdl-30947281

ABSTRACT

Over the past few years, the field of visual social cognition and face processing has been dramatically impacted by a series of data-driven studies employing computer-graphics tools to synthesize arbitrary meaningful facial expressions. In the auditory modality, reverse correlation is traditionally used to characterize sensory processing at the level of spectral or spectro-temporal stimulus properties, but not the higher-level cognitive processing of, e.g., words, sentences, or music, for lack of tools able to manipulate the stimulus dimensions that are relevant for these processes. Here, we present an open-source audio-transformation toolbox, called CLEESE, able to systematically randomize the prosody/melody of existing speech and music recordings. CLEESE works by cutting recordings into short successive time segments (e.g., every successive 100 milliseconds of a spoken utterance) and applying a random parametric transformation to each segment's pitch, duration, or amplitude, using a new Python-language implementation of the phase-vocoder digital audio technique. We present two applications of the tool, generating stimuli for the study of intonation processing in interrogative vs. declarative speech and of rhythm processing in sung melodies.
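The listing above does not reproduce CLEESE's own API, but the core idea, segment-wise random prosody manipulation, can be sketched with off-the-shelf tools. The following Python snippet re-implements it with librosa's pitch shifter; the segment length follows the 100-ms example above, while the +/-200-cent range and file names are illustrative assumptions.

    import numpy as np
    import librosa
    import soundfile as sf

    y, sr = librosa.load("utterance.wav", sr=None, mono=True)

    seg_len = int(0.1 * sr)                 # 100-ms segments, as above
    rng = np.random.default_rng()
    out = []
    for start in range(0, len(y), seg_len):
        seg = y[start:start + seg_len]
        cents = rng.uniform(-200, 200)      # random pitch offset per segment
        out.append(librosa.effects.pitch_shift(seg, sr=sr,
                                               n_steps=cents / 100))

    sf.write("randomized_prosody.wav", np.concatenate(out), sr)

Unlike this naive version, which can click at segment boundaries, a phase-vocoder implementation such as the one described above can interpolate the random breakpoints smoothly across segments.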


Subject(s)
Cognition/physiology , Music , Speech Perception/physiology , Speech/physiology , Female , Humans , Male