Results 1 - 4 of 4
1.
Chem Senses; 2024 Sep 23.
Article in English | MEDLINE | ID: mdl-39311704

ABSTRACT

The Social Odor Scale (SOS) is a 12-item questionnaire, initially developed and validated in Italian and German, that investigates self-reported awareness of social odors: odors emanating from the human body that convey diverse information and evoke various emotional responses. The scale includes a total score and three subscales representing social odors in the respective categories: romantic partner, familiar, and strangers. Here, we aimed to (i) replicate the validation of the Italian and German versions of the SOS, (ii) translate the SOS into multiple additional languages (French, English, Dutch, Swedish, Chinese) and validate these versions, and (iii) explore whether the factor structure of each translated version aligns with that of the original versions. Confirmatory factor analysis (CFA) supported the scale's structure, yielding a good fit across all languages. Notable differences in SOS mean scores were observed among the languages: Swedish participants exhibited lower social odor awareness than the other groups, whereas Chinese participants reported higher social odor awareness than Dutch and Swedish participants. Furthermore, SOS scores correlated with respondents' geographical location, with higher (i.e., more northern) latitudes linked to lower social odor awareness. These results corroborate the SOS as a valid and reliable instrument, especially for the SOS total score and the Familiar and Partner factors, and emphasize the influence of individual and geographic factors on social odor awareness.
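The latitude finding is, at its core, a correlation between geographic location and questionnaire score. A minimal sketch of that kind of analysis, using invented illustrative numbers (not the study's data):

```python
import numpy as np

# Hypothetical site-level data: mean latitude of each sample and its mean
# SOS total score. All values are invented for illustration only.
latitude = np.array([41.9, 52.5, 48.9, 51.5, 52.4, 59.3, 39.9])
sos_total = np.array([3.4, 3.1, 3.2, 3.2, 3.0, 2.7, 3.6])

# Pearson correlation between latitude and social odor awareness
r = np.corrcoef(latitude, sos_total)[0, 1]
print(round(r, 2))  # negative: higher (more northern) latitude, lower awareness
```

With data shaped like the reported pattern, the coefficient comes out strongly negative.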

2.
Brain Topogr; 36(6): 854-869, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37639111

ABSTRACT

Seamlessly extracting emotional information from voices is crucial for efficient interpersonal communication. However, it remains unclear how the brain categorizes vocal expressions of emotion beyond the processing of their acoustic features. In our study, we developed a new approach combining electroencephalographic (EEG) recordings in humans with a frequency-tagging paradigm to 'tag' automatic neural responses to specific categories of emotion expressions. Participants were presented with a periodic stream of heterogeneous non-verbal emotional vocalizations belonging to five emotion categories (anger, disgust, fear, happiness, and sadness) at 2.5 Hz (stimulus length of 350 ms, with a 50 ms silent gap between stimuli). Importantly, unknown to the participant, a specific emotion category appeared at a target presentation rate of 0.83 Hz that would elicit an additional response in the EEG spectrum only if the brain discriminates the target emotion category from the other emotion categories and generalizes across heterogeneous exemplars of the target category. Stimuli were matched across emotion categories for harmonicity-to-noise ratio, spectral center of gravity, and pitch. Additionally, participants were presented with a scrambled version of the stimuli with identical spectral content and periodicity but disrupted intelligibility. Both types of sequences had comparable envelopes and early auditory peripheral processing, computed via simulation of the cochlear response. We observed that, in addition to the responses at the general presentation frequency (2.5 Hz) in both intact and scrambled sequences, a greater peak in the EEG spectrum at the target emotion presentation rate (0.83 Hz) and its harmonics emerged in the intact sequence compared with the scrambled sequence. This greater response at the target frequency in the intact sequence, together with our stimulus-matching procedure, suggests that the categorical brain response elicited by a specific emotion is at least partially independent of the low-level acoustic features of the sounds. Moreover, responses at the presentation rates of fearful and happy vocalizations elicited different topographies and temporal dynamics, suggesting that different discrete emotions are represented differently in the brain. Our paradigm revealed the brain's ability to automatically categorize non-verbal vocal emotion expressions objectively (at a predefined frequency of interest), without requiring a behavioral task, rapidly (in a few minutes of recording time), and robustly (with a high signal-to-noise ratio), making it a useful tool for studying vocal emotion processing and auditory categorization in general, and in populations where behavioral assessments are more challenging.
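The logic of the frequency-tagging analysis is that a neural response following the tagged category shows up as a peak in the EEG amplitude spectrum at exactly the tagging rate (here 2.5/3 Hz, the 0.83 Hz quoted above). A minimal synthetic sketch of that logic; the sampling rate, amplitudes, and noise level are assumptions, not values from the study:

```python
import numpy as np

fs = 250.0                    # sampling rate in Hz (assumed)
dur = 60.0                    # 60 s of signal -> 1/60 Hz frequency resolution
t = np.arange(0, dur, 1 / fs)

f_base = 2.5                  # general stimulation rate
f_target = 2.5 / 3            # target emotion rate (~0.83 Hz)

# Simulated steady-state signal: a response to every stimulus, a smaller
# category-selective response, and broadband noise
rng = np.random.default_rng(0)
eeg = (1.0 * np.sin(2 * np.pi * f_base * t)
       + 0.5 * np.sin(2 * np.pi * f_target * t)
       + 0.8 * rng.standard_normal(t.size))

freqs = np.fft.rfftfreq(t.size, 1 / fs)
amp = np.abs(np.fft.rfft(eeg)) * 2 / t.size   # amplitude spectrum

def amp_at(f):
    """Amplitude at the FFT bin closest to frequency f."""
    return amp[np.argmin(np.abs(freqs - f))]

print(amp_at(f_base), amp_at(f_target))  # both stand far above the noise floor
```

The 60 s duration is chosen so that both tagged frequencies fall exactly on FFT bins, which is why real frequency-tagging designs use presentation rates that divide the recording length cleanly.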


Subjects
Brain, Emotions, Humans, Emotions/physiology, Brain/physiology, Anger, Happiness, Fear
3.
Curr Biol; 34(1): 46-55.e4, 2024 Jan 8.
Article in English | MEDLINE | ID: mdl-38096819

ABSTRACT

Voices are the most relevant social sounds for humans and therefore have crucial adaptive value in development. Neuroimaging studies in adults have demonstrated the existence of regions in the superior temporal sulcus that respond preferentially to voices. Yet, whether voices represent a functionally specific category in the young infant's mind is largely unknown. We developed a highly sensitive paradigm relying on fast periodic auditory stimulation (FPAS) combined with scalp electroencephalography (EEG) to demonstrate that the infant brain implements a reliable preferential response to voices early in life. Twenty-three 4-month-old infants listened to sequences containing non-vocal sounds from different categories presented at 3.33 Hz, with highly heterogeneous vocal sounds appearing every third stimulus (1.11 Hz). We were able to isolate a voice-selective response over temporal regions, and individual voice-selective responses were found in most infants within only a few minutes of stimulation. This selective response was significantly reduced for the same frequency-scrambled sounds, indicating that voice selectivity is not simply driven by the envelope and the spectral content of the sounds. Such a robust selective response to voices as early as 4 months of age suggests that the infant brain is endowed with the ability to rapidly develop a functional selectivity to this socially relevant category of sounds.
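Individual responses like the 1.11 Hz voice-selective peak are commonly quantified as a signal-to-noise ratio: the amplitude at the target bin divided by the mean amplitude of neighbouring bins. A sketch of that computation on a synthetic spectrum (the neighbourhood size and the spectrum values are assumptions, following a common fast-periodic-stimulation convention):

```python
import numpy as np

def snr_at_bin(amp, idx, n_neigh=10, skip=1):
    """Amplitude at bin idx divided by the mean of n_neigh bins on each
    side, skipping the bins immediately adjacent to the target."""
    noise = np.r_[amp[idx - skip - n_neigh: idx - skip],
                  amp[idx + skip + 1: idx + skip + 1 + n_neigh]]
    return amp[idx] / noise.mean()

# Synthetic amplitude spectrum: a roughly flat noise floor with a single
# tagged response added at bin 100
rng = np.random.default_rng(1)
amp = 0.05 + 0.01 * rng.random(500)
amp[100] += 0.4

print(round(snr_at_bin(amp, 100), 1))   # large SNR at the tagged bin
```

At a noise-only bin the same computation hovers around 1, which is what makes the measure useful for deciding whether an individual infant shows the response.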


Subjects
Auditory Perception, Voice, Adult, Infant, Humans, Auditory Perception/physiology, Brain/physiology, Temporal Lobe/physiology, Acoustic Stimulation, Brain Mapping
4.
eNeuro; 8(3), 2021.
Article in English | MEDLINE | ID: mdl-34016602

ABSTRACT

Voices are arguably among the most relevant sounds in humans' everyday life, and several studies have suggested the existence of voice-selective regions in the human brain. Despite two decades of research, defining the human brain regions supporting voice recognition remains challenging. Moreover, whether neural selectivity to voices is merely driven by acoustic properties specific to human voices (e.g., spectrogram, harmonicity), or whether it also reflects a higher-level categorization response, is still under debate. Here, we objectively measured rapid automatic categorization responses to human voices with fast periodic auditory stimulation (FPAS) combined with electroencephalography (EEG). Participants were tested with stimulation sequences containing heterogeneous non-vocal sounds from different categories presented at 4 Hz (i.e., four stimuli per second), with vocal sounds appearing every third stimulus (1.333 Hz). A few minutes of stimulation are sufficient to elicit robust 1.333 Hz voice-selective focal brain responses over superior temporal regions of individual participants. This response is virtually absent for sequences using frequency-scrambled sounds, but is clearly observed when voices are presented among sounds from musical instruments matched for pitch and harmonicity-to-noise ratio (HNR). Overall, our FPAS paradigm demonstrates that the human brain seamlessly categorizes human voices when compared with other sounds, including musical instrument sounds matched for low-level acoustic features, and that voice-selective responses are at least partially independent of low-level acoustic features, making it a powerful and versatile tool for studying human auditory categorization in general.
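The tagged response is typically spread across harmonics of the 1.333 Hz rate, and harmonics that coincide with the 4 Hz base rate (e.g., the third harmonic, 3 x 1.333 Hz = 4 Hz) have to be excluded before summing, since those bins also contain the general response to every sound. A sketch of that bookkeeping, with an assumed bin layout and synthetic amplitudes:

```python
import numpy as np

def summed_target_response(freqs, amp, f_target=4 / 3, f_base=4.0, n_harm=12):
    """Sum amplitudes at harmonics of f_target, skipping harmonics that
    coincide with harmonics of the base stimulation rate f_base."""
    total = 0.0
    for k in range(1, n_harm + 1):
        f = k * f_target
        ratio = f / f_base
        if abs(ratio - round(ratio)) < 1e-6:   # e.g. 3 * 1.333 Hz == 4 Hz
            continue
        total += amp[np.argmin(np.abs(freqs - f))]
    return total

# Synthetic spectrum with 1/60 Hz resolution (as in a 60 s recording):
# a flat floor plus peaks at the first few non-shared target harmonics
freqs = np.arange(0, 20, 1 / 60)
amp = np.full(freqs.size, 0.02)
for k in (1, 2, 4, 5):
    amp[np.argmin(np.abs(freqs - k * 4 / 3))] = 0.1

print(round(summed_target_response(freqs, amp), 2))
```

Only the non-shared harmonics contribute signal here; the remaining harmonics add the floor value, so the sum directly reflects the category-selective part of the spectrum.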


Subjects
Auditory Perception, Brain, Acoustic Stimulation, Humans, Sound, Temporal Lobe