Results 1 - 4 of 4
1.
Curr Biol ; 34(1): 46-55.e4, 2024 01 08.
Article in English | MEDLINE | ID: mdl-38096819

ABSTRACT

Voices are the most relevant social sounds for humans and therefore have crucial adaptive value in development. Neuroimaging studies in adults have demonstrated the existence of regions in the superior temporal sulcus that respond preferentially to voices. Yet, whether voices represent a functionally specific category in the young infant's mind is largely unknown. We developed a highly sensitive paradigm relying on fast periodic auditory stimulation (FPAS) combined with scalp electroencephalography (EEG) to demonstrate that the infant brain implements a reliable preferential response to voices early in life. Twenty-three 4-month-old infants listened to sequences containing non-vocal sounds from different categories presented at 3.33 Hz, with highly heterogeneous vocal sounds appearing every third stimulus (1.11 Hz). We were able to isolate a voice-selective response over temporal regions, and individual voice-selective responses were found in most infants within only a few minutes of stimulation. This selective response was significantly reduced for the same frequency-scrambled sounds, indicating that voice selectivity is not simply driven by the envelope and the spectral content of the sounds. Such a robust selective response to voices as early as 4 months of age suggests that the infant brain is endowed with the ability to rapidly develop a functional selectivity to this socially relevant category of sounds.
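For readers unfamiliar with FPAS designs, the following minimal Python sketch illustrates the stimulation schedule described in this abstract: a 3.33 Hz base stream of non-vocal sounds in which every third stimulus is a voice (3.33 / 3 ≈ 1.11 Hz). The file names and pool sizes are hypothetical placeholders, not the authors' actual stimuli or code.

```python
# Minimal sketch of the FPAS design described above (not the authors' code).
# Assumes a 3.33 Hz base rate with every third stimulus drawn from the
# voice category, giving a voice-presentation rate of ~1.11 Hz.
import random

BASE_RATE_HZ = 3.33            # general stimulation rate
ODDBALL_EVERY = 3              # voices appear every third stimulus
SEQUENCE_LEN = 60              # number of stimuli in one sequence

# Hypothetical stimulus pools; the study used heterogeneous recordings.
voice_pool = [f"voice_{i:02d}.wav" for i in range(20)]
nonvoice_pool = [f"nonvocal_{i:02d}.wav" for i in range(40)]

def build_sequence(n=SEQUENCE_LEN):
    """Return (onset_time_s, filename) pairs for one FPAS sequence."""
    soa = 1.0 / BASE_RATE_HZ   # stimulus onset asynchrony, ~300 ms
    seq = []
    for i in range(n):
        pool = voice_pool if (i + 1) % ODDBALL_EVERY == 0 else nonvoice_pool
        seq.append((round(i * soa, 3), random.choice(pool)))
    return seq

if __name__ == "__main__":
    for onset, wav in build_sequence(9):
        print(f"{onset:6.3f} s  {wav}")
```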


Subject(s)
Auditory Perception, Voice, Adult, Infant, Humans, Auditory Perception/physiology, Brain/physiology, Temporal Lobe/physiology, Acoustic Stimulation, Brain Mapping
2.
Brain Topogr ; 36(6): 854-869, 2023 11.
Article in English | MEDLINE | ID: mdl-37639111

ABSTRACT

Seamlessly extracting emotional information from voices is crucial for efficient interpersonal communication. However, it remains unclear how the brain categorizes vocal expressions of emotion beyond the processing of their acoustic features. In our study, we developed a new approach combining electroencephalographic (EEG) recordings in humans with a frequency-tagging paradigm to 'tag' automatic neural responses to specific categories of emotion expressions. Participants were presented with a periodic stream of heterogeneous non-verbal emotional vocalizations belonging to five emotion categories (anger, disgust, fear, happiness, and sadness) at 2.5 Hz (stimulus duration of 350 ms with a 50 ms silent gap between stimuli). Importantly, unbeknownst to the participants, a specific emotion category appeared at a target presentation rate of 0.83 Hz, which would elicit an additional response in the EEG spectrum only if the brain discriminates the target emotion category from the other emotion categories and generalizes across heterogeneous exemplars of that category. Stimuli were matched across emotion categories for harmonicity-to-noise ratio, spectral center of gravity, and pitch. Additionally, participants were presented with a scrambled version of the stimuli with identical spectral content and periodicity but disrupted intelligibility. Both types of sequences had comparable envelopes and comparable early auditory peripheral processing, as computed via a simulation of the cochlear response. We observed that, in addition to responses at the general presentation frequency (2.5 Hz) in both intact and scrambled sequences, a greater peak in the EEG spectrum at the target emotion presentation rate (0.83 Hz) and its harmonics emerged in the intact sequence compared with the scrambled sequence. This greater response at the target frequency in the intact sequence, together with our stimulus-matching procedure, suggests that the categorical brain response elicited by a specific emotion is at least partially independent of the low-level acoustic features of the sounds. Moreover, responses at the presentation rates of fearful and happy vocalizations showed different topographies and different temporal dynamics, suggesting that discrete emotions are represented differently in the brain. Our paradigm reveals the brain's ability to categorize non-verbal vocal emotion expressions automatically, objectively (at a predefined frequency of interest), without a behavioral task, rapidly (within a few minutes of recording time), and robustly (with a high signal-to-noise ratio), making it a useful tool for studying vocal emotion processing and auditory categorization in general, including in populations where behavioral assessments are more challenging.
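The 0.83 Hz target rate equals 2.5/3 Hz, which suggests the tagged emotion appeared as every third stimulus. The sketch below illustrates the generic frequency-tagging analysis such a paradigm implies: quantifying the spectral amplitude at the target rate against neighboring frequency bins. It runs on synthetic data with assumed parameters (sampling rate, bin neighborhood) and is not the authors' pipeline.

```python
# Sketch of a generic frequency-tagging analysis (our illustration, not the
# published pipeline): the response at the target emotion rate (~0.83 Hz)
# is quantified as an SNR against neighbouring frequency bins.
import numpy as np

FS = 250.0                     # assumed EEG sampling rate (Hz)
TARGET_HZ = 2.5 / 3            # ~0.83 Hz, i.e., every third stimulus

def snr_at(freqs, amp, f0, n_neighbors=10, skip=1):
    """Amplitude at f0 divided by the mean of surrounding bins."""
    idx = np.argmin(np.abs(freqs - f0))
    lo = amp[idx - skip - n_neighbors: idx - skip]
    hi = amp[idx + skip + 1: idx + skip + 1 + n_neighbors]
    return amp[idx] / np.mean(np.r_[lo, hi])

# Synthetic signal: noise plus a small component at the target frequency.
t = np.arange(0, 120, 1 / FS)                      # 2 min of data
eeg = np.random.randn(t.size) + 0.3 * np.sin(2 * np.pi * TARGET_HZ * t)

amp = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / FS)
print(f"SNR at {TARGET_HZ:.2f} Hz: {snr_at(freqs, amp, TARGET_HZ):.2f}")
```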


Subject(s)
Brain, Emotions, Humans, Emotions/physiology, Brain/physiology, Anger, Happiness, Fear
3.
eNeuro ; 8(3)2021.
Article in English | MEDLINE | ID: mdl-34016602

ABSTRACT

Voices are arguably among the most relevant sounds in humans' everyday life, and several studies have suggested the existence of voice-selective regions in the human brain. Despite two decades of research, defining the human brain regions supporting voice recognition remains challenging. Moreover, whether neural selectivity to voices is merely driven by acoustic properties specific to human voices (e.g., spectrogram, harmonicity), or whether it also reflects a higher-level categorization response, is still under debate. Here, we objectively measured rapid automatic categorization responses to human voices with fast periodic auditory stimulation (FPAS) combined with electroencephalography (EEG). Participants were tested with stimulation sequences containing heterogeneous non-vocal sounds from different categories presented at 4 Hz (i.e., four stimuli per second), with vocal sounds appearing every third stimulus (1.333 Hz). A few minutes of stimulation are sufficient to elicit robust 1.333 Hz voice-selective focal brain responses over the superior temporal regions of individual participants. This response is virtually absent for sequences using frequency-scrambled sounds, but is clearly observed when voices are presented among sounds from musical instruments matched for pitch and harmonicity-to-noise ratio (HNR). Overall, our FPAS paradigm demonstrates that the human brain seamlessly categorizes human voices relative to other sounds, including musical-instrument sounds matched for low-level acoustic features, and that voice-selective responses are at least partially independent of low-level acoustic features, making FPAS a powerful and versatile tool for understanding human auditory categorization in general.
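Frequency-tagging analyses of this kind typically sum the response over harmonics of the oddball rate (here 1.333 Hz) while excluding frequencies that coincide with the base rate (4 Hz) and its harmonics. The sketch below shows that bookkeeping; it is an illustration under those assumptions, not the published analysis code.

```python
# Illustrative sketch (assumed conventions, not the published analysis):
# the voice-selective response is summed over harmonics of 1.333 Hz,
# skipping harmonics that coincide with the 4 Hz base rate.
import numpy as np

BASE_HZ = 4.0
ODDBALL_HZ = BASE_HZ / 3       # 1.333 Hz voice-presentation rate

def oddball_harmonics(f_odd, f_base, f_max=20.0, tol=1e-6):
    """Harmonics of the oddball rate that are not also base-rate harmonics."""
    harms = []
    k = 1
    while k * f_odd <= f_max:
        f = k * f_odd
        if abs(f / f_base - round(f / f_base)) > tol:  # skip 4, 8, 12 Hz...
            harms.append(f)
        k += 1
    return harms

def summed_response(freqs, baseline_corrected_amp, harmonics):
    """Sum baseline-corrected amplitude at the nearest bin to each harmonic."""
    idx = [np.argmin(np.abs(freqs - f)) for f in harmonics]
    return float(np.sum(baseline_corrected_amp[idx]))

print(oddball_harmonics(ODDBALL_HZ, BASE_HZ))
# -> [1.333..., 2.666..., 5.333..., 6.666..., 9.333..., ...]
```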


Subject(s)
Auditory Perception, Brain, Acoustic Stimulation, Humans, Sound, Temporal Lobe
4.
J Clin Med ; 9(6)2020 Jun 17.
Article in English | MEDLINE | ID: mdl-32560431

ABSTRACT

BACKGROUND: Heart rate recovery (HRR) is a marker of vagal tone, which is a powerful predictor of mortality in patients with cardiovascular disease. Sacubitril/valsartan (S/V) is a treatment for heart failure with reduced ejection fraction (HFrEF) that markedly improves cardiovascular outcomes. This study aimed to evaluate the effects of S/V on HRR and its correlation with cardiopulmonary indexes in HFrEF patients. METHODS: Patients with HFrEF admitted to outpatient services were screened for study inclusion. S/V was administered according to guidelines, with up-titration performed every 4 weeks when tolerated. All patients underwent laboratory measurements, Doppler echocardiography, and cardiopulmonary exercise testing (CPET) at baseline and at 12-month follow-up. RESULTS: The study population consisted of 134 HFrEF patients (87% male, mean age 57.9 ± 9.6 years). At 12-month follow-up, significant improvements were observed in left ventricular ejection fraction (from 28% ± 5.8% to 31.8% ± 7.3%, p < 0.0001), peak exercise oxygen consumption (VO2peak) (from 15.3 ± 3.7 to 17.8 ± 4.2 mL/kg/min, p < 0.0001), the slope of the increase in ventilation over carbon dioxide output (VE/VCO2 slope) (from 33.4 ± 6.2 to 30.3 ± 6.5, p < 0.0001), and HRR (from 11.4 ± 9.5 to 17.4 ± 15.1 bpm, p = 0.004). Changes in HRR were significantly correlated with changes in VE/VCO2 slope (r = -0.330; p = 0.003). After adjusting for potential confounding factors, multivariate analysis showed that changes in HRR were significantly associated with changes in VE/VCO2 slope (Beta (B) = -0.975, standard error (SE) = 0.364, standardized Beta coefficient (Bstd) = -0.304, p = 0.009). S/V therapy was also associated with a significant reduction in exercise oscillatory ventilation (EOV) at CPET (28 cases of EOV detected at baseline CPET vs. 9 at 12-month follow-up, p < 0.001). HRR at baseline CPET was a significant predictor of EOV at 12-month follow-up (B = -2.065, SE = 0.354, p < 0.001). CONCLUSIONS: In HFrEF patients, S/V therapy improves autonomic function, functional capacity, and ventilation. Whether these findings translate into beneficial effects on prognosis and outcome remains to be elucidated.
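As a rough illustration of the reported HRR/VE/VCO2-slope association, the sketch below runs a simple correlation and bivariate regression on synthetic data sized to the abstract's cohort. The effect sizes and distributions are invented for demonstration; the actual study adjusted for clinical confounders in a multivariate model.

```python
# Hedged sketch of the association analysis reported above, on synthetic
# data (invented effect sizes; the study used adjusted multivariate models).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 134                                   # cohort size from the abstract

# Hypothetical 12-month changes: HRR improvement vs. VE/VCO2-slope reduction.
delta_ve_vco2 = rng.normal(-3.1, 4.0, n)  # change in VE/VCO2 slope
delta_hrr = -0.3 * delta_ve_vco2 + rng.normal(6.0, 9.0, n)  # change in bpm

r, p = stats.pearsonr(delta_hrr, delta_ve_vco2)
slope, intercept, _r, _p, se = stats.linregress(delta_ve_vco2, delta_hrr)
print(f"r = {r:.3f} (p = {p:.3g}); B = {slope:.3f} ± {se:.3f}")
```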
