Results 1 - 6 of 6
1.
eNeuro; 11(5), 2024 May.
Article in English | MEDLINE | ID: mdl-38702194

ABSTRACT

Elicited upon violation of regularity in stimulus presentation, the mismatch negativity (MMN) reflects the brain's ability to perform automatic comparisons between consecutive stimuli and provides an electrophysiological index of sensory error detection, whereas the P300 is associated with cognitive processes such as the updating of working memory. To date, MMN and P300 have each been studied extensively because of their potential to serve as clinical markers of consciousness and attention, respectively. Here, we use an unsupervised and rigorous source-estimation approach to explore the cortical generators underlying MMN and P300, in the context of prediction-error propagation along the hierarchies of brain information processing in healthy human participants. Existing methods of characterizing the two ERPs involve only approximate estimates of their amplitudes and latencies based on specific sensors of interest. Our objective is twofold: first, we introduce a novel, data-driven, unsupervised approach that computes the latencies and amplitudes of ERP components accurately on an individual-subject basis and reconfirms earlier findings. Second, we demonstrate that in multisensory environments, MMN generators seem to reflect a significant overlap of "modality-specific" and "modality-independent" information processing, while P300 generators mark a shift toward completely "modality-independent" processing. Using EEG experiments, our study advances the earlier understanding that multisensory contexts speed up early sensory processing by revealing that this temporal facilitation extends even to the later components of prediction-error processing. Such knowledge can be of value to clinical research for characterizing key stages of lifespan aging, schizophrenia, and depression.


Subject(s)
Electroencephalography, Event-Related Potentials, P300, Humans, Male, Female, Adult, Electroencephalography/methods, Young Adult, Event-Related Potentials, P300/physiology, Auditory Perception/physiology, Cerebral Cortex/physiology, Acoustic Stimulation/methods, Evoked Potentials/physiology
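
As a purely illustrative sketch of the kind of per-subject, data-driven ERP peak estimation this abstract describes (the study's actual algorithm is not detailed here), the following Python snippet locates MMN- and P300-like extrema in a difference wave. The latency windows, sampling rate, and synthetic data are assumptions for the demo, not the paper's parameters.

```python
import numpy as np

def erp_peak(waveform, times, window, polarity):
    """Return (latency_s, amplitude) of the extremal deflection of a
    1-D evoked waveform inside a latency window. polarity=-1 picks the
    most negative peak (e.g., MMN); polarity=+1 the most positive (P300)."""
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.argmax(polarity * waveform[mask])   # extremum within the window
    return times[mask][idx], waveform[mask][idx]

# Synthetic deviant-minus-standard difference wave sampled at 500 Hz;
# the latency windows below are common conventions, not the study's values.
fs = 500.0
times = np.arange(-0.1, 0.6, 1.0 / fs)
rng = np.random.default_rng(0)
diff_wave = rng.normal(0.0, 0.2, times.size)                # noise floor
diff_wave += -2.0 * np.exp(-((times - 0.15) / 0.03) ** 2)   # MMN-like dip
diff_wave += 3.0 * np.exp(-((times - 0.35) / 0.05) ** 2)    # P300-like peak

mmn_lat, mmn_amp = erp_peak(diff_wave, times, (0.10, 0.25), polarity=-1)
p3_lat, p3_amp = erp_peak(diff_wave, times, (0.25, 0.50), polarity=+1)
print(f"MMN:  {mmn_lat * 1e3:.0f} ms, {mmn_amp:.2f} uV")
print(f"P300: {p3_lat * 1e3:.0f} ms, {p3_amp:.2f} uV")
```
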
2.
Curr Biol; 34(1): 46-55.e4, 2024 Jan 8.
Article in English | MEDLINE | ID: mdl-38096819

ABSTRACT

Voices are the most relevant social sounds for humans and therefore have crucial adaptive value in development. Neuroimaging studies in adults have demonstrated the existence of regions in the superior temporal sulcus that respond preferentially to voices. Yet whether voices represent a functionally specific category in the young infant's mind is largely unknown. We developed a highly sensitive paradigm relying on fast periodic auditory stimulation (FPAS) combined with scalp electroencephalography (EEG) to demonstrate that the infant brain generates a reliable preferential response to voices early in life. Twenty-three 4-month-old infants listened to sequences of non-vocal sounds from different categories presented at 3.33 Hz, with highly heterogeneous vocal sounds appearing as every third stimulus (1.11 Hz). We were able to isolate a voice-selective response over temporal regions, and individual voice-selective responses were found in most infants within only a few minutes of stimulation. This selective response was significantly reduced for frequency-scrambled versions of the same sounds, indicating that voice selectivity is not simply driven by the envelope and spectral content of the sounds. Such a robust selective response to voices as early as 4 months of age suggests that the infant brain is endowed with the ability to rapidly develop functional selectivity to this socially relevant category of sounds.


Subject(s)
Auditory Perception, Voice, Adult, Infant, Humans, Auditory Perception/physiology, Brain/physiology, Temporal Lobe/physiology, Acoustic Stimulation, Brain Mapping
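
A minimal sketch of the frequency-tagging analysis behind such FPAS results: the amplitude at the tagged frequency is compared against neighboring FFT bins to obtain a signal-to-noise ratio. Recording length, bin counts, and the synthetic single-channel signal are illustrative assumptions, not the study's exact parameters.

```python
import numpy as np

def tagged_snr(eeg, fs, f_target, n_neighbors=10, skip=1):
    """Amplitude at a tagged frequency divided by the mean amplitude of
    surrounding FFT bins, a standard frequency-tagging SNR measure."""
    amp = np.abs(np.fft.rfft(eeg)) / eeg.size
    freqs = np.fft.rfftfreq(eeg.size, 1.0 / fs)
    b = int(np.argmin(np.abs(freqs - f_target)))
    # Noise estimate: n_neighbors bins on each side, skipping the bins
    # immediately adjacent to the target to limit spectral leakage.
    noise_bins = np.r_[b - skip - n_neighbors : b - skip,
                       b + skip + 1 : b + skip + 1 + n_neighbors]
    return amp[b] / amp[noise_bins].mean()

# 100 s of synthetic data at 250 Hz with a weak 1.11 Hz component
# buried in noise (all values chosen for the demo).
fs, dur = 250.0, 100.0
t = np.arange(0.0, dur, 1.0 / fs)
rng = np.random.default_rng(1)
eeg = rng.normal(0.0, 1.0, t.size) + 0.3 * np.sin(2 * np.pi * 1.11 * t)
print(f"SNR at 1.11 Hz: {tagged_snr(eeg, fs, 1.11):.2f}")
```
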
3.
Brain Topogr; 36(6): 854-869, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37639111

ABSTRACT

Seamlessly extracting emotional information from voices is crucial for efficient interpersonal communication. However, it remains unclear how the brain categorizes vocal expressions of emotion beyond the processing of their acoustic features. In our study, we developed a new approach combining electroencephalographic (EEG) recordings in humans with a frequency-tagging paradigm to 'tag' automatic neural responses to specific categories of emotion expressions. Participants were presented with a periodic stream of heterogeneous non-verbal emotional vocalizations belonging to five emotion categories (anger, disgust, fear, happiness, and sadness) at 2.5 Hz (stimulus duration of 350 ms with a 50 ms silent gap between stimuli). Importantly, and unknown to the participants, a specific emotion category appeared at a target presentation rate of 0.83 Hz, which would elicit an additional response in the EEG spectrum only if the brain discriminated the target emotion category from the other emotion categories and generalized across its heterogeneous exemplars. Stimuli were matched across emotion categories for harmonicity-to-noise ratio, spectral center of gravity, and pitch. Additionally, participants were presented with a scrambled version of the stimuli with identical spectral content and periodicity but disrupted intelligibility. The two types of sequences had comparable envelopes and comparable early auditory peripheral processing, as computed via a simulation of the cochlear response. We observed that, in addition to the responses at the general presentation frequency (2.5 Hz) in both intact and scrambled sequences, a greater peak at the target emotion presentation rate (0.83 Hz) and its harmonics emerged in the EEG spectrum for the intact sequence than for the scrambled sequence. This greater response at the target frequency in the intact sequence, together with our stimulus-matching procedure, suggests that the categorical brain response elicited by a specific emotion is at least partially independent of the low-level acoustic features of the sounds. Moreover, responses at the fearful and happy vocalization presentation rates showed different topographies and different temporal dynamics, suggesting that different discrete emotions are represented differently in the brain. Our paradigm reveals the brain's ability to categorize non-verbal vocal emotion expressions automatically, objectively (at a predefined frequency of interest), without requiring a behavioral task, rapidly (within a few minutes of recording), and robustly (with a high signal-to-noise ratio), making it a useful tool for studying vocal emotion processing and auditory categorization in general, including in populations where behavioral assessments are more challenging.


Subject(s)
Brain, Emotions, Humans, Emotions/physiology, Brain/physiology, Anger, Happiness, Fear
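
The response "at the target emotion presentation rate (0.83 Hz) and its harmonics" mentioned in this abstract is commonly quantified by summing baseline-corrected amplitudes over the target's harmonics while excluding harmonics shared with the base rate. A hedged sketch of that common convention, with bin offsets and harmonic count chosen arbitrarily for illustration:

```python
import numpy as np

def summed_harmonic_response(amp, freqs, f_target, f_base, n_harmonics=6):
    """Sum baseline-corrected amplitudes over harmonics of the target
    rate, skipping harmonics that coincide with the base-rate response."""
    total = 0.0
    for k in range(1, n_harmonics + 1):
        f = k * f_target
        ratio = f / f_base
        if np.isclose(ratio, np.round(ratio)):
            continue  # coincides with the general 2.5 Hz response; exclude
        b = int(np.argmin(np.abs(freqs - f)))
        noise_bins = np.r_[b - 12 : b - 2, b + 3 : b + 13]  # nearby bins
        total += amp[b] - amp[noise_bins].mean()            # baseline-subtract
    return total

# Schematic usage: amp and freqs as obtained from np.abs(np.fft.rfft(x))
# and np.fft.rfftfreq(x.size, 1/fs); here the target rate is f_base / 3.
# response = summed_harmonic_response(amp, freqs, f_target=2.5/3, f_base=2.5)
```
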
4.
eNeuro; 8(3), 2021.
Article in English | MEDLINE | ID: mdl-34016602

ABSTRACT

Voices are arguably among the most relevant sounds in humans' everyday life, and several studies have suggested the existence of voice-selective regions in the human brain. Despite two decades of research, defining the human brain regions supporting voice recognition remains challenging. Moreover, whether neural selectivity to voices is merely driven by acoustic properties specific to human voices (e.g., spectrogram, harmonicity) or whether it also reflects a higher-level categorization response is still under debate. Here, we objectively measured rapid automatic categorization responses to human voices with fast periodic auditory stimulation (FPAS) combined with electroencephalography (EEG). Participants were tested with stimulation sequences containing heterogeneous non-vocal sounds from different categories presented at 4 Hz (i.e., four stimuli per second), with vocal sounds appearing as every third stimulus (1.333 Hz). A few minutes of stimulation were sufficient to elicit a robust 1.333 Hz voice-selective focal brain response over superior temporal regions in individual participants. This response was virtually absent for sequences using frequency-scrambled sounds, but was clearly observed when voices were presented among sounds from musical instruments matched for pitch and harmonicity-to-noise ratio (HNR). Overall, our FPAS paradigm demonstrates that the human brain seamlessly categorizes human voices relative to other sounds, including musical-instrument sounds matched for low-level acoustic features, and that voice-selective responses are at least partially independent of those features, making FPAS a powerful and versatile tool for understanding human auditory categorization in general.


Subject(s)
Auditory Perception, Brain, Acoustic Stimulation, Humans, Sound, Temporal Lobe
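
For concreteness, an FPAS stimulation sequence of this sort (every third stimulus drawn from the voice pool, yielding the 1.333 Hz tag within a 4 Hz stream) could be constructed as below; the file names, pool sizes, and sequence length are hypothetical placeholders, not the study's materials.

```python
import numpy as np

# Build a 4 Hz FPAS stream in which every third stimulus is a voice,
# so voices recur every 750 ms (1.333 Hz).
rng = np.random.default_rng(2)
voice_pool = [f"voice_{i:02d}.wav" for i in range(40)]       # hypothetical files
nonvocal_pool = [f"nonvocal_{i:02d}.wav" for i in range(80)]  # hypothetical files

n_stimuli = 240  # 60 s at 4 stimuli per second
sequence = [rng.choice(voice_pool) if i % 3 == 2 else rng.choice(nonvocal_pool)
            for i in range(n_stimuli)]

# Sanity check: voices occupy indices 2, 5, 8, ...
assert all(s.startswith("voice") == (i % 3 == 2) for i, s in enumerate(sequence))
```
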
5.
Eur J Neurosci; 52(7): 3746-3762, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32304122

ABSTRACT

Perception necessitates interaction among neuronal ensembles, the dynamics of which can be conceptualized as the emergent behavior of coupled dynamical systems. Here, we propose a detailed, neurobiologically realistic model that captures the neural mechanisms of the inter-individual variability observed in cross-modal speech perception. From raw EEG signals recorded from human participants presented with speech vocalizations from McGurk-incongruent and congruent audio-visual (AV) stimuli, we computed the global coherence metric to capture the neural variability of large-scale networks. We found that participants' McGurk susceptibility was negatively correlated with their alpha-band global coherence. The proposed biophysical model conceptualizes global coherence dynamics as emerging from the coupling between interacting neural masses representing sensory-specific auditory/visual areas and modality-nonspecific associative/integrative regions. The model predicts that extremely weak direct AV coupling results in decreased alpha-band global coherence, mimicking the cortical dynamics of participants with higher McGurk susceptibility. Source connectivity analysis also showed decreased connectivity between sensory-specific regions in participants more susceptible to the McGurk effect, providing empirical validation of this prediction. Overall, our study provides an outline for linking variability in structural and functional connectivity metrics to variability in performance, which can be useful for many perception and action task paradigms.


Subject(s)
Auditory Perception, Speech Perception, Brain, Humans, Speech, Visual Perception
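
Global coherence at a given frequency is commonly defined as the largest eigenvalue of the sensor-level cross-spectral matrix divided by the sum of its eigenvalues. The sketch below implements that common definition; the paper's exact estimator, windowing, and channel selection may differ.

```python
import numpy as np
from scipy.signal import csd

def global_coherence(data, fs, nperseg=512):
    """Global coherence per frequency: the largest eigenvalue of the
    sensor cross-spectral matrix divided by the sum of its eigenvalues.

    data: (n_channels, n_samples) EEG array. Returns (freqs, gc)."""
    n_ch = data.shape[0]
    freqs, _ = csd(data[0], data[0], fs=fs, nperseg=nperseg)
    S = np.zeros((freqs.size, n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            _, S[:, i, j] = csd(data[i], data[j], fs=fs, nperseg=nperseg)
    eigvals = np.linalg.eigvalsh(S)          # real, ascending per frequency
    gc = eigvals[:, -1] / eigvals.sum(axis=1)
    return freqs, gc

# Alpha-band global coherence would then be, for example:
# freqs, gc = global_coherence(eeg_array, fs=250)
# alpha_gc = gc[(freqs >= 8) & (freqs <= 12)].mean()
```
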
6.
eNeuro; 6(4), 2019.
Article in English | MEDLINE | ID: mdl-31311804

ABSTRACT

Brain oscillations measured with EEG and MEG shed light on the neurophysiological mechanisms of human behavior. However, to extract information about cortical processing, researchers have to rely on source localization methods, which can be broadly classified into current density estimates, such as exact low-resolution brain electromagnetic tomography (eLORETA) and minimum norm estimates (MNE), and beamformers, such as dynamic imaging of coherent sources (DICS) and linearly constrained minimum variance (LCMV). These algorithms produce a distributed map of brain activity underlying sustained and transient responses during neuroimaging studies of behavior. However, very few comparative analyses evaluate the "ground-truth detection" capabilities of these methods. The current article evaluates the reliability with which these methods estimate the sources of spectral event generators in the cortex, using a two-pronged approach. First, simulated EEG data with point dipoles and distributed dipoles are used to validate the accuracy and sensitivity of each method. The techniques were tested by comparing localization error, focal width, and false-positive (FP) ratios when detecting the known locations of neural activity generators under varying signal-to-noise ratios (SNRs). Second, empirical EEG data recorded during auditory steady-state responses (ASSRs) in human participants were used to compare the distributed nature of the source localizations. All methods successfully recovered point sources in favorable signal-to-noise scenarios and could achieve high hit rates if FPs were ignored. Interestingly, LCMV and DICS generated more focal activation maps than eLORETA, whereas eLORETA offered much better control of FPs. Finally, the drawbacks and strengths of each method are highlighted, with a detailed discussion of how to choose a technique based on empirical requirements.


Subject(s)
Brain Waves, Brain/physiology, Electroencephalography, Magnetoencephalography, Signal Processing, Computer-Assisted, Algorithms, Electrophysiological Phenomena, Humans, Reproducibility of Results, Signal-to-Noise Ratio
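
The evaluation metrics named here (localization error, focal width, FP ratio) admit several definitions; the sketch below uses one plausible set of conventions (peak-based error, half-maximum extent, a fixed FP exclusion radius), which are assumptions rather than the article's exact criteria.

```python
import numpy as np

def localization_metrics(true_pos, est_map, src_pos,
                         extent_frac=0.5, fp_radius=0.02):
    """One plausible set of definitions (assumed, not the article's):
    - localization error: distance from the estimate's peak to the true dipole
    - focal width: maximal distance from the peak among supra-threshold sources
    - FP ratio: fraction of supra-threshold sources farther than fp_radius
      (in metres) from the true dipole.

    true_pos: (3,), est_map: (n_sources,), src_pos: (n_sources, 3)."""
    power = np.abs(est_map)
    peak = int(np.argmax(power))
    loc_error = np.linalg.norm(src_pos[peak] - true_pos)

    active = power >= extent_frac * power.max()          # half-maximum blob
    focal_width = np.linalg.norm(src_pos[active] - src_pos[peak], axis=1).max()

    far = np.linalg.norm(src_pos - true_pos, axis=1) > fp_radius
    fp_ratio = (active & far).sum() / active.sum()
    return loc_error, focal_width, fp_ratio
```
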