1 - 20 of 46
1.
Nat Commun ; 15(1): 3941, 2024 May 10.
Article En | MEDLINE | ID: mdl-38729937

A relevant question concerning inter-areal communication in the cortex is whether such interactions are synergistic. Synergy refers to the complementary effect of multiple brain signals conveying more information than the sum of each isolated signal. Redundancy, on the other hand, refers to the common information shared between brain signals. Here, we dissociated cortical interactions encoding complementary information (synergy) from those sharing common information (redundancy) during prediction error (PE) processing. We analyzed auditory and frontal electrocorticography (ECoG) signals in five awake common marmosets performing two distinct auditory oddball tasks and investigated to what extent event-related potentials (ERP) and broadband (BB) dynamics encoded synergistic and redundant information about PE processing. The information conveyed by ERPs and BB signals was synergistic even at lower stages of the hierarchy in the auditory cortex and between auditory and frontal regions. Using a brain-constrained neural network, we simulated the synergy and redundancy observed in the experimental results and demonstrated that the emergence of synergy between auditory and frontal regions requires the presence of strong, long-distance feedback and feedforward connections. These results indicate that distributed representations of PE signals across the cortical hierarchy can be highly synergistic.
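The synergy/redundancy distinction invoked in this abstract can be illustrated with the simplest quantity that separates the two: the co-information I(X;S) + I(Y;S) - I(X,Y;S), which is positive under net redundancy and negative under net synergy. The sketch below is an illustration only, not the authors' ECoG pipeline; it computes co-information for a binary XOR system, the canonical purely synergistic case:

```python
import itertools
import math
from collections import Counter

def entropy(probs):
    """Shannon entropy in bits of an iterable of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def mutual_info(joint):
    """I(A;B) from a mapping {(a, b): p(a, b)}."""
    pa, pb = Counter(), Counter()
    for (a, b), p in joint.items():
        pa[a] += p
        pb[b] += p
    return entropy(pa.values()) + entropy(pb.values()) - entropy(joint.values())

# Toy system: a 'stimulus' S is the XOR of two binary 'signals' X and Y,
# with X and Y independent and uniform.
joint_xs, joint_ys, joint_xys = Counter(), Counter(), Counter()
for x, y in itertools.product([0, 1], repeat=2):
    s, p = x ^ y, 0.25
    joint_xs[(x, s)] += p
    joint_ys[(y, s)] += p
    joint_xys[((x, y), s)] += p

i_x = mutual_info(joint_xs)    # 0 bits: X alone says nothing about S
i_y = mutual_info(joint_ys)    # 0 bits: Y alone says nothing about S
i_xy = mutual_info(joint_xys)  # 1 bit: together, X and Y determine S
co_info = i_x + i_y - i_xy     # -1 bit: net synergy
```

Real neural signals mix both terms; partial information decompositions go further and separate synergy and redundancy into non-negative parts rather than a single signed balance.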


Acoustic Stimulation , Auditory Cortex , Callithrix , Electrocorticography , Animals , Auditory Cortex/physiology , Callithrix/physiology , Male , Female , Evoked Potentials/physiology , Frontal Lobe/physiology , Evoked Potentials, Auditory/physiology , Auditory Perception/physiology , Brain Mapping/methods
2.
Curr Biol ; 34(1): 213-223.e5, 2024 01 08.
Article En | MEDLINE | ID: mdl-38141619

Communicating emotional intensity plays a vital ecological role because it provides valuable information about the nature and likelihood of the sender's behavior.1,2,3 For example, attack often follows signals of intense aggression if receivers fail to retreat.4,5 Humans regularly use facial expressions to communicate such information.6,7,8,9,10,11 Yet how this complex signaling task is achieved remains unknown. We addressed this question using a perception-based, data-driven method to mathematically model the specific facial movements that receivers use to classify the six basic emotions ("happy," "surprise," "fear," "disgust," "anger," and "sad") and judge their intensity in two distinct cultures (East Asian, Western European; total n = 120). In both cultures, receivers expected facial expressions to dynamically represent emotion category and intensity information over time, using a multi-component compositional signaling structure. Specifically, emotion intensifiers peaked earlier or later than emotion classifiers and represented intensity using amplitude variations. Emotion intensifiers were also more similar across emotions than classifiers were, suggesting a latent broad-plus-specific signaling structure. Cross-cultural analysis further revealed similarities and differences in expectations that could impact cross-cultural communication. Specifically, East Asian and Western European receivers have similar expectations about which facial movements represent high intensity for threat-related emotions, such as "anger," "disgust," and "fear," but differ on those that represent low threat emotions, such as "happy" and "sad." Together, our results provide new insights into the intricate processes by which facial expressions can achieve complex dynamic signaling tasks by revealing the rich information embedded in facial expressions.


Emotions , Facial Expression , Humans , Anger , Fear , Happiness
3.
Curr Biol ; 33(24): 5505-5514.e6, 2023 12 18.
Article En | MEDLINE | ID: mdl-38065096

Prediction-for-perception theories suggest that the brain predicts incoming stimuli to facilitate their categorization.1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17 However, it remains unknown what the information contents of these predictions are, which hinders mechanistic explanations. This is because typical approaches cast predictions as an underconstrained contrast between two categories18,19,20,21,22,23,24 (e.g., faces versus cars), which could lead to predictions of features specific to faces or cars, or features from both categories. Here, to pinpoint the information contents of predictions and thus their mechanistic processing in the brain, we identified the features that enable two different categorical perceptions of the same stimuli. We then trained multivariate classifiers to discern, from dynamic MEG brain responses, the features tied to each perception. With an auditory cueing design, we reveal where, when, and how the brain reactivates visual category features (versus the typical category contrast) before the stimulus is shown. We demonstrate that the predictions of category features have a more direct influence (bias) on subsequent decision behavior in participants than the typical category contrast. Specifically, these predictions are more precisely localized in the brain (lateralized), are more specifically driven by the auditory cues, and their reactivation strength before a stimulus presentation exerts a greater bias on how the individual participant later categorizes this stimulus. By characterizing the specific information contents that the brain predicts and then processes, our findings provide new insights into the brain's mechanisms of prediction for perception.


Brain , Cues , Humans , Brain/physiology , Brain Mapping , Photic Stimulation
4.
J Neurosci ; 43(29): 5391-5405, 2023 07 19.
Article En | MEDLINE | ID: mdl-37369588

Models of visual cognition generally assume that brain networks predict the contents of a stimulus to facilitate its subsequent categorization. However, understanding prediction and categorization at a network level has remained challenging, partly because we need to reverse engineer their information processing mechanisms from the dynamic neural signals. Here, we used connectivity measures that can isolate the communications of a specific content to reconstruct these network mechanisms in each individual participant (N = 11, both sexes). Each was cued to the spatial location (left vs right) and contents [low spatial frequency (LSF) vs high spatial frequency (HSF)] of a predicted Gabor stimulus that they then categorized. Using each participant's concurrently measured MEG, we reconstructed networks that predict and categorize LSF versus HSF contents for behavior. We found that predicted contents flexibly propagate top down from temporal to lateralized occipital cortex, depending on task demands, under supervisory control of prefrontal cortex. When they reach lateralized occipital cortex, predictions enhance the bottom-up LSF versus HSF representations of the stimulus, all the way from occipital-ventral-parietal to premotor cortex, in turn producing faster categorization behavior. Importantly, content communications are subsets (i.e., 55-75%) of the signal-to-signal communications typically measured between brain regions. Hence, our study isolates functional networks that process the information of cognitive functions.
SIGNIFICANCE STATEMENT An enduring cognitive hypothesis states that our perception is influenced not only by bottom-up sensory input but also by top-down expectations. However, cognitive explanations of the dynamic brain network mechanisms that flexibly predict and categorize the visual input according to task demands remain elusive. We addressed them in a predictive experimental design by isolating the network communications of cognitive contents from all other communications. Our methods revealed a Prediction Network that flexibly communicates contents from temporal to lateralized occipital cortex, with explicit frontal control, and an occipital-ventral-parietal-frontal Categorization Network that represents the predicted contents from the shown stimulus more sharply, leading to faster behavior. Our framework and results therefore shed new light on the cognitive information processing carried out by dynamic brain activity.


Brain Mapping , Magnetic Resonance Imaging , Male , Female , Humans , Occipital Lobe , Brain , Cognition , Photic Stimulation , Visual Perception
5.
PLoS Biol ; 21(5): e3002120, 2023 05.
Article En | MEDLINE | ID: mdl-37155704

In the search for the neural basis of conscious experience, perception and the cognitive processes associated with reporting perception are typically confounded, as neural activity is recorded while participants explicitly report what they experience. Here, we present a novel way to disentangle perception from report using eye movement analysis techniques based on convolutional neural networks and neurodynamical analyses based on information theory. We use a bistable visual stimulus that instantiates two well-known properties of conscious perception: integration and differentiation. At any given moment, observers either perceive the stimulus as one integrated unitary object or as two differentiated objects that are clearly distinct from each other. Using electroencephalography, we show that measures of integration and differentiation based on information theory closely follow participants' perceptual experience of those contents when switches were reported. We observed increased information integration from anterior to posterior electrodes (front to back) prior to a switch to the integrated percept, and higher information differentiation of anterior signals leading up to reporting the differentiated percept. Crucially, information integration was closely linked to perception and even observed in a no-report condition when perceptual transitions were inferred from eye movements alone. In contrast, the link between neural differentiation and perception was observed solely in the active report condition. Our results, therefore, suggest that perception and the processes associated with report require distinct amounts of anterior-posterior network communication and anterior information differentiation. While front-to-back directed information is associated with changes in the content of perception when viewing bistable visual stimuli, regardless of report, frontal information differentiation was absent in the no-report condition and therefore is not directly linked to perception per se.


Brain , Electroencephalography , Humans , Feedback , Eye Movements , Perception , Visual Perception , Photic Stimulation
6.
J Vis ; 23(5): 14, 2023 05 02.
Article En | MEDLINE | ID: mdl-37200046

Human decision-making and self-reflection often depend on context and internal biases. For instance, decisions are often influenced by preceding choices, regardless of their relevance. It remains unclear how choice history influences different levels of the decision-making hierarchy. We used analyses grounded in information and detection theories to estimate the relative strength of perceptual and metacognitive history biases and to investigate whether they emerge from common or distinct mechanisms. Although both perception and metacognition tended to be biased toward previous responses, we observed novel dissociations that challenge normative theories of confidence. Different evidence levels often informed perceptual and metacognitive decisions within observers, and response history distinctly influenced first-order (perceptual) and second-order (metacognitive) decision parameters, with the metacognitive bias likely to be the strongest and most prevalent in the general population. We propose that recent choices and subjective confidence represent heuristics, which inform first- and second-order decisions in the absence of more relevant evidence.


Metacognition , Humans , Metacognition/physiology , Decision Making/physiology , Heuristics
7.
Trends Cogn Sci ; 26(8): 626-630, 2022 08.
Article En | MEDLINE | ID: mdl-35710894

Experimental studies in cognitive science typically focus on the population average effect. An alternative is to test each individual participant and then quantify the proportion of the population that would show the effect: the prevalence, or participant replication probability. We argue that this approach has conceptual and practical advantages.
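The prevalence idea above can be made concrete under a strong simplifying assumption. If each participant is tested at false-positive rate alpha, and every participant who truly has the effect is detected, the observed proportion of significant participants is theta = gamma + (1 - gamma) * alpha, which can be inverted for the prevalence gamma. A toy estimator along these lines (a simplification for illustration, not the full proposal in the article):

```python
def estimate_prevalence(k, n, alpha=0.05):
    """Prevalence point estimate from k of n participants significant at
    false-positive rate alpha, assuming perfect within-participant
    sensitivity: theta = gamma + (1 - gamma) * alpha, solved for gamma."""
    theta = k / n
    gamma = (theta - alpha) / (1 - alpha)
    return max(0.0, min(1.0, gamma))   # clip to the valid [0, 1] range

# 10 of 20 participants significant at alpha = .05 implies that roughly
# 47% of the population would show the effect, not 50%.
est = estimate_prevalence(10, 20)
```

Note how the correction matters most near the extremes: 1 of 20 significant participants is exactly what the false-positive rate alone predicts, so the estimated prevalence is zero.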


Cognitive Science , Humans , Probability
8.
Neuroimage ; 258: 119347, 2022 09.
Article En | MEDLINE | ID: mdl-35660460

The reproducibility crisis in neuroimaging, in particular in the case of underpowered studies, has introduced doubts about our ability to reproduce, replicate, and generalize findings. In response, we have seen the emergence of suggested guidelines and principles for neuroscientists, known as Good Scientific Practice, for conducting more reliable research. Still, every study remains almost unique in its combination of analytical and statistical approaches. While this is understandable considering the diversity of designs and brain data recordings, it also represents a striking obstacle to reproducibility. Here, we propose a non-parametric permutation-based statistical framework, primarily designed for neurophysiological data, to perform group-level inferences on non-negative measures of information, encompassing metrics from information theory, machine learning, or measures of distance. The framework supports both fixed- and random-effect models to adapt to inter-individual and inter-session variability. Using numerical simulations, we compared the accuracy of both group models in retrieving the ground truth, using both test- and cluster-wise corrections for multiple comparisons. We then reproduced and extended existing results using both spatially uniform MEG and non-uniform intracranial neurophysiological data. We showed how the framework can be used to extract stereotypical task- and behavior-related effects across the population, covering scales from the local level of brain regions and inter-areal functional connectivity to measures summarizing network properties. We also present an open-source Python toolbox called Frites that implements the proposed statistical pipeline, using information-theoretic metrics such as single-trial functional connectivity estimations for the extraction of cognitive brain networks. Taken together, we believe that this framework deserves careful attention, as its robustness and flexibility could be the starting point toward unifying statistical approaches.
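At its core, the permutation logic is simple: destroy the tested relationship by shuffling one variable, recompute the non-negative statistic, and locate the observed value in the resulting null distribution. A single-level sketch (the paper's framework adds fixed/random-effect group models and test-/cluster-wise corrections, which are not reproduced here), with |Pearson r| standing in for an information metric:

```python
import random

def abs_corr(x, y):
    """Absolute Pearson correlation: a simple non-negative dependence measure."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return abs(cov / (sx * sy))

def permutation_pvalue(x, y, stat_fn, n_perm=500, seed=0):
    """One-sided permutation p-value: shuffling y breaks any x-y relation
    while preserving both marginal distributions."""
    rng = random.Random(seed)
    observed = stat_fn(x, y)
    y_perm, exceed = list(y), 0
    for _ in range(n_perm):
        rng.shuffle(y_perm)
        exceed += stat_fn(x, y_perm) >= observed
    return observed, (exceed + 1) / (n_perm + 1)   # add-one: p is never 0

rng = random.Random(42)
xs = [rng.gauss(0, 1) for _ in range(100)]
ys = [2 * v + rng.gauss(0, 0.5) for v in xs]   # genuinely dependent
zs = [rng.gauss(0, 1) for _ in range(100)]     # independent
_, p_dep = permutation_pvalue(xs, ys, abs_corr)
_, p_ind = permutation_pvalue(xs, zs, abs_corr)
```

The same recipe works unchanged for any non-negative dependence measure, which is what makes it attractive for mutual-information-style metrics whose null distributions have no convenient closed form.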


Brain Mapping , Brain , Brain/physiology , Brain Mapping/methods , Cognition , Humans , Neuroimaging/methods , Reproducibility of Results
9.
Elife ; 112022 02 17.
Article En | MEDLINE | ID: mdl-35174783

A key challenge in neuroimaging remains to understand where, when, and now particularly how human brain networks compute over sensory inputs to achieve behavior. To study such dynamic algorithms from mass neural signals, we recorded the magnetoencephalographic (MEG) activity of participants who resolved the classic XOR, OR, and AND functions as overt behavioral tasks (N = 10 participants/task, N-of-1 replications). Each function requires a different computation over the same inputs to produce the task-specific behavioral outputs. In each task, we found that source-localized MEG activity progresses through four computational stages identified within individual participants: (1) initial contralateral representation of each visual input in occipital cortex, (2) a joint linearly combined representation of both inputs in midline occipital cortex and right fusiform gyrus, followed by (3) nonlinear task-dependent input integration in temporal-parietal cortex, and finally (4) behavioral response representation in postcentral gyrus. We demonstrate the specific dynamics of each computation at the level of individual sources. The spatiotemporal patterns of the first two computations are similar across the three tasks; the last two computations are task specific. Our results therefore reveal where, when, and how dynamic network algorithms perform different computations over the same inputs to produce different behaviors.
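The computational distinction between these tasks can be seen in miniature: OR and AND are linearly separable functions of the two inputs, whereas XOR requires a nonlinear integration, which is why only the later processing stages differ across tasks. A minimal perceptron check (an illustration of the task logic only, not the MEG analysis):

```python
def perceptron_learns(truth_table, epochs=100):
    """Train a linear threshold unit on a Boolean function; return True
    iff it ends up classifying all four input patterns correctly."""
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for (x0, x1), target in truth_table.items():
            pred = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - pred          # classic perceptron learning rule
            w0 += err * x0
            w1 += err * x1
            b += err
    return all((1 if w0 * x0 + w1 * x1 + b > 0 else 0) == t
               for (x0, x1), t in truth_table.items())

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
AND = {i: int(i[0] and i[1]) for i in inputs}
OR = {i: int(i[0] or i[1]) for i in inputs}
XOR = {i: i[0] ^ i[1] for i in inputs}
```

A single linear unit converges on AND and OR but can never fit XOR, mirroring the study's finding that the linearly combined representation (stage 2) suffices for some tasks while XOR demands the nonlinear integration of stage 3.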


Brain Mapping/methods , Brain/physiology , Magnetoencephalography/methods , Nerve Net/physiology , Neuroimaging/methods , Visual Perception/physiology , Female , Humans , Male , Photic Stimulation , Temporal Lobe/physiology
10.
Cognition ; 224: 105051, 2022 07.
Article En | MEDLINE | ID: mdl-35219954

This study investigates the dynamics of speech envelope tracking during speech production, listening, and self-listening. We use a paradigm in which participants listen to natural speech (Listening), produce natural speech (Speech Production), and listen to the playback of their own speech (Self-Listening), all while their neural activity is recorded with EEG. After time-locking the EEG data collection to the auditory recording and playback, we used a Gaussian copula mutual information measure to estimate the relationship between information content in the EEG and auditory signals. In the 2-10 Hz frequency range, we identified different latencies for maximal speech envelope tracking during speech production and speech perception. Maximal speech tracking takes place approximately 110 ms after auditory presentation during perception and 25 ms before vocalisation during speech production. These results describe a specific timeline for speech tracking in speakers and listeners, in line with the idea of a speech chain and, hence, with delays in communication.
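In the bivariate case, the Gaussian copula mutual information (GCMI) estimator has a compact form: rank-transform each signal to standard-normal quantiles (the copula step), then apply the parametric Gaussian formula -1/2 log2(1 - r^2) to their correlation r. A stdlib-only sketch (the published estimator includes a small-sample bias correction omitted here; variable names are ours):

```python
import math
import random
from statistics import NormalDist

def copula_normalise(x):
    """Map a sample (assumed tie-free) onto standard-normal quantiles."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    u = [0.0] * n
    for rank, i in enumerate(order):
        u[i] = (rank + 1) / (n + 1)        # empirical CDF, kept inside (0, 1)
    nd = NormalDist()
    return [nd.inv_cdf(v) for v in u]

def gcmi(x, y):
    """Gaussian-copula mutual information estimate, in bits."""
    gx, gy = copula_normalise(x), copula_normalise(y)
    n = len(gx)
    mx, my = sum(gx) / n, sum(gy) / n
    sx = sum((v - mx) ** 2 for v in gx) ** 0.5
    sy = sum((v - my) ** 2 for v in gy) ** 0.5
    r = sum((a - mx) * (b - my) for a, b in zip(gx, gy)) / (sx * sy)
    return -0.5 * math.log2(1 - r ** 2)

# Synthetic demo: one signal tracks the other, one is unrelated.
rng = random.Random(7)
eeg = [rng.gauss(0, 1) for _ in range(400)]
envelope = [v + rng.gauss(0, 0.5) for v in eeg]
unrelated = [rng.gauss(0, 1) for _ in range(400)]
```

In the paper's setting, the two arguments would be an EEG channel and the lagged speech envelope, with the lag scanned to find the latency of maximal tracking.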


Speech Perception , Speech , Auditory Perception , Brain , Electroencephalography , Humans
11.
J Neurosci ; 42(11): 2344-2355, 2022 03 16.
Article En | MEDLINE | ID: mdl-35091504

Most perceptual decisions rely on the active acquisition of evidence from the environment involving stimulation from multiple senses. However, our understanding of the neural mechanisms underlying this process is limited. Crucially, it remains elusive how different sensory representations interact in the formation of perceptual decisions. To answer these questions, we used an active sensing paradigm coupled with neuroimaging, multivariate analysis, and computational modeling to probe how the human brain processes multisensory information to make perceptual judgments. Participants of both sexes actively sensed to discriminate two texture stimuli using visual (V) or haptic (H) information or the two sensory cues together (VH). Crucially, information acquisition was under the participants' control, who could choose where to sample information from and for how long on each trial. To understand the neural underpinnings of this process, we first characterized where and when active sensory experience (movement patterns) is encoded in human brain activity (EEG) in the three sensory conditions. Then, to offer a neurocomputational account of active multisensory decision formation, we used these neural representations of active sensing to inform a drift diffusion model of decision-making behavior. This revealed a multisensory enhancement of the neural representation of active sensing, which led to faster and more accurate multisensory decisions. We then dissected the interactions between the V, H, and VH representations using a novel information-theoretic methodology. Ultimately, we identified a synergistic neural interaction between the two unisensory (V, H) representations over contralateral somatosensory and motor locations that predicted multisensory (VH) decision-making performance.
SIGNIFICANCE STATEMENT In real-world settings, perceptual decisions are made during active behaviors, such as crossing the road on a rainy night, and include information from different senses (e.g., car lights, slippery ground). Critically, it remains largely unknown how sensory evidence is combined and translated into perceptual decisions in such active scenarios. Here we address this knowledge gap. First, we show that the simultaneous exploration of information across senses (multi-sensing) enhances the neural encoding of active sensing movements. Second, the neural representation of active sensing modulates the evidence available for decision; and importantly, multi-sensing yields faster evidence accumulation. Finally, we identify a cross-modal interaction in the human brain that correlates with multisensory performance, constituting a putative neural mechanism for forging active multisensory perception.


Decision Making , Electroencephalography , Brain/physiology , Decision Making/physiology , Electroencephalography/methods , Female , Humans , Male , Photic Stimulation , Visual Perception/physiology
13.
Eur Arch Psychiatry Clin Neurosci ; 272(3): 437-448, 2022 Apr.
Article En | MEDLINE | ID: mdl-34401957

Schizophrenia is characterised by cognitive impairments that are already present during early stages, including in the clinical high-risk for psychosis (CHR-P) state and first-episode psychosis (FEP). Moreover, data suggest the presence of distinct cognitive subtypes during early-stage psychosis, with evidence for spared vs. impaired cognitive profiles that may be differentially associated with symptomatic and functional outcomes. Using cluster analysis, we sought to determine whether cognitive subgroups were associated with clinical and functional outcomes in CHR-P individuals. Data were available for 146 CHR-P participants, of whom 122 completed a 6- and/or 12-month follow-up; 15 FEP participants; 47 participants not fulfilling CHR-P criteria (CHR-Ns); and 53 healthy controls (HCs). We performed hierarchical cluster analysis on principal components derived from neurocognitive and social cognitive measures. Within the CHR-P group, clusters were compared on clinical and functional variables and examined for associations with global functioning, persistent attenuated psychotic symptoms, and transition to psychosis. Two discrete cognitive subgroups emerged across all participants: 45.9% of CHR-P individuals were cognitively impaired compared to 93.3% of FEP, 29.8% of CHR-N, and 30.2% of HC participants. Cognitively impaired CHR-P participants also had significantly poorer functioning at baseline and follow-up than their cognitively spared counterparts. Specifically, cluster membership predicted functional but not clinical outcome. Our findings support the existence of distinct cognitive subgroups in CHR-P individuals that are associated with functional outcomes, with implications for early intervention and the understanding of underlying developmental processes.


Cognitive Dysfunction , Psychotic Disorders , Schizophrenia , Cluster Analysis , Cognition , Cognitive Dysfunction/etiology , Humans , Schizophrenia/complications , Schizophrenia/diagnosis
14.
Curr Biol ; 32(1): 200-209.e6, 2022 01 10.
Article En | MEDLINE | ID: mdl-34767768

Human facial expressions are complex, multi-component signals that can communicate rich information about emotions,1-5 including specific categories, such as "anger," and broader dimensions, such as "negative valence, high arousal."6-8 An enduring question is how this complex signaling is achieved. Communication theory predicts that multi-component signals could transmit each type of emotion information (i.e., specific categories and broader dimensions) via the same or different facial signal components, with implications for elucidating the system and ontology of facial expression communication.9 We addressed this question using a communication-systems-based method that agnostically generates facial expressions and uses the receiver's perceptions to model the specific facial signal components that represent emotion category and dimensional information to them.10-12 First, we derived the facial expressions that elicit the perception of emotion categories (i.e., the six classic emotions13 plus 19 complex emotions3) and dimensions (i.e., valence and arousal) separately, in 60 individual participants. Comparison of these facial signals showed that they share subsets of components, suggesting that specific latent signals jointly represent (i.e., multiplex) categorical and dimensional information. Further examination revealed these specific latent signals and the joint information they represent. Our results (based on white Western participants, same-ethnicity face stimuli, and commonly used English emotion terms) show that facial expressions can jointly represent specific emotion categories and broad dimensions to perceivers via multiplexed facial signal components. Our results provide insights into the ontology and system of facial expression communication and a new information-theoretic framework that can characterize its complexities.


Emotions , Facial Expression , Anger , Arousal , Face , Humans
15.
Neuroimage ; 247: 118841, 2022 02 15.
Article En | MEDLINE | ID: mdl-34952232

When exposed to complementary features of information across sensory modalities, our brains formulate cross-modal associations between features of stimuli presented separately to multiple modalities. For example, auditory pitch-visual size associations map high-pitch tones with small visual objects and low-pitch tones with large visual objects. Preferential, or congruent, cross-modal associations have been shown to affect behavioural performance, i.e. choice accuracy and reaction time (RT), across multisensory decision-making paradigms. However, the neural mechanisms underpinning such influences on perceptual decision formation remain unclear. Here, we sought to identify when perceptual improvements from associative congruency emerge in the brain during decision formation. In particular, we asked whether such improvements represent 'early' sensory processing benefits or 'late' post-sensory changes in decision dynamics. Using a modified version of the Implicit Association Test (IAT), coupled with electroencephalography (EEG), we measured the neural activity underlying the effect of auditory stimulus-driven pitch-size associations on perceptual decision formation. Behavioural results showed that participants responded significantly faster during trials when auditory pitch was congruent, rather than incongruent, with its associative visual size counterpart. We used multivariate Linear Discriminant Analysis (LDA) to characterise the spatiotemporal dynamics of EEG activity underpinning IAT performance. We found an 'Early' component (∼100-110 ms post-stimulus onset, coinciding with the time of maximal discrimination of the auditory stimuli) and a 'Late' component (∼330-340 ms post-stimulus onset) underlying IAT performance. To characterise the functional role of these components in decision formation, we incorporated them into a neurally-informed Hierarchical Drift Diffusion Model (HDDM). This revealed that the Late component decreased response caution, requiring less sensory evidence to be accumulated, whereas the Early component increased the duration of sensory-encoding processes for incongruent trials. Overall, our results provide mechanistic insight into the contribution of 'early' sensory processing and 'late' post-sensory neural representations of associative congruency to perceptual decision formation.
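'Response caution' here maps onto the boundary separation of the drift-diffusion model: a lower boundary yields faster but less accurate decisions. A toy first-passage simulation illustrates the trade-off (parameter values are illustrative, not the fitted HDDM values from the study):

```python
import math
import random

def simulate_ddm(drift, boundary, non_decision=0.3, n_trials=500,
                 dt=0.005, seed=1):
    """Simulate evidence x drifting toward +boundary (correct) or
    -boundary (error); return (accuracy, mean RT in seconds)."""
    rng = random.Random(seed)
    n_correct, total_rt = 0, 0.0
    sqrt_dt = math.sqrt(dt)
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + sqrt_dt * rng.gauss(0, 1)  # unit diffusion noise
            t += dt
        n_correct += x >= boundary
        total_rt += t + non_decision
    return n_correct / n_trials, total_rt / n_trials

acc_cautious, rt_cautious = simulate_ddm(drift=1.0, boundary=1.0)
acc_hasty, rt_hasty = simulate_ddm(drift=1.0, boundary=0.5)  # lower caution
```

Lowering the boundary shortens mean RT at the cost of accuracy, which is the sense in which a component that "decreases response caution" speeds congruent-trial responses.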


Decision Making/physiology , Electroencephalography , Acoustic Stimulation , Adult , Discriminant Analysis , Female , Healthy Volunteers , Humans , Male , Photic Stimulation , Reaction Time/physiology
16.
Npj Ment Health Res ; 1(1): 10, 2022 Aug 30.
Article En | MEDLINE | ID: mdl-38609460

Human behaviours are guided by how confident we feel in our abilities. When confidence does not reflect objective performance, this can impact critical adaptive functions and impair life quality. Distorted decision-making and confidence have been associated with mental health problems. Here, utilising advances in computational and transdiagnostic psychiatry, we sought to map relationships between psychopathology and both decision-making and confidence in the general population across two online studies (Ns = 344 and 473, respectively). The results revealed dissociable decision-making and confidence signatures related to distinct symptom dimensions. A dimension characterised by compulsivity and intrusive thoughts was found to be associated with reduced objective accuracy but, paradoxically, increased absolute confidence, whereas a dimension characterised by anxiety and depression was associated with systematically low confidence in the absence of impairments in objective accuracy. These relationships replicated across both studies and distinct cognitive domains (perception and general knowledge), suggesting that they are reliable and domain general. Additionally, whereas Big-5 personality traits also predicted objective task performance, only symptom dimensions related to subjective confidence. Domain-general signatures of decision-making and metacognition characterise distinct psychological dispositions and psychopathology in the general population and implicate confidence as a central component of mental health.

17.
Patterns (N Y) ; 2(10): 100348, 2021 Oct 08.
Article En | MEDLINE | ID: mdl-34693374

Deep neural networks (DNNs) can resolve real-world categorization tasks with apparent human-level performance. However, true equivalence of behavioral performance between humans and their DNN models requires that their internal mechanisms process equivalent features of the stimulus. To develop such feature equivalence, our methodology leveraged an interpretable and experimentally controlled generative model of the stimuli (realistic three-dimensional textured faces). Humans rated the similarity of randomly generated faces to four familiar identities. We predicted these similarity ratings from the activations of five DNNs trained with different optimization objectives. Using information-theoretic redundancy, reverse correlation, and the testing of generalization gradients, we show that DNN predictions of human behavior improve because their shape and texture features overlap with those that subsume human behavior. Thus, we must equate the functional features that subsume the behavioral performances of the brain and its models before comparing where, when, and how these features are processed.

18.
Curr Biol ; 31(10): 2243-2252.e6, 2021 05 24.
Article En | MEDLINE | ID: mdl-33798430

Facial attractiveness confers considerable advantages in social interactions,1,2 with preferences likely reflecting psychobiological mechanisms shaped by natural selection. Theories of universal beauty propose that attractive faces comprise features that are closer to the population average3 while optimizing sexual dimorphism.4 However, emerging evidence questions this model as an accurate representation of facial attractiveness,5-7 including representing the diversity of beauty preferences within and across cultures.8-12 Here, we demonstrate that Western Europeans (WEs) and East Asians (EAs) evaluate facial beauty using culture-specific features, contradicting theories of universality. With a data-driven method, we modeled, at both the individual and group levels, the attractive face features of young females (25 years old) in two matched groups each of 40 young male WE and EA participants. Specifically, we generated a broad range of same- and other-ethnicity female faces with naturally varying shapes and complexions. Participants rated each on attractiveness. We then reverse correlated the face features that drive perception of attractiveness in each participant. From these individual face models, we reconstructed a facial attractiveness representation space that explains preference variations. We show that facial attractiveness is distinct both from averageness and from sexual dimorphism in both cultures. Finally, we disentangled attractive face features into those shared across cultures, culture specific, and specific to individual participants, thereby revealing their diversity. Our results have direct theoretical and methodological impact for representing diversity in social perception and for the design of culturally and ethnically sensitive socially interactive digital agents.


Beauty , Culture , Face , Adult , Asian People , Female , Humans , Male , Sex Characteristics , White People
19.
Sleep ; 44(5)2021 05 14.
Article En | MEDLINE | ID: mdl-33220055

Functional connectivity (FC) metrics describe brain inter-regional interactions and may complement information provided by common power-based analyses. Here, we investigated whether the FC-metrics weighted Phase Lag Index (wPLI) and weighted Symbolic Mutual Information (wSMI) may unveil functional differences across four stages of vigilance-wakefulness (W), NREM-N2, NREM-N3, and REM sleep-with respect to each other and to power-based features. Moreover, we explored their possible contribution in identifying differences between stages characterized by distinct levels of consciousness (REM+W vs. N2+N3) or sensory disconnection (REM vs. W). Overnight sleep and resting-state wakefulness recordings from 24 healthy participants (27 ± 6 years, 13F) were analyzed to extract power and FC-based features in six classical frequency bands. Cross-validated linear discriminant analyses (LDA) were applied to investigate the ability of extracted features to discriminate (1) the four vigilance stages, (2) W+REM vs. N2+N3, and (3) W vs. REM. For the four-way vigilance stages classification, combining features based on power and both connectivity metrics significantly increased accuracy relative to considering only power, wPLI, or wSMI features. Delta-power and connectivity (0.5-4 Hz) represented the most relevant features for all the tested classifications, in line with a possible involvement of slow waves in consciousness and sensory disconnection. Sigma-FC, but not sigma-power (12-16 Hz), was found to strongly contribute to the differentiation between states characterized by higher (W+REM) and lower (N2+N3) probabilities of conscious experiences. Finally, alpha-FC emerged as the most relevant FC-feature for distinguishing between wakefulness and REM sleep and may thus reflect the level of disconnection from the external environment.
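The LDA classification step can be sketched with a minimal pure-Python Fisher discriminant on two hypothetical features. The feature values, class means, and stage labels below are invented for illustration; the study used cross-validated LDA over many power- and FC-based features per frequency band.

```python
import random
from statistics import mean

random.seed(0)

# Hypothetical 2-feature epochs: [delta power, sigma connectivity].
# N3 epochs carry high delta power; wake epochs carry low delta power.
def epoch(mu):
    return [random.gauss(m, 0.5) for m in mu]

n3   = [epoch([3.0, 1.0]) for _ in range(200)]
wake = [epoch([1.0, 1.2]) for _ in range(200)]

def mean_vec(xs):
    return [mean(col) for col in zip(*xs)]

def pooled_cov(a, b, ma, mb):
    # Pooled within-class covariance (2x2), as used by Fisher's LDA.
    n = len(a) + len(b)
    c = [[0.0, 0.0], [0.0, 0.0]]
    for xs, m in ((a, ma), (b, mb)):
        for x in xs:
            d = [x[0] - m[0], x[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    c[i][j] += d[i] * d[j] / n
    return c

ma, mb = mean_vec(n3), mean_vec(wake)
S = pooled_cov(n3, wake, ma, mb)

# Invert the 2x2 covariance and project: w = S^-1 (ma - mb).
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
inv = [[S[1][1] / det, -S[0][1] / det], [-S[1][0] / det, S[0][0] / det]]
diff = [ma[0] - mb[0], ma[1] - mb[1]]
w = [inv[0][0] * diff[0] + inv[0][1] * diff[1],
     inv[1][0] * diff[0] + inv[1][1] * diff[1]]

# Classify by projecting each epoch onto w and thresholding at the
# midpoint between the projected class means.
thr = sum(w[i] * (ma[i] + mb[i]) / 2 for i in range(2))
correct = sum(sum(w[i] * x[i] for i in range(2)) > thr for x in n3)
correct += sum(sum(w[i] * x[i] for i in range(2)) <= thr for x in wake)
accuracy = correct / 400
print(accuracy)
```

Because delta power separates the two classes far more than sigma connectivity does in this toy setup, the learned discriminant weights delta heavily, mirroring the abstract's finding that delta-band features dominate the stage classifications.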


Electroencephalography , Wakefulness , Benchmarking , Consciousness , Humans , Sleep , Sleep Stages
20.
PLoS Comput Biol ; 16(10): e1008302, 2020 10.
Article En | MEDLINE | ID: mdl-33119593

Despite being the focus of a thriving field of research, the biological mechanisms that underlie information integration in the brain are not yet fully understood. A theory that has gained a lot of traction in recent years suggests that multi-scale integration is regulated by a hierarchy of mutually interacting neural oscillations. In particular, there is accumulating evidence that phase-amplitude coupling (PAC), a specific form of cross-frequency interaction, plays a key role in numerous cognitive processes. Current research in the field is not only hampered by the absence of a gold standard for PAC analysis, but also by the computational costs of running exhaustive computations on large and high-dimensional electrophysiological brain signals. In addition, various signal properties and analysis parameters can lead to spurious PAC. Here, we present Tensorpac, an open-source Python toolbox dedicated to PAC analysis of neurophysiological data. The advantages of Tensorpac include (1) higher computational efficiency thanks to software design that combines tensor computations and parallel computing, (2) the implementation of the most widely used PAC methods in one package, (3) the statistical analysis of PAC measures, and (4) extended PAC visualization capabilities. Tensorpac is distributed under a BSD-3-Clause license and can be launched on any operating system (Linux, OSX and Windows). It can be installed directly via pip or downloaded from Github (https://github.com/EtienneCmb/tensorpac). By making Tensorpac available, we aim to enhance the reproducibility and quality of PAC research, and provide open tools that will accelerate future method development in neuroscience.
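To make the PAC concept concrete, here is a minimal stdlib-only sketch of one classic PAC measure, the mean vector length, on a synthetic signal with known coupling. This is not Tensorpac's API: the band-pass filtering and Hilbert-based extraction that a real pipeline performs are skipped by constructing the slow phase and the coupled amplitude envelope directly.

```python
import cmath
import math

# Synthetic coupling: the amplitude envelope of a fast (gamma-like)
# oscillation is modulated by the phase of a slow (theta-like) one.
sf = 500.0                        # sampling rate (Hz)
t = [n / sf for n in range(5000)] # 10 s of samples
f_slow = 6.0                      # slow-oscillation frequency (Hz)

phase = [2 * math.pi * f_slow * ti for ti in t]   # slow-band phase
amp   = [1.0 + 0.8 * math.cos(p) for p in phase]  # coupled envelope

# Mean vector length: project the amplitude envelope onto the unit
# circle at the slow phase and take the magnitude of the average.
mvl = abs(sum(a * cmath.exp(1j * p) for a, p in zip(amp, phase)) / len(t))

# Uncoupled control: a constant envelope yields near-zero PAC, which is
# why surrogate statistics are needed to rule out spurious coupling.
mvl_null = abs(sum(cmath.exp(1j * p) for p in phase) / len(t))

print(mvl, mvl_null)
```

With a modulation depth of 0.8, the coupled case converges to a mean vector length of about 0.4 over whole cycles, while the control stays near zero; Tensorpac wraps this family of measures in tensor form so they can be computed over many electrode-frequency pairs at once.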


Brain/physiology , Computational Biology/methods , Electrophysiological Phenomena/physiology , Software , Humans , Signal Processing, Computer-Assisted
...