1.
Behav Res Methods; 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38918315

ABSTRACT

EMOKINE is a software package and dataset creation suite for emotional full-body movement research in experimental psychology, affective neuroscience, and computer vision. A computational framework, comprehensive instructions, a pilot dataset, observer ratings, and kinematic feature extraction code are provided to facilitate future dataset creation at scale. In addition, the EMOKINE framework outlines how complex sequences of movements may advance emotion research. Such research has traditionally used emotional 'action'-based stimuli, such as hand-waving or walking motions. Here, instead, a pilot dataset is provided with short dance choreographies, repeated several times by a dancer who expressed a different emotional intention at each repetition: anger, contentment, fear, joy, neutrality, and sadness. The dataset was filmed professionally and simultaneously recorded using XSENS® motion capture technology (17 sensors, 240 frames/second). Thirty-two statistics from 12 kinematic features were extracted offline, for the first time in a single dataset: speed, acceleration, angular speed, angular acceleration, limb contraction, distance to center of mass, quantity of motion, dimensionless jerk (integral), head angle (with regard to the vertical axis and to the back), and space (convex hull 2D and 3D). Average, median absolute deviation (MAD), and maximum value were computed as applicable. The EMOKINE software is applicable to other motion-capture systems and is openly available on the Zenodo Repository. Releases on GitHub include: (i) the code to extract the 32 statistics, (ii) a rigging plugin for Python for MVNX file conversion to Blender format (MVNX = output file of the XSENS® system), and (iii) Python-script-powered custom software to assist with blurring faces; the latter two are under GPLv3 licenses.
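As a concrete illustration of the kind of feature extraction described above, the sketch below computes one kinematic feature (speed) and its three summary statistics from a joint's 3D position trace. It is a minimal sketch under assumed data shapes and function names of my own choosing, not the EMOKINE code itself.

```python
# Minimal sketch (not the EMOKINE code): one kinematic feature (speed)
# and its summary statistics from a joint's 3D position trace.
import numpy as np

def speed_stats(positions, fps=240):
    """positions: (n_frames, 3) array of x, y, z coordinates in meters.
    Returns average, median absolute deviation (MAD), and maximum speed."""
    velocity = np.diff(positions, axis=0) * fps   # m/frame -> m/s
    speed = np.linalg.norm(velocity, axis=1)
    mad = np.median(np.abs(speed - np.median(speed)))
    return speed.mean(), mad, speed.max()

# Usage with placeholder motion data sampled at the XSENS rate (240 fps):
rng = np.random.default_rng(0)
positions = np.cumsum(rng.normal(0.0, 1e-3, size=(2400, 3)), axis=0)
avg_speed, mad_speed, max_speed = speed_stats(positions)
```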

2.
Heliyon; 9(10): e20725, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37876480

ABSTRACT

The neural mechanisms that unfold when humans form a large group defined by an overarching context, such as audiences in theater or sports, are largely unknown and unexplored. This is mainly due to the lack of a scalable system that can record brain activity from a significantly large portion of such an audience simultaneously. Although the technology for such a system has long been available, its high cost, together with the large overhead in human resources and logistic planning, has prohibited its development. In recent years, however, reductions in technology cost and size have led to the emergence of low-cost, consumer-oriented EEG systems developed primarily for recreational use. Here, by combining such a low-cost EEG system with other off-the-shelf hardware and tailor-made software, we developed such a scalable EEG hyper-scanning system in the lab and tested it in a cinema. The system performs robustly and stably and achieves accurate, unambiguous alignment of the data recorded by the different EEG headsets. These characteristics, combined with short preparation time and low cost, make it an ideal candidate for recording large portions of audiences.
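One common way to achieve the kind of cross-headset alignment mentioned above is to cross-correlate each recording against a shared synchronization signal. The sketch below shows that generic approach; it is an assumption for illustration (including the function name), not the authors' software.

```python
# Hedged sketch: estimate each headset's lag against a shared sync
# signal via cross-correlation (a generic alignment approach, not the
# authors' implementation).
import numpy as np
from scipy.signal import correlate, correlation_lags

def estimate_lag(reference, recording):
    """Return the sample lag that best aligns `recording` to `reference`."""
    xcorr = correlate(recording, reference, mode="full")
    lags = correlation_lags(len(recording), len(reference), mode="full")
    return lags[np.argmax(xcorr)]
```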

3.
Neurobiol Lang (Camb); 4(1): 120-144, 2023.
Article in English | MEDLINE | ID: mdl-37229144

ABSTRACT

Speech comprehension requires the ability to temporally segment the acoustic input for higher-level linguistic analysis. Oscillation-based approaches suggest that low-frequency auditory cortex oscillations track syllable-sized acoustic information and therefore emphasize the relevance of syllabic-level acoustic processing for speech segmentation. How syllabic processing interacts with higher levels of speech processing, beyond segmentation, including the anatomical and neurophysiological characteristics of the networks involved, is debated. In two MEG experiments, we investigated lexical and sublexical word-level processing and its interactions with (acoustic) syllable processing using a frequency-tagging paradigm. Participants listened to disyllabic words presented at a rate of 4 syllables/s, conveying lexical content (native language), sublexical syllable-to-syllable transitions (foreign language), or mere syllabic information (pseudo-words). Two conjectures were evaluated: (i) syllable-to-syllable transitions contribute to word-level processing; and (ii) processing of words activates brain areas that interact with acoustic syllable processing. We show that syllable-to-syllable transition information, compared to mere syllable information, activated a bilateral superior and middle temporal and inferior frontal network. Lexical content additionally resulted in increased neural activity. Evidence for an interaction of word-level and acoustic syllable-level processing was inconclusive. Decreases in syllable tracking (cerebroacoustic coherence) in auditory cortex and increases in cross-frequency coupling between right superior and middle temporal and frontal areas were found when lexical content was present compared to all other conditions, but not when conditions were compared separately. The data provide experimental insight into how subtle and sensitive syllable-to-syllable transition information is for word-level processing.
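In a frequency-tagging design like the one above, word-level grouping of the 4/s syllable stream should appear as spectral power at the 2 Hz word rate in addition to the 4 Hz syllable rate. The sketch below shows that generic readout under assumed data shapes; it is my illustration, not the authors' analysis pipeline.

```python
# Minimal sketch of a frequency-tagging readout: with disyllabic words
# at 4 syllables/s, syllabic tracking is tagged at 4 Hz and word-level
# tracking at 2 Hz.
import numpy as np

def tagged_power(signal, sfreq, freqs=(2.0, 4.0)):
    """signal: 1-D sensor time course. Returns power at the tagged rates."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    fft_freqs = np.fft.rfftfreq(len(signal), d=1.0 / sfreq)
    return {f: spectrum[np.argmin(np.abs(fft_freqs - f))] for f in freqs}
```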

4.
Neuropsychologia; 181: 108491, 2023 Mar 12.
Article in English | MEDLINE | ID: mdl-36707026

ABSTRACT

Grapheme-colour synaesthetes experience an anomalous form of perception in which graphemes systematically induce specific colour concurrents in their mind's eye ("associator" type). Although grapheme-colour synaesthesia has been well characterised behaviourally, its neural mechanisms remain largely unresolved. There are currently several competing models, which can primarily be distinguished according to their anatomical and temporal predictions about synaesthesia-inducing neural activity. The first main model (Cross-Activation/Cascaded Cross-Tuning and its variants) posits early recruitment of occipital colour areas in the initial feed-forward sweep of brain activity. The second (Disinhibited Feedback) posits: (i) later involvement of a multisensory convergence zone (for example, in parietal cortices) after graphemes have been processed in their entirety; and (ii) subsequent feedback to early visual areas (i.e., occipital colour areas). In this study, we examine both the timing and the anatomical correlates of synaesthesia-inducing activity in associator grapheme-colour synaesthetes (n = 6) using MEG. Using innovative and unbiased analysis methods with few a priori assumptions, we applied Independent Component Analysis (ICA) at the single-subject level to identify the dominant patterns of activity corresponding to the induced, synaesthetic percept. We observed evoked activity that significantly dissociates between synaesthesia-inducing and non-inducing graphemes at approximately 190 ms following grapheme presentation. This effect is present in grapheme-colour synaesthetes, but not in matched controls, and exhibits an occipito-parietal topography localised consistently within individuals to extrastriate visual cortices and superior parietal lobes. Due to the observed timing of this evoked activity and its localisation, our results support a model predicting relatively late synaesthesia-inducing activity, more akin to the Disinhibited Feedback model.
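A generic sketch of the single-subject ICA step described above is given below, using FastICA on sensor-by-time data. The array shapes, component count, and ICA variant are assumptions for illustration, not the study's exact pipeline.

```python
# Hedged sketch: single-subject ICA decomposition of sensor-level data,
# one generic way to isolate dominant activity patterns (placeholder
# data; the study's preprocessing and ICA variant are not reproduced).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
data = rng.standard_normal((248, 600))       # placeholder (sensors x times)
ica = FastICA(n_components=20, random_state=0, max_iter=1000)
sources = ica.fit_transform(data.T).T        # (n_components, n_times)
maps = ica.mixing_                           # (n_sensors, n_components)
```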


Subject(s)
Color Perception; Humans; Color; Color Perception/physiology; Synesthesia
5.
Eur J Neurosci; 55(11-12): 3373-3390, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34155728

ABSTRACT

Ample evidence shows that the human brain carefully tracks acoustic temporal regularities in its input, perhaps by entraining cortical neural oscillations to the rate of stimulation. To what extent the entrained oscillatory activity influences the processing of upcoming auditory events remains debated. Here, we revisit a critical finding from Hickok et al. (2015) that demonstrated a clear impact of auditory entrainment on subsequent auditory detection. Participants were asked to detect tones embedded in stationary noise, following a noise that was amplitude-modulated at 3 Hz. Tonal targets occurred at various phases relative to the preceding noise modulation. The original study (N = 5) showed that the detectability of the tones (presented at near-threshold intensity) fluctuated cyclically at the same rate as the preceding noise modulation. We conducted an exact replication of the original paradigm (N = 23) and a conceptual replication using a shorter experimental procedure (N = 24). Neither experiment revealed significant entrainment effects at the group level. A restricted analysis of the subset of participants (36%) who did show the entrainment effect revealed no consistent phase alignment between detection facilitation and the preceding rhythmic modulation. Interestingly, both experiments showed a group-wide non-cyclic behavioural pattern, wherein participants' detection of the tonal targets was lower at early and late time points of the target period. The two experiments highlight both the sensitivity of the task for eliciting oscillatory entrainment and the striking individual variability in performance.
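A standard way to test for the cyclic modulation described above is to fit a cosine at the modulation rate to detection accuracy as a function of target time within the 3 Hz cycle. The sketch below shows that generic test with placeholder data; it is an assumption for illustration, not the replication's analysis code.

```python
# Illustrative sketch: fit a 3 Hz cosine to detection accuracy as a
# function of target time relative to the preceding noise modulation.
import numpy as np
from scipy.optimize import curve_fit

def cosine_3hz(t, amplitude, phase, offset):
    return amplitude * np.cos(2 * np.pi * 3.0 * t + phase) + offset

target_times = np.linspace(0, 1 / 3, 12, endpoint=False)  # one 3 Hz cycle
accuracy = np.random.uniform(0.4, 0.6, 12)                # placeholder data
params, _ = curve_fit(cosine_3hz, target_times, accuracy,
                      p0=(0.05, 0.0, 0.5))
```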


Subject(s)
Auditory Perception; Noise; Acoustic Stimulation/methods; Auditory Perception/physiology; Humans
6.
Brain Res; 1773: 147664, 2021 Dec 15.
Article in English | MEDLINE | ID: mdl-34560052

ABSTRACT

Predictive models in the brain rely on the continuous extraction of regularities from the environment. These models are thought to be updated by novel information, as reflected in prediction error responses such as the mismatch negativity (MMN). However, although in real life individuals often face situations in which uncertainty prevails, it remains unclear whether and how predictive models emerge in high-uncertainty contexts. Recent research suggests that uncertainty affects the magnitude of MMN responses in the context of music listening. However, musical predictions are typically studied with MMN stimulation paradigms based on Western tonal music, which are characterized by relatively high predictability. Hence, we developed an MMN paradigm to investigate how the high uncertainty of atonal music modulates predictive processes, as indexed by the MMN and behavior. Using MEG in a group of 20 subjects without musical training, we demonstrate that the magnetic MMN in response to pitch, intensity, timbre, and location deviants is evoked in both tonal and atonal melodies, with no significant differences between conditions. In contrast, in a separate behavioral experiment involving 39 non-musicians, participants detected pitch deviants more accurately and rated confidence higher in the tonal than in the atonal musical context. These results indicate that contextual tonal uncertainty modulates processing stages in which conscious awareness is involved, even though deviants robustly elicit low-level pre-attentive responses such as the MMN. That robust MMN responses were achieved despite high tonal uncertainty is relevant for future studies comparing groups of listeners' MMN responses to increasingly ecological music stimuli.
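For readers unfamiliar with the measure, the mismatch response referenced above is conventionally computed as the deviant-minus-standard difference of averaged evoked responses. The sketch below shows that textbook computation under assumed array shapes; it is not the study's analysis code.

```python
# Generic sketch of the MMN(m) computation: the mismatch response is the
# deviant-minus-standard difference wave of averaged evoked responses.
import numpy as np

def mismatch_response(standard_epochs, deviant_epochs):
    """Each input: (n_trials, n_times) array. Returns the difference wave."""
    return deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
```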


Subject(s)
Auditory Perception/physiology; Brain/physiology; Cognition/physiology; Music; Adult; Brain/diagnostic imaging; Female; Humans; Magnetic Resonance Imaging; Magnetoencephalography; Male; Middle Aged; Pitch Perception/physiology; Young Adult
7.
Proc Natl Acad Sci U S A; 118(16), 2021 Apr 20.
Article in English | MEDLINE | ID: mdl-33853943

ABSTRACT

The environment is shaped by two sources of temporal uncertainty: the discrete probability of whether an event will occur and, if it does, the continuous probability of when it will happen. These two types of uncertainty are fundamental to every form of anticipatory behavior, including learning, decision-making, and motor planning. It remains unknown how the brain models the two uncertainty parameters and how they interact in anticipation. It is commonly assumed that the discrete probability of whether an event will occur has a fixed effect on event expectancy over time. In contrast, we first demonstrate that this pattern is highly dynamic and monotonically increases across time. Intriguingly, this behavior is independent of the continuous probability of when an event will occur. The effect of this continuous probability on anticipation is commonly proposed to be driven by the hazard rate (HR) of events. We next show that the HR fails to account for behavior and propose a model of event expectancy based on the probability density function of events. Our results hold for both vision and audition, suggesting that the representation of the two uncertainties is independent of sensory input modality. These findings enrich the understanding of fundamental anticipatory processes and have provocative implications for many aspects of behavior and its neural underpinnings.
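For reference, the hazard rate invoked above has a standard textbook definition (the notation here is mine, not the paper's): given an event-time PDF f(t) with CDF F(t), the HR is the instantaneous probability of the event occurring now, given that it has not occurred yet.

```latex
\[
  \mathrm{HR}(t) \;=\; \frac{f(t)}{1 - F(t)},
  \qquad F(t) = \int_{0}^{t} f(\tau)\,d\tau
\]
```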


Subject(s)
Anticipation, Psychological/physiology; Decision Making/physiology; Uncertainty; Adult; Auditory Perception/physiology; Female; Humans; Learning/physiology; Male; Probability; Spatio-Temporal Analysis; Visual Perception/physiology
8.
Neuroimage; 227: 117436, 2021 Feb 15.
Article in English | MEDLINE | ID: mdl-33039619

ABSTRACT

When we feel connected or engaged during social behavior, are our brains in fact "in sync" in a formal, quantifiable sense? Most studies addressing this question use highly controlled tasks with homogeneous subject pools. In an effort to take a more naturalistic approach, we collaborated with art institutions to crowdsource neuroscience data: over the course of 5 years, we collected electroencephalogram (EEG) data from thousands of museum and festival visitors who volunteered to engage in a 10-min face-to-face interaction. Pairs of participants with various levels of familiarity sat inside the Mutual Wave Machine, an artistic neurofeedback installation that translates real-time correlations of each pair's EEG activity into light patterns. Because such inter-participant EEG correlations are prone to noise contamination, in subsequent offline analyses we computed inter-brain coupling using Imaginary Coherence and Projected Power Correlations, two synchrony metrics that are largely immune to instantaneous, noise-driven correlations. When applying these methods to two subsets of recorded data with the most consistent protocols, we found that pairs' trait empathy, social closeness, engagement, and social behavior (joint action and eye contact) consistently predicted the extent to which their brain activity became synchronized, most prominently in low alpha (~7-10 Hz) and beta (~20-22 Hz) oscillations. These findings support an account where shared engagement and joint action drive coupled neural activity and behavior during dynamic, naturalistic social interactions. To our knowledge, this work constitutes a first demonstration that an interdisciplinary, real-world, crowdsourcing neuroscience approach may provide a promising method to collect large, rich datasets pertaining to real-life face-to-face interactions. Additionally, it demonstrates how the general public can participate and engage in the scientific process outside the laboratory. Institutions such as museums, galleries, or any other organization where the public actively engages out of self-motivation can help facilitate this type of citizen science research and support the collection of large datasets under scientifically controlled experimental conditions. To further enhance public interest in this out-of-the-lab experimental approach, the data and results of this study are disseminated through a website tailored to the general public (wp.nyu.edu/mutualwavemachine).
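Imaginary coherence, the first of the two metrics named above, has a standard formulation: taking only the imaginary part of coherency discards zero-lag correlations, which are typically driven by shared noise or volume conduction. A minimal sketch of that formulation follows; the parameters are assumptions, and the study's exact implementation may differ.

```python
# Sketch of imaginary coherence between two channels (standard
# formulation: the imaginary part of coherency, insensitive to
# instantaneous zero-lag correlations).
import numpy as np
from scipy.signal import csd, welch

def imaginary_coherence(x, y, sfreq, nperseg=256):
    freqs, s_xy = csd(x, y, fs=sfreq, nperseg=nperseg)   # cross-spectrum
    _, s_xx = welch(x, fs=sfreq, nperseg=nperseg)        # auto-spectra
    _, s_yy = welch(y, fs=sfreq, nperseg=nperseg)
    return freqs, np.imag(s_xy / np.sqrt(s_xx * s_yy))
```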


Subject(s)
Brain/physiology; Empathy/physiology; Social Behavior; Crowdsourcing; Electroencephalography; Humans; Interpersonal Relations; Neurofeedback
9.
Nat Commun; 10(1): 5802, 2019 Dec 20.
Article in English | MEDLINE | ID: mdl-31862912

ABSTRACT

Humans anticipate events signaled by sensory cues. It is commonly assumed that two uncertainty parameters modulate the brain's capacity to predict: the hazard rate (HR) of event probability and the uncertainty in time estimation, which increases with elapsed time. We investigated both assumptions by presenting event probability density functions (PDFs) in each of three sensory modalities. We show that perceptual systems use the reciprocal PDF, and not the HR, to model event probability density. We also demonstrate that temporal uncertainty does not necessarily grow with elapsed time but can also diminish, depending on the event PDF. Previous research identified neuronal activity related to event probability at multiple levels of the cortical hierarchy (sensory (V4), association (LIP), motor, and other areas), proposing the HR as an elementary neuronal computation. Our results, consistent across vision, audition, and somatosensation, suggest that the neurobiological implementation of event anticipation is based on a different, simpler, and more stable computation than the HR: the reciprocal PDF of events in time.
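To make the contrast between the two candidate computations concrete, the sketch below evaluates both for an assumed Gaussian event-time distribution. The parameters are illustrative only and do not reproduce the paper's stimuli.

```python
# Numerical sketch contrasting the two candidate computations for an
# assumed Gaussian event-time distribution (illustration only).
import numpy as np
from scipy import stats

t = np.linspace(0.5, 2.5, 200)                 # candidate event times (s)
pdf = stats.norm.pdf(t, loc=1.5, scale=0.3)    # event PDF f(t)
cdf = stats.norm.cdf(t, loc=1.5, scale=0.3)    # CDF F(t)
hazard_rate = pdf / (1.0 - cdf)                # classic HR model
reciprocal_pdf = 1.0 / pdf                     # computation favored here
```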


Subject(s)
Anticipation, Psychological/physiology; Cerebral Cortex/physiology; Cues; Models, Psychological; Uncertainty; Adult; Cerebral Cortex/cytology; Female; Humans; Neurons/physiology; Perception/physiology; Reaction Time/physiology; Time Factors; Young Adult
10.
J Neurosci; 39(33): 6498-6512, 2019 Aug 14.
Article in English | MEDLINE | ID: mdl-31196933

ABSTRACT

How the human brain represents speech in memory is still unknown. An obvious characteristic of speech is that it unfolds over time. During speech processing, neural oscillations are modulated by the temporal properties of the acoustic speech signal, but acquired knowledge of the temporal structure of language also influences speech perception-related brain activity. This suggests that speech could be represented in the temporal domain, a form of representation that the brain also uses to encode autobiographic memories. Empirical evidence for such a memory code has been lacking. We investigated the nature of speech memory representations using direct cortical recordings in the left perisylvian cortex during delayed sentence reproduction in female and male patients undergoing awake tumor surgery. Our results reveal that the brain endogenously represents speech in the temporal domain. Temporal pattern similarity analyses revealed that the phase of frontotemporal low-frequency oscillations, primarily in the beta range, represents sentence identity in working memory. The positive relationship between beta power during working memory and task performance suggests that working memory representations benefit from increased phase separation.

SIGNIFICANCE STATEMENT Memory is an endogenous source of information based on experience. While neural oscillations encode autobiographic memories in the temporal domain, little is known about their contribution to memory representations of human speech. Our electrocortical recordings in participants who maintained sentences in memory identify the phase of left frontotemporal beta oscillations as the most prominent information carrier of sentence identity. These observations provide evidence for a theoretical model of speech memory representations and explain why interfering with beta oscillations in the left inferior frontal cortex diminishes verbal working-memory capacity. The lack of sentence identity coding at the syllabic rate suggests that sentences are represented in memory in a more abstract form compared with speech coding during perception and production.
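One generic way to operationalize the phase-based pattern similarity described above is to extract beta-band phase time courses and compare them across trials; trials maintaining the same sentence should yield higher phase similarity. The sketch below shows that approach; filter settings and function names are my assumptions, not the study's code.

```python
# Hedged sketch of a temporal pattern similarity analysis on beta-band
# phase (filter band and order are assumptions for illustration).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_phase(signal, sfreq, band=(15.0, 25.0)):
    b, a = butter(4, band, btype="bandpass", fs=sfreq)
    return np.angle(hilbert(filtfilt(b, a, signal)))

def phase_similarity(sig1, sig2, sfreq):
    """Mean resultant length of the phase difference (1 = identical)."""
    dphi = beta_phase(sig1, sfreq) - beta_phase(sig2, sfreq)
    return np.abs(np.mean(np.exp(1j * dphi)))
```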


Subject(s)
Brain/physiology; Memory, Short-Term/physiology; Speech Perception/physiology; Speech/physiology; Adult; Electrocorticography; Female; Humans; Male; Middle Aged; Young Adult
11.
Curr Biol; 27(9): 1375-1380, 2017 May 08.
Article in English | MEDLINE | ID: mdl-28457867

ABSTRACT

The human brain has evolved for group living [1]. Yet we know so little about how it supports dynamic group interactions that the study of real-world social exchanges has been dubbed the "dark matter of social neuroscience" [2]. Recently, various studies have begun to approach this question by comparing the brain responses of multiple individuals during a variety of (semi-naturalistic) tasks [3-15]. These experiments reveal how stimulus properties [13], individual differences [14], and contextual factors [15] may underpin similarities and differences in neural activity across people. However, most studies to date suffer from various limitations: they often lack direct face-to-face interaction between participants, are typically limited to dyads, do not investigate social dynamics across time, and, crucially, rarely study social behavior under naturalistic circumstances. Here we extend such experimentation drastically, beyond dyads and beyond laboratory walls, to identify neural markers of group engagement during dynamic real-world group interactions. We used portable electroencephalography (EEG) to simultaneously record brain activity from a class of 12 high school students over the course of a semester (11 classes) during regular classroom activities (Figures 1A-1C; Supplemental Experimental Procedures, section S1). A novel analysis technique for assessing group-based neural coherence demonstrates that the extent to which brain activity is synchronized across students predicts both student class engagement and social dynamics. This suggests that brain-to-brain synchrony is a possible neural marker for dynamic social interactions, likely driven by shared attention mechanisms. This study validates a promising new method for investigating the neuroscience of group interactions in ecologically natural settings.
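As a simplified illustration of the group-synchrony idea above (not the paper's novel coherence technique), one could average pairwise correlations of each student's signal across all pairs in a time window:

```python
# Simplified sketch of a group-synchrony measure: mean pairwise Pearson
# correlation across all student pairs (a generic stand-in, not the
# study's group-based neural coherence method).
import numpy as np
from itertools import combinations

def group_synchrony(signals):
    """signals: (n_students, n_times) array. Mean pairwise correlation."""
    pairs = combinations(range(signals.shape[0]), 2)
    return np.mean([np.corrcoef(signals[i], signals[j])[0, 1]
                    for i, j in pairs])
```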


Subject(s)
Brain/physiology; Electroencephalography/methods; Interpersonal Relations; Schools; Social Behavior; Female; Humans; Male
12.
Neuron; 89(2): 384-97, 2016 Jan 20.
Article in English | MEDLINE | ID: mdl-26777277

ABSTRACT

Primate visual cortex is hierarchically organized. Bottom-up and top-down influences are exerted through distinct frequency channels, as was recently revealed in macaques by correlating inter-areal influences with laminar anatomical projection patterns. Because such anatomical data cannot be obtained in human subjects, we selected seven homologous macaque and human visual areas, and we correlated the macaque laminar projection patterns to human inter-areal directed influences as measured with magnetoencephalography. We show that influences along feedforward projections predominate in the gamma band, whereas influences along feedback projections predominate in the alpha-beta band. Rhythmic inter-areal influences constrain a functional hierarchy of the seven homologous human visual areas that is in close agreement with the respective macaque anatomical hierarchy. Rhythmic influences allow an extension of the hierarchy to 26 human visual areas, including uniquely human brain areas. Hierarchical levels of ventral- and dorsal-stream visual areas are differentially affected by inter-areal influences in the alpha-beta band.
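The directed-influence logic above can be illustrated with a generic band-limited Granger-causality test: band-pass each signal, then ask whether one area's past improves prediction of the other's present. This is a deliberate simplification under assumed parameters, not the spectrally resolved MEG measure used in the study.

```python
# Hedged sketch: band-limited Granger test between two area time courses
# (a generic simplification; filter settings and lag order are assumed).
import numpy as np
from scipy.signal import butter, filtfilt
from statsmodels.tsa.stattools import grangercausalitytests

def band_granger(source, target, sfreq, band, maxlag=10):
    """Test whether `source` Granger-causes `target` within `band` (Hz)."""
    b, a = butter(4, band, btype="bandpass", fs=sfreq)
    # Column order matters: the test asks whether column 2 causes column 1.
    data = np.column_stack([filtfilt(b, a, target), filtfilt(b, a, source)])
    result = grangercausalitytests(data, maxlag=maxlag, verbose=False)
    return result[maxlag][0]["ssr_ftest"]    # (F, p, df_denom, df_num)

# Usage idea: compare band_granger(v1, v4, 1000, (40, 90)) for gamma-band
# feedforward against band_granger(v4, v1, 1000, (8, 25)) for alpha-beta
# feedback (variable names here are hypothetical).
```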


Subject(s)
Alpha Rhythm/physiology; Beta Rhythm/physiology; Feedback, Physiological/physiology; Gamma Rhythm/physiology; Visual Cortex/physiology; Visual Pathways/physiology; Animals; Female; Humans; Macaca; Male; Visual Perception/physiology