Results 1 - 20 of 250
1.
Eur Arch Psychiatry Clin Neurosci ; 274(7): 1585-1599, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38270620

ABSTRACT

Temporal coordination of communicative behavior occurs not only between but also within interaction partners (e.g., between gaze and gestures). This intrapersonal synchrony (IaPS) is assumed to constitute interpersonal alignment. Studies show systematic variations in IaPS in individuals with autism, which may affect the degree of interpersonal temporal coordination. In the current study, we reversed the approach and mapped the nonverbal behavior measured in interactants with and without ASD in a previous study onto virtual characters to study the effects of differential IaPS on observers (N = 68), both with and without ASD (crossed design). During a communication task with both characters, who indicated targets with gaze and delayed pointing gestures, we measured response times, gaze behavior, and post hoc impression formation. Results show that character behavior indicative of ASD resulted in overall longer decoding times in observers, and this effect was even more pronounced in observers with ASD. A classification of observers' gaze types indicated differentiated decoding strategies: whereas non-autistic observers used a rather consistent eyes-focused strategy associated with efficient and fast responses, observers with ASD showed highly variable decoding strategies. In contrast to communication efficiency, impression formation was not influenced by IaPS. The results underline the importance of timing differences in both production and perception processes during multimodal nonverbal communication between interactants with and without ASD. In essence, the current findings locate the manifestation of reduced reciprocity in autism not merely in the person, but in the interactional dynamics of dyads.


Subject(s)
Autism Spectrum Disorder , Nonverbal Communication , Social Perception , Humans , Male , Female , Adult , Nonverbal Communication/physiology , Young Adult , Autism Spectrum Disorder/physiopathology , Gestures , Autistic Disorder/physiopathology , Autistic Disorder/psychology , Social Interaction , Fixation, Ocular/physiology , Adolescent , Interpersonal Relations , Reaction Time/physiology , Virtual Reality
2.
J Neurosci ; 40(44): 8530-8542, 2020 10 28.
Article in English | MEDLINE | ID: mdl-33023923

ABSTRACT

Natural conversation is multisensory: when we can see the speaker's face, visual speech cues improve our comprehension. The neuronal mechanisms underlying this phenomenon remain unclear. The two main alternatives are visually mediated phase modulation of neuronal oscillations (excitability fluctuations) in auditory neurons and visual input-evoked responses in auditory neurons. Investigating this question using naturalistic audiovisual speech with intracranial recordings in humans of both sexes, we find evidence for both mechanisms. Remarkably, auditory cortical neurons track the temporal dynamics of purely visual speech using the phase of their slow oscillations and phase-related modulations in broadband high-frequency activity. Consistent with known perceptual enhancement effects, the visual phase reset amplifies the cortical representation of concomitant auditory speech. In contrast to this, and in line with earlier reports, visual input reduces the amplitude of evoked responses to concomitant auditory input. We interpret the combination of improved phase tracking and reduced response amplitude as evidence for more efficient and reliable stimulus processing in the presence of congruent auditory and visual speech inputs. SIGNIFICANCE STATEMENT: Watching the speaker can facilitate our understanding of what is being said. The mechanisms responsible for this influence of visual cues on the processing of speech remain incompletely understood. We studied these mechanisms by recording the electrical activity of the human brain through electrodes implanted surgically inside the brain. We found that visual inputs can operate by directly activating auditory cortical areas, and also indirectly by modulating the strength of cortical responses to auditory input. Our results help to understand the mechanisms by which the brain merges auditory and visual speech into a unitary perception.
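The study's intracranial analysis pipeline is not reproduced here; purely as a generic illustration of the two signal features the abstract refers to (the phase of slow oscillations and the amplitude envelope of broadband high-frequency activity), the following Python sketch extracts both from a single electrode trace using a Hilbert transform. The frequency bands, sampling rate, and simulated data are assumptions for the example, not parameters from the paper.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, low, high, fs, order=4):
    # Zero-phase Butterworth band-pass filter
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def slow_phase_and_high_freq_envelope(trace, fs):
    # Instantaneous phase of an assumed 1-8 Hz "slow oscillation" band
    slow = bandpass(trace, 1.0, 8.0, fs)
    phase = np.angle(hilbert(slow))
    # Amplitude envelope of an assumed 70-150 Hz broadband high-frequency band
    high = bandpass(trace, 70.0, 150.0, fs)
    envelope = np.abs(hilbert(high))
    return phase, envelope

# Example with 10 s of simulated data at 1 kHz (placeholder, not real recordings)
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
trace = np.sin(2 * np.pi * 4 * t) + 0.1 * np.random.randn(t.size)
phase, envelope = slow_phase_and_high_freq_envelope(trace, fs)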


Subject(s)
Auditory Cortex/physiology , Evoked Potentials/physiology , Nonverbal Communication/physiology , Adult , Drug Resistant Epilepsy/surgery , Electrocorticography , Evoked Potentials, Auditory/physiology , Evoked Potentials, Visual/physiology , Female , Humans , Middle Aged , Neurons/physiology , Nonverbal Communication/psychology , Photic Stimulation , Young Adult
3.
Neuroimage ; 237: 118220, 2021 08 15.
Article in English | MEDLINE | ID: mdl-34058335

ABSTRACT

Action observation is supported by a network of regions in occipito-temporal, parietal, and premotor cortex in primates. Recent research suggests that the parietal node has regions dedicated to different action classes, including manipulation, interpersonal interactions, skin displacement, locomotion, and climbing. The goals of the current study were: 1) to extend this work to new classes of actions that are communicative and specific to humans, and 2) to investigate how parietal cortex differs from occipito-temporal and premotor cortex in representing action classes. Human subjects underwent fMRI scanning while observing three action classes (indirect communication, direct communication, and manipulation) plus two types of control stimuli: static controls, which were static frames from the video clips, and dynamic controls, consisting of temporally scrambled optic-flow information. Using univariate analysis, MVPA, and representational similarity analysis, our study presents several novel findings. First, we provide further evidence for the anatomical segregation in parietal cortex of different action classes: we have found a new site in cytoarchitectonic parietal area PFt that is specific for representing human-specific indirect communicative actions. Second, we found that the discriminability between action classes was higher in parietal cortex than at the other two levels, suggesting the coding of action identity information at this level. Finally, our results advocate the use of such control stimuli not just for univariate analysis of complex action videos but also when using multivariate techniques.


Subject(s)
Brain Mapping , Motor Activity/physiology , Nonverbal Communication/physiology , Parietal Lobe/physiology , Social Perception , Visual Perception/physiology , Adult , Female , Humans , Magnetic Resonance Imaging , Male , Parietal Lobe/diagnostic imaging , Young Adult
4.
J Nerv Ment Dis ; 209(2): 128-136, 2021 02 01.
Article in English | MEDLINE | ID: mdl-33214386

ABSTRACT

This pilot study used a matched-subjects design to investigate whether nonverbal synchrony is a diagnostic feature of depression and whether it mediates between depression and post-session ratings of interviewer behavior. The sample included n = 15 patients with major depression and n = 15 healthy controls (aged 20-30 years, 40% female). We conducted structured diagnostic interviews for somatic complaints to standardize the recording setting, topic, and course of the conversation. Body movements and facial expressions were coded automatically frame by frame using computer vision methods. Ratings of the interviewers' professional behavior and positive affect were assessed using questionnaires. Patients with depression showed less movement synchrony and less synchronous positive facial expressions. Only synchronous positive expressions mediated between depression and less perceived positive affect. We conclude that the applied methodology is well suited to examine nonverbal processes under naturalistic but widely standardized conditions and that depression affects nonverbal communication in medical conversations.
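The abstract does not state which synchrony algorithm was used; a common choice in this literature is windowed, lagged cross-correlation of frame-by-frame movement time series. The sketch below illustrates that general idea only; the window length, lag range, frame rate, and variable names are assumptions for the example, not the study's settings.

import numpy as np

def windowed_synchrony(mov_a, mov_b, fps=25, win_s=5.0, max_lag_s=2.0):
    # Mean of peak absolute lagged correlations between two movement series,
    # computed over non-overlapping sliding windows; higher = more synchrony.
    win, max_lag = int(win_s * fps), int(max_lag_s * fps)
    peaks = []
    for start in range(max_lag, len(mov_a) - win - max_lag, win):
        a = mov_a[start:start + win]
        a = (a - a.mean()) / (a.std() + 1e-12)
        corrs = []
        for lag in range(-max_lag, max_lag + 1):
            b = mov_b[start + lag:start + lag + win]
            b = (b - b.mean()) / (b.std() + 1e-12)
            corrs.append(np.mean(a * b))        # Pearson correlation at this lag
        peaks.append(np.max(np.abs(corrs)))     # best alignment within the lag window
    return float(np.mean(peaks))

# Example with two random movement-energy series (placeholder data)
rng = np.random.default_rng(0)
patient = rng.random(3000)       # e.g., frame-wise motion energy of one interactant
interviewer = rng.random(3000)   # e.g., frame-wise motion energy of the other
print(windowed_synchrony(patient, interviewer))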


Subject(s)
Depressive Disorder, Major/diagnosis , Facial Expression , Movement , Adult , Case-Control Studies , Depressive Disorder, Major/physiopathology , Female , Humans , Interview, Psychological , Male , Nonverbal Communication/physiology , Nonverbal Communication/psychology , Pilot Projects , Psychiatric Status Rating Scales , Surveys and Questionnaires , Young Adult
5.
Neuroimage ; 221: 117191, 2020 11 01.
Article in English | MEDLINE | ID: mdl-32711066

ABSTRACT

Facial and vocal cues provide critical social information about other humans, including their emotional and attentional states and the content of their speech. Recent work has shown that the face-responsive region of posterior superior temporal sulcus ("fSTS") also responds strongly to vocal sounds. Here, we investigate the functional role of this region and the broader STS by measuring responses to a range of face movements, vocal sounds, and hand movements using fMRI. We find that the fSTS responds broadly to different types of audio and visual face action, including both richly social communicative actions, as well as minimally social noncommunicative actions, ruling out hypotheses of specialization for processing speech signals, or communicative signals more generally. Strikingly, however, responses to hand movements were very low, whether communicative or not, indicating a specific role in the analysis of face actions (facial and vocal), not a general role in the perception of any human action. Furthermore, spatial patterns of response in this region were able to decode communicative from noncommunicative face actions, both within and across modality (facial/vocal cues), indicating sensitivity to an abstract social dimension. These functional properties of the fSTS contrast with a region of middle STS that has a selective, largely unimodal auditory response to speech sounds over both communicative and noncommunicative vocal nonspeech sounds, and nonvocal sounds. Region of interest analyses were corroborated by a data-driven independent component analysis, identifying face-voice and auditory speech responses as dominant sources of voxelwise variance across the STS. These results suggest that the STS contains separate processing streams for the audiovisual analysis of face actions and auditory speech processing.


Subject(s)
Auditory Perception/physiology , Brain Mapping , Cues , Facial Expression , Facial Recognition/physiology , Nonverbal Communication/physiology , Social Perception , Temporal Lobe/physiology , Adolescent , Adult , Gestures , Hand/physiology , Humans , Magnetic Resonance Imaging , Speech Perception/physiology , Young Adult
6.
Psychol Res ; 84(7): 1897-1911, 2020 Oct.
Article in English | MEDLINE | ID: mdl-31079227

ABSTRACT

Humans are unique in their ability to communicate information through representational gestures which visually simulate an action (e.g., moving hands as if opening a jar). Previous research indicates that the intention to communicate modulates the kinematics (e.g., velocity, size) of such gestures. Whether and how this modulation influences addressees' comprehension of gestures has not been investigated. Here we ask whether communicative kinematic modulation enhances semantic comprehension (i.e., identification) of gestures. We additionally investigate whether any comprehension advantage is due to enhanced early identification or late identification. Participants (n = 20) watched videos of representational gestures produced in a more- (n = 60) or less-communicative (n = 60) context and performed a forced-choice recognition task. We tested the isolated role of kinematics by removing visibility of the actors' faces in Experiment I, and by reducing the stimuli to stick-light figures in Experiment II. Three video lengths were used to disentangle early identification from late identification. Accuracy and response time quantified the main effects. Kinematic modulation was tested for correlations with task performance. We found higher gesture identification performance for more- compared with less-communicative gestures. However, early identification was only enhanced within a full visual context, while late identification occurred even when viewing isolated kinematics. Additionally, temporally segmented acts with more post-stroke holds were associated with higher accuracy. Our results demonstrate that communicative signaling, interacting with other visual cues, generally supports gesture identification, while kinematic modulation specifically enhances late identification in the absence of other cues. Results provide insights into mutual understanding processes as well as into the creation of artificial communicative agents.


Subject(s)
Biomechanical Phenomena/physiology , Comprehension/physiology , Gestures , Nonverbal Communication/physiology , Semantics , Adolescent , Adult , Female , Humans , Male , Netherlands , Young Adult
7.
Phonetica ; 77(5): 327-349, 2020.
Article in English | MEDLINE | ID: mdl-31962309

ABSTRACT

Prosodic features, such as intonation and voice intensity, have a well-documented role in communicating emotion, but less is known about the role of laryngeal voice quality in speech and particularly in nonverbal vocalizations such as laughs and moans. Potentially, however, variations in voice quality between tense and breathy may convey rich information about the speaker's physiological and affective state. In this study breathiness was manipulated in synthetic human nonverbal vocalizations by adjusting the relative strength of upper harmonics and aspiration noise. In experiment 1 (28 prototypes × 3 manipulations = 84 sounds), otherwise identical vocalizations with tense versus breathy voice quality were associated with higher arousal (general alertness), higher dominance, and lower valence (unpleasant states). Ratings on discrete emotions in experiment 2 (56 × 3 = 168 sounds) confirmed that breathiness was reliably associated with positive emotions, particularly in ambiguous vocalizations (gasps and moans). The spectral centroid did not fully account for the effect of manipulation, confirming that the perceived change in voice quality was more specific than a general shift in timbral brightness. Breathiness is thus involved in communicating emotion with nonverbal vocalizations, possibly due to changes in low-level auditory salience and perceived vocal effort.
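For readers unfamiliar with the spectral centroid mentioned above, the short sketch below shows one standard way to compute it from a mono waveform: the amplitude-weighted mean frequency of the magnitude spectrum, often read as timbral brightness. The synthetic signals and sampling rate are placeholders, not the study's stimuli.

import numpy as np

def spectral_centroid(waveform, fs):
    # Amplitude-weighted mean frequency of the magnitude spectrum, in Hz
    spectrum = np.abs(np.fft.rfft(waveform))
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / fs)
    return float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))

# Example: a noisier ("breathier") synthetic tone has a higher centroid
fs = 44100
t = np.arange(0, 1.0, 1.0 / fs)
tense = np.sin(2 * np.pi * 220 * t)
breathy = tense + 0.3 * np.random.randn(t.size)   # crude aspiration-noise stand-in
print(spectral_centroid(tense, fs), spectral_centroid(breathy, fs))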


Subject(s)
Nonverbal Communication , Voice Quality , Female , Humans , Male , Nonverbal Communication/physiology , Nonverbal Communication/psychology , Speech Acoustics
8.
Cogn Affect Behav Neurosci ; 19(5): 1259-1272, 2019 10.
Article in English | MEDLINE | ID: mdl-31290016

ABSTRACT

Nonverbal communication determines much of how we perceive explicit, verbal messages. Facial expressions and social touch, for example, influence affinity and conformity. To understand the interaction between nonverbal and verbal information, we studied how the psychophysiological time-course of semiotics (the decoding of the meaning of a message) is altered by interpersonal touch and facial expressions. A virtual-reality-based economic decision-making game, the Ultimatum Game, was used to investigate how participants perceived, and responded to, financial offers of variable levels of fairness. In line with previous studies, unfair offers evoked medial frontal negativity (MFN) within the N2 time window, which has been interpreted as reflecting an emotional reaction to violated social norms. Contrary to this emotional interpretation of the MFN, however, nonverbal signals did not modulate the MFN component, only affecting fairness perception during the P3 component. This suggests that the nonverbal context affects the late, but not the early, stage of fairness perception. We discuss the implications of the semiotics of the message and the messenger as a process by which parallel information sources of "who says what" are integrated in reverse order: first the message, then the messenger.


Subject(s)
Brain/physiology , Decision Making/physiology , Interpersonal Relations , Nonverbal Communication/physiology , Social Behavior , Adult , Electroencephalography , Evoked Potentials , Facial Expression , Facial Recognition/physiology , Female , Games, Experimental , Humans , Male , Middle Aged , Touch Perception/physiology , Young Adult
9.
J Med Internet Res ; 21(11): e15459, 2019 11 27.
Article in English | MEDLINE | ID: mdl-31774400

ABSTRACT

BACKGROUND: Attending to the wide range of communication behaviors that convey empathy is important but often underemphasized for reducing errors in care, improving patient satisfaction, and improving cancer patient outcomes. A virtual human (VH)-based simulation, MPathic-VR, was developed to train health care providers in empathic communication with patients and in interprofessional settings, and was evaluated through a randomized controlled trial. OBJECTIVE: This mixed methods study aimed to investigate the differential effects of a VH-based simulation developed to train health care providers in empathic patient-provider and interprofessional communication. METHODS: We employed a mixed methods intervention design, comparing 2 quantitative measures (MPathic-VR-calculated scores and objective structured clinical exam [OSCE] scores) with qualitative reflections by medical students about their experiences. This paper is a secondary, focused analysis of intervention arm data from the larger trial. Students at 3 medical schools in the United States (n=206) received the simulation to improve empathic communication skills. We conducted analysis of variance, thematic text analysis, and merging mixed methods analysis. RESULTS: OSCE scores were significantly improved for learners in the intervention group (mean 0.806, SD 0.201) compared with the control group (mean 0.752, SD 0.198; F(1,414)=6.09; P=.01). Qualitative analysis revealed 3 major positive themes for the MPathic-VR group learners: gaining useful communication skills, learning awareness of nonverbal skills in addition to verbal skills, and feeling motivated to learn more about communication. Finally, the results of the mixed methods analysis indicated that most of the variation between high, middle, and lower performers concerned nonverbal behaviors. Medium and high OSCE scorers most often commented on the importance of nonverbal communication. Themes of motivation to learn about communication were only present in middle and high scorers. CONCLUSIONS: VHs are a promising strategy for improving empathic communication in health care. Higher performers seemed most engaged to learn, particularly nonverbal skills.


Subject(s)
Clinical Competence/standards , Nonverbal Communication/physiology , Simulation Training/methods , Students, Medical/psychology , Communication , Female , Humans , Male
10.
Psychiatry Clin Neurosci ; 73(2): 50-62, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30565801

ABSTRACT

AIM: Emotional expressions are one of the most widely studied topics in neuroscience, from both clinical and non-clinical perspectives. Atypical emotional expressions are seen in various psychiatric conditions, including schizophrenia, depression, and autism spectrum conditions. Understanding the basics of emotional expressions and recognition can be crucial for diagnostic and therapeutic procedures. Emotions can be expressed in the face, gesture, posture, voice, and behavior and affect physiological parameters, such as the heart rate or body temperature. With modern technology, clinicians can use a variety of tools ranging from sophisticated laboratory equipment to smartphones and web cameras. The aim of this paper is to review the currently used tools using modern technology and discuss their usefulness as well as possible future directions in emotional expression research and treatment strategies. METHODS: The authors conducted a literature review in the PubMed, EBSCO, and SCOPUS databases, using the following key words: 'emotions,' 'emotional expression,' 'affective computing,' and 'autism.' The most relevant and up-to-date publications were identified and discussed. Search results were supplemented by the authors' own research in the field of emotional expression. RESULTS: We present a critical review of the currently available technical diagnostic and therapeutic methods. The most important studies are summarized in a table. CONCLUSION: Most of the currently available methods have not been adequately validated in clinical settings. They may be a great help in everyday practice; however, they need further testing. Future directions in this field include more virtual-reality-based and interactive interventions, as well as development and improvement of humanoid robots.


Subject(s)
Emotions/physiology , Facial Expression , Facial Muscles/physiology , Facial Recognition/physiology , Mental Disorders/physiopathology , Nonverbal Communication/physiology , Social Perception , Voice/physiology , Humans
11.
J Intellect Disabil Res ; 63(3): 244-254, 2019 03.
Article in English | MEDLINE | ID: mdl-30468263

ABSTRACT

BACKGROUND: The combination of intellectual, communicative and motor deficits limits the use of standardised behavioural assessments in individuals with Angelman syndrome (AS). The current study aimed to objectively evaluate the extent of social-emotional processing in AS using auditory event-related potentials (ERPs) during passive exposure to spoken stimuli. METHODS: Auditory ERP responses were recorded in 13 nonverbal individuals with the deletion subtype of AS, age 4-45 years, during the name recognition paradigm, in which their own names and names of close others (relative or friend) were presented among novel names. No behavioural responses were required. RESULTS: Contrary to findings in typical children and adults, there was no significant evidence of differential neural response to known vs. novel names in participants with AS. Nevertheless, greater amplitude differences between known and unknown names demonstrated the predicted association with better interpersonal relationships and receptive communication abilities. CONCLUSIONS: These findings indicate good tolerability of ERP procedures (85% success rate). The lack of own name differentiation is consistent with increased incidence of the autism-related symptoms in AS. Strong associations between the caregiver reports of adaptive functioning and neural indices of known name recognition support the utility of brain-based measures for objectively evaluating cognitive and affective processes in nonverbal persons with neurodevelopmental disorders.


Subject(s)
Adaptation, Psychological/physiology , Affect/physiology , Angelman Syndrome/physiopathology , Cerebral Cortex/physiopathology , Evoked Potentials, Auditory/physiology , Language Development Disorders/physiopathology , Nonverbal Communication/physiology , Recognition, Psychology/physiology , Social Perception , Social Skills , Speech Perception/physiology , Adolescent , Adult , Angelman Syndrome/complications , Child , Child, Preschool , Electroencephalography , Female , Humans , Language Development Disorders/etiology , Male , Middle Aged , Names , Young Adult
12.
Behav Res Methods ; 51(2): 769-777, 2019 04.
Article in English | MEDLINE | ID: mdl-30143970

ABSTRACT

Action, gesture, and sign represent unique aspects of human communication that use form and movement to convey meaning. Researchers typically use manual coding of video data to characterize naturalistic, meaningful movements at various levels of description, but the availability of markerless motion-tracking technology allows for quantification of the kinematic features of gestures or any meaningful human movement. We present a novel protocol for extracting a set of kinematic features from movements recorded with Microsoft Kinect. Our protocol captures spatial and temporal features, such as height, velocity, submovements/strokes, and holds. This approach is based on studies of communicative actions and gestures and attempts to capture features that are consistently implicated as important kinematic aspects of communication. We provide open-source code for the protocol, a description of how the features are calculated, a validation of these features as quantified by our protocol versus manual coders, and a discussion of how the protocol can be applied. The protocol effectively quantifies kinematic features that are important in the production (e.g., characterizing different contexts) as well as the comprehension (e.g., used by addressees to understand intent and semantics) of manual acts. The protocol can also be integrated with qualitative analysis, allowing fast and objective demarcation of movement units, providing accurate coding even of complex movements. This can be useful to clinicians, as well as to researchers studying multimodal communication or human-robot interactions. By making this protocol available, we hope to provide a tool that can be applied to understanding meaningful movement characteristics in human communication.
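The authors distribute their own open-source implementation; the fragment below is not that code, only a minimal sketch of how two of the named feature types (speed-based features and holds) could be derived from a time series of 3-D wrist positions. The frame rate, hold threshold, and simulated trajectory are assumptions for illustration.

import numpy as np

def hand_kinematics(xyz, fps=30.0, hold_thresh=0.05, min_hold_s=0.3):
    # Simple kinematic features from an (n_frames, 3) array of wrist positions in metres:
    # peak speed, mean speed, and number of holds (runs of near-zero speed).
    velocity = np.diff(xyz, axis=0) * fps          # per-axis velocity in m/s
    speed = np.linalg.norm(velocity, axis=1)       # scalar speed per frame
    still = speed < hold_thresh                    # frames below the hold threshold
    min_frames = int(min_hold_s * fps)
    holds, run = 0, 0
    for s in still:
        run = run + 1 if s else 0
        if run == min_frames:                      # count each qualifying run once
            holds += 1
    return {"peak_speed": float(speed.max()),
            "mean_speed": float(speed.mean()),
            "n_holds": holds}

# Example with a simulated wrist trajectory (placeholder, not Kinect output)
rng = np.random.default_rng(1)
wrist = np.cumsum(rng.normal(0, 0.005, size=(300, 3)), axis=0)
print(hand_kinematics(wrist))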


Subject(s)
Gestures , Image Processing, Computer-Assisted/methods , Movement , Nonverbal Communication/physiology , Video Recording , Biomechanical Phenomena , Humans , Motion
13.
J Psycholinguist Res ; 48(1): 107-116, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30014310

ABSTRACT

Humans are social beings that form hierarchies to gain and maintain resources. Dominant positions are often obtained through resource control strategies, displayed through language. Language can be examined in a number of ways, including number of vocalizations and pragmatic skills. The benefit of pragmatic skills, in relation to popularity (group dominance), can be explained by virtue signalling and sociometer theory. The current study examined the relationships among individuals in a novel group setting. Results revealed that popularity within the group was related to the number of vocalizations and increased pragmatic skills. Taken together, it appears that vocalizations and pragmatic skills may help individuals signal their position within the hierarchy while monitoring the social communications of others.


Subject(s)
Group Processes , Nonverbal Communication/physiology , Psycholinguistics , Social Desirability , Social Dominance , Social Skills , Verbal Behavior/physiology , Adolescent , Adult , Female , Humans , Male , Speech/physiology , Young Adult
14.
Proc Natl Acad Sci U S A ; 112(39): E5434-42, 2015 Sep 29.
Article in English | MEDLINE | ID: mdl-26371313

ABSTRACT

Attending to emotional information conveyed by the eyes is an important social skill in humans. The current study examined this skill in early development by measuring attention to eyes while viewing emotional faces in 7-mo-old infants. In particular, we investigated individual differences in infant attention to eyes in the context of genetic variation (CD38 rs3796863 polymorphism) and experiential variation (exclusive breastfeeding duration) related to the oxytocin system. Our results revealed that, whereas infants at this age show a robust fear bias (increased attention to fearful eyes), their attention to angry and happy eyes varies as a function of exclusive breastfeeding experience and genetic variation in CD38. Specifically, extended exclusive breastfeeding duration selectively enhanced looking preference to happy eyes and decreased looking to angry eyes. Importantly, however, this interaction was impacted by CD38 variation, such that only the looking preferences of infants homozygous for the C allele of rs3796863 were affected by breastfeeding experience. This genotype has been associated with reduced release of oxytocin and higher rates of autism. In contrast, infants with the CA/AA genotype showed similar looking preferences regardless of breastfeeding exposure. Thus, differences in the sensitivity to emotional eyes may be linked to an interaction between the endogenous (CD38) and exogenous (breastfeeding) availability of oxytocin. These findings underline the importance of maternal care and the oxytocin system in contributing to the early development of responding to social eye cues.


Subject(s)
ADP-ribosyl Cyclase 1/genetics , Attention , Autistic Disorder/genetics , Breast Feeding/psychology , Emotions/physiology , Genetic Variation , Membrane Glycoproteins/genetics , Nonverbal Communication/physiology , Fixation, Ocular , Humans , Infant , Nonverbal Communication/psychology , Oxytocin , Photic Stimulation , Surveys and Questionnaires
15.
Cogn Emot ; 32(2): 286-302, 2018 03.
Article in English | MEDLINE | ID: mdl-28415957

ABSTRACT

Happiness can be expressed through smiles. Happiness can also be expressed through physical displays that, without context, would appear to be sadness (tears, downward-turned mouths, and crumpled body postures) and anger (clenched jaws, snarled lips, furrowed brows, and pumped fists). We propose that these seemingly incongruent displays of happiness, termed dimorphous expressions, represent and communicate expressers' motivational orientations. When participants reported their own aggressive expressions in positive or negative contexts, their expressions represented positive or negative emotional experiences respectively, imbued with appetitive orientations (feelings of wanting to go). In contrast, reported sad expressions, in positive or negative contexts, represented positive and negative emotional experiences respectively, imbued with consummatory orientations (feelings of wanting to pause). In six additional experiments, participant observers interpreted that aggression displayed in positive contexts signalled happy-appetitive states, and sadness displayed in positive contexts signalled happy-consummatory states. Implications for the production and interpretation of emotion expressions are discussed.


Subject(s)
Crying/psychology , Facial Expression , Happiness , Motivation/physiology , Adolescent , Adult , Aged , Aggression , Female , Humans , Male , Middle Aged , Nonverbal Communication/physiology , Nonverbal Communication/psychology , Smiling/psychology , Young Adult
16.
Neuroimage ; 157: 263-274, 2017 08 15.
Article in English | MEDLINE | ID: mdl-28610901

ABSTRACT

Social interaction is a fundamental part of our daily lives; however, exactly how our brains use social cues to determine whether to cooperate without being exploited remains unclear. In this study, we used an electroencephalography (EEG) hyperscanning approach to investigate the effect of face-to-face contact on the brain mechanisms underlying the decision to cooperate or defect in an iterated version of the Prisoner's Dilemma Game. Participants played the game either in face-to-face or face-blocked conditions. The face-to-face interaction led players to cooperate more often, providing behavioral evidence for the use of these nonverbal cues in their social decision-making. In addition, the EEG hyperscanning identified temporal dynamics and inter-brain synchronization across the cortex, providing evidence for involvement of these regions in the processing of face-to-face cues to read each other's intent to cooperate. Most notably, the power of the alpha frequency band (8-13 Hz) in the right temporoparietal region immediately after seeing a round outcome significantly differed between face-to-face and face-blocked conditions and predicted whether an individual would adopt a 'cooperation' or 'defection' strategy. Moreover, inter-brain synchronies within this time and frequency range reflected the use of these strategies. This study provides evidence for how the cortex uses nonverbal social cues to determine others' intentions, and highlights the significance of power in the alpha band and inter-brain phase synchronizations in high-level socio-cognitive processing.
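As a generic illustration of the alpha-band (8-13 Hz) power measure the abstract highlights, the sketch below estimates band power for a single EEG channel with Welch's method. The sampling rate, segment length, and simulated signal are assumptions; the study's hyperscanning pipeline is not reproduced here.

import numpy as np
from scipy.signal import welch

def alpha_power(eeg, fs=500.0, band=(8.0, 13.0)):
    # Mean power spectral density within the alpha band for one EEG channel
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))   # 2-s segments
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].mean())

# Example: 10 s of simulated EEG with an embedded 10 Hz rhythm
fs = 500.0
t = np.arange(0, 10, 1 / fs)
eeg = 0.5 * np.sin(2 * np.pi * 10 * t) + np.random.randn(t.size)
print(alpha_power(eeg, fs))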


Subject(s)
Brain/physiology , Cerebral Cortex/physiology , Cooperative Behavior , Decision Making/physiology , Interpersonal Relations , Nonverbal Communication/physiology , Adult , Cues , Humans , Male , Prisoner Dilemma , Young Adult
17.
Epilepsy Behav ; 66: 57-63, 2017 01.
Article in English | MEDLINE | ID: mdl-28033547

ABSTRACT

Women outperform men in a host of episodic memory tasks, yet the neuroanatomical basis for this effect is unclear. It has been suggested that the anterior temporal lobe might be especially relevant for sex differences in memory. In the current study, we investigated whether temporal lobe epilepsy (TLE) has an influence on sex effects in learning and memory and whether women and men with TLE differ in their risk for memory deficits after epilepsy surgery. 177 patients (53 women and 41 men with left TLE, 42 women and 41 men with right TLE) were neuropsychologically tested before and one year after temporal lobe resection. We found that women with TLE had better verbal, but not figural, memory than men with TLE. The female advantage in verbal memory was not affected by temporal lobe resection. The same pattern of results was found in a more homogeneous subsample of 84 patients with only hippocampal sclerosis who were seizure-free after surgery. Our findings challenge the concept that the anterior temporal lobe plays a central role in the verbal memory advantage for women.


Subject(s)
Epilepsy, Temporal Lobe/psychology , Epilepsy, Temporal Lobe/surgery , Sex Characteristics , Temporal Lobe/physiology , Temporal Lobe/surgery , Verbal Learning/physiology , Adult , Epilepsy, Temporal Lobe/physiopathology , Female , Humans , Male , Memory Disorders/physiopathology , Memory Disorders/psychology , Middle Aged , Neuropsychological Tests , Nonverbal Communication/physiology , Nonverbal Communication/psychology , Retrospective Studies , Young Adult
18.
Conscious Cogn ; 51: 268-278, 2017 May.
Article in English | MEDLINE | ID: mdl-28433857

ABSTRACT

Joint attention (JA) is hypothesized to have a close relationship with developing theory of mind (ToM) capabilities. We tested the co-occurrence of ToM and JA in social interactions between adults with no reported history of psychiatric illness or neurodevelopmental disorders. Participants engaged in an experimental task that encouraged nonverbal communication, including JA, and also ToM activity. We adapted an in-lab variant of experience sampling methods (Bryant et al., 2013) to measure ToM during JA based on participants' subjective reports of their thoughts while performing the task. This experiment successfully elicited instances of JA in 17/20 dyads. We compared participants' thought contents during episodes of JA and non-JA. Our results suggest that, in adults, JA and ToM may occur independently.


Subject(s)
Attention/physiology , Interpersonal Relations , Nonverbal Communication/physiology , Social Perception , Theory of Mind/physiology , Adolescent , Adult , Ecological Momentary Assessment , Humans , Young Adult
19.
Chem Senses ; 41(1): 35-43, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26453051

ABSTRACT

The ability to detect conspecifics that pose a potential threat to an individual confers a high survival benefit. Humans communicate socially relevant information using all sensory modalities, including the chemosensory systems. In study 1, we investigated whether the body odor of a stranger with the intention to harm serves as a chemosignal of aggression. Sixteen healthy male participants donated their body odor while engaging in a boxing session characterized by aggression-induction methods (chemosignal of aggression) and while performing an ergometer session (exercise chemosignal). Self-reports on aggression-related physical activity, motivation to harm, and angry emotions selectively increased after aggression induction. In study 2, we examined whether receivers smelling such chemosignals experience emotional contagion (e.g., anger) or emotional reciprocity (e.g., anxiety). The aggression and exercise chemosignals were therefore presented to 22 healthy normosmic participants in a double-blind, randomized exposure during which affective/cognitive processing was examined (i.e., an emotion recognition task and an emotional Stroop task). Behavioral results indicate that chemosignals of aggression induce an affective/cognitive modulation compatible with an anxiety reaction in the recipients. These findings are discussed in light of mechanisms of emotional reciprocity as a way to convey not only affective but also motivational information via chemosensory signals in humans.


Subject(s)
Aggression/physiology , Nonverbal Communication/physiology , Odorants , Pheromones, Human/physiology , Smell/physiology , Adult , Anger/physiology , Anxiety/psychology , Double-Blind Method , Exercise/physiology , Female , Humans , Male , Motivation
20.
Annu Rev Psychol ; 66: 689-710, 2015 Jan 03.
Article in English | MEDLINE | ID: mdl-25251493

ABSTRACT

Human infants are involved in communicative interactions with others well before they start to speak or understand language. It is generally thought that this communication is useful for establishing interpersonal relations and supporting joint activities, but, in the absence of symbolic functions that language provides, these early communicative contexts do not allow infants to learn about the world. However, recent studies suggest that when someone demonstrates something using an object as the medium of instruction, infants can conceive the object as an exemplar of the whole class of objects of the same kind. Thus, an object, just like a word, can play the role of a symbol that stands for something else than itself, and infants can learn general knowledge about a kind of object from nonverbal communication about a single item of that kind. This rudimentary symbolic capacity may be one of the roots of the development of symbolic understanding in children.


Subject(s)
Child Development/physiology , Concept Formation/physiology , Learning/physiology , Nonverbal Communication/physiology , Humans , Infant