Results 1 - 20 of 44
1.
iScience ; 27(6): 110070, 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38947497

ABSTRACT

We sought to replicate and expand previous work showing that the more human-like a robot appears, the more willing people are to attribute mind-like capabilities and socially engage with it. Forty-two participants played games against a human, a humanoid robot, a mechanoid robot, and a computer algorithm while undergoing functional neuroimaging. We confirmed that the more human-like the agent, the more participants attributed a mind to it. However, exploratory analyses revealed that the perceived socialness of an agent appeared to be as important for mind attribution, if not more so. Our findings suggest that top-down knowledge cues may be as influential as, or possibly more influential than, bottom-up stimulus cues in mind attribution to non-human agents. While further work is required to test this hypothesis directly, these preliminary findings hold important implications for robot design and for understanding and testing the flexibility of human social cognition when people engage with artificial agents.

2.
Dev Sci ; 27(4): e13492, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38553823

ABSTRACT

This paper presents rational inattention as a new, transdiagnostic theory of information seeking in neurodevelopmental conditions that have uneven cognitive and socio-emotional profiles, including developmental language disorder (DLD), dyslexia, dyscalculia and autism. Rational inattention holds that the optimal solution to minimizing epistemic uncertainty is to avoid imprecise information sources. The key theoretical contribution of this report is to endogenize imprecision, making it a function of the primary neurocognitive difficulties that have been invoked to explain neurodivergent phenotypes, including deficits in auditory perception, working memory, procedural learning and the social brain network. We argue that disengagement with information sources with low endogenous precision (e.g. speech in DLD, orthography-phonology mappings in dyslexia, numeric stimuli in dyscalculia and social signals in autism) constitutes resource-rational behaviour. We demonstrate the strength of this account in a series of computational simulations. In experiment 1, we simulate information seeking in artificial agents mimicking an array of neurodivergent phenotypes, which optimally explore a complex learning environment containing speech, text, numeric stimuli and social cues. In experiment 2, we simulate optimal information seeking in a cross-modal dual-task paradigm and qualitatively replicate empirical data from children with and without DLD. Across experiments, simulated agents' only aim was to maximally reduce epistemic uncertainty, with no difference in reward across information sources. We show that rational inattention emerges naturally in specific neurodivergent phenotypes as a function of low endogenous precision. For instance, an agent mimicking the DLD phenotype disengages with speech (and preferentially engages with alternative precise information sources) because endogenous imprecision renders speech not conducive to information gain. Because engagement is necessary for learning, simulation demonstrates how optimal information seeking may paradoxically contribute negatively to an already delayed learning trajectory in neurodivergent children.

RESEARCH HIGHLIGHTS:
We present the first comprehensive theory of information seeking in neurodivergent children to date, centred on the notion of rational inattention.
We demonstrate the strength of this account in a series of computational simulations involving artificial agents mimicking specific neurodivergent phenotypes that optimally explore a complex learning environment containing speech, text, numeric stimuli, and social cues.
We show how optimal information seeking may, paradoxically, contribute negatively to an already delayed learning trajectory in neurodivergent children.
This report advances our understanding of the factors shaping short-term decision making and long-term learning in neurodivergent children.
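The core mechanism described here, greedy reduction of epistemic uncertainty with endogenously scaled precision, can be illustrated with a toy simulation. The sketch below is not the authors' model; the Gaussian information-gain formulation, the precision values, and the source names are assumptions made purely for illustration.

```python
# Toy sketch (not the paper's model): an agent greedily samples whichever
# information source offers the largest expected information gain, where each
# source's effective precision is scaled by the agent's endogenous precision.
import numpy as np

SOURCES = ["speech", "text", "number", "social"]

def info_gain(prior_var, precision):
    """Entropy reduction (nats) from one Gaussian observation with the given precision."""
    post_var = 1.0 / (1.0 / prior_var + precision)
    return 0.5 * np.log(prior_var / post_var), post_var

def simulate(endogenous_precision, n_steps=200, base_precision=1.0):
    prior_var = {s: 1.0 for s in SOURCES}   # equally uncertain about every source at the start
    visits = {s: 0 for s in SOURCES}
    for _ in range(n_steps):
        gains = {s: info_gain(prior_var[s], base_precision * endogenous_precision[s])[0]
                 for s in SOURCES}
        best = max(gains, key=gains.get)     # source with the highest expected gain
        _, prior_var[best] = info_gain(prior_var[best], base_precision * endogenous_precision[best])
        visits[best] += 1
    return visits

# Agent mimicking a DLD-like phenotype: low endogenous precision for speech only.
print(simulate({"speech": 0.05, "text": 1.0, "number": 1.0, "social": 1.0}))
# Speech is sampled far less than the other sources: "rational inattention"
# emerges even though the agent's only goal is to reduce uncertainty.
```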


Subject(s)
Attention , Humans , Attention/physiology , Information Seeking Behavior/physiology , Learning/physiology , Language Development Disorders/physiopathology , Computer Simulation , Cognition/physiology
3.
Curr Biol ; 34(2): 343-351.e5, 2024 01 22.
Article in English | MEDLINE | ID: mdl-38181794

ABSTRACT

Navigating our complex social world requires processing the interactions we observe. Recent psychophysical and neuroimaging studies provide parallel evidence that the human visual system may be attuned to efficiently perceive dyadic interactions. This work implies, but has not yet demonstrated, that activity in body-selective cortical regions causally supports efficient visual perception of interactions. We adopt a multi-method approach to close this important gap. First, using a large fMRI dataset (n = 92), we found that the left hemisphere extrastriate body area (EBA) responds more to face-to-face than non-facing dyads. Second, we replicated a behavioral marker of visual sensitivity to interactions: categorization of facing dyads is more impaired by inversion than categorization of non-facing dyads. Third, in a pre-registered experiment, we used fMRI-guided transcranial magnetic stimulation to show that online stimulation of the left EBA, but not a nearby control region, abolishes this selective inversion effect. Activity in left EBA thus causally supports the efficient perception of social interactions.


Subject(s)
Visual Cortex , Humans , Visual Cortex/physiology , Social Interaction , Photic Stimulation , Transcranial Magnetic Stimulation , Visual Perception/physiology , Magnetic Resonance Imaging , Brain Mapping
4.
Imaging Neurosci (Camb) ; 1: 1-20, 2023 Aug 01.
Article in English | MEDLINE | ID: mdl-37719835

ABSTRACT

Human interactions contain potent social cues that meet not only the eye but also the ear. Although research has identified a region in the posterior superior temporal sulcus as being particularly sensitive to visually presented social interactions (SI-pSTS), its response to auditory interactions has not been tested. Here, we used fMRI to explore brain response to auditory interactions, with a focus on temporal regions known to be important in auditory processing and social interaction perception. In Experiment 1, monolingual participants listened to two-speaker conversations (intact or sentence-scrambled) and one-speaker narrations in both a known and an unknown language. Speaker number and conversational coherence were explored in separately localised regions-of-interest (ROI). In Experiment 2, bilingual participants were scanned to explore the role of language comprehension. Combining univariate and multivariate analyses, we found initial evidence for a heteromodal response to social interactions in SI-pSTS. Specifically, right SI-pSTS preferred auditory interactions over control stimuli and represented information about both speaker number and interactive coherence. Bilateral temporal voice areas (TVA) showed a similar, but less specific, profile. Exploratory analyses identified another auditory-interaction sensitive area in anterior STS. Indeed, direct comparison suggests modality specific tuning, with SI-pSTS preferring visual information while aSTS prefers auditory information. Altogether, these results suggest that right SI-pSTS is a heteromodal region that represents information about social interactions in both visual and auditory domains. Future work is needed to clarify the roles of TVA and aSTS in auditory interaction perception and further probe right SI-pSTS interaction-selectivity using non-semantic prosodic cues.

5.
J Neurosci ; 43(20): 3666-3674, 2023 05 17.
Article in English | MEDLINE | ID: mdl-36963845

ABSTRACT

Rapidly recognizing and understanding others' social interactions is an important ability that relies on deciphering multiple sources of information, for example, perceiving body information and inferring others' intentions. Despite recent advances in characterizing the brain basis of this ability in adults, its developmental underpinnings are virtually unknown. Here, we used fMRI to investigate which sources of social information support superior temporal sulcus responses to interactive biological motion (i.e., 2 interacting point-light human figures) at different developmental intervals in human participants (of either sex): Children show supportive functional connectivity with key nodes of the mentalizing network, while adults show stronger reliance on regions associated with body- and dynamic social interaction/biological motion processing. We suggest that adults use efficient action-intention understanding via body and biological motion information, while children show a stronger reliance on hidden mental state inferences as a potential means of learning to better understand others' interactive behavior.

SIGNIFICANCE STATEMENT
Recognizing others' interactive behavior is a critical human skill that depends on different sources of social information (e.g., observable body-action information, inferring others' hidden mental states, etc.). Understanding the brain basis of this ability and characterizing how it emerges across development are important goals in social neuroscience. Here, we used fMRI to investigate which sources of social information support interactive biological motion processing in children (6-12 years) and adults. These results reveal a striking developmental difference in terms of how wider-brain connectivity shapes functional responses to interactive biological motion that suggests a reliance on distinct neuro-cognitive strategies in service of interaction understanding (i.e., children and adults show a greater reliance on explicit and implicit intentional inference, respectively).


Subject(s)
Brain , Temporal Lobe , Adult , Child , Humans , Brain/diagnostic imaging , Brain/physiology , Temporal Lobe/physiology , Intention , Brain Mapping/methods , Magnetic Resonance Imaging
6.
Q J Exp Psychol (Hove) ; 76(10): 2303-2311, 2023 Oct.
Article in English | MEDLINE | ID: mdl-36377819

ABSTRACT

Human visual attention is readily captured by the social information in scenes. Multiple studies have shown that social areas of interest (AOIs) such as faces and bodies attract more attention than non-social AOIs (e.g., objects or background). However, whether this attentional bias is moderated by the presence (or absence) of a social interaction remains unclear. Here, the gaze of 70 young adults was tracked during the free viewing of 60 naturalistic scenes. All photographs depicted two people, who were either interacting or not. Analyses of dwell time revealed that more attention was spent on human than background AOIs in the interactive pictures. In non-interactive pictures, however, dwell time did not differ between AOI types. In the time-to-first-fixation analysis, humans always captured attention before other elements of the scene, although this difference was slightly larger in interactive than non-interactive scenes. These findings confirm the existence of a bias towards social information in attentional capture and suggest that our attention values social interactions beyond the mere presence of two people.
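As a concrete illustration of the dwell-time analysis described above (not the authors' code), the sketch below assumes a long-format fixation table with hypothetical columns participant, scene_type ('interactive'/'non-interactive'), aoi ('human'/'background'), and dwell_ms.

```python
# Hedged sketch of the dwell-time comparison; the file and column names are assumptions.
import pandas as pd
from scipy.stats import ttest_rel

fix = pd.read_csv("fixations.csv")   # hypothetical per-fixation export from the eye tracker

# Total dwell time per participant in each scene-type x AOI cell.
cell = fix.groupby(["participant", "scene_type", "aoi"], as_index=False)["dwell_ms"].sum()
print(cell.groupby(["scene_type", "aoi"])["dwell_ms"].mean())   # condition means

# Paired comparison within interactive scenes: human vs. background AOIs.
wide = cell.pivot_table(index=["participant", "scene_type"],
                        columns="aoi", values="dwell_ms").reset_index()
interactive = wide[wide["scene_type"] == "interactive"]
print(ttest_rel(interactive["human"], interactive["background"]))
```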


Subject(s)
Attentional Bias , Social Interaction , Young Adult , Humans , Fixation, Ocular
7.
Neuroimage ; 262: 119533, 2022 11 15.
Article in English | MEDLINE | ID: mdl-35931309

ABSTRACT

Humans are an inherently social species, with multiple focal brain regions sensitive to various visual social cues such as faces, bodies, and biological motion. More recently, research has begun to investigate how the brain responds to more complex, naturalistic social scenes, identifying a region in the posterior superior temporal sulcus (SI-pSTS; i.e., social interaction pSTS), amongst others, as an important region for processing social interaction. This research, however, has presented images or videos, and thus the contribution of motion to social interaction perception in these brain regions is not yet understood. In the current study, 22 participants viewed videos, image sequences, scrambled image sequences and static images of either social interactions or non-social independent actions. Combining univariate and multivariate analyses, we confirm that bilateral SI-pSTS plays a central role in dynamic social interaction perception but is much less involved when 'interactiveness' is conveyed solely with static cues. Regions in the social brain, including SI-pSTS and extrastriate body area (EBA), showed sensitivity to both motion and interactive content. While SI-pSTS is somewhat more tuned to video interactions than is EBA, both bilateral SI-pSTS and EBA showed a greater response to social interactions compared to non-interactions and both regions responded more strongly to videos than static images. Indeed, both regions showed higher responses to interactions than independent actions in videos and intact sequences, but not in other conditions. Exploratory multivariate regression analyses suggest that selectivity for simple visual motion does not in itself drive interactive sensitivity in either SI-pSTS or EBA. Rather, selectivity for interactions expressed in point-light animations, and selectivity for static images of bodies, make positive and independent contributions to this effect across the LOTC region. Our results strongly suggest that EBA and SI-pSTS work together during dynamic interaction perception, at least when interactive information is conveyed primarily via body information. As such, our results are also in line with proposals of a third visual stream supporting dynamic social scene perception.
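The exploratory regression logic, predicting each unit's interaction effect from its selectivity for simple motion, point-light interactions, and static bodies, can be sketched as below. This is a schematic reconstruction with simulated placeholder data, not the study's pipeline.

```python
# Schematic multivariate regression sketch (simulated placeholders, not the study's data):
# does interaction sensitivity across LOTC units go beyond simple motion selectivity?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_units = 500                                   # e.g. voxels or parcels across LOTC
motion_sel = rng.normal(size=n_units)           # selectivity for simple visual motion
pointlight_sel = rng.normal(size=n_units)       # selectivity for point-light interactions
body_sel = rng.normal(size=n_units)             # selectivity for static bodies
# Simulated outcome with independent contributions of the latter two predictors only.
interaction_effect = 0.5 * pointlight_sel + 0.4 * body_sel + rng.normal(scale=0.5, size=n_units)

X = sm.add_constant(np.column_stack([motion_sel, pointlight_sel, body_sel]))
fit = sm.OLS(interaction_effect, X).fit()
print(fit.summary(xname=["const", "motion", "pointlight", "body"]))
# Positive, independent coefficients for 'pointlight' and 'body' but not 'motion'
# would correspond to the pattern the abstract reports.
```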


Subject(s)
Brain Mapping , Motion Perception , Brain Mapping/methods , Humans , Magnetic Resonance Imaging/methods , Motion , Motion Perception/physiology , Social Interaction , Temporal Lobe/diagnostic imaging , Temporal Lobe/physiology
8.
Neuroimage ; 245: 118702, 2021 12 15.
Article in English | MEDLINE | ID: mdl-34742940

ABSTRACT

The contribution and neural basis of cognitive control is under-specified in many prominent models of socio-cognitive processing. Important outstanding questions include whether there are multiple, distinguishable systems underpinning control and whether control is ubiquitously or selectively engaged across different social behaviours and task demands. Recently, it has been proposed that the regulation of social behaviours could rely on brain regions specialised in the controlled retrieval of semantic information, namely the anterior inferior frontal gyrus (IFG) and posterior middle temporal gyrus. Accordingly, we investigated for the first time whether the neural activation commonly found in social functional neuroimaging studies extends to these 'semantic control' regions. We conducted five coordinate-based meta-analyses to combine results of 499 fMRI/PET experiments and identified the brain regions consistently involved in semantic control, as well as four social abilities: theory of mind, trait inference, empathy and moral reasoning. This allowed an unprecedented parallel review of the neural networks associated with each of these cognitive domains. The results confirmed that the anterior left IFG region involved in semantic control is reliably engaged in all four social domains. This supports the hypothesis that social cognition is partly regulated by the neurocognitive system underpinning semantic control.
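A coordinate-based meta-analysis of this kind can be sketched, in highly simplified form, as an activation-likelihood-style convergence map: each experiment's reported peaks are blurred with a Gaussian kernel and the per-experiment maps are combined. The coordinates, grid, and kernel width below are made up, and real ALE additionally models per-study sample sizes and uses permutation-based inference.

```python
# Toy sketch of a coordinate-based meta-analysis in the spirit of ALE (simplified).
import numpy as np

GRID = np.stack(np.meshgrid(np.arange(-90, 91, 4),
                            np.arange(-126, 91, 4),
                            np.arange(-72, 109, 4), indexing="ij"), axis=-1)  # MNI-ish grid, 4 mm

def modeled_activation(peaks_mm, fwhm=12.0):
    """Per-experiment map: max over Gaussian blobs centred on its reported peaks."""
    sigma = fwhm / 2.355
    dists = np.linalg.norm(GRID[..., None, :] - np.asarray(peaks_mm), axis=-1)
    return np.exp(-(dists ** 2) / (2 * sigma ** 2)).max(axis=-1)

def ale_map(experiments):
    """Combine experiments as a union of 'activation probabilities': 1 - prod(1 - MA_i)."""
    ma = np.stack([modeled_activation(p) for p in experiments])
    return 1.0 - np.prod(1.0 - ma, axis=0)

# Two hypothetical experiments reporting peaks near left anterior IFG.
ale = ale_map([[(-48, 24, -2)], [(-50, 20, 2), (-44, 28, -6)]])
peak_idx = np.unravel_index(ale.argmax(), ale.shape)
print("convergence peak (mm):", GRID[peak_idx])
```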


Subject(s)
Cognition/physiology , Frontal Lobe/diagnostic imaging , Functional Neuroimaging , Magnetic Resonance Imaging , Positron-Emission Tomography , Semantics , Social Behavior , Temporal Lobe/diagnostic imaging , Datasets as Topic , Frontal Lobe/physiology , Humans , Temporal Lobe/physiology
9.
Hum Brain Mapp ; 42(13): 4224-4241, 2021 09.
Article in English | MEDLINE | ID: mdl-34196439

ABSTRACT

The process of understanding the minds of other people, such as their emotions and intentions, is mimicked when individuals try to understand an artificial mind. The assumption is that anthropomorphism, attributing human-like characteristics to non-human agents and objects, is an analogue to theory-of-mind, the ability to infer mental states of other people. Here, we test to what extent these two constructs formally overlap. Specifically, using a multi-method approach, we test if and how anthropomorphism is related to theory-of-mind using brain (Experiment 1) and behavioural (Experiment 2) measures. In a first exploratory experiment, we examine the relationship between dispositional anthropomorphism and activity within the theory-of-mind brain network (n = 108). Results from a Bayesian regression analysis showed no consistent relationship between dispositional anthropomorphism and activity in regions of the theory-of-mind network. In a follow-up, pre-registered experiment, we explored the relationship between theory-of-mind and situational and dispositional anthropomorphism in more depth. Participants (n = 311) watched a short movie while simultaneously completing situational anthropomorphism and theory-of-mind ratings, as well as measures of dispositional anthropomorphism and general theory-of-mind. Only situational anthropomorphism predicted the ability to understand and predict the behaviour of the film's characters. No relationship between situational or dispositional anthropomorphism and general theory-of-mind was observed. Together, these results suggest that while the constructs of anthropomorphism and theory-of-mind might overlap in certain situations, they remain separate and possibly unrelated at the personality level. These findings point to a possible dissociation between brain and behavioural measures when considering the relationship between theory-of-mind and anthropomorphism.
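A rough sketch of Experiment 1's Bayesian regression, relating a dispositional anthropomorphism score to mean ToM-network activity, is shown below using the PyMC library. The data are simulated and the priors and variable names are assumptions, not the authors' analysis specification.

```python
# Minimal Bayesian regression sketch (simulated data, assumed priors; not the authors' code).
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n = 108
anthro = rng.normal(size=n)          # standardised dispositional anthropomorphism score (simulated)
tom_activity = rng.normal(size=n)    # mean ToM-network ROI betas (simulated; no true effect)

with pm.Model():
    alpha = pm.Normal("alpha", 0.0, 1.0)
    beta = pm.Normal("beta", 0.0, 1.0)          # slope of interest
    sigma = pm.HalfNormal("sigma", 1.0)
    pm.Normal("y", mu=alpha + beta * anthro, sigma=sigma, observed=tom_activity)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

# A posterior for beta concentrated around zero would mirror the reported
# "no consistent relationship" result.
print(idata.posterior["beta"].mean().item())
```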


Subject(s)
Brain Mapping/methods , Brain/physiology , Nerve Net/physiology , Social Perception , Theory of Mind/physiology , Thinking/physiology , Adolescent , Adult , Brain/diagnostic imaging , Female , Humans , Magnetic Resonance Imaging/methods , Male , Nerve Net/diagnostic imaging , Young Adult
10.
Dev Cogn Neurosci ; 44: 100803, 2020 08.
Article in English | MEDLINE | ID: mdl-32716852

ABSTRACT

Head motion remains a challenging confound in functional magnetic resonance imaging (fMRI) studies of both children and adults. Most pediatric neuroimaging labs have developed experience-based, child-friendly standards concerning, for example, the maximum length of a session or the time between mock scanner training and actual scanning. However, it is unclear which factors of child-friendly neuroimaging approaches are effective in reducing head motion. Here, we investigate three main factors, (i) the time lag between mock scanner training and the actual scan, (ii) prior scan time, and (iii) task engagement, in a dataset of 77 children (aged 6-13) and 64 adults (aged 18-35) using a multilevel modeling approach. In children, distributing fMRI data acquisition across multiple same-day sessions reduces head motion. In adults, motion is reduced after inside-scanner breaks. Despite these positive effects of splitting up data acquisition, motion increases over the course of a study as well as over the course of a run in both children and adults. Our results suggest that splitting up fMRI data acquisition is an effective tool to reduce head motion in general. At the same time, children and adults benefit from different ways of splitting up data acquisition.
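A minimal version of the multilevel analysis might look like the sketch below; the file and column names are hypothetical stand-ins for the study's variables, not its actual dataset.

```python
# Illustrative mixed-effects sketch of a head-motion analysis: framewise displacement
# modelled as a function of session split, run order and time in scanner, crossed with
# age group, with a random intercept per participant. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("motion_per_run.csv")   # hypothetical file: one row per run
# Expected columns: fd_mean, session_index, run_index, minutes_in_scanner,
# age_group ('child'/'adult'), subject

model = smf.mixedlm(
    "fd_mean ~ session_index + run_index + minutes_in_scanner * age_group",
    data=df,
    groups=df["subject"],        # random intercept for each participant
)
result = model.fit()
print(result.summary())
```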


Subject(s)
Head/growth & development , Magnetic Resonance Imaging/methods , Neuroimaging/methods , Adolescent , Adult , Child , Female , Humans , Male , Young Adult
11.
Dev Cogn Neurosci ; 42: 100774, 2020 04.
Article in English | MEDLINE | ID: mdl-32452460

ABSTRACT

Recent evidence demonstrates that a region of the posterior superior temporal sulcus (pSTS) is selective to visually observed social interactions in adults. In contrast, little is known about neural responses to social interactions in children. Here, we used fMRI to ask whether the pSTS is 'tuned' to social interactions in children at all, and if so, how selectivity might differ from adults. This was investigated in the pSTS, along with several other socially-tuned regions in neighbouring temporal cortex: extrastriate body area, face selective STS, fusiform face area, and mentalizing selective temporo-parietal junction. Both children and adults showed selectivity to social interaction within right pSTS, while only adults showed selectivity on the left. Adults also showed both more focal and greater selectivity than children (6-12 years) bilaterally. Exploratory sub-group analyses showed that younger children (6-8), but not older children (9-12), are less selective than adults on the right, while there was a continuous developmental trend (adults > older > younger) in left pSTS. These results suggest that, over development, the neural response to social interactions is characterized by increasingly more selective, focal, and bilateral pSTS responses, a process that likely continues into adolescence.


Subject(s)
Interpersonal Relations , Child , Female , Humans , Male , Visual Acuity
12.
Neuroimage ; 198: 296-302, 2019 09.
Article in English | MEDLINE | ID: mdl-31100434

ABSTRACT

Recent behavioural evidence shows that visual displays of two individuals interacting are not simply encoded as separate individuals, but as an interactive unit that is 'more than the sum of its parts'. Recent functional magnetic resonance imaging (fMRI) evidence shows the importance of the posterior superior temporal sulcus (pSTS) in processing human social interactions, and suggests that it may represent human-object interactions as qualitatively 'greater' than the average of their constituent parts. The current study aimed to investigate whether the pSTS or other posterior temporal lobe region(s): 1) demonstrated evidence of a dyadic information effect, that is, qualitatively different responses to an interacting dyad than to the averaged responses of the same two interactors presented in isolation; and 2) significantly differentiated between different types of social interactions. Multivoxel pattern analysis was performed in which a classifier was trained to differentiate between qualitatively different types of dyadic interactions. Above-chance classification of interactions was observed in 'interaction selective' pSTS-I and the extrastriate body area (EBA), but not in other regions of interest (i.e. face-selective STS and mentalizing-selective temporo-parietal junction). A dyadic information effect was not observed in the pSTS-I, but instead was shown in the EBA; that is, classification of dyadic interactions did not fully generalise to averaged responses to the isolated interactors, indicating that dyadic representations in the EBA contain unique information that cannot be recovered from the interactors presented in isolation. These findings complement previous observations of congruent grouping of human bodies and objects in the broader lateral occipital temporal cortex area.
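The two tests described above, decoding interaction type and checking whether dyad-trained classifiers generalise to averaged single-interactor responses, can be sketched as follows. This is a schematic reconstruction with random placeholder arrays, not the study's pipeline.

```python
# Schematic MVPA sketch: linear SVM decoding of interaction type from ROI voxel
# patterns, plus a cross-generalisation test to averaged isolated-interactor patterns.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200
X_dyad = rng.normal(size=(n_trials, n_voxels))        # patterns for intact dyads (placeholder)
y = rng.integers(0, 2, n_trials)                      # interaction-type label
X_a = rng.normal(size=(n_trials, n_voxels))           # interactor A alone (placeholder)
X_b = rng.normal(size=(n_trials, n_voxels))           # interactor B alone (placeholder)
X_avg = (X_a + X_b) / 2                               # averaged isolated interactors

clf = SVC(kernel="linear")

# Within-condition decoding of interaction type from dyad patterns.
print("dyad decoding:", cross_val_score(clf, X_dyad, y, cv=5).mean())

# Cross-generalisation test: train on dyads, test on averaged interactors.
clf.fit(X_dyad, y)
print("dyad -> average generalisation:", clf.score(X_avg, y))
# A gap between the two scores is the 'dyadic information' signature: the dyad
# pattern carries information not recoverable from the parts in isolation.
```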


Subject(s)
Interpersonal Relations , Occipital Lobe/physiology , Social Perception , Temporal Lobe/physiology , Adolescent , Adult , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Pattern Recognition, Visual/physiology , Support Vector Machine , Young Adult
13.
Neuroimage ; 197: 565-574, 2019 08 15.
Article in English | MEDLINE | ID: mdl-31077844

ABSTRACT

Many studies have investigated the development of face-, scene-, and body-selective regions in the ventral visual pathway. This work has primarily focused on comparing the size and univariate selectivity of these neural regions in children versus adults. In contrast, very few studies have investigated the developmental trajectory of more distributed activation patterns within and across neural regions. Here, we scanned both children (ages 5-7) and adults to test the hypothesis that distributed representational patterns arise before category selectivity (for faces, bodies, or scenes) in the ventral pathway. Consistent with this hypothesis, we found mature representational patterns in several ventral pathway regions (e.g., FFA, PPA, etc.), even in children who showed no hint of univariate selectivity. These results suggest that representational patterns emerge first in each region, perhaps forming a scaffold upon which univariate category selectivity can subsequently develop. More generally, our findings demonstrate an important dissociation between category selectivity and distributed response patterns, and raise questions about the relative roles of each in development and adult cognition.
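The dissociation at the heart of this study, an adult-like distributed pattern without univariate selectivity, can be illustrated with a toy example; the numbers below are simulated, not the study's data.

```python
# Toy illustration (simulated data): a child ROI with no univariate face selectivity
# can still carry an adult-like distributed response pattern.
import numpy as np

rng = np.random.default_rng(1)
n_vox = 300
# Adult group-average face-minus-object responses: positive mean = univariate selectivity.
adult = rng.normal(loc=1.0, scale=1.0, size=n_vox)
# Simulated child ROI: same spatial pattern, demeaned (no net preference), plus noise.
child = (adult - adult.mean()) + rng.normal(scale=0.5, size=n_vox)

print(f"child univariate selectivity: {child.mean():.3f}")                       # ~0
print(f"child-adult pattern correlation: {np.corrcoef(child, adult)[0, 1]:.2f}")  # high
```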


Subject(s)
Child Development/physiology , Pattern Recognition, Visual/physiology , Visual Pathways , Adult , Child , Child, Preschool , Female , Humans , Magnetic Resonance Imaging , Male , Visual Pathways/growth & development , Visual Pathways/physiology
14.
PLoS One ; 14(1): e0198867, 2019.
Article in English | MEDLINE | ID: mdl-30673693

ABSTRACT

Imitation and perspective taking are core features of non-verbal social interactions. We imitate one another to signal a desire to affiliate and consider others' points of view to better understand their perspective. Prior research suggests that a relationship exists between prosocial behaviour and imitation. For example, priming prosocial behaviours has been shown to increase imitative tendencies in automatic imitation tasks. Despite its importance during social interactions, far less is known about how perspective taking might relate to either prosociality or imitation. The current study investigates the relationship between automatic imitation and perspective taking by testing the extent to which these skills are similarly modulated by prosocial priming. Across all experimental groups, a surprising ceiling effect emerged in the perspective taking task (the Director's Task), which prevented the investigation of prosocial priming on perspective taking. A comparison of other studies using the Director's Task shows wide variability in accuracy scores across studies and is suggestive of low task reliability. In addition, despite using a high-power design, and contrary to three previous studies, no effect of prosocial prime on imitation was observed. Meta-analysing all studies to date suggests that the effects of prosocial primes on imitation are variable and could be small. The current study, therefore, offers caution when using the computerised Director's Task as a measure of perspective taking with adult populations, as it shows high variability across studies and may suffer from a ceiling effect. In addition, the results question the size and robustness of prosocial priming effects on automatic imitation. More generally, by reporting null results we hope to minimise publication bias and by meta-analysing results as studies emerge and making data freely available, we hope to move towards a more cumulative science of social cognition.
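The cumulative meta-analysis the authors advocate can be sketched with a standard random-effects (DerSimonian-Laird) pooling of per-study effect sizes; the effects and variances below are placeholders, not the actual study values.

```python
# Hedged sketch of a random-effects meta-analysis of prosocial-priming effects on
# automatic imitation; the numbers are made up for illustration.
import numpy as np

d = np.array([0.45, 0.30, 0.25, 0.02])   # per-study standardised effects (hypothetical)
v = np.array([0.04, 0.05, 0.06, 0.02])   # their sampling variances (hypothetical)

w_fixed = 1.0 / v
d_fixed = np.sum(w_fixed * d) / np.sum(w_fixed)
q = np.sum(w_fixed * (d - d_fixed) ** 2)                  # heterogeneity statistic Q
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (q - (len(d) - 1)) / c)                   # between-study variance

w = 1.0 / (v + tau2)                                      # random-effects weights
pooled = np.sum(w * d) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
print(f"pooled d = {pooled:.2f}, "
      f"95% CI [{pooled - 1.96 * se:.2f}, {pooled + 1.96 * se:.2f}], tau^2 = {tau2:.3f}")
```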


Subject(s)
Cognition , Imitative Behavior/physiology , Social Behavior , Visual Perception , Adolescent , Adult , Female , Humans , Male
15.
J Neurophysiol ; 120(5): 2555-2570, 2018 11 01.
Article in English | MEDLINE | ID: mdl-30156457

ABSTRACT

A set of left frontal, temporal, and parietal brain regions respond robustly during language comprehension and production (e.g., Fedorenko E, Hsieh PJ, Nieto-Castañón A, Whitfield-Gabrieli S, Kanwisher N. J Neurophysiol 104: 1177-1194, 2010; Menenti L, Gierhan SM, Segaert K, Hagoort P. Psychol Sci 22: 1173-1182, 2011). These regions have been further shown to be selective for language relative to other cognitive processes, including arithmetic, aspects of executive function, and music perception (e.g., Fedorenko E, Behr MK, Kanwisher N. Proc Natl Acad Sci USA 108: 16428-16433, 2011; Monti MM, Osherson DN. Brain Res 1428: 33-42, 2012). However, one claim about overlap between language and nonlinguistic cognition remains prominent. In particular, some have argued that language processing shares computational demands with action observation and/or execution (e.g., Rizzolatti G, Arbib MA. Trends Neurosci 21: 188-194, 1998; Koechlin E, Jubault T. Neuron 50: 963-974, 2006; Tettamanti M, Weniger D. Cortex 42: 491-494, 2006). Yet the evidence for these claims is indirect, based on observing activation for language and action tasks within the same broad anatomical areas (e.g., on the lateral surface of the left frontal lobe). To test whether language indeed shares machinery with action observation/execution, we examined the responses of language brain regions, defined functionally in each individual participant (Fedorenko E, Hsieh PJ, Nieto-Castañón A, Whitfield-Gabrieli S, Kanwisher N. J Neurophysiol 104: 1177-1194, 2010), to action observation (experiments 1, 2, and 3a) and action imitation (experiment 3b). With the exception of the language region in the angular gyrus, all language regions, including those in the inferior frontal gyrus (within "Broca's area"), showed little or no response during action observation/imitation. These results add to the growing body of literature suggesting that high-level language regions are highly selective for language processing (see Fedorenko E, Varley R. Ann NY Acad Sci 1369: 132-153, 2016 for a review).

NEW & NOTEWORTHY
Many have argued for overlap in the machinery used to interpret language and others' actions, either because action observation was a precursor to linguistic communication or because both require interpreting hierarchically-structured stimuli. However, existing evidence is indirect, relying on group analyses or reverse inference. We examined responses to action observation in language regions defined functionally in individual participants and found no response. Thus language comprehension and action observation recruit distinct circuits in the modern brain.
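The individual-subject functional localisation logic, selecting each participant's most language-responsive voxels within an anatomical parcel and then reading out their response to the critical conditions from independent data, can be sketched as follows. This is a simplified illustration with placeholder arrays, not the authors' toolbox.

```python
# Simplified sketch of individual-subject functional-ROI definition and readout.
import numpy as np

def froi_response(localizer_t, condition_betas, parcel_mask, top_frac=0.10):
    """localizer_t, parcel_mask: 1-D voxel arrays; condition_betas: (n_conditions, n_voxels).
    In practice the localiser and the readout must come from independent runs."""
    idx = np.flatnonzero(parcel_mask)
    n_top = max(1, int(round(top_frac * idx.size)))
    # Voxels with the strongest localiser response (e.g. sentences > nonwords) inside the parcel.
    top = idx[np.argsort(localizer_t[idx])[-n_top:]]
    return condition_betas[:, top].mean(axis=1)     # mean beta per condition in the fROI

# Hypothetical data: 5000 voxels; conditions = [sentences, nonwords, action observation].
rng = np.random.default_rng(0)
t_map = rng.normal(size=5000)
betas = rng.normal(size=(3, 5000))
mask = rng.random(5000) < 0.05                      # say, an IFG parcel
print(froi_response(t_map, betas, mask))
```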


Subject(s)
Brain Mapping , Brain/physiology , Speech Perception , Adolescent , Adult , Facial Expression , Facial Recognition , Female , Humans , Male , Manual Communication , Middle Aged
16.
Neuropsychologia ; 112: 31-39, 2018 04.
Article in English | MEDLINE | ID: mdl-29476765

ABSTRACT

Success in the social world requires the ability to perceive not just individuals and their actions, but pairs of people and the interactions between them. Despite the complexity of social interactions, humans are adept at interpreting those interactions they observe. Although the brain basis of this remarkable ability has remained relatively unexplored, converging functional MRI evidence suggests the posterior superior temporal sulcus (pSTS) is centrally involved. Here, we sought to determine whether this region is sensitive to both the presence of interactive information, as well as to the content of qualitatively different interactions (i.e. competition vs. cooperation). Using point-light human figure stimuli, we demonstrate that the right pSTS is maximally activated when contrasting dyadic interactions vs. dyads performing independent, non-interactive actions. We then used this task to localize the same pSTS region in an independent participant group, and tested responses to non-human moving shape stimuli (i.e. two circles' movements conveying either interactive or non-interactive behaviour). We observed significant support vector machine classification for both the presence and type of interaction (i.e. interaction vs. non-interaction, and competition vs. cooperation, respectively) in the pSTS, as well as neighbouring temporo-parietal junction (TPJ). These findings demonstrate the important role that these regions play in perceiving and understanding social interactions, and lay the foundations for further research to fully characterize interaction responses in these areas.


Subject(s)
Brain/diagnostic imaging , Interpersonal Relations , Social Perception , Theory of Mind/physiology , Visual Perception/physiology , Brain Mapping , Female , Humans , Male
17.
Proc Natl Acad Sci U S A ; 114(43): E9145-E9152, 2017 10 24.
Article in English | MEDLINE | ID: mdl-29073111

ABSTRACT

Primates are highly attuned not just to social characteristics of individual agents, but also to social interactions between multiple agents. Here we report a neural correlate of the representation of social interactions in the human brain. Specifically, we observe a strong univariate response in the posterior superior temporal sulcus (pSTS) to stimuli depicting social interactions between two agents, compared with (i) pairs of agents not interacting with each other, (ii) physical interactions between inanimate objects, and (iii) individual animate agents pursuing goals and interacting with inanimate objects. We further show that this region contains information about the nature of the social interaction: specifically, whether one agent is helping or hindering the other. This sensitivity to social interactions is strongest in a specific subregion of the pSTS but extends to a lesser extent into nearby regions previously implicated in theory of mind and dynamic face perception. This sensitivity to the presence and nature of social interactions is not easily explainable in terms of low-level visual features, attention, or the animacy, actions, or goals of individual agents. This region may underlie our ability to understand the structure of our social world and navigate within it.


Subject(s)
Interpersonal Relations , Temporal Lobe/diagnostic imaging , Temporal Lobe/physiology , Adult , Attention/physiology , Brain/diagnostic imaging , Brain/physiology , Female , Humans , Magnetic Resonance Imaging , Male , Nontherapeutic Human Experimentation , Photic Stimulation
18.
Ann N Y Acad Sci ; 1396(1): 166-182, 2017 05.
Article in English | MEDLINE | ID: mdl-28405964

ABSTRACT

Neuroscientific investigations interested in questions of person perception and impression formation have traditionally asked their participants to observe and evaluate isolated individuals. In recent years, however, there has been a surge of studies presenting third-party encounters between two (or more) individuals as stimuli. Owing to this subtle methodological change, the brain's capacity to understand other people's interactions and relationships from limited visual information, also known as people watching, has become a distinct topic of inquiry. Though initial evidence indicates that this capacity relies on several well-known networks of the social brain (including the person-perception network, the action-observation network, and the mentalizing network), a comprehensive framework of people watching must overcome three major challenges. First, it must develop a taxonomy of judgments that people habitually make when witnessing the encounters of others. Second, it must clarify which visual cues give rise to these encounter-based judgments. Third, it must elucidate how and why several brain networks work together to accomplish these judgments. To advance all three lines of research, we summarize what is currently known as well as what remains to be studied about the neuroscience of people watching.


Subject(s)
Brain/physiology , Cognition/physiology , Interpersonal Relations , Social Behavior , Humans , Judgment/physiology , Neurosciences/methods , Sensation/physiology
19.
Front Psychol ; 7: 1021, 2016.
Article in English | MEDLINE | ID: mdl-27458414

ABSTRACT

New technological devices, particularly those with touch screens, have become virtually omnipresent over the last decade. Practically from birth, children are now surrounded by smartphones and tablets. Although these devices are our constant companions, little is known about whether they can be used not only for entertainment, but also to collect reliable scientific data. Tablets may prove particularly useful for collecting behavioral data from children (1-10 years) who are, for the most part, too old for studies based on looking times and too young for classical psychophysical testing. Here, we analyzed data from six studies that utilized touch screen tablets to deliver experimental paradigms in developmental psychology. In studies 1 and 2, we employed a simple sorting and recall task with children from the ages of 2-8. Study 3 (ages 9 and 10) extended these tasks by increasing the difficulty of the stimuli and adding a staircase-based perception task. A visual search paradigm was used in study 4 (ages 2-5), while 1- to 3-year-olds were presented with an extinction learning task in study 5. In study 6, we used a simple visuo-spatial paradigm to obtain more details about the distribution of reaction times on touch screens across all ages. We also collected data from adult participants in each study for comparison purposes. We analyzed these data sets with regard to four metrics: self-reported tablet usage, completeness of data, accuracy of responses, and response times. In sum, we found that children from the age of two onwards are very capable of interacting with tablets, are able to understand the respective tasks, and are able to use tablets to register their answers accordingly. Results from all studies reiterated the advantages of data collection through tablets: ease of use, high portability, low cost, and high levels of engagement for children. We illustrate the great potential of conducting psychological studies in young children using tablets, and also discuss both methodological challenges and their potential solutions.

20.
Cereb Cortex ; 26(4): 1668-83, 2016 Apr.
Article in English | MEDLINE | ID: mdl-25628345

ABSTRACT

A fundamental and largely unanswered question in neuroscience is whether extrinsic connectivity and function are closely related at a fine spatial grain across the human brain. Using a novel approach, we found that the anatomical connectivity of individual gray-matter voxels (determined via diffusion-weighted imaging) alone can predict functional magnetic resonance imaging (fMRI) responses to 4 visual categories (faces, objects, scenes, and bodies) in individual subjects, thus accounting for both functional differentiation across the cortex and individual variation therein. Furthermore, this approach identified the particular anatomical links between voxels that most strongly predict, and therefore plausibly define, the neural networks underlying specific functions. These results provide the strongest evidence to date for a precise and fine-grained relationship between connectivity and function in the human brain, raise the possibility that early-developing connectivity patterns may determine later functional organization, and offer a method for predicting fine-grained functional organization in populations who cannot be functionally scanned.
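At its core, the analysis amounts to regressing each voxel's functional response profile onto its anatomical connectivity fingerprint, fitting on all-but-one subjects and predicting the left-out subject. The sketch below is a conceptual reconstruction with random placeholder arrays, not the paper's actual pipeline.

```python
# Conceptual sketch: per-voxel prediction of fMRI category responses from DWI
# connectivity fingerprints, evaluated with leave-one-subject-out prediction.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_subj, n_vox, n_targets, n_cat = 8, 1000, 50, 4    # targets = connectivity fingerprint length

conn = rng.normal(size=(n_subj, n_vox, n_targets))  # anatomical connectivity per voxel (placeholder)
func = rng.normal(size=(n_subj, n_vox, n_cat))      # responses to faces/objects/scenes/bodies (placeholder)

def loo_predict(left_out):
    train = [s for s in range(n_subj) if s != left_out]
    X = conn[train].reshape(-1, n_targets)            # stack voxels across training subjects
    Y = func[train].reshape(-1, n_cat)
    model = Ridge(alpha=1.0).fit(X, Y)
    pred = model.predict(conn[left_out])              # voxel-wise prediction for the left-out subject
    # Accuracy: correlation between predicted and observed response, per category.
    return [np.corrcoef(pred[:, c], func[left_out][:, c])[0, 1] for c in range(n_cat)]

print(loo_predict(0))
```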


Subject(s)
Cerebral Cortex/anatomy & histology , Cerebral Cortex/physiology , Pattern Recognition, Visual/physiology , Adult , Brain Mapping/methods , Diffusion Magnetic Resonance Imaging/methods , Female , Gray Matter/anatomy & histology , Gray Matter/physiology , Humans , Magnetic Resonance Imaging/methods , Male , Neural Pathways/anatomy & histology , Neural Pathways/physiology , Young Adult