Results 1 - 20 of 135
1.
eNeuro ; 2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38871455

ABSTRACT

In human adults, multiple cortical regions respond robustly to faces, including the occipital face area (OFA) and fusiform face area (FFA), implicated in face perception, and the superior temporal sulcus (STS) and medial prefrontal cortex (MPFC), implicated in higher-level social functions. When in development does face selectivity arise in each of these regions? Here, we combined two awake infant functional magnetic resonance imaging (fMRI) datasets to create a sample size twice the size of previous reports (n = 65 infants, 2.6-9.6 months). Infants watched movies of faces, bodies, objects, and scenes while fMRI data were collected. Despite variable amounts of data from each infant, individual subject whole-brain activation maps revealed responses to faces compared to non-face visual categories in the approximate location of OFA, FFA, STS, and MPFC. To determine the strength and nature of face selectivity in these regions, we used cross-validated functional region of interest (fROI) analyses. Across this larger sample size, face responses in OFA, FFA, STS, and MPFC were significantly greater than responses to bodies, objects, and scenes. Even the youngest infants (2-5 months) showed significantly face-selective responses in FFA, STS, and MPFC, but not OFA. These results demonstrate that face selectivity is present in multiple cortical regions within months of birth, providing powerful constraints on theories of cortical development. Significance Statement: Social cognition often begins with face perception. In adults, several cortical regions respond robustly to faces, yet little is known about when and how these regions first arise in development. To test whether face selectivity changes in the first year of life, we combined two datasets, doubling the sample size relative to previous reports.
In the approximate location of the fusiform face area (FFA), superior temporal sulcus (STS), and medial prefrontal cortex (MPFC) but not occipital face area (OFA), face selectivity was present in the youngest group. These findings demonstrate that face-selective responses are present across multiple lobes of the brain very early in life.

2.
Dev Psychol ; 2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38913758

ABSTRACT

The study of infant gaze has long been a key tool for understanding the developing mind. However, labor-intensive data collection and processing limit the speed at which this understanding can be advanced. Here, we demonstrate an asynchronous workflow for conducting violation-of-expectation (VoE) experiments, which is fully "hands-off" for the experimenter. We first replicate four classic VoE experiments in a synchronous online setting, and show that VoE can generate highly replicable effects through remote testing. We then confirm the accuracy of state-of-the-art gaze annotation software, iCatcher+, in a new setting. Third, we train parents to control the experiment flow based on the infant's gaze. Combining all three innovations, we then conduct an asynchronous automated infant-contingent VoE experiment. The hands-off workflow successfully replicates a classic VoE effect: infants look longer at inefficient actions than efficient ones. We compare the resulting effect size and statistical power to the same study run in-lab and synchronously via Zoom. The hands-off workflow significantly reduces the marginal cost and time per participant, enabling larger sample sizes. By enhancing the reproducibility and robustness of findings relying on infant looking, this workflow could help support a cumulative science of infant cognition. Tools to implement the workflow are openly available. (PsycInfo Database Record (c) 2024 APA, all rights reserved).

3.
bioRxiv ; 2024 May 01.
Article in English | MEDLINE | ID: mdl-38746251

ABSTRACT

Humans effortlessly use vision to plan and guide navigation through the local environment, or "scene". A network of three cortical regions responds selectively to visual scene information, including the occipital (OPA), parahippocampal (PPA), and medial place areas (MPA) - but how this network supports visually-guided navigation is unclear. Recent evidence suggests that one region in particular, the OPA, supports visual representations for navigation, while PPA and MPA support other aspects of scene processing. However, most previous studies tested only static scene images, which lack the dynamic experience of navigating through scenes. We used dynamic movie stimuli to test whether OPA, PPA, and MPA represent two critical kinds of navigationally-relevant information: navigational affordances (e.g., can I walk to the left, right, or both?) and ego-motion (e.g., am I walking forward or backward? turning left or right?). We found that OPA is sensitive to both affordances and ego-motion, as well as the conflict between these cues - e.g., turning toward versus away from an open doorway. These effects were significantly weaker or absent in PPA and MPA. Responses in OPA were also dissociable from those in early visual cortex, consistent with the idea that OPA responses are not merely explained by lower-level visual features. OPA responses to affordances and ego-motion were stronger in the contralateral than ipsilateral visual field, suggesting that OPA encodes navigationally relevant information within an egocentric reference frame. Taken together, these results support the hypothesis that OPA contains visual representations that are useful for planning and guiding navigation through scenes.

4.
bioRxiv ; 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38798360

ABSTRACT

Left hemisphere damage in adulthood often leads to linguistic deficits, but many cases of early damage leave linguistic processing preserved, and a functional language system can develop in the right hemisphere. To explain this early apparent equipotentiality of the two hemispheres for language, some have proposed that the language system is bilateral during early development and only becomes left-lateralized with age. We examined language lateralization using functional magnetic resonance imaging with two large pediatric cohorts (total n=273 children ages 4-16; n=107 adults). Strong, adult-level left-hemispheric lateralization (in activation volume and response magnitude) was evident by age 4. Thus, although the right hemisphere can take over language function in some cases of early brain damage, and although some features of the language system do show protracted development (magnitude of language response and strength of inter-regional correlations in the language network), the left-hemisphere bias for language is robustly present by 4 years of age. These results call for alternative accounts of early equipotentiality of the two hemispheres for language.

5.
Data Brief ; 52: 109905, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38146306

ABSTRACT

Theory of mind (ToM) reasoning refers to the process by which we reason about the mental states (beliefs, desires, emotions) of others. Here, we describe an open dataset of responses from children who completed a story booklet task for assessing ToM reasoning (n = 321 3-12-year-old children, including 64 (neurotypical) children assessed longitudinally and 68 autistic children). Children completed one of two versions of the story booklet task (Booklet 1 or 2). Both versions include two-alternative forced choice and free response questions that tap ToM concepts ranging in difficulty from reasoning about desires and beliefs to reasoning about moral blameworthiness and mistaken referents. Booklet 2 additionally includes items that assess understanding of sarcasm, lies, and second-order belief-desire reasoning. Compared to other ToM tasks, the booklet task provides relatively dense sampling of ToM reasoning within each child (Booklet 1: 41 items; Booklet 2: 65 items). Experimental sessions were video recorded and data were coded offline; the open dataset consists of children's accuracy (binary) on each item and, for many children (n = 171), transcriptions of free responses. The dataset also includes children's scores on standardized tests of receptive language and non-verbal IQ, as well as other demographic information. As such, this dataset is a valuable resource for investigating the development of ToM reasoning in early and middle childhood.

6.
Neurobiol Lang (Camb) ; 4(4): 575-610, 2023.
Article in English | MEDLINE | ID: mdl-38144236

ABSTRACT

Much of the language we encounter in our everyday lives comes in the form of conversation, yet the majority of research on the neural basis of language comprehension has used input from only one speaker at a time. Twenty adults were scanned while passively observing audiovisual conversations using functional magnetic resonance imaging. In a block-design task, participants watched 20 s videos of puppets speaking either to another puppet (the dialogue condition) or directly to the viewer (the monologue condition), while the audio was either comprehensible (played forward) or incomprehensible (played backward). Individually functionally localized left-hemisphere language regions responded more to comprehensible than incomprehensible speech but did not respond differently to dialogue than monologue. In a second task, participants watched videos (1-3 min each) of two puppets conversing with each other, in which one puppet was comprehensible while the other's speech was reversed. All participants saw the same visual input but were randomly assigned which character's speech was comprehensible. In left-hemisphere cortical language regions, the time course of activity was correlated only among participants who heard the same character speaking comprehensibly, despite identical visual input across all participants. For comparison, some individually localized theory of mind regions and right-hemisphere homologues of language regions responded more to dialogue than monologue in the first task, and in the second task, activity in some regions was correlated across all participants regardless of which character was speaking comprehensibly. Together, these results suggest that canonical left-hemisphere cortical language regions are not sensitive to differences between observed dialogue and monologue.

7.
R Soc Open Sci ; 10(7): 221385, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37476513

ABSTRACT

People facing material deprivation are more likely to turn to acquisitive crime. It is not clear why it makes sense for them to do so, given that apprehension and punishment may make their situation even worse. Recent theory suggests that people should be more willing to steal if they are on the wrong side of a 'desperation threshold'; that is, a level of resources critical to wellbeing. Below such a threshold, people should pursue any risky behaviour that offers the possibility of a short route back above, and should be insensitive to the severity of possible punishments, since they have little left to lose. We developed a multi-round, multi-player economic game with a desperation threshold and the possibility of theft as well as cooperation. Across four experiments with 1000 UK and US adults, we showed that falling short of a desperation threshold increased stealing from other players, even when the payoff from stealing was negative on average. Within the microsocieties created in the game, the presence of more players with below-threshold resources produced low trust, driven by the experience of being stolen from. Contrary to predictions, our participants appeared to be somewhat sensitive to the severity of punishment for being caught trying to steal. Our results show, in an experimental microcosm, that some members of society falling short of a threshold of material desperation can have powerful social consequences.

8.
Philos Trans A Math Phys Eng Sci ; 381(2251): 20220047, 2023 Jul 24.
Article in English | MEDLINE | ID: mdl-37271174

ABSTRACT

From sparse descriptions of events, observers can make systematic and nuanced predictions of what emotions the people involved will experience. We propose a formal model of emotion prediction in the context of a public high-stakes social dilemma. This model uses inverse planning to infer a person's beliefs and preferences, including social preferences for equity and for maintaining a good reputation. The model then combines these inferred mental contents with the event to compute 'appraisals': whether the situation conformed to the expectations and fulfilled the preferences. We learn functions mapping computed appraisals to emotion labels, allowing the model to match human observers' quantitative predictions of 20 emotions, including joy, relief, guilt and envy. Model comparison indicates that inferred monetary preferences are not sufficient to explain observers' emotion predictions; inferred social preferences are factored into predictions for nearly every emotion. Human observers and the model both use minimal individualizing information to adjust predictions of how different people will respond to the same event. Thus, our framework integrates inverse planning, event appraisals and emotion concepts in a single computational model to reverse-engineer people's intuitive theory of emotions. This article is part of a discussion meeting issue 'Cognitive artificial intelligence'.


Subject(s)
Theory of Mind; Humans; Artificial Intelligence; Emotions
11.
Dev Sci ; 26(5): e13387, 2023 09.
Article in English | MEDLINE | ID: mdl-36951215

ABSTRACT

Prior studies have observed selective neural responses in the adult human auditory cortex to music and speech that cannot be explained by the differing lower-level acoustic properties of these stimuli. Does infant cortex exhibit similarly selective responses to music and speech shortly after birth? To answer this question, we attempted to collect functional magnetic resonance imaging (fMRI) data from 45 sleeping infants (2.0- to 11.9-weeks-old) while they listened to monophonic instrumental lullabies and infant-directed speech produced by a mother. To match acoustic variation between music and speech sounds we (1) recorded music from instruments that had a similar spectral range as female infant-directed speech, (2) used a novel excitation-matching algorithm to match the cochleagrams of music and speech stimuli, and (3) synthesized "model-matched" stimuli that were matched in spectrotemporal modulation statistics to (yet perceptually distinct from) music or speech. Of the 36 infants we collected usable data from, 19 had significant activations to sounds overall compared to scanner noise. From these infants, we observed a set of voxels in non-primary auditory cortex (NPAC) but not in Heschl's Gyrus that responded significantly more to music than to each of the other three stimulus types (but not significantly more strongly than to the background scanner noise). In contrast, our planned analyses did not reveal voxels in NPAC that responded more to speech than to model-matched speech, although other unplanned analyses did. These preliminary findings suggest that music selectivity arises within the first month of life. A video abstract of this article can be viewed at https://youtu.be/c8IGFvzxudk. RESEARCH HIGHLIGHTS: Responses to music, speech, and control sounds matched for the spectrotemporal modulation-statistics of each sound were measured from 2- to 11-week-old sleeping infants using fMRI. 
Auditory cortex was significantly activated by these stimuli in 19 out of 36 sleeping infants. Selective responses to music compared to the three other stimulus classes were found in non-primary auditory cortex but not in nearby Heschl's Gyrus. Selective responses to speech were not observed in planned analyses but were observed in unplanned, exploratory analyses.


Subject(s)
Auditory Cortex; Music; Speech Perception; Adult; Humans; Infant; Female; Acoustic Stimulation; Auditory Perception/physiology; Auditory Cortex/physiology; Noise; Magnetic Resonance Imaging; Speech Perception/physiology
12.
Brain Sci ; 13(2)2023 Feb 10.
Article in English | MEDLINE | ID: mdl-36831839

ABSTRACT

Recent neuroimaging evidence challenges the classical view that face identity and facial expression are processed by segregated neural pathways, showing that information about identity and expression are encoded within common brain regions. This article tests the hypothesis that integrated representations of identity and expression arise spontaneously within deep neural networks. A subset of the CelebA dataset is used to train a deep convolutional neural network (DCNN) to label face identity (chance = 0.06%, accuracy = 26.5%), and the FER2013 dataset is used to train a DCNN to label facial expression (chance = 14.2%, accuracy = 63.5%). The identity-trained and expression-trained networks each successfully transfer to labeling both face identity and facial expression on the Karolinska Directed Emotional Faces dataset. This study demonstrates that DCNNs trained to recognize face identity and DCNNs trained to recognize facial expression spontaneously develop representations of facial expression and face identity, respectively. Furthermore, a congruence coefficient analysis reveals that features distinguishing between identities and features distinguishing between expressions become increasingly orthogonal from layer to layer, suggesting that deep neural networks disentangle representational subspaces corresponding to different sources.
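The congruence coefficient used in this abstract's final analysis can be computed directly. The sketch below is a toy illustration, not the authors' analysis code: the two vectors are hand-constructed stand-ins for DCNN feature directions distinguishing identities versus expressions, built to be orthogonal so the coefficient comes out near zero.

```python
import numpy as np

def congruence_coefficient(x, y):
    # Tucker's congruence coefficient: the cosine between two
    # uncentered feature vectors; ~0 means the directions are
    # orthogonal, ~1 means they coincide.
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

# Hypothetical "layer activations": directions distinguishing
# identities vs. expressions, constructed to be orthogonal.
identity_direction = np.array([1.0, 0.2, 0.0, 0.0])
expression_direction = np.array([-0.2, 1.0, 0.0, 0.0])

phi = congruence_coefficient(identity_direction, expression_direction)
print(round(phi, 3))  # 0.0: the two feature directions are orthogonal
```

In the study's terms, a coefficient that shrinks toward zero from layer to layer would indicate the network progressively disentangling the identity and expression subspaces.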

13.
Top Cogn Sci ; 15(2): 290-302, 2023 04.
Article in English | MEDLINE | ID: mdl-36322897

ABSTRACT

From birth, humans constantly make decisions about what to look at and for how long. Yet, the mechanism behind such decision-making remains poorly understood. Here, we present the rational action, noisy choice for habituation (RANCH) model. RANCH is a rational learning model that takes noisy perceptual samples from stimuli and makes sampling decisions based on expected information gain (EIG). The model captures key patterns of looking time documented in developmental research: habituation and dishabituation. We evaluated the model with adult looking time collected from a paradigm analogous to the infant habituation paradigm. We compared RANCH with baseline models (no learning model, no perceptual noise model) and models with alternative linking hypotheses (Surprisal, KL divergence). We showed that (1) learning and perceptual noise are critical assumptions of the model, and (2) Surprisal and KL are good proxies for EIG under the current learning context.
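The EIG linking hypothesis can be made concrete with a minimal sketch (simplified assumptions, not the RANCH implementation): a grid-based Bayesian learner observing binary perceptual samples scores the value of one more look as the expected drop in posterior entropy. As consistent evidence accumulates, the posterior sharpens and EIG falls, yielding habituation-like declines in sampling value.

```python
import numpy as np

def posterior(prior, theta, successes, n):
    # Bernoulli likelihood on a discretized theta grid
    likelihood = theta**successes * (1.0 - theta)**(n - successes)
    unnormalized = prior * likelihood
    return unnormalized / unnormalized.sum()

def entropy_bits(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def expected_information_gain(prior, theta):
    # EIG = prior entropy minus expected posterior entropy,
    # averaged over the predictive distribution of the next sample
    p_success = float((prior * theta).sum())
    post_if_success = posterior(prior, theta, 1, 1)
    post_if_failure = posterior(prior, theta, 0, 1)
    expected_posterior_entropy = (
        p_success * entropy_bits(post_if_success)
        + (1.0 - p_success) * entropy_bits(post_if_failure)
    )
    return entropy_bits(prior) - expected_posterior_entropy

theta = np.linspace(0.001, 0.999, 999)
flat_prior = np.ones_like(theta) / theta.size

eig_before = expected_information_gain(flat_prior, theta)
# After many consistent samples the posterior sharpens,
# so one more look is worth less: habituation
sharpened = posterior(flat_prior, theta, 18, 20)
eig_after = expected_information_gain(sharpened, theta)
print(eig_before > eig_after)  # True
```

Dishabituation follows the same logic: a novel stimulus resets the learner to a broad prior, so EIG, and with it looking time, jumps back up.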


Subject(s)
Habituation, Psychophysiologic; Learning; Adult; Infant; Humans; Decision Making
14.
Trends Cogn Sci ; 26(12): 1062-1063, 2022 12.
Article in English | MEDLINE | ID: mdl-36150968

ABSTRACT

How do people perceive and pursue legitimate power? For the social sciences, this question is venerable. Yet, for cognitive science, it offers fresh and generative opportunities to explore how adults evaluate legitimacy, how children learn to do so, and what difference legitimate power makes for people's thoughts, feelings, and actions.


Subject(s)
Emotions; Power, Psychological; Adult; Child; Humans
15.
Trends Cogn Sci ; 26(11): 959-971, 2022 11.
Article in English | MEDLINE | ID: mdl-36089494

ABSTRACT

Understanding Theory of Mind should begin with an analysis of the problems it solves. The traditional answer is that Theory of Mind is used for predicting others' thoughts and actions. However, the same Theory of Mind is also used for planning to change others' thoughts and actions. Planning requires that Theory of Mind consists of abstract structured causal representations and supports efficient search and selection from innumerable possible actions. Theory of Mind contrasts with less cognitively demanding alternatives: statistical predictive models of other people's actions, or model-free reinforcement of actions by their effects on other people. Theory of Mind is likely used to plan novel interventions and predict their effects, for example, in pedagogy, emotion regulation, and impression management.


Subject(s)
Theory of Mind; Humans; Models, Statistical; Theory of Mind/physiology
16.
Proc Natl Acad Sci U S A ; 119(32): e2121390119, 2022 08 09.
Article in English | MEDLINE | ID: mdl-35878009

ABSTRACT

Infants are born into networks of individuals who are socially connected. How do infants begin learning which individuals are their own potential social partners? Using digitally edited videos, we showed 12-mo-old infants social interactions between unknown individuals and their own parents. In studies 1 to 4, after their parent showed affiliation toward one puppet, infants expected that puppet to engage with them. In study 5, infants made the reverse inference; after a puppet engaged with them, the infants expected that puppet to respond to their parent. In each study, infants' inferences were specific to social interactions that involved their own parent as opposed to another infant's parent. Thus, infants combine observation of social interactions with knowledge of their preexisting relationship with their parent to discover which newly encountered individuals are potential social partners for themselves and their families.


Subject(s)
Learning; Parents; Social Interaction; Humans; Infant
17.
Behav Brain Sci ; 45: e118, 2022 07 07.
Article in English | MEDLINE | ID: mdl-35796353

ABSTRACT

Group representations based on recursive utilities can be used to derive the same predictions as Pietraszewski in conflict situations. Additionally, these representations generalize to non-conflict situations, asymmetric relationships, and represent the stakes in a conflict. However, both proposals fail to represent asymmetries of power and responsibility and to account for generalizations from specific observed individuals to collections of non-observed individuals.


Subject(s)
Social Behavior; Humans
18.
Hum Brain Mapp ; 43(9): 2782-2800, 2022 06 15.
Article in English | MEDLINE | ID: mdl-35274789

ABSTRACT

Scanning young children while they watch short, engaging, commercially-produced movies has emerged as a promising approach for increasing data retention and quality. Movie stimuli also evoke a richer variety of cognitive processes than traditional experiments, allowing the study of multiple aspects of brain development simultaneously. However, because these stimuli are uncontrolled, it is unclear how effectively distinct profiles of brain activity can be distinguished from the resulting data. Here we develop an approach for identifying multiple distinct subject-specific Regions of Interest (ssROIs) using fMRI data collected during movie-viewing. We focused on the test case of higher-level visual regions selective for faces, scenes, and objects. Adults (N = 13) were scanned while viewing a 5.6-min child-friendly movie, as well as a traditional localizer experiment with blocks of faces, scenes, and objects. We found that just 2.7 min of movie data could identify subject-specific face, scene, and object regions. While successful, movie-defined ssROIs still showed weaker domain selectivity than traditional ssROIs. Having validated our approach in adults, we then used the same methods on movie data collected from 3- to 12-year-old children (N = 122). Movie response timecourses in 3-year-old children's face, scene, and object regions were already significantly and specifically predicted by timecourses from the corresponding regions in adults. We also found evidence of continued developmental change, particularly in the face-selective posterior superior temporal sulcus. Taken together, our results reveal both early maturity and functional change in face, scene, and object regions, and more broadly highlight the promise of short, child-friendly movies for developmental cognitive neuroscience.


Subject(s)
Brain Mapping; Motion Pictures; Retention, Psychology; Adult; Brain Mapping/methods; Child; Child, Preschool; Humans; Magnetic Resonance Imaging/methods; Pattern Recognition, Visual/physiology; Photic Stimulation/methods; Temporal Lobe/diagnostic imaging; Temporal Lobe/physiology
19.
Science ; 375(6578): 311-315, 2022 01 21.
Article in English | MEDLINE | ID: mdl-35050656

ABSTRACT

Across human societies, people form "thick" relationships characterized by strong attachments, obligations, and mutual responsiveness. People in thick relationships share food utensils, kiss, or engage in other distinctive interactions that involve sharing saliva. We found that children, toddlers, and infants infer that dyads who share saliva (as opposed to other positive social interactions) have a distinct relationship. Children expect saliva sharing to happen in nuclear families. Toddlers and infants expect that people who share saliva will respond to one another in distress. Parents confirm that saliva sharing is a valid cue of relationship thickness in their children's social environments. The ability to use distinctive interactions to infer categories of relationships thus emerges early in life, without explicit teaching; this enables young humans to rapidly identify close relationships, both within and beyond families.


Subject(s)
Interpersonal Relations; Nuclear Family; Saliva; Child; Child Development; Child, Preschool; Eating; Empathy; Female; Food; Friends; Humans; Infant; Male; Play and Playthings