Results 1-20 of 87

1.
J Neurosci ; 43(29): 5391-5405, 2023 07 19.
Article in English | MEDLINE | ID: mdl-37369588

ABSTRACT

Models of visual cognition generally assume that brain networks predict the contents of a stimulus to facilitate its subsequent categorization. However, understanding prediction and categorization at a network level has remained challenging, partly because we need to reverse engineer their information processing mechanisms from the dynamic neural signals. Here, we used connectivity measures that can isolate the communications of a specific content to reconstruct these network mechanisms in each individual participant (N = 11, both sexes). Each was cued to the spatial location (left vs right) and contents [low spatial frequency (LSF) vs high spatial frequency (HSF)] of a predicted Gabor stimulus that they then categorized. Using each participant's concurrently measured MEG, we reconstructed networks that predict and categorize LSF versus HSF contents for behavior. We found that predicted contents flexibly propagate top down from temporal to lateralized occipital cortex, depending on task demands, under supervisory control of prefrontal cortex. When they reach lateralized occipital cortex, predictions enhance the bottom-up LSF versus HSF representations of the stimulus, all the way from occipital-ventral-parietal to premotor cortex, in turn producing faster categorization behavior. Importantly, content communications are subsets (i.e., 55-75%) of the signal-to-signal communications typically measured between brain regions. Hence, our study isolates functional networks that process the information of cognitive functions.

SIGNIFICANCE STATEMENT: An enduring cognitive hypothesis states that our perception is influenced not only by bottom-up sensory input but also by top-down expectations. However, cognitive explanations of the dynamic brain network mechanisms that flexibly predict and categorize the visual input according to task demands remain elusive. We addressed these questions in a predictive experimental design by isolating the network communications of cognitive contents from all other communications. Our methods revealed a Prediction Network that flexibly communicates contents from temporal to lateralized occipital cortex, with explicit frontal control, and an occipital-ventral-parietal-frontal Categorization Network that more sharply represents the predicted contents of the shown stimulus, leading to faster behavior. Our framework and results therefore shed new light on how dynamic brain activity processes cognitive information.


Subject(s)
Brain Mapping , Magnetic Resonance Imaging , Male , Female , Humans , Occipital Lobe , Brain , Cognition , Photic Stimulation , Visual Perception
2.
J Neurosci ; 42(48): 9030-9044, 2022 11 30.
Article in English | MEDLINE | ID: mdl-36280264

ABSTRACT

To date, social and nonsocial decisions have been studied largely in isolation. Consequently, the extent to which social and nonsocial forms of decision uncertainty are integrated using shared neurocomputational resources remains elusive. Here, we address this question using simultaneous electroencephalography (EEG)-functional magnetic resonance imaging (fMRI) in healthy human participants (young adults of both sexes) and a task in which decision evidence in social and nonsocial contexts varies along comparable scales. First, we identify time-resolved build-up of activity in the EEG, akin to a process of evidence accumulation (EA), across both contexts. We then use the endogenous trial-by-trial variability in the slopes of these accumulating signals to construct parametric fMRI predictors. We show that a region of the posterior-medial frontal cortex (pMFC) uniquely explains trial-wise variability in the process of evidence accumulation in both social and nonsocial contexts. We further demonstrate a task-dependent coupling between the pMFC and regions of the human valuation system in dorso-medial and ventro-medial prefrontal cortex across both contexts. Finally, we report domain-specific representations in regions known to encode the early decision evidence for each context. These results are suggestive of a domain-general decision-making architecture, whereupon domain-specific information is likely converted into a "common currency" in medial prefrontal cortex and accumulated for the decision in the pMFC.

SIGNIFICANCE STATEMENT: Little work has directly compared social versus nonsocial decisions to investigate whether they share common neurocomputational origins. Here, using combined electroencephalography (EEG)-functional magnetic resonance imaging (fMRI) and computational modeling, we offer a detailed spatiotemporal account of the neural underpinnings of social and nonsocial decisions. Specifically, we identify a comparable mechanism of temporal evidence integration driving both decisions and localize this integration process in posterior-medial frontal cortex (pMFC). We further demonstrate task-dependent coupling between the pMFC and regions of the human valuation system across both contexts. Finally, we report domain-specific representations in regions encoding the early, domain-specific, decision evidence. These results suggest a domain-general decision-making architecture, whereupon domain-specific information is converted into a common representation in the valuation system and integrated for the decision in the pMFC.


Subject(s)
Decision Making , Magnetic Resonance Imaging , Young Adult , Male , Female , Humans , Frontal Lobe , Electroencephalography
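The trial-by-trial EEG build-up slopes that the study above turns into parametric fMRI predictors can be sketched as a per-trial least-squares fit over an accumulation window (a minimal illustration on synthetic ramps; the array shapes and window bounds are assumptions, not the authors' pipeline):

```python
import numpy as np

def trial_slopes(eeg, times, window):
    """Least-squares slope of each trial's build-up within a time window.

    eeg: (n_trials, n_times) single-trial amplitudes; times: (n_times,) in s;
    window: (t_start, t_end) of the putative accumulation period.
    """
    m = (times >= window[0]) & (times <= window[1])
    # Fit a line to every trial at once: with 2-D y, polyfit returns
    # (slopes, intercepts) stacked across trials.
    slope, _ = np.polyfit(times[m], eeg[:, m].T, 1)
    return slope

# Synthetic ramping trials with known slopes (stand-ins for EEG build-up).
times = np.linspace(0.0, 1.0, 101)
true_slopes = np.array([1.0, -2.0, 0.5])
eeg = true_slopes[:, None] * times[None, :]
est = trial_slopes(eeg, times, (0.2, 0.8))
```

The resulting per-trial slopes would then be entered (e.g., after z-scoring) as a parametric modulator in the fMRI design matrix.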
3.
PLoS Biol ; 16(8): e2006558, 2018 08.
Article in English | MEDLINE | ID: mdl-30080855

ABSTRACT

Integration of multimodal sensory information is fundamental to many aspects of human behavior, but the neural mechanisms underlying these processes remain mysterious. For example, during face-to-face communication, we know that the brain integrates dynamic auditory and visual inputs, but we do not yet understand where and how such integration mechanisms support speech comprehension. Here, we quantify representational interactions between dynamic audio and visual speech signals and show that different brain regions exhibit different types of representational interaction. With a novel information theoretic measure, we found that theta (3-7 Hz) oscillations in the posterior superior temporal gyrus/sulcus (pSTG/S) represent auditory and visual inputs redundantly (i.e., represent common features of the two), whereas the same oscillations in left motor and inferior temporal cortex represent the inputs synergistically (i.e., the instantaneous relationship between audio and visual inputs is also represented). Importantly, redundant coding in the left pSTG/S and synergistic coding in the left motor cortex predict behavior, i.e., speech comprehension performance. Our findings therefore demonstrate that processes classically described as integration can have different statistical properties and may reflect distinct mechanisms that occur in different brain regions to support audiovisual speech comprehension.


Subject(s)
Motor Cortex/physiology , Speech Perception/physiology , Temporal Lobe/physiology , Acoustic Stimulation , Adolescent , Adult , Auditory Perception , Brain/physiology , Brain Mapping/methods , Comprehension/physiology , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Photic Stimulation , Speech , Visual Perception
4.
Proc Natl Acad Sci U S A ; 115(43): E10013-E10021, 2018 10 23.
Article in English | MEDLINE | ID: mdl-30297420

ABSTRACT

Real-world studies show that the facial expressions produced during pain and orgasm, two different and intense affective experiences, are virtually indistinguishable. However, this finding is counterintuitive, because facial expressions are widely considered to be a powerful tool for social interaction. Consequently, debate continues as to whether the facial expressions of these extreme positive and negative affective states serve a communicative function. Here, we address this debate from a novel angle by modeling the mental representations of dynamic facial expressions of pain and orgasm in 40 observers in each of two cultures (Western, East Asian) using a data-driven method. Using a complementary approach of machine learning, an information-theoretic analysis, and a human perceptual discrimination task, we show that mental representations of pain and orgasm are physically and perceptually distinct in each culture. Cross-cultural comparisons also revealed that pain is represented by similar face movements across cultures, whereas orgasm showed distinct cultural accents. Together, our data show that mental representations of the facial expressions of pain and orgasm are distinct, which questions their nondiagnosticity and instead suggests they could be used for communicative purposes. Our results also highlight the potential role of cultural and perceptual factors in shaping the mental representation of these facial expressions. We discuss new research directions to further explore their relationship to the production of facial expressions.


Subject(s)
Emotions/physiology , Face/physiology , Pain/physiopathology , Pain/psychology , Pleasure/physiology , Adult , Cross-Cultural Comparison , Culture , Facial Expression , Female , Humans , Interpersonal Relations , Male , Recognition, Psychology/physiology , Young Adult
5.
Hum Brain Mapp ; 41(5): 1212-1225, 2020 04 01.
Article in English | MEDLINE | ID: mdl-31782861

ABSTRACT

Fast and accurate face processing is critical for everyday social interactions, but it declines and becomes delayed with age, as measured by both neural and behavioral responses. Here, we addressed the critical challenge of understanding how aging changes neural information processing mechanisms to delay behavior. Young (20-36 years) and older (60-86 years) adults performed the basic social interaction task of detecting a face versus noise while we recorded their electroencephalogram (EEG). In each participant, using a new information theoretic framework, we reconstructed the features supporting face detection behavior, and also where, when and how EEG activity represents them. We found that occipital-temporal pathway activity dynamically represents the eyes of the face images for behavior ~170 ms poststimulus, with a 40 ms delay in older adults that underlies their 200 ms slowing of reaction times. Our results therefore demonstrate how aging can change the neural information processing mechanisms that underlie behavioral slowdown.


Subject(s)
Face , Healthy Aging , Reaction Time/physiology , Visual Perception/physiology , Adult , Aged , Aged, 80 and over , Aging/psychology , Brain Mapping , Electroencephalography , Female , Humans , Magnetic Resonance Imaging , Male , Mental Processes , Middle Aged , Neural Pathways/diagnostic imaging , Neural Pathways/physiology , Occipital Lobe/diagnostic imaging , Occipital Lobe/physiology , Social Interaction , Temporal Lobe/diagnostic imaging , Temporal Lobe/physiology , Young Adult
6.
Proc Natl Acad Sci U S A ; 113(17): E2450-9, 2016 Apr 26.
Article in English | MEDLINE | ID: mdl-27071095

ABSTRACT

Body category-selective regions of the primate temporal cortex respond to images of bodies, but it is unclear which fragments of such images drive single neurons' responses in these regions. Here we applied the Bubbles technique to the responses of single macaque middle superior temporal sulcus (midSTS) body patch neurons to reveal the image fragments the neurons respond to. We found that local image fragments such as extremities (limbs), curved boundaries, and parts of the torso drove the large majority of neurons. Bubbles revealed the whole body in only a few neurons. Neurons coded the features in a manner that was tolerant to translation and scale changes. Most image fragments were excitatory but for a few neurons both inhibitory and excitatory fragments (opponent coding) were present in the same image. The fragments we reveal here in the body patch with Bubbles differ from those suggested in previous studies of face-selective neurons in face patches. Together, our data indicate that the majority of body patch neurons respond to local image fragments that occur frequently, but not exclusively, in bodies, with a coding that is tolerant to translation and scale. Overall, the data suggest that the body category selectivity of the midSTS body patch depends more on the feature statistics of bodies (e.g., extensions occur more frequently in bodies) than on semantics (bodies as an abstract category).


Subject(s)
Neurons/physiology , Temporal Lobe/physiology , Animals , Brain Mapping , Functional Neuroimaging , Human Body , Macaca mulatta/physiology , Magnetic Resonance Imaging , Male , Pattern Recognition, Visual/physiology , Photic Stimulation
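The Bubbles technique used in the study above presents stimuli through randomly placed Gaussian apertures, then relates which revealed fragments drive responses. A minimal sketch of the stimulus sampling (aperture count, width, and image size are arbitrary illustrative choices, not those of the study):

```python
import numpy as np

def bubbles_mask(shape, n_bubbles, sigma, rng):
    """Random sampling mask: a sum of Gaussian apertures at random pixel locations."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, shape[0]), rng.integers(0, shape[1])
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma**2))
    return np.clip(mask, 0.0, 1.0)  # cap overlapping apertures at full visibility

rng = np.random.default_rng(0)
img = rng.random((64, 64))                      # stand-in for a body-image stimulus
mask = bubbles_mask(img.shape, n_bubbles=5, sigma=6, rng=rng)
stim = mask * img                               # only the bubbled fragments are visible
```

Averaging the masks that accompany strong versus weak neural responses (a classification image) then localizes the image fragments a neuron responds to.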
7.
Annu Rev Psychol ; 68: 269-297, 2017 Jan 03.
Article in English | MEDLINE | ID: mdl-28051933

ABSTRACT

As a highly social species, humans are equipped with a powerful tool for social communication-the face. Although seemingly simple, the human face can elicit multiple social perceptions due to the rich variations of its movements, morphology, and complexion. Consequently, identifying precisely what face information elicits different social perceptions is a complex empirical challenge that has largely remained beyond the reach of traditional methods. In the past decade, the emerging field of social psychophysics has developed new methods to address this challenge, with the potential to transfer psychophysical laws of social perception to the digital economy via avatars and social robots. At this exciting juncture, it is timely to review these new methodological developments. In this article, we introduce and review the foundational methodological developments of social psychophysics, present work done in the past decade that has advanced understanding of the face as a tool for social communication, and discuss the major challenges that lie ahead.


Subject(s)
Facial Expression , Nonverbal Communication/psychology , Social Perception , Humans , Psychophysics
8.
Hum Brain Mapp ; 38(3): 1541-1573, 2017 03.
Article in English | MEDLINE | ID: mdl-27860095

ABSTRACT

We begin by reviewing the statistical framework of information theory as applicable to neuroimaging data analysis. A major factor hindering wider adoption of this framework in neuroimaging is the difficulty of estimating information theoretic quantities in practice. We present a novel estimation technique that combines the statistical theory of copulas with the closed form solution for the entropy of Gaussian variables. This results in a general, computationally efficient, flexible, and robust multivariate statistical framework that provides effect sizes on a common meaningful scale, allows for unified treatment of discrete, continuous, unidimensional and multidimensional variables, and enables direct comparisons of representations from behavioral and brain responses across any recording modality. We validate the use of this estimate as a statistical test within a neuroimaging context, considering both discrete stimulus classes and continuous stimulus features. We also present examples of analyses facilitated by these developments, including application of multivariate analyses to MEG planar magnetic field gradients, and pairwise temporal interactions in evoked EEG responses. We show the benefit of considering the instantaneous temporal derivative together with the raw values of M/EEG signals as a multivariate response, how we can separately quantify modulations of amplitude and direction for vector quantities, and how we can measure the emergence of novel information over time in evoked responses. Open-source Matlab and Python code implementing the new methods accompanies this article. Hum Brain Mapp 38:1541-1573, 2017. © 2016 Wiley Periodicals, Inc.


Subject(s)
Brain Mapping , Brain/diagnostic imaging , Brain/physiology , Information Theory , Neuroimaging/methods , Normal Distribution , Computer Simulation , Electroencephalography , Entropy , Humans , Sensitivity and Specificity
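The estimator described in the abstract above combines a rank-based copula transform with the closed-form entropy of Gaussian variables. A compact sketch of that idea on raw NumPy/SciPy (this is not the authors' released toolbox; the function names here are invented for illustration):

```python
import numpy as np
from scipy.stats import norm, rankdata

def copula_normalize(x):
    """Rank each column to (0, 1), then map through the standard-normal inverse CDF."""
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    u = np.apply_along_axis(rankdata, 0, x) / (x.shape[0] + 1)
    return norm.ppf(u)

def gaussian_mi(x, y):
    """MI in bits between jointly Gaussian samples via closed-form entropies."""
    def h(z):  # differential entropy of a Gaussian with sample covariance
        c = np.atleast_2d(np.cov(z, rowvar=False))
        return 0.5 * np.log2(np.linalg.det(2 * np.pi * np.e * c))
    return h(x) + h(y) - h(np.hstack([x, y]))

def gcmi(x, y):
    """Gaussian-copula mutual information between two (possibly multivariate) samples."""
    return gaussian_mi(copula_normalize(x), copula_normalize(y))
```

Because only the copula (rank structure) enters the estimate, the statistic is robust to each variable's marginal distribution, which is what makes it practical across recording modalities.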
9.
Psychol Sci ; 28(9): 1259-1270, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28741981

ABSTRACT

A smile is the most frequent facial expression, but not all smiles are equal. A social-functional account holds that smiles of reward, affiliation, and dominance serve basic social functions, including rewarding behavior, bonding socially, and negotiating hierarchy. Here, we characterize the facial-expression patterns associated with these three types of smiles. Specifically, we modeled the facial expressions using a data-driven approach and showed that reward smiles are symmetrical and accompanied by eyebrow raising, affiliative smiles involve lip pressing, and dominance smiles are asymmetrical and contain nose wrinkling and upper-lip raising. A Bayesian-classifier analysis and a detection task revealed that the three smile types are highly distinct. Finally, social judgments made by a separate participant group showed that the different smile types convey different social messages. Our results provide the first detailed description of the physical form and social messages conveyed by these three types of functional smiles and document the versatility of these facial expressions.


Subject(s)
Interpersonal Relations , Object Attachment , Reward , Smiling/psychology , Social Dominance , Social Perception , Adolescent , Adult , Female , Humans , Male , Young Adult
10.
Cereb Cortex ; 26(11): 4123-4135, 2016 Oct 01.
Article in English | MEDLINE | ID: mdl-27550865

ABSTRACT

A key to understanding visual cognition is to determine "where", "when", and "how" brain responses reflect the processing of the specific visual features that modulate categorization behavior, the "what". The N170 is the earliest Event-Related Potential (ERP) that preferentially responds to faces. Here, we demonstrate that a paradigmatic shift is necessary to interpret the N170 as the product of an information processing network that dynamically codes and transfers face features across hemispheres, rather than as a local stimulus-driven event. Reverse-correlation methods coupled with information-theoretic analyses revealed that visibility of the eyes influences face detection behavior. The N170 initially reflects coding of the behaviorally relevant eye contralateral to the sensor, followed by a causal communication of the other eye from the other hemisphere. These findings demonstrate that the deceptively simple N170 ERP hides a complex network information processing mechanism involving initial coding and subsequent cross-hemispheric transfer of visual features.

11.
J Vis ; 17(6): 5, 2017 06 01.
Article in English | MEDLINE | ID: mdl-28593249

ABSTRACT

What makes identification of familiar faces seemingly effortless? Recent studies using unfamiliar face stimuli suggest that selective processing of information conveyed by horizontally oriented spatial frequency components supports accurate performance in a variety of tasks involving matching of facial identity. Here, we studied upright and inverted face discrimination using stimuli with which observers were either unfamiliar or personally familiar (i.e., friends and colleagues). Our results reveal increased sensitivity to horizontal spatial frequency structure in personally familiar faces, further implicating the selective processing of this information in the face processing expertise exhibited by human observers throughout their daily lives.


Subject(s)
Facial Recognition/physiology , Pattern Recognition, Visual/physiology , Recognition, Psychology/physiology , Adult , Face/physiology , Female , Humans , Male , Middle Aged
12.
Neuroimage ; 133: 504-515, 2016 06.
Article in English | MEDLINE | ID: mdl-27033682

ABSTRACT

We develop a novel methodology for the single-trial analysis of multichannel time-varying neuroimaging signals. We introduce the space-by-time M/EEG decomposition, based on Non-negative Matrix Factorization (NMF), which describes single-trial M/EEG signals using a set of non-negative spatial and temporal components that are linearly combined with signed scalar activation coefficients. We illustrate the effectiveness of the proposed approach on an EEG dataset recorded during the performance of a visual categorization task. Our method extracts three temporal and two spatial functional components achieving a compact yet full representation of the underlying structure, which validates and summarizes succinctly results from previous studies. Furthermore, we introduce a decoding analysis that allows determining the distinct functional role of each component and relating them to experimental conditions and task parameters. In particular, we demonstrate that the presented stimulus and the task difficulty of each trial can be reliably decoded using specific combinations of components from the identified space-by-time representation. When comparing with a sliding-window linear discriminant algorithm, we show that our approach yields more robust decoding performance across participants. Overall, our findings suggest that the proposed space-by-time decomposition is a meaningful low-dimensional representation that carries the relevant information of single-trial M/EEG signals.


Subject(s)
Brain Mapping/methods , Electroencephalography/methods , Image Interpretation, Computer-Assisted/methods , Magnetoencephalography/methods , Pattern Recognition, Visual/physiology , Spatio-Temporal Analysis , Visual Cortex/physiology , Algorithms , Female , Humans , Male , Reproducibility of Results , Sensitivity and Specificity , Young Adult
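The space-by-time decomposition described above fits non-negative spatial and temporal components jointly. As a rough illustration only, here is a two-stage approximation with scikit-learn's NMF on synthetic non-negative data (the staged fit, component counts, and array shapes are assumptions, not the published algorithm):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_trials, n_times, n_channels = 40, 50, 30
# Synthetic non-negative single-trial power, shape (trials, times, channels).
X = rng.random((n_trials, n_times, n_channels))

# Stage 1: spatial modules from the (trials*times, channels) unfolding.
spa = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
act = spa.fit_transform(X.reshape(-1, n_channels))   # per-sample activations
W_spa = spa.components_                              # (2, channels) spatial modules

# Stage 2: temporal modules from the activation time courses,
# unfolded as (trials*spatial_components, times).
A = act.reshape(n_trials, n_times, 2).transpose(0, 2, 1).reshape(-1, n_times)
tem = NMF(n_components=3, init="nndsvda", random_state=0, max_iter=500)
H = tem.fit_transform(A)                             # single-trial coefficients
W_tem = tem.components_                              # (3, times) temporal modules
```

Each trial is then summarized by a small grid of spatial-by-temporal coefficients, which is the low-dimensional representation fed to the decoding analysis.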
13.
J Vis ; 16(8): 14, 2016 06 01.
Article in English | MEDLINE | ID: mdl-27305521

ABSTRACT

Visual categorization is the brain computation that reduces high-dimensional information in the visual environment into a smaller set of meaningful categories. An important problem in visual neuroscience is to identify the visual information that the brain must represent and then use to categorize visual inputs. Here we introduce a new mathematical formalism-termed space-by-time manifold decomposition-that describes this information as a low-dimensional manifold separable in space and time. We use this decomposition to characterize the representations used by observers to categorize the six classic facial expressions of emotion (happy, surprise, fear, disgust, anger, and sad). By means of a Generative Face Grammar, we presented random dynamic facial movements on each experimental trial and used subjective human perception to identify the facial movements that correlate with each emotion category. When the random movements projected onto the categorization manifold region corresponding to one of the emotion categories, observers categorized the stimulus accordingly; otherwise they selected "other." Using this information, we determined both the Action Unit and temporal components whose linear combinations lead to reliable categorization of each emotion. In a validation experiment, we confirmed the psychological validity of the resulting space-by-time manifold representation. Finally, we demonstrated the importance of temporal sequencing for accurate emotion categorization and identified the temporal dynamics of Action Unit components that cause typical confusions between specific emotions (e.g., fear and surprise) as well as those resolving these confusions.


Subject(s)
Emotions/physiology , Facial Expression , Movement/physiology , Space Perception/physiology , Time Perception/physiology , Environment , Fear/physiology , Female , Happiness , Humans , Male , Young Adult
14.
Proc Natl Acad Sci U S A ; 109(19): 7241-4, 2012 May 08.
Article in English | MEDLINE | ID: mdl-22509011

ABSTRACT

Since Darwin's seminal works, the universality of facial expressions of emotion has remained one of the longest standing debates in the biological and social sciences. Briefly stated, the universality hypothesis claims that all humans communicate six basic internal emotional states (happy, surprise, fear, disgust, anger, and sad) using the same facial movements by virtue of their biological and evolutionary origins [Susskind JM, et al. (2008) Nat Neurosci 11:843-850]. Here, we refute this assumed universality. Using a unique computer graphics platform that combines generative grammars [Chomsky N (1965) MIT Press, Cambridge, MA] with visual perception, we accessed the mind's eye of 30 Western and Eastern culture individuals and reconstructed their mental representations of the six basic facial expressions of emotion. Cross-cultural comparisons of the mental representations challenge universality on two separate counts. First, whereas Westerners represent each of the six basic emotions with a distinct set of facial movements common to the group, Easterners do not. Second, Easterners represent emotional intensity with distinctive dynamic eye activity. By refuting the long-standing universality hypothesis, our data highlight the powerful influence of culture on shaping basic behaviors once considered biologically hardwired. Consequently, our data open a unique nature-nurture debate across broad fields from evolutionary psychology and social neuroscience to social networking via digital avatars.


Subject(s)
Cross-Cultural Comparison , Emotions , Facial Expression , User-Computer Interface , Asian People/psychology , Cultural Characteristics , Female , Humans , Male , Models, Psychological , Photic Stimulation , Surveys and Questionnaires , Visual Perception , White People/psychology , Young Adult
15.
Psychol Sci ; 25(5): 1087-97, 2014 May 01.
Article in English | MEDLINE | ID: mdl-24604146

ABSTRACT

Research on scene categorization generally concentrates on gist processing, particularly the speed and minimal features with which the "story" of a scene can be extracted. However, this focus has led to a paucity of research into how scenes are categorized at specific hierarchical levels (e.g., a scene could be a road or more specifically a highway); consequently, research has disregarded a potential diagnostically driven feedback process. We presented participants with scenes that were low-pass filtered so only their gist was revealed, while a gaze-contingent window provided the fovea with full-resolution details. By recording where in a scene participants fixated prior to making a basic- or subordinate-level judgment, we identified the scene information accrued when participants made either categorization. We observed a feedback process, dependent on categorization level, that systematically accrues sufficient and detailed diagnostic information from the same scene. Our results demonstrate that during scene processing, a diagnostically driven bidirectional interplay between top-down and bottom-up information facilitates relevant category processing.


Subject(s)
Form Perception/physiology , Pattern Recognition, Visual/physiology , Eye Movements/physiology , Fixation, Ocular/physiology , Humans , Judgment/physiology , Reaction Time/physiology
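The display described in the abstract above, full-resolution detail at the fovea over a low-pass "gist" background, can be sketched with a Gaussian blur and a soft circular window (all sizes and smoothing constants here are illustrative assumptions, not the study's parameters):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_contingent(img, fix, radius, sigma_blur, sigma_edge=3.0):
    """Full-resolution detail inside a foveal window at `fix`; low-pass gist elsewhere."""
    blurred = gaussian_filter(img, sigma_blur)               # low-pass "gist" version
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    d = np.hypot(yy - fix[0], xx - fix[1])                   # distance from fixation
    fovea = 1.0 / (1.0 + np.exp((d - radius) / sigma_edge))  # soft circular window
    return fovea * img + (1.0 - fovea) * blurred

rng = np.random.default_rng(0)
scene = rng.random((128, 128))                               # stand-in scene image
display = gaze_contingent(scene, fix=(64, 64), radius=20, sigma_blur=6)
```

In the actual experiment the window would be re-centered on every gaze sample from the eye tracker, so recording fixation locations amounts to recording where detailed information was accrued.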
16.
Psychol Sci ; 25(5): 1079-86, 2014 May 01.
Article in English | MEDLINE | ID: mdl-24659191

ABSTRACT

Animals use social camouflage as a tool of deceit to increase the likelihood of survival and reproduction. We tested whether humans can also strategically deploy transient facial movements to camouflage the default social traits conveyed by the phenotypic morphology of their faces. We used the responses of 12 observers to create models of the dynamic facial signals of dominance, trustworthiness, and attractiveness. We applied these dynamic models to facial morphologies differing on perceived dominance, trustworthiness, and attractiveness to create a set of dynamic faces; new observers rated each dynamic face according to the three social traits. We found that specific facial movements camouflage the social appearance of a face by modulating the features of phenotypic morphology. A comparison of these facial expressions with those similarly derived for facial emotions showed that social-trait expressions, rather than being simple one-to-one overgeneralizations of emotional expressions, are a distinct set of signals composed of movements from different emotions. Our generative face models represent novel psychophysical laws for social sciences; these laws predict the perception of social traits on the basis of dynamic face identities.


Subject(s)
Emotions/physiology , Face/anatomy & histology , Facial Expression , Sociological Factors , Adolescent , Adult , Female , Humans , Male , Perception , Social Perception , Young Adult
17.
PLoS Biol ; 9(5): e1001064, 2011 May.
Article in English | MEDLINE | ID: mdl-21610856

ABSTRACT

Neural oscillations are ubiquitous measurements of cognitive processes and dynamic routing and gating of information. The fundamental and so far unresolved problem for neuroscience remains to understand how oscillatory activity in the brain codes information for human cognition. In a biologically relevant cognitive task, we instructed six human observers to categorize facial expressions of emotion while we measured the observers' EEG. We combined state-of-the-art stimulus control with statistical information theory analysis to quantify how the three parameters of oscillations (i.e., power, phase, and frequency) code the visual information relevant for behavior in a cognitive task. We make three points: First, we demonstrate that phase codes considerably more information (2.4 times) relating to the cognitive task than power. Second, we show that the conjunction of power and phase coding reflects detailed visual features relevant for behavioral response--that is, features of facial expressions predicted by behavior. Third, we demonstrate, in analogy to communication technology, that oscillatory frequencies in the brain multiplex the coding of visual features, increasing coding capacity. Together, our findings about the fundamental coding properties of neural oscillations will redirect the research agenda in neuroscience by establishing the differential role of frequency, phase, and amplitude in coding behaviorally relevant information in the brain.


Subject(s)
Brain/physiology , Cognition/physiology , Facial Expression , Behavior , Computer Simulation , Electroencephalography , Emotions , Humans , Visual Perception
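Two ingredients of the analysis above, band-limited phase/power extraction and an information estimate relating stimulus categories to oscillatory parameters, can be sketched as follows (the filter settings and the simple binned MI estimator are illustrative assumptions, not the paper's exact pipeline):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_power(x, fs, band):
    """Band-limit a signal, then take instantaneous phase and power via Hilbert."""
    b, a = butter(4, np.asarray(band) / (fs / 2), btype="band")
    analytic = hilbert(filtfilt(b, a, x))
    return np.angle(analytic), np.abs(analytic) ** 2

def discrete_mi(labels, values, n_bins=4):
    """Mutual information (bits) between a discrete label and a quantile-binned response."""
    edges = np.quantile(values, np.linspace(0, 1, n_bins + 1))
    v = np.digitize(values, edges[1:-1])          # bin index 0..n_bins-1
    labs = np.unique(labels)
    joint = np.array([[np.mean((labels == l) & (v == b)) for b in range(n_bins)]
                      for l in labs])
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))
```

Comparing `discrete_mi` computed on phase against power at each time point is the kind of contrast that supports the paper's claim that phase carries more task information than power.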
18.
J Vis ; 14(13): 7, 2014 Nov 10.
Article in English | MEDLINE | ID: mdl-25385898

ABSTRACT

In humans, the N170 event-related potential (ERP) is an integrated measure of cortical activity that varies in amplitude and latency across trials. Researchers often conjecture that N170 variations reflect cortical mechanisms of stimulus coding for recognition. Here, to settle the conjecture and understand cortical information processing mechanisms, we unraveled the coding function of N170 latency and amplitude variations in possibly the simplest socially important natural visual task: face detection. On each experimental trial, 16 observers saw face and noise pictures sparsely sampled with small Gaussian apertures. Reverse-correlation methods coupled with information theory revealed that the presence of the eye specifically covaries with behavioral and neural measurements: the left eye strongly modulates reaction times and lateral electrodes represent mainly the presence of the contralateral eye during the rising part of the N170, with maximum sensitivity before the N170 peak. Furthermore, single-trial N170 latencies code more about the presence of the contralateral eye than N170 amplitudes and early latencies are associated with faster reaction times. The absence of these effects in control images that did not contain a face refutes alternative accounts based on retinal biases or allocation of attention to the eye location on the face. We conclude that the rising part of the N170, roughly 120-170 ms post-stimulus, is a critical time-window in human face processing mechanisms, reflecting predominantly, in a face detection task, the encoding of a single feature: the contralateral eye.


Subject(s)
Visual Evoked Potentials/physiology, Face, Visual Pattern Recognition/physiology, Attention, Electroencephalography, Female, Ocular Fixation/physiology, Humans, Male, Reaction Time, Young Adult
19.
J Exp Psychol Gen; 153(3): 742-753, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38271012

ABSTRACT

Social class is a powerful hierarchy that determines many privileges and disadvantages. People form impressions of others' social class (like other important social attributes) from facial appearance, and these impressions correlate with stereotype judgments. However, what drives these related subjective judgments remains unknown. That is, what makes someone look like they are of higher or lower social class standing (e.g., rich or poor), and how does this relate to harmful or advantageous stereotypes? We addressed these questions using a perception-based, data-driven method to model the specific three-dimensional facial features that drive social class judgments and compared them to those of stereotype-related judgments (competence, warmth, dominance, and trustworthiness), based on participants and face stimuli from White Western culture. Using a complementary data-reduction analysis and machine learning approach, we show that social class judgments are driven by a unique constellation of facial features that reflect multiple embedded stereotypes: poor-looking (vs. rich-looking) faces are wider, shorter, and flatter, with downturned mouths and darker, cooler complexions, mirroring features of incompetent, cold, and untrustworthy-looking (vs. competent, warm, and trustworthy-looking) faces. Our results reveal the specific facial features that underlie the connection between impressions of social class and stereotype-related social traits, with implications for central social perception theories, including understanding the causal links between stereotype knowledge and social class judgments. We anticipate that our results will inform future interventions designed to interrupt biased perception and social inequalities. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
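The machine-learning step in such studies amounts to relating face-feature components to binary judgments and inspecting the recovered weights. The sketch below simulates that logic with a plain gradient-descent logistic regression; the four named components, their true weights, and the judgment rule are invented for illustration and are not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# Hypothetical standardized face components (not the paper's 3D shape model)
features = ["width", "height", "mouth_curl", "lightness"]
X = rng.standard_normal((n, 4))

# Simulated judgments mirroring the reported pattern: wider, shorter faces
# with downturned mouths and darker complexions tend to be judged "poor"
logit = 1.2 * X[:, 0] - 1.0 * X[:, 1] - 0.8 * X[:, 2] - 0.6 * X[:, 3]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)  # 1 = "poor"

# Plain gradient-descent logistic regression recovers the weight signs
w = np.zeros(4)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

print(dict(zip(features, np.round(w, 2))))
```

The sign pattern of the recovered weights (positive width, negative height, mouth curl, and lightness) is the toy analogue of the feature constellation the abstract describes.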


Asunto(s)
Reconocimiento Facial , Estereotipo , Humanos , Percepción Social , Actitud , Juicio , Clase Social , Expresión Facial , Confianza
20.
Curr Biol; 34(1): 213-223.e5, 2024 Jan 8.
Article in English | MEDLINE | ID: mdl-38141619

ABSTRACT

Communicating emotional intensity plays a vital ecological role because it provides valuable information about the nature and likelihood of the sender's behavior.1,2,3 For example, attack often follows signals of intense aggression if receivers fail to retreat.4,5 Humans regularly use facial expressions to communicate such information.6,7,8,9,10,11 Yet how this complex signaling task is achieved remains unknown. We addressed this question using a perception-based, data-driven method to mathematically model the specific facial movements that receivers use to classify the six basic emotions ("happy," "surprise," "fear," "disgust," "anger," and "sad") and judge their intensity in two distinct cultures (East Asian, Western European; total n = 120). In both cultures, receivers expected facial expressions to dynamically represent emotion category and intensity information over time, using a multi-component compositional signaling structure. Specifically, emotion intensifiers peaked earlier or later than emotion classifiers and represented intensity using amplitude variations. Emotion intensifiers were also more similar across emotions than classifiers were, suggesting a latent broad-plus-specific signaling structure. Cross-cultural analysis further revealed similarities and differences in expectations that could impact cross-cultural communication. Specifically, East Asian and Western European receivers have similar expectations about which facial movements represent high intensity for threat-related emotions, such as "anger," "disgust," and "fear," but differ on those that represent low-threat emotions, such as "happy" and "sad." Together, our results provide new insights into the intricate processes by which facial expressions can achieve complex dynamic signaling tasks by revealing the rich information embedded in facial expressions.
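The temporal signaling structure described above (intensifiers peaking at a different time than classifiers, with amplitude coding intensity) can be illustrated with toy activation curves. The specific peak times, widths, and amplitudes below are assumed for illustration, not the paper's fitted values:

```python
import numpy as np

# Normalized signaling time: 0 = expression onset, 1 = apex
t = np.linspace(0.0, 1.0, 101)

def movement(peak_time, amplitude=1.0, width=0.15):
    """Gaussian activation curve for one facial movement over time."""
    return amplitude * np.exp(-(t - peak_time) ** 2 / (2 * width ** 2))

classifier = movement(0.55)                        # category-diagnostic movement
intensifier_low = movement(0.30, amplitude=0.4)    # low perceived intensity
intensifier_high = movement(0.30, amplitude=1.0)   # high perceived intensity

# The intensifier peaks earlier than the classifier, and its amplitude
# (0.4 vs. 1.0) carries the intensity information
print(t[np.argmax(intensifier_high)] < t[np.argmax(classifier)])  # True
```

A "broad-plus-specific" structure would correspond to intensifier curves being shared across emotion categories while each classifier curve stays category-specific.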


Subject(s)
Emotions, Facial Expression, Humans, Anger, Fear, Happiness