Results 1 - 20 of 97
1.
Cereb Cortex; 34(5), 2024 May 02.
Article in English | MEDLINE | ID: mdl-38795358

ABSTRACT

We report an investigation of the neural processes involved in the processing of faces and objects in brain-lesioned patient PS, a well-documented case of pure acquired prosopagnosia. We gathered a substantial dataset of high-density electrophysiological recordings from both PS and neurotypicals. Using representational similarity analysis, we produced time-resolved brain representations in a format that facilitates direct comparisons across time points, individuals, and computational models. To understand how the lesions in PS's ventral stream affect the temporal evolution of her brain representations, we computed their temporal generalization. We found that PS's early brain representations exhibit an unusual similarity to later representations, implying an excessive generalization of early visual patterns. To reveal the underlying computational deficits, we correlated PS's brain representations with those of deep neural networks (DNNs). The computations underlying PS's brain activity resembled the early layers of a visual DNN more closely than did those of controls, whereas neurotypicals' brain representations were more similar to the model's later layers. We confirmed PS's deficits in high-level brain representations by demonstrating that her brain representations exhibited less similarity with those of a semantic DNN.
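The temporal-generalization analysis described above can be illustrated with a short sketch. It assumes simulated data in place of the actual high-density EEG recordings: one representational dissimilarity matrix (RDM) is built per time point, and RDMs are then correlated across all pairs of time points. All array sizes and parameters are illustrative.

```python
# Time-resolved RSA and temporal generalization on simulated EEG data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli, n_channels, n_times = 24, 64, 100
# data[i, :, t]: scalp pattern evoked by stimulus i at time point t
data = rng.standard_normal((n_stimuli, n_channels, n_times))

# One RDM (condensed form) per time point: pairwise correlation distances
# between the scalp patterns evoked by the different stimuli.
rdms = np.array([pdist(data[:, :, t], metric="correlation")
                 for t in range(n_times)])

# Temporal generalization: correlate the RDM at each time point with the RDM
# at every other time point. Unusually high off-diagonal values would indicate
# early representations persisting into later time windows, as reported for PS.
tg = np.array([[spearmanr(rdms[t1], rdms[t2])[0] for t2 in range(n_times)]
               for t1 in range(n_times)])
print(tg.shape)  # (100, 100) time-by-time generalization matrix
```

In the same spirit, the model comparison would correlate each time point's RDM with RDMs computed from the activations of successive DNN layers.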


Subject(s)
Prosopagnosia; Humans; Prosopagnosia/physiopathology; Female; Adult; Brain/physiopathology; Neural Networks, Computer; Middle Aged; Pattern Recognition, Visual/physiology; Male; Models, Neurological
2.
J Vis; 24(1): 7, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-38197738

ABSTRACT

Humans communicate internal states through complex facial movements shaped by biological and evolutionary constraints. Although real-life social interactions are flooded with dynamic signals, current knowledge on facial expression recognition mainly arises from studies using static face images. This experimental bias might stem from previous studies consistently reporting that young adults benefit only minimally from the richer dynamic over static information, whereas children, the elderly, and clinical populations benefit very strongly (Richoz, Jack, Garrod, Schyns, & Caldara, 2015, 2018b). These observations point to a near-optimal facial expression decoding system in young adults, one almost insensitive to the advantage of dynamic over static cues. Surprisingly, no study has yet tested the idea that such evidence might be rooted in a ceiling effect. To this aim, we asked 70 healthy young adults to perform static and dynamic facial expression recognition of the six basic expressions while parametrically and randomly varying the low-level normalized phase and contrast signal (0%-100%) of the faces. As predicted, when 100% face signal was presented, static and dynamic expressions were recognized with equal efficiency, with the exception of those with the most informative dynamics (i.e., happiness and surprise). However, when less signal was available, dynamic expressions were all better recognized than their static counterparts (the dynamic advantage peaking at ∼20% signal). Our data show that facial movements increase our ability to efficiently identify the emotional states of others under the suboptimal visual conditions that can occur in everyday life. Dynamic signals are more effective and sensitive than static ones for decoding all facial expressions of emotion for all human observers.
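The parametric signal manipulation can be sketched as a Fourier phase blend: keep the amplitude spectrum of a face image and mix its phase with random phase in proportion to the desired signal level. This naive linear blend is an assumption for illustration, not the authors' exact normalization procedure.

```python
# Titrating face "signal" (0-1) by blending original and random Fourier phase.
import numpy as np

def phase_blend(image, signal, rng=None):
    """Return an image retaining `signal` proportion of the original phase."""
    rng = rng or np.random.default_rng()
    spectrum = np.fft.fft2(image)
    amplitude, phase = np.abs(spectrum), np.angle(spectrum)
    noise_phase = rng.uniform(-np.pi, np.pi, size=phase.shape)
    blended = signal * phase + (1.0 - signal) * noise_phase
    # Recombine and discard the small imaginary residue left by the blend.
    return np.fft.ifft2(amplitude * np.exp(1j * blended)).real

face = np.random.rand(256, 256)   # stand-in for a normalized face image
stim = phase_blend(face, 0.20)    # ~20% signal, where the dynamic advantage peaked
```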


Subject(s)
Facial Expression; Facial Recognition; Child; Aged; Young Adult; Humans; Emotions; Happiness; Cues
3.
J Exp Child Psychol; 229: 105622, 2023 May.
Article in English | MEDLINE | ID: mdl-36641829

ABSTRACT

In our daily lives, we routinely look at the faces of others to try to understand how they are feeling. Few studies have examined the perceptual strategies that are used to recognize facial expressions of emotion, and none have attempted to isolate visual information use with eye movements throughout development. Therefore, we recorded the eye movements of children from 5 years of age up to adulthood during recognition of the six "basic emotions" to investigate when perceptual strategies for emotion recognition become mature (i.e., most adult-like). Using iMap4, we identified the eye movement fixation patterns for recognition of the six emotions across age groups in natural viewing and gaze-contingent (i.e., expanding spotlight) conditions. While univariate analyses failed to reveal significant differences in fixation patterns, more sensitive multivariate distance analyses revealed a U-shaped developmental trajectory with the eye movement strategies of the 17- to 18-year-old group most similar to adults for all expressions. A developmental dip in strategy similarity was found for each emotional expression revealing which age group had the most distinct eye movement strategy from the adult group: the 13- to 14-year-olds for sadness recognition; the 11- to 12-year-olds for fear, anger, surprise, and disgust; and the 7- to 8-year-olds for happiness. Recognition performance for happy, angry, and sad expressions did not differ significantly across age groups, but the eye movement strategies for these expressions diverged for each group. Therefore, a unique strategy was not a prerequisite for optimal recognition performance for these expressions. Our data provide novel insights into the developmental trajectories underlying facial expression recognition, a critical ability for adaptive social relations.


Subject(s)
Facial Expression; Facial Recognition; Adult; Child; Humans; Adolescent; Eye Movements; Emotions; Anger; Happiness
4.
J Vis; 22(13): 9, 2022 Dec 01.
Article in English | MEDLINE | ID: mdl-36580295

ABSTRACT

Humans show individual differences in neural facial identity discrimination (FID) responses across viewing positions. Critically, these variations have been shown to be reliable over time and to relate directly to observers' idiosyncratic preferences in facial information sampling. This functional signature of facial identity processing might relate to observer-specific diagnostic information processing. Although these individual differences are a valuable source of information for interpreting data, they can also be difficult to isolate when it is not possible to test many conditions. To address this potential issue, we explored whether reducing stimulus size would help decrease these interindividual variations in neural FID. We manipulated the size of face stimuli (covering 3°, 5°, 6.7°, 8.5°, and 12° of visual angle), as well as the fixation location (left eye, right eye, below the nasion, nose, and mouth) while recording electrophysiological responses. Same-identity faces were presented at a base frequency of 6 Hz. Different-identity faces were periodically inserted within this sequence to elicit an objective index of neural FID. Our data show robust and consistent individual differences in neural face identity discrimination across viewing positions for all face sizes. Nevertheless, FID was optimal for the largest number of observers when faces subtended 6.7° of visual angle and fixation was below the nasion. This condition is the best suited to reducing natural interindividual variations in neural FID patterns, defining an important benchmark for measuring neural FID when it is not possible to assess and control for observers' idiosyncrasies.
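The frequency-tagging index described above can be sketched by taking the Fourier spectrum of the recorded EEG and comparing the amplitude at the identity-change frequency with that of the surrounding noise bins. The signal here is synthetic, and the 1.2 Hz oddball rate is an assumption for the demo (the abstract specifies only the 6 Hz base frequency).

```python
# Extracting an FPVS index: SNR at the oddball (identity-change) frequency.
import numpy as np

fs, duration = 512, 60.0                 # sampling rate (Hz), recording length (s)
t = np.arange(0, duration, 1 / fs)
base_f, oddball_f = 6.0, 1.2             # base rate from the abstract; oddball assumed
rng = np.random.default_rng(1)
eeg = (0.5 * np.sin(2 * np.pi * base_f * t)       # base visual response
       + 0.1 * np.sin(2 * np.pi * oddball_f * t)  # identity-discrimination response
       + rng.standard_normal(t.size))             # noise

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# SNR = amplitude at the oddball bin divided by the mean of neighboring bins.
idx = np.argmin(np.abs(freqs - oddball_f))
neighbors = np.r_[spectrum[idx - 12:idx - 2], spectrum[idx + 2:idx + 12]]
print(f"Oddball SNR: {spectrum[idx] / neighbors.mean():.2f}")
```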


Subject(s)
Electroencephalography; Face; Humans; Eye; Photic Stimulation; Pattern Recognition, Visual/physiology
5.
J Vis; 21(12): 1, 2021 Nov 01.
Article in English | MEDLINE | ID: mdl-34724530

ABSTRACT

The human visual system is very fast and efficient at extracting socially relevant information from faces. Visual studies employing foveated faces have consistently reported faster race-categorization response times for other-race compared with same-race faces. However, in everyday life we typically encounter faces outside the foveated visual field. In study 1, we explored whether and how race is categorized extrafoveally in same- and other-race faces normalized for low-level properties by tracking the eye movements of Western Caucasian and East Asian observers in a saccadic response task. The results show not only that people are sensitive to race in faces presented outside central vision, but that the speed advantage in categorizing other-race faces occurs astonishingly quickly, in as little as 200 ms. Critically, this visual categorization process was approximately 300 ms faster than typical button-press responses to centrally presented foveated faces. Study 2 investigated the genesis of the extrafoveal saccadic speed advantage by comparing the influence of response modality (button presses vs. saccadic responses), as well as the potential contribution of the impoverished low-spatial-frequency spectrum that characterizes extrafoveal visual processing. Button-press race categorization was not significantly faster with reconstructed retinal-filtered low-spatial-frequency faces, regardless of the visual field of presentation. The speed of race categorization was significantly boosted only by extrafoveal saccades, not by centrally foveated faces. Race is a potent, rapid, and effective visual signal transmitted by faces and used for the categorization of ingroup/outgroup members. This fast, universal visual categorization can occur outside central vision, igniting a cascade of social processes.
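The retinal-style filtering invoked in study 2 amounts to low-pass filtering faces in the spatial-frequency domain. The sketch below applies a Gaussian low-pass filter specified in cycles per degree; the cutoff and stimulus size are illustrative assumptions, not the authors' retinal-filter parameters.

```python
# Gaussian low-pass filtering of an image, with the cutoff in cycles/degree.
import numpy as np

def lowpass(image, cutoff_cpd, image_size_deg):
    """Attenuate spatial frequencies above `cutoff_cpd` (cycles per degree)."""
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None] * h / image_size_deg  # cycles/deg, vertical
    fx = np.fft.fftfreq(w)[None, :] * w / image_size_deg  # cycles/deg, horizontal
    radius = np.sqrt(fx ** 2 + fy ** 2)
    gain = np.exp(-(radius / cutoff_cpd) ** 2 / 2)        # Gaussian transfer function
    return np.fft.ifft2(np.fft.fft2(image) * gain).real

face = np.random.rand(256, 256)                  # stand-in face image
extrafoveal = lowpass(face, cutoff_cpd=2.0, image_size_deg=8.0)  # assumed values
```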


Subject(s)
Face; Saccades; Humans; Pattern Recognition, Visual; Reaction Time; Visual Perception; White People
6.
J Neurosci; 39(21): 4113-4123, 2019 May 22.
Article in English | MEDLINE | ID: mdl-30867260

ABSTRACT

Eye movements provide a functional signature of how human vision is achieved. Many recent studies have consistently reported robust idiosyncratic visual sampling strategies during face recognition. Whether these interindividual differences are mirrored by idiosyncratic neural responses remains unknown. To this aim, we first tracked the eye movements of male and female observers during face recognition. Additionally, for every observer we obtained an objective index of neural face discrimination through EEG recorded while they fixated different facial information. We found that foveation of the facial features an observer fixated longer during face recognition elicited stronger neural face discrimination responses across all observers. This relationship occurred independently of interindividual differences in preferential facial information sampling (e.g., eye vs. mouth lookers) and started as early as the first fixation. Our data show that eye movements play a functional role during face processing by providing the neural system with the information that is diagnostic to a specific observer. The effective processing of identity involves idiosyncratic, rather than universal, face representations.

SIGNIFICANCE STATEMENT: When engaging in face recognition, observers deploy idiosyncratic fixation patterns to sample facial information. Whether these individual differences concur with idiosyncratic face-sensitive neural responses remains unclear. To address this issue, we recorded observers' fixation patterns, as well as their neural face discrimination responses elicited during fixation of 10 different locations on the face, corresponding to different types of facial information. Our data reveal a clear interplay between individuals' face-sensitive neural responses and their idiosyncratic eye-movement patterns during identity processing, which emerges as early as the first fixation. Collectively, our findings favor the existence of idiosyncratic, rather than universal, face representations.


Subject(s)
Eye Movements; Facial Recognition/physiology; Adult; Attention/physiology; Electroencephalography; Female; Humans; Male
7.
Neuroimage; 189: 468-475, 2019 Apr 01.
Article in English | MEDLINE | ID: mdl-30654176

ABSTRACT

In recent years, much interest has been devoted to understanding how individuals differ in their ability to process facial identity. Fast periodic visual stimulation (FPVS) is a promising technique for obtaining objective and highly sensitive neural correlates of face processing across various populations, from infants to neuropsychological patients. Here, we use FPVS to investigate how neural face identity discrimination varies in amplitude and topography across observers. To characterize inter-individual differences in more detail, we parametrically manipulated the visual input fixated by observers across ten viewing positions (VPs). Specifically, we determined the inter-session reliability of VP-dependent neural face discrimination responses, both across and within observers (6-month inter-session interval). All observers exhibited idiosyncratic VP-dependent neural response patterns, with reliable individual differences in response amplitude for the majority of VPs. Importantly, topographical reliability varied across VPs and observers, most of whom exhibited reliable responses only for specific VPs. Crucially, this topographical reliability was positively correlated with response magnitude over occipito-temporal regions: observers with stronger responses also displayed more reliable response topographies. Our data extend previous findings of idiosyncrasies in visuo-perceptual processing. They highlight the need to consider intra-individual neural response reliability in order to better understand the functional role(s) and underlying basis of such inter-individual differences.


Subject(s)
Electroencephalography/methods; Facial Recognition/physiology; Individuality; Occipital Lobe/physiology; Temporal Lobe/physiology; Adult; Discrimination, Psychological/physiology; Female; Humans; Male; Reproducibility of Results; Young Adult
8.
Psychosom Med; 81(2): 155-164, 2019.
Article in English | MEDLINE | ID: mdl-30702549

ABSTRACT

OBJECTIVE: Impairments in facial emotion recognition are an underlying factor in deficits in emotion regulation and interpersonal difficulties in mental disorders and are evident in eating disorders (EDs). METHODS: We used a computerized psychophysical paradigm to parametrically manipulate the quantity of signal in facial expressions of emotion (QUEST threshold-seeking algorithm). This was used to measure emotion recognition in 308 adult women (anorexia nervosa [n = 61], bulimia nervosa [n = 58], healthy controls [n = 130], and mixed mental disorders [mixed, n = 59]). The mean (SD) age was 22.84 (3.90) years. The aims were to establish recognition thresholds defining how much information a person needs to recognize a facial emotion expression and to identify deficits in EDs compared with healthy and clinical controls. The stimuli included six basic emotion expressions (fear, anger, disgust, happiness, sadness, surprise), plus a neutral expression. RESULTS: Happiness was discriminated at the lowest threshold and fear at the highest by all groups. There were no differences in thresholds between groups, except between the mixed and the bulimia nervosa groups with respect to the expression of disgust (F(3,302) = 5.97, p = .001, η² = .056). Emotional clarity, ED pathology, and depressive symptoms did not predict performance (ΔR² ≤ .010, F(1,305) ≤ 5.74, p ≥ .079). The confusion matrix did not reveal specific biases in any group. CONCLUSIONS: Overall, within-subject effects were as expected, whereas between-subject effects were marginal and psychopathology did not influence emotion recognition. Facial emotion recognition abilities were similar in women with EDs, women with mixed mental disorders, and healthy controls. Although basic facial emotion recognition processes seem to be intact, dysfunctional aspects such as misinterpretation might be important in emotion regulation problems. CLINICAL TRIAL REGISTRATION NUMBER: DRKS-ID: DRKS00005709.
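The QUEST procedure named in the methods can be sketched as a Bayesian update over candidate thresholds: each trial outcome reweights a posterior through a Weibull psychometric likelihood, and the next stimulus is placed at the posterior mode. The slope, chance level, and lapse parameters below are illustrative assumptions.

```python
# A QUEST-style Bayesian threshold-seeking loop (simplified).
import numpy as np

thresholds = np.linspace(1, 100, 400)      # candidate thresholds (% facial signal)
log_posterior = np.zeros_like(thresholds)  # flat prior over thresholds

def p_correct(intensity, threshold, beta=3.5, gamma=1 / 7, lapse=0.01):
    """Weibull psychometric function; gamma = chance with 7 response options."""
    p = 1 - (1 - gamma) * np.exp(-(intensity / threshold) ** beta)
    return np.clip(p, lapse, 1 - lapse)

def next_intensity():
    return thresholds[np.argmax(log_posterior)]  # test at the posterior mode

def update(intensity, correct):
    global log_posterior
    likelihood = p_correct(intensity, thresholds)
    log_posterior += np.log(likelihood if correct else 1 - likelihood)

# One simulated trial: present the current best guess, score it, update.
stim = next_intensity()
update(stim, correct=True)
print(f"Next stimulus: {next_intensity():.1f}% signal")
```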


Subject(s)
Emotional Regulation; Facial Expression; Facial Recognition/physiology; Feeding and Eating Disorders/physiopathology; Social Perception; Adolescent; Adult; Female; Humans; Young Adult
9.
Perception; 48(3): 197-213, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30758252

ABSTRACT

The present study examined whether children with autism spectrum disorder (ASD) and typically developing (TD) children differ in the visual perception of food stimuli at both sensorimotor and affective levels. A potential link between visual perception and food neophobia was also investigated. To these aims, 11 children with ASD and 11 TD children were tested. Visual pictures of food were used, and food neophobia was assessed by the parents. Results revealed that children with ASD visually explored food stimuli longer than TD children did. Complementary analyses revealed that whereas TD children explored multiple-item dishes more than simple-item dishes, children with ASD explored all dishes in a similar way. In addition, children with ASD gave more negative ratings overall. Moreover, hedonic ratings were negatively correlated with food neophobia scores in children with ASD, but not in TD children. In sum, we show here that children with ASD have more difficulty than TD children in liking a food presented visually. Our findings also suggest that time management during the food choice process is a prominent factor that needs to be considered. They also provide new ways of measuring and understanding food neophobia in children with ASD.


Subject(s)
Affect; Autism Spectrum Disorder/complications; Autism Spectrum Disorder/psychology; Phobic Disorders/complications; Phobic Disorders/psychology; Visual Perception; Adolescent; Case-Control Studies; Child; Child, Preschool; Female; Food; Humans; Male; Philosophy; Photic Stimulation
10.
J Deaf Stud Deaf Educ; 24(4): 346-355, 2019 Oct 01.
Article in English | MEDLINE | ID: mdl-31271428

ABSTRACT

We live in a world of rich dynamic multisensory signals. Hearing individuals rapidly and effectively integrate multimodal signals to decode biologically relevant facial expressions of emotion. Yet it remains unclear how facial expressions are decoded by deaf adults in the absence of an auditory sensory channel. We thus compared early and profoundly deaf signers (n = 46) with hearing nonsigners (n = 48) on a psychophysical task designed to quantify their recognition performance for the six basic facial expressions of emotion. Using neutral-to-expression image morphs and noise-to-full-signal images, we quantified the intensity and signal levels required by observers to achieve expression recognition. Using Bayesian modeling, we found that deaf observers require more signal and intensity to recognize disgust, while reaching comparable performance for the remaining expressions. Our results provide a robust benchmark for intensity and signal use in deafness and novel insights into the differential coding of facial expressions of emotion between hearing and deaf individuals.
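The two stimulus dimensions quantified above (expression intensity and signal level) can be approximated in a toy sketch: a neutral-to-expression morph as pixel interpolation, and a noise-to-signal mix. Real morph stimuli warp facial geometry, so this linear blend is only an illustrative assumption about the parameterization.

```python
# Toy parameterization of expression intensity and image signal level.
import numpy as np

def intensity_morph(neutral, expressive, intensity):
    """0.0 = fully neutral face, 1.0 = full-intensity expression."""
    return (1 - intensity) * neutral + intensity * expressive

def signal_mix(face, signal, rng=None):
    """0.0 = pure noise image, 1.0 = full face signal."""
    rng = rng or np.random.default_rng()
    return signal * face + (1 - signal) * rng.random(face.shape)

neutral = np.random.rand(128, 128)   # stand-in images; real stimuli are
disgust = np.random.rand(128, 128)   # photographs morphed with geometric warping
stimulus = signal_mix(intensity_morph(neutral, disgust, 0.6), 0.8)
```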


Subject(s)
Deafness/psychology; Emotions; Facial Expression; Facial Recognition; Adolescent; Adult; Female; Humans; Male; Sign Language; Young Adult
11.
Cogn Neuropsychol; 35(5-6): 304-313, 2018.
Article in English | MEDLINE | ID: mdl-29749293

ABSTRACT

Determining the familiarity of a face and determining its identity have been considered independent processes. Covert face recognition in cases of acquired prosopagnosia, as well as the rapid detection of familiarity, has been taken to support this view. We tested P.S., a well-described case of acquired prosopagnosia, and two healthy controls (her sister and daughter) in two saccadic reaction time (SRT) experiments. Stimuli depicted their family members and well-matched unfamiliar distractors in the context of binary gender or familiarity decisions. Observers' minimum SRTs were estimated with Bayesian approaches. For gender decisions, P.S. and her daughter achieved sufficient performance but displayed different SRT distributions. For familiarity decisions, her daughter exhibited above-chance performance and minimum SRTs corresponding to those reported previously in healthy observers, while P.S. performed at chance. These findings extend previous observations, indicating that decisional space determines performance in both the intact and the impaired face processing system.


Subject(s)
Facial Recognition/physiology; Pattern Recognition, Visual/physiology; Prosopagnosia/diagnosis; Reaction Time/physiology; Adult; Aged; Decision Making; Female; Humans; Middle Aged; Prosopagnosia/pathology; Saccades
12.
J Exp Child Psychol; 174: 41-59, 2018 Oct.
Article in English | MEDLINE | ID: mdl-29906651

ABSTRACT

Behavioral studies investigating facial expression recognition during development have applied various methods to establish by which age emotional expressions can be recognized. Most commonly, these methods employ static images of expressions at their highest intensity (apex) or morphed expressions of different intensities, but they have not previously been compared. Our aim was to (a) quantify the intensity and signal use for recognition of six emotional expressions from early childhood to adulthood and (b) compare both measures and assess their functional relationship to better understand the use of different measures across development. Using a psychophysical approach, we isolated the quantity of signal necessary to recognize an emotional expression at full intensity and the quantity of expression intensity (using neutral expression image morphs of varying intensities) necessary for each observer to recognize the six basic emotions while maintaining performance at 75%. Both measures revealed that fear and happiness were the most difficult and easiest expressions to recognize across age groups, respectively, a pattern already stable during early childhood. The quantity of signal and intensity needed to recognize sad, angry, disgust, and surprise expressions decreased with age. Using a Bayesian update procedure, we then reconstructed the response profiles for both measures. This analysis revealed that intensity and signal processing are similar only during adulthood and, therefore, cannot be straightforwardly compared during development. Altogether, our findings offer novel methodological and theoretical insights and tools for the investigation of the developing affective system.


Subject(s)
Aging/psychology; Emotions; Facial Expression; Facial Recognition; Adolescent; Adult; Bayes Theorem; Child; Child, Preschool; Female; Humans; Male; Photic Stimulation/methods; Young Adult
13.
J Vis; 18(9): 5, 2018 Sep 04.
Article in English | MEDLINE | ID: mdl-30208425

ABSTRACT

The effective transmission and decoding of dynamic facial expressions of emotion is omnipresent and critical for adapted social interactions in everyday life. Thus, common intuition would suggest an advantage for dynamic facial expression recognition (FER) over the static snapshots routinely used in most experiments. However, although many studies have reported an advantage for the recognition of dynamic over static expressions in clinical populations, results obtained from healthy participants are mixed. To clarify this issue, we conducted a large cross-sectional study investigating FER across the life span to determine whether age is a critical factor accounting for such discrepancies. More than 400 observers (age range 5-96) performed recognition tasks of the six basic expressions in static, dynamic, and shuffled (temporally randomized frames) conditions, normalized for the amount of energy sampled over time. We applied a Bayesian hierarchical step-linear model to capture the nonlinear relationship between age and FER in the different viewing conditions. While replicating the typical accuracy profiles of FER, we determined the age at which peak efficiency was reached for each expression and found greater accuracy for most dynamic expressions across the life span. This advantage in the elderly population was driven by a significant decrease in performance for static images, which was twice as large as that of young adults. Our data posit the use of dynamic stimuli as critical in assessing FER in the elderly population, inviting caution when drawing conclusions based solely on static face images.


Subject(s)
Aging/physiology; Emotions; Facial Expression; Facial Recognition/physiology; Adolescent; Adult; Age Factors; Aged; Aged, 80 and over; Bayes Theorem; Child; Child, Preschool; Cross-Sectional Studies; Female; Humans; Male; Middle Aged; Young Adult
14.
J Deaf Stud Deaf Educ; 23(1): 62-70, 2018 Jan 01.
Article in English | MEDLINE | ID: mdl-28977622

ABSTRACT

Previous research has suggested that early deaf signers differ from hearing individuals in face processing. However, which aspects of face processing are changed, and the role that sign language may have played in that change, remain unclear. Here, we compared face categorization (human/non-human) and human face recognition performance in early profoundly deaf signers, hearing signers, and hearing non-signers. In the face categorization task, the three groups performed similarly in terms of both response time and accuracy. However, in the face recognition task, signers (both deaf and hearing) were slower than hearing non-signers to accurately recognize faces, but had a higher accuracy rate. We conclude that sign language experience, not deafness, drives a speed-accuracy trade-off in face recognition (but not in face categorization). This suggests strategic differences in the processing of facial identity for individuals who use a sign language, regardless of their hearing status.


Subject(s)
Deafness/psychology; Facial Recognition; Sign Language; Adult; Analysis of Variance; Deafness/rehabilitation; Female; Hearing Aids; Humans; Male; Middle Aged; Reaction Time/physiology; Young Adult
15.
Proc Biol Sci; 284(1862), 2017 Sep 13.
Article in English | MEDLINE | ID: mdl-28878060

ABSTRACT

Human adults show an attentional bias towards fearful faces, an adaptive behaviour that relies on amygdala function. This attentional bias emerges in infancy between 5 and 7 months, but the underlying developmental mechanism is unknown. To examine possible precursors, we investigated whether 3.5-, 6- and 12-month-old infants show facilitated detection of fearful faces in noise, compared to happy faces. Happy or fearful faces, mixed with noise, were presented to infants (N = 192), paired with pure noise. We applied multivariate pattern analyses to several measures of infant looking behaviour to derive a criterion-free, continuous measure of face detection evidence in each trial. Analyses of the resulting psychometric curves supported the hypothesis of a detection advantage for fearful faces compared to happy faces, from 3.5 months of age and across all age groups. Overall, our data show a readiness to detect fearful faces (compared to happy faces) in younger infants that developmentally precedes the previously documented attentional bias to fearful faces in older infants and adults.
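The psychometric-curve analysis can be sketched by fitting a logistic function to detection evidence as a function of face signal. Everything below is simulated: the study's multivariate looking-behaviour measure is replaced by a binary "detected" outcome purely for illustration.

```python
# Fitting a psychometric detection curve to simulated face-in-noise trials.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Detection probability rising from 0 to 1 around threshold x0."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

rng = np.random.default_rng(2)
levels = np.linspace(0.05, 1.0, 10)        # face-to-noise signal levels
signal = np.repeat(levels, 40)             # 40 trials per level
detected = rng.random(signal.size) < logistic(signal, 0.4, 12.0)

# Aggregate to per-level detection proportions, then fit the curve.
prop = detected.reshape(10, 40).mean(axis=1)
(x0, k), _ = curve_fit(logistic, levels, prop, p0=[0.5, 5.0])
print(f"Detection threshold (50% point): {x0:.2f}")
```

A lower fitted threshold for fearful than for happy faces would correspond to the detection advantage reported above.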


Subject(s)
Attention; Facial Expression; Fear; Happiness; Face; Humans; Infant
16.
Proc Natl Acad Sci U S A; 111(38): 13795-8, 2014 Sep 23.
Article in English | MEDLINE | ID: mdl-25201950

ABSTRACT

The influence of language familiarity upon speaker identification is well established, to such an extent that it has been argued that "Human voice recognition depends on language ability" [Perrachione TK, Del Tufo SN, Gabrieli JDE (2011) Science 333(6042):595]. However, 7-mo-old infants discriminate speakers of their mother tongue better than they do foreign speakers [Johnson EK, Westrek E, Nazzi T, Cutler A (2011) Dev Sci 14(5):1002-1011] despite their limited speech comprehension abilities, suggesting that speaker discrimination may rely on familiarity with the sound structure of one's native language rather than the ability to comprehend speech. To test this hypothesis, we asked Chinese and English adult participants to rate speaker dissimilarity in pairs of sentences in English or Mandarin that were first time-reversed to render them unintelligible. Even in these conditions a language-familiarity effect was observed: Both Chinese and English listeners rated pairs of native-language speakers as more dissimilar than foreign-language speakers, despite their inability to understand the material. Our data indicate that the language familiarity effect is not based on comprehension but rather on familiarity with the phonology of one's native language. This effect may stem from a mechanism analogous to the "other-race" effect in face recognition.
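The core manipulation, time reversal, is easy to reproduce: playing a recording backwards destroys intelligibility while preserving the long-term spectral characteristics of the voice. The file name below is hypothetical.

```python
# Time-reversing a sentence recording to render it unintelligible.
from scipy.io import wavfile

rate, audio = wavfile.read("sentence.wav")   # hypothetical input recording
reversed_audio = audio[::-1].copy()          # reverse along the time axis
wavfile.write("sentence_reversed.wav", rate, reversed_audio)
```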


Subject(s)
Comprehension/physiology; Language; Speech Intelligibility/physiology; Speech Perception/physiology; Adult; Female; Humans; Male
17.
J Vis; 17(5): 16, 2017 May 01.
Article in English | MEDLINE | ID: mdl-28549354

ABSTRACT

In reading, the perceptual span is a well-established concept that refers to the amount of information that can be read in a single fixation. Surprisingly, despite extensive empirical interest in determining the perceptual strategies deployed to process faces, and an ongoing debate regarding the factors or mechanisms underlying efficient face processing, the perceptual span for faces (the Facespan) remains undetermined. To address this issue, we applied the gaze-contingent Spotlight technique in an old-new face recognition paradigm. This procedure allowed us to parametrically vary the amount of facial information available at a fixated location in order to determine the minimal aperture size at which face recognition performance plateaus. As expected, accuracy increased nonlinearly with spotlight aperture size. Analyses of Structural Similarity comparing the information available during spotlight and natural viewing conditions indicate that the Facespan, the minimum spatial extent of preserved facial information yielding performance comparable to natural viewing, encompasses 7° of visual angle in our viewing conditions (face stimulus size: 15.6°; viewing distance: 70 cm), which represents 45% of the face. The present findings provide a benchmark for future investigations addressing whether and how the Facespan is modulated by factors such as cultural, developmental, idiosyncratic, or task-related differences.
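The Structural Similarity comparison can be sketched with scikit-image: build a gaze-contingent spotlight view by applying a Gaussian aperture at fixation and measure its SSIM against the full image. The aperture construction and sizes are assumptions for illustration.

```python
# SSIM between a natural view and a gaze-contingent spotlight view.
import numpy as np
from skimage.metrics import structural_similarity as ssim

face = np.random.rand(256, 256)       # stand-in for a face image
yy, xx = np.mgrid[:256, :256]
cy, cx, sigma = 128, 128, 60.0        # fixation location and aperture size (assumed)
aperture = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
spotlight = face * aperture           # information preserved around fixation

score = ssim(face, spotlight, data_range=1.0)
print(f"SSIM, natural vs. spotlight view: {score:.2f}")
```

Increasing the aperture until recognition performance (and the preserved-information measure) plateaus is the logic behind estimating the Facespan.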


Subject(s)
Face/physiology; Facial Recognition/physiology; Fixation, Ocular/physiology; Visual Perception/physiology; Female; Humans; Male; Psychomotor Performance/physiology; Reaction Time; Saccades/physiology; Young Adult
18.
Behav Res Methods; 49(2): 559-575, 2017 Apr.
Article in English | MEDLINE | ID: mdl-27142836

ABSTRACT

A major challenge in modern eye movement research is to statistically map where observers are looking by isolating significant differences between groups and conditions. Compared with the signals from contemporary neuroscience measures, such as magneto/electroencephalography and functional magnetic resonance imaging, eye movement data are sparser, with much larger spatial variation across trials and participants. As a result, applying a conventional linear modeling approach to two-dimensional fixation distributions often returns unstable estimates and underpowered results, leaving this statistical problem unresolved (Liversedge, Gilchrist, & Everling, 2011). Here, we present a new version of the iMap toolbox (Caldara & Miellet, 2011) that tackles this issue by implementing a statistical framework comparable to those developed in state-of-the-art neuroimaging data-processing toolboxes. iMap4 uses univariate, pixel-wise linear mixed models on smoothed fixation data, with the flexibility to code for multiple between- and within-subjects comparisons and to perform all possible linear contrasts for the fixed effects (main effects, interactions, etc.). Importantly, we also introduce novel nonparametric tests based on resampling to assess statistical significance. Finally, we validated this approach using both experimental and Monte Carlo simulation data. iMap4 is a freely available, open-source MATLAB toolbox for the statistical fixation mapping of eye movement data, with a user-friendly interface providing straightforward, easy-to-interpret statistical graphical outputs. iMap4 matches the standards of robust statistical neuroimaging methods and represents an important step in the data-driven processing of eye movement fixation data, an important field of vision sciences.
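iMap4 itself implements pixel-wise linear mixed models in MATLAB; the simplified two-group Python sketch below conveys only the core pipeline (smooth fixation maps, test every pixel, assess significance nonparametrically with a max-statistic permutation of group labels). All sizes and parameters are illustrative.

```python
# Pixel-wise group comparison of smoothed fixation maps with a permutation test.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
h, w, n_per_group = 64, 64, 20

def fixation_map(n_fix=50):
    """Accumulate random fixations and smooth them into a density map."""
    m = np.zeros((h, w))
    np.add.at(m, (rng.integers(0, h, n_fix), rng.integers(0, w, n_fix)), 1)
    return gaussian_filter(m, sigma=3)

maps = np.stack([fixation_map() for _ in range(2 * n_per_group)])
labels = np.array([0] * n_per_group + [1] * n_per_group)

observed = maps[labels == 0].mean(0) - maps[labels == 1].mean(0)
null_max = np.array([
    np.abs(maps[p == 0].mean(0) - maps[p == 1].mean(0)).max()
    for p in (rng.permutation(labels) for _ in range(500))
])
# Pixels whose |difference| exceeds the 95th percentile of the permutation
# null (the max statistic controls family-wise error) are flagged significant.
sig = np.abs(observed) > np.quantile(null_max, 0.95)
print(f"{sig.sum()} significant pixels")
```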


Subject(s)
Biometry/methods; Eye Movements/physiology; Linear Models; Software; Humans; Monte Carlo Method; Statistics, Nonparametric; User-Computer Interface
19.
Dev Sci; 18(6): 926-39, 2015 Nov.
Article in English | MEDLINE | ID: mdl-25704672

ABSTRACT

Reading the non-verbal cues from faces to infer the emotional states of others is central to our daily social interactions from very early in life. Despite the relatively well-documented ontogeny of facial expression recognition in infancy, our understanding of the development of this critical social skill throughout childhood into adulthood remains limited. To this end, using a psychophysical approach, we implemented the QUEST threshold-seeking algorithm to parametrically manipulate the quantity of signal available in faces (normalized for contrast and luminance) displaying the six emotional expressions plus neutral. We thus determined observers' perceptual thresholds for effective discrimination of each emotional expression from 5 years of age up to adulthood. Consistent with previous studies, happiness was most easily recognized with minimum signal (35% on average), whereas fear required the maximum signal (97% on average) across groups. Overall, recognition improved with age for all expressions except happiness and fear, for which all age groups, including the youngest, remained within the adult range. Uniquely, our findings characterize the recognition trajectories of the six basic emotions into three distinct groupings: expressions that show a steep improvement with age (disgust, neutral, and anger); expressions that show a more gradual improvement with age (sadness, surprise); and those that remain stable from early childhood (happiness and fear), indicating that the coding for these expressions is already mature by 5 years of age. Altogether, our data provide for the first time a fine-grained mapping of the development of facial expression recognition. This approach significantly increases our understanding of the decoding of emotions across development and offers a novel tool to measure impairments for specific facial expressions in developmental clinical populations.


Subject(s)
Emotions/physiology; Facial Expression; Human Development/physiology; Pattern Recognition, Visual/physiology; Adolescent; Age Factors; Analysis of Variance; Bayes Theorem; Child; Face; Female; Humans; Linear Models; Male; Photic Stimulation; Psychometrics; Psychophysics; Young Adult
20.
Proc Natl Acad Sci U S A; 109(19): 7241-4, 2012 May 08.
Article in English | MEDLINE | ID: mdl-22509011

ABSTRACT

Since Darwin's seminal works, the universality of facial expressions of emotion has remained one of the longest-standing debates in the biological and social sciences. Briefly stated, the universality hypothesis claims that all humans communicate six basic internal emotional states (happy, surprise, fear, disgust, anger, and sad) using the same facial movements by virtue of their biological and evolutionary origins [Susskind JM, et al. (2008) Nat Neurosci 11:843-850]. Here, we refute this assumed universality. Using a unique computer graphics platform that combines generative grammars [Chomsky N (1965) MIT Press, Cambridge, MA] with visual perception, we accessed the mind's eye of 30 Western and Eastern culture individuals and reconstructed their mental representations of the six basic facial expressions of emotion. Cross-cultural comparisons of the mental representations challenge universality on two separate counts. First, whereas Westerners represent each of the six basic emotions with a distinct set of facial movements common to the group, Easterners do not. Second, Easterners represent emotional intensity with distinctive dynamic eye activity. By refuting the long-standing universality hypothesis, our data highlight the powerful influence of culture on shaping basic behaviors once considered biologically hardwired. Consequently, our data open a unique nature-nurture debate across broad fields from evolutionary psychology and social neuroscience to social networking via digital avatars.


Subject(s)
Cross-Cultural Comparison; Emotions; Facial Expression; User-Computer Interface; Asian People/psychology; Cultural Characteristics; Female; Humans; Male; Models, Psychological; Photic Stimulation; Surveys and Questionnaires; Visual Perception; White People/psychology; Young Adult