Results 1 - 20 of 29
1.
J Vis; 24(1): 7, 2024 01 02.
Article in English | MEDLINE | ID: mdl-38197738

ABSTRACT

Humans communicate internal states through complex facial movements shaped by biological and evolutionary constraints. Although real-life social interactions are flooded with dynamic signals, current knowledge on facial expression recognition mainly arises from studies using static face images. This experimental bias might stem from previous studies consistently reporting that young adults benefit only minimally from the richer dynamic over static information, whereas children, the elderly, and clinical populations benefit strongly (Richoz, Jack, Garrod, Schyns, & Caldara, 2015, 2018b). These observations point to a near-optimal facial expression decoding system in young adults, almost insensitive to the advantage of dynamic over static cues. Surprisingly, no study has yet tested the idea that such evidence might be rooted in a ceiling effect. To this end, we asked 70 healthy young adults to perform static and dynamic facial expression recognition of the six basic expressions while parametrically and randomly varying the low-level normalized phase and contrast signal (0%-100%) of the faces. As predicted, when 100% face signals were presented, static and dynamic expressions were recognized with equal efficiency, with the exception of those with the most informative dynamics (i.e., happiness and surprise). However, when less signal was available, dynamic expressions were all better recognized than their static counterparts (peaking at ∼20% of signal). Our data show that facial movements increase our ability to efficiently identify the emotional states of others under the suboptimal visual conditions that can occur in everyday life. Dynamic signals are more effective and sensitive than static ones for decoding facial expressions of emotion, across expressions and observers.
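
A common way to implement the phase-signal manipulation described above is to interpolate between an image's original Fourier phase and random phase while keeping its amplitude spectrum fixed. The sketch below (Python/NumPy) illustrates that idea; the linear phase blend, the function name phase_signal, and the rescaling step are illustrative assumptions, not the authors' exact procedure, which also normalized contrast.

```python
import numpy as np

def phase_signal(img, signal=0.2, rng=None):
    """Blend an image's Fourier phase with random phase.

    signal=1.0 leaves the phase intact; signal=0.0 yields pure phase
    noise with the same amplitude spectrum. A simplified sketch, not
    the exact stimulus-generation pipeline used in the study.
    """
    rng = np.random.default_rng() if rng is None else rng
    F = np.fft.fft2(img)
    amplitude, phase = np.abs(F), np.angle(F)
    # Draw random phase from a noise image so it keeps conjugate symmetry
    noise_phase = np.angle(np.fft.fft2(rng.standard_normal(img.shape)))
    # Simple linear interpolation between original and random phase
    mixed = signal * phase + (1.0 - signal) * noise_phase
    out = np.real(np.fft.ifft2(amplitude * np.exp(1j * mixed)))
    # Rescale to [0, 1] for display
    return (out - out.min()) / (out.max() - out.min())

# Example: a 20% phase-signal version of a random test image
face = np.random.default_rng(0).random((256, 256))
degraded = phase_signal(face, signal=0.2)
```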


Subject(s)
Facial Expression; Facial Recognition; Child; Aged; Young Adult; Humans; Emotions; Happiness; Cues
2.
PeerJ Comput Sci; 9: e1516, 2023.
Article in English | MEDLINE | ID: mdl-37705656

ABSTRACT

PyMC is a probabilistic programming library for Python that provides tools for constructing and fitting Bayesian models. It offers an intuitive, readable syntax that is close to the natural syntax statisticians use to describe models. PyMC leverages the symbolic computation library PyTensor, allowing it to be compiled into a variety of computational backends, such as C, JAX, and Numba, which in turn offer access to different computational architectures including CPU, GPU, and TPU. Being a general modeling framework, PyMC supports a variety of models including generalized hierarchical linear regression and classification, time series, ordinary differential equations (ODEs), and non-parametric models such as Gaussian processes (GPs). We demonstrate PyMC's versatility and ease of use with examples spanning a range of common statistical models. Additionally, we discuss the positive role of PyMC in the development of the open-source ecosystem for probabilistic programming.
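
As a concrete illustration of the modeling syntax described above, here is a minimal hierarchical linear regression in PyMC. The synthetic data and all variable names are invented for the example; it is a sketch of typical PyMC usage, not code from the paper.

```python
import numpy as np
import pymc as pm

# Synthetic data: 3 groups, each with its own intercept, shared slope
rng = np.random.default_rng(42)
group = np.repeat(np.arange(3), 20)
x = rng.normal(size=60)
y = np.array([1.0, 2.0, 3.0])[group] + 0.5 * x + rng.normal(0, 0.3, size=60)

with pm.Model() as model:
    # Population-level (hyper)priors
    mu_a = pm.Normal("mu_a", 0.0, 5.0)
    sigma_a = pm.HalfNormal("sigma_a", 2.0)
    # Group-level intercepts, partially pooled toward mu_a
    a = pm.Normal("a", mu=mu_a, sigma=sigma_a, shape=3)
    b = pm.Normal("b", 0.0, 5.0)
    sigma = pm.HalfNormal("sigma", 1.0)
    # Likelihood
    pm.Normal("obs", mu=a[group] + b * x, sigma=sigma, observed=y)
    idata = pm.sample(1000, tune=1000)  # NUTS, compiled via PyTensor
```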

3.
J Exp Child Psychol; 229: 105622, 2023 05.
Article in English | MEDLINE | ID: mdl-36641829

ABSTRACT

In our daily lives, we routinely look at the faces of others to try to understand how they are feeling. Few studies have examined the perceptual strategies that are used to recognize facial expressions of emotion, and none have attempted to isolate visual information use with eye movements throughout development. Therefore, we recorded the eye movements of children from 5 years of age up to adulthood during recognition of the six "basic emotions" to investigate when perceptual strategies for emotion recognition become mature (i.e., most adult-like). Using iMap4, we identified the eye movement fixation patterns for recognition of the six emotions across age groups in natural viewing and gaze-contingent (i.e., expanding spotlight) conditions. While univariate analyses failed to reveal significant differences in fixation patterns, more sensitive multivariate distance analyses revealed a U-shaped developmental trajectory with the eye movement strategies of the 17- to 18-year-old group most similar to adults for all expressions. A developmental dip in strategy similarity was found for each emotional expression revealing which age group had the most distinct eye movement strategy from the adult group: the 13- to 14-year-olds for sadness recognition; the 11- to 12-year-olds for fear, anger, surprise, and disgust; and the 7- to 8-year-olds for happiness. Recognition performance for happy, angry, and sad expressions did not differ significantly across age groups, but the eye movement strategies for these expressions diverged for each group. Therefore, a unique strategy was not a prerequisite for optimal recognition performance for these expressions. Our data provide novel insights into the developmental trajectories underlying facial expression recognition, a critical ability for adaptive social relations.


Subject(s)
Facial Expression; Facial Recognition; Adult; Child; Humans; Adolescent; Eye Movements; Emotions; Anger; Happiness
4.
Comput Toxicol; 21: 100206, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35211661

ABSTRACT

In a century where toxicology and chemical risk assessment are embracing alternative methods to animal testing, there is an opportunity to understand the causal factors of neurodevelopmental disorders such as learning and memory disabilities in children, as a foundation for predicting adverse effects. New testing paradigms, along with advances in probabilistic modelling, can help with the formulation of mechanistically driven hypotheses on how exposure to environmental chemicals could potentially lead to developmental neurotoxicity (DNT). This investigation aimed to develop a Bayesian hierarchical model of a simplified AOP network for DNT. The model predicted the probability that a compound induces each of three selected common key events (CKEs) of the simplified AOP network and the adverse outcome (AO) of DNT, taking into account correlations and causal relations informed by the key event relationships (KERs). A dataset of 88 compounds representing pharmaceuticals, industrial chemicals and pesticides was compiled, including physicochemical properties as well as in silico and in vitro information. The Bayesian model was able to predict DNT potential with an accuracy of 76%, classifying the compounds into low, medium, or high probability classes. The modelling workflow achieved three further goals: it dealt with missing values; accommodated unbalanced and correlated data; and followed the structure of a directed acyclic graph (DAG) to simulate the simplified AOP network. Overall, the model demonstrated the utility of Bayesian hierarchical modelling for the development of quantitative AOP (qAOP) models and for informing the use of new approach methodologies (NAMs) in chemical risk assessment.
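
To make the DAG idea concrete, the sketch below writes a miniature AOP-like network in PyMC (the same Python tooling described in entry 2): an upstream key event depends on a predictor, downstream key events depend on their parents, and the adverse outcome integrates all three. The node names, the single predictor, the logistic links, and the synthetic data are all illustrative assumptions, not the published model.

```python
import numpy as np
import pymc as pm

# Hypothetical data for n compounds: one in vitro potency score plus
# binary calls for three common key events (CKEs) and the adverse outcome
rng = np.random.default_rng(1)
n = 88
score = rng.normal(size=n)
cke = [rng.integers(0, 2, n) for _ in range(3)]
ao = rng.integers(0, 2, n)

with pm.Model() as aop:
    # Upstream key event depends on the in vitro predictor
    b = pm.Normal("b", 0.0, 2.0, shape=2)
    p1 = pm.Deterministic("p1", pm.math.sigmoid(b[0] + b[1] * score))
    pm.Bernoulli("k1", p=p1, observed=cke[0])

    # Downstream key events depend on their parent (the KERs)
    c = pm.Normal("c", 0.0, 2.0, shape=2)
    p2 = pm.Deterministic("p2", pm.math.sigmoid(c[0] + c[1] * p1))
    pm.Bernoulli("k2", p=p2, observed=cke[1])

    d = pm.Normal("d", 0.0, 2.0, shape=2)
    p3 = pm.Deterministic("p3", pm.math.sigmoid(d[0] + d[1] * p2))
    pm.Bernoulli("k3", p=p3, observed=cke[2])

    # Adverse outcome integrates the three key event probabilities
    e = pm.Normal("e", 0.0, 2.0, shape=4)
    p_ao = pm.math.sigmoid(e[0] + e[1] * p1 + e[2] * p2 + e[3] * p3)
    pm.Bernoulli("ao", p=p_ao, observed=ao)

    idata = pm.sample(1000, tune=1000)
```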

5.
Heliyon; 7(5): e07018, 2021 May.
Article in English | MEDLINE | ID: mdl-34041389

ABSTRACT

During real-life interactions, facial expressions of emotion are perceived dynamically with multimodal sensory information. In the absence of auditory sensory channel inputs, it is unclear how facial expressions are recognized and internally represented by deaf individuals. Few studies have investigated facial expression recognition in deaf signers using dynamic stimuli, and none have included all six basic facial expressions of emotion (anger, disgust, fear, happiness, sadness, and surprise) with stimuli fully controlled for their low-level visual properties, leaving unresolved the question of whether a dynamic advantage exists for deaf observers. We hypothesized, in line with the enhancement hypothesis, that the absence of auditory sensory information might have forced the visual system to better process visual (unimodal) signals, and predicted that this greater sensitivity to visual stimuli would result in better recognition performance for dynamic compared to static stimuli, and for deaf signers compared to hearing non-signers in the dynamic condition. To this end, we performed a series of psychophysical studies with deaf signers with early-onset severe-to-profound deafness (dB loss >70) and hearing controls to estimate their ability to recognize the six basic facial expressions of emotion. Using static, dynamic, and shuffled (randomly permuted video frames of an expression) stimuli, we found that deaf observers showed categorization profiles and confusions across expressions similar to those of hearing controls (e.g., confusing surprise with fear). In contrast to our hypothesis, we found no recognition advantage for dynamic compared to static facial expressions for deaf observers. This observation shows that the decoding of dynamic emotional signals from facial expressions is not superior even in the expert visual system of deaf observers, suggesting the existence of optimal signals in static facial expressions of emotion at the apex. Deaf individuals match hearing individuals in the recognition of facial expressions of emotion.

6.
J Deaf Stud Deaf Educ; 24(4): 346-355, 2019 10 01.
Article in English | MEDLINE | ID: mdl-31271428

ABSTRACT

We live in a world of rich dynamic multisensory signals. Hearing individuals rapidly and effectively integrate multimodal signals to decode biologically relevant facial expressions of emotion. Yet, it remains unclear how facial expressions are decoded by deaf adults in the absence of an auditory sensory channel. We thus compared early and profoundly deaf signers (n = 46) with hearing nonsigners (n = 48) on a psychophysical task designed to quantify their recognition performance for the six basic facial expressions of emotion. Using neutral-to-expression image morphs and noise-to-full signal images, we quantified the intensity and signal levels required by observers to achieve expression recognition. Using Bayesian modeling, we found that deaf observers require more signal and intensity to recognize disgust, while reaching comparable performance for the remaining expressions. Our results provide a robust benchmark for the intensity and signal use in deafness and novel insights into the differential coding of facial expressions of emotion between hearing and deaf individuals.


Subject(s)
Deafness/psychology; Emotions; Facial Expression; Facial Recognition; Adolescent; Adult; Female; Humans; Male; Sign Language; Young Adult
7.
Sci Rep; 9(1): 4176, 2019 03 12.
Article in English | MEDLINE | ID: mdl-30862845

ABSTRACT

In the last 20 years, there has been increasing interest in studying visual attentional processes under more natural conditions. In the present study, we aimed to determine the critical age at which children show adult-like performance and attentional control in a visually guided task, in a naturalistic, dynamic, and socially relevant context: road crossing. We monitored visual exploration and crossing decisions in adults and children aged between 5 and 15 while they watched road traffic videos containing a range of traffic densities, with or without pedestrians. 5-10 year old (y/o) children showed less systematic gaze patterns. More specifically, adults and 11-15 y/o children looked mainly at the vehicles' appearing point, which is an optimal location to sample diagnostic information for the task. In contrast, 5-10 y/os looked more at socially relevant stimuli and attended to moving vehicles further down the trajectory when the traffic density was high. Critically, 5-10 y/o children also made an increased number of crossing decisions compared with 11-15 y/os and adults. Our findings reveal a critical shift around 10 y/o in attentional control and crossing decisions in a road crossing task.


Subject(s)
Attention/physiology; Pedestrians; Adolescent; Adult; Algorithms; Child; Child, Preschool; Decision Making; Eye Movements/physiology; Fixation, Ocular/physiology; Humans; Photic Stimulation; Young Adult
8.
J Neurosci; 39(21): 4113-4123, 2019 05 22.
Article in English | MEDLINE | ID: mdl-30867260

ABSTRACT

Eye movements provide a functional signature of how human vision is achieved. Many recent studies have consistently reported robust idiosyncratic visual sampling strategies during face recognition. Whether these interindividual differences are mirrored by idiosyncratic neural responses remains unknown. To this end, we first tracked eye movements of male and female observers during face recognition. Additionally, for every observer we obtained an objective index of neural face discrimination through EEG that was recorded while they fixated different facial information. We found that foveating the facial features that were fixated longer during face recognition elicited stronger neural face discrimination responses across all observers. This relationship occurred independently of interindividual differences in preferential facial information sampling (e.g., eye vs mouth lookers), and started as early as the first fixation. Our data show that eye movements play a functional role during face processing by providing the neural system with the information that is diagnostic for a specific observer. The effective processing of identity involves idiosyncratic, rather than universal, face representations. SIGNIFICANCE STATEMENT: When engaging in face recognition, observers deploy idiosyncratic fixation patterns to sample facial information. Whether these individual differences concur with idiosyncratic face-sensitive neural responses remains unclear. To address this issue, we recorded observers' fixation patterns, as well as their neural face discrimination responses elicited during fixation of 10 different locations on the face, corresponding to different types of facial information. Our data reveal a clear interplay between individuals' face-sensitive neural responses and their idiosyncratic eye-movement patterns during identity processing, which emerges as early as the first fixation. Collectively, our findings favor the existence of idiosyncratic, rather than universal, face representations.


Subject(s)
Eye Movements; Facial Recognition/physiology; Adult; Attention/physiology; Electroencephalography; Female; Humans; Male
9.
Perception; 48(3): 197-213, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30758252

ABSTRACT

The present study examined whether children with autism spectrum disorder (ASD) and typically developing (TD) children differed in their visual perception of food stimuli at both the sensorimotor and affective levels. A potential link between visual perception and food neophobia was also investigated. To these aims, 11 children with ASD and 11 TD children were tested. Visual pictures of food were used, and food neophobia was assessed by the parents. Results revealed that children with ASD visually explored food stimuli longer than TD children did. Complementary analyses revealed that whereas TD children explored multiple-item dishes more (vs. simple-item dishes), children with ASD explored all the dishes in a similar way. In addition, children with ASD gave more negative ratings in general. Moreover, hedonic ratings were negatively correlated with food neophobia scores in children with ASD, but not in TD children. In sum, we show here that children with ASD have more difficulty than TD children in liking a food when it is presented visually. Our findings also suggest that a prominent factor that needs to be considered is time management during the food choice process. They also provide new ways of measuring and understanding food neophobia in children with ASD.


Subject(s)
Affect; Autism Spectrum Disorder/complications; Autism Spectrum Disorder/psychology; Phobic Disorders/complications; Phobic Disorders/psychology; Visual Perception; Adolescent; Case-Control Studies; Child; Child, Preschool; Female; Food; Humans; Male; Philosophy; Photic Stimulation
10.
Psychosom Med; 81(2): 155-164, 2019.
Article in English | MEDLINE | ID: mdl-30702549

ABSTRACT

OBJECTIVE: Impairments in facial emotion recognition are an underlying factor of deficits in emotion regulation and interpersonal difficulties in mental disorders and are evident in eating disorders (EDs). METHODS: We used a computerized psychophysical paradigm to parametrically manipulate the quantity of signal in facial expressions of emotion (QUEST threshold-seeking algorithm). This was used to measure emotion recognition in 308 adult women (anorexia nervosa [n = 61], bulimia nervosa [n = 58], healthy controls [n = 130], and mixed mental disorders [mixed, n = 59]). The mean (SD) age was 22.84 (3.90) years. The aims were to establish recognition thresholds defining how much information a person needs to recognize a facial emotion expression and to identify deficits in EDs compared with healthy and clinical controls. The stimuli included six basic emotion expressions (fear, anger, disgust, happiness, sadness, surprise), plus a neutral expression. RESULTS: Happiness was discriminated at the lowest threshold and fear at the highest by all groups. There were no differences in thresholds between groups, except between the mixed and the bulimia nervosa groups with respect to the expression of disgust (F(3,302) = 5.97, p = .001, ηp² = .056). Emotional clarity, ED pathology, and depressive symptoms did not predict performance (R²change ≤ .010, F(1,305) ≤ 5.74, p ≥ .079). The confusion matrix did not reveal specific biases in any group. CONCLUSIONS: Overall, within-subject effects were as expected, whereas between-subject effects were marginal and psychopathology did not influence emotion recognition. Facial emotion recognition abilities were similar in women with EDs, women with mixed mental disorders, and healthy controls. Although basic facial emotion recognition processes seem to be intact, dysfunctional aspects such as misinterpretation might be important in emotion regulation problems. CLINICAL TRIAL REGISTRATION NUMBER: DRKS-ID: DRKS00005709.
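
The QUEST procedure named in the METHODS maintains a Bayesian posterior over the observer's threshold and places each trial at the current posterior estimate. Below is a stripped-down, grid-based sketch of that logic in plain NumPy; the Weibull form, the 1/7 guess rate (six expressions plus neutral), and all parameter values are assumptions for illustration, not the study's settings.

```python
import numpy as np

def weibull(x, threshold, slope=3.5, guess=1/7, lapse=0.01):
    """Probability of a correct response at signal level x (0-1)."""
    return guess + (1 - guess - lapse) * (1 - np.exp(-(x / threshold) ** slope))

grid = np.linspace(0.01, 1.0, 200)   # candidate thresholds
log_post = np.zeros_like(grid)       # flat prior over the grid

def next_level():
    post = np.exp(log_post - log_post.max())
    return float(np.sum(grid * post) / post.sum())   # posterior mean

def update(level, correct):
    global log_post
    p = weibull(level, grid)                         # likelihood per candidate
    log_post += np.log(p if correct else 1 - p)

# Simulated observer with a true threshold of 0.3
rng = np.random.default_rng(0)
for _ in range(40):
    x = next_level()                      # test at the current estimate
    update(x, rng.random() < weibull(x, 0.3))

print(f"estimated threshold: {next_level():.3f}")
```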


Subject(s)
Emotional Regulation; Facial Expression; Facial Recognition/physiology; Feeding and Eating Disorders/physiopathology; Social Perception; Adolescent; Adult; Female; Humans; Young Adult
11.
J Vis; 18(9): 5, 2018 09 04.
Article in English | MEDLINE | ID: mdl-30208425

ABSTRACT

The effective transmission and decoding of dynamic facial expressions of emotion is omnipresent and critical for adapted social interactions in everyday life. Thus, common intuition would suggest an advantage for dynamic facial expression recognition (FER) over the static snapshots routinely used in most experiments. However, although many studies have reported an advantage in the recognition of dynamic over static expressions in clinical populations, results obtained from healthy participants are mixed. To clarify this issue, we conducted a large cross-sectional study investigating FER across the life span to determine whether age is a critical factor accounting for such discrepancies. More than 400 observers (age range 5-96) performed recognition tasks of the six basic expressions in static, dynamic, and shuffled (temporally randomized frames) conditions, normalized for the amount of energy sampled over time. We applied a Bayesian hierarchical step-linear model to capture the nonlinear relationship between age and FER for the different viewing conditions. Besides replicating the typical accuracy profiles of FER, we determined the age at which peak efficiency was reached for each expression and found greater accuracy for most dynamic expressions across the life span. This advantage in the elderly population was driven by a significant decrease in performance for static images, which was twice as large as that observed in young adults. Our data indicate that dynamic stimuli are critical for assessing FER in the elderly population, inviting caution when drawing conclusions from the sole use of static face images.
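
A step-linear (piecewise linear) relationship between age and recognition accuracy can be expressed in PyMC with segments joined at an inferred breakpoint. The one-breakpoint sketch below is a hedged illustration of the model family; the priors, the single breakpoint, and the synthetic data are assumptions, not the published specification.

```python
import numpy as np
import pymc as pm

# Hypothetical data: accuracy rising in childhood, declining in old age
rng = np.random.default_rng(3)
age = rng.uniform(5, 96, 200)
acc = np.clip(0.9 - 0.006 * np.abs(age - 30) + rng.normal(0, 0.05, 200), 0, 1)

with pm.Model() as step_linear:
    bp = pm.Uniform("bp", 10, 80)        # inferred breakpoint (years)
    b0 = pm.Normal("b0", 0.8, 0.5)       # accuracy at the breakpoint
    s1 = pm.Normal("s1", 0.0, 0.05)      # slope before the breakpoint
    s2 = pm.Normal("s2", 0.0, 0.05)      # slope after the breakpoint
    mu = pm.math.switch(age < bp,
                        b0 + s1 * (age - bp),
                        b0 + s2 * (age - bp))
    sigma = pm.HalfNormal("sigma", 0.1)
    pm.Normal("obs", mu=mu, sigma=sigma, observed=acc)
    idata = pm.sample(1000, tune=1000)
```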


Subject(s)
Aging/physiology; Emotions; Facial Expression; Facial Recognition/physiology; Adolescent; Adult; Age Factors; Aged; Aged, 80 and over; Bayes Theorem; Child; Child, Preschool; Cross-Sectional Studies; Female; Humans; Male; Middle Aged; Young Adult
12.
Am J Hum Biol; 30(6): e23178, 2018 11.
Article in English | MEDLINE | ID: mdl-30251293

ABSTRACT

OBJECTIVES: Recent research on the signal value of masculine physical characteristics in men has focused on the possibility that such characteristics are valid cues of physical strength. However, evidence that sexually dimorphic vocal characteristics are correlated with physical strength is equivocal. Consequently, we undertook a further test for possible relationships between physical strength and masculine vocal characteristics. METHODS: We tested the putative relationships between White UK (N = 115) and Chinese (N = 106) participants' handgrip strength (a widely used proxy for general upper-body strength) and five sexually dimorphic acoustic properties of voices: fundamental frequency (F0), fundamental frequency's SD (F0-SD), formant dispersion (Df), formant position (Pf), and estimated vocal-tract length (VTL). RESULTS: Analyses revealed no clear evidence that stronger individuals had more masculine voices. CONCLUSIONS: Our results do not support the hypothesis that masculine vocal characteristics are a valid cue of physical strength.
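
For reference, the formant-derived measures named above have standard textbook formulations. Formant dispersion is the mean spacing between adjacent formants, and vocal-tract length can be estimated from it under a uniform-tube model; these are the conventional (Fitch-style) definitions and are not necessarily the exact estimators used in this paper.

```latex
D_f = \frac{1}{n-1}\sum_{i=1}^{n-1}\left(F_{i+1}-F_{i}\right)
    = \frac{F_{n}-F_{1}}{n-1},
\qquad
\widehat{\mathrm{VTL}} \approx \frac{c}{2\,D_f},
```

where F_i is the i-th formant frequency and c is the speed of sound in the warm, humid air of the vocal tract (approximately 350 m/s).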


Subject(s)
Hand Strength; Sex Characteristics; Voice Quality; Adult; China/ethnology; Female; Humans; Male; Scotland/ethnology; Young Adult
13.
Psychoneuroendocrinology; 98: 1-5, 2018 12.
Article in English | MEDLINE | ID: mdl-30077864

ABSTRACT

Putative associations between sex hormones and attractive physical characteristics in women are central to many theories of human physical attractiveness and mate choice. Although such theories have become very influential, evidence that physically attractive and unattractive women have different hormonal profiles is equivocal. Consequently, we investigated hypothesized relationships between salivary estradiol and progesterone and two aspects of women's physical attractiveness that are commonly assumed to be correlated with levels of these hormones: facial attractiveness (N = 249) and waist-to-hip ratio (N = 247). Our analyses revealed no compelling evidence that women with more attractive faces or lower (i.e., more attractive) waist-to-hip ratios had higher levels of estradiol or progesterone. One analysis did suggest that women with more attractive waist-to-hip ratios had significantly higher progesterone, but the relationship was weak and not significant in other analyses. These results do not support the influential hypothesis that between-women differences in physical attractiveness are related to estradiol and/or progesterone.


Subject(s)
Choice Behavior/physiology; Marriage/psychology; Sexual Behavior/psychology; Estradiol/analysis; Face; Facial Recognition; Female; Fertility; Humans; Menstrual Cycle/physiology; Physical Appearance, Body/physiology; Progesterone/analysis; Saliva/chemistry; Sex Characteristics; Waist-Hip Ratio/psychology; Young Adult
14.
J Neurosci Methods; 308: 74-87, 2018 10 01.
Article in English | MEDLINE | ID: mdl-29969602

ABSTRACT

BACKGROUND: fMRI provides spatial resolution that is unmatched by other non-invasive neuroimaging techniques. Its temporal dynamics, however, are typically neglected due to the sluggishness of the hemodynamic signal. NEW METHODS: We present temporal multivariate pattern analysis (tMVPA), a method for investigating the temporal evolution of neural representations in fMRI data, computed on single-trial BOLD time courses and leveraging both the spatial and temporal components of the fMRI signal. We implemented an expanding sliding-window approach that allows identifying the time window of an effect. RESULTS: We demonstrate that tMVPA can successfully detect condition-specific multivariate modulations over time, in the absence of mean BOLD amplitude differences. Using Monte Carlo simulations and synthetic data, we quantified the family-wise error rate (FWER) and statistical power. Both at the group and single-subject levels, FWER was either at or significantly below 5%. We reached the desired power with 18 subjects and 12 trials at the group level, and with 14 trials in the single-subject scenario. COMPARISON WITH EXISTING METHODS: We compared the tMVPA statistical evaluation to that of a linear support vector machine (SVM). SVM outperformed tMVPA with large subject and trial numbers. Conversely, tMVPA, leveraging single-trial analyses, outperformed SVM with low subject and trial numbers and in the single-subject scenario. CONCLUSION: Recent evidence suggesting that the BOLD signal carries finer-grained temporal information than previously thought advocates the need for analytical tools, such as tMVPA, tailored to investigate BOLD temporal dynamics. The comparable performance of tMVPA and SVM, a powerful and reliable tool for fMRI, supports the validity of our technique.
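
The expanding sliding-window idea can be sketched in a few lines of NumPy: start from a short window at trial onset, grow it one time point at a time, and compute a multivariate dissimilarity between the two conditions' patterns at each window size; the earliest window with a reliable effect dates the effect. The Euclidean distance on condition-mean patterns, the array shapes, and all names below are illustrative assumptions, not the authors' implementation, which operates on single-trial dissimilarities with permutation-based statistics.

```python
import numpy as np

def expanding_window_mvpa(cond_a, cond_b, min_len=2):
    """cond_a, cond_b: (trials, voxels, timepoints) single-trial BOLD courses.

    Returns one dissimilarity per expanding window [0, t).
    """
    n_time = cond_a.shape[-1]
    scores = []
    for t in range(min_len, n_time + 1):
        # Average each trial's time course within the current window,
        # yielding one spatial pattern per trial
        pat_a = cond_a[..., :t].mean(axis=-1)   # (trials, voxels)
        pat_b = cond_b[..., :t].mean(axis=-1)
        # Multivariate distance between the condition-mean patterns
        scores.append(np.linalg.norm(pat_a.mean(0) - pat_b.mean(0)))
    return np.array(scores)

# Hypothetical data: 12 trials, 50 voxels, 16 time points per condition
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, (12, 50, 16))
b = rng.normal(0.2, 1.0, (12, 50, 16))
print(expanding_window_mvpa(a, b).round(2))
```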


Subject(s)
Brain Mapping/methods; Brain/physiology; Magnetic Resonance Imaging; Adolescent; Adult; Data Interpretation, Statistical; Female; Humans; Male; Multivariate Analysis; Support Vector Machine; Time Factors; Young Adult
15.
J Exp Child Psychol; 174: 41-59, 2018 10.
Article in English | MEDLINE | ID: mdl-29906651

ABSTRACT

Behavioral studies investigating facial expression recognition during development have applied various methods to establish by which age emotional expressions can be recognized. Most commonly, these methods employ static images of expressions at their highest intensity (apex) or morphed expressions of different intensities, but they have not previously been compared. Our aim was to (a) quantify the intensity and signal use for recognition of six emotional expressions from early childhood to adulthood and (b) compare both measures and assess their functional relationship to better understand the use of different measures across development. Using a psychophysical approach, we isolated the quantity of signal necessary to recognize an emotional expression at full intensity and the quantity of expression intensity (using neutral expression image morphs of varying intensities) necessary for each observer to recognize the six basic emotions while maintaining performance at 75%. Both measures revealed that fear and happiness were the most difficult and easiest expressions to recognize across age groups, respectively, a pattern already stable during early childhood. The quantity of signal and intensity needed to recognize sad, angry, disgust, and surprise expressions decreased with age. Using a Bayesian update procedure, we then reconstructed the response profiles for both measures. This analysis revealed that intensity and signal processing are similar only during adulthood and, therefore, cannot be straightforwardly compared during development. Altogether, our findings offer novel methodological and theoretical insights and tools for the investigation of the developing affective system.
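
Holding performance at 75% correct, as described above, implicitly defines a threshold on the observer's psychometric function. One standard way to recover such a threshold offline is to fit a psychometric function and invert it at the target level; the logistic form, the 1/6 guess rate for a six-alternative task, and the pooled data below are assumptions for illustration, not the study's procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

GUESS, LAPSE = 1 / 6, 0.02  # assumed guess and lapse rates

def psychometric(x, alpha, beta):
    """Logistic psychometric function with fixed guess and lapse rates."""
    return GUESS + (1 - GUESS - LAPSE) / (1 + np.exp(-(x - alpha) / beta))

# Hypothetical pooled data: proportion correct at each signal intensity
intensity = np.linspace(0.05, 1.0, 10)
p_correct = np.array([0.18, 0.20, 0.31, 0.45, 0.62,
                      0.75, 0.85, 0.90, 0.94, 0.95])

(alpha, beta), _ = curve_fit(psychometric, intensity, p_correct, p0=[0.5, 0.1])

# Invert the fitted curve at the 75%-correct target
target = 0.75
x75 = alpha - beta * np.log((1 - GUESS - LAPSE) / (target - GUESS) - 1)
print(f"75%-correct threshold: {x75:.3f}")
```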


Subject(s)
Aging/psychology; Emotions; Facial Expression; Facial Recognition; Adolescent; Adult; Bayes Theorem; Child; Child, Preschool; Female; Humans; Male; Photic Stimulation/methods; Young Adult
16.
Cogn Neuropsychol; 35(5-6): 304-313, 2018.
Article in English | MEDLINE | ID: mdl-29749293

ABSTRACT

Determining the familiarity and the identity of a face have been considered independent processes. Covert face recognition in cases of acquired prosopagnosia, as well as the rapid detection of familiarity, has been taken to support this view. We tested P.S., a well-described case of acquired prosopagnosia, and two healthy controls (her sister and daughter) in two saccadic reaction time (SRT) experiments. Stimuli depicted their family members and well-matched unfamiliar distractors in the context of binary gender or familiarity decisions. Observers' minimum SRTs were estimated with Bayesian approaches. For gender decisions, P.S. and her daughter achieved sufficient performance but displayed different SRT distributions. For familiarity decisions, her daughter exhibited above-chance performance and minimum SRTs corresponding to those reported previously in healthy observers, while P.S. performed at chance. These findings extend previous observations, indicating that decisional space determines performance in both the intact and impaired face processing systems.


Subject(s)
Facial Recognition/physiology; Pattern Recognition, Visual/physiology; Prosopagnosia/diagnosis; Reaction Time/physiology; Adult; Aged; Decision Making; Female; Humans; Middle Aged; Prosopagnosia/pathology; Saccades
17.
Neuropsychology; 32(2): 123-137, 2018 02.
Article in English | MEDLINE | ID: mdl-29528679

ABSTRACT

OBJECTIVE: Recent evidence showed that individuals with congenital face processing impairment (congenital prosopagnosia [CP]) are highly accurate when they have to recognize their own face (self-face advantage) in an implicit matching task, with a preference for the right half of the self-face (right perceptual bias). Yet the perceptual strategies underlying this advantage are unclear. Here, we aimed to verify whether both the self-face advantage and the right perceptual bias emerge in an explicit task, and whether those effects are linked to different scanning strategies for the self-face and unfamiliar faces. METHOD: Eye movements were recorded from 7 CPs and 13 controls during a self/other discrimination task with stimuli depicting the self-face and an unfamiliar face, presented upright and inverted. RESULTS: Individuals with CP and controls differed significantly in how they explored faces. In particular, compared with controls, CPs used a distinct eye movement sampling strategy for processing inverted faces, deploying significantly more fixations toward the nose and mouth areas, which resulted in more efficient recognition. Moreover, the results confirmed the presence of a self-face advantage in both groups, but the eye movement analyses failed to reveal any differences in the exploration of the self-face compared with the unfamiliar face. Finally, no bias toward the right half of the self-face was found. CONCLUSIONS: Our data suggest that the self-face advantage emerges in both implicit and explicit recognition tasks, in CPs as much as in good recognizers, and is not linked to any specific visual exploration strategy.


Subject(s)
Eye Movements; Facial Recognition; Prosopagnosia/psychology; Adult; Discrimination, Psychological; Face; Female; Fixation, Ocular; Humans; Male; Prosopagnosia/congenital; Psychomotor Performance; Reaction Time; Recognition, Psychology; Visual Fields; Young Adult
18.
J Deaf Stud Deaf Educ; 23(1): 62-70, 2018 01 01.
Article in English | MEDLINE | ID: mdl-28977622

ABSTRACT

Previous research has suggested that early deaf signers differ from hearing individuals in face processing. Which aspects of face processing change, and what role sign language may have played in that change, however, remain unclear. Here, we compared face categorization (human/non-human) and human face recognition performance in early profoundly deaf signers, hearing signers, and hearing non-signers. In the face categorization task, the three groups performed similarly in terms of both response time and accuracy. However, in the face recognition task, signers (both deaf and hearing) were slower than hearing non-signers to accurately recognize faces, but had a higher accuracy rate. We conclude that sign language experience, but not deafness, drives a speed-accuracy trade-off in face recognition (but not face categorization). This suggests strategic differences in the processing of facial identity for individuals who use a sign language, regardless of their hearing status.


Subject(s)
Deafness/psychology; Facial Recognition; Sign Language; Adult; Analysis of Variance; Deafness/rehabilitation; Female; Hearing Aids; Humans; Male; Middle Aged; Reaction Time/physiology; Young Adult
20.
Soc Cogn Affect Neurosci; 12(12): 1959-1971, 2017 12 01.
Article in English | MEDLINE | ID: mdl-29040780

ABSTRACT

The rapid extraction of facial identity and emotional expressions is critical for adapted social interactions. These biologically relevant abilities have been associated with early neural responses on the face-sensitive N170 component. However, whether all facial expressions uniformly modulate the N170, and whether this effect occurs only when emotion categorization is task-relevant, is still unclear. To clarify this issue, we recorded high-resolution electrophysiological signals while 22 observers perceived the six basic expressions plus neutral. We used a repetition suppression paradigm, with an adaptor followed by a target face displaying the same identity and expression (trials of interest). We also included catch trials, to which participants had to react, in which identity (identity task), expression (expression task), or both (dual task) varied on the target face. We extracted single-trial Repetition Suppression (stRS) responses using a data-driven spatiotemporal approach with a robust hierarchical linear model to isolate adaptation effects on the trials of interest. Regardless of the task, fear was the only expression modulating the N170, eliciting the strongest stRS responses. This observation was corroborated by distinct behavioral performance during the catch trials for this facial expression. Altogether, our data reinforce the view that fear elicits distinct neural processes in the brain, enhancing attention and facilitating the early coding of faces.


Subject(s)
Face; Fear/psychology; Adult; Electroencephalography; Facial Expression; Female; Humans; Linear Models; Male; Photic Stimulation; Psychomotor Performance; Young Adult