Results 1-20 of 35
1.
Diabetes Obes Metab ; 26(8): 3299-3305, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38757537

ABSTRACT

AIMS: To describe the development and report the first-stage validation of a digital version of the digit symbol substitution test (DSST), for assessment of cognitive function in older people with diabetes. MATERIALS AND METHODS: A multidisciplinary team of experts was convened to conceptualize and build a digital version of the DSST and develop a machine-learning (ML) algorithm to analyse the inputs. One hundred individuals with type 2 diabetes (aged ≥ 60 years) were invited to participate in a one-time meeting in which both the digital and the pencil-and-paper (P&P) versions of the DSST were administered. Information pertaining to demographics, laboratory measurements, and diabetes indices was collected. The correlation between the digital and P&P versions of the test was determined. Additionally, as part of the validation process, the performance of the digital version in people with and without known risk factors for cognitive impairment was analysed. RESULTS: The ML model yielded an overall accuracy of 89.1%. A strong correlation was found between the P&P and digital versions (r = 0.76, p < 0.001) of the DSST, as well as between the ML model and the manual reading of the digital DSST (r = 0.99, p < 0.001). CONCLUSIONS: This study describes the development of and provides first-stage validation data for a newly developed digital cognitive assessment tool that may be used for screening and surveillance of cognitive function in older people with diabetes. More studies are needed to further validate this tool, especially when self-administered and in different clinical settings.


Subject(s)
Cognition; Diabetes Mellitus, Type 2; Humans; Aged; Female; Male; Diabetes Mellitus, Type 2/complications; Diabetes Mellitus, Type 2/psychology; Middle Aged; Cognition/physiology; Reproducibility of Results; Cognitive Dysfunction/diagnosis; Cognitive Dysfunction/etiology; Neuropsychological Tests; Aged, 80 and over; Machine Learning
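The first-stage validation above rests on correlating scores from the two test versions (pencil-and-paper vs. digital DSST, r = 0.76) and the ML model against manual reading (r = 0.99). A minimal sketch of that computation, using entirely hypothetical paired scores rather than the study's data:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical paired scores: pencil-and-paper vs. digital DSST
paper   = [42, 55, 38, 61, 47, 50, 33, 58]
digital = [40, 57, 35, 63, 45, 52, 30, 60]
print(round(pearson_r(paper, digital), 3))
```

In a real validation one would also report a significance test and, for agreement between two versions of the same instrument, a Bland-Altman analysis or intraclass correlation rather than Pearson's r alone.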
2.
Front Psychol ; 15: 1287952, 2024.
Article in English | MEDLINE | ID: mdl-38770252

ABSTRACT

Individuals with Parkinson's disease (PD) may exhibit impaired emotion perception. However, research demonstrating this decline has been based almost entirely on the recognition of isolated emotional cues. In real life, emotional cues such as expressive faces are typically encountered alongside expressive bodies. The current study investigated emotion perception in individuals with PD (n = 37) using emotionally incongruent composite displays of facial and body expressions, as well as isolated face and body expressions, with congruent composite displays as a baseline. In addition to a group of healthy controls (HC; n = 50), we included a control group of individuals with schizophrenia (SZ; n = 30), who, like individuals with PD, display motor symptoms and decreased emotion perception abilities. The results show that individuals with PD had an increased tendency to categorize incongruent face-body combinations in line with the body emotion, whereas HC participants tended to classify them in line with the facial emotion. No consistent pattern of prioritizing the face or body was found in individuals with SZ. These results were not explained by the emotional recognition of the isolated cues, or by the cognitive status, depression, or motor symptoms of individuals with PD and SZ. As real-life expressions may include inconsistent cues in the body and face, these findings may have implications for the way individuals with PD and SZ interpret the emotions of others.

3.
Child Neuropsychol ; 29(1): 115-135, 2023 01.
Article in English | MEDLINE | ID: mdl-35545855

ABSTRACT

Following mild traumatic brain injury (mTBI), children usually experience one or more somatic, cognitive, and/or emotional-behavioral post-concussion symptoms (PCS). PCS may be transient; however, for some children, persistent post-concussion symptoms (PPCS) might linger for months or years. Identifying risk factors for PPCS may allow earlier interventions for patients at greater risk. We examined pre-injury social difficulties and acute stress reaction as risk factors for PPCS in children. Participants were 83 children (aged 8-16) with mTBI. In a prospective follow-up, pre-injury social difficulties, 24-hour post-concussion symptoms, and acute stress reactions were tested as predictors of PCS reports at one week and four months. Parent reports, self-reports, and neurocognitive tests were employed. One-week PCS level was associated with acute stress, but not with 24-hour post-concussion symptoms or pre-injury social difficulties. Four-month PCS level was predicted by pre-injury social difficulties and 24-hour post-concussion symptoms, with no contribution of acute stress. Interestingly, fewer symptoms at 24 hours post-injury were associated with a higher level of PCS at four months. Cognitive functioning at four months was predicted by acute stress, with no contribution of 24-hour post-concussion symptoms or pre-injury social difficulties. Cognitive functioning did not differ between children with and without PPCS. In conclusion, non-injury, socio-emotional factors (pre-injury social difficulties, acute stress) should be considered alongside injury-related factors in predicting recovery from mTBI. Pre-injury social difficulties and stress reactions to the traumatic event might pose an emotional burden and limit a child's social support during recovery, and thus require clinical attention in children following mTBI.


Subject(s)
Brain Concussion; Post-Concussion Syndrome; Humans; Child; Post-Concussion Syndrome/diagnosis; Prospective Studies; Brain Concussion/psychology; Risk Factors; Cognition
4.
Clin Neuropsychol ; 37(7): 1389-1409, 2023 10.
Article in English | MEDLINE | ID: mdl-36416168

ABSTRACT

Background: Acute stress following mild traumatic brain injury (mTBI) is highly prevalent and associated with persistent post-concussion symptoms (PPCS). However, the mechanism mediating this relationship is understudied. Objective: To examine whether parental accommodation (i.e., parents' attempts to adjust the environment to the child's difficulties) and the child's coping strategies mediate the association between acute stress and PPCS in children following mTBI. Method: Participants were 58 children aged 8-16 who sustained an mTBI, and their parents. Children's acute stress (one week post-injury) and coping strategies (three weeks post-injury), and parental accommodation (three weeks and four months post-injury) were assessed. Outcome measures included PPCS (four months post-injury) and neuropsychological tests of cognitive functioning (attention and memory). A baseline for PPCS was obtained from a retrospective report of pre-injury symptoms immediately after the injury. Results: Children's acute stress, negative (escape-oriented) coping strategies, and four-month parental accommodation were significantly related to PPCS. Acute stress predicted PPCS and attention and memory performance. Parental accommodation significantly mediated the association between acute stress and PPCS. Conclusions: Stress plays an important role in children's recovery from mTBI and PPCS. Parental accommodation mediates this relationship; thus, clinical attention to parental reactions during recovery is needed.


Subject(s)
Brain Concussion; Post-Concussion Syndrome; Humans; Child; Post-Concussion Syndrome/etiology; Retrospective Studies; Prospective Studies; Neuropsychological Tests; Brain Concussion/diagnosis; Parents/psychology; Adaptation, Psychological
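The mediation result in this abstract (parental accommodation mediating the link from acute stress to PPCS) is the kind of finding usually quantified as a regression-based indirect effect, a·b. A rough, self-contained sketch with synthetic data; the variable names and effect sizes below are invented for illustration and are not the study's values or method:

```python
import random

def ols(X, y):
    """Least-squares coefficients for y ~ X (each row of X includes an
    intercept term 1), solved from the normal equations by Gauss-Jordan."""
    n, k = len(X), len(X[0])
    # Build the augmented system [X'X | X'y]
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
         + [sum(X[r][i] * y[r] for r in range(n))] for i in range(k)]
    for i in range(k):
        piv = A[i][i]
        A[i] = [v / piv for v in A[i]]
        for j in range(k):
            if j != i:
                f = A[j][i]
                A[j] = [u - f * v for u, v in zip(A[j], A[i])]
    return [A[i][k] for i in range(k)]

random.seed(1)
x = [random.gauss(0, 1) for _ in range(500)]                 # "acute stress" (hypothetical)
m = [0.6 * xi + random.gauss(0, 1) for xi in x]              # "parental accommodation"
y = [0.5 * mi + 0.1 * xi + random.gauss(0, 1)
     for xi, mi in zip(x, m)]                                # "PPCS"

a = ols([[1, xi] for xi in x], m)[1]                         # a-path: M ~ X
b = ols([[1, xi, mi] for xi, mi in zip(x, m)], y)[2]         # b-path: Y ~ X + M
print("indirect effect a*b =", round(a * b, 2))
```

In practice the indirect effect would be tested with bootstrap confidence intervals (or a Sobel test) rather than reported as a point estimate alone.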
5.
Emotion ; 22(6): 1394-1399, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36006704

ABSTRACT

A basic premise of classic emotion theories is that distinct emotional experiences yield distinct emotional vocalizations-each informative of its situational context. Furthermore, it is commonly assumed that emotional vocalizations become more distinct and diagnostic as their intensity increases. Critically, these theoretical assumptions largely rely on research utilizing posed vocal reactions of actors, which may be overly simplified and stereotypical. While recent work suggests that intense, real-life vocalizations may be nondiagnostic, the exact way in which increasing degrees of situational intensity affect the perceived valence of real-life versus posed expressions remains unknown. Here we compared real-life and posed vocalizations to winning increasing amounts of money in the lottery. Results show that while posed vocalizations are perceived as positive for both low- and high-sum wins, real-life vocalizations are perceived as positive only for low-sum wins, but as negative for high-sum wins. These findings demonstrate the potential gaps between real-life and posed expressions and highlight the role of situational intensity in driving perceptual ambiguity for real-life emotional expressions. (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subject(s)
Emotions; Voice; Humans
6.
Emotion ; 22(5): 844-860, 2022 Aug.
Article in English | MEDLINE | ID: mdl-32658507

ABSTRACT

Facial expression recognition relies on the processing of diagnostic information from different facial regions. For example, successful recognition of anger versus disgust requires one to process information located in the eye/brow region or in the mouth/nose region, respectively. Yet how this information is extracted from the face is less clear. One widespread view, supported by cross-cultural experiments as well as neuropsychological case studies, is that the distribution of gaze fixations on specific diagnostic regions plays a critical role in the extraction of affective information. According to this view, emotion recognition is strongly related to the distribution of fixations to diagnostic regions. Alternatively, facial expression recognition may not rely merely on the exact patterns of fixations, but rather on other factors such as the processing of extrafoveal information. In the present study, we examined this matter by characterizing and using individual differences in fixation distributions during facial expression recognition. We identified four groups of observers that differed in their distribution of fixations toward face regions in a robust and consistent manner. In line with previous studies, we found that different facial emotion categories evoked distinct distributions of fixations according to their diagnostic facial regions. However, individual distinctive patterns of fixations were not correlated with emotion recognition: individuals who strongly focused on the eyes, or on the mouth, achieved comparable emotion recognition accuracy. These findings suggest that extrafoveal processing may play a larger role in emotion recognition from faces than previously assumed. Consequently, successful emotion recognition can arise from diverse patterns of fixations. (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subject(s)
Emotions; Facial Expression; Facial Recognition; Emotions/physiology; Facial Recognition/physiology; Fixation, Ocular; Humans; Recognition, Psychology/physiology
7.
Emotion ; 22(4): 641-652, 2022 Jun.
Article in English | MEDLINE | ID: mdl-32437176

ABSTRACT

Perceiving emotional expressions automatically triggers a tendency to react with a matching facial expression. Although it is considered fundamental for healthy social interactions, the mechanism behind it is unclear. One prevalent explanation suggests that perceiving emotional expressions induces emotions in the observer and that it is these emotions that elicit the facial reactions. This study directly tested this hypothesis, investigating whether emotion elicitation is what drives the effect. Two experiments used a facial stimulus-response compatibility (SRC) paradigm-a widely used measure of the tendency to facially match emotional expressions-in which the irrelevant stimuli were happy and angry body postures. Reaction times were measured using facial electromyography. Experiment 1 replicated the known SRC effect to body postures using a simpler task with only one, prespecified, response. This established a novel variant of the paradigm in which the facial effects cannot be attributed to motor matching or response selection and which focuses specifically on the automatic components of the effect. Experiment 2 then added to this paradigm a habituation protocol and self-report ratings of affective valence. Results indicated that emotional body postures elicited limited emotional reactions, which were further habituated following repeated presentations. However, the facial SRC effect did not undergo such habituation, suggesting that reducing emotional reaction to observed expressions does not reduce the tendency to match those expressions. Our findings do not support the emotion elicitation hypothesis and suggest that automatic facial reactions to emotional body postures are not driven by emotional reactions to the stimuli. (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subject(s)
Emotions; Facial Expression; Anger/physiology; Emotions/physiology; Face; Happiness; Humans
8.
J Gerontol B Psychol Sci Soc Sci ; 77(1): 84-93, 2022 01 12.
Article in English | MEDLINE | ID: mdl-33842959

ABSTRACT

OBJECTIVES: It is commonly argued that older adults show difficulties in standardized tasks of emotional expression perception, yet most previous works relied on classic sets of static, decontextualized, and stereotypical facial expressions. In real life, facial expressions are dynamic and embedded in a rich context, 2 key factors that may aid emotion perception. Specifically, body language provides important affective cues that may disambiguate facial movements. METHOD: We compared emotion perception of dynamic faces, bodies, and their combination in a sample of older (age 60-83, n = 126) and young (age 18-30, n = 124) adults. We used the Geneva Multimodal Emotion Portrayals set, which includes a full view of expressers' faces and bodies, displaying a diverse range of positive and negative emotions, portrayed dynamically and holistically in a nonstereotypical, unconstrained manner. Critically, we digitally manipulated the dynamic cue such that perceivers viewed isolated faces (without bodies), isolated bodies (without faces), or faces with bodies. RESULTS: Older adults showed better perception of positive and negative dynamic facial expressions, while young adults showed better perception of positive isolated dynamic bodily expressions. Importantly, emotion perception of faces with bodies was comparable across ages. DISCUSSION: Dynamic emotion perception in young and older adults may be more similar than previously assumed, especially when the task is more realistic and ecological. Our results emphasize the importance of contextualized and ecological tasks in emotion perception across ages.


Subject(s)
Aging/physiology; Emotions/physiology; Facial Recognition/physiology; Kinesics; Social Perception; Adolescent; Adult; Age Factors; Aged; Aged, 80 and over; Facial Expression; Female; Humans; Male; Middle Aged; Young Adult
9.
Affect Sci ; 2(2): 163-170, 2021 Jun.
Article in English | MEDLINE | ID: mdl-36043174

ABSTRACT

Semantic emotional labels can influence the recognition of isolated facial expressions. However, it is unknown if labels also influence the susceptibility of facial expressions to context. To examine this, participants categorized expressive faces presented with emotionally congruent or incongruent bodies, serving as context. Face-body composites were presented together, aligned in their natural form, or spatially misaligned with the head shifted horizontally beside the body-a condition known to reduce the contextual impact of the body on the face. Critically, participants responded either by choosing emotion labels or by perceptually matching the target expression with expression probes. The results show a label dominance effect: Face-body congruency effects were larger with semantic labels than with perceptual expression matching, indicating that facial expressions are more prone to contextual influence when categorized with emotion labels, an effect only found when faces and bodies were aligned. These findings suggest that the role of conceptual language in face-body context effects may be larger than previously assumed.

10.
Emotion ; 21(3): 557-568, 2021 Apr.
Article in English | MEDLINE | ID: mdl-31971411

ABSTRACT

The social context-seeing people emotionally interacting-is one of the most common contexts in which emotion perception occurs. Despite its importance, emotion perception of social interactions from a third-person perspective is poorly understood. Here we investigated whether emotion recognition of fear and anger is facilitated by mere congruency (the contextual figure exhibits the same emotion as the target) or by functional relations (the contextual figure exhibits a complementary emotion to the target). Furthermore, we examined which expression channel, face or body, drives social context effects. In the first two experiments (Studies 1a and 1b), participants on an online survey platform (N = 146) or university students (N = 34) viewed interacting figures displaying fear or anger, presented either as faces, bodies, or both. Participants were instructed to categorize the target figure's emotions while the other figure served as context. Results showed that fear recognition was facilitated by an interacting angry figure more strongly than by an interacting fearful figure. Moreover, this effect occurred when participants viewed the figures' bodies (with or without the faces), but not when they viewed the figures' faces alone. A third online experiment (Study 2) established that this context effect was stronger when participants (N = 464) watched the figures interacting (facing each other) than when the figures were not interacting (facing away from each other), suggesting that social context influences emotion perception by revealing the interactants' relation. Our findings demonstrate that emotional perception is grounded in the broader process of social interaction and highlight the role of the body in interpersonal context effects. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subject(s)
Emotions/physiology; Facial Expression; Kinesics; Recognition, Psychology/physiology; Social Interaction/ethics; Adult; Female; Humans; Male
11.
Emotion ; 21(2): 247-259, 2021 Mar.
Article in English | MEDLINE | ID: mdl-31886681

ABSTRACT

According to the influential shared signal hypothesis, perceived gaze direction influences the recognition of emotion from the face, for example, gaze averted sideways facilitates the recognition of sad expressions because both gaze and expression signal avoidance. Importantly, this approach assumes that gaze direction is an independent cue that influences emotion recognition. But could gaze direction also impact emotion recognition because it is part of the stereotypical representation of the expression itself? In Experiment 1, we measured gaze aversion in participants engaged in a facial expression posing task. In Experiment 2, we examined the use of gaze aversion when constructing facial expressions on a computerized avatar. Results from both experiments demonstrated that downward gaze plays a central role in the representation of sad expressions. In Experiment 3, we manipulated gaze direction in perceived facial expressions and found that sadness was the only expression yielding a recognition advantage for downward, but not sideways gaze. Finally, in Experiment 4 we independently manipulated gaze aversion and eyelid closure, thereby demonstrating that downward gaze enhances sadness recognition irrespective of eyelid position. Together, these findings indicate that (1) gaze and expression are not independent cues and (2) the specific type of averted gaze is critical. In consequence, several premises of the shared signal hypothesis may need revision. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subject(s)
Facial Expression; Fixation, Ocular/physiology; Adult; Female; Humans; Male; Sadness; Young Adult
12.
Psychophysiology ; 57(12): e13684, 2020 12.
Article in English | MEDLINE | ID: mdl-32996608

ABSTRACT

When perceiving emotional facial expressions there is an automatic tendency to react with a matching facial expression. A classic explanation of this phenomenon, termed the matched motor hypothesis, highlights the importance of topographic matching, that is, the correspondence in body parts, between perceived and produced actions. More recent studies using mimicry paradigms have challenged this classic account, producing ample evidence against the matched motor hypothesis. However, research using stimulus-response compatibility (SRC) paradigms usually assumed the effect relies on topographic matching. While mimicry and SRC share some characteristics, critical differences between the paradigms suggest conclusions cannot be simply transferred from one to another. Thus, our aim in the present study was to directly test the matched motor hypothesis using SRC. Specifically, we investigated whether observing emotional body postures or hearing emotional vocalizations produces a tendency to respond with one's face, despite completely different motor actions being involved. In three SRC experiments, participants were required to either smile or frown in response to a color cue, presented concurrently with stimuli of happy and angry facial (Experiment 1), body (Experiment 2), or vocal (Experiment 3) expressions. Reaction times were measured using facial EMG. Whether presenting facial, body, or vocal expressions, we found faster responses in compatible, compared to incompatible trials. These results demonstrate that the SRC effect of emotional expressions does not require topographic matching. Our findings question interpretations of previous research and suggest further examination of the matched motor hypothesis.


Subject(s)
Auditory Perception/physiology; Emotions/physiology; Facial Expression; Facial Recognition/physiology; Gestures; Posture/physiology; Social Perception; Adolescent; Adult; Anger/physiology; Electromyography; Female; Happiness; Humans; Male; Young Adult
13.
Cortex ; 126: 343-354, 2020 05.
Article in English | MEDLINE | ID: mdl-32234565

ABSTRACT

Emotion recognition deficits in Huntington's disease (HD) are well-established. However, most previous studies have measured emotion recognition using stereotypical and intense facial expressions, which are easily recognized and artificial in their appearance. By contrast, everyday expressions are often more challenging to recognize, as they are subtle and non-stereotypical. Therefore, previous studies may have inflated the performance of HD patients and it is difficult to generalize their results to facial expressions encountered in everyday social interactions. In the present study, we tested 21 symptomatic HD patients and 28 healthy controls with a traditional facial expression set, as well as a novel stimulus set which exhibits subtle and non-stereotypical facial expressions. While HD patients demonstrated poor emotion recognition in both sets, when tested with the novel, ecologically looking facial expressions, patients' performance declined to chance level. Intriguingly, patients' emotion recognition deficit was predicted only by the severity of their motor symptoms, not by their cognitive status. This suggests a possible mechanism for emotion recognition impairments in HD, in line with embodiment theories. From this point of view, poor motor control may affect patients' ability to subtly produce and simulate a perceived facial expression, which in turn may contribute to their impaired recognition.


Subject(s)
Facial Recognition; Huntington Disease; Emotions; Facial Expression; Humans; Recognition, Psychology; Stereotyped Behavior
14.
Emotion ; 20(7): 1154-1164, 2020 Oct.
Article in English | MEDLINE | ID: mdl-31282697

ABSTRACT

Recent evidence shows that body context may alter the categorization of facial expressions. However, less is known about how facial expressions influence the categorization of emotional bodies. We hypothesized that context effects would be displayed bidirectionally, from bodies to faces and from faces to bodies. Participants viewed emotional face-body compounds and were required to categorize emotions of faces (Condition 1), bodies (Condition 2), or full persons (Condition 3). Results showed evidence for bidirectional context effects: faces were influenced by bodies, and bodies were influenced by faces. However, because the specific confusability patterns differ for faces and bodies (e.g., disgust and anger expressions are confusable in the face, but less so in the body) we found unique patterns of contextual influence in each expression channel. Together, the findings suggest that the emotional expressions of faces and bodies contextualize each other bidirectionally and that emotion categorization is sensitive to the perceptual focus determined by task instructions. (PsycInfo Database Record (c) 2020 APA, all rights reserved).


Subject(s)
Emotions/physiology; Facial Expression; Kinesics; Adult; Female; Humans; Male
15.
J Exp Psychol Gen ; 148(10): 1842-1848, 2019 Oct.
Article in English | MEDLINE | ID: mdl-30589289

ABSTRACT

A basic premise of emotion theories is that experienced feelings (whether specific emotions or broad valence) are expressed via vocalizations in a veridical and clear manner. By contrast, functional-contextual frameworks, rooted in animal communication research, view vocalizations as contextually flexible tools for social influence, not as expressions of emotion. Testing these theories has proved difficult because past research relied heavily on posed sounds which may lack ecological validity. Here, we test these theories by examining the perception of human affective vocalizations evoked during highly intense, real-life emotional situations. In Experiment 1a, we show that highly intense vocalizations of opposite valence (e.g., joyous reunions, fearful encounters) are perceptually confusable and their ambiguity increases with higher intensity. In Experiment 1b, we use authentic lottery winning reactions and show that increased hedonic intensity leads to lower, not higher valence. In Experiment 2, we demonstrate that visual context operates as a powerful mechanism for disambiguating real-life vocalizations, shifting perceived valence categorically. These results suggest affective vocalizations may be inherently ambiguous, demonstrate the role of intensity in driving affective ambiguity, and suggest a critical role for context in vocalization perception. Together, these findings challenge both basic emotion and dimensional theories of emotion expression and are better in line with a functional-contextual account which is externalist and by definition, context dependent. (PsycINFO Database Record (c) 2019 APA, all rights reserved).


Subject(s)
Affect/physiology; Communication; Emotions/physiology; Verbal Behavior/physiology; Adult; Female; Humans; Male; Young Adult
16.
Emotion ; 19(3): 558-562, 2019 Apr.
Article in English | MEDLINE | ID: mdl-29985010

ABSTRACT

Although positive and negative affect are assumed to be highly distinct, recent work has shown that facial valence of positive and negative situations may be highly confusable, especially when the emotions are intense. However, previous work has relied exclusively on static images, portraying a single peak frame of the emotional display. Dynamic expressions, on the other hand, convey a far broader representation of the emotional reaction, but are they diagnostic of the situational valence? Participants (N = 245) watched videos portraying reactions to real-life highly positive situations and evaluated the affective valence of the target. Video information was controlled by (a) truncating the movies after 5, 10, or 20 seconds from the start, and (b) digitally manipulating the videos such that only the face was visible with no context, only the context was visible with no face, or the face appeared in context. Results indicate that during real-life intense positive situations, facial expressions alone were rated as negative and failed to convey diagnostic information about the positive situational valence even at the most extended presentation durations. By contrast, when contextual information appeared alone or with the face, participants accurately rated the target as feeling positive, and this positivity increased with extended viewing duration. These findings suggest poor coupling between facial valence and felt emotions, supporting the notion that when emotions run high, the diagnostic power of facial expressions is reduced. Conversely, the findings demonstrate an inherent role for contextual information in the recognition of real-life intense faces. (PsycINFO Database Record (c) 2019 APA, all rights reserved).


Subject(s)
Emotions/physiology; Facial Expression; Adult; Female; Humans; Male
17.
Psychol Aging ; 33(4): 660-666, 2018 06.
Article in English | MEDLINE | ID: mdl-29902057

ABSTRACT

Older adults have poor recognition of isolated facial expressions, yet outside the lab, such faces are typically perceived with contextual expressive bodies. In fact, recent work suggests that real-life facial expressions may be ambiguous, while contextual information such as body language may be more diagnostic for decoding emotions. We examined the recognition of emotion from incongruent face-body composites and found that, compared to young adults, older adults gave the body far more weight when recognizing emotion. These results are consistent with a social-expertise view and suggest that in real life, older adults may employ an advantageous holistic approach to emotion perception. (PsycINFO Database Record


Subject(s)
Aging/physiology; Emotions/physiology; Facial Expression; Aged; Aged, 80 and over; Cues; Female; Humans; Male; Middle Aged
18.
Neuropsychologia ; 117: 26-35, 2018 08.
Article in English | MEDLINE | ID: mdl-29723598

ABSTRACT

Facial expressions are inherently dynamic cues that develop and change over time, unfolding their affective signal. Although facial dynamics are assumed important for emotion recognition, testing often involves intense and stereotypical expressions and little is known about the role of temporal information in the recognition of subtle, non-stereotypical expressions. In Experiment 1 we demonstrate that facial dynamics are critical for recognizing subtle and non-stereotypical facial expressions, but not for recognizing intense and stereotypical displays of emotion. In Experiment 2 we further examined whether the facilitative effect of motion can lead to improved emotion recognition in LG, an individual with developmental visual agnosia and prosopagnosia, who has poor emotion recognition when tested with static facial expressions. LG's emotion recognition improved when subtle, non-stereotypical faces were dynamic rather than static. However, compared to controls, his relative gain from temporal information was diminished. Furthermore, LG's eye-tracking data demonstrated atypical visual scanning of the dynamic faces, consisting of longer fixations and lower fixation rates for the dynamic-subtle facial expressions, comparing to the dynamic-intense facial expressions. We suggest that deciphering subtle dynamic expressions strongly relies on integrating broad facial regions across time, rather than focusing on local emotional cues, skills which are impaired in developmental visual agnosia.


Subject(s)
Agnosia/physiopathology; Emotions/physiology; Facial Expression; Pattern Recognition, Visual/physiology; Adult; Analysis of Variance; Eye Movements; Female; Humans; Male; Photic Stimulation; Young Adult
19.
Curr Opin Psychol ; 17: 47-54, 2017 10.
Article in English | MEDLINE | ID: mdl-28950972

ABSTRACT

According to mainstream views of emotion perception, facial expressions are powerful signals conveying specific emotional states. This approach, which endorsed the use of stereotypical posed faces as stimuli, has typically ignored the role of context in emotion perception. We argue that this methodological tradition is flawed. Real-life facial expressions are often highly ambiguous, relying heavily on contextual information. We review recent work suggesting that context is an inherent part of real-life emotion perception, often leading to radical categorical changes. Contextual effects are not an obscurity at the fringe of facial emotion perception; rather, they are part of emotion perception itself.


Subject(s)
Emotions; Facial Recognition; Facial Expression; Humans; Individuality; Social Perception
20.
Emotion ; 17(8): 1187-1198, 2017 12.
Article in English | MEDLINE | ID: mdl-28406679

ABSTRACT

According to dominant theories of affect, humans innately and universally express a set of emotions using specific configurations of prototypical facial activity. Accordingly, thousands of studies have tested emotion recognition using sets of highly intense and stereotypical facial expressions, yet their incidence in real life is virtually unknown. In fact, a commonplace experience is that emotions are expressed in subtle and nonprototypical forms. Such facial expressions are at the focus of the current study. In Experiment 1, we present the development and validation of a novel stimulus set consisting of dynamic and subtle emotional facial displays conveyed without constraining expressers to using prototypical configurations. Although these subtle expressions were more challenging to recognize than prototypical dynamic expressions, they were still well recognized by human raters, and perhaps most importantly, they were rated as more ecological and naturalistic than the prototypical expressions. In Experiment 2, we examined the characteristics of subtle versus prototypical expressions by subjecting them to a software classifier, which used prototypical basic emotion criteria. Although the software was highly successful at classifying prototypical expressions, it performed very poorly at classifying the subtle expressions. Further validation was obtained from human expert face coders: Subtle stimuli did not contain many of the key facial movements present in prototypical expressions. Together, these findings suggest that emotions may be successfully conveyed to human viewers using subtle nonprototypical expressions. Although classic prototypical facial expressions are well recognized, they appear less naturalistic and may not capture the richness of everyday emotional communication. (PsycINFO Database Record


Subject(s)
Emotions; Facial Expression; Facial Recognition; Software; Adult; Communication; Face/anatomy & histology; Face/physiology; Facial Recognition/physiology; Fear; Female; Humans; Male; Reproducibility of Results; Stereotyping; Young Adult