Results 1 - 20 of 760
1.
Article in English | MEDLINE | ID: mdl-39262120

ABSTRACT

Cracking the non-verbal "code" of human emotions has been a chief interest of generations of scientists. Yet, despite much effort, a dictionary that clearly maps non-verbal behaviours onto meaning remains elusive. We suggest this is due to an over-reliance on language-related concepts and an under-appreciation of the evolutionary context in which a given non-verbal behaviour emerged. Indeed, work in other species emphasizes non-verbal effects (e.g. affiliation) rather than meaning (e.g. happiness) and differentiates between signals, for which communication benefits both sender and receiver, and cues, for which communication does not benefit senders. Against this backdrop, we develop a "non-verbal effecting" perspective for human research. This perspective extends the typical focus on facial expressions to a broadcasting of multisensory signals and cues that emerge from both social and non-social emotions. Moreover, it emphasizes the consequences or effects that signals and cues have for individuals and their social interactions. We believe that re-directing our attention from verbal emotion labels to non-verbal effects is a necessary step to comprehend scientifically how humans share what they feel.

2.
Psychol Res Behav Manag ; 17: 3111-3120, 2024.
Article in English | MEDLINE | ID: mdl-39253353

ABSTRACT

Background: Studies have shown that elderly individuals have significantly worse facial expression recognition scores than young adults. Some have attributed this difference to perceptual degradation, while others attribute it to decreased attention of elderly individuals to the most informative regions of the face. Methods: To resolve this controversy, this study recruited 85 participants and combined a behavioral task with eye tracking (EyeLink 1000 Plus eye tracker). It adopted a "study-recognition" paradigm with a 3 (facial expression: positive, neutral, negative) × 2 (age group: young, old) × 3 (facial area of interest: eyes, nose, mouth) mixed design to test for perceptual degradation in older adults' attention to facial expressions and to examine differences in diagnostic facial areas between young and older people. Results: The behavioral results revealed that young participants had significantly higher facial expression recognition scores than older participants; moreover, the eye-tracking results revealed that younger people generally fixated on faces significantly more than elderly people, consistent with perceptual degradation in elderly people. When examining facial expressions, young participants looked primarily at the eyes, followed by the nose and finally the mouth, whereas elderly participants focused primarily on the eyes, followed by the mouth and then the nose. Conclusion: The findings confirm that young participants recognize facial expressions better than elderly participants, which may be related more to perceptual degradation than to decreased attention to informative areas of the face. When recognizing faces, elderly people should increase their gaze duration toward diagnostic facial areas (such as the eyes) to compensate for the decline in recognition performance caused by perceptual aging.

3.
J Hist Behav Sci ; 60(4): e22322, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39252515

ABSTRACT

This essay examines the detailed process of isolating facial data from the context of its emergence through the early work of psychologist Paul Ekman in the 1960s. It explores how Ekman's data practices were developed, criticized, and compromised by situating them within the political and intellectual landscape of his early career. The essay follows Ekman's journey from the Langley Porter Neuropsychiatric Institute to New Guinea, highlighting his brief but notable collaborations with psychologist Charles E. Osgood and NIH researchers D. Carleton Gajdusek and E. Richard Sorenson. It argues that the different meanings assigned to the human face shaped how each group developed its studies: examining facial expressions either in interaction, where they shape reciprocal actions in interpersonal communication, or in isolation, where faces surface from the individual's unconscious interior.


Subject(s)
Facial Expression, History, 20th Century, Humans, United States, Face, Psychology/history
4.
Acta Psychiatr Scand ; 2024 Aug 12.
Article in English | MEDLINE | ID: mdl-39135341

ABSTRACT

BACKGROUND: Facial expressions are a core aspect of non-verbal communication. Reduced emotional expressiveness of the face is a common negative symptom of schizophrenia; however, quantifying negative symptoms is clinically challenging and involves a considerable element of rater subjectivity. We used computer vision to investigate (i) whether automated assessment of facial expressions captures negative as well as positive and general symptom domains, and (ii) whether automated assessments are associated with treatment response in initially antipsychotic-naïve patients with first-episode psychosis. METHOD: We included 46 patients (mean age 25.4 (6.1); 65.2% males). Psychopathology was assessed at baseline and after 6 weeks of monotherapy with amisulpride using the Positive and Negative Syndrome Scale (PANSS). Baseline interview videos were recorded. Seventeen facial action units (AUs), i.e., activations of specific facial muscles, from the Facial Action Coding System were extracted using OpenFace 2.0. A correlation matrix was calculated for each patient. Facial expressions were identified using spectral clustering at group level. Associations between facial expressions and psychopathology were investigated using multiple linear regression. RESULTS: Three clusters of facial expressions were identified, related to different locations of the face. Cluster 1 was associated with positive and general symptoms at baseline; Cluster 2 was associated with all symptom domains, showing the strongest association with the negative domain; and Cluster 3 was associated only with general symptoms. Cluster 1 was significantly associated with clinically rated improvement in positive and general symptoms after treatment, and Cluster 2 was significantly associated with clinical improvement in all domains. CONCLUSION: Automated computer vision of facial expressions during PANSS interviews captured not only negative symptoms but also combinations of the three overall domains of psychopathology. Moreover, automated assessments of facial expressions at baseline were associated with initial antipsychotic treatment response. The findings underscore the clinical relevance of facial expressions and motivate further investigation of computer vision in clinical psychiatry.
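A minimal sketch (not the authors' code) of the pipeline described above: per-patient correlation matrices over OpenFace 2.0 action-unit intensity time series, averaged and partitioned into three clusters with group-level spectral clustering. The randomly generated intensity tables are placeholders for real OpenFace output; the AU column names follow OpenFace 2.0's CSV convention.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import SpectralClustering

# Placeholder data: per-patient (frames x AU-intensity) tables. In practice each table would be
# read from an OpenFace 2.0 CSV (pd.read_csv(path, skipinitialspace=True)) and restricted to
# the AU intensity columns (those named like "AU01_r").
rng = np.random.default_rng(0)
au_cols = [f"AU{i:02d}_r" for i in (1, 2, 4, 5, 6, 7, 9, 10, 12, 14, 15, 17, 20, 23, 25, 26, 45)]
patients = [pd.DataFrame(rng.standard_normal((500, len(au_cols))), columns=au_cols)
            for _ in range(46)]

corr_mats = [df[au_cols].corr().values for df in patients]   # one AU x AU matrix per patient
group_corr = np.nanmean(corr_mats, axis=0)                   # group-level mean correlation
affinity = (group_corr + 1.0) / 2.0                          # map correlations [-1, 1] to [0, 1]
labels = SpectralClustering(n_clusters=3, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
for au, cluster in zip(au_cols, labels):
    print(au, "-> cluster", cluster)
```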

5.
Eur J Neurosci ; 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39138605

ABSTRACT

Actions are rarely devoid of emotional content. Thus, a more complete picture of the neural mechanisms underlying the mental simulation of observed actions requires more research using emotion information. The present study used high-density electroencephalography to investigate mental simulation associated with facial emotion categorisation. Alpha-mu rhythm modulation was measured at each frequency, from 8 Hz to 13 Hz, to infer the degree of sensorimotor simulation. The results suggest that sensorimotor activity is sensitive to emotional information: (1) categorising static images of neutral faces as happy or sad was associated with stronger suppression in the central region than categorising clearly happy faces; (2) there was preliminary evidence that the strongest suppression in the central region was in response to neutral faces, followed by sad and then happy faces; and (3) in the control task, which required categorising images with the head oriented right, left, or forward as right or left, differences between conditions showed a pattern more indicative of task difficulty than of sensorimotor engagement. Dissociable processing of emotional information in facial expressions and directionality information in head orientations was further captured in beta band activity (14-20 Hz). Stronger mu suppression to neutral faces indicates that sensorimotor simulation extends beyond crude motor mimicry. We propose that mu rhythm responses to facial expressions may serve as a biomarker for empathy circuit activation. Future research should investigate whether atypical or inconsistent mu rhythm responses to facial expressions indicate difficulties in understanding or sharing emotions.
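As a rough illustration of the mu-suppression measure described above (not the study's analysis code), the sketch below computes a log ratio of 8-13 Hz power between a task epoch and a baseline epoch at a single central electrode; the sampling rate, synthetic epochs, and band edges are placeholders.

```python
import numpy as np
from scipy.signal import welch

def mu_suppression(task_epoch, baseline_epoch, fs, band=(8.0, 13.0)):
    """Log ratio of mean 8-13 Hz PSD (task / baseline); negative values indicate suppression."""
    def band_power(x):
        f, pxx = welch(x, fs=fs, nperseg=min(len(x), int(fs)))
        mask = (f >= band[0]) & (f <= band[1])
        return pxx[mask].mean()
    return float(np.log(band_power(task_epoch) / band_power(baseline_epoch)))

# Synthetic 2-second epochs at 500 Hz stand in for real central-electrode EEG.
rng = np.random.default_rng(0)
fs = 500
print(mu_suppression(rng.standard_normal(fs * 2), rng.standard_normal(fs * 2), fs))
```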

6.
Article in English | MEDLINE | ID: mdl-39100498

ABSTRACT

MoodCapture presents a novel approach that assesses depression based on images automatically captured from the front-facing camera of smartphones as people go about their daily lives. We collected over 125,000 in-the-wild photos over 90 days from N=177 participants diagnosed with major depressive disorder. Images are captured naturalistically while participants respond to the PHQ-8 depression survey item "I have felt down, depressed, or hopeless". Our analysis explores important image attributes, such as angle, dominant colors, location, objects, and lighting. We show that a random forest trained on face landmarks can classify samples as depressed or non-depressed and predict raw PHQ-8 scores effectively. Our post-hoc analysis provides several insights through an ablation study, feature importance analysis, and bias assessment. Importantly, we evaluate user concerns about sharing photos for depression detection with MoodCapture, providing critical insights into privacy concerns that inform the future design of in-the-wild image-based mental health assessment tools.
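A minimal sketch, not the MoodCapture implementation, of the modelling step described above: a random forest classifying depressed vs. non-depressed samples and regressing raw PHQ-8 scores from face-landmark features. The feature dimensionality (68 landmarks × 2 coordinates) and the randomly generated data are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 136))        # placeholder: 68 landmarks x (x, y) per photo
y_cls = rng.integers(0, 2, 1000)            # depressed (1) vs. non-depressed (0)
y_phq = rng.integers(0, 25, 1000)           # raw PHQ-8 scores range from 0 to 24

clf = RandomForestClassifier(n_estimators=100, random_state=0)
reg = RandomForestRegressor(n_estimators=100, random_state=0)
print("classification AUC:", cross_val_score(clf, X, y_cls, cv=5, scoring="roc_auc").mean())
print("regression MAE:", -cross_val_score(reg, X, y_phq, cv=5,
                                           scoring="neg_mean_absolute_error").mean())
```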

7.
Sci Rep ; 14(1): 17859, 2024 08 01.
Article in English | MEDLINE | ID: mdl-39090239

ABSTRACT

Recent research shows that emotional facial expressions impact behavioral responses only when their valence is relevant to the task. Under such conditions, threatening faces delay attentional disengagement, resulting in slower reaction times and increased omission errors compared to happy faces. To investigate the neural underpinnings of this phenomenon, we used functional magnetic resonance imaging to record the brain activity of 23 healthy participants while they completed two versions of the go/no-go task. In the emotion task (ET), participants responded to emotional expressions (fearful or happy faces) and refrained from responding to neutral faces. In the gender task (GT), the same images were displayed, but participants had to respond based on the posers' gender. Our results confirmed previous behavioral findings and revealed a network of brain regions (including the angular gyrus, the ventral precuneus, the left posterior cingulate cortex, the right anterior superior frontal gyrus, and two face-responsive regions) displaying distinct activation patterns for the same facial emotional expressions in the ET compared to the GT. We propose that this network integrates internal representations of task rules with sensory characteristics of facial expressions to evaluate emotional stimuli and exert top-down control, guiding goal-directed actions according to the context.


Subject(s)
Brain Mapping, Brain, Emotions, Facial Expression, Magnetic Resonance Imaging, Reaction Time, Humans, Male, Female, Emotions/physiology, Adult, Young Adult, Brain/physiology, Brain/diagnostic imaging, Reaction Time/physiology
8.
Front Vet Sci ; 11: 1436795, 2024.
Article in English | MEDLINE | ID: mdl-39086767

ABSTRACT

Facial expressions are essential for communication and emotional expression across species. Despite the improvements brought by tools like the Horse Grimace Scale (HGS) to pain recognition in horses, their reliance on human identification of characteristic traits presents drawbacks such as subjectivity, training requirements, costs, and potential bias. Even with these challenges, the development of facial expression pain scales for animals has been making strides. To address these limitations, Automated Pain Recognition (APR) powered by Artificial Intelligence (AI) offers a promising advancement. Notably, computer vision and machine learning have revolutionized our approach to identifying and addressing pain in non-verbal patients, including animals, with profound implications for both veterinary medicine and animal welfare. By leveraging the capabilities of AI algorithms, we can construct sophisticated models capable of analyzing diverse data inputs, encompassing not only facial expressions but also body language, vocalizations, and physiological signals, to provide precise and objective evaluations of an animal's pain levels. While the advancement of APR holds great promise for improving animal welfare by enabling better pain management, it also brings forth the need to overcome data limitations, ensure ethical practices, and develop robust ground truth measures. This narrative review aimed to provide a comprehensive overview, tracing the journey from the initial application of facial expression recognition for the development of pain scales in animals to the recent application, evolution, and limitations of APR, thereby contributing to understanding this rapidly evolving field.

9.
BMC Psychol ; 12(1): 459, 2024 Aug 29.
Article in English | MEDLINE | ID: mdl-39210484

ABSTRACT

BACKGROUND: Attentional processes are influenced by both stimulus characteristics and individual factors such as mood or personal experience. Research has suggested that attentional biases to socially relevant stimuli may occur in individuals with a history of peer victimization in childhood and adolescence. Based on this, the present study aimed to examine attentional processes in response to emotional faces at both the behavioral and neurophysiological levels in participants with experiences of peer victimization. METHODS: In a sample of 60 adult participants with varying severity of retrospectively reported peer victimization in childhood and adolescence, the dot-probe task was administered with angry, disgusted, sad, and happy facial expressions. In addition to behavioral responses, physiological responses (i.e., event-related potentials) were analyzed. RESULTS: Analyses of mean P100 and P200 amplitudes revealed altered P200 amplitudes in individuals with higher degrees of peer victimization. Higher levels of relational peer victimization were associated with increased P200 amplitudes in response to facial expressions, particularly angry and disgusted facial expressions. Hierarchical regression analyses showed no evidence for an influence of peer victimization experiences on reaction times or P100 amplitudes in response to the different emotions. CONCLUSION: Cortical findings suggest that individuals with higher levels of peer victimization mobilize more attentional resources when confronted with negative emotional social stimuli. Peer victimization experiences in childhood and adolescence appear to influence cortical processes into adulthood.


Subject(s)
Attention, Emotions, Evoked Potentials, Facial Expression, Humans, Male, Female, Evoked Potentials/physiology, Adult, Emotions/physiology, Young Adult, Attention/physiology, Electroencephalography, Peer Group, Bullying/psychology, Crime Victims/psychology, Facial Recognition/physiology, Retrospective Studies, Adolescent
10.
Sci Rep ; 14(1): 16193, 2024 07 13.
Article in English | MEDLINE | ID: mdl-39003314

ABSTRACT

Facial expression recognition (FER) is crucial for understanding the emotional state of others during human social interactions. It has been assumed that humans share universal visual sampling strategies to achieve this task. However, recent studies in face identification have revealed striking idiosyncratic fixation patterns, questioning the universality of face processing. More importantly, very little is known about whether such idiosyncrasies extend to the biologically relevant recognition of static and dynamic facial expressions of emotion (FEEs). To clarify this issue, we tracked observers' eye movements while they categorized static and ecologically valid dynamic faces displaying the six basic FEEs, all normalized for presentation time (1 s), contrast and global luminance across exposure time. We then used robust data-driven analyses combining statistical fixation maps with hidden Markov Models to explore eye movements across FEEs and stimulus modalities. Our data revealed three spatially and temporally distinct, equally occurring face-scanning strategies during FER. Crucially, these visual sampling strategies were mostly comparably effective in FER and highly consistent across FEEs and modalities. Our findings show that spatiotemporal idiosyncratic gaze strategies also occur for the biologically relevant recognition of FEEs, further questioning the universality of FER and, more generally, face processing.
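For readers unfamiliar with the hidden-Markov-model approach to gaze data mentioned above, the sketch below fits a simple Gaussian HMM to fixation coordinates using the hmmlearn package (not the authors' toolbox); the fixation sequences are randomly generated placeholders, and the hidden states stand in for data-driven gaze regions.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Placeholder data: one (n_fixations, 2) array of x/y fixation coordinates per trial.
rng = np.random.default_rng(0)
fixations = [rng.uniform(0, 1, size=(int(rng.integers(5, 12)), 2)) for _ in range(50)]

X = np.vstack(fixations)
lengths = [len(f) for f in fixations]
hmm = GaussianHMM(n_components=3, covariance_type="full", random_state=0).fit(X, lengths)
print(hmm.means_)             # state centres, interpretable as gaze regions (e.g. eyes, mouth)
print(hmm.score(X, lengths))  # log-likelihood of the fixation sequences under the model
```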


Subject(s)
Emotions, Facial Expression, Facial Recognition, Fixation, Ocular, Humans, Facial Recognition/physiology, Female, Male, Adult, Fixation, Ocular/physiology, Emotions/physiology, Young Adult, Eye Movements/physiology, Photic Stimulation/methods
11.
Cogn Emot ; : 1-17, 2024 Jul 07.
Article in English | MEDLINE | ID: mdl-38973174

ABSTRACT

Previous research has demonstrated that individuals from Western cultures exhibit categorical perception (CP) in their judgments of emotional faces. However, the extent to which this phenomenon characterises the judgments of facial expressions among East Asians remains relatively unexplored. Building upon recent findings showing that East Asians are more likely than Westerners to see a mixture of emotions in facial expressions of anger and disgust, the present research aimed to investigate whether East Asians also display CP for angry and disgusted faces. To address this question, participants from Canada and China were recruited to discriminate pairs of faces along the anger-disgust continuum. The results revealed the presence of CP in both cultural groups, as participants consistently exhibited higher accuracy and faster response latencies when discriminating between-category pairs of expressions compared to within-category pairs. Moreover, the magnitude of CP did not vary significantly across cultures. These findings provide novel evidence supporting the existence of CP for facial expressions in both East Asian and Western cultures, suggesting that CP is a perceptual phenomenon that transcends cultural boundaries. This research contributes to the growing literature on cross-cultural perceptions of facial expressions by deepening our understanding of how facial expressions are perceived categorically across cultures.

12.
Front Psychol ; 15: 1350631, 2024.
Article in English | MEDLINE | ID: mdl-38966733

ABSTRACT

Core to understanding emotion are subjective experiences and their expression in facial behavior. Past studies have largely focused on six emotions and prototypical facial poses, reflecting limitations in scale and narrow assumptions about the variety of emotions and their patterns of expression. We examine 45,231 facial reactions to 2,185 evocative videos, largely in North America, Europe, and Japan, collecting participants' self-reported experiences in English or Japanese and manual and automated annotations of facial movement. Guided by Semantic Space Theory, we uncover 21 dimensions of emotion in the self-reported experiences of participants in Japan, the United States, and Western Europe, and considerable cross-cultural similarities in experience. Facial expressions predict at least 12 dimensions of experience, despite massive individual differences in experience. We find considerable cross-cultural convergence in the facial actions involved in the expression of emotion, and culture-specific display tendencies: many facial movements differ in intensity in Japan compared to the U.S./Canada and Europe but represent similar experiences. These results quantitatively detail that people in dramatically different cultures experience and express emotion in a high-dimensional, categorical, and similar but complex fashion.

13.
Sci Justice ; 64(4): 421-442, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39025567

ABSTRACT

In today's biometric and commercial settings, state-of-the-art image processing relies solely on artificial intelligence and machine learning, which provide a high level of accuracy. However, these principles are deeply rooted in abstract, complex "black-box" systems. When applied to forensic image identification, concerns about transparency and accountability emerge. This study explores the impact of two challenging factors in automated facial identification: facial expressions and head poses. The sample comprised 3D faces with nine prototype expressions, collected from 41 participants (13 males, 28 females) of European descent aged 19.96 to 50.89 years. Pre-processing involved converting 3D models to 2D color images (256 × 256 px). Probes included a set of 9 images per individual with neutral expressions and head poses varying by 5° in both left-to-right (yaw) and up-and-down (pitch) directions. A second set of 3,610 images per individual covered viewpoints in 5° increments from -45° to 45° for head movements and different facial expressions, forming the targets. Pair-wise comparisons using ArcFace, a state-of-the-art face identification algorithm, yielded 54,615,690 dissimilarity scores. Results indicate that minor head deviations in probes have minimal impact. However, performance diminished as targets deviated from the frontal position. Right-to-left movements were less influential than up-and-down movements, with downward pitch showing less impact than upward movements. The lowest accuracy was for upward pitch at 45°. Dissimilarity scores were consistently higher for males than for females across all studied factors. Performance diverged particularly in upward movements, starting at 15°. Among the tested facial expressions, happiness and contempt performed best, while disgust exhibited the lowest AUC values.
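A minimal sketch of how one pair-wise dissimilarity score can be computed from face embeddings, as in the comparisons described above; the embeddings below are random placeholders standing in for the output of an ArcFace-style encoder (assumed here to produce 512-dimensional vectors).

```python
import numpy as np

def cosine_dissimilarity(a, b):
    """1 - cosine similarity between two face embeddings; larger values mean less alike."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder 512-d embeddings standing in for ArcFace output for a probe and a target image.
rng = np.random.default_rng(0)
probe_emb, target_emb = rng.standard_normal(512), rng.standard_normal(512)
print(cosine_dissimilarity(probe_emb, target_emb))
```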


Subject(s)
Algorithms, Automated Facial Recognition, Facial Expression, Humans, Male, Female, Adult, Automated Facial Recognition/methods, Young Adult, Middle Aged, Imaging, Three-Dimensional, Image Processing, Computer-Assisted/methods, Biometric Identification/methods, Face/anatomy & histology, Head Movements/physiology, Posture/physiology
14.
Neuropsychologia ; 202: 108963, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39069120

ABSTRACT

The mean emotion from multiple facial expressions can be extracted rapidly and precisely. However, it remains debated whether mean emotion processing is automatic, that is, whether it can occur without attention. To address this question, we used a passive oddball paradigm and recorded event-related brain potentials while participants discriminated changes in the central fixation and a set of four faces was presented in the periphery. The face set consisted of one happy and three angry expressions (mean negative) or one angry and three happy expressions (mean positive); the mean negative and mean positive face sets were shown with probabilities of 20% (deviant) and 80% (standard), respectively, in the sequence, or vice versa. Cluster-based permutation analyses showed that the visual mismatch negativity started early, at around 92 ms, and was also observed in later time windows when the mean emotion was negative, while a mismatch positivity was observed at around 168-266 ms when the mean emotion was positive. The results suggest that different mechanisms may underlie the processing of mean negative and mean positive emotions. More importantly, the brain can detect changes in the mean emotion automatically, and ensemble coding of multiple facial expressions can occur in an automatic fashion without attention.


Subject(s)
Electroencephalography, Emotions, Facial Expression, Photic Stimulation, Humans, Emotions/physiology, Male, Female, Young Adult, Adult, Evoked Potentials/physiology, Reaction Time/physiology, Brain/physiology, Attention/physiology, Facial Recognition/physiology
15.
J Psychiatr Res ; 176: 9-17, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38830297

ABSTRACT

Emotional deficits in psychosis are prevalent and difficult to treat. In particular, much remains unknown about facial expression abnormalities, and a key reason is that expressions are very labor-intensive to code. Automatic facial coding (AFC) can remove this barrier. The current study sought both to provide evidence for the utility of AFC in psychosis research and to provide evidence that AFC yields valid measures of clinical constructs. Changes in facial expressions and head position of participants (39 with schizophrenia/schizoaffective disorder (SZ), 46 with other psychotic disorders (OP), and 108 never-psychotic individuals (NP)) were assessed via FaceReader, a commercially available automated facial expression analysis software, using video recorded during a clinical interview. We first examined the behavioral measures of the psychotic disorder groups and tested whether they could discriminate between the groups. Next, we evaluated links between behavioral measures and clinical symptoms, controlling for group membership. We found that the SZ group was characterized by significantly less variation in neutral expressions, happy expressions, arousal, and head movements compared to NP. These measures discriminated SZ from NP well (AUC = 0.79, sensitivity = 0.79, specificity = 0.67) but discriminated SZ from OP less well (AUC = 0.66, sensitivity = 0.77, specificity = 0.46). We also found significant correlations between clinician-rated symptoms and most behavioral measures (particularly happy expressions, arousal, and head movements). Taken together, these results suggest that AFC can provide useful behavioral measures of psychosis, which could improve research on non-verbal expressions in psychosis and, ultimately, enhance treatment.
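To illustrate the discrimination metrics reported above (AUC, sensitivity, specificity), here is a small sketch using scikit-learn on placeholder data; the scores stand in for any FaceReader-derived behavioral measure or classifier output, and the operating point is picked with Youden's J, which is an assumption rather than the paper's procedure.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Placeholder data: y = 1 for SZ, 0 for NP; score = a behavioral measure or classifier output.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
score = 0.5 * y + 0.6 * rng.standard_normal(200)

print("AUC:", roc_auc_score(y, score))
fpr, tpr, thresholds = roc_curve(y, score)
best = int(np.argmax(tpr - fpr))                 # Youden's J selects one operating point
print("sensitivity:", tpr[best], "specificity:", 1.0 - fpr[best])
```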


Subject(s)
Facial Expression, Psychotic Disorders, Video Recording, Humans, Psychotic Disorders/physiopathology, Psychotic Disorders/diagnosis, Female, Male, Adult, Middle Aged, Schizophrenia/physiopathology, Schizophrenia/diagnosis, Psychiatric Status Rating Scales, Head Movements/physiology, Young Adult, Emotions/physiology
16.
Sensors (Basel) ; 24(11)2024 May 23.
Article in English | MEDLINE | ID: mdl-38894141

ABSTRACT

One of the biggest challenges for computers is collecting data about human behavior, such as interpreting human emotions. Traditionally, this process is carried out with computer vision or multichannel electroencephalography. However, these approaches require heavy computational resources, located far from end users or from where the dataset was collected. Sensors, on the other hand, can capture muscle reactions and respond on the spot, keeping information local without relying on powerful computers. The subject of this research is therefore the recognition of the six primary human emotions using electromyography sensors in a portable device. The sensors are placed on specific facial muscles to detect happiness, anger, surprise, fear, sadness, and disgust. The experimental results showed that the CortexM0 microcontroller provides enough computational capability to store a deep learning model that achieves a classification score of 92%. Furthermore, we demonstrate the necessity of collecting data in natural environments and how such data need to be processed by a machine learning pipeline.


Subject(s)
Electromyography, Facial Expression, Machine Learning, Humans, Electromyography/methods, Emotions/physiology, Facial Muscles/physiology, Male, Female, Adult
17.
IEEE Open J Eng Med Biol ; 5: 396-403, 2024.
Article in English | MEDLINE | ID: mdl-38899017

ABSTRACT

Goal: As an essential human-machine interaction task, emotion recognition has become an active research area over the past decades. Although previous attempts to classify emotions have achieved high performance, several challenges remain open: 1) how to effectively recognize emotions using different modalities, and 2) given the increasing amount of computing power required for deep learning, how to provide real-time detection and improve the robustness of deep neural networks. Method: In this paper, we propose a deep learning-based multimodal emotion recognition (MER) framework called Deep-Emotion, which can adaptively integrate the most discriminative features from facial expressions, speech, and electroencephalogram (EEG) signals to improve the performance of MER. Specifically, the proposed Deep-Emotion framework consists of three branches: a facial branch, a speech branch, and an EEG branch. The facial branch uses the improved GhostNet neural network proposed in this paper for feature extraction, which effectively alleviates overfitting during training and improves classification accuracy compared with the original GhostNet network. For the speech branch, this paper proposes a lightweight fully convolutional neural network (LFCNN) for the efficient extraction of speech emotion features. For the EEG branch, we propose a tree-like LSTM (tLSTM) model capable of fusing multi-stage features for EEG emotion feature extraction. Finally, we adopt a decision-level fusion strategy to integrate the recognition results of the three modalities, resulting in more comprehensive and accurate performance. Results and Conclusions: Extensive experiments on the CK+, EMO-DB, and MAHNOB-HCI datasets demonstrate the advantages of the proposed Deep-Emotion method, as well as the feasibility and superiority of the MER approach.
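As a generic illustration of the decision-level fusion step described above (the paper's exact fusion rule is not given here, so this weighted averaging of per-branch class probabilities is an assumption), a short sketch:

```python
import numpy as np

def decision_level_fusion(prob_face, prob_speech, prob_eeg, weights=(1.0, 1.0, 1.0)):
    """Weighted average of per-branch class probabilities; argmax gives the fused label."""
    probs = np.stack([prob_face, prob_speech, prob_eeg])      # shape: (3 branches, n_classes)
    w = np.asarray(weights, dtype=float)[:, None]
    fused = (w * probs).sum(axis=0) / w.sum()
    return fused, int(np.argmax(fused))

# Placeholder softmax outputs from the three branches for one sample over 6 emotion classes.
face = np.array([0.60, 0.10, 0.10, 0.10, 0.05, 0.05])
speech = np.array([0.30, 0.30, 0.10, 0.10, 0.10, 0.10])
eeg = np.array([0.50, 0.20, 0.10, 0.10, 0.05, 0.05])
print(decision_level_fusion(face, speech, eeg))
```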

18.
Orthod Craniofac Res ; 2024 Jun 02.
Article in English | MEDLINE | ID: mdl-38825845

ABSTRACT

OBJECTIVE: In many medical disciplines, facial attractiveness is part of the diagnosis, yet its scoring might be confounded by facial expressions. The intent was to apply deep convolutional neural networks (CNNs) to identify how facial expressions affect facial attractiveness scores and to explore whether dedicated training of the CNN can reduce the bias introduced by facial expressions. MATERIALS AND METHODS: Frontal facial images (n = 840) of 40 female participants (mean age 24.5 years) were taken while they adopted a neutral facial expression and the six universal facial expressions. Facial attractiveness was computed by means of a face detector, deep convolutional neural networks, standard support vector regression for facial beauty, visual regularized collaborative filtering, and a regression technique for handling visual queries without rating history. The CNN was first trained on random facial photographs from a dating website and then further trained on the Chicago Face Database (CFD) to increase its suitability to medical conditions. Both algorithms scored every image for attractiveness. RESULTS: Facial expressions affect facial attractiveness scores significantly. Scores from the CNN additionally trained on the CFD showed less variability between the expressions (range 54.3-60.9 compared to 32.6-49.5) and less variance within the scores (P ≤ .05), but the additional training also shifted the ranking of the expressions' facial attractiveness. CONCLUSION: Facial expressions confound attractiveness scores. Training on norming images generated scores less susceptible to distortion, but more difficult to interpret. Scoring facial attractiveness with CNNs seems promising, but AI solutions must be built on CNNs trained to recognize facial expressions as distractors.

19.
Front Psychiatry ; 15: 1384789, 2024.
Article in English | MEDLINE | ID: mdl-38938454

ABSTRACT

Emotion recognition is central to prosocial interaction, enabling the inference of mental and affective states. Individuals who have committed sexual offenses are known to exhibit socio-affective deficits, one of the four dynamic risk assessment dimensions found in the literature. Little research has focused on emotion recognition. The available literature, exclusively on individuals in prison who have committed sexual offenses, shows contrasting results. Some studies found a global (across all emotions) or specific (e.g., anger, fear) deficit in emotion recognition, while others found no difference between individuals in prison who have committed sexual offenses and those who have committed non-sexual offenses. In addition, no such study has been undertaken among forensic inpatients who exhibit socio-affective deficits. This study investigates the recognition of dynamic facial expressions of emotion in 112 male participants divided into three groups: forensic inpatients who have committed sexual offenses (n = 37), forensic inpatients who have committed non-sexual offenses (n = 25), and community members (n = 50), using the Signal Detection Theory indices of sensitivity (d') and response bias (c). In addition, measures of reaction time, emotion-labeling reflection time, task easiness, and easiness reflection time were collected. Non-parametric analyses (Kruskal-Wallis H, followed by Mann-Whitney U with Dunn-Bonferroni correction) showed that the two forensic inpatient groups exhibited emotion recognition deficits compared to community members. Forensic inpatients who have committed sexual offenses were more conservative in selecting the surprise label than community members. They also took significantly more time to react to stimuli and to select an emotional label. Despite their emotion recognition deficits, the two forensic inpatient groups reported greater stimulus easiness than community members.
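A small sketch of how the Signal Detection Theory indices named above are typically computed from a yes/no labelling design (the +0.5/+1 log-linear correction and the example counts are illustrative assumptions, not values from the study):

```python
from scipy.stats import norm

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and response bias c, with a log-linear correction for extreme rates."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_hit - z_fa, -0.5 * (z_hit + z_fa)   # (d', c)

print(sdt_indices(hits=40, misses=10, false_alarms=12, correct_rejections=38))
```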

20.
Sci Rep ; 14(1): 10607, 2024 05 08.
Article in English | MEDLINE | ID: mdl-38719866

ABSTRACT

Guilt is a negative emotion elicited by realizing one has caused actual or perceived harm to another person. One of guilt's primary functions is to signal that one is aware of the harm that was caused and regrets it, an indication that the harm will not be repeated. Verbal expressions of guilt are often deemed insufficient by observers when not accompanied by nonverbal signals such as facial expression, gesture, posture, or gaze. Some research has investigated isolated nonverbal expressions in guilt; however, none to date has explored multiple nonverbal channels simultaneously. This study explored facial expression, gesture, posture, and gaze during the real-time experience of guilt when response demands are minimal. Healthy adults completed a novel task involving watching videos designed to elicit guilt, as well as comparison emotions. During the video task, participants were continuously recorded to capture nonverbal behaviour, which was then analyzed via automated facial expression software. We found that while feeling guilt, individuals engaged less in several nonverbal behaviours than they did while experiencing the comparison emotions. This may reflect the highly social aspect of guilt, suggesting that an audience is required to prompt a guilt display, or may suggest that guilt does not have clear nonverbal correlates.


Subject(s)
Facial Expression, Guilt, Humans, Male, Female, Adult, Young Adult, Nonverbal Communication/psychology, Emotions/physiology, Gestures