Results 1 - 20 of 15,699
1.
Skin Res Technol ; 30(7): e13768, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38961690

ABSTRACT

BACKGROUND: Most conventional studies on skin aging have focused on static conditions. In daily life, however, facial skin is constantly in motion due to conversation and changing facial expressions, which continually alter its position and shape. Consequently, it is hypothesized that characteristics of aging not apparent under static conditions may emerge in the dynamic state of the skin. This study therefore investigates age-related changes in dynamic skin characteristics associated with facial expression changes. METHODS: A motion capture system measured the dynamic characteristics (delay and stretchiness of skin movement associated with expression) of the cheek skin in response to facial expressions in 86 Japanese women aged 20 to 69 years. RESULTS: The findings revealed an increase in the delay of the cheek skin's response to facial expressions (r = 0.24, p < 0.05) and a decrease in the stretchiness of the lower cheek area with age (r = 0.60, p < 0.01). Variance in delay and stretchiness within the same age group also increased with age. CONCLUSION: These findings reveal that skin aging encompasses both the static characteristics traditionally studied in aging research, such as spots, wrinkles, and sagging, and dynamic aging characteristics that emerge in response to facial expression changes. These dynamic characteristics could pave the way for new methodologies in skin aging analysis and potentially improve our understanding and treatment of aging impressions that are visually perceptible in daily life but remain unexplored.
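A minimal, illustrative sketch of how the reported age correlations (r, p) could be computed; the data below are simulated, since the study's per-participant measurements are not public, and all variable names are hypothetical:

```python
# Simulated illustration of the reported correlational analysis; the study's
# actual data are not public, so the values here are synthetic.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
age = rng.uniform(20, 69, size=86)                     # 86 participants aged 20-69
delay_ms = 40 + 0.3 * age + rng.normal(0, 8, size=86)  # hypothetical cheek-skin delay

r, p = pearsonr(age, delay_ms)
print(f"delay vs. age: r = {r:.2f}, p = {p:.3g}")
```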


Subject(s)
Cheek , Facial Expression , Skin Aging , Humans , Female , Cheek/physiology , Middle Aged , Adult , Skin Aging/physiology , Aged , Japan , Young Adult , Movement/physiology , Skin , Aging/physiology , Skin Physiological Phenomena , East Asian People
2.
Dev Psychobiol ; 66(6): e22522, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38967122

ABSTRACT

Witnessing emotional expressions in others triggers physiological arousal in humans. The current study focused on pupil responses to emotional expressions in a community sample as a physiological index of arousal and attention. We explored the associations between parents' and offspring's responses to dynamic facial expressions of emotion, as well as the links between pupil responses and anxiety/depression. Children (N = 90, MAge = 10.13, range = 7.21-12.94, 47 girls) participated in this lab study with one of their parents (47 mothers). Pupil responses were assessed in a computer task with dynamic happy, angry, fearful, and sad expressions, while participants verbally labeled the emotion displayed on the screen as quickly as possible. Parents and children reported anxiety and depression symptoms in questionnaires. Both parents and children showed stronger pupillary responses to negative versus positive expressions, and children's responses were overall stronger than those of parents. We also found links between the pupil responses of parents and children to negative, especially angry, faces. Children's pupil responses were related to their own and their parents' anxiety levels and to their parents' (but not their own) depression. We conclude that children's pupil responses are sensitive to individual differences in their parents' pupil responses and emotional dispositions in community samples.


Subject(s)
Anxiety , Depression , Emotions , Facial Expression , Parents , Pupil , Humans , Female , Male , Depression/physiopathology , Child , Anxiety/physiopathology , Adult , Pupil/physiology , Emotions/physiology , Facial Recognition/physiology
3.
BMJ Ment Health ; 27(1): 1-7, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38960412

ABSTRACT

BACKGROUND: Circadian rhythms influence cognitive performance, which peaks in the morning for early chronotypes and in the evening for late chronotypes. It is unknown whether cognitive interventions are susceptible to such synchrony effects and could be optimised at certain times of day. OBJECTIVE: To pilot-test whether the effectiveness of cognitive bias modification (CBM) for facial emotion processing improved when delivered at a time of day synchronised to chronotype. METHODS: 173 healthy young adults (aged 18-25) with an early or late chronotype completed one online session of CBM training in either the morning (06:00 hours to 10:00 hours) or evening (18:00 hours to 22:00 hours). FINDINGS: There was moderate evidence that participants learnt better (higher post-training balance point) when they completed CBM training in the synchronous condition (evening for late chronotypes, morning for early chronotypes) compared with the asynchronous condition (morning for late chronotypes, evening for early chronotypes), controlling for pre-training balance point, sleep quality, and negative affect. There was also a group × condition interaction whereby late chronotypes learnt faster and more effectively in synchronous versus asynchronous conditions. CONCLUSIONS: These results provide preliminary evidence that synchrony effects apply to this psychological intervention. Tailoring the delivery timing of CBM training to chronotype may optimise its effectiveness. This may be particularly important for late chronotypes, who were less able to adapt to non-optimal times of day, possibly because they experience more social jetlag. CLINICAL IMPLICATIONS: The delivery timing of CBM training should be considered when administering it to early and late chronotypes. This may generalise to other psychological interventions and be relevant for online interventions where timing can be flexible.


Subject(s)
Circadian Rhythm , Cognitive Behavioral Therapy , Emotions , Humans , Pilot Projects , Male , Female , Young Adult , Adult , Adolescent , Circadian Rhythm/physiology , Cognitive Behavioral Therapy/methods , Emotions/physiology , Time Factors , Facial Expression , Chronotype
4.
Bioinformatics ; 40(Supplement_1): i110-i118, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38940144

ABSTRACT

Artificial intelligence (AI) is increasingly used in genomics research and practice, and generative AI has garnered significant recent attention. In clinical applications of generative AI, aspects of the underlying datasets can impact results, and confounders should be studied and mitigated. One example involves the facial expressions of people with genetic conditions. Stereotypically, Williams (WS) and Angelman (AS) syndromes are associated with a "happy" demeanor, including a smiling expression. Clinical geneticists may therefore be more likely to identify these conditions in images of smiling individuals. To study the impact of facial expression, we analyzed publicly available facial images of approximately 3500 individuals with genetic conditions. Using a deep learning (DL) image classifier, we found that WS and AS images with non-smiling expressions had significantly lower prediction probabilities for the correct syndrome labels than those with smiling expressions. This was not seen for 22q11.2 deletion and Noonan syndromes, which are not associated with a smiling expression. To further explore the effect of facial expressions, we computationally altered the facial expressions in these images. We trained HyperStyle, a GAN-inversion technique compatible with StyleGAN2, to determine the vector representations of our images. Then, following the concept of InterfaceGAN, we edited these vectors to recreate the original images in a phenotypically accurate way but with a different facial expression. Through online surveys and an eye-tracking experiment, we examined how altered facial expressions affect the performance of human experts. Overall, we found that facial expression is variably associated with diagnostic accuracy across genetic conditions.
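The latent-editing step described above (HyperStyle inversion followed by an InterfaceGAN-style edit) boils down to shifting a latent code along a learned semantic direction. A minimal sketch under stated assumptions; the latent shape, the "smile" direction, and the commented-out generator call are hypothetical stand-ins for pretrained StyleGAN2/HyperStyle/InterfaceGAN artifacts:

```python
# Sketch of an InterfaceGAN-style expression edit on a GAN-inversion latent.
# All inputs here are placeholders, not the paper's actual trained artifacts.
import numpy as np

def edit_expression(w: np.ndarray, direction: np.ndarray, alpha: float) -> np.ndarray:
    """Shift latent w along a unit-norm semantic direction.

    alpha < 0 moves toward a non-smiling expression, alpha > 0 toward smiling.
    """
    n = direction / np.linalg.norm(direction)
    return w + alpha * n

w_inverted = np.zeros((18, 512), dtype=np.float32)  # W+ latent from inversion (placeholder)
smile_dir = np.random.default_rng(0).normal(size=512).astype(np.float32)

w_non_smiling = edit_expression(w_inverted, smile_dir, alpha=-2.0)
# image = generator.synthesis(w_non_smiling)  # render with a pretrained StyleGAN2
```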


Subject(s)
Facial Expression , Humans , Deep Learning , Artificial Intelligence , Genetics, Medical/methods , Williams Syndrome/genetics
6.
Cereb Cortex ; 34(6)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38884282

ABSTRACT

Humanoid robots have been designed to look more and more like humans to meet social demands. How do people empathize with humanoid robots, which look the same as humans but are essentially different from them? We addressed this issue by examining subjective feelings, electrophysiological activities, and functional magnetic resonance imaging signals during the perception of pain and neutral expressions on faces that were recognized as patients or humanoid robots. We found that healthy adults reported decreased feelings of understanding and sharing of humanoid robots' pain compared to patients' pain. Moreover, humanoid robot (vs. patient) identities reduced long-latency electrophysiological responses and blood oxygenation level-dependent signals in the left temporoparietal junction in response to pain (vs. neutral) expressions. Furthermore, we showed evidence that humanoid robot identities inhibited a causal input from the right ventral lateral prefrontal cortex to the left temporoparietal junction, contrasting with the opposite effect produced by patient identities. These results suggest a neural model of the modulation of empathy by humanoid robot identity through interactions between the cognitive and affective empathy networks, providing a neurocognitive basis for understanding human-robot interactions.


Subject(s)
Brain Mapping , Brain , Empathy , Magnetic Resonance Imaging , Robotics , Humans , Empathy/physiology , Male , Female , Magnetic Resonance Imaging/methods , Adult , Young Adult , Brain/diagnostic imaging , Brain/physiology , Brain Mapping/methods , Multimodal Imaging/methods , Electroencephalography , Facial Expression , Pain/psychology , Pain/diagnostic imaging , Pain/physiopathology
7.
Sci Rep ; 14(1): 13757, 2024 06 14.
Article in English | MEDLINE | ID: mdl-38877079

ABSTRACT

While perceiving the emotional state of others may be crucial for our behavior even when this information is present outside of central vision, emotion perception studies typically focus on the central visual field. We have recently investigated emotional valence (pleasantness) perception across the parafovea (≤ 4°) and found that for briefly presented (200 ms) emotional face images (from the established KDEF image-set), positive (happy) valence was the least affected by eccentricity (distance from the central visual field) and negative (fearful) valence the most. Furthermore, we found that performance at 2° predicted performance at 4°. Here we tested (n = 37) whether these effects replicate with face stimuli of different identities from a different well-established image-set (NimStim). All our prior findings replicated, and eccentricity-based modulation magnitude was smaller with NimStim (~ 16.6% accuracy reduction at 4°) than with KDEF stimuli (~ 27.3% reduction). Our current investigations support our earlier findings that for briefly presented parafoveal stimuli, positive and negative valence perception are differently affected by eccentricity and may be dissociated. Furthermore, our results highlight the importance of investigating emotions beyond central vision and demonstrate commonalities and differences across different image sets in the parafovea, emphasizing the contribution of replication studies to substantiating our knowledge about perceptual mechanisms.


Subject(s)
Emotions , Visual Fields , Humans , Male , Female , Adult , Young Adult , Emotions/physiology , Visual Fields/physiology , Facial Expression , Photic Stimulation , Facial Recognition/physiology , Fovea Centralis/physiology , Visual Perception/physiology , Adolescent
8.
Womens Health (Lond) ; 20: 17455057241259176, 2024.
Article in English | MEDLINE | ID: mdl-38877749

ABSTRACT

BACKGROUND: Premenstrual dysphoric disorder (PMDD) is a depressive disorder affecting 5%-8% of people with menstrual cycles. Despite evidence that facial emotion detection is altered in depressive disorders, with enhanced detection of negative emotions (negativity bias), minimal research exists on PMDD. OBJECTIVES: The goal of this study was to investigate the effect of PMDD symptoms and the premenstrual phase on accuracy and intensity at detection of facial emotions. DESIGN: Cross-sectional quasi-experimental design. METHOD: The Facial Emotion Detection Task was administered to 72 individuals assigned female at birth, comprising groups with no PMDD (n = 30) and provisional PMDD (n = 42), based on a retrospective measure of PMDD derived from the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition. Facial emotion detection was examined both irrespective of menstrual cycle phase and as a function of premenstrual phase (yes, no). The task used neutral-to-emotional facial expression morphs (15 images/morph). Participants indicated the emotion detected for each image within the progressive intensity morph. For all six basic emotions (sad, angry, fearful, happy, disgust, and surprise), two scores were calculated: accuracy of responses and the intensity within the morph at which the correct emotion was first detected (image number). RESULTS: Individuals reporting moderate/severe PMDD symptoms showed more accurate and earlier detection of disgust, regardless of cycle phase. In addition, those with provisional PMDD detected sad emotions earlier. A PMDD group × cycle phase interaction also emerged: individuals reporting PMDD symptoms were more accurate at detecting facial emotions during the premenstrual phase compared to the rest of the cycle, with a large effect size for sad emotions. CONCLUSION: The findings suggest enhanced facial emotion processing in individuals reporting PMDD symptoms, particularly for sadness and disgust, although replication is required with larger samples and prospective designs. This premenstrual emotion detection advantage suggests an adaptive cognitive mechanism in premenstrual syndrome/PMDD and challenges stigma surrounding premenstrual experiences.


Women with Severe Premenstrual Syndrome or Probable Premenstrual Dysphoric Disorder are Better at Identifying Emotional Expressions on People's Faces, Especially During the Premenstrual Phase

Premenstrual dysphoric disorder is a depressive disorder affecting women where they experience emotional and physical symptoms during the premenstrual phase (i.e. the week before one's period). It is a severe form of premenstrual syndrome. Research indicates that depression can affect facial emotion recognition. Accurately recognizing other people's emotions is an important skill that helps us develop social connections and keep ourselves and others safe. Quick recognition of facial emotions allows us to understand and support others, and quickly identify dangerous situations by recognizing other people's emotional responses. The goal of this study was to examine how premenstrual dysphoric disorder symptoms and the premenstrual phase may affect the ability of women to recognize and identify emotions on other people's faces. A total of 72 women (42 with premenstrual dysphoric disorder, 30 without premenstrual dysphoric disorder) completed the Facial Emotion Detection Task. This task measured how accurate and early the women were able to detect happiness, sadness, anger, fear, surprise, and disgust in faces. Women with moderate/severe symptoms of premenstrual dysphoric disorder had more accurate and earlier detection of disgust, regardless of where they were in their menstrual cycle. Women with premenstrual dysphoric disorder detected sad emotions earlier. Furthermore, women with premenstrual dysphoric disorder were more accurate at detecting facial emotions when they were tested in the premenstrual phase, and were especially more accurate in detecting sad emotions. The findings suggest that women with premenstrual dysphoric disorder are better at detecting facial emotions and show a premenstrual dysphoric disorder premenstrual emotion detection advantage. This tendency for women with premenstrual dysphoric disorder to better detect emotions in others, particularly when they are in the premenstrual cycle phase, would have benefits. As one of the first reports of a potentially beneficial effect of premenstrual syndrome for women, the findings may help decrease stigma associated with premenstrual dysphoric disorder and premenstrual syndrome. Further research is needed to replicate and extend these findings.


Subject(s)
Emotions , Facial Expression , Menstrual Cycle , Premenstrual Dysphoric Disorder , Humans , Female , Premenstrual Dysphoric Disorder/psychology , Premenstrual Dysphoric Disorder/diagnosis , Cross-Sectional Studies , Adult , Menstrual Cycle/psychology , Menstrual Cycle/physiology , Young Adult , Premenstrual Syndrome/psychology , Premenstrual Syndrome/diagnosis
9.
Sensors (Basel) ; 24(11)2024 May 23.
Article in English | MEDLINE | ID: mdl-38894141

ABSTRACT

One of the biggest challenges for computers is extracting information from human behavior, such as interpreting human emotions. Traditionally, this is done with computer vision or multichannel electroencephalography; however, these approaches demand heavy computational resources located far from the end users and from where the data are collected. Sensors, on the other hand, can capture muscle reactions and respond on the spot, processing information locally without powerful computers. This study therefore addresses the recognition of the six primary human emotions using electromyography sensors in a portable device, placed on specific facial muscles to detect happiness, anger, surprise, fear, sadness, and disgust. The experimental results showed that the Cortex-M0 microcontroller provided enough computational capability to store and run a deep learning model achieving a classification score of 92%. Furthermore, we demonstrate the need to collect data in natural environments and show how such data must be processed by a machine learning pipeline.
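The abstract does not give the network itself; the sketch below is a hypothetical Keras model of the kind that fits a Cortex-M0-class microcontroller after int8 quantization, with the electrode count and window length assumed:

```python
# Hypothetical tiny classifier for windowed facial-EMG signals; the paper's
# actual architecture, channel count, and window length are not specified.
import tensorflow as tf

NUM_CHANNELS = 4   # assumed number of facial EMG electrodes
WINDOW = 64        # assumed samples per analysis window
NUM_EMOTIONS = 6   # happiness, anger, surprise, fear, sadness, disgust

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, NUM_CHANNELS)),
    tf.keras.layers.Conv1D(8, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(NUM_EMOTIONS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# After training, a model this small can be int8-quantized for the MCU:
# converter = tf.lite.TFLiteConverter.from_keras_model(model)
# converter.optimizations = [tf.lite.Optimize.DEFAULT]
# tflite_bytes = converter.convert()
```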


Subject(s)
Electromyography , Facial Expression , Machine Learning , Humans , Electromyography/methods , Emotions/physiology , Facial Muscles/physiology , Male , Female , Adult
10.
Sensors (Basel) ; 24(11)2024 May 28.
Article in English | MEDLINE | ID: mdl-38894274

ABSTRACT

Emotion recognition has become increasingly important in the fields of Deep Learning (DL) and computer vision due to its broad applicability to human-computer interaction (HCI) in areas such as psychology, healthcare, and entertainment. In this paper, we conduct a systematic review of facial and pose emotion recognition using DL and computer vision, analyzing and evaluating 77 papers from different sources under the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Our review covers several topics, including the scope and purpose of the studies, the methods employed, and the datasets used. The studies were categorized based on a proposed taxonomy that describes the type of expressions used for emotion detection, the testing environment, the currently relevant DL methods, and the datasets used. The taxonomy of methods in our review includes Convolutional Neural Network (CNN), Faster Region-based Convolutional Neural Network (R-CNN), Vision Transformer (ViT), and "Other NNs", the most commonly used models in the analyzed studies, indicating their current prominence in the field. Hybrid and augmented models are not explicitly categorized within this taxonomy but remain important to the field. This review offers an understanding of state-of-the-art computer vision algorithms and datasets for emotion recognition through facial expressions and body poses, allowing researchers to understand its fundamental components and trends.
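As an illustration of the CNN category in the review's taxonomy, here is a minimal PyTorch sketch of a facial-emotion classifier; it reproduces no specific reviewed model, and the 48x48 grayscale input size is an assumption typical of datasets such as FER2013:

```python
# Minimal CNN emotion classifier, illustrative of the review's CNN category.
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    def __init__(self, num_classes: int = 7):  # e.g., 6 basic emotions + neutral
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 12 * 12, num_classes)  # for 48x48 input

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

logits = EmotionCNN()(torch.randn(1, 1, 48, 48))  # one grayscale face crop
```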


Subject(s)
Deep Learning , Emotions , Facial Expression , Neural Networks, Computer , Humans , Emotions/physiology
11.
Sci Rep ; 14(1): 13090, 2024 06 07.
Article in English | MEDLINE | ID: mdl-38849381

ABSTRACT

Face recognition is a crucial aspect of self-image and social interactions. Previous studies have focused on static images to explore the boundary of self-face recognition. Our research, however, investigates the dynamics of face recognition in contexts involving motor-visual synchrony. We first validated our morphing face metrics for self-face recognition. We then conducted an experiment using state-of-the-art video processing techniques for real-time face identity morphing during facial movement. We examined self-face recognition boundaries under three conditions: synchronous, asynchronous, and static facial movements. Our findings revealed that participants recognized a narrower self-face boundary with moving facial images compared to static ones, with no significant differences between synchronous and asynchronous movements. The direction of morphing consistently biased the recognized self-face boundary. These results suggest that while motor information of the face is vital for self-face recognition, it does not rely on movement synchronization, and the sense of agency over facial movements does not affect facial identity judgment. Our methodology offers a new approach to exploring the 'self-face boundary in action', allowing for an independent examination of motion and identity.


Subject(s)
Facial Recognition , Humans , Female , Male , Facial Recognition/physiology , Adult , Young Adult , Face/physiology , Movement/physiology , Photic Stimulation/methods , Motion , Facial Expression
12.
Sci Rep ; 14(1): 13031, 2024 06 06.
Article in English | MEDLINE | ID: mdl-38844758

ABSTRACT

Valence (positive vs. negative) and content (embodied vs. non-embodied) characteristics of visual stimuli have been shown to influence motor readiness, as tested with response time paradigms. Both embodiment and emotional processing are affected in Parkinson's disease (PD) due to basal ganglia dysfunction. Here we aimed to investigate, using a two-choice response time paradigm, motor readiness when processing embodied (emotional body language [EBL] and emotional facial expressions [FACS]) vs. non-embodied (emotional scenes [IAPS]) stimuli with neutral, happy, and fearful content. We enrolled twenty-five patients with early-stage PD and twenty-five age-matched healthy controls (HC). Motor response during emotional processing was assessed by measuring response times (RTs) in a home-based, forced two-choice discrimination task in which participants were asked to discriminate the emotional stimulus from the neutral one. Valence and arousal ratings were also collected, and PD patients underwent a clinical and neuropsychological evaluation. Results showed that RTs were longer for PD patients than for HC in all conditions, and that RTs were generally longer in both groups for EBL compared to FACS and IAPS; the sole exception was found in PD, where, for fearful stimuli, RTs for EBL were longer compared to FACS but not to IAPS. Furthermore, in PD only, when discriminating fearful as opposed to neutral stimuli, RTs were shorter for FACS than for IAPS. This study shows that PD patients were faster at discriminating fearful embodied stimuli, allowing us to speculate on an alternative, compensatory emotional motor pathway engaged during fear processing in PD.


Subject(s)
Emotions , Facial Expression , Parkinson Disease , Reaction Time , Humans , Parkinson Disease/psychology , Parkinson Disease/physiopathology , Male , Female , Emotions/physiology , Reaction Time/physiology , Aged , Middle Aged , Photic Stimulation , Case-Control Studies
13.
Sci Rep ; 14(1): 12763, 2024 06 04.
Article in English | MEDLINE | ID: mdl-38834661

ABSTRACT

As technology continues to advance, the life sciences play an increasingly important role, and the application of artificial intelligence in the medical field has attracted growing attention. Bell's facial palsy, a neurological ailment characterized by facial muscle weakness or paralysis, exerts a profound impact on patients' facial expressions and masticatory abilities, inflicting considerable distress on their overall quality of life and mental well-being. In this study, we designed a facial attribute recognition model specifically for individuals with Bell's facial palsy. The model utilizes an enhanced SSD network and scientific computing to perform a graded assessment of the patients' condition. By replacing the VGG network with a more efficient backbone, we improved the model's accuracy and significantly reduced its computational burden. The results show that the improved SSD network achieves an average precision of 87.9% in classifying mild, moderate, and severe facial palsy and effectively grades patients with facial palsy, with the scientific-computing component further increasing classification precision. This is one of the most significant contributions of this article, which provides intelligent means and objective data for future research on intelligent diagnosis and treatment as well as progressive rehabilitation.
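The backbone swap the abstract describes, replacing SSD's VGG network with a lighter one, can be illustrated with torchvision's stock detectors. This is a sketch of the idea, not the paper's actual model, and the four-class head (background plus three severity grades) is an assumption:

```python
# Conceptual illustration of swapping SSD's heavy VGG backbone for a lighter
# MobileNetV3 one, using off-the-shelf torchvision detectors.
from torchvision.models.detection import (ssd300_vgg16,
                                          ssdlite320_mobilenet_v3_large)

# Classic SSD with the VGG16 backbone
ssd_vgg = ssd300_vgg16(weights=None, weights_backbone=None, num_classes=4)

# Lighter SSD variant with a MobileNetV3 backbone, analogous in spirit to the
# paper's backbone replacement; 4 classes = background + mild/moderate/severe
ssd_light = ssdlite320_mobilenet_v3_large(weights=None, weights_backbone=None,
                                          num_classes=4)
```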


Subject(s)
Bell Palsy , Humans , Bell Palsy/diagnosis , Bell Palsy/physiopathology , Neural Networks, Computer , Female , Male , Facial Expression , Adult , Artificial Intelligence , Middle Aged , Facial Paralysis/diagnosis , Facial Paralysis/physiopathology , Facial Paralysis/psychology , Facial Recognition , Automated Facial Recognition/methods
14.
Sci Rep ; 14(1): 12867, 2024 06 04.
Article in English | MEDLINE | ID: mdl-38834667

ABSTRACT

Online education has become increasingly popular in recent years, and video lectures have emerged as a common instructional format. While the importance of instructors' nonverbal social cues such as gaze, facial expression, and gestures for learning progress in face-to-face teaching is well established, their impact in instructional videos is not fully understood. Most studies on nonverbal social cues in instructional videos focus on isolated cues rather than considering multimodal nonverbal behavior patterns and their effects on learning progress. This study examines the role of instructors' nonverbal immediacy (a construct capturing multimodal nonverbal behaviors that reduce psychological distance) in video lectures with respect to learners' cognitive, affective, and motivational outcomes. We carried out an eye-tracking experiment with 87 participants (Mage = 24.11, SD = 4.80). Results of multilevel path analyses indicate that high nonverbal immediacy substantially increases learners' state motivation and enjoyment but does not affect cognitive learning. Analyses of learners' eye movements show that learners allocate more attention to the instructor than to the learning material as the instructor displays increasing levels of nonverbal immediacy. The study highlights the importance of considering multimodal nonverbal behavior patterns in online education and provides insights for effective video lecture design.


Subject(s)
Learning , Social Behavior , Humans , Male , Female , Learning/physiology , Adult , Young Adult , Nonverbal Communication/psychology , Video Recording , Motivation/physiology , Education, Distance/methods , Eye Movements/physiology , Facial Expression
15.
PLoS One ; 19(6): e0304726, 2024.
Article in English | MEDLINE | ID: mdl-38861570

ABSTRACT

The mechanisms that underpin human social behaviour are poorly understood, in part because natural social behaviour is challenging to study. The task of linking the mechanisms thought to drive social behaviour to specific social behaviours in a manner that maintains ecological validity poses an even greater challenge. Here we report evidence that the subjective value people assign to genuine smiles, as measured in the laboratory, determines their responsiveness to genuine smiles encountered in a naturalistic social interaction. Specifically, participants (university undergraduates; aged 17 to 36) who valued genuine smiles to a greater degree also showed stronger attention capture effects to neutral faces that were previously associated with genuine smiles and faster reciprocity of a social partner's smiles in a real social interaction. Additionally, the faster participants responded to the partner's genuine smiles, the higher the partner's ratings of interaction quality were after the interaction. These data suggest that individual differences in the subjective value of genuine smiles, measured in the lab, are one element that underpins responsiveness to natural genuine smiles and subsequent social outcomes.


Subject(s)
Smiling , Social Behavior , Humans , Male , Female , Adult , Smiling/psychology , Adolescent , Young Adult , Social Interaction , Facial Expression , Interpersonal Relations , Attention/physiology
16.
Math Biosci Eng ; 21(4): 5007-5031, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38872524

ABSTRACT

In demanding application scenarios such as clinical psychotherapy and criminal interrogation, the accurate recognition of micro-expressions is of utmost importance but poses significant challenges. One of the main difficulties lies in effectively capturing weak and fleeting facial features to improve recognition performance. To address this fundamental issue, this paper proposes a novel architecture based on a multi-scale 3D residual convolutional neural network. The algorithm uses a deep 3D-ResNet50 as the backbone and takes micro-expression optical-flow feature maps as the network input. Drawing on the complex spatial and temporal features inherent in micro-expressions, the network incorporates multi-scale convolutional modules of varying sizes to integrate both global and local information. Furthermore, an attention-based feature fusion module is introduced to enhance the model's contextual awareness. Finally, to optimize the model's prediction, a discriminative network structure with multiple output channels is constructed. The algorithm's performance was evaluated on the public datasets SMIC, SAMM, and CASME II, where it achieved recognition accuracies of 74.6%, 84.77%, and 91.35%, respectively. This is a substantial improvement over existing mainstream methods at extracting subtle micro-expression features, enhancing recognition performance and the accuracy of high-precision micro-expression recognition. This paper thus serves as a reference for researchers working on high-precision micro-expression recognition.
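A hypothetical PyTorch sketch of the multi-scale 3D convolution idea: parallel 3D branches with different kernel sizes over an optical-flow clip, concatenated so that global and local motion cues are fused. Channel counts and kernel sizes are assumptions, not the paper's exact design:

```python
# Illustrative multi-scale 3D convolution block over optical-flow clips.
import torch
import torch.nn as nn

class MultiScale3DBlock(nn.Module):
    def __init__(self, in_ch: int, branch_ch: int):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv3d(in_ch, branch_ch, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5)  # three spatio-temporal scales
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, height, width), e.g., optical-flow maps
        return torch.cat([b(x) for b in self.branches], dim=1)

block = MultiScale3DBlock(in_ch=2, branch_ch=8)  # 2 channels = flow (dx, dy)
out = block(torch.randn(1, 2, 16, 64, 64))       # -> (1, 24, 16, 64, 64)
```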


Subject(s)
Algorithms , Facial Expression , Neural Networks, Computer , Humans , Imaging, Three-Dimensional/methods , Face , Databases, Factual , Pattern Recognition, Automated/methods , Image Processing, Computer-Assisted/methods
17.
Acta Psychol (Amst) ; 247: 104330, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38852319

ABSTRACT

In the context of blindness, studies on the tactile recognition of facial expressions of emotion are essential for defining compensatory touch abilities and for creating adapted tools about emotions. This study is the first to examine the effect of visual experience on the recognition of tactile drawings of facial expressions of emotion by children with different visual experiences. To this end, we compared recognition rates of tactile drawings of emotions between blind children, children with low vision, and sighted children aged 6-12 years. Results revealed no effect of visual experience on recognition rates. However, an effect of emotion and an interaction between emotion and visual experience were found. While all children had a low average recognition rate, the drawings of fear, anger, and disgust were particularly poorly recognized. Moreover, sighted children were significantly better at recognizing the drawings of surprise and sadness than blind children, who showed a high recognition rate only for joy. The results of this study underline the importance of developing emotion tools that can be understood by children with different visual experiences.


Subject(s)
Blindness , Emotions , Facial Expression , Humans , Child , Male , Female , Blindness/physiopathology , Blindness/psychology , Emotions/physiology , Vision, Low/physiopathology , Recognition, Psychology/physiology , Touch Perception/physiology , Facial Recognition/physiology
18.
Sci Rep ; 14(1): 12798, 2024 06 13.
Article in English | MEDLINE | ID: mdl-38871925

ABSTRACT

Individuals vary in how they move their faces in everyday social interactions. In a first large-scale study, we measured variation in dynamic facial behaviour during social interaction and examined dyadic outcomes and impression formation. In Study 1, we recorded semi-structured video calls with 52 participants interacting with a confederate across various everyday contexts; video clips were rated by 176 independent participants. In Study 2, we examined video calls of 1315 participants engaging in unstructured video-call interactions. Facial expressivity indices were extracted using automated Facial Action Coding System (FACS) analysis, and measures of personality and partner impressions were obtained by self-report. Facial expressivity varied considerably across participants, but little across contexts, social partners, or time. In Study 1, more facially expressive participants were better liked, more agreeable, and more successful at negotiating (if also more agreeable). Participants who were more facially competent, readable, and perceived as readable were also better liked. In Study 2, we replicated the findings that facial expressivity was associated with agreeableness and with being liked by the social partner, and additionally found associations with extraversion and neuroticism. The findings suggest that facial behaviour is a stable individual difference that proffers social advantages, pointing towards an affiliative, adaptive function.
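The abstract does not define its expressivity indices; the sketch below is a generic example of summarizing automated FACS output, per-frame action-unit (AU) intensities, into simple expressivity statistics, with all names and thresholds hypothetical:

```python
# Generic sketch: derive expressivity statistics from AU intensity time series.
import numpy as np

def expressivity_indices(au_intensity: np.ndarray) -> dict:
    """au_intensity: (n_frames, n_aus) array of AU intensities, e.g., in [0, 5]."""
    per_frame = au_intensity.mean(axis=1)  # overall facial activation per frame
    return {
        "mean_expressivity": float(per_frame.mean()),
        "variability": float(per_frame.std()),
        # share of frames with notably high activation (hypothetical threshold)
        "peak_rate": float((per_frame > per_frame.mean() + per_frame.std()).mean()),
    }

demo = np.abs(np.random.default_rng(1).normal(1.0, 0.5, size=(300, 17)))
print(expressivity_indices(demo))
```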


Subject(s)
Facial Expression , Social Interaction , Humans , Male , Female , Adult , Young Adult , Personality , Interpersonal Relations , Social Behavior , Adolescent , Middle Aged
19.
Psych J ; 13(3): 398-406, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38830603

ABSTRACT

Infant facial expressions have been noted to create a spatial attention bias when compared with adult faces. Yet there is limited understanding of how adults perceive the timing of infant facial expressions. To investigate this, we used both infant and adult facial expressions in a temporal bisection task. In Experiment 1, we compared duration judgments of neutral infant and adult faces. The results revealed that participants felt that neutral infant faces lasted for a shorter time than neutral adult faces, independent of participant sex. Experiment 2 employed sad (crying) facial expressions. Here, female participants perceived the infants' faces to be displayed for a longer duration than the adults' faces, whereas this distinction was not evident among male participants. These findings highlight the influence of the babyface schema on time perception, nuanced by emotional context and sex-based individual variance.


Subject(s)
Crying , Facial Expression , Time Perception , Humans , Female , Male , Adult , Infant , Facial Recognition/physiology , Emotions , Attention , Sex Factors
20.
Cogn Res Princ Implic ; 9(1): 43, 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38935222

ABSTRACT

The presence of face masks can significantly impact processes related to trait impressions from faces. In the present research, we focused on trait impressions from faces either wearing a mask or not by addressing how contextual factors may shape such inferences. In Study 1, we compared trait impressions from faces in a phase of the COVID-19 pandemic in which wearing masks was a normative behavior (T1) with those assessed one year later when wearing masks was far less common (T2). Results at T2 showed a reduced positivity in the trait impressions elicited by faces covered by a mask. In Study 2, it was found that trait impressions from faces were modulated by the background visual context in which the target face was embedded so that faces wearing a mask elicited more positive traits when superimposed on an indoor rather than outdoor visual context. Overall, the present studies indicate that wearing face masks may affect trait impressions from faces, but also that such impressions are highly flexible and can significantly fluctuate across time and space.


Subject(s)
COVID-19 , Facial Recognition , Masks , Humans , Female , Male , COVID-19/prevention & control , Adult , Young Adult , Facial Recognition/physiology , Social Perception , Facial Expression