Results 1 - 20 of 1,289

1.
Proc Natl Acad Sci U S A ; 121(14): e2313665121, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38530896

ABSTRACT

Facial emotion expressions play a central role in interpersonal interactions; these displays are used to predict and influence the behavior of others. Despite their importance, quantifying and analyzing the dynamics of brief facial emotion expressions remains an understudied methodological challenge. Here, we present a method that leverages machine learning and network modeling to assess the dynamics of facial expressions. Using video recordings of clinical interviews, we demonstrate the utility of this approach in a sample of 96 people diagnosed with psychotic disorders and 116 never-psychotic adults. Participants diagnosed with schizophrenia tended to move from neutral expressions to uncommon expressions (e.g., fear, surprise), whereas participants diagnosed with other psychoses (e.g., mood disorders with psychosis) moved toward expressions of sadness. This method has broad applications to the study of normal and altered expressions of emotion and can be integrated with telemedicine to improve psychiatric assessment and treatment.
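A minimal sketch of the transition-network idea described above, assuming a per-frame expression classifier has already labeled the video; the state set, the `transition_matrix` helper, and the toy label sequence are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

# Hypothetical per-frame labels from an upstream expression classifier.
STATES = ["neutral", "happy", "sad", "fear", "surprise", "anger"]

def transition_matrix(frame_labels):
    """Row-normalized matrix of transitions between successive expressions."""
    index = {s: i for i, s in enumerate(STATES)}
    counts = np.zeros((len(STATES), len(STATES)))
    for a, b in zip(frame_labels, frame_labels[1:]):
        if a != b:  # count only changes of expression, not repeats
            counts[index[a], index[b]] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# e.g., a participant who repeatedly moves from neutral into fear/surprise:
print(transition_matrix(["neutral", "fear", "neutral", "surprise", "neutral"]))
```

The row for "neutral" then summarizes where a participant tends to move next, which is the kind of quantity the group comparisons above rely on.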


Subjects
Psychotic Disorders, Schizophrenia, Adult, Humans, Facial Expression, Emotions, Schizophrenia/diagnosis, Fear
2.
Cereb Cortex ; 34(3)2024 03 01.
Article in English | MEDLINE | ID: mdl-38466112

ABSTRACT

Alexithymia is characterized by difficulties in emotional information processing. However, the underlying reasons for emotional processing deficits in alexithymia are not fully understood. The present study aimed to investigate the mechanism underlying emotional deficits in alexithymia. Using the Toronto Alexithymia Scale-20, we recruited college students with high alexithymia (n = 24) or low alexithymia (n = 24). Participants judged the emotional consistency of facial expressions and contextual sentences while their event-related potentials were recorded. Behaviorally, the high alexithymia group showed longer response times than the low alexithymia group in processing facial expressions. The event-related potential results showed that the high alexithymia group had more negative-going N400 amplitudes than the low alexithymia group in the incongruent condition. More negative N400 amplitudes were also associated with slower responses to facial expressions. Furthermore, machine learning analyses based on N400 amplitudes could distinguish the high alexithymia group from the low alexithymia group in the incongruent condition. Overall, these findings suggest worse facial emotion perception in the high alexithymia group, potentially due to difficulty in spontaneously activating emotion concepts. Our findings have important implications for affective science and for clinical intervention in alexithymia-related affective disorders.
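The group-classification step can be illustrated with a short, hedged sketch: a cross-validated linear classifier on N400 amplitude features. The feature layout and electrode choice below are assumptions for illustration, not the authors' exact analysis:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical features: one row per participant, columns = mean N400
# amplitude (µV) at a few centro-parietal electrodes, incongruent condition.
rng = np.random.default_rng(0)
X = rng.normal(size=(48, 4))   # 24 high- + 24 low-alexithymia participants
y = np.repeat([1, 0], 24)      # 1 = high alexithymia, 0 = low alexithymia

scores = cross_val_score(LogisticRegression(), X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f}")
```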


Subjects
Affective Symptoms, Electroencephalography, Humans, Female, Male, Facial Expression, Evoked Potentials, Emotions
3.
Cereb Cortex ; 34(4)2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38566513

ABSTRACT

The perception of facial expression plays a crucial role in social communication, and it is known to be influenced by various facial cues. Previous studies have reported both positive and negative biases toward overweight individuals. It is unclear whether facial cues, such as facial weight, bias facial expression perception. Combining psychophysics and event-related potential technology, the current study adopted a cross-adaptation paradigm to examine this issue. The psychophysical results of Experiments 1A and 1B revealed a bidirectional cross-adaptation effect between overweight and angry faces. Adapting to overweight faces decreased the likelihood of perceiving ambiguous emotional expressions as angry compared to adapting to normal-weight faces. Likewise, exposure to angry faces subsequently caused normal-weight faces to appear thinner. These findings were corroborated by bidirectional event-related potential results, showing that adaptation to overweight faces relative to normal-weight faces modulated the event-related potential responses of emotionally ambiguous facial expression (Experiment 2A); vice versa, adaptation to angry faces relative to neutral faces modulated the event-related potential responses of ambiguous faces in facial weight (Experiment 2B). Our study provides direct evidence associating overweight faces with facial expression, suggesting at least partly common neural substrates for the perception of overweight and angry faces.


Subjects
Facial Expression, Weight Prejudice, Humans, Overweight, Anger/physiology, Evoked Potentials/physiology, Emotions/physiology
4.
J Neurosci ; 43(23): 4291-4303, 2023 06 07.
Article in English | MEDLINE | ID: mdl-37142430

ABSTRACT

According to a classical view of face perception (Bruce and Young, 1986; Haxby et al., 2000), face identity and facial expression recognition are performed by separate neural substrates (ventral and lateral temporal face-selective regions, respectively). However, recent studies challenge this view, showing that expression valence can also be decoded from ventral regions (Skerry and Saxe, 2014; Li et al., 2019), and identity from lateral regions (Anzellotti and Caramazza, 2017). These findings could be reconciled with the classical view if regions specialized for one task (either identity or expression) contain a small amount of information for the other task (enabling above-chance decoding). In this case, we would expect representations in lateral regions to be more similar to representations in deep convolutional neural networks (DCNNs) trained to recognize facial expression than to representations in DCNNs trained to recognize face identity (the converse should hold for ventral regions). We tested this hypothesis by analyzing neural responses to faces varying in identity and expression. Representational dissimilarity matrices (RDMs) computed from human intracranial recordings (n = 11 adults; 7 females) were compared with RDMs from DCNNs trained to label either identity or expression. We found that RDMs from DCNNs trained to recognize identity correlated with intracranial recordings more strongly in all regions tested, even in regions classically hypothesized to be specialized for expression. These results deviate from the classical view, suggesting that face-selective ventral and lateral regions contribute to the representation of both identity and expression.

SIGNIFICANCE STATEMENT: Previous work proposed that separate brain regions are specialized for the recognition of face identity and facial expression. However, identity and expression recognition mechanisms might share common brain regions instead. We tested these alternatives using deep neural networks and intracranial recordings from face-selective brain regions. Deep neural networks trained to recognize identity and networks trained to recognize expression learned representations that correlate with neural recordings. Identity-trained representations correlated with intracranial recordings more strongly in all regions tested, including regions hypothesized to be expression-specialized in the classical hypothesis. These findings support the view that identity and expression recognition rely on common brain regions. This discovery may require reevaluation of the roles that the ventral and lateral neural pathways play in processing socially relevant stimuli.
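The core comparison above reduces to correlating the off-diagonal entries of two representational dissimilarity matrices. A minimal sketch, with Spearman correlation assumed as the comparison statistic:

```python
import numpy as np
from scipy.stats import spearmanr

def rdm_similarity(rdm_neural, rdm_dcnn):
    """Correlate the upper triangles of two RDMs (stimuli x stimuli)."""
    iu = np.triu_indices_from(rdm_neural, k=1)  # unique off-diagonal pairs
    return spearmanr(rdm_neural[iu], rdm_dcnn[iu]).correlation
```

Computing this per face-selective region against an identity-trained and an expression-trained DCNN, then asking which correlation is larger, mirrors the hypothesis test described above.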


Subjects
Electrocorticography, Facial Recognition, Adult, Female, Humans, Brain, Neural Networks (Computer), Facial Recognition/physiology, Temporal Lobe/physiology, Brain Mapping, Magnetic Resonance Imaging/methods
5.
BMC Med ; 22(1): 382, 2024 Sep 11.
Article in English | MEDLINE | ID: mdl-39256825

ABSTRACT

BACKGROUND: Childhood adversity has been associated with alterations in threat-related information processing, including heightened perceptual sensitivity and attention bias towards threatening facial expressions, as well as hostile attributions of neutral faces, although there is a large degree of variability and inconsistency in reported findings. METHODS: Here, we aimed to implicitly measure neural facial expression processing in 120 adolescents between 12 and 16 years old with and without exposure to childhood adversity. Participants were excluded if they had any major medical or neurological disorder or intellectual disability, were pregnant, used psychotropic medication or reported acute suicidality or an ongoing abusive situation. We combined fast periodic visual stimulation with electroencephalography in two separate paradigms to assess the neural sensitivity and responsivity towards neutral and expressive, i.e. happy and angry, faces. Linear mixed effects models were used to assess the impact of childhood adversity on facial expression processing. RESULTS: Sixty-six girls, 53 boys and one adolescent who identified as 'other', between 12 and 16 years old (M = 13.93), participated in the current study. Of those, 64 participants were exposed to childhood adversity. In contrast to our hypotheses, adolescents exposed to adversity showed lower expression-discrimination responses for angry faces presented among neutral faces, and higher expression-discrimination responses for happy faces presented among neutral faces, than unexposed controls. Moreover, adolescents exposed to adversity, but not unexposed controls, showed lower neural responsivity to simultaneously presented angry and neutral faces. CONCLUSIONS: We therefore conclude that childhood adversity is associated with a hostile attribution of neutral faces, thereby reducing the dissimilarity between neutral and angry faces. This reduced threat-safety discrimination may increase risk for psychopathology in individuals exposed to childhood adversity.
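A hedged sketch of the linear mixed-effects analysis named in METHODS, using statsmodels with a random intercept per participant; the data frame layout and column names are assumptions, and the toy data exist purely to make the snippet run:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant x expression condition.
df = pd.DataFrame({
    "subject":    ["s1", "s1", "s2", "s2", "s3", "s3",
                   "s4", "s4", "s5", "s5", "s6", "s6"],
    "adversity":  ["yes", "yes", "no", "no", "yes", "yes",
                   "no", "no", "yes", "yes", "no", "no"],
    "expression": ["angry", "happy"] * 6,
    "response":   [0.8, 1.4, 1.3, 1.1, 0.7, 1.5,
                   1.2, 1.0, 0.9, 1.6, 1.1, 1.2],  # toy EEG amplitudes
})

# Fixed effects for adversity, expression, and their interaction;
# random intercept per subject.
model = smf.mixedlm("response ~ adversity * expression", df, groups=df["subject"])
print(model.fit().summary())
```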


Subjects
Adverse Childhood Experiences, Facial Expression, Humans, Female, Adolescent, Male, Child, Electroencephalography
6.
FASEB J ; 37(9): e23137, 2023 09.
Article in English | MEDLINE | ID: mdl-37566489

ABSTRACT

The anatomical underpinnings of primate facial expressions are essential to exploring their evolution. Traditionally, it has been accepted that the primate face exhibits a "scala naturae" morphocline, ranging from primitive to derived characteristics. At the primitive end, the face consists of undifferentiated muscular sheets, while at the derived end there is greater complexity, with more muscles and insertion points. Among these, the role of the human modiolus ("Knoten" in German) has been emphasized. Recent studies have challenged this view by revealing significant complexity in the faces of several non-human primates, thereby rejecting the linear notion of facial evolution. However, our knowledge of the facial architecture of gorillas, the second closest living relatives of modern humans, remains a significant gap in the literature. Here, we present new findings based on dissection and histological analysis of one gorilla craniofacial specimen, alongside 30 human hemifaces. Our results indicate that while the number and overall arrangement of facial muscles in the gorilla are comparable to those of chimpanzees and modern humans, several orofacial features distinguish the gorilla's anatomy from that of hominins. Among these are the absence of a modiolus, the continuity of muscular fibers over the region of the mouth corner, the flat (uncurving) sheet of the orbicularis oris muscle, and the insertion of direct labial tractors both anterior and posterior to it. Collectively, the anatomical characteristics observed in the gorilla suggest that the complex anatomy of the hominin face should be considered synapomorphic (shared-derived) within the Pan-Homo clade.


Subjects
Hominidae, Animals, Gorilla gorilla/anatomy & histology, Facial Muscles/anatomy & histology, Facial Muscles/physiology, Face, Pan troglodytes/anatomy & histology
7.
Arch Sex Behav ; 53(1): 223-233, 2024 01.
Article in English | MEDLINE | ID: mdl-37626260

ABSTRACT

This study explored the facial expression stereotypes of adult men and women within the Chinese cultural context and investigated whether adult participants had facial expression stereotypes of children aged 6 and 10 years old. Three experiments were conducted with 156 adult Chinese university student participants. Experiment 1 explored whether adult participants had facial expression stereotypes of adult men and women. In Experiment 1a, the participants imagined a happy or angry adult face and stated the gender of the imagined face. In Experiment 1b, the participants were asked to quickly judge the gender of happy or angry adult faces, and their response time was recorded. Experiments 2 and 3 explored whether adults apply the stereotypes of adult men and women to 10-year-old and 6-year-old children. Experiment 1 revealed that the participants associated angry facial expressions with men and happy facial expressions with women. Experiment 2 showed that the participants associated angry facial expressions with 10-year-old boys and happy expressions with 10-year-old girls. Finally, Experiment 3 revealed that the participants associated happy facial expressions with 6-year-old girls but did not associate angry facial expressions with 6-year-old boys. These results showed that, within the Chinese cultural context, adults had gender-based facial expression stereotypes of adults and 10-year-old children; however, the adult participants did not have gender-based facial expression stereotypes of 6-year-old boys. This study has important implications for future research, as adults' perceptions of children are an important aspect of the study of social cognition in children.


Subjects
Emotions, Facial Expression, Adult, Child, Female, Humans, Male, Emotions/physiology, Happiness, Reaction Time, East Asian People
8.
BMC Psychiatry ; 24(1): 226, 2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38532335

ABSTRACT

BACKGROUND: Patients with schizophrenia (SCZ) exhibit deficits in recognizing facial expressions with unambiguous valence. However, only a limited number of studies have examined how these patients fare in interpreting facial expressions with ambiguous valence (for example, surprise). Thus, we aimed to explore the influence of emotional background information on the recognition of ambiguous facial expressions in SCZ. METHODS: A 3 (emotion: negative, neutral, and positive) × 2 (group: healthy controls and SCZ) experimental design was adopted in the present study. The experimental materials consisted of 36 images of negative emotions, 36 images of neutral emotions, 36 images of positive emotions, and 36 images of surprised facial expressions. In each trial, a briefly presented surprised face was preceded by an affective image. Participants (36 SCZ and 36 healthy controls (HC)) were required to rate the emotional experience induced by the surprised facial expressions on a 9-point rating scale. The experimental data were analyzed with analyses of variance (ANOVAs) and correlation analyses. RESULTS: First, the SCZ group reported a more positive emotional experience under the positive cued condition compared to the negative cued condition. Meanwhile, the HC group reported the strongest positive emotional experience in the positive cued condition, a moderate experience in the neutral cued condition, and the weakest in the negative cued condition. Second, the SCZ (vs. HC) group showed longer reaction times (RTs) for recognizing surprised facial expressions. The severity of schizophrenia symptoms in the SCZ group was negatively correlated with rating scores for emotional experience under the neutral and positive cued conditions. CONCLUSIONS: Recognition of surprised facial expressions was influenced by background information in both SCZ and HC, and by negative symptoms in SCZ. The present study indicates that the role of background information should be fully considered when examining the ability of SCZ patients to recognize ambiguous facial expressions.


Subjects
Facial Recognition, Schizophrenia, Humans, Emotions, Recognition (Psychology), Facial Expression, China
9.
BMC Psychiatry ; 24(1): 184, 2024 Mar 06.
Article in English | MEDLINE | ID: mdl-38448877

ABSTRACT

BACKGROUND: Eye contact is a fundamental part of social interaction. In clinical studies, it has been observed that patients suffering from depression make less eye contact during interviews than healthy individuals, which could be a factor contributing to their social functioning impairments. Similarly, results from mood induction studies with healthy persons indicate that attention to the eyes diminishes as a function of sad mood. The present screen-based eye-tracking study examined whether depressive symptoms in healthy individuals are associated with reduced visual attention to other persons' direct gaze during free viewing. METHODS: Gaze behavior of 44 individuals with depressive symptoms and 49 individuals with no depressive symptoms was analyzed in a free viewing task. Grouping was based on the Beck Depression Inventory, using the cut-off proposed by Hautzinger et al. (2006). Participants saw pairs of faces with direct gaze showing emotional or neutral expressions. One half of the face pairs was shown without face masks, whereas the other half was presented with face masks. Participants' dwell times and first fixation durations were analyzed. RESULTS: For unmasked faces, participants with depressive symptoms dwelled on the eyes for a shorter time than individuals without symptoms, across all expression conditions. No group difference in first fixation duration on the eyes of masked or unmasked faces was observed. Individuals with depressive symptoms dwelled longer on the mouth region of unmasked faces. For masked faces, no significant group differences in dwell time on the eyes were found. Moreover, when specifically examining dwell time on the eyes of faces with an emotional expression, there were also no significant differences between groups. Overall, participants gazed significantly longer at the eyes of masked than of unmasked faces. CONCLUSIONS: For faces without masks, our results suggest that depressiveness in healthy individuals goes along with less visual attention to other persons' eyes, but not with less visual attention to their faces. When factors that generally amplify attention to the eyes come into play, such as face masks or emotional expressions, no relationship between depressiveness and visual attention to the eyes can be established.


Subjects
Affect, Depression, Humans, Emotions, Health Status, Psychiatric Status Rating Scales
10.
J Exp Child Psychol ; 243: 105928, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38643735

ABSTRACT

Previous studies have shown that adults exhibit the strongest attentional bias toward neutral infant faces when viewing faces with different expressions at different attentional processing stages, owing to different stimulus presentation times. However, it is not clear how the temporal characteristics associated with this strongest effect change over time. Thus, we combined a free-viewing task with eye-tracking technology to measure adults' attentional bias toward infant and adult faces with happy, neutral, and sad expressions of the same face. The analysis of the total time course indicated that the strongest effect occurred during the strategic processing stage. However, the analysis of the split time course revealed that sad infant faces first elicited adults' attentional bias at 0 to 500 ms, whereas the strongest effect of attentional bias toward neutral infant faces was observed at 1000 to 3000 ms, peaking at 1500 to 2000 ms. In addition, women and men did not differ in their responses to the different expressions. In summary, this study provides further evidence that adults' attentional bias toward infant faces across stages of attentional processing is modulated by expression. Specifically, during automatic processing adults' attentional bias was directed toward sad infant faces, followed by a shift to the processing of neutral infant faces during strategic processing, which ultimately produced the strongest effect. These findings highlight that this strongest effect is dynamic and associated with a specific time window in strategic processing.
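The split-time-course analysis can be sketched as dwell-time computation within fixed windows; the fixation tuple format and the bias score below are illustrative assumptions, not the study's exact pipeline:

```python
def dwell_time(fixations, aoi, t_start, t_end):
    """Total fixation time (ms) on an area of interest within a time window.

    `fixations`: iterable of (onset_ms, offset_ms, aoi_label) tuples, a
    hypothetical export format for eye-tracking data.
    """
    total = 0.0
    for onset, offset, label in fixations:
        if label == aoi:
            total += max(0.0, min(offset, t_end) - max(onset, t_start))
    return total

# Attentional-bias score for one trial in the 0-500 ms window:
fix = [(120, 480, "infant_sad"), (520, 900, "adult_sad")]
bias = dwell_time(fix, "infant_sad", 0, 500) - dwell_time(fix, "adult_sad", 0, 500)
print(bias)  # positive = bias toward the infant face in this window
```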


Subjects
Attentional Bias, Facial Expression, Facial Recognition, Humans, Female, Male, Attentional Bias/physiology, Young Adult, Adult, Facial Recognition/physiology, Infant, Eye-Tracking Technology, Attention, Time Factors
11.
Proc Natl Acad Sci U S A ; 118(33)2021 08 17.
Article in English | MEDLINE | ID: mdl-34385326

ABSTRACT

The last two decades have established that a network of face-selective areas in the temporal lobe of macaque monkeys supports the visual processing of faces. Each area within the network contains a large fraction of face-selective cells. And each area encodes facial identity and head orientation differently. A recent brain-imaging study discovered an area outside of this network selective for naturalistic facial motion, the middle dorsal (MD) face area. This finding offers the opportunity to determine whether coding principles revealed inside the core network would generalize to face areas outside the core network. We investigated the encoding of static faces and objects, facial identity, and head orientation, dimensions which had been studied in multiple areas of the core face-processing network before, as well as facial expressions and gaze. We found that MD populations form a face-selective cluster with a degree of selectivity comparable to that of areas in the core face-processing network. MD encodes facial identity robustly across changes in head orientation and expression, it encodes head orientation robustly against changes in identity and expression, and it encodes expression robustly across changes in identity and head orientation. These three dimensions are encoded in a separable manner. Furthermore, MD also encodes the direction of gaze in addition to head orientation. Thus, MD encodes both structural properties (identity) and changeable ones (expression and gaze) and thus provides information about another animal's direction of attention (head orientation and gaze). MD contains a heterogeneous population of cells that establish a multidimensional code for faces.


Subjects
Facial Expression, Facial Recognition/physiology, Fixation, Ocular/physiology, Visual Perception/physiology, Animals, Electrophysiological Phenomena, Humans, Macaca mulatta, Magnetic Resonance Imaging, Male, Pattern Recognition, Visual/physiology
12.
Article in English | MEDLINE | ID: mdl-39313880

ABSTRACT

BACKGROUND: Prader-Willi syndrome (PWS) is a congenital disease caused by a rare and generally non-inherited genetic disorder. The inability to recognise facial expressions of emotion is an apparent social cognition deficit in people diagnosed with PWS. The main objective of the present study is to compare the ability to recognise emotional facial expression, in both non-contextualised and contextualised scenarios, among the main subtypes of PWS and a control group. METHODS: The sample consisted of 46 children divided into three groups: deletion (n = 10), maternal uniparental disomy (mUPD) (n = 13) and control (n = 23). The protocol included the Facially Expressed Emotion Labeling and the Deusto-e-Motion 1.0. RESULTS: The control group recognised facial emotions more accurately and quickly in both non-contextualised and contextualised scenarios than children with PWS, regardless of genetic subtype. Despite no differences being detected between PWS subtypes when non-contextualised scenarios were analysed, in contextualised situations, a longer reaction time was observed in children with the mUPD subtype. CONCLUSIONS: This is the first study to assess the ability to recognise emotional facial expressions in contextualised situations among PWS subtypes and a control group. The findings suggest that some of the social cognitive deficits evidenced in children with mUPD PWS may be similar to those in autism spectrum disorder.

13.
Cogn Emot ; 38(1): 187-197, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37731376

ABSTRACT

This study investigated the emotional and behavioural effects of looming threats using both recalled (self-reported valence) and real-time response measurements (facial expressions). The looming bias refers to the tendency to underestimate the time of arrival of rapidly approaching (looming) stimuli, providing additional time for defensive reactions. While previous research has shown negative emotional responses to looming threats based on self-reports after stimulus exposure, facial expressions offer valuable insights into emotional experiences and non-verbal behaviour during stimulus exposure. A face reading experiment examined responses to threats in motion, considering stimulus direction (looming versus receding motion) and threat strength (more versus less threatening stimuli). We also explored the added value of facial expression recognition compared to self-reported valence. Results indicated that looming threats elicit more negative facial expressions than receding threats, supporting previous findings on the looming bias. Further, more (vs. less) threatening stimuli evoked more negative facial expressions, but only when the threats were looming rather than receding. Interestingly, facial expressions of valence and self-reported valence showed opposing results, suggesting the importance of incorporating facial expression recognition to understand defensive responses to looming threats more comprehensively.


Subjects
Facial Recognition, Fear, Humans
14.
Sensors (Basel) ; 24(13)2024 Jun 26.
Article in English | MEDLINE | ID: mdl-39000930

ABSTRACT

Convolutional neural networks (CNNs) have made significant progress in the field of facial expression recognition (FER). However, due to challenges such as occlusion, lighting variations, and changes in head pose, facial expression recognition in real-world environments remains highly challenging. At the same time, methods based solely on CNNs rely heavily on local spatial features, lack global information, and struggle to balance computational complexity against recognition accuracy. Consequently, CNN-based models still fall short of addressing FER adequately. To address these issues, we propose a lightweight facial expression recognition method based on a hybrid vision transformer. This method captures multi-scale facial features through an improved attention module, achieving richer feature integration, enhancing the network's perception of key facial expression regions, and improving feature extraction capabilities. Additionally, to further enhance the model's performance, we have designed the patch dropping (PD) module. This module emulates the attention allocation mechanism of the human visual system for local features, guiding the network to focus on the most discriminative features, reducing the influence of irrelevant features, and lowering computational costs. Extensive experiments demonstrate that our approach significantly outperforms other methods, achieving an accuracy of 86.51% on RAF-DB and nearly 70% on FER2013, with a model size of only 3.64 MB. These results show that our method offers a new perspective on the field of facial expression recognition.
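The abstract gives no implementation details for the PD module, but one plausible reading, dropping the least salient patch tokens before the transformer layers, can be sketched as follows; the saliency measure (token norm) and keep ratio are assumptions:

```python
import torch
import torch.nn as nn

class PatchDrop(nn.Module):
    """Keep only the top-k most salient patch tokens (hypothetical sketch)."""

    def __init__(self, keep_ratio: float = 0.7):
        super().__init__()
        self.keep_ratio = keep_ratio

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_patches, dim); saliency = L2 norm per token
        b, n, d = tokens.shape
        k = max(1, int(n * self.keep_ratio))
        scores = tokens.norm(dim=-1)                # (batch, num_patches)
        idx = scores.topk(k, dim=1).indices         # k most salient per sample
        idx = idx.unsqueeze(-1).expand(-1, -1, d)   # (batch, k, dim)
        return tokens.gather(1, idx)                # later layers see fewer tokens

x = torch.randn(8, 196, 64)      # 14x14 patches, 64-dim embeddings
print(PatchDrop(0.7)(x).shape)   # torch.Size([8, 137, 64])
```

Dropping tokens this way reduces the quadratic attention cost, which is consistent with the computational savings the abstract reports.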


Subjects
Facial Expression, Neural Networks (Computer), Humans, Automated Facial Recognition/methods, Algorithms, Image Processing, Computer-Assisted/methods, Face, Pattern Recognition, Automated/methods
15.
Sensors (Basel) ; 24(17)2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39275635

ABSTRACT

In this paper, we study facial expression recognition (FER) using three modalities obtained from a light field camera: sub-aperture (SA), depth map, and all-in-focus (AiF) images. Our objective is to construct a more comprehensive and effective FER system by investigating multimodal fusion strategies. For this purpose, we employ EfficientNetV2-S, pre-trained on AffectNet, as our primary convolutional neural network. This model, combined with a BiGRU, is used to process SA images. We evaluate various fusion techniques at both decision and feature levels to assess their effectiveness in enhancing FER accuracy. Our findings show that the model using SA images surpasses state-of-the-art performance, achieving 88.13% ± 7.42% accuracy under the subject-specific evaluation protocol and 91.88% ± 3.25% under the subject-independent evaluation protocol. These results highlight our model's potential in enhancing FER accuracy and robustness, outperforming existing methods. Furthermore, our multimodal fusion approach, integrating SA, AiF, and depth images, demonstrates substantial improvements over unimodal models. The decision-level fusion strategy, particularly using average weights, proved most effective, achieving 90.13% ± 4.95% accuracy under the subject-specific evaluation protocol and 93.33% ± 4.92% under the subject-independent evaluation protocol. This approach leverages the complementary strengths of each modality, resulting in a more comprehensive and accurate FER system.
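The winning decision-level strategy, averaging the modality-specific class probabilities, is simple enough to show directly; the branch names come from the abstract, everything else is an illustrative assumption:

```python
import numpy as np

def decision_level_fusion(prob_sa, prob_aif, prob_depth):
    """Average per-class probabilities from three modality branches, then argmax."""
    fused = (prob_sa + prob_aif + prob_depth) / 3.0
    return fused.argmax(axis=-1)

# Hypothetical softmax outputs for 2 samples x 7 expression classes:
rng = np.random.default_rng(1)
p_sa, p_aif, p_depth = (rng.dirichlet(np.ones(7), size=2) for _ in range(3))
print(decision_level_fusion(p_sa, p_aif, p_depth))
```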


Subjects
Facial Expression, Neural Networks (Computer), Humans, Image Processing, Computer-Assisted/methods, Automated Facial Recognition/methods, Algorithms, Pattern Recognition, Automated/methods
16.
Sensors (Basel) ; 24(18)2024 Sep 10.
Article in English | MEDLINE | ID: mdl-39338612

ABSTRACT

Facial expression recognition using convolutional neural networks (CNNs) is a prevalent research area, but network complexity poses obstacles to deployment on devices with limited computational resources, such as mobile devices. To address these challenges, researchers have developed lightweight networks that reduce model size and parameter counts without compromising accuracy. The LiteFer method introduced in this study incorporates depthwise-separable convolution and a lightweight attention mechanism, effectively reducing network parameters. Comprehensive comparative experiments on the RAF-DB and FERPlus datasets demonstrate its superior performance over various state-of-the-art lightweight expression-recognition methods.
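Depthwise-separable convolution, the main parameter-saving ingredient named above, factorizes a standard convolution into a per-channel spatial filter plus a 1x1 channel mixer. A minimal PyTorch sketch (LiteFer's exact block design is not specified here):

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise conv (groups=channels) followed by a 1x1 pointwise conv."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# 64->128 channels costs ~64*9 + 64*128 weights instead of 64*128*9.
print(DepthwiseSeparableConv(64, 128)(torch.randn(1, 64, 56, 56)).shape)
```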


Subjects
Neural Networks (Computer), Humans, Algorithms, Facial Expression, Pattern Recognition, Automated/methods
17.
Sensors (Basel) ; 24(7)2024 Apr 04.
Article in English | MEDLINE | ID: mdl-38610510

ABSTRACT

The perception of sound greatly impacts users' emotional states, expectations, affective relationships with products, and purchase decisions. Consequently, assessing the perceived quality of sounds through jury testing is crucial in product design. However, the subjective nature of jurors' responses may limit the accuracy and reliability of jury test outcomes. This research explores the utility of facial expression analysis in jury testing to enhance response reliability and mitigate subjectivity. Several quantitative indicators validate the research hypothesis: the correlation between jurors' emotional responses and valence values, the accuracy of the jury tests, and the disparities between jurors' questionnaire responses and the emotions measured by FER (facial expression recognition). Specifically, analysis of attention levels across attentional states reveals a discernible decrease, with 70 percent of jurors exhibiting reduced attention in the 'distracted' state and 62 percent in the 'heavy-eyed' state. Regression analysis, in turn, shows that the correlation between jurors' valence and their choices in the jury test increases when only data from attentive jurors are considered. This correlation highlights the potential of facial expression analysis as a reliable tool for assessing juror engagement. The findings suggest that integrating facial expression recognition can enhance the accuracy of jury testing in product design by providing a more dependable assessment of user responses and deeper insights into participants' reactions to auditory stimuli.
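The attentiveness-conditioned correlation described above amounts to computing Pearson's r on the full sample and again on the attentive subset; the sketch below uses fabricated numbers purely to show the shape of the analysis:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-observation data: FER-estimated valence during a sound,
# the juror's preference rating for that sound, and an attentiveness flag.
valence    = np.array([0.42, -0.10, 0.35, 0.05, -0.22, 0.50, 0.18, -0.05])
preference = np.array([4, 2, 4, 3, 1, 5, 3, 2])
attentive  = np.array([1, 1, 1, 0, 1, 1, 0, 1], dtype=bool)

r_all, _ = pearsonr(valence, preference)
r_att, _ = pearsonr(valence[attentive], preference[attentive])
print(f"all jurors: r = {r_all:.2f}; attentive only: r = {r_att:.2f}")
```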


Subjects
Facial Recognition, Humans, Reproducibility of Results, Acoustics, Sound, Emotions
18.
Sensors (Basel) ; 24(16)2024 Aug 21.
Article in English | MEDLINE | ID: mdl-39205085

ABSTRACT

In recent years, significant progress has been made in facial expression recognition methods. However, facial expression recognition in real environments still requires further research. This paper proposes a tri-cross-attention transformer with a multi-feature fusion network (TriCAFFNet) to improve facial expression recognition performance under challenging conditions. By combining LBP (Local Binary Pattern) features, HOG (Histogram of Oriented Gradients) features, landmark features, and CNN (convolutional neural network) features from facial images, the model is provided with a rich input that improves its ability to discern subtle differences between images. Additionally, tri-cross-attention blocks are designed to facilitate information exchange between the different features, enabling them to guide one another in capturing salient attention. Extensive experiments on several widely used datasets show that TriCAFFNet achieves state-of-the-art (SOTA) performance: 92.17% on RAF-DB, 67.40% on AffectNet (7 classes), and 63.49% on AffectNet (8 classes).
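One leg of a cross-attention exchange between two feature streams can be sketched as follows; the tri-branch wiring, dimensions, and token counts are assumptions rather than TriCAFFNet's published architecture:

```python
import torch
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    """One feature stream queries another, with a residual connection."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_feats, context_feats):
        # query_feats, context_feats: (batch, tokens, dim)
        attended, _ = self.attn(query_feats, context_feats, context_feats)
        return self.norm(query_feats + attended)

# e.g., CNN tokens attending to handcrafted (LBP/HOG) tokens:
cnn_tok = torch.randn(8, 49, 256)
lbp_tok = torch.randn(8, 49, 256)
print(CrossAttentionBlock()(cnn_tok, lbp_tok).shape)  # torch.Size([8, 49, 256])
```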


Subjects
Facial Expression, Neural Networks (Computer), Humans, Algorithms, Image Processing, Computer-Assisted/methods, Face/anatomy & histology, Automated Facial Recognition/methods, Pattern Recognition, Automated/methods
19.
Eur Eat Disord Rev ; 32(5): 917-929, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38708578

ABSTRACT

OBJECTIVE: The study investigated interpersonal distance in patients with anorexia nervosa (AN), focusing on the role of others' facial expressions and body morphology, and also assessing physiological and subjective responses. METHOD: Twenty-nine patients with AN and 30 controls (CTL) were exposed to virtual characters with either an angry, neutral, or happy facial expression, or with an overweight, normal-weight, or underweight morphology, presented in either the near or the far space, while we recorded electrodermal activity. Participants judged their preferred interpersonal distance from the characters and rated them in terms of valence and arousal. RESULTS: Unlike CTL, patients with AN exhibited heightened electrodermal activity only for morphological stimuli, and only when these were presented in the near space. They also preferred larger interpersonal distances from overweight characters and smaller distances from underweight characters, although they rated both negatively. Finally, like CTL, they preferred larger interpersonal distances from angry than from neutral or happy characters. DISCUSSION: Although patients with AN exhibited behavioural responses to emotional stimuli similar to those of CTL, they lacked the corresponding physiological response, indicating emotional blunting towards emotional social stimuli. Moreover, they showed distinct behavioural and physiological adjustments in response to body shape, confirming the specific emotional significance attached to body shape.


Subjects
Anorexia Nervosa, Emotions, Facial Expression, Humans, Anorexia Nervosa/psychology, Female, Adult, Emotions/physiology, Young Adult, Body Image/psychology, Interpersonal Relations, Galvanic Skin Response/physiology, Adolescent, Psychological Distance
20.
Vet Anaesth Analg ; 51(5): 531-538, 2024.
Article in English | MEDLINE | ID: mdl-39142979

ABSTRACT

OBJECTIVE: To clinically evaluate previously developed pain scales [Donkey Chronic Pain Composite Pain Scale (DCP-CPS), Donkey Chronic Pain Facial Assessment of Pain (DCP-FAP) and combined Donkey Chronic Pain Scale (DCPS)], including behavioural and facial expression-based variables, for the assessment of chronic pain in donkeys. STUDY DESIGN: Prospective, blinded clinical study. ANIMALS: A group of 77 donkeys (34 patients and 43 healthy control animals). METHODS: Animals were assessed by two observers who were blinded to the animals' condition. RESULTS: DCP-CPS, DCP-FAP and the resulting combined DCPS scores all showed good interobserver reliability [intraclass correlation coefficient (ICC) = 0.91, 95% confidence interval (CI) = 0.86-0.95, p < 0.001; ICC = 0.71, CI = 0.50-0.83, p < 0.001; and ICC = 0.84, CI = 0.72-0.91, p < 0.001, respectively]. All scores (DCP-CPS, DCP-FAP and the resulting combined DCPS) were significantly higher for patients than for controls at all time points (p < 0.001 for all three scales). Sensitivity and specificity for identification of pain (cut-off value > 3) were 73.0% and 65.1% for DCP-CPS, and 60.9% and 83.3% for DCP-FAP, respectively. For the combined DCPS (cut-off value > 6), sensitivity was 87.0% and specificity 90.9%. CONCLUSIONS AND CLINICAL RELEVANCE: Based on behavioural and facial expression-based variables, the DCPS proved a promising and reproducible tool for assessing different types of chronic pain in donkeys. The combination of behavioural and facial expression-based variables showed the best discriminatory characteristics in the current study. Further studies are needed to refine these tools.
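The reported sensitivity/specificity figures follow from a simple threshold rule on the scale scores; a hedged sketch with fabricated scores (cut-off > 6, as for the combined DCPS):

```python
import numpy as np

def sens_spec(scores, is_patient, cutoff):
    """Sensitivity and specificity when scores above `cutoff` count as pain."""
    scores = np.asarray(scores)
    is_patient = np.asarray(is_patient, dtype=bool)
    flagged = scores > cutoff
    sensitivity = flagged[is_patient].mean()       # patients correctly flagged
    specificity = (~flagged[~is_patient]).mean()   # controls correctly cleared
    return sensitivity, specificity

# Toy DCPS-style scores for 5 patients and 5 controls:
scores     = [9, 7, 8, 4, 10, 2, 3, 1, 7, 0]
is_patient = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
print(sens_spec(scores, is_patient, cutoff=6))  # (0.8, 0.8)
```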


Subjects
Chronic Pain, Equidae, Pain Measurement, Animals, Chronic Pain/veterinary, Pain Measurement/veterinary, Pain Measurement/methods, Female, Male, Prospective Studies, Facial Expression, Animal Behavior, Reproducibility of Results, Sensitivity and Specificity