Results 1 - 20 of 774
1.
Proc Natl Acad Sci U S A ; 119(45): e2201380119, 2022 11 08.
Article in English | MEDLINE | ID: mdl-36322724

ABSTRACT

Emotional communication relies on a mutual understanding, between expresser and viewer, of facial configurations that broadcast specific emotions. However, we do not know whether people share a common understanding of how emotional states map onto facial expressions. This is because expressions exist in a high-dimensional space too large to explore in conventional experimental paradigms. Here, we address this by adapting genetic algorithms and combining them with photorealistic three-dimensional avatars to efficiently explore the high-dimensional expression space. A total of 336 people used these tools to generate facial expressions that represent happiness, fear, sadness, and anger. We found substantial variability in the expressions generated via our procedure, suggesting that different people associate different facial expressions with the same emotional state. We then examined whether variability in the facial expressions created could account for differences in performance on standard emotion recognition tasks by asking people to categorize different test expressions. We found that emotion categorization performance was explained by the extent to which test expressions matched the expressions generated by each individual. Our findings reveal the breadth of variability in people's representations of facial emotions, even among typical adult populations. This has profound implications for the interpretation of responses to emotional stimuli, which may reflect individual differences in the emotional category people attribute to a particular facial expression, rather than differences in the brain mechanisms that produce emotional responses.
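
A minimal sketch of the human-in-the-loop genetic algorithm this abstract describes, assuming a hypothetical 42-dimensional avatar expression parameterization; in the study, fitness came from participants judging rendered avatar faces, which the placeholder `rate_population` only stands in for:

```python
import numpy as np

rng = np.random.default_rng(0)
N_PARAMS = 42      # hypothetical size of the avatar's expression parameter vector
POP_SIZE = 20
MUT_SD = 0.05

def rate_population(pop):
    """Placeholder for the human-in-the-loop step: participants would rate
    rendered avatar faces; here we return random scores."""
    return rng.random(len(pop))

def evolve(pop, scores, elite=4):
    """One generation: keep the top-rated expressions, recombine, and mutate."""
    order = np.argsort(scores)[::-1]
    parents = pop[order[:elite]]
    children = []
    while len(children) < len(pop) - elite:
        a, b = parents[rng.integers(elite, size=2)]
        mask = rng.random(N_PARAMS) < 0.5          # uniform crossover
        child = np.where(mask, a, b) + rng.normal(0, MUT_SD, N_PARAMS)
        children.append(np.clip(child, 0.0, 1.0))  # keep parameters in range
    return np.vstack([parents, children])

pop = rng.random((POP_SIZE, N_PARAMS))             # random initial expressions
for generation in range(10):
    pop = evolve(pop, rate_population(pop))
```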


Subject(s)
Facial Recognition, Individuality, Adult, Humans, Facial Expression, Emotions/physiology, Anger/physiology, Algorithms
2.
J Neurosci ; 43(19): 3477-3494, 2023 05 10.
Article in English | MEDLINE | ID: mdl-37001990

ABSTRACT

The correct identification of facial expressions is critical for understanding the intentions of others during social communication in the daily life of all primates. Here we used ultra-high-field fMRI at 9.4 T to investigate the neural network activated by facial expressions in awake New World common marmosets of both sexes, and to determine the effect of facial motion on this network. We further explored how the face-patch network is involved in the processing of facial expressions. Our results show that dynamic and static facial expressions activate face patches in temporal and frontal areas (O, PV, PD, MD, AD, and PL) as well as in the amygdala, with stronger responses for negative faces, which were also associated with an increase in the monkeys' respiration rates. Processing of dynamic facial expressions involves an extended network recruiting additional regions not known to be part of the face-processing network, suggesting that facial motion may facilitate the recognition of facial expressions. We report for the first time in New World marmosets that the perception and identification of changeable facial expressions, vital for social communication, recruit face-selective brain patches also involved in face detection and are associated with increased arousal. SIGNIFICANCE STATEMENT: Recent research in humans and nonhuman primates has highlighted the importance of correctly recognizing and processing facial expressions to understand others' emotions in social interactions. The current study focuses on fMRI responses to emotional facial expressions in the common marmoset (Callithrix jacchus), a New World primate species that shares several social behaviors with humans. Our results reveal that temporal and frontal face patches are involved in both basic face detection and facial expression processing. The specific recruitment of these patches for negative faces, together with the associated increase in arousal, shows that marmosets process the facial expressions of their conspecifics, a capacity vital for social communication.


Subject(s)
Callithrix, Facial Expression, Humans, Animals, Male, Female, Brain Mapping, Brain/diagnostic imaging, Brain/physiology, Emotions/physiology, Magnetic Resonance Imaging
3.
Eur J Neurosci ; 60(6): 5217-5233, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39138605

ABSTRACT

Actions are rarely devoid of emotional content. Thus, a more complete picture of the neural mechanisms underlying the mental simulation of observed actions requires more research using emotion information. The present study used high-density electroencephalography to investigate mental simulation associated with facial emotion categorisation. Alpha-mu rhythm modulation was measured at each frequency, from 8 Hz to 13 Hz, to infer the degree of sensorimotor simulation. Results suggest that sensorimotor activity is sensitive to emotional information, because (1) categorising static images of neutral faces as happy or sad was associated with stronger suppression in the central region than categorising clearly happy faces, (2) there was preliminary evidence that the strongest suppression in the central region was in response to neutral faces, followed by sad and then happy faces, and (3) in the control task, which required categorising images with the head oriented right, left, or forward as right or left, differences between conditions showed a pattern more indicative of task difficulty than of sensorimotor engagement. Dissociable processing of emotional information in facial expressions and directionality information in head orientations was further captured in beta-band activity (14-20 Hz). Stronger mu suppression to neutral faces indicates that sensorimotor simulation extends beyond crude motor mimicry. We propose that mu rhythm responses to facial expressions may serve as a biomarker for empathy circuit activation. Future research should investigate whether atypical or inconsistent mu rhythm responses to facial expressions indicate difficulties in understanding or sharing emotions.
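
A minimal sketch of how mu-band (8-13 Hz) suppression of the kind reported here is commonly quantified: band power during the task relative to a baseline period. The sampling rate, synthetic signals, and single-channel setup are illustrative assumptions; the study's per-frequency, multi-channel analysis is not reproduced:

```python
import numpy as np
from scipy.signal import welch

FS = 500  # hypothetical sampling rate in Hz

def band_power(x, lo, hi, fs=FS):
    """Mean Welch power in a frequency band."""
    f, pxx = welch(x, fs=fs, nperseg=fs * 2)
    return pxx[(f >= lo) & (f <= hi)].mean()

rng = np.random.default_rng(1)
baseline = rng.normal(size=FS * 4)   # stand-ins for central-channel EEG
task     = rng.normal(size=FS * 4)

# Suppression index: negative values mean mu power drops during the task,
# conventionally read as stronger sensorimotor engagement.
mu_suppression = np.log(band_power(task, 8, 13) / band_power(baseline, 8, 13))
print(f"mu suppression index: {mu_suppression:.3f}")
```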


Subject(s)
Emotions, Facial Expression, Facial Recognition, Humans, Female, Male, Adult, Emotions/physiology, Young Adult, Facial Recognition/physiology, Electroencephalography/methods
4.
Hum Brain Mapp ; 45(5): e26673, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38590248

ABSTRACT

The amygdala is important for human fear processing. However, recent research has failed to reveal specificity, with evidence that the amygdala also responds to other emotions. A more nuanced understanding of the amygdala's role in emotion processing, particularly relating to fear, is needed given the importance of effective emotional functioning for everyday life and mental health. We studied 86 healthy participants (44 females), aged 18-49 (mean 26.12 ± 6.6) years, who underwent multiband functional magnetic resonance imaging. We specifically examined the reactivity of four amygdala subregions (using region-of-interest analysis) and related brain connectivity networks (using generalized psychophysiological interaction) to fearful, angry, and happy facial stimuli in an emotional face-matching task. All amygdala subregions responded to all stimuli (p-FDR < .05), with this reactivity strongly driven by the superficial and centromedial amygdala (p-FDR < .001). Yet amygdala subregions selectively showed strong functional connectivity with other occipitotemporal and inferior frontal brain regions, with particular sensitivity to fear recognition, strongly driven by the basolateral amygdala (p-FDR < .05). These findings suggest that amygdala specialization for fear may not be reflected in its local activity but in its connectivity with other brain regions within a specific face-processing network.


Subject(s)
Brain, Emotions, Female, Humans, Emotions/physiology, Fear/psychology, Amygdala/physiology, Happiness, Brain Mapping/methods, Magnetic Resonance Imaging, Facial Expression
5.
Hum Brain Mapp ; 45(14): e70040, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39394899

ABSTRACT

Growing evidence suggests that conceptual knowledge influences emotion perception, yet the neural mechanisms underlying this effect are not fully understood. Recent studies have shown that brain representations of facial emotion categories in visual-perceptual areas are predicted by conceptual knowledge, but it remains to be seen if auditory regions are similarly affected. Moreover, it is not fully clear whether these conceptual influences operate at a modality-independent level. To address these questions, we conducted a functional magnetic resonance imaging study presenting participants with both facial and vocal emotional stimuli. This dual-modality approach allowed us to investigate effects on both modality-specific and modality-independent brain regions. Using univariate and representational similarity analyses, we found that brain representations in both visual (middle and lateral occipital cortices) and auditory (superior temporal gyrus) regions were predicted by conceptual understanding of emotions for faces and voices, respectively. Additionally, we discovered that conceptual knowledge also influenced supra-modal representations in the superior temporal sulcus. Dynamic causal modeling revealed a brain network showing both bottom-up and top-down flows, suggesting a complex interplay of modality-specific and modality-independent regions in emotional processing. These findings collectively indicate that the neural representations of emotions in both sensory-perceptual and modality-independent regions are likely shaped by each individual's conceptual knowledge.
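
A minimal sketch of the representational similarity analysis (RSA) logic this study relies on: correlate a neural representational dissimilarity matrix (RDM) with an RDM derived from conceptual emotion ratings. The data below are random stand-ins, and the study's actual ROI definitions and model details are omitted:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_emotions, n_voxels, n_ratings = 6, 200, 10

neural_patterns = rng.normal(size=(n_emotions, n_voxels))   # stand-in ROI patterns
concept_ratings = rng.normal(size=(n_emotions, n_ratings))  # stand-in conceptual ratings

# Representational dissimilarity matrices as condensed vectors (upper triangles)
neural_rdm  = pdist(neural_patterns, metric="correlation")
concept_rdm = pdist(concept_ratings, metric="correlation")

# RSA statistic: rank correlation between the two RDMs
rho, p = spearmanr(neural_rdm, concept_rdm)
print(f"neural-conceptual RSA: rho={rho:.3f}, p={p:.3f}")
```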


Subject(s)
Brain Mapping, Emotions, Magnetic Resonance Imaging, Humans, Emotions/physiology, Female, Male, Young Adult, Adult, Facial Recognition/physiology, Auditory Perception/physiology, Brain/physiology, Brain/diagnostic imaging, Concept Formation/physiology, Facial Expression, Visual Perception/physiology
6.
Cerebellum ; 23(2): 545-553, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37285048

ABSTRACT

Recent studies have bolstered the important role of the cerebellum in high-level socio-affective functions. In particular, neuroscientific evidence shows that the posterior cerebellum is involved in social cognition and emotion processing, presumably through its involvement in temporal processing and in predicting the outcomes of social sequences. We used cerebellar transcranial random noise stimulation (ctRNS) targeting the posterior cerebellum to modulate the performance of 32 healthy participants during an emotion discrimination task including both static and dynamic facial expressions (i.e., transitioning from a static neutral image to a happy/sad emotion). Compared to the sham condition, ctRNS significantly reduced participants' accuracy in discriminating static sad facial expressions, but increased their accuracy in discriminating dynamic sad facial expressions. No effects emerged with happy faces. These findings may suggest the existence of two different circuits in the posterior cerebellum for the processing of negative emotional stimuli: a first, time-independent mechanism that can be selectively disrupted by ctRNS, and a second, time-dependent mechanism of predictive "sequence detection" that can be selectively enhanced by ctRNS. This latter mechanism might be included among the cerebellar operational models constantly engaged in the rapid adjustment of social predictions based on dynamic behavioral information inherent to others' actions. We speculate that it might be one of the basic principles underlying the understanding of other individuals' social and emotional behaviors during interactions.


Subject(s)
Cerebellum, Emotions, Humans, Emotions/physiology, Cerebellum/physiology, Facial Expression
7.
Acta Psychiatr Scand ; 2024 Aug 12.
Article in English | MEDLINE | ID: mdl-39135341

ABSTRACT

BACKGROUND: Facial expressions are a core aspect of non-verbal communication. Reduced emotional expressiveness of the face is a common negative symptom of schizophrenia; however, quantifying negative symptoms can be clinically challenging and involves a considerable element of rater subjectivity. We used computer vision to investigate whether (i) automated assessment of facial expressions captures negative as well as positive and general symptom domains, and (ii) automated assessments are associated with treatment response in initially antipsychotic-naïve patients with first-episode psychosis. METHOD: We included 46 patients (mean age 25.4 (6.1); 65.2% males). Psychopathology was assessed at baseline and after 6 weeks of monotherapy with amisulpride using the Positive and Negative Syndrome Scale (PANSS). Baseline interview videos were recorded. Seventeen facial action units (AUs), that is, activations of specific facial muscles, from the Facial Action Coding System were extracted using OpenFace 2.0. A correlation matrix was calculated for each patient. Facial expressions were identified using spectral clustering at group level. Associations between facial expressions and psychopathology were investigated using multiple linear regression. RESULTS: Three clusters of facial expressions were identified, related to different locations of the face. Cluster 1 was associated with positive and general symptoms at baseline, Cluster 2 was associated with all symptom domains, showing the strongest association with the negative domain, and Cluster 3 was associated only with general symptoms. Cluster 1 was significantly associated with clinically rated improvement in positive and general symptoms after treatment, and Cluster 2 was significantly associated with clinical improvement in all domains. CONCLUSION: Automated computer vision of facial expressions during PANSS interviews captured not only negative symptoms but also combinations of the three overall domains of psychopathology. Moreover, automated assessments of facial expressions at baseline were associated with initial antipsychotic treatment response. The findings underscore the clinical relevance of facial expressions and motivate further investigation of computer vision in clinical psychiatry.
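
A minimal sketch of the pipeline this abstract describes (per-patient AU correlation matrices followed by group-level spectral clustering), using random stand-ins for the OpenFace 2.0 AU time series; the cluster count and preprocessing here are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(3)
n_patients, n_frames, n_aus = 46, 1000, 17

# Stand-ins for per-frame AU intensity traces exported by OpenFace 2.0
per_patient_corr = [
    np.corrcoef(rng.normal(size=(n_aus, n_frames)))
    for _ in range(n_patients)
]

# Group-level affinity: average AU-by-AU correlation, shifted into [0, 1]
mean_corr = np.mean(per_patient_corr, axis=0)
affinity = (mean_corr + 1.0) / 2.0

labels = SpectralClustering(
    n_clusters=3, affinity="precomputed", random_state=0
).fit_predict(affinity)
print("AU cluster assignments:", labels)
```

Each resulting cluster groups AUs that tend to activate together; cluster-level activation could then enter a multiple linear regression against PANSS scores, as in the study.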

8.
Cereb Cortex ; 33(13): 8421-8430, 2023 06 20.
Article in English | MEDLINE | ID: mdl-37154618

ABSTRACT

Increasing evidence indicates that the brain predicts sensory input based on past experiences, importantly constraining how we experience the world. Despite growing interest in this framework, known as predictive coding, most applications of it across psychological domains remain theoretical or provide primarily correlational evidence. We here explored the neural basis of predictive processing using noninvasive brain stimulation and provide causal evidence of frequency-specific modulations in humans. Participants received 20 Hz (associated with top-down predictions), 50 Hz (associated with bottom-up prediction errors), or sham transcranial alternating current stimulation over the left dorsolateral prefrontal cortex while performing a social perception task in which facial expression predictions were induced and subsequently confirmed or violated. Left prefrontal 20 Hz stimulation reinforced stereotypical predictions. In contrast, 50 Hz and sham stimulation failed to yield any significant behavioral effects. Moreover, the frequency-specific effect observed was further supported by electroencephalography data, which showed a boost of brain activity at the stimulated frequency band. These observations provide causal evidence for how predictive processing may be enabled in the human brain, setting up a needed framework to understand how it may be disrupted across brain-related conditions and potentially restored through noninvasive methods.


Subject(s)
Brain, Transcranial Direct Current Stimulation, Humans, Brain/physiology, Electroencephalography/methods, Dorsolateral Prefrontal Cortex, Prefrontal Cortex/physiology
9.
Arch Sex Behav ; 53(1): 223-233, 2024 01.
Article in English | MEDLINE | ID: mdl-37626260

ABSTRACT

This study explored the facial expression stereotypes of adult men and women within the Chinese cultural context and investigated whether adult participants held facial expression stereotypes of children aged 6 and 10 years. Three experiments were conducted with 156 adult Chinese university students. Experiment 1 explored whether adult participants had facial expression stereotypes of adult men and women. In Experiment 1a, the participants imagined a happy or angry adult face and stated the gender of the imagined face. In Experiment 1b, the participants were asked to quickly judge the gender of happy or angry adult faces, and their response times were recorded. Experiments 2 and 3 explored whether adults apply the stereotypes of adult men and women to 10-year-old and 6-year-old children. Experiment 1 revealed that the participants associated angry facial expressions with men and happy facial expressions with women. Experiment 2 showed that the participants associated angry facial expressions with 10-year-old boys and happy expressions with 10-year-old girls. Finally, Experiment 3 revealed that the participants associated happy facial expressions with 6-year-old girls but did not associate angry facial expressions with 6-year-old boys. These results showed that, within the Chinese cultural context, adults held gender-based facial expression stereotypes of adults and 10-year-old children; however, they did not hold gender-based facial expression stereotypes of 6-year-old boys. This study has important implications for future research, as adults' perceptions of children are an important aspect of the study of social cognition in children.


Subject(s)
Emotions, Facial Expression, Adult, Child, Female, Humans, Male, Emotions/physiology, Happiness, Reaction Time, East Asian People
10.
Perception ; 53(1): 3-16, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37709269

ABSTRACT

Emotional facial expressions convey crucial information in nonverbal communication and serve as a mediator in face-to-face relationships. Their recognition is thought to rely on specific facial traits that depend on the perceived emotion. Wearing a facemask during the COVID-19 pandemic has thus disrupted the human ability to read emotions from faces. Yet, these effects are usually assessed using faces expressing stereotypical and exaggerated emotions, which is far removed from real-life conditions. The objective of the present study was to evaluate the impact of facemasks through an emotion categorization task using morphs ranging from a neutral face to an expressive face (anger, disgust, fear, happiness, and sadness), from 0% neutral to 100% expressive in 20% steps. Our results revealed a strong impact of facemasks on the recognition of expressions of disgust, happiness, and sadness, resulting in a decrease in performance and an increase in misinterpretations at both low and high levels of intensity. In contrast, the recognition of anger, fear, and neutral expressions was less impacted by mask-wearing. Future studies should address this issue from a more ecological point of view, with the aim of taking concrete adaptive measures in the context of daily interactions.
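
The morph continuum described can be illustrated with a simple linear cross-fade between aligned images; note that published morph stimuli typically also warp facial geometry between landmarks, which this sketch omits:

```python
import numpy as np

rng = np.random.default_rng(4)
neutral    = rng.random((256, 256))   # stand-ins for aligned grayscale face images
expressive = rng.random((256, 256))

# Morph continuum from 0% to 100% expressive in 20% steps, as in the task
morphs = {
    step: (1 - step / 100) * neutral + (step / 100) * expressive
    for step in range(0, 101, 20)
}
print(sorted(morphs))   # [0, 20, 40, 60, 80, 100]
```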


Subject(s)
Masks, Pandemics, Humans, Emotions, Anger, Facial Expression, Perception
11.
Orthod Craniofac Res ; 2024 Jun 02.
Article in English | MEDLINE | ID: mdl-38825845

ABSTRACT

OBJECTIVE: In many medical disciplines, facial attractiveness is part of the diagnosis, yet its scoring might be confounded by facial expressions. The intent was to apply deep convolutional neural networks (CNNs) to identify how facial expressions affect facial attractiveness and to explore whether dedicated training of the CNN can reduce the bias introduced by facial expressions. MATERIALS AND METHODS: Frontal facial images (n = 840) of 40 female participants (mean age 24.5 years) were taken while they adopted a neutral facial expression and the six universal facial expressions. Facial attractiveness was computed by means of a face detector, deep convolutional neural networks, standard support vector regression for facial beauty, visual regularized collaborative filtering, and a regression technique for handling visual queries without rating history. The CNN was first trained on random facial photographs from a dating website and then further trained on the Chicago Face Database (CFD) to increase its suitability to medical conditions. Both algorithms scored every image for attractiveness. RESULTS: Facial expressions affect facial attractiveness scores significantly. Scores from the CNN additionally trained on the CFD showed less variability between the expressions (range: 54.3-60.9 vs. 32.6-49.5) and less variance within the scores (P ≤ .05), but the additional training also shifted the ranking of the expressions' facial attractiveness. CONCLUSION: Facial expressions confound attractiveness scores. Training on norming images generated scores less susceptible to distortion but more difficult to interpret. Scoring facial attractiveness with CNNs seems promising, but AI solutions must be built on CNNs trained to recognize facial expressions as distractors.
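
A hedged sketch of the kind of dedicated training described: fine-tuning a pretrained CNN as an attractiveness regressor on rated images. The backbone, rating scale, and single training step below are illustrative assumptions, not the study's actual architecture or pipeline:

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone with a single-output regression head for attractiveness
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Stand-ins for a batch of normed face images with attractiveness ratings
images  = torch.randn(8, 3, 224, 224)
ratings = torch.rand(8, 1) * 7.0          # hypothetical 0-7 rating scale

model.train()
optimizer.zero_grad()
pred = model(images)
loss = loss_fn(pred, ratings)
loss.backward()
optimizer.step()
print(f"batch MSE: {loss.item():.3f}")
```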

12.
Cogn Emot ; : 1-17, 2024 Jul 07.
Article in English | MEDLINE | ID: mdl-38973174

ABSTRACT

Previous research has demonstrated that individuals from Western cultures exhibit categorical perception (CP) in their judgments of emotional faces. However, the extent to which this phenomenon characterises the judgments of facial expressions among East Asians remains relatively unexplored. Building upon recent findings showing that East Asians are more likely than Westerners to see a mixture of emotions in facial expressions of anger and disgust, the present research aimed to investigate whether East Asians also display CP for angry and disgusted faces. To address this question, participants from Canada and China were recruited to discriminate pairs of faces along the anger-disgust continuum. The results revealed the presence of CP in both cultural groups, as participants consistently exhibited higher accuracy and faster response latencies when discriminating between-category pairs of expressions compared to within-category pairs. Moreover, the magnitude of CP did not vary significantly across cultures. These findings provide novel evidence supporting the existence of CP for facial expressions in both East Asian and Western cultures, suggesting that CP is a perceptual phenomenon that transcends cultural boundaries. This research contributes to the growing literature on cross-cultural perceptions of facial expressions by deepening our understanding of how facial expressions are perceived categorically across cultures.

13.
Cogn Emot ; 38(2): 267-275, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37997901

ABSTRACT

This study explored how congruency between facial mimicry and observed expressions affects the stability of conscious facial expression representations. Focusing on the congruency effect between proprioceptive/sensorimotor signals and visual stimuli for happy expressions, participants underwent a binocular rivalry task displaying neutral and happy faces. Mimicry was either facilitated with a chopstick or left unrestricted. Key metrics included Initial Percept (a bias indicator), Onset Resolution Time (time from onset to Initial Percept), and Cumulative Time (a measure of content stabilization). Results indicated that the mimicry manipulation significantly affected Cumulative Time for happy faces, highlighting the importance of congruent mimicry in stabilizing conscious awareness of facial expressions. This supports embodied cognition models, showing that the integration of proprioceptive information significantly biases conscious visual perception of facial expressions.


Subject(s)
Facial Expression, Happiness, Humans, Visual Perception, Face, Emotions
14.
Cogn Emot ; 38(3): 296-314, 2024 05.
Article in English | MEDLINE | ID: mdl-38678446

ABSTRACT

Social exclusion is an emotionally painful experience that leads to various alterations in socio-emotional processing. The perceptual and emotional consequences of experiencing social exclusion can vary depending on the paradigm used to manipulate it. Exclusion paradigms vary in the severity and duration of the exclusion experience they induce, classifying it as either short-term or long-term. The present study aimed to examine the impact of exclusion on socio-emotional processing using different paradigms, one producing an experience of short-term exclusion and one prompting participants to imagine long-term exclusion. Ambiguous facial emotions were used as socio-emotional cues. In Study 1, the Ostracism Online paradigm was used to manipulate short-term exclusion. In Study 2, a new sample of participants imagined long-term exclusion through the future life alone paradigm. Participants in both studies then completed a facial emotion recognition task consisting of morphed, ambiguous facial emotions. By means of Point of Subjective Equivalence (PSE) analyses, our results indicate that the experience of short-term exclusion hinders recognition of happy facial expressions. In contrast, imagining long-term exclusion causes difficulties in recognizing sad facial expressions. These findings extend the current literature, suggesting that not all social exclusion paradigms affect socio-emotional processing similarly.
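
A minimal sketch of a Point of Subjective Equivalence analysis as used here: fit a logistic psychometric function to responses along a morph continuum and read off its 50% point. The response proportions below are invented stand-ins:

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, pse, slope):
    """Logistic psychometric function; the PSE is its 50% point."""
    return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

# Stand-in data: morph level (% emotional) vs. proportion "emotional" responses
morph_levels = np.array([0, 20, 40, 60, 80, 100], dtype=float)
p_emotional  = np.array([0.02, 0.10, 0.35, 0.70, 0.93, 0.99])

(pse, slope), _ = curve_fit(psychometric, morph_levels, p_emotional, p0=[50.0, 0.1])
print(f"PSE = {pse:.1f}% morph, slope = {slope:.3f}")
# A PSE shift between groups (e.g., excluded vs. control) would indicate a
# recognition bias for that emotion.
```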


Subject(s)
Emotions, Facial Expression, Humans, Female, Male, Young Adult, Adult, Facial Recognition, Psychological Distance, Social Isolation/psychology, Recognition (Psychology), Adolescent
15.
J Adv Nurs ; 80(9): 3846-3855, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38334268

ABSTRACT

AIM: To identify specific facial expressions associated with pain behaviors using the PainChek application in residents with dementia. DESIGN: This is a secondary analysis of a study exploring the feasibility of PainChek to evaluate the effectiveness of a social robot (PARO) intervention on pain in residents with dementia from June to November 2021. METHODS: Participants interacted with PARO individually five days per week for 15 min (once or twice) per day for three consecutive weeks. The PainChek app assessed each resident's pain levels before and after each session. The association between nine facial expressions and the adjusted PainChek scores was analyzed using a linear mixed model. RESULTS: A total of 1820 assessments were completed with 46 residents. Six facial expressions were significantly associated with a higher adjusted PainChek score. Horizontal mouth stretch showed the strongest association with the score, followed by brow lowering, parting lips, wrinkling of the nose, raising of the upper lip, and closing eyes. The presence of cheek raising, tightening of eyelids, and pulling at the corner of the lip were not significantly associated with the score. Limitations of using the PainChek app were identified. CONCLUSION: Six specific facial expressions were associated with observational pain scores in residents with dementia. Results indicate that automated real-time facial analysis is a promising approach to assessing pain in people with dementia. However, it requires further validation by human observers before it can be used for decision-making in clinical practice. IMPACT: Pain is common in people with dementia, yet assessing pain is challenging in this group. This study generated new evidence on facial expressions of pain in residents with dementia. Results will inform the development of valid artificial intelligence-based algorithms that will support healthcare professionals in identifying pain in people with dementia in clinical situations. REPORTING METHOD: The study adheres to the CONSORT reporting guidelines. PATIENT OR PUBLIC CONTRIBUTION: One resident with dementia and two family members of people with dementia were consulted and involved in the study design, where they provided advice on the protocol, information sheets, and consent forms, and offered valuable insights to ensure research quality and relevance. TRIAL REGISTRATION: Australian New Zealand Clinical Trials Registry (ACTRN12621000837820).
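
A minimal sketch of the linear mixed model analysis described, with a random intercept per resident to account for repeated assessments; the variable names, effect sizes, and two-predictor setup are invented stand-ins for the nine facial expressions analyzed:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_assessments = 1820

# Stand-in data: one row per assessment, repeated within residents
df = pd.DataFrame({
    "resident": rng.integers(0, 46, n_assessments),
    "mouth_stretch": rng.integers(0, 2, n_assessments),
    "brow_lowering": rng.integers(0, 2, n_assessments),
})
df["pain_score"] = (
    2.0 * df["mouth_stretch"] + 1.2 * df["brow_lowering"]
    + rng.normal(0, 1, n_assessments)
)

# Random intercept per resident accounts for repeated assessments
model = smf.mixedlm(
    "pain_score ~ mouth_stretch + brow_lowering", df, groups=df["resident"]
).fit()
print(model.summary())
```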


Subject(s)
Chronic Pain, Dementia, Facial Expression, Pain Measurement, Humans, Dementia/complications, Male, Female, Aged, Aged 80 and over, Chronic Pain/psychology
16.
Sensors (Basel) ; 24(11), 2024 May 23.
Article in English | MEDLINE | ID: mdl-38894141

ABSTRACT

One of the biggest challenges for computers is collecting and interpreting data from human behavior, such as human emotions. Traditionally, this process is carried out with computer vision or multichannel electroencephalography. However, these approaches require heavy computational resources located far from the end users or from where the dataset was produced. Sensors, by contrast, can capture muscle reactions and respond on the spot, keeping the information local without relying on powerful computers. The subject of this research is therefore the recognition of the six primary human emotions using electromyography sensors in a portable device. The sensors are placed on specific facial muscles to detect happiness, anger, surprise, fear, sadness, and disgust. The experimental results showed that the Cortex-M0 microcontroller provides enough computational capability to store a deep learning model with a classification score of 92%. Furthermore, we demonstrate the necessity of collecting data in natural environments and how such data need to be processed by a machine learning pipeline.
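
A hedged sketch of a deep learning model small enough for the kind of microcontroller deployment described (e.g., after quantization and conversion with TensorFlow Lite); the channel count, window length, and architecture are illustrative assumptions rather than the paper's design:

```python
import numpy as np
from tensorflow import keras

N_CHANNELS, WINDOW = 4, 128          # hypothetical sensor count and window length
EMOTIONS = ["happiness", "anger", "surprise", "fear", "sadness", "disgust"]

# A deliberately tiny network, of the size one might flash onto a microcontroller
model = keras.Sequential([
    keras.layers.Input(shape=(N_CHANNELS * WINDOW,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(len(EMOTIONS), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Stand-ins for flattened, rectified-and-smoothed EMG windows with labels
x = np.random.rand(256, N_CHANNELS * WINDOW).astype("float32")
y = np.random.randint(0, len(EMOTIONS), 256)
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
```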


Subject(s)
Electromyography, Facial Expression, Machine Learning, Humans, Electromyography/methods, Emotions/physiology, Facial Muscles/physiology, Male, Female, Adult
17.
Sensors (Basel) ; 24(8), 2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38676067

ABSTRACT

Facial expression is an important way to reflect human emotions, and it represents a dynamic deformation process. Analyzing facial movements is an effective means of understanding expressions. However, there is currently a lack of methods capable of analyzing the dynamic details of full-field deformation in expressions. In this paper, to enable effective dynamic analysis of expressions, a classic optical measuring method called stereo digital image correlation (stereo-DIC or 3D-DIC) is employed to analyze the deformation fields of facial expressions. The forming processes of the six basic facial expressions of experimental subjects are analyzed through the displacement and strain fields calculated by 3D-DIC. The displacement fields of each expression exhibit strong consistency with the action units (AUs) defined by the classical Facial Action Coding System (FACS). Moreover, it is shown that the gradient of the displacement, i.e., the strain field, offers special advantages in characterizing facial expressions due to its localized nature, effectively sensing the nuanced dynamics of facial movements. By processing extensive data, this study identifies two featured regions in the six basic expressions: one where deformation begins and one where deformation is most severe. Based on these two regions, the temporal evolution of the six basic expressions is discussed. The presented investigations demonstrate the superior performance of 3D-DIC in the quantitative analysis of facial expressions. The proposed analytical strategy might have potential value in objectively characterizing human expressions based on quantitative measurement.
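
For reference, the strain fields mentioned are, to first order, the symmetric part of the displacement gradient. A small-strain form for in-plane surface measurements is shown below; note that DIC packages may instead report finite (Green-Lagrange) strain:

```latex
\varepsilon_{ij} = \frac{1}{2}\left( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \right),
\qquad i, j \in \{1, 2\}
```

so that, for example, \(\varepsilon_{xx} = \partial u_x / \partial x\) captures local stretching of the skin along the x direction, which is why strain localizes around the moving facial features better than raw displacement does.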


Subject(s)
Facial Expression, Three-Dimensional Imaging, Humans, Three-Dimensional Imaging/methods, Face/physiology, Emotions/physiology, Algorithms, Computer-Assisted Image Processing/methods
18.
Sensors (Basel) ; 24(17), 2024 Aug 31.
Article in English | MEDLINE | ID: mdl-39275593

ABSTRACT

It is estimated that 10% to 20% of road accidents are related to fatigue, and accidents caused by drowsiness are up to twice as deadly as those caused by other factors. To reduce these numbers, strategies such as advertising campaigns, driving recorders in vehicles used for the road transport of goods and passengers, and drowsiness detection systems in cars have been implemented. Within the latter area, the technologies used are diverse. They can be based on measuring signals such as steering wheel movement, vehicle position on the road, or driver monitoring. Driver monitoring is a technology that has been little exploited so far and can be implemented in many different ways. This work evaluates a multidimensional drowsiness index based on recording facial expressions, gaze direction, and head position, and studies the feasibility of implementing it in a low-cost electronic package. Specifically, the aim is to determine the driver's state by monitoring blink frequency, yawning, eye opening, gaze direction, and head position. For this purpose, an algorithm capable of detecting drowsiness has been developed. Two approaches are compared: facial recognition based on Haar features and facial recognition based on Histograms of Oriented Gradients (HOG). The implementation was carried out on a Raspberry Pi, a low-cost device that allows the creation of a prototype that can detect drowsiness and interact with peripherals such as cameras or speakers. The results show that the proposed multi-index methodology performs better in detecting drowsiness than algorithms based on single-index detection.
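
A minimal sketch comparing the two detection approaches named, using OpenCV's bundled Haar cascade and dlib's HOG-based frontal face detector; the image path is hypothetical, and the downstream blink/yawn/gaze/head-pose estimators of the actual index are omitted:

```python
import cv2
import dlib

# Haar cascade shipped with OpenCV vs. dlib's HOG-based frontal face detector
haar = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
hog = dlib.get_frontal_face_detector()

frame = cv2.imread("driver.jpg")           # hypothetical camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

haar_faces = haar.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
hog_faces = hog(gray, 1)                   # 1 = one level of upsampling

print(f"Haar detections: {len(haar_faces)}, HOG detections: {len(hog_faces)}")
# The detected face region would then feed the blink, yawn, gaze, and
# head-pose estimators that together form the multidimensional index.
```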


Subject(s)
Algorithms, Automobile Driving, Humans, Facial Expression, Facial Recognition/physiology, Sleep Stages/physiology, Traffic Accidents/prevention & control, Male, Adult, Automated Facial Recognition/methods, Female
19.
Ergonomics ; : 1-20, 2024 Oct 22.
Article in English | MEDLINE | ID: mdl-39436833

ABSTRACT

Driving anger is a serious global issue that poses risks to road safety, necessitating the development of effective detection and intervention methods. This study investigated the feasibility of using smartphones to capture facial expressions to detect event-related driving anger. Sixty drivers completed driving tasks in scenarios with and without multi-stage road events and were induced into angry and neutral states, respectively. Their physiological signals, facial expressions, and subjective data were collected. Four feature combinations and six machine learning algorithms were used to construct driving anger detection models. The model combining facial features with the XGBoost algorithm outperformed models using physiological features or other algorithms, achieving an accuracy of 87.04% and an F1-score of 85.06%. The eyes, mouth, and brows were identified as anger-sensitive facial areas. Additionally, incorporating individual characteristics into the models further improved classification performance. This study provides a contactless and highly accessible approach for event-related driving anger detection. Practitioner Summary: This study proposed a cost-effective and contactless approach for event-related, real-time driving anger detection and could provide insights into the design of emotional interactions in intelligent vehicles.
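
A hedged sketch of training an XGBoost classifier on facial-expression features for anger detection; the feature count, labels, and hyperparameters are illustrative stand-ins, not the study's configuration:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score
from xgboost import XGBClassifier

rng = np.random.default_rng(6)
n_windows, n_features = 1200, 35   # hypothetical per-event feature windows

X = rng.normal(size=(n_windows, n_features))     # stand-in facial features
y = rng.integers(0, 2, n_windows)                # 1 = angry, 0 = neutral

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                    eval_metric="logloss")
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(f"accuracy={accuracy_score(y_te, pred):.2%}, f1={f1_score(y_te, pred):.2%}")
# clf.feature_importances_ could then highlight anger-sensitive facial areas
# (eyes, mouth, brows), as reported in the study.
```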

20.
Behav Res Methods ; 56(1): 468-484, 2024 Jan.
Article in English | MEDLINE | ID: mdl-36703002

ABSTRACT

Previous research found that when people are instructed to smile toward liked objects and show negative facial expressions toward disliked objects, their facial responses are faster and more intense than when they are required to smile toward disliked objects and show negative facial expressions toward liked objects. The present research tested a technologically innovative indirect evaluation measure based on that finding. Participants completed an implicit association test (IAT), a common indirect measure of evaluation, responding with their emotional facial expressions rather than by pressing response keys. In two web studies, using emotional facial expression detection through a webcam, we found that the Facial Response IAT (FR-IAT) is a reliable and valid measure of evaluations, comparable to the keyboard IAT. Because facial responses provide more information than key responses, pursuing future improvements of the FR-IAT's methodology, software, and data analysis is a promising direction for enhancing the quality of indirect evaluation measurement. The same methodology and technology may also enhance other indirect measures of evaluation and cognitive tests related to emotion and judgment.


Subject(s)
Emotions, Facial Expression, Humans, Neuropsychological Tests, Judgment