Results 1 - 20 of 1,442
1.
Sensors (Basel) ; 24(18)2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39338738

ABSTRACT

Accurate face detection and subsequent localization of facial landmarks are mandatory steps in many computer vision applications, such as emotion recognition, age estimation, and gender identification. Thanks to advancements in deep learning, numerous facial applications have been developed for human faces. However, most must employ multiple models to accomplish several tasks simultaneously, which increases memory usage and inference time. Moreover, other domains, such as animals and cartoon characters, have received less attention. To address these challenges, we propose an input-agnostic face model, AnyFace++, that performs multiple face-related tasks concurrently: face detection and facial-landmark prediction for human, animal, and cartoon faces, plus age estimation, gender classification, and emotion recognition for human faces. We trained the model using deep multi-task, multi-domain learning with a heterogeneous cost function. The experimental results demonstrate that AnyFace++ produces outcomes comparable to cutting-edge models designed for specific domains.
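A minimal sketch of what a heterogeneous multi-task cost of this kind might look like: detection and landmark terms apply to every domain, while age, gender, and emotion terms are masked so they only contribute for human-face samples. The tensor names, loss choices, and equal weighting are illustrative assumptions, not the AnyFace++ implementation.

```python
# Hypothetical heterogeneous multi-task loss: shared terms for all domains,
# human-only terms masked out for animal/cartoon samples. Not the paper's code.
import torch
import torch.nn.functional as F

def heterogeneous_loss(pred, target, is_human):
    # pred/target: dicts of tensors for one mini-batch; is_human: (B,) bool mask
    det_loss = F.binary_cross_entropy_with_logits(pred["face_logit"], target["face_label"])
    lmk_loss = F.smooth_l1_loss(pred["landmarks"], target["landmarks"])
    loss = det_loss + lmk_loss
    if is_human.any():
        h = is_human  # age/gender/emotion supervision exists only for human faces
        loss = loss + F.l1_loss(pred["age"][h], target["age"][h])
        loss = loss + F.cross_entropy(pred["gender"][h], target["gender"][h])
        loss = loss + F.cross_entropy(pred["emotion"][h], target["emotion"][h])
    return loss

# Toy usage with random tensors standing in for network outputs and labels.
B = 8
pred = {"face_logit": torch.randn(B), "landmarks": torch.randn(B, 5, 2),
        "age": torch.randn(B), "gender": torch.randn(B, 2), "emotion": torch.randn(B, 7)}
target = {"face_label": torch.rand(B).round(), "landmarks": torch.randn(B, 5, 2),
          "age": torch.rand(B) * 80, "gender": torch.randint(0, 2, (B,)),
          "emotion": torch.randint(0, 7, (B,))}
print(heterogeneous_loss(pred, target, is_human=torch.rand(B) > 0.5))
```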


Subjects
Deep Learning, Face, Humans, Face/physiology, Face/anatomy & histology, Emotions/physiology, Female, Algorithms, Male
2.
Sensors (Basel) ; 24(15)2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39123832

ABSTRACT

The objective of the article is to recognize users' emotions by classifying facial electromyographic (EMG) signals. A biomedical signal amplifier, equipped with eight active electrodes positioned in accordance with the Facial Action Coding System, was used to record the EMG signals. These signals were registered during a procedure where users acted out various emotions: joy, sadness, surprise, disgust, anger, fear, and neutral. Recordings were made for 16 users. The mean power of the EMG signals formed the feature set. We utilized these features to train and evaluate various classifiers. In the subject-dependent model, the average classification accuracies were 96.3% for KNN, 94.9% for SVM with a linear kernel, 94.6% for SVM with a cubic kernel, and 93.8% for LDA. In the subject-independent model, the classification results varied depending on the tested user, ranging from 91.4% to 48.6% for the KNN classifier, with an average accuracy of 67.5%. The SVM with a cubic kernel performed slightly worse, achieving an average accuracy of 59.1%, followed by the SVM with a linear kernel at 53.9%, and the LDA classifier at 41.2%. Additionally, the study identified the most effective electrodes for distinguishing between pairs of emotions.
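The feature extraction and classifier comparison described above can be illustrated with a short scikit-learn sketch: mean power per electrode channel as the feature vector, then KNN, linear and cubic SVM, and LDA evaluated by cross-validation. The synthetic signals, window length, and hyperparameters are placeholder assumptions.

```python
# Illustrative pipeline only: mean EMG power per channel -> KNN / SVM / LDA.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 140, 8, 2000   # 8 electrodes per FACS-based placement
emg = rng.normal(size=(n_trials, n_channels, n_samples))
labels = rng.integers(0, 7, size=n_trials)        # 7 classes: 6 emotions + neutral

# Mean power per channel: average of the squared signal over the trial window.
features = (emg ** 2).mean(axis=2)                # shape (n_trials, n_channels)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM-linear", SVC(kernel="linear")),
                  ("SVM-cubic", SVC(kernel="poly", degree=3)),
                  ("LDA", LinearDiscriminantAnalysis())]:
    acc = cross_val_score(clf, features, labels, cv=5).mean()
    print(f"{name}: {acc:.3f}")                   # random data -> chance-level accuracy
```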


Subjects
Electromyography, Emotions, Humans, Electromyography/methods, Emotions/physiology, Male, Female, Adult, Facial Expression, Signal Processing, Computer-Assisted, Support Vector Machine, Algorithms, Facial Muscles/physiology, Young Adult, Face/physiology, Electrodes
3.
Sci Rep ; 14(1): 19563, 2024 08 22.
Article in English | MEDLINE | ID: mdl-39174675

ABSTRACT

Information about the concordance between dynamic emotional experiences and objective signals is practically useful. Previous studies have shown that valence dynamics can be estimated by recording electrical activity from the muscles in the brows and cheeks. However, whether facial actions based on video data and analyzed without electrodes can be used for sensing emotion dynamics remains unknown. We investigated this issue by recording video of participants' faces and obtaining dynamic valence and arousal ratings while they observed emotional films. Action units (AUs) 04 (i.e., brow lowering) and 12 (i.e., lip-corner pulling), detected through an automated analysis of the video data, were negatively and positively correlated with dynamic ratings of subjective valence, respectively. Several other AUs were also correlated with dynamic valence or arousal ratings. Random forest regression modeling, interpreted using the SHapley Additive exPlanation tool, revealed non-linear associations between the AUs and dynamic ratings of valence or arousal. These results suggest that an automated analysis of facial expression video data can be used to estimate dynamic emotional states, which could be applied in various fields including mental health diagnosis, security monitoring, and education.
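A hedged sketch of this kind of analysis (not the authors' code): frame-wise AU intensities feed a random forest regressor predicting dynamic valence, and the fitted model is inspected with the SHAP TreeExplainer. The AU columns and the simulated relationship are assumptions for illustration.

```python
# Random forest regression of valence on AU intensities, explained with SHAP.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n_frames = 500
aus = rng.uniform(0, 5, size=(n_frames, 4))            # e.g. AU04, AU06, AU12, AU25
# Toy relationship echoing the abstract: AU04 lowers valence, AU12 raises it.
valence = -0.6 * aus[:, 0] + 0.8 * aus[:, 2] + rng.normal(scale=0.3, size=n_frames)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(aus, valence)
shap_values = shap.TreeExplainer(model).shap_values(aus)  # per-frame, per-AU attributions
print(shap_values.shape)                                  # (500, 4)
```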


Subjects
Arousal, Emotions, Facial Expression, Humans, Emotions/physiology, Arousal/physiology, Female, Male, Adult, Young Adult, Video Recording, Facial Muscles/physiology, Face/physiology
4.
Codas ; 36(5): e20230016, 2024.
Article in Portuguese, English | MEDLINE | ID: mdl-39166599

ABSTRACT

PURPOSE: To propose and verify the efficiency of a myofunctional intervention program for attenuating facial aging signs and balancing the orofacial functions. METHODS: Thirty women, aged 50 to 60 years, were randomly divided into a therapy group (TG), which underwent Orofacial Myofunctional Therapy, and an electromyographic biofeedback group (EBG), which underwent the same program combined with electromyographic biofeedback during training of the chewing, swallowing, and smiling functions. Aesthetic and oromyofunctional aspects were assessed from photographs, videos, the MBGR Protocol, and scales for assessing facial aging signs described in the literature. Fifty-minute sessions were held weekly for nine weeks and then monthly for six months after a washout period. Three assessments, identical to the initial one, were performed in the tenth week, in the eighth week after washout, and at the conclusion of the research. The participants answered the Satisfaction Questionnaire in the tenth week. RESULTS: Statistical analysis with ANOVA, Tukey, and Mann-Whitney tests for inter- and intragroup comparisons showed that the intervention attenuated facial aging signs, mainly in the TG, and balanced the chewing and swallowing functions in both groups; electromyographic biofeedback affected participant satisfaction, which was greater in the EBG; interrupting the program for eight weeks caused aesthetic losses, mainly in the TG, but no functional losses in either group; and the six monthly sessions had limited impact on recovering the aesthetic losses that occurred after washout. CONCLUSION: The proposed program attenuated aging signs, mainly in the TG, and improved orofacial functions in both groups.
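A rough sketch of the kind of inter- and intragroup comparison reported here, assuming one aging-sign score per participant per time point: one-way ANOVA with Tukey HSD across assessment time points within the TG, and a Mann-Whitney U test between TG and EBG at week ten. All scores below are fabricated placeholders.

```python
# Placeholder inter/intragroup comparisons with scipy and statsmodels.
import numpy as np
from scipy.stats import f_oneway, mannwhitneyu
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
tg_t0, tg_t10, tg_post = (rng.normal(m, 1.0, 15) for m in (5.0, 3.8, 4.2))  # TG aging-sign scores

print(f_oneway(tg_t0, tg_t10, tg_post))                  # intragroup effect of time
scores = np.concatenate([tg_t0, tg_t10, tg_post])
times = np.repeat(["baseline", "week10", "post-washout"], 15)
print(pairwise_tukeyhsd(scores, times))                  # which time points differ

ebg_t10 = rng.normal(3.6, 1.0, 15)
print(mannwhitneyu(tg_t10, ebg_t10))                     # intergroup comparison at week 10
```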


Subjects
Myofunctional Therapy, Humans, Female, Myofunctional Therapy/methods, Middle Aged, Mastication/physiology, Electromyography, Aging/physiology, Facial Muscles/physiology, Facial Muscles/physiopathology, Deglutition/physiology, Biofeedback, Psychology/methods, Patient Satisfaction, Face/physiology, Treatment Outcome
5.
Soc Cogn Affect Neurosci ; 19(1)2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39167473

ABSTRACT

Human facial features (eyes, nose, and mouth) allow us to communicate with others. Observing faces triggers physiological responses, including pupil dilation. Still, the relative influence of the social and motion content of a visual stimulus on pupillary reactivity has never been elucidated. A total of 30 adults aged 18-33 years were recorded with an eye tracker. We analysed the event-related pupil dilation in response to stimuli distributed along a gradient of social salience (non-social to social, going from objects to avatars to real faces) and dynamism (static to micro- to macro-motion). Pupil dilation was larger in response to social stimuli (faces and avatars) than to non-social stimuli (objects), with, surprisingly, a larger response for avatars. Pupil dilation was also larger in response to macro-motion than to static stimuli. After quantifying each stimulus' real quantity of motion, we found that the higher the quantity of motion, the larger the pupil dilated. However, the slope of this relationship was not higher for social stimuli. Overall, pupil dilation was more sensitive to the real quantity of motion than to the social component of motion, highlighting the relevance of ecological stimulations. The physiological response to faces results from specific contributions of both motion and social processing.
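One plausible way to test the slope claim, assuming per-stimulus averages (this is not necessarily the authors' analysis): regress event-related pupil dilation on the quantified motion of each stimulus and include a social × motion interaction, whose coefficient asks whether the slope is steeper for social stimuli.

```python
# Assumed regression check of the motion-slope claim; data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 180
df = pd.DataFrame({
    "motion": rng.uniform(0, 1, n),            # quantified motion per stimulus
    "social": rng.integers(0, 2, n),           # 0 = object, 1 = face/avatar
})
df["pupil"] = 0.5 * df["motion"] + 0.1 * df["social"] + rng.normal(0, 0.1, n)

fit = smf.ols("pupil ~ motion * social", data=df).fit()
print(fit.params)    # the motion:social term tests a steeper slope for social stimuli
```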


Subjects
Facial Recognition, Motion Perception, Pupil, Humans, Pupil/physiology, Young Adult, Adult, Male, Female, Adolescent, Motion Perception/physiology, Facial Recognition/physiology, Social Perception, Photic Stimulation/methods, Face/physiology, Eye-Tracking Technology
6.
IEEE Trans Image Process ; 33: 5045-5059, 2024.
Article in English | MEDLINE | ID: mdl-39186413

ABSTRACT

Facial action units (AUs) describe a comprehensive set of atomic facial muscle movements for understanding human expression. With supervised learning, discriminative AU representations can be obtained from the local patches where the AUs are located. Unfortunately, accurate AU localization and characterization are hampered by the tremendous cost of manual annotation, which limits the performance of AU recognition in realistic scenarios. In this study, we propose an end-to-end self-supervised AU representation learning model (SsupAU) to learn AU representations from unlabeled facial videos. Specifically, the input face is decomposed into six components using auto-encoders: five photo-geometrically meaningful components together with 2D flow-field AUs. By constructing the canonical neutral face, posed neutral face, and posed expressional face gradually, these components can be disentangled without supervision, and therefore the AU representations can be learned. To construct the canonical neutral face without manually labeled ground truth of emotion state or AU intensity, two prior-knowledge-based assumptions are proposed: 1) identity consistency, which exploits the identical albedos and depths of different frames in a face video and helps to learn the camera color mode as an extra cue for canonical neutral face recovery; and 2) average face, which enables the model to discover a 'neutral facial expression' of the canonical neutral face and decouple the AUs in representation learning. To the best of our knowledge, this is the first attempt to design a self-supervised AU representation learning method based on the definition of AUs. Extensive experiments on benchmark datasets demonstrate the superior performance of the proposed work in comparison to other state-of-the-art approaches, as well as an outstanding capability of decomposing the input face into meaningful factors for its reconstruction. The code is made available at https://github.com/Sunner4nwpu/SsupAU.
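A very loose sketch of the two prior-knowledge terms described above, under assumed tensor shapes; this is not the SsupAU implementation, only an illustration of how identity consistency and an average-face constraint could be expressed as losses.

```python
# Assumed formulation of the two priors as simple PyTorch losses.
import torch

def identity_consistency_loss(albedo_a, albedo_b, depth_a, depth_b):
    # Two frames of the same video should share albedo and depth (same identity).
    return (albedo_a - albedo_b).abs().mean() + (depth_a - depth_b).abs().mean()

def average_face_loss(flow_field_au):
    # Averaged over a batch, AU displacements should cancel out, keeping the
    # recovered canonical face expression-neutral.
    return flow_field_au.mean(dim=0).abs().mean()

B, H, W = 4, 64, 64
loss = (identity_consistency_loss(torch.rand(B, 3, H, W), torch.rand(B, 3, H, W),
                                  torch.rand(B, 1, H, W), torch.rand(B, 1, H, W))
        + average_face_loss(torch.randn(B, 2, H, W)))
print(loss)
```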


Subjects
Algorithms, Face, Facial Expression, Image Processing, Computer-Assisted, Supervised Machine Learning, Video Recording, Humans, Face/diagnostic imaging, Face/physiology, Video Recording/methods, Image Processing, Computer-Assisted/methods, Facial Muscles/physiology, Databases, Factual
7.
Proc Natl Acad Sci U S A ; 121(28): e2321346121, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38954551

ABSTRACT

How does the brain process the faces of familiar people? Neuropsychological studies have argued for an area of the temporal pole (TP) linking faces with person identities, but magnetic susceptibility artifacts in this region have hampered its study with fMRI. Using data acquisition and analysis methods optimized to overcome this artifact, we identify a familiar face response in TP, reliably observed in individual brains. This area responds strongly to visual images of familiar faces over unfamiliar faces, objects, and scenes. However, TP did not just respond to images of faces, but also to a variety of high-level social cognitive tasks, including semantic, episodic, and theory of mind tasks. The response profile of TP contrasted with a nearby region of the perirhinal cortex (PR) that responded specifically to faces, but not to social cognition tasks. TP was functionally connected with a distributed network in the association cortex associated with social cognition, while PR was functionally connected with face-preferring areas of the ventral visual cortex. This work identifies a missing link in the human face processing system that specifically processes familiar faces, and is well placed to integrate visual information about faces with higher-order conceptual information about other people. The results suggest that separate streams for person and face processing reach anterior temporal areas positioned at the top of the cortical hierarchy.


Subjects
Magnetic Resonance Imaging, Temporal Lobe, Humans, Magnetic Resonance Imaging/methods, Temporal Lobe/physiology, Temporal Lobe/diagnostic imaging, Male, Female, Adult, Facial Recognition/physiology, Brain Mapping/methods, Recognition, Psychology/physiology, Face/physiology, Young Adult, Pattern Recognition, Visual/physiology
8.
Sci Rep ; 14(1): 15135, 2024 07 02.
Article in English | MEDLINE | ID: mdl-38956123

ABSTRACT

The behavioral and neural responses to social exclusion were examined in women randomized to four conditions, varying in levels of attractiveness and friendliness. Informed by evolutionary theory, we predicted that being socially excluded by attractive unfriendly women would be more distressing than being excluded by unattractive women, irrespective of their friendliness level. Our results contradicted most of our predictions but provide important insights into women's responses to interpersonal conflict. Accounting for rejection sensitivity, P300 event-related potential amplitudes were largest when women were excluded by unattractive unfriendly women. This may be due to an expectancy violation or an annoyance with being excluded by women low on social desirability. An examination of anger rumination rates by condition suggests the latter. Only attractive women's attractiveness ratings were lowered in the unfriendly condition, indicating they were specifically punished for their exclusionary behavior. Women were more likely to select attractive women to compete against with one exception: they selected the Black attractive opponent less often than the White attractive opponent when presented as unfriendly. Finally, consistent with studies on retaliation in relation to social exclusion, women tended to rate competitors who rejected them as being more rude, more competitive, less attractive, less nice, and less happy than non-competitors. The ubiquity of social exclusion and its pointed emotional and physiological impact on women demands more research on this topic.


Subjects
Beauty, Humans, Female, Young Adult, Adult, Psychological Distance, Social Desirability, Friends/psychology, Event-Related Potentials, P300/physiology, Adolescent, Face/physiology
9.
Sensors (Basel) ; 24(14)2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39065979

ABSTRACT

By leveraging artificial intelligence and big data to analyze and assess classroom conditions, we can significantly enhance teaching quality. Nevertheless, numerous existing studies primarily concentrate on evaluating classroom conditions for student groups, often neglecting the need for personalized instructional support for individual students. To address this gap and provide a more focused analysis of individual students in the classroom environment, we implemented an embedded application design using face recognition technology and target detection algorithms. The Insightface face recognition algorithm was employed to identify students by constructing and training on a classroom face dataset; in parallel, classroom behavioral data were collected and used to train a YOLOv5 model that detects students' body regions, which are then correlated with their facial regions to identify students accurately. These models were then deployed onto an embedded device, the Atlas 200 DK, for application development, enabling the recording of both overall classroom conditions and individual student behaviors. Test results show that the detection precision for various types of behaviors is above 0.67. The average false detection rate for face recognition is 41.5%. The developed embedded application can reliably detect student behavior in a classroom setting, identify students, and capture image sequences of body regions associated with negative behavior for better management. These data empower teachers to gain a deeper understanding of their students, which is crucial for enhancing teaching quality and addressing students' individual needs.
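The abstract does not specify how a detected face is associated with a detected body region, so the following is only one plausible rule, sketched in plain Python: assign the face to the body box that contains the largest share of the face area.

```python
# Assumed face-to-body association by overlap share; not the authors' method.
def area(box):
    x1, y1, x2, y2 = box
    return max(0, x2 - x1) * max(0, y2 - y1)

def overlap(face, body):
    x1, y1 = max(face[0], body[0]), max(face[1], body[1])
    x2, y2 = min(face[2], body[2]), min(face[3], body[3])
    return area((x1, y1, x2, y2))

def match_face_to_body(face_box, body_boxes):
    # Return the index of the body box containing the largest share of the face.
    shares = [overlap(face_box, b) / max(area(face_box), 1e-6) for b in body_boxes]
    best = max(range(len(body_boxes)), key=lambda i: shares[i])
    return best if shares[best] > 0.5 else None

faces = [(120, 40, 170, 100)]                        # (x1, y1, x2, y2) from face detector
bodies = [(100, 30, 220, 300), (300, 50, 420, 310)]  # person boxes from YOLOv5
print(match_face_to_body(faces[0], bodies))          # -> 0
```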


Subjects
Algorithms, Humans, Students, Artificial Intelligence, Face/physiology, Facial Recognition/physiology, Automated Facial Recognition/methods, Image Processing, Computer-Assisted/methods, Female, Pattern Recognition, Automated/methods
10.
Sensors (Basel) ; 24(13)2024 Jun 28.
Article in English | MEDLINE | ID: mdl-39000993

ABSTRACT

As a technical application of artificial intelligence, the social robot is a branch of robotics that emphasizes social communication and interaction with human beings. Although both robotics and behavioral research recognize the significance of social robot design for market success and the related emotional benefit to users, the specific role of a social robot's eye and mouth shape in eliciting trustworthiness has received only limited attention. To address this research gap, our study conducted a 2 (eye shape) × 3 (mouth shape) full factorial between-subjects experiment. A total of 211 participants were recruited and randomly assigned to the six scenarios in the study. After exposure to the stimuli, perceived trustworthiness and robot attitude were measured. The results showed that round eyes (vs. narrow eyes) and an upturned or neutral mouth (vs. a downturned mouth) significantly improved people's perceived trustworthiness of, and attitude towards, social robots. The effects of eye and mouth shape on robot attitude were both mediated by perceived trustworthiness. Trustworthy human facial features could thus be applied to a robot's face, eliciting a similar trustworthiness perception and attitude. In addition to its empirical contributions to HRI, this finding can inform design practice for trustworthy-looking social robots.
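A hedged sketch of the simplest product-of-coefficients mediation check implied by the abstract (facial feature to perceived trustworthiness to attitude). The column names, coding of the design factors, and data are fabricated for illustration; the study's actual mediation procedure may differ.

```python
# Assumed product-of-coefficients mediation check with synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 211
df = pd.DataFrame({"round_eyes": rng.integers(0, 2, n),
                   "upturned_mouth": rng.integers(0, 2, n)})
df["trust"] = 3 + 0.6 * df["round_eyes"] + 0.5 * df["upturned_mouth"] + rng.normal(0, 1, n)
df["attitude"] = 2 + 0.8 * df["trust"] + rng.normal(0, 1, n)

a = smf.ols("trust ~ round_eyes + upturned_mouth", df).fit().params["round_eyes"]
b = smf.ols("attitude ~ trust + round_eyes + upturned_mouth", df).fit().params["trust"]
print("indirect effect of round eyes via trustworthiness:", a * b)
```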


Subjects
Robotics, Trust, Humans, Robotics/methods, Male, Female, Adult, Face/anatomy & histology, Face/physiology, Young Adult, Artificial Intelligence