Results 1 - 20 of 1,437
1.
Codas ; 36(5): e20230016, 2024.
Article in Portuguese, English | MEDLINE | ID: mdl-39166599

ABSTRACT

PURPOSE: To propose and verify the efficiency of a myofunctional intervention program to attenuate facial aging signs and balance orofacial functions. METHODS: Thirty women, aged 50 to 60 years, were randomly divided into a therapy group (TG), which received Orofacial Myofunctional Therapy, and an electromyographic biofeedback group (EBG), which received the same program combined with electromyographic biofeedback training of the chewing, swallowing, and smiling functions. Aesthetic and oromyofunctional aspects were assessed from photographs, videos, the MBGR Protocol, and scales for assessing facial aging signs described in the literature. Fifty-minute sessions were held weekly for nine weeks and then, after a washout period, monthly for six months. Three assessments, identical to the initial one, were performed in the tenth week, in the eighth week after washout, and at the conclusion of the study. Participants answered a Satisfaction Questionnaire in the tenth week. RESULTS: Statistical analysis using ANOVA, Tukey, and Mann-Whitney tests for inter- and intragroup comparisons showed that the intervention attenuated facial aging signs, mainly in TG, and balanced the chewing and swallowing functions in both groups; electromyographic biofeedback affected participants' satisfaction, which was greater in EBG; interrupting the program for eight weeks resulted in aesthetic losses, mainly in TG, but no functional losses in either group; and the six monthly sessions had a limited impact on overcoming the aesthetic losses that occurred after washout. CONCLUSION: The proposed program attenuated aging signs, mainly in TG, and improved orofacial functions in both groups.
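The inter- and intragroup comparisons named in this abstract (ANOVA with Tukey post hoc, Mann-Whitney U) are standard tests; below is a minimal sketch with hypothetical scores and group sizes, not the authors' analysis scripts.

```python
# Sketch of the inter/intragroup comparisons named in the abstract
# (ANOVA + Tukey post hoc, Mann-Whitney U); data layout is hypothetical.
import numpy as np
from scipy.stats import f_oneway, mannwhitneyu
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Hypothetical aging-sign scores at three assessment points for one group
t0, t10w, t_final = (rng.normal(m, 1.0, 15) for m in (5.0, 3.8, 4.2))

# Intragroup: one-way ANOVA across assessment points, Tukey HSD post hoc
print(f_oneway(t0, t10w, t_final))
scores = np.concatenate([t0, t10w, t_final])
labels = ["baseline"] * 15 + ["week10"] * 15 + ["final"] * 15
print(pairwise_tukeyhsd(scores, labels))

# Intergroup: Mann-Whitney U between TG and EBG at one time point
tg, ebg = rng.normal(3.8, 1.0, 15), rng.normal(3.5, 1.0, 15)
print(mannwhitneyu(tg, ebg))
```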


Subject(s)
Myofunctional Therapy , Humans , Female , Myofunctional Therapy/methods , Middle Aged , Mastication/physiology , Electromyography , Aging/physiology , Facial Muscles/physiology , Facial Muscles/physiopathology , Deglutition/physiology , Biofeedback, Psychology/methods , Patient Satisfaction , Face/physiology , Treatment Outcome
2.
Sci Rep ; 14(1): 19563, 2024 08 22.
Article in English | MEDLINE | ID: mdl-39174675

ABSTRACT

Information about the concordance between dynamic emotional experiences and objective signals is practically useful. Previous studies have shown that valence dynamics can be estimated by recording electrical activity from the muscles in the brows and cheeks. However, whether facial actions based on video data and analyzed without electrodes can be used for sensing emotion dynamics remains unknown. We investigated this issue by recording video of participants' faces and obtaining dynamic valence and arousal ratings while they observed emotional films. Action units (AUs) 04 (i.e., brow lowering) and 12 (i.e., lip-corner pulling), detected through an automated analysis of the video data, were negatively and positively correlated with dynamic ratings of subjective valence, respectively. Several other AUs were also correlated with dynamic valence or arousal ratings. Random forest regression modeling, interpreted using the SHapley Additive exPlanation tool, revealed non-linear associations between the AUs and dynamic ratings of valence or arousal. These results suggest that an automated analysis of facial expression video data can be used to estimate dynamic emotional states, which could be applied in various fields including mental health diagnosis, security monitoring, and education.
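The modeling step described here, random forest regression over detected AU intensities interpreted with SHAP, can be sketched as follows; the AU columns and valence ratings are simulated stand-ins, not the study's data.

```python
# Sketch: non-linear AU-to-valence regression with SHAP interpretation.
# Feature columns and ratings are hypothetical stand-ins.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n_frames = 500
# Hypothetical per-frame AU intensities (e.g., AU04, AU12, ...) from an
# automated video-based AU detector
X = rng.uniform(0, 5, size=(n_frames, 6))
# Hypothetical dynamic valence: AU12 (col 1) raises it, AU04 (col 0) lowers it
y = 0.8 * X[:, 1] - 0.6 * X[:, 0] + 0.2 * X[:, 1] ** 2 + rng.normal(0, 0.3, n_frames)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # per-frame, per-AU attributions
print(np.abs(shap_values).mean(axis=0))  # global importance per AU
```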


Subject(s)
Arousal , Emotions , Facial Expression , Humans , Emotions/physiology , Arousal/physiology , Female , Male , Adult , Young Adult , Video Recording , Facial Muscles/physiology , Face/physiology
3.
Sensors (Basel) ; 24(15)2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39123832

ABSTRACT

The objective of the article is to recognize users' emotions by classifying facial electromyographic (EMG) signals. A biomedical signal amplifier, equipped with eight active electrodes positioned in accordance with the Facial Action Coding System, was used to record the EMG signals. These signals were registered during a procedure where users acted out various emotions: joy, sadness, surprise, disgust, anger, fear, and neutral. Recordings were made for 16 users. The mean power of the EMG signals formed the feature set. We utilized these features to train and evaluate various classifiers. In the subject-dependent model, the average classification accuracies were 96.3% for KNN, 94.9% for SVM with a linear kernel, 94.6% for SVM with a cubic kernel, and 93.8% for LDA. In the subject-independent model, the classification results varied depending on the tested user, ranging from 91.4% to 48.6% for the KNN classifier, with an average accuracy of 67.5%. The SVM with a cubic kernel performed slightly worse, achieving an average accuracy of 59.1%, followed by the SVM with a linear kernel at 53.9%, and the LDA classifier at 41.2%. Additionally, the study identified the most effective electrodes for distinguishing between pairs of emotions.
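A minimal sketch of the reported pipeline follows: mean power per EMG channel as the features, then the four classifiers compared in the abstract. Array shapes, trial counts, and labels are hypothetical.

```python
# Sketch of the feature/classifier pipeline: mean power per EMG channel,
# then KNN, linear/cubic SVM, and LDA. Data shapes are hypothetical.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Hypothetical: 7 emotions x 20 trials, 8 channels, 2000 samples per trial
emg = rng.normal(size=(140, 8, 2000))
y = np.repeat(np.arange(7), 20)
X = (emg ** 2).mean(axis=2)  # mean power of each channel -> 8 features

for name, clf in [
    ("KNN", KNeighborsClassifier()),
    ("SVM-linear", SVC(kernel="linear")),
    ("SVM-cubic", SVC(kernel="poly", degree=3)),
    ("LDA", LinearDiscriminantAnalysis()),
]:
    pipe = make_pipeline(StandardScaler(), clf)
    print(name, cross_val_score(pipe, X, y, cv=5).mean())
```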


Subject(s)
Electromyography , Emotions , Humans , Electromyography/methods , Emotions/physiology , Male , Female , Adult , Facial Expression , Signal Processing, Computer-Assisted , Support Vector Machine , Algorithms , Facial Muscles/physiology , Young Adult , Face/physiology , Electrodes
4.
Sensors (Basel) ; 24(14)2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39065979

ABSTRACT

By leveraging artificial intelligence and big data to analyze and assess classroom conditions, we can significantly enhance teaching quality. Nevertheless, many existing studies concentrate on evaluating classroom conditions for groups of students, often neglecting the need for personalized instructional support for individual students. To address this gap and provide a more focused analysis of individual students in the classroom, we implemented an embedded application using face recognition and object detection algorithms. The InsightFace face recognition algorithm was employed to identify students, trained on a classroom face dataset constructed for this purpose; in parallel, classroom behavioral data were collected and used to train a YOLOv5 model that detects students' body regions, which are then associated with their facial regions to identify students accurately. These models were deployed onto an embedded device, the Atlas 200 DK, for application development, enabling the recording of both overall classroom conditions and individual student behaviors. Test results show that detection precision for the various behavior types is above 0.67, and the average false detection rate for face recognition is 41.5%. The developed embedded application can reliably detect student behavior in a classroom setting, identify students, and capture image sequences of body regions associated with negative behavior for better management. These data empower teachers to gain a deeper understanding of their students, which is crucial for enhancing teaching quality and addressing individual student needs.
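The core association step, joining YOLOv5 body boxes to InsightFace face detections, might look like the sketch below. The file name, the containment rule, and the gallery-matching comment are assumptions, not the authors' code.

```python
# Sketch: YOLOv5 finds body boxes, InsightFace finds/embeds faces, and each
# face is assigned to the body box containing its center (assumed rule).
import cv2
import torch
from insightface.app import FaceAnalysis

detector = torch.hub.load("ultralytics/yolov5", "yolov5s")  # COCO detector
face_app = FaceAnalysis()
face_app.prepare(ctx_id=0)

frame = cv2.imread("classroom.jpg")            # hypothetical classroom frame
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
bodies = detector(rgb).xyxy[0]                 # rows: x1, y1, x2, y2, conf, cls
faces = face_app.get(frame)                    # detected faces with embeddings

def contains(box, cx, cy):
    x1, y1, x2, y2 = box[:4].tolist()
    return x1 <= cx <= x2 and y1 <= cy <= y2

for face in faces:
    cx = (face.bbox[0] + face.bbox[2]) / 2     # face-box center
    cy = (face.bbox[1] + face.bbox[3]) / 2
    owners = [b for b in bodies if int(b[5]) == 0 and contains(b, cx, cy)]
    # face.normed_embedding would then be matched against an enrolled student
    # gallery, labeling the owning body box with that identity
    print(face.bbox.astype(int), "bodies containing this face:", len(owners))
```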


Subject(s)
Algorithms , Humans , Students , Artificial Intelligence , Face/physiology , Facial Recognition/physiology , Automated Facial Recognition/methods , Image Processing, Computer-Assisted/methods , Female , Pattern Recognition, Automated/methods
5.
Proc Natl Acad Sci U S A ; 121(28): e2321346121, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38954551

ABSTRACT

How does the brain process the faces of familiar people? Neuropsychological studies have argued for an area of the temporal pole (TP) linking faces with person identities, but magnetic susceptibility artifacts in this region have hampered its study with fMRI. Using data acquisition and analysis methods optimized to overcome this artifact, we identify a familiar face response in TP, reliably observed in individual brains. This area responds strongly to visual images of familiar faces over unfamiliar faces, objects, and scenes. However, TP did not respond only to images of faces, but also during a variety of high-level social cognitive tasks, including semantic, episodic, and theory-of-mind tasks. The response profile of TP contrasted with that of a nearby region of the perirhinal cortex (PR), which responded specifically to faces but not to social cognition tasks. TP was functionally connected with a distributed association-cortex network associated with social cognition, while PR was functionally connected with face-preferring areas of the ventral visual cortex. This work identifies a missing link in the human face processing system: an area that specifically processes familiar faces and is well placed to integrate visual information about faces with higher-order conceptual information about other people. The results suggest that separate streams for person and face processing reach anterior temporal areas positioned at the top of the cortical hierarchy.


Subject(s)
Magnetic Resonance Imaging , Temporal Lobe , Humans , Magnetic Resonance Imaging/methods , Temporal Lobe/physiology , Temporal Lobe/diagnostic imaging , Male , Female , Adult , Facial Recognition/physiology , Brain Mapping/methods , Recognition, Psychology/physiology , Face/physiology , Young Adult , Pattern Recognition, Visual/physiology
6.
Sci Rep ; 14(1): 15135, 2024 07 02.
Article in English | MEDLINE | ID: mdl-38956123

ABSTRACT

The behavioral and neural responses to social exclusion were examined in women randomized to four conditions, varying in levels of attractiveness and friendliness. Informed by evolutionary theory, we predicted that being socially excluded by attractive unfriendly women would be more distressing than being excluded by unattractive women, irrespective of their friendliness level. Our results contradicted most of our predictions but provide important insights into women's responses to interpersonal conflict. Accounting for rejection sensitivity, P300 event-related potential amplitudes were largest when women were excluded by unattractive unfriendly women. This may be due to an expectancy violation or an annoyance with being excluded by women low on social desirability. An examination of anger rumination rates by condition suggests the latter. Only attractive women's attractiveness ratings were lowered in the unfriendly condition, indicating they were specifically punished for their exclusionary behavior. Women were more likely to select attractive women to compete against with one exception-they selected the Black attractive opponent less often than the White attractive opponent when presented as unfriendly. Finally, consistent with studies on retaliation in relation to social exclusion, women tended to rate competitors who rejected them as being more rude, more competitive, less attractive, less nice, and less happy than non-competitors. The ubiquity of social exclusion and its pointed emotional and physiological impact on women demands more research on this topic.


Subject(s)
Beauty , Humans , Female , Young Adult , Adult , Psychological Distance , Social Desirability , Friends/psychology , Event-Related Potentials, P300/physiology , Adolescent , Face/physiology
7.
Sensors (Basel) ; 24(13)2024 Jun 28.
Article in English | MEDLINE | ID: mdl-39000993

ABSTRACT

As a technical application of artificial intelligence, the social robot belongs to a branch of robotics research that emphasizes social communication and interaction with human beings. Although both robotics and behavioral research have recognized the significance of social robot design for market success and the emotional benefit to users, the specific design of a social robot's eye and mouth shapes for eliciting trustworthiness has received only limited attention. To address this research gap, our study conducted a 2 (eye shape) × 3 (mouth shape) full-factorial between-subjects experiment. A total of 211 participants were recruited and randomly assigned to the six scenarios. After exposure to the stimuli, perceived trustworthiness and attitude toward the robot were measured. The results showed that round eyes (vs. narrow eyes) and an upturned or neutral mouth (vs. a downturned mouth) significantly improved people's perceived trustworthiness of, and attitude toward, social robots. The effect of eye and mouth shape on robot attitude was mediated by perceived trustworthiness. Trustworthy human facial features can thus be applied to a robot's face to elicit a similar trustworthiness perception and attitude. Beyond its empirical contributions to HRI, this finding sheds light on design practice for trustworthy-looking social robots.


Subject(s)
Robotics , Trust , Humans , Robotics/methods , Male , Female , Adult , Face/anatomy & histology , Face/physiology , Young Adult , Artificial Intelligence
8.
Sensors (Basel) ; 24(13)2024 Jul 05.
Article in English | MEDLINE | ID: mdl-39001147

ABSTRACT

With the development of data mining technology, the analysis of event-related potential (ERP) data has evolved from statistical analysis of time-domain features to data-driven techniques based on supervised and unsupervised learning. However, many challenges remain in understanding the relationship between ERP components and the representation of familiar and unfamiliar faces. To address this, we propose a Dynamic Multi-Scale Convolution model for group recognition of familiar and unfamiliar faces, which uses generated weight masks for cross-subject familiar/unfamiliar face recognition. The model employs a variable-length filter generator to dynamically determine the optimal filter length for time-series samples, thereby capturing features at different time scales. Comparative experiments were conducted to evaluate the model against state-of-the-art (SOTA) models. Our model achieves a balanced accuracy of 93.20% and an F1 score of 88.54%, outperforming the comparison methods. The ERP data extracted from different time regions in the model can also provide data-driven technical support for research on the representation of different ERP components.
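The paper's dynamic variable-length filter generator is not reproduced here; a generic fixed multi-scale temporal convolution block conveys the underlying idea of capturing features at several time scales, with all layer sizes assumed.

```python
# Generic multi-scale temporal convolution block in the spirit of the model
# described above (parallel kernel lengths over ERP time series); the paper's
# dynamic variable-length filter generator is NOT reproduced.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_sizes=(7, 15, 31)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv1d(in_ch, out_ch, k, padding=k // 2),  # length-preserving
                nn.BatchNorm1d(out_ch),
                nn.ReLU(),
            )
            for k in kernel_sizes
        )

    def forward(self, x):  # x: (batch, channels, time)
        return torch.cat([b(x) for b in self.branches], dim=1)

x = torch.randn(8, 64, 512)              # 8 epochs, 64 EEG channels, 512 samples
print(MultiScaleBlock(64, 16)(x).shape)  # -> torch.Size([8, 48, 512])
```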


Subject(s)
Evoked Potentials , Facial Recognition , Humans , Evoked Potentials/physiology , Facial Recognition/physiology , Electroencephalography/methods , Algorithms , Face/physiology
9.
Sci Rep ; 14(1): 14600, 2024 06 25.
Article in English | MEDLINE | ID: mdl-38918449

ABSTRACT

Spontaneous facial self-touches (sFST) have been suggested to serve cognitive-emotional regulation processes. During the pandemic, refraining from face-touching was recommended; however, the accompanying effects and the influence of personal attributes remain unclear. Ninety participants (45 female, 45 male) filled out questionnaires covering personality, anxiety screening, and ADHD screening. Subsequently, they performed a delayed verbal memory recall task four times. After the second round, sixty participants were instructed to refrain from face-touching (experimental group); thirty participants received no behavioral instructions (control group). Video, EMG, and EEG data were recorded to identify face touches and for further analysis. Two samples were formed, depending on adherence to the instruction to completely refrain from face-touching (adherent and non-adherent samples), and compared with each other and with the control group. EEG analyses revealed that refraining from face-touching is accompanied by increased beta power at sensorimotor sites and, exclusively in the non-adherent sample, at frontal sites. Decreased memory performance was found only in the subsamples who non-adherently touched their face while retaining words. In the questionnaires, the non-adherent sample showed lower Conscientiousness and higher ADHD screening scores than the adherent sample; no differences were found among the subsamples. These results indicate that refraining from face-touching is related to personal attributes, is accompanied by neurophysiological shifts, and, for some individuals, by lower memory performance, supporting the notion that sFST serve processes beyond the sensorimotor domain.


Subject(s)
Electroencephalography , Personality , Humans , Female , Male , Personality/physiology , Adult , Young Adult , Memory/physiology , Face/physiology , Touch/physiology , Surveys and Questionnaires
10.
Sci Rep ; 14(1): 12629, 2024 06 01.
Article in English | MEDLINE | ID: mdl-38824168

ABSTRACT

Moral judgements about people based on their actions are a key component of social decision making. It is currently unknown how positive or negative moral judgments associated with a person's face are processed and stored in the brain over the long term. Here, we investigate the long-term memory of moral values associated with human faces using simultaneous EEG-fMRI data acquisition. The results show that only a few exposures to morally charged stories about people are enough to form long-term memories, a day later, for a relatively large number of new faces. Event-related potentials (ERPs) showed a significant differentiation of remembered good vs. bad faces over centro-frontal electrode sites (value ERP). EEG-informed fMRI analysis revealed a subcortical cluster centered on the left caudate tail (CDt) as a correlate of the face value ERP. Importantly, neither this analysis nor a conventional whole-brain analysis revealed significant coding of face values in cortical areas, in particular the fusiform face area (FFA). Conversely, fMRI-informed EEG source localization using accurate subject-specific EEG head models also revealed activation in the left caudate tail. Nevertheless, the detected caudate tail region was functionally connected to the FFA, suggesting that the FFA is the source of face-specific information to CDt. A further psychophysiological interaction analysis revealed task-dependent coupling between CDt and the dorsomedial prefrontal cortex (dmPFC), a region previously identified as retaining emotional working memories. These results identify CDt as a main site for encoding long-term value memories of faces in humans, suggesting that the moral value of faces activates the same subcortical basal ganglia circuitry involved in processing reward value memory for objects in primates.


Subject(s)
Electroencephalography , Evoked Potentials , Magnetic Resonance Imaging , Morals , Humans , Magnetic Resonance Imaging/methods , Female , Male , Adult , Evoked Potentials/physiology , Young Adult , Caudate Nucleus/physiology , Caudate Nucleus/diagnostic imaging , Brain Mapping/methods , Face/physiology , Memory/physiology , Judgment/physiology
11.
IEEE J Biomed Health Inform ; 28(8): 4613-4624, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38743531

ABSTRACT

Remote photoplethysmography (rPPG) is a non-contact method that uses facial videos to measure physiological parameters. Existing rPPG methods have achieved remarkable performance, but their success derives mainly from supervised learning over massive labeled data. Existing unsupervised rPPG methods, on the other hand, fail to fully exploit spatio-temporal features and struggle in low-light or noisy environments. To address these problems, we propose an unsupervised contrastive learning approach, ST-Phys. We incorporate a low-light enhancement module, a temporal dilated module, and a spatial enhancement module to better handle long-term dependencies under random low-light conditions. In addition, we design a circular margin loss, wherein rPPG signals originating from the same videos are attracted while those from distinct videos are repelled. Our method is assessed on six openly accessible datasets, including RGB and NIR videos. Extensive experiments reveal the superior performance of ST-Phys over state-of-the-art unsupervised rPPG methods, along with advantages in parameter count and noise robustness.
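ST-Phys itself is not sketched here; for orientation, below is the classical contact-free rPPG baseline that such methods build on: average a facial region's green channel over time, band-pass to plausible pulse frequencies, and read off the spectral peak. All signals are synthetic.

```python
# Classical rPPG baseline (not ST-Phys): green-channel mean of a face ROI,
# band-pass filtering, and spectral heart-rate readout. Signal is synthetic.
import numpy as np
from scipy.signal import butter, filtfilt

fps = 30.0
rng = np.random.default_rng(3)
t = np.arange(0, 20, 1 / fps)
# Hypothetical green-channel means of a face ROI (72 bpm pulse + noise)
green = 0.5 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 1.0, t.size)

b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)  # 42-240 bpm passband
pulse = filtfilt(b, a, green)

freqs = np.fft.rfftfreq(pulse.size, 1 / fps)
spectrum = np.abs(np.fft.rfft(pulse))
band = (freqs >= 0.7) & (freqs <= 4.0)
bpm = 60 * freqs[band][np.argmax(spectrum[band])]
print(f"estimated heart rate: {bpm:.1f} bpm")  # ~72 bpm
```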


Subject(s)
Photoplethysmography , Signal Processing, Computer-Assisted , Unsupervised Machine Learning , Video Recording , Humans , Photoplethysmography/methods , Video Recording/methods , Face/physiology , Face/diagnostic imaging , Algorithms , Image Processing, Computer-Assisted/methods , Databases, Factual , Remote Sensing Technology/methods
12.
Sensors (Basel) ; 24(9)2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38732846

ABSTRACT

Brain-computer interfaces (BCIs) allow information to be transmitted directly from the human brain to a computer, enhancing the ability of human brain activity to interact with the environment. In particular, BCI-based control systems are highly desirable because they can control equipment used by people with disabilities, such as wheelchairs and prosthetic legs. BCIs make use of electroencephalograms (EEGs) to decode the human brain's status. This paper presents an EEG-based facial gesture recognition method based on a self-organizing map (SOM). The proposed facial gesture recognition uses α, β, and θ power bands of the EEG signals as the features of the gesture. The SOM-Hebb classifier is utilized to classify the feature vectors. We utilized the proposed method to develop an online facial gesture recognition system. The facial gestures were defined by combining facial movements that are easy to detect in EEG signals. The recognition accuracy of the system was examined through experiments. The recognition accuracy of the system ranged from 76.90% to 97.57% depending on the number of gestures recognized. The lowest accuracy (76.90%) occurred when recognizing seven gestures, though this is still quite accurate when compared to other EEG-based recognition systems. The implemented online recognition system was developed using MATLAB, and the system took 5.7 s to complete the recognition flow.
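A sketch of the θ/α/β band-power feature extraction described above, using Welch's method; the channel count, epoch length, and sampling rate are assumptions (the authors' system was built in MATLAB, and the SOM-Hebb classifier is not shown).

```python
# Sketch of theta/alpha/beta band-power features via Welch's method.
# Channel count, epoch length, and sampling rate are hypothetical.
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg, fs):
    """eeg: (channels, samples) -> band-power feature vector per channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=1))  # mean power in band
    return np.concatenate(feats)

fs = 256
epoch = np.random.default_rng(4).normal(size=(14, fs * 2))  # 14 ch, 2 s
features = band_powers(epoch, fs)  # 3 bands x 14 channels = 42 features
print(features.shape)
```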


Subject(s)
Brain-Computer Interfaces , Electroencephalography , Gestures , Humans , Electroencephalography/methods , Face/physiology , Algorithms , Pattern Recognition, Automated/methods , Signal Processing, Computer-Assisted , Brain/physiology , Male
13.
Sensors (Basel) ; 24(9)2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38732856

ABSTRACT

Biometric authentication plays a vital role in everyday applications, with increasing demands for reliability and security. However, using real biometric data for research raises privacy concerns and data-scarcity issues. Synthetic biometric data have emerged as a promising way to address unbalanced representation and bias, as well as the limited availability of diverse datasets for developing and evaluating biometric systems. Methods for the parameterized generation of highly realistic synthetic data are emerging, and the quality metrics needed to prove that synthetic data are comparable to real data remain open research tasks. We explore the generation of 3D synthetic face data using game engines' ability to produce varied, realistic virtual characters, as an alternative to other creation methods that preserves reproducibility and ground truth. While synthetic data offer several benefits, including improved resilience against data privacy concerns, we also address the limitations and challenges associated with their use. Our experiments show concordant behavior when comparing semi-synthetic data, as digital representations of real identities, with the corresponding real datasets. Despite slightly asymmetric performance relative to a larger database of real samples, promising face-authentication performance is demonstrated, laying the foundation for further investigations with digital avatars and the creation and analysis of fully synthetic data. We conclude by discussing future directions for improving synthetic biometric data generation and its impact on advancing biometrics research.


Subject(s)
Face , Video Games , Humans , Face/anatomy & histology , Face/physiology , Biometry/methods , Biometric Identification/methods , Imaging, Three-Dimensional/methods , Male , Female , Algorithms , Reproducibility of Results
14.
PLoS One ; 19(5): e0303400, 2024.
Article in English | MEDLINE | ID: mdl-38739635

ABSTRACT

Visual abilities tend to vary predictably across the visual field: for simple low-level stimuli, visibility is better along the horizontal vs. vertical meridian and in the lower vs. upper visual field. In contrast, face perception abilities have been reported to show either distinct or entirely idiosyncratic patterns of variation in peripheral vision, suggesting a dissociation between the spatial properties of low- and higher-level vision. To assess this link more clearly, we extended methods used in low-level vision to develop an acuity test for face perception, measuring the smallest size at which facial gender can be reliably judged in peripheral vision. In three experiments, we show the characteristic inversion effect, with better acuity for upright faces than inverted ones, demonstrating the engagement of high-level face-selective processes in peripheral vision. We also observe a clear advantage for gender acuity on the horizontal vs. vertical meridian and a smaller but consistent lower- vs. upper-field advantage. These visual field variations match those of low-level vision, indicating that higher-level face processing abilities either inherit or actively maintain the characteristic patterns of spatial selectivity found in early vision. The commonality of these spatial variations throughout the visual hierarchy means that the location of faces in our visual field systematically influences our perception of them.


Subject(s)
Facial Recognition , Visual Fields , Humans , Visual Fields/physiology , Female , Male , Adult , Facial Recognition/physiology , Young Adult , Photic Stimulation , Visual Perception/physiology , Visual Acuity/physiology , Face/physiology
15.
Multisens Res ; 37(2): 125-141, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38714314

ABSTRACT

Trust is critical to human social interaction, and research has identified many cues that contribute to the assessment of this social trait. Two of these cues are the pitch of the voice and the width-to-height ratio of the face (fWHR). Research has also indicated that the content of a spoken sentence itself affects trustworthiness, a finding that had not yet been brought into multisensory research. The current study investigates previously developed theories on trust in relation to vocal pitch, fWHR, and sentence content in a multimodal setting. Twenty-six female participants were asked to judge the trustworthiness of a voice speaking a neutral or romantic sentence while seeing a face. The average pitch of the voice and the fWHR were varied systematically. Results indicate that the content of the spoken message was an important predictor of trustworthiness, extending into multimodality. Further, the mean pitch of the voice and the fWHR of the face appeared to be useful indicators in a multimodal setting, and these effects interacted with one another across modalities. The data demonstrate that trust in the voice is shaped by task-irrelevant visual stimuli. Future research should clarify whether these findings remain consistent across genders, age groups, and languages.


Subject(s)
Face , Trust , Voice , Humans , Female , Voice/physiology , Young Adult , Adult , Face/physiology , Speech Perception/physiology , Pitch Perception/physiology , Facial Recognition/physiology , Cues , Adolescent
16.
Sci Rep ; 14(1): 10040, 2024 05 02.
Article in English | MEDLINE | ID: mdl-38693189

ABSTRACT

Investigating visual illusions helps us understand how we process visual information. For example, face pareidolia, the misperception of illusory faces in objects, can be used to understand how we process real faces. However, it remains unclear whether this illusion emerges from errors in face detection or from slower, cognitive processes. Here, our logic is straightforward: if examples of face pareidolia activate the mechanisms that rapidly detect faces in visual environments, then participants will look at objects more quickly when those objects also contain illusory faces. To test this hypothesis, we sampled continuous eye movements during a fast saccadic choice task in which participants were required to select either faces or food items. During this task, pairs of stimuli were positioned either close to the initial fixation point or further away, in the periphery. As expected, participants were faster to look at face targets than food targets. Importantly, we also discovered an advantage for food items with illusory faces, but this advantage was limited to the peripheral condition. These findings are among the first to demonstrate that the face pareidolia illusion persists in the periphery and, thus, is likely a consequence of erroneous face detection.


Subject(s)
Illusions , Humans , Female , Male , Adult , Illusions/physiology , Young Adult , Visual Perception/physiology , Photic Stimulation , Face/physiology , Facial Recognition/physiology , Eye Movements/physiology , Pattern Recognition, Visual/physiology
17.
Curr Biol ; 34(9): R346-R348, 2024 05 06.
Article in English | MEDLINE | ID: mdl-38714161

ABSTRACT

Animals including humans often react to sounds by involuntarily moving their face and body. A new study shows that facial movements provide a simple and reliable readout of a mouse's hearing ability that is more sensitive than traditional measurements.


Subject(s)
Face , Animals , Mice , Face/physiology , Auditory Perception/physiology , Hearing/physiology , Sound , Movement/physiology , Humans
18.
PLoS One ; 19(5): e0304150, 2024.
Article in English | MEDLINE | ID: mdl-38805447

ABSTRACT

When comprehending speech, listeners can use information encoded in visual cues from a face to enhance auditory speech comprehension. For example, prior work has shown that mouth movements reflect articulatory features of speech segments and durational information, while pitch and speech amplitude are primarily cued by eyebrow and head movements. Little is known about how linguistic experience influences the visual perception of segmental and prosodic speech information. Using eye-tracking, we studied how perceivers' visual scanning of different regions of a talking face predicts accuracy in a task targeting both segmental and prosodic information, and how this is influenced by language familiarity. Twenty-four native English perceivers heard two audio sentences in either English or Mandarin (an unfamiliar, non-native language), which sometimes differed in segmental or prosodic information (or both). Perceivers then saw a silent video of a talking face and judged whether the video matched the first or the second audio sentence (or whether the two sentences were the same). First, increased looking to the mouth predicted correct responses only for non-native-language trials. Second, the start of a successful search for speech information in the mouth area was significantly delayed in non-native versus native trials, but only when the auditory sentences differed solely in prosodic information, not when they differed segmentally. Third, in correct trials, saccade amplitude in native-language trials was significantly greater than in non-native trials, indicating more intensely focused fixations in the latter. Taken together, these results suggest that mouth-looking was generally more evident when processing a non-native versus native language in all analyses; notably, when measuring perceivers' latency to fixate the mouth, this language effect was largest in trials where only prosodic information was useful for the task.
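Two of the gaze measures used here, proportion of looking time on a mouth area of interest (AOI) and latency to first mouth fixation, reduce to simple arithmetic over fixation records; the record format and AOI rectangle below are hypothetical.

```python
# Sketch of two gaze measures: mouth-AOI looking proportion and latency to
# first mouth fixation. Fixation records and AOI bounds are hypothetical.
fixations = [  # (x, y, start_ms, duration_ms) from an eye tracker
    (512, 300, 0, 180), (505, 610, 180, 240), (498, 625, 420, 300),
]
MOUTH = (440, 560, 580, 660)  # x_min, y_min, x_max, y_max

def in_aoi(x, y, aoi):
    return aoi[0] <= x <= aoi[2] and aoi[1] <= y <= aoi[3]

mouth_fix = [f for f in fixations if in_aoi(f[0], f[1], MOUTH)]
total = sum(f[3] for f in fixations)
print("mouth looking proportion:", sum(f[3] for f in mouth_fix) / total)
print("latency to first mouth fixation (ms):",
      mouth_fix[0][2] if mouth_fix else None)
```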


Subject(s)
Language , Phonetics , Speech Perception , Humans , Female , Male , Adult , Speech Perception/physiology , Young Adult , Face/physiology , Visual Perception/physiology , Eye Movements/physiology , Speech/physiology , Eye-Tracking Technology
19.
Sensors (Basel) ; 24(8)2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38676067

ABSTRACT

Facial expression is an important way to convey human emotions and represents a dynamic deformation process; analyzing facial movements is an effective means of understanding expressions. However, methods capable of analyzing the dynamic details of full-field deformation in expressions are currently lacking. In this paper, to enable effective dynamic analysis of expressions, a classic optical measurement method called stereo digital image correlation (stereo-DIC or 3D-DIC) is employed to analyze the deformation fields of facial expressions. The formation of the six basic facial expressions in experimental subjects is analyzed through the displacement and strain fields calculated by 3D-DIC. The displacement fields of each expression exhibit strong consistency with the action units (AUs) defined by the classical Facial Action Coding System (FACS). Moreover, the gradient of the displacement, i.e., the strain field, is shown to offer particular advantages in characterizing facial expressions: owing to its localized nature, it effectively captures the nuanced dynamics of facial movements. By processing extensive data, this study identifies two characteristic regions in the six basic expressions, one where deformation begins and one where deformation is most severe, and discusses the temporal evolution of the six basic expressions based on these regions. The presented investigations demonstrate the superior performance of 3D-DIC in the quantitative analysis of facial expressions, and the proposed analytical strategy may have value for objectively characterizing human expressions based on quantitative measurement.
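The displacement-to-strain step is the standard small-strain computation: the strain components are spatial gradients of the measured displacement fields. A sketch with synthetic fields and an assumed grid spacing follows.

```python
# Sketch: small-strain components from u/v displacement fields on a regular
# grid (as produced by 3D-DIC). Fields and grid spacing are hypothetical.
import numpy as np

rng = np.random.default_rng(5)
u = rng.normal(0, 0.01, (100, 100))  # hypothetical x-displacement field (mm)
v = rng.normal(0, 0.01, (100, 100))  # hypothetical y-displacement field (mm)
dx = dy = 0.5                        # grid spacing in mm (assumed)

du_dy, du_dx = np.gradient(u, dy, dx)  # rows = y (axis 0), cols = x (axis 1)
dv_dy, dv_dx = np.gradient(v, dy, dx)

eps_xx = du_dx                   # normal strain, x
eps_yy = dv_dy                   # normal strain, y
eps_xy = 0.5 * (du_dy + dv_dx)   # shear strain
print(eps_xx.shape, float(np.abs(eps_xy).max()))
```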


Subject(s)
Facial Expression , Imaging, Three-Dimensional , Humans , Imaging, Three-Dimensional/methods , Face/physiology , Emotions/physiology , Algorithms , Image Processing, Computer-Assisted/methods
20.
Sensors (Basel) ; 24(8)2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38676235

ABSTRACT

Most human emotion recognition methods largely depend on classifying stereotypical facial expressions that represent emotions. However, such facial expressions do not necessarily correspond to actual emotional states and may instead reflect communicative intentions. In other cases, emotions are hidden or cannot be expressed, or may have lower arousal manifested by less pronounced facial expressions, as can occur during passive video viewing. This study improves an emotion classification approach developed in a previous study, which classifies emotions remotely from short facial video data without relying on stereotypical facial expressions or contact-based methods. In this approach, we aim to remotely sense transdermal cardiovascular spatiotemporal facial patterns associated with different emotional states and analyze these data via machine learning. We propose several improvements: better remote heart rate estimation via preliminary skin segmentation, an improved heartbeat peak-and-trough detection process, and better emotion classification accuracy through an appropriate deep-learning classifier using input from a regular RGB camera only. We used the dataset obtained in the previous study, which contains facial videos of 110 participants who passively viewed 150 short videos eliciting five emotion types: amusement, disgust, fear, sexual arousal, and no emotion, while three cameras with different wavelength sensitivities (visible spectrum, near-infrared, and longwave infrared) recorded them simultaneously. From the short facial videos, we extracted unique high-resolution spatiotemporal, physiologically affected features and examined them as input features with different deep-learning approaches. An EfficientNet-B0 model was able to classify participants' emotional states with an overall average accuracy of 47.36% using a single input spatiotemporal feature map obtained from a regular RGB camera.
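The classifier stage maps naturally onto an off-the-shelf EfficientNet-B0 with its head replaced for the five emotion classes; the weight choice and stand-in input tensors below are assumptions, not the study's training setup.

```python
# Sketch: EfficientNet-B0 re-headed for five emotion classes, fed stand-in
# tensors in place of the study's spatiotemporal feature maps.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

model = efficientnet_b0(weights="IMAGENET1K_V1")  # pretrained backbone (assumed)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 5)
# classes: amusement, disgust, fear, sexual arousal, no emotion

maps = torch.randn(4, 3, 224, 224)  # stand-in spatiotemporal feature maps
logits = model(maps)
print(logits.shape)                 # torch.Size([4, 5])
```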


Subject(s)
Deep Learning , Emotions , Facial Expression , Heart Rate , Humans , Emotions/physiology , Heart Rate/physiology , Video Recording/methods , Image Processing, Computer-Assisted/methods , Face/physiology , Female , Male