Results 1 - 20 of 4,497
1.
Sci Rep ; 14(1): 17802, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39090101

ABSTRACT

The PI20 is a self-report questionnaire that assesses the presence of lifelong face recognition difficulties. The items on this scale ask respondents to assess their face recognition ability relative to the rest of the population, either explicitly or implicitly. Recent reports suggest that the PI20 scores of autistic participants exhibit little or no correlation with their performance on the Cambridge Face Memory Test, a key measure of face recognition ability. These reports are suggestive of a meta-cognitive deficit whereby autistic individuals are unable to infer whether their face recognition is impaired relative to the wider population. In the present study, however, we observed significant correlations between the PI20 scores of 77 autistic adults and their performance on two variants of the Cambridge Face Memory Test. These findings indicate that autistic individuals can infer whether their face recognition ability is impaired. Consistent with previous research, we observed a wide spread of face recognition abilities within our autistic sample. While some individuals approached ceiling levels of performance, others met the prevailing diagnostic criteria for developmental prosopagnosia. This variability showed little or no association with non-verbal intelligence, autism severity, or the presence of co-occurring alexithymia or ADHD.


Subject(s)
Autistic Disorder, Facial Recognition, Humans, Male, Female, Adult, Autistic Disorder/psychology, Young Adult, Middle Aged, Adolescent, Surveys and Questionnaires, Prosopagnosia/psychology, Prosopagnosia/physiopathology
2.
Sensors (Basel) ; 24(13)2024 Jul 05.
Article in English | MEDLINE | ID: mdl-39001147

ABSTRACT

With the development of data mining technology, the analysis of event-related potential (ERP) data has evolved from statistical analysis of time-domain features to data-driven techniques based on supervised and unsupervised learning. However, there are still many challenges in understanding the relationship between ERP components and the representation of familiar and unfamiliar faces. To address this, the paper proposes a model based on Dynamic Multi-Scale Convolution for group recognition of familiar and unfamiliar faces. This approach uses generated weight masks for cross-subject familiar/unfamiliar face recognition using a multi-scale model. The model employs a variable-length filter generator to dynamically determine the optimal filter length for time-series samples, thereby capturing features at different time scales. Comparative experiments are conducted to evaluate the model's performance against state-of-the-art (SOTA) models. The results demonstrate that our model achieves impressive outcomes, with a balanced accuracy rate of 93.20% and an F1 score of 88.54%, outperforming the methods used for comparison. The ERP data extracted from different time regions in the model can also provide data-driven technical support for research based on the representation of different ERP components.
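The multi-scale idea this abstract describes can be illustrated with a toy sketch (not the authors' model): convolve a time-series sample with filter banks of several lengths and pool each response, so features at different temporal scales are captured. The filter lengths, filter counts, and max-pooling rule below are illustrative assumptions.

```python
import numpy as np

def multi_scale_features(x, kernel_sizes=(5, 15, 45), n_filters=4, seed=0):
    """Convolve a 1-D signal with random filters at several lengths
    and max-pool each filter's response to a single feature."""
    rng = np.random.default_rng(seed)
    feats = []
    for k in kernel_sizes:
        filters = rng.standard_normal((n_filters, k))
        for f in filters:
            resp = np.convolve(x, f, mode="valid")
            feats.append(resp.max())  # global max pooling over time
    return np.array(feats)

erp = np.sin(np.linspace(0, 6 * np.pi, 300))  # stand-in for one ERP channel
features = multi_scale_features(erp)
print(features.shape)  # (12,) = 3 scales x 4 filters
```

In a learned model the filters would be trained (and, per the abstract, their lengths chosen dynamically) rather than drawn at random; the sketch only shows how multiple kernel lengths yield one concatenated feature vector.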


Subject(s)
Evoked Potentials, Facial Recognition, Humans, Evoked Potentials/physiology, Facial Recognition/physiology, Electroencephalography/methods, Algorithms, Face/physiology
3.
Philos Trans R Soc Lond B Biol Sci ; 379(1908): 20230248, 2024 Aug 26.
Article in English | MEDLINE | ID: mdl-39005042

ABSTRACT

We present novel research on the cortical dynamics of atypical perceptual and emotional processing in people with symptoms of depersonalization-derealization disorder (DP-DR). We used electroencephalography (EEG)/event-related potentials (ERPs) to delineate the early perceptual mechanisms underlying emotional face recognition and mirror touch in adults with low and high levels of DP-DR symptoms (low-DP and high-DP groups). Face-sensitive visual N170 showed markedly less differentiation for emotional versus neutral face-voice stimuli in the high- than in the low-DP group. This effect was related to self-reported bodily symptoms like disembodiment. Emotional face-voice primes altered mirror touch at somatosensory cortical components P45 and P100 differently in the two groups. In the high-DP group, mirror touch occurred only when seeing touch after being confronted with angry face-voice primes. Mirror touch in the low-DP group, however, was unaffected by preceding emotions. Modulation of mirror touch following angry others was related to symptoms of self-other confusion. Results suggest that others' negative emotions affect somatosensory processes in those with an altered sense of bodily self. Our findings are in line with the idea that disconnecting from one's body and self (core symptom of DP-DR) may be a defence mechanism to protect from the threat of negative feelings, which may be exacerbated through self-other confusion. This article is part of the theme issue 'Sensing and feeling: an integrative approach to sensory processing and emotional experience'.


Subject(s)
Depersonalization, Electroencephalography, Emotions, Evoked Potentials, Humans, Emotions/physiology, Male, Female, Adult, Depersonalization/psychology, Depersonalization/physiopathology, Young Adult, Facial Recognition/physiology, Touch Perception/physiology
4.
Rev Neurol ; 79(3): 71-76, 2024 Aug 01.
Article in Spanish | MEDLINE | ID: mdl-39007858

ABSTRACT

INTRODUCTION: Parkinson's disease is characterised by the presence of motor symptoms including hypomimia, and by non-motor symptoms including alterations in facial recognition of basic emotions. Few studies have investigated this alteration and its relationship to the severity of hypomimia. OBJECTIVE: The objective is to study the relationship between hypomimia and the facial recognition of basic emotions in subjects with Parkinson's disease. SUBJECTS AND METHODS: Twenty-three patients and 29 controls were evaluated with the test battery for basic emotion facial recognition. The patients were divided into two subgroups according to the intensity of their hypomimia. RESULTS: The comparison in battery test performance between the minimal/mild hypomimia and moderate/severe hypomimia groups was statistically significant in favour of the former group. CONCLUSIONS: This finding shows a close relationship between expression and facial recognition of emotions, which could be explained through the mechanism of motor simulation.


TITLE: Relación entre la gravedad de la hipomimia y el reconocimiento de emociones básicas en la enfermedad de Parkinson.


Subject(s)
Emotions, Facial Recognition, Parkinson Disease, Severity of Illness Index, Humans, Parkinson Disease/psychology, Parkinson Disease/complications, Parkinson Disease/physiopathology, Male, Female, Middle Aged, Aged, Facial Recognition/physiology, Facial Expression
5.
Cereb Cortex ; 34(7)2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38990517

ABSTRACT

Aberrations in non-verbal social cognition have been reported to coincide with major depressive disorder. Yet little is known about the role of the eyes. To fill this gap, the present study explores whether and, if so, how reading language of the eyes is altered in depression. For this purpose, patients and person-by-person matched typically developing individuals were administered the Emotions in Masked Faces task and Reading the Mind in the Eyes Test, modified, both of which contained a comparable amount of visual information available. For achieving group homogeneity, we set a focus on females as major depressive disorder displays a gender-specific profile. The findings show that facial masks selectively affect inferring emotions: recognition of sadness and anger are more heavily compromised in major depressive disorder as compared with typically developing controls, whereas the recognition of fear, happiness, and neutral expressions remains unhindered. Disgust, the forgotten emotion of psychiatry, is the least recognizable emotion in both groups. On the Reading the Mind in the Eyes Test patients exhibit lower accuracy on positive expressions than their typically developing peers, but do not differ on negative items. In both depressive and typically developing individuals, the ability to recognize emotions behind a mask and performance on the Reading the Mind in the Eyes Test are linked to each other in processing speed, but not recognition accuracy. The outcome provides a blueprint for understanding the complexities of reading language of the eyes within and beyond the COVID-19 pandemic.


Subject(s)
Depressive Disorder, Major, Emotions, Facial Expression, Humans, Female, Adult, Emotions/physiology, Depressive Disorder, Major/psychology, Depressive Disorder, Major/physiopathology, Young Adult, Facial Recognition/physiology, Middle Aged, COVID-19/psychology, Reading
6.
PLoS One ; 19(7): e0301908, 2024.
Article in English | MEDLINE | ID: mdl-38990958

ABSTRACT

Real-time security surveillance and identity matching using face detection and recognition are central research areas within computer vision. The classical facial detection techniques include Haar-like, MTCNN, AdaBoost, and others. These techniques employ template matching and geometric facial features for detecting faces, striving for a balance between detection time and accuracy. To improve this trade-off, the current research presents an enhanced FaceNet network. The RetinaFace is employed to perform expeditious face detection and alignment. Subsequently, FaceNet, with an improved loss function, is used to achieve face verification and recognition with high accuracy. The presented work involves a comparative evaluation of the proposed network framework against both traditional and deep learning techniques in terms of face detection and recognition performance. The experimental findings demonstrate that the enhanced FaceNet can successfully meet real-time facial recognition requirements, and the accuracy of face recognition is 99.86%, which fulfills the actual requirement. Consequently, the proposed solution holds significant potential for applications in face detection and recognition within the education sector for real-time security surveillance.
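The verification stage of such a pipeline (a detector finds and aligns the face, an embedding network maps it to a vector, identity is decided by comparing vectors) can be sketched without the actual networks. The embeddings and the 0.6 threshold below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(emb_a, emb_b, threshold=0.6):
    """Declare two face embeddings the same identity when their
    cosine similarity exceeds a tuned threshold."""
    return cosine_similarity(emb_a, emb_b) >= threshold

anchor = np.array([0.2, 0.9, 0.4])   # embedding of an enrolled face
same = anchor + 0.05                 # near-duplicate embedding (same person)
other = np.array([-0.7, 0.1, 0.9])   # embedding of a different person
print(verify(anchor, same))   # True
print(verify(anchor, other))  # False
```

Real systems produce high-dimensional embeddings (128-D or 512-D for FaceNet-style models) and tune the threshold on a validation set; the comparison logic is the same.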


Subject(s)
Deep Learning, Humans, Face, Computer Security, Security Measures, Automated Facial Recognition/methods, Facial Recognition, Algorithms
7.
PLoS One ; 19(7): e0304669, 2024.
Article in English | MEDLINE | ID: mdl-38985745

ABSTRACT

Against the backdrop of increasingly mature intelligent driving assistance systems, effective monitoring of driver alertness during long-distance driving becomes especially crucial. This study introduces a novel method for driver fatigue detection aimed at enhancing the safety and reliability of intelligent driving assistance systems. The core of this method lies in the integration of advanced facial recognition technology using deep convolutional neural networks (CNN), particularly suited for varying lighting conditions in real-world scenarios, significantly improving the robustness of fatigue detection. Innovatively, the method incorporates emotion state analysis, providing a multi-dimensional perspective for assessing driver fatigue. It adeptly identifies subtle signs of fatigue in rapidly changing lighting and other complex environmental conditions, thereby strengthening traditional facial recognition techniques. Validation on two independent experimental datasets, specifically the Yawn and YawDDR datasets, reveals that our proposed method achieves a higher detection accuracy, with an impressive 95.3% on the YawDDR dataset, compared to 90.1% without the implementation of Algorithm 2. Additionally, our analysis highlights the method's adaptability to varying brightness levels, improving detection accuracy by up to 0.05% in optimal lighting conditions. Such results underscore the effectiveness of our advanced data preprocessing and dynamic brightness adaptation techniques in enhancing the accuracy and computational efficiency of fatigue detection systems. These achievements not only showcase the potential application of advanced facial recognition technology combined with emotional analysis in autonomous driving systems but also pave new avenues for enhancing road safety and driver welfare.
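Dynamic brightness adaptation of the kind mentioned here is commonly a preprocessing step. As a hedged stand-in for the paper's Algorithm 2 (which the abstract does not specify), one widely used approach is adaptive gamma correction that maps a frame's mean intensity to a fixed target:

```python
import numpy as np

def adaptive_gamma(img, target_mean=0.5):
    """Pick a gamma that maps the frame's mean intensity to a target,
    compensating for under- or over-exposed frames (intensities in [0, 1])."""
    mean = img.clip(1e-6, 1.0).mean()
    gamma = np.log(target_mean) / np.log(mean)  # solve mean**gamma == target
    return img ** gamma

dark = np.full((4, 4), 0.25)       # underexposed frame
corrected = adaptive_gamma(dark)
print(round(float(corrected.mean()), 2))  # 0.5
```

Normalizing brightness before the CNN sees the frame is one simple way to make downstream fatigue cues (eye closure, yawning) less sensitive to lighting changes.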


Subject(s)
Automobile Driving, Fatigue, Lighting, Humans, Lighting/methods, Facial Recognition/physiology, Male, Female, Adult, Algorithms
8.
Dev Psychobiol ; 66(6): e22522, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38967122

ABSTRACT

Witnessing emotional expressions in others triggers physiological arousal in humans. The current study focused on pupil responses to emotional expressions in a community sample as a physiological index of arousal and attention. We explored the associations between parents' and offspring's responses to dynamic facial expressions of emotion, as well as the links between pupil responses and anxiety/depression. Children (N = 90, MAge = 10.13, range = 7.21-12.94, 47 girls) participated in this lab study with one of their parents (47 mothers). Pupil responses were assessed in a computer task with dynamic happy, angry, fearful, and sad expressions, while participants verbally labeled the emotion displayed on the screen as quickly as possible. Parents and children reported anxiety and depression symptoms in questionnaires. Both parents and children showed stronger pupillary responses to negative versus positive expressions, and children's responses were overall stronger than those of parents. We also found links between the pupil responses of parents and children to negative, especially to angry faces. Child pupil responses were related to their own and their parents' anxiety levels and to their parents' (but not their own) depression. We conclude that child pupils are sensitive to individual differences in parents' pupils and emotional dispositions in community samples.


Subject(s)
Anxiety, Depression, Emotions, Facial Expression, Parents, Pupil, Humans, Female, Male, Depression/physiopathology, Child, Anxiety/physiopathology, Adult, Pupil/physiology, Emotions/physiology, Facial Recognition/physiology
9.
J Psychiatr Res ; 176: 422-429, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38959825

ABSTRACT

Facial mimicry serves as an evolutionarily rooted important interpersonal communication process that touches on the concepts of socialization and empathy. Facial electromyography (EMG) of the corrugator muscle and the zygomaticus muscle was recorded while male forensic psychopathic patients and controls watched morphed angry or happy facial expressions. We tested the hypothesis that psychopathic patients would show weaker short latency facial mimicry (that is, within 600 ms after stimulus onset) than controls. Exclusively in the group of 20 psychopathic patients, we tested in a placebo-controlled crossover within-subject design the hypothesis that oxytocin would enhance short-latency facial mimicry. Compared with placebo, we found no oxytocin-related significant short-latency responses of the corrugator and the zygomaticus. However, compared with 19 normal controls, psychopathic patients in the placebo condition showed significantly weaker short-latency zygomaticus responses to happy faces, while there was a trend toward significantly weaker short-latency corrugator responses to angry faces. These results are consistent with a recent study of facial EMG responses in adolescents with psychopathic traits. We therefore posit a lifetime developmental deficit in psychopathy pertaining short-latency mimicry of emotional facial expressions. Ultimately, this deficit in mimicking angry and happy expressions may hinder the elicitation of empathy, which is known to be impaired in psychopathy.


Subject(s)
Antisocial Personality Disorder, Electromyography, Facial Expression, Facial Muscles, Oxytocin, Humans, Male, Oxytocin/administration & dosage, Oxytocin/pharmacology, Adult, Antisocial Personality Disorder/physiopathology, Facial Muscles/drug effects, Facial Muscles/physiology, Facial Muscles/physiopathology, Young Adult, Emotions/physiology, Emotions/drug effects, Cross-Over Studies, Imitative Behavior/physiology, Photic Stimulation, Facial Recognition/physiology, Facial Recognition/drug effects, Reaction Time/drug effects, Reaction Time/physiology
10.
Sci Rep ; 14(1): 15473, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38969734

ABSTRACT

The face serves as a crucial cue for self-identification, while the sense of agency plays a significant role in determining our influence through actions in the environment. The current study investigates how self-identification through facial recognition may influence the perception of control via motion. We propose that self-identification might engender a belief in having control over one's own face, leading to a more acute detection and greater emphasis on discrepancies between their actions and the sensory feedback in control judgments. We refer to the condition governed by the belief in having control as the exploitation mode. Conversely, when manipulating another individual's face, the belief in personal control is absent. In such cases, individuals are likely to rely on the regularity between actions and sensory input for control judgments, exhibiting behaviors that are exploratory in nature to glean such information. This condition is termed the explorative mode. The study utilized a face-motion mixing paradigm, employing a deep generative model to enable participants to interact with either their own or another person's face through facial and head movements. During the experiment, participants observed either their own face or someone else's face (self-face vs. other-face) on the screen. The motion of the face was driven either purely by their own facial and head motion or by an average of the participant's and the experimenter's motion (full control vs. partial control). The results showed that participants reported a higher sense of agency over the other-face than the self-face, while their self-identification rating was significantly higher for the self-face. More importantly, controlling someone else's face resulted in more movement diversity than controlling one's own face. These findings support our exploration-exploitation theory: When participants had a strong belief in control triggered by the self-face, they became highly sensitive to any sensorimotor prediction errors, leading to a lower sense of agency. In contrast, when the belief of control was absent, the exploration mode triggered more explorative behaviors, allowing participants to efficiently gather information to establish a sense of agency.


Subject(s)
Facial Recognition, Humans, Male, Female, Adult, Young Adult, Facial Recognition/physiology, Face
11.
Commun Biol ; 7(1): 888, 2024 Jul 20.
Article in English | MEDLINE | ID: mdl-39033247

ABSTRACT

Functional neuroimaging has contributed substantially to understanding brain function but is dominated by group analyses that index only a fraction of the variation in these data. It is increasingly clear that parsing the underlying heterogeneity is crucial to understand individual differences and the impact of different task manipulations. We estimate large-scale (N = 7728) normative models of task-evoked activation during the Emotional Face Matching Task, which enables us to bind heterogeneous datasets to a common reference and dissect heterogeneity underlying group-level analyses. We apply this model to a heterogenous patient cohort, to map individual differences between patients with one or more mental health diagnoses relative to the reference cohort and determine multivariate associations with transdiagnostic symptom domains. For the face>shapes contrast, patients have a higher frequency of extreme deviations which are spatially heterogeneous. In contrast, normative models for faces>baseline have greater predictive value for individuals' transdiagnostic functioning. Taken together, we demonstrate that normative modelling of fMRI task-activation can be used to illustrate the influence of different task choices and map replicable individual differences, and we encourage its application to other neuroimaging tasks in future studies.
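The core of normative modelling as described (binding individuals to a reference cohort and flagging extreme deviations) reduces to z-scoring a patient's activation against normative statistics. The |z| > 2.6 cut-off below is a convention sometimes used in this literature, assumed here for illustration:

```python
import numpy as np

def deviation_scores(reference, patients, cutoff=2.6):
    """Z-score patients against a normative reference distribution and
    flag extreme deviations (|z| > cutoff)."""
    mu, sigma = reference.mean(axis=0), reference.std(axis=0)
    z = (patients - mu) / sigma
    return z, np.abs(z) > cutoff

rng = np.random.default_rng(0)
reference = rng.normal(0, 1, (1000, 5))        # reference activation, 5 regions
patients = np.array([[0.1, -0.3, 4.0, 0.2, -3.5]])  # one patient's activation
z, extreme = deviation_scores(reference, patients)
print(int(extreme.sum()))  # 2 regions deviate extremely
```

Published normative models estimate the reference mean and variance per voxel or region as a function of covariates (age, site, and so on); the flagging step itself is this simple.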


Subject(s)
Emotions, Magnetic Resonance Imaging, Humans, Magnetic Resonance Imaging/methods, Female, Male, Emotions/physiology, Adult, Brain/diagnostic imaging, Brain/physiology, Brain Mapping/methods, Young Adult, Middle Aged, Facial Expression, Facial Recognition/physiology
12.
J Vis ; 24(7): 13, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-39046722

ABSTRACT

Super recognizers (SRs) are people that exhibit a naturally occurring superiority for processing facial identity. Despite the increase of SR research, the mechanisms underlying their exceptional abilities remain unclear. Here, we investigated whether the enhanced facial identity processing of SRs could be attributed to the lack of sequential effects, such as serial dependence. In serial dependence, perception of stimulus features is assimilated toward stimuli presented in previous trials. This constant error in visual perception has been proposed as a mechanism that promotes perceptual stability in everyday life. We hypothesized that an absence of this constant source of error in SRs could account for their superior processing-potentially in a domain-general fashion. We tested SRs (n = 17) identified via a recently proposed diagnostic framework (Ramon, 2021) and age-matched controls (n = 20) with two experiments probing serial dependence in the face and shape domains. In each experiment, observers were presented with randomly morphed face identities or shapes and were asked to adjust a face's identity or a shape to match the stimulus they saw. We found serial dependence in controls and SRs alike, with no difference in its magnitude across groups. Interestingly, we found that serial dependence impacted the performance of SRs more than that of controls. Taken together, our results show that enhanced face identity processing skills in SRs cannot be attributed to the lack of serial dependence. Rather, serial dependence, a beneficial nested error in our visual system, may in fact further stabilize the perception of SRs and thus enhance their visual processing proficiency.
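Serial dependence in an adjustment task is typically quantified by relating each trial's response error to the difference between the previous and current stimulus; a positive relationship means perception is pulled toward the preceding trial. A minimal simulated sketch, using a linear fit rather than the derivative-of-Gaussian models often fitted in this literature:

```python
import numpy as np

def serial_dependence_slope(stimuli, responses):
    """Slope of response error against previous-minus-current stimulus;
    a positive slope indicates attraction toward the preceding trial."""
    err = responses[1:] - stimuli[1:]
    delta = stimuli[:-1] - stimuli[1:]
    slope, _ = np.polyfit(delta, err, 1)
    return slope

rng = np.random.default_rng(1)
stim = rng.uniform(-1, 1, 500)
# simulate an observer whose responses are pulled 20% toward the last stimulus
resp = stim.copy()
resp[1:] += 0.2 * (stim[:-1] - stim[1:]) + rng.normal(0, 0.05, 499)
print(round(float(serial_dependence_slope(stim, resp)), 1))  # 0.2
```

Comparing this slope (or the amplitude of a tuned-attraction curve) between SRs and controls is one way to test the hypothesis the abstract describes.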


Subject(s)
Facial Recognition, Photic Stimulation, Humans, Female, Male, Adult, Young Adult, Facial Recognition/physiology, Photic Stimulation/methods, Form Perception/physiology, Middle Aged
13.
PLoS One ; 19(7): e0301940, 2024.
Article in English | MEDLINE | ID: mdl-39018294

ABSTRACT

Insula damage results in substantial impairments in facial emotion recognition. In particular, left hemispheric damage appears to be associated with poorer recognition of aversively rated facial expressions. Functional imaging can provide information on differences in the processing of these stimuli in patients with insula lesions when compared to healthy matched controls (HCs). We therefore investigated 17 patients with insula lesions in the chronic stage following stroke and 13 HCs using a passive-viewing task with pictures of facial expressions testing the blood oxygenation dependent (BOLD) effect in predefined regions of interest (ROIs). We expected a decrease in functional activation in an area modulating emotional response (left ventral striatum) but not in the facial recognition areas in the left inferior fusiform gyrus. Quantification of BOLD-response in ROIs but also voxel-based statistics confirmed this hypothesis. The voxel-based analysis demonstrated that the decrease in BOLD in the left ventral striatum was driven by left hemispheric damaged patients (n = 10). In our patient group, insula activation was strongly associated with the intensity rating of facial expressions. In conclusion, the combination of performance testing and functional imaging in patients following circumscribed brain damage is a challenging method for understanding emotion processing in the human brain.


Subject(s)
Emotions, Facial Expression, Magnetic Resonance Imaging, Ventral Striatum, Humans, Male, Female, Middle Aged, Emotions/physiology, Ventral Striatum/diagnostic imaging, Ventral Striatum/physiopathology, Aged, Insular Cortex/diagnostic imaging, Insular Cortex/physiopathology, Adult, Brain Mapping, Case-Control Studies, Facial Recognition/physiology
15.
Sci Rep ; 14(1): 16790, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39039112

ABSTRACT

Own child's face is one of the most socially salient stimuli for parents, and a faster search for it than for other children's faces may help provide warmer and more sensitive care. However, it has not been experimentally examined whether parents find their child's face faster. In addition, although own child's face is specially processed, the search time for own child's face may be similar to that for other socially salient stimuli, such as own or spouse's faces. This study tested these possibilities using a visual search paradigm. Participants (parents) searched for their child's, own, spouse's, other child's, same-sex adult's, or opposite-sex adult's faces as search targets. Our findings indicate that both mothers and fathers identified their child's face more quickly than other children's faces. Similarly, parents found their own and spouse's faces more quickly than other adults' faces. Moreover, the search time for family members' faces increased with the number of faces on the search display, suggesting an attentional serial search. These results suggest that robust face representations learned within families and close relationships can support reduced search times for family members' faces.


Subject(s)
Facial Recognition, Humans, Male, Female, Adult, Child, Face, Learning, Family/psychology, Reaction Time, Parents/psychology
16.
Sci Rep ; 14(1): 17275, 2024 Jul 27.
Article in English | MEDLINE | ID: mdl-39068186

ABSTRACT

Telemedicine and video-based diagnosis have raised significant concerns regarding the protection of facial privacy. Effective de-identification methods require the preservation of diagnostic information related to normal and pathological facial movements, which play a crucial role in the diagnosis of various movement, neurological, and psychiatric disorders. In this work, we have developed FaceMotionPreserve , a deep generative model-based approach that transforms patients' facial identities while preserving facial dynamics with a novel face dynamic similarity module to enhance facial landmark consistency. We collected test videos from patients with Parkinson's disease recruited via telemedicine for evaluation of model performance and clinical applicability. The performance of FaceMotionPreserve was quantitatively evaluated based on neurologist diagnostic consistency, critical facial behavior fidelity, and correlation of general facial dynamics. In addition, we further validated the robustness and advancements of our model in preserving medical information with clinical examination videos from a different cohort of patients. FaceMotionPreserve is applicable to real-time integration, safeguarding facial privacy while retaining crucial medical information associated with facial movements to address concerns in telemedicine, and facilitating safer and more collaborative medical data sharing.


Subject(s)
Parkinson Disease, Humans, Parkinson Disease/diagnosis, Telemedicine, Face, Male, Female, Video Recording, Facial Expression, Facial Recognition, Middle Aged
17.
Sensors (Basel) ; 24(14)2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39065979

ABSTRACT

By leveraging artificial intelligence and big data to analyze and assess classroom conditions, we can significantly enhance teaching quality. Nevertheless, numerous existing studies primarily concentrate on evaluating classroom conditions for student groups, often neglecting the need for personalized instructional support for individual students. To address this gap and provide a more focused analysis of individual students in the classroom environment, we implemented an embedded application design using face recognition technology and target detection algorithms. The Insightface face recognition algorithm was employed to identify students by constructing a classroom face dataset and training it; simultaneously, classroom behavioral data were collected and trained, utilizing the YOLOv5 algorithm to detect students' body regions and correlate them with their facial regions to identify students accurately. Subsequently, these modeling algorithms were deployed onto an embedded device, the Atlas 200 DK, for application development, enabling the recording of both overall classroom conditions and individual student behaviors. Test results show that the detection precision for various types of behaviors is above 0.67. The average false detection rate for face recognition is 41.5%. The developed embedded application can reliably detect student behavior in a classroom setting, identify students, and capture image sequences of body regions associated with negative behavior for better management. These data empower teachers to gain a deeper understanding of their students, which is crucial for enhancing teaching quality and addressing the individual needs of students.
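Correlating detected body regions with facial regions, as the abstract describes, amounts to a box-association step. A simple illustrative rule (assumed here, not necessarily the authors') assigns each face box to the body box that contains its centre:

```python
def box_contains(body, point):
    """True if (x, y) lies inside the (x1, y1, x2, y2) box."""
    x, y = point
    x1, y1, x2, y2 = body
    return x1 <= x <= x2 and y1 <= y <= y2

def associate_faces_to_bodies(faces, bodies):
    """Assign each face box to the first body box containing its centre,
    linking an identity (face) to a behaviour detection (body)."""
    pairs = {}
    for fi, (fx1, fy1, fx2, fy2) in enumerate(faces):
        centre = ((fx1 + fx2) / 2, (fy1 + fy2) / 2)
        for bi, body in enumerate(bodies):
            if box_contains(body, centre):
                pairs[fi] = bi
                break
    return pairs

faces = [(40, 10, 60, 30), (140, 12, 160, 32)]    # face boxes from a recognizer
bodies = [(30, 0, 80, 120), (130, 0, 180, 120)]   # body boxes from a detector
print(associate_faces_to_bodies(faces, bodies))   # {0: 0, 1: 1}
```

Production systems often use intersection-over-union or tracking instead of simple containment, but the goal is the same: each recognized face labels the body whose behaviour is being scored.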


Subject(s)
Algorithms, Humans, Students, Artificial Intelligence, Face/physiology, Facial Recognition/physiology, Automated Facial Recognition/methods, Image Processing, Computer-Assisted/methods, Female, Pattern Recognition, Automated/methods
18.
Proc Natl Acad Sci U S A ; 121(28): e2321346121, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38954551

ABSTRACT

How does the brain process the faces of familiar people? Neuropsychological studies have argued for an area of the temporal pole (TP) linking faces with person identities, but magnetic susceptibility artifacts in this region have hampered its study with fMRI. Using data acquisition and analysis methods optimized to overcome this artifact, we identify a familiar face response in TP, reliably observed in individual brains. This area responds strongly to visual images of familiar faces over unfamiliar faces, objects, and scenes. However, TP did not just respond to images of faces, but also to a variety of high-level social cognitive tasks, including semantic, episodic, and theory of mind tasks. The response profile of TP contrasted with a nearby region of the perirhinal cortex that responded specifically to faces, but not to social cognition tasks. TP was functionally connected with a distributed network in the association cortex associated with social cognition, while PR was functionally connected with face-preferring areas of the ventral visual cortex. This work identifies a missing link in the human face processing system that specifically processes familiar faces, and is well placed to integrate visual information about faces with higher-order conceptual information about other people. The results suggest that separate streams for person and face processing reach anterior temporal areas positioned at the top of the cortical hierarchy.


Sujet(s)
Imagerie par résonance magnétique , Lobe temporal , Humains , Imagerie par résonance magnétique/méthodes , Lobe temporal/physiologie , Lobe temporal/imagerie diagnostique , Mâle , Femelle , Adulte , Reconnaissance faciale/physiologie , Cartographie cérébrale/méthodes , /physiologie , Face/physiologie , Jeune adulte , Reconnaissance visuelle des formes/physiologie
19.
PLoS One ; 19(7): e0306872, 2024.
Article de Anglais | MEDLINE | ID: mdl-39046931

RÉSUMÉ

We used a reverse-correlation image-classification paradigm to visualize facial representations of immigrants and citizens in the United States. Visualizations of immigrants' faces were judged by independent raters as less trustworthy and less competent and were more likely to be categorized as a non-White race/ethnicity than were visualizations of citizens' faces. Additionally, image generators' personal characteristics (e.g., implicit and explicit evaluations of immigrants, nativity status) did not reliably track with independent judges' ratings of image generators' representations of immigrants. These findings suggest that anti-immigrant sentiment and racial/ethnic assumptions characterize facial representations of immigrants in the United States, even among people who harbor positivity toward immigrants.
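The reverse-correlation image-classification paradigm described above can be sketched in a toy simulation: on each trial a simulated participant sees a base image with a noise field added or subtracted, picks the version closer to an internal template, and the average of the chosen noise fields recovers a classification image resembling that template. A minimal sketch under those assumptions; the image size, trial count, and distance-based decision rule are illustrative, not the study's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
SIZE = 16  # tiny "image" for illustration

# Hidden mental template assumed to drive the simulated participant's choices.
template = rng.standard_normal((SIZE, SIZE))

def simulate_trial(base, template, rng):
    """Show base+noise vs. base-noise; return the noise of the chosen image."""
    noise = rng.standard_normal(base.shape)
    a, b = base + noise, base - noise
    # The participant picks whichever stimulus is closer to the template.
    pick_a = np.sum((a - template) ** 2) < np.sum((b - template) ** 2)
    return noise if pick_a else -noise

base = np.zeros((SIZE, SIZE))
chosen = [simulate_trial(base, template, rng) for _ in range(2000)]

# Classification image: average noise field of the chosen stimuli.
ci = np.mean(chosen, axis=0)

# The classification image correlates positively with the hidden template.
r = np.corrcoef(ci.ravel(), template.ravel())[0, 1]
```

In the study itself the "template" is not observable; the classification image built from participants' choices is what allows independent raters to judge the visualized representation (e.g., its apparent trustworthiness).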


Sujet(s)
Émigrants et immigrants , Face , Adolescent , Adulte , Femelle , Humains , Mâle , Jeune adulte , Biais (épidémiologie) , Émigrants et immigrants/psychologie , Ethnies/psychologie , Reconnaissance faciale , États-Unis , /psychologie
20.
Sci Rep ; 14(1): 16193, 2024 07 13.
Article de Anglais | MEDLINE | ID: mdl-39003314

RÉSUMÉ

Facial expression recognition (FER) is crucial for understanding the emotional state of others during human social interactions. It has been assumed that humans share universal visual sampling strategies to achieve this task. However, recent studies in face identification have revealed striking idiosyncratic fixation patterns, questioning the universality of face processing. More importantly, very little is known about whether such idiosyncrasies extend to the biologically relevant recognition of static and dynamic facial expressions of emotion (FEEs). To clarify this issue, we tracked observers' eye movements while they categorized static and ecologically valid dynamic faces displaying the six basic FEEs, all normalized for presentation time (1 s), contrast, and global luminance across exposure time. We then used robust data-driven analyses combining statistical fixation maps with hidden Markov models to explore eye movements across FEEs and stimulus modalities. Our data revealed three spatially and temporally distinct, equally occurring face-scanning strategies during FER. Crucially, these visual sampling strategies were mostly comparably effective for FER and highly consistent across FEEs and modalities. Our findings show that spatiotemporally idiosyncratic gaze strategies also occur for the biologically relevant recognition of FEEs, further questioning the universality of FER and, more generally, of face processing.
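A simplified stand-in for the fixation-map side of the analysis above: converting raw fixations into smoothed density maps and comparing observers' scanning strategies by map correlation. The study itself used hidden Markov models for the temporal dimension; the grid size, Gaussian smoothing width, and correlation measure here are illustrative assumptions.

```python
import numpy as np

def fixation_map(fixations, size=32, sigma=2.0):
    """Turn a list of (x, y) fixations into a smoothed, normalized density map."""
    ys, xs = np.mgrid[0:size, 0:size]
    grid = np.zeros((size, size))
    for fx, fy in fixations:
        # Place a Gaussian bump at each fixation location.
        grid += np.exp(-((xs - fx) ** 2 + (ys - fy) ** 2) / (2 * sigma ** 2))
    return grid / grid.sum()

def strategy_similarity(map_a, map_b):
    """Pearson correlation between two flattened fixation maps."""
    return float(np.corrcoef(map_a.ravel(), map_b.ravel())[0, 1])
```

Two observers who both fixate mostly the eye region yield highly correlated maps, while an eye-focused and a mouth-focused observer do not — the kind of idiosyncrasy the study quantifies across expressions and stimulus modalities.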


Sujet(s)
Émotions , Expression faciale , Reconnaissance faciale , Fixation oculaire , Humains , Reconnaissance faciale/physiologie , Femelle , Mâle , Adulte , Fixation oculaire/physiologie , Émotions/physiologie , Jeune adulte , Mouvements oculaires/physiologie , Stimulation lumineuse/méthodes