ABSTRACT
Benzodiazepines are drugs commonly used to treat anxiety, including in crime witnesses. They enhance GABAergic inhibition, which impairs the encoding and consolidation of aversive memories. Eyewitness memory is essential to the justice system; however, memory is malleable, and false memories can lead to the selection of an innocent person in a lineup. Here, we studied whether a low dose of clonazepam impairs the encoding and consolidation of memory for faces and for the narrative of an event. We performed two experiments using a double-blind, between-subject design (N = 216). On day 1, subjects watched a crime video and received clonazepam 0.25 mg (CLZ group) or placebo (PLC group) either before (Exp. 1) or after (Exp. 2) the video, to assess effects on encoding and consolidation, respectively. One week later, memory was assessed with target-present and target-absent lineups and a free recall task. Regarding encoding, memory in the CLZ group was impaired in the free recall task, while no differences were found for recognition memory. Regarding consolidation, no memory measures were affected by this dose of benzodiazepine. The results suggest that while some aspects of eyewitness memory can be modulated even by low doses of benzodiazepines, others may remain unaffected. Further studies should use higher doses of CLZ, similar to those administered in real life. These results are relevant in the judicial field for assessing the reliability of eyewitness identifications made under the effects of this drug.
Subject(s)
Clonazepam , Facial Recognition , Mental Recall , Humans , Facial Recognition/drug effects , Facial Recognition/physiology , Male , Double-Blind Method , Clonazepam/pharmacology , Young Adult , Female , Adult , Mental Recall/drug effects , Memory Consolidation/drug effects , Recognition, Psychology/drug effects , Adolescent
ABSTRACT
INTRODUCTION: Parkinson's disease is characterised by the presence of motor symptoms including hypomimia, and by non-motor symptoms including alterations in facial recognition of basic emotions. Few studies have investigated this alteration and its relationship to the severity of hypomimia. OBJECTIVE: The objective is to study the relationship between hypomimia and the facial recognition of basic emotions in subjects with Parkinson's disease. SUBJECTS AND METHODS: Twenty-three patients and 29 controls were evaluated with the test battery for basic emotion facial recognition. The patients were divided into two subgroups according to the intensity of their hypomimia. RESULTS: The comparison in battery test performance between the minimal/mild hypomimia and moderate/severe hypomimia groups was statistically significant in favour of the former group. CONCLUSIONS: This finding shows a close relationship between expression and facial recognition of emotions, which could be explained through the mechanism of motor simulation.
TITLE: Relación entre la gravedad de la hipomimia y el reconocimiento de emociones básicas en la enfermedad de Parkinson [Relationship between the severity of hypomimia and the recognition of basic emotions in Parkinson's disease].
Subject(s)
Emotions , Facial Recognition , Parkinson Disease , Severity of Illness Index , Humans , Parkinson Disease/psychology , Parkinson Disease/complications , Parkinson Disease/physiopathology , Male , Female , Middle Aged , Aged , Facial Recognition/physiology , Facial Expression
ABSTRACT
Wearing facial masks became a common practice worldwide during the COVID-19 pandemic. This study investigated (1) whether facial masks covering adult faces affect 4- to 6-year-old children's recognition of emotions in those faces and (2) whether the duration of children's exposure to masks is associated with emotion recognition. We tested children from Switzerland (N = 38) and Brazil (N = 41); Brazil represented longer mask exposure due to a stricter mandate during COVID-19. Children had to choose a face displaying a specific emotion (happy, angry, or sad) when the face wore either no cover, a facial mask, or sunglasses. Longer mask exposure was associated with better emotion recognition. Controlling for hours of exposure, children were less likely to recognise emotions in partially hidden faces. Moreover, Brazilian children were more accurate than Swiss children in recognising happy faces. Overall, facial masks may negatively impact children's emotion recognition; however, prolonged exposure appears to buffer the lack of facial cues from the nose and mouth. In conclusion, restricting facial cues with masks may impair kindergarten children's emotion recognition in the short run but may facilitate their broader reading of facial emotional cues in the long run.
Subject(s)
COVID-19 , Emotions , Facial Recognition , Masks , Humans , Male , Female , Brazil , Child, Preschool , Child , Switzerland , COVID-19/psychology , COVID-19/prevention & control , Facial Expression , Time Factors
ABSTRACT
This study evaluated the recognition (imitation, identity, and identification) and naming of negative (anger and sadness) and positive (joy and surprise) emotional stimuli alongside the influence of the types of stimuli (social-female, social-male, family, and emoji) in children and young adults with autism and Down syndrome, via tasks applied by the family and mediated by technological resources, during the COVID-19 pandemic. Five children and two young adults with autism and one child and two young adults with Down syndrome participated. Identity, recognition, naming, and imitation tasks were planned and implemented using facial stimuli with evaluative (without differential consequence) and teaching (with differential consequence, tips, and learning criteria) functions, aiming at the emergence of emotional naming from the recognition teaching tasks. The baseline results showed that, for participants who had a shorter response time for the same gender, the response time difference was on average 57.28% lower. Regarding the emotional valence, 50% of the participants showed differences in the correct answers, depending on the positive and negative valence, and 66.66% showed differences in the response time depending on the emotional valence. After the teaching procedure, the participants showed a greater number of correct answers in the tasks, regardless of the stimulus type and emotional valence, creating an opportunity for generalizing learning of emotion recognition and naming, in addition to consolidating the feasibility of teaching strategies mediated by technological resources and applied by family members.(AU)
Subject(s)
Humans , Male , Female , Child, Preschool , Child , Adolescent , Adult , Young Adult , Autistic Disorder , Family , Down Syndrome , Expressed Emotion , Emotions , Anxiety , Parent-Child Relations , Parents , Perception , Perceptual Distortion , Personality , Play and Playthings , Prejudice , Psychiatry , Psychology , Psychology, Social , Attention , Audiovisual Aids , Signs and Symptoms , Social Desirability , Social Environment , Social Values , Socialization , Stereotyping , Task Performance and Analysis , Visual Perception , Women , Behavior , Body Image , Image Processing, Computer-Assisted , Symbolism , Activities of Daily Living , Artificial Intelligence , Adaptation, Psychological , Grief , Attitude , Cognitive Behavioral Therapy , Child , Child Rearing , Chromosomes , Clinical Trial , Mental Competency , Caregivers , Cognition , Signal Detection, Psychological , Communication , Conscience , Intuition , Observation , Stereotypic Movement Disorder , Chromosome Disorders , Personal Autonomy , Adult Children , Trust , Comprehension , Personnel Delegation , Data Compression , Education , Education of Intellectually Disabled , Education, Special , Ego , Empathy , Exploratory Behavior , Face , Facial Expression , Cultural Competency , Young Adult , Fear , Feedback , Emotional Intelligence , Social Stigma , Pandemics , Social Skills , Social Norms , Emotional Adjustment , Optimism , Metacognition , Facial Recognition , Autism Spectrum Disorder , Applied Behavior Analysis , Self-Management , Respect , Emotional Regulation , Generalization, Psychological , Genetics , Social Interaction , Identity Recognition , COVID-19 , Gestures , Cognitive Training , Family Support , Processing Speed , Handling, Psychological , Imagination , Interpersonal Relations , Language , Life Change Events , Memory, Short-Term , Men , Mental Disorders , Mental Processes , Intellectual Disability , Nervous System Diseases , Neurologic Manifestations , Neurology , Neuropsychological Tests , 
Nonverbal Communication
ABSTRACT
Facial expression is the best evidence of our emotions. Its automatic detection and recognition are key for robotics, medicine, healthcare, education, psychology, sociology, marketing, security, entertainment, and many other areas. Experiments in laboratory environments achieve high performance; real-world scenarios, however, remain challenging. Deep learning techniques based on convolutional neural networks (CNNs) have shown great potential. Most research is exclusively model-centric, searching for better algorithms to improve recognition, yet progress is insufficient. Despite datasets being the main resource for automatic learning, few works focus on improving their quality. We propose a novel data-centric method to tackle misclassification, a problem commonly encountered in facial image datasets. The strategy is to progressively refine the dataset through successive trainings of a fixed CNN model. Each training uses the facial images corresponding to the correct predictions of the previous training, allowing the model to capture more distinctive features of each class of facial expression. After the last training, the model performs an automatic reclassification of the whole dataset. Unlike other similar work, our method avoids modifying, deleting, or augmenting facial images. Experimental results on three representative datasets demonstrate the effectiveness of the proposed method, improving validation accuracy by 20.45%, 14.47%, and 39.66% for FER2013, NHFI, and AffectNet, respectively. The recognition rates on the reclassified versions of these datasets are 86.71%, 70.44%, and 89.17%, constituting state-of-the-art performance.
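The refinement loop described above (train on the currently kept images, retain only the correctly predicted ones, retrain, then reclassify the whole dataset) can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: a trivial nearest-centroid classifier stands in for the fixed CNN, and all function names are hypothetical.

```python
import numpy as np

def train_centroids(X, y, n_classes):
    # stand-in for "training the fixed model": one centroid per class
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

def predict(centroids, X):
    # nearest-centroid classification
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def progressive_refinement(X, y, n_classes, rounds=3):
    keep = np.ones(len(X), bool)              # start from the full dataset
    model = train_centroids(X, y, n_classes)
    for _ in range(rounds):
        correct = predict(model, X) == y      # which labels the model agrees with
        if correct[keep].all():               # converged: kept set is self-consistent
            break
        keep &= correct                       # drop suspected misclassified images
        model = train_centroids(X[keep], y[keep], n_classes)
    return predict(model, X)                  # final pass: reclassify everything
```

On synthetic two-cluster data with a few deliberately mislabeled points, the loop discards the mislabeled items and the final reclassification restores consistent labels.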
Subject(s)
Facial Recognition , Robotics , Neural Networks, Computer , Algorithms , Face , Facial Expression
ABSTRACT
Tasks we often perform in everyday life, such as reading or looking for a friend in a crowd, are seemingly straightforward but actually require the orchestrated activity of several cognitive processes. Free-viewing visual search requires a plan to move our gaze across the different items, identify them, and decide whether to continue the search. Little is known about the electrophysiological signatures of these processes in free viewing because of the technical challenges associated with eye movement artefacts. Here, we aimed to study how category information, as well as ecologically relevant variables such as the task performed, influences brain activity in a free-viewing paradigm. Participants were asked to observe or search an array of faces and objects embedded in random noise. We concurrently recorded the electroencephalogram and eye movements and applied a deconvolution analysis approach to estimate the contribution of the different elements of the task. Consistent with classical fixed-gaze experiments and a handful of free-viewing studies, we found a robust categorical effect around 150 ms in occipital and occipitotemporal electrodes. We also report a task effect, more negative in posterior central electrodes during visual search compared with exploration, starting at around 80 ms, as well as significant effects of trial progression and an interaction with the task effect. Overall, these results generalise the characterisation of early visual face processing to a wider range of experiments and show how a suitable analysis approach allows us to discern among multiple neural contributions to the signal while preserving key attributes of real-world tasks.
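The deconvolution idea above — estimating fixation-locked responses from continuous EEG despite overlap between successive fixations — reduces, in its linear form, to a regression with time-lagged stick predictors. The sketch below is an illustrative simulation under assumed parameters, not the study's actual pipeline: jittered fixation onsets make the overlapping responses separable, and ordinary least squares recovers the response shape that plain epoch averaging distorts.

```python
import numpy as np

rng = np.random.default_rng(1)
window = 30                                     # samples of response to estimate
onsets = np.cumsum(rng.integers(12, 28, 24))    # jittered fixation onsets, closer than the window
kernel = np.sin(np.linspace(0, np.pi, window))  # assumed "true" fixation-locked response

# simulate continuous EEG as overlapping copies of the kernel plus noise
eeg = np.zeros(onsets[-1] + window)
for t in onsets:
    eeg[t:t + window] += kernel
eeg += rng.normal(0, 0.05, eeg.size)

# design matrix: one stick-function column per time lag after each fixation
X = np.zeros((eeg.size, window))
for t in onsets:
    X[t:t + window, :] += np.eye(window)

beta, *_ = np.linalg.lstsq(X, eeg, rcond=None)                # overlap-corrected estimate
naive = np.mean([eeg[t:t + window] for t in onsets], axis=0)  # plain averaging, overlap-distorted
```

The regression estimate tracks the simulated response closely, while the naive average inherits large contributions from neighbouring fixations.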
Subject(s)
Eye Movements , Facial Recognition , Humans , Electroencephalography , Visual Perception/physiology , Fixation, Ocular
ABSTRACT
The estimation of a person's facial appearance from a dry skull is called Forensic Facial Reconstruction (FFR). It can be performed digitally or manually, based on the marking of cranial landmarks, each with a different mean thickness of overlying soft tissue. In digital reconstructions, the use of cone-beam computed tomography (CBCT), which allows imaging of seated patients, enabled a significant advance in the measurement of mean facial soft tissue thicknesses. A protocol was developed for soft tissue measurements at 32 craniometric landmarks (10 sagittal and 11 bilateral). This work proposes adding five new cranial landmarks to the protocol, with measurements at the Mental (Ml), Supra-canine (sC), Frontozygomatic (Fz), Pterion (Pt), and Posterior Mandibular Ramus (prM) points, in order to increase the accuracy of reconstructions. The CBCT scans were processed in the Horus® software (LGPL 3.0) and measured according to a protocol adapted from Beaini et al. (1), obtaining soft tissue thicknesses at the proposed craniometric landmarks. We studied 100 CBCT scans of Brazilian adults (over 18 years of age) who anonymously made their examinations available for research; the scans form part of a database already structured and used in previous studies. This database contains examinations of 50 female and 50 male individuals, separated into groups by sex and age. Statistically, normality tests were applied and the difference between each group was tested to obtain the mean thicknesses for each cranial landmark. For the Fz point, mean soft tissue thicknesses were 4.56 mm for women and 5.14 mm for men. For Ml, mean facial soft tissue thicknesses (FSTT) were 12.88 mm for women and 14.74 mm for men. For prM, mean FSTT were 18.30 mm for men and 19.69 mm for women. For Pt, mean FSTT were 11.01 mm for women and 13.09 mm for men. For sC, mean FSTT were 10.99 mm for women and 12.71 mm for men. The division of FSTT by sex is justified, in agreement with a significant portion of the literature, since four of the five points studied showed statistically significant differences, with thicknesses in male individuals being greater than in female individuals.
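The statistical procedure summarized above (normality testing per landmark, then a sex comparison to justify sex-specific thickness tables) can be sketched as follows. This is a hedged illustration with synthetic, hypothetical thickness values, not the study's data; the function name is an assumption and SciPy's standard tests are used.

```python
import numpy as np
from scipy import stats

def compare_thickness(female_mm, male_mm, alpha=0.05):
    # normality check for each sex group at one landmark...
    normal = (stats.shapiro(female_mm).pvalue > alpha
              and stats.shapiro(male_mm).pvalue > alpha)
    # ...then a parametric (Welch) or rank-based comparison accordingly
    if normal:
        test = stats.ttest_ind(female_mm, male_mm, equal_var=False)
    else:
        test = stats.mannwhitneyu(female_mm, male_mm)
    return float(np.mean(female_mm)), float(np.mean(male_mm)), float(test.pvalue)

# hypothetical thicknesses (mm) at one landmark for 50 women and 50 men
rng = np.random.default_rng(42)
f = rng.normal(11.0, 1.5, 50)
m = rng.normal(13.1, 1.5, 50)
mean_f, mean_m, p = compare_thickness(f, m)
```

With group differences of the magnitude reported in the abstract (1–2 mm at n = 50 per sex), such a comparison would typically reach significance, supporting sex-specific tables.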
Subject(s)
Humans , Male , Female , Adolescent , Adult , Middle Aged , Aged , Young Adult , Skull/anatomy & histology , Forensic Anthropology , Face/anatomy & histology , Dimensional Measurement Accuracy , Forensic Dentistry , Sex Factors , Age Factors , Cone-Beam Computed Tomography , Facial Recognition
ABSTRACT
OBJECTIVE: In this review, we examined whether there is a deficit in facial emotion recognition (FER) in children, adolescents, and adults with attention deficit hyperactivity disorder (ADHD). BACKGROUND: Emotional regulation is impaired in ADHD. Although a facial emotion recognition deficit has been described in this condition, the underlying causal mechanisms remain unclear. METHODS: The search was performed in six databases in September 2022. Studies assessing children, adolescents, or adults with isolated or comorbid ADHD that evaluated participants using a FER task were included. RESULTS: Twelve studies out of 385 were selected, with participants ranging in age from 6 to 37.1 years. A deficit in FER, either specific to ADHD or secondary to comorbid autism spectrum disorder, anxiety, and oppositional symptoms, was found. CONCLUSIONS: There is a FER deficit in patients with ADHD. Adults showed improved recognition accuracy, reflecting partial compensation. ADHD symptoms and comorbidities appear to influence FER deficits.
Subject(s)
Attention Deficit Disorder with Hyperactivity , Autism Spectrum Disorder , Facial Recognition , Adolescent , Child , Adult , Humans , Young Adult , Attention Deficit Disorder with Hyperactivity/psychology , Facial Recognition/physiology , Autism Spectrum Disorder/complications , Emotions/physiology , Recognition, Psychology , Facial Expression
ABSTRACT
BACKGROUND AND OBJECTIVE: Patients suffering from Parkinson's disease (PD) present a reduction in facial movements called hypomimia. In this work, we propose to use machine learning facial expression analysis from face images, based on action unit domains, to improve PD detection. We propose different domain adaptation techniques to exploit the latest advances in automatic face analysis and facial action unit detection. METHODS: Three different approaches are explored to model the facial expressions of PD patients: (i) face analysis using single-frame images and sequences of images, (ii) transfer learning from face analysis to action unit recognition, and (iii) triplet-loss functions to improve the automatic classification between patients and healthy subjects. RESULTS: Real face images from PD patients show that it is possible to properly model elicited facial expressions using image sequences (neutral, onset-transition, apex, offset-transition, and neutral), with accuracy improvements of up to 5.5% (from 72.9% to 78.4%) with respect to single-image PD detection. We also show that our proposed action unit domain adaptation provides improvements of up to 8.9% (from 78.4% to 87.3%) with respect to face analysis, and that triplet-loss functions provide improvements of up to 3.6% (from 78.8% to 82.4%) with respect to action unit domain adaptation applied to models created from scratch. The code of the experiments is available at https://github.com/luisf-gomez/Explorer-FE-AU-in-PD. CONCLUSIONS: Domain adaptation via transfer learning seems to be a promising strategy for modeling hypomimia in PD patients. Considering the good results, and the fact that only up to five images per participant are used in each sequence, we believe this work is a step forward in the development of inexpensive computational systems suitable for modeling and quantifying problems of PD patients in their facial expressions.
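The triplet-loss component mentioned above trains embeddings so that an anchor sample lies closer to a same-class sample (positive) than to a different-class sample (negative) by at least a margin. Below is a minimal sketch of the standard formulation, not the paper's exact code; the margin value and variable names are assumptions.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # squared Euclidean distances between rows of embedding matrices
    d_pos = np.sum((anchor - positive) ** 2, axis=1)  # anchor vs same class
    d_neg = np.sum((anchor - negative) ** 2, axis=1)  # anchor vs other class
    # hinge: penalize triplets where the negative is not at least
    # `margin` farther from the anchor than the positive
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

# a satisfied triplet (negative already far away) contributes zero loss
a = np.array([[0.0, 0.0]])
p = np.array([[0.1, 0.0]])
n = np.array([[2.0, 0.0]])
```

During training, this scalar would be minimized over mined triplets of face-image embeddings (e.g., PD patient vs healthy control), pulling same-class embeddings together and pushing the classes apart.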
Subject(s)
Facial Recognition , Parkinson Disease , Humans , Parkinson Disease/diagnosis , Facial Expression , Movement , Machine Learning , Recognition, Psychology
ABSTRACT
Abstract Technological advancements have generated tools to help with identifying individuals, allowing to verify identities and solve crimes by confirming found missing persons or accident victims, for example. An important ethical question, however, arises: do the ends always justify the means? Can facial identification from images collected by closed-circuit television cameras or analysis of photographic records confirm someone's identity unequivocally? Can fingerprints or lip prints be used for any dactyloscopy? Knowing the limitations of scientific technical methods used in morphological comparisons allows examiners to comply with two fundamental constitutional principles: that of legality and right of the human person. By respecting them, examiners will be acting according to ethical limits.
Subject(s)
Forensic Anthropology , Ethics , Expert Testimony , Facial Recognition
ABSTRACT
Facial mimicry is an involuntary behavior that facilitates the transmission of relevant non-verbal information in different social contexts. The present study aimed to analyze the ability to recognize emotional expressions while the observer tenses their own face or imitates the target face. The hypothesis was that individuals who tense their own face would be less likely, and those who imitate the expression more likely, to succeed in tasks of recognizing emotional expressions on faces. The sample consisted of 30 participants, divided into two experimental groups: the Imitation Group (GI) and the Noise Group (GR), both with 18 female and 12 male participants. The experiment consisted of presenting pictures of actors facially expressing a basic emotion for 10 seconds; during this period, participants either observed or intervened facially, imitating or tensing their own face according to their assigned group (Imitation or Noise). After the 10 seconds of executing the instruction (observing, imitating, or interfering), the participant had to indicate, among the options joy, sadness, disgust, anger, surprise, and fear, the emotion corresponding to the image. The results showed significant differences between the tensing and imitating conditions, suggesting that altering the observer's own face can influence performance on a facial emotion recognition task.(AU)
Subject(s)
Humans , Male , Female , Adolescent , Adult , Middle Aged , Young Adult , Emotions , Facial Expression , Facial Recognition , Psychology , Sensory Receptor Cells , Autistic Disorder , Behavior and Behavior Mechanisms , Neurosciences , Artificial Intelligence , Nuclear Family , Communication , Expressed Emotion , Program for Incentives and Benefits , Mirror Neurons , Physical Appearance, Body , Social Cognition , Handling, Psychological , Interpersonal Relations , Language Development , Noise , Nonverbal Communication
ABSTRACT
ABSTRACT. Recognizing others' emotions is an important skill for the social context that can be modulated by variables such as gender, age, and race. A number of studies have sought to develop specific face databases to assess the recognition of basic emotions in different contexts. Objectives: This systematic review gathered these studies, describing and comparing the methodologies used in their elaboration. Methods: The databases used to select the articles were PubMed, Web of Science, PsycInfo, and Scopus. The following search string was used: "Facial expression database OR Stimulus set AND development OR Validation." Results: A total of 36 articles showed that most studies used actors to express the emotions, which were elicited from specific situations to generate the most spontaneous emotion possible. The databases were mainly composed of colorful and static stimuli. In addition, most studies sought to establish and describe standards for recording the stimuli, such as the color of the garments worn and the background. The psychometric properties of the databases are also described. Conclusions: The data presented in this review point to methodological heterogeneity among the studies. Nevertheless, we describe their patterns, contributing to the planning of new research that seeks to create databases for new contexts.
Subject(s)
Humans , Facial RecognitionABSTRACT
This publication describes the available scientific evidence on the effectiveness and safety of mandatory versus non-mandatory mask use in school settings, based on studies conducted in educational institutions in a context of vaccine availability. A significant association was reported between schools with mask mandates and a substantial reduction in the number of COVID-19 cases among students and school staff. However, the evidence comes from four ecological studies; it was not possible to isolate the effect of mask use from that of other mitigation measures, and the analysis did not account for factors such as the level of adherence to or compliance with the mandate, the type of mask used, household transmission, the different case-detection protocols in the included schools, and their capacity to detect asymptomatic cases. All studies were conducted during a period of vaccine availability, mostly for children aged 12 years or older, and before the circulation of the Omicron variant. A secondary analysis identified that greater intensity of community transmission, a higher level of individualism in the population, and secondary (versus preschool) education level were associated with an increased risk of infection in schools. The risk decreased with the application of single preventive measures (physical distancing or mask use) or combined measures (physical distancing and mask use) versus no measures, and with increasing population immunity. Regarding psychosocial and communication effects, the results of six studies were heterogeneous.
No important effect was found on children's ability to infer emotions from masked faces; there were no differences in cognitive performance between children previously exposed to mask use and those who were not; and language comprehension was similar whether or not the speaker wore a mask, in the absence of noise. On the other hand, poorer facial recognition performance and altered processing of masked faces were observed.
Subject(s)
Safety , Disease Transmission, Infectious , Education, Primary and Secondary , Absenteeism , Facial Recognition , Physical Distancing , N95 Respirators , COVID-19 , Disaster Mitigation , Immunity , MasksABSTRACT
Automatic identification of human facial expressions has many potential applications in today's connected world, from mental health monitoring to feedback for onscreen content or shop windows and sign-language prosodic identification. In this work we use visual information as input, namely, a dataset of face points delivered by a Kinect device. Most recent work on facial expression recognition uses machine learning techniques, favoring a modular, data-driven development path over human-invented ad hoc rules. In this paper, we present a machine-learning-based method for automatic facial expression recognition that leverages information fusion architecture techniques from our previous work together with soft voting. Our approach shows an average prediction performance clearly above the best state-of-the-art results for the dataset considered. These results provide further evidence of the usefulness of information fusion architectures over the default ML approach of feature aggregation.
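The soft-voting step mentioned above can be sketched as follows. This is a minimal, hedged illustration using scikit-learn: the two estimators and the synthetic features (standing in for Kinect face-point coordinates) are placeholders chosen for the sketch, not the paper's actual pipeline or fusion architecture.

```python
# Minimal soft-voting sketch (assumed setup, not the paper's pipeline).
# Soft voting averages each member's per-class predicted probabilities
# and picks the class with the highest mean probability.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for face-point features; 3 hypothetical expression classes.
X, y = make_classification(n_samples=300, n_features=20, n_classes=3,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=0))],
    voting="soft",  # average probabilities instead of counting hard votes
)
ensemble.fit(X_tr, y_tr)
print(round(ensemble.score(X_te, y_te), 2))
```

Soft voting requires every member to expose calibrated-ish `predict_proba` outputs, which is why probabilistic models are used here.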
Subject(s)
Facial Recognition , Face , Facial Expression , Humans , Machine Learning , PoliticsABSTRACT
Recognizing emotional facial expressions in others is a valuable non-verbal communication skill and is particularly relevant throughout childhood, given that children's language skills are not yet fully developed while their first interactions with peers have just started. This study aims to investigate developmental markers of emotional facial expression recognition in children and the effects of age and sex on it. A total of 90 children took part in the study, split into three age groups: 6-7 years old (n = 30), 8-9 years old (n = 30), and 10-11 years old (n = 30). Participants were exposed to 38 photos at two exposure times (500 ms and 1000 ms) of children expressing happiness, sadness, anger, disgust, fear, and surprise at three intensities, plus images of neutral faces. Happiness was the easiest expression to recognize, followed by disgust and surprise. As expected, the 10-11-year-old group showed the highest mean accuracy, whereas the 6-7-year-old group had the lowest. The data support the absence of a female advantage.
Subject(s)
Facial Expression , Facial Recognition , Child , Emotions , Fear , Female , Happiness , HumansABSTRACT
Recognition using ear images has been an active field of research in recent years. Besides faces and fingerprints, ears have a unique structure that can identify people and can be captured from a distance, contactlessly, and without the subject's cooperation. They therefore represent an appealing choice for building surveillance, forensic, and security applications. However, many techniques used in those applications - e.g., convolutional neural networks (CNNs) - usually demand large-scale datasets for training. This research work introduces a new dataset of ear images taken under uncontrolled conditions that presents high inter-class and intra-class variability. We built this dataset using an existing face dataset called VGGFace, which gathers more than 3.3 million images. In addition, we perform ear recognition using transfer learning with CNNs pretrained on image and face recognition. Finally, we performed two experiments on two unconstrained datasets and reported our results using rank-based metrics.
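Rank-based identification metrics of the kind reported above can be sketched as follows. This is a hedged illustration only: the embeddings and identity labels are synthetic placeholders for CNN features of probe and gallery images, not data from the study.

```python
# Rank-k identification accuracy sketch (synthetic data, assumed setup).
import numpy as np

rng = np.random.default_rng(0)
gallery = rng.normal(size=(50, 128))           # one embedding per enrolled identity
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)
probes = gallery + 0.1 * rng.normal(size=gallery.shape)  # noisy views of the same people
probes /= np.linalg.norm(probes, axis=1, keepdims=True)
labels = np.arange(50)                          # probe i belongs to identity i

def rank_k_accuracy(probes, gallery, labels, k):
    # Cosine similarity of every probe against every gallery identity,
    # then check whether the true identity appears among the top-k matches.
    sims = probes @ gallery.T
    topk = np.argsort(-sims, axis=1)[:, :k]
    return (topk == labels[:, None]).any(axis=1).mean()

print(rank_k_accuracy(probes, gallery, labels, 1))
```

Rank-1 is the strictest score (the correct identity must be the single best match); Rank-k accuracy is non-decreasing in k and reaches 1.0 when k equals the gallery size.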
Subject(s)
Facial Recognition , Neural Networks, Computer , Ear , Humans , Recognition, PsychologyABSTRACT
Autism Spectrum Disorder (ASD) is a heterogeneous condition that affects face perception. Evidence shows that there are differences in face perception associated with the processing of low spatial frequency (LSF) and high spatial frequency (HSF) visual stimuli between non-symptomatic relatives of individuals with autism (broader autism phenotype, BAP) and typically developing individuals. However, the neural mechanisms involved in these differences are not fully understood. Here we tested whether face-sensitive event-related potentials could serve as neuronal markers of differential spatial frequency processing, and whether these potentials could differentiate non-symptomatic parents of children with autism (pASD) from parents of typically developing children (pTD). To this end, we performed electroencephalographic recordings of both groups of parents while they had to recognize emotions in face pictures composed of the same or different emotions (happiness or anger) presented at different spatial frequencies. We found no significant differences in accuracy between groups, but lower amplitude modulation of the Late Positive Potential activity in pASD. Source analysis showed a difference in the right posterior part of the superior temporal region that correlated with the ASD symptomatology of the child. These results reveal differences in the brain processing of facial emotion recognition in BAP that could be a precursor of ASD.
Subject(s)
Autism Spectrum Disorder/physiopathology , Cerebral Cortex/physiopathology , Electroencephalography , Emotions , Evoked Potentials , Facial Expression , Facial Recognition , Adult , Female , Humans , MaleABSTRACT
Faulty witness identification can lead to the conviction of an innocent person. An effective method to reduce misidentification is using a lineup, a procedure in which the suspect is presented among "fillers" (non-suspects similar to the suspect). In an experiment, we compared the responses of eyewitnesses in lineups where fillers had moderate or high similarity to the suspect. Regardless of the degree of similarity, guilty suspects were identified more often than innocent suspects and fillers, and fillers were identified more often than innocent suspects. The similarity between fillers and suspect did not affect the probability of suspect recognition, whether the suspect was guilty or innocent. The results are discussed in the light of theories about the similarity effect of fillers, and implications for the Brazilian justice system (AU).
Subject(s)
Humans , Female , Adolescent , Adult , Middle Aged , Recognition, Psychology , Criminals/psychology , Memory, Episodic , Facial RecognitionABSTRACT
OBJECTIVE: To compare plasma concentrations of cannabidiol (CBD) following oral administration of two formulations of the drug (powder and dissolved in oil), and to evaluate the effects of these distinct formulations on responses to emotional stimuli in healthy human volunteers. METHODS: In a randomized, double-blind, placebo-controlled, parallel-group design, 45 healthy male volunteers were randomly assigned to three groups of 15 subjects that received either 150 mg of CBD powder; 150 mg of CBD dissolved in corn oil; or placebo. Blood samples were collected at different times after administration, and a facial emotion recognition task was completed after 150 min. RESULTS: There were no significant differences across groups in the subjective and physiological measures, nor in the facial emotion recognition task. However, groups that received the drug showed statistically significant differences in baseline measures of plasma CBD, with a significantly greater difference in favor of the oil formulation. CONCLUSION: When administered as a single 150-mg dose, neither formulation of oral CBD altered responses to emotional stimuli in healthy subjects. The oil-based CBD formulation resulted in more rapid achievement of peak plasma level, with an approximate fourfold increase in oral bioavailability.
Subject(s)
Cannabidiol , Emotions , Facial Recognition , Pharmaceutical Vehicles , Administration, Oral , Cannabidiol/chemistry , Cannabidiol/pharmacology , Double-Blind Method , Drug Compounding , Humans , MaleABSTRACT
OBJECTIVES: To assess differences in the recognition of facial expressions of emotion among caregivers of older people with different levels of empathy. METHODS: A cross-sectional study was conducted with 158 caregivers of older adults who provided care in family residences or nursing homes. The caregivers were divided into three groups based on the score of the multidimensional Interpersonal Reactivity Index: "lower empathy", "intermediate empathy", and "higher empathy". Data collection involved the administration of a sociodemographic questionnaire, the Emotion Recognition Test, and the Patient Health Questionnaire. RESULTS: No significant differences were found among the groups in terms of sociodemographic variables. Regarding clinical characteristics, the "higher empathy" group had more depressive symptoms than the other groups (p = .001). Moreover, the "higher empathy" group exhibited greater accuracy at recognizing the expression of sadness than the "lower empathy" group (p = .033). The recognition of sadness remained significant in the analysis of variance adjusted for depressive symptoms (p < .05). CONCLUSIONS: Caregivers with higher levels of empathy showed greater accuracy at recognizing the emotion of sadness compared to caregivers with lower levels of empathy. Additionally, caregivers with greater empathy had more depressive symptoms. CLINICAL IMPLICATIONS: The recognition of facial expressions of sadness may give caregivers a skill for inferring possible needs in older care recipients. However, a higher level of empathy may exert a negative psychological impact on caregivers of older people, which could have repercussions for the quality of care provided.