ABSTRACT
Background: Identification of emotional facial expressions (EFEs) is important in interpersonal communication. Six 'universal' EFEs are known, though the accuracy of their identification varies. EFEs involve anatomical changes in certain regions of the face, especially the eyes and mouth, but whether other areas of the face are just as important for their identification is still debated. This study was conducted to compare the accuracy of identification of the universal EFEs under full-face and partial-face conditions (showing only the eye and mouth regions). Methods: An analytical cross-sectional study was conducted among 140 young Indian adults. They were divided into two equal groups and shown the six universal EFEs on a computer screen in two sets, one with full-face images and the other with images showing only the eye and mouth regions. The participants were asked to identify each EFE, and their responses were analyzed. Results: Mean age was 21.3 ± 1.7 years for the full-face group and 21.2 ± 1.6 years for the partial-face group. Most participants were men, from rural areas and upper-socioeconomic-status families, and many were students. EFE identification was significantly higher in the part-face group than in the full-face group (p = .0007). Participants in both groups identified happiness best (100%). For the other EFEs, part-face images were identified more accurately than full-face images, except for disgust; these differences were statistically significant except for anger and fear. Conclusions: Among young Indian adults, accuracy of identification of the universal EFEs was high and was significantly enhanced for all EFEs except disgust when only combinations of eyes and mouth were shown, suggesting that other facial regions serve as distractors in EFE identification. Key Messages: 1. Identification of the universal EFEs was higher from partial faces (a combination of eyes and mouth) than from full-face EFEs for all emotions except disgust. 2. This suggests that other regions of the face serve as potential distractors in the identification of emotions, except for disgust, where these regions provide additional information.
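For illustration, a minimal sketch (not the authors' analysis code) of the kind of between-group comparison that yields a p-value such as .0007. Only the group sizes (70 per group) come from the abstract; the correct/incorrect tallies below are hypothetical.

```python
# Hedged sketch: chi-square test of independence on a 2x2 table of
# correct vs. incorrect identifications in the two viewing conditions.
from scipy.stats import chi2_contingency

correct_full, incorrect_full = 55, 15   # hypothetical full-face tallies (n = 70)
correct_part, incorrect_part = 67, 3    # hypothetical part-face tallies (n = 70)

table = [[correct_full, incorrect_full],
         [correct_part, incorrect_part]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```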
ABSTRACT
Affective exchanges favor early development and can be shaped by maternal goals. This study investigated affective exchanges and attempted exchanges, maternal emotional socialization goals, and associations among these domains. Observations of 20 first-time mothers and their babies (two to three months old) in Rio de Janeiro were filmed, and the mothers were interviewed. Video and content analyses indicated, on average, 5.5 exchanges and 13.8 attempts per dyad, and 2.38 turns per exchange. The mother most frequently initiated affective exchanges (90%) and attempted exchanges (99%), mainly through speech. In the exchanges, the baby's smile was the predominant emotional expression, and speech was the predominant maternal affective behavior. Analysis of the interviews indicated that mothers valued autonomy goals while prizing close relationships, and no associations were found between maternal goals and the characteristics of the analyzed exchanges. Even so, the study showed a certain complexity in mother-baby affective exchanges at the developmental moment and in the context studied, with active participation by the baby. We propose maintaining the hypothesis that maternal goals affect babies' expressiveness in exchanges, which calls for further studies. Longitudinal and cross-sectional investigations with more than one visit and larger samples are suggested, exploring a wider range of sociodemographic variables.
Subjects
Mother-Child Relations, Affect
ABSTRACT
Training emotional facial expression recognition (EFER) can enhance other socio-emotional skills, such as theory of mind (ToM). This study aimed to develop an EFER training program for children and to assess its effects on EFER accuracy and ToM. Sixty-one children aged 8 to 12 years, randomly allocated to an intervention (n = 32) or control group (n = 29), performed pre- and post-intervention EFER and ToM (RMET-I) tasks. The intervention group completed the EFER training program named Hunters of Emotion. All participants increased their accuracy in recognizing fear and disgust and decreased their accuracy for sadness. Both groups improved on the ToM assessment. Specific features of the tasks used and of the training are presented in the discussion.
Subjects
Humans, Child, Social Perception, Facial Expression, Social Cognition
ABSTRACT
Automatic identification of human facial expressions has many potential applications in today's connected world, from mental-health monitoring to feedback for on-screen content or shop windows and sign-language prosodic identification. In this work we use visual information as input, namely a dataset of face points delivered by a Kinect device. Most recent work on facial expression recognition uses machine-learning techniques, favoring a modular, data-driven development path over hand-crafted ad hoc rules. In this paper, we present a machine-learning-based method for automatic facial expression recognition that combines the information-fusion architecture techniques from our previous work with soft voting. Our approach shows an average prediction performance clearly above the best state-of-the-art results for the dataset considered. These results provide further evidence of the usefulness of information-fusion architectures over the default ML approach of feature aggregation.
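As an illustration of the soft-voting idea mentioned above (not the authors' pipeline), a minimal sketch of a soft-voting ensemble over flattened face-point features; the feature shape (121 points × 3 coordinates) and the six-class label set are illustrative assumptions.

```python
# Hedged sketch: heterogeneous classifiers combined by averaging their
# predicted class probabilities (soft voting) on synthetic face-point data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 121 * 3))   # synthetic face-point vectors (assumed shape)
y = rng.integers(0, 6, size=600)      # six expression classes

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    voting="soft",                    # average predicted probabilities
)
print(cross_val_score(ensemble, X, y, cv=3).mean())
```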
Subjects
Facial Recognition, Face, Facial Expression, Humans, Machine Learning, Politics
ABSTRACT
Regarded as one of the most complex facial expressions, the smile is the product of several emotional states and shows significant sex differences. This study aimed to compare the frequency of smiling between men and women during speech, based on dynamic observation of the display of the upper dental arch. The sample consisted of 88 participants (41 men and 47 women), who were asked to describe previously selected images and were filmed during this procedure. From the recordings, the frequency of display of the upper dental arch was measured using a slow-motion (4.0x) feature and compared using Student's t-test. The results show a higher mean frequency among women (M = 23; SD = 8.22) than among men (M = 12; SD = 6.76), a statistically significant difference (t = 6.44; p < 0.0001). It was not possible to identify the determinants of these results; however, evolutionary, cognitive, and sociocultural factors that contribute to a more comprehensive understanding of this facial expression are discussed.
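For illustration, the reported group comparison can be approximated directly from the summary statistics in the abstract (this is not the authors' code, and small discrepancies from the reported t = 6.44 are expected because the published means and SDs are rounded).

```python
# Hedged sketch: Student's t-test recomputed from summary statistics.
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(mean1=23, std1=8.22, nobs1=47,   # women
                            mean2=12, std2=6.76, nobs2=41,   # men
                            equal_var=True)                  # classic Student's t
print(f"t = {t:.2f}, p = {p:.2e}")
```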
Subjects
Humans, Male, Female, Adult, Smiling/psychology, Developmental Psychology, Nonverbal Communication, Cultural Factors, Facial Expression
ABSTRACT
The capacity of visual working memory (VWM) depends on the complexity of the stimuli being processed. Emotional characteristics increase stimulus complexity and can interfere with the competition for cognitive resources. Studies involving emotional information processing are scarce and still produce contradictory results. In the present study, we investigated the capacity of VWM for faces with positive, negative, and neutral expressions. A modified change-detection task was used in two experiments, in which the number of faces and the emotional valence were manipulated. The results showed that VWM has a storage capacity of approximately two faces, which is fewer than the storage capacity identified for simpler stimuli. Our results reinforce the evidence that working memory can dynamically distribute its storage resources depending on both the amount and the emotional nature of the stimuli.
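For illustration, VWM capacity in change-detection tasks is commonly estimated with Cowan's K; the abstract does not state which estimator was used, so this is an assumption, and the formula below applies to single-probe designs.

```python
# Hedged sketch: Cowan's K capacity estimate, K = N * (hit rate - false-alarm rate).
def cowan_k(hit_rate: float, false_alarm_rate: float, set_size: int) -> float:
    """Estimated number of items held in memory out of N presented."""
    return set_size * (hit_rate - false_alarm_rate)

# e.g., with 4 faces per array, 80% hits and 30% false alarms (hypothetical values):
print(cowan_k(0.80, 0.30, 4))   # -> 2.0 faces, consistent with a ~2-face capacity
```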
Subjects
Humans, Male, Female, Adult, Emotions, Facial Expression, Short-Term Memory
ABSTRACT
Animals' facial expressions are widely used as a readout of emotion. Scientific interest in the facial expressions of laboratory animals has centered primarily on negative experiences, such as pain experienced as a result of scientific research procedures. Recent attempts to standardize the evaluation of facial expressions associated with pain in laboratory animals have culminated in the development of "grimace scales". The prevention or relief of pain in laboratory animals is a fundamental requirement for in vivo research to satisfy community expectations. However, to date the grimace scales do not appear to have seen widespread implementation as clinical pain-assessment techniques in biomedical research. In this review, we discuss some of the barriers to implementation of the scales in clinical laboratory animal medicine and the progress made in automating their collection, and we suggest avenues for future research.
ABSTRACT
BACKGROUND: Beyond the well-known deleterious effects of ethanol that define Fetal Alcohol Spectrum Disorders (FASD), the notion of fetal alcohol programming has gained scientific support. This phenomenon implies early neural plasticity related to learning mechanisms involving ethanol's sensory cues and the physiological effects of the drug, among others its reinforcing properties and its depressant effects upon respiration. In this study, we analyzed neonatal physiological and behavioral responsiveness recruited by the odor of the drug as a function of differential ethanol exposure during gestation. METHODS: A factorial design defined by maternal ethanol intake during pregnancy (Low, n = 38; Moderate, n = 18; or High, n = 19) and olfactory stimulation (ethanol odor and/or a novel scent) served as the basis of the study. Neonatal respiratory and cardiac frequencies, oxygen saturation levels, and appetitive or aversive facial expressions served as dependent variables. RESULTS: Newborns of High drinkers exhibited significant physiological and behavioral signs indicative of alcohol odor recognition, specifically respiratory depressions and exacerbated appetitive facial reactions coupled with diminished aversive expressions. Respiratory depressions were not accompanied by heart-rate accelerations (cardiorespiratory dysautonomia). According to ROC curve analyses, respiratory and behavioral reactivity were predictive of high maternal intake patterns. CONCLUSIONS: These results validate the notion of human fetal alcohol programming that is detectable immediately after birth. The reported early functional signs indicative of relatively high gestational alcohol exposure should broaden our capability of diagnosing FASD and lead to appropriate primary or secondary clinical interventions (Registry of Health Research N.3201-RePIS, Córdoba, Argentina).
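For illustration, a minimal sketch (synthetic data, not the study's) of the ROC analysis described above: a neonatal reactivity measure used to discriminate the high maternal intake group from the lower-intake groups. Only the group sizes come from the abstract; the predictor values are simulated.

```python
# Hedged sketch: ROC curve and AUC for a hypothetical neonatal reactivity predictor.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
high_intake = np.r_[np.ones(19), np.zeros(38 + 18)]    # group sizes from the abstract
reactivity = rng.normal(loc=high_intake, scale=1.0)    # hypothetical predictor values

auc = roc_auc_score(high_intake, reactivity)
fpr, tpr, thresholds = roc_curve(high_intake, reactivity)
print(f"AUC = {auc:.2f}")
```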
ABSTRACT
The stimulus equivalence paradigm provided operational criteria for identifying symbolic functions in observable behavior. When humans match dissimilar stimuli (e.g., words to pictures), equivalence relations between those stimuli are likely to be demonstrated through behavioral tests derived from the logical properties of reflexivity, symmetry, and transitivity. If these properties are confirmed, those stimuli can be said to be members of an equivalence class in which each member is substitutable for the others. A number of studies that established equivalence classes comprising arbitrary stimuli and pictures of faces expressing emotions have found that the valence of the faces affects the relatedness of the equivalent stimuli. Importantly, several studies reported stronger relational strength in equivalence classes containing happy faces than in equivalence classes containing angry faces. The processes that may account for this higher degree of relatability of happy faces are not yet known. The current study investigated the dynamics of symbolic relational responding involving facial expressions of different emotions by means of the Implicit Relational Assessment Procedure (IRAP). Participants were 186 undergraduate students who were taught to establish two equivalence classes, each comprising pictures of faces expressing either happiness (for one class) or a negative emotion (for the other class), and meaningless words. The IRAP effect was taken as an index of the relational strength established between equivalent stimuli in the different equivalence classes. The dynamics of arbitrary relational responding across the four IRAP trial types revealed a stronger IRAP effect in trials involving the happy faces and a weaker IRAP effect in trials involving the negative faces. These findings indicate that the happy faces had a greater impact on symbolic relational responding than the negative faces. The potential role played by the orienting function of happy versus negative faces is discussed. By considering other studies that also reported a happiness superiority effect in other contexts, we present converging evidence for the prioritization of positive affect in emotional, categorical, and symbolic processing.
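The abstract does not state the scoring algorithm behind the IRAP effect; a common index in the IRAP literature is a D-score per trial type, the difference between mean response latencies on inconsistent and consistent blocks divided by the pooled latency standard deviation. A minimal sketch under that assumption:

```python
# Hedged sketch: a simple latency-based D-score; the actual study may have used
# a different transformation.
import numpy as np

def irap_d_score(latencies_consistent, latencies_inconsistent):
    lat_c = np.asarray(latencies_consistent, dtype=float)
    lat_i = np.asarray(latencies_inconsistent, dtype=float)
    pooled_sd = np.concatenate([lat_c, lat_i]).std(ddof=1)
    return (lat_i.mean() - lat_c.mean()) / pooled_sd

# Larger positive values indicate stronger class-consistent responding,
# e.g., faster confirmation of happy-face equivalence relations (values in ms).
print(irap_d_score([1200, 1350, 1100], [1600, 1750, 1500]))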
ABSTRACT
Although motor activity is actively inhibited during rapid eye movement (REM) sleep, specific activations of the facial mimetic musculature have been observed during this stage, which may be associated with greater emotional dream mentation. Nevertheless, no specific biomarker of emotional valence or arousal related to dream content has been identified to date. To explore the electromyographic (EMG) activity (voltage, number, density, and duration) of the corrugator and zygomaticus major muscles during REM sleep and its association with emotional dream mentation, this study performed a series of experimental awakenings after EMG facial activations were observed during REM sleep. The study was performed with 12 healthy female participants using an 8-hr nighttime sleep recording. Emotional tone was evaluated by five blinded judges, and final valence and intensity scores were obtained. Emotions were mentioned in 80.4% of dream reports. The voltage, number, density, and duration of facial muscle contractions were greater for the corrugator muscle than for the zygomaticus muscle, whereas high positive emotion predicted the number (R² = 0.601, p = 0.0001) and voltage (R² = 0.332, p = 0.005) of zygomaticus events. Our findings suggest that zygomaticus events were predictive of the experience of positive affect during REM sleep in healthy women.
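For illustration, a minimal sketch (synthetic data, not the study's) of the kind of regression reported above: predicting the number of zygomaticus EMG events from rated positive emotion in the dream report, with R² as the effect-size measure.

```python
# Hedged sketch: simple linear regression and R-squared.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(2)
positive_emotion = rng.uniform(0, 5, size=40)                     # hypothetical ratings
zygomaticus_events = 2 * positive_emotion + rng.normal(0, 2, 40)  # hypothetical counts

fit = linregress(positive_emotion, zygomaticus_events)
print(f"R^2 = {fit.rvalue**2:.3f}, p = {fit.pvalue:.4f}")
```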
Subjects
Dreams/physiology, Electromyography/methods, Emotions/physiology, Facial Expression, REM Sleep/physiology, Adult, Female, Healthy Volunteers, Humans, Young Adult
ABSTRACT
Social anxiety disorder (SAD) is characterized by the fear of being judged negatively in social situations. Eye-tracking techniques have been prominent among the methods used in recent decades to investigate emotional processing in SAD. This study offers a systematic review of studies on eye-tracking patterns in individuals with SAD and controls in facial emotion recognition tasks. Thirteen articles were selected from the consulted databases. It was observed that the subjects with SAD exhibited hypervigilance-avoidance in response to emotions, primarily in the case of negative expressions. There was avoidance of conspicuous areas of the face, particularly the eyes, during observations of negative expressions. However, this hypervigilance did not occur if the stimulus was presented in virtual reality. An important limitation of these studies is that they use only static expressions, which can reduce the ecological validity of the results.
Subjects
Phobic Disorders/psychology, Emotions, Eye Movements, Facial Expression
ABSTRACT
The experience of maltreatment can impair child development, including changes in the process of emotion recognition, which may result in impaired social interactions and behavioral disabilities. To measure the association between maltreatment and changes in emotion recognition among Brazilian adolescents, the Emotional Recognition Test on Human Faces (ERTHF) was applied to a sample of 50 adolescents who had suffered different intensities and types of abuse. The social and clinical characteristics of the participants were analyzed and, from the ERTHF data, the accuracy and response time for emotion recognition. Sixty percent of participants were male, with a mean age of 13 years and 3 months; 60% were living in shelters. Changes in emotion recognition were associated with the intensity and types of maltreatment. Physical neglect (48%) was associated with changes in the recognition of neutral and negative emotions. Emotional neglect (48%) and emotional abuse (46%) were associated with changes in the recognition of both positive and negative emotions. Physical abuse (38%) was associated with changes in positive emotion recognition only. False recognition of anger was the most common outcome of maltreatment, being associated with physical neglect (p = 0.015) and emotional neglect (p = 0.047). Our results point to the need to add emotion and facial recognition rehabilitation interventions to better address the specific needs of maltreated children and to increase the chances of social and family reintegration.
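For illustration, a minimal sketch (hypothetical counts, not the study's data) of the kind of association test behind p-values such as p = 0.015 for physical neglect and false recognition of anger; the abstract does not state which test was used.

```python
# Hedged sketch: Fisher's exact test on a 2x2 table of exposure vs. outcome.
from scipy.stats import fisher_exact

#          false anger, no false anger
table = [[15, 9],    # physical neglect present (hypothetical counts)
         [6, 20]]    # physical neglect absent  (hypothetical counts)
odds_ratio, p = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p:.3f}")
```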
ABSTRACT
The Reading the Mind in the Eyes Test (RMET) is used internationally to assess emotional perception, but there are few validity studies with Brazilian samples. The test was answered by 1,440 participants, along with the Computerized Test of Primary Emotions Perception (PEP) and abstract (AR) and verbal reasoning (VR) tasks. RMET items were studied with the Rasch model. Results indicate that its items are concentrated at a lower level of difficulty, lacking difficult items to assess higher levels of emotional perception. Both the RMET and the PEP showed significant correlations with AR and VR, corroborating other studies showing that emotional perception is related to other types of intelligence. However, the correlation between the RMET and the PEP was lower than expected (r = .43), suggesting that perception of emotions in the eyes is only partially related to perception in the whole face.
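For illustration, a minimal sketch of the Rasch (one-parameter logistic) model used to study the RMET items: the probability of a correct response depends only on the gap between person ability (theta) and item difficulty (b), both on a logit scale.

```python
# Hedged sketch: Rasch item characteristic function; parameter values are illustrative.
import numpy as np

def rasch_p_correct(theta: float, b: float) -> float:
    """P(correct | theta, b) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Items clustered at low difficulty (e.g., b = -1.5) are passed by nearly all
# average- and high-ability respondents, which is why a test lacking hard items
# cannot separate people at the upper end of emotional perception.
for theta in (-1.0, 0.0, 1.5):
    print(theta, round(rasch_p_correct(theta, b=-1.5), 2))
```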
Subjects
Humans, Male, Female, Adolescent, Adult, Middle Aged, Facial Expression, Information Theory, Intelligence, Psychology
ABSTRACT
Pictures of facial expressions of emotion are used in a wide range of experiments. The last decade has seen an increase in the number of studies presenting local sets of emotion stimuli. However, only a few existing sets contain pictures of Latin Americans, despite the growing attention emotion research is receiving in this region. Here we present the development and validation of the Universidad Nacional de Cordoba, Expresiones de Emociones Faciales (UNCEEF), a Facial Action Coding System (FACS)-verified set of pictures of Argentineans expressing the six basic emotions, plus neutral expressions. FACS scores, recognition rates, Hu scores, and discrimination indices are reported. Evidence of convergent validity was obtained using the Pictures of Facial Affect in an Argentine sample. However, recognition accuracy was greater for UNCEEF. The importance of local sets of emotion pictures is discussed.
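The abstract reports Hu scores among the validation indices; a common formulation is Wagner's (1993) unbiased hit rate, which corrects raw hits for how often a response category is used. A minimal sketch under that assumption, for one emotion category:

```python
# Hedged sketch: unbiased hit rate Hu for a single emotion category.
def unbiased_hit_rate(hits: int, n_stimuli: int, n_times_label_used: int) -> float:
    """Hu = hits^2 / (stimuli presented for the emotion * times its label was chosen)."""
    return hits ** 2 / (n_stimuli * n_times_label_used)

# e.g., 18 correct out of 20 'fear' pictures, with 'fear' chosen 30 times overall
# (hypothetical counts):
print(unbiased_hit_rate(18, 20, 30))   # ~0.54 instead of a raw 0.90 hit rate
```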
Subjects
Factual Databases, Emotions, Facial Expression, Photic Stimulation/methods, Adolescent, Adult, Argentina, Female, Humans, Male, Recognition (Psychology), Young Adult
ABSTRACT
In the aging process, changes in perception and cognition may lead to losses in the recognition of facial emotions. In this study, we carried out a systematic review, following PRISMA guidelines, of recent studies that evaluated perception and facial emotion recognition in older adults without pathologies. The electronic databases searched were MEDLINE, PsycINFO, and Web of Science, and 22 articles published between 2009 and 2016 were selected. In general, we found that older adults show a decline in emotion recognition, mainly for negative emotions. These results can be explained both by structural theory and by the theory of socioemotional selectivity. The results have important implications in that they point to the relevance of cognitive assessment and of using more ecological stimuli in emotion recognition tasks with older adults.
ABSTRACT
This study was designed to investigate the relation between rating responses and patterns of cortical activation in an integration task using pairs of emotional faces. Participants judged on a graphic rating scale the overall affective intensity conveyed by two emotional faces, each presented to one of the two hemispheres via a Divided Visual Field (DVF) technique. While they performed the task, EEG was recorded from 6 scalp locations. Three discrete emotions were considered (joy, fear, and anger), each varied across three levels of expression intensity. Some face pairs portrayed the same emotion (same-emotion pairs), others two different emotions (distinct-emotions pairs). The patterns of integration of the two sources of information were examined both at the level of the ratings and at the level of the brain response (event-related α-desynchronization, ERD) recorded at each EEG lead. Adding-type rules were found for the ratings of both same-emotion and distinct-emotions pairs. Adding-type integration was also commonly found when α-ERD was taken as the response. Outcomes are discussed in relation to the lateralization of emotional processing and the relations between the observable response R (e.g., ratings) and possible implementational aspects of the implicit r posited by Information Integration Theory (IIT).
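For illustration, event-related desynchronization is conventionally computed, in the Pfurtscheller-style formulation, as the percentage change of band power during the event relative to a reference interval, with negative values indicating desynchronization; the abstract does not spell out its exact computation, so this is an assumption.

```python
# Hedged sketch: ERD% = (A - R) / R * 100, A = event power, R = reference power.
import numpy as np

def erd_percent(power_event: np.ndarray, power_reference: np.ndarray) -> float:
    a = float(np.mean(power_event))       # alpha power while judging the face pair
    r = float(np.mean(power_reference))   # alpha power in the pre-stimulus baseline
    return (a - r) / r * 100.0

# Illustrative band-power values (arbitrary units):
print(erd_percent(np.array([4.0, 4.5, 3.8]), np.array([6.0, 6.2, 5.9])))  # ~ -32%
```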
ABSTRACT
Faces and bodies are typically seen together in most social interactions, making it likely that facial and bodily expressions are perceived, and eventually processed, simultaneously. The methodology of Information Integration Theory and Functional Measurement was used here to address the following questions: Under what rules is facial and bodily information integrated in judgments over different dimensions of so-called basic and self-conscious emotions? How does the relative importance of face and body vary across emotions and judgment dimensions? Does the relative importance of face and body afford a basis for distinguishing between basic and self-conscious emotions? Three basic emotions (happiness, anger, sadness) and two social self-conscious emotions (shame and pride) were considered in this study. The manipulated factors were realistic 3-D facial expressions (varied across 5 levels of intensity) and synthetic, realistic 3-D body postures (3 levels of intensity). Different groups of participants judged the expressed intensity, valence, or arousal of the combined presentations of face and body, meaning that judgment dimension was varied between subjects. With the exception of arousal judgments, averaging was the predominant integration rule. The relative importance of face and body was found to vary as a function of judgment dimension, specific emotion, and, for judgments of arousal only, type of emotion (basic versus self-conscious).
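For illustration, a minimal sketch of the averaging rule from Information Integration Theory that the abstract reports as predominant: the judged intensity is a weighted average of face and body scale values (plus an optional initial-state term), and the relative importance of a source is its normalized weight. The weights and scale values below are illustrative, not estimates from the study.

```python
# Hedged sketch: averaging integration rule with illustrative parameters.
def averaging_rule(s_face, s_body, w_face=0.7, w_body=0.3, s0=0.0, w0=0.0):
    numerator = w0 * s0 + w_face * s_face + w_body * s_body
    return numerator / (w0 + w_face + w_body)

relative_importance_face = 0.7 / (0.7 + 0.3)    # normalized weight = 0.7
print(averaging_rule(s_face=8.0, s_body=4.0))   # -> 6.8, pulled toward the face value
```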
ABSTRACT
This research investigated the functioning of an emotion perception test in deaf people, as well as its collective administration. Participants included 13 deaf and 25 hearing people who responded to the test shown on a multimedia projector, and 65 hearing people who responded to the test individually on a computer. The mean age was 17.03 years (SD = 2.11). Results showed a performance loss in both groups that used the multimedia projector, although the deaf participants demonstrated a slightly higher, non-significant ability to perceive emotions. Deaf participants also displayed a lower ability to identify authentic versus faked emotions. We contend that these results appear to be related to the use of sign language. In addition, the deaf participants showed greater perceptual distortions, suggesting interest in socializing, concerns about autonomy, and aggressive thoughts, corroborating other studies. There was differential item functioning, which is discussed in relation to the differences found.
Subjects
Humans, Female, Adolescent, Perception, Sign Language, Persons With Hearing Impairments/psychology, Emotions, Facial Expression, Emotional Intelligence