Results 1 - 20 of 13,020
1.
Nat Commun ; 11(1): 4728, 2020 09 22.
Article in English | MEDLINE | ID: mdl-32963237

ABSTRACT

Social trust is linked to a host of positive societal outcomes, including improved economic performance, lower crime rates and more inclusive institutions. Yet the origins of trust remain elusive, partly because social trust is difficult to document over time. Building on recent advances in social cognition, we design an algorithm to automatically generate trustworthiness evaluations from the facial action units (smile, eyebrows, etc.) of European portraits in large historical databases. Our results show that trustworthiness in portraits increased over the period 1500-2000, paralleling the decline of interpersonal violence and the rise of democratic values observed in Western Europe. Further analyses suggest that this rise of trustworthiness displays is associated with increased living standards.
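The paper's trained model is not reproduced here, but the idea of scoring trustworthiness from facial action units can be sketched as a weighted combination of AU intensities. The AU labels and weights below are purely illustrative assumptions, not the authors' fitted parameters:

```python
# Hypothetical weights: positive for smile-related AUs, negative for brow
# lowering. Illustrative values only, not the paper's fitted parameters.
AU_WEIGHTS = {
    "AU06_cheek_raiser": 0.35,
    "AU12_lip_corner_puller": 0.45,   # smile
    "AU01_inner_brow_raiser": 0.10,
    "AU04_brow_lowerer": -0.40,       # frown
}

def trustworthiness_score(au_intensities):
    """Map facial action unit intensities (0-1) to a scalar trustworthiness score."""
    return sum(AU_WEIGHTS[au] * au_intensities.get(au, 0.0) for au in AU_WEIGHTS)

smiling = {"AU06_cheek_raiser": 0.8, "AU12_lip_corner_puller": 0.9}
frowning = {"AU04_brow_lowerer": 0.9}
print(trustworthiness_score(smiling) > trustworthiness_score(frowning))  # True
```

With such a scorer, applying it to dated portraits and regressing the scores on year would give the kind of time trend the abstract reports.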


Subjects
Cues (Psychology), Face/anatomy & histology, Facial Expression, Machine Learning, Algorithms, Europe, Female, Humans, Male, Paintings, Social Perception, Trust
2.
Rev. cient. odontol ; 8(2): e16-e16, May-Aug. 2020. ilus, tab
Article in Spanish | LILACS, LIPECS | ID: biblio-1118807

ABSTRACT



Aim: To analyze facial patterns and their relationship with the smile in users at the decentralized headquarters of the International Criminal Police Organization (INTERPOL) in Arequipa, Peru in 2018. Materials and methods: A sample of 72 individuals between 20 and 40 years old was selected, in whom the morphological facial index was obtained and the facial pattern was determined. Three posed-smile photographs were taken to analyze the smile components: buccal corridor, frontal occlusal plane, smile arc, lip line, curvature of the upper lip, symmetry of the smile, and dental and gingival components. The data were analyzed using Fisher's exact test or the chi-square test to establish associations among variables. Results: The euryprosopic facial pattern presents a low lip line (50%) and an average lip line (45.8%), while the mesoprosopic and leptoprosopic facial patterns present an average lip line (54.2% and 50.0%, respectively). The euryprosopic and mesoprosopic facial patterns have an unacceptable frontal occlusal plane (58.3% and 54.2%, respectively), and the leptoprosopic facial pattern has an acceptable frontal occlusal plane (58.3%). The remaining smile components are similar among the three facial patterns. Conclusions: The results establish that there is no association between facial patterns and smile components. In the absence of statistically significant values (p > 0.05), there is not enough evidence to conclude that the variables are associated. (AU)
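The association tests reported here can be illustrated with SciPy. The 2×2 table below is hypothetical, not the study's raw data; it simply shows how Fisher's exact test yields the odds ratio and p-value used to judge association between a facial pattern and a smile component:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 contingency table (not the study's raw data):
# rows = facial pattern (euryprosopic / leptoprosopic),
# cols = frontal occlusal plane (acceptable / not acceptable).
table = [[10, 14],
         [14, 10]]

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
if p_value > 0.05:
    print("no significant association at the 5% level")
```

A p-value above 0.05, as in the study's results, means the null hypothesis of no association cannot be rejected.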


Subjects
Humans, Male, Female, Adult, Young Adult, Smiling, Facial Expression
3.
Crim Behav Ment Health ; 30(5): 228-239, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32744391

ABSTRACT

BACKGROUND: The link between facial affect recognition and criminal justice involvement has been extensively researched, yet there are virtually no data on the capacity for facial affect recognition in post-incarcerated individuals, and the results of many studies are limited by a narrow focus on psychopathy rather than offence category. AIMS: To test the first hypothesis that individuals reporting a history of a violent offence would show a deficit in facial affect recognition, and the second hypothesis that the violent offenders' deficit would be exclusive to recognition of negative expressions, not affecting positive or neutral expressions. METHOD: Post-incarcerated individuals (N = 298) were recruited online through Qualtrics and completed questionnaires assessing their criminal justice background and demographics. They completed measures of facial affect recognition, anxiety and depression, and components of aggression. RESULTS: A logistic regression including sex, ethnicity, age, years of education and depression/anxiety scores indicated that committing a violent offence was independently associated with lower facial affect recognition scores, male gender, and a trait-based propensity towards physical aggression, but with no other covariate. These data provided no evidence that this deficit was specific to negative emotions. CONCLUSIONS AND IMPLICATIONS FOR FUTURE RESEARCH/PRACTICE: Our study is one of the first to examine facial affect recognition in a post-incarcerated sample. It suggests that deficits in facial affect recognition, already well documented among violent prisoners, persist after release. While acknowledging that these may be relatively fixed characteristics, this study also suggests that, for these people, nothing that happened during their imprisonment altered them. Improving capacity in facial affect recognition should be considered as a target of intervention for violent offenders, developing or revising in-prison programmes as required.
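A logistic regression of this kind — a binary offence-history outcome regressed on recognition scores plus covariates — can be sketched on synthetic data. Every number below is simulated for illustration, not the study's data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 300

# Synthetic illustration: lower facial affect recognition (FAR) scores
# raise the odds of a violent-offence history; age is a weak covariate.
recognition = rng.normal(0, 1, n)          # standardized FAR score
age = rng.normal(0, 1, n)                  # standardized covariate
logit = -1.2 * recognition + 0.1 * age     # assumed true effects
p = 1 / (1 + np.exp(-logit))
violent = rng.binomial(1, p)               # 1 = violent-offence history

X = np.column_stack([recognition, age])
model = LogisticRegression().fit(X, violent)
print("coefficient for recognition:", model.coef_[0][0])  # expected negative
```

A negative fitted coefficient for the recognition score mirrors the reported independent association between violent offending and lower facial affect recognition.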


Subjects
Aggression/psychology, Criminals/psychology, Emotions/physiology, Facial Expression, Facial Recognition/physiology, Prisoners/psychology, Violence/psychology, Adolescent, Adult, Antisocial Personality Disorder/psychology, Anxiety/psychology, Face, Female, Humans, Male, Middle Aged, Social Perception, Volunteers
4.
PLoS One ; 15(8): e0236953, 2020.
Article in English | MEDLINE | ID: mdl-32764830

ABSTRACT

Collective emotion is the synchronous convergence of an affective response across individuals toward a specific event or object. Previous studies have focused on the transmission of cyber collective emotion; however, little attention has been paid to the transmission of collective emotion in face-to-face interactions. Using an experimental design, we examined how emotions are transmitted from some members to the whole group in face-to-face situations. We used a news report of a social event as an emotion stimulus to induce anger and disgust in 158 middle school students aged 12 to 15, with an average age of 13.20 years (SD = 0.651). We randomly assigned one-third of the participants to be "transmitters," while the others were "receivers." Transmitters shared their feelings with receivers; then, receivers communicated with other group members. The results indicated that negative collective emotions were transmitted from high- to low-intensity members and converged through the effect of emotional contagion. Emotion accumulated through the effect of an emotional circle, during which feedback reinforced emotion intensity. The collective emotion transmission model comprised emotion diffusion, contagion, and accumulation. This model elucidates the intrinsic features of collective emotion transmission, enriches the research on collective emotion, and provides theoretical references for monitoring and managing future public events.
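The described process — contagion pulling members toward the group mean while feedback reinforces overall intensity — can be caricatured in a few lines. The update rule and parameters below are illustrative assumptions, not fitted to the study:

```python
# Toy model of the described transmission: contagion pulls each member's
# intensity toward the group mean; feedback reinforces overall intensity.
# Parameters are illustrative, not fitted to the study.
def step(intensities, contagion=0.3, feedback=0.05):
    mean = sum(intensities) / len(intensities)
    return [min(1.0, i + contagion * (mean - i) + feedback * mean)
            for i in intensities]

group = [0.9, 0.9, 0.1, 0.1, 0.1, 0.1]  # one-third high-intensity transmitters
for _ in range(10):
    group = step(group)

spread = max(group) - min(group)
print(f"spread after 10 rounds: {spread:.3f}")  # intensities have converged
```

Convergence of the spread corresponds to the contagion effect, while the slow upward drift of the mean corresponds to accumulation through feedback.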


Subjects
Emotions, Interpersonal Relations, Models, Psychological, Adolescent, Child, China, Communications Media, Facial Expression, Female, Humans, Male
6.
Medicine (Baltimore) ; 99(29): e21154, 2020 Jul 17.
Article in English | MEDLINE | ID: mdl-32702870

ABSTRACT

BACKGROUND: Traumatic brain injury (TBI) refers to head injuries that disrupt normal function of the brain. TBI commonly leads to a wide range of potential psychosocial functional deficits. Although psychosocial function after TBI is influenced by many factors, growing evidence shows that social cognitive skills are critical contributors. Facial emotion recognition, one of the higher-level skills of social cognition, is the ability to perceive and recognize the emotional states of others based on their facial expressions. Numerous studies have assessed facial emotion recognition performance in adult patients with TBI; however, findings have been inconsistent. The aim of this study is to conduct a meta-analysis to characterize facial emotion recognition in adult patients with TBI. METHODS: A systematic literature search will be performed for eligible studies published up to March 19, 2020 in three international databases (PubMed, Web of Science and Embase). Article retrieval, screening, quality evaluation and data collection will be conducted by two independent researchers. The meta-analysis will be conducted using Stata 15.0 software. RESULTS: This meta-analysis will provide a high-quality synthesis of existing evidence on facial emotion recognition in adult patients with TBI, and will analyze facial emotion recognition performance in different respects (i.e., recognition of negative emotions, positive emotions or any specific basic emotion). CONCLUSIONS: This meta-analysis will provide evidence on facial emotion recognition performance in adult patients with TBI. INPLASY REGISTRATION NUMBER: INPLASY202050109.


Subjects
Brain Injuries, Traumatic/psychology, Clinical Protocols, Emotions/classification, Facial Recognition, Adult, Brain Injuries, Traumatic/classification, Facial Expression, Humans, Meta-Analysis as Topic, Systematic Reviews as Topic
7.
Exp Psychol ; 67(2): 140-149, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32729401

ABSTRACT

Psychosocial stress has been shown to alter social perception and behavior. In the present study, we investigated whether a standardized psychosocial stressor modulates the perceptual sensitivity for positive and negative facial emotions and the tendency to allocate attention to facial expressions. Fifty-four male participants underwent the Trier Social Stress Test for Groups (TSST-G) or a nonstressful control condition before they performed a facial emotions detection task and a facial dot-probe task to assess attention for positive and negative facial expressions. Saliva samples were collected over the course of the experiment to measure free cortisol and alpha amylase. In response to the TSST-G, participants showed marked increases in subjective stress, salivary cortisol, and alpha amylase compared to the control condition. In the control condition, detection performance was higher for angry compared to happy facial expressions, while in the stressful condition this difference was reversed. Here, participants were more sensitive to happy compared to angry facial expressions. Attention was unaffected by psychosocial stress. The results suggest that psychosocial stress shifts social perception in terms of detection sensitivity for facial expressions toward positive social cues, a pattern that is consistent with the tendency to seek social support for coping with stress.


Subjects
Emotions/physiology, Facial Expression, Adolescent, Adult, Humans, Male, Middle Aged, Stress, Psychological/psychology, Young Adult
8.
PLoS One ; 15(7): e0235390, 2020.
Article in English | MEDLINE | ID: mdl-32609780

ABSTRACT

Whether language information influences recognition of emotion from facial expressions remains the subject of debate. The current studies investigate how variations in emotion labels that are paired with expressions influences participants' judgments of the emotion displayed. Static (Study 1) and dynamic (Study 2) facial expressions depicting eight emotion categories were paired with emotion labels that systematically varied in arousal (low and high). Participants rated the arousal, valence, and dominance of expressions paired with labels. Isolated faces and isolated labels were also rated. As predicted, the label presented influenced participants' judgments of the expressions. Across both studies, higher arousal labels were associated with: 1) higher ratings of arousal for sad, angry, and scared expressions, and 2) higher ratings of dominance for angry, proud, and disgust expressions. These results indicate that emotion labels influence judgments of facial expressions.


Subjects
Arousal, Facial Expression, Judgment, Pattern Recognition, Visual, Anger, Adult, Attentional Bias, Female, Humans, Male, Middle Aged, Young Adult
9.
PLoS One ; 15(7): e0234104, 2020.
Article in English | MEDLINE | ID: mdl-32609778

ABSTRACT

Advances in computer and communications technology have deeply affected the way we communicate. Social media have emerged as a major means of human communication. However, a major limitation of such media is the lack of non-verbal stimuli, which sometimes hinders understanding of the message, and in particular its associated emotional content. In an effort to compensate for this, people started to use emoticons, combinations of keyboard characters that resemble facial expressions, and more recently their evolution: emojis, small colorful images that resemble faces, actions and daily-life objects. This paper presents evidence of the effect of emojis on memory retrieval through a functional Magnetic Resonance Imaging (fMRI) study. A total of fifteen healthy volunteers were recruited for the experiment, during which successive stimuli were presented, containing words with intense emotional content combined with emojis of either congruent or incongruent emotional content. Volunteers were asked to recall a memory related to the stimulus. Analysis of the reaction times showed that emotional incongruity in word+emoji combinations led to longer reaction times in memory retrieval compared to congruent combinations. General Linear Model (GLM) and Blind Source Separation (BSS) methods were tested for assessing the influence of the emojis on the process of memory retrieval. Analysis of the fMRI data showed that emotional incongruity in word+emoji combinations activated Broca's area (BA44 and BA45) in both hemispheres, the Supplementary Motor Area (SMA) and the inferior prefrontal cortex (BA47), compared to congruent combinations. Furthermore, compared to pseudowords, word+emoji combinations activated the left Broca's area (BA44 and BA45), the amygdala, the right temporal pole (BA48) and several frontal regions including the SMA and the inferior prefrontal cortex.
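The GLM analysis mentioned here boils down to ordinary least squares on a design matrix of condition regressors per voxel. A toy version on a simulated voxel time series — the block layout, effect sizes and noise level are assumptions, and haemodynamic-response convolution is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 120  # number of scans

# Toy block design (illustrative, not the study's actual regressors):
# each 40-scan cycle = 10 incongruent, 10 rest, 10 congruent, 10 rest.
incongruent = np.tile([1.0] * 10 + [0.0] * 30, 3)
congruent = np.tile([0.0] * 20 + [1.0] * 10 + [0.0] * 10, 3)
X = np.column_stack([incongruent, congruent, np.ones(T)])  # + intercept

# Simulated voxel time series: stronger response to incongruent stimuli.
y = 2.0 * incongruent + 0.5 * congruent + rng.normal(0, 0.3, T)

# GLM: ordinary least squares, beta = argmin ||y - X beta||^2
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("beta (incongruent, congruent, intercept):", np.round(beta, 2))
```

In a real analysis, a contrast such as `beta[0] - beta[1]` would be tested per voxel to produce the activation maps the abstract describes.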


Subjects
Memory, Episodic, Mental Recall/physiology, Symbolism, Adult, Brain/physiology, Brain Mapping/methods, Communication, Comprehension, Emotions, Facial Expression, Female, Healthy Volunteers, Humans, Magnetic Resonance Imaging/methods, Male, Memory/physiology, Motor Cortex/physiology, Nonverbal Communication/psychology, Prefrontal Cortex/physiology, Reading, Temporal Lobe/physiology, Writing, Young Adult
11.
PLoS One ; 15(7): e0235908, 2020.
Article in English | MEDLINE | ID: mdl-32673325

ABSTRACT

This work presents the design and analysis of an Adaptive User Interface (AUI) for a desktop application that uses a novel solution for recognizing the emotional state of a user through both facial expressions and body posture from an RGB-D sensor. Six basic emotions are recognized through facial expressions, in addition to the physiological state, which is recognized through body posture. The facial expressions and body posture are acquired in real time from a Kinect sensor. A scoring system is used to improve recognition by minimizing confusion between the different emotions. The implemented solution achieves an accuracy rate above 90%. The recognized emotion is then used to drive an Automatic AUI in which the user can issue speech commands to modify the User Interface (UI) automatically. A comprehensive user study is performed to compare the usability of Automatic, Manual, and Hybrid AUIs. The AUIs are evaluated in terms of efficiency, effectiveness, productivity, and error safety. Additionally, a comprehensive analysis is performed to evaluate the results across genders and age groups. Results show that the hybrid adaptation improves usability in terms of productivity and efficiency. Finally, a combination of the automatic and hybrid AUIs results in a significantly more positive user experience compared to the manual adaptation.
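The paper's scoring system is not specified in detail here; one common reading is score-level fusion of the facial and posture classifiers, with the highest fused score winning. The emotion set, weights, and scores below are hypothetical:

```python
# Hypothetical score-level fusion of face and posture classifier outputs
# (the paper's actual scoring rules are not specified in the abstract).
EMOTIONS = ["happy", "sad", "angry", "surprised", "disgusted", "fearful"]

def fuse_scores(face_scores, posture_scores, w_face=0.7):
    """Weighted sum of per-emotion scores; the argmax is the final label."""
    fused = {e: w_face * face_scores[e] + (1 - w_face) * posture_scores[e]
             for e in EMOTIONS}
    return max(fused, key=fused.get)

face = {"happy": 0.6, "sad": 0.1, "angry": 0.5, "surprised": 0.2,
        "disgusted": 0.1, "fearful": 0.1}
posture = {"happy": 0.2, "sad": 0.1, "angry": 0.9, "surprised": 0.3,
           "disgusted": 0.2, "fearful": 0.2}
print(fuse_scores(face, posture))  # posture evidence tips the result to "angry"
```

Combining two weak, partially disagreeing classifiers this way is one standard route to the confusion reduction the abstract credits to its scoring system.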


Subjects
Emotions, Facial Expression, Image Processing, Computer-Assisted/methods, User-Computer Interface, Adolescent, Adult, Aged, Algorithms, Female, Humans, Male, Middle Aged, Posture, Surveys and Questionnaires, Young Adult
12.
PLoS One ; 15(7): e0235545, 2020.
Article in English | MEDLINE | ID: mdl-32645045

ABSTRACT

The automatic detection of facial expressions of pain is needed to ensure accurate pain assessment of patients who are unable to self-report pain. To overcome the challenges of automatic systems for determining pain levels based on facial expressions in clinical patient monitoring, a surface electromyography method was tested for feasibility in healthy volunteers. In the current study, two types of experimental, gradually increasing pain stimuli were induced in thirty-one healthy volunteers. We used surface electromyography to measure the activity of five facial muscles to detect facial expressions during pain induction. Statistical tests were used to analyze the continuous electromyography data, and supervised machine learning was applied to build a pain intensity prediction model. Muscle activation of the corrugator supercilii was most strongly associated with self-reported pain, and the levator labii superioris and orbicularis oculi showed a statistically significant increase in muscle activation when the pain stimulus reached subjects' self-reported pain thresholds. The two features most strongly associated with pain, the waveform length of the corrugator supercilii and of the levator labii superioris, were selected for a prediction model. The pain prediction model achieved a c-index of 0.64. The most detectable difference in muscle activity during the pain experience was connected to eyebrow lowering, nose wrinkling and upper-lip raising. As the performance of the prediction model remains modest, yet with a statistically significant ordinal classification, we suggest testing with a larger sample size to further explore the variables that affect variation in expressiveness and subjective pain experience.
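The waveform-length feature named here is a standard time-domain EMG feature: the cumulative absolute first difference of the signal, which grows with both amplitude and frequency of muscle activity. A minimal sketch on synthetic signals (the test signals are illustrative, not the study's recordings):

```python
import numpy as np

def waveform_length(emg):
    """Waveform length: cumulative absolute first difference of the signal,
    a standard time-domain EMG feature."""
    emg = np.asarray(emg, dtype=float)
    return float(np.sum(np.abs(np.diff(emg))))

# Synthetic stand-ins: low-amplitude slow activity vs. high-amplitude bursts.
quiet = 0.1 * np.sin(np.linspace(0, 2 * np.pi, 200))
active = 1.0 * np.sin(np.linspace(0, 40 * np.pi, 200))

print(waveform_length(quiet) < waveform_length(active))  # True
```

Feeding such per-window features from the corrugator supercilii and levator labii superioris into an ordinal classifier is the kind of pipeline the abstract's prediction model describes.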


Subjects
Electromyography/methods, Facial Expression, Pain Measurement/methods, Adult, Facial Muscles/physiology, Female, Humans, Male, Pain Threshold
13.
PLoS One ; 15(6): e0233731, 2020.
Article in English | MEDLINE | ID: mdl-32484837

ABSTRACT

Facial expressions in sign languages are used to express grammatical functions, such as question marking, but can also be used to express emotions (either the signer's own or in constructed action contexts). Emotions and grammatical functions can utilize the same articulators, and the combinations can be congruent or incongruent. For instance, surprise and polar questions can be marked by raised eyebrows, while anger is usually marked by lowered eyebrows. We investigated what happens when different emotions (neutral/surprise/anger) are combined with different sentence types (statement/polar question/wh-question) in Kazakh-Russian Sign Language (KRSL), replicating studies previously made for other sign languages. We asked 9 native signers (5 deaf, 4 hearing children of deaf adults) to sign 10 simple sentences in 9 conditions (3 emotions * 3 sentence types). We used OpenPose software to track eyebrow position in the video recordings. We found that emotions and sentence types influence eyebrow position in KRSL: eyebrows are raised for polar questions and surprise, and lowered for anger. There are also some interactions between the two factors, as well as some differences between hearing and deaf native signers, namely a smaller effect of polar questions for the deaf group, and a different interaction between emotions and wh-question marking in the two groups. We thus find evidence for the complex influences on non-manual behavior in signers of sign languages, and showcase a quantitative approach to this field.
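Eyebrow position from tracked keypoints reduces to a per-frame vertical brow-to-eye distance. The keypoint coordinates below are hypothetical (OpenPose's actual face-keypoint layout differs); only the geometry is the point:

```python
import numpy as np

def eyebrow_raise(brow_pts, eye_pts):
    """Mean vertical distance between eyebrow and eye keypoints.
    Image y grows downward, so larger values = brows raised higher."""
    brow_y = np.mean([p[1] for p in brow_pts])
    eye_y = np.mean([p[1] for p in eye_pts])
    return float(eye_y - brow_y)

# Hypothetical (x, y) keypoints for two frames of the same signer.
neutral = eyebrow_raise([(10, 50), (20, 48)], [(10, 60), (20, 61)])
surprised = eyebrow_raise([(10, 40), (20, 39)], [(10, 60), (20, 61)])
print(surprised > neutral)  # raised brows for surprise / polar question
```

Aggregating this distance over frames, per condition, gives the kind of eyebrow-position measure the study analyzes across emotions and sentence types.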


Subjects
Emotions, Facial Expression, Sign Language, Adult, Eyebrows/physiology, Female, Humans, Kazakhstan, Male
14.
PLoS One ; 15(6): e0234513, 2020.
Article in English | MEDLINE | ID: mdl-32525966

ABSTRACT

Fearful facial expressions tend to be more salient than other expressions. This threat bias is to some extent driven by simple low-level image properties, rather than the high-level emotion interpretation of stimuli. It might be expected therefore that different expressions will, on average, have different physical contrasts. However, studies tend to normalise stimuli for RMS contrast, potentially removing a naturally-occurring difference in salience. We assessed whether images of faces differ in both physical and apparent contrast across expressions. We measured physical RMS contrast and the Fourier amplitude spectra of 5 emotional expressions prior to contrast normalisation. We also measured expression-related differences in perceived contrast. Fear expressions have a steeper Fourier amplitude slope compared to neutral and angry expressions, and consistently significantly lower contrast compared to other faces. This effect is more pronounced at higher spatial frequencies. With the exception of stimuli containing only low spatial frequencies, fear expressions appeared higher in contrast than a physically matched reference. These findings suggest that contrast normalisation artificially boosts the perceived salience of fear expressions; an effect that may account for perceptual biases observed for spatially filtered fear expressions.
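Both measurements named here — RMS contrast and the slope of the Fourier amplitude spectrum — are short NumPy computations. A sketch on a white-noise stand-in for a face image (for white noise the rotationally averaged spectrum is roughly flat, so the fitted slope should be near zero; real face images give steeper negative slopes):

```python
import numpy as np

def rms_contrast(img):
    """RMS contrast: standard deviation of pixel intensities."""
    return float(np.asarray(img, dtype=float).std())

def amplitude_slope(img):
    """Slope of log amplitude vs. log spatial frequency (rotational average)."""
    amp = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2).astype(int)   # radius per pixel
    # Mean amplitude at each integer radius (sum of amplitudes / bin count).
    radial = np.bincount(r.ravel(), amp.ravel()) / np.bincount(r.ravel())
    freqs = np.arange(1, min(h, w) // 2)               # skip the DC term
    slope, _ = np.polyfit(np.log(freqs), np.log(radial[freqs]), 1)
    return float(slope)

img = np.random.default_rng(2).normal(0.5, 0.2, (64, 64))  # white-noise "image"
print(f"RMS contrast ~ {rms_contrast(img):.2f}")
print(f"amplitude slope ~ {amplitude_slope(img):.2f}")
```

Comparing these two quantities across expression categories before contrast normalisation is essentially the study's physical-image analysis.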


Subjects
Facial Expression, Fear, Image Processing, Computer-Assisted/methods, Pattern Recognition, Visual/physiology, Photic Stimulation/methods, Adolescent, Adult, Attention/physiology, Female, Humans, Male, Young Adult
15.
PLoS One ; 15(6): e0233892, 2020.
Article in English | MEDLINE | ID: mdl-32484842

ABSTRACT

The development of large-scale corpora has led to a quantum leap in our understanding of speech in recent years. By contrast, the analysis of massive datasets has so far had a limited impact on the study of gesture and other visual communicative behaviors. We utilized the UCLA-Red Hen Lab multi-billion-word repository of video recordings, all of them showing communicative behavior that was not elicited in a lab, to quantify speech-gesture co-occurrence frequency for a subset of linguistic expressions in American English. First, we objectively establish a systematic relationship in the high degree of co-occurrence between gesture and speech in our subset of expressions, which consists of temporal phrases. Second, we show that there is a systematic alignment between the informativity of co-speech gestures and that of the verbal expressions with which they co-occur. By exposing deep, systematic relations between the modalities of gesture and speech, our results pave the way for the data-driven integration of multimodal behavior into our understanding of human communication.
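Co-occurrence frequency of this kind reduces to counting, per expression, the share of annotated clips that also carry a gesture. The records and field names below are hypothetical; Red Hen's actual annotation schema differs:

```python
from collections import Counter

# Hypothetical clip annotations (phrase occurrence + gesture flag).
clips = [
    {"phrase": "from beginning to end", "gesture": True},
    {"phrase": "from beginning to end", "gesture": True},
    {"phrase": "from beginning to end", "gesture": False},
    {"phrase": "in the end", "gesture": True},
    {"phrase": "in the end", "gesture": False},
]

total = Counter(c["phrase"] for c in clips)
with_gesture = Counter(c["phrase"] for c in clips if c["gesture"])

for phrase in total:
    rate = with_gesture[phrase] / total[phrase]
    print(f"{phrase!r}: gesture co-occurrence {rate:.0%}")
```

Scaled to a multi-billion-word corpus, these per-expression rates are what support the systematic speech-gesture relationship the abstract reports.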


Subjects
Language, Linguistics, Speech Perception/physiology, Speech/physiology, Communication, Facial Expression, Gestures, Humans, Semantics, Signal Processing, Computer-Assisted, Video Recording
17.
Braz Oral Res ; 34: e043, 2020 May 08.
Article in English | MEDLINE | ID: mdl-32401933

ABSTRACT

The aim of the present study was to compare the sensitivity and specificity of pain scales used to assess dentin hypersensitivity (DH). The preferred scale and toothbrushing habits of participants were also investigated. This cross-sectional study was conducted with students and employees of a Brazilian federal university who presented with DH. The participants answered a questionnaire about their toothbrushing and drinking habits. Hypersensitive and non-sensitive teeth were submitted to tactile and ice-stick stimuli. Then, the subjects marked their pain level on the visual analogue scale (VAS), numeric scale (NS), faces pain scale (FPS) and verbal evaluation scale (VES). DH was also assessed by the Schiff scale (SS). The data were analyzed by the Wilcoxon and chi-square tests, as well as by ROC curve. The mean age of the sample (56 women, 16 men) was 27.8 years. The most prevalent acidic beverage was coffee (36.0%) and the most preferred scale was the NS (47.2%). The pain level was statistically higher in teeth with DH compared to teeth without DH (p < 0.05). Accuracy ranged from 0.729 (SS) to 0.750 (NS). The highest sensitivity value was 81.9%, for the NS. The SS presented the highest specificity (91%). The visual analogue, numeric, verbal evaluation, faces pain, and Schiff scales were accurate for DH diagnosis. The Schiff scale was the preferred scale for DH assessment.
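Sensitivity and specificity as reported here come directly from a 2×2 confusion table at a given scale cutoff. The counts below are hypothetical, chosen only so the sensitivity lands near the reported 81.9%:

```python
# Illustrative sensitivity/specificity computation for a pain-scale cutoff
# (hypothetical counts, not the study's raw data).
def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # true-positive rate on hypersensitive teeth
    specificity = tn / (tn + fp)   # true-negative rate on non-sensitive teeth
    return sensitivity, specificity

# e.g. numeric scale at some cutoff: 59 of 72 hypersensitive teeth flagged,
# 65 of 72 non-sensitive teeth correctly negative.
sens, spec = sens_spec(tp=59, fn=13, tn=65, fp=7)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```

Sweeping the cutoff over the full scale range and plotting sensitivity against 1 − specificity yields the ROC curve whose area gives the reported accuracy values (0.729–0.750).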


Subjects
Dentin Sensitivity/diagnosis, Pain Measurement/methods, Adolescent, Adult, Beverages/adverse effects, Cross-Sectional Studies, Facial Expression, Female, Humans, Male, Reproducibility of Results, Sensitivity and Specificity, Severity of Illness Index, Statistics, Nonparametric, Surveys and Questionnaires, Toothbrushing/adverse effects, Young Adult
18.
Proc Biol Sci ; 287(1927): 20192941, 2020 05 27.
Article in English | MEDLINE | ID: mdl-32396799

ABSTRACT

Mimicry, and especially spontaneous facial mimicry, is a rudimentary element of social-emotional experience that is well-conserved across numerous species. Although such mimicry is thought to be a relatively automatic process, research indicates that contextual factors can influence mimicry, especially in humans. Here, we extend this work by investigating the effect of acute psychosocial stress on spontaneous facial mimicry. Participants performed a spontaneous facial mimicry task with facial electromyography (fEMG) at baseline and approximately one month later, following an acute psychosocial stressor (Trier Social Stress Test). Results show that the magnitude of the endocrine stress response reduced zygomaticus major reactivity, and specifically spontaneous facial mimicry for positive social stimuli (i.e. smiles). Individuals with higher levels of the stress hormone cortisol showed a more blunted fEMG response to smiles, but not to frowns. Conversely, stress had no effect on corrugator supercilii activation (i.e. frowning to frowns). These findings highlight the importance of the biological stress response system in this basic element of social-emotional experience.


Subjects
Hydrocortisone/blood, Imitative Behavior/physiology, Stress, Psychological/blood, Emotions, Facial Expression, Facial Muscles, Humans