Results 1 - 20 of 13,870
1.
Sensors (Basel) ; 21(17)2021 Sep 02.
Article in English | MEDLINE | ID: mdl-34502809

ABSTRACT

Face and person detection are important tasks in computer vision, as they form the first component of many recognition systems, such as face recognition, facial expression analysis, body pose estimation, face attribute detection, and human action recognition. Their detection rate and runtime are therefore crucial for the performance of the overall system. In this paper, we combine face and person detection in one framework, with the goal of reaching a detection performance competitive with the state of the art of lightweight object-specific networks while maintaining real-time processing speed for both detection tasks together. To combine face and person detection in one network, we applied multi-task learning. The difficulty lies in the fact that no datasets are available that contain both face and person annotations. Since we did not have the resources to annotate the datasets manually, which is very time-consuming, and automatic generation of ground truths yields annotations of poor quality, we solve this issue algorithmically by applying a special training procedure and network architecture without the need to create new labels. Our newly developed method, called Simultaneous Face and Person Detection (SFPD), detects persons and faces at 40 frames per second. Because of this good trade-off between detection performance and inference time, SFPD represents a useful and valuable real-time framework, especially for real-world applications such as human-robot interaction.


Subjects
Facial Recognition, Robotics, Facial Expression, Humans, Image Processing (Computer-Assisted)
2.
Sensors (Basel) ; 21(17)2021 Sep 06.
Article in English | MEDLINE | ID: mdl-34502877

ABSTRACT

With the prevalence of virtual avatars and the recent emergence of metaverse technology, more users are expressing their identity through an avatar. The research community has focused on improving the realistic expressions and non-verbal communication channels of virtual characters to create a more customized experience. However, there is a lack of understanding of how avatars can embody a user's signature expressions (i.e., the user's habitual facial expressions and facial appearance) in a way that would provide an individualized experience. Our study focused on identifying elements that may affect the user's social perception (similarity, familiarity, attraction, liking, and involvement) of customized virtual avatars engineered considering the user's facial characteristics. We evaluated participants' subjective appraisals of avatars that embodied their habitual facial expressions or facial appearance. Results indicated that participants felt the avatar that embodied their habitual expressions was more similar to them than the avatar that did not. Furthermore, participants felt that the avatar that embodied their appearance was more familiar than the avatar that did not. Designers should be mindful of how people perceive individuated virtual avatars in order to accurately represent the user's identity and help users relate to their avatar.


Subjects
Facial Expression, User-Computer Interface, Emotions, Humans, Social Perception
3.
Handb Clin Neurol ; 183: 99-108, 2021.
Article in English | MEDLINE | ID: mdl-34389127

ABSTRACT

One of the most important means of communicating emotions is by facial expressions. About 30-40 years ago, several studies examined patients with right and left hemisphere strokes for deficits in expressing and comprehending emotional facial expressions. The participants with right- or left-hemispheric strokes attempted to determine if two different actors were displaying the same or different emotions, to name the different emotions being displayed, and to select the face displaying an emotion named by the examiner. Investigators found that the right hemisphere-damaged group was impaired on all these emotional facial tests and that this deficit was not solely related to visuoperceptual processing defects. Further studies revealed that the patients who were impaired at recognizing emotional facial expressions and who had lost these visual representations of emotional faces often had damage to their right parietal lobe and their right somatosensory cortex. Injury to the cerebellum has been reported to impair emotional facial recognition, as have dementing diseases such as Alzheimer's disease and frontotemporal dementia, movement disorders such as Parkinson's disease and Huntington's disease, traumatic brain injuries, and temporal lobe epilepsy. Patients with right hemisphere injury are also more impaired than left-hemisphere-damaged patients when attempting to voluntarily produce facial emotional expressions and in their spontaneous expression of emotions in response to stimuli. This impairment does not appear to be induced by emotional conceptual deficits or an inability to experience emotions. Many of the disorders that cause impairments of comprehension of affective facial expressions also impair facial emotional expression. Treating the underlying disease may help patients with impairments of facial emotion recognition and expression, but unfortunately, there have not been many studies of rehabilitation.


Subjects
Frontotemporal Dementia, Huntington Disease, Comprehension, Emotions, Facial Expression, Functional Laterality, Humans, Neuropsychological Tests
4.
J Affect Disord ; 293: 320-328, 2021 10 01.
Article in English | MEDLINE | ID: mdl-34229285

ABSTRACT

BACKGROUND: Major depressive disorder (MDD) has been associated with difficulties in social and interpersonal functioning. Deficits in emotion processing may contribute to the development and maintenance of interpersonal difficulties in MDD. Although some studies have found that MDD is associated with deficits in recognition of emotion in faces, other studies have failed to find any impairment. METHODS: The present meta-analysis of 23 studies, with 516 dysthymic/depressed participants and 614 euthymic control participants, examined facial emotion recognition accuracy in MDD. Several potential moderators were investigated, including type of emotion, symptom severity, patient status, method of diagnosis, type of stimulus, and stimulus duration. RESULTS: Results showed that participants with MDD in inpatient settings (Hedges' g = -0.35) and with severe levels of symptom severity (g = -0.42) were less accurate in recognizing happy facial expressions of emotion (g = -0.25) compared to participants in outpatient settings (g = -0.24) and with mild symptoms of depression (g = -0.17). Studies that presented stimuli for longer durations (g = -0.26) tended to find lower accuracy levels in dysthymic/depressed, relative to euthymic, participants. LIMITATIONS: Limitations include a lack of studies which examined gender identity, as well as other potential moderators. CONCLUSIONS: Results of the current study support the existence of a broad facial emotion recognition deficit in individuals suffering from unipolar depression. Clinicians should be mindful of this and other research which suggests broad-based deficits in various forms of information processing, including attention, perception, and memory in depression.
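The effect sizes above are reported as Hedges' g, a standardized mean difference with a small-sample bias correction. As a hedged illustration (the formula is the standard one; the group means and SDs below are hypothetical, not data from this meta-analysis), g can be computed as:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: Cohen's d with a small-sample bias correction."""
    # Pooled standard deviation across the two groups
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled          # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)   # Hedges' correction factor J
    return j * d

# Hypothetical recognition-accuracy means/SDs for patient vs. control groups:
g = hedges_g(m1=70.0, sd1=10.0, n1=30, m2=75.0, sd2=10.0, n2=30)
```

Negative values, as in the abstract, indicate lower accuracy in the dysthymic/depressed group relative to the euthymic group.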


Subjects
Major Depressive Disorder, Facial Recognition, Emotions, Facial Expression, Female, Gender Identity, Humans, Male
5.
Depress Anxiety ; 38(8): 846-859, 2021 08.
Article in English | MEDLINE | ID: mdl-34224655

ABSTRACT

BACKGROUND: Patients with specific phobia (SP) show altered brain activation when confronted with phobia-specific stimuli. It is unclear whether this pathogenic activation pattern generalizes to other emotional stimuli. This study addresses that question in a well-powered sample using an established paradigm with nonspecific aversive facial stimuli. METHODS: N = 111 patients with SP, spider subtype, and N = 111 healthy controls (HCs) performed a supraliminal emotional face-matching paradigm contrasting aversive faces versus shapes in a 3-T magnetic resonance imaging scanner. We performed region of interest (ROI) analyses for the amygdala, the insula, and the anterior cingulate cortex using univariate as well as machine-learning-based multivariate statistics on these data. Additionally, we investigated functional connectivity by means of psychophysiological interaction (PPI). RESULTS: Although the presentation of emotional faces elicited significant activation in all three ROIs across both groups, no group differences emerged in any ROI. Across both groups and in the HC > SP contrast, PPI analyses showed significant task-related connectivity of the amygdala with brain areas typically linked to higher-order emotion processing. The machine-learning approach based on whole-brain activity patterns differentiated the groups with 73% balanced accuracy. CONCLUSIONS: Patients suffering from SP are characterized by differences in the connectivity of the amygdala with areas typically linked to emotional processing in response to aversive facial stimuli (inferior parietal cortex, fusiform gyrus, middle cingulate, postcentral cortex, and insula). This may indicate a subtle difference in the processing of nonspecific emotional stimuli and warrants more research to further our understanding of neurofunctional alterations in patients with SP.


Subjects
Magnetic Resonance Imaging, Phobic Disorders, Amygdala/diagnostic imaging, Brain/diagnostic imaging, Brain Mapping, Emotions, Facial Expression, Gyrus Cinguli/diagnostic imaging, Humans, Phobic Disorders/diagnostic imaging
6.
Neuroscience ; 471: 72-79, 2021 09 01.
Article in English | MEDLINE | ID: mdl-34332014

ABSTRACT

Recent evidence has raised the importance of the cerebellum in emotional processes, with specific regard to negative emotions. However, its role in the processing of facial emotional expressions is still unknown. This study aimed to assess whether facial emotional expressions influence cerebellar learning processes, using delay eyeblink classical conditioning (EBCC) as a model. Visual stimuli composed of faces expressing happy, sad, and neutral emotions were used as the conditioning stimulus in forty healthy subjects to modulate the cerebellum-brainstem pathway underlying the EBCC. The same stimuli were used to explore their effects on the blink reflex (BR), its recovery cycle (BRRC), and cerebellar-brain inhibition (CBI). Data analysis revealed that the learning component of the EBCC was significantly reduced following the passive viewing of sad faces, while the extinction phase was modulated by both sad and happy faces. By contrast, BR, BRRC, and CBI were not significantly affected by viewing emotional facial expressions. The present study provides first evidence that passively viewed faces displaying emotional expressions are processed by the cerebellum, with no apparent involvement of the brainstem or the cerebello-cortical connection. In particular, viewing sad faces reduces the excitability of the cerebellar circuit underlying the learning phase of the EBCC, whereas the extinction phase was shortened by both happy and sad faces, suggesting that different neural bases underlie the learning and extinction of emotions expressed by faces.


Subjects
Blinking, Facial Expression, Cerebellum, Classical Conditioning, Emotions, Humans
7.
J Speech Lang Hear Res ; 64(8): 2941-2955, 2021 08 09.
Article in English | MEDLINE | ID: mdl-34310173

ABSTRACT

Purpose The nature of gender differences in emotion processing has remained unclear due to the discrepancies in existing literature. This study examined the modulatory effects of emotion categories and communication channels on gender differences in verbal and nonverbal emotion perception. Method Eighty-eight participants (43 females and 45 males) were asked to identify three basic emotions (i.e., happiness, sadness, and anger) and neutrality encoded by female or male actors from verbal (i.e., semantic) or nonverbal (i.e., facial and prosodic) channels. Results While women showed an overall advantage in performance, their superiority was dependent on specific types of emotion and channel. Specifically, women outperformed men in regard to two basic emotions (happiness and sadness) in the nonverbal channels and only the anger category with verbal content. Conversely, men did better for the anger category in the nonverbal channels and for the other two emotions (happiness and sadness) in verbal content. There was an emotion- and channel-specific interaction effect between the two types of gender differences, with male subjects showing higher sensitivity to sad faces and prosody portrayed by the female encoders. Conclusion These findings reveal explicit emotion processing as a highly dynamic complex process with significant gender differences tied to specific emotion categories and communication channels. Supplemental Material https://doi.org/10.23641/asha.15032583.


Subjects
Emotions, Semantics, Anger, Facial Expression, Female, Happiness, Humans, Male, Sex Factors
8.
Sensors (Basel) ; 21(14)2021 Jul 16.
Article in English | MEDLINE | ID: mdl-34300600

ABSTRACT

Facial expressions are well known to change with age, but the quantitative properties of facial aging remain unclear. In the present study, we investigated differences in the intensity of facial expressions between older (n = 56) and younger adults (n = 113). In laboratory experiments, posed facial expressions of the participants were obtained for six basic emotions and a neutral facial expression, and the intensities of their expressions were analyzed using the computer vision tool OpenFace. Our results showed that the older adults produced strong expressions for some negative emotions and neutral faces. Furthermore, when making facial expressions, older adults used more facial muscles than younger adults across the emotions. These results may help in understanding the characteristics of facial expressions in aging and can provide empirical evidence for other fields regarding facial recognition.


Subjects
Facial Expression, Facial Recognition, Aged, Aging, Computers, Emotions, Humans
9.
Sensors (Basel) ; 21(14)2021 Jul 20.
Article in English | MEDLINE | ID: mdl-34300666

ABSTRACT

Music videos contain a great deal of visual and acoustic information. Each information source within a music video influences the emotions conveyed through the audio and video, suggesting that only a multimodal approach is capable of achieving efficient affective computing. This paper presents an affective computing system that relies on music, video, and facial expression cues, making it useful for emotional analysis. We applied the audio-video information exchange and boosting methods to regularize the training process and reduced the computational costs by using a separable convolution strategy. In sum, our empirical findings are as follows: (1) Multimodal representations efficiently capture all acoustic and visual emotional clues included in each music video, (2) the computational cost of each neural network is significantly reduced by factorizing the standard 2D/3D convolution into separate channels and spatiotemporal interactions, and (3) information-sharing methods incorporated into multimodal representations are helpful in guiding individual information flow and boosting overall performance. We tested our findings across several unimodal and multimodal networks against various evaluation metrics and visual analyzers. Our best classifier attained 74% accuracy, an f1-score of 0.73, and an area under the curve score of 0.926.
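The cost reduction from factorizing a standard convolution into separate channel and spatiotemporal steps, as described in finding (2), can be sketched by comparing parameter counts of a standard versus a depthwise-separable 2D convolution. The layer sizes below are generic examples, not the exact configuration used in the paper:

```python
def standard_conv_params(c_in, c_out, k):
    # Standard convolution: one k x k filter per (input, output) channel pair
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    # Depthwise step: one k x k spatial filter per input channel;
    # pointwise step: a 1 x 1 convolution mixing channels
    return c_in * k * k + c_in * c_out

# Example: a 64 -> 128 channel layer with 3 x 3 kernels
std = standard_conv_params(64, 128, 3)   # 73,728 weights
sep = separable_conv_params(64, 128, 3)  # 8,768 weights (roughly 8x fewer)
```

The same factorization applies to 3D (spatiotemporal) convolutions, where the savings are even larger because the kernel volume grows with the temporal dimension.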


Subjects
Deep Learning, Music, Emotions, Facial Expression, Neural Networks (Computer)
10.
Proc Biol Sci ; 288(1954): 20210966, 2021 07 14.
Article in English | MEDLINE | ID: mdl-34229489

ABSTRACT

Facial expressions are vital for social communication, yet the underlying mechanisms are still being discovered. Illusory faces perceived in objects (face pareidolia) are errors of face detection that share some neural mechanisms with human face processing. However, it is unknown whether expression in illusory faces engages the same mechanisms as human faces. Here, using a serial dependence paradigm, we investigated whether illusory and human faces share a common expression mechanism. First, we found that images of face pareidolia are reliably rated for expression, within and between observers, despite varying greatly in visual features. Second, they exhibit positive serial dependence for perceived facial expression, meaning an illusory face (happy or angry) is perceived as more similar in expression to the preceding one, just as seen for human faces. This suggests illusory and human faces engage similar mechanisms of temporal continuity. Third, we found robust cross-domain serial dependence of perceived expression between illusory and human faces when they were interleaved, with serial effects larger when illusory faces preceded human faces than the reverse. Together, the results support a shared mechanism for facial expression between human faces and illusory faces and suggest that expression processing is not tightly bound to human facial features.


Subjects
Facial Recognition, Illusions, Facial Expression, Happiness, Humans
11.
Sensors (Basel) ; 21(12)2021 Jun 20.
Article in English | MEDLINE | ID: mdl-34203007

ABSTRACT

In the field of affective computing, accurate automatic detection of facial movements is an important issue, and great progress has already been made. However, a systematic evaluation of current systems on dynamic facial databases remains an unmet need. This study compared the performance of three systems (FaceReader, OpenFace, AFARtoolbox) that detect facial movements corresponding to action units (AUs) derived from the Facial Action Coding System. All three systems detected the presence of AUs in the dynamic facial database at above-chance levels. Moreover, OpenFace and AFAR provided higher area under the receiver operating characteristic curve values than FaceReader. In addition, several confusion biases between facial components (e.g., AU12 and AU14) were observed for each automated AU detection system, and the static mode was superior to the dynamic mode for analyzing the posed facial database. These findings characterize the prediction patterns of each system and provide guidance for research on facial expressions.


Subjects
Face, Facial Expression, Factual Databases
12.
Sensors (Basel) ; 21(12)2021 Jun 16.
Article in English | MEDLINE | ID: mdl-34208539

ABSTRACT

First impressions make up an integral part of our interactions with other humans by providing an instantaneous judgment of the trustworthiness, dominance and attractiveness of an individual prior to engaging in any other form of interaction. Unfortunately, this can lead to unintentional bias in situations that have serious consequences, whether it be in judicial proceedings, career advancement, or politics. The ability to automatically recognize social traits presents a number of highly useful applications: from minimizing bias in social interactions to providing insight into how our own facial attributes are interpreted by others. However, while first impressions are well-studied in the field of psychology, automated methods for predicting social traits are largely non-existent. In this work, we demonstrate the feasibility of two automated approaches-multi-label classification (MLC) and multi-output regression (MOR)-for first impression recognition from faces. We demonstrate that both approaches are able to predict social traits with better than chance accuracy, but there is still significant room for improvement. We evaluate ethical concerns and detail application areas for future work in this direction.


Subjects
Facial Expression, Social Perception, Humans, Judgment, Recognition (Psychology), Sociological Factors
13.
Sci Rep ; 11(1): 14448, 2021 07 14.
Article in English | MEDLINE | ID: mdl-34262075

ABSTRACT

Faces hold substantial value for effective social interaction and sharing. Covering faces with masks, due to COVID-19 regulations, may lead to difficulties in using social signals, in particular in individuals with neurodevelopmental conditions. Daily-life social participation of individuals who were born preterm is of immense importance for their quality of life. Here we examined face tuning in individuals (aged 12.79 ± 1.89 years) who were born preterm and exhibited signs of periventricular leukomalacia (PVL), a dominant form of brain injury in preterm birth survivors. For assessing face sensitivity in this population, we implemented a recently developed experimental tool, a set of Face-n-Food images bordering on the style of Giuseppe Arcimboldo. The key benefit of these images is that single components do not trigger face processing. Although a coarse face schema is thought to be hardwired in the brain, former preterms exhibit substantial shortages in face tuning not only compared with typically developing controls but also with individuals with autism spectrum disorders. The lack of correlations between face sensitivity and other cognitive abilities indicates that these deficits are domain-specific. This underscores the impact of preterm birth sequelae on social functioning at large. Comparison of the findings with data from individuals with other neurodevelopmental and neuropsychiatric conditions provides novel insights into the origins of deficient face processing.


Subjects
Brain/physiology, Facial Recognition, Visual Pattern Recognition, Premature Birth, Social Cognition, Adolescent, Autism Spectrum Disorder, COVID-19, Child, Cognition, Cognitive Neuroscience, Facial Expression, Female, Humans, Periventricular Leukomalacia, Pregnancy, Quality of Life, Recognition (Psychology)/physiology, Sex Factors, Social Behavior, Visual Perception/physiology
14.
Sensors (Basel) ; 21(13)2021 Jul 05.
Article in English | MEDLINE | ID: mdl-34283146

ABSTRACT

People tend to display fake expressions to conceal their true feelings. False expressions are observable by facial micromovements that occur for less than a second. Systems designed to recognize facial expressions (e.g., social robots, recognition systems for the blind, monitoring systems for drivers) may better understand the user's intent by identifying the authenticity of the expression. The present study investigated the characteristics of real and fake facial expressions of representative emotions (happiness, contentment, anger, and sadness) in a two-dimensional emotion model. Participants viewed a series of visual stimuli designed to induce real or fake emotions and were signaled to produce a facial expression at a set time. From the participant's expression data, feature variables (i.e., the degree and variance of movement, and vibration level) involving the facial micromovements at the onset of the expression were analyzed. The results indicated significant differences in the feature variables between the real and fake expression conditions. The differences varied according to facial regions as a function of emotions. This study provides appraisal criteria for identifying the authenticity of facial expressions that are applicable to future research and the design of emotion recognition systems.


Subjects
Emotions, Facial Expression, Anger, Happiness, Humans, Recognition (Psychology)
15.
Neuropsychologia ; 159: 107959, 2021 08 20.
Article in English | MEDLINE | ID: mdl-34271003

ABSTRACT

Previous studies have explored the influence of explicit emotion priming on computational estimation strategy execution, but the corresponding influence of implicit emotion priming remains unknown. The present study aimed to address this question. Participants were asked to complete a two-digit multiplication computational estimation task under different implicit emotion priming conditions (a gender judgment task). In the estimation task, a question was presented in the middle of the screen while two alternative answers were presented side by side at the bottom of the screen; participants were required to select the correct answer using the down-up strategy (e.g., computing 30 × 50 = 1500 for 34 × 46). Behavioral results showed that responses were faster under implicit happy and fear (vs. neutral and angry) priming conditions, while accuracy showed no significant difference across priming conditions. The ERP results showed that the influence of implicit emotion priming on computational estimation strategy execution consisted of two phases: in the first phase, the N1 amplitudes elicited by the estimation task were smaller under the implicit fear (vs. angry) priming condition; in the second phase, the corresponding P2 amplitudes were smaller under the implicit happy (vs. fear) priming condition. The present study indicates that implicit happy and fear experiences contribute to completing computational estimation tasks, suggesting that implicit negative emotional (e.g., fear) experience is not always detrimental to computational estimation strategy execution.
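The down-up strategy described in the task (round one operand down and the other up to the nearest ten, then multiply) can be sketched as follows; the helper name is illustrative, not from the study:

```python
def down_up_estimate(a, b):
    # Round the first operand down and the second up to the nearest ten,
    # as in the task's example: 34 x 46 -> 30 x 50 = 1500
    a_down = (a // 10) * 10
    b_up = ((b + 9) // 10) * 10
    return a_down * b_up
```

For 34 × 46 this yields 30 × 50 = 1500, close to the exact product 1564, which is what makes the strategy useful for choosing between two candidate answers quickly.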


Subjects
Fear, Happiness, Anger, Emotions, Facial Expression, Humans
16.
Res Dev Disabil ; 116: 104034, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34304046

ABSTRACT

BACKGROUND: The autonomic nervous system influences emotions and behavior modulation; however, the relationship between autonomic modulation impairment and autism spectrum disorder (ASD) has yet to be fully described. AIMS: To evaluate the autonomic responses of children with and without ASD through non-linear and linear heart rate variability (HRV) measures, and to assess the correlation between these responses and the severity and behavioral symptoms of autism. METHODS AND PROCEDURES: 27 children diagnosed with ASD (EG = experimental group) and 28 matched controls (CG = control group) were evaluated. HRV was evaluated in 15-min sections at the following moments: I) resting condition; II) during facial expression tasks; and III) recovery. The severity and behavioral symptoms of autism were evaluated with the Childhood Autism Rating Scale (CARS) and the Autism Behavior Checklist (ABC). OUTCOMES AND RESULTS: The facial expression tasks influenced autonomic nervous system activity in both groups; however, the EG experienced more autonomic changes, mostly evidenced by the non-linear indices. The CARS and ABC scales also showed significant correlations with HRV indices. CONCLUSIONS AND IMPLICATIONS: Children with ASD presented an autonomic modulation impairment, mostly identified by the non-linear indices of HRV, and this impairment is associated with the severity and behavioral symptoms of autism.
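The abstract does not specify which non-linear HRV indices were used. As one commonly used example (an assumption for illustration, not necessarily the study's measure), the Poincaré-plot descriptors SD1 and SD2 can be computed from successive RR intervals:

```python
import math
import statistics

def poincare_sd1_sd2(rr_ms):
    """Poincare SD1/SD2 descriptors from a list of RR intervals in ms."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]  # RR_{n+1} - RR_n
    sums = [a + b for a, b in zip(rr_ms, rr_ms[1:])]   # RR_{n+1} + RR_n
    sd1 = statistics.pstdev(diffs) / math.sqrt(2)  # short-term (beat-to-beat) variability
    sd2 = statistics.pstdev(sums) / math.sqrt(2)   # long-term variability
    return sd1, sd2
```

SD1 reflects fast, beat-to-beat (largely parasympathetic) variability, while SD2 captures slower fluctuations, which is why such non-linear indices can reveal changes that time-domain means miss.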


Subjects
Autism Spectrum Disorder, Autistic Disorder, Autonomic Nervous System, Child, Facial Expression, Heart Rate, Humans
17.
Article in English | MEDLINE | ID: mdl-34300102

ABSTRACT

There is a large body of evidence that exposure to simulated natural scenes has positive effects on emotions and reduces stress. Some studies have used self-reported assessments, and others have used physiological measures or combined self-reports with physiological measures; however, facial emotional expression has rarely been analyzed. In the present study, participants' facial expressions were analyzed while they viewed forest trees with foliage, forest trees without foliage, and urban images, using iMotions' AFFDEX software designed for the recognition of facial emotions. It was assumed that natural images would evoke a higher magnitude of positive emotions in facial expressions and a lower magnitude of negative emotions than urban images. However, the results showed only very low magnitudes of facial emotional responses, and differences between natural and urban images were not significant. While the stimuli used in the present study represented an ordinary deciduous forest and urban streets, differences between the effects of mundane and attractive natural scenes and urban images are discussed. It is suggested that more attractive images could result in more pronounced emotional facial expressions. The findings of the present study have methodological relevance for future research. Moreover, not all urban dwellers have the possibility to spend time in nature; therefore, knowing more about the effects of simulated natural scenes as a surrogate for nature also has practical relevance.


Subjects
Emotions, Facial Expression, Face, Humans, Recognition (Psychology), Software
18.
Neuroimage Clin ; 31: 102731, 2021.
Article in English | MEDLINE | ID: mdl-34174690

ABSTRACT

BACKGROUND: Findings to date on emotional face processing among depressed individuals reveal an inconsistent picture, with only some studies supporting a mood-congruent bias in salience processing. Moreover, many results are based on the processing of sad emotions and mostly focused on resting-state connectivity analysis. The present study aimed to address this imbalance by implementing a social oddball paradigm, with a special focus on the amygdala, the ACC, the insula, and subdivisions of the insula and ACC. METHODS: Twenty-seven depressed patients and twenty-seven non-depressed controls took part in an fMRI event-related social oddball paradigm based on smiling facial expressions as target stimuli embedded in a stream of neutral facial expressions. fMRI activation and functional connectivity analyses were calculated for the pre-defined ROIs of the salience network (SN), with a special focus on twelve insular subdivisions and six ACC subdivisions. RESULTS: For both groups the social oddball paradigm triggered similar BOLD responses within the pre-defined ROIs, while the quality of functional connectivity showed pronounced alterations from the salience network to the ventral attention and default mode networks (DMN). CONCLUSION: At a first level of target detection, smiling faces are processed equally and trigger similar BOLD responses in structures of the salience network. At a second level of inter-network communication, the brains of depressed participants tend to be pre-formed for self-referential processing and rumination instead of fast goal-directed behavior and socio-emotional cognitive processing.


Subjects
Major Depressive Disorder, Facial Recognition, Cerebral Cortex, Major Depressive Disorder/diagnostic imaging, Facial Expression, Humans, Magnetic Resonance Imaging
19.
Proc Inst Mech Eng H ; 235(10): 1113-1127, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34105405

ABSTRACT

Children with autism spectrum disorder have impairments in emotional processing that lead to difficulty recognizing facial expressions. Since emotion is a vital criterion for good socialisation, it is extremely important for autistic children to recognise emotions. In our study, we chose facial skin temperature as a biomarker to measure emotions. To assess facial skin temperature, the thermal imaging modality was used, since it has been recognised as a promising technique for evaluating emotional responses. The aims of this study were: (1) to compare the facial skin temperature of autistic and non-autistic children across various emotions using thermal imaging; (2) to classify the thermal images obtained from the study using a customised convolutional neural network, compared with the ResNet 50 network. Fifty autistic and fifty non-autistic participants were included in the study. Thermal imaging was used to obtain the temperature of specific facial regions such as the eyes, cheek, forehead and nose while emotions (happiness, anger and sadness) were evoked in the children using an audio-visual stimulus. Among the emotions considered, anger showed the highest temperature difference between autistic and non-autistic participants in the eye (1.9%), cheek (2.38%) and nose (12.6%) regions. The accuracy obtained by classifying the thermal images of the autistic and non-autistic children was 96% with the customised neural network and 90% with the ResNet 50 network. This computer-aided diagnostic tool could be a predictable and steadfast method for the diagnosis of autistic individuals.


Subjects
Autism Spectrum Disorder, Autistic Disorder, Deep Learning, Autism Spectrum Disorder/diagnostic imaging, Child, Emotions, Facial Expression, Humans
20.
Cortex ; 141: 280-292, 2021 08.
Article in English | MEDLINE | ID: mdl-34102411

ABSTRACT

The ability to recognize the emotions of others is a crucial skill. In the visual modality, sensorimotor mechanisms provide an important route for emotion recognition. Perceiving facial expressions often evokes activity in facial muscles and in motor and somatosensory systems, and this activity relates to performance in emotion tasks. It remains unclear whether and how similar mechanisms extend to audition. Here we examined facial electromyographic and electrodermal responses to nonverbal vocalizations that varied in emotional authenticity. Participants (N = 100) passively listened to laughs and cries that could reflect an authentic or a posed emotion. Bayesian mixed models indicated that listening to laughter evoked stronger facial responses than listening to crying. These responses were sensitive to emotional authenticity. Authentic laughs evoked more activity than posed laughs in the zygomaticus and orbicularis, muscles typically associated with positive affect. We also found that activity in the orbicularis and corrugator related to subjective evaluations in a subsequent authenticity perception task. Stronger responses in the orbicularis predicted higher perceived laughter authenticity. Stronger responses in the corrugator, a muscle associated with negative affect, predicted lower perceived laughter authenticity. Moreover, authentic laughs elicited stronger skin conductance responses than posed laughs. This arousal effect did not predict task performance, however. For crying, physiological responses were not associated with authenticity judgments. Altogether, these findings indicate that emotional authenticity affects peripheral nervous system responses to vocalizations. They also point to a role of sensorimotor mechanisms in the evaluation of authenticity in the auditory modality.


Subjects
Emotions, Laughter, Auditory Perception, Bayes Theorem, Electromyography, Facial Expression, Facial Muscles, Humans