1.
Behav Res Methods ; 56(6): 5709-5731, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38273072

ABSTRACT

Facial expressions are among the earliest behaviors infants use to express emotional states, and are crucial to preverbal social interaction. Manual coding of infant facial expressions, however, is laborious and poses limitations to replicability. Recent developments in computer vision have advanced automated facial expression analyses in adults, providing reproducible results at a lower time investment. Baby FaceReader 9 is commercially available software for automated measurement of infant facial expressions, but has received little validation. We compared Baby FaceReader 9 output to manual micro-coding of positive, negative, or neutral facial expressions in a longitudinal dataset of 58 infants at 4 and 8 months of age during naturalistic face-to-face interactions with the mother, father, and an unfamiliar adult. Baby FaceReader 9's global emotional valence formula yielded reasonable classification accuracy (AUC = .81) for discriminating manually coded positive from negative/neutral facial expressions; however, the discrimination of negative from neutral facial expressions was not reliable (AUC = .58). Automatically detected a priori action unit (AU) configurations for distinguishing positive from negative facial expressions based on the existing literature were also not reliable. A parsimonious approach using only automatically detected smiling (AU12) yielded good performance for discriminating positive from negative/neutral facial expressions (AUC = .86). Likewise, automatically detected brow lowering (AU3+AU4) reliably distinguished neutral from negative facial expressions (AUC = .79). These results provide initial support for the use of selected automatically detected individual facial actions to index positive and negative affect in young infants, but cast doubt on the accuracy of complex a priori formulas.
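As an illustration of the single-action-unit AUC analysis described above, here is a minimal Python sketch using simulated per-frame data in place of the study's recordings; the intensity distributions, frame counts, and variable names are hypothetical:

```python
# Score how well one automatically detected action unit (AU12, smiling)
# separates manually coded positive frames from negative/neutral ones.
# All values below are simulated placeholders, not the study's data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

au12_intensity = np.concatenate([
    rng.normal(0.7, 0.2, 500),    # frames manually coded positive
    rng.normal(0.3, 0.2, 1000),   # frames manually coded negative/neutral
])
manual_positive = np.concatenate([np.ones(500), np.zeros(1000)])

# AUC = probability that a random positive frame receives a higher AU12
# score than a random negative/neutral frame (0.5 = chance level).
auc = roc_auc_score(manual_positive, au12_intensity)
print(f"AUC for AU12 against manual coding: {auc:.2f}")
```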


Subject(s)
Facial Expression , Humans , Infant , Female , Male , Longitudinal Studies , Emotions/physiology , Software , Facial Recognition/physiology
2.
Sensors (Basel) ; 23(22)2023 Nov 09.
Article in English | MEDLINE | ID: mdl-38005462

ABSTRACT

Although electromyography (EMG) remains the standard, researchers have begun using automated facial action coding system (FACS) software to evaluate spontaneous facial mimicry despite the lack of evidence of its validity. Using the facial EMG of the zygomaticus major (ZM) as a standard, we confirmed the detection of spontaneous facial mimicry in action unit 12 (AU12, lip corner puller) via automated FACS. Participants were alternately presented with real-time model performance and prerecorded videos of dynamic facial expressions, while the ZM signal and frontal facial videos were acquired simultaneously. AU12 activity was estimated from the facial videos using FaceReader, Py-Feat, and OpenFace. The automated FACS was less sensitive and less accurate than facial EMG, but AU12 mimicking responses were significantly correlated with ZM responses. All three software programs detected enhanced facial mimicry during live performances. The AU12 time series showed a roughly 100 to 300 ms latency relative to the ZM. Our results suggest that while automated FACS cannot replace facial EMG in mimicry detection, it can serve a purpose when expected effect sizes are large. Researchers should be cautious with automated FACS outputs, especially when studying clinical populations. In addition, developers should consider EMG validation of AU estimation as a benchmark.
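One common way to estimate a latency such as the reported AU12-to-ZM lag is peak cross-correlation between the two time series. The sketch below simulates both signals; the sampling rate, signal shapes, and built-in 200 ms lag are assumptions for illustration, not the authors' pipeline:

```python
# Estimate the lag of an AU12 time series relative to a ZM EMG envelope
# via peak cross-correlation. Signals are simulated toy data.
import numpy as np
from scipy.signal import correlate, correlation_lags

fs = 100                                             # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
zm = np.clip(np.sin(2 * np.pi * 0.5 * t), 0, None)   # toy ZM envelope
au12 = np.roll(zm, int(0.2 * fs))                    # AU12 trails ZM by 200 ms

# Mean-center both signals, then find the lag of maximum correlation.
xc = correlate(au12 - au12.mean(), zm - zm.mean(), mode="full")
lags = correlation_lags(len(au12), len(zm), mode="full")
latency_ms = lags[np.argmax(xc)] / fs * 1000
print(f"Estimated AU12 latency relative to ZM: {latency_ms:.0f} ms")
```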


Subject(s)
Face , Facial Expression , Humans , Facial Muscles/physiology , Electromyography/methods , Videotape Recording , Emotions/physiology
3.
Educ Inf Technol (Dordr) ; 28(6): 7413-7436, 2023.
Article in English | MEDLINE | ID: mdl-36471775

ABSTRACT

This quasi-experimental study aimed to determine the relationship between (i) oral language ability and emotions represented by facial emotions, and (ii) modality of assessment (audio versus video) and sentiments embedded in each modality. Sixty university students watched and/or listened to four selected audio-visual stimuli and orally answered follow-up comprehension questions. One stimulus type was designed to evoke happiness and the other, sadness. Participants' facial emotions during the answering were measured using FaceReader technology. In addition, four trained raters assessed the responses of the participants. An analysis of the FaceReader data showed that there were significant main and interaction effects of sentiment and modality on participants' facial emotional expression. Notably, there was a significant difference in the amount of facial emotions evoked by (i) the happy vs. sad sentiment videos and (ii) video vs. audio modalities. In contrast, sentiments embedded in the stimuli and modalities had no significant effect on the measured speaking performance of the participants. Nevertheless, we found a number of significant correlations between the participants' test scores and some of their facial emotions evoked by the stimuli. Implications of these findings for the assessment of oral communication are discussed.

4.
Aesthetic Plast Surg ; 45(6): 2742-2748, 2021 12.
Article in English | MEDLINE | ID: mdl-34580758

ABSTRACT

BACKGROUND: The widespread popularity of browlifts and blepharoplasties speaks directly to the importance that patients place on the periorbital region of the face. In the literature, most esthetic outcomes are based on the instinctive analysis of the esthetic surgeon, rather than on patient assessments, public opinion, or other objective means. We employed an artificial intelligence system to objectively measure the impact of brow lifts and associated rejuvenation procedures on the appearance of emotion while the patient is in repose. METHODS: We retrospectively identified all patients who underwent bilateral brow lift for visual field obstruction between 2006 and 2019. Images were analyzed using a commercially available facial expression recognition software package (FaceReader™, Noldus Information Technology BV, Wageningen, Netherlands). The data generated reflected the proportion of each emotion expressed for any given facial movement and the associated action units. RESULTS: A total of 52 cases were identified after exclusion. Pre-operatively, the angry, happy, sad, scared, and surprised emotions were detected at averages of 13.06%, 1.68%, 13.06%, 3.53%, and 0.97% across all patients, respectively. Post-operatively, the average angry emotion decreased to 5.42% (p=0.009). The happy emotion increased to 9.35% (p=0.0013), while the sad emotion decreased to 5.42%. The scared emotion remained essentially unchanged at 3.4%, and the surprised emotion increased to 2.01%; however, these changes were not statistically significant. CONCLUSION: This study proposes a paradigm shift in the clinical evaluation of brow lift and other facial esthetic surgery, implementing an existing facial emotion recognition system to quantify changes in expression associated with facial surgery. LEVEL OF EVIDENCE IV: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.
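The pre/post comparisons reported above amount to paired tests on per-patient emotion proportions. Below is a hedged sketch of that pattern with simulated values standing in for the study's measurements; a Wilcoxon signed-rank test is one reasonable choice for bounded, skewed proportions, though the abstract does not state which test the authors used:

```python
# Paired pre/post comparison of a FaceReader emotion proportion.
# The per-patient values are simulated, not the study's data.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(3)
n = 52                                                  # cases after exclusion
angry_pre = np.clip(rng.normal(0.13, 0.08, n), 0, 1)    # ~13% pre-op
angry_post = np.clip(angry_pre - rng.normal(0.076, 0.05, n), 0, 1)

stat, p = wilcoxon(angry_pre, angry_post)               # paired nonparametric test
print(f"Wilcoxon W = {stat:.1f}, p = {p:.4f}")
```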


Subject(s)
Artificial Intelligence , Rhytidoplasty , Emotions , Humans , Rejuvenation , Retrospective Studies
5.
Nihon Ronen Igakkai Zasshi ; 56(4): 478-486, 2019.
Article in Japanese | MEDLINE | ID: mdl-31761854

ABSTRACT

AIM: Facial expressions are often impaired in patients with Parkinson's disease (PD). Few studies have examined the effects of head and neck rehabilitation in patients with PD using a facial expression analysis. In the present study, to further elucidate the effects of facial rehabilitation exercise in patients with PD, a three-dimensional facial expression analysis with FaceReader™ and surface electromyography (EMG) were performed in order to assess facial expressions and muscle activities, respectively. The effects of such exercises on mood and mental health were also evaluated. METHOD: Twenty-one patients with PD (63.3±12.1 years) participated in the present study and were randomly assigned to an intervention group and a non-intervention group. Facial rehabilitation exercise was performed for 60 minutes once a week for 12 weeks in the intervention group. The GHQ-12, the facial expression analysis with FaceReader™, surface EMG, and a VAS scale for mood changes were used to evaluate the effects of the program. The results from both groups were compared. RESULTS: The results from eight patients in the intervention group and five in the non-intervention group were analyzed. FaceReader™ revealed a higher "Happy" index and a lower "Sad" index in the intervention group than in the non-intervention group, and an analysis of variance showed a significant interaction for the "Happy" index between the two groups. EMG also showed increases in the activity of facial muscles in the intervention group. The subjects' mood improved after each facial rehabilitation exercise session. CONCLUSION: The results of the present study suggest that the facial rehabilitation exercise affected mood, facial expressions, and facial muscle activities in patients with PD, and indicate that the expression analysis with FaceReader™ and surface EMG are useful for evaluating the effects of facial rehabilitation exercise.


Subject(s)
Exercise Therapy , Facial Expression , Facial Muscles , Parkinson Disease , Adult , Affect , Aged , Face/physiopathology , Facial Muscles/physiology , Female , Humans , Male , Middle Aged , Parkinson Disease/complications , Parkinson Disease/psychology , Parkinson Disease/rehabilitation
6.
Appetite ; 116: 315-322, 2017 09 01.
Article in English | MEDLINE | ID: mdl-28478065

ABSTRACT

The aim of this study was to assess the role of extended time in the United States (defined as a continuous period greater than two years; such participants are referred to hereafter as "US Acclimated"), as well as other demographic factors, on the level of consumers' net positive response to different salt levels in food samples. One hundred panelists were recruited, including 50 meeting our US acclimation criterion. Panelists assessed samples of potatoes with five different salt concentrations, and the levels of their net positive responses were evaluated with FaceReader technology (Noldus). Our data showed a significant positive association between US Acclimated participants and the level of net positive response to samples with higher salt contents. This interaction remained statistically significant even when modeling the effects with consideration of race/ethnicity and gender. Another notable outcome was the unexpected significant interaction between gender and US acclimation with regard to positive response across all salt concentrations (US Acclimated females demonstrated substantially and significantly higher levels of positive response than US Acclimated males). The association between living in the United States and showing a more positive response to higher salt contents is consistent with many persistent characterizations of eating habits in the United States, but it is not in fact well explained by the most recent data on observed levels of average sodium consumption across worldwide geographical regions. The results of this study may point to underlying, as-yet-unknown factors contributing to consumers' responses to salt levels in foods. Further examination of these possible factors may well be warranted.
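The interaction analyses described (acclimation × salt level and gender × acclimation, with demographic covariates) follow a standard regression-with-interactions pattern. A minimal sketch with invented data and variable names; the study's actual model specification is not given in the abstract:

```python
# Fit a linear model with interaction terms, mirroring the kind of
# acclimation x salt-level analysis described. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 100
df = pd.DataFrame({
    "us_acclimated": rng.integers(0, 2, n),      # 1 = >2 continuous years in US
    "gender": rng.choice(["F", "M"], n),
    "salt_level": rng.integers(1, 6, n),         # five concentrations
})
# Build in the two interactions the abstract reports, plus noise.
df["positive_response"] = (
    0.2 * df["salt_level"] * df["us_acclimated"]
    + 0.3 * df["us_acclimated"] * (df["gender"] == "F")
    + rng.normal(0, 0.5, n)
)

model = smf.ols(
    "positive_response ~ salt_level * us_acclimated + gender * us_acclimated",
    data=df,
).fit()
print(model.summary().tables[1])   # coefficient table with p-values
```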


Subject(s)
Acclimatization , Diet , Sodium Chloride, Dietary/administration & dosage , Adult , Algorithms , Cohort Studies , Demography , Emigrants and Immigrants , Ethnicity , Female , Food Analysis , Health Behavior , Humans , Male , Sodium Chloride, Dietary/analysis , United States , Young Adult
7.
Cogn Emot ; 31(2): 209-224, 2017 02.
Article in English | MEDLINE | ID: mdl-26469744

ABSTRACT

The current study adds to prior research by investigating specific (happiness, sadness, surprise, disgust, anger and fear) and general (corrugator and zygomatic muscle activity) facial reactions to violent and comedy films among individuals with varying levels of callous-unemotional (CU) traits and impulsive aggression (IA). Participants at differential risk of CU traits and IA were selected from a sample of 1225 young adults. In Experiment 1, participants' (N = 82) facial expressions were recorded while they watched violent and comedy films. Video footage of participants' facial expressions was analysed using FaceReader, facial coding software that classifies facial reactions. Findings suggested that individuals with elevated CU traits showed reduced facial reactions of sadness and disgust to violent films, indicating low empathic concern in response to victims' distress. In contrast, impulsive aggressors produced specifically more angry facial expressions when viewing violent and comedy films. In Experiment 2 (N = 86), facial reactions were measured by monitoring facial electromyography activity. The FaceReader findings were corroborated: individuals high in CU traits showed reduced electromyographic activity at the corrugator, but not the zygomatic, muscle in response to violent films. Additional analysis suggested that sympathy to victims explained the association between CU traits and reduced facial reactions to violent films.


Subject(s)
Emotions , Facial Expression , Impulsive Behavior , Violence/psychology , Wit and Humor as Topic/psychology , Aggression , Female , Humans , Male , Motion Pictures , Young Adult
8.
Cogn Affect Behav Neurosci ; 16(2): 374-81, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26667366

ABSTRACT

There is a substantial body of research on the relationship between emotion and autobiographical memory. Using facial analysis software, our study addressed this relationship by investigating basic emotional facial expressions that may be detected during autobiographical recall. Participants were asked to retrieve three autobiographical memories, each of which was triggered by one of the following cue words: happy, sad, and city. The autobiographical recall was analyzed with facial analysis software that detects and classifies basic emotional expressions. Analyses showed that emotional cues triggered the corresponding basic facial expressions (i.e., a happy facial expression for memories cued by happy). Furthermore, we dissociated episodic and semantic retrieval, observing more emotional facial expressions during episodic than during semantic retrieval, regardless of the emotional valence of the cues. Our study provides insight into the facial expressions that are associated with emotional autobiographical memory. It also highlights an ecological tool to reveal physiological changes that are associated with emotion and memory.


Subject(s)
Affect/physiology , Cues , Emotions/physiology , Facial Expression , Mental Recall/physiology , Semantics , Adolescent , Adult , Female , Humans , Male , Memory, Episodic , Young Adult
9.
J Psychiatr Res ; 176: 9-17, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38830297

ABSTRACT

Emotional deficits in psychosis are prevalent and difficult to treat. In particular, much remains unknown about facial expression abnormalities, and a key reason is that expressions are very labor-intensive to code. Automatic facial coding (AFC) can remove this barrier. The current study sought both to provide evidence for the utility of AFC in psychosis research and to provide evidence that AFC yields valid measures of clinical constructs. Changes in the facial expressions and head position of participants (39 with schizophrenia/schizoaffective disorder (SZ), 46 with other psychotic disorders (OP), and 108 never-psychotic individuals (NP)) were assessed via FaceReader, a commercially available automated facial expression analysis software, using video recorded during a clinical interview. We first examined the behavioral measures of the psychotic disorder groups and tested whether they could discriminate between the groups. Next, we evaluated links between behavioral measures and clinical symptoms, controlling for group membership. We found that the SZ group was characterized by significantly less variation in neutral expressions, happy expressions, arousal, and head movements compared to NP. These measures discriminated SZ from NP well (AUC = 0.79, sensitivity = 0.79, specificity = 0.67) but discriminated SZ from OP less well (AUC = 0.66, sensitivity = 0.77, specificity = 0.46). We also found significant correlations between clinician-rated symptoms and most behavioral measures (particularly happy expressions, arousal, and head movements). Taken together, these results suggest that AFC can provide useful behavioral measures of psychosis, which could improve research on non-verbal expressions in psychosis and, ultimately, enhance treatment.
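The group-discrimination metrics quoted (AUC, sensitivity, specificity) can be produced from per-participant behavioral summaries with a simple classifier. Below is an illustrative Python sketch; the four features and all values are simulated stand-ins, and the study's actual classification procedure may differ:

```python
# Discriminate SZ from NP using per-participant variability features.
# Feature values are simulated; only the workflow mirrors the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
# Hypothetical features: SD of neutral/happy expression intensity,
# arousal, and head movement (SZ assumed to vary less).
X_sz = rng.normal(0.8, 0.2, size=(39, 4))
X_np = rng.normal(1.0, 0.2, size=(108, 4))
X = np.vstack([X_sz, X_np])
y = np.r_[np.ones(39), np.zeros(108)]        # 1 = SZ, 0 = NP

clf = LogisticRegression().fit(X, y)
scores = clf.predict_proba(X)[:, 1]
print(f"AUC: {roc_auc_score(y, scores):.2f}")

# Sensitivity/specificity at the Youden-optimal threshold.
fpr, tpr, _ = roc_curve(y, scores)
best = np.argmax(tpr - fpr)
print(f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```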


Subject(s)
Facial Expression , Psychotic Disorders , Video Recording , Humans , Psychotic Disorders/physiopathology , Psychotic Disorders/diagnosis , Female , Male , Adult , Middle Aged , Schizophrenia/physiopathology , Schizophrenia/diagnosis , Psychiatric Status Rating Scales , Head Movements/physiology , Young Adult , Emotions/physiology
10.
Heliyon ; 10(2): e23728, 2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38347906

ABSTRACT

This study investigated the relationship between emotional states (valence, arousal, and six basic emotions) and donation size in pet charities, and it compared the effectiveness of affective computing and emotion self-report methods in assessing the attractiveness of charity appeals. Using FaceReader software and self-report, we measured the emotional states of participants (N = 45) during a donation task. The results showed that sadness, happiness, and anger were significantly related to donation size. Sadness and anger increased donations, whereas happiness decreased them. Arousal was not significantly correlated with willingness to donate. These results are supported by both methods, whereas the self-reported data regarding the association of surprise, fear, and disgust with donation size are inconclusive. Thus, unpleasant emotions increase donation size, and combining affective computing with self-reported data improves the prediction of the effectiveness of a charity appeal. This study contributes to the understanding of the relationship between emotions and charitable behavior toward pet charities and evaluates the effectiveness of marketing mix elements using affective computing. Limitations include the laboratory setting of the experiment and the lack of measurement of prolonged and repeated exposure to unpleasant charity appeals.

11.
Front Psychol ; 14: 1223806, 2023.
Article in English | MEDLINE | ID: mdl-37583610

ABSTRACT

Introduction: This work explores the use of automated facial coding software (FaceReader) as an alternative and/or complementary method to manual coding. Methods: We used videos of parents (fathers, n = 36; mothers, n = 29) taken from the Avon Longitudinal Study of Parents and Children. The videos, obtained during real-life parent-infant interactions in the home, were coded both manually (using an existing coding scheme) and by FaceReader. We established a correspondence between the manual and automated coding categories (namely Positive, Neutral, Negative, and Surprise) before contingency tables were employed to examine the software's detection rate and quantify the agreement between manual and automated coding. By employing binary logistic regression, we examined the predictive potential of FaceReader outputs in determining manually classified facial expressions. An interaction term was used to investigate the impact of gender on our models, seeking to estimate its influence on predictive accuracy. Results: We found that the automated facial detection rate was low (25.2% for fathers, 24.6% for mothers) compared to manual coding, and we discuss some potential explanations for this (e.g., poor lighting and facial occlusion). Our logistic regression analyses found that Surprise and Positive expressions had strong predictive capabilities, whilst Negative expressions performed poorly. Mothers' faces were more important for predicting Positive and Neutral expressions, whilst fathers' faces were more important in predicting Negative and Surprise expressions. Discussion: We discuss the implications of our findings in the context of future automated facial coding studies, and we emphasise the need to consider gender-specific influences in automated facial coding research.
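The detection-rate and agreement checks described in the Methods reduce to simple frame-level bookkeeping. A small sketch with invented frame labels; the coding scheme's real categories and counts are not reproduced here:

```python
# Detection rate and manual-vs-automated agreement via a contingency
# table. The frame labels below are invented for demonstration.
import pandas as pd

frames = pd.DataFrame({
    "manual":     ["Positive", "Neutral", "Negative", "Positive",
                   "Surprise", "Neutral", "Neutral", "Negative"],
    "facereader": ["Positive", "Neutral", None, "Positive",
                   "Surprise", None, "Neutral", "Neutral"],
})  # None = FaceReader returned no classification for that frame

# Detection rate: share of manually coded frames with any software output.
print(f"Detection rate: {frames['facereader'].notna().mean():.1%}")

# Agreement on the detected subset, as a contingency table.
detected = frames.dropna(subset=["facereader"])
print(pd.crosstab(detected["manual"], detected["facereader"]))
```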

12.
Meat Sci ; 199: 109124, 2023 May.
Article in English | MEDLINE | ID: mdl-36736127

ABSTRACT

Sensory perception changes as people age, and biometric analysis can be used to explore unconscious consumer responses. We investigated the effects of consumer age (younger, 22-52 years; older, 60-76 years) on facial expression response (FER) during consumption of beef patties with varying firmness (soft, medium, hard) and taste (±plum sauce). Video images were collected and FERs analysed using FaceReader™. Younger people exhibited higher intensity for happy/sad/scared and lower intensity for neutral/disgusted expressions, relative to older people. Interactions between age and texture/sauce showed little FER variation in older people, whereas younger people showed considerable FER variation. Younger people, but not older people, had the lowest intensity of happy FER and the highest intensity of angry FER for the hard patty. Sauce addition resulted in higher intensity of happy/contempt in younger consumers, but not older consumers. FER collected using FaceReader™ successfully differentiated between the unconscious responses of younger and older consumers.


Subject(s)
Emotions , Taste , Animals , Humans , Cattle , Aged , Young Adult , Adult , Middle Aged , Taste Perception , Food Handling/methods , Biometry , Consumer Behavior
13.
J Plast Reconstr Aesthet Surg ; 75(9): 3628-3651, 2022 09.
Article in English | MEDLINE | ID: mdl-35989146

ABSTRACT

INTRODUCTION: There remains a lack of standards in facial rejuvenation procedures, which may be attributed to the subjective measures used to determine surgical outcomes and success. The aim of this study was to evaluate the use of machine learning technology, i.e. FaceReader™, to objectively measure facial rejuvenation surgery outcomes. METHODS: Using a retrospective study design, we enrolled a cohort of patients undergoing high SMAS facelift with/without additional procedures during a one-year interval. The predictor variable was surgical status (pre- vs. postoperative). The outcome variables were 28 facial action units and the happiness and sadness emotions detected by FaceReader™. Appropriate statistics were calculated at α = 0.05. RESULTS: The sample comprised 15 patients (11 females, 15 Caucasians, mean age of 55.7 years). There was an average increase in detected happy emotion from 1.03% to 13.17% (p>0.01). Conversely, the average angry emotion detected decreased from 14.66% to 0.63% (p<0.05). There were no other distinct action unit patterns across the operation. CONCLUSION: Despite a small sample size, the results of this study suggest that FaceReader™ can be used as an objective outcome assessment tool in patients undergoing high SMAS facelift with/without its adjuncts.


Subject(s)
Rejuvenation , Rhytidoplasty , Artificial Intelligence , Emotions , Female , Humans , Middle Aged , Rejuvenation/psychology , Retrospective Studies , Rhytidoplasty/methods
14.
Front Psychiatry ; 12: 628397, 2021.
Article in English | MEDLINE | ID: mdl-33841202

ABSTRACT

Videotape recordings obtained during an initial, conventional psychiatric interview were used to assess possible emotional differences in facial expressions and acoustic parameters of the voice between female patients with Borderline Personality Disorder (BPD) and matched controls. The incidence of seven basic emotion expressions, emotional valence, heart rate, and the vocal frequency (f0) and intensity (dB) of discourse adjectives and interjections were determined through the application of computational software to the visual (FaceReader) and sound (PRAAT) tracks of the videotape recordings. The extensive data obtained were analyzed by three statistical strategies: linear multilevel modeling, correlation matrices, and exploratory network analysis. In comparison with healthy controls, BPD patients expressed a third less sadness and showed a higher number of positive correlations (14 vs. 8) and a cluster of related nodes among the prosodic parameters and the facial expressions of anger, disgust, and contempt. In contrast, control subjects showed negative or null correlations between such facial expressions and prosodic parameters. It seems feasible that BPD patients restrain the facial expression of specific emotions in an attempt to achieve social acceptance. Moreover, the confluence of prosodic and facial expressions of negative emotions reflects a sympathetic activation that is opposed to the social engagement system. This BPD imbalance reflects an emotional alteration and a dysfunctional behavioral strategy that may constitute a useful biobehavioral indicator of the severity and clinical course of the disorder. This face/voice/heart rate emotional expression assessment (EMEX) may be used in the search for reliable biobehavioral correlates of other psychopathological conditions.

15.
Sci Total Environ ; 710: 135484, 2020 Mar 25.
Article in English | MEDLINE | ID: mdl-31780160

ABSTRACT

Sound perception studies mostly depend on questionnaires with fixed indicators. Therefore, it is desirable to explore methods with dynamic outputs. The present study aims to explore the effects of sound perception in the urban environment on facial expressions using a software package named FaceReader, based on facial expression recognition (FER). The experiment involved three typical urban sound recordings, namely traffic noise, natural sound, and community sound. A questionnaire on the evaluation of sound perception was also used, for comparison. The results show that, first, FER is an effective tool for sound perception research, since it is capable of detecting differences in participants' reactions to different sounds and how their facial expressions change over time in response to those sounds, with mean differences in valence between recordings ranging from 0.019 to 0.059 (p < 0.05 or p < 0.01). In the natural sound environment, for example, valence increased by 0.04 in the first 15 s and then declined steadily by 0.004 every 20 s. Second, the expression indices, namely happy, sad, and surprised, changed significantly under the effect of sound perception. In the traffic sound environment, for example, happy decreased by 0.012, sad increased by 0.032, and surprised decreased by 0.018. Furthermore, social characteristics such as distance from living place to the natural environment (r = 0.313), inclination to communicate (r = 0.253), and preference for crowds (r = 0.296) have effects on facial expression. Finally, the comparison of FER and questionnaire survey results showed that, in the traffic noise recording, valence in the first 20 s best represents acoustic comfort and eventfulness; for natural sound, valence in the first 40 s best represents pleasantness; and for community sound, valence in the first 20 s of the recording best represents acoustic comfort, subjective loudness, and calmness.
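The windowed-valence comparisons in the final step (e.g., valence in the first 20 s versus questionnaire ratings) boil down to averaging a per-participant time series over an early window and correlating it with self-report. A sketch with simulated arrays; the output rate and effect sizes are assumptions:

```python
# Correlate early-window FaceReader valence with a questionnaire rating.
# All arrays are simulated stand-ins for the study's measurements.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
fs = 5                                   # assumed valence output rate (Hz)
n, duration_s = 30, 60

valence = rng.normal(0.0, 0.05, size=(n, duration_s * fs))
early_valence = valence[:, :20 * fs].mean(axis=1)   # first 20 s, per person

# Simulated self-report loosely coupled to early valence.
acoustic_comfort = 3 + 10 * early_valence + rng.normal(0, 0.3, n)

r, p = pearsonr(early_valence, acoustic_comfort)
print(f"first-20-s valence vs. comfort: r = {r:.2f}, p = {p:.3f}")
```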


Subject(s)
Facial Expression , Sound , Acoustics , Auditory Perception , Humans , Noise , Recognition, Psychology
17.
Front Psychol ; 10: 259, 2019.
Article in English | MEDLINE | ID: mdl-30809180

ABSTRACT

Facial expressions that show emotion play an important role in human social interactions. In previous theoretical studies, researchers have suggested that there are universal, prototypical facial expressions specific to basic emotions. However, the results of some empirical studies that tested the production of emotional facial expressions based on particular scenarios only partially supported the theoretical predictions. In addition, all of the previous studies were conducted in Western cultures. We investigated Japanese laypeople (n = 65) to provide further empirical evidence regarding the production of emotional facial expressions. The participants produced facial expressions for six basic emotions (anger, disgust, fear, happiness, sadness, and surprise) in specific scenarios. Under the baseline condition, the participants imitated photographs of prototypical facial expressions. The produced facial expressions were automatically coded using FaceReader in terms of the intensities of emotions and facial action units. In contrast to the photograph condition, where all target emotions were shown clearly, the scenario condition elicited the target emotions clearly only for happy and surprised expressions. The photograph and scenario conditions yielded different profiles for the intensities of emotions and facial action units associated with all of the facial expressions tested. These results provide partial support for the theory of universal, prototypical facial expressions for basic emotions but suggest the possibility that the theory may need to be modified based on empirical evidence.

18.
Front Psychiatry ; 9: 687, 2018.
Article in English | MEDLINE | ID: mdl-30618867

ABSTRACT

Depressive disorder (DD) shortens a healthy and productive human life, has significant public health costs, and is associated with high suicide rates. In depression, sadness and emotional misery manifest in facial expressions, psychomotor slowing, lack of energy, high tension, and attenuated sensory perception. Loss of appetite, changes in the taste of food, and the loss of pleasure in eating are important criteria in the diagnosis of DD. We hypothesized that a patient's facial expressions and emotional responses to different tastes can be used as diagnostic moderators for the development of a new contactless, computer-based method of diagnosing DD. Confirmation of this hypothesis could open a new perspective on early contactless, computer-based psychiatric diagnostic strategies and early identification of DD symptoms, as DD is an important issue in public mental health. The benefits of this method are evident from several perspectives: (I) patients can use a self-rating instrument to assess DD symptoms, which may act as an incentive to seek professional help; (II) family and community can use an instrument for early recognition of DD symptoms and suicidal tendencies, making it possible to encourage the individual to seek professional health care; (III) general practitioners gain a reliable instrument for preliminary diagnosis of DD in primary care, thus saving time and resources; (IV) public health benefits include early diagnosis and treatment of DD and better outcomes, with reductions in disability-adjusted life years and the global burden of the disease. It is nevertheless important to recognize the limitations and risks of contactless diagnosis of DD. As it is a self-assessment method, it is not possible to rule out false positives and false negatives; however, the method might still be used for early detection of DD symptoms. Further evaluation and expert appraisal of the method are needed, and the clinical diagnosis of DD should continue to be made by healthcare professionals. Finally, this method may prospectively predict DD at an early stage and may ensure higher-quality primary care for patients in the public health system.

19.
Meat Sci ; 119: 22-31, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27115865

ABSTRACT

The study determined the emotional reactions of consumers to hams using a face visualization method recorded by FaceReader (FR). The aims of the research were to determine the effect of the ham samples on the type of emotion, to examine more deeply the individual emotional reactions of consumers, and to analyse emotional variability with regard to the temporal measurement of impressions. The research involved testing the effectiveness of measuring emotions in response to the ongoing flavour impression after consumption of smoked hams. It was found that, for all of the assessed samples, neutral and negative emotions prevailed as the overall emotions recorded during assessment of the taste/flavour impression. The range of variability of the overall emotions depended more on individual consumer reactions and less on the properties of the assessed product. Consumers expressed varying emotions over time, and the ham samples evoked different emotional reactions over the duration of the impression.


Subject(s)
Consumer Behavior , Facial Expression , Meat Products , Red Meat , Adult , Animals , Emotions , Female , Humans , Male , Reading , Swine , Taste , Young Adult
20.
Food Res Int ; 64: 81-90, 2014 Oct.
Article in English | MEDLINE | ID: mdl-30011719

ABSTRACT

The aim of this study was to gain a better understanding of reactions elicited by the taste of foods, using the example of different juices. The reactions investigated were the rating behavior of self-reported spontaneous liking, various autonomic nervous system (ANS) responses, and implicit as well as explicit facial expressions. The following four hypotheses were tested: 1) Different sensory stimuli of juices elicit different ANS responses. 2) Differences in facial expressions elicited by the sensory stimuli of juices in an implicit and an explicit measurement approach can be detected using FaceReader 5. 3) Self-reported liking is correlated with the measured ANS parameters and the elicited facial expressions. 4) The measured ANS parameters, facial expressions, and self-reported liking allow identical differentiations between samples. The skin conductance level (SCL), skin temperature (ST), heart rate (HR), pulse volume amplitude (PVA), and facial expressions of 81 participants were analyzed during and shortly after tasting juice samples (implicit measurement approach). Additionally, participants were asked to show how much they liked the tasted sample with an intentional facial expression (explicit measurement approach). Banana, grapefruit, mixed vegetable, orange, and sauerkraut juices were used as sensory stimuli. The juices elicited significant differences in SCL and PVA responses and in the intensities of several facial expressions. For these parameters a moderate correlation with self-reported liking was found, allowing a differentiation between liked, disliked, and neutrally rated samples. The results show that self-reported liking cannot simply be explained by the measured ANS and implicit facial expression parameters, which instead provide different information. Significant differences in facial expressions between the implicit and explicit approaches were observed. In the implicit approach, participants showed hardly any positive emotions when tasting samples they liked, whereas in the explicit approach they displayed a high degree of positive emotions. In both cases negative emotions were shown more intensely for disliked samples.
