Visualization and Semantic Labeling of Mood States Based on Time-Series Features of Eye Gaze and Facial Expressions by Unsupervised Learning.
Madokoro, Hirokazu; Nix, Stephanie; Sato, Kazuhito.
Affiliation
  • Madokoro H; Faculty of Software and Information Science, Iwate Prefectural University, Takizawa 020-0693, Japan.
  • Nix S; Faculty of Software and Information Science, Iwate Prefectural University, Takizawa 020-0693, Japan.
  • Sato K; Faculty of Systems Science and Technology, Akita Prefectural University, Akita 015-0055, Japan.
Healthcare (Basel); 10(8), 2022 Aug 08.
Article in En | MEDLINE | ID: mdl-36011150
ABSTRACT
This study aims to develop a simple and reliable stress measurement and visualization system for stress management. We present a method for classifying and visualizing mood states based on unsupervised machine learning (ML) algorithms. The proposed method examines the relationship between mood states and categories extracted from facial expressions, gaze distribution area and density, and rapid eye movements (saccades) during human communication. Using a psychological check sheet and a communication video with an interlocutor, an original benchmark dataset was obtained from 20 subjects (10 male, 10 female) in their 20s over four or eight weeks at weekly intervals. We used the Profile of Mood States Second Edition (POMS2) psychological check sheet to extract total mood disturbance (TMD) and friendliness (F) scores. These two indicators were classified into five categories using self-organizing maps (SOMs) and the U-Matrix. The relationship between gaze and facial expressions was then analyzed for the five extracted categories. Data from subjects in the positive categories correlated positively with concentrated distributions of gaze and saccades. Regarding facial expressions, these subjects showed a constant expression time for intentional smiles, whereas subjects in negative categories showed a time difference in intentional smiles. Moreover, results from three comparative experiments demonstrated that adding gaze and facial-expression features to TMD and F clarified the category boundaries obtained from the U-Matrix. We verified that the combination of SOM and its two variants is the best combination for visualizing mood states.
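For illustration only, and not the authors' implementation: a minimal sketch of how weekly mood-state feature vectors (TMD and F, optionally augmented with gaze and facial-expression features) could be mapped with a SOM and visualized as a U-Matrix. The MiniSom library, map size, feature layout, and the synthetic data below are all assumptions.

```python
# Hedged sketch: SOM + U-Matrix visualization of mood-state features.
# Library choice (MiniSom), map size, and the synthetic feature matrix
# are illustrative assumptions, not the paper's implementation.
import numpy as np
from minisom import MiniSom

# Assumed feature matrix: one row per weekly session, with columns such as
# [TMD, F, gaze_area, gaze_density, saccade_rate] (synthetic placeholder).
rng = np.random.default_rng(0)
features = rng.normal(size=(160, 5))

# Standardize each feature before training.
features = (features - features.mean(axis=0)) / features.std(axis=0)

# Train a small 2-D SOM on the feature vectors.
som = MiniSom(x=10, y=10, input_len=features.shape[1],
              sigma=1.5, learning_rate=0.5, random_seed=0)
som.random_weights_init(features)
som.train_random(features, num_iteration=5000)

# U-Matrix: mean distance of each node to its neighbors; high-distance
# ridges suggest boundaries between mood-state categories.
u_matrix = som.distance_map()

# Best-matching unit (BMU) for each sample; groups of BMUs separated by
# U-Matrix ridges would be read off as categories (five in the paper).
bmus = np.array([som.winner(v) for v in features])
print(u_matrix.shape, bmus.shape)
```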
Keywords

Full text: 1 Database: MEDLINE Language: En Journal: Healthcare (Basel) Publication year: 2022 Document type: Article Country of affiliation: Japan
