Results 1 - 2 of 2
1.
JMIR Hum Factors; 11: e45494, 2024 Jan 26.
Article in English | MEDLINE | ID: mdl-38277201

ABSTRACT

BACKGROUND: Social robots are becoming increasingly important as companions in our daily lives. Consequently, humans expect to interact with them using the same mental models applied to human-human interactions, including the use of cospeech gestures. Research efforts have been devoted to understanding users' needs and developing robot behavioral models that can perceive the user's state and plan an appropriate reaction. Despite these efforts, some challenges remain open regarding the effect of robot embodiment and behavior on the perception of emotions.

OBJECTIVE: The aim of this study is twofold. First, it assesses the role of the robot's cospeech gestures and embodiment in the user's perceived emotions in terms of valence (stimulus pleasantness), arousal (intensity of the evoked emotion), and dominance (degree of control exerted by the stimulus). Second, it evaluates the robot's accuracy in identifying positive, negative, and neutral emotions displayed by interacting humans using 3 supervised machine learning algorithms: support vector machine, random forest, and K-nearest neighbor.

METHODS: The Pepper robot was used to elicit the 3 emotions in humans using a set of 60 images retrieved from a standardized database. In particular, 2 experimental conditions for emotion elicitation were performed with the Pepper robot: with static behavior or with coherent (COH) cospeech behavior. Furthermore, to evaluate the role of robot embodiment, a third elicitation condition was performed by asking participants to interact with a PC, where a graphical interface showed the same images. Each participant underwent only 1 of the 3 experimental conditions.

RESULTS: A total of 60 participants were recruited for this study, 20 for each experimental condition, for a total of 3600 interactions. The results showed significant differences (P<.05) in valence, arousal, and dominance when participants were stimulated by the Pepper robot behaving COH compared with the PC condition, underlining the importance of the robot's nonverbal communication and embodiment. A higher valence score was obtained for elicitation with the robot (both COH and static behavior) than with the PC. For emotion recognition, the K-nearest neighbor classifiers achieved the best accuracy. In particular, the COH modality achieved the highest accuracy (0.97) compared with the static behavior and PC elicitations (0.88 and 0.94, respectively).

CONCLUSIONS: The results suggest that the use of multimodal communication channels, such as cospeech and visual channels as in the COH modality, may improve the recognition accuracy of the user's emotional state and can reinforce the perceived emotion. Future studies should investigate the effect of age, culture, and cognitive profile on emotion perception and recognition, going beyond the limitations of this work.
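The classifier comparison described in the abstract can be illustrated with a minimal scikit-learn sketch. The abstract does not specify the feature extraction pipeline or dataset, so X and y below are hypothetical placeholders for per-interaction feature vectors and elicited emotion labels (negative/neutral/positive); the sketch only shows how the three named algorithms might be compared on cross-validated accuracy.

```python
# Minimal sketch, assuming placeholder features and labels; the real study's
# feature extraction and data are not described in the abstract.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 16))      # hypothetical per-interaction feature vectors
y = rng.integers(0, 3, size=300)    # 0 = negative, 1 = neutral, 2 = positive

classifiers = {
    "Support vector machine": SVC(kernel="rbf"),
    "Random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "K-nearest neighbor": KNeighborsClassifier(n_neighbors=5),
}

# Compare the 3 supervised algorithms on 5-fold cross-validated accuracy.
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.2f}")
```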


Subjects
Robotics, Humans, Emotions, Nonverbal Communication, Gestures, Perception
2.
Stud Health Technol Inform; 310: 1337-1338, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38270032

ABSTRACT

The European project GATEKEEPER aims to develop a platform and marketplace to ensure a healthier independent life for the aging population. Within this platform, the role of HL7 FHIR is to provide a shared logical data model for collecting data from heterogeneous living environments that can then be used by AI services; the Gatekeeper HL7 FHIR Implementation Guide was created for this purpose. Independent pilots used this IG, illustrating the impact, benefits, value, and scalability of the approach.
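As a rough illustration of the kind of shared logical data model the abstract refers to, the sketch below builds a single HL7 FHIR Observation resource in Python. The profile URL is a hypothetical placeholder and is not taken from the actual Gatekeeper HL7 FHIR Implementation Guide; LOINC 8867-4 is the standard code for heart rate.

```python
# Minimal sketch of a sensor reading expressed as an HL7 FHIR Observation.
# The meta.profile URL is a hypothetical placeholder; a real deployment would
# reference the corresponding profile in the Gatekeeper HL7 FHIR IG and post
# the resource to a FHIR server for validation against it.
import json

observation = {
    "resourceType": "Observation",
    "meta": {
        "profile": ["https://example.org/fhir/StructureDefinition/hr-observation"]
    },
    "status": "final",
    "code": {
        "coding": [
            {"system": "http://loinc.org", "code": "8867-4", "display": "Heart rate"}
        ]
    },
    "subject": {"reference": "Patient/example"},
    "effectiveDateTime": "2024-01-25T10:30:00Z",
    "valueQuantity": {
        "value": 72,
        "unit": "beats/minute",
        "system": "http://unitsofmeasure.org",
        "code": "/min",
    },
}

print(json.dumps(observation, indent=2))
```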


Subjects
Data Collection, Health Promotion, Humans, Aged