Results 1 - 4 of 4
1.
Games Health J; 12(6): 480-488, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37449840

ABSTRACT

Objective: Children with cerebral palsy (CP) present motor learning disorders and somatosensory dysfunction. Although many protocols use videogames with children with CP, few apply or examine motor learning principles. This study aims at (1) implementing a therapist-user-designer collaboration to adapt a videogame to the principles of motor learning and the characteristics of users with CP, and (2) piloting the effectiveness of these adaptations by analyzing the achievement of motor learning parameters (learning rate acquisition, retention, and transfer to motor and somatosensory function). Materials and Methods: Periodic interprofessional meetings were held to adapt a videogame, which requires controlling a joystick to travel through a maze, to motor learning principles. In a pilot validation, effects on unilateral upper limb function, gross manual dexterity, and somatosensory thresholds were assessed before and after a 10-week training period in 13 children with CP. Results: After the 10-week training with the adapted serious game, children showed learning rates above 90% and improvement in motor learning parameters across the sessions. Manual dexterity and pronation-supination of the dominant hand improved after training. No significant effects were found on somatosensory thresholds. Conclusion: Serious games are useful as motor learning tools for improving motor function in children with CP. Cooperative work among professionals and users is advisable for designing efficient videogames according to rehabilitation best practices.


Subjects
Cerebral Palsy, Video Games, Humans, Child, Motor Skills, Cerebral Palsy/rehabilitation, Upper Extremity, Learning
2.
PLoS One; 16(5): e0251057, 2021.
Article in English | MEDLINE | ID: mdl-33979375

ABSTRACT

Laughter and smiling are significant facial expressions in human-to-human communication. We present a computational model for generating the facial expressions associated with laughter and smiling, in order to facilitate the synthesis of such expressions in virtual characters. In addition, a new method to reproduce these types of laughter is proposed and validated using databases of generic and specific facial smile expressions. In particular, a proprietary database of laugh and smile expressions is also presented; this database lists the different types of laughs classified and generated in this work. The generated expressions are validated through a user study with 71 subjects, which concluded that the virtual character expressions built using the presented model are perceptually acceptable in quality and facial expression fidelity. Finally, for generalization purposes, an additional analysis shows that the results are independent of the virtual character's appearance.
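The abstract does not describe how the model drives a character's face. As a purely illustrative sketch, assuming a blendshape-driven virtual character and hypothetical onset/apex/offset phase durations, the Python snippet below interpolates a single "smile" blendshape weight over time; the names and timings are assumptions, not the paper's model.

```python
# Hypothetical sketch: animating a smile on a virtual character by
# interpolating a blendshape weight over onset, apex and offset phases.
# Blendshape naming and phase durations are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class SmilePhases:
    onset: float = 0.5   # seconds to reach full expression
    apex: float = 1.0    # seconds held at full expression
    offset: float = 0.8  # seconds to return to neutral


def smile_weight(t: float, phases: SmilePhases) -> float:
    """Return a 0..1 activation for a 'smile' blendshape at time t."""
    if t < phases.onset:                       # ramp up
        return t / phases.onset
    t -= phases.onset
    if t < phases.apex:                        # hold at apex
        return 1.0
    t -= phases.apex
    if t < phases.offset:                      # ramp down
        return 1.0 - t / phases.offset
    return 0.0                                 # back to neutral


# Sample the activation at 10 Hz over 2.5 seconds.
weights = [round(smile_weight(i / 10, SmilePhases()), 2) for i in range(26)]
print(weights)
```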


Subjects
Databases, Factual, Facial Expression, Laughter/physiology, Smiling/physiology, Computer Graphics, Computer Simulation, Emotions/physiology, Face/physiology, Humans, Pictorial Works as Topic, Software
3.
Sensors (Basel); 20(23), 2020 Nov 24.
Article in English | MEDLINE | ID: mdl-33255347

ABSTRACT

In this work, an affective computing approach is used to study human-robot interaction, with a social robot employed to validate facial expressions in the wild. Our overall goal is to evaluate whether a social robot can interact convincingly with human users and recognize their emotions through facial expressions, contextual cues, and bio-signals; this work focuses in particular on analyzing facial expressions. A social robot is used to validate a pre-trained convolutional neural network (CNN) that recognizes facial expressions. Facial expression recognition plays an important role in enabling robots to recognize and understand human emotion, and robots equipped with expression recognition capabilities can also be a useful tool for obtaining feedback from users. The designed experiment allows a neural network trained on facial expressions to be evaluated with a social robot in a real environment. We compare the accuracy of the CNN with that of human experts, and we analyze the interaction, attention, and difficulty of performing each expression for 29 non-expert users. In the experiment, the robot leads the users to perform different facial expressions in a motivating and entertaining way. At the end of the experiment, the users are asked about their experience with the robot. Finally, a set of experts and the CNN classify the expressions. The results show that a social robot is an adequate interaction paradigm for evaluating facial expression recognition.
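The abstract does not specify the pre-trained CNN. The sketch below is a minimal, hypothetical PyTorch example of the inference step only: a small stand-in network classifies a grayscale face crop into seven basic expressions. The architecture, input size, and class labels are assumptions, not the authors' model.

```python
# Minimal sketch of the inference step for facial expression recognition
# with a CNN, assuming a model fine-tuned on seven basic expressions.
# The architecture and class labels are illustrative placeholders.
import torch
import torch.nn as nn

EXPRESSIONS = ["neutral", "happiness", "sadness", "surprise",
               "anger", "disgust", "fear"]


class TinyExpressionCNN(nn.Module):
    def __init__(self, num_classes: int = len(EXPRESSIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 12 * 12, num_classes)

    def forward(self, x):
        x = self.features(x)              # (N, 32, 12, 12) for a 48x48 input
        return self.classifier(x.flatten(1))


model = TinyExpressionCNN().eval()
face = torch.rand(1, 1, 48, 48)           # stand-in for a face crop from the robot camera
with torch.no_grad():
    probs = model(face).softmax(dim=1)
print(EXPRESSIONS[int(probs.argmax())], float(probs.max()))
```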


Subjects
Facial Recognition, Robotics, Emotions, Facial Expression, Humans, Social Interaction
4.
Sensors (Basel); 20(17), 2020 Sep 01.
Article in English | MEDLINE | ID: mdl-32882884

ABSTRACT

The recognition of human activities is usually considered a simple procedure, but problems occur in complex scenes involving high speeds. Activity prediction using Artificial Intelligence (AI) through numerical analysis has attracted the attention of several researchers. Human activity recognition is an important challenge in various fields, with many valuable applications including smart homes, assistive robotics, human-computer interaction, and improved protection in areas such as security, transport, education, and medicine, for example through fall monitoring or assistance with medication intake for elderly people. The advances and success of deep learning techniques in various computer vision applications encourage their use in video processing. Representing the person is an important challenge in analyzing human behavior through activity; a person in a video sequence can be described by their motion, skeleton, and/or spatial characteristics. In this paper, we present a novel approach to human activity recognition from videos that uses a Recurrent Neural Network (RNN) for activity classification and a Convolutional Neural Network (CNN) with a new structure of the human skeleton for feature representation. The aims of this work are to improve the representation of the person by combining different features and to exploit the new RNN structure for activity classification. The performance of the proposed approach is evaluated on the CAD-60 RGB-D sensor dataset, and the experimental results show an average error rate of 4.5%.
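As a rough illustration of the CNN-plus-RNN pipeline described above, the hedged PyTorch sketch below extracts per-frame features from skeleton joint coordinates with a small convolution and classifies the sequence with an LSTM. The joint count, layer sizes, and number of activity classes are assumptions (CAD-60 skeletons have 15 joints with 3D coordinates; the class count here is illustrative), not the authors' architecture.

```python
# Hedged sketch: per-frame CNN over skeleton joints + LSTM over the sequence.
# Layer sizes and the number of activity classes are illustrative assumptions.
import torch
import torch.nn as nn


class SkeletonActivityNet(nn.Module):
    def __init__(self, joints: int = 15, classes: int = 12):
        super().__init__()
        # 1D convolution over the joint axis of each frame (x, y, z channels).
        self.frame_cnn = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.rnn = nn.LSTM(input_size=16, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, classes)

    def forward(self, x):                      # x: (batch, frames, joints, 3)
        b, t, j, c = x.shape
        feats = self.frame_cnn(x.reshape(b * t, j, c).transpose(1, 2))
        feats = feats.reshape(b, t, -1)        # (batch, frames, 16)
        _, (h, _) = self.rnn(feats)
        return self.head(h[-1])                # class logits per clip


clip = torch.rand(2, 30, 15, 3)                # 2 clips, 30 frames, 15 joints
print(SkeletonActivityNet()(clip).shape)       # torch.Size([2, 12])
```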


Subjects
Deep Learning, Human Activities, Skeleton, Aged, Artificial Intelligence, Humans, Neural Networks, Computer