1.
Int J Soc Robot; 15(2): 165-183, 2023.
Article in English | MEDLINE | ID: mdl-36467283

ABSTRACT

This study scrutinizes the impacts of utilizing a socially assistive robot, the RASA robot, during speech therapy sessions for children with language disorders. Two capabilities were developed for the robotic platform to enhance child-robot interactions during speech therapy interventions: facial expression communication (comprising both recognition and expression) and lip-syncing. Facial expression recognition was implemented by training several well-known CNN architectures on one of the most extensive facial expression databases, AffectNet, and then adapting them via transfer learning on the CK+ dataset. The robot's lip-syncing capability was designed in two steps. The first step involved designing precise schemes of the articulatory elements needed to pronounce the Persian phonemes (i.e., consonants and vowels). The second step involved developing an algorithm that pronounces words by disassembling them into their component phonemes and then morphing these into one another successively. To pursue the study's primary goal, two comparable groups of children with language disorders were formed: an intervention group and a control group. The intervention group attended therapy sessions in which the robot acted as the therapist's assistant, while the control group interacted only with the human therapist. The study's first purpose was to compare the children's engagement while playing a mimic game with the affective robot versus with the therapist, assessed via video coding. The second objective was to evaluate the efficacy of the robot's presence in speech therapy sessions alongside the therapist, measured by administering the Persian Test of Language Development (Persian TOLD). In the first scenario, playing with the affective robot proved more engaging than playing with the therapist. Furthermore, statistical analysis of the results indicates that participating in robot-assisted speech therapy (RAST) sessions improves the achievements of children with language disorders compared with conventional speech therapy interventions.
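The two-stage recognition pipeline described in this abstract (training a well-known CNN backbone on the large AffectNet corpus, then adapting it to CK+ via transfer learning) follows a common pattern. Below is a minimal sketch of that strategy, assuming a Keras/TensorFlow stack; the backbone choice (MobileNetV2), the number of expression classes, the count of frozen layers, and the dataset loaders are all placeholder assumptions for illustration, not details taken from the paper.

```python
# Sketch of a two-stage facial-expression training pipeline (assumed setup).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7  # assumption: basic facial-expression categories

def build_expression_net():
    # Start from a standard CNN backbone pretrained on ImageNet.
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    x = layers.GlobalAveragePooling2D()(base.output)
    out = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return models.Model(base.input, out), base

model, backbone = build_expression_net()

# Stage 1: train the whole network on the large AffectNet corpus.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(affectnet_train, validation_data=affectnet_val, epochs=20)

# Stage 2: transfer learning -- freeze most of the backbone and
# fine-tune only the top layers on the much smaller CK+ dataset.
for layer in backbone.layers[:-20]:
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(ckplus_train, validation_data=ckplus_val, epochs=10)
```

The fit calls are left commented because the AffectNet and CK+ loaders are placeholders; with real datasets, a lower learning rate in the second stage is the usual choice so that fine-tuning does not overwrite the pretrained features.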

2.
Int J Soc Robot; 1-15, 2022 Oct 28.
Article in English | MEDLINE | ID: mdl-36320591

ABSTRACT

The lack of educational facilities for a burgeoning world population, financial barriers, and the growing tendency toward inclusive education have all helped channel a general inclination toward various educational assistive technologies, e.g., socially assistive robots. Employing social robots in diverse educational scenarios can enhance learners' achievements by motivating them and sustaining their engagement. This study is devoted to manufacturing, and investigating the acceptance of, a novel social robot named APO, designed to improve the lip-reading skills of hearing-impaired individuals through an educational game. To accomplish this objective, we proposed and implemented a lip-syncing system on the APO social robot. The robot's potential with regard to its primary goals, tutoring and practicing lip-reading, was examined through two main experiments. The first experiment evaluated the clarity of the utterances articulated by the robot, quantified by comparing the robot's articulation of words with a video of a human teacher lip-syncing the same words. Because adults are more skilled at lip-reading than children, twenty-one adult participants were asked to identify the words lip-synced in the two scenarios (the robot's articulation and the video recording of the human teacher). The number of words that participants correctly recognized from the robot's and the human teacher's articulations was then used as a metric to evaluate the quality of the designed lip-syncing system. This experiment revealed no significant difference between participants' recognition of multisyllabic words articulated by the robot and by the human tutor. Following the validation of the proposed articulatory system, the acceptance of the robot by a group of hearing-impaired participants, eighteen adults and sixteen children, was scrutinized in the second experiment. The adults and the children were asked to complete two standard questionnaires, the UTAUT and the SAM, respectively. Our findings revealed that the robot received higher scores than the lip-syncing video on most questionnaire items, which can be interpreted as a greater intention to use the APO robot as an assistive technology for lip-reading instruction among both adults and children.
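The clarity evaluation described in this abstract reduces to counting, per participant, how many words were correctly recognized from the robot's articulation versus from the human teacher's video, and then testing whether the two conditions differ. Below is a minimal sketch of that comparison; the counts are invented placeholders, and the paired Wilcoxon signed-rank test is one reasonable choice for such paired count data, not necessarily the test the authors used.

```python
# Sketch of a per-participant recognition comparison (hypothetical data).
import numpy as np
from scipy import stats

# correct_robot[i] / correct_video[i]: number of words participant i
# recognized correctly in each condition (placeholder values).
correct_robot = np.array([14, 12, 15, 13, 16])
correct_video = np.array([13, 12, 14, 14, 15])

stat, p_value = stats.wilcoxon(correct_robot, correct_video)
print(f"Wilcoxon statistic = {stat:.2f}, p = {p_value:.3f}")
if p_value > 0.05:
    print("No significant difference between robot and human-teacher articulation.")
```

A non-significant result under this kind of test is what supports the abstract's conclusion that the robot's articulation is comparable in clarity to the human teacher's.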
