ABSTRACT
One possible benefit of robot-mediated education is that the robot can act as a catalyst between people and thereby facilitate learning. In this study, the authors focused on an asynchronous active learning method mediated by robots. Active learning is believed to help students continue learning and develop the ability to think independently. The authors therefore improved their UGA (User Generated Agent) system, originally built for long-term active learning during the COVID-19 pandemic, to create an environment in which children introduce books to one another via robots. They installed the robot in an elementary school and conducted an experiment lasting more than a year. The results confirmed that the robot continued to be used, without the children losing interest, even over this long period. The authors also analyzed how the children created content, focusing on the items with particularly high numbers of views. In particular, they observed changes in the children's behavior, such as spontaneous advertising activities, guidance from upperclassmen to lowerclassmen, collaboration among multiple children, and increased interest in technology, even under conditions where COVID-19 was spreading and children's social interaction was restricted.
ABSTRACT
Driven by the rapid development of artificial intelligence (AI) and anthropomorphic robotic systems, the various possibilities and risks of such technologies have become a topic of urgent discussion. Although science fiction (SF) works are often cited as references for visions of future developments, this framework of discourse may not be appropriate for serious discussion owing to technical inaccuracies resulting from its reliance on entertainment media. However, SF works could help researchers understand how people might react to new AI and robotic systems. Hence, classifying depictions of AI in science fiction may help researchers communicate more clearly by identifying the SF elements to which their work is similar or dissimilar. In this study, we analyzed depictions of artificial intelligence in SF together with expert critics and writers. First, 115 AI systems described in SF were selected based on three criteria: diversity of intelligence, social aspects, and extension of human intelligence. Nine elements representing their characteristics were then analyzed using clustering and principal component analysis. The results suggest the prevalence of four distinctive categories, namely human-like characters, intelligent machines, helpers such as vehicles and equipment, and infrastructure, which may be mapped onto a two-dimensional space whose axes represent intelligence and humanity. This research contributes to the public relations of AI and robotic technologies by analyzing shared imaginative visions of AI in society based on SF works.
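The analysis pipeline described above (a works-by-elements matrix reduced to a 2D plane by principal component analysis, with clusters recovering the four categories) can be sketched as follows. This is a minimal illustrative reconstruction, not the authors' code: the feature matrix is random stand-in data, and plain k-means with k=4 is assumed as the clustering method since the abstract does not name one.

```python
# Illustrative sketch (not the authors' code): cluster a synthetic
# "works x elements" matrix and project it to 2D with PCA, mirroring
# the scale described in the abstract (115 works, 9 elements, 4 groups).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(115, 9))      # stand-in for the nine element scores

# --- PCA via SVD: project onto the first two principal components ---
Xc = X - X.mean(axis=0)            # center each element column
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
coords = Xc @ Vt[:2].T             # each work as a point in a 2D plane

# --- plain k-means (k=4), standing in for the four categories ---
def kmeans(data, k=4, iters=50, seed=0):
    r = np.random.default_rng(seed)
    centers = data[r.choice(len(data), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center, then recompute centers
        labels = np.argmin(((data[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([data[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels

labels = kmeans(coords)
print(coords.shape, sorted(set(labels)))
```

With real scored data, the two principal axes could then be inspected for interpretations such as the "intelligence" and "humanity" dimensions reported in the study.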
ABSTRACT
The authors evaluate the extent to which a user's impression of an AI agent can be improved by giving the agent the abilities of self-estimation, variable thinking time, and coordination of risk-taking tendency. They modified the algorithm of an AI agent for the cooperative game Hanabi to have all of these traits and investigated changes in users' impressions through play sessions with the agent. A self-estimation task was used to evaluate how the agent's ability to read a user's intention affected the impression it made; the authors also show that an agent's thinking time influences this impression, and they investigated the relationship between the concordance of players' and agents' risk-taking tendencies, players' impressions of agents, and the game experience. The self-estimation task experiment showed that the more accurate the agent's self-estimation, the more likely the partner was to perceive humanity, affinity, intelligence, and communication skill in the agent. The authors also found that an agent that varies its thinking time according to the priority of its action gives the impression of being smarter than an agent with constant thinking time or an agent that varies its thinking time randomly, provided the player notices the difference. The experiment on concordance of risk-taking tendency showed that this concordance also influences players' impressions of agents. These results suggest that game agent designers can improve a player's disposition toward an agent, and the game experience, by adjusting the agent's self-estimation level, thinking time, and risk-taking tendency according to the player's personality and inner state during the game.
ABSTRACT
In recent years, there have been several attempts to develop remote communication devices that use sensory modalities other than speech to induce a positive experience with one's conversation partner. Hugvie, specifically, is a human-shaped pillow and remote communication device that enables users to combine a hugging experience with telecommunication to improve the quality of remote communication. The present research is based on the hypothesis that using Hugvie maintains users' trust toward their conversation partners in situations prone to suspicion. The level of trust felt toward other remote players was compared between participants using Hugvie and those using a basic communication device while playing a modified version of Werewolf, a conversation-based game, adapted to evaluate trust. Although the regular version of Werewolf always has winners and losers, the rules were modified to allow a scenario in which no enemy is present among the players and all players win if they trust one another. We examined the effect of using Hugvie during Werewolf on players' trust toward each other, and our results demonstrated that participants using Hugvie maintained their level of trust toward the other players.