Results 1 - 20 of 33
1.
Sensors (Basel) ; 22(14)2022 Jul 21.
Article in English | MEDLINE | ID: mdl-35891144

ABSTRACT

We examined the influence of groups of agents and the type of avatar on movement interference, and additionally studied the synchronization of the subject with the agent. To that end, we conducted experiments with human subjects to examine the influence of one, two, or three agents; of human versus robot avatars; and of the agent moving biologically versus linearly. We found that the main effect on movement interference was the number of agents; namely, three agents had significantly more influence on movement interference than one agent. These results suggest that the number of agents influences movement interference more than other avatar characteristics. For synchronization, a main effect of the type of agent was revealed: the human agent elicited stronger synchronization than the robotic agent. In this experiment, we introduced synchronization as an additional paradigm alongside interference, discovering that a group of agents can influence this behavioral level as well.


Subjects
Movement, Robotics, Humans
2.
Sensors (Basel) ; 21(24)2021 Dec 14.
Article in English | MEDLINE | ID: mdl-34960435

ABSTRACT

Human operators tend to experience increasing physical and mental workloads when performing teleoperation tasks in uncertain and dynamic environments. In addition, their performance is influenced by subjective factors, potentially leading to operational errors or task failure. Although agent-based methods offer a promising solution to these problems, human experience and intelligence remain necessary in teleoperation scenarios. In this paper, an integrated framework based on truncated quantile critics reinforcement learning is proposed for human-agent teleoperation that encompasses training, assessment, and agent-based arbitration. The proposed framework provides an expert training agent and a bilateral training and cooperation process to realize the co-optimization of agent and human, and it delivers efficient, quantifiable training feedback. Experiments were conducted to train subjects with the developed algorithm, and the performances of human-human and human-agent cooperation modes were compared. The results show that, with the assistance of an agent, subjects complete reaching and pick-and-place tasks in a shorter operational time, with a higher success rate and lower workload than in human-human cooperation.
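The abstract names truncated quantile critics (TQC) as the underlying reinforcement-learning algorithm. As a rough illustration of the technique only (the paper's teleoperation environment and hyperparameters are not public), here is a minimal sketch using the off-the-shelf TQC implementation from sb3-contrib, with the standard Pendulum-v1 task standing in for the teleoperation setup:

```python
# Minimal TQC training sketch with sb3-contrib; Pendulum-v1 is a placeholder
# for the paper's (non-public) teleoperation environment.
from sb3_contrib import TQC

model = TQC(
    "MlpPolicy",
    "Pendulum-v1",
    top_quantiles_to_drop_per_net=2,  # the "truncation": drop the most optimistic quantiles
    verbose=1,
)
model.learn(total_timesteps=50_000)
model.save("tqc_teleop_sketch")
```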


Subjects
Robotics, Algorithms, Feedback, Humans, Learning, User-Computer Interface
3.
Sensors (Basel) ; 21(24)2021 Dec 17.
Article in English | MEDLINE | ID: mdl-34960538

ABSTRACT

In recent years, conversational agents (CAs) have become ubiquitous in our daily routines. The technology has finally matured enough to advance the use of CAs in various domains, including commercial, healthcare, educational, political, industrial, and personal domains. In this study, the main areas in which CAs are successful are described, along with the main technologies that enable their creation: natural-language processing, deep learning, and technologies that integrate emotional aspects, which together allow CAs to conduct ongoing communication with humans. The technologies used for the evaluation of CAs and publicly available datasets are outlined. In addition, several areas for future research are identified to address moral and security issues, given the current state of CA-related technological developments. The uniqueness of our review is that it provides an overview of the concepts and building blocks of CAs and categorizes CAs according to their abilities and main application domains. In addition, the primary tools and datasets that may be useful for the development and evaluation of CAs of different categories are described. Finally, some thoughts and directions for future research are provided, and domains that may benefit from conversational agents are introduced.


Subjects
Communication, Goals, Humans, Natural Language Processing, Technology, Ocular Vision
4.
Sensors (Basel) ; 21(8)2021 Apr 09.
Article in English | MEDLINE | ID: mdl-33918868

ABSTRACT

Virtual agents have been widely used in human-agent collaborative work. One important problem with such collaboration is the attribution of responsibility as perceived by users. We focused on the relationship between the appearance of a virtual agent and the perceived attribution of responsibility. We conducted an experiment with five agents: an agent without an appearance, a human-like agent, a robot-like agent, a dog-like agent, and an angel-like agent. We measured the perceived agency and experience of each agent and conducted an experiment involving a sound-guessing game, in which participants listened to a sound and guessed what it was together with an agent. The game always ended in failure, and participants could not tell who had made the mistake, themselves or the agent. After the game, we asked participants how trustworthy they perceived the agents to be and to whom they attributed responsibility. Participants attributed less responsibility to themselves when interacting with the robot-like agent than when interacting with the angel-like agent. Furthermore, participants perceived the robot-like agent as the least trustworthy of all conditions. In addition, the agents' perceived experience correlated with the attribution of perceived responsibility, and the agents that made participants feel less responsible were trusted less. These results point to a relationship between an agent's appearance and the perceived attribution of responsibility, and suggest new design approaches for virtual agents in collaborative work.


Subjects
Emotions, Robotics, Virtual Reality, Animals, Humans, Perception
5.
Sensors (Basel) ; 21(19)2021 Oct 06.
Article in English | MEDLINE | ID: mdl-34640960

ABSTRACT

Smart home assistants, which enable users to control home appliances and can be used for holding entertaining conversations, have become an inseparable part of many people's homes. Recently, there have been many attempts to allow end-users to teach a home assistant new commands, responses, and rules, which can then be shared with a larger community. However, allowing end-users to teach an agent new responses that are shared with a large community opens the door to malicious users, who can teach the agent inappropriate responses in order to promote their own business, products, or political views. In this paper, we present a platform that enables users to collaboratively teach a smart home assistant (or chatbot) responses using natural language. We present a method for collectively detecting malicious users and for using the commands taught by detected malicious users to mitigate the activity of future malicious users. We ran an experiment with 192 subjects and demonstrate the effectiveness of our platform.

6.
J Med Internet Res ; 22(4): e13726, 2020 04 23.
Article in English | MEDLINE | ID: mdl-32324146

ABSTRACT

BACKGROUND: Assistive technologies have become more important owing to the aging population, especially when they foster healthy behaviors. Because of their natural interface, virtual agents are promising assistants for people in need of support. To engage people during an interaction, such assistants need to match the users' needs and preferences, especially with regard to social outcomes. OBJECTIVE: Prior research has already established the importance of an agent's appearance in human-agent interaction. As seniors can particularly benefit from the use of virtual agents to maintain their autonomy, it is important to investigate their special needs. However, there are almost no studies focusing on age-related differences with regard to appearance effects. METHODS: A 2×4 between-subjects design was used to investigate age-related differences in appearance effects in human-agent interaction. In this study, 46 seniors and 84 students interacted in a health scenario with a virtual agent whose appearance varied (cartoon-stylized humanoid agent, cartoon-stylized machine-like agent, more realistic humanoid agent, and nonembodied agent [voice only]). After the interaction, participants reported their evaluation of the agent, usage intention, perceived presence of the agent, bonding toward the agent, and overall evaluation of the interaction. RESULTS: The findings suggested that seniors evaluated the agent more positively (liked the agent more and evaluated it as more realistic, attractive, and sociable) and showed more bonding toward the agent, regardless of appearance, than did students. In addition, interaction effects were found. Seniors reported the highest usage intention for the cartoon-stylized humanoid agent, whereas students reported the lowest usage intention for this agent. The same pattern was found for participant bonding with the agent. Seniors showed more bonding when interacting with the cartoon-stylized humanoid agent or the voice-only agent, whereas students showed the least bonding when interacting with the cartoon-stylized humanoid agent. CONCLUSIONS: In health-related interactions, target-group-related differences exist with regard to a virtual assistant's appearance. When elderly individuals are the target group, a humanoid virtual assistant might trigger specific social responses and be evaluated more positively, at least in short-term interactions.


Subjects
Human Experimentation/standards, Virtual Reality, Age Factors, Female, Humans, Male
7.
Sensors (Basel) ; 20(19)2020 Sep 29.
Article in English | MEDLINE | ID: mdl-33003380

ABSTRACT

Intelligent agents that can interact with users through natural language are becoming increasingly common. Sometimes an intelligent agent may not correctly understand a user command or may not perform it properly. In such cases, the user might try a second time by giving the agent another, slightly different command. Giving an agent the ability to detect such user corrections might help it fix its own mistakes and avoid making them in the future. In this work, we consider the problem of automatically detecting user corrections using deep learning. We develop a multimodal architecture called SAIF, which detects such user corrections, taking as inputs the user's voice commands as well as their transcripts. Voice inputs allow SAIF to take advantage of sound cues, such as tone, speed, and word emphasis. In addition to sound cues, our model uses transcripts to determine whether a command is a correction to the previous command. Our model also receives internal input from the agent, indicating whether the previous command was executed successfully. Finally, we release a unique dataset in which users interacted with an intelligent agent assistant by giving it commands. This dataset includes labels on pairs of consecutive commands, which indicate whether the latter command is in fact a correction of the former. We show that SAIF outperforms current state-of-the-art methods on this dataset.
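To make the multimodal setup concrete, here is an illustrative PyTorch sketch of a correction detector with SAIF's three input types (audio features, a transcript embedding, and an execution-status flag). The encoder choices, dimensions, and names are assumptions for illustration, not SAIF's actual architecture:

```python
import torch
import torch.nn as nn

class CorrectionDetector(nn.Module):
    """Fuses audio, transcript, and execution-status inputs (illustrative only)."""
    def __init__(self, audio_dim=40, text_dim=768, hidden=128):
        super().__init__()
        self.audio_enc = nn.GRU(audio_dim, hidden, batch_first=True)  # prosody cues: tone, speed, emphasis
        self.text_proj = nn.Linear(text_dim, hidden)                  # transcript sentence embedding
        self.classifier = nn.Sequential(
            nn.Linear(hidden * 2 + 1, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, audio_frames, text_emb, exec_success):
        # audio_frames: (batch, time, audio_dim), e.g., MFCC frames
        # text_emb:     (batch, text_dim), embedding of the command transcript
        # exec_success: (batch, 1), 1.0 if the previous command executed successfully
        _, h = self.audio_enc(audio_frames)
        fused = torch.cat([h[-1], self.text_proj(text_emb), exec_success], dim=-1)
        return self.classifier(fused)  # logit: is this command a correction?

model = CorrectionDetector()
logit = model(torch.randn(2, 50, 40), torch.randn(2, 768), torch.ones(2, 1))
```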

8.
PeerJ Comput Sci ; 10: e2277, 2024.
Article in English | MEDLINE | ID: mdl-39314690

ABSTRACT

A pre-touch reaction, a response that occurs before physical contact, is an essential factor for natural human-agent interaction. Although numerous studies have investigated the effectiveness of pre-touch reaction design for virtual agents in virtual reality (VR) environments and for robots in physical environments, one area remains underexplored: displayed agents, i.e., on-screen computer-graphics agents. To design an appropriate pre-touch reaction for such a displayed agent, this article focuses on the display's physical boundary as a criterion for the agent's pre-touch reaction. We developed a displayed-agent system that can detect both touch events on the screen and the pre-touch behaviors of people interacting around the display. We then examined the effectiveness of the displayed agent's pre-touch reactions in experiments with human participants. The findings revealed that people significantly preferred pre-touch reactions over post-touch reactions in terms of perceived feelings.

9.
Sci Rep ; 14(1): 15850, 2024 07 09.
Article in English | MEDLINE | ID: mdl-38982070

ABSTRACT

Ingroup favoritism and intergroup discrimination can be mutually reinforcing during social interaction, threatening intergroup cooperation and the sustainability of societies. In two studies (N = 880), we investigated whether promoting prosocial outgroup altruism would weaken the ingroup favoritism cycle of influence. Using novel methods of human-agent interaction via a computer-mediated experimental platform, we introduced outgroup altruism by (i) nonadaptive artificial agents with preprogrammed outgroup altruistic behavior (Study 1; N = 400) and (ii) adaptive artificial agents whose altruistic behavior was informed by the prediction of a machine learning algorithm (Study 2; N = 480). A rating task ensured that the observed behavior did not result from the participant's awareness of the artificial agents. In Study 1, nonadaptive agents prompted ingroup members to withhold cooperation from ingroup agents and reinforced ingroup favoritism among humans. In Study 2, adaptive agents were able to weaken ingroup favoritism over time by maintaining a good reputation with both the ingroup and outgroup members, who perceived agents as being fairer than humans and rated agents as more human than humans. We conclude that a good reputation of the individual exhibiting outgroup altruism is necessary to weaken ingroup favoritism and improve intergroup cooperation. Thus, reputation is important for designing nudge agents.


Subjects
Altruism, Humans, Male, Female, Adult, Cooperative Behavior, Young Adult, Group Processes, Social Interaction, Interpersonal Relations, Adolescent
10.
Front Robot AI ; 11: 1456613, 2024.
Article in English | MEDLINE | ID: mdl-39445151

ABSTRACT

Introduction: In human-agent interaction, trust is often measured using human-trust constructs such as competence, benevolence, and integrity; however, it is unclear whether technology-trust constructs such as functionality, helpfulness, and reliability are more suitable. There is also evidence that the perception of "humanness," measured through anthropomorphism, varies with the characteristics of the agent, but dimensions of anthropomorphism are rarely highlighted in empirical studies. Methods: To study how different embodiments and qualities of speech influence the type of trust and the dimensions of anthropomorphism in the perception of an agent, we conducted an experiment using two agent "bodies," a speaker and a robot, employing four levels of "humanness of voice," and measured perception of the agent using human-trust, technology-trust, and Godspeed series questionnaires. Results: We found that the agents elicited both human and technology conceptions of trust with no significant difference between the two, and that differences in an agent's body and voice had no significant impact on trust, even though body and voice were each independently significant in the perception of anthropomorphism. Discussion: Interestingly, the results indicate that voice may be a stronger characteristic than physical appearance or body in shaping the perception of agents (aside from trust). We discuss the implications of our findings for research on human-agent interaction and highlight future research areas.

11.
Front Psychol ; 14: 1195059, 2023.
Article in English | MEDLINE | ID: mdl-37546466

ABSTRACT

Virtual reality (VR) environments are increasingly popular for various applications, and the appearance of virtual characters is a critical factor that influences user behaviors. In this study, we aimed to investigate the impact of avatar and agent appearances on pre-touch proxemics in VR. To achieve this goal, we designed experiments utilizing three user avatars (man/woman/robot) and three virtual agents (man/woman/robot). Specifically, we measured the pre-touch reaction distances to the face and body, i.e., the distances at which a person starts to feel uncomfortable before being touched. We examined how these distances varied based on the appearances of avatars and agents and on user gender. Our results revealed that the appearances of avatars and agents significantly impacted pre-touch reaction distances. Specifically, those using a female avatar tended to maintain larger distances before their face and body were touched, and people also preferred greater distances before being touched by a robot agent. Interestingly, we observed no effect of user gender on pre-touch reaction distances. These findings have implications for the design and implementation of VR systems, as they suggest that avatar and agent appearances play a significant role in shaping users' perceptions of pre-touch proxemics. Our study highlights the importance of considering these factors when creating immersive and socially acceptable VR experiences.

12.
Front Psychol ; 13: 920844, 2022.
Article in English | MEDLINE | ID: mdl-35992472

ABSTRACT

Recent advances in automation technology have increased the opportunities for collaboration between humans and multiple autonomous systems such as robots and self-driving cars. In research on autonomous system collaboration, the trust users have in autonomous systems is an important topic. Previous research suggests that the trust built by observing a task can be transferred to other tasks. However, such research focused on trust in one device or in several identical devices, not in multiple different devices. Thus, we do not know how trust changes in an environment involving the operation of multiple different devices, such as a construction site. We investigated whether trust can be transferred among multiple different devices, and we examined the effect of two factors on trust transfer among devices: the similarity among the devices and the agency attributed to each device. We found that the trust a user has in a device can be transferred to other devices, and that attributing a different agency to each device can clarify the distinction among devices, preventing trust from transferring.

13.
Front Robot AI ; 9: 701250, 2022.
Article in English | MEDLINE | ID: mdl-36246495

ABSTRACT

Natural and efficient communication with humans requires artificial agents that are able to understand the meaning of natural language. However, understanding natural language is non-trivial and requires proper grounding mechanisms to create links between words and corresponding perceptual information. Since the introduction of the "Symbol Grounding Problem" in 1990, many grounding approaches have been proposed that employ either supervised or unsupervised learning mechanisms. The latter have the advantage that no other agent is required to learn the correct groundings, while the former are often more sample-efficient and accurate but require the support of another agent, such as a human or another artificial agent. Although combining both paradigms seems natural, the combination has received little attention. This paper therefore proposes a hybrid grounding framework that combines both learning paradigms, so that it can utilize support from a tutor when available while still learning when no support is provided. Additionally, the framework is designed to learn in a continuous and open-ended manner, so that no explicit training phase is required. The proposed framework is evaluated in two different grounding scenarios: its unsupervised grounding component is compared to a state-of-the-art unsupervised Bayesian grounding framework, while the benefit of combining both paradigms is evaluated through the analysis of different feedback rates. The obtained results show that the employed unsupervised grounding mechanism outperforms the baseline in terms of accuracy, transparency, and deployability, and that combining both paradigms increases both the sample-efficiency and the accuracy of purely unsupervised grounding while ensuring that the framework can still learn the correct mappings when no supervision is available.
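As a toy illustration of the hybrid idea (not the paper's actual framework, whose components are richer), the sketch below accumulates unsupervised cross-situational co-occurrence evidence and lets optional tutor feedback add strongly weighted supervised evidence; all names and weights are made up:

```python
from collections import defaultdict

class HybridGrounder:
    """Toy hybrid grounding: unsupervised co-occurrence counts + tutor feedback."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(float))

    def observe(self, words, percepts):
        # Unsupervised: every co-occurring word/percept pair gains weak evidence.
        for w in words:
            for p in percepts:
                self.counts[w][p] += 1.0

    def tutor_feedback(self, word, percept, weight=10.0):
        # Supervised: an explicit tutor label adds strong evidence, if available.
        self.counts[word][percept] += weight

    def grounding(self, word):
        cands = self.counts[word]
        return max(cands, key=cands.get) if cands else None

g = HybridGrounder()
g.observe(["red", "ball"], ["color:red", "shape:sphere"])
g.observe(["red", "cube"], ["color:red", "shape:cube"])
g.tutor_feedback("ball", "shape:sphere")
print(g.grounding("red"))  # -> "color:red", disambiguated across situations
```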

14.
Front Robot AI ; 9: 934325, 2022.
Article in English | MEDLINE | ID: mdl-36504495

ABSTRACT

One possible benefit of robot-mediated education is that the robot can become a catalyst between people and facilitate learning. In this study, the authors focused on an asynchronous active learning method mediated by robots. Active learning is believed to help students continue learning and develop the ability to think independently. The authors therefore improved the UGA (User Generated Agent) system, which they had created for long-term active learning during the COVID-19 pandemic, to create an environment where children introduce books to each other via robots. They installed the robot in an elementary school and conducted an experiment lasting more than a year. The results confirmed that children continued to use the robot without getting bored, even over a long period of time. The authors also analyzed how the children created content by examining the items with particularly high view counts. In particular, they observed changes in children's behavior, such as spontaneous advertising activities, guidance from upperclassmen to lowerclassmen, collaboration among multiple people, and increased interest in technology, even under conditions where the new coronavirus was spreading and children's social interaction was inhibited.

15.
Math Biosci Eng ; 19(8): 7933-7951, 2022 05 27.
Article in English | MEDLINE | ID: mdl-35801451

ABSTRACT

Agent-based negotiation aims at automating the negotiation process on behalf of humans to save time and effort. While successful, current research mostly restricts communication between negotiation agents to offer exchange. Beyond this simple scheme, many real-world settings involve linguistic channels with which negotiators can express intentions, ask questions, and discuss plans. The information bandwidth of traditional negotiation is therefore restricted and grounded in the action space. Against this background, a negotiation agent called MCAN (multiple channel automated negotiation) is described that models negotiation with multiple communication channels as a Markov decision process with a hybrid action space. The agent employs a novel deep reinforcement learning technique to generate an efficient strategy, which can interact with different opponents, i.e., other negotiation agents or human players. Specifically, the agent leverages parametrized deep Q-networks (P-DQNs), which provide solutions for a hybrid discrete-continuous action space, thereby learning a comprehensive negotiation strategy that integrates linguistic communication skills and bidding strategies. Extensive experimental results show that the MCAN agent outperforms other agents as well as human players in terms of average utility. A high human-perception evaluation is also reported based on a user study. Moreover, a comparative experiment shows how the P-DQN algorithm promotes the performance of the MCAN agent.
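For readers unfamiliar with P-DQN, the sketch below shows its action-selection step for a hybrid action space: a parameter network proposes continuous parameters for every discrete action, and a Q-network scores the state together with all proposed parameters. Sizes and names are illustrative assumptions, not the MCAN agent's configuration:

```python
import torch
import torch.nn as nn

state_dim, n_actions, param_dim = 16, 4, 2  # assumed sizes

param_net = nn.Sequential(  # x_k(s): continuous parameters for each discrete action k
    nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions * param_dim)
)
q_net = nn.Sequential(      # Q(s, k, x): one value per discrete action
    nn.Linear(state_dim + n_actions * param_dim, 64), nn.ReLU(),
    nn.Linear(64, n_actions)
)

def select_action(state):
    params = param_net(state)                      # continuous part of the action
    q = q_net(torch.cat([state, params], dim=-1))  # score every discrete action
    k = q.argmax(dim=-1)                           # discrete part: greedy choice
    chosen = params.view(-1, n_actions, param_dim)[torch.arange(len(k)), k]
    return k, chosen

action, action_params = select_action(torch.randn(1, state_dim))
```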


Subjects
Communications Media, Negotiating, Algorithms, Communication, Humans, Learning, Negotiating/methods
16.
Front Psychol ; 13: 752748, 2022.
Article in English | MEDLINE | ID: mdl-35615198

ABSTRACT

In Japan, many incidents involving manga-like virtual agents have occurred recently, in which critics have argued that virtual agents used in public spaces are too sexual. A prior study termed this perception "moe-phobia." In many cases, critics have pointed to the agents' clothes. However, after reviewing actual moe-phobia incidents, I hypothesized that these incidents are associated not only with the agents' clothes but also with the situations in which the agents are used. I conducted a three-factor, two-level experiment to verify this hypothesis. The independent variables were the agents' clothes, the usage scenario, and the gender of the participants. The dependent variables were the agents' trustworthiness, familiarity, likability, sexuality, and suitability as perceived by humans. I ran the experiment with female and male groups and conducted a three-way ANOVA for each dependent variable in each group. I observed different tendencies in the impressions of the agents between the female and male groups; however, both groups showed the same tendency regarding perceived suitability. Both female and male participants judged the agents' suitability not only from their clothes but also from the scenario.
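As a minimal sketch of the analysis described above, a three-way factorial ANOVA can be run with statsmodels; the synthetic data and column names here are illustrative stand-ins, not the study's dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "clothes": rng.choice(["standard", "revealing"], n),
    "scenario": rng.choice(["public", "private"], n),
    "gender": rng.choice(["female", "male"], n),
    "suitability": rng.normal(4.0, 1.0, n),  # e.g., one 7-point dependent variable
})
model = ols("suitability ~ C(clothes) * C(scenario) * C(gender)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and all interactions
```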

17.
Front Neurogenom ; 3: 959578, 2022.
Article in English | MEDLINE | ID: mdl-38235446

ABSTRACT

Robot faces often differ from human faces in terms of their facial features (e.g., lack of eyebrows) and the spatial relationships between these features (e.g., disproportionately large eyes), which can influence the degree to which social brain areas [i.e., the Fusiform Face Area (FFA) and Superior Temporal Sulcus (STS); Haxby et al., 2000] process them as social individuals that can be discriminated from other agents in terms of their perceptual features and person attributes. Of interest in this work is whether robot stimuli are processed in a less social manner than human stimuli. If true, this could undermine human-robot interactions (HRIs), because human partners could potentially fail to perceive robots as individual agents with unique features and capabilities (a phenomenon known as outgroup homogeneity), potentially leading to miscalibration of trust and errors in the allocation of task responsibilities. In this experiment, we use the face inversion paradigm (as a proxy for neural activation in social brain areas) to examine whether face processing differs between human and robot face stimuli: if robot faces are perceived as less face-like than human faces, the difference in recognition performance for faces presented upright compared to upside down (i.e., the inversion effect) should be less pronounced for robot faces than for human faces. The results demonstrate a reduced face inversion effect with robot vs. human faces, supporting the hypothesis that robot faces are processed in a less face-like manner. This suggests that roboticists should attend carefully to the design of robot faces and evaluate them based on their ability to engage face-typical processes. Specific design recommendations on how to accomplish this goal are provided in the discussion.
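The inversion effect itself is a simple difference score: upright recognition accuracy minus inverted recognition accuracy, compared between face types. Below is a worked sketch with made-up per-participant data (the study's real data are not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-participant inversion effects (upright minus inverted accuracy).
human_effect = rng.normal(0.20, 0.05, 30)
robot_effect = rng.normal(0.08, 0.05, 30)

t, p = stats.ttest_rel(human_effect, robot_effect)  # within-subject comparison
print(f"mean effect: human={human_effect.mean():.2f}, robot={robot_effect.mean():.2f}, "
      f"t={t:.2f}, p={p:.4f}")  # smaller robot effect -> less face-like processing
```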

18.
Front Public Health ; 10: 1042478, 2022.
Article in English | MEDLINE | ID: mdl-36388374

ABSTRACT

The past two decades have seen exponential growth in demand for wireless access, growth that is projected to continue for years to come. Meeting the demand will necessarily bring about greater human exposure to microwave and radiofrequency (RF) radiation. Our knowledge regarding its health effects has increased; nevertheless, these effects have become a focal point of current interest and concern. The cellphone and allied wireless communication technologies have demonstrated their direct benefit to people in modern society. However, as for the impact on the radiation health and safety of humans who are unnecessarily subjected to various levels of RF exposure over prolonged durations or even over their lifetime, the jury is still out. Furthermore, there are consistent indications from epidemiological studies and animal investigations that RF exposure is probably carcinogenic to humans. The principle of ALARA (as low as reasonably achievable) ought to be adopted as a strategy for RF health and safety protection.


Subjects
Cell Phone, Radio Waves, Animals, Humans, Radio Waves/adverse effects, Carcinogenesis, Forecasting
19.
Front Artif Intell ; 5: 750763, 2022.
Article in English | MEDLINE | ID: mdl-35295867

ABSTRACT

In this paper, we discuss the development of an artificial theory of mind as foundational to an agent's ability to collaborate with human team members. Agents imbued with artificial social intelligence (ASI) will require various capabilities to gather the social data needed to inform an artificial theory of mind of their human counterparts. We draw from social signals theory and discuss a framework to guide consideration of the core features of artificial social intelligence. We discuss how human social intelligence, and the development of theory of mind, can contribute to the development of artificial social intelligence by forming a foundation on which agents can model, interpret, and predict the behaviors and mental states of humans to support human-agent interaction. Artificial social intelligence will need the processing capabilities to perceive, interpret, and generate combinations of social cues to operate within a human-agent team. An artificial theory of mind affords a structure by which a socially intelligent agent can be imbued with the ability to model its human counterparts and engage in effective human-agent interaction. Further, an artificial theory of mind can be used by an ASI to support transparent communication with humans, so that humans may better predict future system behavior based on their understanding of the agent, improving trust in artificial socially intelligent agents.

20.
Front Robot AI ; 8: 658348, 2021.
Article in English | MEDLINE | ID: mdl-34712700

ABSTRACT

The authors evaluate the extent to which a user's impression of an AI agent can be improved by giving the agent the abilities of self-estimation, thinking time, and coordination of risk tendency. The authors modified the algorithm of an AI agent in the cooperative game Hanabi to have all of these traits and investigated the change in users' impressions by having them play with the agent. A self-estimation task was used to evaluate the effect that the ability to read a user's intention had on impressions. The authors also show that an agent's thinking time influences the impression it makes, and they investigated the relationship between the concordance of the risk-taking tendencies of players and agents, the player's impression of agents, and the game experience. The results of the self-estimation task experiment showed that the more accurate the agent's self-estimation, the more likely the partner was to perceive humanity, affinity, intelligence, and communication skill in the agent. The authors also found that an agent that changes the length of its thinking time according to the priority of its action gives the impression of being smarter than an agent with a normal thinking time (when the player notices the difference in thinking time) or an agent that randomly changes its thinking time. The experiment on concordance of risk-taking tendencies shows that concordance influences the player's impression of agents. These results suggest that game-agent designers can improve the player's disposition toward an agent, and the game experience, by adjusting the agent's self-estimation level, thinking time, and risk-taking tendency according to the player's personality and inner state during the game.
