1.
Hum Factors. 2015 Aug;57(5):895-909.
Article in English | MEDLINE | ID: mdl-25921302

ABSTRACT

OBJECTIVE: We examined whether participants would trust an agent that was similar to them more than an agent that was dissimilar to them. BACKGROUND: Trust is an important psychological factor determining the acceptance of smart systems. Because smart systems tend to be treated like humans, and similarity has been shown to increase trust in humans, we expected that similarity would increase trust in a virtual agent. METHODS: In a driving simulator experiment, participants (N = 111) were presented with a virtual agent that was either similar or dissimilar to them. This agent functioned as their virtual driver, and trust in the agent was measured. Furthermore, we measured how trust changed with experience. RESULTS: Prior to experiencing the agent, the similar agent was trusted more than the dissimilar agent. This effect was mediated by perceived similarity. After experiencing the agent, the similar agent was still trusted more than the dissimilar agent. CONCLUSION: Just as similarity between humans increases trust in another human, similarity also increases trust in a virtual agent. When such an agent is presented as a virtual driver in a self-driving car, it could possibly enhance the trust people have in such a car. APPLICATION: Displaying a virtual driver that is similar to the human driver might increase trust in a self-driving car.


Subjects
Automobile Driving/psychology , Trust/psychology , User-Computer Interface , Adult , Female , Humans , Male , Middle Aged
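
The mediation claim in this abstract (perceived similarity mediating the effect of agent similarity on trust) can be illustrated with a minimal regression sketch. This is a hedged illustration, not the authors' analysis pipeline; the data file, column names (condition, perceived_similarity, trust), and the Baron-Kenny-style steps are assumptions for demonstration only.

```python
# Minimal sketch of a Baron-Kenny-style mediation check, assuming a
# hypothetical data frame with one row per participant:
#   condition             0 = dissimilar agent, 1 = similar agent
#   perceived_similarity  self-reported similarity to the agent
#   trust                 trust-in-agent score before experiencing the agent
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trust_study.csv")  # hypothetical file name

# Step 1: total effect of condition on trust.
total = smf.ols("trust ~ condition", data=df).fit()

# Step 2: effect of condition on the proposed mediator.
mediator = smf.ols("perceived_similarity ~ condition", data=df).fit()

# Step 3: effect of condition on trust controlling for the mediator;
# a shrunken condition coefficient is consistent with mediation.
direct = smf.ols("trust ~ condition + perceived_similarity", data=df).fit()

for name, model in [("total", total), ("mediator", mediator), ("direct", direct)]:
    print(name, model.params.to_dict())
```

A formal test would typically add a bootstrapped indirect-effect estimate rather than relying on the step-wise comparison alone.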
2.
Hum Factors. 2012 Oct;54(5):799-810.
Article in English | MEDLINE | ID: mdl-23156624

ABSTRACT

OBJECTIVE: We examine whether trust in smart systems is generated analogously to trust in humans and whether the automation level of smart systems affects trustworthiness and acceptability of those systems. BACKGROUND: Trust is an important factor when considering acceptability of automation technology. As shared goals lead to social trust, and intelligent machines tend to be treated like humans, the authors expected that shared driving goals would also lead to increased trustworthiness and acceptability of adaptive cruise control (ACC) systems. METHOD: In an experiment, participants (N = 57) were presented with descriptions of three ACCs with different automation levels that were described as systems that either shared their driving goals or did not. Trustworthiness and acceptability of all the ACCs were measured. RESULTS: ACCs sharing the driving goals of the user were more trustworthy and acceptable than were ACCs not sharing the driving goals of the user. Furthermore, ACCs that took over driving tasks while providing information were more trustworthy and acceptable than were ACCs that took over driving tasks without providing information. Trustworthiness mediated the effects of both driving goals and automation level on acceptability of ACCs. CONCLUSION: As when trusting other humans, trusting smart systems depends on those systems sharing the user's goals. Furthermore, based on their description, smart systems that take over tasks are judged more trustworthy and acceptable when they also provide information. APPLICATION: For optimal acceptability of smart systems, goals of the user should be shared by the smart systems, and smart systems should provide information to their user.


Subjects
Accidents, Traffic/prevention & control , Automobile Driving/psychology , Man-Machine Systems , Trust , Analysis of Variance , Artificial Intelligence , Automation , Automobiles , Female , Humans , Male , Netherlands , Protective Devices
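
This second study crosses shared driving goals with automation level and reports both factor effects on trustworthiness. A minimal two-way ANOVA sketch of that comparison is shown below; it is a simplified between-subjects illustration under assumed column names (shared_goals, automation_level, trustworthiness) and a hypothetical file, not the authors' repeated-measures analysis.

```python
# Minimal sketch of a two-way ANOVA on trustworthiness, assuming a
# hypothetical data frame with one row per participant x ACC description:
#   shared_goals      "shared" or "not_shared"
#   automation_level  e.g. "info_only", "takeover_with_info", "takeover_no_info"
#   trustworthiness   rated trustworthiness of the described ACC
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("acc_study.csv")  # hypothetical file name

model = smf.ols(
    "trustworthiness ~ C(shared_goals) * C(automation_level)", data=df
).fit()
print(anova_lm(model, typ=2))  # main effects and interaction
```

Because each participant in the original experiment rated three ACC descriptions, a faithful reanalysis would use a mixed or repeated-measures model instead of this independent-groups sketch.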