Results 1 - 2 of 2
1.
PLoS One; 16(11): e0259928, 2021.
Article in English | MEDLINE | ID: mdl-34807907

ABSTRACT

The COVID-19 pandemic continues to impact people worldwide, steadily depleting scarce healthcare resources. Medical Artificial Intelligence (AI) promises much-needed relief, but only if the technology is adopted at scale. The present research investigates people's intention to adopt medical AI, as well as the drivers of this adoption, in a representative study of two European countries (Denmark and France, N = 1068) during the initial phase of the COVID-19 pandemic. Results reveal AI aversion: only 1 in 10 individuals choose medical AI over human physicians in a hypothetical pre-hospital triage phase of COVID-19. Key predictors of medical AI adoption are people's trust in medical AI and, to a lesser extent, the trait of open-mindedness. More importantly, our results reveal that mistrust of and perceived uniqueness neglect from human physicians, as well as a lack of social belonging, significantly increase people's medical AI adoption. These results suggest that for medical AI to be widely adopted, people may need to express less confidence in human physicians and even feel disconnected from humanity. We discuss the social implications of these findings and propose that successful medical AI adoption policy should focus on trust-building measures without eroding trust in human physicians.


Subjects
Artificial Intelligence, Attitude, COVID-19/psychology, Telemedicine, Adult, Aged, COVID-19/epidemiology, Female, Humans, Male, Middle Aged, Socioeconomic Factors
2.
Sci Rep; 9(1): 13080, 2019 Sep 11.
Article in English | MEDLINE | ID: mdl-31511560

ABSTRACT

The development of artificial intelligence has led researchers to study the ethical principles that should guide machine behavior. The challenge in building machine morality based on people's moral decisions, however, is accounting for the biases in human moral decision-making. In seven studies, this paper investigates how people's personal perspectives and decision-making modes affect their decisions in the moral dilemmas faced by autonomous vehicles. Moreover, it determines the variations in people's moral decisions that can be attributed to the situational factors of the dilemmas. The reported studies demonstrate that people's moral decisions, regardless of the presented dilemma, are biased by their decision-making mode and personal perspective. Under intuitive moral decision-making, participants shift more towards a deontological doctrine by sacrificing the passenger instead of the pedestrian. In addition, once a personal perspective is made salient, participants preserve the lives associated with that perspective: the passenger perspective shifts towards sacrificing the pedestrian, and vice versa. These biases in people's moral decisions underline the social challenge in the design of a universal moral code for autonomous vehicles. We discuss the implications of our findings and provide directions for future research.


Subjects
Artificial Intelligence, Automobiles, Decision Making, Morals, Adult, Female, Humans, Male