ABSTRACT
Current national and international guidelines for the ethical design and development of artificial intelligence (AI) and robotics emphasize ethical theory. Various governing and advisory bodies have generated sets of broad ethical principles, which institutional decisionmakers are encouraged to apply to particular practical decisions. Although much of this literature examines the ethics of designing and developing AI and robotics, medical institutions typically must make purchase and deployment decisions about technologies that have already been designed and developed. The primary problem facing medical institutions is not one of ethical design but of ethical deployment. The purpose of this paper is to develop a practical model by which medical institutions may make ethical deployment decisions about ready-made advanced technologies. Our slogan is "more process, less principles." Ethically sound decisionmaking requires that the process by which medical institutions make such decisions include participatory, deliberative, and conservative elements. We argue that our model preserves the strengths of existing frameworks, avoids their shortcomings, and delivers its own moral, practical, and epistemic advantages.
Subjects
Artificial Intelligence, Robotics, Humans, Ethical Theory

ABSTRACT
This article addresses ethical concerns with the use of electronic health records (EHRs) by physicians in clinical practice. It presents arguments for two claims. First, requiring physicians to maintain patient EHRs for medically unnecessary tasks is likely contributing to increased burnout, decreased quality of care, and potential risks to patient safety. Second, medical institutions have ethical reasons to employ medical scribes to maintain patient EHRs. Finally, this article reviews central objections to employing medical scribes and provides responses to each.
Subjects
Documentation, Physicians, Electronic Health Records, Humans, Patient Satisfaction

ABSTRACT
As costs decline and technology inevitably improves, current trends suggest that artificial intelligence (AI) and a variety of "carebots" will increasingly be adopted in medical care. Medical ethicists have long expressed concerns that such technologies remove the human element from medicine, resulting in dehumanization and depersonalized care. However, we argue that where shame presents a barrier to medical care, it is sometimes ethically permissible and even desirable to deploy AI/carebots because (i) dehumanization in medicine is not always morally wrong, and (ii) dehumanization can sometimes better promote and protect important medical values. Shame is often a consequence of the human-to-human element of medical care and can prevent patients from seeking treatment and from disclosing important information to their healthcare provider. Conditions and treatments that are shame-inducing offer opportunities for introducing AI/carebots in a manner that removes the human element of medicine but does so ethically. We outline numerous examples of shame-inducing interactions and how they are overcome by implementing existing and expected developments of AI/carebot technology that remove the human element from care.
Subjects
Artificial Intelligence, Patient Care, Dehumanization, Humans, Shame, Technology

ABSTRACT
The role and importance of empathy in clinical practice have been widely discussed. This paper focuses on the ideal of clinical empathy as involving both cognitive understanding and affective resonance. I argue that this account is subject to a number of objections: affective resonance may serve more as a liability than as a benefit in clinical settings, and utilizing this capacity is not clearly supported by the relevant empirical literature. Instead, I argue that the ideal account of empathy in medicine remains cognitive, though there is a central role for expressing empathic concern toward patients.