Results 1 - 9 of 9
1.
J Med Ethics ; 2024 Aug 07.
Article in English | MEDLINE | ID: mdl-39117396

ABSTRACT

It is commonly accepted that clinicians are ethically obligated to disclose their use of medical machine learning systems to patients, and that failure to do so would amount to a moral fault for which clinicians ought to be held accountable. Call this 'the disclosure thesis.' Four main arguments have been, or could be, given to support the disclosure thesis in the ethics literature: the risk-based argument, the rights-based argument, the materiality argument and the autonomy argument. In this article, I argue that each of these four arguments is unconvincing and, therefore, that the disclosure thesis ought to be rejected. I suggest that mandating disclosure may even risk harming patients by providing stakeholders with a way to avoid accountability for harm that results from improper applications or uses of these systems.

2.
Am J Bioeth ; 24(10): 58-69, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38662360

ABSTRACT

A novel advantage of the use of machine learning (ML) systems in medicine is their potential to continue learning from new data after implementation in clinical practice. To date, considerations of the ethical questions raised by the design and use of adaptive machine learning systems in medicine have, for the most part, been confined to discussion of the so-called "update problem," which concerns how regulators should approach systems whose performance and parameters continue to change even after they have received regulatory approval. In this paper, we draw attention to a prior ethical question: whether the continuous learning that will occur in such systems after their initial deployment should be classified, and regulated, as medical research. We argue that there is a strong prima facie case that the use of continuous learning in medical ML systems should be categorized, and regulated, as research and that individuals whose treatment involves such systems should be treated as research subjects.


Subject(s)
Machine Learning, Humans, Machine Learning/ethics, Biomedical Research/ethics
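The abstract above turns on what it means for a deployed system to keep learning. As a point of reference only, here is a minimal sketch (not drawn from the article) of such an adaptive system, using scikit-learn's incremental SGDClassifier on entirely hypothetical data: the model in clinical use a year after deployment no longer has the parameters that were originally approved.

```python
# Minimal illustrative sketch of "continuous learning" after deployment.
# All data, features, and thresholds below are hypothetical.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Snapshot approved at deployment, trained on an initial (hypothetical) cohort.
model = SGDClassifier(loss="log_loss", random_state=0)
X_init = rng.normal(size=(500, 10))        # hypothetical patient features
y_init = (X_init[:, 0] > 0).astype(int)    # hypothetical outcome labels
model.partial_fit(X_init, y_init, classes=np.array([0, 1]))
coef_at_approval = model.coef_.copy()

# After deployment the system keeps updating on each month's new patients,
# so the model clinicians use later is not the model that was approved.
for month in range(12):
    X_month = rng.normal(loc=0.05 * month, size=(100, 10))  # drifting population
    y_month = (X_month[:, 0] > 0).astype(int)
    model.partial_fit(X_month, y_month)

print("parameter change since approval:",
      float(np.linalg.norm(model.coef_ - coef_at_approval)))
```
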
3.
Camb Q Healthc Ethics ; : 1-10, 2023 Jan 10.
Article in English | MEDLINE | ID: mdl-36624634

ABSTRACT

Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are "black boxes." The initial response in the literature was a demand for "explainable AI." However, recently, several authors have suggested that making AI more explainable or "interpretable" is likely to come at the cost of the accuracy of these systems, and that prioritizing interpretability in medical AI may constitute a "lethal prejudice." In this paper, we defend the value of interpretability in the context of the use of AI in medicine. Clinicians may prefer interpretable systems over more accurate black boxes, and this preference is itself sufficient to give designers of AI reason to prefer more interpretable systems in order to ensure that AI is adopted and its benefits realized. Moreover, clinicians may be justified in this preference. Achieving the downstream benefits from AI is critically dependent on how the outputs of these systems are interpreted by physicians and patients. A preference for highly accurate black box AI systems over less accurate but more interpretable systems may itself constitute a form of lethal prejudice that may diminish the benefits of AI to patients, and perhaps even harm them.
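For readers unfamiliar with the trade-off the abstract describes, the following is a minimal illustrative sketch, not taken from the paper: an interpretable logistic regression whose coefficients a clinician can inspect, compared against a black-box gradient boosting model on synthetic data. Which model scores higher will vary with the data; the point is only that the interpretable model offers a simple account of its output and the black box does not.

```python
# Hypothetical comparison of an interpretable model and a "black box."
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

interpretable = LogisticRegression(max_iter=1000).fit(X_train, y_train)
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:",
      accuracy_score(y_test, interpretable.predict(X_test)))
print("gradient boosting accuracy:  ",
      accuracy_score(y_test, black_box.predict(X_test)))

# The interpretable model's output can be traced to weighted inputs a clinician
# can sanity-check; the boosted ensemble offers no comparably simple account.
print("logistic regression coefficients:", np.round(interpretable.coef_[0], 2))
```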

4.
J Am Med Inform Assoc ; 30(2): 361-366, 2023 01 18.
Article in English | MEDLINE | ID: mdl-36377970

ABSTRACT

OBJECTIVES: Machine learning (ML) has the potential to facilitate "continual learning" in medicine, in which an ML system continues to evolve in response to exposure to new data over time, even after being deployed in a clinical setting. In this article, we provide a tutorial on the range of ethical issues, thus far neglected in the literature, that are raised by the use of such "adaptive" ML systems in medicine. TARGET AUDIENCE: The target audiences for this tutorial are the developers of ML AI systems, healthcare regulators, the broader medical informatics community, and practicing clinicians. SCOPE: Discussions of adaptive ML systems to date have overlooked the distinction between two sorts of variance that such systems may exhibit: diachronic evolution (change over time) and synchronic variation (differences between contemporaneous instantiations of the algorithm at different sites). They have also underestimated the significance of the latter. We highlight the challenges that diachronic evolution and synchronic variation present for the quality of patient care, informed consent, and equity, and discuss the complex ethical trade-offs involved in the design of such systems.


Subject(s)
Artificial Intelligence, Medicine, Humans, Machine Learning, Algorithms, Delivery of Health Care
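As an illustration only (not from the article), the sketch below separates the two sorts of variance named in the abstract, on hypothetical data: two copies of one approved model are deployed at different sites and keep updating on local patients, so each drifts from the approved snapshot (diachronic evolution), and at any given moment the two sites' copies also differ from each other (synchronic variation).

```python
# Hypothetical two-site deployment of a single approved adaptive model.
import copy
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)

# Approved snapshot, trained on an initial (hypothetical) cohort.
approved = SGDClassifier(loss="log_loss", random_state=0)
X0 = rng.normal(size=(500, 6))
y0 = (X0.sum(axis=1) > 0).astype(int)
approved.partial_fit(X0, y0, classes=np.array([0, 1]))

site_a = copy.deepcopy(approved)   # instance deployed at hospital A
site_b = copy.deepcopy(approved)   # instance deployed at hospital B

for month in range(12):
    # Each site keeps learning from a different local patient population.
    Xa = rng.normal(loc=0.3, size=(80, 6))
    ya = (Xa.sum(axis=1) > 1.0).astype(int)
    Xb = rng.normal(loc=-0.3, size=(80, 6))
    yb = (Xb.sum(axis=1) > -1.0).astype(int)
    site_a.partial_fit(Xa, ya)
    site_b.partial_fit(Xb, yb)

print("diachronic drift, site A:", float(np.linalg.norm(site_a.coef_ - approved.coef_)))
print("diachronic drift, site B:", float(np.linalg.norm(site_b.coef_ - approved.coef_)))
print("synchronic gap, A vs B:  ", float(np.linalg.norm(site_a.coef_ - site_b.coef_)))
```
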
5.
Camb Q Healthc Ethics ; : 1-10, 2022 Dec 16.
Article in English | MEDLINE | ID: mdl-36524245

ABSTRACT

Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are "black boxes." The initial response in the literature was a demand for "explainable AI." However, recently, several authors have suggested that making AI more explainable or "interpretable" is likely to come at the cost of the accuracy of these systems, and that prioritizing interpretability in medical AI may constitute a "lethal prejudice." In this article, we defend the value of interpretability in the context of the use of AI in medicine. Clinicians may prefer interpretable systems over more accurate black boxes, and this preference is itself sufficient to give designers of AI reason to prefer more interpretable systems in order to ensure that AI is adopted and its benefits realized. Moreover, clinicians may be justified in this preference. Achieving the downstream benefits from AI is critically dependent on how the outputs of these systems are interpreted by physicians and patients. A preference for highly accurate black box AI systems over less accurate but more interpretable systems may itself constitute a form of lethal prejudice that may diminish the benefits of AI to patients, and perhaps even harm them.

7.
J Med Ethics ; 46(7): 478-481, 2020 07.
Article in English | MEDLINE | ID: mdl-32220870

ABSTRACT

Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks: detecting diabetic retinopathy from images, predicting hospital readmissions, aiding in the discovery of new drugs, etc. AI's progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust in clinical practice. In this paper, I will argue that there is merit to these concerns, since AI systems can be relied on, and are capable of reliability, but cannot be trusted, and are not capable of trustworthiness. Insofar as patients are required to rely on AI systems for their medical decision-making, there is potential for this to produce a deficit of trust in relationships in clinical practice.


Subject(s)
Artificial Intelligence, Trust, Humans, Reproducibility of Results
8.
Hastings Cent Rep ; 50(1): 14-17, 2020 Jan.
Article in English | MEDLINE | ID: mdl-32068275

ABSTRACT

In the much-celebrated book Deep Medicine, Eric Topol argues that the development of artificial intelligence for health care will lead to a dramatic shift in the culture and practice of medicine. In the next several decades, he suggests, AI will become sophisticated enough that many of the everyday tasks of physicians could be delegated to it. Topol is perhaps the most articulate advocate of the benefits of AI in medicine, but he is hardly alone in spruiking its potential to allow physicians to dedicate more of their time and attention to providing empathetic care for their patients in the future. Unfortunately, several factors suggest a radically different picture for the future of health care. Far from facilitating a return to a time of closer doctor-patient relationships, the use of medical AI seems likely to further erode therapeutic relationships and threaten professional and patient satisfaction.

9.
J Med Ethics ; 45(12): 817-820, 2019 12.
Article in English | MEDLINE | ID: mdl-31462453

ABSTRACT

Advocates of physician-assisted suicide (PAS) often argue that, although the provision of PAS is morally permissible for persons with terminal, somatic illnesses, it is impermissible for patients suffering from psychiatric conditions. This claim is justified on the basis that psychiatric illnesses have certain morally relevant characteristics and/or implications that distinguish them from their somatic counterparts. In this paper, I address three arguments of this sort. First, that psychiatric conditions compromise a person's decision-making capacity. Second, that we cannot have sufficient certainty that a person's psychiatric condition is untreatable. Third, that the institutionalisation of PAS for mental illnesses presents morally unacceptable risks. I argue that, if we accept that PAS is permissible for patients with somatic conditions, then none of these three arguments is strong enough to demonstrate that the exclusion of psychiatric patients from access to PAS is justifiable.


Subject(s)
Mental Disorders, Prejudice, Suicide, Assisted/ethics, Decision Making/ethics, Humans, Mental Competency/psychology, Mental Disorders/diagnosis, Prejudice/ethics, Prejudice/psychology, Prognosis