Results 1 - 20 of 38
5.
Am J Bioeth ; 24(7): 13-26, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38226965

ABSTRACT

When making substituted judgments for incapacitated patients, surrogates often struggle to guess what the patient would want if they had capacity. Surrogates may also agonize over having the (sole) responsibility of making such a determination. To address such concerns, a Patient Preference Predictor (PPP) has been proposed that would use an algorithm to infer the treatment preferences of individual patients from population-level data about the known preferences of people with similar demographic characteristics. However, critics have suggested that even if such a PPP were more accurate, on average, than human surrogates in identifying patient preferences, the proposed algorithm would nevertheless fail to respect the patient's (former) autonomy since it draws on the 'wrong' kind of data: namely, data that are not specific to the individual patient and which therefore may not reflect their actual values, or their reasons for having the preferences they do. Taking such criticisms on board, we here propose a new approach: the Personalized Patient Preference Predictor (P4). The P4 is based on recent advances in machine learning, which allow technologies including large language models to be more cheaply and efficiently 'fine-tuned' on person-specific data. The P4, unlike the PPP, would be able to infer an individual patient's preferences from material (e.g., prior treatment decisions) that is in fact specific to them. Thus, we argue, in addition to being potentially more accurate at the individual level than the previously proposed PPP, the predictions of a P4 would also more directly reflect each patient's own reasons and values. In this article, we review recent discoveries in artificial intelligence research that suggest a P4 is technically feasible, and argue that, if it is developed and appropriately deployed, it should assuage some of the main autonomy-based concerns of critics of the original PPP. We then consider various objections to our proposal and offer some tentative replies.
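
As a rough illustration of the kind of person-specific inference the P4 proposal envisages, the sketch below trains a tiny text classifier on a hypothetical patient's prior documented decisions and queries it about a new scenario. All data, labels, and the modelling choice (TF-IDF plus logistic regression rather than a fine-tuned large language model) are illustrative assumptions, not the authors' implementation.

# Minimal sketch of a "personalised" preference predictor: a small text
# classifier trained only on one hypothetical patient's prior documented
# decisions, then queried about a new treatment scenario. This is NOT the
# authors' system; the article discusses fine-tuning large language models
# on much richer person-specific material.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical person-specific training data: prior decisions by this patient
prior_decisions = [
    "Declined intubation during previous ICU stay, prioritised comfort",
    "Accepted chemotherapy despite side effects to attend daughter's wedding",
    "Refused dialysis, said quality of life matters more than duration",
    "Agreed to surgery with good prospects of full recovery",
]
labels = [0, 1, 0, 1]  # 1 = accepted the intervention, 0 = declined it

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(prior_decisions, labels)

# Query: a new scenario the surrogate faces
scenario = ["Prolonged mechanical ventilation with low chance of regaining independence"]
prob_accept = model.predict_proba(scenario)[0][1]
print(f"Estimated probability the patient would accept: {prob_accept:.2f}")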


Subject(s)
Judgment , Patient Preference , Humans , Personal Autonomy , Algorithms , Machine Learning/ethics , Decision Making/ethics
6.
Bioethics ; 38(5): 383-390, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38523587

ABSTRACT

After a wave of breakthroughs in image-based medical diagnostics and risk prediction models, machine learning (ML) has turned into a normal science. However, prominent researchers claim that, owing to the recent staggering successes of large language models, another paradigm shift in medical ML is imminent: from single-purpose applications toward generalist models driven by natural language. This article investigates the implications of this paradigm shift for the ethical debate. Focusing on issues such as trust, transparency, threats to patient autonomy, responsibility in the collaboration of clinicians and ML models, fairness, and privacy, it will be argued that the main problems will be continuous with the current debate. However, owing to the way large language models function, the complexity of all these problems increases. In addition, the article discusses some profound challenges for the clinical evaluation of large language models, as well as threats to the reproducibility and replicability of studies of large language models in medicine due to corporate interests.


Subject(s)
Machine Learning , Humans , Machine Learning/ethics , Personal Autonomy , Trust , Privacy , Reproducibility of Results , Ethics, Medical
7.
Bioethics ; 38(5): 391-400, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38554069

ABSTRACT

Machine-learning algorithms have the potential to revolutionise diagnostic and prognostic tasks in health care, yet algorithmic performance levels can be materially worse for subgroups that have been underrepresented in algorithmic training data. Given this epistemic deficit, the inclusion of underrepresented groups in algorithmic processes can result in harm. Yet delaying the deployment of algorithmic systems until more equitable results can be achieved would avoidably and foreseeably lead to a significant number of unnecessary deaths in well-represented populations. Faced with this dilemma between equity and utility, we draw on two case studies involving breast cancer and melanoma to argue for the selective deployment of diagnostic and prognostic tools for some well-represented groups, even if this results in the temporary exclusion of underrepresented patients from algorithmic approaches. We argue that this approach is justifiable when the inclusion of underrepresented patients would cause them to be harmed. While the context of historic injustice poses a considerable challenge for the ethical acceptability of selective algorithmic deployment strategies, we argue that, at least for the case studies addressed in this article, the issue of historic injustice is better addressed through nonalgorithmic measures, including being transparent with patients about the nature of the current epistemic deficits, providing additional services to algorithmically excluded populations, and through urgent commitments to gather additional algorithmic training data from excluded populations, paving the way for universal algorithmic deployment that is accurate for all patient groups. These commitments should be supported by regulation and, where necessary, government funding to ensure that any delays for excluded groups are kept to the minimum. We offer an ethical algorithm for algorithms, showing when to ethically delay, expedite, or selectively deploy algorithmic systems in healthcare settings.
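
As a purely illustrative sketch of the kind of decision rule such an "ethical algorithm for algorithms" might involve, the snippet below deploys a diagnostic model only for subgroups whose externally validated performance clears pre-specified floors and flags the remainder for urgent data collection and a non-algorithmic pathway. Group names, metrics, and thresholds are hypothetical placeholders, not figures from the article.

# Hedged sketch of a selective-deployment rule: deploy per subgroup only where
# validated performance meets pre-specified floors; otherwise withhold and
# trigger the nonalgorithmic measures the abstract describes.
SENSITIVITY_FLOOR = 0.90
SPECIFICITY_FLOOR = 0.85

validated_performance = {
    # subgroup: (sensitivity, specificity) from an external validation study
    "group_A": (0.95, 0.91),
    "group_B": (0.78, 0.88),  # underrepresented in the training data
}

def deployment_decision(performance):
    decisions = {}
    for group, (sens, spec) in performance.items():
        if sens >= SENSITIVITY_FLOOR and spec >= SPECIFICITY_FLOOR:
            decisions[group] = "deploy"
        else:
            decisions[group] = "withhold: gather more training data, offer non-algorithmic pathway"
    return decisions

print(deployment_decision(validated_performance))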


Subject(s)
Algorithms , Artificial Intelligence , Humans , Female , Artificial Intelligence/ethics , Breast Neoplasms , Melanoma , Delivery of Health Care/ethics , Machine Learning/ethics , Social Justice , Prognosis
8.
Sci Eng Ethics ; 30(4): 27, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38888795

ABSTRACT

Artificial intelligence (AI) has long been recognised as a challenge to responsibility. Much of this discourse has been framed around robots, such as autonomous weapons or self-driving cars, where we arguably lack control over a machine's behaviour and therefore struggle to identify an agent that can be held accountable. However, most of today's AI is based on machine-learning technology that does not act on its own, but rather serves as a decision-support tool, automatically analysing data to help human agents make better decisions. I argue that decision-support tools pose a challenge to responsibility that goes beyond the familiar problem of finding someone to blame or punish for the behaviour of agent-like systems. Namely, they pose a problem for what we might call "decision ownership": they make it difficult to identify human agents to whom we can attribute value-judgements that are reflected in decisions. Drawing on recent philosophical literature on responsibility and its various facets, I argue that this is primarily a problem of attributability rather than of accountability. This particular responsibility problem comes in different forms and degrees, most obviously when an AI provides direct recommendations for actions, but also, less obviously, when it provides mere descriptive information on the basis of which a decision is made.


Subject(s)
Artificial Intelligence , Decision Making , Social Responsibility , Humans , Artificial Intelligence/ethics , Decision Making/ethics , Decision Support Techniques , Judgment , Machine Learning/ethics , Ownership , Robotics/ethics
10.
Psychol Med ; 51(15): 2515-2521, 2021 11.
Article in English | MEDLINE | ID: mdl-32536358

ABSTRACT

Recent advances in machine learning (ML) promise far-reaching improvements across medical care, not least within psychiatry. While to date no psychiatric application of ML constitutes standard clinical practice, it seems crucial to get ahead of these developments and address their ethical challenges early on. Following a short general introduction concerning ML in psychiatry, we do so by focusing on schizophrenia as a paradigmatic case. Based on recent research employing ML to further the diagnosis, treatment, and prediction of schizophrenia, we discuss three hypothetical case studies of ML applications with a view to their ethical dimensions. Throughout this discussion, we follow the principlist framework by Tom Beauchamp and James Childress to analyse potential problems in detail. In particular, we structure our analysis around their principles of beneficence, non-maleficence, respect for autonomy, and justice. We conclude with a call for cautious optimism concerning the implementation of ML in psychiatry if close attention is paid to the particular intricacies of psychiatric disorders and its success is evaluated based on tangible clinical benefit for patients.


Subject(s)
Machine Learning , Psychiatry/methods , Schizophrenia , Algorithms , Bioethics , Diagnosis, Computer-Assisted/ethics , Diagnosis, Computer-Assisted/methods , Humans , Machine Learning/ethics , Schizophrenia/diagnosis , Schizophrenia/therapy
11.
Psychol Med ; 51(15): 2522-2524, 2021 11.
Article in English | MEDLINE | ID: mdl-33975655

ABSTRACT

The clinical interview is the psychiatrist's data gathering procedure. However, the clinical interview is not a defined entity in the way that 'vitals' are defined as measurements of blood pressure, heart rate, respiration rate, temperature, and oxygen saturation. There are as many ways to approach a clinical interview as there are psychiatrists; and trainees can learn as many ways of performing and formulating the clinical interview as there are instructors (Nestler, 1990). Even in the same clinical setting, two clinicians might interview the same patient and conduct very different examinations and reach different treatment recommendations. From the perspective of data science, this mismatch is not one of personal style or idiosyncrasy but rather one of uncertain salience: neither the clinical interview nor the data thereby generated is operationalized and, therefore, neither can be rigorously evaluated, tested, or optimized.


Subject(s)
Interview, Psychological/methods , Machine Learning , Psychiatry/methods , Schizophrenia/diagnosis , Diagnosis, Computer-Assisted/ethics , Diagnosis, Computer-Assisted/methods , Humans , Machine Learning/ethics , Psychiatry/ethics
13.
Hum Brain Mapp ; 41(6): 1435-1444, 2020 04 15.
Article in English | MEDLINE | ID: mdl-31804003

ABSTRACT

Computer systems for medical diagnosis based on machine learning are not mere science fiction. Despite undisputed potential benefits, such systems may also raise problems. Two (interconnected) issues are particularly significant from an ethical point of view: The first issue is that epistemic opacity is at odds with a common desire for understanding and potentially undermines information rights. The second (related) issue concerns the assignment of responsibility in cases of failure. The core of the two issues seems to be that understanding and responsibility are concepts that are intrinsically tied to the discursive practice of giving and asking for reasons. The challenge is to find ways to make the outcomes of machine learning algorithms compatible with our discursive practice. This comes down to the claim that we should try to integrate discursive elements into machine learning algorithms. Under the title of "explainable AI", initiatives heading in this direction are already under way. Extensive research in this field is needed to find adequate solutions.
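
By way of illustration only (not drawn from the article), the sketch below shows one simple "explainable AI" technique, permutation feature importance, which attaches a rudimentary human-readable reason (which inputs the model actually relies on) to a diagnostic classifier's output; the dataset and model are generic stand-ins.

# Illustrative sketch: permutation feature importance as a simple post-hoc
# explanation. Each feature is shuffled in turn and the resulting drop in
# test accuracy is measured; large drops mark features the model depends on.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(clf, X_test, y_test, n_repeats=10, random_state=0)
top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")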


Subject(s)
Algorithms , Diagnosis, Computer-Assisted/ethics , Machine Learning/ethics , Artificial Intelligence , Confidentiality , Evidence-Based Medicine , Humans , Magnetic Resonance Imaging
14.
Bull World Health Organ ; 98(4): 270-276, 2020 Apr 01.
Article in English | MEDLINE | ID: mdl-32284651

ABSTRACT

The application of digital technology to psychiatry research is rapidly leading to new discoveries and capabilities in the field of mobile health. However, the increase in opportunities to passively collect vast amounts of detailed information on study participants coupled with advances in statistical techniques that enable machine learning models to process such information has raised novel ethical dilemmas regarding researchers' duties to: (i) monitor adverse events and intervene accordingly; (ii) obtain fully informed, voluntary consent; (iii) protect the privacy of participants; and (iv) increase the transparency of powerful, machine learning models to ensure they can be applied ethically and fairly in psychiatric care. This review highlights emerging ethical challenges and unresolved ethical questions in mobile health research and provides recommendations on how mobile health researchers can address these issues in practice. Ultimately, the hope is that this review will facilitate continued discussion on how to achieve best practice in mobile health research within psychiatry.


Subject(s)
Ethics, Research , Machine Learning/ethics , Psychiatry , Telemedicine/ethics , Informed Consent , Privacy
15.
Eur J Health Law ; 27(3): 242-258, 2020 05 19.
Article in English | MEDLINE | ID: mdl-33652397

ABSTRACT

The use of machine learning (ML) in medicine is becoming increasingly fundamental for analysing complex problems by discovering associations among different types of information and for generating knowledge for medical decision support. Many regulatory and ethical issues should be considered. Some relevant EU provisions, such as the General Data Protection Regulation, are applicable. However, the regulatory framework for developing and marketing a new health technology implementing ML may be quite complex. Other issues include legal liability and the attribution of negligence in case of errors. Some of these concerns could be at least partially resolved if the ML software is classified as a 'medical device', a category covered by EU and national provisions. In conclusion, the challenge is to understand how sustainable the regulatory system is in relation to ML innovation and how legal procedures should be revised to adapt them to the current regulatory framework.


Subject(s)
Machine Learning/ethics , Machine Learning/legislation & jurisprudence , Machine Learning/standards , Medical Informatics , Software , Bias , Confidentiality/legislation & jurisprudence , Decision Making/ethics , Drug Development , Drug Discovery , Humans , Malpractice , Medical Device Legislation , Precision Medicine , Risk Management , Safety/legislation & jurisprudence , Trust
18.
Behav Sci Law ; 37(3): 214-222, 2019 May.
Article in English | MEDLINE | ID: mdl-30609102

ABSTRACT

For decades, our ability to predict suicide has remained at near-chance levels. Machine learning has recently emerged as a promising tool for advancing suicide science, particularly in the domain of suicide prediction. The present review provides an introduction to machine learning and its potential application to open questions in suicide research. Although only a few studies have implemented machine learning for suicide prediction, results to date indicate considerable improvement in accuracy and positive predictive value. Potential barriers to algorithm integration into clinical practice are discussed, as well as attendant ethical issues. Overall, machine learning approaches hold promise for accurate, scalable, and effective suicide risk detection; however, many critical questions and issues remain unexplored.
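
A worked example, with hypothetical numbers, of why positive predictive value (PPV) is the pivotal metric for rare outcomes such as suicide: even a sensitive and specific model yields mostly false positives when the base rate is low.

# Hypothetical illustration of PPV as a function of base rate.
def ppv(sensitivity, specificity, base_rate):
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# A model with 90% sensitivity and 90% specificity:
print(f"PPV at 1% base rate:  {ppv(0.90, 0.90, 0.01):.2%}")  # roughly 8%
print(f"PPV at 10% base rate: {ppv(0.90, 0.90, 0.10):.2%}")  # roughly 50%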


Subject(s)
Ethics, Medical , Machine Learning/legislation & jurisprudence , Suicide/ethics , Suicide/legislation & jurisprudence , Algorithms , Cluster Analysis , Decision Support Techniques , Humans , Longitudinal Studies , Machine Learning/ethics , Probability , Research , Risk Assessment/legislation & jurisprudence , Unsupervised Machine Learning/ethics , Unsupervised Machine Learning/legislation & jurisprudence , Unsupervised Machine Learning/statistics & numerical data , Suicide Prevention
19.
Sci Eng Ethics ; 25(5): 1389-1407, 2019 10.
Article in English | MEDLINE | ID: mdl-30357558

ABSTRACT

This paper argues that even though massive technological unemployment will likely be one of the results of automation, we will not need to institute mass-scale redistribution of wealth (such as would be involved in, e.g., instituting universal basic income) to deal with its consequences. Instead, reasons are given for cautious optimism about the standards of living the newly unemployed workers may expect in the (almost) fully-automated future. It is not claimed that these predictions will certainly bear out. Rather, they are no less likely to come to fruition than the predictions of those authors who predict that massive technological unemployment will lead to the suffering of the masses on such a scale that significant redistributive policies will have to be instituted to alleviate it. Additionally, the paper challenges the idea that the existence of a moral obligation to help the victims of massive unemployment justifies the coercive taking of anyone else's property.


Subject(s)
Income/trends , Moral Obligations , Technology/economics , Technology/ethics , Technology/trends , Unemployment/trends , Ethical Analysis , Forecasting , Humans , Machine Learning/economics , Machine Learning/ethics , Machine Learning/trends , Social Change , Social Conditions