5.
Am J Bioeth; 24(10): 58-69, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38662360

ABSTRACT

A novel advantage of the use of machine learning (ML) systems in medicine is their potential to continue learning from new data after implementation in clinical practice. To date, considerations of the ethical questions raised by the design and use of adaptive machine learning systems in medicine have, for the most part, been confined to discussion of the so-called "update problem," which concerns how regulators should approach systems whose performance and parameters continue to change even after they have received regulatory approval. In this paper, we draw attention to a prior ethical question: whether the continuous learning that will occur in such systems after their initial deployment should be classified, and regulated, as medical research. We argue that there is a strong prima facie case that the use of continuous learning in medical ML systems should be categorized, and regulated, as research and that individuals whose treatment involves such systems should be treated as research subjects.
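The post-deployment learning at issue can be pictured with a minimal sketch, assuming a scikit-learn-style classifier with incremental updates and a hypothetical stream of new clinical cases; none of this comes from the article itself:

```python
# Minimal sketch of continuous (online) learning after deployment.
# The model, data, and update cadence are illustrative assumptions,
# not the article's own system.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss", random_state=0)
classes = np.array([0, 1])  # e.g., condition absent / present

# Initial training, corresponding to the state evaluated for approval.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(100, 5)), rng.integers(0, 2, 100)
model.partial_fit(X_train, y_train, classes=classes)

# After deployment, each new batch of labelled clinical cases updates the
# parameters, so the fielded model drifts away from the approved version,
# which is exactly why the abstract asks whether this is ongoing research.
def incorporate_new_cases(model, X_new, y_new):
    model.partial_fit(X_new, y_new)
    return model
```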


Subjects
Machine Learning, Humans, Machine Learning/ethics, Biomedical Research/ethics
6.
Am J Bioeth; 24(7): 13-26, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38226965

ABSTRACT

When making substituted judgments for incapacitated patients, surrogates often struggle to guess what the patient would want if they had capacity. Surrogates may also agonize over having the (sole) responsibility of making such a determination. To address such concerns, a Patient Preference Predictor (PPP) has been proposed that would use an algorithm to infer the treatment preferences of individual patients from population-level data about the known preferences of people with similar demographic characteristics. However, critics have suggested that even if such a PPP were more accurate, on average, than human surrogates in identifying patient preferences, the proposed algorithm would nevertheless fail to respect the patient's (former) autonomy since it draws on the 'wrong' kind of data: namely, data that are not specific to the individual patient and which therefore may not reflect their actual values, or their reasons for having the preferences they do. Taking such criticisms on board, we here propose a new approach: the Personalized Patient Preference Predictor (P4). The P4 is based on recent advances in machine learning, which allow technologies including large language models to be more cheaply and efficiently 'fine-tuned' on person-specific data. The P4, unlike the PPP, would be able to infer an individual patient's preferences from material (e.g., prior treatment decisions) that is in fact specific to them. Thus, we argue, in addition to being potentially more accurate at the individual level than the previously proposed PPP, the predictions of a P4 would also more directly reflect each patient's own reasons and values. In this article, we review recent discoveries in artificial intelligence research that suggest a P4 is technically feasible, and argue that, if it is developed and appropriately deployed, it should assuage some of the main autonomy-based concerns of critics of the original PPP. We then consider various objections to our proposal and offer some tentative replies.
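One way to make the "fine-tuning on person-specific data" step concrete is a parameter-efficient fine-tuning sketch using LoRA via the Hugging Face peft library; the model choice and the patient texts are illustrative assumptions, not the authors' implementation:

```python
# Hedged sketch: LoRA fine-tuning of a small causal LM on person-specific
# text, one plausible way a P4-style predictor might be built. "gpt2" is a
# stand-in model and patient_texts is invented illustrative data.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM"))

patient_texts = [
    "Previously declined mechanical ventilation; values independence.",
    "Chose home palliative care over a further round of chemotherapy.",
]
batch = tokenizer(patient_texts, return_tensors="pt", padding=True)
labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100  # ignore padding in the loss

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
for _ in range(3):  # a few illustrative steps, not a realistic run
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Because only the low-rank adapter weights are trained, per-patient specialisation of this kind is cheap, which is the technical premise behind the P4 proposal.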


Subjects
Judgment, Patient Preference, Humans, Personal Autonomy, Algorithms, Machine Learning/ethics, Decision Making/ethics
7.
Am J Bioeth; 24(9): 67-78, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38767971

ABSTRACT

Within the ethical debate on Machine Learning-driven decision support systems (ML_CDSS), notions such as "human in the loop" or "meaningful human control" are often cited as necessary for ethical legitimacy. In addition, ethical principles usually serve as the major point of reference in ethical guidance documents, which state that conflicts between principles need to be weighed and balanced against each other. Starting from a neo-Kantian viewpoint inspired by Onora O'Neill, this article makes a concrete suggestion for how to interpret the role of the "human in the loop" and how to overcome the perspective of rivaling ethical principles in the evaluation of AI in health care. We argue that patients should be perceived as "fellow workers" and epistemic partners in the interpretation of ML_CDSS outputs. We further highlight that a meaningful process of integrating (rather than weighing and balancing) ethical principles is most appropriate in the evaluation of medical AI.


Subjects
Machine Learning, Humans, Machine Learning/ethics, Judgment, Decision Support Techniques, Decision Making/ethics
8.
Bioethics; 38(5): 383-390, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38523587

ABSTRACT

After a wave of breakthroughs in image-based medical diagnostics and risk prediction models, machine learning (ML) has turned into a normal science. However, prominent researchers are claiming that another paradigm shift in medical ML is imminent, driven by the recent staggering successes of large language models: a shift from single-purpose applications toward generalist models steered by natural language. This article investigates the implications of this paradigm shift for the ethical debate. Focusing on issues such as trust, transparency, threats to patient autonomy, responsibility in the collaboration of clinicians and ML models, fairness, and privacy, it argues that the main problems will be continuous with the current debate. However, because of the way large language models function, the complexity of all these problems increases. In addition, the article discusses some profound challenges for the clinical evaluation of large language models and threats to the reproducibility and replicability of studies about large language models in medicine due to corporate interests.


Subjects
Machine Learning, Humans, Machine Learning/ethics, Personal Autonomy, Trust, Privacy, Reproducibility of Results, Medical Ethics
9.
Bioethics; 38(5): 391-400, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38554069

ABSTRACT

Machine-learning algorithms have the potential to revolutionise diagnostic and prognostic tasks in health care, yet algorithmic performance levels can be materially worse for subgroups that have been underrepresented in algorithmic training data. Given this epistemic deficit, the inclusion of underrepresented groups in algorithmic processes can result in harm. Yet delaying the deployment of algorithmic systems until more equitable results can be achieved would avoidably and foreseeably lead to a significant number of unnecessary deaths in well-represented populations. Faced with this dilemma between equity and utility, we draw on two case studies involving breast cancer and melanoma to argue for the selective deployment of diagnostic and prognostic tools for some well-represented groups, even if this results in the temporary exclusion of underrepresented patients from algorithmic approaches. We argue that this approach is justifiable when the inclusion of underrepresented patients would cause them to be harmed. While the context of historic injustice poses a considerable challenge for the ethical acceptability of selective algorithmic deployment strategies, we argue that, at least for the case studies addressed in this article, the issue of historic injustice is better addressed through nonalgorithmic measures, including being transparent with patients about the nature of the current epistemic deficits, providing additional services to algorithmically excluded populations, and through urgent commitments to gather additional algorithmic training data from excluded populations, paving the way for universal algorithmic deployment that is accurate for all patient groups. These commitments should be supported by regulation and, where necessary, government funding to ensure that any delays for excluded groups are kept to the minimum. We offer an ethical algorithm for algorithms, showing when to ethically delay, expedite, or selectively deploy algorithmic systems in healthcare settings.
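The selective-deployment idea can be illustrated with a minimal gate; the metric, threshold, and group labels below are hypothetical choices for illustration, not the authors' specification:

```python
# Minimal sketch of a selective-deployment gate: deploy only for subgroups
# whose validated performance clears an agreed threshold; defer others to
# standard care while more training data are gathered. The 0.85 threshold
# and the use of AUC are illustrative assumptions.
from sklearn.metrics import roc_auc_score

def deployment_decisions(y_true_by_group, y_score_by_group, threshold=0.85):
    decisions = {}
    for group, y_true in y_true_by_group.items():
        auc = roc_auc_score(y_true, y_score_by_group[group])
        decisions[group] = "deploy" if auc >= threshold else "defer"
    return decisions

# Example: deploy for a well-represented group, defer for an
# underrepresented one where validated performance is poor.
print(deployment_decisions(
    {"group_A": [0, 1, 1, 0, 1], "group_B": [0, 1, 0, 1, 1]},
    {"group_A": [0.1, 0.9, 0.8, 0.2, 0.7],
     "group_B": [0.6, 0.4, 0.7, 0.3, 0.5]},
))
```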


Subjects
Algorithms, Artificial Intelligence, Humans, Female, Artificial Intelligence/ethics, Breast Neoplasms, Melanoma, Delivery of Health Care/ethics, Machine Learning/ethics, Social Justice, Prognosis
10.
Sci Eng Ethics; 30(5): 43, 2024 Sep 11.
Article in English | MEDLINE | ID: mdl-39259362

ABSTRACT

Machine unlearning (MU) is often analyzed in terms of how it can facilitate the "right to be forgotten." In this commentary, we show that MU can support the OECD's five principles for trustworthy AI, which are influencing AI development and regulation worldwide. This makes it a promising tool to translate AI principles into practice. We also argue that the implementation of MU is not without ethical risks. To address these concerns and amplify the positive impact of MU, we offer policy recommendations across six categories to encourage the research and uptake of this potentially highly influential new technology.
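For readers unfamiliar with the mechanism, the baseline form of machine unlearning is exact retraining without the records to be forgotten; the sketch below assumes a simple scikit-learn model and is not drawn from the commentary itself:

```python
# Baseline "exact" machine unlearning: retrain from scratch on the data
# minus the records to be forgotten. Practical MU research seeks cheaper
# approximations of this guarantee; model and data here are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

def unlearn_by_retraining(X, y, forget_indices):
    keep = np.setdiff1d(np.arange(len(X)), forget_indices)
    return LogisticRegression().fit(X[keep], y[keep])

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3))
y = (X[:, 0] > 0).astype(int)
model = unlearn_by_retraining(X, y, forget_indices=[3, 17])
# The retrained model never saw rows 3 and 17, which is the property a
# "right to be forgotten" request is asking for.
```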


Subjects
Artificial Intelligence, Trust, Humans, Artificial Intelligence/ethics, Machine Learning/ethics, Learning
11.
Sci Eng Ethics; 30(4): 27, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38888795

ABSTRACT

Artificial intelligence (AI) has long been recognised as a challenge to responsibility. Much of this discourse has been framed around robots, such as autonomous weapons or self-driving cars, where we arguably lack control over a machine's behaviour and therefore struggle to identify an agent that can be held accountable. However, most of today's AI is based on machine-learning technology that does not act on its own, but rather serves as a decision-support tool, automatically analysing data to help human agents make better decisions. I argue that decision-support tools pose a challenge to responsibility that goes beyond the familiar problem of finding someone to blame or punish for the behaviour of agent-like systems. Namely, they pose a problem for what we might call "decision ownership": they make it difficult to identify human agents to whom we can attribute value-judgements that are reflected in decisions. Drawing on recent philosophical literature on responsibility and its various facets, I argue that this is primarily a problem of attributability rather than of accountability. This particular responsibility problem comes in different forms and degrees, most obviously when an AI provides direct recommendations for actions, but also, less obviously, when it provides mere descriptive information on the basis of which a decision is made.


Subjects
Artificial Intelligence, Decision Making, Social Responsibility, Humans, Artificial Intelligence/ethics, Decision Making/ethics, Decision Support Techniques, Judgment, Machine Learning/ethics, Ownership, Robotics/ethics
13.
Psychol Med; 51(15): 2522-2524, 2021 Nov.
Article in English | MEDLINE | ID: mdl-33975655

ABSTRACT

The clinical interview is the psychiatrist's data gathering procedure. However, the clinical interview is not a defined entity in the way that 'vitals' are defined as measurements of blood pressure, heart rate, respiration rate, temperature, and oxygen saturation. There are as many ways to approach a clinical interview as there are psychiatrists, and trainees can learn as many ways of performing and formulating the clinical interview as there are instructors (Nestler, 1990). Even in the same clinical setting, two clinicians might interview the same patient and conduct very different examinations and reach different treatment recommendations. From the perspective of data science, this mismatch is not one of personal style or idiosyncrasy but rather one of uncertain salience: neither the clinical interview nor the data thereby generated is operationalized and, therefore, neither can be rigorously evaluated, tested, or optimized.


Subjects
Psychological Interview/methods, Machine Learning, Psychiatry/methods, Schizophrenia/diagnosis, Computer-Assisted Diagnosis/ethics, Computer-Assisted Diagnosis/methods, Humans, Machine Learning/ethics, Psychiatry/ethics
14.
Psychol Med; 51(15): 2515-2521, 2021 Nov.
Article in English | MEDLINE | ID: mdl-32536358

ABSTRACT

Recent advances in machine learning (ML) promise far-reaching improvements across medical care, not least within psychiatry. While to date no psychiatric application of ML constitutes standard clinical practice, it seems crucial to get ahead of these developments and address their ethical challenges early on. Following a short general introduction concerning ML in psychiatry, we do so by focusing on schizophrenia as a paradigmatic case. Based on recent research employing ML to further the diagnosis, treatment, and prediction of schizophrenia, we discuss three hypothetical case studies of ML applications with a view to their ethical dimensions. Throughout this discussion, we follow the principlist framework by Tom Beauchamp and James Childress to analyse potential problems in detail. In particular, we structure our analysis around their principles of beneficence, non-maleficence, respect for autonomy, and justice. We conclude with a call for cautious optimism concerning the implementation of ML in psychiatry if close attention is paid to the particular intricacies of psychiatric disorders and its success is evaluated based on tangible clinical benefit for patients.


Subjects
Machine Learning, Psychiatry/methods, Schizophrenia, Algorithms, Bioethics, Computer-Assisted Diagnosis/ethics, Computer-Assisted Diagnosis/methods, Humans, Machine Learning/ethics, Schizophrenia/diagnosis, Schizophrenia/therapy
16.
Hum Brain Mapp; 41(6): 1435-1444, 2020 Apr 15.
Article in English | MEDLINE | ID: mdl-31804003

ABSTRACT

Computer systems for medical diagnosis based on machine learning are not mere science fiction. Despite undisputed potential benefits, such systems may also raise problems. Two (interconnected) issues are particularly significant from an ethical point of view: The first issue is that epistemic opacity is at odds with a common desire for understanding and potentially undermines information rights. The second (related) issue concerns the assignment of responsibility in cases of failure. The core of the two issues seems to be that understanding and responsibility are concepts that are intrinsically tied to the discursive practice of giving and asking for reasons. The challenge is to find ways to make the outcomes of machine learning algorithms compatible with our discursive practice. This comes down to the claim that we should try to integrate discursive elements into machine learning algorithms. Under the title of "explainable AI," initiatives heading in this direction are already under way. Extensive research in this field is needed to find adequate solutions.
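As one concrete instance of the "explainable AI" work the article points to, a permutation-importance readout supplies a simple, reason-like account of which inputs a diagnostic model relied on; the data below are synthetic, not an example from the article:

```python
# Hedged sketch: permutation importance as a minimal "reason-giving"
# element for an otherwise opaque model. Data are synthetic; in practice
# the features would be clinical or imaging-derived measurements.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # only features 0 and 2 matter

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```

Attribution scores of this kind do not by themselves amount to the discursive practice of giving and asking for reasons, which is the article's point, but they are the sort of element such an integration would build on.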


Subjects
Algorithms, Computer-Assisted Diagnosis/ethics, Machine Learning/ethics, Artificial Intelligence, Confidentiality, Evidence-Based Medicine, Humans, Magnetic Resonance Imaging
17.
Bull World Health Organ; 98(4): 270-276, 2020 Apr 01.
Article in English | MEDLINE | ID: mdl-32284651

ABSTRACT

The application of digital technology to psychiatry research is rapidly leading to new discoveries and capabilities in the field of mobile health. However, the increase in opportunities to passively collect vast amounts of detailed information on study participants coupled with advances in statistical techniques that enable machine learning models to process such information has raised novel ethical dilemmas regarding researchers' duties to: (i) monitor adverse events and intervene accordingly; (ii) obtain fully informed, voluntary consent; (iii) protect the privacy of participants; and (iv) increase the transparency of powerful, machine learning models to ensure they can be applied ethically and fairly in psychiatric care. This review highlights emerging ethical challenges and unresolved ethical questions in mobile health research and provides recommendations on how mobile health researchers can address these issues in practice. Ultimately, the hope is that this review will facilitate continued discussion on how to achieve best practice in mobile health research within psychiatry.


Subjects
Research Ethics, Machine Learning/ethics, Psychiatry, Telemedicine/ethics, Informed Consent, Privacy
18.
Eur J Health Law; 27(3): 242-258, 2020 May 19.
Article in English | MEDLINE | ID: mdl-33652397

ABSTRACT

The use of machine learning (ML) in medicine is becoming increasingly fundamental for analysing complex problems, discovering associations among different types of information, and generating knowledge for medical decision support. Many regulatory and ethical issues should be considered. Some relevant EU provisions, such as the General Data Protection Regulation, are applicable. However, the regulatory framework for developing and marketing a new health technology implementing ML may be quite complex. Other issues include legal liability and the attribution of negligence in case of errors. Some of the above-mentioned concerns could be at least partially resolved if the ML software is classified as a 'medical device', a category covered by EU/national provisions. In conclusion, the challenge is to understand how sustainable the regulatory system is in relation to ML innovation and how legal procedures should be revised in order to adapt them to the current regulatory framework.


Subjects
Machine Learning/ethics, Machine Learning/legislation & jurisprudence, Machine Learning/standards, Medical Informatics, Software, Bias, Confidentiality/legislation & jurisprudence, Decision Making/ethics, Drug Development, Drug Discovery, Humans, Malpractice, Medical Device Legislation, Precision Medicine, Risk Management, Safety/legislation & jurisprudence, Trust