Results 1 - 5 of 5
1.
BMC Med Ethics. 2024 Jan 23;25(1):10.
Article in English | MEDLINE | ID: mdl-38262986

ABSTRACT

BACKGROUND: While the theoretical benefits and harms of Artificial Intelligence (AI) have been widely discussed in academic literature, empirical evidence remains elusive regarding the practical ethical challenges of developing AI for healthcare. Bridging the gap between theory and practice is an essential step in understanding how to ethically align AI for healthcare. Therefore, this research examines the concerns and challenges perceived by experts in developing ethical AI that addresses the context and needs of healthcare. METHODS: We conducted semi-structured interviews with 41 AI experts and analyzed the data using reflective thematic analysis. RESULTS: We developed three themes that expressed the considerations perceived by experts as essential for ensuring AI aligns with ethical practices within healthcare. The first theme explores the ethical significance of introducing AI with a clear and purposeful objective. The second theme focuses on experts' concerns about the tension between economic incentives and the importance of prioritizing the interests of doctors and patients. The third theme illustrates the need to develop context-sensitive AI for healthcare that is informed by its underlying theoretical foundations. CONCLUSIONS: The three themes collectively emphasized that beyond being innovative, AI must genuinely benefit healthcare and its stakeholders, meaning that AI must also align with intricate and context-specific healthcare practices. Our findings signal that instead of narrow, product-specific AI guidance, ethical AI development may need a systemic, proactive perspective that includes the ethical considerations (objectives, actors, and context) and focuses on healthcare applications. Ethically developing AI involves a complex interplay between AI, ethics, healthcare, and multiple stakeholders.


Subjects
Artificial Intelligence , Physicians , Humans , Qualitative Research
2.
Bioethics. 2023 Jun;37(5):424-429.
Article in English | MEDLINE | ID: mdl-36964989

ABSTRACT

Artificial intelligence (AI)-based clinical decision support systems (CDSS) are becoming ever more widespread in healthcare and could play an important role in diagnostic and treatment processes. As a result, AI-based CDSS affect the doctor-patient relationship, shaping doctors' and patients' decisions with their suggestions. We may be on the verge of a paradigm shift in which the doctor-patient relationship is no longer a dyad but a triad. This paper analyses the role of AI-based CDSS in shared decision-making to better comprehend its promises and associated ethical issues. Moreover, it investigates how certain AI implementations may instead foster the inappropriate paradigm of paternalism. Understanding how AI relates to doctors and influences doctor-patient communication is essential to promote more ethical medical practice. Both doctors' and patients' autonomy need to be considered in light of AI.


Subjects
Artificial Intelligence , Physicians , Humans , Decision Making, Shared , Physician-Patient Relations , Paternalism , Decision Making
3.
BMC Med Ethics. 2022 Dec 9;23(1):131.
Article in English | MEDLINE | ID: mdl-36494715

ABSTRACT

Healthcare cybersecurity is increasingly targeted by malicious hackers. This sector has many vulnerabilities, and health data is very sensitive and valuable. Consequently, any damage caused by malicious intrusions is particularly alarming. The consequences of these attacks can be enormous and endanger patient care. Alongside the cybersecurity measures already implemented and those that still need improvement, this paper aims to demonstrate how penetration tests can greatly benefit healthcare cybersecurity. This approach has already been shown to strengthen cybersecurity in other sectors. However, it is not popular in healthcare, since many prejudices still surround hacking and there is a lack of education about the different categories of hackers and their ethics. The present analysis aims to clarify what hacker ethics is and who ethical hackers are. Currently, hacker ethics has the status of personal ethics; however, to employ penetration testers in healthcare, it is recommended to draft an official code of ethics comprising principles, standards, expectations, and best practices. Additionally, it is important to distinguish between malicious hackers and ethical hackers. Among the latter, penetration testers are only one sub-category. Acknowledging the subtle differences between ethical hackers and penetration testers makes it easier to understand why and how the latter can offer their services to healthcare facilities.


Subjects
Computer Security , Delivery of Health Care , Humans , Health Facilities
4.
Artif Intell Med. 2023 Jan;135:102458.
Article in English | MEDLINE | ID: mdl-36628794

ABSTRACT

Artificial intelligence (AI) has been only partially integrated into medical education, if at all, leading to growing concerns about how to train healthcare practitioners to handle the changes brought about by the introduction of AI. Incorporating programming lessons and other technical content into healthcare curricula has been proposed as a solution to support healthcare personnel in using AI or other future technology. However, integrating these core elements of computer science might not meet students' observed need for practical experience with AI in its direct area of application. Therefore, this paper proposes a dynamic approach to case-based learning that uses scenarios where AI is currently applied in clinical practice as examples. This approach will support students' understanding of technical aspects. Case-based learning with AI as an example provides additional benefits: (1) it allows doctors to compare their thought processes to the AI suggestions and critically reflect on the assumptions and biases of both AI and clinical practice; (2) it incentivizes doctors to discuss and address ethical issues inherent to the technology and those already existing in current clinical practice; (3) it serves as a foundation for fostering interdisciplinary collaboration through discussion of different views among technologists, multidisciplinary experts, and healthcare professionals. The proposed knowledge shift, from AI as a technical focus to AI as an example for case-based learning, aims to encourage a different perspective on educational needs. Technical education does not need to compete with other essential clinical skills, as it could serve as a basis for supporting them, leading to better medical education and practice and ultimately benefiting patients.


Subjects
Education, Medical , Physicians , Humans , Artificial Intelligence , Learning , Health Personnel
5.
Digit Health. 2022;8:20552076221074488.
Article in English | MEDLINE | ID: mdl-35173981

ABSTRACT

Using artificial intelligence to improve patient care is a cutting-edge methodology, but its implementation in clinical routine has been limited by significant concerns about understanding its behavior. One major barrier is the explainability dilemma: how much explanation is required to use artificial intelligence safely in healthcare. A key issue is the lack of consensus on the definition of explainability among experts, regulators, and healthcare professionals, resulting in a wide variety of terminology and expectations. This paper aims to fill that gap by defining minimal explainability standards that serve the views and needs of essential stakeholders in healthcare. In that sense, we propose minimal explainability criteria that can support doctors' understanding, meet patients' needs, and fulfill legal requirements. Explainability therefore need not be exhaustive, but sufficient for doctors and patients to comprehend the clinical implications of artificial intelligence models and to integrate them safely into clinical practice. Thus, minimally acceptable standards for explainability are context-dependent and should respond to the specific needs and potential risks of each clinical scenario for a responsible and ethical implementation of artificial intelligence.
