Results 1 - 7 of 7
1.
BMC Med Ethics; 25(1): 10, 2024 Jan 23.
Article in English | MEDLINE | ID: mdl-38262986

ABSTRACT

BACKGROUND: While the theoretical benefits and harms of Artificial Intelligence (AI) have been widely discussed in academic literature, empirical evidence remains elusive regarding the practical ethical challenges of developing AI for healthcare. Bridging the gap between theory and practice is an essential step in understanding how to ethically align AI for healthcare. Therefore, this research examines the concerns and challenges perceived by experts in developing ethical AI that addresses the healthcare context and needs. METHODS: We conducted semi-structured interviews with 41 AI experts and analyzed the data using reflective thematic analysis. RESULTS: We developed three themes that expressed the considerations perceived by experts as essential for ensuring AI aligns with ethical practices within healthcare. The first theme explores the ethical significance of introducing AI with a clear and purposeful objective. The second theme focuses on how experts are concerned about the tension that exists between economic incentives and the importance of prioritizing the interests of doctors and patients. The third theme illustrates the need to develop context-sensitive AI for healthcare that is informed by its underlying theoretical foundations. CONCLUSIONS: The three themes collectively emphasized that beyond being innovative, AI must genuinely benefit healthcare and its stakeholders, meaning AI also aligns with intricate and context-specific healthcare practices. Our findings signal that instead of narrow product-specific AI guidance, ethical AI development may need a systemic, proactive perspective that includes the ethical considerations (objectives, actors, and context) and focuses on healthcare applications. Ethically developing AI involves a complex interplay between AI, ethics, healthcare, and multiple stakeholders.


Subjects
Artificial Intelligence, Physicians, Humans, Qualitative Research
2.
Bioethics; 37(5): 424-429, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36964989

ABSTRACT

Artificial intelligence (AI)-based clinical decision support systems (CDSS) are becoming ever more widespread in healthcare and could play an important role in diagnostic and treatment processes. As a result, AI-based CDSS affect the doctor-patient relationship, shaping decisions with their suggestions. We may be on the verge of a paradigm shift in which the doctor-patient relationship is no longer a dyad but a triad. This paper analyses the role of AI-based CDSS in shared decision-making to better comprehend its promises and associated ethical issues. Moreover, it investigates how certain AI implementations may instead foster the inappropriate paradigm of paternalism. Understanding how AI relates to doctors and influences doctor-patient communication is essential to promoting more ethical medical practice. Both doctors' and patients' autonomy need to be considered in the light of AI.


Subjects
Artificial Intelligence, Physicians, Humans, Shared Decision-Making, Physician-Patient Relations, Paternalism, Decision Making
3.
BMC Med Ethics; 23(1): 131, 2022 Dec 09.
Article in English | MEDLINE | ID: mdl-36494715

ABSTRACT

Healthcare is increasingly targeted by malicious hackers: the sector has many vulnerabilities, and health data is highly sensitive and valuable. Consequently, any damage caused by malicious intrusions is particularly alarming, and the consequences of these attacks can be enormous and endanger patient care. Alongside the cybersecurity measures already implemented and those that still need improvement, this paper aims to demonstrate how penetration tests can greatly benefit healthcare cybersecurity. This approach has already strengthened cybersecurity in other sectors, yet it remains unpopular in healthcare because many prejudices still surround the practice of hacking and there is a lack of education on the categories of hackers and their ethics. The present analysis aims to clarify what hacker ethics is and who ethical hackers are. Currently, hacker ethics has the status of personal ethics; to employ penetration testers in healthcare, however, it is recommended to draft an official code of ethics comprising principles, standards, expectations, and best practices. Additionally, it is important to distinguish between malicious hackers and ethical hackers, of whom penetration testers are only a sub-category. Acknowledging the subtle differences between ethical hackers and penetration testers makes it easier to understand why and how the latter can offer their services to healthcare facilities.


Subjects
Computer Security, Delivery of Health Care, Humans, Health Facilities
4.
J Multidiscip Healthc; 17: 3971-3979, 2024.
Article in English | MEDLINE | ID: mdl-39161538

ABSTRACT

Lévinas and Derrida describe the ontological grounding of human relationships in terms of the absolute priority of the Other and the unconditional law of hospitality. This has direct implications for doctor-patient relationships in health care. This paper explores these philosophical and practical implications in light of a paradox inherent in all hospitality: that hostility is inevitably intertwined with it. The paper explores three ways hostility can present in doctor-patient relationships: in physical violence, through paternalism, and through the violence of categorisation. While acknowledging the paradox and the complexity of solutions, the paper considers ways to minimize this hostility. In so doing, it encourages healthcare professionals to overcome whatever is possible so as to do the impossible: provide unconditional hospitality.

5.
JMIR AI; 3: e49795, 2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39158953

ABSTRACT

BACKGROUND: The discourse surrounding medical artificial intelligence (AI) often focuses on narratives that either hype the technology's potential or predict dystopian futures. AI narratives have a significant influence on the direction of research, funding, and public opinion and thus shape the future of medicine. OBJECTIVE: The paper aims to offer critical reflections on AI narratives, with a specific focus on medical AI, and to raise awareness of how people working with medical AI talk about AI and discharge their "narrative responsibility." METHODS: Qualitative semistructured interviews were conducted with 41 participants from different disciplines who were exposed to medical AI in their profession. The research represents a secondary analysis of data using a thematic narrative approach. The analysis resulted in 2 main themes, each with 2 subthemes. RESULTS: Stories about the AI-physician interaction depicted either a competitive or collaborative relationship. Some participants argued that AI might replace physicians, as it performs better than physicians. However, others believed that physicians should not be replaced and that AI should rather assist and support physicians. The idea of excessive technological deferral and automation bias was discussed, highlighting the risk of "losing" decisional power. The possibility that AI could relieve physicians from burnout and allow them to spend more time with patients was also considered. Finally, a few participants reported an extremely optimistic account of medical AI, while the majority criticized this type of story. The latter lamented the existence of a "magical theory" of medical AI, identified with techno-solutionist positions. CONCLUSIONS: Most of the participants reported a nuanced view of technology, recognizing both its benefits and challenges and avoiding polarized narratives.
However, some participants did contribute to the hype surrounding medical AI, comparing it to human capabilities and depicting it as superior. Overall, the majority agreed that medical AI should assist rather than replace clinicians. The study concludes that a balanced narrative (that focuses on the technology's present capabilities and limitations) is necessary to fully realize the potential of medical AI while avoiding unrealistic expectations and hype.

6.
Artif Intell Med; 135: 102458, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36628794

ABSTRACT

Artificial intelligence (AI) has been only partially (or not at all) integrated into medical education, leading to growing concerns about how to train healthcare practitioners to handle the changes brought about by the introduction of AI. Incorporating programming lessons and other technical content into healthcare curricula has been proposed as a way to support healthcare personnel in using AI or other future technology. However, integrating these core elements of computer science knowledge might not meet students' need for practical experience with AI in its direct area of application. Therefore, this paper proposes a dynamic approach to case-based learning that uses scenarios where AI is currently applied in clinical practice as examples, supporting students' understanding of the technical aspects. Case-based learning with AI as an example provides additional benefits: (1) it allows doctors to compare their thought processes with the AI's suggestions and to critically reflect on the assumptions and biases of both AI and clinical practice; (2) it incentivizes doctors to discuss and address ethical issues inherent to the technology as well as those already present in current clinical practice; (3) it serves as a foundation for interdisciplinary collaboration through discussion of the different views of technologists, multidisciplinary experts, and healthcare professionals. The proposed shift from AI as a technical focus to AI as an example for case-based learning aims to encourage a different perspective on educational needs. Technical education need not compete with other essential clinical skills; it can serve as a basis for supporting them, leading to better medical education and practice and ultimately benefiting patients.


Subjects
Medical Education, Physicians, Humans, Artificial Intelligence, Learning, Health Personnel
7.
Digit Health; 8: 20552076221074488, 2022.
Article in English | MEDLINE | ID: mdl-35173981

ABSTRACT

Using artificial intelligence (AI) to improve patient care is a cutting-edge methodology, but its implementation in clinical routine has been limited by significant concerns about understanding its behavior. One major barrier is the explainability dilemma: how much explanation is required to use AI safely in healthcare. A key issue is the lack of consensus among experts, regulators, and healthcare professionals on the definition of explainability, resulting in a wide variety of terminology and expectations. This paper aims to fill that gap by defining minimal explainability standards that serve the views and needs of essential stakeholders in healthcare. In that sense, we propose minimal explainability criteria that can support doctors' understanding, meet patients' needs, and fulfill legal requirements. Explainability therefore need not be exhaustive, only sufficient for doctors and patients to comprehend an AI model's clinical implications and for the model to be integrated safely into clinical practice. Minimally acceptable standards for explainability are thus context-dependent and should respond to the specific needs and potential risks of each clinical scenario, enabling a responsible and ethical implementation of artificial intelligence.
