Results 1 - 4 of 4
1.
Article in English | MEDLINE | ID: mdl-39120780

ABSTRACT

Bioethics has developed approaches to address ethical issues in health care, similar to how technology ethics provides guidelines for ethical research on artificial intelligence, big data, and robotic applications. As these digital technologies are increasingly used in medicine, health care and public health, it is plausible that the approaches of technology ethics have influenced bioethical research. Similar to the "empirical turn" in bioethics, which led to intense debates about appropriate moral theories, ethical frameworks and meta-ethics due to the increased use of empirical methodologies from the social sciences, the proliferation of health-related subtypes of technology ethics might have a comparable impact on current bioethical research. This systematic journal review analyses the reporting of ethical frameworks and non-empirical methods in argument-based research articles on digital technologies in medicine, health care and public health that have been published in high-impact bioethics journals. We focus on articles reporting non-empirical research in original contributions. Our aim is to describe the methods currently used for the analysis of ethical issues regarding the application of digital technologies in medicine, health care and public health. We confine our analysis to non-empirical methods because empirical methods have been well researched elsewhere. Finally, we discuss our findings against the background of established methods for health technology assessment, the lack of a typology for non-empirical methods, and conceptual and methodical change in bioethics. Our descriptive results may serve as a starting point for reflecting on whether current ethical frameworks and non-empirical methods are appropriate for researching ethical issues deriving from the application of digital technologies in medicine, health care and public health.

2.
BMC Med Ethics; 25(1): 78, 2024 Jul 18.
Article in English | MEDLINE | ID: mdl-39026308

ABSTRACT

BACKGROUND: Artificial intelligence (AI) has revolutionized various healthcare domains, where AI algorithms sometimes even outperform human specialists. However, the field of clinical ethics has remained largely untouched by AI advances. This study explores the attitudes of anesthesiologists and internists towards the use of AI-driven preference prediction tools to support ethical decision-making for incapacitated patients. METHODS: A questionnaire was developed and pretested among medical students. The questionnaire was distributed to 200 German anesthesiologists and 200 German internists, focusing on physicians who often encounter patients lacking decision-making capacity. The questionnaire covered attitudes toward AI-driven preference prediction, availability and utilization of Clinical Ethics Support Services (CESS), and experiences with ethically challenging situations. Descriptive statistics and bivariate analyses were performed. Qualitative responses were analyzed using content analysis in a mixed inductive-deductive approach. RESULTS: Participants were predominantly male (69.3%), with ages ranging from 27 to 77. Most worked in non-academic hospitals (82%). Physicians generally showed hesitance toward AI-driven preference prediction, citing concerns about the loss of individuality and humanity, lack of explicability in AI results, and doubts about AI's ability to encompass the ethical deliberation process. In contrast, physicians had a more positive opinion of CESS. Availability of CESS varied, with 81.8% of participants reporting access. Among those without access, 91.8% expressed a desire for CESS. Physicians' reluctance toward AI-driven preference prediction aligns with concerns about transparency, individuality, and human-machine interaction. While AI could enhance the accuracy of predictions and reduce surrogate burden, concerns about potential biases, de-humanisation, and lack of explicability persist.
CONCLUSIONS: German physicians frequently encountering incapacitated patients exhibit hesitance toward AI-driven preference prediction but hold a higher esteem for CESS. Addressing concerns about individuality, explicability, and human-machine roles may facilitate the acceptance of AI in clinical ethics. Further research into patient and surrogate perspectives is needed to ensure AI aligns with patient preferences and values in complex medical decisions.


Subject(s)
Anesthesiologists, Artificial Intelligence, Attitude of Health Personnel, Humans, Artificial Intelligence/ethics, Male, Germany, Female, Adult, Surveys and Questionnaires, Middle Aged, Aged, Anesthesiologists/ethics, Decision Making/ethics, Physicians/ethics, Physicians/psychology, Internal Medicine/ethics, Clinical Decision Making/ethics
3.
Front Surg ; 11: 1287218, 2024.
Article in English | MEDLINE | ID: mdl-38550794

ABSTRACT

Introduction: Head-mounted displays (HMDs) that superimpose holograms onto patients are of particular surgical interest, as they are believed to dramatically change surgical procedures by including safety warnings and allowing real-time offsite consultations. Although mixed and augmented reality (MR/AR) technologies promise benefits in surgery, they also raise new ethical concerns. The aim of this systematic review is to determine the full spectrum of ethical issues raised for surgeons by the intraoperative application of MR/AR technology. Methods: Five bibliographic databases were searched for publications on the use of MR/AR, HMDs and other devices, their intraoperative application in surgery, and ethical issues. We applied qualitative content analysis to the n = 50 articles included. First, we coded the material with deductive categories derived from ethical frameworks for surgical innovations, complications and research. Second, clinical aspects with ethical relevance were inductively coded as ethical issues within the main categories. Third, we pooled the ethical issues into themes and sub-themes. We report our findings according to the reporting guideline RESERVE. Results: We found n = 143 ethical issues across ten main themes, namely patient-physician relationship, informed consent, professionalism, research and innovation, legal and regulatory issues, functioning equipment and optimal operating conditions, allocation of resources, minimizing harm, good communication skills and the ability to exercise sound judgement. The five most prevalent ethical issues are "Need for continuous research and innovation", "Ensuring improvement of the learning curve", "MR/AR enables new maneuvers for surgeons", "Ensuring improvement of comfort, ergonomics, and usability of devices", and "Not withholding MR/AR if it performs better".
Conclusions: Recognizing the evidence-based limitations of intraoperative MR/AR application is of paramount importance to avoid ethical issues, but clinical trials in surgery pose particular ethical risks for patients. Regarding the digital surgeon, the long-term impact on the human workforce, potentially harmful "negative training", i.e., acquiring inappropriate behaviors, and the fear of surveillance need further attention. MR/AR technologies offer not only challenges but also significant advantages, promoting a more equitable distribution of surgical expertise and optimizing healthcare. Aligned with the core principle of social justice, these technologies enable surgeons to collaborate globally, improving training conditions and addressing enduring global healthcare inequalities.
