ABSTRACT
Within the ethical debate on Machine Learning-driven clinical decision support systems (ML_CDSS), notions such as "human in the loop" or "meaningful human control" are often cited as being necessary for ethical legitimacy. In addition, ethical principles usually serve as the major point of reference in ethical guidance documents, which state that conflicting principles need to be weighed and balanced against each other. Starting from a neo-Kantian viewpoint inspired by Onora O'Neill, this article makes a concrete suggestion for how to interpret the role of the "human in the loop" and how to overcome the perspective of rivaling ethical principles in the evaluation of AI in health care. We argue that patients should be perceived as "fellow workers" and epistemic partners in the interpretation of ML_CDSS outputs. We further highlight that a meaningful process of integrating (rather than weighing and balancing) ethical principles is most appropriate in the evaluation of medical AI.
Subject(s)
Machine Learning, Humans, Machine Learning/ethics, Judgment, Decision Support Techniques, Decision Making/ethics
ABSTRACT
When making substituted judgments for incapacitated patients, surrogates often struggle to guess what the patient would want if they had capacity. Surrogates may also agonize over having the (sole) responsibility of making such a determination. To address such concerns, a Patient Preference Predictor (PPP) has been proposed that would use an algorithm to infer the treatment preferences of individual patients from population-level data about the known preferences of people with similar demographic characteristics. However, critics have suggested that even if such a PPP were more accurate, on average, than human surrogates in identifying patient preferences, the proposed algorithm would nevertheless fail to respect the patient's (former) autonomy since it draws on the 'wrong' kind of data: namely, data that are not specific to the individual patient and which therefore may not reflect their actual values, or their reasons for having the preferences they do. Taking such criticisms on board, we here propose a new approach: the Personalized Patient Preference Predictor (P4). The P4 is based on recent advances in machine learning, which allow technologies including large language models to be more cheaply and efficiently 'fine-tuned' on person-specific data. The P4, unlike the PPP, would be able to infer an individual patient's preferences from material (e.g., prior treatment decisions) that is in fact specific to them. Thus, we argue, in addition to being potentially more accurate at the individual level than the previously proposed PPP, the predictions of a P4 would also more directly reflect each patient's own reasons and values. In this article, we review recent discoveries in artificial intelligence research that suggest a P4 is technically feasible, and argue that, if it is developed and appropriately deployed, it should assuage some of the main autonomy-based concerns of critics of the original PPP. We then consider various objections to our proposal and offer some tentative replies.
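The abstract above argues that a P4 is technically feasible but, as an ethics article, it does not specify an implementation. The following minimal Python sketch is purely illustrative: a simple logistic regression stands in for the fine-tuned language model the authors envisage, and the feature encodings and tiny synthetic datasets are invented assumptions, not material from the article. It only illustrates the structural contrast between a PPP-style predictor (trained on population-level data about similar others) and a P4-style predictor (fitted on the individual patient's own prior decisions).

# Toy contrast between a population-level PPP and a person-specific P4.
# Purely illustrative: logistic regression stands in for a fine-tuned LLM,
# and all data below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

# PPP-style model: learns from other people's data.
# Features: [age decade, has chronic illness]; label: 1 = would accept treatment.
population_X = np.array([[5, 0], [6, 1], [7, 1], [8, 1], [4, 0], [9, 1]])
population_y = np.array([1, 1, 0, 0, 1, 0])
ppp = LogisticRegression().fit(population_X, population_y)

# P4-style model: fitted on this one patient's prior documented decisions.
# Features: [invasiveness 0-1, expected benefit 0-1]; label: 1 = accepted.
patient_X = np.array([[0.2, 0.9], [0.8, 0.3], [0.5, 0.7], [0.9, 0.2]])
patient_y = np.array([1, 0, 1, 0])
p4 = LogisticRegression().fit(patient_X, patient_y)

# The same undocumented scenario, estimated from two different evidence bases.
print("PPP (population-based) estimate:", round(ppp.predict_proba([[7, 1]])[0, 1], 2))
print("P4 (person-specific) estimate:", round(p4.predict_proba([[0.7, 0.6]])[0, 1], 2))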
Subject(s)
Judgment, Patient Preference, Humans, Personal Autonomy, Algorithms, Machine Learning/ethics, Decision Making/ethics
ABSTRACT
BACKGROUND: Intersectionality is a concept that originated in Black feminist movements in the US context of the 1970s and 1980s, particularly in the work of feminist scholar and lawyer Kimberlé W. Crenshaw. Intersectional approaches aim to highlight the interconnectedness of gender and sexuality with other social categories, such as race, class, age, and ability, in order to examine how individuals are discriminated against and privileged in institutions and societal power structures. Intersectionality is a "traveling concept", which has also made its way into bioethical research. METHODS: We conducted a systematic review to answer the question of where and how the concept of intersectionality is applied in bioethical research. The PubMed and Web of Science databases were systematically searched, and 192 articles addressing bioethical topics and intersectionality were finally included. RESULTS: The qualitative analysis resulted in a category system with five main categories: (1) application purpose and function, (2) social dimensions, (3) levels, (4) health-care disciplines and academic fields, and (5) challenges, limitations, and critique. The variety of academic fields and health-care disciplines working with the concept ranges from psychology through gynaecology to palliative care and deaf studies. Important functions that the concept of intersectionality fulfils in bioethical research are making inequities visible, creating better health data collections, and embracing self-reflection. Intersectionality is also a critical praxis and fits neatly into the overarching goal of bioethics to work toward social justice in health care. Intersectionality aims at making research results relevant for the respective communities and patients, and it informs the development of policies. CONCLUSIONS: This systematic review is, to the best of our knowledge, the first to provide a full overview of references to intersectionality in bioethical scholarship. It creates a basis for future research that applies intersectionality as a theoretical and methodological tool for analysing bioethical questions.
Subject(s)
Bioethics, Humans, Female, Feminism, Bioethical Issues
ABSTRACT
BACKGROUND: Artificial intelligence (AI) has revolutionized various healthcare domains, where AI algorithms sometimes even outperform human specialists. However, the field of clinical ethics has remained largely untouched by AI advances. This study explores the attitudes of anesthesiologists and internists towards the use of AI-driven preference prediction tools to support ethical decision-making for incapacitated patients. METHODS: A questionnaire was developed and pretested among medical students. The questionnaire was distributed to 200 German anesthesiologists and 200 German internists, thereby focusing on physicians who often encounter patients lacking decision-making capacity. The questionnaire covered attitudes toward AI-driven preference prediction, availability and utilization of Clinical Ethics Support Services (CESS), and experiences with ethically challenging situations. Descriptive statistics and bivariate analyses were performed. Qualitative responses were analyzed using content analysis in a mixed inductive-deductive approach. RESULTS: Participants were predominantly male (69.3%), with ages ranging from 27 to 77 years. Most worked in nonacademic hospitals (82%). Physicians generally showed hesitance toward AI-driven preference prediction, citing concerns about the loss of individuality and humanity, the lack of explicability of AI results, and doubts about AI's ability to encompass the ethical deliberation process. In contrast, physicians had a more positive opinion of CESS. Availability of CESS varied, with 81.8% of participants reporting access. Among those without access, 91.8% expressed a desire for CESS. Physicians' reluctance toward AI-driven preference prediction aligns with concerns about transparency, individuality, and human-machine interaction. While AI could enhance the accuracy of predictions and reduce surrogate burden, concerns about potential biases, de-humanisation, and lack of explicability persist. CONCLUSIONS: German physicians who frequently encounter incapacitated patients exhibit hesitance toward AI-driven preference prediction but hold CESS in higher esteem. Addressing concerns about individuality, explicability, and human-machine roles may facilitate the acceptance of AI in clinical ethics. Further research into patient and surrogate perspectives is needed to ensure AI aligns with patient preferences and values in complex medical decisions.
Subject(s)
Anesthesiologists, Artificial Intelligence, Attitude of Health Personnel, Humans, Artificial Intelligence/ethics, Male, Germany, Female, Adult, Surveys and Questionnaires, Middle Aged, Aged, Anesthesiologists/ethics, Decision Making/ethics, Physicians/ethics, Physicians/psychology, Internal Medicine/ethics, Clinical Decision-Making/ethics
ABSTRACT
BACKGROUND: Clinical decision support systems (CDSSs) are increasingly being introduced into various domains of health care. Little is known so far about the impact of such systems on the health care professional-patient relationship, and there is a lack of agreement about whether and how patients should be informed about the use of CDSSs. OBJECTIVE: This study aims to explore, in an empirically informed manner, the potential implications for the health care professional-patient relationship and to underline the importance of this relationship when using CDSSs for both patients and future professionals. METHODS: In a methodological triangulation, semistructured interviews were conducted with 15 medical students and 12 trainee nurses, and 18 patients took part in focus groups between April 2021 and April 2022. All participants came from Germany. Three examples of CDSSs covering different areas of health care (ie, surgery, nephrology, and intensive home care) were used as stimuli in the study to identify similarities and differences regarding the use of CDSSs in different fields of application. The interview and focus group transcripts were analyzed using a structured qualitative content analysis. RESULTS: From the interviews and focus groups analyzed, three topics were identified that interdependently address the interactions between patients and health care professionals: (1) CDSSs and their impact on the roles of and requirements for health care professionals, (2) CDSSs and their impact on the relationship between health care professionals and patients (including communication requirements for shared decision-making), and (3) stakeholders' expectations for patient education and information about CDSSs and their use. CONCLUSIONS: The results indicate that using CDSSs could restructure established power and decision-making relationships between (future) health care professionals and patients. In addition, respondents expected that the use of CDSSs would involve more communication, so they anticipated an increased time commitment. The results shed new light on the existing discourse by demonstrating that the anticipated impact of CDSSs on the health care professional-patient relationship appears to stem less from the function of a CDSS and more from its integration in the relationship. Therefore, the anticipated effects on the relationship between health care professionals and patients could be specifically addressed in patient information about the use of CDSSs.
Subject(s)
Communication, Shared Decision Making, Clinical Decision Support Systems, Humans, Female, Male, Adult, Focus Groups, Professional-Patient Relations, Middle Aged, Interviews as Topic, Health Personnel/psychology, Germany, Patient Participation, Aged
ABSTRACT
Clinical decision support systems (CDSS) based on artificial intelligence (AI) are complex socio-technical innovations and are increasingly being used in medicine and nursing to improve the overall quality and efficiency of care, while also addressing limited financial and human resources. However, in addition to such intended clinical and organisational effects, far-reaching ethical, social and legal implications of AI-based CDSS for patient care and nursing are to be expected. To date, these normative-social implications have not been sufficiently investigated. The BMBF-funded project DESIREE (DEcision Support In Routine and Emergency HEalth Care: Ethical and Social Implications) has developed recommendations for the responsible design and use of clinical decision support systems. This article focuses primarily on ethical and social aspects of AI-based CDSS that could have a negative impact on patient health. Our recommendations are intended as additions to existing recommendations and are divided into the following action fields with relevance across all stakeholder groups: development, clinical use, information and consent, education and training, and (accompanying) research.
Subject(s)
Artificial Intelligence, Clinical Decision Support Systems, Humans, Artificial Intelligence/ethics, Artificial Intelligence/standards, Clinical Decision Support Systems/ethics, Clinical Decision Support Systems/standards, Germany, Nursing Care/ethics, Nursing Care/methods, Nursing Care/standards, Practice Guidelines as Topic, Software Design
ABSTRACT
BACKGROUND: Physicians are likely to be asked to provide medical care to relatives or friends. Evidence suggests that most physicians treat loved ones during their active years. In the academic literature, however, critical approaches to the matter dominate. Ethical guidelines often discourage physicians from treating family members and friends outside of exceptional circumstances. OBJECTIVE: This systematic review aims to identify reasons for and against treating family and friends as portrayed in the published literature. METHODS: A search string designed for the PubMed database, snowball sampling, and hand searching were used to identify potentially eligible publications. Seventy-six publications were screened for all reasons presented in favour of and against physicians treating loved ones. Qualitative content analysis was used for data extraction. Combining a deductive and an inductive approach, a coding system was developed. RESULTS: Many of the publications analysed are accounts of personal experiences; fewer report original research. Reasons against and in favour of treating family and friends were identified. Several publications specify conditions under which the treatment of loved ones may be legitimate. The reasons identified can be assigned to a micro or macro level of human interaction. CONCLUSIONS: This systematic review shows that the discourse on physicians treating loved ones is conducted predominantly in the context of personal experiences. The majority of authors seem to have a rather pragmatic interest in the topic, and systematic or analytic approaches are rare. While most authors mention various codes of ethics, several publications criticize these or consider them insufficient.
Ethical guidelines, such as the Code of Medical Ethics of the American Medical Association, ask physicians not to treat their family members and friends. However, studies show that most physicians are confronted with loved ones asking for medical interventions during their careers. The divide between the ethical guidelines and physicians' actual practice demonstrates the ethical dilemma at hand. In this systematic review, literature addressing the topic of physicians treating family and friends (PTFF) is analysed. The majority of publications voice concerns about PTFF. A common reason against PTFF is the risk of losing objectivity. Other publications endorse PTFF, mentioning, for example, the possibility of saving costs. Specific situations in which PTFF is justified are presented as well. The analysis of publications on the topic indicates a predominantly clinical rather than philosophical approach. Several authors criticize the ethical guidelines for offering too little assistance in this matter. The examination of the existing literature on PTFF suggests that, as the circumstances are very context-specific, a universal answer applying to every case of PTFF is unlikely to be found.
ABSTRACT
The medical profession is seeing a rising number of calls to action in view of the threat that climate change poses to global human health. Theory-led bioethical analyses of the scope and weight of physicians' normative duty towards climate protection and its conflict with individual patient care are currently scarce. This article offers an analysis of the normative issues at stake by using Korsgaard's neo-Kantian moral account of practical identities. We begin by making the case for physicians' duty to climate protection, before succinctly introducing Korsgaard's account. We subsequently show how the duty to climate protection can follow from physicians' identity as healthcare professionals. We structure conflicts between individual patient care and climate protection, and show how a transformation in physicians' professional ethos is possible and what mechanisms could be used to bring it about. An important limit of our analysis is that we mainly address the level of individual physicians and their practical identities, leaving out important measures to respond to climate change at the mesolevels and macrolevels of healthcare institutions and systems, respectively.
ABSTRACT
Machine learning-driven clinical decision support systems (ML-CDSSs) seem impressively promising for future routine and emergency care. However, reflection on their clinical implementation reveals a wide array of ethical challenges. The preferences, concerns and expectations of professional stakeholders remain largely unexplored. Empirical research, however, may help to clarify the conceptual debate and to identify which of its aspects are relevant for clinical practice. This study explores, from an ethical point of view, future healthcare professionals' attitudes towards potential changes in responsibility and decision-making authority when using ML-CDSS. Twenty-seven semistructured interviews were conducted with German medical students and nursing trainees. The data were analysed based on qualitative content analysis according to Kuckartz. Interviewees' reflections are presented under three themes that the interviewees describe as closely related: (self-)attribution of responsibility, decision-making authority and the need for (professional) experience. The results illustrate the conceptual interconnectedness of professional responsibility and the structural and epistemic preconditions that must be met for clinicians to fulfil their responsibility in a meaningful manner. The study also sheds light on the four relata of responsibility understood as a relational concept. The article closes with concrete suggestions for the ethically sound clinical implementation of ML-CDSS.
Subject(s)
Clinical Decision Support Systems, Humans, Prospective Studies, Empirical Research, Group Processes, Attitude of Health Personnel, Qualitative Research
ABSTRACT
The One Health approach is a prominent paradigm for research and healthcare practice and is increasingly applied in various fields. Its theoretical and normative implications, however, have so far remained underexplored, leading to conceptual incoherencies and uncertainties in the application of the concept. This article sheds light on two particularly influential theoretical flaws inherent to the One Health approach. The first difficulty relates to the question of whose health is considered in the One Health paradigm: humans and animals are obviously situated on a different level than the environment, so that the individual, population, and ecosystem dimensions need to be considered. The second theoretical flaw is related to the question of which concept of health can be meaningfully referred to when speaking of One Health. This problem is addressed by analyzing four key theoretical conceptions of health from the philosophy of medicine (well-being, natural functioning, the capacity to achieve vital goals, and homeostasis and resilience) regarding their suitability for the aims of One Health initiatives. It appears that none of the concepts analyzed fully meets the demands of an equitable consideration of human, animal, and environmental health. Potential solutions lie in accepting that one concept of health is more appropriate for some entities than for others and/or forgoing the idea of a uniform conception of health. As a result of the analysis, the authors conclude that the theoretical and normative assumptions underlying concrete One Health initiatives should be made more explicit.
Subject(s)
Ecosystem, One Health, Animals, Humans, Delivery of Health Care, Philosophy
ABSTRACT
The so-called "empirical turn" in bioethics gave rise to extensive theoretical and methodological debates and has significantly shaped the research landscape over the past two decades. Attentive observers of the evolution of the bioethical research field now notice a new trend towards the inclusion of data science methods for the treatment of ethical research questions. This new research domain of "digital bioethics" encompasses both studies replacing (or complementing) socio-empirical research on bioethical topics ("empirical digital bioethics") and argumentative approaches towards normative questions in the healthcare domain ("argumentative digital bioethics"). This article draws on insights from the debate on the "empirical turn" to sound out perspectives for the newly developing field of "digital bioethics." In particular, we discuss the disciplinary boundaries, chances and challenges, and potentially undesirable developments of the research field. The article closes with concrete suggestions on which debates need to be initiated and which measures need to be taken so that the path forward for "digital bioethics" leads to scientific success.
Subject(s)
Bioethics, Humans, Empirical Research
ABSTRACT
BACKGROUND: Healthcare providers have to make ethically complex clinical decisions, which can be a source of stress. Researchers have recently introduced Artificial Intelligence (AI)-based applications to assist in clinical ethical decision-making. However, the use of such tools is controversial. This review aims to provide a comprehensive overview of the reasons given in the academic literature for and against their use. METHODS: PubMed, Web of Science, Philpapers.org and Google Scholar were searched for all relevant publications. The resulting set of publications was screened by title and abstract according to defined inclusion and exclusion criteria, yielding 44 papers whose full texts were analysed using the Kuckartz method of qualitative text analysis. RESULTS: Artificial Intelligence might increase patient autonomy by improving the accuracy of predictions and allowing patients to receive their preferred treatment. It is thought to increase beneficence by providing reliable information, thereby supporting surrogate decision-making. Some authors fear that reducing ethical decision-making to statistical correlations may limit autonomy. Others argue that AI may not be able to replicate the process of ethical deliberation because it lacks human characteristics. Concerns have been raised about issues of justice, as AI may replicate existing biases in the decision-making process. CONCLUSIONS: The prospective benefits of using AI in clinical ethical decision-making are manifold, but its development and use should be undertaken carefully to avoid ethical pitfalls. Several issues that are central to the discussion of Clinical Decision Support Systems, such as justice, explicability or human-machine interaction, have so far been neglected in the debate on AI for clinical ethics. TRIAL REGISTRATION: This review is registered at Open Science Framework ( https://osf.io/wvcs9 ).
Subject(s)
Artificial Intelligence, Clinical Decision-Making, Humans, Beneficence
ABSTRACT
BACKGROUND: Chronic kidney disease (CKD), a major public health problem with differing disease etiologies, leads to complications, comorbidities, polypharmacy, and mortality. Monitoring disease progression and personalized treatment efforts are crucial for long-term patient outcomes. Physicians need to integrate different data levels, e.g., clinical parameters, biomarkers, and drug information, with medical knowledge. Clinical decision support systems (CDSS) can tackle these issues and improve patient management. Knowledge about the awareness and implementation of CDSS within the field of nephrology in Germany is scarce. PURPOSE: Nephrologists' attitudes towards CDSS in general and towards potential CDSS features of interest, such as adverse event prediction algorithms, are important for successful implementation. This survey investigates nephrologists' experiences with and expectations towards a useful CDSS for daily medical routine in the outpatient setting. METHODS: The 38-item questionnaire survey was conducted among nephrologists across Germany, either by telephone or as a self-administered online interview. Answers were collected and analysed using the electronic data capture system REDCap, as well as Stata SE 15.1 and Excel. The survey consisted of four modules: experiences with CDSS (M1), expectations towards a helpful CDSS (M2), evaluation of adverse event prediction algorithms (M3), and ethical aspects of CDSS (M4). Descriptive statistical analyses of all questions were conducted. RESULTS: The study population comprised 54 physicians, with a response rate of about 80-100% per question. Most participants (45.1%) were aged 51-60 years, 64% were male, and participants had been working in nephrology outpatient clinics for a median of 10.5 years. Overall, CDSS use was low (81.2%), often due to a lack of knowledge about existing CDSS. Most participants (79%) believed a CDSS would be helpful in the management of CKD patients and showed a high willingness to try one out. Of all adverse event prediction algorithms, prediction of CKD progression (97.8%) and in-silico simulations of disease progression when changing, e.g., lifestyle or medication (97.7%) were rated as most important. The spectrum of answers on ethical aspects of CDSS was diverse. CONCLUSION: This survey provides insights into the experiences and expectations of outpatient nephrologists regarding CDSS. Despite the current lack of knowledge about CDSS, the willingness to integrate CDSS into daily patient care and the perceived need for adverse event prediction algorithms were high.
Subject(s)
Clinical Decision Support Systems, Chronic Renal Insufficiency, Humans, Male, Middle Aged, Female, Nephrologists, Motivation, Chronic Renal Insufficiency/therapy, Surveys and Questionnaires, Disease Progression
ABSTRACT
Physicians frequently encounter situations in which their professional practice is intermingled with moral affordances stemming from other domains of the physician's lifeworld, such as family and friends, or from general morality pertaining to all humans. This article offers a typology of moral conflicts 'at the margins of professionalism' as well as a new theoretical framework for dealing with them. We start out by arguing that established theories of professional ethics do not offer sufficient guidance in situations where professional ethics overlaps with moral duties of other origins. Therefore, we introduce the moral theory developed by Christine M. Korsgaard, which centres on the concept of practical identity. We show how Korsgaard's account offers a framework for interpreting different types of moral conflicts 'at the margins of professionalism' so as to provide either orientation for resolving the conflict or an explanation for the emotional and moral burden involved in moral dilemmas.
ABSTRACT
Early interprofessional learning among nursing and medical students provides various benefits for future collaboration among professionals and for high-quality patient care. Expert committees therefore urge the integration of interprofessional education (IPE) into undergraduate studies to achieve significant and sustainable improvements in health-care practice. In Germany, IPE interventions are already implemented in some health-care disciplines, but Health-care Ethics is scarcely addressed in undergraduate education. There are, however, several reasons why Health-care Ethics is particularly appropriate for teaching in an interprofessional format. Thus, after reviewing the legal framework and the current curricula of both professions, an IPE course on Health-care Ethics for medical and nursing students was developed and implemented, consisting of seven classes of 180 minutes each. Drawing on the evaluation results after two rounds of the course, this interprofessional education and practice guide reports on challenges, obstacles and perspectives for improvement of an IPE course on Health-care Ethics. It aims to provide guidance for teaching pioneers and innovators who implement similar projects and to foster a practice-oriented and open discussion about the possibilities and limits of IPE in Health-care Ethics.
Subject(s)
Ethics, Nursing Students, Curriculum, Germany, Humans, Interprofessional Education, Interprofessional Relations
ABSTRACT
The development of artificial intelligence (AI) in medicine raises fundamental ethical issues. As one example, AI systems in the field of mental health successfully detect signs of mental disorders, such as depression, by using data from social media. These AI depression detectors (AIDDs) identify users who are at risk of depression prior to any contact with the healthcare system. The article focuses on the ethical implications of AIDDs for affected users' health-related autonomy. Firstly, it presents the (ethical) discussion of AI in medicine and, specifically, in mental health. Secondly, two models of AIDDs using social media data and different usage scenarios are introduced. Thirdly, the concept of patient autonomy according to Beauchamp and Childress is critically discussed. Since this concept does not sufficiently encompass the specific challenges of the digital context of AIDDs in social media, the analysis finally proposes an extended concept of health-related digital autonomy.
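The abstract refers to AI systems that detect possible signs of depression from social media data but does not describe a specific model. The short Python sketch below is a deliberately simplified, hypothetical stand-in (a TF-IDF text classifier on an invented toy dataset); it is not one of the two AIDD models discussed in the article, and its output would at most be a screening signal, never a diagnosis.

# Hypothetical, minimal illustration of an "AI depression detector" (AIDD):
# a text classifier that scores social media posts against labelled examples.
# The tiny dataset is invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "had a great day out with friends",
    "everything feels pointless lately and I cannot sleep",
    "excited about the new project starting tomorrow",
    "I feel empty and tired all the time",
]
labels = [0, 1, 0, 1]  # 1 = annotated as showing possible depressive symptoms

aidd = make_pipeline(TfidfVectorizer(), LogisticRegression())
aidd.fit(posts, labels)

# Score a new post; a high value would only flag the user for follow-up.
risk = aidd.predict_proba(["nothing really matters anymore"])[0, 1]
print("Estimated risk signal:", round(risk, 2))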
Subject(s)
Artificial Intelligence, Social Media, Delivery of Health Care, Depression, Humans, Mental Health
ABSTRACT
BACKGROUND: Machine learning-based clinical decision support systems (ML_CDSS) are increasingly employed in various sectors of health care, aiming to support clinicians' practice by matching the characteristics of individual patients with a computerised clinical knowledge base. Some studies even indicate that ML_CDSS may surpass physicians' competencies regarding specific isolated tasks. From an ethical perspective, however, the usage of ML_CDSS in medical practice touches on a range of fundamental normative issues. This article aims to add to the ethical discussion by using professionalisation theory as an analytical lens for investigating how medical action at the micro level and the physician-patient relationship might be affected by the employment of ML_CDSS. MAIN TEXT: Professionalisation theory, as a distinct sociological framework, provides an elaborated account of what constitutes client-related professional action, such as medical action, at its core and why it is more than pure expertise-based action. Professionalisation theory is introduced by presenting five general structural features of professionalised medical practice: (i) the patient has a concern; (ii) the physician deals with the patient's concern; (iii) s/he gives assistance without patronising; (iv) s/he regards the patient in a holistic manner without building up a private relationship; and (v) s/he applies her/his general expertise to the particularities of the individual case. Each of these five key aspects is then analysed regarding the usage of ML_CDSS, thereby integrating the perspectives of professionalisation theory and medical ethics. CONCLUSIONS: Using ML_CDSS in medical practice requires the physician to pay special attention to those facts of the individual case that cannot be comprehensively considered by ML_CDSS, for example, the patient's personality, life situation or cultural background. Moreover, the more routinized the use of ML_CDSS becomes in clinical practice, the more physicians need to focus on the patient's concern and strengthen patient autonomy, for instance, by adequately integrating digital decision support in shared decision-making.
Subject(s)
Clinical Decision Support Systems, Physicians, Medical Ethics, Female, Humans, Machine Learning, Physician-Patient Relations
ABSTRACT
BACKGROUND: Patient advocacy organizations (PAOs) have an increasing influence on health policy and biomedical research; therefore, questions arise about the specific character of their responsibility: Can PAOs bear moral responsibility and, if so, to whom are they responsible, for what, and on which normative basis? Although the concept of responsibility in healthcare is widely discussed, PAOs in particular have rarely been systematically analyzed as morally responsible agents. The aim of the current paper is to analyze the character of PAOs' responsibility in order to provide guidance to PAOs themselves and to other stakeholders in healthcare. METHODS: Responsibility is presented as a concept with four reference points: (1) the subject, (2) the object, (3) the addressee and (4) the underlying normative standard. This four-point relationship is applied to PAOs, and the dimensions of collectivity and prospectivity are analyzed for each reference point. RESULTS: Understood as collectives, PAOs are, in principle, capable of intentionality and able to act and, thus, fulfill one prerequisite for the attribution of moral responsibility. Given their common mission to represent those affected, PAOs can be seen as responsible for patients' representation and advocacy, primarily towards a certain group but secondarily in a broader social context. Various legal and political statements and the bioethical principles of justice, beneficence and empowerment can be used as a normative basis for attributing responsibility to PAOs. CONCLUSIONS: Understanding responsibility as a four-point relation incorporating collective and forward-looking dimensions helps to better grasp PAOs' roles and responsibilities. The analysis thus provides a basis for the debate about PAOs' contribution and cooperation in the healthcare sector.
Subject(s)
Ethical Analysis, Patient Advocacy, Beneficence, Humans, Organizations, Social Justice, Social Responsibility
ABSTRACT
The decision-making environment in intensive care units (ICUs) is influenced by the transformation of intensive care medicine, the staffing situation and the increasing importance of patient autonomy. Normative implications of time in intensive care, which affect all three areas, have so far barely been considered. The study explores patterns of decision making concerning the continuation, withdrawal and withholding of therapies in intensive care. A triangulation of qualitative data collection methods was chosen. Data were collected through non-participant observation in a surgical ICU at an academic medical centre, followed by semi-structured interviews with nurses and physicians. The transcribed interviews and observation notes were coded and analysed using qualitative content analysis according to Mayring. Three themes related to time emerged regarding the escalation or de-escalation of therapies: the influence of time on prognosis, time as a scarce resource, and timing with regard to decision making. The study also reveals the ambivalence of time as a norm for decision making. The challenge of dealing with time in ICU care results from the tension between the need to wait in order to optimise patient care and the significant time pressure that is characteristic of the ICU setting.
ABSTRACT
BACKGROUND: Proficiency in medical terminology is an essential competence of physicians which ensures reliable and unambiguous communication in everyday clinical practice. Attendance of a course on medical terminology is mandatory for human and dental medicine students in Germany. Students' prerequisites when entering the course are diverse, and the key learning objectives are achieved to varying degrees. METHODS: A new learning space, the "TERMInator", was developed at the University Medicine Greifswald to better meet medical students' individual learning needs. The interactive e-learning course serves as a supplement to the seminars, lectures and tutorials, allowing students to rehearse and practically apply the course contents at an individual pace. It uses gamification elements and is supplied via the learning platform Moodle. The TERMInator was piloted in two consecutive winter terms (2018/19, 2019/20) and comprehensively evaluated on the basis of the general course evaluations and an anonymous questionnaire covering the content, layout and user-friendliness of the TERMInator as well as questions concerning the students' learning preferences. RESULTS: The TERMInator was rated very positively overall, which was also fed back to the lecturers during the classes. Students appreciate the new e-learning tool greatly and stress that the TERMInator should be further expanded. The handling of the TERMInator was considered very easy and, therefore, almost no training time was needed. The tasks were easy to understand and considered a good supplement to the seminar contents. The extent and quality of the images were viewed more critically. The students' learning strategies differ. Although e-learning options were generally rated as very important, student tutorials were considered by far the most important. CONCLUSIONS: Medical terminology classes are characterised by heterogeneous learning groups and a high workload within a short time, which can pose major challenges for the teaching staff. Complementary gamified e-learning tools are promising in view of the students' different knowledge levels and changing learning behaviour. TRIAL REGISTRATION: Not applicable.