Results 1 - 20 of 84
1.
AJOB Empir Bioeth ; : 1-8, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39250770

ABSTRACT

BACKGROUND: Some have hypothesized that talk about suffering can be used by clinicians to motivate difficult decisions, especially to argue for reducing treatment at the end of life. We examined how talk about suffering is related to decision-making for critically ill patients by evaluating transcripts of conversations between clinicians and patients' families. METHODS: We conducted a secondary qualitative content analysis of audio-recorded family meetings from a multicenter trial conducted in the adult intensive care units of five hospitals from 2012-2017 to look at how the term "suffering" and its variants were used. A coding guide was developed through consensus-oriented discussion among four members of the research team. Two coders independently evaluated each transcript. We followed an inductive approach to data analysis in reviewing transcripts; findings were iteratively discussed among study authors until consensus on key themes was reached. RESULTS: Of 146 available transcripts, 34 (23%) contained the word "suffer" or "suffering" at least once, with 58 distinct uses. Clinicians contributed 62% of first uses. Among uses describing the suffering of persons, 57% (n = 24) were related to a decision, but only 42% (n = 10) of decision-relevant uses accompanied a proposal to limit treatment, and only half of treatment-limiting uses (n = 5) were initiated by clinicians. The target terms had a variety of implicit meanings, including poor prognosis, reduced functioning, pain, discomfort, low quality of life, and emotional distress. Suffering was frequently attributed to persons who were unconscious. CONCLUSIONS: Our results did not support the claim that the term "suffering" and its variants are used primarily by clinicians to justify limiting treatment, and the terms were not commonly used in our sample when decisions were requested. Still, when these terms were used, they were often used in a decision-relevant fashion.

3.
Am J Bioeth ; : 1-13, 2024 Sep 17.
Article in English | MEDLINE | ID: mdl-39288291

ABSTRACT

Given the need for enforceable guardrails for artificial intelligence (AI) that protect the public and allow for innovation, the U.S. Government recently issued a Blueprint for an AI Bill of Rights, which outlines five principles of safe AI design, use, and implementation. One in particular, the right to notice and explanation, requires accurately informing the public about the use of AI that impacts them in ways that are easy to understand. Yet, in the healthcare setting, it is unclear what goal the right to notice and explanation serves and what moral importance patient-level disclosure carries. We propose three normative functions of this right: (1) to notify patients about their care, (2) to educate patients and promote trust, and (3) to meet standards for informed consent. Additional clarity is needed to guide practices that respect the right to notice and explanation of AI in healthcare while providing meaningful benefits to patients.

4.
Patient Educ Couns ; 130: 108418, 2024 Sep 11.
Article in English | MEDLINE | ID: mdl-39288559

ABSTRACT

OBJECTIVE: To assess stakeholders' perspectives on integrating personalized risk scores (PRS) into left ventricular assist device (LVAD) implantation decisions and how these perspectives might impact shared decision making (SDM). METHODS: We conducted 40 in-depth interviews with physicians, nurse coordinators, patients, and caregivers about integrating PRS into LVAD implantation decisions. A codebook was developed to identify thematic patterns, and quotations were consolidated for analysis. We used Thematic Content Analysis in MAXQDA software to identify themes by abstracting relevant quotes. RESULTS: Clinicians had varying preferences regarding PRS integration into LVAD decision making, while patients and caregivers preferred real-time discussions about PRS with their physicians. Physicians voiced concerns about time constraints and suggested delegating PRS discussions to advanced practice providers or nurse coordinators. CONCLUSIONS: Integrating PRS information into LVAD decision aids presents both opportunities and challenges for SDM. Given variable preferences among clinicians and patients, clinicians should elicit patients' desired role in the decision-making process. Addressing time constraints and ensuring patient-centered care will be crucial for optimizing SDM. PRACTICE IMPLICATIONS: Clinicians should elicit patient preferences for PRS information disclosure and address challenges, such as time constraints and delegation of PRS discussions to other team members.

5.
J Med Ethics ; 50(10): 655, 2024 Sep 20.
Article in English | MEDLINE | ID: mdl-39304296
6.
AJOB Empir Bioeth ; : 1-10, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39250769

ABSTRACT

INTRODUCTION: Deep brain stimulation (DBS) is approved under a humanitarian device exemption to manage treatment-resistant obsessive-compulsive disorder (TR-OCD) in adults. It is possible that DBS may be trialed or used clinically off-label in children and adolescents with TR-OCD in the future. DBS is already used to manage treatment-resistant childhood dystonia, and evidence suggests it is a safe and effective intervention for certain types of dystonia. Important questions remain unanswered about the use of DBS in children and adolescents with TR-OCD, including whether mental health clinicians would refer pediatric patients for DBS, and who would be a good candidate for DBS. OBJECTIVES: To explore mental health clinicians' views on what clinical and psychosocial factors they would consider when determining which children with OCD would be good DBS candidates. MATERIALS AND METHODS: In-depth, semi-structured interviews were conducted with n = 25 mental health clinicians who treat pediatric patients with OCD. The interviews were transcribed, coded, and analyzed using thematic content analysis. Three questions focusing on key clinical and psychosocial factors for assessing candidacy were analyzed to explore respondent views on candidacy factors. Our analysis details nine overarching themes expressed by clinicians, namely the patient's previous OCD treatment, OCD severity, motivation to commit to treatment, presence of comorbid conditions, family environment, education on DBS, quality of life, accessibility to treatment, and patient age and maturity. CONCLUSIONS: Clinicians generally saw considering DBS treatment in youth as a last resort and only for very specific cases. DBS referral was predominantly viewed as acceptable for children with severe TR-OCD who have undertaken intensive, appropriate treatment without success, whose OCD has significantly reduced their quality of life, and who exhibit strong motivation to continue treatment given the right environment. Appropriate safeguards, eligibility criteria, and procedures should be discussed and identified before DBS for childhood TR-OCD becomes practice.

8.
Am J Bioeth ; : 1-8, 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38842351

ABSTRACT

"Suffering" is a central concept within bioethics and often a crucial consideration in medical decision making. As used in practice, however, the concept risks being uninformative, ambiguous, or even misleading. In this paper, we consider a series of cases in which "suffering" is invoked and analyze them in light of prominent theories of suffering. We then outline ethical hazards that arise as a result of imprecise usage of the concept and offer practical recommendations for avoiding them. Appeals to suffering are often getting at something ethically important. But this is where the work of ethics begins, not where it ends.

9.
J Med Ethics ; 50(10): 670-675, 2024 Sep 20.
Article in English | MEDLINE | ID: mdl-38749651

ABSTRACT

The idea of a 'right to mental integrity', sometimes referred to as a 'right against mental interference,' is a relatively new concept in bioethics, making its way into debates about neurotechnological advances and the establishment of 'neurorights.' In this paper, we interrogate the idea of a right to mental integrity. First, we argue that some experts define the right to mental integrity so broadly that rights violations become ubiquitous, thereby trivialising some of the very harms the concept is meant to address. Second, rights-based framing results in an overemphasis on the normative importance of consent, implying that neurointerventions are permissible in cases where people consent to have their mental states influenced or read off, a confidence in consent that we argue is misguided. Third, the concept often collapses the ethics of brain inputs and brain outputs, potentially resulting in a loss of important conceptual nuance. Finally, we argue that the concept of a right to mental integrity is superfluous: what is wrong with most violations of mental integrity can be explained by existing concepts such as autonomy, manipulation, privacy, bodily rights, surveillance, harm and exploitation of vulnerabilities. We conclude that bioethicists and policy-makers ought to either make use of these concepts rather than arguing for the existence of a new right, or they need to avoid making rights violations ubiquitous by settling on a narrower and more rigorous definition of the right.


Subject(s)
Human Rights, Informed Consent, Humans, Informed Consent/ethics, Personal Autonomy, Bioethics, Brain
12.
Camb Q Healthc Ethics ; : 1-14, 2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38602092

ABSTRACT

The ongoing debate within neuroethics concerning the degree to which neuromodulation such as deep brain stimulation (DBS) changes the personality, identity, and agency (PIA) of patients has paid relatively little attention to the perspectives of prospective patients. Even less attention has been given to pediatric populations. To understand patients' views about identity changes due to DBS in obsessive-compulsive disorder (OCD), the authors conducted and analyzed semistructured interviews with adolescent patients with OCD and their parents/caregivers. Patients were asked about projected impacts on PIA generally due to DBS. All patient respondents and half of caregivers reported that DBS would impact patient self-identity in significant ways. For example, many patients expressed how DBS could positively impact identity by allowing them to explore their identities free from OCD. Others voiced concerns that DBS-related resolution of OCD might negatively impact patient agency and authenticity. Half of patients expressed that DBS may positively facilitate social access through relieving symptoms, while half indicated that DBS could increase social stigma. These views give insights into how to approach decision-making and informed consent if DBS for OCD becomes available for adolescents. They also offer insights into adolescent experiences of disability identity and "normalcy" in the context of OCD.

13.
Front Hum Neurosci ; 18: 1332451, 2024.
Article in English | MEDLINE | ID: mdl-38435745

ABSTRACT

Background: Artificial intelligence (AI)-based computer perception technologies (e.g., digital phenotyping and affective computing) promise to transform clinical approaches to personalized care in psychiatry and beyond by offering more objective measures of emotional states and behavior, enabling precision treatment, diagnosis, and symptom monitoring. At the same time, the passive and continuous nature by which they often collect data from patients in non-clinical settings raises ethical issues related to privacy and self-determination. Little is known about how such concerns may be exacerbated by the integration of neural data, as parallel advances in computer perception, AI, and neurotechnology enable new insights into subjective states. Here, we present findings from a multi-site NCATS-funded study of ethical considerations for translating computer perception into clinical care and contextualize them within the neuroethics and neurorights literatures. Methods: We conducted qualitative interviews with patients (n = 20), caregivers (n = 20), clinicians (n = 12), developers (n = 12), and clinician developers (n = 2) regarding their perspectives on using computer perception in clinical care. Transcripts were analyzed in MAXQDA using Thematic Content Analysis. Results: Stakeholder groups voiced concerns related to (1) perceived invasiveness of passive and continuous data collection in private settings; (2) data protection and security and the potential for negative downstream/future impacts on patients of unintended disclosure; and (3) ethical issues related to patients' limited versus hyper awareness of passive and continuous data collection and monitoring. Clinicians and developers highlighted that these concerns may be exacerbated by the integration of neural data with other computer perception data.
Discussion: Our findings suggest that the integration of neurotechnologies with existing computer perception technologies raises novel concerns around dignity-related and other harms (e.g., stigma, discrimination) that stem from data security threats and the growing potential for reidentification of sensitive data. Further, our findings suggest that patients' awareness and preoccupation with feeling monitored via computer sensors ranges from hypo- to hyper-awareness, with either extreme accompanied by ethical concerns (consent vs. anxiety and preoccupation). These results highlight the need for systematic research into how best to implement these technologies into clinical care in ways that reduce disruption, maximize patient benefits, and mitigate long-term risks associated with the passive collection of sensitive emotional, behavioral, and neural data.

14.
Am J Bioeth ; 24(1): 3-12, 2024 Jan.
Article in English | MEDLINE | ID: mdl-36635972

ABSTRACT

The concept of personhood has been central to bioethics debates about abortion, the treatment of patients in vegetative or minimally conscious states, and patients with advanced dementia. More recently, the concept has been employed to think about new questions related to human-brain organoids, artificial intelligence, uploaded minds, human-animal chimeras, and human embryos, to name a few. A common move has been to ask what these entities have in common with persons (in the normative sense), and then draw conclusions about what we do (or do not) owe them. This paper argues that at best the concept of "personhood" is unhelpful to much of bioethics today and at worst it is harmful and pernicious. I suggest that we (bioethicists) stop using the concept of personhood and instead ask normative questions more directly (e.g., how ought we to treat this being and why?) and use other philosophical concepts (e.g., interests, sentience, recognition respect) to help us answer them. It is time for bioethics to end talk about personhood.


Subject(s)
Induced Abortion, Bioethics, Pregnancy, Female, Animals, Humans, Personhood, Artificial Intelligence, Moral Obligations
15.
AJOB Neurosci ; 15(1): 51-58, 2024.
Article in English | MEDLINE | ID: mdl-37379054

ABSTRACT

Questions about when to limit unhelpful treatments are often raised in general medicine but are less commonly considered in psychiatry. Here we describe a survey of U.S. psychiatrists intended to characterize their attitudes about the management of suicidal ideation in patients with severely treatment-refractory illness. Respondents (n = 212) received one of two cases describing a patient with suicidal ideation due to either borderline personality disorder or major depressive disorder. Both patients were described as receiving all guideline-based and plausible emerging treatments. Respondents rated the expected helpfulness and likelihood of recommending each of four types of intervention: hospitalization, additional medication changes, additional neurostimulation, and additional psychotherapy. Across both cases, most respondents said they were likely to provide each intervention, except for additional neurostimulation in borderline personality disorder, while fewer thought each intervention would be helpful. Substantial minorities of respondents indicated that they would provide an intervention they did not think was likely to be helpful. Our results suggest that while most psychiatrists recognize the possibility that some patients are unlikely to be helped by available treatments, many would continue to offer such treatments.


Subject(s)
Major Depressive Disorder, Psychiatry, Humans, Psychiatrists, Major Depressive Disorder/therapy, Psychotherapy/methods, Patient Care
16.
J Med Ethics ; 2023 Nov 18.
Article in English | MEDLINE | ID: mdl-37979976

ABSTRACT

Rapid advancements in artificial intelligence and machine learning (AI/ML) in healthcare raise pressing questions about how much users should trust AI/ML systems, particularly for high stakes clinical decision-making. Ensuring that user trust is properly calibrated to a tool's computational capacities and limitations has both practical and ethical implications, given that overtrust or undertrust can influence over-reliance or under-reliance on algorithmic tools, with significant implications for patient safety and health outcomes. It is, thus, important to better understand how variability in trust criteria across stakeholders, settings, tools and use cases may influence approaches to using AI/ML tools in real settings. As part of a 5-year, multi-institutional Agency for Healthcare Research and Quality-funded study, we identify trust criteria for a survival prediction algorithm intended to support clinical decision-making for left ventricular assist device therapy, using semistructured interviews (n=40) with patients and physicians, analysed via thematic analysis. Findings suggest that physicians and patients share similar empirical considerations for trust, which were primarily epistemic in nature, focused on accuracy and validity of AI/ML estimates. Trust evaluations considered the nature, integrity and relevance of training data rather than the computational nature of algorithms themselves, suggesting a need to distinguish 'source' from 'functional' explainability. To a lesser extent, trust criteria were also relational (endorsement from others) and sometimes based on personal beliefs and experience. We discuss implications for promoting appropriate and responsible trust calibration for clinical decision-making using AI/ML.

17.
Article in English | MEDLINE | ID: mdl-37781644

ABSTRACT

Approximately 10-20% of children with obsessive-compulsive disorder (OCD) have treatment-resistant presentations, and there is likely to be interest in developing interventions for this patient group, which may include deep brain stimulation (DBS). The World Society for Stereotactic and Functional Neurosurgery has argued that at least two successful randomized controlled trials should be available before DBS treatment for a psychiatric disorder is considered "established." The FDA approved DBS for adults with treatment-resistant OCD under a humanitarian device exemption (HDE) in 2009, which requires that a device be used to manage or treat a condition impacting 8,000 or fewer patients annually in the United States. DBS is currently offered to children ages 7 and older with treatment-resistant dystonia under an HDE. Ethical and empirical work is needed to evaluate whether and under what conditions it might be appropriate to offer DBS for treatment-resistant childhood OCD. To address this gap, we report qualitative data from semi-structured interviews with 25 clinicians with expertise in this area. First, we report clinician perspectives on acceptable levels of evidence to offer DBS in this patient population. Second, we describe their perspectives on institutional policies or protocols that might be needed to effectively provide care for this patient population.

18.
Neuroethics ; 16(3)2023 Oct.
Article in English | MEDLINE | ID: mdl-37905206

ABSTRACT

Introduction: Deep brain stimulation (DBS) is utilized to treat pediatric refractory dystonia, and its use in pediatric patients is expected to grow. One important question concerns the impact of hope and unrealistic optimism on decision-making, especially in "last resort" intervention scenarios such as DBS for refractory conditions. Objective: This study examined stakeholder experiences and perspectives on hope and unrealistic optimism in the context of decision-making about DBS for childhood dystonia and provides insights for clinicians seeking to implement effective communication strategies. Materials and Methods: Semi-structured interviews with clinicians (n = 29) and caregivers (n = 44) were conducted, transcribed, and coded. Results: Using thematic content analysis, four major themes from clinician interviews and five major themes from caregiver interviews related to hopes and expectations were identified. Clinicians expressed concerns about caregiver false hopes (86%, 25/29) and desperation (68.9%, 20/29) in light of DBS being a last resort. As a result, 68.9% of clinicians (20/29) expressed that they intentionally tried to lower caregiver expectations about DBS outcomes. Clinicians also expressed concern that, on the flip side, unrealistic pessimism drives away some patients who might otherwise benefit from DBS (34.5%, 10/29). Caregivers viewed DBS as the last option that they had to try (61.3%, 27/44), and 73% of caregivers (32/44) viewed themselves as having high hopes but reasonable expectations. Fewer than half (43%, 19/44) expressed that they struggled to set outcome expectations due to the uncertainty of DBS, and 50% of post-DBS caregivers (14/28) expressed some negative feelings post treatment due to unmet expectations. Forty-three percent of caregivers (19/44) had experiences with clinicians who tried to set low expectations about the potential benefits of DBS. Conclusion: Thoughtful clinician-stakeholder discussion is needed to ensure realistic outcome expectations.

20.
Am J Bioeth ; 23(10): 17-27, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37487184

ABSTRACT

In this paper, we contend with whether we still need traditional ethics education as part of healthcare professional training given the abilities of ChatGPT (generative pre-trained transformer) and other large language models (LLMs). We reflect on common programmatic goals to assess the current strengths and limitations of LLMs in helping to build ethics competencies among future clinicians. Through an actual case analysis, we highlight areas in which ChatGPT and other LLMs are conducive to common bioethics education goals. We also comment on where such technologies remain an imperfect substitute for human-led ethics teaching and learning. Finally, we conclude that the relative strengths of ChatGPT warrant its consideration as a teaching and learning tool in ethics education in ways that account for current limitations and build in flexibility as the technology evolves.
