Results 1 - 5 of 5
1.
Hastings Cent Rep ; 54(3): 28-34, 2024 May.
Article in English | MEDLINE | ID: mdl-38842853

ABSTRACT

In 1971, two years before Roe v. Wade affirmed federal protection for abortion, Judith Jarvis Thomson attempted to demonstrate the wrongs of forced gestation through analogy: you awake to find that the world's most esteemed violinist is wholly, physically dependent on you for life support. Here, the authors suggest that Thomson's intuition, that there is a relevant similarity between providing living kidney support and forced gestation, is realized in the contemporary practice of living organ donation. After detailing the robust analogy between living kidney donation and gestation, we turn to current ethical guidelines incorporated in the United Network for Organ Sharing's requirements for legally authorized organ donation and transplantation. We conclude that if, as we (and Thomson) suggest, organ donation and gestation are relevantly similar, then the ethical framework supporting donation may aid in articulating ethical grounds that will be compelling in informing the legal grounds for a defense of abortion.


Subject(s)
Abortion, Induced , Tissue and Organ Procurement , Humans , Tissue and Organ Procurement/ethics , Tissue and Organ Procurement/legislation & jurisprudence , Abortion, Induced/ethics , Abortion, Induced/legislation & jurisprudence , Female , Pregnancy , United States , Living Donors/ethics , Kidney Transplantation/ethics , Organ Transplantation/ethics
2.
Chest ; 2024 May 22.
Article in English | MEDLINE | ID: mdl-38788895

ABSTRACT

Artificial intelligence (AI) is being used increasingly in health care, and without an ethically supportable, standard approach to knowing when patients should be informed about AI, hospital systems and clinicians run the risk of fostering mistrust among their patients and the public. Therefore, hospital leaders need guidance on when to tell patients about the use of AI in their care. In this article, we provide such guidance. To determine which AI technologies fall into each of the identified categories (no notification or no informed consent [IC], notification only, and formal IC), we propose that AI use-cases should be evaluated using the following criteria: (1) AI model autonomy, (2) departure from standards of practice, (3) whether the AI model is patient facing, (4) clinical risk introduced by the model, and (5) administrative burdens. We take each of these in turn, using a case example of AI in health care to illustrate our proposed framework. As AI becomes more commonplace in health care, our proposal may serve as a starting point for creating consensus on standards for notification and IC for the use of AI in patient care.

3.
J Clin Ethics ; 33(1): 50-57, 2022.
Article in English | MEDLINE | ID: mdl-35302519

ABSTRACT

In this article, we discuss the case of Michael Johnson, an African-American man who sought treatment for respiratory distress due to COVID-19, but who was adamant that he did not want to be intubated due to his belief that ventilators directly cause death. This case prompted reflection about the ways in which a false belief can create uncertainty and complexity for clinicians who are responsible for evaluating decision-making capacity (DMC). In our analysis, we consider the extent to which Mr. Johnson demonstrated capacity according to each of Appelbaum's criteria.1 Although it was fairly clear that Mr. Johnson lacked DMC on the basis of both understanding and appreciation, we found ourselves reflecting upon the false belief that seemed to motivate his refusal. This led us to further consider the ways in which our current social and political environment can complicate evaluations of patients' preferences and reasons for declining life-sustaining interventions. In particular, we consider the role of misinformation and systemic racism in preparing the grounds for false beliefs.


Subject(s)
Decision Making , Health Knowledge, Attitudes, Practice , Mental Competency , Black or African American/psychology , COVID-19/ethnology , COVID-19/therapy , Health Knowledge, Attitudes, Practice/ethnology , Humans , Life Support Care , Male , Treatment Refusal/ethnology , Ventilators, Mechanical
5.
Theor Med Bioeth ; 41(2-3): 67-82, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32333140

ABSTRACT

Within the evidence-based medicine (EBM) construct, clinical expertise is acknowledged to be both derived from primary experience and necessary for optimal medical practice. Primary experience in medical practice, however, remains undervalued. Clinicians' primary experience tends to be dismissed by EBM as unsystematic or anecdotal, a source of bias rather than knowledge, never serving as the "best" evidence to support a clinical decision. The position that clinical expertise is necessary but that primary experience is untrustworthy in clinical decision-making is epistemically incoherent. Here we argue for the value and utility of knowledge gained from primary experience for the practice of medicine. Primary experience provides knowledge necessary to diagnose, treat, and assess response in individual patients. Hierarchies of evidence, when advanced as guides for clinical decisions, mistake the relationship between propositional and experiential knowledge. We argue that primary experience represents a kind of medical knowledge distinct from the propositional knowledge produced by clinical research, both of which are crucial to determining the best diagnosis and course of action for particular patients.


Subject(s)
Clinical Competence/standards , Knowledge , Problem-Based Learning/standards , Humans , Problem-Based Learning/methods