1.
Hum Genomics ; 18(1): 45, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38720401

ABSTRACT

BACKGROUND: Implementing genomic sequencing into newborn screening programs allows for significant expansion in the number and scope of conditions detected. We sought to explore public preferences and perspectives on which conditions to include in genomic newborn screening (gNBS). METHODS: We recruited English-speaking members of the Australian public over 18 years of age, using social media, and invited them to participate in online focus groups. RESULTS: Seventy-five members of the public aged 23-72 participated in one of fifteen focus groups. Participants agreed that if prioritisation of conditions was necessary, childhood-onset conditions were more important to include than later-onset conditions. Despite the purpose of the focus groups being to elicit public preferences, participants wanted to defer to others, such as health professionals or those with a lived experience of each condition, to make decisions about which conditions to include. Many participants saw benefit in including conditions with no available treatment. Participants agreed that gNBS should be fully publicly funded. CONCLUSION: How many and which conditions are included in a gNBS program will be a complex decision requiring detailed assessment of benefits and costs alongside public and professional engagement. Our study provides support for implementing gNBS for treatable childhood-onset conditions.


Subject(s)
Neonatal Screening , Humans , Infant, Newborn , Australia , Adult , Female , Male , Middle Aged , Aged , Genomics , Focus Groups , Public Opinion , Genetic Testing , Young Adult
2.
PLoS Comput Biol ; 20(3): e1011933, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38512898

ABSTRACT

This perspective is part of an international effort to improve epidemiological models with the goal of reducing the unintended consequences of infectious disease interventions. The scenarios in which models are applied often involve difficult trade-offs that are well recognised in public health ethics. Unless these trade-offs are explicitly accounted for, models risk overlooking contested ethical choices and values, leading to an increased risk of unintended consequences. We argue that such risks could be reduced if modellers were more aware of ethical frameworks and had the capacity to explicitly account for the relevant values in their models. We propose that public health ethics can provide a conceptual foundation for developing this capacity. After reviewing relevant concepts in public health and clinical ethics, we discuss examples from the COVID-19 pandemic to illustrate the current separation between public health ethics and infectious disease modelling. We conclude by describing practical steps to build the capacity for ethically aware modelling. Developing this capacity constitutes a critical step towards ethical practice in computational modelling of public health interventions, which will require collaboration with experts on public health ethics, decision support, behavioural interventions, and social determinants of health, as well as direct consultation with communities and policy makers.


Subject(s)
Communicable Diseases , Pandemics , Humans , Pandemics/prevention & control , Public Health , Communicable Diseases/epidemiology , Computer Simulation
3.
Am J Transplant ; 24(6): 918-927, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38514013

ABSTRACT

Xenotransplantation offers the potential to meet the critical need for heart and lung transplantation presently constrained by the current human donor organ supply. Much was learned over the past decades regarding gene editing to prevent the immune activation and inflammation that cause early organ injury, and strategies for maintenance of immunosuppression to promote longer-term xenograft survival. However, many scientific questions remain regarding further requirements for genetic modification of donor organs, appropriate contexts for xenotransplantation research (including nonhuman primates, recently deceased humans, and living human recipients), and risk of xenozoonotic disease transmission. Related ethical questions include the appropriate selection of clinical trial participants, challenges with obtaining informed consent, animal rights and welfare considerations, and cost. Research involving recently deceased humans has also emerged as a potentially novel way to understand how xeno-organs will impact the human body. Clinical xenotransplantation and research involving decedents also raise ethical questions and will require consensus regarding regulatory oversight and protocol review. These considerations and the related opportunities for xenotransplantation research were discussed in a workshop sponsored by the National Heart, Lung, and Blood Institute, and are summarized in this meeting report.


Subject(s)
Heart Transplantation , Lung Transplantation , Transplantation, Heterologous , Transplantation, Heterologous/ethics , Humans , Lung Transplantation/ethics , Animals , United States , Heart Transplantation/ethics , National Heart, Lung, and Blood Institute (U.S.) , Biomedical Research/ethics , Tissue Donors/supply & distribution , Tissue Donors/ethics
4.
J Med Ethics ; 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38429089

ABSTRACT

Stem cell-derived embryo models (SCEMs) are model embryos used in scientific research to gain a better understanding of early embryonic development. The way humans develop from a single-cell zygote to a complex multicellular organism remains poorly understood. However, research looking at embryo development is difficult because of restrictions on the use of human embryos in research. Stem cell embryo models could reduce the need for human embryos, allowing us to both understand early development and improve assisted reproductive technologies. There have been several rapid advances in creating SCEMs in recent years. These advances potentially provide a new avenue to study early human development. The benefits of SCEMs are predicated on the claim that they are different from embryos and should, therefore, be exempt from existing regulations that apply to embryos (such as the 14-day rule). SCEMs are proposed as offering a model that can capture the inner workings of the embryo but lack its moral sensitivities. However, the ethical basis for making this distinction has not been clearly explained. In this current controversy, we focus on the ethical justification for treating SCEMs differently to embryos, based on considerations of moral status.

5.
J Med Ethics ; 2024 Jul 18.
Article in English | MEDLINE | ID: mdl-39025642

ABSTRACT

The Supreme Court of the United States has recently been petitioned to revisit legal issues pertaining to the lawfulness of imposing a vaccine mandate on individuals with proof of natural immunity during the COVID-19 pandemic. While the petition accepts that the protection of public health during COVID-19 was an important governmental interest, the petitioners maintain that the imposition of a vaccine mandate on individuals with natural immunity was not 'substantially related' to accomplishing that purpose. In this short report, we outline how some of the petition's general arguments interact with points we raised in a 2022 article in this journal defending natural immunity exemptions, in light of new evidence. In particular, we reflect on new evidence pertaining to differences between vaccine-induced immunity, natural immunity, and so-called 'hybrid' immunity. We suggest that the nuanced nature of this evidence highlights the importance of making fine-grained judgements about proportionality and necessity when considering vaccine mandates. We conclude by claiming that if future pandemics necessitate the imposition of vaccine mandates, then those seeking to justify them should clearly articulate the relevance (and the evidence) for the comparative protection of vaccine-induced, natural, and hybrid immunity.

6.
Am J Bioeth ; 24(7): 13-26, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38226965

ABSTRACT

When making substituted judgments for incapacitated patients, surrogates often struggle to guess what the patient would want if they had capacity. Surrogates may also agonize over having the (sole) responsibility of making such a determination. To address such concerns, a Patient Preference Predictor (PPP) has been proposed that would use an algorithm to infer the treatment preferences of individual patients from population-level data about the known preferences of people with similar demographic characteristics. However, critics have suggested that even if such a PPP were more accurate, on average, than human surrogates in identifying patient preferences, the proposed algorithm would nevertheless fail to respect the patient's (former) autonomy since it draws on the 'wrong' kind of data: namely, data that are not specific to the individual patient and which therefore may not reflect their actual values, or their reasons for having the preferences they do. Taking such criticisms on board, we here propose a new approach: the Personalized Patient Preference Predictor (P4). The P4 is based on recent advances in machine learning, which allow technologies including large language models to be more cheaply and efficiently 'fine-tuned' on person-specific data. The P4, unlike the PPP, would be able to infer an individual patient's preferences from material (e.g., prior treatment decisions) that is in fact specific to them. Thus, we argue, in addition to being potentially more accurate at the individual level than the previously proposed PPP, the predictions of a P4 would also more directly reflect each patient's own reasons and values. In this article, we review recent discoveries in artificial intelligence research that suggest a P4 is technically feasible, and argue that, if it is developed and appropriately deployed, it should assuage some of the main autonomy-based concerns of critics of the original PPP. We then consider various objections to our proposal and offer some tentative replies.


Subject(s)
Judgment , Patient Preference , Humans , Personal Autonomy , Algorithms , Machine Learning/ethics , Decision Making/ethics
7.
Bioethics ; 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38887844

ABSTRACT

This article objects to two arguments that William MacAskill gives in What We Owe the Future in support of optimism about the prospects of longtermism, that is, the prospects of positively influencing the long-term future. First, it grants that he is right that, whereas humans sometimes benefit others as an end, they rarely harm them as an end, but argues that this bias towards positive motivation is counteracted by the fact that it is practically easier to harm than to benefit. For this greater easiness makes it likely both that accidental effects will be harmful rather than beneficial and that the means or side-effects of the actions people perform with the aim of benefiting themselves and those close to them will tend to be harmful to others. Secondly, while our article agrees with him that values could lock in, it contends that the value of longtermism is unlikely to lock in as long as human beings have not been morally enhanced but remain partial in favor of themselves and those near and dear.

8.
Bioethics ; 38(5): 391-400, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38554069

ABSTRACT

Machine-learning algorithms have the potential to revolutionise diagnostic and prognostic tasks in health care, yet algorithmic performance levels can be materially worse for subgroups that have been underrepresented in algorithmic training data. Given this epistemic deficit, the inclusion of underrepresented groups in algorithmic processes can result in harm. Yet delaying the deployment of algorithmic systems until more equitable results can be achieved would avoidably and foreseeably lead to a significant number of unnecessary deaths in well-represented populations. Faced with this dilemma between equity and utility, we draw on two case studies involving breast cancer and melanoma to argue for the selective deployment of diagnostic and prognostic tools for some well-represented groups, even if this results in the temporary exclusion of underrepresented patients from algorithmic approaches. We argue that this approach is justifiable when the inclusion of underrepresented patients would cause them to be harmed. While the context of historic injustice poses a considerable challenge for the ethical acceptability of selective algorithmic deployment strategies, we argue that, at least for the case studies addressed in this article, the issue of historic injustice is better addressed through nonalgorithmic measures, including being transparent with patients about the nature of the current epistemic deficits, providing additional services to algorithmically excluded populations, and through urgent commitments to gather additional algorithmic training data from excluded populations, paving the way for universal algorithmic deployment that is accurate for all patient groups. These commitments should be supported by regulation and, where necessary, government funding to ensure that any delays for excluded groups are kept to the minimum. We offer an ethical algorithm for algorithms, showing when to ethically delay, expedite, or selectively deploy algorithmic systems in healthcare settings.


Subject(s)
Algorithms , Artificial Intelligence , Humans , Female , Artificial Intelligence/ethics , Breast Neoplasms , Melanoma , Delivery of Health Care/ethics , Machine Learning/ethics , Social Justice , Prognosis
9.
Sci Eng Ethics ; 30(1): 3, 2024 Feb 05.
Article in English | MEDLINE | ID: mdl-38315257

ABSTRACT

Human brain organoids are three-dimensional masses of tissue derived from human stem cells that partially recapitulate the characteristics of the human brain. They have promising applications in many fields, from basic research to applied medicine. However, ethical concerns have been raised regarding the use of human brain organoids. These concerns primarily relate to the possibility that brain organoids may become conscious in the future. This possibility is associated with uncertainties about whether and in what sense brain organoids could have consciousness and what the moral significance of that would be. These uncertainties raise further concerns regarding consent from stem cell donors who may not be sufficiently informed to provide valid consent to the use of their donated cells in human brain organoid research. Furthermore, the possibility of harm to the brain organoids raises questions about the scope of the donor's autonomy in consenting to research involving these entities. Donor consent does not establish the reasonableness of the risk and harms to the organoids, which ethical oversight must ensure by establishing some measures to mitigate them. To address these concerns, we provide three proposals for the consent procedure for human brain organoid research. First, it is vital to obtain project-specific consent rather than broad consent. Second, donors should be assured that appropriate measures will be taken to protect human brain organoids during research. Lastly, these assurances should be fulfilled through the implementation of precautionary measures. These proposals aim to enhance the ethical framework surrounding human brain organoid research.


Subject(s)
Brain , Consciousness , Humans , Tissue Donors , Organoids , Informed Consent
10.
Sci Eng Ethics ; 30(4): 28, 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39012561

ABSTRACT

The rapidly advancing field of brain-computer interfaces (BCIs) and brain-to-brain interfaces (BBIs) is stimulating interest across various sectors including medicine, entertainment, research, and military. The developers of large-scale brain-computer networks, sometimes dubbed 'Mindplexes' or 'Cloudminds', aim to enhance cognitive functions by distributing them across expansive networks. A key technical challenge is the efficient transmission and storage of information. One proposed solution is employing blockchain technology over Web 3.0 to create decentralised cognitive entities. This paper explores the potential of a decentralised web for coordinating large brain-computer constellations, and its associated benefits, focusing in particular on the conceptual and ethical challenges this innovation may pose pertaining to (1) Identity, (2) Sovereignty (encompassing Autonomy, Authenticity, and Ownership), (3) Responsibility and Accountability, and (4) Privacy, Safety, and Security. We suggest that while a decentralised web can address some concerns and mitigate certain risks, underlying ethical issues persist. Fundamental questions about entity definition within these networks, the distinctions between individuals and collectives, and responsibility distribution within and between networks, demand further exploration.


Subject(s)
Brain-Computer Interfaces , Internet , Personal Autonomy , Privacy , Humans , Brain-Computer Interfaces/ethics , Social Responsibility , Blockchain/ethics , Computer Security/ethics , Ownership/ethics , Politics , Cognition , Safety , Technology/ethics
11.
Health Care Anal ; 2024 Jan 12.
Article in English | MEDLINE | ID: mdl-38214808

ABSTRACT

This paper explores the dilemma faced by mental healthcare professionals in balancing treatment of mental disorders with promoting patient well-being and flourishing. With growing calls for a more explicit focus on patient flourishing in mental healthcare, we address two inter-related challenges: the lack of consensus on defining positive mental health and flourishing, and how professionals should respond to patients with controversial views on what is good for them. We discuss the relationship dynamics between healthcare providers and patients, proposing that 'liberal' approaches can provide a pragmatic framework to address disagreements about well-being in the context of flourishing-oriented mental healthcare. We acknowledge the criticisms of these approaches, including the potential for unintended paternalism and distrust. To mitigate these risks, we conclude by suggesting a mechanism to minimize the likelihood of unintended paternalism and foster patient trust.

12.
Camb Q Healthc Ethics ; : 1-13, 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38606432

ABSTRACT

Advances in brain-brain interface technologies raise the possibility that two or more individuals could directly link their minds, sharing thoughts, emotions, and sensory experiences. This paper explores conceptual and ethical issues posed by such mind-merging technologies in the context of clinical neuroethics. Using hypothetical examples along a spectrum from loosely connected pairs to fully merged minds, the authors sketch out a range of factors relevant to identifying the degree of a merger. They then consider potential new harms like loss of identity, psychological domination, loss of mental privacy, and challenges for notions of autonomy and patient benefit when applied to merged minds. While radical technologies may seem to necessitate new ethical paradigms, the authors suggest the individual focus underpinning clinical ethics can largely accommodate varying degrees of mind mergers so long as individual patient interests remain identifiable. However, advance decision-making and directives may have limitations in addressing the dilemmas posed. Overall, mind-merging possibilities amplify existing challenges around loss of identity, relating to others, autonomy, privacy, and the delineation of patient interests. This paper lays the groundwork for developing resources to address the novel issues raised, while suggesting the technologies reveal continuity with current healthcare ethics tensions.

13.
Ethics Inf Technol ; 26(1): 16, 2024.
Article in English | MEDLINE | ID: mdl-38450175

ABSTRACT

This paper argues that one problem that besets black-box AI is that it lacks algorithmic justifiability. We argue that the norm of shared decision making in medical care presupposes that treatment decisions ought to be justifiable to the patient. Medical decisions are justifiable to the patient only if they are compatible with the patient's values and preferences and the patient is able to see that this is so. Patient-directed justifiability is threatened by black-box AIs because the lack of rationale provided for the decision makes it difficult for patients to ascertain whether there is adequate fit between the decision and the patient's values. This paper argues that achieving algorithmic transparency does not help patients bridge the gap between their medical decisions and values. We introduce a hypothetical model we call Justifiable AI to illustrate this argument. Justifiable AI aims at modelling normative and evaluative considerations in an explicit way so as to provide a stepping stone for patient and physician to jointly decide on a course of treatment. If our argument succeeds, we should prefer these justifiable models over alternatives if the former are available and aim to develop said models if not.

14.
J Gastroenterol Hepatol ; 38(10): 1669-1676, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37277693

ABSTRACT

BACKGROUND: Successful implementation of artificial intelligence in gastroenterology and hepatology practice requires more than technology. There are ethical, legal, and social issues that need to be settled. AIM: A group consisting of AI developers (engineer), AI users (gastroenterologist, hepatologist, and surgeon) and AI regulators (ethicist and administrator) formed a Working Group to draft these Position Statements with the objective of arousing public and professional interest and dialogue, to promote ethical considerations when implementing AI technology, to suggest to policy makers and health authorities relevant factors to take into account when approving and regulating the use of AI tools, and to engage the profession in preparing for change in clinical practice. STATEMENTS: This series of Position Statements points out the salient issues for maintaining the trust between care providers and care receivers, and for legitimizing the use of a non-human tool in healthcare delivery. It is based on fundamental principles such as respect, autonomy, privacy, responsibility, and justice. Enforcing the use of AI without considering these factors risks damaging the doctor-patient relationship.


Subject(s)
Gastroenterologists , Gastroenterology , Humans , Artificial Intelligence , Physician-Patient Relations , Singapore
15.
Prenat Diagn ; 43(2): 226-234, 2023 02.
Article in English | MEDLINE | ID: mdl-35929376

ABSTRACT

Prenatal screening for sex chromosome aneuploidies (SCAs) is increasingly available through expanded non-invasive prenatal testing (NIPT). NIPT for SCAs raises complex ethical issues for clinical providers, prospective parents and future children. This paper discusses the ethical issues that arise around NIPT for SCAs and current guidelines and protocols for management. The first section outlines current practice and the limitations of NIPT for SCAs. It then outlines key guidelines before discussing the ethical issues raised by this use of NIPT. We conclude that while screening for SCAs should be made available for people seeking to use NIPT, its implementation requires careful consideration of what, when and how information is provided to users.


Subject(s)
Aneuploidy , Prenatal Diagnosis , Pregnancy , Female , Child , Humans , Prospective Studies , Prenatal Diagnosis/methods , Sex Chromosome Aberrations , Sex Chromosomes
16.
J Med Ethics ; 2023 Oct 06.
Article in English | MEDLINE | ID: mdl-37802640

ABSTRACT

We review recent research to introduce human brain organoids into the brains of infant rats. This research shows these organoids integrate and function to affect behaviour in rats. We argue that this raises issues of moral status that will imminently arise and must be addressed through functional studies of these new life forms. We situate this research in the broader context of the biological revolution, arguing we already have the technological power to create fully human embodied animals. This raises profound, so far unaddressed ethical issues which call for urgent attention.

17.
J Med Ethics ; 49(6): 423-427, 2023 06.
Article in English | MEDLINE | ID: mdl-35985805

ABSTRACT

Psychiatric involvement in patient morality is controversial. If psychiatrists are tasked with shaping patient morality, the coercive potential of psychiatry is increased, treatment may be unfairly administered on the basis of patients' moral beliefs rather than medical need, moral disputes could damage the therapeutic relationship and, in any case, we are often uncertain or conflicted about what is morally right. Yet, there is also a strong case for the view that psychiatry often works through improving patient morality and, therefore, should aim to do so. Our goal is to offer a practical and ethical path through this conflict. We argue that the default psychiatric approach to patient morality should be procedural, whereby patients are helped to express their own moral beliefs. Such a procedural approach avoids the brunt of objections to psychiatric involvement in patient morality. However, in a small subset of cases where patients' moral beliefs are sufficiently distorted or underdeveloped, we claim that psychiatrists should move to a substantive approach and shape the content of those beliefs when they are relevant to psychiatric outcomes. The substantive approach is prone to the above objections but we argue it is nevertheless justified in this subset of cases.


Subject(s)
Moral Development , Psychiatry , Humans , Morals , Dissent and Disputes
18.
J Med Ethics ; 49(3): 211-220, 2023 03.
Article in English | MEDLINE | ID: mdl-35636917

ABSTRACT

We provide ethical criteria to establish when vaccine mandates for healthcare workers are ethically justifiable. The relevant criteria are the utility of the vaccine for healthcare workers, the utility for patients (both in terms of prevention of transmission of infection and reduction in staff shortage), and the existence of less restrictive alternatives that can achieve comparable benefits. Healthcare workers have professional obligations to promote the interests of patients that entail exposure to greater risks or infringement of autonomy than ordinary members of the public. Thus, we argue that when vaccine mandates are justified on the basis of these criteria, they are not unfairly discriminatory and the level of coercion they involve is ethically acceptable-and indeed comparable to that already accepted in healthcare employment contracts. Such mandates might be justified even when general population mandates are not. Our conclusion is that, given current evidence, those ethical criteria justify mandates for influenza vaccination, but not COVID-19 vaccination, for healthcare workers. We extend our arguments to other vaccines.


Subject(s)
COVID-19 , Influenza Vaccines , Influenza, Human , Humans , Influenza, Human/prevention & control , Health Personnel , Vaccination
19.
J Med Ethics ; 2023 Feb 08.
Article in English | MEDLINE | ID: mdl-36754610

ABSTRACT

We argue that, in certain circumstances, doctors might be professionally justified to provide abortions even in those jurisdictions where abortion is illegal. That it is at least professionally permissible does not mean that they have an all-things-considered ethical justification or obligation to provide illegal abortions or that professional obligations or professional permissibility trump legal obligations. It rather means that professional organisations should respect and indeed protect doctors' positive claims of conscience to provide abortions if they plausibly track what is in the best medical interests of their patients. It is the responsibility of state authorities to enforce the law, but it is the responsibility of professional organisations to uphold the highest standards of medical ethics, even when they conflict with the law. Whatever the legal sanctions in place, healthcare professionals should not be sanctioned by professional bodies for providing abortions according to professional standards, even if illegally. Indeed, professional organisations should lobby to offer protection to such professionals. Our arguments have practical implications for what healthcare professionals and healthcare professional organisations may or should do in those jurisdictions that legally prohibit abortion, such as some US States after the reversal of Roe v Wade.

20.
J Med Ethics ; 49(4): 252-260, 2023 04.
Article in English | MEDLINE | ID: mdl-36543531

ABSTRACT

Despite advances in palliative care, some patients still suffer significantly at the end of life. Terminal Sedation (TS) refers to the use of sedatives in dying patients until the point of death. The following limits are commonly applied: (1) symptoms should be refractory, (2) sedatives should be administered proportionally to symptoms and (3) the patient should be imminently dying. The term 'Expanded TS' (ETS) can be used to describe the use of sedation at the end of life outside one or more of these limits. In this paper, we explore and defend ETS, focusing on jurisdictions where assisted dying is lawful. We argue that ETS is morally permissible: (1) in cases of non-refractory suffering where earlier treatments are likely to fail, (2) where gradual sedation is likely to be ineffective or where unconsciousness is a clinically desirable outcome, (3) where the patient meets all criteria for assisted dying or (4) where the patient has greater than 2 weeks to live, is suffering intolerably, and sedation is considered to be the next best treatment option for their suffering. While ETS and assisted dying remain two distinct practices, there is scope for some convergence between their respective criteria. Dying patients who are currently ineligible for TS, or even assisted dying, should not be left to suffer. ETS provides one means to bridge this gap.


Subject(s)
Euthanasia , Suicide, Assisted , Terminal Care , Humans , Palliative Care , Hypnotics and Sedatives , Death