ABSTRACT
The COVID-19 pandemic has highlighted the importance of biosafety in the biomedical sciences. While it is often assumed that biosafety is a purely technical matter that has little to do with philosophy or the humanities, biosafety raises important ethical issues that have not been adequately examined in the scientific or bioethics literature. This article reviews some pivotal events in the history of biosafety and biosecurity and explores three biosafety topics that generate significant ethical concerns, namely risk assessment, risk management, and risk distribution. The article also discusses the role of democratic governance in the oversight of biosafety and offers some suggestions for incorporating bioethics into biosafety practice, education, and policy.
Subject(s)
Bioethics, COVID-19, SARS-CoV-2, Humans, COVID-19/prevention & control, Risk Assessment, Containment of Biohazards/ethics, Pandemics/ethics, Risk Management/ethics, Bioethical Issues, Security Measures/ethics
ABSTRACT
In this article, we defend an approach to autonomous vehicle ethics and policy based on the precautionary principle. We argue that a precautionary approach is warranted, given the significant scientific and moral uncertainties related to autonomous vehicles, especially higher-level ones. While higher-level autonomous vehicles may offer many important benefits to society, they also pose significant risks, which are not fully understood at this juncture. Risk management strategies traditionally used by government officials to make decisions about new technologies cannot be applied to higher-level autonomous vehicles because these strategies require accurate and reliable probability estimates concerning the outcomes of different policy options and extensive agreement about values, which are not currently available for autonomous vehicles. Although we describe our approach as precautionary, that does not mean that we are opposed to autonomous vehicle development and deployment, because autonomous vehicles offer benefits that should be pursued. The optimal approach to managing the risks of autonomous vehicles is to take reasonable precautions; that is, to adopt policies that attempt to deal with serious risks in a responsible way without depriving society of important benefits.
ABSTRACT
It is a common practice in qualitative research to transcribe audio or video files from interviews or focus groups and then destroy the files at some future time, usually after validating the transcript or concluding the research. We argue that it is time to rethink this practice and that retention of original qualitative data-including audio and video recordings-should be the default stance in most cases.
Subject(s)
Records, Research Personnel, Humans, Video Recording, Focus Groups, Qualitative Research
ABSTRACT
Investigating research misconduct allegations against top officials can create significant conflicts of interest (COIs) for universities that may require changes to existing oversight frameworks. One way of addressing some of these challenges is to develop policies and procedures that specifically address the investigation of misconduct allegations involving top university officials; another is to establish an independent body to conduct such investigations. Steps can also be taken now, regardless of whether such a body is created. Federal and university research misconduct regulations and policies may need to be revised to provide institutions with clearer guidance on how to deal with misconduct allegations against top officials. For their part, institutions may benefit from proactively creating and transparently disclosing their own processes for independent investigation of research misconduct allegations against senior officials.
ABSTRACT
Openness is widely regarded as a pillar of scientific ethics because it promotes reproducibility and progress in science and benefits society. However, the sharing of scientific information can sometimes adversely impact the interests of human research participants, human communities or populations, scientists, and private research sponsors; and may threaten national security. Because openness may conflict with other important social values, solutions to ethical and policy dilemmas should include meaningful input from those who are impacted by the sharing and use of scientific information, including research participants, communities, and the public. Data sharing and use policies should be reviewed and revised periodically to account for ongoing changes in science, technology, and society.
ABSTRACT
We extracted, coded, and analyzed data from 343 Office of Research Integrity (ORI) case summaries published in the Federal Register and other venues from May 1993 to July 2023 to test hypotheses concerning the relationship between the severity of ORI administrative actions and various demographic and institutional factors. We found that factors indicative of the severity of the respondent's misconduct or a pattern of misbehavior were associated with the severity of ORI administrative actions. Being required by ORI to retract or correct publications and aggravating factors, such as interfering with an investigation, were both positively associated with receiving a funding debarment and with receiving an administrative action longer than three years. Admitting one's guilt and being found to have committed plagiarism (only) were negatively associated with receiving a funding debarment but were neither positively nor negatively associated with receiving an administrative action longer than three years. Other factors, such as the respondent's race/ethnicity, gender, academic position, administrative position, or their institution's NIH funding level or extramural vs. intramural or foreign vs. US status, were neither positively nor negatively associated with the severity of administrative actions. Overall, our findings suggest that ORI has acted fairly when imposing administrative actions on respondents and has followed DHHS guidelines.
ABSTRACT
Generative artificial intelligence (AI) has the potential to transform many aspects of scholarly publishing. Authors, peer reviewers, and editors might use AI in a variety of ways, and those uses might augment their existing work or might instead be intended to replace it. We are editors of bioethics and humanities journals who have been contemplating the implications of this ongoing transformation. We believe that generative AI may pose a threat to the goals that animate our work but could also be valuable for achieving those goals. In the interests of fostering a wider conversation about how generative AI may be used, we have developed a preliminary set of recommendations for its use in scholarly publishing. We hope that the recommendations and rationales set out here will help the scholarly community navigate toward a deeper understanding of the strengths, limits, and challenges of AI for responsible scholarly work.
Subject(s)
Editorial Policies, Publishing, Humans, Scholarly Communication, Artificial Intelligence, Technology
ABSTRACT
Green bioethics is an area of research and scholarship that examines the impact of healthcare practices and policies on the environment and emphasises environmental values, such as ecological sustainability and stewardship. Some green bioethicists have argued that healthcare providers should inform patients about the environmental impacts of treatments and advocate for options that minimise adverse impacts. While disclosure of information pertaining to the environmental impacts of treatments could facilitate autonomous decision-making and strengthen the patient-provider relationship in situations where patients have clearly expressed environmental concerns, it may have the opposite effect in other situations if it makes patients feel like they are being judged or manipulated. We argue, therefore, that there is not a generalisable duty to disclose environmental impact information to all patients during the consent process. Providers who practice green bioethics should focus on advocating for system-level changes in healthcare financing, organisation and delivery, and use discretion when bringing up environmental concerns in their encounters with patients.
ABSTRACT
In the last decade, there has been increased recognition of the importance of disclosing and managing non-financial conflicts of interest to safeguard the objectivity, integrity, and trustworthiness of scientific research. While funding agencies and academic institutions have had policies for addressing non-financial interests in grant peer review and research oversight since the 1990s, scientific journals have only recently begun to develop such policies. An impediment to the formulation of effective journal policies is that non-financial interests can be difficult to recognize and define. Journals can overcome this problem by providing guidance concerning the types of non-financial interests that should be disclosed, including direct research interests, direct professional interests, expert testimony, involvement in litigation, holding a leadership position in a non-governmental organization, providing technical or scientific advice to a non-governmental organization, and personal or professional relationships. The guidance should apply to authors, editors, and reviewers.
ABSTRACT
Scientists who manage research laboratories often face ethical dilemmas related to conflicts between their different roles, such as researcher, mentor, entrepreneur, and manager. It is not known how often uncertainty about conflicting role obligations leads scientists to engage in unethical conduct, but this probably occurs more often than many people would like to think. In this paper, we reflect on ethical decision-making in scientific laboratory management with special attention to how different roles create conflicting obligations and expectations that may produce moral uncertainty and lead to violations of research norms, especially when combined with self-interest and other factors that increase the risk of misbehavior. We also offer some suggestions and guidance for investigators and research institutions.
ABSTRACT
Sometimes researchers explicitly or implicitly conceive of authorship in terms of moral or ethical rights to authorship when they are dealing with authorship issues. Because treating authorship as a right can encourage unethical behaviours, such as honorary and ghost authorship, buying and selling authorship, and unfair treatment of researchers, we recommend that researchers not conceive of authorship in this way but instead view it as a description of contributions to research. However, we acknowledge that the arguments we have given for this position are largely speculative and that more empirical research is needed to better ascertain the benefits and risks of treating authorship on scientific publications as a right.