ABSTRACT
Time limits on organ viability from retrieval to implantation shape the US system for human organ transplantation. Preclinical research has demonstrated that emerging biopreservation technologies can prolong organ viability, perhaps indefinitely. These technologies could transform transplantation into a scheduled procedure without geographic or time constraints, permitting organ assessment and potential preconditioning of recipients. However, the safety and efficacy of advanced biopreservation with prolonged storage of vascularized organs followed by reanimation will require new regulatory oversight, as clinicians and transplant centers are not trained in the engineering techniques involved or equipped to assess the manipulated organs. Although the Food and Drug Administration is best situated to provide that process oversight, the agency has until now declined to oversee organ quality and has excluded vascularized organs from the oversight framework for human cells, tissues, and cellular and tissue-based products. Integration of advanced biopreservation technologies will require new facilities for organ preservation, storage, and reanimation, plus ethical guidance on immediate organ use versus preservation, national allocation, and governance of centralized organ banks. Realizing the long-term benefit of advanced biopreservation requires anticipating the necessary legal and ethical oversight tools, and that process should begin now.
ABSTRACT
Researchers are rapidly developing and deploying highly portable MRI technology to conduct field-based research. The new technology will widen access to include new investigators in remote and unconventional settings and will facilitate greater inclusion of rural, economically disadvantaged, and historically underrepresented populations. To address the ethical, legal, and societal issues raised by highly accessible and portable MRI, an interdisciplinary Working Group (WG) engaged in a multi-year structured process of analysis and consensus building, informed by empirical research on the perspectives of experts and the general public. This article presents the WG's consensus recommendations. These recommendations address technology quality control, design and oversight of research, including safety of research participants and others in the scanning environment, engagement of diverse participants, therapeutic misconception, use of artificial intelligence algorithms to acquire and analyze MRI data, data privacy and security, return of results and managing incidental findings, and research participant data access and control.
ABSTRACT
Psychiatry is rapidly adopting digital phenotyping and artificial intelligence/machine learning tools to study mental illness based on tracking participants' locations, online activity, phone and text message usage, heart rate, sleep, physical activity, and more. Existing ethical frameworks for return of individual research results (IRRs) are inadequate to guide researchers on whether, when, and how to return this unprecedented number of potentially sensitive results about each participant's real-world behavior. To address this gap, we convened an interdisciplinary expert working group, supported by a National Institute of Mental Health grant. Building on established guidelines and the emerging norm of returning results in participant-centered research, we present a novel framework specific to the ethical, legal, and social implications of returning IRRs in digital phenotyping research. Our framework offers researchers, clinicians, and Institutional Review Boards (IRBs) urgently needed guidance, and the principles developed here in the context of psychiatry will be readily adaptable to other therapeutic areas.
Subject(s)
Mental Disorders, Psychiatry, Humans, Artificial Intelligence, Mental Disorders/therapy, Research Ethics Committees, Researchers
ABSTRACT
Acute kidney injury (AKI), which is a common complication of acute illnesses, affects the health of individuals in community, acute care and post-acute care settings. Although the recognition, prevention and management of AKI have advanced over the past decades, its incidence and related morbidity, mortality and health care burden remain overwhelming. The rapid growth of digital technologies has provided a new platform to improve patient care, and reports show demonstrable benefits in care processes and, in some instances, in patient outcomes. However, despite great progress, the potential benefits of using digital technology to manage AKI have not yet been fully explored or implemented in clinical practice. Digital health studies in AKI have shown variable evidence of benefits, and the digital divide means that access to digital technologies is not equitable. Upstream research and development costs, limited stakeholder participation and acceptance, and poor scalability of digital health solutions have hindered their widespread implementation and use. Here, we provide recommendations from the Acute Disease Quality Initiative consensus meeting, which involved experts in adult and paediatric nephrology, critical care, pharmacy and data science, at which the use of digital health for risk prediction, prevention, identification and management of AKI and its consequences was discussed.
Subject(s)
Acute Kidney Injury, Nephrology, Adult, Child, Humans, Acute Disease, Consensus, Acute Kidney Injury/diagnosis, Acute Kidney Injury/therapy, Acute Kidney Injury/etiology, Critical Care
ABSTRACT
This article critiques the quest to state general rules to protect human rights against AI/ML computational tools. The White House Blueprint for an AI Bill of Rights was a recent attempt that fails in ways this article explores. There are limits to how far ethicolegal analysis can go in abstracting AI/ML tools, as a category, from the specific contexts where AI tools are deployed. Health technology offers a good example of this principle. The salient dilemma with AI/ML medical software is that privacy policy has the potential to undermine distributional justice, forcing a choice between two competing visions of privacy protection. The first, stressing individual consent, won favor among bioethicists, information privacy theorists, and policymakers after 1970 but displays an ominous potential to bias AI training data in ways that promote health care inequities. The alternative, an older duty-based approach from medical privacy law, aligns with a broader critique of how late-20th-century American law and ethics endorsed atomistic autonomy as the highest moral good, neglecting principles of caring, social interdependency, justice, and equity. Disregarding the context of such choices can produce suboptimal policies when, as in medicine and many other contexts, the use of personal data has high social value.
ABSTRACT
Voice-based AI-powered digital assistants, such as Alexa, Siri, and Google Assistant, present an exciting opportunity to translate healthcare from the hospital to the home. But building a digital, medical panopticon can raise many legal and ethical challenges if not designed and implemented thoughtfully. This paper highlights the benefits and explores some of the challenges of using digital assistants to detect early signs of cognitive impairment, focusing on issues such as consent, bycatching, privacy, and regulatory oversight. By using a fictional but plausible near-future hypothetical, we demonstrate why an "ethics-by-design" approach is necessary for consumer-monitoring tools that may be used to identify health concerns for their users.
Subject(s)
Alzheimer Disease, Fabaceae, Alzheimer Disease/diagnosis, Privacy
ABSTRACT
Applications of biometrics in various societal contexts have been increasing in the United States, and policy debates about potential restrictions and expansions for specific biometrics (such as facial recognition and DNA identification) have been intensifying. Empirical data about public perspectives on different types of biometrics can inform these debates. We surveyed 4048 adults to explore perspectives regarding experience and comfort with six types of biometrics; comfort providing biometrics in distinct scenarios; trust in social actors to use two types of biometrics (facial images and DNA) responsibly; acceptability of facial images in eight scenarios; and perceived effectiveness of facial images for five tasks. Respondents were generally comfortable with biometrics. Trust in social actors to use biometrics responsibly appeared to be context specific rather than dependent on biometric type. Contrary to expectations given mounting attention to dataveillance concerns, we did not find that sociodemographic factors influenced perspectives on biometrics in obvious ways. These findings underscore a need for qualitative approaches to understand the contextual factors that trigger strong opinions of comfort with and acceptability of biometrics in different settings, by different actors, and for different purposes, and to identify the informational needs relevant to the development of appropriate policies and oversight.
ABSTRACT
Facial imaging and facial recognition technologies, now common in our daily lives, also are increasingly incorporated into health care processes, enabling touch-free appointment check-in, matching patients accurately, and assisting with the diagnosis of certain medical conditions. The use, sharing, and storage of facial data is expected to expand in coming years, yet little is documented about the perspectives of patients and participants regarding these uses. We developed a pair of surveys to gather public perspectives on uses of facial images and facial recognition technologies in healthcare and in health-related research in the United States. We used Qualtrics Panels to collect responses from general public respondents using two complementary and overlapping survey instruments; one focused on six types of biometrics (including facial images and DNA) and their uses in a wide range of societal contexts (including healthcare and research), and the other focused on facial imaging, facial recognition technology, and related data practices in health and research contexts specifically. We collected responses from a diverse group of 4,048 adults in the United States (2,038 and 2,010 from the two surveys, respectively). A majority of respondents (55.5%) indicated they were equally worried about the privacy of medical records, DNA, and facial images collected for precision health research. A vignette was used to gauge willingness to participate in a hypothetical precision health study, with respondents split among willing (39.6%), unwilling (30.1%), and unsure (30.3%). Nearly one-quarter of respondents (24.8%) reported they would prefer to opt out of the DNA component of a study, and 22.0% reported they would prefer to opt out of both the DNA and facial imaging components of the study. Few indicated willingness to pay a fee to opt out of the collection of their research data.
Finally, respondents were offered three options for the ideal governance design of their data: "open science," "gated science," and "closed science." No option elicited a majority response. Our findings indicate that while a majority of research participants might be comfortable with facial images and facial recognition technologies in healthcare and health-related research, a significant fraction expressed concern for the privacy of their own face-based data, similar to the privacy concerns surrounding DNA data and medical records. A nuanced approach to uses of face-based data in healthcare and health-related research is needed, taking into consideration storage protection plans and the contexts of use.
Subject(s)
Automated Facial Recognition/methods, Biomedical Research/methods, Data Management/methods, Delivery of Health Care/methods, Facial Recognition, Information Dissemination/methods, Public Opinion, Adolescent, Adult, Aged, Female, Humans, Male, Medical Records, Middle Aged, Privacy, Surveys and Questionnaires, United States, Young Adult
ABSTRACT
Neurotechnology has traditionally been central to the diagnosis and treatment of neurological disorders. While these devices were initially used in clinical and research settings, recent advancements in neurotechnology have yielded devices that are more portable, user-friendly, and less expensive. These improvements allow laypeople to monitor their brain waves and interface their brains with external devices. Such improvements have led to the rise of wearable neurotechnology that is marketed to consumers. While many of the consumer devices are marketed for innocuous applications, such as use in video games, there is potential for them to be repurposed for medical use. How do we manage neurotechnologies that skirt the line between medical and consumer applications, and what can be done to ensure consumer safety? Here, we characterize neurotechnology based on medical and consumer applications and summarize currently marketed uses of consumer-grade wearable headsets. We lay out concerns that may arise due to the similar claims associated with both medical and consumer devices, the possibility of consumer devices being repurposed for medical uses, and the potential for medical uses of neurotechnology to influence commercial markets related to employment and self-enhancement.
ABSTRACT
The individual right of access to one's own data is a crucial privacy protection long recognized in U.S. federal privacy laws. Mobile health devices and research software used in citizen science often fall outside the HIPAA Privacy Rule, leaving participants without HIPAA's right of access to one's own data. Absent state laws requiring access, the law of contract, as reflected in end-user agreements and terms of service, governs individuals' ability to find out how much data is being stored and how it might be shared with third parties. Efforts to address this problem by establishing norms of individual access to data from mobile health research unfortunately can run afoul of the FDA's investigational device exemption requirements.
Subject(s)
Citizen Science/ethics, Confidentiality/legislation & jurisprudence, Patient Access to Records/legislation & jurisprudence, Privacy/legislation & jurisprudence, Software/legislation & jurisprudence, Telemedicine, Equipment and Supplies, Health Insurance Portability and Accountability Act, Humans, United States, United States Food and Drug Administration
ABSTRACT
Mobile devices with health apps, direct-to-consumer genetic testing, crowd-sourced information, and other data sources have enabled research by new classes of researchers. Independent researchers, citizen scientists, patient-directed researchers, self-experimenters, and others are not covered by federal research regulations because they are not recipients of federal financial assistance or conducting research in anticipation of a submission to the FDA for approval of a new drug or medical device. This article addresses the difficult policy challenge of promoting the welfare and interests of research participants, as well as the public, in the absence of regulatory requirements and without discouraging independent, innovative scientific inquiry. The article recommends a series of measures, including education, consultation, transparency, self-governance, and regulation to strike the appropriate balance.
Subject(s)
Biomedical Research/legislation & jurisprudence, Handheld Computers, Research Ethics, Mobile Applications, Policies, Telemedicine, Biomedical Research/trends, Guidelines as Topic, Humans, Researchers/classification, United States
ABSTRACT
Delivering high quality genomics-informed care to patients requires accurate test results whose clinical implications are understood. While other actors, including state agencies, professional organizations, and clinicians, are involved, this article focuses on the extent to which the federal agencies that play the most prominent roles, the Centers for Medicare and Medicaid Services (which enforces CLIA) and the FDA, effectively ensure that these elements are met, and concludes by suggesting possible ways to improve their oversight of genomic testing.
Subject(s)
Genomics/legislation & jurisprudence, Genomics/methods, Genomics/standards, High-Throughput Nucleotide Sequencing, Quality of Health Care, DNA Sequence Analysis, Centers for Medicare and Medicaid Services, U.S., Humans, Laboratories/legislation & jurisprudence, Medical Device Legislation, Software/legislation & jurisprudence, United States, United States Food and Drug Administration
ABSTRACT
Regulatory policy for genomic testing may be subject to biases that favor reliance on existing regulatory frameworks even when those frameworks carry unintended legal consequences or are poorly tailored to the challenges genomic testing presents. This article explores three examples drawn from genetic privacy regulation, oversight of clinical uses of genomic information, and regulation of genomic software. Overreliance on expedient regulatory approaches has the potential to undercut complete and durable solutions.