Results 1 - 8 of 8
1.
Med Law Rev ; 2024 Sep 10.
Article in English | MEDLINE | ID: mdl-39257157

ABSTRACT

This article argues that the integration of artificial intelligence (AI) into healthcare, particularly under the European Union's Artificial Intelligence Act (AI-Act), has significant implications for the doctor-patient relationship. While historically paternalistic, Western medicine now emphasises patient autonomy within a consumeristic paradigm, aided by technological advancements. However, hospitals worldwide are adopting AI at an accelerating pace, potentially reshaping patient care dynamics. Three potential pathways emerge: enhanced patient autonomy, increased doctor control via AI, or the disempowerment of both parties as decision-making shifts to private entities. This article contends that, without addressing flaws in the AI-Act's risk-based approach, private entities could be empowered at the expense of patient autonomy. While proposed directives such as the AI Liability Directive (AILD) and the revised Directive on Liability for Defective Products (revised PLD) aim to mitigate risks, they may not address the limitations of the AI-Act. Caution must be exercised in the future interpretation of the emerging regulatory architecture to protect patient autonomy and to preserve the central role of healthcare professionals in the care of their patients.

2.
Am J Law Med ; 49(2-3): 250-266, 2023 07.
Article in English | MEDLINE | ID: mdl-38344795

ABSTRACT

Artificial intelligence (AI) is being tested and deployed in major hospitals to monitor patients, leading to improved health outcomes, lower costs, and time savings. This uptake is in its infancy, with new applications being considered. In this Article, the challenges of deploying AI in mental health wards are examined by reference to AI surveillance systems, suicide prediction and hospital administration. The examination highlights risks surrounding patient privacy, informed consent, and data considerations. Overall, these risks indicate that AI should only be used in a psychiatric ward after careful deliberation, caution, and ongoing reappraisal.


Subject(s)
Artificial Intelligence , Mental Health , Humans , Psychiatric Department, Hospital , Informed Consent
4.
J Am Med Dir Assoc ; 25(9): 105105, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38909630

ABSTRACT

This article proposes a framework for examining the ethical and legal concerns raised by the use of artificial intelligence (AI) in post-acute and long-term care (PA-LTC). It argues that established frameworks on health, AI, and the law should be adapted to specific care contexts. For residents in PA-LTC, their social, psychological, and mobility needs should act as a gauge for examining the benefits and risks of integrating AI into their care. Applying that gauge, 4 areas of particular concern are identified. First, AI threatens the autonomy of residents in ways that can undermine their core needs. Second, discrimination and bias in algorithmic decision-making can undermine Medicare coverage for PA-LTC, causing doctors' recommendations to be ignored and denying residents the care they are entitled to. Third, privacy rules concerning data use may undermine developers' ability to train accurate AI systems, limiting their usefulness in PA-LTC contexts. Fourth, consent should be obtained before AI is used, and discussions should address how care will continue if there are concerns about an ongoing decline in cognition. Together, these considerations elevate existing frameworks and adapt them to the context-specific case of PA-LTC. It is hoped that future research will examine the legal implications of each of these matters.


Subject(s)
Artificial Intelligence , Long-Term Care , Artificial Intelligence/ethics , Humans , Long-Term Care/legislation & jurisprudence , United States , Subacute Care , Nursing Homes/ethics , Aged
5.
Asian Bioeth Rev ; 16(3): 373-389, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39022374

ABSTRACT

This paper examines the Saudi Food and Drug Authority's (SFDA) Guidance on Artificial Intelligence (AI) and Machine Learning (ML) technologies based Medical Devices (the MDS-G010). The SFDA has pioneered binding requirements that manufacturers must meet to obtain Medical Device Marketing Authorization. The regulation of AI in health is at an early stage worldwide, so it is critical to examine the scope and nature of the MDS-G010, its influences, and its future directions. It is argued that the guidance is a patchwork of existing international best practices concerning AI regulation, incorporates adapted forms of non-AI-based guidelines, and builds on legal requirements already embedded in the SFDA's regulatory architecture. There is particular congruence with the approaches of the US Food and Drug Administration (FDA) and the International Medical Device Regulators Forum (IMDRF), but the SFDA goes beyond those approaches to incorporate other best practices into its guidance. Additionally, the binding nature of the MDS-G010 is complex: there are binding 'components' within the guidance, but the incorporation of non-binding international best practices, which are subordinate to national law, results in a lack of clarity about how penalties for non-compliance will operate.

6.
J Law Med Ethics ; 51(2): 287-300, 2023.
Article in English | MEDLINE | ID: mdl-37655571

ABSTRACT

This article examines the legal and ethical challenges for the provision of healthcare in the metaverse. It proposes that the issues arising in the metaverse are an extension of those found in telehealth and virtual health communities, albeit with greater complexity. It argues that international collaboration between policymakers, lawmakers, and researchers is required to regulate this space and facilitate the safe and effective development of meta-medicine.


Subject(s)
Telemedicine , Humans , Health Facilities , Research Personnel
7.
Stud Health Technol Inform ; 305: 640-643, 2023 Jun 29.
Article in English | MEDLINE | ID: mdl-37387113

ABSTRACT

The growing accessibility of large health datasets and AI's ability to analyze them offers significant potential to transform public health and epidemiology. AI-driven interventions in preventive, diagnostic, and therapeutic healthcare are becoming more prevalent, but they raise ethical concerns, particularly regarding patient safety and privacy. This study presents a thorough analysis of ethical and legal principles found in the literature on AI applications in public health. A comprehensive search yielded 22 publications for review, revealing ethical principles such as equity, bias, privacy, security, safety, transparency, confidentiality, accountability, social justice, and autonomy. Additionally, five key ethical challenges were identified. The study emphasizes the importance of addressing these ethical and legal concerns and encourages further research to establish comprehensive guidelines for responsible AI implementation in public health.


Subject(s)
Artificial Intelligence , Public Health , Humans , Social Responsibility , Health Facilities , Patient Safety
8.
Article in English | MEDLINE | ID: mdl-36743720

ABSTRACT

Background: The rates of mental health disorders such as anxiety and depression are at an all-time high, especially since the onset of COVID-19, and the need for readily available digital healthcare solutions has never been greater. Wearable devices increasingly incorporate sensors that were previously reserved for hospital settings. Wearable device features that address anxiety and depression are still in their infancy, but consumers will soon be able to self-monitor moods and behaviors using everyday commercially available devices.

Objective: This study aims to explore the features of wearable devices that can be used for monitoring anxiety and depression.

Methods: Six bibliographic databases (MEDLINE, EMBASE, PsycINFO, IEEE Xplore, ACM Digital Library, and Google Scholar) were searched for this review. Two independent reviewers performed study selection and data extraction, while two other reviewers cross-checked the extracted data. A narrative approach was used to synthesize the data.

Results: From 2408 initial results, 58 studies met our inclusion criteria and were assessed. Wrist-worn devices were identified in the bulk of the studies (n = 42, 71%). For the identification of anxiety and depression, we reported 26 methods for assessing mood, the joint most common being the State-Trait Anxiety Inventory and the Diagnostic and Statistical Manual of Mental Disorders (n = 8, 14% each). Finally, 26 studies (46%) highlighted the smartphone as the host device for the wearable.

Conclusion: The emergence of affordable, consumer-grade biosensors offers the potential for new approaches to support mental health therapies for illnesses such as anxiety and depression. We believe that purposefully designed wearable devices, combining the expertise of technologists and clinical experts, can play a key role in self-care monitoring and diagnosis.
