1.
PLOS Digit Health ; 3(6): e0000513, 2024 Jun.
Article En | MEDLINE | ID: mdl-38843115

Healthcare delivery organizations (HDOs) in the US must contend with the potential for AI to worsen health inequities, yet there is no standard set of procedures for navigating these challenges, and HDOs urgently need a unified, proactive approach. Against this background, Health AI Partnership (HAIP) launched a community of practice to convene stakeholders from across HDOs to tackle challenges related to the use of AI. On February 15, 2023, HAIP hosted an inaugural workshop focused on the question, "Our health care delivery setting is considering adopting a new solution that uses AI. How do we assess the potential future impact on health inequities?" This topic emerged as a common challenge faced by all HDOs participating in HAIP. The workshop had two main goals: first, to ensure participants could speak openly about challenging topics such as health equity; and second, to develop an actionable, generalizable framework that could be put into practice immediately. The workshop engaged 77 participants, with 100% representation from all 10 HDOs and invited ecosystem partners. In an accompanying Research Article, we share the Health Equity Across the AI Lifecycle (HEAAL) framework. We invite and encourage HDOs to test the HEAAL framework internally and share feedback so that we can continue to refine and maintain the set of procedures. The HEAAL framework reveals the challenges of rigorously assessing the potential for AI to worsen health inequities: significant investment in personnel, capabilities, and data infrastructure is required, and the level of investment needed may be beyond the reach of most HDOs. We look forward to expanding our community of practice to assist HDOs around the world.

2.
PLOS Digit Health ; 3(5): e0000390, 2024 May.
Article En | MEDLINE | ID: mdl-38723025

The use of data-driven technologies such as artificial intelligence (AI) and machine learning (ML) is growing in healthcare. However, the proliferation of healthcare AI tools has outpaced the regulatory frameworks, accountability measures, and governance standards needed to ensure safe, effective, and equitable use. To address these gaps and tackle a common challenge faced by healthcare delivery organizations, a case-based workshop was organized and a framework was developed to evaluate the potential impact of implementing an AI solution on health equity. The Health Equity Across the AI Lifecycle (HEAAL) framework was co-designed with extensive engagement of clinical, operational, technical, and regulatory leaders across healthcare delivery organizations and ecosystem partners in the US. It assesses five equity assessment domains (accountability, fairness, fitness for purpose, reliability and validity, and transparency) across eight key decision points in the AI adoption lifecycle. It is a process-oriented framework containing 37 step-by-step procedures for evaluating an existing AI solution and 34 procedures for evaluating a new AI solution; within each procedure, it identifies the relevant key stakeholders and the data sources used to conduct the procedure. HEAAL guides how healthcare delivery organizations may mitigate the risk of AI solutions worsening health inequities, and it indicates how many resources and how much support are required to assess the potential impact of AI solutions on health inequities.
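The abstract describes HEAAL as a process-oriented framework: procedures grouped under five named equity domains, each listing relevant stakeholders and data sources. As a minimal sketch only, the shape of such an assessment checklist might be modeled as below; the five domain names come from the abstract, but the class names, fields, and example procedures are illustrative assumptions, not part of the published framework.

```python
from dataclasses import dataclass, field

# The five HEAAL equity assessment domains named in the abstract.
DOMAINS = [
    "accountability",
    "fairness",
    "fitness for purpose",
    "reliability and validity",
    "transparency",
]


@dataclass
class Procedure:
    """One step-by-step procedure (illustrative fields, not from HEAAL)."""
    description: str
    stakeholders: list = field(default_factory=list)
    data_sources: list = field(default_factory=list)
    completed: bool = False


@dataclass
class EquityAssessment:
    """Tracks procedures grouped by equity domain and reports progress."""
    procedures: dict = field(
        default_factory=lambda: {d: [] for d in DOMAINS}
    )

    def add(self, domain: str, proc: Procedure) -> None:
        # Reject domains outside the five named in the framework.
        if domain not in self.procedures:
            raise ValueError(f"unknown domain: {domain}")
        self.procedures[domain].append(proc)

    def progress(self) -> tuple:
        # (completed, total) across all domains.
        done = sum(p.completed for ps in self.procedures.values() for p in ps)
        total = sum(len(ps) for ps in self.procedures.values())
        return done, total
```

A hypothetical usage: add a fairness procedure and an accountability procedure, mark one complete, and query overall progress; the published framework itself enumerates 37 (or 34) concrete procedures rather than this free-form structure.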

3.
Article En | MEDLINE | ID: mdl-38767890

OBJECTIVES: To surface the urgent dilemma that healthcare delivery organizations (HDOs) face in navigating the US Food and Drug Administration (FDA) final guidance on the use of clinical decision support (CDS) software. MATERIALS AND METHODS: We use sepsis as a case study to highlight the patient-safety and regulatory-compliance tradeoffs that 6129 hospitals in the United States must navigate. RESULTS: Sepsis CDS remains in broad, routine use. No commercially available sepsis CDS system is FDA cleared as a medical device. There is no public disclosure of an HDO turning off sepsis CDS due to regulatory compliance concerns, nor of FDA enforcement action against an HDO for using sepsis CDS that is not cleared as a medical device. DISCUSSION AND CONCLUSION: We present multiple policy interventions that would relieve the current tension, enabling HDOs to use artificial intelligence to improve patient care while also addressing FDA concerns about product safety, efficacy, and equity.

4.
Am J Ophthalmol ; 214: 134-142, 2020 06.
Article En | MEDLINE | ID: mdl-32171769

Artificial intelligence (AI) describes systems capable of making decisions of high cognitive complexity; autonomous AI systems in healthcare are AI systems that make clinical decisions without human oversight. Rigorously validated medical diagnostic AI systems hold great promise for improving access to care, increasing accuracy, and lowering cost, while enabling specialist physicians to provide the greatest value by managing and treating patients whose outcomes can be improved. Ensuring that autonomous AI delivers these benefits requires evaluating its effect on patient outcome, design, validation, data usage, and accountability from a bioethics perspective. We performed a literature review of bioethical principles for AI and derived evaluation rules for autonomous AI grounded in those principles. The rules address patient outcome, validation, reference standard, design, data usage, and accountability for medical liability. Applying the rules explains the successful US Food and Drug Administration (FDA) de novo authorization of an example: the first autonomous point-of-care diabetic retinopathy examination authorized by the FDA, following a preregistered clinical trial. Physicians need to become competent in understanding the potential risks and benefits of autonomous AI, including its design, safety, efficacy, equity, validation, and liability, as well as how its data were obtained. The autonomous AI evaluation rules introduced here can help physicians understand the limitations and risks, as well as the potential benefits, of autonomous AI for their patients.


Artificial Intelligence , Ethics, Medical , Liability, Legal , Ophthalmology/standards , Risk Assessment , Safety Management , Humans
...