Results 1 - 5 of 5
1.
PLOS Digit Health ; 3(5): e0000390, 2024 May.
Article in English | MEDLINE | ID: mdl-38723025

ABSTRACT

The use of data-driven technologies such as Artificial Intelligence (AI) and Machine Learning (ML) is growing in healthcare. However, the proliferation of healthcare AI tools has outpaced the regulatory frameworks, accountability measures, and governance standards needed to ensure safe, effective, and equitable use. To address these gaps and tackle a challenge common to healthcare delivery organizations, a case-based workshop was organized and a framework was developed to evaluate the potential impact of implementing an AI solution on health equity. The Health Equity Across the AI Lifecycle (HEAAL) framework was co-designed with extensive engagement of clinical, operational, technical, and regulatory leaders across healthcare delivery organizations and ecosystem partners in the US. It assesses five equity domains (accountability, fairness, fitness for purpose, reliability and validity, and transparency) across eight key decision points in the AI adoption lifecycle. It is a process-oriented framework containing a total of 37 step-by-step procedures for evaluating an existing AI solution and 34 procedures for evaluating a new AI solution. Each procedure identifies the relevant key stakeholders and the data sources used to carry it out. HEAAL guides healthcare delivery organizations in mitigating the risk that AI solutions worsen health inequities, and it indicates the resources and support required to assess the potential impact of AI solutions on health equity.
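A minimal sketch of how an implementer might encode the structure described in this abstract as data: the five equity domains, the eight decision points, and the stakeholder/data-source fields come from the abstract, while the class layout, field names, and helper method are illustrative assumptions rather than part of HEAAL itself.

```python
# Illustrative sketch only. Domain names, decision-point count, and the
# stakeholder/data-source fields follow the abstract; the dataclass layout,
# field names, and helper method are assumptions, not part of HEAAL.
from dataclasses import dataclass, field

EQUITY_DOMAINS = [
    "accountability",
    "fairness",
    "fitness for purpose",
    "reliability and validity",
    "transparency",
]

@dataclass
class Procedure:
    decision_point: int                      # 1..8 in the AI adoption lifecycle
    domain: str                              # one of EQUITY_DOMAINS
    description: str
    stakeholders: list[str] = field(default_factory=list)
    data_sources: list[str] = field(default_factory=list)

@dataclass
class HeaalAssessment:
    solution_type: str                       # "existing" (37 procedures) or "new" (34)
    procedures: list[Procedure] = field(default_factory=list)

    def at_decision_point(self, point: int) -> list[Procedure]:
        """Procedures to run when the lifecycle reaches a given decision point."""
        return [p for p in self.procedures if p.decision_point == point]
```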

2.
Sci Adv ; 10(18): eadk3452, 2024 May 03.
Article in English | MEDLINE | ID: mdl-38691601

ABSTRACT

Machine learning (ML) methods are proliferating in scientific research. However, the adoption of these methods has been accompanied by failures of validity, reproducibility, and generalizability. These failures can hinder scientific progress, lead to false consensus around invalid claims, and undermine the credibility of ML-based science. ML methods are often applied and fail in similar ways across disciplines. Motivated by this observation, our goal is to provide clear recommendations for conducting and reporting ML-based science. Drawing from an extensive review of past literature, we present the REFORMS checklist (recommendations for machine-learning-based science). It consists of 32 questions and a paired set of guidelines. REFORMS was developed on the basis of a consensus of 19 researchers across computer science, data science, mathematics, social sciences, and biomedical sciences. REFORMS can serve as a resource for researchers when designing and implementing a study, for referees when reviewing papers, and for journals when enforcing standards for transparency and reproducibility.
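A minimal sketch, assuming a simple pairing of each checklist question with its guideline, of how a referee or journal tool might represent REFORMS items; the 32-item count comes from the abstract, while the class layout, field names, and helper function are hypothetical.

```python
# Illustrative sketch only. The pairing of 32 questions with guidelines comes
# from the abstract; the class layout, field names, and helper are hypothetical.
from dataclasses import dataclass

@dataclass
class ReformsItem:
    number: int            # 1..32
    question: str          # what the authors are asked to report
    guideline: str         # the paired guidance on how to satisfy the question
    addressed: bool = False

def outstanding_items(checklist: list[ReformsItem]) -> list[int]:
    """Item numbers a referee or journal would flag as not yet addressed."""
    return [item.number for item in checklist if not item.addressed]
```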


Subject(s)
Consensus, Machine Learning, Humans, Reproducibility of Results, Science
3.
Patterns (N Y) ; 2(11): 100336, 2021 Nov 12.
Article in English | MEDLINE | ID: mdl-34820643

ABSTRACT

In this work, we survey a breadth of literature that has revealed the limitations of predominant practices for dataset collection and use in the field of machine learning. We cover studies that critically review the design and development of datasets, focusing on negative societal impacts and poor outcomes for system performance. We also cover approaches to filtering and augmenting data, as well as modeling techniques, aimed at mitigating the impact of bias in datasets. Finally, we review works that have studied data practices, cultures, and disciplinary norms, and we discuss the implications for the legal, ethical, and functional challenges the field continues to face. Based on these findings, we advocate for the use of both qualitative and quantitative approaches to more carefully document and analyze datasets during the creation and usage phases.

4.
Patterns (N Y) ; 1(8): 100150, 2020 Nov 13.
Article in English | MEDLINE | ID: mdl-33294879

ABSTRACT

The contribution of Black female scholars to our understanding of data and their limits of representation hints at a more empathetic vision for data science that we should all learn from.

5.
Patterns (N Y) ; 1(4): 100066, 2020 Jul 10.
Article in English | MEDLINE | ID: mdl-32835309

ABSTRACT

In data science, there has long been an acknowledgment of the way data can flatten and dehumanize the people they represent. This limitation becomes most obvious when considering the sheer inability of such numbers and figures to truly capture the reality of lives lost in this pandemic.
