Results 1 - 20 of 37
1.
JMIR Med Inform; 12: e51274, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38836556

ABSTRACT

Background: The problem list (PL) is a repository of diagnoses for patients' medical conditions and health-related issues. Unfortunately, over time, our PLs have become overloaded with duplications, conflicting entries, and no-longer-valid diagnoses. The lack of a standardized structure for review adds to the challenges of clinical use. Previously, our default electronic health record (EHR) organized the PL primarily via alphabetization, with other options available, for example, organization by clinical systems or priority settings. The system's PL was built with limited groupers, resulting in many diagnoses that were inconsistent with the expected clinical systems or not associated with any clinical system at all. As a consequence of these limited EHR configuration options, our PL organization has poorly supported clinical use over time, particularly as the number of diagnoses on the PL has increased. Objective: We aimed to measure the accuracy of sorting PL diagnoses into PL system groupers based on Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) concept groupers implemented in our EHR. Methods: We transformed and developed 21 system- or condition-based groupers, using 1211 SNOMED CT hierarchical concepts refined with Boolean logic, to reorganize the PL in our EHR. To evaluate the clinical utility of our new groupers, we extracted all diagnoses on the PLs from a convenience sample of 50 patients with 3 or more encounters in the previous year. To provide a spectrum of clinical diagnoses, we included patients of all ages and divided them by sex in a deidentified format. Two physicians independently determined whether each diagnosis was correctly attributed to the expected clinical system grouper. Discrepancies were discussed, and if no consensus was reached, they were adjudicated by a third physician. Descriptive statistics and Cohen κ statistics for interrater reliability were calculated. Results: Our 50-patient sample had a total of 869 diagnoses (range 4-59; median 12, IQR 9-24). The reviewers initially agreed on 821 system attributions. Of the remaining 48 items, 16 required adjudication with the tie-breaking third physician. The calculated κ statistic was 0.7. The PL groupers appropriately associated diagnoses with the expected clinical system, with a sensitivity of 97.6%, a specificity of 58.7%, a positive predictive value of 96.8%, and an F1-score of 0.972. Conclusions: We found that PL organization by clinical specialty or condition using SNOMED CT concept groupers accurately reflects clinical systems. Our system groupers were subsequently adopted by our vendor EHR in their foundation system for PL organization.
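
To make the reported agreement and accuracy statistics concrete, here is a minimal Python sketch, not the study's code, showing how Cohen κ for two reviewers and sensitivity, specificity, PPV, and F1-score for grouper attribution could be computed with scikit-learn; the toy judgments are hypothetical.

```python
# Minimal sketch (not the study's code): interrater agreement and grouper
# accuracy metrics of the kind reported above, using hypothetical judgments.
from sklearn.metrics import cohen_kappa_score, confusion_matrix, f1_score

# 1 = diagnosis attributed to the expected clinical system grouper, 0 = not.
reviewer_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
reviewer_b = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]
kappa = cohen_kappa_score(reviewer_a, reviewer_b)   # interrater reliability

# Adjudicated reference standard vs. the grouper's attribution (also hypothetical).
truth   = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
grouper = [1, 1, 0, 1, 1, 1, 1, 1, 1, 0]
tn, fp, fn, tp = confusion_matrix(truth, grouper).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                                # positive predictive value
f1 = f1_score(truth, grouper)
print(f"kappa={kappa:.2f} sens={sensitivity:.2f} spec={specificity:.2f} "
      f"ppv={ppv:.2f} F1={f1:.3f}")
```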

2.
PLOS Digit Health; 3(6): e0000513, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38843115

ABSTRACT

Healthcare delivery organizations (HDOs) in the US must contend with the potential for AI to worsen health inequities. But there is no standard set of procedures for HDOs to adopt to navigate these challenges. There is an urgent need for HDOs to present a unified approach to proactively address the potential for AI to worsen health inequities. Amidst this background, Health AI Partnership (HAIP) launched a community of practice to convene stakeholders from across HDOs to tackle challenges related to the use of AI. On February 15, 2023, HAIP hosted an inaugural workshop focused on the question, "Our health care delivery setting is considering adopting a new solution that uses AI. How do we assess the potential future impact on health inequities?" This topic emerged as a common challenge faced by all HDOs participating in HAIP. The workshop had 2 main goals. First, we wanted to ensure participants could talk openly without reservations about challenging topics such as health equity. The second goal was to develop an actionable, generalizable framework that could be immediately put into practice. The workshop engaged 77 participants with 100% representation from all 10 HDOs and invited ecosystem partners. In an accompanying Research Article, we share the Health Equity Across the AI Lifecycle (HEAAL) framework. We invite and encourage HDOs to test the HEAAL framework internally and share feedback so that we can continue to refine and maintain the set of procedures. The HEAAL framework reveals the challenges associated with rigorously assessing the potential for AI to worsen health inequities. Significant investment in personnel, capabilities, and data infrastructure is required, and the level of investment needed could be beyond reach for most HDOs. We look forward to expanding our community of practice to assist HDOs around the world.

3.
J Am Med Inform Assoc; 31(7): 1622-1627, 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38767890

ABSTRACT

OBJECTIVES: Surface the urgent dilemma that healthcare delivery organizations (HDOs) face navigating the US Food and Drug Administration (FDA) final guidance on the use of clinical decision support (CDS) software. MATERIALS AND METHODS: We use sepsis as a case study to highlight the patient safety and regulatory compliance tradeoffs that 6129 hospitals in the United States must navigate. RESULTS: Sepsis CDS remains in broad, routine use. There is no commercially available sepsis CDS system that is FDA cleared as a medical device. There is no public disclosure of an HDO turning off sepsis CDS due to regulatory compliance concerns. And there is no public disclosure of FDA enforcement action against an HDO for using sepsis CDS that is not cleared as a medical device. DISCUSSION AND CONCLUSION: We present multiple policy interventions that would relieve the current tension to enable HDOs to utilize artificial intelligence to improve patient care while also addressing FDA concerns about product safety, efficacy, and equity.


Subject(s)
Artificial Intelligence, Clinical Decision Support Systems, Patient Safety, United States Food and Drug Administration, Artificial Intelligence/legislation & jurisprudence, United States, Humans, Sepsis, Guideline Adherence, Delivery of Health Care
4.
PLOS Digit Health; 3(5): e0000514, 2024 May.
Article in English | MEDLINE | ID: mdl-38809946

ABSTRACT

Research on the applications of artificial intelligence (AI) tools in medicine has increased exponentially over the last few years, but its implementation in clinical practice has not seen a commensurate increase, with a lack of consensus on implementing and maintaining such tools. This systematic review aims to summarize frameworks focusing on procuring, implementing, monitoring, and evaluating AI tools in clinical practice. A comprehensive literature search, following PRISMA guidelines, was performed on MEDLINE, Wiley Cochrane, Scopus, and EBSCO databases to identify and include articles recommending practices, frameworks, or guidelines for AI procurement, integration, monitoring, and evaluation. From the included articles, data regarding study aim, use of a framework, rationale of the framework, and details regarding AI implementation involving procurement, integration, monitoring, and evaluation were extracted. The extracted details were then mapped onto the Donabedian Plan, Do, Study, Act cycle domains. The search yielded 17,537 unique articles, of which 47 were evaluated for inclusion based on their full texts and 25 articles were included in the review. Common themes extracted included transparency, feasibility of operation within existing workflows, integrating into existing workflows, validation of the tool using predefined performance indicators, and improving the algorithm and/or adjusting the tool to improve performance. Among the four domains (Plan, Do, Study, Act), the most common domain was Plan (84%, n = 21), followed by Study (60%, n = 15), Do (52%, n = 13), and Act (24%, n = 6). Among 172 authors, only 1 (0.6%) was from a low-income country (LIC) and 2 (1.2%) were from lower-middle-income countries (LMICs). Healthcare professionals cite the implementation of AI tools within clinical settings as challenging owing to low levels of evidence focusing on integration in the Do and Act domains. The current healthcare AI landscape calls for increased data sharing and knowledge translation to facilitate common goals and reap maximum clinical benefit.

5.
PLOS Digit Health; 3(5): e0000390, 2024 May.
Article in English | MEDLINE | ID: mdl-38723025

ABSTRACT

The use of data-driven technologies such as Artificial Intelligence (AI) and Machine Learning (ML) is growing in healthcare. However, the proliferation of healthcare AI tools has outpaced regulatory frameworks, accountability measures, and governance standards to ensure safe, effective, and equitable use. To address these gaps and tackle a common challenge faced by healthcare delivery organizations, a case-based workshop was organized, and a framework was developed to evaluate the potential impact of implementing an AI solution on health equity. The Health Equity Across the AI Lifecycle (HEAAL) framework is co-designed with extensive engagement of clinical, operational, technical, and regulatory leaders across healthcare delivery organizations and ecosystem partners in the US. It assesses five equity assessment domains (accountability, fairness, fitness for purpose, reliability and validity, and transparency) across eight key decision points in the AI adoption lifecycle. It is a process-oriented framework containing, in total, 37 step-by-step procedures for evaluating an existing AI solution and 34 procedures for evaluating a new AI solution. Within each procedure, it identifies relevant key stakeholders and data sources used to conduct the procedure. HEAAL guides how healthcare delivery organizations may mitigate the potential risk of AI solutions worsening health inequities. It also indicates what resources and support are required to assess the potential impact of AI solutions on health inequities.

6.
NPJ Digit Med; 7(1): 87, 2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38594344

ABSTRACT

When integrating AI tools in healthcare settings, complex interactions between technologies and primary users are not always fully understood or visible. This deficient and ambiguous understanding hampers attempts by healthcare organizations to adopt AI/ML, and it also creates new challenges for researchers to identify opportunities for simplifying adoption and developing best practices for the use of AI-based solutions. Our study fills this gap by documenting the process of designing, building, and maintaining an AI solution called SepsisWatch at Duke University Health System. We conducted 20 interviews with the team of engineers and scientists that led the multi-year effort to build the tool, integrate it into practice, and maintain the solution. This "Algorithm Journey Map" enumerates all social and technical activities throughout the AI solution's procurement, development, integration, and full lifecycle management. In addition to mapping the "who?" and "what?" of the adoption of the AI tool, we also highlight several 'lessons learned' throughout the algorithm journey map, including modeling assumptions, stakeholder inclusion, and organizational structure. In doing so, we identify generalizable insights about how to recognize and navigate barriers to AI/ML adoption in healthcare settings. We expect that this effort will further the development of best practices for operationalizing and sustaining ethical principles in algorithmic systems.

7.
Ann Emerg Med; 2024 Mar 02.
Article in English | MEDLINE | ID: mdl-38441514

ABSTRACT

STUDY OBJECTIVE: This study aimed to (1) develop and validate a natural language processing model to identify the presence of pulmonary embolism (PE) based on real-time radiology reports and (2) identify low-risk PE patients based on previously validated risk stratification scores using variables extracted from the electronic health record at the time of diagnosis. The combination of these approaches yielded a natural language processing-based clinical decision support tool that can identify patients presenting to the emergency department (ED) with low-risk PE as candidates for outpatient management. METHODS: Data were curated from all patients who received a PE-protocol computed tomography pulmonary angiogram (PE-CTPA) imaging study in the ED of a 3-hospital academic health system between June 1, 2018 and December 31, 2020 (n=12,183). The "preliminary" radiology reports from these imaging studies made available to ED clinicians at the time of diagnosis were adjudicated as positive or negative for PE by the clinical team. The reports were then divided into development, internal validation, and temporal validation cohorts in order to train, test, and validate a natural language processing model that could identify the presence of PE based on unstructured text. For risk stratification, patient- and encounter-level data elements were curated from the electronic health record and used to compute a real-time simplified Pulmonary Embolism Severity Index (sPESI) score at the time of diagnosis. Chart abstraction was performed on all low-risk PE patients admitted for inpatient management. RESULTS: When applied to the internal validation and temporal validation cohorts, the natural language processing model identified the presence of PE from radiology reports with an area under the receiver operating characteristic curve of 0.99, sensitivity of 0.86 to 0.87, and specificity of 0.99. Across cohorts, 10.5% of PE-CTPA studies were positive for PE, of which 22.2% were classified as low-risk by the sPESI score. Of all low-risk PE patients, 74.3% were admitted for inpatient management. CONCLUSION: This study demonstrates that a natural language processing-based model utilizing real-time radiology reports can accurately identify patients with PE. Further, this model, used in combination with a validated risk stratification score (sPESI), provides a clinical decision support tool that accurately identifies patients in the ED with low-risk PE as candidates for outpatient management.
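
As a rough illustration of the risk-stratification step, the sketch below computes a simplified Pulmonary Embolism Severity Index (sPESI) from structured encounter fields using the commonly cited criteria (one point each for age > 80 years, history of cancer, chronic cardiopulmonary disease, heart rate ≥ 110 beats/min, systolic blood pressure < 100 mm Hg, and oxygen saturation < 90%, with 0 points indicating low risk). The field names are hypothetical and this is not the study's implementation.

```python
# Illustrative only: hypothetical field names, not the study's EHR schema.
from dataclasses import dataclass

@dataclass
class Encounter:
    age_years: int
    history_of_cancer: bool
    chronic_cardiopulmonary_disease: bool
    heart_rate_bpm: int
    systolic_bp_mmhg: int
    spo2_percent: float

def spesi(e: Encounter) -> int:
    """Simplified PESI: one point per criterion; a score of 0 is low risk."""
    return sum([
        e.age_years > 80,
        e.history_of_cancer,
        e.chronic_cardiopulmonary_disease,
        e.heart_rate_bpm >= 110,
        e.systolic_bp_mmhg < 100,
        e.spo2_percent < 90,
    ])

patient = Encounter(62, False, False, 88, 124, 96.0)
score = spesi(patient)
print(score, "low risk (outpatient candidate)" if score == 0 else "not low risk")
```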

8.
J Am Med Inform Assoc; 31(3): 705-713, 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38031481

ABSTRACT

OBJECTIVE: The complexity and rapid pace of development of algorithmic technologies pose challenges for their regulation and oversight in healthcare settings. We sought to improve our institution's approach to evaluation and governance of algorithmic technologies used in clinical care and operations by creating an Implementation Guide that standardizes evaluation criteria so that local oversight is performed in an objective fashion. MATERIALS AND METHODS: Building on a framework that applies key ethical and quality principles (clinical value and safety, fairness and equity, usability and adoption, transparency and accountability, and regulatory compliance), we created concrete guidelines for evaluating algorithmic technologies at our institution. RESULTS: An Implementation Guide articulates evaluation criteria used during review of algorithmic technologies and details what evidence supports the implementation of ethical and quality principles for trustworthy health AI. Application of the processes described in the Implementation Guide can lead to algorithms that are safer as well as more effective, fair, and equitable upon implementation, as illustrated through 4 examples of technologies at different phases of the algorithmic lifecycle that underwent evaluation at our academic medical center. DISCUSSION: By providing clear descriptions/definitions of evaluation criteria and embedding them within standardized processes, we streamlined oversight processes and educated communities using and developing algorithmic technologies within our institution. CONCLUSIONS: We developed a scalable, adaptable framework for translating principles into evaluation criteria and specific requirements that support trustworthy implementation of algorithmic technologies in patient care and healthcare operations.


Subject(s)
Artificial Intelligence, Health Facilities, Humans, Algorithms, Academic Medical Centers, Patient Compliance
9.
Hosp Pediatr; 14(1): 11-20, 2024 Jan 01.
Article in English | MEDLINE | ID: mdl-38053467

ABSTRACT

OBJECTIVES: Early warning scores detecting clinical deterioration in pediatric inpatients have wide-ranging performance and use a limited number of clinical features. This study developed a machine learning model leveraging multiple static and dynamic clinical features from the electronic health record to predict the composite outcome of unplanned transfer to the ICU within 24 hours and inpatient mortality within 48 hours in hospitalized children. METHODS: Using a retrospective development cohort of 17 630 encounters across 10 388 patients, 2 machine learning models (light gradient boosting machine [LGBM] and random forest) were trained on 542 features and compared with our institutional Pediatric Early Warning Score (I-PEWS). RESULTS: The LGBM model significantly outperformed I-PEWS based on the area under the receiver operating characteristic curve (AUROC) for the composite outcome of ICU transfer or mortality for both internal validation and temporal validation cohorts (AUROC 0.785, 95% confidence interval [0.780-0.791], vs 0.708 [0.701-0.715] for temporal validation) as well as lead-time before deterioration events (median 11 hours vs 3 hours; P = .004). However, LGBM performance as evaluated by the precision-recall curve was lower in the temporal validation cohort, with an associated decreased positive predictive value (6% vs 29%) and increased number needed to evaluate (17 vs 3) compared with I-PEWS. CONCLUSIONS: Our electronic health record-based machine learning model demonstrated improved AUROC and lead-time in predicting clinical deterioration in pediatric inpatients 24 to 48 hours in advance compared with I-PEWS. Further work is needed to optimize model positive predictive value to allow for integration into clinical practice.
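
For context on the alert-burden metrics above, the following sketch (synthetic data and a generic LightGBM classifier, not the study's 542-feature model or its institutional PEWS comparator) shows how AUROC, positive predictive value, and the number needed to evaluate at a fixed alert threshold can be computed.

```python
# Sketch under assumptions: synthetic data and a generic LightGBM classifier,
# not the study's 542-feature model or its institutional PEWS comparator.
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.metrics import precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=50, weights=[0.97],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = LGBMClassifier(n_estimators=200, learning_rate=0.05)
model.fit(X_tr, y_tr)
p = model.predict_proba(X_te)[:, 1]            # predicted deterioration risk

auroc = roc_auc_score(y_te, p)
alerts = p >= 0.5                              # example alert threshold
ppv = precision_score(y_te, alerts, zero_division=0)
nne = 1 / ppv if ppv > 0 else float("inf")     # alerts evaluated per true event
print(f"AUROC={auroc:.3f} PPV={ppv:.2f} NNE={nne:.1f}")
```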


Subject(s)
Clinical Deterioration, Early Warning Score, Child, Humans, Retrospective Studies, Machine Learning, Hospitalized Child, ROC Curve
10.
JAMA Netw Open; 6(12): e2345022, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-38100115
11.
J Am Med Inform Assoc; 31(1): 274-280, 2023 Dec 22.
Article in English | MEDLINE | ID: mdl-37669138

ABSTRACT

INTRODUCTION: The pitfalls of label leakage, the contamination of model input features with outcome information, are well established. Unfortunately, avoiding label leakage in clinical prediction models requires more nuance than the common advice of applying the "no time machine" rule. FRAMEWORK: We provide a framework for contemplating whether and when model features pose leakage concerns by considering the cadence, perspective, and applicability of predictions. To ground these concepts, we use real-world clinical models to highlight examples of appropriate and inappropriate label leakage in practice. RECOMMENDATIONS: Finally, we provide recommendations to support clinical and technical stakeholders as they evaluate the leakage tradeoffs associated with model design, development, and implementation decisions. By providing common language and dimensions to consider when designing models, we hope the clinical prediction community will be better prepared to develop statistically valid and clinically useful machine learning models.
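
As a concrete, if deliberately simplistic, illustration of the "no time machine" intuition, the hypothetical sketch below (not the authors' framework) drops candidate features recorded at or after each encounter's prediction time, one basic guard against label leakage; as the article argues, real models need more nuance than this rule alone.

```python
# Hypothetical sketch: exclude candidate features timestamped at or after the
# prediction time. A simple, deliberately incomplete guard against label leakage.
import pandas as pd

features = pd.DataFrame({
    "encounter_id": [1, 1, 1],
    "feature":      ["lactate", "vasopressor_order", "discharge_diagnosis_code"],
    "recorded_at":  pd.to_datetime(["2024-01-01 08:00",   # before prediction time
                                    "2024-01-01 12:30",   # after prediction time
                                    "2024-01-02 09:00"]), # outcome-adjacent coding
})
prediction_time = pd.Series({1: pd.Timestamp("2024-01-01 10:00")},
                            name="prediction_time")

merged = features.merge(prediction_time, left_on="encounter_id", right_index=True)
usable = merged[merged["recorded_at"] < merged["prediction_time"]]
print(usable["feature"].tolist())   # ['lactate']; later-recorded features are dropped
```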


Subject(s)
Health Facilities, Language, Machine Learning, Delivery of Health Care
12.
JMIR Form Res; 7: e43963, 2023 Sep 21.
Article in English | MEDLINE | ID: mdl-37733427

ABSTRACT

BACKGROUND: Machine learning (ML)-driven clinical decision support (CDS) continues to draw wide interest and investment as a means of improving care quality and value, despite mixed real-world implementation outcomes. OBJECTIVE: This study aimed to explore the factors that influence the integration of a peripheral arterial disease (PAD) identification algorithm to implement timely guideline-based care. METHODS: A total of 12 semistructured interviews were conducted with individuals from 3 stakeholder groups during the first 4 weeks of integration of an ML-driven CDS. The stakeholder groups included technical, administrative, and clinical members of the team interacting with the ML-driven CDS. The ML-driven CDS identified patients with a high probability of having PAD, and these patients were then reviewed by an interdisciplinary team that developed a recommended action plan and sent recommendations to the patient's primary care provider. Pseudonymized transcripts were coded, and thematic analysis was conducted by a multidisciplinary research team. RESULTS: Three themes were identified: positive factors translating in silico performance to real-world efficacy, organizational factors and data structure factors affecting clinical impact, and potential challenges to advancing equity. Our study found that the factors that led to successful translation of in silico algorithm performance to real-world impact were largely nontechnical, given adequate efficacy in retrospective validation, and included strong clinical leadership, trustworthy workflows, early consideration of end-user needs, and ensuring that the CDS addresses an actionable problem. Negative factors of integration included failure to incorporate the on-the-ground context, the lack of feedback loops, and data silos limiting the ML-driven CDS. The success criteria for each stakeholder group were also characterized to better understand how teams work together to integrate ML-driven CDS and to understand the varying needs across stakeholder groups. CONCLUSIONS: Longitudinal and multidisciplinary stakeholder engagement in the development and integration of ML-driven CDS underpins its effective translation into real-world care. Although previous studies have focused on the technical elements of ML-driven CDS, our study demonstrates the importance of including administrative and operational leaders as well as an early consideration of clinicians' needs. This more holistic, cross-stakeholder perspective also permits more effective detection of context-driven health care inequities, which ML-driven CDS integration can uncover or exacerbate through structural and organizational challenges. Many of the solutions to these inequities lie outside the scope of ML and require coordinated, systematic solutions for mitigation to help reduce disparities in the care of patients with PAD.

13.
Patterns (N Y); 4(4): 100710, 2023 Apr 14.
Article in English | MEDLINE | ID: mdl-37123436

ABSTRACT

The Duke Institute for Health Innovation (DIHI) was launched in 2013. Frontline staff members submit proposals for innovation projects that align with strategic priorities set by organizational leadership. Funded projects receive operational and technical support from institute staff members and a transdisciplinary network of collaborators to develop and implement solutions as part of routine clinical care, ranging from machine learning algorithms to mobile applications. DIHI's operations are shaped by four guiding principles: build to show value, build to integrate, build to scale, and build responsibly. Between 2013 and 2021, more than 600 project proposals have been submitted to DIHI. More than 85 innovation projects, both through the application process and other strategic partnerships, have been supported and implemented. DIHI's funding has incubated 12 companies, engaged more than 300 faculty members, staff members, and students, and contributed to more than 50 peer-reviewed publications. DIHI's practices can serve as a model for other health systems to systematically source, develop, implement, and scale innovations.

15.
Clin Infect Dis; 76(2): 299-306, 2023 Jan 13.
Article in English | MEDLINE | ID: mdl-36125084

ABSTRACT

BACKGROUND: Human immunodeficiency virus (HIV) pre-exposure prophylaxis (PrEP) is underutilized in the southern United States. Rapid identification of individuals vulnerable to diagnosis of HIV using electronic health record (EHR)-based tools may augment PrEP uptake in the region. METHODS: Using machine learning, we developed EHR-based models to predict incident HIV diagnosis as a surrogate for PrEP candidacy. We included patients from a southern medical system with encounters between October 2014 and August 2016, training the model to predict incident HIV diagnosis between September 2016 and August 2018. We obtained 74 EHR variables as potential predictors. We compared Extreme Gradient Boosting (XGBoost) versus least absolute shrinkage and selection operator (LASSO) logistic regression models, and assessed performance, overall and among women, using the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC). RESULTS: Of 998 787 eligible patients, 162 had an incident HIV diagnosis, of whom 49 were women. The XGBoost model outperformed the LASSO model for the total cohort, achieving an AUROC of 0.89 and an AUPRC of 0.01. The female-only cohort XGBoost model resulted in an AUROC of 0.78 and an AUPRC of 0.00025. The most predictive variables for the overall cohort were race, sex, and male partner. The strongest positive predictors for the female-only cohort were history of pelvic inflammatory disease, drug use, and tobacco use. CONCLUSIONS: Our machine-learning models were able to effectively predict incident HIV diagnoses, including among women. This study establishes the feasibility of using these models to identify persons most suitable for PrEP in the South.
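
To illustrate the kind of comparison described above, here is a sketch on synthetic, highly imbalanced data (not the study's cohort, its 74 EHR variables, or its tuning) that fits an XGBoost classifier and an L1-penalized (LASSO) logistic regression and reports AUROC and AUPRC for each.

```python
# Sketch under assumptions: synthetic, highly imbalanced data standing in for
# the EHR cohort; not the study's variables, tuning, or female-only analysis.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=20000, n_features=30, weights=[0.995],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "XGBoost": XGBClassifier(n_estimators=300, max_depth=4, eval_metric="logloss"),
    "LASSO logistic regression": LogisticRegression(penalty="l1", solver="liblinear",
                                                    C=0.1, max_iter=1000),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    p = clf.predict_proba(X_te)[:, 1]
    print(f"{name}: AUROC={roc_auc_score(y_te, p):.3f} "
          f"AUPRC={average_precision_score(y_te, p):.4f}")  # AUPRC via average precision
```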


Subject(s)
HIV Infections, Pre-Exposure Prophylaxis, Humans, Male, Female, United States/epidemiology, HIV, Electronic Health Records, Machine Learning, Pre-Exposure Prophylaxis/methods, HIV Infections/diagnosis, HIV Infections/epidemiology, HIV Infections/prevention & control
16.
Front Med (Lausanne); 9: 946937, 2022.
Article in English | MEDLINE | ID: mdl-36341258

ABSTRACT

Background: Understanding the performance of convolutional neural networks (CNNs) for binary (benign vs. malignant) lesion classification based on real-world images is important for developing a meaningful clinical decision support (CDS) tool. Methods: We developed a CNN based on real-world smartphone images with histopathological ground truth and tested the utility of structured electronic health record (EHR) data on model performance. Model accuracy was compared against three board-certified dermatologists for clinical validity. Results: At a classification threshold of 0.5, the sensitivity was 79% vs. 77% vs. 72%, and specificity was 64% vs. 65% vs. 57% for the image-alone vs. combined image and clinical data vs. clinical data-alone models, respectively. The PPV was 68% vs. 69% vs. 62%, AUC was 0.79 vs. 0.79 vs. 0.69, and AP was 0.78 vs. 0.79 vs. 0.64 for the image-alone vs. combined data vs. clinical data-alone models. Older age, male sex, and number of prior dermatology visits were important positive predictors for malignancy in the clinical data-alone model. Conclusion: Additional clinical data did not significantly improve CNN image model performance. Model accuracy for predicting malignant lesions was comparable to that of dermatologists (model: 71.31% vs. 3 dermatologists: 77.87%, 69.88%, and 71.93%), validating clinical utility. Prospective validation of the model in a primary care setting will enhance understanding of the model's clinical utility.
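
The paper does not specify its fusion architecture, but a common pattern for combining a CNN image embedding with structured clinical features is late fusion by concatenation; the PyTorch sketch below illustrates that pattern with an arbitrary ResNet-18 backbone and hypothetical clinical inputs (age, sex, number of prior visits), and should be read as an assumption-laden illustration rather than the authors' model.

```python
# Hypothetical late-fusion sketch (illustrative backbone and feature names;
# not the architecture reported in the study).
import torch
import torch.nn as nn
from torchvision import models

class ImagePlusClinical(nn.Module):
    def __init__(self, n_clinical: int):
        super().__init__()
        backbone = models.resnet18(weights=None)   # untrained placeholder backbone
        backbone.fc = nn.Identity()                # expose the 512-d image embedding
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(512 + n_clinical, 64),
            nn.ReLU(),
            nn.Linear(64, 1),                      # single logit: benign vs. malignant
        )

    def forward(self, image: torch.Tensor, clinical: torch.Tensor) -> torch.Tensor:
        z = self.backbone(image)                   # (batch, 512) image embedding
        x = torch.cat([z, clinical], dim=1)        # concatenate with clinical features
        return self.head(x)

model = ImagePlusClinical(n_clinical=3).eval()     # e.g., age, sex, prior visits
with torch.no_grad():
    logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3))
    probs = torch.sigmoid(logits)                  # malignancy probabilities
print(probs.shape)  # torch.Size([2, 1])
```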

17.
J Public Health Manag Pract; 28(6): E778-E788, 2022.
Article in English | MEDLINE | ID: mdl-36194821

ABSTRACT

CONTEXT: In the United States, COVID-19 vaccines have been unequally distributed between different racial and ethnic groups. Public reporting of race and ethnicity data for COVID-19 vaccination has the potential to help guide public health responses aimed at promoting vaccination equity. However, there is evidence that such data are not readily available. OBJECTIVES: This study sought to assess gaps and discrepancies in COVID-19 vaccination reporting in 10 large US cities in July 2021. DESIGN, SETTING, AND PARTICIPANTS: For the 10 cities selected, we collected COVID-19 vaccination and population data using publicly available resources, such as state health department Web sites and the US Census Bureau American Community Survey. We examined vaccination plans and news sources to identify initial proposals and evidence of implementation of COVID-19 vaccination best practices. MAIN OUTCOME MEASURE: We performed quantitative assessment of associations of the number of vaccination best practices implemented with COVID-19 racial and ethnic vaccination equity. We additionally assessed gaps and discrepancies in COVID-19 vaccination reporting between states. RESULTS: Our analysis did not show that COVID-19 vaccination inequity was associated with the number of vaccination best practices implemented. However, gaps and variation in reporting of racial and ethnic demographic vaccination data inhibited our ability to effectively assess whether vaccination programs were reaching minority populations. CONCLUSIONS: Lack of consistent public reporting and transparency of COVID-19 vaccination data has likely hindered public health responses by impeding the ability to track the effectiveness of strategies that target vaccine equity.


Subject(s)
COVID-19, Ethnicity, COVID-19/epidemiology, COVID-19/prevention & control, COVID-19 Vaccines/therapeutic use, Cities, Humans, United States/epidemiology, Vaccination
18.
Am J Transplant; 22(10): 2293-2301, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35583111

ABSTRACT

Health equity research in transplantation has largely relied on national data sources, yet the availability of social determinants of health (SDOH) data varies widely among these sources. We sought to characterize the extent to which national data sources contain SDOH data applicable to end-stage organ disease (ESOD) and transplant patients. We reviewed 10 active national data sources based in the United States. For each data source, we examined patient inclusion criteria and explored strengths and limitations regarding SDOH data, using the National Institutes of Health PhenX toolkit of SDOH as a data collection instrument. Of the 28 SDOH variables reviewed, eight core demographic variables were included in ≥80% of the data sources, and seven variables that described elements of social status ranged between 30% and 60% inclusion. Variables regarding identity, healthcare access, and social need were poorly represented (≤20%) across the data sources, and five of these variables were included in none of the data sources. The results of our review highlight the need for improved SDOH data collection systems for ESOD and transplant patients via enhanced inter-registry collaboration, incorporation of standardized SDOH variables into existing data sources, and transplant center- and consortium-based investigation and innovation.


Subject(s)
Health Equity, Organ Transplantation, Data Collection, Humans, Information Storage and Retrieval, Social Determinants of Health, United States/epidemiology
19.
J Am Med Inform Assoc; 29(9): 1631-1636, 2022 Aug 16.
Article in English | MEDLINE | ID: mdl-35641123

ABSTRACT

Artificial intelligence/machine learning models are being rapidly developed and used in clinical practice. However, many models are deployed without a clear understanding of clinical or operational impact and frequently lack monitoring plans that can detect potential safety signals. There is a lack of consensus in establishing governance to deploy, pilot, and monitor algorithms within operational healthcare delivery workflows. Here, we describe a governance framework that combines current regulatory best practices and lifecycle management of predictive models being used for clinical care. Since January 2021, we have successfully added models to our governance portfolio and are currently managing 52 models.


Subject(s)
Artificial Intelligence, Machine Learning, Algorithms, Delivery of Health Care
...