Results 1 - 20 of 39
1.
JAMA; 2024 Oct 15.
Article in English | MEDLINE | ID: mdl-39405321

ABSTRACT

This Viewpoint explores how artificial intelligence technologies can adopt a clinical practice framework to identify use cases and outline the technology's objectives and potential uses in modern health care.

2.
J Pain Symptom Manage; 68(6): 539-547.e3, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39237028

ABSTRACT

CONTEXT: Prognostication challenges contribute to delays in advance care planning (ACP) for patients with cancer near the end of life (EOL). OBJECTIVES: To examine the impact of a quality improvement intervention, based on a mortality prediction algorithm, on ACP documentation and EOL care. METHODS: We implemented a validated mortality risk prediction machine learning model for patients with solid malignancies admitted from the emergency department (ED) to a dedicated solid malignancy unit at Duke University Hospital. Clinicians received an email when a patient was identified as high-risk. We compared ACP documentation and EOL care outcomes before and after the notification intervention. We excluded patients with intensive care unit (ICU) admission in the first 24 hours. Comparisons used chi-square/Fisher's exact tests and Wilcoxon rank sum tests; comparisons stratified by physician specialty used Cochran-Mantel-Haenszel tests. RESULTS: Preintervention and postintervention cohorts comprised 88 and 77 patients, respectively. Most were White, non-Hispanic/Latino, and married. ACP conversations were documented for 2.3% of hospitalizations preintervention vs. 80.5% postintervention (P<0.001); the increase was observed whether the notified attending physician was a palliative care specialist (4.1% vs. 84.6%) or an oncologist (0% vs. 76.3%) (P<0.001). There were no differences between groups in length of stay (LOS), hospice referral, code status change, ICU admissions or ICU LOS, 30-day readmissions, 30-day ED visits, or inpatient and 30-day deaths. CONCLUSION: Identifying patients with cancer and high mortality risk via machine learning elicited a substantial increase in documented ACP conversations but did not impact EOL care. Our intervention showed promise in changing clinician behavior. Further integration of this model in clinical practice is ongoing.


Subjects
Advance Care Planning; Machine Learning; Neoplasms; Quality Improvement; Terminal Care; Humans; Male; Female; Neoplasms/therapy; Aged; Middle Aged; Documentation; Emergency Service, Hospital; Algorithms
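
As an illustration of the statistical comparison described in this record, here is a minimal Python sketch (not code from the study); the cell counts are reconstructed from the reported percentages (2/88 ≈ 2.3% preintervention, 62/77 ≈ 80.5% postintervention) rather than taken from the study data.

# Compare ACP documentation rates before vs. after the notification intervention.
from scipy.stats import chi2_contingency, fisher_exact

# Rows: preintervention, postintervention; columns: ACP documented, not documented.
table = [[2, 86],
         [62, 15]]

chi2, p_chi2, dof, _ = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)
print(f"chi-square p = {p_chi2:.3g}; Fisher's exact p = {p_fisher:.3g}")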
3.
JMIR Med Inform; 12: e51274, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38836556

ABSTRACT

Background: The problem list (PL) is a repository of diagnoses for patients' medical conditions and health-related issues. Unfortunately, over time, our PLs have become overloaded with duplications, conflicting entries, and no-longer-valid diagnoses. The lack of a standardized structure for review adds to the challenges of clinical use. Previously, our default electronic health record (EHR) organized the PL primarily via alphabetization, with other options available, for example, organization by clinical systems or priority settings. The system's PL was built with limited groupers, resulting in many diagnoses that were inconsistent with the expected clinical systems or not associated with any clinical systems at all. As a consequence of these limited EHR configuration options, our PL organization has poorly supported clinical use over time, particularly as the number of diagnoses on the PL has increased. Objective: We aimed to measure the accuracy of sorting PL diagnoses into PL system groupers based on Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) concept groupers implemented in our EHR. Methods: We transformed and developed 21 system- or condition-based groupers, using 1211 SNOMED CT hierarchical concepts refined with Boolean logic, to reorganize the PL in our EHR. To evaluate the clinical utility of our new groupers, we extracted all diagnoses on the PLs from a convenience sample of 50 patients with 3 or more encounters in the previous year. To provide a spectrum of clinical diagnoses, we included patients of all ages and divided them by sex in a deidentified format. Two physicians independently determined whether each diagnosis was correctly attributed to the expected clinical system grouper. Discrepancies were discussed, and if no consensus was reached, they were adjudicated by a third physician. Descriptive statistics and Cohen κ statistics for interrater reliability were calculated. Results: Our 50-patient sample had a total of 869 diagnoses (range 4-59; median 12, IQR 9-24). The reviewers initially agreed on 821 system attributions. Of the remaining 48 items, 16 required adjudication by the tie-breaking third physician. The calculated κ statistic was 0.7. The PL groupers appropriately associated diagnoses to the expected clinical system with a sensitivity of 97.6%, a specificity of 58.7%, a positive predictive value of 96.8%, and an F1-score of 0.972. Conclusions: We found that PL organization by clinical specialty or condition using SNOMED CT concept groupers accurately reflects clinical systems. Our system groupers were subsequently adopted by our vendor EHR in their foundation system for PL organization.
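
A minimal Python sketch of the evaluation described in this record: interrater agreement (Cohen κ) between two reviewers, and grouper accuracy (sensitivity, specificity, PPV, F1) against the adjudicated reference standard. The labels below are synthetic placeholders, not the study's adjudicated data.

from sklearn.metrics import cohen_kappa_score, confusion_matrix

# 1 = diagnosis attributed to the expected clinical system grouper, 0 = not.
reviewer_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
reviewer_b = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]
kappa = cohen_kappa_score(reviewer_a, reviewer_b)

# Grouper performance against the adjudicated reference standard.
reference = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
grouper = [1, 1, 0, 1, 1, 1, 1, 1, 1, 0]
tn, fp, fn, tp = confusion_matrix(reference, grouper).ravel()

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
f1 = 2 * ppv * sensitivity / (ppv + sensitivity)

print(f"kappa={kappa:.2f} sens={sensitivity:.2f} spec={specificity:.2f} "
      f"ppv={ppv:.2f} F1={f1:.2f}")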

4.
PLOS Digit Health; 3(6): e0000513, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38843115

ABSTRACT

Healthcare delivery organizations (HDOs) in the US must contend with the potential for AI to worsen health inequities. But there is no standard set of procedures for HDOs to adopt to navigate these challenges. There is an urgent need for HDOs to present a unified approach to proactively address the potential for AI to worsen health inequities. Against this background, Health AI Partnership (HAIP) launched a community of practice to convene stakeholders from across HDOs to tackle challenges related to the use of AI. On February 15, 2023, HAIP hosted an inaugural workshop focused on the question, "Our health care delivery setting is considering adopting a new solution that uses AI. How do we assess the potential future impact on health inequities?" This topic emerged as a common challenge faced by all HDOs participating in HAIP. The workshop had 2 main goals. First, we wanted to ensure participants could talk openly, without reservations, about challenging topics such as health equity. The second goal was to develop an actionable, generalizable framework that could be immediately put into practice. The workshop engaged 77 participants, with 100% representation from all 10 HDOs and invited ecosystem partners. In an accompanying Research Article, we share the Health Equity Across the AI Lifecycle (HEAAL) framework. We invite and encourage HDOs to test the HEAAL framework internally and share feedback so that we can continue to refine and maintain the set of procedures. The HEAAL framework reveals the challenges associated with rigorously assessing the potential for AI to worsen health inequities. Significant investment in personnel, capabilities, and data infrastructure is required, and the level of investment needed could be beyond reach for most HDOs. We look forward to expanding our community of practice to assist HDOs around the world.

5.
PLOS Digit Health; 3(5): e0000390, 2024 May.
Article in English | MEDLINE | ID: mdl-38723025

ABSTRACT

The use of data-driven technologies such as artificial intelligence (AI) and machine learning (ML) is growing in healthcare. However, the proliferation of healthcare AI tools has outpaced regulatory frameworks, accountability measures, and governance standards to ensure safe, effective, and equitable use. To address these gaps and tackle a common challenge faced by healthcare delivery organizations, a case-based workshop was organized, and a framework was developed to evaluate the potential impact of implementing an AI solution on health equity. The Health Equity Across the AI Lifecycle (HEAAL) framework was co-designed with extensive engagement of clinical, operational, technical, and regulatory leaders across healthcare delivery organizations and ecosystem partners in the US. It assesses five equity assessment domains (accountability, fairness, fitness for purpose, reliability and validity, and transparency) across eight key decision points in the AI adoption lifecycle. It is a process-oriented framework containing 37 step-by-step procedures for evaluating an existing AI solution and 34 procedures for evaluating a new AI solution. For each procedure, it identifies the relevant key stakeholders and the data sources used to conduct the procedure. HEAAL guides how healthcare delivery organizations may mitigate the potential risk of AI solutions worsening health inequities. It also informs what resources and support are required to assess the potential impact of AI solutions on health inequities.
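
A minimal sketch of how the HEAAL assessment dimensions named in this record could be organized as a reviewable checklist. The five domain names come from the abstract; the decision-point indices and the example procedure are placeholders, not the framework's actual content.

from dataclasses import dataclass, field

DOMAINS = [
    "accountability",
    "fairness",
    "fitness for purpose",
    "reliability and validity",
    "transparency",
]

@dataclass
class Procedure:
    domain: str
    decision_point: int          # 1..8 in the adoption lifecycle (labels not reproduced here)
    description: str
    stakeholders: list[str] = field(default_factory=list)
    data_sources: list[str] = field(default_factory=list)
    completed: bool = False

# Example placeholder entry for an assessment of an existing AI solution.
checklist = [
    Procedure(
        domain="fairness",
        decision_point=1,
        description="Review subgroup performance of the candidate solution.",
        stakeholders=["clinical lead", "data scientist"],
    ),
]

assert all(p.domain in DOMAINS for p in checklist)
outstanding = [p for p in checklist if not p.completed]
print(f"{len(outstanding)} procedure(s) outstanding")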

6.
J Am Med Inform Assoc; 31(7): 1622-1627, 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38767890

ABSTRACT

OBJECTIVES: Surface the urgent dilemma that healthcare delivery organizations (HDOs) face navigating the US Food and Drug Administration (FDA) final guidance on the use of clinical decision support (CDS) software. MATERIALS AND METHODS: We use sepsis as a case study to highlight the patient safety and regulatory compliance tradeoffs that 6129 hospitals in the United States must navigate. RESULTS: Sepsis CDS remains in broad, routine use. There is no commercially available sepsis CDS system that is FDA cleared as a medical device. There is no public disclosure of an HDO turning off sepsis CDS due to regulatory compliance concerns. And there is no public disclosure of FDA enforcement action against an HDO for using sepsis CDS that is not cleared as a medical device. DISCUSSION AND CONCLUSION: We present multiple policy interventions that would relieve the current tension to enable HDOs to utilize artificial intelligence to improve patient care while also addressing FDA concerns about product safety, efficacy, and equity.


Subjects
Artificial Intelligence; Decision Support Systems, Clinical; Patient Safety; United States Food and Drug Administration; Artificial Intelligence/legislation & jurisprudence; United States; Humans; Sepsis; Guideline Adherence; Delivery of Health Care
7.
PLOS Digit Health; 3(5): e0000514, 2024 May.
Article in English | MEDLINE | ID: mdl-38809946

ABSTRACT

Research on the applications of artificial intelligence (AI) tools in medicine has increased exponentially over the last few years, but implementation in clinical practice has not seen a commensurate increase, reflecting a lack of consensus on how to implement and maintain such tools. This systematic review aims to summarize frameworks focusing on procuring, implementing, monitoring, and evaluating AI tools in clinical practice. A comprehensive literature search, following PRISMA guidelines, was performed on MEDLINE, Wiley Cochrane, Scopus, and EBSCO databases to identify and include articles recommending practices, frameworks, or guidelines for AI procurement, integration, monitoring, and evaluation. From the included articles, data were extracted regarding study aim, use of a framework, rationale of the framework, and details of AI implementation involving procurement, integration, monitoring, and evaluation. The extracted details were then mapped onto the Donabedian Plan, Do, Study, Act cycle domains. The search yielded 17,537 unique articles, of which 47 were evaluated for inclusion based on their full texts and 25 articles were included in the review. Common themes extracted included transparency, feasibility of operation within existing workflows, integration into existing workflows, validation of the tool using predefined performance indicators, and improving the algorithm and/or adjusting the tool to improve performance. Among the four domains (Plan, Do, Study, Act), the most common was Plan (84%, n = 21), followed by Study (60%, n = 15), Do (52%, n = 13), and Act (24%, n = 6). Among 172 authors, only 1 (0.6%) was from a low-income country (LIC) and 2 (1.2%) were from lower-middle-income countries (LMICs). Healthcare professionals cite the implementation of AI tools within clinical settings as challenging owing to low levels of evidence focusing on integration in the Do and Act domains. The current healthcare AI landscape calls for increased data sharing and knowledge translation to facilitate common goals and reap maximum clinical benefit.

8.
NPJ Digit Med; 7(1): 87, 2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38594344

ABSTRACT

When integrating AI tools in healthcare settings, complex interactions between technologies and primary users are not always fully understood or visible. This deficient and ambiguous understanding hampers attempts by healthcare organizations to adopt AI/ML, and it also creates new challenges for researchers seeking to identify opportunities for simplifying adoption and developing best practices for the use of AI-based solutions. Our study fills this gap by documenting the process of designing, building, and maintaining an AI solution called SepsisWatch at Duke University Health System. We conducted 20 interviews with the team of engineers and scientists that led the multi-year effort to build the tool, integrate it into practice, and maintain the solution. This "Algorithm Journey Map" enumerates all social and technical activities throughout the AI solution's procurement, development, integration, and full lifecycle management. In addition to mapping the "who" and "what" of the adoption of the AI tool, we also highlight several lessons learned throughout the algorithm journey map, including modeling assumptions, stakeholder inclusion, and organizational structure. In doing so, we identify generalizable insights about how to recognize and navigate barriers to AI/ML adoption in healthcare settings. We expect that this effort will further the development of best practices for operationalizing and sustaining ethical principles in algorithmic systems.

9.
Ann Emerg Med; 84(2): 118-127, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38441514

ABSTRACT

STUDY OBJECTIVE: This study aimed to (1) develop and validate a natural language processing model to identify the presence of pulmonary embolism (PE) based on real-time radiology reports and (2) identify low-risk PE patients based on previously validated risk stratification scores using variables extracted from the electronic health record at the time of diagnosis. The combination of these approaches yielded a natural language processing-based clinical decision support tool that can identify patients presenting to the emergency department (ED) with low-risk PE as candidates for outpatient management. METHODS: Data were curated from all patients who received a PE-protocol computed tomography pulmonary angiogram (PE-CTPA) imaging study in the ED of a 3-hospital academic health system between June 1, 2018 and December 31, 2020 (n=12,183). The "preliminary" radiology reports from these imaging studies made available to ED clinicians at the time of diagnosis were adjudicated as positive or negative for PE by the clinical team. The reports were then divided into development, internal validation, and temporal validation cohorts in order to train, test, and validate a natural language processing model that could identify the presence of PE based on unstructured text. For risk stratification, patient- and encounter-level data elements were curated from the electronic health record and used to compute a real-time simplified Pulmonary Embolism Severity Index (sPESI) score at the time of diagnosis. Chart abstraction was performed on all low-risk PE patients admitted for inpatient management. RESULTS: When applied to the internal validation and temporal validation cohorts, the natural language processing model identified the presence of PE from radiology reports with an area under the receiver operating characteristic curve of 0.99, sensitivity of 0.86 to 0.87, and specificity of 0.99. Across cohorts, 10.5% of PE-CTPA studies were positive for PE, of which 22.2% were classified as low-risk by the sPESI score. Of all low-risk PE patients, 74.3% were admitted for inpatient management. CONCLUSION: This study demonstrates that a natural language processing-based model utilizing real-time radiology reports can accurately identify patients with PE. Further, this model, used in combination with a validated risk stratification score (sPESI), provides a clinical decision support tool that accurately identifies patients in the ED with low-risk PE as candidates for outpatient management.


Subjects
Emergency Service, Hospital; Natural Language Processing; Pulmonary Embolism; Humans; Pulmonary Embolism/diagnostic imaging; Pulmonary Embolism/diagnosis; Male; Female; Middle Aged; Computed Tomography Angiography; Electronic Health Records; Risk Assessment/methods; Aged; Ambulatory Care; Decision Support Systems, Clinical; Adult; Retrospective Studies
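
A minimal Python sketch of the risk-stratification step described in this record: computing a simplified Pulmonary Embolism Severity Index (sPESI) from structured EHR fields and flagging low-risk (score 0) patients. The field names are hypothetical, and the criteria follow the commonly published sPESI definition (one point each) rather than the study's exact implementation.

def spesi_score(age, has_cancer, has_chronic_cardiopulmonary_disease,
                heart_rate, systolic_bp, o2_saturation):
    # Each criterion contributes 1 point; a total of 0 indicates low risk.
    points = 0
    points += age > 80
    points += bool(has_cancer)
    points += bool(has_chronic_cardiopulmonary_disease)
    points += heart_rate >= 110
    points += systolic_bp < 100
    points += o2_saturation < 90
    return points

def is_low_risk(score):
    return score == 0

# Example: a 54-year-old with normal vitals and no comorbidities is low risk.
score = spesi_score(age=54, has_cancer=False,
                    has_chronic_cardiopulmonary_disease=False,
                    heart_rate=88, systolic_bp=124, o2_saturation=96)
print(score, is_low_risk(score))  # 0 True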
10.
Hosp Pediatr; 14(1): 11-20, 2024 Jan 01.
Article in English | MEDLINE | ID: mdl-38053467

ABSTRACT

OBJECTIVES: Early warning scores detecting clinical deterioration in pediatric inpatients have wide-ranging performance and use a limited number of clinical features. This study developed a machine learning model leveraging multiple static and dynamic clinical features from the electronic health record to predict the composite outcome of unplanned transfer to the ICU within 24 hours and inpatient mortality within 48 hours in hospitalized children. METHODS: Using a retrospective development cohort of 17 630 encounters across 10 388 patients, 2 machine learning models (light gradient boosting machine [LGBM] and random forest) were trained on 542 features and compared with our institutional Pediatric Early Warning Score (I-PEWS). RESULTS: The LGBM model significantly outperformed I-PEWS based on the area under the receiver operating characteristic curve (AUROC) for the composite outcome of ICU transfer or mortality in both the internal validation and temporal validation cohorts (AUROC 0.785, 95% confidence interval [0.780-0.791], vs 0.708 [0.701-0.715] for temporal validation), as well as lead-time before deterioration events (median 11 hours vs 3 hours; P = .004). However, LGBM performance as evaluated by the precision-recall curve was lower in the temporal validation cohort, with an associated decreased positive predictive value (6% vs 29%) and increased number needed to evaluate (17 vs 3) compared with I-PEWS. CONCLUSIONS: Our electronic health record-based machine learning model demonstrated improved AUROC and lead-time in predicting clinical deterioration in pediatric inpatients 24 to 48 hours in advance compared with I-PEWS. Further work is needed to optimize model positive predictive value to allow for integration into clinical practice.


Subjects
Clinical Deterioration; Early Warning Score; Child; Humans; Retrospective Studies; Machine Learning; Child, Hospitalized; ROC Curve
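
A minimal Python sketch of the kind of evaluation reported in this record: training a light gradient boosting machine on an imbalanced outcome and computing AUROC, positive predictive value at a fixed threshold, and the number needed to evaluate (1/PPV). The data are synthetic, not the study cohort, and the hyperparameters are illustrative.

import numpy as np
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score, precision_score
from sklearn.model_selection import train_test_split

# Synthetic imbalanced dataset standing in for encounter-level features/outcomes.
X, y = make_classification(n_samples=5000, n_features=50, weights=[0.97],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

model = LGBMClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

probs = model.predict_proba(X_test)[:, 1]
auroc = roc_auc_score(y_test, probs)

preds = (probs >= 0.5).astype(int)
ppv = precision_score(y_test, preds, zero_division=0)
nne = 1 / ppv if ppv > 0 else float("inf")

print(f"AUROC={auroc:.3f} PPV={ppv:.3f} number needed to evaluate={nne:.1f}")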
11.
J Am Med Inform Assoc; 31(3): 705-713, 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38031481

ABSTRACT

OBJECTIVE: The complexity and rapid pace of development of algorithmic technologies pose challenges for their regulation and oversight in healthcare settings. We sought to improve our institution's approach to evaluation and governance of algorithmic technologies used in clinical care and operations by creating an Implementation Guide that standardizes evaluation criteria so that local oversight is performed in an objective fashion. MATERIALS AND METHODS: Building on a framework that applies key ethical and quality principles (clinical value and safety, fairness and equity, usability and adoption, transparency and accountability, and regulatory compliance), we created concrete guidelines for evaluating algorithmic technologies at our institution. RESULTS: An Implementation Guide articulates evaluation criteria used during review of algorithmic technologies and details what evidence supports the implementation of ethical and quality principles for trustworthy health AI. Application of the processes described in the Implementation Guide can lead to algorithms that are safer as well as more effective, fair, and equitable upon implementation, as illustrated through 4 examples of technologies at different phases of the algorithmic lifecycle that underwent evaluation at our academic medical center. DISCUSSION: By providing clear descriptions/definitions of evaluation criteria and embedding them within standardized processes, we streamlined oversight processes and educated communities using and developing algorithmic technologies within our institution. CONCLUSIONS: We developed a scalable, adaptable framework for translating principles into evaluation criteria and specific requirements that support trustworthy implementation of algorithmic technologies in patient care and healthcare operations.


Subjects
Artificial Intelligence; Health Facilities; Humans; Algorithms; Academic Medical Centers; Patient Compliance
12.
JAMA Netw Open; 6(12): e2345022, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-38100115
13.
J Am Med Inform Assoc; 31(1): 274-280, 2023 Dec 22.
Article in English | MEDLINE | ID: mdl-37669138

ABSTRACT

INTRODUCTION: The pitfalls of label leakage (contamination of model input features with outcome information) are well established. Unfortunately, avoiding label leakage in clinical prediction models requires more nuance than the common advice of applying a "no time machine" rule. FRAMEWORK: We provide a framework for contemplating whether and when model features pose leakage concerns by considering the cadence, perspective, and applicability of predictions. To ground these concepts, we use real-world clinical models to highlight examples of appropriate and inappropriate label leakage in practice. RECOMMENDATIONS: Finally, we provide recommendations to support clinical and technical stakeholders as they evaluate the leakage tradeoffs associated with model design, development, and implementation decisions. By providing common language and dimensions to consider when designing models, we hope the clinical prediction community will be better prepared to develop statistically valid and clinically useful machine learning models.


Subjects
Health Facilities; Language; Machine Learning; Delivery of Health Care
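
One concrete check implied by the framework in this record, sketched in Python: exclude candidate features whose values are recorded at or after the prediction time for an encounter. The column names and timestamps are hypothetical.

import pandas as pd

# Candidate feature values with recording timestamps.
features = pd.DataFrame({
    "encounter_id": [1, 1, 2],
    "feature_name": ["lactate", "discharge_disposition", "lactate"],
    "recorded_at": pd.to_datetime(
        ["2023-01-01 08:00", "2023-01-03 12:00", "2023-01-02 09:00"]),
})
# Prediction time for each encounter.
prediction_times = pd.Series(
    pd.to_datetime(["2023-01-01 12:00", "2023-01-02 12:00"]),
    index=[1, 2], name="prediction_time")

merged = features.join(prediction_times, on="encounter_id")
leaky = merged[merged["recorded_at"] >= merged["prediction_time"]]
usable = merged[merged["recorded_at"] < merged["prediction_time"]]

print("potentially leaky feature rows:")
print(leaky[["encounter_id", "feature_name"]])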
14.
JMIR Form Res; 7: e43963, 2023 Sep 21.
Article in English | MEDLINE | ID: mdl-37733427

ABSTRACT

BACKGROUND: Machine learning (ML)-driven clinical decision support (CDS) continues to draw wide interest and investment as a means of improving care quality and value, despite mixed real-world implementation outcomes. OBJECTIVE: This study aimed to explore the factors that influence the integration of a peripheral arterial disease (PAD) identification algorithm to implement timely guideline-based care. METHODS: A total of 12 semistructured interviews were conducted with individuals from 3 stakeholder groups during the first 4 weeks of integration of an ML-driven CDS. The stakeholder groups included technical, administrative, and clinical members of the team interacting with the ML-driven CDS. The ML-driven CDS identified patients with a high probability of having PAD, and these patients were then reviewed by an interdisciplinary team that developed a recommended action plan and sent recommendations to the patient's primary care provider. Pseudonymized transcripts were coded, and thematic analysis was conducted by a multidisciplinary research team. RESULTS: Three themes were identified: positive factors translating in silico performance to real-world efficacy, organizational and data structure factors affecting clinical impact, and potential challenges to advancing equity. Our study found that, given adequate efficacy in retrospective validation, the factors that led to successful translation of in silico algorithm performance to real-world impact were largely nontechnical: strong clinical leadership, trustworthy workflows, early consideration of end-user needs, and ensuring that the CDS addresses an actionable problem. Negative factors of integration included failure to incorporate the on-the-ground context, the lack of feedback loops, and data silos limiting the ML-driven CDS. The success criteria for each stakeholder group were also characterized to better understand how teams work together to integrate ML-driven CDS and to understand the varying needs across stakeholder groups. CONCLUSIONS: Longitudinal and multidisciplinary stakeholder engagement in the development and integration of ML-driven CDS underpins its effective translation into real-world care. Although previous studies have focused on the technical elements of ML-driven CDS, our study demonstrates the importance of including administrative and operational leaders as well as an early consideration of clinicians' needs. This more holistic perspective across stakeholder groups also permits more effective detection of context-driven health care inequities, which ML-driven CDS integration can uncover or exacerbate through structural and organizational challenges. Many of the solutions to these inequities lie outside the scope of ML and require coordinated systematic solutions for mitigation to help reduce disparities in the care of patients with PAD.

15.
Patterns (N Y); 4(4): 100710, 2023 Apr 14.
Article in English | MEDLINE | ID: mdl-37123436

ABSTRACT

The Duke Institute for Health Innovation (DIHI) was launched in 2013. Frontline staff members submit proposals for innovation projects that align with strategic priorities set by organizational leadership. Funded projects receive operational and technical support from institute staff members and a transdisciplinary network of collaborators to develop and implement solutions as part of routine clinical care, ranging from machine learning algorithms to mobile applications. DIHI's operations are shaped by four guiding principles: build to show value, build to integrate, build to scale, and build responsibly. Between 2013 and 2021, more than 600 project proposals have been submitted to DIHI. More than 85 innovation projects, both through the application process and other strategic partnerships, have been supported and implemented. DIHI's funding has incubated 12 companies, engaged more than 300 faculty members, staff members, and students, and contributed to more than 50 peer-reviewed publications. DIHI's practices can serve as a model for other health systems to systematically source, develop, implement, and scale innovations.

17.
Clin Infect Dis; 76(2): 299-306, 2023 Jan 13.
Article in English | MEDLINE | ID: mdl-36125084

ABSTRACT

BACKGROUND: Human immunodeficiency virus (HIV) pre-exposure prophylaxis (PrEP) is underutilized in the southern United States. Rapid identification of individuals vulnerable to diagnosis of HIV using electronic health record (EHR)-based tools may augment PrEP uptake in the region. METHODS: Using machine learning, we developed EHR-based models to predict incident HIV diagnosis as a surrogate for PrEP candidacy. We included patients from a southern medical system with encounters between October 2014 and August 2016, training the model to predict incident HIV diagnosis between September 2016 and August 2018. We obtained 74 EHR variables as potential predictors. We compared Extreme Gradient Boosting (XGBoost) and least absolute shrinkage and selection operator (LASSO) logistic regression models, and assessed performance, overall and among women, using the area under the receiver operating characteristic curve (AUROC) and area under the precision-recall curve (AUPRC). RESULTS: Of 998 787 eligible patients, 162 had an incident HIV diagnosis, of whom 49 were women. The XGBoost model outperformed the LASSO model for the total cohort, achieving an AUROC of 0.89 and AUPRC of 0.01. The female-only cohort XGBoost model resulted in an AUROC of 0.78 and AUPRC of 0.00025. The most predictive variables for the overall cohort were race, sex, and male partner. The strongest positive predictors for the female-only cohort were history of pelvic inflammatory disease, drug use, and tobacco use. CONCLUSIONS: Our machine-learning models were able to effectively predict incident HIV diagnoses, including among women. This study establishes the feasibility of using these models to identify persons most suitable for PrEP in the South.


Subjects
HIV Infections; Pre-Exposure Prophylaxis; Humans; Male; Female; United States/epidemiology; HIV; Electronic Health Records; Machine Learning; Pre-Exposure Prophylaxis/methods; HIV Infections/diagnosis; HIV Infections/epidemiology; HIV Infections/prevention & control
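
A minimal Python sketch of the model comparison described in this record: XGBoost versus an L1-penalized (LASSO) logistic regression on a rare outcome, evaluated with AUROC and AUPRC. The data are synthetic and the hyperparameters illustrative, not those of the study.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic rare-outcome dataset standing in for the EHR cohort.
X, y = make_classification(n_samples=20000, n_features=74, weights=[0.99],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)

models = {
    "XGBoost": XGBClassifier(n_estimators=300),
    "LASSO": LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    p = model.predict_proba(X_te)[:, 1]
    print(f"{name}: AUROC={roc_auc_score(y_te, p):.3f} "
          f"AUPRC={average_precision_score(y_te, p):.3f}")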
18.
Front Med (Lausanne); 9: 946937, 2022.
Article in English | MEDLINE | ID: mdl-36341258

ABSTRACT

Background: Understanding the performance of convolutional neural networks (CNNs) for binary (benign vs. malignant) lesion classification based on real-world images is important for developing a meaningful clinical decision support (CDS) tool. Methods: We developed a CNN based on real-world smartphone images with histopathological ground truth and tested the utility of structured electronic health record (EHR) data on model performance. Model accuracy was compared against three board-certified dermatologists for clinical validity. Results: At a classification threshold of 0.5, the sensitivity was 79% vs. 77% vs. 72%, and specificity was 64% vs. 65% vs. 57%, for the image-alone vs. combined image and clinical data vs. clinical data-alone models, respectively. The PPV was 68% vs. 69% vs. 62%, AUC was 0.79 vs. 0.79 vs. 0.69, and AP was 0.78 vs. 0.79 vs. 0.64 for the image-alone vs. combined data vs. clinical data-alone models. Older age, male sex, and number of prior dermatology visits were important positive predictors for malignancy in the clinical data-alone model. Conclusion: Additional clinical data did not significantly improve CNN image model performance. Model accuracy for predicting malignant lesions was comparable to that of dermatologists (model: 71.31% vs. 3 dermatologists: 77.87%, 69.88%, and 71.93%), validating clinical utility. Prospective validation of the model in the primary care setting will enhance understanding of the model's clinical utility.
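
A minimal Python sketch of the threshold-based evaluation reported in this record: given predicted malignancy probabilities and histopathological ground truth, compute sensitivity, specificity, PPV, AUC, and average precision at a 0.5 cutoff. The scores below are synthetic placeholders, not model outputs from the study.

import numpy as np
from sklearn.metrics import (average_precision_score, confusion_matrix,
                             roc_auc_score)

# 1 = malignant, 0 = benign; probabilities are illustrative.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_prob = np.array([0.91, 0.30, 0.62, 0.45, 0.12, 0.55, 0.78, 0.25, 0.84, 0.40])

y_pred = (y_prob >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)

print(f"sens={sensitivity:.2f} spec={specificity:.2f} ppv={ppv:.2f} "
      f"AUC={roc_auc_score(y_true, y_prob):.2f} "
      f"AP={average_precision_score(y_true, y_prob):.2f}")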

19.
J Public Health Manag Pract; 28(6): E778-E788, 2022.
Article in English | MEDLINE | ID: mdl-36194821

ABSTRACT

CONTEXT: In the United States, COVID-19 vaccines have been unequally distributed between different racial and ethnic groups. Public reporting of race and ethnicity data for COVID-19 vaccination has the potential to help guide public health responses aimed at promoting vaccination equity. However, there is evidence that such data are not readily available. OBJECTIVES: This study sought to assess gaps and discrepancies in COVID-19 vaccination reporting in 10 large US cities in July 2021. DESIGN, SETTING, AND PARTICIPANTS: For the 10 cities selected, we collected COVID-19 vaccination and population data using publicly available resources, such as state health department Web sites and the US Census Bureau American Community Survey. We examined vaccination plans and news sources to identify initial proposals and evidence of implementation of COVID-19 vaccination best practices. MAIN OUTCOME MEASURE: We performed quantitative assessment of associations of the number of vaccination best practices implemented with COVID-19 racial and ethnic vaccination equity. We additionally assessed gaps and discrepancies in COVID-19 vaccination reporting between states. RESULTS: Our analysis did not show that COVID-19 vaccination inequity was associated with the number of vaccination best practices implemented. However, gaps and variation in reporting of racial and ethnic demographic vaccination data inhibited our ability to effectively assess whether vaccination programs were reaching minority populations. CONCLUSIONS: Lack of consistent public reporting and transparency of COVID-19 vaccination data has likely hindered public health responses by impeding the ability to track the effectiveness of strategies that target vaccine equity.


Subjects
COVID-19; Ethnicity; COVID-19/epidemiology; COVID-19/prevention & control; COVID-19 Vaccines/therapeutic use; Cities; Humans; United States/epidemiology; Vaccination
20.
J Am Med Inform Assoc; 29(9): 1631-1636, 2022 Aug 16.
Article in English | MEDLINE | ID: mdl-35641123

ABSTRACT

Artificial intelligence/machine learning models are being rapidly developed and used in clinical practice. However, many models are deployed without a clear understanding of clinical or operational impact and frequently lack monitoring plans that can detect potential safety signals. There is a lack of consensus in establishing governance to deploy, pilot, and monitor algorithms within operational healthcare delivery workflows. Here, we describe a governance framework that combines current regulatory best practices and lifecycle management of predictive models being used for clinical care. Since January 2021, we have successfully added models to our governance portfolio and are currently managing 52 models.


Subjects
Artificial Intelligence; Machine Learning; Algorithms; Delivery of Health Care