ABSTRACT
Electronic health records (EHRs) are ubiquitous yet still evolving, resulting in a moving target for determining the effects of context (features of the work environment, such as organization, payment systems, user training, and roles) on EHR implementation projects. Electronic health records have become instrumental in effecting quality improvement innovations and providing data to evaluate them. However, reports of studies typically fail to provide adequate descriptions of contextual details to permit readers to apply the findings. As for any evaluation, the quality of reporting is essential to learning from, and disseminating, the results. Extensive guidelines exist for reporting of virtually all types of applied health research, but they are not tailored to capture some contextual factors that may affect the outcomes of EHR implementations, such as attitudes toward implementation, format and amount of training, post-go-live support, amount of local customization, and time diverted from direct interaction with patients to computers. Nevertheless, evaluators of EHR-based innovations can choose reporting guidelines that match the general purpose of their evaluation and the stage of their investigation (planning, protocol, execution, and analysis) and should report relevant contextual details (including, if pertinent, any pressures to help justify the huge investments and many years required for some implementations). Reporting guidelines are based on the scientific principles and practices that underlie sound research and should be consulted from the earliest stages of planning evaluations and onward, serving as guides for how evaluations should be conducted as well as reported.
Subject(s)
Electronic Health Records/organization & administration, Internal Medicine/organization & administration, Quality Improvement, Humans
ABSTRACT
BACKGROUND: Rapid access to evidence is crucial in times of an evolving clinical crisis. To that end, we propose a novel approach to answering clinical queries, termed rapid meta-analysis (RMA). Unlike traditional meta-analysis, RMA balances a quick time to production with reasonable data quality assurances, leveraging artificial intelligence (AI) to strike this balance. OBJECTIVE: We aimed to evaluate whether RMA can generate meaningful clinical insights, but crucially, in a much shorter processing time than traditional meta-analysis, using a relevant, real-world example. METHODS: The development of our RMA approach was motivated by a currently relevant clinical question: are ocular toxicity and vision compromise side effects of hydroxychloroquine therapy? At the time this study was designed, hydroxychloroquine was a leading candidate for the treatment of coronavirus disease (COVID-19). We then leveraged AI to retrieve and screen articles, automatically extract their results, review the studies, and analyze the data with standard statistical methods. RESULTS: By combining AI with human analysis in our RMA, we generated a meaningful clinical result in less than 30 minutes. The RMA identified 11 studies considering ocular toxicity as a side effect of hydroxychloroquine and estimated its incidence at 3.4% (95% CI 1.11%-9.96%). Heterogeneity across individual study findings was high, which should be taken into account when interpreting the result. CONCLUSIONS: We demonstrate that a novel, AI-assisted approach to meta-analysis can generate meaningful clinical insights in a much shorter time than traditional meta-analysis.
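The final statistical step of an RMA is a standard meta-analysis of a proportion. Below is a minimal sketch of that step in Python, assuming a DerSimonian-Laird random-effects model on the logit scale (a common choice; the paper does not specify its exact model, and the study counts here are hypothetical placeholders, not the 11 studies it identified):

```python
# Pool an incidence (proportion) across studies with a DerSimonian-Laird
# random-effects model on the logit scale. Study data are HYPOTHETICAL
# placeholders, not the 11 studies from the RMA.
import numpy as np

events = np.array([3, 1, 5, 2, 4])         # ocular-toxicity events per study
n      = np.array([80, 60, 150, 90, 120])  # patients per study

p   = events / n
y   = np.log(p / (1 - p))                # logit-transformed proportions
var = 1 / events + 1 / (n - events)      # approximate variance of each logit

w    = 1 / var                           # fixed-effect (inverse-variance) weights
y_fe = np.sum(w * y) / np.sum(w)
Q    = np.sum(w * (y - y_fe) ** 2)       # Cochran's Q (heterogeneity)
df   = len(y) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

w_re   = 1 / (var + tau2)                # random-effects weights
y_re   = np.sum(w_re * y) / np.sum(w_re)
se_re  = np.sqrt(1 / np.sum(w_re))
lo, hi = y_re - 1.96 * se_re, y_re + 1.96 * se_re

expit = lambda x: 1 / (1 + np.exp(-x))   # back-transform logit -> proportion
print(f"pooled incidence {expit(y_re):.1%} "
      f"(95% CI {expit(lo):.1%}-{expit(hi):.1%}), "
      f"I^2 {max(0.0, (Q - df) / Q):.0%}")
```

An I^2 near or above 75% would correspond to the high heterogeneity the authors flag in their result.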
Subject(s)
Artificial Intelligence, Coronavirus Infections/drug therapy, Eye Diseases/etiology, Hydroxychloroquine/adverse effects, Hydroxychloroquine/therapeutic use, Meta-Analysis as Topic, Pneumonia, Viral/drug therapy, COVID-19, Eye/drug effects, Eye/pathology, Humans, Pandemics, Time Factors, COVID-19 Drug Treatment
ABSTRACT
BACKGROUND: A major barrier to the practice of evidence-based medicine is efficiently finding scientifically sound studies on a given clinical topic. OBJECTIVE: To investigate a deep learning approach to retrieving scientifically sound treatment studies from the biomedical literature. METHODS: We trained a convolutional neural network using a noisy dataset of 403,216 PubMed citations, with title and abstract as features. The deep learning model was compared with state-of-the-art search filters: PubMed's Clinical Queries Broad treatment filter, McMaster's textword search strategy (no Medical Subject Headings [MeSH] terms), and the Clinical Queries Balanced treatment filter. A previously annotated dataset (Clinical Hedges) was used as the gold standard. RESULTS: The deep learning model obtained significantly lower recall than the Clinical Queries Broad treatment filter (96.9% vs 98.4%; P<.001), and recall equivalent to McMaster's textword search (96.9% vs 97.1%; P=.57) and the Clinical Queries Balanced filter (96.9% vs 97.0%; P=.63). Deep learning obtained significantly higher precision than the Clinical Queries Broad filter (34.6% vs 22.4%; P<.001) and McMaster's textword search (34.6% vs 11.8%; P<.001), but significantly lower precision than the Clinical Queries Balanced filter (34.6% vs 40.9%; P<.001). CONCLUSIONS: Deep learning performed well compared with state-of-the-art search filters, especially for citations that were not yet indexed. Unlike previous machine learning approaches, the proposed deep learning model requires no feature engineering and no time-sensitive or proprietary features, such as MeSH terms and bibliometrics. Deep learning is a promising approach to identifying reports of scientifically rigorous clinical research. Further work is needed to optimize the deep learning model and to assess its generalizability to other areas, such as diagnosis, etiology, and prognosis.
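The abstract does not publish the exact architecture, so the sketch below shows one plausible shape: a Kim-style text CNN over title-plus-abstract tokens, written in PyTorch. All sizes (vocabulary, embedding dimension, filter widths and counts) are illustrative assumptions, not the authors' settings:

```python
# Hedged sketch: a text CNN that scores a citation (title + abstract tokens)
# as scientifically sound vs. not. Architecture and hyperparameters are
# ASSUMPTIONS for illustration, not the model reported in the paper.
import torch
import torch.nn as nn

class CitationCNN(nn.Module):
    def __init__(self, vocab_size=50_000, embed_dim=128,
                 num_filters=100, widths=(3, 4, 5)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # one 1-D convolution per n-gram width over the token sequence
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, w) for w in widths)
        self.classifier = nn.Linear(num_filters * len(widths), 1)

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)  # (batch, embed_dim, seq_len)
        # ReLU, then max-over-time pooling of each convolution's feature maps
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.classifier(torch.cat(pooled, dim=1)).squeeze(1)  # logits

model = CitationCNN()
dummy = torch.randint(1, 50_000, (8, 300))  # 8 dummy citations, 300 tokens each
print(model(dummy).shape)                   # torch.Size([8])
```

Trained with a binary cross-entropy loss on the noisy labels, a model of this shape consumes only raw text, consistent with the paper's point that no MeSH terms or other engineered features are required.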
Subject(s)
Deep Learning/standards, Information Storage and Retrieval/methods, Neural Networks, Computer, PubMed/standards, Humans
ABSTRACT
[This corrects the article DOI: 10.1016/j.conctc.2019.100443.]
ABSTRACT
BACKGROUND: More than 90% of clinical-trial compounds fail to demonstrate sufficient efficacy and safety. To help alleviate this issue, systematic literature review and meta-analysis (SLR), which synthesize the current evidence for a research question, can be applied to preclinical evidence to identify the most promising therapeutics. However, these methods remain time-consuming and labor-intensive. Here, we introduce an economic formula to estimate the expense of SLR for academic institutions and pharmaceutical companies. METHODS: We estimate the manual effort involved in SLR by quantifying the amount of labor required and the total associated labor cost. We begin with an empirical estimation and derive a formula that quantifies and describes the cost. RESULTS: The formula estimated that each SLR costs approximately $141,194.80. We found that, on average, the ten largest pharmaceutical companies publish 118.71 SLRs per year and the ten major academic institutions publish 132.16. On average, the total annual cost of SLRs amounts to $18,660,304.77 per academic institution and $16,761,234.71 per pharmaceutical company. DISCUSSION: SLR is an important but costly mechanism for assessing the totality of evidence. CONCLUSIONS: With the growing number of publications, the significant time and cost of SLR may pose a barrier to its consistent application in thoroughly assessing the promise of clinical trials. We call on investigators and developers to create automated solutions, particularly to help with the assessment of preclinical evidence. The formula we introduce provides a cost baseline against which the efficiency of automation can be measured.
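As a sanity check, the abstract's headline figures are internally consistent: each annual total is simply average SLR output multiplied by the per-SLR cost estimate. A minimal verification in Python (variable names are ours):

```python
# Verify the reported totals: annual cost = SLRs per year x cost per SLR.
COST_PER_SLR = 141_194.80  # USD, the paper's per-SLR estimate

output_per_year = {
    "average top-10 pharmaceutical company": 118.71,  # SLRs per year
    "average major academic institution":    132.16,
}
for who, slrs in output_per_year.items():
    print(f"{who}: ${slrs * COST_PER_SLR:,.2f} per year")
# -> $16,761,234.71 and $18,660,304.77, matching the reported totals
```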