ABSTRACT
Concentrations of pathogen genomes measured in wastewater have recently become available as a new data source to use when modeling the spread of infectious diseases. One promising use for this data source is inference of the effective reproduction number, the average number of individuals a newly infected person will infect. We propose a model where new infections arrive according to a time-varying immigration rate which can be interpreted as an average number of secondary infections produced by one infectious individual per unit time. This model allows us to estimate the effective reproduction number from concentrations of pathogen genomes while avoiding difficult-to-verify assumptions about the dynamics of the susceptible population. As a byproduct of our primary goal, we also produce a new model for estimating the effective reproduction number from case data using the same framework. We test this modeling framework in an agent-based simulation study with a realistic data generating mechanism which accounts for the time-varying dynamics of pathogen shedding. Finally, we apply our new model to estimating the effective reproduction number of SARS-CoV-2, the causative agent of COVID-19, in Los Angeles, CA, using pathogen RNA concentrations collected from a large wastewater treatment facility.
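The renewal-equation idea underlying this family of effective reproduction number models can be sketched in a few lines. The code below is an illustrative toy for case counts only, not the authors' wastewater model, and the generation-interval weights are invented for the example.

```python
# Toy renewal-equation estimator: R_t = I_t / sum_s w_s * I_{t-s}, where w is
# the generation-interval distribution. Weights below are illustrative only.

def estimate_rt(incidence, gen_interval):
    """Point estimate of R_t wherever the weighted past incidence is positive."""
    rts = {}
    for t in range(1, len(incidence)):
        # Total infectiousness: past incidence weighted by the generation interval.
        denom = sum(w * incidence[t - s]
                    for s, w in enumerate(gen_interval, start=1)
                    if t - s >= 0)
        if denom > 0:
            rts[t] = incidence[t] / denom
    return rts

# Synthetic outbreak growing 10% per day; R_t should settle just above 1.2.
incidence = [round(100 * 1.1 ** t) for t in range(20)]
gen_interval = [0.25, 0.5, 0.25]  # hypothetical 2-day mean generation time
rt = estimate_rt(incidence, gen_interval)
```

For steady 10% daily growth and this generation interval, the estimate stabilizes near 1.21, illustrating how a growth rate plus a generation-interval assumption yields a reproduction number.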
Subjects
Basic Reproduction Number , COVID-19 , SARS-CoV-2 , Wastewater , Humans , COVID-19/transmission , COVID-19/epidemiology , Basic Reproduction Number/statistics & numerical data , Computer Simulation , Statistical Models , Los Angeles/epidemiology
ABSTRACT
Identifying host factors that influence infectious disease transmission is an important step toward developing interventions to reduce disease incidence. Recent advances in methods for reconstructing infectious disease transmission events using pathogen genomic and epidemiological data open the door for investigation of host factors that affect onward transmission. While most transmission reconstruction methods are designed to work with densely sampled outbreaks, these methods are making their way into surveillance studies, where the fraction of sampled cases with sequenced pathogens could be relatively low. Surveillance studies that use transmission event reconstruction then use the reconstructed events as response variables (i.e., infection source status of each sampled case) and use host characteristics as predictors (e.g., presence of HIV infection) in regression models. We use simulations to study estimation of the effect of a host factor on probability of being an infection source via this multi-step inferential procedure. Using TransPhylo (a widely used method for Bayesian estimation of infectious disease transmission events) and logistic regression, we find that low sensitivity of identifying infection sources leads to dilution of the signal, biasing logistic regression coefficients toward zero. We show that increasing the proportion of sampled cases improves sensitivity and some, but not all, properties of the logistic regression inference. Application of these approaches to real-world data from a population-based TB study in Botswana fails to detect an association between HIV infection and probability of being a TB infection source.
We conclude that application of a pipeline, where one first uses TransPhylo and sparsely sampled surveillance data to infer transmission events and then estimates effects of host characteristics on probabilities of these events, should be accompanied by a realistic simulation study to better understand biases stemming from imprecise transmission event inference.
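The dilution effect described above can be reproduced in a small simulation. This sketch is not the TransPhylo pipeline: it fits a one-covariate logistic regression (by batch gradient ascent) before and after degrading the binary response with 50% sensitivity, mimicking imperfectly identified infection sources; all data are synthetic.

```python
import math
import random

random.seed(1)

def fit_logistic(xs, ys, steps=800, lr=0.05):
    """One-covariate logistic regression fit by batch gradient ascent."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(b0 + b1 * x)))
            g0 += y - p
            g1 += (y - p) * x
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

true_b1 = 1.5
xs = [random.gauss(0, 1) for _ in range(1500)]
ys = [1 if random.random() < 1 / (1 + math.exp(-true_b1 * x)) else 0 for x in xs]

# Nondifferential misclassification: each true positive is kept with prob. 0.5
# (50% sensitivity), as when many true infection sources go undetected.
ys_noisy = [y if random.random() < 0.5 else 0 for y in ys]

_, b1_clean = fit_logistic(xs, ys)
_, b1_noisy = fit_logistic(xs, ys_noisy)
```

Here `b1_clean` recovers roughly the true coefficient of 1.5, while `b1_noisy` is attenuated toward zero, which is the bias the simulation study characterizes.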
Subjects
HIV Infections , Tuberculosis , Humans , Bayes Theorem , HIV Infections/epidemiology , Tuberculosis/epidemiology , Tuberculosis/genetics , Disease Outbreaks , Computer Simulation
ABSTRACT
PURPOSE: Aggressive posterior retinopathy of prematurity (AP-ROP) is a vision-threatening disease with a significant rate of progression to retinal detachment. The purpose of this study was to characterize AP-ROP quantitatively by demographics, rate of disease progression, and a deep learning-based vascular severity score. DESIGN: Retrospective analysis. PARTICIPANTS: The Imaging and Informatics in ROP cohort from 8 North American centers, consisting of 947 patients and 5945 clinical eye examinations with fundus images, was used. Pretreatment eyes were categorized by disease severity: none, mild, type 2 or pre-plus, treatment-requiring (TR) without AP-ROP, and TR with AP-ROP. Analyses compared TR with AP-ROP and TR without AP-ROP to investigate differences between AP-ROP and other TR disease. METHODS: A reference standard diagnosis was generated for each eye examination using previously published methods combining 3 independent image-based gradings and 1 ophthalmoscopic grading. All fundus images were analyzed using a previously published deep learning system and were assigned a score from 1 through 9. MAIN OUTCOME MEASURES: Birth weight, gestational age, postmenstrual age, and vascular severity score. RESULTS: Infants who demonstrated AP-ROP were more premature by birth weight (617 g vs. 679 g; P = 0.01) and gestational age (24.3 weeks vs. 25.0 weeks; P < 0.01) and reached peak severity at an earlier postmenstrual age (34.7 weeks vs. 36.9 weeks; P < 0.001) compared with infants with TR without AP-ROP. The mean vascular severity score was higher in infants with TR with AP-ROP than in those with TR without AP-ROP (8.79 vs. 7.19; P < 0.001). Analyzing the severity score over time, the rate of progression was fastest in infants with AP-ROP (P < 0.002 at 30-32 weeks). CONCLUSIONS: Premature infants in North America with AP-ROP are born younger and demonstrate disease earlier than infants with less severe ROP.
Disease severity is quantifiable with a deep learning-based score, which correlates with clinically identified categories of disease, including AP-ROP. The rate of progression to peak disease is greatest in eyes that demonstrate AP-ROP compared with other treatment-requiring eyes. Analysis of quantitative characteristics of AP-ROP may help improve diagnosis and treatment of an aggressive, vision-threatening form of ROP.
Subjects
Diagnostic Imaging/methods , Ophthalmoscopy/methods , Retinopathy of Prematurity/diagnosis , Telemedicine/methods , Disease Progression , Female , Humans , Incidence , Infant, Newborn , Male , North America/epidemiology , Retinopathy of Prematurity/epidemiology , Retrospective Studies , Risk Factors
ABSTRACT
PURPOSE: With the current wide adoption of electronic health records (EHRs) by ophthalmologists, there are widespread concerns about the amount of time spent using the EHR. The goal of this study was to examine how the amount of time spent using EHRs as well as related documentation behaviors changed 1 decade after EHR adoption. DESIGN: Single-center cohort study. PARTICIPANTS: Six hundred eighty-five thousand three hundred sixty-one office visits with 70 ophthalmology providers. METHODS: We calculated time spent using the EHR associated with each individual office visit using EHR audit logs and determined chart closure times and progress note length from secondary EHR data. We tracked and modeled how these metrics changed from 2006 to 2016 with linear mixed models. MAIN OUTCOME MEASURES: Minutes spent using the EHR associated with an office visit, chart closure time in hours from the office visit check-in time, and progress note length in characters. RESULTS: Median EHR time per office visit in 2006 was 4.2 minutes (interquartile range [IQR], 3.5 minutes), and increased to 6.4 minutes (IQR, 4.5 minutes) in 2016. Median chart closure time was 2.8 hours (IQR, 21.3 hours) in 2006 and decreased to 2.3 hours (IQR, 18.5 hours) in 2016. In 2006, median note length was 1530 characters (IQR, 1435 characters) and increased to 3838 characters (IQR, 2668.3 characters) in 2016. Linear mixed models found EHR time per office visit was 31.9±0.2% (P < 0.001) greater from 2014 through 2016 than from 2006 through 2010, chart closure time was 6.7±0.3 hours (P < 0.001) shorter from 2014 through 2016 versus 2006 through 2010, and note length was 1807.4±6.5 characters (P < 0.001) longer from 2014 through 2016 versus 2006 through 2010. CONCLUSIONS: After 1 decade of use, providers spend more time using the EHR for an office visit, generate longer notes, and close the chart faster. These changes are likely to represent increased time and documentation pressure for providers. 
Electronic health record redesign and new documentation regulations may help to address these issues.
Subjects
Documentation/trends , Electronic Health Records/trends , Ophthalmology/trends , Optometry/trends , Academic Medical Centers , Cohort Studies , Documentation/statistics & numerical data , Electronic Health Records/statistics & numerical data , Female , Health Personnel , Humans , Male , Office Visits/statistics & numerical data , Ophthalmologists , Ophthalmology/statistics & numerical data , Optometrists , Optometry/statistics & numerical data , Time Factors
ABSTRACT
PURPOSE: To improve clinic efficiency through an ophthalmology scheduling template developed using simulation models and electronic health record (EHR) data. DESIGN: We created a computer simulation model of 1 pediatric ophthalmologist's clinic using EHR timestamp data, which was used to develop a scheduling template based on appointment length (short, medium, or long). We assessed its impact on clinic efficiency after implementation in the practices of 5 different pediatric ophthalmologists. PARTICIPANTS: We observed and timed patient appointments in person (n = 120) and collected EHR timestamps for 2 years of appointments (n = 650). We calculated efficiency measures for 172 clinic sessions before implementation vs. 119 clinic sessions after implementation. METHODS: We validated clinic workflow timings calculated from EHR timestamps and the simulation models based on them with observed timings. From simulation tests, we developed a new scheduling template and evaluated it with efficiency metrics before vs. after implementation. MAIN OUTCOME MEASURES: Measurements of clinical efficiency (mean clinic volume, patient wait time, examination time, and clinic length). RESULTS: Mean physician examination time calculated from EHR timestamps was 13.8±8.2 minutes and was not statistically different from mean physician examination time from in-person observation (13.3±7.3 minutes; P = 0.7), suggesting that EHR timestamps are accurate. Mean patient wait time for the simulation model (31.2±10.9 minutes) was not statistically different from the observed mean patient wait times (32.6±25.3 minutes; P = 0.9), suggesting that simulation models are accurate.
After implementation of the new scheduling template, all 5 pediatric ophthalmologists showed statistically significant improvements in clinic volume (mean increase of 1-3 patients/session; P ≤ 0.05 for 2 providers; P ≤ 0.008 for 3 providers), whereas 4 of 5 had improvements in mean patient wait time (average improvements of 3-4 minutes/patient; statistically significant for 2 providers, P ≤ 0.008). All of the ophthalmologists' examination times remained the same before and after implementation. CONCLUSIONS: Simulation models based on big data from EHRs can test clinic changes before real-life implementation. A scheduling template using predicted appointment length improves clinic efficiency and may generalize to other clinics. Electronic health records have potential to become tools for supporting clinic operations improvement.
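The intuition behind length-based scheduling templates can be shown with a toy queue. This is not the study's simulation model: a single hypothetical provider sees patients at scheduled times, and we compare a one-size-fits-all slot that underestimates the casemix mean against slots matched to each visit type. All durations and frequencies are invented.

```python
import random

random.seed(7)

EXAM_MIN = {"short": 8, "medium": 15, "long": 25}  # assumed mean exam lengths

def simulate(patients, slot_for):
    """Mean patient wait for a single provider under a slot-length rule."""
    free_at = 0.0   # when the provider next becomes free
    sched = 0.0     # current patient's scheduled time
    waits = []
    for kind, duration in patients:
        start = max(free_at, sched)   # seen at the scheduled time or later
        waits.append(start - sched)
        free_at = start + duration
        sched += slot_for(kind)
    return sum(waits) / len(waits)

kinds = random.choices(["short", "medium", "long"], weights=[3, 2, 1], k=200)
patients = [(k, max(1.0, random.gauss(EXAM_MIN[k], 2))) for k in kinds]

uniform_wait = simulate(patients, lambda kind: 12)               # one-size slots
template_wait = simulate(patients, lambda kind: EXAM_MIN[kind])  # length-based
```

Because the uniform 12-minute slot is shorter than the roughly 13.2-minute casemix mean, delays compound over the session, whereas type-matched slots keep waits bounded; this is the mechanism a template based on predicted appointment length exploits.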
Subjects
Academic Medical Centers/statistics & numerical data , Appointments and Schedules , Efficiency, Organizational/statistics & numerical data , Electronic Health Records/statistics & numerical data , Office Visits/statistics & numerical data , Ophthalmology/statistics & numerical data , Academic Medical Centers/organization & administration , Adolescent , Child , Child, Preschool , Computer Simulation , Humans , Infant , Infant, Newborn , Ophthalmology/organization & administration , Time Factors , Workflow
ABSTRACT
Branching process inspired models are widely used to estimate the effective reproduction number-a useful summary statistic describing an infectious disease outbreak-using counts of new cases. Case data is a real-time indicator of changes in the reproduction number, but is challenging to work with because cases fluctuate due to factors unrelated to the number of new infections. We develop a new model that incorporates the number of diagnostic tests as a surveillance model covariate. Using simulated data and data from the SARS-CoV-2 pandemic in California, we demonstrate that incorporating tests leads to improved performance over the state of the art.
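A toy example of why test counts matter as a surveillance covariate (an illustration, not the proposed model): when testing volume drops, raw case counts can fall even as true incidence rises, while the per-test signal still tracks the trend. All numbers are synthetic.

```python
import random

random.seed(3)

true_incidence = [1000 * 1.07 ** t for t in range(14)]  # epidemic growing 7%/day
tests = [5000] * 7 + [2000] * 7                          # testing drops midway

cases = []
for inc, n_tests in zip(true_incidence, tests):
    p_pos = min(1.0, inc / 20000)   # chance a given test comes back positive
    cases.append(sum(1 for _ in range(n_tests) if random.random() < p_pos))

raw_change = cases[13] / cases[6]                   # naive case-count signal
positivity = [c / n for c, n in zip(cases, tests)]  # test-adjusted signal
adj_change = positivity[13] / positivity[6]
```

Here `raw_change` falls below 1 (an apparent decline) while `adj_change` exceeds 1, matching the true growth, which is why conditioning the surveillance model on the number of diagnostic tests improves inference.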
ABSTRACT
Mechanistic models fit to streaming surveillance data are critical to understanding the transmission dynamics of an outbreak as it unfolds in real time. However, transmission model parameter estimation can be imprecise, and sometimes even impossible, because surveillance data are noisy and not informative about all aspects of the mechanistic model. To partially overcome this obstacle, Bayesian models have been proposed to integrate multiple surveillance data streams. We devised a modeling framework for integrating SARS-CoV-2 diagnostic test and mortality time series data, as well as seroprevalence data from cross-sectional studies, and tested the importance of individual data streams for both inference and forecasting. Importantly, our model for incidence data accounts for changes in the total number of tests performed. We model the transmission rate, infection-to-fatality ratio, and a parameter controlling a functional relationship between the true case incidence and the fraction of positive tests as time-varying quantities and estimate changes of these parameters nonparametrically. We compare our base model against modified versions which do not use diagnostic test counts or seroprevalence data to demonstrate the utility of including these often unused data streams. We apply our Bayesian data integration method to COVID-19 surveillance data collected in Orange County, California between March 2020 and February 2021 and find that 32-72% of Orange County residents experienced SARS-CoV-2 infection by mid-January 2021. Despite this high number of infections, our results suggest that the abrupt end of the winter surge in January 2021 was due to both behavioral changes and a high level of accumulated natural immunity.
ABSTRACT
PURPOSE: To observe the impact of employing scribes on documentation efficiency in ophthalmology clinics. DESIGN: Single-center retrospective cohort study. PARTICIPANTS: A total of 29,997 outpatient visits conducted by seven attending ophthalmologists between 1/1/2018 and 12/31/2019 were included in the study; 18,483 with a scribe present during the encounter and 11,514 without a scribe present. INTERVENTION: Use of a scribe. MAIN OUTCOME MEASURES: Total physician documentation time, physician documentation time during and after the visit, visit length, time to chart closure, note length, and percent of note text edited by physician. RESULTS: Total physician documentation time was significantly less when working with a scribe (mean ± SD, 4.7 ± 2.9 vs. 7.6 ± 3.8 minutes/note, P < .001), as was documentation time during the visit (2.8 ± 2.2 vs. 5.9 ± 3.1 minutes/note, P < .001). Physicians also edited scribed notes less, deleting 1.9 ± 4.4% of scribes' draft note text and adding 14.8 ± 11.4% of the final note text, compared to deleting 6.0 ± 9.1% (P < .001) of draft note text and adding 21.2 ± 15.3% (P < .001) of final note text when not working with a scribe. However, physician after-visit documentation time was significantly higher with a scribe for 3 of 7 physicians (P < .001). Scribe use was also associated with an increase in office visit length of 2.9 minutes per patient (P < .001) and an increase in time to chart closure of 3.0 hours (P < .001), according to mixed-effects linear models. CONCLUSIONS: Scribe use was associated with increased documentation efficiency through lower total documentation time and less note editing by physicians. However, the use of a scribe was also associated with longer office visit lengths and time to chart closure. The variability in the impact of scribe use on different measures of documentation efficiency leaves unanswered questions about best practices for the implementation of scribes, and warrants further study of effective scribe use.
ABSTRACT
Note entry and review in electronic health records (EHRs) are time-consuming. While some clinics have adopted team-based models of note entry, how these models have impacted note review is unknown in outpatient specialty clinics such as ophthalmology. We hypothesized that ophthalmologists and ancillary staff review very few notes. Using audit log data from 9775 follow-up office visits in an academic ophthalmology clinic, we found ophthalmologists reviewed a median of 1 note per visit (2.6 ± 5.3% of available notes), while ancillary staff reviewed a median of 2 notes per visit (4.1 ± 6.2% of available notes). While prior ophthalmic office visit notes were the most frequently reviewed note type, ophthalmologists and staff reviewed no such notes in 51% and 31% of visits, respectively. These results highlight the collaborative nature of note review and raise concerns about how cumbersome EHR designs affect efficient note review and the utility of prior notes in ophthalmic clinical care.
ABSTRACT
Many medical providers employ scribes to manage electronic health record (EHR) documentation. Prior studies have shown the benefits of scribes, but no large-scale study has quantitatively assessed scribe impact on documentation workflows. We propose methods that leverage EHR data for identifying scribe presence during an office visit, measuring provider documentation time, and determining how notes are edited and composed. In a case study, we found scribe use was associated with less provider documentation time overall (averaging 2.4 minutes or 39% less time, p < 0.001), fewer note edits by providers (8.4% less added and 4.2% less deleted text, p < 0.001), but significantly more documentation time after the visit for four out of seven providers (p < 0.001) and no change in the amount of copied and imported note text. Our methods could validate prior study results, identify variability for determining best practices, and determine that scribes do not improve all aspects of documentation.
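One way to quantify how much of a scribe's draft a provider edits is to compare draft and final note text with sequence matching. This is a hypothetical sketch using `difflib` from the Python standard library, not the paper's exact metric; the note text is invented.

```python
import difflib

def edit_stats(draft, final):
    """Fraction of draft text deleted and fraction of final text newly added."""
    sm = difflib.SequenceMatcher(a=draft, b=final, autojunk=False)
    kept = sum(size for _, _, size in sm.get_matching_blocks())
    deleted = 1 - kept / len(draft)   # draft characters that did not survive
    added = 1 - kept / len(final)     # final characters not present in the draft
    return deleted, added

draft = "Patient stable. Continue current drops. Follow up in 6 months."
final = "Patient stable. Continue current drops bilaterally. Follow up in 3 months."
deleted, added = edit_stats(draft, final)
```

On this example the provider deleted a small fraction of the draft and added a modest fraction of new final text, the kind of per-note percentages the study aggregates across visits.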
Subjects
Documentation/methods , Electronic Health Records , Humans , Workflow
ABSTRACT
As healthcare providers have transitioned from paper to electronic health records they have gained access to increasingly sophisticated documentation aids such as custom note templates. However, little is known about how providers use these aids. To address this gap, we examine how 48 ophthalmologists and their staff create and use content-importing phrases (a customizable and composable form of note template) to document office visits across two years. In this case study, we find 1) content-importing phrases were used to document the vast majority of visits (95%), 2) most content imported by these phrases was structured data imported by data-links rather than boilerplate text, and 3) providers primarily used phrases they had created while staff largely used phrases created by other people. We conclude by discussing how framing clinical documentation as end-user programming can inform the design of electronic health records and other documentation systems mixing data and narrative text.
ABSTRACT
Patient "no-shows" are missed appointments resulting in clinical inefficiencies, revenue loss, and discontinuity of care. Using secondary electronic health record (EHR) data, we used machine learning to predict patient no-shows in follow-up and new patient visits in pediatric ophthalmology and to evaluate feature importance. The best model, XGBoost, had an area under the receiver operating characteristic curve (AUC) score of 0.90 for predicting no-shows in follow-up visits. The key findings from this study are: (1) secondary use of EHR data can be used to build datasets for predictive modeling and successfully predict patient no-shows in pediatric ophthalmology, (2) models predicting no-shows for follow-up visits are more accurate than those for new patient visits, and (3) the performance of predictive models is more robust in predicting no-shows compared to individual important features. We hope these models will be used for more effective interventions to mitigate the impact of patient no-shows.
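A minimal sketch of the prediction task (this stands in for the study's XGBoost models, which are not reproduced here): score synthetic visits with an assumed logistic risk model and evaluate discrimination with the rank-based AUC. The features, coefficients, and data are all invented for illustration.

```python
import math
import random

random.seed(11)

def auc(scores, labels):
    """Probability a random positive outranks a random negative (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

visits = []
for _ in range(1500):
    prior_rate = random.random()       # patient's historical no-show fraction
    lead_days = random.randint(0, 90)  # days from booking to appointment
    logit = -2.0 + 2.5 * prior_rate + 0.02 * lead_days
    no_show = 1 if random.random() < 1 / (1 + math.exp(-logit)) else 0
    visits.append((prior_rate, lead_days, no_show))

scores = [2.5 * r + 0.02 * d for r, d, _ in visits]  # risk score, same weights
labels = [y for _, _, y in visits]
model_auc = auc(scores, labels)
```

Even with the true coefficients, irreducible randomness in whether a given patient shows up caps the achievable AUC well below 1, a useful mental model when interpreting reported scores like 0.90.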
Subjects
Academic Medical Centers/statistics & numerical data , Ambulatory Care Facilities/statistics & numerical data , Appointments and Schedules , Efficiency, Organizational/statistics & numerical data , Electronic Health Records/statistics & numerical data , Machine Learning , No-Show Patients , Office Visits/statistics & numerical data , Ophthalmology/statistics & numerical data , Academic Medical Centers/organization & administration , Child , Humans , Ophthalmology/organization & administration , ROC Curve
ABSTRACT
PURPOSE: This study analyzed and quantified the sources of electronic health record (EHR) text documentation in ophthalmology progress notes. DESIGN: EHR documentation review and analysis. METHODS: Setting: a single academic ophthalmology department. STUDY POPULATION: a cohort study conducted between November 1, 2016, and December 31, 2018, using secondary EHR data and a follow-up manual review of a random sample. The cohort study included 123,274 progress notes documented by 42 attending providers. These notes were for patients with the 5 most common primary International Statistical Classification of Diseases and Related Health Problems, version 10, parent codes for each provider. For the manual review, 120 notes from 8 providers were randomly sampled. Main outcome measures were the number of characters or words in each note, categorized by attribution source, author type, and time of creation. RESULTS: Imported text entries made up the majority of text in new and return patient notes, at 2,978 characters (77%) and 3,612 characters (91%), respectively. Support staff members authored substantial portions of notes: 3,024 characters (68%) of new patient notes and 3,953 characters (83%) of return patient notes. Finally, providers completed large amounts of documentation after clinical visits: 135 words (35%) of new patient notes and 102 words (27%) of return patient notes. CONCLUSIONS: EHR documentation consists largely of imported text, is often authored by support staff, and is often written after the end of a visit. These findings raise questions about documentation accuracy and utility and may have implications for quality of care and patient-provider relationships.
Subjects
Documentation/standards , Electronic Health Records/standards , Medical Records/standards , Ophthalmology/standards , Academic Medical Centers , Data Accuracy , Humans , Oregon , Outpatients , Practice Patterns, Physicians' , Retrospective Studies
ABSTRACT
Patient perceptions of wait time during outpatient office visits can affect patient satisfaction. Providing accurate information about wait times could improve patient satisfaction by reducing uncertainty. However, little is known about efficient ways to predict wait times in the clinic. Supervised machine learning is a powerful tool for predictive modeling with large and complicated data sets. In this study, we tested machine learning models to predict wait times based on secondary EHR data from a pediatric ophthalmology outpatient clinic. We compared several machine learning algorithms, including random forest, elastic net, gradient boosting machine, support vector machine, and multiple linear regression, to find the most accurate model for prediction. The importance of the predictors was also identified via the machine learning models. In the future, these models could be combined with real-time EHR data to provide accurate real-time estimates of patient wait times in outpatient clinics.
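The simplest baseline in such a comparison, linear regression, can be sketched directly. This toy (not the study's fitted models) predicts wait time from queue length at check-in with one-variable least squares and checks held-out error against a predict-the-mean baseline; the feature, coefficients, and data are synthetic.

```python
import random

random.seed(5)

def fit_ols(xs, ys):
    """Intercept and slope of simple least-squares regression."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

queue = [random.randint(0, 10) for _ in range(400)]          # patients ahead
wait = [5 + 4 * q + random.gauss(0, 6) for q in queue]       # wait in minutes

train_q, test_q = queue[:300], queue[300:]
train_w, test_w = wait[:300], wait[300:]

a, b = fit_ols(train_q, train_w)
mse_model = sum((a + b * q - w) ** 2
                for q, w in zip(test_q, test_w)) / len(test_q)
mean_w = sum(train_w) / len(train_w)
mse_base = sum((mean_w - w) ** 2 for w in test_w) / len(test_w)
```

The fitted slope recovers roughly the true 4 minutes per queued patient, and the model's held-out error beats the constant-mean baseline, which is the kind of comparison the study runs across its candidate algorithms.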
Subjects
Ambulatory Care Facilities , Machine Learning , Ophthalmology , Ambulatory Care Facilities/organization & administration , Humans , Linear Models , Statistical Models , Patient Satisfaction , Pediatrics , ROC Curve , Supervised Machine Learning , Time Factors , Time-to-Treatment
ABSTRACT
Electronic Health Records (EHRs) are widely used in the United States for clinical care and billing activities. Their widespread adoption has raised a variety of concerns about their effects on providers and medical care. As researchers address these concerns, they will need to understand how much time providers actually spend on the EHR. This study develops and validates methods for calculating total time requirements for EHR use by ophthalmologists using secondary EHR data from audit logs. Key findings from this study are that (1) Secondary EHR data can be used to estimate lower bounds on provider EHR use, (2) Providers spend a large amount of time using the EHR, (3) Most time spent on the EHR is spent reviewing information. These findings have important implications for practicing clinicians, and for EHR system design in the future.
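One common audit-log heuristic consistent with the "lower bound" framing (assumed here, not taken from the paper): sum the gaps between consecutive EHR events, capping any gap longer than an idle threshold, so long breaks do not count as active use. The threshold and event times are invented.

```python
IDLE_CAP_SEC = 300   # treat gaps over 5 minutes as away-from-EHR time

def active_seconds(timestamps, idle_cap=IDLE_CAP_SEC):
    """Lower-bound estimate of active EHR time from event timestamps."""
    ts = sorted(timestamps)
    # Each inter-event gap contributes at most `idle_cap` seconds of activity.
    return sum(min(b - a, idle_cap) for a, b in zip(ts, ts[1:]))

# Events (seconds since session start): a burst, a 20-minute break, a burst.
events = [0, 30, 45, 90, 1290, 1300, 1350]
est = active_seconds(events)   # 450 seconds, vs. a 1350-second wall-clock span
```

The cap makes the estimate conservative: any activity with no logged events (reading without clicking, for instance) is missed, which is exactly why audit logs yield lower bounds on provider EHR use.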
Subjects
Electronic Health Records , Ophthalmologists , Time and Motion Studies , Academic Medical Centers , Adult , Efficiency, Organizational , Faculty, Medical , Female , Humans , Male , Middle Aged , Oregon , Time Factors , United States
ABSTRACT
Busy clinicians struggle with productivity and usability in electronic health record systems (EHRs). While previous studies have investigated documentation practices and strategies in the inpatient setting, outpatient documentation and review practices by clinicians using EHRs are relatively unknown. In this study, we look at clinicians' patterns of note review in the EHR during outpatient follow-up office visits in ophthalmology. Key findings from this study are that the number and percentage of notes reviewed is very low, there is variation between providers, specialties, and users, and staff access more notes than physicians. These findings suggest that the vast majority of content in the EHR is not being used by clinicians; improved EHR designs would better present this data and support the information needs of outpatient clinicians.