Results 1 - 20 of 45
1.
JAMA Netw Open; 4(9): e2123374, 2021 Sep 1.
Article in English | MEDLINE | ID: mdl-34468756

ABSTRACT

Importance: In the absence of a national strategy in response to the COVID-19 pandemic, many public health decisions fell to local elected officials and agencies. Outcomes of such policies depend on a complex combination of local epidemic conditions and demographic features, as well as the intensity and timing of the policies, and are therefore unclear. Objective: To use a decision analytical model of the COVID-19 epidemic to investigate potential outcomes if actual policies enacted in March 2020 (during the first wave of the epidemic) in the St Louis region of Missouri had been delayed. Design, Setting, and Participants: A previously developed, publicly available, open-source modeling platform (Local Epidemic Modeling for Management & Action, version 2.1) designed to enable localized COVID-19 epidemic projections was used. The compartmental epidemic model is programmed in R and Stan, uses bayesian inference, and accepts user-supplied demographic, epidemiologic, and policy inputs. Hospital census data for 1.3 million people from St Louis City and County from March 14, 2020, through July 15, 2020, were used to calibrate the model. Exposures: Hypothetical delays in actual social distancing policies (which began on March 13, 2020) by 1, 2, or 4 weeks. Sensitivity analyses were conducted that explored plausible spontaneous behavior change in the absence of social distancing policies. Main Outcomes and Measures: Hospitalizations and deaths. Results: A model of 1.3 million residents of the greater St Louis, Missouri, area found an initial reproductive number (indicating transmissibility of an infectious agent) of 3.9 (95% credible interval [CrI], 3.1-4.5) in the St Louis region before March 15, 2020, which fell to 0.93 (95% CrI, 0.88-0.98) after social distancing policies were implemented between March 15 and March 21, 2020. By June 15, a 1-week delay in policies would have increased cumulative hospitalizations from an observed actual number of 2246 hospitalizations to 8005 hospitalizations (75% CrI, 3973-15 236 hospitalizations) and increased deaths from an observed actual number of 482 deaths to a projected 1304 deaths (75% CrI, 656-2428 deaths). By June 15, a 2-week delay would have yielded 3292 deaths (75% CrI, 2104-4905 deaths), an additional 2810 deaths, or a 583% increase beyond what was actually observed. Sensitivity analyses incorporating a range of spontaneous behavior changes did not avert severe epidemic projections. Conclusions and Relevance: The results of this decision analytical model study suggest that, in the St Louis region, timely social distancing policies were associated with improved population health outcomes, and that small delays would likely have led to a COVID-19 epidemic similar to those in the most heavily affected areas of the US. These findings indicate that an open-source modeling platform designed to accept user-supplied local and regional data may provide projections tailored to, and more relevant for, local settings.


Subjects
COVID-19/mortality, Health Policy, Hospitalization/statistics & numerical data, Physical Distancing, Bayes Theorem, Female, Hospital Mortality/trends, Humans, Male, Missouri, Pandemics, SARS-CoV-2
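The record above projects counterfactual policy delays with a Bayesian compartmental model in R and Stan. As a rough illustration of the underlying idea only (a deterministic toy SEIR in Python, not the LEMMA platform; the population size and reproduction-number point estimates come from the abstract, while the incubation and infectious periods, seeding, and policy day are assumptions), shifting the day on which transmission drops shows how sensitive cumulative burden is to timing:

```python
import numpy as np

def seir_cumulative_infections(beta_pre, beta_post, policy_day, days=120,
                               n=1_300_000, incubation=4.0, infectious=7.0,
                               seed=100):
    """Toy deterministic SEIR; the transmission rate drops from beta_pre to
    beta_post on policy_day. Returns cumulative infections over the horizon."""
    s, e, i = n - seed, 0.0, float(seed)
    cumulative = i
    for day in range(days):
        beta = beta_pre if day < policy_day else beta_post
        new_exposed = beta * s * i / n
        new_infectious = e / incubation
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - i / infectious
        cumulative += new_infectious
    return cumulative

# R0 of 3.9 falling to 0.93 (point estimates from the abstract), expressed as
# beta = R0 / infectious_period; each delay shifts policy_day by a week.
for delay in (0, 7, 14):
    total = seir_cumulative_infections(3.9 / 7.0, 0.93 / 7.0,
                                       policy_day=14 + delay)
    print(f"delay {delay:2d} days -> cumulative infections ~{total:,.0f}")
```
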
2.
J Patient Exp; 8: 23743735211034064, 2021.
Article in English | MEDLINE | ID: mdl-34423122

ABSTRACT

Transitioning from one electronic health record (EHR) system to another is one of the most disruptive events in health care, and research about its impact on the inpatient experience is limited. This study aimed to assess the impact of an EHR transition on patient experience as measured by the Hospital Consumer Assessment of Healthcare Providers and Systems composites and global items. An interrupted time series study was conducted to evaluate quarter-specific changes in patient experience following implementation of a new EHR at a Midwest health care system during 2017 to 2018. The first quarter post-implementation was associated with statistically significant decreases in Communication with Nurses (-1.82; 95% CI, -3.22 to -0.43; P = .0101), Responsiveness of Hospital Staff (-2.73; 95% CI, -4.90 to -0.57; P = .0131), Care Transition (-2.01; 95% CI, -3.96 to -0.07; P = .0426), and Recommend the Hospital (-2.42; 95% CI, -4.36 to -0.49; P = .0142). No statistically significant changes were observed in the transition, second, or third quarters post-implementation. Patient experience scores returned to baseline after two quarters, and the impact of the EHR transition appeared to be temporary.
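Record 2 uses an interrupted time series design. A minimal segmented-regression sketch of that design in Python (the quarterly scores below are invented stand-ins; the real analysis used HCAHPS composites):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical quarterly scores around an EHR go-live after quarter 5 (made up).
df = pd.DataFrame({
    "quarter": np.arange(12),                       # time index
    "post": [0] * 6 + [1] * 6,                      # 1 after implementation
    "score": [80.1, 80.4, 79.9, 80.3, 80.2, 80.5,   # pre-period
              78.2, 79.0, 80.1, 80.3, 80.4, 80.2],  # dip, then recovery
})
df["time_since"] = np.maximum(df["quarter"] - 5, 0)  # post-period slope term

# Segmented regression: pre-trend, level shift at go-live, post-period trend.
model = smf.ols("score ~ quarter + post + time_since", data=df).fit()
print(model.summary().tables[1])
```
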

3.
PLoS Comput Biol; 17(7): e1009053, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34228716

ABSTRACT

Drug-drug interactions account for up to 30% of adverse drug reactions. The increasing prevalence of electronic health records (EHRs) offers a unique opportunity to build machine learning algorithms that identify the drug-drug interactions driving adverse events. In this study, we investigated hospitalization data to study interactions of drugs with non-steroidal anti-inflammatory drugs (NSAIDs) that result in drug-induced liver injury (DILI). We propose a logistic regression-based machine learning algorithm that unearths several known interactions from an EHR dataset of about 400,000 hospitalizations. Our proposed modeling framework detects 87.5% of the positive controls, which are defined by drugs known to interact with diclofenac causing an increased risk of DILI, and correctly ranks aggregate DILI risk for eight commonly prescribed NSAIDs. We found that our modeling framework is particularly successful in inferring associations of drug-drug interactions from relatively small EHR datasets. Furthermore, we have identified a novel and potentially hepatotoxic interaction that might occur during concomitant use of meloxicam and esomeprazole, which are commonly prescribed together to allay NSAID-induced gastrointestinal (GI) bleeding. Empirically, we validate our approach against prior methods for signal detection on EHR datasets, and our proposed approach outperforms all compared methods across most metrics, such as area under the receiver operating characteristic curve (AUROC) and area under the precision-recall curve (AUPRC).


Subjects
Anti-Inflammatory Agents, Non-Steroidal/adverse effects, Chemical and Drug Induced Liver Injury, Drug Interactions, Electronic Health Records/statistics & numerical data, Machine Learning, Adolescent, Adult, Aged, Aged, 80 and over, Algorithms, Chemical and Drug Induced Liver Injury/epidemiology, Chemical and Drug Induced Liver Injury/etiology, Computational Biology, Female, Humans, Liver/drug effects, Male, Middle Aged, Models, Statistical, Retrospective Studies, Young Adult
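Record 3 detects drug-drug interaction signals with logistic regression over EHR data. A minimal sketch of the core idea, under the assumption that an interaction shows up as a positive product term in a logistic model (all data synthetic; `ppi` is a hypothetical stand-in for a co-prescribed drug such as a proton pump inhibitor):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic admissions table (made up): exposure flags and a DILI outcome
# generated with a built-in interaction so the signal is recoverable.
rng = np.random.default_rng(0)
n = 50_000
nsaid = rng.binomial(1, 0.20, n)
ppi = rng.binomial(1, 0.15, n)
logit = -6 + 0.8 * nsaid + 0.3 * ppi + 1.0 * nsaid * ppi
dili = rng.binomial(1, 1 / (1 + np.exp(-logit)))
df = pd.DataFrame({"nsaid": nsaid, "ppi": ppi, "dili": dili})

# A positive, significant interaction coefficient flags a candidate
# drug-drug interaction signal for the pair.
fit = smf.logit("dili ~ nsaid * ppi", data=df).fit(disp=0)
print(fit.params["nsaid:ppi"], fit.pvalues["nsaid:ppi"])
```
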
4.
Am J Epidemiol; 190(4): 539-552, 2021 Apr 6.
Article in English | MEDLINE | ID: mdl-33351077

ABSTRACT

There are limited data on longitudinal outcomes for coronavirus disease 2019 (COVID-19) hospitalizations that account for transitions between clinical states over time. Using electronic health record data from a hospital network in the St. Louis, Missouri, region, we performed multistate analyses to examine longitudinal transitions and outcomes among hospitalized adults with laboratory-confirmed COVID-19 with respect to 15 mutually exclusive clinical states. Between March 15 and July 25, 2020, a total of 1,577 patients in the network were hospitalized with COVID-19 (49.9% male; median age, 63 years (interquartile range, 50-75); 58.8% Black). Overall, 34.1% (95% confidence interval (CI): 26.4, 41.8) had an intensive care unit admission and 12.3% (95% CI: 8.5, 16.1) received invasive mechanical ventilation (IMV). The risk of decompensation peaked immediately after admission; discharges peaked around days 3-5, and deaths plateaued between days 7 and 16. At 28 days, 12.6% (95% CI: 9.6, 15.6) of patients had died (4.2% (95% CI: 3.2, 5.2) had received IMV) and 80.8% (95% CI: 75.4, 86.1) had been discharged. Among those receiving IMV, 35.1% (95% CI: 28.2, 42.0) remained intubated after 14 days; after 28 days, 37.6% (95% CI: 30.4, 44.7) had died and only 37.7% (95% CI: 30.6, 44.7) had been discharged. Multistate methods offer granular characterizations of the clinical course of COVID-19 and provide essential information for guiding both clinical decision-making and public health planning.


Subjects
COVID-19/epidemiology, Hospitalization/trends, Intensive Care Units/statistics & numerical data, Pandemics, Respiration, Artificial/methods, SARS-CoV-2, Aged, COVID-19/therapy, Female, Humans, Male, Middle Aged, Retrospective Studies, United States/epidemiology
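Record 4 applies multistate methods to daily clinical states. A toy sketch of the simplest building block, state-occupancy curves over time (four invented states instead of the paper's fifteen, and hand-made data rather than the study cohort):

```python
import pandas as pd

# Hypothetical per-patient daily clinical states (made up).
states = ["ward", "icu", "discharged", "dead"]
records = pd.DataFrame({
    "patient": [1, 1, 1, 2, 2, 3, 3, 3, 3],
    "day":     [0, 1, 2, 0, 1, 0, 1, 2, 3],
    "state":   ["ward", "icu", "dead",
                "ward", "discharged",
                "ward", "ward", "icu", "discharged"],
})

# State-occupancy curve: fraction of the cohort in each state on each day,
# carrying absorbing states (discharged/dead) forward once reached.
wide = records.pivot(index="patient", columns="day", values="state").ffill(axis=1)
occupancy = wide.apply(lambda col: col.value_counts(normalize=True))
print(occupancy.reindex(states).fillna(0))
```
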
5.
Clin Infect Dis; 73(2): 213-222, 2021 Jul 15.
Article in English | MEDLINE | ID: mdl-32421195

ABSTRACT

BACKGROUND: Quantifying the amount and diversity of antibiotic use in United States hospitals assists antibiotic stewardship efforts but is hampered by limited national surveillance. Our study aimed to address this knowledge gap by examining adult antibiotic use across 576 hospitals and nearly 12 million encounters in 2016-2017. METHODS: We conducted a retrospective study of patients aged ≥18 years discharged from hospitals in the Premier Healthcare Database between 1 January 2016 and 31 December 2017. Using daily antibiotic charge data, we mapped antibiotics to mutually exclusive classes and to spectrum-of-activity categories. We evaluated relationships between facility and case-mix characteristics and antibiotic use in negative binomial regression models. RESULTS: The study included 11 701 326 admissions, totaling 64 064 632 patient-days, across 576 hospitals. Overall, patients received antibiotics in 65% of hospitalizations, at a crude rate of 870 days of therapy (DOT) per 1000 patient-days. By class, use was highest among β-lactam/β-lactamase inhibitor combinations, third- and fourth-generation cephalosporins, and glycopeptides. Teaching hospitals averaged lower rates of total antibiotic use than nonteaching hospitals (834 vs 957 DOT per 1000 patient-days; P < .001). In adjusted models, teaching hospitals remained associated with lower use of third- and fourth-generation cephalosporins and antipseudomonal agents (adjusted incidence rate ratio [95% confidence interval], 0.92 [.86-.97] and 0.91 [.85-.98], respectively). Significant regional differences in total and class-specific antibiotic use also persisted in adjusted models. CONCLUSIONS: Adult inpatient antibiotic use remains high, driven predominantly by broad-spectrum agents. A better understanding of the reasons for interhospital usage differences, including by region and teaching status, may inform efforts to reduce inappropriate antibiotic prescribing.


Subjects
Anti-Bacterial Agents, Antimicrobial Stewardship, Adult, Anti-Bacterial Agents/therapeutic use, Hospitals, Humans, Patient Discharge, Retrospective Studies, United States
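Record 5 reports crude antibiotic use as days of therapy (DOT) per 1,000 patient-days. That rate is straightforward to compute from encounter-level data; a small sketch with invented numbers:

```python
import pandas as pd

# Toy encounter-level data (made up): antibiotic days of therapy (DOT)
# and length of stay in days for a handful of admissions.
enc = pd.DataFrame({
    "hospital": ["A", "A", "A", "B", "B"],
    "dot":      [6, 0, 10, 3, 0],
    "los_days": [4, 2, 12, 5, 3],
})

# Crude utilization rate: total DOT per 1,000 patient-days, by hospital.
g = enc.groupby("hospital").sum()
rate = 1000 * g["dot"] / g["los_days"]
print(rate)  # hospital A: 1000 * 16 / 18 ≈ 889 DOT per 1,000 patient-days
```
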
6.
Learn Health Syst; 5(1): e10235, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32838037

ABSTRACT

Problem: The current coronavirus disease 2019 (COVID-19) pandemic underscores the need to build and sustain public health data infrastructure that supports a rapid local, regional, national, and international response. Despite a historical context of public health crises, data sharing agreements and transactional standards do not uniformly exist between institutions, which hampers the foundational infrastructure needed to meet data sharing and integration demands for the advancement of public health. Approach: There is a growing need to pair population health knowledge with technological solutions for data transfer, integration, and reasoning in order to improve health in a broader learning health system ecosystem. To achieve this, data must be combined from healthcare provider organizations, public health departments, and other settings. Public health entities are in a unique position to consume these data; however, most do not yet have the infrastructure required to integrate data sources and apply computable knowledge to combat this pandemic. Outcomes: Herein, we describe lessons learned and a framework to address these needs, focusing on: (a) identifying and filling technology "gaps"; (b) pursuing collaborative design of data sharing requirements and transmission mechanisms; (c) facilitating cross-domain discussions involving legal and research compliance; and (d) establishing or participating in multi-institutional convening or coordinating activities. Next steps: While by no means a comprehensive evaluation of such issues, we envision that many of our experiences are universal. We hope those elucidated here can serve as the catalyst for a robust community-wide dialogue on what steps can and should be taken to ensure that our regional and national health care systems can truly learn, rapidly, so as to respond to this and future emergent public health crises.

7.
Am J Infect Control; 49(5): 646-648, 2021 May.
Article in English | MEDLINE | ID: mdl-32860846

ABSTRACT

Ultraviolet light (UVL) room disinfection has emerged as an adjunct to manual cleaning of patient rooms. Two different no-touch UVL devices were implemented in 3 health system hospitals to reduce Clostridioides difficile infections (CDI). CDI rates at all 3 facilities remained unchanged following implementation of UVL disinfection. Preintervention CDI rates were generally low, and data from one hospital showed high compliance with manual cleaning, which may have limited the impact of UVL disinfection.


Subjects
Clostridioides difficile, Clostridium Infections, Cross Infection, Clostridioides, Clostridium Infections/prevention & control, Cross Infection/prevention & control, Disinfection, Humans, Ultraviolet Rays
8.
IEEE J Biomed Health Inform; 25(6): 2204-2214, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33095721

ABSTRACT

Machine learning, combined with a proliferation of electronic healthcare records (EHR), has the potential to transform medicine by identifying previously unknown interventions that reduce the risk of adverse outcomes. To realize this potential, machine learning must leave the conceptual 'black box' in complex domains and overcome several pitfalls, such as the presence of confounding variables. These variables predict outcomes but are not causal, often yielding uninformative models. In this work, we envision a 'conversational' approach to designing machine learning models, which couples modeling decisions to domain expertise. We demonstrate this approach via a retrospective cohort study to identify factors that affect the risk of hospital-acquired venous thromboembolism (HA-VTE). Using logistic regression for modeling, we identified drugs that reduce the risk of HA-VTE. Our analysis reveals that ondansetron, an anti-nausea and anti-emetic medication commonly used to treat side effects of chemotherapy and in the period after general anesthesia, substantially reduces the risk of HA-VTE compared with aspirin (11% vs. 15% relative risk reduction [RRR], respectively). The low cost and low morbidity of ondansetron may justify further inquiry into its use as a preventative agent for HA-VTE. This case study highlights the importance of engaging domain expertise when applying machine learning in complex domains.


Subjects
Venous Thromboembolism, Hospitals, Humans, Machine Learning, Ondansetron/therapeutic use, Retrospective Studies, Risk Factors, Venous Thromboembolism/epidemiology, Venous Thromboembolism/prevention & control
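Record 8 compares relative risk reductions (RRR) between candidate prophylactic agents. A tiny sketch of the RRR arithmetic itself (all counts invented, not the study's data):

```python
# Relative risk reduction: 1 - (risk in exposed group / risk in comparator).
def relative_risk_reduction(events_treated, n_treated, events_control, n_control):
    risk_treated = events_treated / n_treated
    risk_control = events_control / n_control
    return 1 - risk_treated / risk_control

# Compare each agent's VTE risk against an unexposed baseline cohort
# (hypothetical counts; "drug A"/"drug B" are placeholders).
baseline = (300, 10_000)
print(f"drug A RRR: {relative_risk_reduction(160, 10_000, *baseline):.0%}")
print(f"drug B RRR: {relative_risk_reduction(255, 10_000, *baseline):.0%}")
```
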
9.
Clin Infect Dis; 73(11): e4484-e4492, 2021 Dec 6.
Article in English | MEDLINE | ID: mdl-32756970

ABSTRACT

BACKGROUND: The Centers for Disease Control and Prevention (CDC) uses standardized antimicrobial administration ratios (SAARs), that is, observed-to-predicted ratios, to compare antibiotic use across facilities. CDC models adjust for facility characteristics when predicting antibiotic use but do not include patient diagnoses and comorbidities that may also affect utilization. This study aimed to identify comorbidities causally related to appropriate antibiotic use and to compare models that include these comorbidities and other patient-level claims variables against a facility model for risk-adjusting inpatient antibiotic utilization. METHODS: The study included adults discharged from Premier Database hospitals in 2016-2017. For each admission, we extracted facility, claims, and antibiotic data. We evaluated 7 models to predict an admission's antibiotic days of therapy (DOT): a CDC facility model, models that added patient clinical constructs in varying layers of complexity, and an external validation of a published patient-variable model. We calculated hospital-specific SAARs to quantify effects on hospital rankings. Separately, we used Delphi consensus methodology to identify Elixhauser comorbidities associated with appropriate antibiotic use. RESULTS: The study included 11 701 326 admissions across 576 hospitals. Compared with the CDC facility model, a model that added Delphi-selected comorbidities and a bacterial infection indicator was more accurate for all antibiotic outcomes. For total antibiotic use, it was 24% more accurate (respective mean absolute errors: 3.11 vs 2.35 DOT), resulting in 31-33% more hospitals moving into bottom or top usage quartiles postadjustment. CONCLUSIONS: Adding electronically available patient claims data to facility models consistently improved antibiotic utilization predictions and yielded substantial movement in hospitals' utilization rankings.


Subjects
Anti-Bacterial Agents, Hospitals, Adult, Anti-Bacterial Agents/therapeutic use, Centers for Disease Control and Prevention, U.S., Comorbidity, Humans, Inpatients, United States/epidemiology
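Record 9 ranks hospitals by SAARs, i.e., observed-to-predicted ratios, and compares candidate models by mean absolute error. A minimal sketch of both quantities, assuming admission-level predicted DOT comes from some upstream risk model (all numbers invented):

```python
import pandas as pd

# SAAR-style summary (illustrative only): observed antibiotic days of
# therapy (DOT) divided by a model's predicted DOT, per hospital.
adm = pd.DataFrame({
    "hospital":      ["A"] * 3 + ["B"] * 3,
    "observed_dot":  [8, 0, 12, 5, 9, 2],
    "predicted_dot": [6.0, 1.5, 9.0, 6.0, 7.5, 3.0],  # from any risk model
})
g = adm.groupby("hospital").sum()
g["saar"] = g["observed_dot"] / g["predicted_dot"]  # >1 means above expected
print(g["saar"])

# Mean absolute error on admission-level predictions, the accuracy metric
# the abstract uses to compare candidate models.
mae = (adm["observed_dot"] - adm["predicted_dot"]).abs().mean()
print(f"MAE = {mae:.2f} DOT")
```
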
10.
J Am Med Inform Assoc; 27(7): 1142-1146, 2020 Jul 1.
Article in English | MEDLINE | ID: mdl-32333757

ABSTRACT

Data and information technology are key to every aspect of our response to the current coronavirus disease 2019 (COVID-19) pandemic, including the diagnosis of patients and delivery of care, the development of predictive models of disease spread, and the management of personnel and equipment. The increasing engagement of informaticians at the forefront of these efforts has marked a fundamental shift, from an academic to an operational role. However, the past history of informatics as a scientific domain and an area of applied practice provides little guidance or precedent for the incredible challenges we are now tasked with meeting. Building on our recent experiences, we present 4 critical lessons learned that have helped shape our scalable, data-driven response to COVID-19. We describe each of these lessons within the context of the specific solutions and strategies we applied in addressing the challenges that we faced.


Subjects
Betacoronavirus, Coronavirus Infections/epidemiology, Electronic Health Records, Medical Informatics, Pandemics, Pneumonia, Viral/epidemiology, COVID-19, Datasets as Topic, Humans, SARS-CoV-2
11.
Jt Comm J Qual Patient Saf; 45(7): 480-486, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31133536

ABSTRACT

Medical errors are a significant source of morbidity and mortality, and while focused efforts to prevent harm have been made, sustaining reductions across multiple categories of patient harm remains a challenge. In 2008, BJC HealthCare initiated a systemwide program to eliminate all major causes of preventable harm and mortality over a five-year period, with a goal of sustaining these reductions over the subsequent five years. METHODS: Areas of focus included pressure ulcers, adverse drug events, falls with injury, health care-associated infections, and venous thromboembolism. Initial efforts involved building system-level multidisciplinary teams, utilizing standardized project management methods, and establishing standard surveillance methods. Evidence-based interventions were deployed across the system; core standards were established while allowing for flexibility in local implementation. Improvements were tracked using actual numbers of events rather than rates to increase meaning and interpretability for patients and frontline staff. RESULTS: Over the course of the five-year intervention period, total harm events were reduced by 51.6% (10,371 events in 2009 to 5,018 events in 2012). Continued improvement efforts over the subsequent five years led to additional harm reduction (2,605 events in 2017; a 74.9% reduction since 2009). CONCLUSION: A combination of project management discipline, rigorous surveillance, and focused interventions, along with system-level support of local hospital improvement efforts, led to dramatic reductions in preventable harm and long-term sustainment of progress.


Subjects
Iatrogenic Disease/prevention & control, Quality Improvement/organization & administration, Accidental Falls/prevention & control, Cross Infection/prevention & control, Drug-Related Side Effects and Adverse Reactions/prevention & control, Electronic Health Records/standards, Humans, Medical Errors/prevention & control, Patient Safety, Pressure Ulcer/prevention & control, Quality Improvement/standards, Venous Thromboembolism/prevention & control
12.
Infect Control Hosp Epidemiol; 38(9): 1019-1024, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28669363

ABSTRACT

BACKGROUND: Risk adjustment is needed to fairly compare central-line-associated bloodstream infection (CLABSI) rates between hospitals. Until 2017, the Centers for Disease Control and Prevention (CDC) methodology adjusted CLABSI rates only by type of intensive care unit (ICU). The 2017 CDC models also adjust for hospital size and medical school affiliation. We hypothesized that risk adjustment would be improved by including patient demographics and comorbidities from electronically available hospital discharge codes. METHODS: Using a cohort design across 22 hospitals, we analyzed data from ICU patients admitted between January 2012 and December 2013. Demographics and International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) discharge codes were obtained for each patient, and CLABSIs were identified by trained infection preventionists. Models adjusting only for ICU type and for ICU type plus patient case mix were built and compared using discrimination and the standardized infection ratio (SIR). Hospitals were ranked by SIR for each model to examine and compare changes in rank. RESULTS: Overall, 85,849 ICU patients were analyzed, and 162 (0.2%) developed a CLABSI. The significant variables added to the ICU model were coagulopathy, paralysis, renal failure, malnutrition, and age. The C statistics were 0.55 (95% CI, 0.51-0.59) for the ICU-type model and 0.64 (95% CI, 0.60-0.69) for the ICU-type plus patient case-mix model. When hospitals were ranked by adjusted SIRs, 10 hospitals (45%) changed rank when comorbidity was added to the ICU-type model. CONCLUSIONS: Our risk-adjustment model for CLABSI using electronically available comorbidities demonstrated better discrimination than the CDC model. The CDC should strongly consider comorbidity-based risk adjustment to more accurately compare CLABSI rates across hospitals.


Subjects
Catheter-Related Infections/epidemiology, Central Venous Catheterization/adverse effects, Comorbidity, Cross Infection/epidemiology, Cross Infection/etiology, Risk Adjustment/methods, Age Factors, Centers for Disease Control and Prevention, U.S., Cross Infection/ethnology, Equipment Contamination, Hospitals/statistics & numerical data, Humans, Intensive Care Units, Proportional Hazards Models, Retrospective Studies, United States
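Record 12 compares the discrimination (C statistic) of an ICU-type-only model against one that adds discharge-code comorbidities, then ranks hospitals by standardized infection ratio (SIR). A synthetic-data sketch of that workflow (coefficients, prevalences, and the "hospital" split are all invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic ICU admissions (invented): unit type plus two comorbidity flags.
rng = np.random.default_rng(1)
n = 20_000
X = np.column_stack([
    rng.binomial(1, 0.5, n),   # ICU type (e.g., surgical vs medical)
    rng.binomial(1, 0.1, n),   # coagulopathy
    rng.binomial(1, 0.2, n),   # renal failure
])
logit = -6.5 + 0.3 * X[:, 0] + 1.2 * X[:, 1] + 0.8 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Discrimination (C statistic) of ICU-type-only vs ICU type plus case mix.
for cols, label in [([0], "ICU type only"), ([0, 1, 2], "plus case mix")]:
    fit = LogisticRegression().fit(X[:, cols], y)
    auc = roc_auc_score(y, fit.predict_proba(X[:, cols])[:, 1])
    print(f"{label}: C = {auc:.2f}")

# SIR for one "hospital" (first 5,000 admissions): observed infections
# divided by the full model's expected count.
full = LogisticRegression().fit(X, y)
expected = full.predict_proba(X[:5000])[:, 1].sum()
print(f"SIR = {y[:5000].sum() / expected:.2f}")
```
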
13.
Clin Infect Dis; 65(5): 803-810, 2017 Sep 1.
Article in English | MEDLINE | ID: mdl-28481976

ABSTRACT

BACKGROUND: Healthcare-associated infections such as surgical site infections (SSIs) are used by the Centers for Medicare and Medicaid Services (CMS) as pay-for-performance metrics. Risk adjustment allows a fairer comparison of SSI rates across hospitals. Until 2016, Centers for Disease Control and Prevention (CDC) risk adjustment models for pay-for-performance SSI did not adjust for patient comorbidities. The new 2016 CDC models adjust only for body mass index and diabetes. METHODS: We performed a multicenter retrospective cohort study of patients undergoing surgical procedures at 28 US hospitals. Demographic data and International Classification of Diseases, Ninth Revision codes were obtained on patients undergoing colectomy, hysterectomy, and knee and hip replacement procedures. Complex SSIs were identified by infection preventionists at each hospital using CDC criteria. Model performance was evaluated using measures of discrimination and calibration. Hospitals were ranked by SSI proportion and by risk-adjusted standardized infection ratios (SIRs) to assess the impact of comorbidity adjustment on public reporting. RESULTS: Of 45,394 patients at 28 hospitals, 573 (1.3%) developed a complex SSI. A model containing procedure type, age, race, smoking, diabetes, liver disease, obesity, renal failure, and malnutrition showed good discrimination (C statistic, 0.73) and calibration. When comparing hospital rankings by crude proportion to risk-adjusted ranks, 24 of 28 (86%) hospitals changed ranks, 16 (57%) changed by ≥2 ranks, and 4 (14%) changed by >10 ranks. CONCLUSIONS: We developed a well-performing risk adjustment model for SSI using electronically available comorbidities. Comorbidity-based risk adjustment should be strongly considered by the CDC and CMS to adequately compare SSI rates across hospitals.


Subjects
Surgical Wound Infection/epidemiology, Adult, Aged, Comorbidity, Female, Humans, Male, Middle Aged, Retrospective Studies, Risk Adjustment, Risk Factors, United States/epidemiology
14.
Infect Control Hosp Epidemiol; 38(4): 449-454, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28031061

ABSTRACT

OBJECTIVE: To determine which comorbid conditions are considered causally related to central-line-associated bloodstream infection (CLABSI) and surgical site infection (SSI), based on expert consensus. DESIGN: Using the Delphi method, we administered an iterative, 2-round survey to 9 infectious disease and infection control experts from the United States. METHODS: Based on our selection of components from the Charlson and Elixhauser comorbidity indices, 35 different comorbid conditions were rated from 1 (not at all related) to 5 (strongly related) by each expert, separately for CLABSI and SSI, based on perceived relatedness to the outcome. To assign expert consensus on causal relatedness for each comorbid condition, all 3 of the following criteria had to be met at the end of the second round: (1) a majority (>50%) of experts rating the condition at 3 (somewhat related) or higher, (2) an interquartile range (IQR) ≤ 1, and (3) a standard deviation (SD) ≤ 1. RESULTS: From round 1 to round 2, the IQR and SD, respectively, decreased for ratings of 21 of 35 (60%) and 33 of 35 (94%) comorbid conditions for CLABSI, and for 17 of 35 (49%) and 32 of 35 (91%) comorbid conditions for SSI, suggesting improved consensus among this group of experts. At the end of round 2, 13 of 35 (37%) and 17 of 35 (49%) comorbid conditions were perceived as causally related to CLABSI and SSI, respectively. CONCLUSIONS: Our results produced a list of comorbid conditions that should be analyzed as risk factors for, and further explored for risk adjustment of, CLABSI and SSI.


Subjects
Bacteremia/epidemiology, Catheter-Related Infections/epidemiology, Comorbidity, Cross Infection/epidemiology, Surgical Wound Infection/epidemiology, Consensus, Delphi Technique, Humans, Risk Factors, United States/epidemiology
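Record 14's consensus rule is concrete enough to encode directly: a condition reaches consensus only if a majority of experts rate it ≥3, the IQR is ≤1, and the SD is ≤1. A small sketch (the expert ratings are invented):

```python
import statistics

def consensus_reached(ratings):
    """Apply the abstract's three Delphi criteria to one condition's 1-5
    ratings: majority rating >= 3, IQR <= 1, and SD <= 1."""
    majority = sum(r >= 3 for r in ratings) / len(ratings) > 0.5
    q1, _, q3 = statistics.quantiles(ratings, n=4)
    sd = statistics.stdev(ratings)
    return majority and (q3 - q1) <= 1 and sd <= 1

# Nine hypothetical expert ratings for two comorbid conditions (made up).
print(consensus_reached([4, 4, 5, 4, 3, 4, 4, 5, 4]))  # True: tight, high
print(consensus_reached([1, 5, 2, 4, 3, 1, 5, 2, 4]))  # False: dispersed
```
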
15.
Health Care Manag Sci; 19(3): 291-299, 2016 Sep.
Article in English | MEDLINE | ID: mdl-25876516

ABSTRACT

We compare statistical approaches for predicting the likelihood that individual patients will require readmission to the hospital within 30 days of discharge and for setting quality-control standards in that regard. Logistic regression, neural networks, and decision trees are found to have comparable discriminating power when applied to cases that were not used to calibrate the respective models. Significant factors for predicting the likelihood of readmission are the patient's medical condition upon admission and discharge, the length of the hospital stay (days), the care rendered during the hospital stay, the size and role of the medical facility, the type of medical insurance, and the environment into which the patient is discharged. Separately constructed models for major medical specialties (Surgery/Gynecology, Cardiorespiratory, Cardiovascular, Neurology, and Medicine) can improve the ability to identify high-risk patients for possible intervention, while consolidated models (with indicator variables for the specialties) can serve well for assessing overall quality of care.


Subjects
Patient Readmission/statistics & numerical data, Age Factors, Aged, Decision Trees, Environment, Hospital Bed Capacity/statistics & numerical data, Humans, Insurance, Health/statistics & numerical data, Length of Stay/statistics & numerical data, Logistic Models, Medicine/statistics & numerical data, Middle Aged, Neural Networks, Computer, Patient Discharge/statistics & numerical data, Risk Assessment, Risk Factors, Severity of Illness Index
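Record 15 compares classifiers on cases not used to calibrate them. A minimal held-out comparison in the same spirit (synthetic features standing in for the admission, stay, and insurance predictors named above; not the study's data or models):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Stand-in for a readmission dataset: ~15% positive class, 12 features.
X, y = make_classification(n_samples=10_000, n_features=12, weights=[0.85],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Compare discrimination on cases not used to calibrate the models,
# mirroring the abstract's out-of-sample comparison.
for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                    ("tree", DecisionTreeClassifier(max_depth=4))]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: held-out AUROC = {auc:.2f}")
```
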
16.
Am J Med Qual; 31(5): 400-407, 2016 Sep.
Article in English | MEDLINE | ID: mdl-26038608

ABSTRACT

Clinical quality scorecards are used by health care institutions to monitor clinical performance and drive quality improvement. Because of the rapid proliferation of quality metrics in health care, BJC HealthCare found it increasingly difficult to select the most impactful scorecard metrics while still monitoring metrics for regulatory purposes. A 7-step measure selection process was implemented incorporating Kepner-Tregoe Decision Analysis, a systematic process that weighs the key criteria a decision must satisfy. The decision analysis evaluates which metrics most appropriately fulfill these criteria and identifies the risks associated with a particular metric that could threaten its implementation. Using this process, a list of 750 potential metrics was narrowed to 25 selected for scorecard inclusion. This decision analysis process created a more transparent, reproducible approach to selecting quality metrics for clinical quality scorecards.


Subjects
Decision Support Techniques, Quality Assurance, Health Care/methods, Hospitals/standards, Humans, Missouri, Quality Assurance, Health Care/organization & administration, Quality Indicators, Health Care, Quality of Health Care/organization & administration, Quality of Health Care/standards
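Record 16 applies Kepner-Tregoe decision analysis to metric selection. A bare-bones weighted-scoring sketch of that style of analysis (criteria, weights, and candidate metrics are all invented; the real process had 7 steps, 750 candidates, and a separate risk-screening pass):

```python
# Minimal weighted-scoring sketch in the spirit of Kepner-Tregoe analysis.
criteria = {"actionable": 0.4, "evidence_linked": 0.3,
            "measurable_from_ehr": 0.2, "regulatory_overlap": 0.1}

candidates = {  # 1-10 rating of each candidate metric against each criterion
    "CLABSI rate":      {"actionable": 8, "evidence_linked": 9,
                         "measurable_from_ehr": 7, "regulatory_overlap": 9},
    "door-to-doc time": {"actionable": 9, "evidence_linked": 5,
                         "measurable_from_ehr": 8, "regulatory_overlap": 3},
}

def weighted_score(ratings):
    """Weighted sum of a candidate's ratings over all criteria."""
    return sum(weight * ratings[name] for name, weight in criteria.items())

# Rank candidates by weighted score; top scorers go onto the scorecard.
for metric, ratings in sorted(candidates.items(),
                              key=lambda kv: -weighted_score(kv[1])):
    print(f"{metric}: {weighted_score(ratings):.1f}")
```
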
17.
Pediatr Infect Dis J; 34(12): 1323-1328, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26353030

ABSTRACT

BACKGROUND: Surgical site infections (SSIs) occur in approximately 700 pediatric patients annually and are associated with increased morbidity, mortality, and cost. The aim of this study was to determine risk factors for SSI among pediatric patients undergoing craniotomy and spinal fusion. METHODS: This is a retrospective case-control study. Cases were craniotomy or spinal fusion patients with SSI, as defined by Centers for Disease Control and Prevention criteria, with surgery performed from January 1, 2008 to July 31, 2009. For each case patient, 3 uninfected controls were randomly selected among patients who underwent the same procedure as the case patient within 1 month. We performed analyses of risk factors for craniotomy and spinal fusion SSI separately and as a combined outcome variable. RESULTS: Underweight body mass index, increased time at lowest body temperature, increased interval to antibiotic redosing, the combination of vancomycin and cefazolin for prophylaxis, longer preoperative and postoperative intensive care unit stays, and anticoagulant use at 2 weeks postoperatively were associated with an increased risk of SSI in the combined analysis of craniotomy and spinal fusion. Forty-seven percent of cases and 27% of controls received preoperative antibiotic doses that were inappropriately low for their weight. CONCLUSIONS: We identified modifiable risk factors for SSI, including antibiotic dosing and body temperature during surgery. Preoperative antibiotic administration is likely to benefit from standardized processes. Further risk-benefit studies of prolonged low body temperature during procedures are needed to determine the optimal balance between neuroprotection and the potential immunosuppression associated with low body temperature.


Subjects
Craniotomy/adverse effects, Spinal Fusion/adverse effects, Surgical Wound Infection/epidemiology, Analysis of Variance, Child, Female, Humans, Male, Risk Factors
18.
Infect Control Hosp Epidemiol; 36(12): 1396-1400, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26329691

ABSTRACT

OBJECTIVE: To increase the reliability of the algorithm used in our fully automated electronic surveillance system by adding rules to better identify bloodstream infections secondary to other hospital-acquired infections. METHODS: Intensive care unit (ICU) patients with positive blood cultures were reviewed. Central line-associated bloodstream infection (CLABSI) determinations were based on 2 sources: routine surveillance by infection preventionists and fully automated surveillance. Discrepancies between the 2 sources were evaluated to determine root causes. Secondary infection sites were identified in most discrepant cases. New rules to identify secondary sites were added to the algorithm and applied to this ICU population and a non-ICU population. Sensitivity, specificity, predictive values, and kappa were calculated for the new models. RESULTS: Of 643 positive ICU blood cultures reviewed, 68 (10.6%) were identified as central line-associated bloodstream infections by fully automated electronic surveillance, whereas 38 (5.9%) were confirmed by routine surveillance. New rules were tested that identify organisms as central line-associated bloodstream infections only if they did not meet one, or a combination, of the following criteria: (I) matching organisms (by genus and species) cultured from any other site; (II) any organisms cultured from a sterile site; (III) any organisms cultured from skin/wound; (IV) any organisms cultured from the respiratory tract. The best-fit model included new rules I and II when applied to positive blood cultures in an ICU population. However, these rules did not improve the performance of the algorithm when applied to positive blood cultures in a non-ICU population. CONCLUSION: Electronic surveillance system algorithms may need adjustment for specific populations.


Subjects
Catheter-Related Infections/prevention & control, Cross Infection, Infection Control/methods, Medical Informatics Applications, Sentinel Surveillance, Sepsis/diagnosis, Algorithms, Bacteremia/diagnosis, Bacteremia/prevention & control, Central Venous Catheterization/adverse effects, Cross Infection/blood, Cross Infection/diagnosis, Cross Infection/microbiology, Cross Infection/prevention & control, Databases, Factual, Hospitals, Humans, Illinois, Intensive Care Units, Missouri, Reproducibility of Results, Sepsis/microbiology, Sepsis/prevention & control
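Record 18's rules I-IV are naturally expressed as a filter over an admission's other culture results. A sketch of rules I and II under an assumed data model (site/organism pairs; the sterile-site list is a placeholder, not the study's definition):

```python
# Flag a positive blood culture as a candidate CLABSI only when no
# secondary source explains it; rules I and II from the abstract are shown.
STERILE_SITES = {"csf", "pleural fluid", "joint fluid"}  # placeholder list

def candidate_clabsi(blood_organism, other_cultures):
    """other_cultures: list of (site, organism) from the same admission."""
    for site, organism in other_cultures:
        if organism == blood_organism:   # rule I: matching organism
            return False                 #   cultured from any other site
        if site in STERILE_SITES:        # rule II: any organism cultured
            return False                 #   from a sterile site
    return True

print(candidate_clabsi("S. aureus", [("wound", "S. aureus")]))  # False
print(candidate_clabsi("E. coli", [("csf", "S. pneumoniae")]))  # False
print(candidate_clabsi("E. coli", []))                          # True
```
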
19.
Infect Control Hosp Epidemiol; 35(12): 1483-1490, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25419770

ABSTRACT

OBJECTIVE: Central line-associated bloodstream infection (BSI) rates are a key quality metric for comparing hospital quality and safety. Traditional BSI surveillance may be limited by interrater variability. We assessed whether a computer-automated method of central line-associated BSI detection can improve the validity of surveillance. DESIGN: Retrospective cohort study. SETTING: Eight medical and surgical intensive care units (ICUs) in 4 academic medical centers. METHODS: Traditional surveillance (by hospital staff) and computer algorithm surveillance were each compared against a retrospective audit review using a random sample of blood culture episodes during the period 2004-2007 from which an organism was recovered. Episode-level agreement with audit review was measured with κ statistics, and differences were assessed using the test of equal κ coefficients. Linear regression was used to assess the relationship between surveillance performance (κ) and surveillance-reported BSI rates (BSIs per 1,000 central line-days). RESULTS: We evaluated 664 blood culture episodes. Agreement with audit review was significantly lower for traditional surveillance (κ [95% confidence interval (CI)] = 0.44 [0.37-0.51]) than for computer algorithm surveillance (κ = 0.58; P = .001). Agreement between traditional surveillance and audit review was heterogeneous across ICUs (P = .01); furthermore, traditional surveillance performed worse among ICUs reporting lower (better) BSI rates (P = .001). In contrast, computer algorithm performance was consistent across ICUs and across the range of computer-reported central line-associated BSI rates. CONCLUSIONS: Compared with traditional surveillance of bloodstream infections, computer-automated surveillance improves accuracy and reliability, making interfacility performance comparisons more valid.


Subjects
Bacteremia, Catheter-Related Infections, Cross Infection, Hospital Information Systems, Infection Control/standards, Algorithms, Bacteremia/diagnosis, Bacteremia/epidemiology, Bacteremia/etiology, Bacteremia/prevention & control, Catheter-Related Infections/diagnosis, Catheter-Related Infections/epidemiology, Catheter-Related Infections/prevention & control, Central Venous Catheterization/adverse effects, Cross Infection/diagnosis, Cross Infection/epidemiology, Cross Infection/prevention & control, Epidemiological Monitoring, Hospital Information Systems/organization & administration, Hospital Information Systems/standards, Humans, Intensive Care Units/standards, Intensive Care Units/statistics & numerical data, Management Audit, Quality Improvement, Reproducibility of Results, Retrospective Studies, United States/epidemiology
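Record 19 measures episode-level agreement with κ statistics. A minimal sketch of computing κ against the audit standard (labels invented, one entry per blood culture episode):

```python
from sklearn.metrics import cohen_kappa_score

# Episode-level determinations: 1 = CLABSI, 0 = not a CLABSI.
audit       = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0]
traditional = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0]
algorithm   = [1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0]

# κ corrects raw percent agreement for agreement expected by chance,
# which matters when most episodes are negatives.
print("traditional κ =", round(cohen_kappa_score(audit, traditional), 2))
print("algorithm   κ =", round(cohen_kappa_score(audit, algorithm), 2))
```
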
20.
Infect Control Hosp Epidemiol; 35(9): 1083-1091, 2014 Sep.
Article in English | MEDLINE | ID: mdl-25111915

ABSTRACT

Electronic surveillance for healthcare-associated infections (HAIs) is increasingly widespread. This is driven by multiple factors: a greater burden on hospitals to provide surveillance data to state and national agencies, financial pressures to be more efficient with HAI surveillance, the desire for more objective comparisons between healthcare facilities, and the increasing amount of patient data available electronically. Optimal implementation of electronic surveillance requires that specific information be available to the surveillance systems. This white paper reviews different approaches to electronic surveillance, discusses the specific data elements required for performing surveillance, and considers important issues of data validation.


Subjects
Cross Infection/epidemiology, Electronic Health Records, Public Health Surveillance/methods, Humans, Reproducibility of Results, United States/epidemiology