Results 1 - 20 of 21
1.
JAMA; 327(8): 760-771, 2022 02 22.
Article in English | MEDLINE | ID: mdl-35143601

ABSTRACT

Importance: Current guidelines recommend against use of intravenous alteplase in patients with acute ischemic stroke who are taking non-vitamin K antagonist oral anticoagulants (NOACs). Objective: To evaluate the safety and functional outcomes of intravenous alteplase among patients who were taking NOACs prior to stroke and compare outcomes with patients who were not taking long-term anticoagulants. Design, Setting, and Participants: A retrospective cohort study of 163 038 patients with acute ischemic stroke either taking NOACs or not taking anticoagulants prior to stroke and treated with intravenous alteplase within 4.5 hours of symptom onset at 1752 US hospitals participating in the Get With The Guidelines-Stroke program between April 2015 and March 2020, with complementary data from the Addressing Real-world Anticoagulant Management Issues in Stroke registry. Exposures: Prestroke treatment with NOACs within 7 days prior to alteplase treatment. Main Outcomes and Measures: The primary outcome was symptomatic intracranial hemorrhage occurring within 36 hours after intravenous alteplase administration. There were 4 secondary safety outcomes, including inpatient mortality, and 7 secondary functional outcomes assessed at hospital discharge, including the proportion of patients discharged home. Results: Of 163 038 patients treated with intravenous alteplase (median age, 70 [IQR, 59 to 81] years; 49.1% women), 2207 (1.4%) were taking NOACs and 160 831 (98.6%) were not taking anticoagulants prior to their stroke. Patients taking NOACs were older (median age, 75 [IQR, 64 to 82] years vs 70 [IQR, 58 to 81] years for those not taking anticoagulants), had a higher prevalence of cardiovascular comorbidities, and experienced more severe strokes (median National Institutes of Health Stroke Scale score, 10 [IQR, 5 to 17] vs 7 [IQR, 4 to 14]) (all standardized differences >10). 
The unadjusted rate of symptomatic intracranial hemorrhage was 3.7% (95% CI, 2.9% to 4.5%) for patients taking NOACs vs 3.2% (95% CI, 3.1% to 3.3%) for patients not taking anticoagulants. After adjusting for baseline clinical factors, the risk of symptomatic intracranial hemorrhage was not significantly different between groups (adjusted odds ratio [OR], 0.88 [95% CI, 0.70 to 1.10]; adjusted risk difference [RD], -0.51% [95% CI, -1.36% to 0.34%]). There were no significant differences in the secondary safety outcomes, including inpatient mortality (6.3% for patients taking NOACs vs 4.9% for patients not taking anticoagulants; adjusted OR, 0.84 [95% CI, 0.69 to 1.01]; adjusted RD, -1.20% [95% CI, -2.39% to -0%]). Of the secondary functional outcomes, 4 of 7 showed significant differences in favor of the NOAC group after adjustment, including the proportion of patients discharged home (45.9% vs 53.6% for patients not taking anticoagulants; adjusted OR, 1.17 [95% CI, 1.06 to 1.29]; adjusted RD, 3.84% [95% CI, 1.46% to 6.22%]). Conclusions and Relevance: Among patients with acute ischemic stroke treated with intravenous alteplase, use of NOACs within the preceding 7 days, compared with no use of anticoagulants, was not associated with a significantly increased risk of intracranial hemorrhage.
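As a rough illustration of the unadjusted comparison above, an odds ratio and Wald 95% CI can be computed from 2×2 counts. The counts below are approximations reconstructed from the published rates (3.7% of 2207 vs 3.2% of 160 831), not the study's exact data, and this does not reproduce the covariate-adjusted OR of 0.88:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio for a 2x2 table with a Wald 95% CI.
    a/b: events/non-events in group 1; c/d: events/non-events in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Approximate counts reconstructed from the reported percentages:
# ~82 of 2207 NOAC patients vs ~5147 of 160 831 non-anticoagulated patients.
or_, lo, hi = odds_ratio_ci(82, 2125, 5147, 155684)
```

The unadjusted interval crosses 1, consistent with the reported absence of a significant difference after adjustment.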


Subjects
Anticoagulants/therapeutic use; Fibrinolytic Agents/therapeutic use; Intracranial Hemorrhages/etiology; Ischemic Stroke/drug therapy; Tissue Plasminogen Activator/therapeutic use; Administration, Intravenous; Administration, Oral; Aged; Aged, 80 and over; Anticoagulants/adverse effects; Female; Humans; Ischemic Stroke/complications; Male; Middle Aged; Retrospective Studies; Risk Factors
2.
Circulation; 141(9): e120-e138, 2020 03 03.
Article in English | MEDLINE | ID: mdl-31992057

ABSTRACT

Each decade, the American Heart Association (AHA) develops an Impact Goal to guide its overall strategic direction and investments in its research, quality improvement, advocacy, and public health programs. Guided by the AHA's new Mission Statement, to be a relentless force for a world of longer, healthier lives, the 2030 Impact Goal is anchored in an understanding that to achieve cardiovascular health for all, the AHA must include a broader vision of health and well-being and emphasize health equity. In the next decade, by 2030, the AHA will strive to equitably increase healthy life expectancy beyond current projections, with global and local collaborators, from 66 years of age to at least 68 years of age across the United States and from 64 years of age to at least 67 years of age worldwide. The AHA commits to developing additional targets for equity and well-being to accompany this overarching Impact Goal. To attain the 2030 Impact Goal, we recommend a thoughtful evaluation of interventions available to the public, patients, providers, healthcare delivery systems, communities, policy makers, and legislators. This presidential advisory summarizes the task force's main considerations in determining the 2030 Impact Goal and the metrics to monitor progress. It describes the aspiration that these goals will be achieved by working with a diverse community of volunteers, patients, scientists, healthcare professionals, and partner organizations needed to ensure success.


Subjects
American Heart Association; Cardiovascular Diseases/epidemiology; Cardiovascular Diseases/prevention & control; Global Health; Policy Making; Population Surveillance; Preventive Health Services/standards; Aged; Cardiovascular Diseases/diagnosis; Cardiovascular Diseases/mortality; Health Status; Humans; Middle Aged; Risk Assessment; Risk Factors; Time Factors; United States/epidemiology
4.
Stroke; 52(10): e586-e589, 2021 10.
Article in English | MEDLINE | ID: mdl-34496619

ABSTRACT

Background and Purpose: Mild ischemic stroke patients enrolled in randomized controlled trials of thrombolysis may have a different symptom severity distribution than those treated in routine clinical practice. Methods: We compared the distribution of National Institutes of Health Stroke Scale (NIHSS) scores and neurological symptom severity among patients enrolled in the PRISMS (Potential of r-tPA for Ischemic Strokes With Mild Symptoms) randomized controlled trial to that among patients with NIHSS score ≤5 enrolled in the prospective MaRISS (Mild and Rapidly Improving Stroke Study) registry, using global P values from χ2 analyses. Results: Among 1736 participants in MaRISS, 972 (56%) were treated with alteplase and 764 (44%) were not. These participants were compared with 313 patients randomized in PRISMS. The median NIHSS scores were 3 (2-4) in MaRISS alteplase-treated patients, 1 (1-3) in MaRISS non-alteplase-treated patients, and 2 (1-3) in PRISMS. The percentage with an NIHSS score of 0 to 2 was 36.3%, 73.3%, and 65.2% in the 3 groups, respectively (P<0.0001). The proportion of patients with a dominant neurological syndrome (≥1 NIHSS item score of ≥2) was higher in the MaRISS alteplase-treated group (32%) than in the MaRISS non-alteplase-treated group (13.8%) and PRISMS (8.6%; P<0.0001). Conclusions: Patients randomized in PRISMS had deficit and syndromic severity comparable to patients not treated with alteplase in the MaRISS registry, and lesser severity than patients treated with alteplase in MaRISS. The PRISMS trial cohort is representative of mild stroke patients who do not receive alteplase in current broad clinical practice.
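The global P values above come from Pearson χ2 tests across the three groups. A minimal sketch of the χ2 statistic, with the NIHSS 0-2 counts reconstructed approximately from the reported percentages (353/972, 560/764, 204/313), not taken from the study data:

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for a contingency table
    given as a list of [successes, failures] rows."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat

# Approximate NIHSS 0-2 counts: MaRISS alteplase, MaRISS no alteplase, PRISMS.
table = [[353, 619], [560, 204], [204, 109]]
stat = chi_square_stat(table)  # far above 13.82, the df=2 critical value for P < .001
```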


Subjects
Ischemic Stroke/therapy; Thrombolytic Therapy/methods; Adult; Aged; Aged, 80 and over; Cohort Studies; Female; Fibrinolytic Agents/therapeutic use; Humans; Ischemic Stroke/complications; Male; Middle Aged; Nervous System Diseases/etiology; Nervous System Diseases/physiopathology; Prospective Studies; Randomized Controlled Trials as Topic; Registries; Tissue Plasminogen Activator/therapeutic use; Treatment Outcome
5.
BMC Med Inform Decis Mak; 21(1): 5, 2021 01 06.
Article in English | MEDLINE | ID: mdl-33407390

ABSTRACT

BACKGROUND: Cardiovascular disease (CVD) is the leading cause of death in the United States (US). Better cardiovascular health (CVH) is associated with CVD prevention. Predicting future CVH levels may help providers better manage patients' CVH. We hypothesized that CVH measures can be predicted from previous measurements in longitudinal electronic health record (EHR) data. METHODS: The Guideline Advantage (TGA) dataset was used, containing EHR data from 70 outpatient clinics across the US. We studied prediction of 5 CVH submetrics: smoking status (SMK), body mass index (BMI), blood pressure (BP), hemoglobin A1c (A1C), and low-density lipoprotein (LDL). We applied embedding techniques and long short-term memory (LSTM) networks to predict future CVH category levels from all previous CVH measurements of 216,445 unique patients for each CVH submetric. RESULTS: LSTM model performance was evaluated by the area under the receiver operator curve (AUROC): the micro-average AUROC was 0.99 for SMK prediction, 0.97 for BMI, 0.84 for BP, 0.91 for A1C, and 0.93 for LDL. Model performance was not improved by using all 5 submetric measures compared with using single submetric measures. CONCLUSIONS: Future CVH levels can be predicted from previous CVH measurements for each submetric, which has implications for population cardiovascular health management. Predicting patients' future CVH levels might directly improve patient CVH and thus quality of life, while indirectly decreasing the burden and cost to the health system caused by CVD and cancers.
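The AUROC figures above summarize how well predictions rank positives above negatives. A minimal AUROC computation, independent of any LSTM machinery, using the pairwise definition (the probability that a random positive outranks a random negative, ties counting one half):

```python
def auroc(labels, scores):
    """Area under the ROC curve via pairwise comparison of
    positive-example scores against negative-example scores."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy check: three of the four positive-negative pairs are ranked correctly.
example = auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # 0.75
```

Micro-averaging, as reported in the abstract, applies this computation to the pooled predictions across classes.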


Subjects
Cardiovascular Diseases; Electronic Health Records; Blood Pressure; Cardiovascular Diseases/diagnosis; Cardiovascular Diseases/epidemiology; Cross-Sectional Studies; Health Status; Humans; Quality of Life; Risk Factors; United States/epidemiology
6.
BMC Med Inform Decis Mak; 21(1): 361, 2021 12 24.
Article in English | MEDLINE | ID: mdl-34952584

ABSTRACT

BACKGROUND: Mood disorders (MDS) are a type of mental illness that affects millions of people in the United States. Early prediction of MDS can give providers greater opportunity to treat these disorders. We hypothesized that longitudinal cardiovascular health (CVH) measurements would be informative for MDS prediction. METHODS: To test this hypothesis, the American Heart Association's Guideline Advantage (TGA) dataset was used, which contained longitudinal EHR data from 70 outpatient clinics. Statistical analysis and machine learning models were employed to identify associations between MDS and longitudinal CVH metrics and other confounding factors. RESULTS: Patients diagnosed with MDS consistently had a higher proportion of poor CVH compared to patients without MDS, with the largest differences between groups for body mass index (BMI) and smoking. Race and gender were associated with the status of CVH metrics. Approximately 46% of female patients with MDS had a poor hemoglobin A1c compared to 44% of those without MDS; 62% of those with MDS had poor BMI compared to 47% of those without MDS; 59% of those with MDS had poor blood pressure (BP) compared to 43% of those without MDS; and 43% of those with MDS were current smokers compared to 17% of those without MDS. CONCLUSIONS: Poor cardiovascular health measures among women and ethnoracial minorities were associated with a higher risk of developing MDS, indicating the utility of routine medical record data collected in care for improving detection and treatment of MDS among patients with poor CVH.


Subjects
Cardiovascular Diseases; Blood Pressure; Cardiovascular Diseases/epidemiology; Cross-Sectional Studies; Female; Health Status; Humans; Male; Mood Disorders; Risk Factors; United States
7.
Ann Intern Med; 167(8): 555-564, 2017 Oct 17.
Article in English | MEDLINE | ID: mdl-28973634

ABSTRACT

BACKGROUND: Publicly reported hospital risk-standardized mortality rates (RSMRs) for acute myocardial infarction (AMI) are calculated for Medicare beneficiaries. Outcomes for older patients with AMI may not reflect general outcomes. OBJECTIVE: To examine the relationship between hospital 30-day RSMRs for older patients (aged ≥65 years) and those for younger patients (aged 18 to 64 years) and all patients (aged ≥18 years) with AMI. DESIGN: Retrospective cohort study. SETTING: 986 hospitals in the ACTION (Acute Coronary Treatment and Intervention Outcomes Network) Registry-Get With the Guidelines. PARTICIPANTS: Adults hospitalized for AMI from 1 October 2010 to 30 September 2014. MEASUREMENTS: Hospital 30-day RSMRs were calculated for older, younger, and all patients using an electronic health record measure of AMI mortality endorsed by the National Quality Forum. Hospitals were ranked by their 30-day RSMRs for these 3 age groups, and agreement in rankings was plotted. The correlation in hospital AMI achievement scores for each age group was also calculated using the Hospital Value-Based Purchasing (HVBP) Program method computed with the electronic health record measure. RESULTS: 267 763 and 276 031 AMI hospitalizations among older and younger patients, respectively, were identified. Median hospital 30-day RSMRs were 9.4%, 3.0%, and 6.2% for older, younger, and all patients, respectively. Most top- and bottom-performing hospitals for older patients were neither top nor bottom performers for younger patients. In contrast, most top and bottom performers for older patients were also top and bottom performers for all patients. Similarly, HVBP achievement scores for older patients correlated weakly with those for younger patients (R = 0.30) and strongly with those for all patients (R = 0.92). LIMITATION: Minority of U.S. hospitals. CONCLUSION: Hospital mortality rankings for older patients with AMI inconsistently reflect rankings for younger patients. 
Incorporation of younger patients into assessment of hospital outcomes would permit further examination of the presence and effect of age-related quality differences. PRIMARY FUNDING SOURCE: American College of Cardiology.
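The agreement analysis above rests on correlating hospital achievement scores between age groups (R = 0.30 and R = 0.92). A minimal Pearson correlation sketch; the score values below are hypothetical stand-ins, not registry data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical 30-day RSMRs (%) for four hospitals: older patients vs all patients.
older = [9.1, 8.4, 10.2, 7.9]
all_patients = [6.3, 5.8, 7.0, 5.5]
r = pearson_r(older, all_patients)  # close to 1: rankings largely agree
```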


Subjects
Hospital Mortality; Hospitals/standards; Myocardial Infarction/mortality; Outcome Assessment, Health Care; Adolescent; Adult; Age Factors; Aged; Hospitals/statistics & numerical data; Humans; Middle Aged; Retrospective Studies; United States/epidemiology; Young Adult
8.
Clin Infect Dis; 53(1): 20-5, 2011 Jul 01.
Article in English | MEDLINE | ID: mdl-21653298

ABSTRACT

BACKGROUND: US estimates of the Clostridium difficile infection (CDI) burden have utilized International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnosis codes. Whether ICD-9-CM code rank order affects CDI prevalence estimates is important because the National Hospital Discharge Survey (NHDS) and the Nationwide Inpatient Sample (NIS) have varying limits on the number of ICD-9-CM codes collected. METHODS: ICD-9-CM codes for CDI (008.45), C. difficile toxin assay results, and dates of admission and discharge were collected from electronic hospital databases for adult patients admitted to 4 hospitals in the United States from July 2000 through June 2006. CDI prevalence per 1000 discharges was calculated and compared for NHDS and NIS limits and toxin assay results from the same hospitals. CDI prevalence estimates were compared using the χ2 test, and the test of equality was used to compare slopes. RESULTS: CDI prevalence measured by NIS criteria was significantly higher than that measured using NHDS criteria (10.7 cases per 1000 discharges versus 9.4 cases per 1000 discharges; P<.001) in the 4 hospitals. CDI prevalence measured by toxin assay results was 9.4 cases per 1000 discharges (P=.57 versus NHDS). However, the CDI prevalence increased more rapidly over time when measured according to the NHDS criteria than when measured according to toxin assay results (β=1.09 versus 0.84; P=.008). CONCLUSIONS: Compared with the NHDS definition, the NIS definition captured 12% more CDI cases and reported significantly higher CDI rates. Rates calculated using toxin assay results were not different from rates calculated using NHDS criteria, but CDI prevalence appeared to increase more rapidly when measured by NHDS criteria than when measured by toxin assay results.
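The abstract compares how fast prevalence rose under each definition by testing equality of slopes (β = 1.09 vs 0.84). A sketch of the underlying ordinary least-squares slope, applied to hypothetical monthly prevalence series (cases per 1000 discharges), not the study's data:

```python
def ls_slope(x, y):
    """Ordinary least-squares slope of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

# Hypothetical prevalence series over six periods: one rising faster than the other.
nhds = [8.0, 8.5, 9.1, 9.6, 10.2, 10.9]
toxin = [8.1, 8.4, 8.8, 9.0, 9.3, 9.6]
periods = list(range(6))
```

A formal test of equality would compare the two slope estimates relative to their standard errors; only the point estimates are sketched here.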


Subjects
Clostridioides difficile/isolation & purification; Clostridium Infections/diagnosis; Clostridium Infections/epidemiology; International Classification of Diseases; Bacterial Toxins; Chi-Square Distribution; Clostridium Infections/classification; Cross-Sectional Studies; Cytotoxicity Tests, Immunologic; Enzyme-Linked Immunosorbent Assay; Health Care Surveys; Humans; Prevalence; United States/epidemiology
9.
Sci Rep; 11(1): 20969, 2021 10 25.
Article in English | MEDLINE | ID: mdl-34697328

ABSTRACT

Certain diseases have strong comorbidity and co-occurrence with others. Understanding disease-disease associations can increase awareness among healthcare providers of co-occurring conditions and facilitate earlier diagnosis, prevention, and treatment. In this study, we utilized the large The Guideline Advantage (TGA) longitudinal electronic health record dataset from 70 outpatient clinics across the United States to investigate potential disease-disease associations. Specifically, the 50 most prevalent disease diagnoses were manually identified from 165,732 unique patients. To investigate co-occurrence or dependency associations among the 50 diseases, the categorical disease terms were first mapped into numerical vectors based on disease co-occurrence frequency in individual patients using the Word2Vec approach. Disease association clusters were then identified using correlation and clustering analyses in the numerical space. Moreover, the distribution of time delay (Δt) between pairs of strongly associated diseases (correlation coefficients ≥ 0.5) was calculated to show dependency among the diseases. The results can indicate the risk of disease comorbidity and complications, and facilitate disease prevention and optimal treatment decision-making.
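The study embeds diseases with Word2Vec; as a much simpler sketch of the same co-occurrence idea (not the authors' method), each disease can be represented by a binary patient-incidence vector and compared by cosine similarity, so diseases appearing in the same patients score high. The patients and disease codes below are hypothetical:

```python
import math

def incidence_vector(disease, patients):
    """Binary vector over patients: 1 if the diagnosis set contains the disease."""
    return [1 if disease in dx else 0 for dx in patients]

def cosine(u, v):
    """Cosine similarity between two vectors (0.0 if either is all zeros)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical cohort: hypertension and diabetes always co-occur; asthma does not.
patients = [{"HTN", "DM"}, {"HTN", "DM"}, {"HTN", "DM", "CKD"},
            {"ASTHMA"}, {"ASTHMA", "CKD"}]
htn = incidence_vector("HTN", patients)
dm = incidence_vector("DM", patients)
asthma = incidence_vector("ASTHMA", patients)
```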


Subjects
Comorbidity; Adult; Aged; Cluster Analysis; Databases, Factual; Electronic Health Records; Female; Humans; International Classification of Diseases; Male; Middle Aged; United States
10.
PLoS One; 16(9): e0239007, 2021.
Article in English | MEDLINE | ID: mdl-34516567

ABSTRACT

BACKGROUND: Cardiac dysrhythmias (CD) affect millions of Americans in the United States (US) and are associated with considerable morbidity and mortality. New strategies to combat this growing problem are urgently needed. OBJECTIVES: Predicting CD using electronic health record (EHR) data would allow for earlier diagnosis and treatment of the condition, thus improving overall cardiovascular outcomes. The Guideline Advantage (TGA) is an American Heart Association ambulatory quality clinical data registry of EHR data representing 70 clinics distributed throughout the US, and it has been used to monitor outpatient prevention and disease management outcome measures across populations and for longitudinal research on the impact of preventive care. METHODS: For this study, we represented all time-series cardiovascular health (CVH) measures and the corresponding data collection time points for each patient by numerical embedding vectors. We then employed a deep learning technique, the long short-term memory (LSTM) model, to predict CD from the vector of time-series CVH measures using 5-fold cross validation, and compared the performance of this model to deep neural network, logistic regression, random forest, and Naïve Bayes models. RESULTS: The LSTM model outperformed the other machine learning models, achieving the best prediction performance as measured by the average area under the receiver operator curve (AUROC): 0.76 for LSTM, 0.71 for deep neural networks, 0.66 for logistic regression, 0.67 for random forest, and 0.59 for Naïve Bayes. The most influential feature in the LSTM model was blood pressure. CONCLUSIONS: These findings may be used to prevent CD in the outpatient setting by encouraging appropriate surveillance and management of CVH.
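The models above were compared under 5-fold cross validation. A sketch of the fold bookkeeping only (shuffled index partition into disjoint train/test splits), not the LSTM itself:

```python
import random

def k_fold_indices(n, k=5, seed=0):
    """Partition indices 0..n-1 into k shuffled folds; yield (train, test)
    index lists so each index is used exactly once as test data."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)  # deterministic shuffle for reproducibility
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test

splits = list(k_fold_indices(10, 5))
```

Per-fold AUROCs are then averaged to produce figures like those reported above.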


Subjects
Deep Learning; Electronic Health Records; Arrhythmias, Cardiac; Humans
11.
JAMA; 304(18): 2035-41, 2010 Nov 10.
Article in English | MEDLINE | ID: mdl-21063013

ABSTRACT

CONTEXT: Central line-associated bloodstream infection (BSI) rates, determined by infection preventionists using the Centers for Disease Control and Prevention (CDC) surveillance definitions, are increasingly published to compare the quality of patient care delivered by hospitals. However, such comparisons are valid only if surveillance is performed consistently across institutions. OBJECTIVE: To assess institutional variation in performance of traditional central line-associated BSI surveillance. DESIGN, SETTING, AND PARTICIPANTS: We performed a retrospective cohort study of 20 intensive care units among 4 medical centers (2004-2007). Unit-specific central line-associated BSI rates were calculated for 12-month periods. Infection preventionists, blinded to study participation, performed routine prospective surveillance using CDC definitions. A computer algorithm reference standard was applied retrospectively using criteria that adapted the same CDC surveillance definitions. MAIN OUTCOME MEASURES: Correlation of central line-associated BSI rates as determined by infection preventionist vs the computer algorithm reference standard. Variation in performance was assessed by testing for institution-dependent heterogeneity in a linear regression model. RESULTS: Forty-one unit-periods among 20 intensive care units were analyzed, representing 241,518 patient-days and 165,963 central line-days. The median infection preventionist and computer algorithm central line-associated BSI rates were 3.3 (interquartile range [IQR], 2.0-4.5) and 9.0 (IQR, 6.3-11.3) infections per 1000 central line-days, respectively. 
Overall correlation between computer algorithm and infection preventionist rates was weak (ρ = 0.34), and when stratified by medical center, point estimates for institution-specific correlations ranged widely: medical center A: 0.83; 95% confidence interval (CI), 0.05 to 0.98; P = .04; medical center B: 0.76; 95% CI, 0.32 to 0.93; P = .003; medical center C: 0.50, 95% CI, -0.11 to 0.83; P = .10; and medical center D: 0.10; 95% CI -0.53 to 0.66; P = .77. Regression modeling demonstrated significant heterogeneity among medical centers in the relationship between computer algorithm and expected infection preventionist rates (P < .001). The medical center that had the lowest rate by traditional surveillance (2.4 infections per 1000 central line-days) had the highest rate by computer algorithm (12.6 infections per 1000 central line-days). CONCLUSIONS: Institutional variability of infection preventionist rates relative to a computer algorithm reference standard suggests that there is significant variation in the application of standard central line-associated BSI surveillance definitions across medical centers. Variation in central line-associated BSI surveillance practice may complicate interinstitutional comparisons of publicly reported central line-associated BSI rates.
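The correlations above are rank correlations (ρ). A minimal Spearman implementation, computed as the Pearson correlation of ranks and assuming no tied values:

```python
def spearman_rho(x, y):
    """Spearman rank correlation for equal-length sequences without ties."""
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

With tied rates, average ranks would be needed; that case is omitted here for brevity.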


Subjects
Bacteremia/epidemiology; Catheter-Related Infections/epidemiology; Cross Infection/epidemiology; Population Surveillance; Quality Assurance, Health Care; Academic Medical Centers/statistics & numerical data; Algorithms; Bacteremia/classification; Catheter-Related Infections/classification; Centers for Disease Control and Prevention, U.S.; Cohort Studies; Cross Infection/classification; Humans; Infection Control; Intensive Care Units/statistics & numerical data; Reproducibility of Results; Retrospective Studies; Single-Blind Method; Terminology as Topic; United States/epidemiology
12.
PLoS One; 15(8): e0236836, 2020.
Article in English | MEDLINE | ID: mdl-32790674

ABSTRACT

BACKGROUND: Cancer is the second leading cause of death in the United States. Cancer screenings can detect precancerous cells and allow for earlier diagnosis and treatment. Our purpose was to better understand risk factors for cancer screenings and assess the effect of cancer screenings on changes in cardiovascular health (CVH) measures before and after screening. METHODS: We used The Guideline Advantage (TGA), an American Heart Association ambulatory quality clinical data registry of electronic health record data (n = 362,533 patients), to investigate associations between time-series CVH measures and receipt of breast, cervical, and colon cancer screenings. Long short-term memory (LSTM) neural networks were employed to predict receipt of cancer screenings. We also compared distributions of CVH factors between patients who received cancer screenings and those who did not. Finally, we examined and quantified changes in CVH measures among the screened and non-screened groups. RESULTS: Model performance was evaluated by the area under the receiver operator curve (AUROC): the average AUROC of 10 curves was 0.63 for breast, 0.70 for cervical, and 0.61 for colon cancer screening. Distribution comparison found that screened patients had a higher prevalence of poor CVH categories. CVH submetrics improved for patients after cancer screenings. CONCLUSION: A deep learning algorithm could be used to investigate associations between time-series CVH measures and cancer screenings in an ambulatory population. Patients with more adverse CVH profiles tend to be screened for cancers, and cancer screening may also prompt favorable changes in CVH. Cancer screenings may improve patient CVH, thus potentially decreasing the burden of disease and costs for the health system (e.g., cardiovascular diseases and cancers).


Subjects
Breast Neoplasms/diagnosis; Cardiovascular Diseases/diagnosis; Colonic Neoplasms/diagnosis; Deep Learning; Uterine Cervical Neoplasms/diagnosis; Adult; Area Under Curve; Early Detection of Cancer; Female; Guidelines as Topic; Humans; Male; Middle Aged; ROC Curve; Risk Factors
13.
Infect Control Hosp Epidemiol; 35(12): 1483-90, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25419770

ABSTRACT

OBJECTIVE: Central line-associated bloodstream infection (BSI) rates are a key quality metric for comparing hospital quality and safety. Traditional BSI surveillance may be limited by interrater variability. We assessed whether a computer-automated method of central line-associated BSI detection can improve the validity of surveillance. DESIGN: Retrospective cohort study. SETTING: Eight medical and surgical intensive care units (ICUs) in 4 academic medical centers. METHODS: Traditional surveillance (by hospital staff) and computer algorithm surveillance were each compared against a retrospective audit review using a random sample of blood culture episodes during the period 2004-2007 from which an organism was recovered. Episode-level agreement with audit review was measured with κ statistics, and differences were assessed using the test of equal κ coefficients. Linear regression was used to assess the relationship between surveillance performance (κ) and surveillance-reported BSI rates (BSIs per 1,000 central line-days). RESULTS: We evaluated 664 blood culture episodes. Agreement with audit review was significantly lower for traditional surveillance (κ [95% confidence interval (CI)] = 0.44 [0.37-0.51]) than computer algorithm surveillance (κ [95% CI] = 0.58; P = .001). Agreement between traditional surveillance and audit review was heterogeneous across ICUs (P = .01); furthermore, traditional surveillance performed worse among ICUs reporting lower (better) BSI rates (P = .001). In contrast, computer algorithm performance was consistent across ICUs and across the range of computer-reported central line-associated BSI rates. CONCLUSIONS: Compared with traditional surveillance of bloodstream infections, computer automated surveillance improves accuracy and reliability, making interfacility performance comparisons more valid.
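Episode-level agreement above is measured with κ statistics. Cohen's κ corrects observed agreement between two raters for the agreement expected by chance; a minimal sketch with toy ratings, not study data:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa: chance-corrected agreement between two raters'
    equal-length label sequences."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / (n * n)          # chance agreement
    return (po - pe) / (1 - pe)
```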


Subjects
Bacteremia; Catheter-Related Infections; Cross Infection; Hospital Information Systems; Infection Control/standards; Algorithms; Bacteremia/diagnosis; Bacteremia/epidemiology; Bacteremia/etiology; Bacteremia/prevention & control; Catheter-Related Infections/diagnosis; Catheter-Related Infections/epidemiology; Catheter-Related Infections/prevention & control; Catheterization, Central Venous/adverse effects; Cross Infection/diagnosis; Cross Infection/epidemiology; Cross Infection/prevention & control; Epidemiological Monitoring; Hospital Information Systems/organization & administration; Hospital Information Systems/standards; Humans; Intensive Care Units/standards; Intensive Care Units/statistics & numerical data; Management Audit; Quality Improvement; Reproducibility of Results; Retrospective Studies; United States/epidemiology
14.
Article in English | MEDLINE | ID: mdl-24303269

ABSTRACT

Obesity adversely affects not just individuals but populations, and it is a major problem for our society. Environmental and socioeconomic factors influence obesity and are potentially important for research, yet these data are often not readily available in electronic health records (EHRs). We augmented an EHR-derived clinical data set with publicly available data on factors thought to influence obesity rates to examine associations between these factors and the prevalence of overweight and obesity. As revealed by our multinomial logistic model for overweight and obesity in a diverse region, this study demonstrates the potential value for augmenting the secondary use of EHR data with publicly available data.

15.
Infect Control Hosp Epidemiol; 33(3): 305-8, 2012 Mar.
Article in English | MEDLINE | ID: mdl-22314071

ABSTRACT

Automated surveillance using electronically available data has been found to be accurate and save time. An automated Clostridium difficile infection (CDI) surveillance algorithm was validated at 4 Centers for Disease Control and Prevention Epicenter hospitals. Electronic surveillance was highly sensitive, specific, and showed good to excellent agreement for hospital-onset; community-onset, study facility-associated; indeterminate; and recurrent CDI.
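Validation of such an algorithm reduces to sensitivity and specificity against a reference standard. A minimal sketch from 2×2 validation counts; the counts below are hypothetical, not the study's data:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity of a surveillance algorithm against
    a reference standard, from true/false positive and negative counts."""
    sensitivity = tp / (tp + fn)   # detected cases among true cases
    specificity = tn / (tn + fp)   # correctly cleared among true non-cases
    return sensitivity, specificity

# Hypothetical validation: 48 of 50 true CDI cases flagged, 10 false alarms
# among 950 non-cases.
sens, spec = sens_spec(48, 2, 940, 10)
```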


Subjects
Cross Infection/epidemiology; Enterocolitis, Pseudomembranous/epidemiology; Medical Records Systems, Computerized; Sentinel Surveillance; Adolescent; Adult; Algorithms; Automation/methods; Centers for Disease Control and Prevention, U.S.; Clostridioides difficile/isolation & purification; Cross Infection/microbiology; Electronic Health Records; Enterocolitis, Pseudomembranous/diagnosis; Feces/microbiology; Health Facilities; Humans; Middle Aged; Sensitivity and Specificity; United States/epidemiology; Young Adult
16.
Infect Control Hosp Epidemiol; 33(5): 470-6, 2012 May.
Article in English | MEDLINE | ID: mdl-22476273

ABSTRACT

OBJECTIVE: To assess Clostridium difficile infection (CDI)-related colectomy rates by CDI surveillance definitions and over time at multiple healthcare facilities. SETTING: Five university-affiliated acute care hospitals in the United States. DESIGN AND METHODS: Cases of CDI and patients who underwent colectomy from July 2000 through June 2006 were identified from 5 US tertiary care centers. Monthly CDI-related colectomy rates were calculated as the number of CDI-related colectomies per 1,000 CDI cases, and cases were categorized according to recommended surveillance definitions. Logistic regression was performed to evaluate risk factors for CDI-related colectomy. RESULTS: In total, 8,569 cases of CDI were identified, and 75 patients underwent CDI-related colectomy. The overall colectomy rate was 8.7 per 1,000 CDI cases. The CDI-related colectomy rate ranged from 0 to 23 per 1,000 CDI episodes across hospitals. The colectomy rate for healthcare-facility-onset CDI was 4.3 per 1,000 CDI cases, and that for community-onset CDI was 16.5 per 1,000 CDI cases (P < .05). There were significantly more CDI-related colectomies at hospitals B and C (P < .05). CONCLUSIONS: The overall CDI-related colectomy rate was low, and there was no significant change in the CDI-related colectomy rate over time. Onset of disease outside the study hospital was an independent risk factor for colectomy.
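The rates above are expressed per 1,000 CDI cases; the arithmetic, applied to the reported overall counts (75 colectomies among 8,569 CDI cases), is:

```python
def rate_per_1000(events, denominator):
    """Events per 1,000 units of denominator (here, colectomies per 1,000 CDI cases)."""
    return 1000 * events / denominator

overall = rate_per_1000(75, 8569)  # close to the reported 8.7 per 1,000
```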


Subjects
Clostridioides difficile/isolation & purification; Clostridium Infections/etiology; Colectomy/adverse effects; Adolescent; Adult; Aged; Aged, 80 and over; Clostridium Infections/epidemiology; Cross Infection; Female; Humans; Male; Middle Aged; United States/epidemiology; Young Adult
17.
Infect Control Hosp Epidemiol ; 33(1): 40-9, 2012 Jan.
Article in English | MEDLINE | ID: mdl-22173521

ABSTRACT

OBJECTIVE: To evaluate the use of routinely collected electronic health data in Medicare claims to identify surgical site infections (SSIs) following hip arthroplasty, knee arthroplasty, and vascular surgery. DESIGN: Retrospective cohort study. SETTING: Four academic hospitals that perform prospective SSI surveillance. METHODS: We developed lists of International Classification of Diseases, Ninth Revision, and Current Procedural Terminology diagnosis and procedure codes to identify potential SSIs. We then screened for these codes in Medicare claims submitted by each hospital on patients older than 65 years of age who had undergone 1 of the study procedures during 2007. Each site reviewed medical records of patients identified by either claims codes or traditional infection control surveillance to confirm SSI using Centers for Disease Control and Prevention/National Healthcare Safety Network criteria. We assessed the performance of both methods against all chart-confirmed SSIs identified by either method. RESULTS: Claims-based surveillance detected 1.8-4.7-fold more SSIs than traditional surveillance, including detection of all previously identified cases. For hip and vascular surgery, there was a 5-fold and 1.6-fold increase in detection of deep and organ/space infections, respectively, with no increased detection of deep and organ/space infections following knee surgery. Use of claims to trigger chart review led to confirmation of SSI in 1 out of 3 charts for hip arthroplasty, 1 out of 5 charts for knee arthroplasty, and 1 out of 2 charts for vascular surgery. CONCLUSION: Claims-based SSI surveillance markedly increased the number of SSIs detected following hip arthroplasty, knee arthroplasty, and vascular surgery. It deserves consideration as a more effective approach to target chart reviews for identifying SSIs.
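The screening step described here, flagging claims that carry any code from a predefined trigger list, reduces to a set-intersection check. A minimal sketch with a hypothetical trigger list and record layout (the study's actual ICD-9/CPT code sets are not reproduced in the abstract):

```python
# Hypothetical trigger list for illustration only; the study's real
# ICD-9/CPT code sets are not given in the abstract.
SSI_TRIGGER_CODES = {"998.59", "996.66", "711.05"}

def flag_patients(claims):
    """Return IDs of patients whose claims carry any trigger code."""
    return {c["patient_id"] for c in claims
            if SSI_TRIGGER_CODES & set(c["dx_codes"])}

claims = [
    {"patient_id": "A", "dx_codes": ["998.59", "401.9"]},  # flagged for chart review
    {"patient_id": "B", "dx_codes": ["401.9"]},            # not flagged
]
print(flag_patients(claims))  # {'A'}
```

Flagged patients would then go to manual chart review for confirmation, as the study describes.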


Subjects
International Classification of Diseases; Medicare; Population Surveillance/methods; Surgical Wound Infection/epidemiology; Aged; Aged, 80 and over; Arthroplasty, Replacement, Hip/adverse effects; Arthroplasty, Replacement, Knee/adverse effects; Female; Humans; Male; Medical Records; Retrospective Studies; Surgical Wound Infection/diagnosis; Surgical Wound Infection/etiology; United States; Vascular Surgical Procedures/adverse effects
18.
Infect Control Hosp Epidemiol ; 32(5): 472-80, 2011 May.
Article in English | MEDLINE | ID: mdl-21515978

ABSTRACT

OBJECTIVE: To outline methods for deriving and validating intensive care unit (ICU) antimicrobial utilization (AU) measures from computerized data and to describe programming problems that emerged. DESIGN: Retrospective evaluation of computerized pharmacy and administrative data. SETTING: ICUs from 4 academic medical centers over 36 months. INTERVENTIONS: Investigators separately developed and validated programming code to report AU measures in selected ICUs. Use of antibacterial and antifungal drugs for systemic administration was categorized and expressed as antimicrobial-days (each day that each antimicrobial drug was given to each patient) and patient-days receiving antimicrobials (each day that any antimicrobial drug was given to each patient). Monthly rates were compiled and analyzed centrally, with ICU patient-days as the denominator. Results were validated against data collected from manual review of medical records. Frequent discussion among investigators aided identification and correction of programming problems. RESULTS: AU data were successfully programmed through an iterative process of computer code revision. After identifying and resolving major programming errors, comparison of computerized patient-level data with data collected by manual review of medical records revealed discrepancies in antimicrobial-days and patient-days receiving antimicrobials that ranged from less than 1% to 17.7%. The hospital whose numerator data were derived from electronic records of medication administration had the least discrepant results. CONCLUSIONS: Computerized AU measures can be derived feasibly, but threats to validity must be sought out and corrected. The magnitude of discrepancies between computerized AU data and a gold standard based on manual review of medical records varies, with electronic records of medication administration providing maximal accuracy.
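The two utilization measures defined above differ only in the deduplication key: antimicrobial-days count distinct (patient, day, drug) triples, while patient-days receiving antimicrobials count distinct (patient, day) pairs. A minimal sketch over hypothetical administration records:

```python
# Hypothetical (patient, date, drug) administration records; the study's
# pharmacy data layout is not specified in the abstract.
records = [
    ("pt1", "2024-03-01", "vancomycin"),
    ("pt1", "2024-03-01", "cefepime"),    # second drug on the same patient-day
    ("pt1", "2024-03-02", "vancomycin"),
    ("pt2", "2024-03-01", "fluconazole"),
]

# Antimicrobial-days: each day that each drug was given to each patient.
antimicrobial_days = len({(p, day, drug) for p, day, drug in records})

# Patient-days receiving antimicrobials: each day that ANY drug was given.
patient_days_on_antimicrobials = len({(p, day) for p, day, _ in records})

print(antimicrobial_days, patient_days_on_antimicrobials)  # 4 3
```

Note how pt1's two drugs on 2024-03-01 contribute two antimicrobial-days but a single patient-day receiving antimicrobials.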


Subjects
Anti-Infective Agents/therapeutic use; Clinical Pharmacy Information Systems; Drug Utilization Review/methods; Intensive Care Units; Medical Records Systems, Computerized; Academic Medical Centers; Humans; Medical Records; Pharmacy Service, Hospital; Retrospective Studies; Software
19.
Infect Control Hosp Epidemiol ; 31(10): 1030-7, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20695799

ABSTRACT

OBJECTIVE: To compare incidence rates of Clostridium difficile infection (CDI) during a 6-year period among 5 geographically diverse academic medical centers across the United States by use of recommended standardized surveillance definitions of CDI that incorporate recent information on healthcare facility (HCF) exposure. METHODS: Data on C. difficile toxin assay results and dates of hospital admission and discharge were collected from electronic databases. Chart review was performed for patients with a positive C. difficile toxin assay result who were identified within 48 hours after hospital admission to determine whether they had any HCF exposure during the 90 days prior to their hospital admission. CDI cases, defined as any inpatient with a stool toxin assay positive for C. difficile, were categorized into 5 surveillance definitions based on recent HCF exposure. Annual CDI rates were calculated and evaluated by use of the χ² test for trend and the χ² summary test. RESULTS: During the study period, there were significant increases in the overall incidence rates of HCF-onset, HCF-associated CDI (from 7.0 to 8.5 cases per 10,000 patient-days; P < .001); community-onset, HCF-associated CDI attributed to a study hospital (from 1.1 to 1.3 cases per 10,000 patient-days; P = .003); and community-onset, HCF-associated CDI not attributed to a study hospital (from 0.8 to 1.5 cases per 1,000 admissions overall; P < .001). For each surveillance definition of CDI, there were significant differences in the total incidence rate between HCFs. CONCLUSIONS: The increasing incidence rates of CDI over time and across healthcare institutions and the correlation of CDI incidence in different surveillance categories suggest that CDI may be a regional problem and not isolated to a single HCF within a community.
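The incidence rates quoted above are simple scaled ratios. A sketch with illustrative counts (the study's raw numerators and denominators are not given in the abstract):

```python
def incidence_per_10000_pd(n_cases: int, patient_days: int) -> float:
    """CDI cases per 10,000 patient-days."""
    if patient_days <= 0:
        raise ValueError("patient-days must be positive")
    return 10000 * n_cases / patient_days

# Illustrative: 140 cases over 200,000 patient-days reproduces the
# abstract's starting HCF-onset rate of 7.0 per 10,000 patient-days.
print(incidence_per_10000_pd(140, 200_000))  # 7.0
```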


Subjects
Academic Medical Centers/statistics & numerical data; Clostridioides difficile; Clostridium Infections/epidemiology; Cross Infection/epidemiology; Enterocolitis, Pseudomembranous/epidemiology; Adolescent; Adult; Aged; Aged, 80 and over; Clostridium Infections/microbiology; Community-Acquired Infections/epidemiology; Community-Acquired Infections/microbiology; Cross Infection/microbiology; Enterocolitis, Pseudomembranous/microbiology; Humans; Incidence; Middle Aged; Patient Discharge; Population Surveillance/methods; United States/epidemiology; Young Adult
20.
Infect Control Hosp Epidemiol ; 31(3): 262-8, 2010 Mar.
Article in English | MEDLINE | ID: mdl-20100085

ABSTRACT

OBJECTIVE: To compare incidence of hospital-onset Clostridium difficile infection (CDI) measured by the use of International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) discharge diagnosis codes with rates measured by the use of electronically available C. difficile toxin assay results. METHODS: Cases of hospital-onset CDI were identified at 5 US hospitals during the period from July 2000 through June 2006 with the use of 2 surveillance definitions: positive toxin assay results (gold standard) and secondary ICD-9-CM discharge diagnosis codes for CDI. The χ² test was used to compare incidence, linear regression models were used to analyze trends, and the test of equality was used to compare slopes. RESULTS: Of 8,670 cases of hospital-onset CDI, 38% were identified by the use of both toxin assay results and the ICD-9-CM code, 16% by the use of toxin assay results alone, and 45% by the use of the ICD-9-CM code alone. Nearly half (47%) of cases of CDI identified by the use of a secondary diagnosis code alone were community-onset CDI according to the results of the toxin assay. The rate of hospital-onset CDI found by use of ICD-9-CM codes was significantly higher than the rate found by use of toxin assay results overall (P < .001), as well as individually at 3 of the 5 hospitals (P < .001 for all). The agreement between toxin assay results and the presence of a secondary ICD-9-CM diagnosis code for CDI was moderate, with an overall kappa value of 0.509 and hospital-specific kappa values of 0.489-0.570. Overall, the annual increase in CDI incidence was significantly greater for rates determined by the use of ICD-9-CM codes than for rates determined by the use of toxin assay results (P = .006).
CONCLUSIONS: Although the ICD-9-CM code for CDI seems to be adequate for measuring the overall CDI burden, use of the ICD-9-CM discharge diagnosis code for CDI, without present-on-admission code assignment, is not an acceptable surrogate for surveillance for hospital-onset CDI.
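The kappa values reported above measure chance-corrected agreement between the two case-finding methods. A sketch of Cohen's kappa for a 2×2 agreement table, using a toy table (the study's actual cell counts are not given in the abstract):

```python
def cohens_kappa(a: int, b: int, c: int, d: int) -> float:
    """Cohen's kappa for a 2x2 agreement table:
    a = both methods positive, b = method 1 positive only,
    c = method 2 positive only, d = both methods negative.
    """
    n = a + b + c + d
    p_observed = (a + d) / n               # raw agreement
    p1_pos = (a + b) / n                   # method 1 positive rate
    p2_pos = (a + c) / n                   # method 2 positive rate
    p_chance = p1_pos * p2_pos + (1 - p1_pos) * (1 - p2_pos)
    return (p_observed - p_chance) / (1 - p_chance)

# Toy counts for illustration only, not the study's data.
print(round(cohens_kappa(40, 10, 10, 40), 3))  # 0.6
```

A kappa near 0.5, as the study reports, is conventionally read as moderate agreement.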


Subjects
Clostridioides difficile/isolation & purification; Cross Infection/epidemiology; International Classification of Diseases; Population Surveillance; Adult; Biological Assay; Cross Infection/diagnosis; Cross Infection/etiology; Humans; Linear Models; Patient Discharge; Population Surveillance/methods; United States/epidemiology