Results 1 - 20 of 80
1.
Ann Surg Oncol ; 30(5): 2883-2894, 2023 May.
Article in English | MEDLINE | ID: mdl-36749504

ABSTRACT

BACKGROUND: Measures taken to address the COVID-19 pandemic interrupted routine diagnosis and care for breast cancer. The aim of this study was to characterize the effects of the pandemic on breast cancer care in a statewide cohort. PATIENTS AND METHODS: Using data from a large health information exchange, we retrospectively analyzed the timing of breast cancer screening and identified a cohort of newly diagnosed patients with any stage of breast cancer to further assess the information available about their surgical treatments. We compared data for four subgroups: pre-lockdown (preLD) 25 March to 16 June 2019; lockdown (LD) 23 March to 3 May 2020; reopening (RO) 4 May to 14 June 2020; and post-lockdown (postLD) 22 March to 13 June 2021. RESULTS: During LD and RO, screening mammograms in the cohort decreased by 96.3% and 36.2%, respectively. Overall breast cancer diagnosis and surgery volumes decreased by up to 38.7%, and the median time to surgery was prolonged from 1.5 months to 2.4 months for LD and 1.8 months for RO. Interestingly, higher mean weekly DCIS diagnoses (5.0 per week vs. 3.1 per week, p < 0.05) and surgery volumes (14.8 vs. 10.5, p < 0.05) were found for postLD compared with preLD, while median time to surgery was shorter (1.2 months vs. 1.5 months, p < 0.0001). However, postLD average weekly screening and diagnostic mammogram volumes did not fully recover to preLD levels (2055.3 vs. 2326.2, p < 0.05; 574.2 vs. 624.1, p < 0.05). CONCLUSIONS: Breast cancer diagnosis and treatment patterns were interrupted during the lockdown and remained altered 1 year later. Screening in primary care should be expanded to mitigate possible longer-term effects of these interruptions.


Subjects
Breast Neoplasms , COVID-19 , Health Information Exchange , Humans , Female , Breast Neoplasms/diagnosis , Breast Neoplasms/epidemiology , Breast Neoplasms/surgery , COVID-19/epidemiology , Pandemics , Retrospective Studies , Early Detection of Cancer , Communicable Disease Control , COVID-19 Testing
2.
Am J Obstet Gynecol ; 224(6): 599.e1-599.e18, 2021 06.
Article in English | MEDLINE | ID: mdl-33460585

ABSTRACT

BACKGROUND: Intrauterine devices are effective and safe, long-acting reversible contraceptives, but the risk of uterine perforation occurs with an estimated incidence of 1 to 2 per 1000 insertions. The European Active Surveillance Study for Intrauterine Devices, a European prospective observational study that enrolled 61,448 participants (2006-2012), found that women breastfeeding at the time of device insertion or with the device inserted at ≤36 weeks after delivery had a higher risk of uterine perforation. The Association of Uterine Perforation and Expulsion of Intrauterine Device (APEX-IUD) study was a Food and Drug Administration-mandated study designed to reflect current United States clinical practice. The aims of the APEX-IUD study were to evaluate the risk of intrauterine device-related uterine perforation and device expulsion among women who were breastfeeding or within 12 months after delivery at insertion. OBJECTIVE: We aimed to describe the APEX-IUD study design, methodology, and analytical plan and present population characteristics, size of risk factor groups, and duration of follow-up. STUDY DESIGN: APEX-IUD study was a retrospective cohort study conducted in 4 organizations with access to electronic health records: Kaiser Permanente Northern California, Kaiser Permanente Southern California, Kaiser Permanente Washington, and Regenstrief Institute in Indiana. Variables were identified through structured data (eg, diagnostic, procedural, medication codes) and unstructured data (eg, clinical notes) via natural language processing. Outcomes include uterine perforation and device expulsion; potential risk factors were breastfeeding at insertion, postpartum timing of insertion, device type, and menorrhagia diagnosis in the year before insertion. Covariates include demographic characteristics, clinical characteristics, and procedure-related variables, such as difficult insertion. 
The first potential date of inclusion for eligible women varies by research site (from January 1, 2001 to January 1, 2010). Follow-up begins at insertion and ends at first occurrence of an outcome of interest, a censoring event (device removal or reinsertion, pregnancy, hysterectomy, sterilization, device expiration, death, disenrollment, last clinical encounter), or end of the study period (June 30, 2018). Comparisons of levels of exposure variables were made using Cox regression models with confounding adjusted by propensity score weighting using overlap weights. RESULTS: The study population includes 326,658 women with at least 1 device insertion during the study period (Kaiser Permanente Northern California, 161,442; Kaiser Permanente Southern California, 123,214; Kaiser Permanente Washington, 20,526; Regenstrief Institute, 21,476). The median duration of continuous enrollment was 90 (site medians 74-177) months. The mean age was 32 years, and the population was racially and ethnically diverse across the 4 sites. The mean body mass index was 28.5 kg/m2, and of the women included in the study, 10.0% had menorrhagia ≤12 months before insertion, 5.3% had uterine fibroids, and 10% were recent smokers; furthermore, among these women, 79.4% had levonorgestrel-releasing devices, and 19.5% had copper devices. Across sites, 97,824 women had an intrauterine device insertion at ≤52 weeks after delivery, of which 94,817 women (97%) had breastfeeding status at insertion determined; in addition, 228,834 women had intrauterine device insertion at >52 weeks after delivery or no evidence of a delivery in their health record. CONCLUSION: Combining retrospective data from multiple sites allowed for a large and diverse study population. Collaboration with clinicians in the study design and validation of outcomes ensured that the APEX-IUD study results reflect current United States clinical practice. 
Results from this study will provide clinicians with valuable real-world evidence about risk factors for intrauterine device perforation and expulsion.
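The adjustment strategy described above (Cox regression with confounding controlled by propensity-score overlap weights) can be sketched in a few lines. This is a minimal illustration of the weighting step only, on synthetic data; the covariate and exposure names are invented stand-ins, not APEX-IUD variables, and the Cox model itself is omitted:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for covariates and a binary exposure
# (e.g., breastfeeding at insertion); names are illustrative only.
n = 2000
X = rng.normal(size=(n, 3))                          # covariates
logit = 0.8 * X[:, 0] - 0.5 * X[:, 1]
exposed = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # exposure indicator

# Step 1: propensity score = P(exposed | covariates), near-unpenalized fit.
ps = LogisticRegression(C=1e6, max_iter=1000).fit(X, exposed).predict_proba(X)[:, 1]

# Step 2: overlap weights: exposed units get 1 - PS, unexposed get PS.
# This emphasizes units whose covariates plausibly occur in either group.
w = np.where(exposed == 1, 1 - ps, ps)

# Overlap weighting balances the modeled covariate means between groups.
mean_exposed = np.average(X[exposed == 1], weights=w[exposed == 1], axis=0)
mean_unexposed = np.average(X[exposed == 0], weights=w[exposed == 0], axis=0)
print(np.abs(mean_exposed - mean_unexposed))
```

In practice the resulting weights would be passed to a weighted Cox regression; the point of overlap weights is that they are bounded in (0, 1), so no single unit dominates the analysis.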


Subjects
Breast Feeding , Intrauterine Devices/adverse effects , Postpartum Period , Uterine Perforation/etiology , Adult , Clinical Protocols , Female , Follow-Up Studies , Humans , Intrauterine Device Expulsion , Logistic Models , Middle Aged , Practice Patterns, Physicians' , Research Design , Retrospective Studies , Risk Assessment , Risk Factors , Time Factors , United States/epidemiology , Uterine Perforation/epidemiology
3.
BMC Med Inform Decis Mak ; 21(1): 112, 2021 04 03.
Article in English | MEDLINE | ID: mdl-33812369

ABSTRACT

BACKGROUND: Many patients with atrial fibrillation (AF) remain undiagnosed despite the availability of interventions to reduce stroke risk. Predictive models to date have been limited by their data requirements and theoretical usage. We aimed to develop a model for predicting the 2-year probability of AF diagnosis and implement it as a proof of concept (POC) in a production electronic health record (EHR). METHODS: We used a nested case-control design with data from the Indiana Network for Patient Care. The development cohort came from 2016 to 2017 (outcome period) and 2014 to 2015 (baseline). A separate validation cohort used outcome and baseline periods shifted 2 years before the respective development cohort times. Machine learning approaches were used to build the predictive model. Patients ≥ 18 years, later restricted to age ≥ 40 years, with at least two encounters and no AF during baseline were included. In the 6-week prospective EHR pilot, the model was silently implemented in the production system at a large urban safety-net hospital. Three new and two previous logistic regression models were evaluated using receiver operating characteristic analysis. The number, characteristics, and CHA2DS2-VASc scores of patients identified by the model in the pilot are presented. RESULTS: After restricting age to ≥ 40 years, 31,474 AF cases (mean age, 71.5 years; female 49%) and 22,078 controls (mean age, 59.5 years; female 61%) comprised the development cohort. A 10-variable model using age, acute heart disease, albumin, body mass index, chronic obstructive pulmonary disease, gender, heart failure, insurance, kidney disease, and shock yielded the best performance (C-statistic, 0.80 [95% CI 0.79-0.80]). The model performed well in the validation cohort (C-statistic, 0.81 [95% CI 0.80-0.81]). In the EHR pilot, 7916/22,272 patients (35.5%; mean age, 66 years; female 50%) were identified as at higher risk for AF; 5582 (70%) had a CHA2DS2-VASc score ≥ 2.
CONCLUSIONS: Using variables commonly available in the EHR, we created a predictive model to identify 2-year risk of developing AF in those previously without diagnosed AF. Successful POC implementation of the model in an EHR provided a practical strategy to identify patients who may benefit from interventions to reduce their stroke risk.
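The modeling pattern described above, a logistic regression on routinely available predictors scored by the C-statistic (area under the ROC curve) on a held-out cohort, can be sketched as follows. The data, predictors, and coefficients below are synthetic inventions for illustration, not the study's:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic "development" data: EHR-style predictors (stand-ins for
# age, BMI, heart failure, etc.) and a 2-year outcome indicator.
n = 5000
age = rng.normal(60, 12, n)
bmi = rng.normal(28, 5, n)
hf = rng.binomial(1, 0.1, n)
logit = -9.0 + 0.10 * age + 0.02 * bmi + 1.2 * hf
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))        # AF diagnosed within 2 years

X = np.column_stack([age, bmi, hf])
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)

# C-statistic = area under the ROC curve on the held-out "validation cohort".
c_stat = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"C-statistic: {c_stat:.2f}")
```

A silent EHR deployment, as in the pilot, would score each eligible patient with `model.predict_proba` and log those above a chosen risk threshold without surfacing alerts.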


Subjects
Atrial Fibrillation , Stroke , Adult , Aged , Atrial Fibrillation/diagnosis , Atrial Fibrillation/epidemiology , Electronic Health Records , Female , Humans , Indiana , Middle Aged , Predictive Value of Tests , Prospective Studies , Risk Assessment , Risk Factors , Stroke/diagnosis , Stroke/epidemiology
5.
Cancer ; 123(12): 2338-2351, 2017 Jun 15.
Article in English | MEDLINE | ID: mdl-28211937

ABSTRACT

BACKGROUND: Annual computed tomography (CT) scans are a component of the current standard of care for the posttreatment surveillance of survivors of colorectal cancer (CRC) after curative-intent resection. The authors conducted a retrospective study with the primary aim of assessing patient, physician, and organizational characteristics associated with the receipt of CT surveillance among veterans. METHODS: The Department of Veterans Affairs Central Cancer Registry was used to identify patients diagnosed with AJCC collaborative stage I to III CRC between 2001 and 2009. Patient sociodemographic and clinical (ie, CRC stage and comorbidity) characteristics, provider specialty, and organizational characteristics were measured. Hierarchical multivariable logistic regression models were used to assess the association between patient, provider, and organizational characteristics on receipt of 1) consistently guideline-concordant care (at least 1 CT every 12 months for both of the first 2 years of CRC surveillance) versus no CT receipt and 2) potential overuse (>1 CT every 12 months during the first 2 years of CRC surveillance) of CRC surveillance using CT. The authors also analyzed the impact of the 2005 American Society of Clinical Oncology update in CRC surveillance guidelines on care received over time. RESULTS: For 2263 survivors of stage II/III CRC who were diagnosed after 2005, 19.4% of patients received no surveillance CT, whereas potential overuse occurred in both surveillance years for 14.9% of patients. Guideline-concordant care was associated with younger age, higher stage of disease (stage III vs stage II), and geographic region. In adjusted analyses, younger age and higher stage of disease (stage III vs stage II) were found to be associated with overuse. There was no significant difference in the annual rate of CT scanning noted across time periods (year ≤ 2005 vs year > 2005). 
CONCLUSIONS: Among a minority of veteran survivors of CRC, both underuse and potential overuse of CT surveillance were present. Patient factors, but not provider or organizational characteristics, were found to be significantly associated with patterns of care. The 2005 change in American Society of Clinical Oncology guidelines did not appear to have an impact on rates of surveillance CT. Cancer 2017;123:2338-2351. © 2017 American Cancer Society.
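The concordance and overuse definitions above are mechanical enough to express as a small classifier over one survivor's CT dates. This is a hypothetical helper (the function and its rules of thumb are a sketch of the stated definitions, not the study's actual code); where a patient meets both the "at least 1 per year" and ">1 in a year" criteria, it reports the overuse category:

```python
from datetime import date

def classify_surveillance(surgery_date, ct_dates):
    """Classify 2-year post-resection CT surveillance:
    'overuse'    = more than one CT in either 12-month period,
    'concordant' = exactly one CT in each of the first two 12-month periods,
    'none'       = no CT at all, 'other' otherwise. (Illustrative sketch.)"""
    def months_after(d):
        return (d.year - surgery_date.year) * 12 + (d.month - surgery_date.month)
    yr1 = sum(1 for d in ct_dates if 0 <= months_after(d) < 12)
    yr2 = sum(1 for d in ct_dates if 12 <= months_after(d) < 24)
    if yr1 > 1 or yr2 > 1:
        return "overuse"
    if yr1 == 1 and yr2 == 1:
        return "concordant"
    if yr1 == 0 and yr2 == 0:
        return "none"
    return "other"

print(classify_surveillance(date(2006, 1, 15),
                            [date(2006, 11, 1), date(2007, 6, 1)]))  # -> concordant
```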


Subjects
Adenocarcinoma/diagnostic imaging , Colorectal Neoplasms/diagnostic imaging , Guideline Adherence/statistics & numerical data , Hospitals, Veterans/statistics & numerical data , Neoplasm Recurrence, Local/diagnostic imaging , Registries , Survivors , Tomography, X-Ray Computed/statistics & numerical data , Adenocarcinoma/pathology , Adenocarcinoma/surgery , Adult , Age Factors , Aged , Aged, 80 and over , Colorectal Neoplasms/pathology , Colorectal Neoplasms/surgery , Female , Humans , Logistic Models , Male , Medical Overuse/statistics & numerical data , Middle Aged , Multivariate Analysis , Neoplasm Staging , Practice Guidelines as Topic , Retrospective Studies , United States , United States Department of Veterans Affairs
6.
Crit Care Med ; 45(5): 851-857, 2017 May.
Article in English | MEDLINE | ID: mdl-28263192

ABSTRACT

OBJECTIVES: Delirium severity is independently associated with longer hospital stays, nursing home placement, and death in patients outside the ICU. Delirium severity in the ICU is not routinely measured because the available instruments are difficult to complete in critically ill patients. We designed our study to assess the reliability and validity of a new ICU delirium severity tool, the Confusion Assessment Method for the ICU-7 delirium severity scale. DESIGN: Observational cohort study. SETTING: Medical, surgical, and progressive ICUs of three academic hospitals. PATIENTS: Five hundred eighteen adult (≥ 18 yr) patients. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Patients received the Confusion Assessment Method for the ICU, Richmond Agitation-Sedation Scale, and Delirium Rating Scale-Revised-98 assessments. A 7-point scale (0-7) was derived from responses to the Confusion Assessment Method for the ICU and Richmond Agitation-Sedation Scale items. Confusion Assessment Method for the ICU-7 showed high internal consistency (Cronbach's α = 0.85) and good correlation with Delirium Rating Scale-Revised-98 scores (correlation coefficient = 0.64). Known-groups validity was supported by the separation of mechanically ventilated and nonventilated assessments. Median Confusion Assessment Method for the ICU-7 scores demonstrated good predictive validity with higher odds (odds ratio = 1.47; 95% CI = 1.30-1.66) of in-hospital mortality and lower odds (odds ratio = 0.8; 95% CI = 0.72-0.9) of being discharged home after adjusting for age, race, gender, severity of illness, and chronic comorbidities. Higher Confusion Assessment Method for the ICU-7 scores were also associated with increased length of ICU stay (p = 0.001). CONCLUSIONS: Our results suggest that Confusion Assessment Method for the ICU-7 is a valid and reliable delirium severity measure among ICU patients. 
Further research comparing it to other delirium severity measures, its use in delirium efficacy trials, and real-life implementation is needed to determine its role in research and clinical practice.
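The internal-consistency statistic reported above, Cronbach's α, has a short closed form: α = k/(k−1) · (1 − Σ item variances / variance of the total score). A minimal implementation on toy ratings (invented numbers, not the study's assessments):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Toy data: 5 subjects rated on 4 items (illustrative only).
scores = np.array([[2, 2, 1, 2],
                   [0, 0, 0, 1],
                   [1, 1, 1, 1],
                   [2, 1, 2, 2],
                   [0, 1, 0, 0]])
print(round(cronbach_alpha(scores), 2))
```

Values near the 0.85 reported for the CAM-ICU-7 indicate that the items move together, which is what "high internal consistency" asserts.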


Subjects
Delirium/diagnosis , Intensive Care Units/statistics & numerical data , Psychiatric Status Rating Scales/standards , Severity of Illness Index , Adult , Aged , Attention , Consciousness , Female , Hospitals, University , Humans , Male , Mental Status Schedule , Middle Aged , Patient Discharge , Prospective Studies , Reproducibility of Results , Socioeconomic Factors
7.
Crit Care Med ; 44(9): 1727-34, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27276344

ABSTRACT

OBJECTIVES: Delirium is a highly prevalent syndrome of acute brain dysfunction among critically ill patients that has been linked to multiple risk factors, such as age, preexisting cognitive impairment, and use of sedatives; but to date, the relationship between race and delirium is unclear. We conducted this study to identify whether African-American race is a risk factor for developing ICU delirium. DESIGN: A prospective cohort study. SETTING: Medical and surgical ICUs of a university-affiliated, safety net hospital in Indianapolis, IN. PATIENTS: A total of 2,087 consecutive admissions with 1,008 African Americans admitted to the ICU services from May 2009 to August 2012. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Incident delirium was defined as first positive Confusion Assessment Method for the ICU result after an initial negative Confusion Assessment Method for the ICU; and prevalent delirium was defined as positive Confusion Assessment Method for the ICU on first Confusion Assessment Method for the ICU assessment. The overall incident delirium rate in African Americans was 8.7% compared with 10.4% in Caucasians (p = 0.26). The prevalent delirium rate was 14% in both African Americans and Caucasians (p = 0.95). Significant age and race interactions were detected for incident delirium (p = 0.02) but not for prevalent delirium (p = 0.3). The hazard ratio for incident delirium for African Americans in the 18-49 years age group compared with Caucasians of similar age was 0.4 (0.1-0.9). The hazard and odds ratios for incident and prevalent delirium in other groups were not different. CONCLUSIONS: African-American race does not confer any additional risk for developing incident or prevalent delirium in the ICU. Instead, younger African Americans tend to have lower rates of incident delirium compared with Caucasians of similar age.
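The incident/prevalent definitions above map directly onto a rule over a patient's sequence of CAM-ICU results. A hypothetical helper, written only to make the definitions concrete:

```python
def classify_delirium(cam_icu_results):
    """Classify a patient's ordered CAM-ICU results (True = positive):
    'prevalent' if the first assessment is positive, 'incident' if a
    positive follows an initial negative, otherwise 'never'."""
    if not cam_icu_results:
        return "unassessed"
    if cam_icu_results[0]:
        return "prevalent"
    if any(cam_icu_results[1:]):
        return "incident"
    return "never"

print(classify_delirium([False, False, True]))  # -> incident
print(classify_delirium([True, False]))         # -> prevalent
```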


Subjects
Black or African American/psychology , Critical Care , Delirium/ethnology , Adolescent , Adult , Aged , Critical Illness , Delirium/diagnosis , Female , Humans , Incidence , Male , Middle Aged , Outcome Assessment, Health Care , Prevalence , Prospective Studies , Risk Factors , Young Adult
8.
Crit Care Med ; 42(12): e791-5, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25402299

ABSTRACT

OBJECTIVES: Mechanically ventilated critically ill patients receive significant amounts of sedatives and analgesics that increase their risk of developing coma and delirium. We evaluated the impact of a "Wake-up and Breathe Protocol" at our local ICU on sedation and delirium. DESIGN: A pre/post implementation study design. SETTING: A 22-bed mixed surgical and medical ICU. PATIENTS: Seven hundred two consecutive mechanically ventilated ICU patients from June 2010 to January 2013. INTERVENTIONS: Implementation of daily paired spontaneous awakening trials (daily sedation vacation plus spontaneous breathing trials) as a quality improvement project. MEASUREMENTS AND MAIN RESULTS: After implementation of our program, there was an increase in the mean Richmond Agitation Sedation Scale scores on weekdays of 0.88 (p < 0.0001) and an increase in the mean Richmond Agitation Sedation Scale scores on weekends of 1.21 (p < 0.0001). After adjusting for age, race, gender, severity of illness, primary diagnosis, and ICU, the incidence and prevalence of delirium did not change post implementation of the protocol (incidence: 23% pre vs 19.6% post; p = 0.40; prevalence: 66.7% pre vs 55.3% post; p = 0.06). The combined prevalence of delirium/coma decreased from 90.8% pre protocol implementation to 85% postimplementation (odds ratio, 0.505; 95% CI, 0.299-0.853; p = 0.01). CONCLUSIONS: Implementing a "Wake Up and Breathe Program" resulted in reduced sedation among critically ill mechanically ventilated patients but did not change the incidence or prevalence of delirium.


Subjects
Critical Illness , Deep Sedation/methods , Delirium/prevention & control , Respiration, Artificial/methods , Respiration , Adult , Aged , Clinical Protocols , Coma/prevention & control , Female , Humans , Incidence , Intensive Care Units , Length of Stay , Male , Middle Aged
9.
Stat Med ; 33(3): 500-13, 2014 Feb 10.
Article in English | MEDLINE | ID: mdl-24038175

ABSTRACT

In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the difference in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction with an over-sample of subjects with discordant results and under-sample of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or the prevalence of the true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms.
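The parameters of primary interest named above, differences in sensitivity, specificity, and positive predictive value between two classification rules, are simple functions of each rule's 2×2 table against the gold standard. A minimal sketch with invented counts (the paper's optimal two-phase sampling design itself is beyond this snippet):

```python
def rates(tp, fp, fn, tn):
    """Sensitivity, specificity, and PPV from a 2x2 confusion table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return sens, spec, ppv

# Hypothetical full-gold-standard counts for two matching algorithms
# applied to the same candidate record pairs (illustrative numbers only).
sens_a, spec_a, ppv_a = rates(tp=450, fp=50, fn=50, tn=9450)
sens_b, spec_b, ppv_b = rates(tp=425, fp=25, fn=75, tn=9475)

# Differences between the two rules: the estimands whose variance the
# optimal sampling design minimizes when the gold standard is subsampled.
print(f"d_sens = {sens_a - sens_b:+.3f}")
print(f"d_spec = {spec_a - spec_b:+.3f}")
print(f"d_ppv  = {ppv_a - ppv_b:+.3f}")
```

The paper's result, that oversampling pairs on which the two rules disagree reduces the variance of these difference estimates, makes intuitive sense here: concordant pairs contribute identically to both rules' tables and so carry little information about the differences.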


Subjects
Algorithms , Biometry/methods , Classification/methods , Models, Statistical , Predictive Value of Tests , Female , Humans , Male
10.
Proc Natl Acad Sci U S A ; 108(46): E1146-55, 2011 Nov 15.
Article in English | MEDLINE | ID: mdl-22006328

ABSTRACT

Autosomal dominant hypophosphatemic rickets (ADHR) is unique among the disorders involving Fibroblast growth factor 23 (FGF23) because individuals with R176Q/W and R179Q/W mutations in the FGF23 (176)RXXR(179)/S(180) proteolytic cleavage motif can cycle from unaffected status to delayed onset of disease. This onset may occur in physiological states associated with iron deficiency, including puberty and pregnancy. To test the role of iron status in development of the ADHR phenotype, WT and R176Q-Fgf23 knock-in (ADHR) mice were placed on control or low-iron diets. Both the WT and ADHR mice receiving low-iron diet had significantly elevated bone Fgf23 mRNA. WT mice on a low-iron diet maintained normal serum intact Fgf23 and phosphate metabolism, with elevated serum C-terminal Fgf23 fragments. In contrast, the ADHR mice on the low-iron diet had elevated intact and C-terminal Fgf23 with hypophosphatemic osteomalacia. We used in vitro iron chelation to isolate the effects of iron deficiency on Fgf23 expression. We found that iron chelation in vitro resulted in a significant increase in Fgf23 mRNA that was dependent upon Mapk. Thus, unlike other syndromes of elevated FGF23, our findings support the concept that late-onset ADHR is the product of gene-environment interactions whereby the combined presence of an Fgf23-stabilizing mutation and iron deficiency can lead to ADHR.


Subjects
Familial Hypophosphatemic Rickets/genetics , Fibroblast Growth Factors/genetics , Iron Deficiencies , Anemia, Iron-Deficiency/complications , Animals , Familial Hypophosphatemic Rickets/physiopathology , Female , Fibroblast Growth Factor-23 , Gene-Environment Interaction , Glucuronidase/metabolism , Hypophosphatemia/genetics , Klotho Proteins , MAP Kinase Signaling System , Male , Mice , Mice, Transgenic , Osteocytes/cytology , Osteomalacia/genetics , Phenotype , Protein Structure, Tertiary , Rats
11.
Med Care ; 51(12): 1040-7, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24226304

ABSTRACT

BACKGROUND: Among physicians who perform endoscopic retrograde cholangiopancreatography (ERCP), the relationship between procedure volume and outcome is unknown. OBJECTIVE: Quantify the ERCP volume-outcome relationship by measuring provider-specific failure rates, hospitalization rates, and other quality measures. RESEARCH DESIGN: Retrospective cohort. SUBJECTS: A total of 16,968 ERCPs performed by 130 physicians between 2001 and 2011, identified in the Indiana Network for Patient Care. MEASURES: Physicians were classified by their average annual Indiana Network for Patient Care volume and stratified into low (<25/y) and high (≥25/y). Outcomes included failed procedures, defined as repeat ERCP, percutaneous transhepatic cholangiography, or surgical exploration of the bile duct ≤7 days after the index procedure; hospitalization rates; and 30-day mortality. RESULTS: Among 15,514 index ERCPs, there were 1163 (7.5%) failures; the failure rate was higher among low-volume (9.5%) compared with high-volume (5.7%) providers (P<0.001). A second ERCP within 7 days (a subgroup of failure rate) occurred more frequently when the original ERCP was performed by a low-volume (4.1%) versus a high-volume physician (2.3%, P=0.013). Patients were more frequently hospitalized within 24 hours when the ERCP was performed by a low-volume (28.3%) versus high-volume physician (14.8%, P=0.002). Mortality within 30 days was similar (low=1.9%, high=1.9%). Among low-volume physicians, after adjustment, the odds of having a failed procedure decreased 3.3% (95% confidence interval, 1.6%-5.0%, P<0.001) with each additional ERCP performed per year. CONCLUSIONS: Lower provider volume is associated with a higher failure rate for ERCP and a greater need for postprocedure hospitalization.


Subjects
Cholangiopancreatography, Endoscopic Retrograde/statistics & numerical data , Gastroenterology/statistics & numerical data , Adult , Aged , Female , Health Services Research , Humans , Indiana , Insurance Claim Review , Length of Stay , Male , Middle Aged , Retrospective Studies , Treatment Outcome
12.
BMC Med Inform Decis Mak ; 13: 97, 2013 Aug 30.
Article in English | MEDLINE | ID: mdl-24001000

ABSTRACT

BACKGROUND: Methods for linking real-world healthcare data often use a latent class model, where the latent, or unknown, class is the true match status of candidate record-pairs. This commonly used model assumes that agreement patterns among multiple fields within a latent class are independent. When this assumption is violated, various approaches, including the most commonly proposed loglinear models, have been suggested to account for conditional dependence. METHODS: We present a step-by-step guide to identify important dependencies between fields through a correlation residual plot and demonstrate how they can be incorporated into loglinear models for record linkage. This method is applied to healthcare data from the patient registry for a large county health department. RESULTS: Our method could be readily implemented using standard software (with code supplied) to produce an overall better model fit as measured by BIC and deviance. Finding the most parsimonious model is known to reduce bias in parameter estimates. CONCLUSIONS: This novel approach identifies and accommodates conditional dependence in the context of record linkage. The conditional dependence model is recommended for routine use due to its flexibility for incorporating conditional dependence and easy implementation using existing software.
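The baseline latent class model referenced above (field agreements independent given true match status, in the Fellegi-Sunter tradition) can be fit with a short EM algorithm. This sketch uses synthetic agreement patterns, not the county registry data, and fits only the conditional-independence model that the paper's loglinear approach extends:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic agreement indicators on 3 fields for candidate record pairs.
n, pi_true = 5000, 0.3                 # true match proportion
m_true = np.array([0.95, 0.90, 0.85])  # P(field agrees | true match)
u_true = np.array([0.20, 0.10, 0.15])  # P(field agrees | non-match)
is_match = rng.random(n) < pi_true
probs = np.where(is_match[:, None], m_true, u_true)
g = (rng.random((n, 3)) < probs).astype(float)   # observed agreements

# EM for the conditional-independence latent class model.
pi, m, u = 0.5, np.full(3, 0.8), np.full(3, 0.3)
for _ in range(200):
    # E-step: posterior probability that each pair is a true match.
    lm = pi * np.prod(m**g * (1 - m)**(1 - g), axis=1)
    lu = (1 - pi) * np.prod(u**g * (1 - u)**(1 - g), axis=1)
    w = lm / (lm + lu)
    # M-step: update the class proportion and per-field agreement rates.
    pi = w.mean()
    m = (w[:, None] * g).sum(axis=0) / w.sum()
    u = ((1 - w)[:, None] * g).sum(axis=0) / (1 - w).sum()

print(round(pi, 2), np.round(m, 2), np.round(u, 2))
```

When fields are in fact conditionally dependent, residual correlations among the `g` columns within each latent class are exactly the diagnostic the paper's correlation residual plot surfaces.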


Subjects
Medical Record Linkage/standards , Models, Statistical , Humans
13.
J Gen Intern Med ; 27(5): 561-7, 2012 May.
Article in English | MEDLINE | ID: mdl-22302355

ABSTRACT

BACKGROUND: Approximately 40% of hospitalized older adults have cognitive impairment (CI) and are more prone to hospital-acquired complications. The Institute of Medicine suggests using health information technology to improve the overall safety and quality of the health care system. OBJECTIVE: Evaluate the efficacy of a clinical decision support system (CDSS) to improve the quality of care for hospitalized older adults with CI. DESIGN: A randomized controlled clinical trial. SETTING: A public hospital in Indianapolis. POPULATION: A total of 998 hospitalized older adults were screened for CI, and 424 patients (225 intervention, 199 control) with CI were enrolled in the trial (mean age, 74.8 years; 59% African American; 68% female). INTERVENTION: A CDSS alerted physicians to the presence of CI, recommended early referral for a geriatric consult, and suggested discontinuing Foley catheterization, physical restraints, and anticholinergic drugs. MEASUREMENTS: Orders for a geriatric consult and discontinuation orders for Foley catheterization, physical restraints, or anticholinergic drugs. RESULTS: Using intent-to-treat analyses, there were no differences between the intervention and control groups in geriatric consult orders (56% vs 49%, P = 0.21); discontinuation orders for Foley catheterization (61.7% vs 64.6%, P = 0.86); physical restraints (4.8% vs 0%, P = 0.86); or anticholinergic drugs (48.9% vs 31.2%, P = 0.11). CONCLUSION: A simple screening program for CI followed by a CDSS did not change physician prescribing behaviors or improve the process of care for hospitalized older adults with CI.


Subjects
Cognition Disorders/therapy , Decision Support Systems, Clinical , Geriatric Assessment/methods , Quality of Health Care , Adult , Aged , Aged, 80 and over , Female , Hospitalization , Humans , Intention to Treat Analysis , Male , Surveys and Questionnaires
14.
J Nutr ; 140(11): 2030S-45S, 2010 Nov.
Article in English | MEDLINE | ID: mdl-20881084

ABSTRACT

A roundtable to discuss monitoring of serum 25-hydroxyvitamin D [25(OH)D] in the NHANES was held in late July 2009. Topics included the following: 1) options for dealing with assay fluctuations in serum 25(OH)D in the NHANES conducted between 1988 and 2006; 2) approaches for transitioning between the RIA used in the NHANES between 1988 and 2006 to the liquid chromatography tandem MS (LC-MS/MS) measurement procedure to be used in NHANES 2007 and later; 3) approaches for integrating the recently available standard reference material for vitamin D in human serum (SRM 972) from the National Institute of Standards and Technology (NIST) into the NHANES; 4) questions regarding whether the C-3 epimer of 25-hydroxyvitamin D3 [3-epi-25(OH)D3] should be measured in NHANES 2007 and later; and 5) identification of research and educational needs. The roundtable experts agreed that the NHANES data needed to be adjusted to control for assay fluctuations and offered several options for addressing this issue. The experts suggested that the LC-MS/MS measurement procedure developed by NIST could serve as a higher order reference measurement procedure. They noted the need for a commutability study for the recently released NIST SRM 972 across a range of measurement procedures. They suggested that federal agencies and professional organizations work with manufacturers to improve the quality and comparability of measurement procedures across all laboratories. The experts noted the preliminary nature of the evidence of the 3-epi-25(OH)D3 but felt that it should be measured in 2007 NHANES and later.


Subjects
25-Hydroxyvitamin D 2/blood , Calcifediol/blood , Nutrition Surveys , 25-Hydroxyvitamin D 2/analogs & derivatives , 25-Hydroxyvitamin D 2/chemistry , 25-Hydroxyvitamin D 2/standards , Calcifediol/chemistry , Calcifediol/standards , Chromatography, High Pressure Liquid , Humans , Reference Standards , Reproducibility of Results , Tandem Mass Spectrometry
15.
Ann Emerg Med ; 56(6): 623-9, 2010 Dec.
Article in English | MEDLINE | ID: mdl-20452703

ABSTRACT

STUDY OBJECTIVE: Emergency physicians prescribe several discharge medications that require dosage adjustment for patients with renal disease. The hypothesis for this research was that decision support in a computerized physician order entry system would reduce the rate of excessive medication dosing for patients with renal impairment. METHODS: This was a randomized, controlled trial in an academic emergency department (ED), in which computerized physician order entry was used to write all prescriptions for patients being discharged from the ED. The sample included 42 physicians who were randomized to the intervention (21 physicians) or control (21 physicians) group. The intervention was decision support that provided dosing recommendations for targeted medications for patients aged 18 years and older when the patient's estimated creatinine clearance level was below the threshold for dosage adjustment. The primary outcome was the proportion of targeted medications that were excessively dosed. RESULTS: For 2,783 (46%) of the 6,015 patient visits, the decision support had sufficient information to estimate the patient's creatinine clearance level. The average age of these patients was 46 years, 1,768 (64%) were women, and 1,523 (55%) were black. Decision support was provided 73 times to physicians in the intervention group, who excessively dosed 31 (43%) prescriptions. In comparison, control physicians excessively dosed a significantly larger proportion of medications: 34 of 46, 74% (effect size=31%; 95% confidence interval 14% to 49%; P=.001). CONCLUSION: Emergency physicians often prescribed excessive doses of medications that require dosage adjustment for renal impairment. Computerized physician order entry with decision support significantly reduced excessive dosing of targeted medications.
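The decision support described above hinges on estimating creatinine clearance from routinely available patient data. The abstract does not say which estimating equation the system used; the Cockcroft-Gault formula is a common choice for renal drug dosing, so the sketch below uses it as an assumption, and the 50 mL/min adjustment threshold is illustrative rather than taken from the study:

```python
def cockcroft_gault(age_years: float, weight_kg: float,
                    serum_creatinine_mg_dl: float, female: bool) -> float:
    """Estimated creatinine clearance in mL/min (Cockcroft-Gault equation)."""
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    # The standard Cockcroft-Gault correction factor for women is 0.85.
    return crcl * 0.85 if female else crcl

def needs_dose_adjustment(crcl_ml_min: float,
                          threshold_ml_min: float = 50.0) -> bool:
    """Flag a renally cleared drug for dose review when clearance is low.

    The threshold is a hypothetical example; real thresholds vary per drug.
    """
    return crcl_ml_min < threshold_ml_min

# Example: 46-year-old woman (the cohort's mean age) with elevated creatinine.
crcl = cockcroft_gault(age_years=46, weight_kg=70,
                       serum_creatinine_mg_dl=2.0, female=True)
```

A system like the one studied would run such a check at prescription time and surface a dosing recommendation only when the flag fires.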


Subjects
Decision Support Systems, Clinical , Emergency Service, Hospital , Renal Insufficiency/drug therapy , Adolescent , Adult , Aged , Creatinine/blood , Emergency Service, Hospital/organization & administration , Emergency Service, Hospital/standards , Female , Humans , Inappropriate Prescribing/statistics & numerical data , Male , Medical Order Entry Systems/organization & administration , Medical Order Entry Systems/standards , Medication Errors/prevention & control , Middle Aged
16.
Pain Med ; 11(7): 1072-7, 2010 Jul.
Article in English | MEDLINE | ID: mdl-20642733

ABSTRACT

OBJECTIVES: Among patients who arrive at an emergency department (ED) with pain, over half remain in moderate or severe pain at ED discharge. Our objectives were to identify ED physicians' prescribing patterns when discharging patients with common musculoskeletal conditions and to determine if disparities in opioid prescribing exist. DESIGN: Five-year retrospective investigation. SETTING: An urban, academic ED with approximately 100,000 annual visits, where physicians write discharge prescriptions, including over-the-counter medications, using a computerized order entry system. PATIENTS: Adult patients who were discharged home from an ED with fractures (clavicle or long bone fractures) or non-fracture musculoskeletal diagnoses (sprains, strains, sciatica, or back pain). OUTCOME MEASURES: Patient demographics and pain medications prescribed for use at home. RESULTS: The study sample included 13,335 patients with a mean age of 39 years. Half were female; 52% were white; 39% were black; and 7% were Hispanic. Among fracture patients, 77% received an opioid prescription, 2% received a non-opioid prescription, and 21% received no analgesic prescription. The percentages for patients with non-fracture diagnoses were 65% (opioids), 18% (non-opioid analgesics), and 17% (no analgesic). Patients aged 80 years and older were significantly less likely to receive opioid prescriptions. Although prescribing by race for fractures was similar, significantly fewer black and Hispanic patients with non-fracture diagnoses received opioid prescriptions, compared with white patients. CONCLUSIONS: Approximately one fifth of patients in the fracture and non-fracture groups did not receive an analgesic prescription. Age greater than 80 years and minority race/ethnic status were associated with lower rates of opioid prescribing.


Subjects
Analgesics/therapeutic use , Emergency Service, Hospital , Pain/drug therapy , Adult , Aged, 80 and over , Ethnicity , Female , Fractures, Bone/drug therapy , Humans , Indiana , Practice Patterns, Physicians' , Retrospective Studies
17.
Adv Ther ; 37(1): 552-565, 2020 01.
Article in English | MEDLINE | ID: mdl-31828610

ABSTRACT

INTRODUCTION: Most cases of small cell lung cancer (SCLC) are diagnosed at an advanced stage. The objective of this study was to investigate patient characteristics, survival, chemotherapy treatments, and health care use after a diagnosis of advanced SCLC in subjects enrolled in a health system network. METHODS: This was a retrospective cohort study of patients aged ≥ 18 years who either were diagnosed with stage III/IV SCLC or who progressed to advanced SCLC during the study period (2005-2015). Patients identified from the Indiana State Cancer Registry and the Indiana Network for Patient Care were followed from their advanced diagnosis index date until the earliest date of the last visit, death, or the end of the study period. Patient characteristics, survival, chemotherapy regimens, associated health care visits, and durations of treatment were reported. Time-to-event analyses were performed using the Kaplan-Meier method. RESULTS: A total of 498 patients with advanced SCLC were identified, of whom 429 were newly diagnosed with advanced disease and 69 progressed to advanced disease during the study period. Median survival from the index diagnosis date was 13.2 months. First-line (1L) chemotherapy was received by 464 (93.2%) patients, most commonly carboplatin/etoposide, received by 213 (45.9%) patients, followed by cisplatin/etoposide (20.7%). Ninety-five (20.5%) patients progressed to second-line (2L) chemotherapy, where topotecan monotherapy (20.0%) was the most common regimen, followed by carboplatin/etoposide (14.7%). Median survival was 10.1 months from 1L initiation and 7.7 months from 2L initiation. CONCLUSION: Patients in a regional health system network diagnosed with advanced SCLC were treated with chemotherapy regimens similar to those in earlier reports based on SEER-Medicare data. Survival of patients with advanced SCLC was poor, illustrating the lack of progress over several decades in the treatment of this lethal disease and highlighting the need for improved treatments.
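The time-to-event analyses above rely on the Kaplan-Meier method, which steps the survival estimate down at each observed death while censored patients simply leave the risk set without contributing a step. A bare-bones sketch of the estimator, on made-up follow-up times rather than the study's data, might look like:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.

    times:  follow-up durations (e.g., months from the index date)
    events: 1 if the subject died at that time, 0 if censored
    Returns a list of (time, survival probability) pairs, one per
    distinct event time.
    """
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = 0
        removed = 0
        # Group all subjects sharing this follow-up time.
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            removed += 1
            i += 1
        if deaths:
            # Multiply in the conditional survival for this event time.
            surv *= 1 - deaths / at_risk
            curve.append((t, surv))
        # Both deaths and censored subjects leave the risk set.
        at_risk -= removed
    return curve

# Toy cohort: deaths at months 1, 2, and 4; one censoring at month 3.
curve = kaplan_meier([1, 2, 3, 4], [1, 1, 0, 1])
```

The median survival the abstract reports corresponds to the first time at which this curve drops to 0.5 or below.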


Subjects
Antineoplastic Combined Chemotherapy Protocols/administration & dosage , Lung Neoplasms/drug therapy , Small Cell Lung Carcinoma/drug therapy , Adult , Aged , Carboplatin/therapeutic use , Cisplatin/administration & dosage , Epirubicin/administration & dosage , Etoposide/administration & dosage , Female , Humans , Lung Neoplasms/mortality , Male , Medicare , Middle Aged , Retrospective Studies , Small Cell Lung Carcinoma/mortality , Survival Analysis , Treatment Outcome , United States
18.
Calcif Tissue Int ; 84(2): 97-102, 2009 Feb.
Article in English | MEDLINE | ID: mdl-19093065

ABSTRACT

Phenotypic variation in bone mineral density (BMD) among healthy adults is influenced by both genetic and environmental factors. Sequence variations in the adenylate cyclase 10 (ADCY10) gene, which is also called soluble adenylate cyclase, have previously been associated with low spinal BMD in hypercalciuric patients. Since ADCY10 is located in the region linked to spinal BMD in our previous linkage analysis, we tested whether polymorphisms in this gene are also associated with normal BMD variation in healthy adults. Sixteen single-nucleotide polymorphisms (SNPs) distributed throughout ADCY10 were genotyped in two healthy groups of American whites: 1692 premenopausal women and 715 men. Statistical analyses were performed in the two groups to test for association between these SNPs and the femoral neck and lumbar spine areal BMD. We observed significant evidence of association (p < 0.01), with one SNP each in men and women. Genotypes at these SNPs accounted for <1% of hip BMD variation in men but 1.5% of spinal BMD in women. However, adjacent SNPs did not corroborate the association in either men or women. In conclusion, we found a modest association between an ADCY10 polymorphism and the spinal areal BMD in premenopausal white women.


Subjects
Adenylyl Cyclases/genetics , Bone Density/genetics , Polymorphism, Single Nucleotide , Adult , Female , Genotype , Humans , Linkage Disequilibrium , Male , Spine/metabolism
19.
J Am Med Inform Assoc ; 16(2): 196-202, 2009.
Article in English | MEDLINE | ID: mdl-18952934

ABSTRACT

OBJECTIVES: Only half of consultants' medical recommendations are implemented. We created a tool that lets referring providers review and implement electronic recommendations made by consultants, with the hypothesis that facilitation with our tool could improve implementation. MEASUREMENTS: The tool was piloted among geriatrics consultants and hospitalists. Pre-post evaluation was done with control (before pilot; N=20) and intervention (after pilot; N=20) patients. Consultants wrote notes containing recommendations for all study patients, and entered electronic recommendations only for intervention patients. We analyzed all recommendations and surveyed hospitalists. RESULTS: A total of 249 recommendations were made for intervention patients versus 192 for controls (p<0.05). Of all recommendations about intervention patients, 78% were implemented, compared to 59% for controls (p=0.01). Of the intervention recommendations, 77% were entered electronically using our tool; of these, 86% were implemented. All 24 survey respondents indicated that the system improved quality, saved time, and should be expanded. CONCLUSION: Consultant recommendations were implemented 30% more often when there was electronic facilitation of recommendations.
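As a back-of-the-envelope check on the headline comparison above (78% of 249 intervention recommendations implemented vs. 59% of 192 controls), a naive two-proportion z-test can be sketched as follows. The counts here are reconstructed from the rounded percentages, and the paper's reported p-value presumably comes from its own analysis (possibly adjusted, e.g., for clustering by physician), so this is illustrative only:

```python
import math

def two_proportion_z(successes_a: int, n_a: int,
                     successes_b: int, n_b: int) -> float:
    """Pooled two-proportion z statistic for H0: p_a == p_b."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Counts reconstructed from the abstract's rounded percentages:
# ~194 of 249 intervention recommendations implemented vs. ~113 of 192 controls.
z = two_proportion_z(194, 249, 113, 192)
# z is well beyond the 1.96 threshold for significance at p < 0.05.
```

The sign and magnitude of z agree with the abstract's conclusion that electronic facilitation substantially increased implementation.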


Subjects
Consultants , Decision Support Systems, Clinical , Medical Order Entry Systems , Academic Medical Centers , Attitude of Health Personnel , Geriatrics , Hospitalists , Hospitals, Urban , Humans , Midwestern United States , Pilot Projects , Practice Patterns, Physicians' , Referral and Consultation , User-Computer Interface
20.
Hosp Pract (1995) ; 47(1): 42-45, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30409047

ABSTRACT

BACKGROUND: Rapid response teams (RRTs) improve mortality by intervening in the hours preceding arrest. Implementation of these teams varies across institutions. SETTING AND DESIGN: Our health-care system has two different RRT models at two hospitals: Hospital A does not utilize a proactive rounder while Hospital B does. We studied the patterns of RRT calls at each hospital focusing on the differences between night and day and during nursing shift transitions. RESULTS: The presence of proactive surveillance appeared to be associated with an increased total number of RRT calls with more than twice as many calls made at the smaller Hospital B than Hospital A. Hospital B had more calls in the daytime compared to the nighttime. Both hospitals showed a surge in the night-to-day shift transition (7-8am) compared to the preceding nighttime. Hospital A additionally showed a surge in calls during the day-to-night shift transition (7-8pm) compared to the preceding daytime. CONCLUSIONS: Differences in the diurnal patterns of RRT activation exist between hospitals even within the same system. As a continuously learning system, each hospital should consider tracking these patterns to identify their unique vulnerabilities. More calls are noted between 7-8am compared to the overnight hours. This may represent the reestablishment of the 'afferent' arm of the RRT as the hospital returns to daytime staffing and activity. Factors that influence the impact of proactive rounding on RRT performance may deserve further study.


Subjects
Emergency Treatment/standards , Heart Arrest/therapy , Hospital Rapid Response Team/standards , Intensive Care Units/standards , Night Care/standards , Hospitalization/statistics & numerical data , Humans , Outcome Assessment, Health Care