Results 1 - 20 of 80
1.
Ann Surg Oncol ; 30(5): 2883-2894, 2023 May.
Article in English | MEDLINE | ID: mdl-36749504

ABSTRACT

BACKGROUND: Measures taken to address the COVID-19 pandemic interrupted routine diagnosis and care for breast cancer. The aim of this study was to characterize the effects of the pandemic on breast cancer care in a statewide cohort. PATIENTS AND METHODS: Using data from a large health information exchange, we retrospectively analyzed the timing of breast cancer screening, and identified a cohort of newly diagnosed patients with any stage of breast cancer to further assess the information available about their surgical treatments. We compared data for four subgroups: pre-lockdown (preLD) 25 March to 16 June 2019; lockdown (LD) 23 March to 3 May 2020; reopening (RO) 4 May to 14 June 2020; and post-lockdown (postLD) 22 March to 13 June 2021. RESULTS: During LD and RO, screening mammograms in the cohort decreased by 96.3% and 36.2%, respectively. Overall breast cancer diagnosis and surgery volumes decreased by up to 38.7%, and the median time to surgery was prolonged from 1.5 months to 2.4 months for LD and 1.8 months for RO. Interestingly, higher mean DCIS diagnosis volume (5.0 per week vs. 3.1 per week, p < 0.05) and surgery volume (14.8 vs. 10.5, p < 0.05) were found for postLD compared with preLD, while median time to surgery was shorter (1.2 months vs. 1.5 months, p < 0.0001). However, postLD average weekly screening and diagnostic mammogram volumes did not fully recover to preLD levels (2055.3 vs. 2326.2, p < 0.05; 574.2 vs. 624.1, p < 0.05). CONCLUSIONS: Breast cancer diagnosis and treatment patterns were interrupted during the lockdown and remained altered 1 year later. Screening in primary care should be expanded to mitigate possible longer-term effects of these interruptions.


Subjects
Breast Neoplasms, COVID-19, Health Information Exchange, Humans, Female, Breast Neoplasms/diagnosis, Breast Neoplasms/epidemiology, Breast Neoplasms/surgery, COVID-19/epidemiology, Pandemics, Retrospective Studies, Early Detection of Cancer, Communicable Disease Control, COVID-19 Testing
3.
BMC Med Inform Decis Mak ; 21(1): 112, 2021 04 03.
Article in English | MEDLINE | ID: mdl-33812369

ABSTRACT

BACKGROUND: Many patients with atrial fibrillation (AF) remain undiagnosed despite availability of interventions to reduce stroke risk. Predictive models to date are limited by data requirements and theoretical usage. We aimed to develop a model for predicting the 2-year probability of AF diagnosis and implement it as proof-of-concept (POC) in a production electronic health record (EHR). METHODS: We used a nested case-control design with data from the Indiana Network for Patient Care. The development cohort came from 2016 to 2017 (outcome period) and 2014 to 2015 (baseline). A separate validation cohort used outcome and baseline periods shifted 2 years before the respective development cohort times. Machine learning approaches were used to build the predictive model. Patients ≥ 18 years, later restricted to age ≥ 40 years, with at least two encounters and no AF during baseline, were included. In the 6-week EHR prospective pilot, the model was silently implemented in the production system at a large safety-net urban hospital. Three new and two previous logistic regression models were evaluated using receiver operating characteristic analysis. Number, characteristics, and CHA2DS2-VASc scores of patients identified by the model in the pilot are presented. RESULTS: After restricting age to ≥ 40 years, 31,474 AF cases (mean age, 71.5 years; female 49%) and 22,078 controls (mean age, 59.5 years; female 61%) comprised the development cohort. A 10-variable model using age, acute heart disease, albumin, body mass index, chronic obstructive pulmonary disease, gender, heart failure, insurance, kidney disease, and shock yielded the best performance (C-statistic, 0.80 [95% CI 0.79-0.80]). The model performed well in the validation cohort (C-statistic, 0.81 [95% CI 0.8-0.81]). In the EHR pilot, 7916/22,272 (35.5%; mean age, 66 years; female 50%) were identified as higher risk for AF; 5582 (70%) had CHA2DS2-VASc score ≥ 2.
CONCLUSIONS: Using variables commonly available in the EHR, we created a predictive model to identify 2-year risk of developing AF in those previously without diagnosed AF. Successful POC implementation of the model in an EHR provided a practical strategy to identify patients who may benefit from interventions to reduce their stroke risk.
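The pilot stratified flagged patients by their CHA2DS2-VASc score, with a score ≥ 2 suggesting possible benefit from stroke-prevention interventions. As an illustrative sketch only (not the study's code), the standard published scoring rules can be expressed as:

```python
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 stroke_tia, vascular_disease):
    """Compute the CHA2DS2-VASc stroke-risk score (range 0-9)."""
    score = 0
    score += 1 if chf else 0                              # C: congestive heart failure
    score += 1 if hypertension else 0                     # H: hypertension
    score += 2 if age >= 75 else (1 if age >= 65 else 0)  # A2: age >= 75; A: 65-74
    score += 1 if diabetes else 0                         # D: diabetes mellitus
    score += 2 if stroke_tia else 0                       # S2: prior stroke/TIA
    score += 1 if vascular_disease else 0                 # V: vascular disease
    score += 1 if female else 0                           # Sc: sex category (female)
    return score
```

For example, a 70-year-old woman with hypertension scores 3 (age 65-74 + female sex + hypertension), which crosses the ≥ 2 threshold used in the pilot.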


Subjects
Atrial Fibrillation, Stroke, Adult, Aged, Atrial Fibrillation/diagnosis, Atrial Fibrillation/epidemiology, Electronic Health Records, Female, Humans, Indiana, Middle Aged, Predictive Value of Tests, Prospective Studies, Risk Assessment, Risk Factors, Stroke/diagnosis, Stroke/epidemiology
4.
Am J Obstet Gynecol ; 224(6): 599.e1-599.e18, 2021 06.
Article in English | MEDLINE | ID: mdl-33460585

ABSTRACT

BACKGROUND: Intrauterine devices are effective and safe, long-acting reversible contraceptives, but the risk of uterine perforation occurs with an estimated incidence of 1 to 2 per 1000 insertions. The European Active Surveillance Study for Intrauterine Devices, a European prospective observational study that enrolled 61,448 participants (2006-2012), found that women breastfeeding at the time of device insertion or with the device inserted at ≤36 weeks after delivery had a higher risk of uterine perforation. The Association of Uterine Perforation and Expulsion of Intrauterine Device (APEX-IUD) study was a Food and Drug Administration-mandated study designed to reflect current United States clinical practice. The aims of the APEX-IUD study were to evaluate the risk of intrauterine device-related uterine perforation and device expulsion among women who were breastfeeding or within 12 months after delivery at insertion. OBJECTIVE: We aimed to describe the APEX-IUD study design, methodology, and analytical plan and present population characteristics, size of risk factor groups, and duration of follow-up. STUDY DESIGN: The APEX-IUD study was a retrospective cohort study conducted in 4 organizations with access to electronic health records: Kaiser Permanente Northern California, Kaiser Permanente Southern California, Kaiser Permanente Washington, and Regenstrief Institute in Indiana. Variables were identified through structured data (eg, diagnostic, procedural, medication codes) and unstructured data (eg, clinical notes) via natural language processing. Outcomes include uterine perforation and device expulsion; potential risk factors were breastfeeding at insertion, postpartum timing of insertion, device type, and menorrhagia diagnosis in the year before insertion. Covariates include demographic characteristics, clinical characteristics, and procedure-related variables, such as difficult insertion.
The first potential date of inclusion for eligible women varies by research site (from January 1, 2001 to January 1, 2010). Follow-up begins at insertion and ends at first occurrence of an outcome of interest, a censoring event (device removal or reinsertion, pregnancy, hysterectomy, sterilization, device expiration, death, disenrollment, last clinical encounter), or end of the study period (June 30, 2018). Comparisons of levels of exposure variables were made using Cox regression models with confounding adjusted by propensity score weighting using overlap weights. RESULTS: The study population includes 326,658 women with at least 1 device insertion during the study period (Kaiser Permanente Northern California, 161,442; Kaiser Permanente Southern California, 123,214; Kaiser Permanente Washington, 20,526; Regenstrief Institute, 21,476). The median duration of continuous enrollment was 90 (site medians 74-177) months. The mean age was 32 years, and the population was racially and ethnically diverse across the 4 sites. The mean body mass index was 28.5 kg/m2, and of the women included in the study, 10.0% had menorrhagia ≤12 months before insertion, 5.3% had uterine fibroids, and 10% were recent smokers; furthermore, among these women, 79.4% had levonorgestrel-releasing devices, and 19.5% had copper devices. Across sites, 97,824 women had an intrauterine device insertion at ≤52 weeks after delivery, of which 94,817 women (97%) had breastfeeding status at insertion determined; in addition, 228,834 women had intrauterine device insertion at >52 weeks after delivery or no evidence of a delivery in their health record. CONCLUSION: Combining retrospective data from multiple sites allowed for a large and diverse study population. Collaboration with clinicians in the study design and validation of outcomes ensured that the APEX-IUD study results reflect current United States clinical practice. 
Results from this study will provide clinicians with valuable real-world evidence about risk factors for intrauterine device perforation and expulsion.
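The analytical plan pairs Cox regression with propensity-score overlap weighting. As a minimal sketch of the overlap-weight idea only (not the APEX-IUD analysis code): each exposed unit is weighted by 1 − e(x) and each comparator by e(x), where e(x) is the estimated propensity score, which down-weights units whose propensity is near 0 or 1 and emphasizes the region of covariate overlap.

```python
def overlap_weights(propensity, exposed):
    """Compute overlap weights for a Cox or outcome model.

    propensity: estimated probability of exposure e(x) for each unit.
    exposed: True for exposed units, False for comparators.
    Exposed units get 1 - e(x); comparators get e(x).
    """
    return [(1 - p) if t else p for p, t in zip(propensity, exposed)]
```

A unit with e(x) = 0.5 receives weight 0.5 regardless of exposure, while a unit almost certain to be exposed (e(x) near 1) contributes almost nothing as an exposed unit.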


Subjects
Breast Feeding, Intrauterine Devices/adverse effects, Postpartum Period, Uterine Perforation/etiology, Adult, Clinical Protocols, Female, Follow-Up Studies, Humans, Intrauterine Device Expulsion, Logistic Models, Middle Aged, Practice Patterns, Physicians', Research Design, Retrospective Studies, Risk Assessment, Risk Factors, Time Factors, United States/epidemiology, Uterine Perforation/epidemiology
5.
Adv Ther ; 37(1): 552-565, 2020 01.
Article in English | MEDLINE | ID: mdl-31828610

ABSTRACT

INTRODUCTION: Most cases of small cell lung cancer (SCLC) are diagnosed at an advanced stage. The objective of this study was to investigate patient characteristics, survival, chemotherapy treatments, and health care use after a diagnosis of advanced SCLC in subjects enrolled in a health system network. METHODS: This was a retrospective cohort study of patients aged ≥ 18 years who either were diagnosed with stage III/IV SCLC or who progressed to advanced SCLC during the study period (2005-2015). Patients identified from the Indiana State Cancer Registry and the Indiana Network for Patient Care were followed from their advanced diagnosis index date until the earliest date of the last visit, death, or the end of the study period. Patient characteristics, survival, chemotherapy regimens, associated health care visits, and durations of treatment were reported. Time-to-event analyses were performed using the Kaplan-Meier method. RESULTS: A total of 498 patients with advanced SCLC were identified, of whom 429 were newly diagnosed with advanced disease and 69 progressed to advanced disease during the study period. Median survival from the index diagnosis date was 13.2 months. First-line (1L) chemotherapy was received by 464 (93.2%) patients, most commonly carboplatin/etoposide, received by 213 (45.9%) patients, followed by cisplatin/etoposide (20.7%). Ninety-five (20.5%) patients progressed to second-line (2L) chemotherapy, where topotecan monotherapy (20.0%) was the most common regimen, followed by carboplatin/etoposide (14.7%). Median survival was 10.1 months from 1L initiation and 7.7 months from 2L initiation. CONCLUSION: Patients in a regional health system network diagnosed with advanced SCLC were treated with chemotherapy regimens similar to those in earlier reports based on SEER-Medicare data. 
Survival of patients with advanced SCLC was poor, illustrating the lack of progress over several decades in the treatment of this lethal disease and highlighting the need for improved treatments.
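The time-to-event analyses above used the Kaplan-Meier method. A self-contained sketch of the product-limit estimator (illustrative only, not the study's code): at each observed event time, survival is multiplied by 1 minus the fraction of at-risk patients who died at that time, with censored patients removed from the risk set without contributing an event.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimate.

    times: follow-up durations (e.g. months from index date).
    events: True = death observed, False = censored.
    Returns a list of (time, S(time)) pairs at each event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e)
        ties = sum(1 for tt, _ in data if tt == t)
        if deaths:
            surv *= 1 - deaths / n_at_risk   # product-limit step
            curve.append((t, surv))
        n_at_risk -= ties                     # drop everyone observed at t
        i += ties
    return curve
```

Median survival (such as the 13.2 months reported above) is then read off as the first time at which the estimated survival drops to 0.5 or below.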


Subjects
Antineoplastic Combined Chemotherapy Protocols/administration & dosage, Lung Neoplasms/drug therapy, Small Cell Lung Carcinoma/drug therapy, Adult, Aged, Carboplatin/therapeutic use, Cisplatin/administration & dosage, Epirubicin/administration & dosage, Etoposide/administration & dosage, Female, Humans, Lung Neoplasms/mortality, Male, Medicare, Middle Aged, Retrospective Studies, Small Cell Lung Carcinoma/mortality, Survival Analysis, Treatment Outcome, United States
6.
PLoS One ; 14(8): e0218759, 2019.
Article in English | MEDLINE | ID: mdl-31437170

ABSTRACT

BACKGROUND: Data on initiation and utilization of direct-acting antiviral therapies for hepatitis C virus infection in the United States are limited. This study evaluated treatment initiation, time to treatment, and real-world effectiveness of direct-acting antiviral therapy in individuals with hepatitis C virus infection treated during the first 2 years of availability of all-oral direct-acting antiviral therapies. METHODS: A retrospective cohort analysis was undertaken using electronic medical records and chart review abstraction of hepatitis C virus-infected individuals aged >18 years diagnosed with chronic hepatitis C virus infection between January 1, 2014, and December 31, 2015 from the Indiana University Health database. RESULTS: Eight hundred thirty people initiated direct-acting antiviral therapy during the 2-year observation window. The estimated incidence of treatment initiation was 8.8%±0.34% at the end of year 1 and 15.0%±0.5% at the end of year 2. Median time to initiating therapy was 300 days. Using a Cox regression analysis, positive predictors of treatment initiation included age (hazard ratio, 1.008), prior hepatitis C virus treatment (1.74), cirrhosis (2.64), and history of liver transplant (1.5). History of drug abuse (0.43), high baseline alanine aminotransferase levels (0.79), hepatitis B virus infection (0.41), and self-pay (0.39) were negatively associated with treatment initiation. In the evaluable population (n = 423), 83.9% (95% confidence interval, 80.1-87.3%) of people achieved sustained virologic response. CONCLUSION: In the early years of the direct-acting antiviral era, <10% of people diagnosed with chronic hepatitis C virus infection received direct-acting antiviral treatment; median time to treatment initiation was 300 days. Future analyses should evaluate time to treatment initiation among those with less advanced fibrosis.


Subjects
Antiviral Agents/therapeutic use, Hepatitis C, Chronic/drug therapy, Administration, Oral, Adult, Aged, Antiviral Agents/administration & dosage, Cohort Studies, Drug Therapy, Combination, Female, Hepatitis C, Chronic/virology, Humans, Indiana, Male, Middle Aged, Retrospective Studies, Sustained Virologic Response, Time-to-Treatment, United States
7.
Hosp Pract (1995) ; 47(1): 42-45, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30409047

ABSTRACT

BACKGROUND: Rapid response teams (RRTs) reduce mortality by intervening in the hours preceding arrest. Implementation of these teams varies across institutions. SETTING AND DESIGN: Our health-care system has two different RRT models at two hospitals: Hospital A does not utilize a proactive rounder while Hospital B does. We studied the patterns of RRT calls at each hospital, focusing on the differences between night and day and during nursing shift transitions. RESULTS: The presence of proactive surveillance appeared to be associated with an increased total number of RRT calls, with more than twice as many calls made at the smaller Hospital B as at Hospital A. Hospital B had more calls in the daytime compared to the nighttime. Both hospitals showed a surge in the night-to-day shift transition (7-8am) compared to the preceding nighttime. Hospital A additionally showed a surge in calls during the day-to-night shift transition (7-8pm) compared to the preceding daytime. CONCLUSIONS: Differences in the diurnal patterns of RRT activation exist between hospitals even within the same system. As a continuously learning system, each hospital should consider tracking these patterns to identify their unique vulnerabilities. More calls are noted between 7-8am compared to the overnight hours. This may represent the reestablishment of the 'afferent' arm of the RRT as the hospital returns to daytime staffing and activity. Factors that influence the impact of proactive rounding on RRT performance may deserve further study.


Subjects
Emergency Treatment/standards, Heart Arrest/therapy, Hospital Rapid Response Team/standards, Intensive Care Units/standards, Night Care/standards, Hospitalization/statistics & numerical data, Humans, Outcome Assessment, Health Care
8.
Am J Med Qual ; 33(3): 303-312, 2018.
Article in English | MEDLINE | ID: mdl-29241347

ABSTRACT

Members of the Society of Hospital Medicine were surveyed about geographic cohorting (GCh); 369 responses were analyzed, two thirds of which were from GCh participants. Improved collaboration with the bedside nurse, increased nonclinical interactions, decreased paging interruptions, and improved efficiency were perceived by >50%. Narrowed clinical expertise, increased fragmentation, increased face-to-face interruptions, and an adverse impact on camaraderie within the hospitalist group were reported by 25% to 50%. Academic practices were associated with positive perceptions while higher patient loads were associated with negative perceptions. Comments on GCh benefits invoked improvements in (1) interprofessional collaboration, (2) efficiency, (3) patient-centeredness, (4) nursing satisfaction, and (5) GCh mediated facilitation of other interventions. GCh downsides included (1) professional and personal dissatisfaction, (2) concerns about providing suboptimal care, and (3) implementation barriers. GCh is receiving attention. Although it facilitates important benefits, it is perceived to mediate unintended consequences, which should be addressed in redesign efforts.


Subjects
Perception, Personnel, Hospital/psychology, Quality Improvement/organization & administration, Regional Health Planning/organization & administration, Adult, Cooperative Behavior, Efficiency, Organizational, Female, Group Processes, Humans, Job Satisfaction, Male, Middle Aged, Patient Care Team/organization & administration, Patient-Centered Care/organization & administration, Quality of Health Care/organization & administration, Workload/statistics & numerical data
9.
Crit Care Med ; 45(5): 851-857, 2017 May.
Article in English | MEDLINE | ID: mdl-28263192

ABSTRACT

OBJECTIVES: Delirium severity is independently associated with longer hospital stays, nursing home placement, and death in patients outside the ICU. Delirium severity in the ICU is not routinely measured because the available instruments are difficult to complete in critically ill patients. We designed our study to assess the reliability and validity of a new ICU delirium severity tool, the Confusion Assessment Method for the ICU-7 delirium severity scale. DESIGN: Observational cohort study. SETTING: Medical, surgical, and progressive ICUs of three academic hospitals. PATIENTS: Five hundred eighteen adult (≥ 18 yr) patients. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Patients received the Confusion Assessment Method for the ICU, Richmond Agitation-Sedation Scale, and Delirium Rating Scale-Revised-98 assessments. A 7-point scale (0-7) was derived from responses to the Confusion Assessment Method for the ICU and Richmond Agitation-Sedation Scale items. Confusion Assessment Method for the ICU-7 showed high internal consistency (Cronbach's α = 0.85) and good correlation with Delirium Rating Scale-Revised-98 scores (correlation coefficient = 0.64). Known-groups validity was supported by the separation of mechanically ventilated and nonventilated assessments. Median Confusion Assessment Method for the ICU-7 scores demonstrated good predictive validity with higher odds (odds ratio = 1.47; 95% CI = 1.30-1.66) of in-hospital mortality and lower odds (odds ratio = 0.8; 95% CI = 0.72-0.9) of being discharged home after adjusting for age, race, gender, severity of illness, and chronic comorbidities. Higher Confusion Assessment Method for the ICU-7 scores were also associated with increased length of ICU stay (p = 0.001). CONCLUSIONS: Our results suggest that Confusion Assessment Method for the ICU-7 is a valid and reliable delirium severity measure among ICU patients. 
Further research comparing it to other delirium severity measures, its use in delirium efficacy trials, and real-life implementation is needed to determine its role in research and clinical practice.


Subjects
Delirium/diagnosis, Intensive Care Units/statistics & numerical data, Psychiatric Status Rating Scales/standards, Severity of Illness Index, Adult, Aged, Attention, Consciousness, Female, Hospitals, University, Humans, Male, Standardized Psychiatric Interview, Middle Aged, Patient Discharge, Prospective Studies, Reproducibility of Results, Socioeconomic Factors
10.
Cancer ; 123(12): 2338-2351, 2017 Jun 15.
Article in English | MEDLINE | ID: mdl-28211937

ABSTRACT

BACKGROUND: Annual computed tomography (CT) scans are a component of the current standard of care for the posttreatment surveillance of survivors of colorectal cancer (CRC) after curative-intent resection. The authors conducted a retrospective study with the primary aim of assessing patient, physician, and organizational characteristics associated with the receipt of CT surveillance among veterans. METHODS: The Department of Veterans Affairs Central Cancer Registry was used to identify patients diagnosed with AJCC collaborative stage I to III CRC between 2001 and 2009. Patient sociodemographic and clinical (ie, CRC stage and comorbidity) characteristics, provider specialty, and organizational characteristics were measured. Hierarchical multivariable logistic regression models were used to assess the association between patient, provider, and organizational characteristics on receipt of 1) consistently guideline-concordant care (at least 1 CT every 12 months for both of the first 2 years of CRC surveillance) versus no CT receipt and 2) potential overuse (>1 CT every 12 months during the first 2 years of CRC surveillance) of CRC surveillance using CT. The authors also analyzed the impact of the 2005 American Society of Clinical Oncology update in CRC surveillance guidelines on care received over time. RESULTS: For 2263 survivors of stage II/III CRC who were diagnosed after 2005, 19.4% of patients received no surveillance CT, whereas potential overuse occurred in both surveillance years for 14.9% of patients. Guideline-concordant care was associated with younger age, higher stage of disease (stage III vs stage II), and geographic region. In adjusted analyses, younger age and higher stage of disease (stage III vs stage II) were found to be associated with overuse. There was no significant difference in the annual rate of CT scanning noted across time periods (year ≤ 2005 vs year > 2005). 
CONCLUSIONS: Among a minority of veteran survivors of CRC, both underuse and potential overuse of CT surveillance were present. Patient factors, but not provider or organizational characteristics, were found to be significantly associated with patterns of care. The 2005 change in American Society of Clinical Oncology guidelines did not appear to have an impact on rates of surveillance CT. Cancer 2017;123:2338-2351. © 2017 American Cancer Society.
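The study's exposure definitions can be made concrete with a simplified sketch (my own illustration, not the authors' code, and the exact year boundaries are an assumption): concordant care requires at least one CT in each of the first two 12-month surveillance years, while more than one CT in a 12-month year counts toward potential overuse.

```python
def classify_surveillance(ct_months):
    """Classify 2-year post-resection CT surveillance.

    ct_months: times of surveillance CT scans, in months since surgery.
    Returns the applicable labels under the simplified definitions above.
    """
    year1 = [m for m in ct_months if 0 <= m < 12]
    year2 = [m for m in ct_months if 12 <= m < 24]
    labels = []
    if not ct_months:
        labels.append("no surveillance")
    if year1 and year2:
        labels.append("guideline-concordant")     # >=1 CT in each year
    if len(year1) > 1 or len(year2) > 1:
        labels.append("potential overuse")        # >1 CT in a 12-month year
    return labels
```

Note that under these definitions a patient can be both concordant and a potential overuser, which is why the abstract reports the two measures separately.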


Subjects
Adenocarcinoma/diagnostic imaging, Colorectal Neoplasms/diagnostic imaging, Guideline Adherence/statistics & numerical data, Hospitals, Veterans/statistics & numerical data, Neoplasm Recurrence, Local/diagnostic imaging, Registries, Survivors, Tomography, X-Ray Computed/statistics & numerical data, Adenocarcinoma/pathology, Adenocarcinoma/surgery, Adult, Age Factors, Aged, Aged, 80 and over, Colorectal Neoplasms/pathology, Colorectal Neoplasms/surgery, Female, Humans, Logistic Models, Male, Medical Overuse/statistics & numerical data, Middle Aged, Multivariate Analysis, Neoplasm Staging, Practice Guidelines as Topic, Retrospective Studies, United States, United States Department of Veterans Affairs
11.
Crit Care Med ; 44(9): 1727-34, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27276344

ABSTRACT

OBJECTIVES: Delirium is a highly prevalent syndrome of acute brain dysfunction among critically ill patients that has been linked to multiple risk factors, such as age, preexisting cognitive impairment, and use of sedatives; but to date, the relationship between race and delirium is unclear. We conducted this study to identify whether African-American race is a risk factor for developing ICU delirium. DESIGN: A prospective cohort study. SETTING: Medical and surgical ICUs of a university-affiliated, safety net hospital in Indianapolis, IN. PATIENTS: A total of 2,087 consecutive admissions with 1,008 African Americans admitted to the ICU services from May 2009 to August 2012. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Incident delirium was defined as first positive Confusion Assessment Method for the ICU result after an initial negative Confusion Assessment Method for the ICU; and prevalent delirium was defined as positive Confusion Assessment Method for the ICU on first Confusion Assessment Method for the ICU assessment. The overall incident delirium rate in African Americans was 8.7% compared with 10.4% in Caucasians (p = 0.26). The prevalent delirium rate was 14% in both African Americans and Caucasians (p = 0.95). Significant age and race interactions were detected for incident delirium (p = 0.02) but not for prevalent delirium (p = 0.3). The hazard ratio for incident delirium for African Americans in the 18-49 years age group compared with Caucasians of similar age was 0.4 (0.1-0.9). The hazard and odds ratios for incident and prevalent delirium in other groups were not different. CONCLUSIONS: African-American race does not confer any additional risk for developing incident or prevalent delirium in the ICU. Instead, younger African Americans tend to have lower rates of incident delirium compared with Caucasians of similar age.
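The incident/prevalent distinction above follows directly from the order of CAM-ICU results, and can be sketched as a small classifier (illustrative only, mirroring the stated definitions rather than the study's code):

```python
def classify_delirium(assessments):
    """Classify a patient's CAM-ICU assessment sequence.

    assessments: chronological list of CAM-ICU results (True = positive).
    Prevalent delirium: positive on the first assessment.
    Incident delirium: first positive occurs after an initial negative.
    """
    if not assessments:
        return "unassessed"
    if assessments[0]:
        return "prevalent"
    return "incident" if any(assessments) else "never delirious"
```

So a patient whose assessments run negative, negative, positive counts toward the incidence rate, while a patient positive at first assessment counts toward prevalence.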


Subjects
Black or African American/psychology, Critical Care, Delirium/ethnology, Adolescent, Adult, Aged, Critical Illness, Delirium/diagnosis, Female, Humans, Incidence, Male, Middle Aged, Outcome Assessment, Health Care, Prevalence, Prospective Studies, Risk Factors, Young Adult
12.
J Hosp Med ; 11(7): 489-93, 2016 07.
Article in English | MEDLINE | ID: mdl-26929120

ABSTRACT

BACKGROUND: There are 250,000 cases of central line-associated blood stream infections in the United States annually, some of which may be prevented by the removal of lines that are no longer needed. OBJECTIVE: To test the performance of criteria to identify an idle line as a guideline to facilitate its removal. METHODS: Patients with central lines on the wards were identified. Criteria for justified use were defined. If none were met, the line was considered "idle." We proposed the guideline that a line may be removed the day following the first idle day and compared actual practice with our proposed guideline. RESULTS: One hundred twenty-six lines in 126 patients were observed. Eighty-three (65.9%) were peripherally inserted central catheters. Twenty-seven percent (n = 34) were placed for antibiotics. Seventy-six patients had lines removed prior to discharge. In these patients, the line was in place for 522 days, of which 32.7% were idle. The most common reasons to justify the line included parenteral antibiotics and meeting systemic inflammatory response syndrome (SIRS) criteria. In 11 (14.5%) patients, the line was removed prior to the proposed guideline. Most (n = 36, 47.4%) line removals were observed to be in accordance with our guideline. In another 29 (38.2%), line removal was delayed compared to our guideline. CONCLUSIONS: Idle days are common. Central line days may be reduced by the consistent daily reevaluation of a line's justification using defined criteria. The practice of routine central line placement for prolonged antibiotics and the inclusion of SIRS criteria to justify the line may need to be reevaluated. Journal of Hospital Medicine 2016;11:489-493. © 2016 Society of Hospital Medicine.
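The proposed rule above, remove the line the day after the first idle day, yields a simple three-way comparison against actual practice. A hypothetical helper (my illustration, with day numbers counted from insertion, not the authors' code):

```python
def classify_removal(first_idle_day, removal_day):
    """Compare actual removal timing with the proposed guideline.

    Guideline target: removal on the day following the first idle day.
    Returns 'early', 'per guideline', or 'delayed'.
    """
    target = first_idle_day + 1
    if removal_day < target:
        return "early"
    if removal_day == target:
        return "per guideline"
    return "delayed"
```

Under this scheme the abstract's three groups (14.5% early, 47.4% per guideline, 38.2% delayed) partition the removed lines.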


Subjects
Catheterization, Central Venous/statistics & numerical data, Guidelines as Topic, Practice Patterns, Physicians', Catheter-Related Infections/prevention & control, Catheterization, Central Venous/adverse effects, Catheterization, Peripheral/adverse effects, Catheterization, Peripheral/statistics & numerical data, Female, Humans, Male, Middle Aged
13.
Bone ; 83: 267-275, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26657827

ABSTRACT

BACKGROUND: Adherence to oral bisphosphonates is often low, but even adherent patients may remain at elevated fracture risk. The goal of this study was to estimate the proportion of bisphosphonate-adherent women remaining at high risk of fracture. METHODS: A retrospective cohort of women aged 50 years and older, adherent to oral bisphosphonates for at least two years, was identified, and data were extracted from a multi-system health information exchange. Adherence was defined as having a dispensed medication possession ratio ≥ 0.8. The primary outcome was clinical occurrence of: low trauma fracture (months 7-36), persistent T-score ≤ -2.5 (months 13-36), decrease in bone mineral density (BMD) at any skeletal site ≥ 5%, or the composite of any one of these outcomes. RESULTS: Of 7435 adherent women, 3110 had either pre- or post-adherent DXA data. In the full cohort, 7% had an incident osteoporotic fracture. In 601 women having both pre- and post-adherent DXA to evaluate BMD change, 6% had fractures, 22% had a post-treatment T-score ≤ -2.5, and 16% had BMD decrease by ≥ 5%. The composite outcomes occurred in 35%. Incident fracture was predicted by age, previous fracture, and a variety of co-morbidities, but not by race, glucocorticoid treatment, or type of bisphosphonate. CONCLUSION: Despite bisphosphonate adherence, 7% had incident osteoporotic fractures and 35% had either fracture, decreases in BMD, or persistent osteoporotic BMD, representing a substantial proportion of treated patients in clinical practices remaining at risk for future fractures. Further studies are required to determine the best achievable goals for osteoporosis therapy, and which patients would benefit from alternate therapies.
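The adherence criterion above, a dispensed medication possession ratio (MPR) of at least 0.8, has a standard calculation: total days of medication supplied divided by the days in the observation period, conventionally capped at 1.0. A minimal sketch (illustrative, not the study's code):

```python
def medication_possession_ratio(days_supplied, period_days):
    """MPR = total days of medication dispensed / days in the period.

    days_supplied: days' supply from each fill during the period.
    period_days: length of the observation period in days.
    Capped at 1.0 so early refills cannot push the ratio above 1.
    """
    return min(sum(days_supplied) / period_days, 1.0)

def is_adherent(days_supplied, period_days, threshold=0.8):
    """Adherent under the study's definition: MPR >= 0.8."""
    return medication_possession_ratio(days_supplied, period_days) >= threshold
```

For a two-year (730-day) window, eight 90-day fills give an MPR of about 0.99 (adherent), while only two such fills give about 0.25 (non-adherent).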


Subjects
Diphosphonates/therapeutic use, Osteoporosis/drug therapy, Osteoporotic Fractures/epidemiology, Osteoporotic Fractures/etiology, Patient Compliance, Aged, Cohort Studies, Comorbidity, Demography, Female, Humans, Multivariate Analysis, Risk Factors, Treatment Outcome
14.
J Hosp Med ; 10(12): 773-9, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26286828

ABSTRACT

BACKGROUND: US healthcare underperforms on quality and safety metrics. Inpatient care constitutes an immense opportunity to intervene to improve care. OBJECTIVE: Describe a model of inpatient care and measure its impact. DESIGN: A quantitative assessment of the implementation of a new model of care. The graded implementation of the model allowed us to follow outcomes and measure their association with the dose of the implementation. SETTING AND PATIENTS: Inpatient medical and surgical units in a large academic health center. INTERVENTION: Eight interventions rooted in improving interprofessional collaboration (IPC), enabling data-driven decisions, and providing leadership were implemented. MEASUREMENTS: Outcome data from August 2012 to December 2013 were analyzed using generalized linear mixed models for associations with the implementation of the model. Length of stay (LOS) index, case-mix index-adjusted variable direct costs (CMI-adjusted VDC), 30-day readmission rates, overall patient satisfaction scores, and provider satisfaction with the model were measured. RESULTS: The implementation of the model was associated with decreases in LOS index (P < 0.0001) and CMI-adjusted VDC (P = 0.0006). We did not detect improvements in readmission rates or patient satisfaction scores. Most providers (95.8%, n = 92) agreed that the model had improved the quality and safety of the care delivered. CONCLUSIONS: Creating an environment and framework in which IPC is fostered, performance data are transparently available, and leadership is provided may improve value on both medical and surgical units. These interventions appear to be well accepted by front-line staff. Readmission rates and patient satisfaction remain challenging.


Subjects
Hospitalization, Patient Care Team/standards, Patient Care/methods, Patient Care/standards, Social Responsibility, Hospitalization/trends, Humans, Inpatients, Length of Stay/trends, Patient Care/trends, Patient Care Team/trends, Treatment Outcome
15.
Crit Care Med ; 42(12): e791-5, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25402299

ABSTRACT

OBJECTIVES: Mechanically ventilated critically ill patients receive significant amounts of sedatives and analgesics that increase their risk of developing coma and delirium. We evaluated the impact of a "Wake-up and Breathe Protocol" at our local ICU on sedation and delirium. DESIGN: A pre/post implementation study design. SETTING: A 22-bed mixed surgical and medical ICU. PATIENTS: Seven hundred two consecutive mechanically ventilated ICU patients from June 2010 to January 2013. INTERVENTIONS: Implementation of daily paired spontaneous awakening trials (daily sedation vacation plus spontaneous breathing trials) as a quality improvement project. MEASUREMENTS AND MAIN RESULTS: After implementation of our program, there was an increase in the mean Richmond Agitation Sedation Scale scores on weekdays of 0.88 (p < 0.0001) and an increase in the mean Richmond Agitation Sedation Scale scores on weekends of 1.21 (p < 0.0001). After adjusting for age, race, gender, severity of illness, primary diagnosis, and ICU, the incidence and prevalence of delirium did not change post implementation of the protocol (incidence: 23% pre vs 19.6% post; p = 0.40; prevalence: 66.7% pre vs 55.3% post; p = 0.06). The combined prevalence of delirium/coma decreased from 90.8% pre protocol implementation to 85% postimplementation (odds ratio, 0.505; 95% CI, 0.299-0.853; p = 0.01). CONCLUSIONS: Implementing a "Wake Up and Breathe Program" resulted in reduced sedation among critically ill mechanically ventilated patients but did not change the incidence or prevalence of delirium.
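The prevalence comparison above is reported as an odds ratio with a 95% CI. As a minimal illustration of that statistic, the sketch below computes an unadjusted odds ratio with a Wald confidence interval from a 2x2 table; the counts used are hypothetical, and the study's own estimate was adjusted for covariates, so this is not a reproduction of its analysis.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
    a = outcome present (post), b = outcome absent (post),
    c = outcome present (pre),  d = outcome absent (pre)."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

# Hypothetical counts for illustration only
print(odds_ratio_ci(20, 80, 10, 90))
```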


Subjects
Critical Illness, Deep Sedation/methods, Delirium/prevention & control, Respiration, Artificial/methods, Respiration, Adult, Aged, Clinical Protocols, Coma/prevention & control, Female, Humans, Incidence, Intensive Care Units, Length of Stay, Male, Middle Aged
16.
J Am Geriatr Soc ; 62(3): 506-11, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24576177

ABSTRACT

OBJECTIVES: To evaluate whether race influences agreement between screening results and documentation of cognitive impairment and delirium. DESIGN: Secondary data analysis. SETTING: An urban, public hospital and healthcare system. PARTICIPANTS: Hospitalized older adults aged 65 and older admitted to general inpatient medical services evaluated for cognitive impairment (n = 851) and evaluated for delirium (n = 424). MEASUREMENTS: Cognitive impairment and delirium were measured in each participant using the Short Portable Mental Status Questionnaire (SPMSQ) and the Confusion Assessment Method (CAM), respectively, as the reference identification method. Clinical documentation of cognitive impairment and delirium was defined according to the presence of International Classification of Diseases, Ninth Revision (ICD-9), codes from within 1 year before hospitalization through discharge for cognitive impairment or from hospital admission through discharge for delirium. RESULTS: Two hundred ninety-four participants (34%) had cognitive impairment based on SPMSQ performance, and 163 (38%) had delirium based on CAM results. One hundred seventy-one (20%) of those with cognitive impairment had an ICD-9 code for cognitive impairment, whereas 92 (22%) of those with delirium had an ICD-9 code for delirium. After considering age, sex, education, socioeconomic status, chronic comorbidity, and severity of acute illness, of those who screened positive on the SPMSQ, African Americans had a higher adjusted odds ratio (AOR) than non-African Americans for clinical documentation of cognitive impairment (AOR = 1.66, 95% confidence interval (CI) = 0.95-2.89), and of those who screened negative on the SPMSQ, African Americans had higher odds of clinical documentation of cognitive impairment (AOR = 2.10, 95% CI = 1.17-3.78) than non-African Americans. There were no differences in clinical documentation rates of delirium between African Americans and non-African Americans. 
CONCLUSION: Racial differences in coding for cognitive impairment may exist, resulting in higher documentation of cognitive impairment in African Americans screening positive or negative for cognitive impairment.


Subjects
Cognition Disorders/ethnology, Cognition, Geriatric Assessment/methods, Health Records, Personal, Inpatients/statistics & numerical data, Racial Groups, Aged, Cognition Disorders/diagnosis, Cognition Disorders/therapy, Female, Humans, Indiana/epidemiology, Male, Mental Status Schedule, Prevalence, Retrospective Studies, Risk Factors, Surveys and Questionnaires, Urban Population
17.
JAMA Intern Med ; 174(3): 370-7, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24445375

ABSTRACT

IMPORTANCE: Hospitalized older adults often lack decisional capacity, but outside of the intensive care unit and end-of-life care settings, little is known about the frequency of decision making by family members or other surrogates or its implications for hospital care. OBJECTIVE: To describe the scope of surrogate decision making, the hospital course, and outcomes for older adults. DESIGN, SETTING, AND PARTICIPANTS: Prospective, observational study conducted in medicine and medical intensive care unit services of 2 hospitals in 1 Midwestern city in 1083 hospitalized older adults identified by their physicians as requiring major medical decisions. MAIN OUTCOMES AND MEASURES: Clinical characteristics, hospital outcomes, nature of major medical decisions, and surrogate involvement. RESULTS: According to physician reports, at 48 hours of hospitalization, 47.4% (95% CI, 44.4%-50.4%) of older adults required at least some surrogate involvement, including 23.0% (20.6%-25.6%) with all decisions made by a surrogate. Among patients who required a surrogate for at least 1 decision within 48 hours, 57.2% required decisions about life-sustaining care (mostly addressing code status), 48.6% about procedures and operations, and 46.9% about discharge planning. Patients who needed a surrogate experienced a more complex hospital course with greater use of ventilators (2.5% of patients who made decisions and 13.2% of patients who required any surrogate decisions; P < .001), artificial nutrition (1.7% of patients and 14.4% of surrogates; P < .001), and length of stay (median, 6 days for patients and 7 days for surrogates; P < .001). They were more likely to be discharged to an extended-care facility (21.2% with patient decisions and 40.9% with surrogate decisions; P < .001) and had higher hospital mortality (0.0% patients and 5.9% surrogates; P < .001). Most surrogates were daughters (58.9%), sons (25.0%), or spouses (20.6%). 
Overall, only 7.4% had a living will and 25.0% had a health care representative document in the medical record. CONCLUSIONS AND RELEVANCE: Surrogate decision making occurs for nearly half of hospitalized older adults and includes both complete decision making by the surrogate and joint decision making by the patient and surrogate. Surrogates commonly face a broad range of decisions in the intensive care unit and the hospital ward setting. Hospital functions should be redesigned to account for the large and growing role of surrogates, supporting them as they make health care decisions.


Subjects
Advance Directives, Decision Making, Family, Hospitalization, Aged, Aged, 80 and over, Female, Humans, Male, Prospective Studies
18.
J Am Med Inform Assoc ; 21(2): 345-52, 2014.
Article in English | MEDLINE | ID: mdl-24113802

ABSTRACT

BACKGROUND AND OBJECTIVE: Electronic health records databases are increasingly used for identifying cohort populations, covariates, or outcomes, but discerning such clinical 'phenotypes' accurately is an ongoing challenge. We developed a flexible method using overlapping (Venn diagram) queries. Here we describe this approach to find patients hospitalized with acute congestive heart failure (CHF), a sampling strategy for one-by-one 'gold standard' chart review, and calculation of positive predictive value (PPV) and sensitivities, with SEs, across different definitions. MATERIALS AND METHODS: We used retrospective queries of hospitalizations (2002-2011) in the Indiana Network for Patient Care with any CHF ICD-9 diagnoses, a primary diagnosis, an echocardiogram performed, a B-natriuretic peptide (BNP) drawn, or BNP >500 pg/mL. We used a hybrid between proportional sampling by Venn zone and over-sampling non-overlapping zones. The acute CHF (presence/absence) outcome was based on expert chart review using a priori criteria. RESULTS: Among 79,091 hospitalizations, we reviewed 908. A query for any ICD-9 code for CHF had PPV 42.8% (SE 1.5%) for acute CHF and sensitivity 94.3% (1.3%). Primary diagnosis of 428 and BNP >500 pg/mL had PPV 90.4% (SE 2.4%) and sensitivity 28.8% (1.1%). PPV was <10% when there was no echocardiogram, no BNP, and no primary diagnosis. 'False positive' hospitalizations were for other heart disease, lung disease, or other reasons. CONCLUSIONS: This novel method successfully allowed flexible application and validation of queries for patients hospitalized with acute CHF.
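The zone-based sampling described in this abstract, with over-sampling of some Venn zones, implies that PPV and sensitivity must be estimated with inverse-probability weights. The sketch below is an illustrative assumption of how such weighting could look (the function name and data layout are hypothetical, not the paper's code): each reviewed chart is weighted by its zone's total size divided by the number reviewed in that zone.

```python
def weighted_ppv_sensitivity(reviews, zone_sizes):
    """Estimate PPV and sensitivity of a query definition from chart
    reviews sampled unevenly across Venn-diagram zones.

    reviews: list of (zone, matches_query, is_true_case) tuples
    zone_sizes: total hospitalizations per zone in the full cohort
    """
    # Count how many charts were reviewed in each zone
    reviewed_per_zone = {}
    for zone, _, _ in reviews:
        reviewed_per_zone[zone] = reviewed_per_zone.get(zone, 0) + 1

    w_match = w_match_true = w_true = 0.0
    for zone, matches, true_case in reviews:
        # Inverse-probability weight: zone size / charts reviewed in zone
        w = zone_sizes[zone] / reviewed_per_zone[zone]
        if matches:
            w_match += w
            if true_case:
                w_match_true += w
        if true_case:
            w_true += w

    ppv = w_match_true / w_match
    sensitivity = w_match_true / w_true
    return ppv, sensitivity
```

With this weighting, a query that matches few charts in a heavily over-sampled zone does not dominate the cohort-level estimate.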


Subjects
Electronic Health Records, Heart Failure, Hospitalization, Information Storage and Retrieval, Acute Disease, Humans, Indiana, Information Dissemination, International Classification of Diseases, Medical Record Linkage, Retrospective Studies
19.
Stat Med ; 33(3): 500-13, 2014 Feb 10.
Article in English | MEDLINE | ID: mdl-24038175

ABSTRACT

In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the difference in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction with an over-sample of subjects with discordant results and under-sample of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or the prevalence of the true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms.
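The quantities this abstract optimizes over, differences in sensitivity, specificity, and PPV between two classification rules, can be sketched directly when the gold standard is known for every unit (the single-phase case the paper generalizes). The function names below are illustrative, not from the paper.

```python
def sens_spec_ppv(pred, gold):
    """Sensitivity, specificity, and PPV of boolean predictions
    against a boolean gold standard."""
    tp = sum(1 for p, g in zip(pred, gold) if p and g)
    fp = sum(1 for p, g in zip(pred, gold) if p and not g)
    fn = sum(1 for p, g in zip(pred, gold) if not p and g)
    tn = sum(1 for p, g in zip(pred, gold) if not p and not g)
    return tp / (tp + fn), tn / (tn + fp), tp / (tp + fp)

def compare_rules(pred1, pred2, gold):
    """Differences (rule 1 minus rule 2) in sensitivity,
    specificity, and positive predictive value."""
    s1, sp1, p1 = sens_spec_ppv(pred1, gold)
    s2, sp2, p2 = sens_spec_ppv(pred2, gold)
    return s1 - s2, sp1 - sp2, p1 - p2
```

The paper's contribution is choosing which units to verify in phase two so that the variances of these differences are minimized, rather than computing them on a full-verification sample as above.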


Subjects
Algorithms, Biometry/methods, Classification/methods, Models, Statistical, Predictive Value of Tests, Female, Humans, Male
20.
Int J Gen Med ; 6: 855-61, 2013.
Article in English | MEDLINE | ID: mdl-24324346

ABSTRACT

BACKGROUND: Currently, there are no valid and reliable biomarkers to identify delirious patients predisposed to longer delirium duration. We investigated the hypothesis that elevated S100 calcium binding protein B (S100β) levels will be associated with longer delirium duration in critically ill patients. METHODS: A prospective observational cohort study was performed in the medical, surgical, and progressive intensive care units (ICUs) of a tertiary care, university-affiliated, urban hospital. Sixty-three delirious patients were selected for the analysis, with two samples of S100β collected on days 1 and 8 of enrollment. The main outcome measure was delirium duration. Using the cutoff of <0.1 ng/mL and ≥0.1 ng/mL as normal and abnormal levels of S100β, respectively, on day 1 and day 8, four exposure groups were created: Group A, normal S100β levels on day 1 and day 8; Group B, normal S100β level on day 1 and abnormal S100β level on day 8; Group C, abnormal S100β level on day 1 and normal on day 8; and Group D, abnormal S100β levels on both day 1 and day 8. RESULTS: Patients with abnormal levels of S100β showed a trend towards longer delirium duration (P=0.076); mean (standard deviation) duration was 7.0 (3.2) days in Group B, 5.5 (6.3) days in Group C, and 5.3 (6.0) days in Group D, compared with 3.5 (5.4) days in Group A. CONCLUSION: This preliminary investigation identified a potentially novel role for S100β as a biomarker for delirium duration in critically ill patients. This finding may have important implications for refining future delirium management strategies in ICU patients.
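The four exposure groups defined in the abstract follow a simple two-timepoint cutoff rule. A minimal sketch of that grouping logic, with a hypothetical function name, could look like this:

```python
def s100b_group(day1, day8, cutoff=0.1):
    """Assign the study's four exposure groups from S100B levels
    (ng/mL) on day 1 and day 8, using the 0.1 ng/mL cutoff:
    A = normal both days, B = normal then abnormal,
    C = abnormal then normal, D = abnormal both days."""
    abnormal = (day1 >= cutoff, day8 >= cutoff)
    return {(False, False): "A", (False, True): "B",
            (True, False): "C", (True, True): "D"}[abnormal]
```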
