ABSTRACT
INTRODUCTION: Acute kidney injury (AKI) following burns is associated with increased mortality and morbidity, and some patients require renal replacement therapy. There are limited large-scale data to validate risk factors influencing the incidence and severity of early AKI, defined as AKI within the first 72 h after admission to a burn center following burn injury. The aims of this study were to compare the profile of adult patients admitted to Australian and New Zealand burn centers with burns ≥10% total body surface area (TBSA) who developed early AKI with that of patients who did not develop AKI, and to quantify the association between early AKI and in-hospital outcomes. METHODS: Data were extracted from the Burns Registry of Australia and New Zealand for adults (≥18 y) with burns ≥10% TBSA admitted to Australian or New Zealand burn centers between July 2016 and June 2021. All patients with two valid serum creatinine blood tests within the first 72 h were included. Differences in patient profiles and in-hospital outcomes were investigated. Univariable and multivariable logistic and linear regression models were used to quantify associations between early AKI and outcomes of interest. RESULTS: There were 1297 patients who met the inclusion criteria for this study. Eighty-three patients (6.4%) developed early AKI. Compared to patients without AKI, patients with AKI were older (P = 0.006), had a greater median %TBSA burned (P < 0.001), and were more likely to have sustained an inhalation injury (P < 0.001). In adjusted models, the development of early AKI was significantly associated with in-hospital mortality (adjusted odds ratio (aOR) [95% CI] 2.73 [1.33, 5.62], P < 0.001) and the need for mechanical ventilation (aOR [95% CI] 3.44 [1.77, 6.68], P = 0.001), but there was no significant increase in hospital length of stay or intensive care unit length of stay. CONCLUSIONS: This is the first large-scale study examining early AKI in adult burns ≥10% TBSA. The incidence of AKI was lower than previously reported, and AKI was associated with higher in-hospital mortality and an increased need for mechanical ventilation. These findings support the notion that development of AKI in the immediate phase after burn injury can have adverse consequences and that appropriate care should be provided to prevent its development.
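As an illustration of the kind of multivariable logistic model described above, the sketch below fits an adjusted model and reports adjusted odds ratios with 95% confidence intervals. The file name, column names, and adjustment set are hypothetical placeholders, not the registry's actual variables or the authors' exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical registry extract: column names are illustrative, not the
# actual Burns Registry of Australia and New Zealand field names.
df = pd.read_csv("burns_registry_extract.csv")  # assumed file

# Multivariable logistic regression for in-hospital mortality,
# adjusting for age, %TBSA burned, and inhalation injury.
model = smf.logit(
    "died_in_hospital ~ early_aki + age + tbsa_pct + inhalation_injury",
    data=df,
).fit()

# Adjusted odds ratios with 95% confidence intervals.
aor = np.exp(model.params)
ci = np.exp(model.conf_int())
print(pd.DataFrame({"aOR": aor, "2.5%": ci[0], "97.5%": ci[1]}))
```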
ABSTRACT
BACKGROUND: Renal dialysis is a lifesaving but demanding therapy, requiring three treatments per week, each lasting several hours. Although travel times and quality of care vary across facilities, the extent to which patients are willing and able to weigh these tradeoffs is not known. Since 2015, Medicare has summarized and reported quality data for dialysis facilities using a star rating system. We estimate choice models to assess the relative roles of travel distance and quality of care in explaining patient choice of facility. RESEARCH DESIGN: Using national data on 2 million patient-years from 7198 dialysis facilities and four star-rating releases, we estimated travel distance to patients' closest facilities, incremental travel distance to the next closest facility with a higher star rating, and the difference in ratings between these two facilities. We fit mixed effects logistic regression models predicting whether patients dialyzed at their closest facilities. RESULTS: Median travel distance was four times greater in rural areas (10.9 miles) than in urban areas (2.6 miles). Larger differences in rating [odds ratio (OR): 0.56; 95% confidence interval (CI): 0.50-0.62] and greater area deprivation (OR: 0.50; 95% CI: 0.48-0.53) were associated with lower odds of attending one's closest facility. Stratified models were also fit based on urbanicity. For rural patients, excess travel distance was associated with higher odds of attending the closest facility (per 10 miles; OR: 1.05; 95% CI: 1.04-1.06). Star rating differences were associated with lower odds of receiving care from the closest facility among both urban (OR: 0.57; 95% CI: 0.51-0.63) and rural patients (OR: 0.18; 95% CI: 0.08-0.44). CONCLUSIONS: Most dialysis patients have higher-rated facilities located not much farther than their closest facility, suggesting many patients could evaluate tradeoffs between distance and quality of care in deciding where to receive dialysis. Our results show that such tradeoffs likely occur. Therefore, quality ratings such as the Dialysis Facility Compare (DFC) Star Rating may provide actionable information to patients and caregivers. However, we were not able to assess whether these associations reflect a causal effect of the Star Ratings on patient choice, as the Star Ratings served only as a marker of quality of care.
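The distance exposures described in the design (distance to the closest facility, incremental distance to the next closest higher-rated facility, and the rating gap) can be computed with a simple great-circle calculation. The sketch below is a hedged illustration with hypothetical data-frame column names; it is not the study's geocoding pipeline.

```python
import numpy as np
import pandas as pd

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between points given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2) ** 2
    return 3958.8 * 2 * np.arcsin(np.sqrt(a))

def distance_exposures(patient, facilities):
    """For one patient, return (distance to closest facility, extra distance
    to the next closest facility with a higher star rating, rating gap).
    `patient` has 'lat'/'lon'; `facilities` has 'lat'/'lon'/'stars'."""
    d = haversine_miles(patient["lat"], patient["lon"],
                        facilities["lat"].values, facilities["lon"].values)
    closest = int(np.argmin(d))
    stars = facilities["stars"].values
    better = stars > stars[closest]
    if not better.any():
        return d[closest], np.nan, 0
    nearest_better = np.where(better)[0][np.argmin(d[better])]
    return (d[closest],
            d[nearest_better] - d[closest],
            stars[nearest_better] - stars[closest])
```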
Subject(s)
Health Services Accessibility/trends, Patient Acceptance of Health Care/psychology, Quality of Health Care, Renal Dialysis/psychology, Travel/psychology, Choice Behavior, Ethnicity/psychology, Ethnicity/statistics & numerical data, Geography, Humans, Medicare, Odds Ratio, Racial Groups/psychology, Racial Groups/statistics & numerical data, Renal Dialysis/standards, Rural Population/statistics & numerical data, United States, Urban Population/statistics & numerical data
ABSTRACT
OBJECTIVE: To study the hemodynamic response to a lower leg heating intervention (LLHI) in the abdominal and iliac arterial segments (AIAS) of young sedentary individuals. METHODS: Doppler measurements of blood flow were performed in 5 young sedentary adults undergoing LLHI. Heating durations of 0, 20, and 40 min were considered. A lumped parameter model (LPM) was used to ascertain the hemodynamic mechanism. The hemodynamics were determined via numerical approaches. RESULTS: Ultrasonography revealed that the blood flow waveform shifted upwards under LLHI; in particular, the mean flow increased significantly (p < 0.05) with increasing heating duration. The LPM showed that this mechanism depends on the reduction in afterload resistance rather than on the inertia of blood flow or arterial compliance. The time-averaged wall shear stress, time-averaged production rate of nitric oxide, and helicity in the external iliac arteries increased more markedly than in other segments as the heating duration increased, while the oscillatory shear index (OSI) and relative residence time (RRT) in the AIAS declined with increasing heating duration. The helicity response in the bilateral external iliac arteries was more pronounced than the OSI and RRT responses. CONCLUSION: LLHI can effectively induce a positive hemodynamic environment in the AIAS of young sedentary individuals.
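To make the lumped-parameter reasoning concrete, the sketch below uses a minimal compliance-plus-resistance (Windkessel-style) model, which is an assumption standing in for the authors' LPM: with a prescribed periodic pressure waveform, the compliance term averages to roughly zero over a cycle, so mean flow rises essentially in proportion to the drop in afterload resistance. Parameter values are illustrative only.

```python
import numpy as np

# Minimal lumped-parameter sketch (not the authors' model): a distal bed is
# represented as a compliance C in parallel with an afterload resistance R,
# driven by a prescribed periodic pressure waveform P(t). The flow drawn
# from the artery is Q(t) = P(t)/R + C * dP/dt.
T = 1.0                                    # cardiac period, s
t = np.linspace(0.0, T, 1000, endpoint=False)
P = 93 + 20 * np.sin(2 * np.pi * t / T)    # mmHg, toy pressure waveform
C = 1.0e-3                                 # mL/mmHg (illustrative)

for R in (1.2, 0.9, 0.6):                  # decreasing afterload resistance
    dPdt = np.gradient(P, t)
    Q = P / R + C * dPdt                   # mL/s
    # Over a full period the compliance term averages to ~0, so mean flow
    # rises in proportion to 1/R, while compliance mainly reshapes the pulse.
    print(f"R = {R:.1f} mmHg*s/mL  ->  mean flow = {Q.mean():.1f} mL/s")
```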
Subject(s)
Abdominal Aorta/physiology, Induced Hypothermia, Iliac Artery/physiology, Sedentary Behavior, Vasodilation, Adult, Age Factors, Abdominal Aorta/diagnostic imaging, Blood Flow Velocity, Computer Simulation, Exercise, Humans, Iliac Artery/diagnostic imaging, Cardiovascular Models, Regional Blood Flow, Doppler Ultrasonography, Young Adult
ABSTRACT
Severe cases of COVID-19 often necessitate escalation to the intensive care unit (ICU), where patients may face grave outcomes, including mortality. Chest X-rays play a crucial role in the diagnostic evaluation of COVID-19 patients. Our collaborative efforts with Michigan Medicine in monitoring patient outcomes within the ICU motivated us to investigate the potential advantages of incorporating clinical information and chest X-ray images for predicting patient outcomes. We first propose an analytical workflow to address challenges such as the absence of standardized approaches for image pre-processing and data utilization. We then propose an ensemble learning approach designed to maximize the information derived from multiple prediction algorithms, which entails optimizing the weights within the ensemble and accounting for the common variability present in individual risk scores. Our simulations demonstrate the superior performance of this weighted ensemble averaging approach across various scenarios. We apply this refined ensemble methodology to analyze post-ICU COVID-19 mortality, an outcome observed in 21% of COVID-19 patients admitted to the ICU at Michigan Medicine. Our findings reveal substantial performance improvement when incorporating imaging data compared to models trained solely on clinical risk factors. Furthermore, the addition of radiomic features yields even larger enhancements, particularly among older and more medically compromised patients. These results may carry implications for enhancing patient outcomes in similar clinical contexts.
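A minimal sketch of weighted ensemble averaging, assuming individual models already produce validation-set risk scores: non-negative weights summing to one are chosen to minimize validation log loss. This is an illustration of the general idea, not the authors' exact estimator or their treatment of score variability.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import log_loss

def fit_ensemble_weights(val_scores, y_val):
    """Find non-negative weights summing to 1 that minimize the validation
    log loss of the weighted average of individual risk scores.

    val_scores : (n_samples, n_models) array of predicted probabilities
    y_val      : (n_samples,) array of 0/1 outcomes
    """
    n_models = val_scores.shape[1]

    def objective(w):
        return log_loss(y_val, np.clip(val_scores @ w, 1e-6, 1 - 1e-6))

    w0 = np.full(n_models, 1.0 / n_models)
    res = minimize(
        objective, w0, method="SLSQP",
        bounds=[(0.0, 1.0)] * n_models,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    )
    return res.x

# Toy example: blend three risk models' validation predictions.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
scores = np.clip(
    np.column_stack([y * 0.6 + rng.uniform(0, 0.4, 200) for _ in range(3)]), 0, 1)
print("ensemble weights:", np.round(fit_ensemble_weights(scores, y), 3))
```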
ABSTRACT
In the era of precision medicine, time-to-event outcomes such as time to death or progression are routinely collected, along with high-throughput covariates. These high-dimensional data defy classical survival regression models, which are either infeasible to fit or likely to have low predictive accuracy due to overfitting. To overcome this, recent emphasis has been placed on developing novel approaches for feature selection and survival prognostication. We review various cutting-edge methods that handle survival outcome data with high-dimensional predictors, highlighting recent innovations in machine learning approaches for survival prediction. We cover the statistical intuitions and principles behind these methods and conclude with extensions to more complex settings where competing events are observed. We exemplify these methods with applications to the Boston Lung Cancer Survival Cohort study, one of the largest cancer epidemiology cohorts investigating the complex mechanisms of lung cancer.
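As one concrete example of the penalized survival models discussed in the review, the sketch below fits an elastic-net-penalized Cox model with the lifelines library; the data file and column names are hypothetical, and the penalty settings are illustrative rather than tuned.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Illustrative sketch of penalized Cox regression for high-dimensional
# survival data; file and column names are hypothetical placeholders.
df = pd.read_csv("survival_with_genomic_features.csv")
# assumed columns: time (follow-up), event (1 = death/progression), plus features

cph = CoxPHFitter(penalizer=0.1, l1_ratio=0.5)  # mix of L1 (sparsity) and L2
cph.fit(df, duration_col="time", event_col="event")

# Features with non-negligible coefficients survive the L1 shrinkage and
# serve as a crude form of feature selection.
selected = cph.params_[cph.params_.abs() > 1e-4]
print(selected.sort_values(key=abs, ascending=False).head(20))
print("concordance index:", cph.concordance_index_)
```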
ABSTRACT
As portable chest X-rays are an efficient means of triaging emergent cases, their use has raised the question of whether imaging carries additional prognostic utility for survival among patients with COVID-19. This study assessed the importance of known risk factors for in-hospital mortality and investigated the predictive utility of radiomic texture features using various machine learning approaches. We detected incremental improvements in survival prognostication utilizing texture features derived from emergent chest X-rays, particularly among older patients and those with a higher comorbidity burden. Important features included age, oxygen saturation, blood pressure, and certain comorbid conditions, as well as image features related to the intensity and variability of the pixel distribution. Thus, widely available chest X-rays, in conjunction with clinical information, may be predictive of survival outcomes of patients with COVID-19, especially older, sicker patients, and can aid in disease management by providing additional information.
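The sketch below illustrates one way to derive simple texture and intensity features from a chest X-ray using gray-level co-occurrence matrices in scikit-image. The file name is hypothetical and the feature set is an assumption for illustration, not the exact radiomic pipeline used in the study.

```python
import numpy as np
from skimage import io
from skimage.feature import graycomatrix, graycoprops

# Load a chest X-ray as grayscale and rescale to 8-bit gray levels.
img = io.imread("chest_xray.png", as_gray=True)  # hypothetical file
rng_val = img.max() - img.min()
img8 = np.uint8(np.round(255 * (img - img.min()) / (rng_val + 1e-9)))

# Gray-level co-occurrence matrix over two offsets and two directions.
glcm = graycomatrix(img8, distances=[1, 3], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

# GLCM texture summaries plus first-order intensity statistics.
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
features.update(mean_intensity=float(img8.mean()), intensity_sd=float(img8.std()))
print(features)
```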
Subject(s)
COVID-19, Humans, COVID-19/diagnostic imaging, Prognosis, Hospital Mortality, Machine Learning, Hospitals, Retrospective Studies
ABSTRACT
Despite racial disparities in diseases of aging and premature mortality, non-Hispanic Black Americans tend to have longer leukocyte telomere length (LTL), a biomarker of cellular aging, than non-Hispanic White Americans. Previous findings suggest that exposure to certain persistent organic pollutants (POPs) is both racially patterned and associated with longer LTL. We examine whether Black/White differences in LTL are explained by differences in exposure to 15 POPs by estimating the indirect effect (IE) of self-reported race on LTL that is mediated through nine polychlorinated biphenyls (PCBs), three furans, and three dioxins, as well as their mixtures. Our study population includes 1,251 adults from the 1999-2000 and 2001-2002 cycles of the cross-sectional National Health and Nutrition Examination Survey. We characterized single-pollutant mediation effects by constructing survey-weighted linear regression models. We also implemented several approaches to quantify a global mediation effect of all POPs, including unpenalized linear regression, ridge regression, and examination of three summary exposure scores. We found support for the hypothesis that exposure to PCBs partially mediates Black/White differences in LTL. In single-pollutant models, there were significant IEs of race on LTL through six individual PCBs (118, 138, 153, 170, 180, and 187). Ridge regression (0.013, CI: 0.001, 0.023; 26.0% mediated) and models examining summary exposure scores with linear combinations derived from principal components analysis (0.019, CI: 0.009, 0.029; 34.8% mediated) and Toxic Equivalency Quotient (TEQ) scores (0.016, CI: 0.005, 0.026; 28.8% mediated) showed significant IEs when incorporating survey weights. Exposures to individual POPs and their mixtures, which may arise from residential and occupational segregation, may help explain why Black Americans have longer LTL than their White counterparts, providing an environmental explanation for counterintuitive race differences in cellular aging.
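A minimal single-mediator sketch of the product-of-coefficients indirect effect with survey-weighted linear regression is shown below. Variable names are hypothetical NHANES-style placeholders, and a bootstrap or delta-method step (not shown) would be needed for the confidence intervals reported above.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical extract; column names are placeholders, not the analysis variables.
df = pd.read_csv("nhanes_pcb_ltl.csv")

# Path a: race -> mediator (log PCB-153 concentration), with survey weights.
m_model = smf.wls("log_pcb153 ~ black + age + sex + poverty_ratio",
                  data=df, weights=df["survey_weight"]).fit()

# Path b and direct effect: mediator + race -> leukocyte telomere length.
y_model = smf.wls("ltl ~ log_pcb153 + black + age + sex + poverty_ratio",
                  data=df, weights=df["survey_weight"]).fit()

a = m_model.params["black"]          # race -> pollutant exposure
b = y_model.params["log_pcb153"]     # pollutant exposure -> LTL
indirect = a * b                     # indirect (mediated) effect
total = indirect + y_model.params["black"]
print(f"indirect effect = {indirect:.4f}, proportion mediated = {indirect / total:.1%}")
```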
Subject(s)
Environmental Pollutants, Polychlorinated Biphenyls, Humans, Adult, Persistent Organic Pollutants, Polychlorinated Biphenyls/toxicity, Nutrition Surveys, Cross-Sectional Studies, White People, Leukocytes, Environmental Pollutants/toxicity, Telomere/genetics
ABSTRACT
AIM: Our objective was to develop a risk prediction model for the first 19 weeks of gestation using several potential gestational diabetes mellitus (GDM) predictors, including hepatic, renal, and coagulation function measures. METHODS: A total of 490 pregnant women, 215 with GDM and 275 controls, participated in this case-control study. Forty-three blood examination indices, covering routine blood tests and hepatic, renal, and coagulation function, were obtained. Support vector machines (SVM) and the light gradient boosting machine (lightGBM) were applied to estimate possible associations with GDM and build the prediction model. Cutoff points were estimated using receiver operating characteristic curve analysis. RESULTS: A cutoff based on prothrombin time (PAT-PT) and activated partial thromboplastin time (PAT-APTT) reliably predicted GDM with a sensitivity of 88.3% and specificity of 99.47% (AUC of 94.2%); GDM was negatively correlated with PAT-PT (r = -0.430549) and PAT-APTT (r = -0.725638). When only hepatic and renal function measures were used, a cutoff based on direct bilirubin (DBIL) and fasting plasma glucose (FPG) achieved a sensitivity of 82.6% and specificity of 90.0% (AUC of 91.0%), with GDM negatively correlated with DBIL (r = -0.379882) and positively correlated with FPG (r = 0.458332). CONCLUSION: The results of this study point to the possible roles of PAT-PT and PAT-APTT as potential novel biomarkers for the prediction and earlier diagnosis of GDM. A first-19-weeks risk prediction model incorporating these novel biomarkers accurately identifies women at high risk of GDM, so that preventive and control measures can be applied early.
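The sketch below illustrates the general lightGBM-plus-ROC-cutoff workflow on synthetic data standing in for the blood indices; the Youden index is used here as one common way to choose a cutoff, which is an assumption rather than the authors' stated criterion.

```python
import numpy as np
import lightgbm as lgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic stand-in for the 43 coagulation/hepatic/renal indices.
X, y = make_classification(n_samples=490, n_features=43, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)

clf = lgb.LGBMClassifier(n_estimators=300, learning_rate=0.05)
clf.fit(X_tr, y_tr)
prob = clf.predict_proba(X_te)[:, 1]

# Choose a cutoff maximizing sensitivity + specificity - 1 (Youden index).
fpr, tpr, thresholds = roc_curve(y_te, prob)
youden = np.argmax(tpr - fpr)
print(f"AUC = {roc_auc_score(y_te, prob):.3f}, "
      f"cutoff = {thresholds[youden]:.3f}, "
      f"sensitivity = {tpr[youden]:.3f}, specificity = {1 - fpr[youden]:.3f}")
```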
Subject(s)
Gestational Diabetes, Biomarkers, Blood Glucose, Case-Control Studies, Gestational Diabetes/diagnosis, Female, Humans, Machine Learning, Pregnancy, First Trimester of Pregnancy, Risk Factors
ABSTRACT
AIM: Intrahepatic cholestasis of pregnancy (ICP) is a pregnancy-specific liver disease associated with a significant risk of fetal complications, including pre-term delivery and fetal death. It is typically diagnosed in the third trimester of pregnancy. This study used characteristics from routine maternal examinations in the first 20 weeks' gestation to predict ICP in pregnant women. METHODS: In this retrospective case-control study, 13,329 medical records were collected from pregnant women presenting to the West China Second University Hospital between December 2017 and December 2018. After screening according to strict criteria, a total of 487 patients (250 intrahepatic cholestasis of pregnancy cases and 237 controls) were selected for this study. Seven maternal characteristic indices were collected for analysis, and forty-three routine blood examination indices were obtained from routine hepatic, renal, and coagulation function examinations. The least absolute shrinkage and selection operator (LASSO) regression was applied for variable selection. Classification and regression trees, logistic regression, random forests, and light gradient boosting machines were fit for predictive modeling. We randomly set aside 25% of the original data as a testing set for internal validation of the prediction model's performance. The area under the receiver operating characteristic curve (AUC) was used to compare methods. RESULTS: Eight variables were selected as potentially significant predictors of ICP. The sensitivity, specificity, accuracy, and AUC of the final prediction model obtained by light gradient boosting machines were 72.41%, 79.69%, 76.23%, and 79.77%, respectively. Significantly higher platelet large cell ratio, alanine aminotransferase, glutamyl transpeptidase, and fibrinogen levels were found in cases compared to healthy controls, while activated partial thromboplastin time and mean corpuscular hemoglobin concentration levels were significantly lower (p < .001). CONCLUSIONS: The combination of alanine aminotransferase, glutamyl transpeptidase, fibrinogen, platelet large cell ratio, activated partial thromboplastin time, lactate dehydrogenase, creatinine, and mean corpuscular hemoglobin concentration levels can effectively predict ICP in the first 20 weeks of gestation. These findings could help provide direction for earlier detection and prevention of ICP.
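A hedged sketch of the selection-then-prediction workflow follows, using an L1-penalized logistic regression as a LASSO-style selector and comparing two of the candidate model families by test-set AUC on synthetic data; the feature counts and penalty strength are illustrative only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the maternal and routine blood indices.
X, y = make_classification(n_samples=487, n_features=50, n_informative=8,
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=1)
scaler = StandardScaler().fit(X_tr)

# L1-penalized logistic regression as a LASSO-style variable selector.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso.fit(scaler.transform(X_tr), y_tr)
keep = np.flatnonzero(lasso.coef_.ravel())
print(f"{keep.size} variables selected")

# Fit candidate models on the selected variables and compare test-set AUC.
for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(random_state=1))]:
    model.fit(X_tr[:, keep], y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te[:, keep])[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```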
Subject(s)
Intrahepatic Cholestasis, Pregnancy Complications, Humans, Female, Pregnancy, Alanine Transaminase, Retrospective Studies, Case-Control Studies, gamma-Glutamyltransferase, Intrahepatic Cholestasis/diagnosis, Pregnancy Complications/diagnosis, Fibrinogen
ABSTRACT
Background: Central vein occlusion (CVO) is a serious problem in hemodialysis patients, and sharp recanalization alone often yields unsatisfactory results for refractory CVO. This study evaluated the efficacy and safety of blunt impingement followed by sharp recanalization for the treatment of CVO in hemodialysis patients. Methods: This study retrospectively examined hemodialysis patients with CVO whose occlusions could not be recanalized using standard guidewire and catheter techniques in our department. Recanalization of all CVOs was first attempted using blunt impingement techniques, including a 6-Fr long sheath (Cook Incorporated, Bloomington, IN, USA) and an 8-Fr sheath of the Rosch-Uchida Transjugular Liver Access Set (RUPS-100; Cook Incorporated, Bloomington, IN, USA). If this was not successful, sharp recanalization devices were applied, including the stiff tip of a guidewire (Terumo, Tokyo, Japan), the RUPS-100, and a percutaneous transhepatic cholangial drainage (PTCD) needle (Cook Incorporated, USA). All patients were followed up for at least 4 months postoperatively. The technical success rate, arteriovenous access patency rates, and operation-related complications were analyzed. Results: The procedural success rate was 100.0% (30 of 30). Thirty patients with CVO underwent blunt impingement, with a technical success rate of 70.0% (21 of 30), and 9 patients received sharp recanalization after failed blunt impingement, with a technical success rate of 100.0% (9 of 9). The primary patency rates at 6 and 12 months postoperatively were 86.7% and 53.3%, respectively. The primary assisted patency rates were 93.3% and 63.3%, and the secondary patency rates were 93.3% and 70.0% at 6 and 12 months, respectively. One major procedure-related complication was detected, namely a small injury of the superior vena cava (SVC) wall in a patient receiving recanalization via the stiff end of a guidewire, but this did not require further treatment. Conclusions: Blunt impingement followed by sharp recanalization is a potentially effective and safe technique for interventionalists treating chronic CVO that is refractory to traversal using traditional catheter and guidewire techniques.
ABSTRACT
AIM: To investigate the extent of post-traumatic growth and the correlations of post-traumatic growth with self-perceived stress and with self-perceived burden among continuous ambulatory peritoneal dialysis (CAPD) patients. DESIGN: A cross-sectional study. METHODS: This was a multi-centre study including 752 patients from 44 hospitals. Post-traumatic growth, self-perceived stress, and self-perceived burden were measured using the post-traumatic growth inventory (PTGI), the Chinese version of the perceived stress questionnaire (CPSQ), and the self-perceived burden scale (SPBS), respectively. A multiple stepwise regression analysis was fit with the total PTGI score as the outcome of interest. RESULTS: Patients concurrently experienced post-traumatic growth and stress following peritoneal dialysis. Patients' education level, employment status, and self-perceived stress were all found to be related to growth among Chinese CAPD patients. There was not sufficient evidence to suggest that self-perceived burden was related to experiencing growth.
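A minimal sketch of a stepwise multiple regression with the total PTGI score as the outcome is shown below; it implements a forward-selection variant, and the variable names are hypothetical stand-ins for the survey measures rather than the study's actual coding.

```python
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(df, outcome, candidates, alpha_in=0.05):
    """Forward-selection OLS: add the candidate with the smallest p-value
    at each step until no remaining candidate is significant."""
    selected = []
    while True:
        remaining = [c for c in candidates if c not in selected]
        pvals = {}
        for c in remaining:
            X = sm.add_constant(df[selected + [c]])
            pvals[c] = sm.OLS(df[outcome], X).fit().pvalues[c]
        if not pvals:
            break
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha_in:
            break
        selected.append(best)
    return sm.OLS(df[outcome], sm.add_constant(df[selected])).fit()

# Hypothetical usage with placeholder variable names:
# df = pd.read_csv("capd_survey.csv")
# model = forward_stepwise(df, "ptgi_total",
#                          ["cpsq_total", "spbs_total", "education", "employment"])
# print(model.summary())
```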
Subject(s)
Continuous Ambulatory Peritoneal Dialysis, Peritoneal Dialysis, Posttraumatic Psychological Growth, Cross-Sectional Studies, Humans, Continuous Ambulatory Peritoneal Dialysis/adverse effects, Surveys and Questionnaires
ABSTRACT
Testing for SARS-CoV-2 antibodies is commonly used to determine prior COVID-19 infection and to gauge levels of infection- or vaccine-induced immunity. Michigan Medicine, a primary regional health center, provided an ideal setting for understanding serologic testing patterns over time. Between 27 April 2020 and 3 May 2021, characteristics of 10,416 individuals presenting for SARS-CoV-2 antibody tests (10,932 tests in total) were collected. Relative to the COVID-19 vaccine roll-out date, 14 December 2020, the data were split into pre-vaccine launch (8026 individuals) and post-vaccine launch (2587 individuals) periods and contrasted with untested individuals to identify factors associated with being tested and with seropositivity. An exploratory analysis of vaccine-mediated seropositivity was performed in 347 fully vaccinated individuals. Predictors of being tested included age, sex, smoking, neighborhood variables, and pre-existing conditions. Seropositivity in the pre-vaccine launch period was 9.2% and increased to 46.7% in the post-vaccine launch period. In the pre-vaccine launch period, seropositivity was significantly associated with age (per 10 years; OR = 0.80 (0.73, 0.89)), ever-smoker status (0.49 (0.35, 0.67)), respiratory disease (4.38 (3.13, 6.12)), circulatory disease (2.09 (1.48, 2.96)), liver disease (2.06 (1.11, 3.84)), non-Hispanic Black race/ethnicity (2.18 (1.33, 3.58)), and population density (1.10 (1.03, 1.18)). Except for the latter two, these associations remained statistically significant in the post-vaccine launch period. The positivity rate among fully vaccinated individuals was 296/347 (85.3% (81.0%, 88.8%)).
ABSTRACT
AIM: The study aimed to investigate the current status of reproductive concerns and explore the associated factors among young female chronic kidney disease (CKD) patients. DESIGN: A multi-center cross-sectional study was designed. METHODS: The study was conducted in six representative tertiary hospitals across southwest China. A total of 295 female CKD patients between 18 and 45 years of age completed a 20-min, web-based survey, which included a demographic and disease-related information questionnaire, the Reproductive Concerns Scale, the Generalized Anxiety Disorder-7 (GAD-7) instrument, and the Patient Health Questionnaire-9 (PHQ-9) instrument. RESULTS: In total, 270 valid questionnaires were collected. The mean reproductive concern score was 54.39 ± 10.90 (out of a maximum of 90), with mean sub-scale scores ranging from 7.80 ± 1.69 to 10.44 ± 1.85. Multiple regression analysis showed that those with higher reproductive concerns were more likely to have pregnancy intentions, to be in CKD stages 1-3, and to have a higher GAD-7 score. This study offers further evidence of the need for improved education and emotional support surrounding reproductive concerns among young Chinese women with CKD.
Subject(s)
Anxiety Disorders, Chronic Renal Insufficiency, Cross-Sectional Studies, Female, Humans, Pregnancy, Chronic Renal Insufficiency/epidemiology, Reproduction, Surveys and Questionnaires
ABSTRACT
BACKGROUND: Understanding risk factors for short- and long-term COVID-19 outcomes has implications for current guidelines and practice. We study whether early-identified risk factors for COVID-19 persist one year later and across varying disease progression trajectories. METHODS: This was a retrospective study of 6,731 COVID-19 patients presenting to Michigan Medicine between March 10, 2020 and March 10, 2021. We describe disease progression trajectories from diagnosis to potential hospital admission, discharge, readmission, or death. Outcomes for all patients included the rate of medical encounters, hospitalization-free survival, and overall survival; outcomes for hospitalized patients included discharge versus in-hospital death and readmission. Risk factors included patient age, sex, race, body mass index, and 29 comorbidity conditions. RESULTS: Younger, non-Black patients utilized healthcare resources at higher rates, while older, male, and Black patients had higher rates of hospitalization and mortality. Diabetes with complications, coagulopathy, fluid and electrolyte disorders, and blood loss anemia were risk factors for these outcomes, and the same conditions were associated with lower discharge and higher inpatient mortality rates. CONCLUSIONS: This study found differences in healthcare utilization and adverse COVID-19 outcomes, as well as differing risk factors for short- and long-term outcomes throughout disease progression. These findings may inform providers in emergency departments or critical care settings of treatment priorities, empower healthcare stakeholders with effective disease management strategies, and aid health policy makers in optimizing allocations of medical resources.
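The survival outcomes described above (hospitalization-free and overall survival) can be summarized with Kaplan-Meier curves and a log-rank comparison, as in the hedged sketch below; the file and column names are hypothetical, and the two-group comparison is a simplification of the full risk-factor modeling.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical cohort extract with the columns referenced below.
df = pd.read_csv("covid_cohort.csv")

for label, time_col, event_col in [
    ("hospitalization-free survival", "days_to_hosp_or_censor", "hospitalized"),
    ("overall survival", "days_to_death_or_censor", "died"),
]:
    kmf = KaplanMeierFitter()
    groups = {}
    for race, sub in df.groupby("race_group"):
        kmf.fit(sub[time_col], sub[event_col], label=str(race))
        groups[race] = sub
        print(label, race, "median:", kmf.median_survival_time_)
    # Two-group log-rank test as a simple comparison (assumes two groups).
    g1, g2 = list(groups.values())[:2]
    res = logrank_test(g1[time_col], g2[time_col], g1[event_col], g2[event_col])
    print(label, "log-rank p =", round(res.p_value, 4))
```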
Subject(s)
COVID-19/epidemiology, Hospitalization, Patient Acceptance of Health Care/statistics & numerical data, Adolescent, COVID-19/diagnosis, Female, Hospital Mortality, Humans, Male, Middle Aged, Prognosis, Retrospective Studies, Risk Factors
ABSTRACT
BACKGROUND: The prognostic value of blood pressure variability (BPV) in patients receiving hemodialysis is inconclusive. In this study, we aimed to assess the association between BPV and clinical outcomes in the hemodialysis population. METHODS: The PubMed/MEDLINE, EMBASE, Ovid, Cochrane Library, and Web of Science databases were searched for relevant articles published until April 1, 2020. Studies on the association between BPV and prognosis in patients receiving hemodialysis were included. RESULTS: A total of 14 studies (37,976 patients) were included in the analysis. In patients receiving hemodialysis, systolic BPV was associated with higher all-cause (hazard ratio [HR]: 1.13; 95% confidence interval [CI]: 1.07-1.19; p < 0.001) and cardiovascular (HR: 1.16; 95% CI: 1.10-1.22; p < 0.001) mortality. In the stratified analysis of systolic BPV, interdialytic systolic BPV, rather than 44-h ambulatory systolic BPV or intradialytic systolic BPV, was found to be related to both all-cause (HR: 1.11; 95% CI: 1.05-1.17; p = 0.001) and cardiovascular (HR: 1.14; 95% CI: 1.06-1.22; p < 0.001) mortality. Among the different BPV metrics, the coefficient of variation of systolic blood pressure was a predictor of both all-cause (p = 0.01) and cardiovascular (p = 0.002) mortality. Although diastolic BPV was associated with all-cause mortality (HR: 1.09; 95% CI: 1.01-1.17; p = 0.02) in patients receiving hemodialysis, it failed to predict cardiovascular mortality (HR: 0.86; 95% CI: 0.52-1.42; p = 0.56). CONCLUSIONS: This meta-analysis revealed that, in patients receiving hemodialysis, interdialytic systolic BPV was associated with both increased all-cause and cardiovascular mortality. Furthermore, the coefficient of variation of systolic blood pressure was identified as a potentially promising BPV metric for predicting all-cause and cardiovascular mortality. The use of 44-h ambulatory systolic BPV, intradialytic systolic BPV, and metrics of diastolic BPV in the prognosis of the hemodialysis population requires further investigation (PROSPERO registry number: CRD42019139215).
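For readers unfamiliar with how study-level hazard ratios are pooled, the sketch below implements a standard DerSimonian-Laird random-effects pooling of hazard ratios reported with 95% CIs; the input numbers are made up for illustration, and the exact software used in the meta-analysis is not specified in the abstract.

```python
import numpy as np

def random_effects_pool(hr, ci_low, ci_high):
    """DerSimonian-Laird random-effects pooling of hazard ratios reported
    with 95% CIs; returns the pooled HR and its 95% CI."""
    y = np.log(hr)                               # study-level log HRs
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
    w = 1 / se**2                                # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fe) ** 2)              # Cochran's Q
    dof = len(y) - 1
    tau2 = max(0.0, (q - dof) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (se**2 + tau2)                    # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1 / np.sum(w_re))
    return np.exp(y_re), np.exp(y_re - 1.96 * se_re), np.exp(y_re + 1.96 * se_re)

# Toy example with made-up study estimates (not the 14 included studies):
print(random_effects_pool(hr=np.array([1.10, 1.18, 1.05]),
                          ci_low=np.array([1.02, 1.05, 0.95]),
                          ci_high=np.array([1.19, 1.33, 1.16])))
```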
ABSTRACT
OBJECTIVE: To examine which factors are driving improvement in the Dialysis Facility Compare (DFC) star ratings and to test whether nonclinical facility characteristics are associated with observed longitudinal changes in the star ratings. DATA SOURCES: Data were collected from eligible patients in over 6,000 Medicare-certified dialysis facilities from three annual star rating and individual measure updates, publicly released on DFC in October 2015, October 2016, and April 2018. STUDY DESIGN: Changes in the star rating and individual quality measures were investigated across three public data releases. Year-to-year changes in the star ratings were linked to facility characteristics, adjusting for baseline differences in quality measure performance. DATA COLLECTION: Data from publicly reported quality measures, including standardized mortality, hospitalization, and transfusion ratios, dialysis adequacy, type of vascular access for dialysis, and management of mineral and bone disease, were extracted from annual DFC data releases. PRINCIPAL FINDINGS: The proportion of four- and five-star facilities increased from 30.0% to 53.4% between October 2015 and April 2018. Quality improvement was driven by the domain of care containing the dialysis adequacy and hypercalcemia measures. Additionally, independently owned facilities and facilities belonging to smaller dialysis organizations had significantly lower odds of year-to-year improvement than facilities belonging to either of the two large dialysis organizations (Odds Ratio [OR]: 0.736, 95% Confidence Interval [CI]: 0.631-0.856 and OR: 0.797, 95% CI: 0.723-0.879, respectively). CONCLUSIONS: The percentage of four- and five-star facilities has increased markedly over a three-year time period. These changes were driven by improvement in the specific quality measures that may be most directly under the control of the dialysis facility.
Subject(s)
Chronic Kidney Failure/therapy, Medicare/trends, Quality of Health Care/trends, Renal Dialysis/trends, Aged, Benchmarking/trends, Female, Health Services Accessibility/trends, Humans, Male, Health Care Quality Indicators/trends, United States
ABSTRACT
Importance: There is a need for studies evaluating the risk factors for COVID-19 and mortality among the entire Medicare long-term dialysis population using Medicare claims data. Objective: To identify risk factors associated with COVID-19 and mortality in Medicare patients undergoing long-term dialysis. Design, Setting, and Participants: This retrospective, claims-based cohort study compared mortality trends of patients receiving long-term dialysis in 2020 with previous years (2013-2019) and fit Cox regression models to identify risk factors for contracting COVID-19 and postdiagnosis mortality. The cohort included the national population of Medicare patients receiving long-term dialysis in 2020, derived from clinical and administrative databases. COVID-19 was identified through Medicare claims sources. Data were analyzed on May 17, 2021. Main Outcomes and Measures: The 2 main outcomes were COVID-19 and all-cause mortality. Associations of claims-based risk factors with COVID-19 and mortality were investigated prediagnosis and postdiagnosis. Results: Among a total of 498,169 Medicare patients undergoing dialysis (median [IQR] age, 66 [56-74] years; 215,935 [43.1%] women and 283,227 [56.9%] men), 60,090 (12.1%) had COVID-19, among whom 15,612 patients (26.0%) died. COVID-19 rates were significantly higher among Black (21,787 of 165,830 patients [13.1%]) and Hispanic (13,530 of 86,871 patients [15.6%]) patients compared with non-Black patients (38,303 of 332,339 [11.5%]), as well as patients with short (ie, 1-89 days; 7738 of 55,184 patients [14.0%]) and extended (ie, ≥90 days; 10,737 of 30,196 patients [35.6%]) nursing home stays in the prior year. Adjusting for all other risk factors, residing in a nursing home 1 to 89 days in the prior year was associated with a higher hazard for COVID-19 (hazard ratio [HR] vs 0 days, 1.60; 95% CI, 1.56-1.65) and for postdiagnosis mortality (HR, 1.31; 95% CI, 1.25-1.37), as was residing in a nursing home for an extended stay (COVID-19: HR, 4.48; 95% CI, 4.37-4.59; mortality: HR, 1.12; 95% CI, 1.07-1.16). Black race (HR vs non-Black, 1.25; 95% CI, 1.23-1.28) and Hispanic ethnicity (HR vs non-Hispanic, 1.68; 95% CI, 1.64-1.72) were associated with significantly higher hazards of COVID-19. Although home dialysis was associated with lower COVID-19 rates (HR, 0.77; 95% CI, 0.75-0.80), it was associated with higher mortality (HR, 1.18; 95% CI, 1.11-1.25). Conclusions and Relevance: These results shed light on COVID-19 risk factors and outcomes among Medicare patients receiving long-term chronic dialysis and could inform policy decisions to mitigate the significant extra burden of COVID-19 and death in this population.
Subject(s)
COVID-19/etiology, Kidney Diseases/mortality, Medicare, Renal Dialysis, Aged, COVID-19/epidemiology, COVID-19/mortality, Ethnicity, Female, Humans, Kidney Diseases/epidemiology, Kidney Diseases/therapy, Male, Middle Aged, Nursing Homes, Proportional Hazards Models, Retrospective Studies, Risk Factors, SARS-CoV-2, United States/epidemiology
ABSTRACT
INTRODUCTION: Hemodialysis catheter-related superior vena cava (SVC) occlusions can cause considerable morbidity for patients and be challenging to treat if refractory to conventional guide wire traversal. This pilot study assessed the feasibility and safety of sharp recanalization of SVC occlusion in hemodialysis patients. METHODS: This study retrospectively enrolled hemodialysis patients treated in West China Hospital who were diagnosed with SVC occlusion and failed traditional guide wire traversal from January 2014 to November 2017. In brief, a guide wire from the femoral approach was advanced to the lower end of the obstructive lesion to act as a target, while the stiff end of a hydrophilic wire was advanced through a jugular approach. Under fluoroscopic guidance with biplane imaging, the occlusive SVC lesion was penetrated with the stiff wire, which was then snared and pulled through. Graded dilation of the SVC and subsequent tunneled-cuffed catheter implantation were performed. Demographic information and clinical outcomes were recorded and evaluated. FINDINGS: Sixteen patients with a mean age of 62 ± 13 years (13 females and 3 males) who received SVC sharp recanalization were included in this study. The sharp recanalization procedure was successfully performed in 14 patients (87.5%). Two patients developed SVC laceration and hemopericardium but remained asymptomatic and required no surgical repair. One patient suffered ventricular fibrillation during the procedure; despite return of spontaneous circulation, the patient unfortunately died of gastrointestinal tract bleeding after 3 days in the ICU. Follow-up showed a 6-month catheter patency of 92.85% and a 12-month catheter patency of 58.33%. No long-term procedure-related complications were recorded. DISCUSSION: Sharp recanalization might be a feasible strategy for managing SVC occlusion in hemodialysis patients. The potential life-threatening complications (cardiac arrhythmia and SVC laceration) necessitate strict eligibility screening, skillful operation, and avoidance of over-dilation of the SVC.
Subject(s)
Catheterization/adverse effects, Renal Dialysis/adverse effects, Superior Vena Cava Syndrome/surgery, Superior Vena Cava/abnormalities, Aged, Feasibility Studies, Female, Humans, Male, Middle Aged, Retrospective Studies
ABSTRACT
Importance: The diagnostic tests for COVID-19 have a high false negative rate, but not everyone with an initial negative result is re-tested. Michigan Medicine, being one of the primary regional centers accepting COVID-19 cases, provided an ideal setting for studying COVID-19 repeated testing patterns during the first wave of the pandemic. Objective: To identify the characteristics of patients who underwent repeated testing for COVID-19 and determine whether repeated testing was associated with patient characteristics and with downstream outcomes among positive cases. Design: This cross-sectional study described the pattern of testing for COVID-19 at Michigan Medicine. The main hypothesis under consideration was whether patient characteristics differed between those tested once and those who underwent multiple tests. We then restricted our attention to those who had at least one positive test and studied repeated testing patterns in patients with severe COVID-19-related outcomes (testing positive, hospitalization, and ICU care). Setting: Demographic and clinical characteristics, test results, and health outcomes for 15,920 patients presenting to Michigan Medicine between March 10 and June 4, 2020 for a diagnostic test for COVID-19 were collected from their electronic medical records on June 24, 2020. Data on the number and types of tests administered to a given patient, as well as the sequences of patient-specific test results, were derived from records of patient laboratory results. Participants: Anyone tested between March 10 and June 4, 2020 at Michigan Medicine with a diagnostic test for COVID-19 in their electronic health records was included in our analysis. Exposures: Comparison of repeated testing across patient demographics, clinical characteristics, and patient outcomes. Main Outcomes and Measures: Whether patients underwent repeated diagnostic testing for SARS-CoV-2 at Michigan Medicine. Results: Between March 10 and June 4, 19,540 tests were ordered for 15,920 patients, with most patients tested only once (13,596; 85.4%) and never testing positive (14,753; 92.7%). Five patients were tested 10 or more times, and there was substantial variation in test results within patients. After fully adjusting for patient and neighborhood socioeconomic status (NSES) and demographic characteristics, patients with circulatory diseases (OR: 1.42; 95% CI: (1.18, 1.72)), any cancer (OR: 1.14; 95% CI: (1.01, 1.29)), Type 2 diabetes (OR: 1.22; 95% CI: (1.06, 1.39)), kidney diseases (OR: 1.95; 95% CI: (1.71, 2.23)), and liver diseases (OR: 1.30; 95% CI: (1.11, 1.50)) were found to have higher odds of undergoing repeated testing than those without. Additionally, compared with non-Hispanic whites, non-Hispanic blacks were found to have higher odds (OR: 1.21; 95% CI: (1.03, 1.43)) of receiving additional testing. Females were found to have lower odds (OR: 0.86; 95% CI: (0.76, 0.96)) of receiving additional testing than males. Neighborhood poverty level was also associated with receiving additional testing: for each 1% increase in the proportion of the population with annual income below the federal poverty level, the odds ratio of receiving repeated testing was 1.01 (95% CI: (1.00, 1.01)).
Focusing on only those 1167 patients with at least one positive result in their full testing history, patient age in years (OR: 1.01; 95% CI: (1.00, 1.03)) and prior history of kidney diseases (OR: 2.15; 95% CI: (1.36, 3.41)) remained significantly associated with undergoing repeated testing. After adjusting for both patient demographic factors and NSES, hospitalization (OR: 7.44; 95% CI: (4.92, 11.41)) and ICU-level care (OR: 6.97; 95% CI: (4.48, 10.98)) were significantly associated with repeated testing. Of these 1167 patients, 306 underwent repeated testing, and 1118 tests were performed on these 306 patients, of which 810 (72.5%) were done during inpatient stays, substantiating that most repeated tests for test-positive patients were done during hospitalization or ICU care. Additionally, using the repeated testing data, we estimated the "real world" false negative rate of the RT-PCR diagnostic test to be 23.8% (95% CI: (19.5%, 28.5%)). Conclusions and Relevance: This study sought to quantify the pattern of repeated testing for COVID-19 at Michigan Medicine. While most patients were tested once and received a negative result, a meaningful subset of patients (2324; 14.6% of the population who got tested) underwent multiple rounds of testing (5,944 tests in total were done on these 2324 patients, an average of 2.6 tests per person), with five patients tested 10 or more times. As expected, both hospitalization and ICU care differed significantly between patients who underwent repeated testing and those tested only once. These results shed light on testing patterns and have important implications for understanding the variation of repeated testing results within and between patients.
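As a small illustration of how a rate with a 95% confidence interval of this kind can be computed, the sketch below uses a Wilson interval for a proportion; the counts are hypothetical placeholders and this is not the study's actual estimation procedure for the false negative rate.

```python
from statsmodels.stats.proportion import proportion_confint

# Hypothetical counts: initially negative tests later contradicted by a
# positive result, out of all repeat-tested discordant candidates.
false_negatives, discordant_total = 100, 420   # placeholder values
rate = false_negatives / discordant_total
low, high = proportion_confint(false_negatives, discordant_total,
                               alpha=0.05, method="wilson")
print(f"estimated false negative rate = {rate:.1%} (95% CI: {low:.1%}, {high:.1%})")
```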
ABSTRACT
BACKGROUND: Prior research on reducing variation in housestaff handoff procedures has depended on proprietary checkout software. Use of low-technology standardization techniques has not been widely studied. PURPOSE: We wished to determine whether standardizing the process of intern sign-out using low-technology sign-out tools could reduce perceived errors and missing handoff data. METHODS: We conducted a pre-post prospective study of a cohort of 34 interns on a general internal medicine ward. Night interns coming off duty and day interns reassuming care were surveyed on their perception of erroneous sign-out data, mistakes made by the night intern overnight, and occurrences unanticipated by sign-out. Trainee satisfaction with the sign-out process was assessed with a 5-point Likert survey. RESULTS: There were 399 intern surveys performed during the 8 weeks before and 6 weeks after the introduction of a standardized sign-out form. The response rate was 95% for the night interns and 70% for the interns reassuming care in the morning. After the standardized form was introduced, night interns were significantly (p < .003) less likely to detect missing sign-out data, including missing important diseases, contingency plans, or medications. Standardized sign-out did not significantly alter the frequency of dropped tasks or missed lab and X-ray data as perceived by the night intern. However, the day teams perceived significantly fewer errors on the part of the night intern (p = .001) after introduction of the standardized sign-out sheet. There was no difference in mean Likert scores of resident satisfaction with sign-out before and after the intervention. CONCLUSION: Standardized written sign-out sheets significantly improve the completeness and effectiveness of handoffs between night and day interns. Further research is needed to determine whether these process improvements are related to better patient outcomes.