Results 1 - 20 of 73
1.
Transplant Direct; 10(2): e1572, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38264297

ABSTRACT

Background: Pulmonary embolism (PE) is a rare yet serious postoperative complication for lung transplant recipients (LTRs). The association between the timing and severity of PE and the development of chronic lung allograft dysfunction (CLAD) has not been described. Methods: This single-center, retrospective cohort analysis of first-time LTRs included bilateral and single lung transplants and excluded multiorgan transplants and retransplants. PEs were confirmed by computed tomography angiography (CTA) or ventilation/perfusion scans. Infarctions were confirmed on CTA by a trained physician. PE severity was defined by the Pulmonary Embolism Severity Index (PESI) score, a 30-d post-PE mortality risk calculator, stratified as low risk (classes I and II, 0-85), intermediate risk (classes III and IV, 86-125), and high risk (class V, >125). PE and PESI score were analyzed against the outcomes of overall survival, graft failure, and CLAD. Results: We identified 57 of 928 patients (6.14%) in the LTR cohort who had at least 1 PE, with a median follow-up of 1623 d. In the subset with PE, the median PESI score was 85 (IQR, 75.8-96.5). Most of the available PESI scores (32/56) were in the low-risk category. In the CLAD analysis, 49 LTRs had a PE, and 16 of these (33%) had infarction. When treating PE as time-dependent and adjusting for covariates, PE was significantly associated with death (hazard ratio [HR] 1.8; 95% confidence interval [CI], 1.3-2.5), with increased risk of graft failure, defined as retransplant, CLAD, or death (HR 1.8; 95% CI, 1.3-2.5), and with CLAD (HR 1.7; 95% CI, 1.2-2.4). Infarction was not associated with CLAD or death, and the PESI risk category was not a significant predictor of either outcome. Conclusions: PE is associated with decreased survival and an increased hazard of developing CLAD. The PESI score was not a reliable predictor of CLAD or death in this lung transplant cohort.
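
As a methods illustration, the time-dependent treatment of PE can be sketched with a start-stop ("long format") Cox model. The sketch below is not the authors' code: the file and column names (id, start, stop, pe, age, death) are hypothetical, and it assumes one row per interval of constant covariate values per recipient.

```python
# A minimal sketch (assumed data layout, not the study code) of a
# time-dependent Cox model for post-transplant PE using lifelines.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Hypothetical long-format data: one row per interval during which
# covariates are constant; 'pe' flips to 1 at the interval of the PE.
df = pd.read_csv("ltr_long_format.csv")  # columns: id, start, stop, pe, age, death

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="death",
        start_col="start", stop_col="stop")
ctv.print_summary()  # exp(coef) for 'pe' corresponds to an HR like the 1.8 reported
```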

3.
JAMA Netw Open; 6(12): e2348914, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-38127347

ABSTRACT

Importance: Studies elucidating determinants of residential neighborhood-level health inequities are needed. Objective: To quantify associations of structural racism indicators with neighborhood prevalence of chronic kidney disease (CKD), diabetes, and hypertension. Design, Setting, and Participants: This cross-sectional study used public data (2012-2018) and deidentified electronic health records (2017-2018) to describe the burden of structural racism and the prevalence of CKD, diabetes, and hypertension in 150 residential neighborhoods (US census block groups) in Durham County, North Carolina, and quantified their associations using Bayesian models accounting for spatial correlation and residents' age. Data were analyzed from January 2021 to May 2023. Exposures: Global (neighborhood percentage of White residents, economic-racial segregation, and area deprivation) and discrete (neighborhood child care centers, bus stops, tree cover, reported violent crime, impervious areas, evictions, election participation, income, poverty, education, unemployment, health insurance coverage, and police shootings) indicators of structural racism. Main Outcomes and Measures: Neighborhood prevalence of CKD, diabetes, and hypertension. Results: A total of 150 neighborhoods were included in the analyses, with a median (IQR) of 1708 (1109-2489) residents; median (IQR) proportions of 2% (0%-6%) Asian residents, 30% (16%-56%) Black residents, 10% (4%-20%) Hispanic or Latino residents, 0% (0%-1%) Indigenous residents, and 44% (18%-70%) White residents; and a median (IQR) residential income of $54,531 ($37,729.25-$78,895.25). In models evaluating global indicators, greater burden of structural racism was associated with greater prevalence of CKD, diabetes, and hypertension (eg, per 1-SD decrease in neighborhood White population percentage: CKD prevalence ratio [PR], 1.27; 95% highest density interval [HDI], 1.18-1.35; diabetes PR, 1.43; 95% HDI, 1.37-1.52; hypertension PR, 1.19; 95% HDI, 1.14-1.25). Similarly, in models evaluating discrete indicators, greater burden of structural racism was associated with greater neighborhood prevalence of CKD, diabetes, and hypertension (eg, per 1-SD increase in reported violent crime: CKD PR, 1.15; 95% HDI, 1.07-1.23; diabetes PR, 1.20; 95% HDI, 1.13-1.28; hypertension PR, 1.08; 95% HDI, 1.02-1.14). Conclusions and Relevance: This cross-sectional study found several global and discrete structural racism indicators associated with increased prevalence of health conditions in residential neighborhoods. Although inferences from this cross-sectional and ecological study warrant caution, they may help guide the development of future community health interventions.
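
To make the modeling concrete, here is a deliberately stripped-down Bayesian Poisson prevalence model. It omits the spatial-correlation and age-adjustment structure the study used, and the counts, populations, and variable names are invented; it only illustrates how a prevalence ratio per 1-SD change in an exposure falls out of exp(beta).

```python
# A minimal, non-spatial sketch of a Bayesian prevalence-ratio model; not the study code.
import numpy as np
import pymc as pm

cases = np.array([30, 55, 12])      # toy condition counts per neighborhood
pop = np.array([1700, 2400, 1100])  # toy resident counts per neighborhood
x = np.array([0.2, -1.1, 0.9])      # toy exposure, standardized (mean 0, SD 1)

with pm.Model():
    alpha = pm.Normal("alpha", 0, 5)
    beta = pm.Normal("beta", 0, 1)
    mu = pop * pm.math.exp(alpha + beta * x)  # expected cases
    pm.Poisson("y", mu=mu, observed=cases)
    idata = pm.sample(1000, tune=1000)

# Prevalence ratio per 1-SD increase in the exposure; summarize with a 95% HDI.
pr = np.exp(idata.posterior["beta"])
```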


Subjects
Diabetes Mellitus; Hypertension; Renal Insufficiency, Chronic; Humans; Cross-Sectional Studies; Bayes Theorem; Prevalence; Systemic Racism; Chronic Disease; Hypertension/epidemiology
4.
JAMA Netw Open; 6(12): e2347826, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-38100105

ABSTRACT

Importance: It is unclear whether center-level factors are associated with racial equity in living donor kidney transplant (LDKT). Objective: To evaluate center-level factors and racial equity in LDKT during an 11-year period. Design, Setting, and Participants: A retrospective longitudinal cohort study, completed in February 2023, of US transplant centers with at least 12 annual LDKTs from January 1, 2008, to December 31, 2018, identified in the Health Resources and Services Administration database and linked to the US Renal Data System and the Scientific Registry of Transplant Recipients. Main Outcomes and Measures: Observed and model-based estimated Black-White mean LDKT rate ratios (RRs), where an RR of 1 indicates racial equity and values less than 1 indicate a lower rate of LDKT among Black patients than among White patients; and estimated yearly best-case center-specific LDKT RRs between Black and White individuals, in which modifiable center characteristics were set to values that would facilitate access to LDKT. Results: The final cohorts included 394,625 waitlisted adults, of whom 33.1% were Black and 66.9% were White, and 57,222 adult LDKT recipients, of whom 14.1% were Black and 85.9% were White. Among 89 transplant centers, estimated yearly center-level RRs between Black and White individuals, accounting for center and population characteristics, ranged from 0.0557 in 2008 to 0.771 in 2018. The yearly median RRs ranged from 0.216 in 2016 to 0.285 in 2010. Model-based estimations for the hypothetical best-case scenario resulted in little change in the minimum RR (from 0.0557 to 0.0549) but a greater positive shift in the maximum RR, from 0.771 to 0.895. Relative to the observed 582 LDKTs in Black patients and 3837 in White patients, the 2018 hypothetical model estimated an additional 423 LDKTs for Black patients (a 72.7% increase) and an additional 1838 for White patients (a 47.9% increase). Conclusions and Relevance: In this cohort study of patients with kidney failure, no substantial improvement occurred over time in either the observed or the covariate-adjusted estimated RRs. Under the best-case hypothetical estimations, modifying centers' participation in the paired exchange and voucher programs and increasing access to public insurance may contribute to improved racial equity in LDKT. Additional work is needed to identify center-level and program-specific strategies to improve racial equity in access to LDKT.
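
As a quick arithmetic check, the percentage increases reported for the 2018 best-case scenario follow directly from the observed and additional LDKT counts given in the abstract:

```python
# Verify the reported percentage increases from the 2018 counts in the abstract.
black_obs, black_add = 582, 423
white_obs, white_add = 3837, 1838

print(round(100 * black_add / black_obs, 1))  # 72.7 -> % increase, Black patients
print(round(100 * white_add / white_obs, 1))  # 47.9 -> % increase, White patients
```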


Subjects
Kidney Transplantation; Adult; Humans; Cohort Studies; Living Donors; Longitudinal Studies; Retrospective Studies; Radiopharmaceuticals
5.
J Clin Orthop Trauma; 43: 102209, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37502096

ABSTRACT

Background: Race and insurance status are independent predictors of healthcare outcomes following lower-extremity trauma. Level I trauma centers show better outcomes overall, but it has not been extensively studied whether they specifically lower complication rates and shorten length of stay for patients of Black race, with low socioeconomic status, and/or without private health insurance. We performed a study to determine whether Level I trauma centers can improve the complication rate among those at high risk of adverse outcomes due to socioeconomic differences. Hypothesis: Level I trauma centers will mitigate the disparity in complication rates and length of stay associated with racial and socioeconomic differences among trauma patients with an open tibia fracture. Patients and methods: The National Trauma Databank was reviewed from 2008 to 2015, identifying 81,855 encounters with an open tibia fracture, 33,047 of which were treated at a Level I trauma center. Regression models estimated the effects of race and insurance status on outcomes by trauma center while controlling for confounders. Results: Black race [OR 1.36, 95% CI, 1.17-1.58; p < 0.05] and "other" race [OR 1.28, 95% CI, 1.07-1.52; p < 0.05] were associated with higher odds of injury-specific complications. Patients without private insurance and patients of non-White race had a significantly longer length of stay than White patients [coefficient 1.66, 95% CI, 1.37-1.94; p < 0.001]. These differences persisted in patients treated at an American College of Surgeons (ACS) Level I trauma center. Discussion: Treatment at an ACS Level I trauma center did not reduce the independent effects of race and insurance status on outcomes after open tibia fracture, emphasizing the need to recognize this disparity and improve care for at-risk populations.
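
The adjusted models described (logistic regression for complications, a linear model for length of stay) can be sketched as follows; the file name and covariates (e.g., iss for injury severity) are hypothetical stand-ins, not NTDB field names.

```python
# A minimal sketch of the adjusted models described; not the study code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ntdb_open_tibia.csv")  # hypothetical analysis file

# Odds of an injury-specific complication by race and insurance status.
logit = smf.logit("complication ~ C(race) + C(insurance) + age + iss",
                  data=df).fit()
print(np.exp(logit.params))  # odds ratios, e.g., OR ~1.36 for Black race

# Length of stay with the same adjustment set.
ols = smf.ols("los ~ C(race) + C(insurance) + age + iss", data=df).fit()
print(ols.params)  # e.g., coefficient ~1.66 days reported in the abstract
```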

6.
Clin Transplant; 37(10): e15048, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37363857

ABSTRACT

INTRODUCTION: New technologies to reduce primary graft dysfunction (PGD) and improve outcomes after heart transplantation are costly. Adopting these technologies requires a better understanding of health care utilization, specifically the costs related to PGD. METHODS: Records were examined from all adult patients who underwent orthotopic heart transplantation (OHT) between July 1, 2013, and July 30, 2019, at a single institution. Total costs were categorized into variable, fixed, direct, and indirect costs. Patient costs from the time of transplantation to hospital discharge were z-score transformed and modeled in a linear regression model adjusted for potential confounders and in-hospital mortality. The quintile of patient costs was modeled using a proportional odds model, adjusted for confounders and in-hospital mortality. RESULTS: A total of 359 patients were analyzed, including 142 with PGD and 217 without PGD. PGD was associated with a 0.42 increase in the z-score of total patient costs (95% CI: 0.22-0.62; p < .0001). Additionally, any grade of PGD was associated with a 2.95-fold increase in the odds of a higher cost quintile (95% CI: 1.94-4.46, p < .0001). These differences were substantially greater when PGD was severe. Similar results were obtained for fixed, variable, direct, and indirect costs. CONCLUSIONS: PGD after OHT impacts morbidity, mortality, and health care utilization. We found that PGD after OHT results in a significant increase in total patient costs, and this increase was substantially higher when PGD was severe. SUMMARY: Primary graft dysfunction after heart transplantation impacts morbidity, mortality, and health care utilization. PGD after OHT is costly, and investments should be made to reduce the burden of PGD to improve patient outcomes.
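
A compact sketch of the two cost models, z-scored costs in a linear regression and cost quintile in a proportional odds (ordered logit) model, might look like this, assuming a hypothetical analysis file and column names:

```python
# A minimal sketch of the z-score and proportional odds cost models; not the study code.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("oht_costs.csv")  # hypothetical file: total_cost, pgd, age, inhosp_death

# z-score transformation of total patient costs, then linear regression.
df["cost_z"] = (df["total_cost"] - df["total_cost"].mean()) / df["total_cost"].std()
lin = smf.ols("cost_z ~ pgd + age + inhosp_death", data=df).fit()
print(lin.params["pgd"])  # cf. the 0.42 z-score increase reported

# Cost quintile as an ordered outcome (proportional odds / ordered logit).
quintile = pd.qcut(df["total_cost"], 5)  # ordered categorical outcome
ordfit = OrderedModel(quintile, df[["pgd", "age", "inhosp_death"]],
                      distr="logit").fit(method="bfgs")
print(ordfit.summary())  # exp(coef for pgd) ~ odds of a higher cost quintile
```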

7.
JAMA Netw Open; 6(2): e2255626, 2023 Feb 01.
Article in English | MEDLINE | ID: mdl-36763360

ABSTRACT

Importance: Hypertension self-management is recommended for optimal blood pressure (BP) control, but self-identified residential contextual factors that hinder hypertension self-care are understudied. Objective: To quantify perceived neighborhood health in relation to hypertension self-care and assess interactions with the area deprivation index (ADI) and healthy food availability at home. Design, Setting, and Participants: A cross-sectional study was conducted in Baltimore, Maryland, including primary care adults enrolled in the Achieving Blood Pressure Control Together trial between September 1, 2013, and June 30, 2014. Participants were Black and had at least 2 BP readings greater than or equal to 140/90 mm Hg in the 6 months before enrollment. Analyses were conducted from August 5, 2021, to January 28, 2022. Exposures: Participants' perceived neighborhood health, defined as the mean standardized score across 4 subdomains of aesthetic quality, walkability, safety, and violence, with a higher score signifying better neighborhood health. Main Outcomes and Measures: Hypertension self-care behavior and self-efficacy. Multivariable generalized linear models were fit regressing each outcome on perceived neighborhood health (higher scores on each domain signify better perceived neighborhood health), adjusted for confounders, with interaction terms between neighborhood health and potential modifiers of the primary association (ADI [higher percentiles correspond to more deprivation] and healthy food availability [higher scores indicate greater availability]). Results: Among 159 participants (median [IQR] age, 57 [49-64] years; mean [SD] age, 57 [11] years; 117 women [74%]), median (IQR) hypertension self-care behavior was 50 (45-56) and self-efficacy was 64 (57-72). Better perceived neighborhood health was associated with greater hypertension self-care behavior (β, 2.48; 95% CI, 0.63-4.33) and self-efficacy (β, 4.42; 95% CI, 2.25-6.59); these associations persisted for all neighborhood health subdomains except aesthetic quality. There were no statistically significant interactions between perceived neighborhood health or its subdomains and ADI on self-care behavior (P = .74 for interaction) or self-efficacy (P = .85 for interaction). However, better perceived neighborhood aesthetic quality was associated with greater self-care behavior specifically at higher healthy food availability at home scores (β at -1 SD, -0.29; 95% CI, -2.89 to 2.30 vs β at 1 SD, 2.97; 95% CI, 0.46-5.47; P = .09 for interaction). Likewise, associations of worse perceived neighborhood violence with lower self-care behavior were attenuated at higher healthy food availability at home scores (β at -1 SD, 3.69; 95% CI, 1.31-6.08 vs β at 1 SD, 0.01; 95% CI, -2.53 to 2.54; P = .04 for interaction). Conclusions and Relevance: In this cross-sectional study, better perceived neighborhood health was associated with greater hypertension self-care among Black individuals with hypertension, particularly among those with greater in-home healthy food availability. Optimizing hypertension self-management may therefore require multifaceted interventions targeting both patients' perceived contextual neighborhood barriers to self-care and the availability of healthy food in the home.
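
The exposure construction and interaction test can be sketched briefly: the composite is the mean of four standardized subdomain scores, and the modifier enters as a product term. The file and column names below are hypothetical.

```python
# A minimal sketch of the composite exposure and interaction model; not the study code.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("bpct_survey.csv")  # hypothetical analysis file

subdomains = ["aesthetic", "walkability", "safety", "violence"]
z = (df[subdomains] - df[subdomains].mean()) / df[subdomains].std()
df["nbhd_health"] = z.mean(axis=1)  # higher = better perceived neighborhood health

# Self-care behavior on neighborhood health with a food-availability modifier.
fit = smf.ols("selfcare ~ nbhd_health * food_avail + age + sex", data=df).fit()
print(fit.params["nbhd_health"])              # main association (beta)
print(fit.pvalues["nbhd_health:food_avail"])  # interaction P value
```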


Subjects
Hypertension; Self Care; Adult; Humans; Female; Middle Aged; Cross-Sectional Studies; Hypertension/epidemiology; Hypertension/therapy; Blood Pressure; Violence
8.
Chest; 164(1): 159-168, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36681147

ABSTRACT

BACKGROUND: Frailty, measured as a single construct, is variably associated with poor outcomes before and after lung transplantation. The usefulness of a comprehensive frailty assessment before transplantation is unknown. RESEARCH QUESTION: How are multiple frailty constructs measured before transplantation, including phenotypic and cumulative deficit models, muscle mass, exercise tolerance, and social vulnerabilities, associated with short-term outcomes after lung transplantation? STUDY DESIGN AND METHODS: We conducted a retrospective cohort study of 515 lung recipients who underwent frailty assessments before transplantation, including the short physical performance battery (SPPB), a transplant-specific frailty index (FI), 6-min walk distance (6MWD), thoracic sarcopenia, and social vulnerability indexes. We tested the association between frailty measures before transplantation and outcomes after transplantation using logistic regression to model 1-year survival and zero-inflated negative binomial regression to model hospital-free days (HFDs) in the first 90 days after transplantation. Adjustment covariates included age, sex, native lung disease, transplantation type, lung allocation score, BMI, and primary graft dysfunction. RESULTS: Before transplantation, 51.3% of patients were frail by FI (FI ≥ 0.25), and no patients were frail by SPPB. In multivariate adjusted models that included FI, SPPB, and 6MWD, greater frailty by FI, but not by SPPB, was associated with fewer HFDs among discharged patients (-0.006 per 0.01 unit worsening; 95% CI, -0.01 to -0.002). Greater SPPB deficits were associated with decreased odds of 1-year survival (OR, 0.51 per 1 unit worsening; 95% CI, 0.28-0.93). Correlation among the frailty measurements was generally poor. No association was found between thoracic sarcopenia, 6MWD, or social vulnerability assessments and short-term outcomes after lung transplantation. INTERPRETATION: Both phenotypic and cumulative deficit models measured before transplantation are associated with short-term outcomes after lung transplantation. Cumulative deficit measures of frailty may be more relevant in the first 90 days after transplantation, whereas phenotypic frailty may have a stronger association with 1-year survival.
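
For the count outcome, the zero-inflated negative binomial component can be sketched with statsmodels; the file and covariate names (e.g., las for lung allocation score) are hypothetical stand-ins for the study variables.

```python
# A minimal sketch of the ZINB model for 90-day hospital-free days; not the study code.
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

df = pd.read_csv("lung_tx_frailty.csv")  # hypothetical analysis file

X = sm.add_constant(df[["fi", "sppb", "six_mwd", "age", "las"]])
zinb = ZeroInflatedNegativeBinomialP(df["hfd_90"], X, exog_infl=X).fit(
    method="bfgs", maxiter=500)
print(zinb.summary())
# The count-model coefficient for 'fi' approximates the change in log(HFDs)
# among discharged patients per unit worsening of the frailty index.
```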


Subjects
Frailty; Lung Transplantation; Sarcopenia; Humans; Frailty/complications; Retrospective Studies; Sarcopenia/epidemiology; Sarcopenia/complications; Lung
9.
Transplant Proc; 55(1): 56-65, 2023.
Article in English | MEDLINE | ID: mdl-36623960

ABSTRACT

BACKGROUND: To evaluate the effect of the Affordable Care Act (ACA) Medicaid expansion on payor mix among patients on the kidney and liver transplant waiting lists, as well as on waiting list and post-transplant outcomes. DESIGN: Using the Scientific Registry of Transplant Recipients, we performed a secondary data analysis of all patients on the kidney and liver transplant waiting lists from 2007 to 2018. We described changes in payor mix by timing of state Medicaid expansion. We used competing risks models to estimate cause-specific hazard ratios for the effects of insurance and era on death/delisting and transplant, a Poisson regression model to estimate the effect of insurance and era on the incidence rate ratio of inactivations on the waiting list, and Cox proportional hazards models to estimate the effect of insurance and era on graft and patient survival. RESULTS: A decade after implementation of the ACA, the prevalence of Medicaid beneficiaries listed for transplant had increased by 2.5 percentage points (from 7.4% to 9.9%) for kidney and by 2.6 percentage points (from 15.3% to 17.9%) for liver. Expansion states had greater increases than nonexpansion states (kidney, 3.8% vs 0.6%; liver, 5.3% vs -1.8%). Among wait-listed patients, the magnitude of the association of Medicaid vs private insurance with transplant decreased over time for kidney candidates (era 1 subdistribution hazard ratio [SHR], 0.62 [95% CI, 0.60-0.64] vs era 3 SHR, 0.77 [95% CI, 0.74-0.70]) but increased for liver candidates (era 1 SHR, 0.85 [95% CI, 0.83-0.90] vs era 3 SHR, 0.79 [95% CI, 0.77-0.82]). Medicaid-insured kidney and liver recipients had greater hazards of graft failure, and this did not change over time (kidney: HR, 1.23 [95% CI, 1.06-1.44]; liver: HR, 1.05 [95% CI, 0.94-1.17]). CONCLUSIONS: For the millions of patients with chronic kidney and liver diseases, implementation of the ACA has resulted in only modest increases in access to transplant for the publicly insured vs the privately insured.
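
A cause-specific hazard under competing risks can be sketched by fitting a Cox model for one event while censoring the competing event. Note that the abstract also reports subdistribution HRs (SHRs), which require a Fine-Gray estimator rather than the censoring approach shown here; all names below are hypothetical.

```python
# A minimal sketch of cause-specific hazard estimation; not the study code,
# and not the Fine-Gray subdistribution model that yields SHRs.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("srtr_waitlist.csv")  # hypothetical file; covariates numeric-coded
# 'event' coding: 0 = censored, 1 = transplant, 2 = death/delisting

df["transplant"] = (df["event"] == 1).astype(int)  # competing event treated as censored
cph = CoxPHFitter()
cph.fit(df[["time", "transplant", "medicaid", "era", "age"]],
        duration_col="time", event_col="transplant")
cph.print_summary()  # cause-specific HR for Medicaid vs private insurance
```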


Subjects
Liver Transplantation; Patient Protection and Affordable Care Act; United States; Humans; Medicaid; Registries; Kidney
10.
Am J Transplant; 23(3): 377-386, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36695687

ABSTRACT

The choice of deprivation index can influence conclusions drawn regarding the extent of deprivation within a community and the identification of the most deprived communities in the United States. This study aimed to determine the degree of correlation among deprivation indices commonly used to characterize transplant populations. We used a retrospective cohort consisting of adults listed for liver or kidney transplants between 2008 and 2018 to compare 4 deprivation indices: the neighborhood deprivation index, social deprivation index (SDI), area deprivation index (ADI), and social vulnerability index (SVI). Pairwise correlation between deprivation indices by transplant referral region was measured using Spearman correlations of population-weighted medians and upper quartiles. In total, 52 individual variables were used among the 4 deprivation indices, with 25% overlap. For both organs, the correlation between the population-weighted 75th percentiles of the deprivation indices by transplant referral region was highest between SDI and SVI (liver and kidney, 0.93) and lowest between ADI and SDI (liver, 0.19; kidney, 0.15). The choice of deprivation index affects the applicability of research findings across studies examining the relationship between social risk and clinical outcomes. Appropriate application of these measures to transplant populations requires careful index selection based on the intended use and the relevance of the included variables.
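
The correlation procedure, population-weighted quantiles per transplant referral region followed by a Spearman correlation across regions, can be sketched with toy data; the weighted-quantile helper is an assumed implementation, not the authors'.

```python
# A minimal sketch of weighted-quantile Spearman correlations; not the study code.
import numpy as np
from scipy.stats import spearmanr

def weighted_quantile(values, weights, q):
    """Quantile of `values` where each value carries a population weight."""
    order = np.argsort(values)
    v, w = np.asarray(values, float)[order], np.asarray(weights, float)[order]
    cdf = np.cumsum(w) / w.sum()
    return float(np.interp(q, cdf, v))

rng = np.random.default_rng(0)
# toy data: 10 TRRs, each a set of tract-level index values with populations
trrs = [(rng.random(40), rng.integers(500, 5000, 40)) for _ in range(10)]

index_a = [weighted_quantile(v, w, 0.75) for v, w in trrs]
index_b = [weighted_quantile(0.8 * v + 0.2 * rng.random(40), w, 0.75)
           for v, w in trrs]
rho, p = spearmanr(index_a, index_b)
print(rho)  # the abstract reports rho of 0.93 for the SDI-SVI pair
```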


Subjects
Kidney Transplantation; Adult; Humans; United States; Retrospective Studies; Residence Characteristics
11.
J Urban Health; 99(6): 984-997, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36367672

ABSTRACT

There is tremendous interest in understanding how neighborhoods impact health by linking extant social and environmental drivers of health (SDOH) data with electronic health record (EHR) data. Studies quantifying such associations often use static neighborhood measures. Little research examines the impact of gentrification, a measure of neighborhood change, on the health of long-term neighborhood residents using EHR data, which may capture a more generalizable population than traditional approaches. We quantified associations between gentrification and health and healthcare utilization by linking longitudinal socioeconomic data from the American Community Survey with EHR data across two health systems accessed by long-term residents of Durham County, NC, from 2007 to 2017. Census block group-level neighborhoods were eligible to gentrify if they had low socioeconomic status relative to the county average. Gentrification was defined with the Steinmetz-Wood definition, using socioeconomic data from 2006-2010 and 2011-2015. Multivariable logistic and Poisson regression models estimated associations between gentrification and the development of health indicators (cardiovascular disease, hypertension, diabetes, obesity, asthma, depression) or healthcare encounters (emergency department [ED], inpatient, or outpatient). Sensitivity analyses examined two alternative gentrification measures. Of the 99 block groups within the city of Durham, 28 were eligible (N = 10,807; median age = 42; 83% Black; 55% female), and 5 of these gentrified. Individuals in gentrifying neighborhoods had lower odds of obesity (odds ratio [OR] = 0.89; 95% confidence interval [CI]: 0.81-0.99), higher odds of an ED encounter (OR = 1.10; 95% CI: 1.01-1.20), and lower risk of outpatient encounters (incidence rate ratio = 0.93; 95% CI: 0.87-1.00) compared with individuals in non-gentrifying neighborhoods. The associations between gentrification and health and healthcare utilization were sensitive to the gentrification definition.
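
The eligibility-then-gentrification classification lends itself to a short pandas sketch. The thresholds and columns below are hypothetical stand-ins for the Steinmetz-Wood criteria, not the exact published definition.

```python
# A minimal sketch of the eligibility/gentrification flags; not the published definition.
import pandas as pd

bg = pd.read_csv("durham_block_groups.csv")  # hypothetical ACS extract

county_income = bg["median_income_2010"].median()
bg["eligible"] = bg["median_income_2010"] < county_income  # low SES vs county

# Gentrified if eligible AND above-median growth in income and education.
inc_gain = bg["median_income_2015"] - bg["median_income_2010"]
edu_gain = bg["pct_ba_2015"] - bg["pct_ba_2010"]
bg["gentrified"] = (bg["eligible"]
                    & (inc_gain > inc_gain[bg["eligible"]].median())
                    & (edu_gain > edu_gain[bg["eligible"]].median()))
print(bg["gentrified"].sum())  # cf. 5 of 28 eligible block groups in the abstract
```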


Subjects
Residence Characteristics; Residential Segregation; Humans; Female; Adult; Male; Patient Acceptance of Health Care; Odds Ratio; Obesity
12.
JAMA Netw Open; 5(9): e2231863, 2022 Sep 01.
Article in English | MEDLINE | ID: mdl-36107423

ABSTRACT

Importance: System- and center-level interventions to improve health equity in organ transplantation benefit from robust characterization of the referral population served by each transplant center. Transplant referral regions (TRRs) define geographic catchment areas for transplant centers in the US, but accurately characterizing the demographics of populations within TRRs using US Census data poses a challenge. Objective: To compare 2 methods of linking US Census data with TRRs: a geospatial intersection method and a zip code cross-reference method. Design, Setting, and Participants: This cohort study compared the spatial congruence of the spatial intersection and zip code cross-reference methods of characterizing TRRs at the census block group level. Data included adults aged 18 years and older on the waiting list for kidney transplant from 2008 through 2018. Exposures: End-stage kidney disease. Main Outcomes and Measures: Multiple assignments, in which a census tract or block group crossed the boundary between 2 hospital referral regions and was assigned to multiple different TRRs; and misassigned area, the portion of census tracts or block groups assigned to a TRR using either method but falling outside the TRR boundary. Results: In total, 102 TRRs were defined for 238 transplant centers. The zip code cross-reference method resulted in 4627 multiple-assigned census block groups (representing 18% of US land area assigned to TRRs), whereas the spatial intersection method eliminated this problem. Furthermore, the spatial method yielded mean and median reductions in misassigned area of 65% and 83%, respectively, across all TRRs compared with the zip code cross-reference method. Conclusions and Relevance: In this study, characterizing populations within TRRs with census block groups provided high spatial resolution, complete coverage of the country, and balanced population counts. A spatial intersection approach avoided errors due to duplicative and incorrect assignments and allowed more detailed and accurate characterization of the sociodemographics of populations within TRRs; this approach can enrich transplant centers' knowledge of local referral populations, assist researchers in understanding how social determinants of health may factor into access to transplant, and inform interventions to improve health equity.
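
The geospatial intersection method can be sketched with geopandas: intersect block group polygons with TRR polygons in an equal-area projection, then assign each block group to the TRR holding the largest share of its area. File and column names are hypothetical.

```python
# A minimal sketch of area-weighted block group assignment to TRRs; not the study code.
import geopandas as gpd

bgs = gpd.read_file("block_groups.shp").to_crs(epsg=5070)  # equal-area CRS
trrs = gpd.read_file("trrs.shp").to_crs(epsg=5070)

pieces = gpd.overlay(bgs, trrs, how="intersection")
pieces["piece_area"] = pieces.geometry.area

# Keep, for each block group, the TRR with the largest overlapping area.
assign = (pieces.sort_values("piece_area")
                .groupby("GEOID", as_index=False)
                .last()[["GEOID", "trr_id"]])
print(assign.head())  # one unambiguous TRR per block group
```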


Subjects
Kidney Failure, Chronic; Organ Transplantation; Adult; Cohort Studies; Humans; Kidney Failure, Chronic/epidemiology; Referral and Consultation; Waiting Lists
13.
Kidney Med; 4(9): 100521, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36090772

ABSTRACT

Rationale & Objective: Choosing from multiple kidney failure treatment modalities can create decisional conflict, but little is known about this experience before decision implementation. We explored decisional conflict about treatment for kidney failure and its associated patient characteristics in the context of advanced chronic kidney disease (CKD). Study Design: Cross-sectional study. Setting & Participants: Adults (N = 427) who had advanced CKD, received nephrology care in Pennsylvania-based clinics, and had no history of dialysis or transplantation. Predictors: Participants' sociodemographic, physical health, nephrology care/knowledge, and psychosocial characteristics. Outcomes: Participants' results on the Sure of myself; Understand information; Risk-benefit ratio; Encouragement (SURE) screening test for decisional conflict (no decisional conflict vs decisional conflict). Analytical Approach: We used multivariable logistic regression to quantify associations between aforementioned participant characteristics and decisional conflict. We repeated analyses among a subgroup of participants at highest risk of kidney failure within 2 years. Results: Most (76%) participants reported treatment-related decisional conflict. Participant characteristics associated with lower odds of decisional conflict included complete satisfaction with patient-kidney team treatment discussions (OR, 0.16; 95% CI, 0.03-0.88; P = 0.04), attendance of treatment education classes (OR, 0.38; 95% CI, 0.16-0.90; P = 0.03), and greater treatment-related decision self-efficacy (OR, 0.97; 95% CI, 0.94-0.99; P < 0.01). Sensitivity analyses showed a similarly high prevalence of decisional conflict (73%) and again demonstrated associations of class attendance (OR, 0.26; 95% CI, 0.07-0.96; P = 0.04) and decision self-efficacy (OR, 0.95; 95% CI, 0.91-0.99; P = 0.03) with decisional conflict. Limitations: Single-health system study. Conclusions: Decisional conflict was highly prevalent regardless of CKD progression risk. Findings suggest efforts to reduce decisional conflict should focus on minimizing the mismatch between clinical practice guidelines and patient-reported engagement in treatment preparation, facilitating patient-kidney team treatment discussions, and developing treatment education programs and decision support interventions that incorporate decision self-efficacy-enhancing strategies.

14.
Pain Rep; 7(5): e1027, 2022.
Article in English | MEDLINE | ID: mdl-35999902

ABSTRACT

Objectives: Pain is an individual experience that should incorporate patient-centered care. This study incorporates patient perspectives toward expanding nonpharmacologic treatment options for pain in the emergency department (ED). Methods: In this cross-sectional study of adult ED patients with musculoskeletal neck, back, or extremity pain, patient-reported outcomes were collected, including willingness to try and prior use of various nonpharmacologic pain treatments, sociodemographics, clinical characteristics, functional outcomes, psychological distress, and nonmusculoskeletal symptoms. Least absolute shrinkage and selection operator (LASSO) regression identified variables associated with (1) willingness to try and (2) having previously tried nonpharmacologic treatments. Results: Responses were analyzed from 206 adults with a mean age of 45.4 (SD 16.4) years. The majority (90.3%) of ED patients were willing to try at least one form of nonpharmacologic pain treatment, with 70.4%, 81.6%, and 70.9% willing to try the respective subcategories of active (eg, exercise), passive (eg, heat), and psychosocial (eg, prayer) modalities. Only 56.3% of patients had previously tried any nonpharmacologic treatment, with 35.0%, 52.4%, and 41.3% having tried active, passive, and psychosocial modalities, respectively. Patient-level factors associated with willingness included upper back pain, more severe pain-related symptoms, and functional impairments. The factor most consistently associated with treatment use was health care provider encouragement to do so. Conclusions: ED patients report high willingness to try nonpharmacologic treatments for pain. Higher pain severity and interference may indicate greater willingness, while health care provider encouragement correlated with treatment use. These findings may inform future strategies to increase the introduction of nonpharmacologic treatments in the ED.
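
The LASSO step can be sketched with scikit-learn: an L1 penalty shrinks some coefficients exactly to zero, leaving the selected variables. Feature and file names below are hypothetical.

```python
# A minimal sketch of L1-penalized (LASSO) logistic selection; not the study code.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("ed_pain_survey.csv")  # hypothetical analysis file
X = StandardScaler().fit_transform(df[["pain_severity", "interference",
                                       "distress", "age"]])
y = df["willing_any"]  # 1 = willing to try at least one treatment

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
print(lasso.coef_)  # variables shrunk exactly to zero are dropped
```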

15.
Kidney360; 3(1): 158-163, 2022 Jan 27.
Article in English | MEDLINE | ID: mdl-35368562

ABSTRACT

Participants who identified as female and Black reported more thorough discussions of dialysis than of transplant. Participants with low incomes and low education levels likewise reported more thorough discussions of dialysis than of transplant.


Subjects
Renal Dialysis; Renal Insufficiency, Chronic; Female; Humans; Renal Insufficiency, Chronic/epidemiology; Renal Replacement Therapy
16.
Laryngoscope; 132(11): 2217-2223, 2022 Nov.
Article in English | MEDLINE | ID: mdl-34978078

ABSTRACT

OBJECTIVES/HYPOTHESIS: To evaluate the ability of the Eustachian Tube Dysfunction Questionnaire-7 (ETDQ-7) to discriminate between patients with Eustachian tube dysfunction (ETD) and non-ETD diagnoses, and to identify symptom information that improves the ability to discriminate these groups. STUDY DESIGN: Cohort study. METHODS: Pilot retrospective study of consecutive adult patients presenting to otology clinics and one general otolaryngology clinic in an academic health system. Patients were administered the ETDQ-7 with eight additional symptom items. Electronic health records were reviewed for demographic and diagnostic information. Patients were grouped into diagnosis categories: 1) true ETD; 2) ear fullness (EF) not due to ETD; and 3) control patients without ETD-related disorders or EF. ETDQ-7 and symptom item scores were compared by diagnosis group. Receiver operating characteristic curves and the area under the curve (AUC) were generated for each ETD diagnosis group based on ETDQ-7 and symptom scores. RESULTS: Of the 108 patients included in this study, 74 (68.5%) were diagnosed with ETD. Patients with ETD had higher (indicating worse symptom burden) overall ETDQ-7 scores than the control group (median [Q1, Q3], 3.0 [1.7, 4.1] versus 1.5 [1.0, 3.4]; P = .008). There was no statistically significant difference between overall ETDQ-7 scores for ETD and non-ETD EF patients (P = .389). The AUC for the ETDQ-7 in discriminating ETD from other conditions that cause EF was 0.569; the addition of the eight symptom questions to the ETDQ-7 improved the AUC to 0.801. CONCLUSION: Additional patient-reported symptom information may improve the ability to discriminate ETD from other similarly presenting diagnoses when using the ETDQ-7. LEVEL OF EVIDENCE: 3.
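
The discrimination comparison amounts to scoring two logistic models by ROC AUC, one with the ETDQ-7 score alone and one adding the eight symptom items; a sketch with hypothetical column names follows.

```python
# A minimal sketch of the base-versus-extended AUC comparison; not the study code.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("etdq7_clinic.csv")  # hypothetical analysis file
y = df["etd_dx"]  # 1 = true ETD, 0 = other cause of ear fullness

base_cols = ["etdq7"]
full_cols = base_cols + [f"sym{i}" for i in range(1, 9)]  # eight added items

base = LogisticRegression().fit(df[base_cols], y)
full = LogisticRegression().fit(df[full_cols], y)

print(roc_auc_score(y, base.predict_proba(df[base_cols])[:, 1]))  # cf. 0.569
print(roc_auc_score(y, full.predict_proba(df[full_cols])[:, 1]))  # cf. 0.801
```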


Subjects
Ear Diseases; Eustachian Tube; Adult; Cohort Studies; Ear Diseases/diagnosis; Humans; Prospective Studies; Retrospective Studies; Surveys and Questionnaires
17.
Stat; 11(1), 2022 Dec.
Article in English | MEDLINE | ID: mdl-36937572

ABSTRACT

This manuscript describes an experiential learning program for future collaborative biostatisticians (CBs) developed within an academic medical center. The program is a collaborative effort between the Biostatistics, Epidemiology, and Research Design (BERD) Methods Core and the Master of Biostatistics (MB) program, both housed in the Department of Biostatistics and Bioinformatics at Duke University School of Medicine and supported in partnership with the Duke Clinical and Translational Science Institute. To date, the BERD Core Training and Internship Program (BCTIP) has formally trained over 80 students to work on collaborative teams that are integrated throughout the Duke School of Medicine. This manuscript focuses on the setting for the training program, the experiential learning model on which it is based, the structure of the program, and lessons learned to date.

18.
Ear Hear; 43(3): 961-971, 2022.
Article in English | MEDLINE | ID: mdl-34711743

ABSTRACT

OBJECTIVES: In this study, we sought to evaluate whether older patients with hearing loss who underwent surgery were at greater risk of postsurgical complications, increased inpatient length of stay (LOS), and hospital readmission. DESIGN: This was a retrospective cohort study of patients receiving surgery at a tertiary medical center. Utilizing electronic health record data from two merged datasets, we identified patients 65 years and older who underwent major surgery between January 1, 2014, and January 31, 2017, and who had an audiometric evaluation before surgery. Patients were classified as having either normal hearing or hearing loss based on the pure-tone average in the better ear. A generalized estimating equations (GEE) approach was used to fit multivariable regression models for the outcome variables of interest. RESULTS: Within this time frame, 742 surgical procedures were performed on 621 patients ≥65 years with available audiometric data. After adjusting for age, sex, race, and comorbidities, hearing loss was associated with an increase in the odds of developing postoperative complications: every 10 dB increase in hearing loss was associated with a 14% increase in the odds of a postoperative complication (odds ratio = 1.14, 95% confidence interval = 1.01-1.29, p = 0.031). Hearing loss was not significantly associated with increased hospital LOS, 30-day readmission, or 90-day readmission. CONCLUSIONS: Hearing loss was significantly associated with developing postoperative complications in older adults undergoing major surgery. Screening for hearing impairment may be a useful addition to the preoperative assessment and perioperative management of older patients undergoing surgery.
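
Because patients can contribute multiple procedures, the GEE clusters encounters within patient; a sketch of the logistic GEE with hearing scaled per 10 dB follows, using hypothetical column names (e.g., cci for a comorbidity index).

```python
# A minimal sketch of the clustered logistic GEE described; not the study code.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("surgical_audiometry.csv")  # hypothetical analysis file
df["pta10"] = df["pta_better_ear"] / 10  # per-10 dB scaling of pure-tone average

gee = smf.gee("complication ~ pta10 + age + sex + race + cci",
              groups="patient_id", data=df,
              family=sm.families.Binomial()).fit()
print(gee.summary())  # exp(coef for pta10) ~ the OR of 1.14 per 10 dB reported
```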


Subjects
Deafness; Hearing Loss; Aged; Deafness/complications; Hearing Loss/complications; Hearing Loss/epidemiology; Humans; Length of Stay; Patient Readmission; Postoperative Complications/epidemiology; Postoperative Complications/etiology; Retrospective Studies
19.
Ear Hear; 43(2): 487-494, 2022.
Article in English | MEDLINE | ID: mdl-34334680

ABSTRACT

OBJECTIVES: Falls are a significant public health issue, and falls risk increases with age. Many age-related physiologic changes increase postural instability and the risk for falls (e.g., age-related sensory declines in vision, vestibular function, and somatosensation; age-related orthopedic changes; and polypharmacy). Hearing loss has been shown to be an independent risk factor for falls. The primary objective of this study was to determine whether hearing aid use modified (reduced) the association between self-reported hearing status and falls or falls-related injury. We hypothesized that hearing aid use would reduce the impact of hearing loss on the odds of falling and of falls-related injury. If hearing aid users have reduced odds of falling compared with nonusers, that would have important implications for falls-prevention healthcare. DESIGN: Data were drawn from the 2004-2016 surveys of the Health and Retirement Study (HRS). A generalized estimating equation approach was used to fit logistic regression models to determine whether hearing aid use modifies the odds of falling and falls injury associated with self-reported hearing status. RESULTS: A total of 17,923 individuals were grouped based on a self-reported history of falls. Self-reported hearing status was significantly associated with the odds of falling and with falls-related injury when controlling for demographic factors and important health characteristics. Hearing aid use was included as an interaction in the fully adjusted models, and the results showed no difference in the association between hearing aid users and nonusers for either falls or falls-related injury. CONCLUSIONS: The results of the present study show that, when examining self-reported hearing status in a longitudinal sample, hearing aid use does not impact the association between self-reported hearing status and the odds of falls or falls-related injury.


Subjects
Hearing Aids; Hearing Loss; Accidental Falls; Hearing Loss/complications; Hearing Loss/epidemiology; Humans; Retirement; Self Report
20.
Acta Mater Med; 1(3): 320-332, 2022 Jul 21.
Article in English | MEDLINE | ID: mdl-37274016

ABSTRACT

In clinical trials, the primary analysis is often either a test of absolute/relative change in a measured outcome or a corresponding responder analysis. Though each of these tests may be reasonable, determining which is most suitable for a particular research study remains an open question. The tests may require different sample sizes, define different clinically meaningful differences, and, most importantly, lead to different study conclusions. This paper compares a typical non-inferiority test using absolute change as the study endpoint with the corresponding responder analysis in terms of sample size requirements, statistical power, and hypothesis testing results. From numerical analysis, using absolute change as an endpoint generally requires a larger sample size; therefore, at the same sample size, the responder analysis has higher power. The cut-off value and non-inferiority margin are critical and can meaningfully affect whether the two types of endpoints yield conflicting conclusions. In particular, an extreme cut-off value is more likely to cause different conclusions, although this impact decreases as the population variance increases. One important reason for conflicting conclusions is non-normality of the population distribution. To avoid conflicting results, researchers should pay attention to the population distribution and the choice of cut-off value.
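
The comparison lends itself to a small Monte Carlo sketch: simulate the same change data, run a one-sided non-inferiority t test on means and a one-sided test on responder proportions, and compare empirical power. The margins, cut-off, and distribution below are illustrative choices, not the paper's settings.

```python
# A minimal simulation sketch contrasting the two endpoint types; illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, margin, cutoff, reps = 200, -2.0, 5.0, 2000
power_abs = power_resp = 0

for _ in range(reps):
    trt = rng.normal(10, 8, n)  # absolute change, treatment arm
    ctl = rng.normal(10, 8, n)  # absolute change, control arm

    # Non-inferiority on means: one-sided t test of H0: diff <= margin.
    se = np.sqrt(trt.var(ddof=1) / n + ctl.var(ddof=1) / n)
    t = (trt.mean() - ctl.mean() - margin) / se
    power_abs += t > stats.t.ppf(0.975, 2 * n - 2)

    # Responder analysis: proportion above the cut-off, 10-point NI margin.
    p1, p0 = (trt > cutoff).mean(), (ctl > cutoff).mean()
    se_p = np.sqrt(p1 * (1 - p1) / n + p0 * (1 - p0) / n)
    z = (p1 - p0 - (-0.10)) / se_p
    power_resp += z > stats.norm.ppf(0.975)

print(power_abs / reps, power_resp / reps)  # empirical power of each test
```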
