ABSTRACT
BACKGROUND: Recent efforts to increase access to kidney transplant (KTx) in the United States include increasing referrals to transplant programs, leading to more pretransplant services. Transplant programs reconcile the costs of these services through the Organ Acquisition Cost Center (OACC). OBJECTIVE: The aim of this study was to determine the costs associated with pretransplant services by applying microeconomic methods to OACC costs reported by transplant hospitals. RESEARCH DESIGN, SUBJECTS, AND MEASURES: For all US adult kidney transplant hospitals from 2013 through 2018 (n=193), we crosslinked the total OACC costs (at the hospital-fiscal year level) to proxy measures of volumes of pretransplant services. We used a multiple-output cost function, regressing total OACC costs against proxy measures for volumes of pretransplant services and adjusting for patient characteristics, to calculate the marginal cost of each pretransplant service. RESULTS: Over 1015 adult hospital-years, median OACC costs attributable to the pretransplant services were $5 million. Marginal costs for the pretransplant services were: initial transplant evaluation, $9k per waitlist addition; waitlist management, $2k per patient-year on the waitlist; deceased donor offer management, $1k per offer; living donor evaluation, procurement, and follow-up, $26k per living donor. Longer time on dialysis among patients added to the waitlist was associated with higher OACC costs at the transplant hospital. CONCLUSIONS: To achieve the policy goal of greater access to KTx, sufficient funding is needed to support the increase in volume of pretransplant services. Future studies should assess the relative value of each service and explore ways to enhance efficiency.
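The multiple-output cost function described above can be illustrated with a small regression sketch. The code below is a minimal, hypothetical example: it simulates hospital-year service volumes, constructs a total-cost outcome from assumed marginal costs, and recovers those marginal costs as OLS coefficients. Variable names, simulated volumes, and the fixed-cost intercept are illustrative assumptions, not the study's data or exact specification.

```python
# Hedged sketch of a multiple-output cost function fit by OLS.
# All data below are synthetic; column names are stand-ins for the proxy volume measures.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1015  # hospital-years, matching the abstract's sample size

df = pd.DataFrame({
    "waitlist_additions": rng.poisson(150, n),
    "waitlist_patient_years": rng.poisson(400, n),
    "deceased_donor_offers": rng.poisson(900, n),
    "living_donors": rng.poisson(25, n),
})
# Simulated total OACC cost built from assumed marginal costs (dollars) plus noise.
df["total_oacc_cost"] = (
    1_000_000                                  # assumed fixed program cost
    + 9_000 * df["waitlist_additions"]
    + 2_000 * df["waitlist_patient_years"]
    + 1_000 * df["deceased_donor_offers"]
    + 26_000 * df["living_donors"]
    + rng.normal(0, 500_000, n)
)

X = sm.add_constant(df.drop(columns="total_oacc_cost"))
fit = sm.OLS(df["total_oacc_cost"], X).fit()
print(fit.params.round(0))  # coefficients approximate the marginal cost per unit of each service
```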
Subjects
Kidney Transplantation, Waiting Lists, Humans, Kidney Transplantation/economics, Kidney Transplantation/statistics & numerical data, United States, Male, Female, Middle Aged, Eligibility Determination, Adult, Tissue and Organ Procurement/economics, Health Care Costs/statistics & numerical data
ABSTRACT
Numerous United States transplant centers require solid organ transplantation candidates to be vaccinated against coronavirus disease 2019 (COVID-19) in order to remain active on the United Network for Organ Sharing waiting list. This study examined characteristics of adult patients on one center's kidney transplantation waiting list whose status was inactivated due to lack of COVID-19 vaccination by July 1, 2022, and who did not subsequently provide proof of vaccination by August 31, 2022 (cases). Patients in the control group were retrospectively matched to patients in the case group in a 4-to-1 fashion according to age, sex, and "active" status on the waiting list. Multivariable logistic regression was performed, with race/ethnicity, primary language, health insurance, education, and Vaccine Equity Metric (VEM, a measure of health equity at the zip code level) quartile as covariates. Results revealed that patients from zip codes in the lowest VEM quartile (odds ratio [OR] 1.89; P = .02) and those insured by governmental payors (Medicare: OR, 2.00; P < .01 and Medicaid: OR, 2.89; P < .01) had higher odds of being inactivated than those from zip codes that make up the highest VEM quartile and those insured by commercial payors, respectively. These findings serve as a cautionary tale regarding universal pretransplantation vaccination requirements, which may raise equity concerns that should be considered during policy implementation.
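As a rough illustration of the multivariable logistic regression described above, the sketch below fits an unconditional logistic model on a synthetic data frame and exponentiates the coefficients into odds ratios. The column names, category labels, and reference levels are hypothetical placeholders; the matched design and the center's actual covariate coding are not reproduced.

```python
# Hedged sketch: logistic regression of case status on equity-related covariates.
# The data frame is synthetic; it only mirrors the structure implied by the abstract.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "case": rng.integers(0, 2, n),  # 1 = inactivated for lack of vaccination
    "vem_quartile": rng.choice(["Q1_lowest", "Q2", "Q3", "Q4_highest"], n),
    "insurance": rng.choice(["commercial", "medicare", "medicaid"], n),
    "primary_language": rng.choice(["english", "other"], n),
})

model = smf.logit(
    "case ~ C(vem_quartile, Treatment('Q4_highest'))"
    " + C(insurance, Treatment('commercial')) + C(primary_language)",
    data=df,
).fit(disp=False)

odds_ratios = np.exp(model.params)      # e.g., OR for Medicaid vs commercial insurance
conf_int = np.exp(model.conf_int())     # 95% CIs on the odds-ratio scale
print(odds_ratios.round(2))
```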
Subjects
COVID-19, Waiting Lists, Adult, Humans, Aged, United States/epidemiology, Case-Control Studies, Retrospective Studies, Medicare, COVID-19/epidemiology, COVID-19/prevention & control, Vaccination
ABSTRACT
PROBLEM: With the dissolution of the Step 2 Clinical Skills exam, medical programs have a greater responsibility to teach and assess clinical skills in the preclerkship years. Clinical teaching this early has traditionally been avoided because of insufficient integration with biomedical sciences, curricular time constraints, and concerns about overwhelming novice learners with clinical learning objectives. To overcome these barriers, the authors created a clinical framework for the biomedical science curriculum by integrating a series of virtual interactive patient (VIP) videos. APPROACH: Matriculating first-year medical students were enrolled in a clinically integrated biomedical science course that used VIP videos to teach and assess clinical skills. The VIP videos were enhanced with interactive pop-in windows, and at the conclusion of each video, students performed a clinical documentation task and received immediate feedback. The authors implemented 7 VIP cases during fall 2021 in which first-year medical students wrote the patient care plan, problem representation statement, or clinical reasoning assessment. Student responses were independently evaluated by course faculty using a 4-level scoring scale. The authors calculated the pooled mean scores for each documentation task and examined student feedback. OUTCOMES: Seven VIP encounters were assigned to 124 students (mean response rate, 98.5%). Pooled mean scores on the clinical documentation tasks showed that most students were able to achieve levels 3 or 4 when writing the patient care plan (97 [82%] to 113 [94%]), addressing social determinants of health (80 [67%]), writing an accurate problem representation statement (113 [91%] to 117 [94%]), and performing clinical reasoning skills (48 [40%] to 95 [82%]). NEXT STEPS: VIP encounters were feasible to produce, effective at integrating course content, successful at assessing student clinical documentation skills, and well received. The authors will continue to produce, implement, and study the VIP as an integrating learning tool in undergraduate medical education.
Subjects
Undergraduate Medical Education, Medical Students, Humans, Curriculum, Learning, Faculty, Clinical Competence
ABSTRACT
OBJECTIVES: The aim of this study was to show how the US government could save approximately 47 000 patients with chronic kidney failure each year from suffering on dialysis and premature death by compensating living kidney donors enough to completely end the kidney shortage. METHODS: Supply and demand analysis was used to estimate the number of donated kidneys needed to end the kidney shortage and the level of compensation required to encourage this number of donations. These results were then input into a detailed cost-benefit analysis to estimate the economic value of kidney transplantation to (1) the average kidney recipient and their caregiver, (2) taxpayers, and (3) society in general. RESULTS: We estimate that half of the patients diagnosed with kidney failure each year (approximately 62 000 patients) could be saved from suffering on dialysis and premature death if they could receive an average of 1½ kidney transplants. However, currently there are only enough donated kidneys to save approximately 15 000 patients. To encourage sufficient donations to save the other 47 000 patients, the government would have to compensate living kidney donors approximately $77 000 (±50%) per donor. The value of transplantation to an average kidney recipient (and caregiver) would be approximately $1.5 million, and the savings from the recipient not needing expensive dialysis treatments would be approximately $1.2 million. CONCLUSIONS: This analysis reveals the huge benefit that compensating living kidney donors would provide to patients with kidney failure and their caregivers and, conversely, the huge cost that is being imposed on these patients and their families by the current legal prohibition against such compensation.
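A back-of-the-envelope check of the scale of these figures can be made with the numbers quoted above. The arithmetic below (donors needed, total compensation outlay, gross benefit) is my own aggregation of the reported inputs, not a result reported by the study.

```python
# Hedged arithmetic sketch using only the figures quoted in the abstract.
patients_saved = 47_000            # additional patients saved per year
transplants_per_patient = 1.5      # average transplants needed per patient
compensation_per_donor = 77_000    # estimated compensation to end the shortage ($)
value_to_recipient = 1_500_000     # value of transplantation per recipient/caregiver ($)
dialysis_savings = 1_200_000       # avoided dialysis costs per recipient ($)

donors_needed = patients_saved * transplants_per_patient                   # ~70,500 donors/yr
total_compensation = donors_needed * compensation_per_donor                # ~$5.4B/yr
gross_benefit = patients_saved * (value_to_recipient + dialysis_savings)   # ~$127B/yr

print(f"Donors needed per year: {donors_needed:,.0f}")
print(f"Total compensation outlay: ${total_compensation / 1e9:.1f}B per year")
print(f"Gross benefit to recipients and taxpayers: ${gross_benefit / 1e9:.1f}B per year")
```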
Subjects
Chronic Kidney Failure, Kidney Transplantation, Humans, United States, Cost-Benefit Analysis, Living Donors, Chronic Kidney Failure/surgery, Renal Dialysis
ABSTRACT
Living donor liver transplantation has expanded in recent years, particularly in North America. As experience with this procedure has matured over the last 25 years, centers are increasingly faced with potential living donors who are more medically complex. As donors move through the evaluation process, completing the informed consent process continues to be challenged by a paucity of granular data demonstrating long-term outcomes and overall safety specifically in the otherwise "healthy" living liver donor population. Two recently published studies examined long-term outcomes post-living liver donation using Korean registry data and reported similar results, with excellent overall survival when compared to appropriately matched controls. However, the findings of these two studies were presented differently, with one group taking an alarmist view based on one aspect of a suboptimal analysis approach that used an inappropriate comparator group. Herein, the North American Living Liver Donor Innovation Group (NALLDIG) consortium discusses these two studies and their potential impact on living liver donation in North America, ultimately highlighting the importance of scientific integrity in data presentation and dissemination when using transplant registry data.
Subjects
Liver Transplantation, Transplants, Humans, Liver, Living Donors, Registries
ABSTRACT
Importance: While recent policy reforms aim to improve access to kidney transplantation for patients with end-stage kidney disease, the cost implications of kidney waiting list expansion are not well understood. The Organ Acquisition Cost Center (OACC) is the mechanism by which Medicare reimburses kidney transplantation programs, at cost, for costs attributable to kidney transplantation evaluation and waiting list management, but these costs have not been well described to date. Objectives: To describe temporal trends in mean OACC costs per kidney transplantation and to identify factors most associated with cost. Design, Setting, and Participants: This economic evaluation included all kidney transplantation waiting list candidates and recipients in the United States from 2012 to 2017. A population-based study of cost center reports was conducted using data from all Centers for Medicare & Medicaid Services-certified transplantation hospitals. Data analysis was conducted from June to August 2021. Exposures: Year, local price index, transplantation and waiting list volume of transplantation program, and comorbidity burden. Main Outcomes and Measures: Mean OACC costs per kidney transplantation. Results: In 1335 hospital-years from 2012 through 2017, Medicare's share of OACC costs increased from $0.95 billion in 2012 to $1.32 billion in 2017 (3.7% of total Medicare End-Stage Renal Disease program expenditure). Median (IQR) OACC costs per transplantation increased from $81 000 ($66 000 to $103 000) in 2012 to $100 000 ($82 000 to $125 000) in 2017. Kidney organ procurement costs contributed 36% of mean OACC costs per transplantation throughout the study period. During the study period, transplantation hospitals experienced increases in kidney waiting list volume, kidney waiting list active volume, kidney transplantation volume, and comorbidity burden. For a median-sized transplantation program, mean OACC costs per transplantation decreased with more transplants (-$3500 [95% CI, -$4300 to -$2700] per 10 transplants; P < .001) and increased with year ($4400 [95% CI, $3500 to $5300] per year; P < .001), local price index ($1900 [95% CI, $200 to $3700] per 10-point increase; P = .03), patients listed active on the waiting list ($3100 [95% CI, $1700 to $4600] per 100 patients; P < .001), and patients on the waiting list with high comorbidities ($1500 [95% CI, $600 to $2500] per 1% increase in proportion of waitlisted patients with the highest comorbidity score; P = .002). Conclusions and Relevance: In this study, OACC costs increased at 4% per year from 2012 to 2017 and were not solely attributable to the cost of organ procurement. Expanding the waiting list will likely contribute to further increases in the mean OACC costs per transplantation and substantially increase Medicare liability.
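The reported growth of roughly 4% per year can be sanity-checked from the median figures above. The compound-growth arithmetic below is my own illustration, not the study's estimation method.

```python
# Hedged sketch: implied annual growth from the reported 2012 and 2017 medians.
cost_2012, cost_2017, years = 81_000, 100_000, 5
annual_growth = (cost_2017 / cost_2012) ** (1 / years) - 1
print(f"Implied annual growth in OACC cost per transplant: {annual_growth:.1%}")  # ~4.3%
```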
Subjects
Chronic Kidney Failure, Kidney Transplantation, Tissue and Organ Procurement, Aged, Female, Humans, Chronic Kidney Failure/epidemiology, Chronic Kidney Failure/surgery, Male, Medicare, United States, Waiting Lists
ABSTRACT
Background: Sacroiliac (SI) joint subchondral resorption on high-resolution magnetic resonance imaging (MRI) may be an early sign of the development of sacroiliitis. At our institution, high-resolution intermediate-weighted (proton density) MRI sequences are used in the workup of suspected spondyloarthritis (SpA). Questions/Purpose: We sought to test the hypothesis that SI joint subchondral resorption might be a useful MRI feature in the diagnosis of sacroiliitis. Methods: We retrospectively reviewed the records of patients with suspected SpA from a single rheumatologist's practice from January 1, 2010, to December 31, 2017. Patients had an MRI of the SI joints, using our institution's specialized protocol, and underwent standard physical examination and laboratory evaluation. The sensitivity and specificity of SI joint subchondral resorption in the identification of sacroiliitis were estimated using the clinical diagnosis as the reference standard and from a Bayesian latent class model with conditional dependence. Results: SI joint subchondral resorption on SI joint MRI was highly correlated with a positive diagnosis in patients worked up for axial SpA. It demonstrated superior sensitivity when compared with other MRI features used in the MRI diagnosis of sacroiliitis, such as bone marrow edema pattern, erosion, and ankylosis. Interobserver reliability was high for subchondral resorption. Conclusion: This retrospective study found that subchondral resorption on MRI evaluation of the SI joints appeared to be a sensitive indicator of SpA, potentially of early disease. This imaging feature warrants evaluation in other cohorts of patients suspected of having axial SpA to validate diagnostic performance in diverse populations.
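For readers unfamiliar with how sensitivity and specificity are read off a reference standard, the sketch below computes both from a 2x2 table. The counts are hypothetical placeholders, and the study's Bayesian latent class model with conditional dependence is not reproduced here.

```python
# Hedged sketch: sensitivity and specificity of subchondral resorption against the
# clinical diagnosis as reference standard. Counts are hypothetical, not study data.
def sens_spec(tp, fp, fn, tn):
    """Return (sensitivity, specificity) from a 2x2 table of counts."""
    sensitivity = tp / (tp + fn)   # resorption present among clinically diagnosed patients
    specificity = tn / (tn + fp)   # resorption absent among patients without the diagnosis
    return sensitivity, specificity

sens, spec = sens_spec(tp=40, fp=10, fn=8, tn=42)  # hypothetical counts
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```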
ABSTRACT
Over the past decade, Pfizer has focused efforts to improve its research and development (R&D) productivity. By the end of 2020, Pfizer had achieved an industry-leading clinical success rate of 21%, a tenfold increase from 2% in 2010 and well above the industry benchmark of ~11%. The company had also maintained the quality of innovation, because 75% of its approvals between 2016 and 2020 had at least one expedited regulatory designation (e.g., Breakthrough Therapy). Pfizer's Signs of Clinical Activity (SOCA) paradigm enabled better decision-making and, along with other drivers (biology and modality), contributed to this productivity improvement. These advances laid a strong foundation for the rapid and effective development of the coronavirus disease 2019 (COVID-19) vaccine with BioNTech, as well as the antiviral candidate Paxlovid™, under the company's 'lightspeed' paradigm.
Subjects
Drug Industry/economics, Research/economics, Antiviral Agents/economics, BNT162 Vaccine/economics, COVID-19/economics, COVID-19 Vaccines/economics, Humans
ABSTRACT
OBJECTIVE: We aimed to compare general surgery emergency (GSE) volume, demographics and disease severity before and during COVID-19. BACKGROUND: Presentations to the emergency department (ED) for GSEs fell during the early COVID-19 pandemic. Barriers to accessing care may be heightened, especially for vulnerable populations, and patients delaying care raises public health concerns. METHODS: We included adult patients with ED presentations for potential GSEs at a single quaternary-care hospital from January 2018 to August 2020. To compare GSE volumes in total and by subgroup, an interrupted time-series analysis was performed using the March shelter-in-place order as the start of the COVID-19 period. Bivariate analysis was used to compare demographics and disease severity. RESULTS: 3255 patients (28/week) presented with potential GSEs before COVID-19, while 546 (23/week) presented during COVID-19. When shelter-in-place started, presentations fell by 8.7/week (31%) from the previous week (p<0.001), driven by decreases in peritonitis (β=-2.76, p=0.017) and gallbladder disease (β=-2.91, p=0.016). During COVID-19, patients were younger (54 vs 57, p=0.001), more often privately insured (44% vs 38%, p=0.044), and fewer required interpreters (12% vs 15%, p<0.001). Fewer patients presented with sepsis during the pandemic (15% vs 20%, p=0.009) and the average severity of illness decreased (p<0.001). Length of stay was shorter during the COVID-19 period (3.91 vs 5.50 days, p<0.001). CONCLUSIONS: GSE volumes and severity fell during the pandemic. Patients presenting during the pandemic were less likely to be elderly, publicly insured and have limited English proficiency, potentially exacerbating underlying health disparities and highlighting the need to improve care access for these patients. LEVEL OF EVIDENCE: III.
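A segmented (interrupted time-series) regression of the kind described above can be sketched as follows. The weekly counts are synthetic, the intervention week is an illustrative placeholder, and the study's exact model specification may differ.

```python
# Hedged sketch: interrupted time-series regression of weekly GSE counts on time,
# a post-shelter-in-place indicator, and time since the order. Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
weeks = np.arange(140)                            # ~Jan 2018 to Aug 2020
post = (weeks >= 115).astype(int)                 # weeks after the shelter-in-place order (assumed)
counts = rng.poisson(28, weeks.size) - 8 * post   # simulated level drop of ~8 visits/week

df = pd.DataFrame({
    "week": weeks,
    "post": post,
    "weeks_since_order": np.clip(weeks - 115, 0, None),
    "count": counts,
})

fit = smf.ols("count ~ week + post + weeks_since_order", data=df).fit()
print(fit.params[["post", "weeks_since_order"]])  # immediate level change and slope change
```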
ABSTRACT
BACKGROUND: The goal is to provide a national analysis of organ procurement organization (OPO) costs. METHODS: Five years of data, for 51 of the 58 OPOs (2013-2017, a near census), were obtained under a Freedom of Information Act (FOIA) request. OPOs are not-for-profit federal contractors with a geographic monopoly. A generalized 15-factor cost regression model was estimated with adjustments to precision of estimates (P) for repeated observations. Selected measures were validated by comparison to IRS forms. RESULTS: Deceased donor organ procurement is a $1B/y operation with over 26 000 transplants/y. Over 60% of the cost of an organ is overhead. Profits are $2.3M/OPO/y. Total assets are $45M/OPO and growing at 9%/y. "Tissue" (skin, bones) generates $2-3M profit/OPO/y. A comparison of the highest with the lower costing OPOs showed our model explained 75% of the cost difference. Comparing costs across OPOs showed that highest-cost OPOs are smaller, import 44% more kidneys, face 6% higher labor costs, report 98% higher compensation for support personnel, spend 46% more on professional education, have 44% fewer assets, compensate their Executive Director 36% less, and have a lower procurement performance (SDRR) score. CONCLUSIONS: Profits and assets suggest that OPOs are fiscally secure and OPO finances are not a source of the organ shortage. Asset accumulation ($45M/OPO) of incumbents suggests establishing a competitive market with new entrants is unlikely. Kidney-cost allocations support tissue procurements. Professional education spending does not reduce procurement costs. OPO importing of organs from other OPOs is a complex issue possibly increasing cost ($6K/kidney).
Subjects
Tissue and Organ Procurement, Transplants, Data Collection, Humans, Kidney, Tissue Donors
ABSTRACT
A potential solution to the deceased donor organ shortage is to expand donor acceptability criteria. The procurement cost implications of using nonstandard donors are unknown. Using 5 years of US organ procurement organization (OPO) data, we built a cost function model to make cost projections: the total cost was the dependent variable; production outputs, including the number of donors and organs procured, were the independent variables. In the model, procuring one kidney, or both kidneys for double/en bloc transplantation, from a single-organ donor resulted in a marginal cost of $55 k (95% confidence interval [CI] $28 k, $99 k) per kidney, and procuring only the liver from a single-organ donor resulted in a marginal cost of $41 k (95% CI $12 k, $69 k) per liver. Procuring two kidneys for two candidates from a donor lowered the marginal cost to $36 k (95% CI $22 k, $66 k) per kidney, and procuring two kidneys and a liver lowered the marginal cost to $24 k (95% CI $17 k, $45 k) per organ. Economies of scale were observed, with higher OPO volume correlated with lower costs. Despite a higher cost per organ than for standard donors, kidney transplantation from nonstandard donors remained cost-effective based on contemporary US data.
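The per-donor totals implied by these marginal costs make the economies of scope concrete. The aggregation below is my own arithmetic applied to the point estimates quoted above, not a result from the study.

```python
# Hedged sketch: per-organ and per-donor totals implied by the quoted marginal costs.
scenarios = {
    "single kidney only":          {"organs": 1, "cost_per_organ": 55_000},
    "two kidneys, two candidates": {"organs": 2, "cost_per_organ": 36_000},
    "two kidneys plus liver":      {"organs": 3, "cost_per_organ": 24_000},
}
for name, s in scenarios.items():
    total = s["organs"] * s["cost_per_organ"]
    print(f"{name}: ~${s['cost_per_organ'] / 1000:.0f}k per organ, ~${total / 1000:.0f}k per donor")
```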
Subjects
Kidney Transplantation, Tissue and Organ Procurement, Cost-Benefit Analysis, Humans, Kidney, Tissue Donors
ABSTRACT
BACKGROUND: Desensitization protocols for HLA-incompatible living donor kidney transplantation (ILDKT) vary across centers. The impact of these, as well as other practice variations, on ILDKT outcomes remains unknown. METHODS: We sought to quantify center-level variation in mortality and graft loss following ILDKT using a 25-center cohort of 1358 ILDKT recipients with linkage to the Scientific Registry of Transplant Recipients for accurate outcome ascertainment. We used multilevel Cox regression with shared frailty to determine the variation in post-ILDKT outcomes attributable to between-center differences and to identify any center-level characteristics associated with improved post-ILDKT outcomes. RESULTS: After adjusting for patient-level characteristics, only 6 centers (24%) had lower mortality and 1 (4%) had higher mortality than average. Similarly, only 5 centers (20%) had higher graft loss and 2 had lower graft loss than average. Only 4.7% of the differences in mortality (P < 0.01) and 4.4% of the differences in graft loss (P < 0.01) were attributable to between-center variation. These translated to a median hazard ratio of 1.36 for mortality and 1.34 for graft loss for similar candidates at different centers. Post-ILDKT outcomes were not associated with the following center-level characteristics: ILDKT volume or transplanting a higher proportion of highly sensitized, prior transplant, preemptive, or minority candidates. CONCLUSIONS: Unlike most aspects of transplantation in which center-level variation and volume impact outcomes, we did not find substantial evidence for this in ILDKT. Our findings support the continued practice of ILDKT across these diverse centers.
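The median hazard ratio summarizing between-center variation is conventionally computed as MHR = exp(sqrt(2·σ²)·z₀.₇₅), where σ² is the center-level frailty variance. The sketch below applies that standard formula; the variance value is a hypothetical input chosen only to land near the reported MHR of 1.36, not the study's estimate.

```python
# Hedged sketch: median hazard ratio from an assumed center-level frailty variance.
from math import exp, sqrt
from statistics import NormalDist

def median_hazard_ratio(frailty_variance):
    z75 = NormalDist().inv_cdf(0.75)              # ~0.6745
    return exp(sqrt(2.0 * frailty_variance) * z75)

print(round(median_hazard_ratio(0.105), 2))       # ~1.36 for an assumed variance of 0.105
```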
Subjects
Graft Rejection/prevention & control, Graft Survival/drug effects, HLA Antigens/immunology, Healthcare Disparities, Histocompatibility, Immunosuppressive Agents/therapeutic use, Isoantibodies/blood, Kidney Transplantation, Living Donors, Physicians' Practice Patterns, Adult, Female, Graft Rejection/blood, Graft Rejection/immunology, Graft Rejection/mortality, Humans, Immunosuppressive Agents/adverse effects, Kidney Transplantation/adverse effects, Kidney Transplantation/mortality, Male, Middle Aged, Health Care Quality Indicators, Registries, Risk Assessment, Risk Factors, Time Factors, Treatment Outcome, United States
ABSTRACT
PURPOSE: To identify post-liver transplant CT findings which predict graft failure within 1 year. MATERIALS AND METHODS: We evaluated the CT scans of 202 adult liver transplant recipients at our institution who underwent CT within 3 months after transplantation. We recorded CT findings of liver perfusion defect (LPD), parenchymal homogeneity, and the diameters and attenuations of the hepatic vessels. Findings were correlated with 1-year graft failure, and interobserver variability was assessed. RESULTS: Forty-one (20.3%) of the 202 liver grafts failed within 1 year. Graft failure was highly associated with LPD (n = 18/25, or 67%, versus 15/98, or 15%, p < 0.001), parenchymal hypoattenuation (n = 20/41, or 48.8% versus 17/161, or 10.6%, p < 0.001), and smaller diameter of portal veins (right portal vein [RPV], 10.7 ± 2.7 mm versus 14.7 ± 2.2 mm, and left portal vein [LPV], 9.8 ± 3.0 mm versus 12.4 ± 2.2 mm, p < 0.001, respectively). Of these findings, LPD (hazard ratio [HR], 5.43, p < 0.001) and small portal vein diameters (HR, RPV, 3.33, p < 0.001, and LPV, 3.13, p < 0.05) independently predicted graft failure. All the measurements showed fair to moderate interobserver agreement (0.233-0.597). CONCLUSION: For patients who have a CT scan within the first 3 months of liver transplantation, findings of LPD and small portal vein diameters predict 1-year graft failure. KEY POINTS: • Failed grafts are highly associated with liver perfusion defect, hypoattenuation, and small portal vein. • Right portal vein < 11.5 mm and left portal vein < 10.0 mm were associated with poor graft outcome. • Liver perfusion defect and small portal vein diameter independently predicted graft failure.
Subjects
Liver Transplantation, Adult, Humans, Liver/diagnostic imaging, Living Donors, Portal Vein/diagnostic imaging, X-Ray Computed Tomography
ABSTRACT
AIMS/HYPOTHESIS: Using variable diabetic retinopathy screening intervals, informed by personal risk levels, offers improved engagement of people with diabetes and reallocation of resources to high-risk groups, while addressing the increasing prevalence of diabetes. However, safety data on extending screening intervals are minimal. The aim of this study was to evaluate the safety and cost-effectiveness of individualised, variable-interval, risk-based population screening compared with usual care, with wide-ranging input from individuals with diabetes. METHODS: This was a two-arm, parallel-assignment, equivalence RCT (minimum 2 year follow-up) in individuals with diabetes aged 12 years or older registered with a single English screening programme. Participants were randomly allocated 1:1 at baseline to individualised screening at 6, 12 or 24 months for those at high, medium and low risk, respectively, as determined at each screening episode by a risk-calculation engine using local demographic, screening and clinical data, or to annual screening (control group). Screening staff and investigators were observer-masked to allocation and interval. Data were collected within the screening programme. The primary outcome was attendance (safety). A secondary safety outcome was the development of sight-threatening diabetic retinopathy. Cost-effectiveness was evaluated within a 2 year time horizon from National Health Service and societal perspectives. RESULTS: A total of 4534 participants were randomised. After withdrawals, there were 2097 participants in the individualised screening arm and 2224 in the control arm. Attendance rates at first follow-up were equivalent between the two arms (individualised screening 83.6%; control arm 84.7%; difference -1.0 [95% CI -3.2, 1.2]), while sight-threatening diabetic retinopathy detection rates were non-inferior in the individualised screening arm (individualised screening 1.4%, control arm 1.7%; difference -0.3 [95% CI -1.1, 0.5]). Sensitivity analyses confirmed these findings. No important adverse events were observed. Mean differences in complete case quality-adjusted life-years (EuroQol Five-Dimension Questionnaire, Health Utilities Index Mark 3) did not significantly differ from zero; multiple imputation supported the dominance of individualised screening. Incremental cost savings per person with individualised screening were £17.34 (95% CI 17.02, 17.67) from the National Health Service perspective and £23.11 (95% CI 22.73, 23.53) from the societal perspective, representing a 21% reduction in overall programme costs. Overall, 43.2% fewer screening appointments were required in the individualised arm. CONCLUSIONS/INTERPRETATION: Stakeholders involved in diabetes care can be reassured by this study, which is the largest ophthalmic RCT in diabetic retinopathy screening to date, that extended and individualised, variable-interval, risk-based screening is feasible and can be safely and cost-effectively introduced in established systematic programmes. Because of the 2 year time horizon of the trial and the long time frame of the disease, robust monitoring of attendance and retinopathy rates should be included in any future implementation. TRIAL REGISTRATION: ISRCTN 87561257. FUNDING: The study was funded by the UK National Institute for Health Research.
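The equivalence conclusion for attendance can be illustrated with a two-proportion risk difference and a normal-approximation 95% CI, which closely reproduces the reported interval of roughly (-3.2, 1.2) percentage points. The equivalence margin in the check below is an assumption for illustration, not the trial's prespecified margin.

```python
# Hedged sketch: risk difference in attendance with a normal-approximation 95% CI,
# checked against an assumed equivalence margin.
from math import sqrt

def risk_difference_ci(p1, n1, p2, n2, z=1.96):
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - z * se, diff + z * se)

diff, ci = risk_difference_ci(0.836, 2097, 0.847, 2224)   # individualised vs annual screening
margin = 0.05                                             # assumed equivalence margin
print(f"difference = {diff:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
print("equivalent" if ci[0] > -margin and ci[1] < margin else "not equivalent")
```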
Subjects
Cost-Benefit Analysis, Diabetic Retinopathy/diagnosis, Mass Screening/adverse effects, Mass Screening/economics, Adolescent, Adult, Aged, Aged 80 and over, Humans, Middle Aged, Risk Factors, United Kingdom, Young Adult
ABSTRACT
Sickle cell disease (SCD) is the most common inherited blood disorder in the United States. It is a medically and socially complex, multisystem illness that affects individuals throughout the lifespan. Given improvements in care, most children with SCD survive into adulthood. However, access to adult sickle cell care is poor in many parts of the United States, resulting in increased acute care utilization, disjointed care delivery, and early mortality for patients. A dearth of nonmalignant hematology providers, the lack of a national SCD registry, and the absence of a centralized infrastructure to facilitate comparative quality assessment compounds these issues. As part of a workshop designed to train health care professionals in the skills necessary to establish clinical centers focused on the management of adults living with SCD, we defined an SCD center, elucidated required elements of a comprehensive adult SCD center, and discussed different models of care. There are also important economic impacts of these centers at an institutional and health system level. As more clinicians are trained in providing adult-focused SCD care, center designation will enhance the ability to undertake quality improvement and compare outcomes between SCD centers. Activities will include an assessment of the clinical effectiveness of expanded access to care, the implementation of SCD guidelines, and the efficacy of newly approved targeted medications. Details of this effort are provided.
Subjects
Sickle Cell Anemia, Hematologic Diseases, Adult, Sickle Cell Anemia/therapy, Child, Health Services Accessibility, Humans, United States
ABSTRACT
In 2011 Yale New Haven Hospital, in response to high utilization of acute care services and widespread patient and health care personnel dissatisfaction, set out to improve its care of adults living with sickle cell disease. Re-organization components included recruitment of additional personnel; re-locating inpatients to a single nursing unit; reducing the number of involved providers; personalized care plans for pain management; setting limits upon access to parenteral opioids; and an emphasis upon clinic visits focused upon home management of pain as well as specialty and primary care. Outcomes included dramatic reductions in inpatient days (79%), emergency department visits (63%), and hospitalizations (53%); an increase in outpatient visits (31%); and a decrease in costs (49%). Providers and nurses viewed the re-organization and outcomes positively. Most patients reported improvements in pain control and life style; many patients thought the re-organization process was unfair. Their primary complaint was a lack of shared decision-making. We attribute the contrast in these perspectives to the inherent difficulties of managing recurrent acute and chronic pain with opioids, especially within the context of the imbalance in wellness, power, and privilege between persons living with sickle cell disease, predominantly persons of color and poor socio-economic status, and health care organizations and their personnel.
Subjects
Sickle Cell Anemia/therapy, University Hospitals, Primary Health Care/organization & administration, Adult, Ambulatory Care/statistics & numerical data, Opioid Analgesics/therapeutic use, Costs and Cost Analysis/statistics & numerical data, Female, Hospitalization/statistics & numerical data, Humans, Inpatients/statistics & numerical data, Male, Nurses/statistics & numerical data, Pain Management/statistics & numerical data, Patient Outcome Assessment, Physicians/statistics & numerical data, Socioeconomic Factors
ABSTRACT
BACKGROUND: Device rupture is considered a major complication associated with breast implants. The U.S. Food and Drug Administration recommends magnetic resonance imaging (MRI) surveillance 3 years after implantation and then every 2 years, but adherence to these recommendations is poor. The authors identified current practice management for breast implant rupture surveillance by surveying practicing U.S. plastic surgeons. METHODS: An online survey of all active members of the American Society of Plastic Surgeons was performed. Questions analyzed imaging practice patterns related to breast implants. Logistic regression models were used to analyze determinants for radiographic imaging in breast implant patients. RESULTS: The survey had a response rate of 16.5 percent. For patients with breast implants, 37.7 percent of respondents recommended MRI at the recommended intervals. Fifty-five percent perform imaging only if there is a problem with the implant. Academic surgeons more frequently recommended MRI (56.3 percent versus 39.3 percent; p = 0.0002). Surgeons with less than 5 years of experience are four times more likely to order MRI than surgeons with over 25 years' experience (60.8 percent versus 28.1 percent; p < 0.0001). Furthermore, lower volume surgeons recommend significantly more MRI (45.2 percent versus 27.3 percent; p = 0.001). Respondents are almost two times more likely to recommend MRI in reconstructive versus cosmetic patients (51.2 percent versus 35.6 percent; p = 0.0004). CONCLUSIONS: MRI limitations include high costs, time commitments, and equipment constraints. Fewer than 40 percent of survey respondents suggest the recommended screening frequency to their patients; however, academic, low-volume, early-career surgeons are more likely to recommend MRI implant monitoring. Screening recommendations need to be evidence based and align with common practices to prevent undue system, provider, and patient burden.
Subjects
Breast Implantation/adverse effects, Breast Implants/adverse effects, Guideline Adherence/statistics & numerical data, Implant Capsular Contracture/diagnostic imaging, Physicians' Practice Patterns/statistics & numerical data, United States Food and Drug Administration/standards, Breast Implantation/instrumentation, Female, Guideline Adherence/economics, Humans, Implant Capsular Contracture/prevention & control, Magnetic Resonance Imaging/economics, Magnetic Resonance Imaging/standards, Magnetic Resonance Imaging/statistics & numerical data, Practice Guidelines as Topic, Physicians' Practice Patterns/economics, Physicians' Practice Patterns/standards, Surgeons/statistics & numerical data, Surveys and Questionnaires/statistics & numerical data, Time Factors, United States
ABSTRACT
Using 5 years of US organ procurement organization (OPO) data, we determined the cost of recovering a viable (ie, transplanted) kidney for each of 51 OPOs. We also examined the effects on OPO costs of the recovery of nonviable (ie, discarded) kidneys and other OPO metrics. Annual cost reports from 51 independent OPOs were used to determine the cost per recovered kidney for each OPO. A quadratic regression model was employed to estimate the relationship between the cost of kidneys and the number of viable kidneys recovered, as well as other OPO performance indicators. The cost of transplanted kidneys at individual OPOs ranged widely from $24 000 to $56 000, and the average was $36 000. The cost of a viable kidney tended to decline with the number of kidneys procured up to 549 kidneys per year and then increase. Of the total 81 401 kidneys recovered, 66 454 were viable and 14 947 (18.4%) were nonviable. The costs of kidneys varied widely over the OPOs studied, and costs were a function of the recovered number of viable and nonviable organs, local cost levels, donation after cardiac death, year, and Standardized Donor Rate Ratio. Cost increases were 3% per year.
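The U-shaped relationship described above, with average cost falling up to about 549 viable kidneys per year and then rising, can be visualized with a quadratic average-cost curve. The coefficients below are illustrative values chosen only so the minimum lands near 549 kidneys and near the reported $36 000 average; they are not the study's estimated parameters.

```python
# Hedged sketch: a quadratic average-cost curve with an interior minimum, for illustration only.
import numpy as np

a, b, c = 60_000.0, -87.4, 0.0796           # illustrative: avg_cost(q) = a + b*q + c*q**2
q = np.arange(50, 1001)                      # viable kidneys recovered per year
avg_cost = a + b * q + c * q**2
q_min = q[np.argmin(avg_cost)]
print(f"average cost is minimized near {q_min} kidneys/year "
      f"at about ${avg_cost.min():,.0f} per kidney")   # ~549 and ~$36,000 with these values
```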
Subjects
Kidney Transplantation, Tissue and Organ Procurement, Death, Humans, Kidney, Tissue Donors
ABSTRACT
The Pediatric End-Stage Liver Disease (PELD) score is intended to determine priority for children awaiting liver transplantation. This study examines the impact of PELD's incorporation of "growth failure" as a threshold variable, defined as having weight or height more than 2 standard deviations below the age and gender norm (z-score < -2). First, we demonstrate the "growth failure gap" created by PELD's current calculation methods, in which children have z-scores < -2 but do not meet PELD's growth failure criteria and thus lose 6-7 PELD points. Second, we utilized United Network for Organ Sharing (UNOS) data to investigate the impact of this "growth failure gap." Among 3291 pediatric liver transplant candidates, 26% met PELD-defined growth failure, and 17% fell in the growth failure gap. Children in the growth failure gap had a higher risk of waitlist mortality than those without growth failure (adjusted subhazard ratio [SHR] 1.78, 95% confidence interval [95% CI] 1.05-3.02, P = .03). They also had a higher risk of posttransplant mortality (adjusted HR 1.55, 95% CI 1.03-2.32, P = .03). For children without PELD exception points (n = 1291), waitlist mortality risk nearly tripled for those in the gap (SHR 2.89, 95% CI 1.39-6.01, P = .005). Current methods for determining growth failure in PELD disadvantage candidates arbitrarily and increase their waitlist mortality risk. PELD should be revised to correct this disparity.
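To make the "growth failure gap" concrete, the sketch below classifies a candidate from a height/weight z-score plus a flag indicating whether PELD's own calculation grants the growth-failure points. The function name and the flag are hypothetical scaffolding; the PELD formula itself is not reproduced here.

```python
# Hedged sketch: classifying candidates into PELD-defined growth failure, the growth
# failure gap, or no growth failure, based on the definitions quoted in the abstract.
def growth_failure_category(z_score, peld_assigns_growth_failure):
    if z_score >= -2:
        return "no growth failure"
    # z-score below -2: either PELD also flags growth failure, or the child falls in the gap
    return "PELD-defined growth failure" if peld_assigns_growth_failure else "growth failure gap"

print(growth_failure_category(-2.3, peld_assigns_growth_failure=False))  # -> growth failure gap
```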