Results 1 - 18 of 18
1.
Pediatr Hematol Oncol ; : 1-13, 2024 Oct 24.
Article in English | MEDLINE | ID: mdl-39449234

ABSTRACT

Temporal trends demonstrate improved survival for many common types of pediatric cancer. However, studies have not examined improvement in very rare pediatric cancers or compared these improvements to those in more common cancers. In this cohort study of the Surveillance, Epidemiology, and End Results (SEER) registry, we examined patients from 1975 to 2016 who were 0-19 years of age at the time of diagnosis. Cancers were grouped by decade of diagnosis and by 3 cancer frequency groups: Common, Intermediate, and Rare. Trends in mortality across decades and by cancer frequency were compared using Kaplan-Meier curves and adjusted Cox proportional hazards models. A total of 50,222 patients were available for analysis, with the top 10 cancers grouped as Common (67%), 13 cancers as Intermediate (24%), and 37 cancers as Rare (9%). Rare cancers occurred more often in children who were older and Black. Five-year survival increased from 63% to 86% across all cancers from the 1970s to the 2010s. The hazard ratio (HR) for mortality decreased from the reference of 1 in the 1970s to 0.27 (95% CI: 0.25-0.30) in the 2010s for Common cancers, while the HR dropped only to 0.60 (0.49-0.73) over the same period for Rare cancers. Pediatric oncology patients have experienced dramatic improvements in mortality since the 1970s, with mortality falling by nearly 75% in common cancers. Unfortunately, rare pediatric cancers continue to lag behind more common, and therefore better studied, cancers, highlighting the need for a renewed focus on research efforts for children with these rare diseases.
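The survival comparisons above rest on Kaplan-Meier estimation. As a minimal sketch of how the product-limit curve behind such 5-year survival figures is computed, here is a dependency-free Python version run on invented follow-up data (none of it from SEER):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit survival estimator.

    times:  follow-up time for each patient
    events: 1 if death observed, 0 if censored
    Returns a list of (time, survival probability) steps at each death time.
    """
    n = len(times)
    order = sorted(range(n), key=lambda i: times[i])  # sort by follow-up time
    at_risk = n
    surv = 1.0
    curve = []
    i = 0
    while i < n:
        t = times[order[i]]
        deaths = 0
        removed = 0
        # Group all patients who exit (die or are censored) at the same time t.
        while i < n and times[order[i]] == t:
            deaths += events[order[i]]
            removed += 1
            i += 1
        if deaths:
            surv *= 1 - deaths / at_risk  # product-limit step
            curve.append((t, surv))
        at_risk -= removed
    return curve

# Hypothetical follow-up times (years) and event indicators.
times = [1, 2, 2, 3, 4, 5, 5, 6]
events = [1, 1, 0, 1, 0, 1, 1, 0]
print(kaplan_meier(times, events))
```

Hazard ratios, as in the abstract, come from a Cox model fit over such data; the curve alone gives the unadjusted survival comparison.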

2.
Ann Surg Oncol ; 30(13): 8509-8518, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37695458

ABSTRACT

BACKGROUND: Large decreases in cancer diagnoses were seen early in the COVID-19 pandemic. However, the evolution of these deficits since the end of 2020 and the advent of widespread vaccination is unknown. METHODS: This study examined data from the Veterans Health Administration (VA) from 1 January 2018 through 28 February 2022 and identified patients with screening or diagnostic procedures or new cancer diagnoses for the four most common cancers in the VA health system: prostate, lung, colorectal, and bladder cancers. Monthly procedures and new diagnoses were calculated, and the pre-COVID era (January 2018 to February 2020) was compared with the COVID era (March 2020 to February 2022). RESULTS: The study identified 2.5 million patients who underwent a diagnostic or screening procedure related to the four cancers. A new cancer was diagnosed in 317,833 patients. During the first 2 years of the pandemic, VA medical centers performed 13,022 fewer prostate biopsies, 32,348 fewer cystoscopies, and 200,710 fewer colonoscopies than in 2018-2019. These persistent deficits amounted to a cumulative total of nearly 19,000 undiagnosed prostate cancers and 3300 to 3700 undiagnosed cancers each for lung, colon, and bladder. Decreased diagnostic and screening procedures correlated with decreased new diagnoses of cancer, particularly cancer of the prostate (R = 0.44) and bladder (R = 0.27). CONCLUSION: Disruptions in new diagnoses of four common cancers (prostate, lung, bladder, and colorectal) seen early in the COVID-19 pandemic have persisted for 2 years. Although the reductions have improved since the early pandemic, new reductions during the Delta and Omicron waves demonstrate the continued impact of the COVID-19 pandemic on cancer care.
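The R values above are Pearson correlations between monthly procedure counts and monthly new diagnoses. A minimal, dependency-free sketch of that coefficient (the monthly counts below are invented for illustration, not VA data):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical monthly counts: screening procedures vs. new diagnoses.
procedures = [900, 850, 400, 500, 700, 800]
diagnoses = [90, 85, 50, 55, 70, 82]
print(f"R = {pearson_r(procedures, diagnoses):.2f}")
```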


Subjects
COVID-19 , Colorectal Neoplasms , Prostatic Neoplasms , Male , Humans , Pandemics , Urinary Bladder
3.
Ann Vasc Surg ; 86: 277-285, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35595211

ABSTRACT

BACKGROUND: Despite advancements in medical care and surgical techniques, major amputation continues to be associated with risks for morbidity and mortality. Palliative care programs may help alleviate symptoms and align the care patients receive with their goals and treatment plan. Access to specialty palliative medicine among vascular surgery patients is limited. Here, we aim to describe the utilization and impact of formal palliative care consultation for patients receiving major amputations. METHODS: This is a retrospective, secondary data analysis examining the records of patients who received major amputations by the vascular surgery team between 2016 and 2021. Demographic, operative, and postoperative outcomes were recorded. The primary outcome variable was palliative care consultation during the index admission (the admission in which the patient received their first major amputation). Secondary outcomes were in-hospital mortality, code status at the time of death if death occurred during the index admission, location of death, and discharge destination. RESULTS: The cohort comprised 292 patients (39% female, 53% Black, mean age 63) who received a lower extremity major amputation. Most patients (65%) underwent amputation for limb ischemia. One-year mortality after first major amputation was 29%. Average length of stay was 20 days. Thirty-five (12%) patients received a palliative care consultation during the hospitalization in which they received their first major amputation. On multivariable analysis, patients were more likely to receive a palliative care consult during their index admission if they had undergone a through-knee amputation (OR = 2.89, P = 0.039) or had acute limb ischemia (OR = 4.25, P = 0.005).
A formal palliative care consult was associated with a lower likelihood of in-hospital death (OR = 0.248, P = 0.0167) and an increased likelihood of discharge to hospice (OR = 1.283, P < 0.001). There were no statistically significant differences in the code status of patients who received a palliative care consultation. CONCLUSIONS: In a large academic medical center, palliative medicine consultation was associated with lower in-hospital mortality among patients with advanced vascular disease and major limb amputation. These data will hopefully stimulate much-needed prospective research to develop and test tools to identify patients in need and derive evidence about the impact of palliative care services.
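The odds ratios above come from multivariable models; as a hedged illustration, the unadjusted version of such an OR can be read off a 2x2 table. The counts below are purely hypothetical and not the study's data:

```python
def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio for a 2x2 table:
                  outcome yes  outcome no
    exposed            a           b
    unexposed          c           d
    """
    return (a / b) / (c / d)

# Illustrative counts only: in-hospital death among patients with vs.
# without a palliative care consult (hypothetical numbers).
or_death = odds_ratio(4, 31, 40, 217)
print(round(or_death, 2))
```

A value below 1, as here, corresponds to lower odds of the outcome in the exposed group; the study's reported ORs are adjusted for covariates, which this sketch does not do.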


Subjects
Palliative Care , Peripheral Vascular Diseases , Humans , Female , Middle Aged , Male , Hospital Mortality , Retrospective Studies , Prospective Studies , Length of Stay , Treatment Outcome , Amputation, Surgical , Referral and Consultation , Ischemia/diagnosis , Ischemia/surgery , Lower Extremity/blood supply
5.
Liver Transpl ; 21(12): 1465-70, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26359787

ABSTRACT

The emerging epidemic of older patients with cirrhosis has led to a sharp increase in the number of patients ≥65 years old considering liver transplantation (LT). However, clinicians lack objective measures to risk stratify older patients. We aimed to determine whether the short physical performance battery (SPPB), a well-validated geriatric measure of physical function, has greater prognostic value in older versus younger LT candidates. Adult outpatients listed for LT with a laboratory Model for End-Stage Liver Disease score ≥ 12 underwent physical function testing using the SPPB, consisting of gait speed, chair stands, and balance. Patients were categorized by age ("younger," < 65 years; "older," ≥ 65 years) and SPPB ("impaired," ≤ 9; "robust," > 9). Competing risks models associated age and SPPB with wait-list death/delisting. Of 463 LT candidates, 21% were ≥ 65 years and 18% died or were delisted. Older patients had slower gait (1.1 versus 1.3 m/s; P < 0.001), a trend toward slower chair stands (12.8 versus 11.8 seconds; P = 0.06), and a smaller proportion able to complete all balance tests (65% versus 78%; P = 0.01); SPPB was lower in older versus younger patients (10 versus 11; P = 0.01). When compared to younger robust patients as a reference group, younger impaired patients (hazard ratio [HR], 1.77; P = 0.03) and older impaired patients (HR, 2.70; P = 0.003) had significantly higher risk of wait-list mortality, but there was no difference in risk for older robust patients (HR, 1.38; P = 0.35) [test of equality, P = 0.01]. After adjustment for Model for End-Stage Liver Disease-sodium (MELD-Na) score, only older impaired patients had an increased risk of wait-list mortality compared to younger robust patients (HR, 2.36; P = 0.01; test of equality, P = 0.05). In conclusion, functional impairment, as assessed by the SPPB, predicts death/delisting for LT candidates ≥65 years independent of MELD-Na.
Further research into activity-based interventions to reduce adverse transplant outcomes in this population is warranted.
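The four comparison groups in this study follow from two cutoffs (age 65 years, SPPB score 9). A trivial sketch of that categorization, using the cutoffs stated in the abstract:

```python
def classify(age, sppb):
    """Assign the study's age/SPPB strata:
    'older' = age >= 65 years, 'impaired' = SPPB <= 9 (else 'robust')."""
    age_group = "older" if age >= 65 else "younger"
    function = "impaired" if sppb <= 9 else "robust"
    return f"{age_group} {function}"

print(classify(70, 8))   # an older impaired candidate
print(classify(50, 11))  # a younger robust candidate
```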


Subjects
Geriatric Assessment , Liver Transplantation , Patient Selection , Waiting Lists/mortality , Aged , Female , Humans , Male , Middle Aged , Prospective Studies , United States/epidemiology
6.
Liver Int ; 35(10): 2294-300, 2015 Oct.
Article in English | MEDLINE | ID: mdl-25757956

ABSTRACT

BACKGROUND & AIMS: Current clinical assays for total 25-hydroxy (OH) vitamin D measure vitamin D bound to vitamin D-binding protein (DBP) and albumin plus unbound ('free') D. We investigated the relationship between total and free 25(OH)D with bone metabolism markers in normal (>3.5 g/dl) vs. low (≤3.5 g/dl) albumin cirrhotics. METHODS: Eighty-two cirrhotics underwent measurement of free and total 25(OH)D by immunoassay, DBP, and markers of bone metabolism [intact parathyroid hormone (iPTH), C-telopeptide (CTX), bone-specific alkaline phosphatase (BSAP), osteocalcin, amino-terminal pro-peptide of type 1-collagen (P1NP)]. Pearson's coefficients assessed relevant associations. RESULTS: Cirrhotics with low (n = 54) vs. normal (n = 28) albumin had lower total 25(OH)D (12.1 vs. 21.7 ng/ml), free 25(OH)D (6.2 vs. 8.6 pg/ml), and DBP (91.4 vs. 140.3 µg/ml) [P < 0.01 for each]. iPTH was similar in the low and normal albumin groups (33 vs. 28 pg/ml; P = 0.38), although serum CTX (0.46 vs. 0.28 ng/ml) and BSAP (31.7 vs. 24.8 µg/L) were increased (P < 0.01). An inverse relationship was observed between total 25(OH)D and iPTH in normal (r = -0.47, P = 0.01) but not low albumin cirrhotics (r = 0.07, P = 0.62). Similar associations were seen between free 25(OH)D and iPTH (normal: r = -0.46, P = 0.01; low: r = -0.03, P = 0.84). BSAP, osteocalcin, and P1NP were elevated above the normal range in all cirrhotics but were not consistently associated with total or free 25(OH)D. CONCLUSIONS: Cirrhotics with low vs. normal albumin have lower levels of DBP, total, and free 25(OH)D. The expected relationship between total or free 25(OH)D and iPTH was observed in normal but not in low albumin cirrhotics, demonstrating that total 25(OH)D is not an accurate marker of bioactive vitamin D status in cirrhotics with synthetic dysfunction. Additional investigation into the role of vitamin D supplementation and its impact on bone mineral homoeostasis in this population is needed.


Subjects
Albumins/analysis , Alkaline Phosphatase/blood , Bone Remodeling , Liver Cirrhosis/blood , Parathyroid Hormone/blood , Vitamin D-Binding Protein/blood , Vitamin D/blood , Biomarkers/blood , Calcium/blood , Dietary Supplements , Female , Humans , Male , Middle Aged , Prospective Studies
7.
Liver Int ; 35(9): 2167-73, 2015 Sep.
Article in English | MEDLINE | ID: mdl-25644788

ABSTRACT

BACKGROUND & AIMS: The US liver allocation system effectively prioritizes most liver transplant candidates by disease severity as assessed by the Model for End-Stage Liver Disease (MELD) score. Yet, one in five dies on the wait-list. We aimed to determine whether clinician assessments of health status could identify this subgroup of patients at higher risk for wait-list mortality. METHODS: From 2012-2013, clinicians of all adult liver transplant candidates with laboratory MELD ≥ 12 were asked at the clinic visit: 'How would you rate your patient's overall health today (0 = excellent, 5 = very poor)?' The odds of death/delisting for being too sick for transplant by clinician-assessment score ≥3 vs. <3 were assessed by logistic regression. RESULTS: Three hundred and forty-seven liver transplant candidates (36% female) had a mean follow-up of 13 months. Men differed from women by disease aetiology (P < 0.01) but were similar in age and markers of liver disease severity (P > 0.05). Mean clinician assessment differed between men and women (2.3 vs. 2.6; P = 0.02). The correlation between clinician assessment and MELD was ρ = 0.28 (P < 0.01). 53/347 (15%) died or were delisted. In univariable analysis, a clinician-assessment score ≥ 3 was associated with increased odds of death/delisting (OR 2.57; 95% CI 1.42-4.66). After adjustment for MELD and age, a clinician-assessment score ≥ 3 was associated with 2.25 (95% CI 1.22-4.15) times the odds of death/delisting compared to a score < 3. CONCLUSIONS: A standardized clinician assessment of health status can identify liver transplant candidates at high risk for wait-list mortality independent of MELD score. Objectifying this 'eyeball test' may inform interventions targeted at this vulnerable subgroup to optimize wait-list outcomes.


Subjects
End Stage Liver Disease/mortality , End Stage Liver Disease/surgery , Health Status , Liver Transplantation , Aged , Female , Humans , Logistic Models , Male , Middle Aged , Multivariate Analysis , Prognosis , Risk Assessment , Risk Factors , Severity of Illness Index , Waiting Lists
8.
J Am Geriatr Soc ; 72(1): 170-180, 2024 01.
Article in English | MEDLINE | ID: mdl-37725439

ABSTRACT

BACKGROUND: Frailty is an important geriatric syndrome predicting adverse health outcomes in older adults. However, the longitudinal characteristics of frailty components in post-hip fracture patients are less understood. Adopting the Fried frailty definition, we examined the longitudinal trends and sex differences in trajectories of frailty and its components over 1 year post-fracture. METHODS: Three hundred and twenty-seven hip fracture patients (162 men and 165 women, with mean ages 80.1 and 81.5 years, respectively) from the Baltimore Hip Studies 7th cohort, with measurements at 22 days after admission and at months 2, 6, and 12 post-fracture, were analyzed. Frailty components included: grip strength, gait speed, weight, total energy expenditure, and exhaustion. Longitudinal analysis used mixed effect models. RESULTS: At baseline, men were sicker, with worse cognitive status, and had higher weight and grip strength but lower total energy expenditure than women (p < 0.001). The prevalence of frailty was 31.5%, 30.2%, and 28.2% at months 2, 6, and 12, respectively, showing no longitudinal trends or sex differences. However, its components showed substantial recovery trends over the post-fracture year after confounding adjustments, including increasing gait speed, decreasing risk of exhaustion, and stabilizing weight loss and energy expenditure over time. In particular, while men's grip strength tended to remain stable over the first year post-surgery, women's grip strength declined significantly over time. On average over time, women were more active, with higher energy expenditure but lower grip strength and weight than men. CONCLUSION: Significant recovery trends and sex differences were observed in frailty components during the first year post-fracture. Overall frailty status did not show these trends over months 2-12, likely because a summary measure can obscure changes in its components.
Therefore, frailty components provided important multi-dimensional information on the complex recovery process of patients, indicating targets for intervention beyond the global binary measure of frailty.


Subjects
Frailty , Hip Fractures , Humans , Female , Male , Aged , Aged, 80 and over , Frailty/epidemiology , Frail Elderly/psychology , Prospective Studies , Hip Fractures/epidemiology , Hospitalization , Geriatric Assessment/methods
9.
J Vasc Surg Venous Lymphat Disord ; 12(2): 101693, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37838307

ABSTRACT

OBJECTIVE: Venous thromboembolism (VTE) is a preventable complication of hospitalization. Risk-stratification is the cornerstone of prevention. The Caprini and Padua are two of the most commonly used risk-assessment models (RAMs) to quantify VTE risk. Both models perform well in select, high-risk cohorts. Although VTE RAMs were designed for use in all hospital admissions, they are mostly tested in select, high-risk cohorts. We aim to evaluate the two RAMs in a large, unselected cohort of patients. METHODS: We analyzed consecutive first hospital admissions of 1,252,460 unique surgical and non-surgical patients to 1298 Veterans Affairs facilities nationwide between January 2016 and December 2021. Caprini and Padua scores were generated using the Veterans Affairs' national data repository. We determined the ability of the two RAMs to predict VTE within 90 days of admission. In secondary analyses, we evaluated prediction at 30 and 60 days, in surgical vs non-surgical patients, after excluding patients with upper extremity deep vein thrombosis, in patients hospitalized ≥72 hours, after including all-cause mortality in a composite outcome, and after accounting for prophylaxis in the predictive model. We used area under the receiver operating characteristic curves (AUCs) as the metric of prediction. RESULTS: A total of 330,388 (26.4%) surgical and 922,072 (73.6%) non-surgical consecutively hospitalized patients (total N = 1,252,460) were analyzed. Caprini scores ranged from 0 to 28 (median, 4; interquartile range [IQR], 3-6); Padua scores ranged from 0-13 (median, 1; IQR, 1-3). The RAMs showed good calibration and higher scores were associated with higher VTE rates. VTE developed in 35,557 patients (2.8%) within 90 days of admission. The ability of both models to predict 90-day VTE was low (AUCs: Caprini, 0.56; 95% confidence interval [CI], 0.56-0.56; Padua, 0.59; 95% CI, 0.58-0.59). 
Prediction remained low for surgical (Caprini, 0.54; 95% CI, 0.53-0.54; Padua, 0.56; 95% CI, 0.56-0.57) and non-surgical patients (Caprini, 0.59; 95% CI, 0.58-0.59; Padua, 0.59; 95% CI, 0.59-0.60). There was no clinically meaningful change in predictive performance in any of the sensitivity analyses. CONCLUSIONS: Caprini and Padua RAM scores have low ability to predict VTE events in a cohort of unselected consecutive hospitalizations. Improved VTE RAMs must be developed before they can be applied to a general hospital population.
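The AUCs above can be computed directly from risk scores and outcomes via the rank (Mann-Whitney) formulation: the AUC is the probability that a randomly chosen patient with VTE has a higher score than a randomly chosen patient without, counting ties as half. A self-contained sketch on invented scores (quadratic-time for clarity; a cohort of a million admissions would use the equivalent rank-sum form):

```python
def roc_auc(scores, labels):
    """ROC AUC via the Mann-Whitney U statistic.

    scores: risk score per patient (e.g., a Caprini-style score)
    labels: 1 if the outcome occurred, 0 otherwise
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for q in neg:
            if p > q:
                wins += 1.0      # positive case outranks negative
            elif p == q:
                wins += 0.5      # ties count as half
    return wins / (len(pos) * len(neg))

# Hypothetical scores with outcome labels (illustrative only).
scores = [2, 3, 4, 4, 6, 7, 8, 10]
labels = [0, 0, 0, 1, 0, 1, 1, 1]
print(roc_auc(scores, labels))
```

An AUC near 0.56-0.59, as reported above, means the score barely outperforms the chance value of 0.5.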


Subjects
Venous Thromboembolism , Veterans , Humans , Venous Thromboembolism/diagnosis , Venous Thromboembolism/epidemiology , Venous Thromboembolism/etiology , Risk Factors , Retrospective Studies , Risk Assessment
10.
J Vasc Surg Venous Lymphat Disord ; : 101968, 2024 Sep 19.
Article in English | MEDLINE | ID: mdl-39305950

ABSTRACT

OBJECTIVE: Venous thromboembolism (VTE) is a preventable cause of hospitalization-related morbidity and mortality. VTE prevention requires accurate risk stratification. Federal agencies mandated VTE risk assessment for all hospital admissions. We have shown that the widely used Caprini (30 risk factors) and Padua (11 risk factors) VTE risk-assessment models (RAMs) have limited predictive ability for VTE when used for all general hospital admissions. Here, we test whether combining the risk factors from all 23 available VTE RAMs improves VTE risk prediction. METHODS: We analyzed data from the first hospitalizations of 1,282,014 surgical and non-surgical patients admitted to 1298 Veterans Affairs facilities nationwide between January 2016 and December 2021. We used logistic regression to predict VTE within 90 days of admission using risk factors from all 23 available VTE RAMs. Area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and positive (PPV) and negative predictive values (NPV) were used to quantify the predictive power of our models. The metrics were computed at two diagnostic thresholds, one maximizing sensitivity + specificity - 1 and one maximizing PPV, and were compared using McNemar's test. DeLong's test was used to compare AUCs. RESULTS: After excluding those with missing data, 1,185,633 patients (mean age, 66 years; 93% male; 72% White) were analyzed, of whom 33,253 (2.8%) had a VTE (deep venous thrombosis [DVT], n = 19,218, 1.6%; pulmonary embolism [PE], n = 10,190, 0.9%; PE + DVT, n = 3845, 0.3%). Our composite RAM included 102 risk factors and improved prediction of VTE compared with the Caprini RAM risk factors (AUC composite model: 0.74; AUC Caprini risk-factor model: 0.63; P < .0001). When sensitivity + specificity - 1 was maximized, the composite model demonstrated small improvements in sensitivity, specificity, and PPV; NPV was high in both models.
When PPV was maximized, the PPV of the composite model was improved but remained low. The nature of the relationship between NPV and PPV precluded any further gain in PPV by sacrificing NPV and sensitivity. CONCLUSIONS: Using a composite of 102 risk factors from all available VTE RAMs, we improved VTE prediction in a large, national cohort of >1 million general hospital admissions. However, neither model has a sensitivity or PPV that permits it to be a reliable predictor of VTE. We demonstrate the limits of currently available VTE risk prediction tools; no available RAM is ready for widespread use in the general hospital population.
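The first threshold rule used above (maximizing sensitivity + specificity - 1) is Youden's J statistic. A minimal sketch of that cutoff search on invented scores and outcomes:

```python
def best_youden_threshold(scores, labels):
    """Find the cutoff maximizing Youden's J = sensitivity + specificity - 1.

    A case is flagged positive when its score >= threshold.
    Returns (threshold, J) for the best threshold.
    """
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    best_t, best_j = None, float("-inf")
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        j = tp / n_pos + tn / n_neg - 1  # sensitivity + specificity - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Hypothetical risk scores and outcome labels (illustrative only).
scores = [2, 3, 4, 4, 6, 7, 8, 10]
labels = [0, 0, 0, 1, 0, 1, 1, 1]
print(best_youden_threshold(scores, labels))
```

As the abstract notes, with a rare outcome (2.8% VTE) even a J-optimal threshold can leave PPV low, since PPV depends on prevalence as well as sensitivity and specificity.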

11.
J Vasc Surg Venous Lymphat Disord ; 11(6): 1182-1191.e13, 2023 11.
Article in English | MEDLINE | ID: mdl-37499868

ABSTRACT

BACKGROUND: Venous thromboembolism (pulmonary embolism and deep vein thrombosis) is an important preventable cause of in-hospital death. Prophylaxis with low doses of anticoagulants reduces the incidence of venous thromboembolism but can also cause bleeding. It is, therefore, important to stratify the risk of bleeding for hospitalized patients when considering pharmacologic prophylaxis. The IMPROVE (International Medical Prevention Registry on Venous Thromboembolism) and Consensus risk assessment models (RAMs) are the two tools available for such patients. Few studies have evaluated their ability to predict bleeding in a large, unselected cohort of patients. We assessed the ability of the IMPROVE and Consensus bleeding RAMs to predict bleeding within 90 days of hospitalization in a comprehensive analysis encompassing all hospitalized patients, regardless of surgical vs nonsurgical status. METHODS: We analyzed consecutive first hospital admissions of 1,228,448 unique surgical and nonsurgical patients to 1298 Veterans Affairs facilities nationwide between January 2016 and December 2021. IMPROVE and Consensus scores were generated using data from a repository of their common electronic medical records. We assessed the ability of the two RAMs to predict bleeding within 90 days of admission. We used area under the receiver operating characteristic curves to determine the prediction of bleeding by each RAM. RESULTS: Of 1,228,448 hospitalized patients, 324,959 (26.5%) were surgical and 903,489 (73.5%) were nonsurgical. Of these patients, 68,372 (5.6%) had a bleeding event within 90 days of admission. The Consensus RAM scores ranged from -5.60 to -1.21 (median, -4.93; interquartile range, -5.60 to -4.93). The IMPROVE RAM scores ranged from 0 to 22 (median, 3.5; interquartile range, 2.5-5). Both showed good calibration, with higher scores associated with higher bleeding rates.
The ability of both RAMs to predict 90-day bleeding was low (area under the receiver operating characteristic curve 0.61 for the IMPROVE RAM and 0.59 for the Consensus RAM). The predictive ability was also low at 30 and 60 days for surgical and nonsurgical patients, patients receiving prophylactic, therapeutic, or no anticoagulation, and patients hospitalized for ≥72 hours. Prediction was also low across different bleeding outcomes (ie, any bleeding, gastrointestinal bleeding, nongastrointestinal bleeding, and bleeding or death). CONCLUSIONS: In this large, unselected, nationwide cohort of surgical and nonsurgical hospital admissions, increasing IMPROVE and Consensus bleeding RAM scores were associated with increasing bleeding rates. However, both RAMs had low ability to predict bleeding at 0 to 90 days after admission. Thus, the currently available RAMs require modification and rigorous reevaluation before they can be applied universally.


Subjects
Venous Thromboembolism , Humans , Venous Thromboembolism/diagnosis , Venous Thromboembolism/prevention & control , Venous Thromboembolism/drug therapy , Hospital Mortality , Anticoagulants/adverse effects , Risk Assessment , Hemorrhage/chemically induced , Risk Factors
12.
medRxiv ; 2023 Mar 21.
Article in English | MEDLINE | ID: mdl-36993603

ABSTRACT

Background: Venous thromboembolism (VTE) is a preventable complication of hospitalization. Risk-stratification is the cornerstone of prevention. The Caprini and Padua are the most commonly used risk-assessment models to quantify VTE risk. Both models perform well in select, high-risk cohorts. While VTE risk-stratification is recommended for all hospital admissions, few studies have evaluated the models in a large, unselected cohort of patients. Methods: We analyzed consecutive first hospital admissions of 1,252,460 unique surgical and non-surgical patients to 1,298 VA facilities nationwide between January 2016 and December 2021. Caprini and Padua scores were generated using the VA's national data repository. We first assessed the ability of the two RAMs to predict VTE within 90 days of admission. In secondary analyses, we evaluated prediction at 30 and 60 days, in surgical versus non-surgical patients, after excluding patients with upper extremity DVT, in patients hospitalized ≥72 hours, after including all-cause mortality in the composite outcome, and after accounting for prophylaxis in the predictive model. We used area under the receiver-operating characteristic curves (AUC) as the metric of prediction. Results: A total of 330,388 (26.4%) surgical and 922,072 (73.6%) non-surgical consecutively hospitalized patients (total n=1,252,460) were analyzed. Caprini scores ranged from 0-28 (median, interquartile range: 4, 3-6); Padua scores ranged from 0-13 (1, 1-3). The RAMs showed good calibration and higher scores were associated with higher VTE rates. VTE developed in 35,557 patients (2.8%) within 90 days of admission. The ability of both models to predict 90-day VTE was low (AUCs: Caprini 0.56 [95% CI 0.56-0.56], Padua 0.59 [0.58-0.59]). Prediction remained low for surgical (Caprini 0.54 [0.53-0.54], Padua 0.56 [0.56-0.57]) and non-surgical patients (Caprini 0.59 [0.58-0.59], Padua 0.59 [0.59-0.60]). 
There was no clinically meaningful change in predictive performance in patients admitted for ≥72 hours, after excluding upper extremity DVT from the outcome, after including all-cause mortality in the outcome, or after accounting for ongoing VTE prophylaxis. Conclusions: Caprini and Padua risk-assessment model scores have low ability to predict VTE events in a cohort of unselected consecutive hospitalizations. Improved VTE risk-assessment models must be developed before they can be applied to a general hospital population.

13.
J Vasc Surg Venous Lymphat Disord ; 10(6): 1401-1409.e7, 2022 11.
Article in English | MEDLINE | ID: mdl-35926802

ABSTRACT

OBJECTIVE: Hospital-acquired venous thromboembolism (VTE, including pulmonary embolism [PE] and deep vein thrombosis [DVT]) is a preventable cause of hospital death. The Caprini risk assessment model (RAM) is one of the most commonly used tools to assess VTE risk. The RAM is operationalized in clinical practice by grouping several risk scores into VTE risk categories that drive decisions on prophylaxis. A correlation between increasing Caprini scores and rising VTE risk is well-established. We assessed whether the increasing VTE risk categories assigned on the basis of recommended score ranges also correlate with increasing VTE risk. METHODS: We conducted a systematic review of articles that used the Caprini RAM to assign VTE risk categories and that reported corresponding VTE rates. A Medline and EMBASE search retrieved 895 articles, of which 57 fulfilled inclusion criteria. RESULTS: Forty-eight (84%) of the articles were cohort studies, 7 (12%) were case-control studies, and 2 (4%) were cross-sectional studies. The populations varied from postsurgical to medical patients. There was variability in the number of VTE risk categories assigned by individual studies (6 used 5 risk categories, 37 used 4, 11 used 3, and 3 used 2), and in the cutoff scores defining the risk categories (scores from 0 alone to 0-10 for the low-risk category; from ≥5 to ≥10 for high risk). The VTE rates reported for similar risk categories also varied across studies (0%-12.3% in the low-risk category; 0%-40% for high risk). The Caprini RAM is designed to assess composite VTE risk; however, two studies reported PE or DVT rates alone, and many of the other studies did not specify the types of DVTs analyzed. The Caprini RAM predicts VTE at 30 days after assessment; however, only 17 studies measured outcomes at 30 days; the remaining studies had either shorter or longer follow-ups (0-180 days). 
CONCLUSIONS: The usefulness of the Caprini RAM is limited by heterogeneity in its implementation across centers. The score-derived VTE risk categorization varies significantly in the number of risk categories used, the cutpoints defining those categories, the outcome measured, and the follow-up duration. This variability leads to similar risk categories being associated with different VTE rates, which affects the clinical and research implications of the results. To enhance generalizability, there is a need for studies that validate the RAM in a broad population of medical and surgical patients, identify standardized risk categories, define risk of DVT and PE as distinct end points, and measure outcomes at standardized follow-up time points.


Subjects
Pulmonary Embolism , Venous Thromboembolism , Venous Thrombosis , Humans , Pulmonary Embolism/epidemiology , Retrospective Studies , Risk Assessment/methods , Risk Factors , Venous Thromboembolism/diagnosis , Venous Thromboembolism/epidemiology , Venous Thromboembolism/etiology , Venous Thrombosis/complications
15.
JSLS ; 25(2)2021.
Article in English | MEDLINE | ID: mdl-34135563

ABSTRACT

BACKGROUND: Minimally Invasive Surgery (MIS) is one of the more recently established surgical fellowships, with many candidates applying because of a perceived lack of exposure to advanced MIS during residency. The desire for advanced training should be reflected in increased competitiveness for fellowship positions. The aim of this study is to determine the desirability of MIS fellowships over time through review of national application data. METHODS: We reviewed fellowship match statistics obtained from The Fellowship Council, the organizing body behind the MIS fellowship match. Data from January 1, 2008 - December 31, 2019 were included. We compared match rates to other specialties using data from the National Resident Matching Program, a nonprofit organization established for US residency and some fellowship programs. RESULTS: From 2008 to 2019, the number of certified MIS fellowship programs increased from 124 to 141. While this program expansion was associated with a 19% increase in available positions, the number of applications increased 36%. As a result, the proportion of positions filled increased from 83% to 97%, but the match rate among US applicants fell from 82% to 71% over this interval. In comparison, the match rates for pediatric surgery, surgical oncology, vascular surgery, and surgical critical care fellowships remained largely unchanged, most recently 50%, 56%, 99%, and 100%, respectively. CONCLUSION: Over the last decade, US residents have shown increased interest in pursuing MIS fellowship positions. As a consequence, the match process for MIS fellowships is becoming increasingly competitive.


Subjects
Fellowships and Scholarships/trends, Internship and Residency/economics, Minimally Invasive Surgical Procedures/education, Education, Medical, Graduate/statistics & numerical data, Humans, Specialties, Surgical/education
16.
World J Gastroenterol; 24(33): 3770-3775, 2018 Sep 07.
Article in English | MEDLINE | ID: mdl-30197482

ABSTRACT

AIM: To investigate beta-blocker (BB) use in patients with cirrhosis and to determine its effects on physical frailty and overall survival. METHODS: Adult outpatients with cirrhosis listed for liver transplantation underwent testing of physical frailty using the performance-based Liver Frailty Index, comprising chair stands, grip strength, and balance testing, as well as self-reported assessments of exhaustion and physical activity. BB use was ascertained by medical chart review. Univariable and multivariable logistic regression were performed to determine the association between BB use and measures of physical frailty. Competing-risk analyses were performed to determine the effect of BB use on wait-list mortality, defined as death or delisting for being too sick for transplant. RESULTS: Of 344 patients, 35% were female, the median age was 60, the median Model for End-Stage Liver Disease (MELD) score was 15, and 53% were prescribed a BB. Patients on a BB were similar to those not on a BB except for the percentage of women (25% vs 46%; P < 0.001) and BMI (29 vs 28; P = 0.008). With respect to tests of physical frailty, BB use was not associated with increased odds of frailty (by the Liver Frailty Index), exhaustion, or low physical activity. BB use was, however, significantly associated with a decreased adjusted risk of mortality (SHR 0.55; P = 0.005). CONCLUSION: In patients with cirrhosis awaiting liver transplantation, BB use is not associated with physical frailty. We confirmed the known survival benefit of BB use; concerns about adverse effects should not deter their utilization when indicated.
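The competing-risk framework behind the reported SHR treats wait-list death/delisting and a competing event (e.g. transplant) as mutually exclusive outcomes. A minimal pure-Python sketch of the nonparametric side of that framework, the Aalen-Johansen cumulative incidence estimator, is below; this is not the authors' actual analysis (the SHR presumably comes from a regression model such as Fine-Gray), and the event coding is an assumption for illustration:

```python
def cumulative_incidence(times, events, cause):
    """Aalen-Johansen cumulative incidence for one event type.

    times  : follow-up times
    events : 0 = censored, 1 = death/delisting, 2 = competing event
             (e.g. transplant) -- this coding is an illustrative assumption
    cause  : event code whose cumulative incidence we want
    """
    data = sorted(zip(times, events))
    n = len(data)
    at_risk, surv, cif = n, 1.0, 0.0   # surv = all-cause KM just before t
    curve = []
    i = 0
    while i < n:
        t = data[i][0]
        d_cause = d_any = censored = 0
        while i < n and data[i][0] == t:   # pool tied observations at time t
            ev = data[i][1]
            if ev == cause:
                d_cause += 1
            if ev != 0:
                d_any += 1
            else:
                censored += 1
            i += 1
        cif += surv * d_cause / at_risk    # incidence mass for this cause
        surv *= 1 - d_any / at_risk        # update all-cause survival
        at_risk -= d_any + censored
        curve.append((t, cif))
    return curve

# Toy example: deaths at t=1 and t=3, a transplant at t=2, censoring at t=4.
curve = cumulative_incidence([1, 2, 3, 4], [1, 2, 1, 0], cause=1)
print(curve)  # incidence 0.25 at t=1, rising to 0.5 from t=3 on
```

Unlike 1 minus a Kaplan-Meier that censors transplants, this estimator never over-counts: the cause-specific incidences and the overall survival sum to 1.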


Subjects
Adrenergic beta-Antagonists/adverse effects, End Stage Liver Disease/mortality, Frailty/epidemiology, Liver Cirrhosis/mortality, Waiting Lists/mortality, End Stage Liver Disease/complications, End Stage Liver Disease/diagnosis, End Stage Liver Disease/therapy, Esophageal and Gastric Varices/etiology, Esophageal and Gastric Varices/prevention & control, Female, Frailty/chemically induced, Gastrointestinal Hemorrhage/etiology, Gastrointestinal Hemorrhage/prevention & control, Humans, Liver Cirrhosis/complications, Liver Cirrhosis/diagnosis, Liver Cirrhosis/therapy, Liver Transplantation, Male, Middle Aged, Prospective Studies, Severity of Illness Index, Survival Analysis, Treatment Outcome
18.
Transplantation; 100(8): 1692-8, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27314169

ABSTRACT

BACKGROUND: Sarcopenia and functional impairment are common and lethal extrahepatic manifestations of cirrhosis. We aimed to determine the association between computed tomography (CT)-based measures of muscle mass and quality (sarcopenia) and performance-based measures of muscle function. METHODS: Adults listed for liver transplant underwent testing of muscle function (grip strength, Short Physical Performance Battery [SPPB]) within 3 months of abdominal CT. Muscle mass (cm²/m²) = total cross-sectional area of the psoas, paraspinal, and abdominal wall muscles at L3 on CT, normalized for height. Muscle quality = mean Hounsfield units for the total skeletal muscle area at L3. RESULTS: Among 292 candidates, median grip strength was 31 kg, SPPB score was 11, muscle mass was 49 cm²/m², and muscle quality was 35 Hounsfield units. Grip strength correlated weakly with muscle mass (ρ = 0.26, P < 0.001) and muscle quality (ρ = 0.27, P < 0.001) in men, and with muscle quality (ρ = 0.23, P = 0.02), but not muscle mass, in women. SPPB correlated weakly with muscle quality in men (ρ = 0.38, P < 0.001) and women (ρ = 0.25, P = 0.02) but did not correlate with muscle mass in either sex. After adjustment for sex, Model for End-Stage Liver Disease (MELD)-Na, hepatocellular carcinoma, and body mass index, grip strength (hazard ratio [HR], 0.74; 95% confidence interval [95% CI], 0.59-0.92; P = 0.008), SPPB (HR, 0.89; 95% CI, 0.82-0.97; P = 0.01), and muscle quality (HR, 0.77; 95% CI, 0.63-0.95; P = 0.02) were associated with waitlist mortality, whereas muscle mass was not (HR, 0.91; 95% CI, 0.75-1.11; P = 0.35). CONCLUSIONS: Performance-based tests of muscle function are only modestly associated with CT-based muscle measures. Given that they predict waitlist mortality and can be conducted quickly and economically, tests of muscle function may have greater clinical utility than CT-based measures of sarcopenia.
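The reported ρ values are rank correlations (Spearman's ρ), which capture monotone association without assuming linearity. As a minimal illustration of how such a coefficient is computed (a pure-Python sketch with average ranks for ties; the study presumably used a statistics package):

```python
def _ranks(xs):
    """1-based ranks of xs, with ties assigned their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[j]]:
            j += 1                      # extend over the tie group
        avg = (i + j) / 2 + 1           # average 1-based rank of the group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

print(spearman_rho([1, 2, 3], [1, 4, 9]))   # 1.0: perfectly monotone
print(spearman_rho([1, 2, 3], [9, 4, 1]))   # -1.0: perfectly inverse
```

A ρ around 0.25, as in the grip-strength comparisons above, thus indicates only a weak monotone relationship between the rankings.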


Subjects
Liver Diseases/complications, Liver Transplantation, Muscle Strength, Muscle, Skeletal/diagnostic imaging, Muscle, Skeletal/physiopathology, Sarcopenia/diagnosis, Tomography, X-Ray Computed, Waiting Lists, Aged, Chi-Square Distribution, Female, Health Status, Humans, Liver Diseases/diagnosis, Liver Diseases/mortality, Liver Diseases/surgery, Male, Middle Aged, Multivariate Analysis, Odds Ratio, Organ Size, Predictive Value of Tests, Prognosis, Proportional Hazards Models, Reproducibility of Results, Risk Factors, Sarcopenia/diagnostic imaging, Sarcopenia/etiology, Sarcopenia/physiopathology, Time Factors, Waiting Lists/mortality