ABSTRACT
BACKGROUND: Obesity is a risk factor for cholelithiasis, which can lead to acute cholecystitis, a condition treated with cholecystectomy. The purpose of this study was to analyze the associations of body mass index class with the intended operative approach (laparoscopic versus open) and with the outcomes of cholecystectomy for acute cholecystitis. METHODS: We conducted a retrospective cohort study using the American College of Surgeons National Surgical Quality Improvement Program data from 2008-2013. The effects of body mass index class on intended procedure type (laparoscopic versus open), conversion from laparoscopic to open operation, and outcomes after cholecystectomy were examined using multivariable logistic regression. RESULTS: Data on 20,979 patients who underwent cholecystectomy for acute cholecystitis showed that 18,228 (87%) had a laparoscopic operation; 639 (4%) of these patients required conversion to an open approach; and 2,751 (13%) underwent intended open cholecystectomy. There was an independent association between super obesity (body mass index 50 or greater) and an intended open operation (odds ratio 1.53, 95% confidence interval 1.14-2.05, P = .01). In a model controlling for all other important factors, an intended open procedure (odds ratio 3.10, 95% confidence interval 2.40-4.02, P < .0001) and conversion (odds ratio 3.45, 95% confidence interval 2.16-5.50, P < .0001) were associated with an increased risk of death/serious morbidity. In the same model, body mass index class was not associated with increased death/serious morbidity. Outcomes after conversion were not substantially worse than outcomes after intended open cholecystectomy. CONCLUSION: This study supports the possibility that an intended open approach to acute cholecystitis, not body mass index class, is associated with worse outcomes after cholecystectomy. An initial attempt at laparoscopy may benefit patients, even those at the highest end of the body mass index spectrum.
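The adjusted estimates above come from multivariable logistic regression, which reports each association as an odds ratio with a confidence interval. As a minimal sketch of how a fitted coefficient and its standard error translate into such figures (the coefficient and standard error below are back-calculated for illustration, not taken from the study's model output):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard
    error into an odds ratio with a 95% confidence interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Back-calculated for illustration: a coefficient of 0.4244 with a
# standard error of 0.1497 reproduces an odds ratio of about 1.53
# with a 95% confidence interval of about 1.14-2.05.
or_, lo, hi = odds_ratio_ci(0.4244, 0.1497)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```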
Subject(s)
Body Mass Index , Cholecystectomy, Laparoscopic , Cholecystitis, Acute/surgery , Conversion to Open Surgery , Obesity, Morbid/complications , Adult , Aged , Cholecystitis, Acute/complications , Cholecystitis, Acute/mortality , Female , Humans , Logistic Models , Male , Middle Aged , Patient Selection , Quality Improvement , Retrospective Studies , Treatment Outcome
ABSTRACT
BACKGROUND: Arthralgia is a common and debilitating side-effect experienced by breast cancer patients receiving aromatase inhibitors (AIs) and often results in premature drug discontinuation. METHODS: We conducted a randomised controlled trial of electro-acupuncture (EA) as compared to waitlist control (WLC) and sham acupuncture (SA) in postmenopausal women with breast cancer who self-reported arthralgia attributable to AIs. Acupuncturists performed 10 EA/SA treatments over 8 weeks using a manualised protocol with 2 Hz electro-stimulation delivered by a TENS unit. Acupuncturists administered SA using Streitberger (non-penetrating) needles at non-traditional acupuncture points without electro-stimulation. The primary end-point was the change in pain severity, measured by the Brief Pain Inventory (BPI), between EA and WLC at Week 8; durability of response at Week 12 and comparison of EA to SA were secondary aims. FINDINGS: Among the 67 randomly assigned patients, the mean reduction in pain severity was greater in the EA group than in the WLC group at Week 8 (-2.2 versus -0.2, p=0.0004) and at Week 12 (-2.4 versus -0.2, p<0.0001). Pain-related interference measured by the BPI also improved in the EA group compared to the WLC group at both Week 8 (-2.0 versus 0.2, p=0.0006) and Week 12 (-2.1 versus -0.1, p=0.0034). SA produced a magnitude of change in pain severity and pain-related interference similar to that of EA at Week 8 (-2.3 and -1.5, respectively) and Week 12 (-1.7 and -1.3, respectively). Participants in both the EA and SA groups reported few minor adverse events. INTERPRETATION: Compared to usual care, EA produced clinically important and durable improvement in arthralgia related to AIs in breast cancer patients, and SA had a similar effect. Both EA and SA were safe.
Subject(s)
Aromatase Inhibitors/adverse effects , Arthralgia/therapy , Breast Neoplasms/drug therapy , Electroacupuncture/methods , Acupuncture Points , Adult , Aged , Arthralgia/chemically induced , Double-Blind Method , Female , Humans , Middle Aged , Pain Measurement/methods , Time Factors , Treatment Outcome , Waiting Lists
ABSTRACT
PURPOSE: To determine the incidence of dose-limiting (DL) chemotherapy-induced peripheral neuropathy (CIPN) events in clinical practice. PATIENTS AND METHODS: This retrospective cohort study included 488 women who received docetaxel or paclitaxel. The primary outcome was a DL event (dose delay, dose reduction, or treatment discontinuation) attributed to CIPN (DL CIPN). The paired t test was used to test the difference between received and planned cumulative dose by dose-reduction and treatment-discontinuation status. RESULTS: A total of 150 unique DL events occurred in 120 women (24.6%). More than one third (37.3%; n=56) of the events were attributed to CIPN. The 56 DL CIPN events occurred in 50 women (10.2%). DL CIPN incidence differed significantly by agent (docetaxel, 2.4%; n=5 of 209; paclitaxel, 16.1%; n=45 of 279; P<.001). DL CIPN occurred in 24.5% and 14.4% of women who received paclitaxel 80 mg/m2 weekly for 12 cycles and 175 mg/m2 every 2 weeks for four cycles, respectively (adjusted odds ratio, 2.11; 95% CI, 0.97 to 4.60; P=.06). The cumulative dose actually received was significantly lower than the planned cumulative dose among women who had a dose reduction or treatment discontinuation attributed to CIPN (9.4% less; P<.001 and 28.4% less; P<.001, respectively). CONCLUSION: Oncologists limited the dosing of chemotherapy because of CIPN in a significant proportion of paclitaxel recipients, most frequently in those who received a weekly regimen. Patients whose dose was reduced or discontinued received significantly less cumulative chemotherapy than planned. The implications of these DL CIPN events for treatment outcomes must be investigated.
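The paired t test above compares, within each woman, the cumulative dose received against the cumulative dose planned. A minimal sketch of that calculation on hypothetical dose data (the numbers are invented for illustration; they are not the study's):

```python
import math
from statistics import mean, stdev

def paired_t(planned, received):
    """Paired t test: t statistic and degrees of freedom for the
    mean within-pair difference (planned minus received)."""
    diffs = [p - r for p, r in zip(planned, received)]
    n = len(diffs)
    se = stdev(diffs) / math.sqrt(n)  # standard error of the mean difference
    return mean(diffs) / se, n - 1

# Hypothetical planned vs received cumulative doses (mg/m2) for six
# women whose treatment was dose-reduced for neuropathy.
planned = [960, 960, 960, 700, 700, 700]
received = [880, 820, 900, 640, 610, 650]
t, df = paired_t(planned, received)
print(f"t = {t:.2f} on {df} df")
```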
Subject(s)
Antineoplastic Agents/adverse effects , Breast Neoplasms/drug therapy , Paclitaxel/adverse effects , Peripheral Nervous System Diseases/chemically induced , Taxoids/adverse effects , Adult , Antineoplastic Agents/administration & dosage , Breast Neoplasms/epidemiology , Docetaxel , Dose-Response Relationship, Drug , Female , Humans , Middle Aged , Paclitaxel/administration & dosage , Pennsylvania/epidemiology , Retrospective Studies , Taxoids/administration & dosage
ABSTRACT
UNLABELLED: The effectiveness of amitriptyline, carbamazepine, gabapentin, and tramadol for the treatment of neuropathic pain has been demonstrated, but it is unknown which one is the most cost-effective. We designed a cost-utility analysis of a hypothetical cohort with neuropathic pain of postherpetic or diabetic origin. The perspective of the economic evaluation was that of a third-party payor. For effectiveness and safety estimates, we performed a systematic review of the literature. For direct cost estimates, we used average wholesale prices and the American Medicare and Clinical Laboratory Fee Schedules. For utilities of health states, we used the Health Utilities Index. We modeled 1 month of therapy. For comparisons among treatments, we estimated the incremental cost per utility gained. To allow for uncertainty from variations in drug effectiveness, safety, and amount of medication needed, we conducted a probabilistic Monte Carlo simulation. Amitriptyline was the cheapest strategy, followed by carbamazepine, and both were equally beneficial. Gabapentin was the most expensive as well as the least beneficial. A multivariable probabilistic simulation produced results similar to the base-case scenario. In summary, amitriptyline and carbamazepine are more cost-effective than tramadol and gabapentin and should be considered as first-line treatment for neuropathic pain in patients free of renal or cardiovascular disease. PERSPECTIVE: Prescription practices should be based on the best available evidence, which includes evaluation of a medication's cost-effectiveness. This means choosing not the cheapest or the most expensive medication, but the most cost-effective one: the medication whose benefits are worth the harms and costs. We report a cost-effectiveness evaluation of treatments for neuropathic pain.
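The probabilistic Monte Carlo simulation draws each drug's cost and effectiveness from distributions rather than using point estimates, so the comparison reflects parameter uncertainty. A minimal sketch of the idea, using invented one-month costs and utilities (none of these inputs come from the study):

```python
import random

# Hypothetical one-month inputs as (mean, sd): cost in USD, utility gain.
# All numbers are invented for illustration.
AMI = {"cost": (10, 2), "util": (0.040, 0.010)}   # amitriptyline
GBP = {"cost": (90, 15), "util": (0.035, 0.010)}  # gabapentin

def prob_dominates(a, b, trials=20_000, seed=1):
    """Fraction of Monte Carlo draws in which drug `a` both costs less
    than and is at least as effective as drug `b`."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        ca, ua = rng.gauss(*a["cost"]), rng.gauss(*a["util"])
        cb, ub = rng.gauss(*b["cost"]), rng.gauss(*b["util"])
        if ca < cb and ua >= ub:
            wins += 1
    return wins / trials

print(f"P(amitriptyline dominates gabapentin) = {prob_dominates(AMI, GBP):.2f}")
```

With these invented inputs the cheaper drug also tends to be at least as effective in most draws, which is the kind of dominance the base-case and probabilistic analyses check for.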
Subject(s)
Amines/economics , Amitriptyline/economics , Analgesics/economics , Carbamazepine/economics , Cyclohexanecarboxylic Acids/economics , Neuralgia/drug therapy , Tramadol/economics , gamma-Aminobutyric Acid/economics , Administration, Oral , Amines/administration & dosage , Amines/adverse effects , Amitriptyline/administration & dosage , Amitriptyline/adverse effects , Analgesics/administration & dosage , Analgesics/adverse effects , Carbamazepine/administration & dosage , Carbamazepine/adverse effects , Cohort Studies , Cost-Benefit Analysis , Cyclohexanecarboxylic Acids/administration & dosage , Cyclohexanecarboxylic Acids/adverse effects , Decision Trees , Drug Costs , Gabapentin , Humans , Tramadol/administration & dosage , Tramadol/adverse effects , Treatment Outcome , gamma-Aminobutyric Acid/administration & dosage , gamma-Aminobutyric Acid/adverse effects
ABSTRACT
OBJECTIVE: Little is known about the factors that increase the risk of opioid side effects. Our objective was to evaluate the effect of the type of opioid, age, gender, and race on the incidence of side effects from short-term opioid use. METHODS: A secondary analysis of a retrospective cohort study in 35 community-based and tertiary hospitals was done. There were 8855 black or white subjects aged 16 years and older. Patients received meperidine (INN, pethidine), morphine, or fentanyl as part of their treatment. The presence of nausea and vomiting and of respiratory depression was assessed. RESULTS: Of the patients, 26% had nausea and vomiting and 1.5% had respiratory depression after opioid administration. After adjustment for opioid dose, route of administration, age, gender, and race, meperidine produced less nausea and vomiting (odds ratio [OR] = 0.7; 95% confidence interval [CI], 0.5-0.8) and less respiratory depression (OR = 0.6; 95% CI, 0.2-0.9) than morphine. The risk of respiratory depression increased with age. Compared with patients aged between 16 and 45 years, those aged between 61 and 70 years had 2.8 times the risk of respiratory depression (95% CI, 1.2-6.6); those aged between 71 and 80 years had 5.4 times the risk (95% CI, 2.4-11.8); and those aged older than 80 years had 8.7 times the risk (95% CI, 3.8-20.0). Men had less nausea and vomiting than women (OR = 0.5; 95% CI, 0.4-0.6). White subjects had more nausea and vomiting than black subjects (OR = 1.4; 95% CI, 1.1-1.7). CONCLUSIONS: Meperidine produced fewer side effects than morphine during short-term use. The risk of respiratory depression increases substantially after 60 years of age. Women have nausea and vomiting more often than men. The effect of race deserves further investigation.
Subject(s)
Aging/physiology , Analgesics, Opioid/adverse effects , Adolescent , Adult , Aged , Aged, 80 and over , Cohort Studies , Female , Fentanyl/adverse effects , Humans , Ketorolac Tromethamine/adverse effects , Male , Meperidine/adverse effects , Middle Aged , Morphine/adverse effects , Nausea/chemically induced , Nausea/epidemiology , Pennsylvania/epidemiology , Product Surveillance, Postmarketing , Racial Groups , Respiratory Insufficiency/chemically induced , Retrospective Studies , Sex Characteristics , Vomiting/chemically induced , Vomiting/epidemiology
ABSTRACT
The aim of this study was to use Monte Carlo simulations to compare logistic regression with propensity scores in terms of bias, precision, empirical coverage probability, empirical power, and robustness when the number of events is low relative to the number of confounders. The authors simulated a cohort study and performed 252,480 trials. In the logistic regression models, the bias decreased as the number of events per confounder increased. In the propensity score models, the bias decreased as the strength of the association of the exposure with the outcome increased. Propensity scores produced estimates that were less biased, more robust, and more precise than the logistic regression estimates when there were seven or fewer events per confounder. The empirical coverage probability of logistic regression increased as the number of events per confounder increased. The empirical coverage probability of the propensity score decreased when there were eight or more events per confounder. Overall, the propensity score exhibited more empirical power than logistic regression. Propensity scores are a good alternative to control for imbalances when there are seven or fewer events per confounder; however, empirical power could range from 35% to 60%. Logistic regression is the technique of choice when there are at least eight events per confounder.
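A propensity score is the modeled probability of exposure given the confounders; stratifying on it compares exposed and unexposed subjects with similar confounder profiles. A minimal sketch of the approach on simulated data, with a hand-rolled single-covariate logistic regression (all numbers are illustrative, not from the simulations above):

```python
import math
import random

def fit_logistic(xs, ys, lr=0.1, epochs=500):
    """Single-covariate logistic regression fitted by batch
    gradient descent; returns (intercept, slope)."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for xv, yv in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xv)))
            g0 += yv - p
            g1 += (yv - p) * xv
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Simulated cohort: confounder x raises both the chance of exposure and
# the chance of the outcome; the outcome does not depend on exposure
# itself, so balanced comparisons should show little difference.
rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(2000)]
exposed = [int(rng.random() < 1 / (1 + math.exp(-xi))) for xi in x]
outcome = [int(rng.random() < 1 / (1 + math.exp(-(xi - 1)))) for xi in x]

# Propensity score: modeled P(exposed | x); stratify into quintiles and
# compare outcome rates within strata, where x is roughly balanced.
b0, b1 = fit_logistic(x, exposed)
ps = [1 / (1 + math.exp(-(b0 + b1 * xi))) for xi in x]
order = sorted(range(len(ps)), key=lambda i: ps[i])
for q in range(5):
    stratum = order[q * 400:(q + 1) * 400]
    treated = [i for i in stratum if exposed[i]]
    control = [i for i in stratum if not exposed[i]]
    if treated and control:
        rate_t = sum(outcome[i] for i in treated) / len(treated)
        rate_c = sum(outcome[i] for i in control) / len(control)
        print(f"quintile {q + 1}: exposed {rate_t:.2f} vs unexposed {rate_c:.2f}")
```

Because exposure and outcome share only the confounder, the within-stratum rates should be similar, in contrast to the crude (unstratified) comparison.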