ABSTRACT
BACKGROUND CONTEXT: Postoperative infection after spinal deformity correction in pediatric patients is associated with significant costs. Identifying risk factors associated with postoperative infection would help surgeons identify high-risk patients who may require interventions to minimize infection risk. PURPOSE: To investigate risk factors associated with 30-day postoperative infection in pediatric patients who have received posterior arthrodesis for spinal deformity correction. STUDY DESIGN/SETTING: Retrospective review of prospectively collected data. PATIENT SAMPLE: The National Surgical Quality Improvement Program Pediatric database for years 2016-2021 was used for this study. Patients were included if they received posterior arthrodesis for scoliosis or kyphosis correction (CPT 22800, 22802, 22804). Anterior-only approaches were excluded. OUTCOME MEASURES: The outcome of interest was 30-day postoperative infection. METHODS: Patient demographics and outcomes were analyzed using descriptive statistics. Multivariable logistic regression analysis using the likelihood ratio backward selection method was used to identify significant risk factors for 30-day infection and to create the Pediatric Scoliosis Infection Risk Score (PSIR Score). ROC curve analysis, predicted probabilities, and the Hosmer-Lemeshow goodness-of-fit test were used to assess the scoring system on a validation cohort. RESULTS: A total of 31,742 patients were included in the study. The mean age was 13.8 years and 68.7% were female. The 30-day infection rate was 2.2%. The reoperation rate in patients who had a postoperative infection was 59.4%. Patients who had a postoperative infection had a higher likelihood of non-home discharge (χ2 = 124.8, p < 0.001). In our multivariable regression analysis, high BMI (OR = 1.01, p < 0.001), presence of an open wound (OR = 3.18, p < 0.001), presence of an ostomy (OR = 1.51, p < 0.001), neuromuscular etiology (OR = 1.56, p = 0.009), previous operation (OR = 1.74, p < 0.001), increasing ASA class (OR = 1.43, p < 0.001), increasing operation time in hours (OR = 1.11, p < 0.001), and use of only minimally invasive techniques (OR = 4.26, p < 0.001) were associated with increased risk of 30-day postoperative infection. Idiopathic etiology (OR = 0.53, p < 0.001) and intraoperative topical antibiotic use (OR = 0.71, p = 0.003) were associated with reduced risk of 30-day postoperative infection. The area under the curve was 0.780 and 0.740 for the derivation cohort and validation cohort, respectively. CONCLUSIONS: To our knowledge, this is the largest study of risk factors for infection in pediatric spinal deformity surgery. We found 5 patient factors (BMI, ASA class, ostomy, etiology, and previous surgery) and 3 surgeon-controlled factors (surgical time, antibiotics, MIS) associated with risk. The Pediatric Scoliosis Infection Risk (PSIR) Score can be applied for risk stratification and to investigate the implementation of novel protocols to reduce infection rates in high-risk patients.
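The analytic workflow named above (multivariable logistic regression, ROC analysis on a held-out cohort, and a Hosmer-Lemeshow goodness-of-fit test) can be sketched in Python as follows. This is an illustrative outline only: the predictor names, the synthetic data, and the `hosmer_lemeshow` helper are assumptions for demonstration, not the NSQIP-Pediatric fields or the actual PSIR model.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the derivation/validation data (assumption, not NSQIP-P).
n = 5000
X = pd.DataFrame({
    "bmi": rng.normal(22, 6, n),
    "asa_class": rng.integers(1, 5, n),
    "op_hours": rng.normal(5, 1.5, n),
    "neuromuscular": rng.integers(0, 2, n),
    "prior_surgery": rng.integers(0, 2, n),
})
logit = (-6 + 0.02 * X["bmi"] + 0.35 * X["asa_class"] + 0.10 * X["op_hours"]
         + 0.45 * X["neuromuscular"] + 0.55 * X["prior_surgery"])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
p_val = model.predict_proba(X_val)[:, 1]
print("Validation AUC:", roc_auc_score(y_val, p_val))

def hosmer_lemeshow(y_true, p_hat, groups=10):
    """Hosmer-Lemeshow goodness-of-fit test over deciles of predicted risk."""
    order = np.argsort(p_hat)
    bins = np.array_split(order, groups)
    hl = 0.0
    for idx in bins:
        obs = y_true[idx].sum()          # observed events in this risk decile
        exp = p_hat[idx].sum()           # expected events from the model
        n_g = len(idx)
        hl += (obs - exp) ** 2 / (exp * (1 - exp / n_g) + 1e-12)
    return hl, chi2.sf(hl, groups - 2)

stat, p = hosmer_lemeshow(np.asarray(y_val), p_val)
print(f"Hosmer-Lemeshow chi2 = {stat:.2f}, p = {p:.3f}")
```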
ABSTRACT
OBJECTIVE: The Calgary Postoperative Pain after Spine Surgery (CAPPS) score was developed to identify patients at risk of experiencing poorly controlled pain after spine surgery. The goal of this study was to independently validate the CAPPS score on a prospectively collected patient sample. METHODS: Poor postoperative pain control was defined as a mean numeric rating scale (NRS) score for pain >4 at rest in the first 24 hours after surgery. Baseline characteristics in this study (validation cohort) were compared to those of the development cohort used to create the CAPPS score. Predictive performance of the CAPPS score was assessed by the area under the curve (AUC) and percentage misclassification for discrimination. A graphical comparison of predicted probability versus observed incidence of poorly controlled pain was performed for calibration. RESULTS: Fifty-two percent of 201 patients experienced poorly controlled pain. The validation cohort exhibited lower depression scores and a higher proportion using daily opioid medications compared to the development cohort. The AUC was 0.74 [95%CI = 0.68-0.81] in the validation cohort compared to 0.73 [95%CI = 0.69-0.76] in the development cohort for the eight-tier CAPPS score. When stratified between the low- vs. extreme-risk and low- vs. high-risk groups, the percentage misclassification was 21.2% and 30.7% in the validation cohort, compared to 29.9% and 38.0% in the development cohort, respectively. The predicted probability closely mirrored the observed incidence of poor pain control across all scores. CONCLUSIONS: The CAPPS score, based on seven easily obtained and reliable prognostic variables, was validated using a prospectively collected, independent sample of patients.
Subject(s)
Postoperative Pain, Humans, Postoperative Pain/diagnosis, Postoperative Pain/epidemiology, Postoperative Pain/etiology, Prognosis
ABSTRACT
OBJECTIVES: To examine adverse events and associated factors and outcomes during transition from ICU to hospital ward (after ICU discharge). DESIGN: Multicenter cohort study. SETTING: Ten adult medical-surgical Canadian ICUs. PATIENTS: Patients were those admitted to one of the 10 ICUs from July 2014 to January 2016. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Two ICU physicians independently reviewed progress and consultation notes documented in the medical record within 7 days of the patient's ICU discharge date to identify and classify adverse events. The adverse event data were linked to patient characteristics and ICU and ward physician surveys collected during the larger prospective cohort study. Analyses were conducted using multivariable logistic regression. Of the 451 patients included in the study, 84 (19%) experienced an adverse event, the majority (62%) within 3 days of transfer from ICU to hospital ward. Most adverse events resulted only in symptoms (77%), and 36% were judged to be preventable. Patients with adverse events were more likely to be readmitted to the ICU (odds ratio, 5.5; 95% CI, 2.4-13.0), have a longer hospital stay (mean difference, 16.1 d; 95% CI, 8.4-23.7), or die in hospital (odds ratio, 4.6; 95% CI, 1.8-11.8) than those without an adverse event. ICU and ward physician predictions at the time of ICU discharge had low sensitivity and specificity for predicting adverse events, ICU readmissions, and hospital death. CONCLUSIONS: Adverse events are common after ICU discharge to the hospital ward, are associated with ICU readmission, increased hospital length of stay, and death, and are not predicted by ICU or ward physicians.
Subject(s)
Medical Errors/statistics & numerical data, Patient Transfer, Adult, Canada/epidemiology, Continuity of Patient Care, Female, Hospital Mortality, Humans, Intensive Care Units/statistics & numerical data, Male, Patient Discharge/statistics & numerical data, Patient Readmission/statistics & numerical data, Patient Transfer/statistics & numerical data, Retrospective Studies, Risk Factors
ABSTRACT
Myomatous erythrocytosis syndrome (MES) is a gynaecological condition marked by isolated erythrocytosis and a fibroid uterus. This report presents a case of MES and reviews common clinical presentations, hematological trends, and patient outcomes. This study was a combined case report and review of published cases of MES. Cases were identified using Medline and EMBASE databases. Binomial statistics were used to compare clinical characteristics among patients with MES. Kruskal-Wallis one-way analysis of variance was used to compare hematological values across time points (Canadian Task Force Classification III). A total of 57 cases of MES were reviewed. The mean age at presentation was 48.7 years. Commonly reported signs or symptoms at presentation included abdominopelvic distension or mass (93%), skin discolouration (33%), and menstrual irregularities (25%). There was no difference in parity (P = 0.42), menopausal status (P = 0.87), or hydronephrosis on imaging (P = 0.48) among patients. Preoperative phlebotomy to reduce the risk of thromboembolic complications was performed in half of all cases. On average, a 51% reduction in serum erythropoietin levels was observed following surgical resection (P = 0.004). In conclusion, patients with MES present with signs and symptoms attributed to either an abdominopelvic mass or erythrocytosis. Preoperative phlebotomy to decrease the severity of erythrocytosis has been used to mitigate the risk of thrombotic complications. Surgical resection of the offending leiomyoma is a valid approach for the treatment of MES.
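As a minimal illustration of the Kruskal-Wallis comparison of hematological values across time points mentioned above, the SciPy sketch below uses invented hemoglobin values; they are placeholders for demonstration, not data from the reviewed cases.

```python
from scipy.stats import kruskal

# Invented hemoglobin values (g/L) at three time points (placeholders only).
preop     = [182, 175, 190, 168, 177]
postop    = [150, 142, 155, 149, 139]
follow_up = [138, 141, 136, 144, 133]

stat, p = kruskal(preop, postop, follow_up)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.4f}")
```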
Subject(s)
Leiomyoma/diagnosis, Polycythemia/diagnosis, Uterine Neoplasms/diagnosis, Adult, Differential Diagnosis, Female, Humans, Hysterectomy, Leiomyoma/diagnostic imaging, Leiomyoma/pathology, Parity, Polycythemia/blood, Polycythemia/diagnostic imaging, Syndrome, Uterine Neoplasms/diagnostic imaging, Uterine Neoplasms/pathology
ABSTRACT
BACKGROUND: The expansion of age-related degenerative spine pathologies has led to increased referrals to spine surgeons. However, the majority of patients referred for surgical consultation do not need surgery, leading to inefficient use of healthcare resources. This study aims to elucidate preoperative patient variables that are predictive of patients being offered spine surgery. METHODS: We conducted an observational cohort study on patients referred to our institution between May 2013 and January 2015. Patients completed a detailed preclinic questionnaire on items such as history of presenting illness, quality-of-life questionnaires, and past medical history. The primary end point was whether surgery was offered. A multivariable logistic regression using the random forest method was used to determine the odds of being offered surgery based on preoperative patient variables. RESULTS: An analysis of 1194 patients found that preoperative patient variables that reduced the odds of surgery being offered include mild pain (odds ratio [OR] 0.37, p=0.008), normal walking distance (OR 0.51, p=0.007), and normal sitting tolerance (OR 0.58, p=0.01). Factors that increased the odds of surgery include radiculopathy (OR 2.0, p=0.001), patient's belief that they should have surgery (OR 1.9, p=0.003), walking distance <50 ft (OR 1.9, p=0.01), relief of symptoms when bending forward (OR 1.7, p=0.008) and sitting (OR 1.6, p=0.009), works more slowly (OR 1.6, p=0.01), aggravation of symptoms by Valsalva (OR 1.4, p=0.03), and pain affecting sitting/standing (OR 1.1, p=0.001). CONCLUSIONS: We identified 11 preoperative variables that were predictive of whether patients were offered surgery, which are important factors to consider when screening outpatient spine referrals.
Subject(s)
Elective Surgical Procedures/methods, Patient Selection, Preoperative Care/methods, Spinal Diseases/surgery, Adult, Aged, Cohort Studies, Female, Humans, Logistic Models, Male, Middle Aged, Pain/diagnosis, Quality of Life, Surveys and Questionnaires
ABSTRACT
OBJECTIVE: As the cost of health care continues to increase, there is a growing emphasis on evaluating the relative economic value of treatment options to guide resource allocation. The objective of this systematic review was to evaluate the current evidence regarding the cost-effectiveness of cranial neurosurgery procedures. METHODS: The authors performed a systematic review of the literature using PubMed, EMBASE, and the Cochrane Library, focusing on themes of economic evaluation and cranial neurosurgery following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Included studies were publications of cost-effectiveness analysis or cost-utility analysis between 1995 and 2017 in which health utility outcomes in life years (LYs), quality-adjusted life years (QALYs), or disability-adjusted life years (DALYs) were used. Three independent reviewers conducted the study appraisal, data abstraction, and quality assessment, with differences resolved by consensus discussion. RESULTS: In total, 3485 citations were reviewed, with 53 studies meeting the inclusion criteria. Of those, 34 studies were published in the last 5 years. The most common subspecialty focus was cerebrovascular (32%), followed by neuro-oncology (26%) and functional neurosurgery (24%). Twenty-eight (53%) studies, using a willingness-to-pay threshold of US$50,000 per QALY or LY, found a specific surgical treatment to be cost-effective. In addition, there were 11 (21%) studies that found a specific surgical option to be economically dominant (both cost saving and having superior outcome), including endovascular thrombectomy for acute ischemic stroke, epilepsy surgery for drug-refractory epilepsy, and endoscopic pituitary tumor resection. CONCLUSIONS: There is an increasing number of cost-effectiveness studies in cranial neurosurgery, especially within the last 5 years. Although there are numerous procedures, such as endovascular thrombectomy for acute ischemic stroke, that have been conclusively proven to be cost-effective, there remain promising interventions in current practice that have yet to meet cost-effectiveness thresholds.
Subject(s)
Clinical Trials as Topic/economics, Cost-Benefit Analysis, Medical Economics, Neurosurgical Procedures/economics, Cost-Benefit Analysis/trends, Craniotomy/economics, Craniotomy/trends, Medical Economics/trends, Humans, Neurosurgical Procedures/trends
ABSTRACT
PURPOSE: Studies in the adult literature suggest that preoperative laboratory investigations and cross-match are performed unnecessarily and rarely lead to changes in clinical management. The purposes of this study were the following: (1) to explore whether preoperative laboratory investigations in neurosurgical children alter clinical management and (2) to determine the utilization of cross-matched blood perioperatively in elective pediatric neurosurgical cases. METHODS: We reviewed pediatric patient charts for elective neurosurgery procedures (June 2010-June 2014) at our institution. Variables collected include preoperative complete blood count (CBC), electrolytes, coagulation, group and screen, and cross-match. A goal of the review was to identify instances of altered clinical management as a consequence of preoperative blood work. The number of cross-matched blood units transfused perioperatively was also determined. RESULTS: Four hundred seventy-seven electively scheduled pediatric neurosurgical patients were reviewed. Preoperative CBC was done on 294 patients, and 39.8% had at least one laboratory abnormality. Electrolytes (84 patients) and coagulation panels (241 patients) were abnormal in 23.8% and 24.5%, respectively. The preoperative investigations led to a change in clinical management in three patients, two of which were associated with significant past medical history. A group and screen test was performed in 62.5% of patients and 57.9% had their blood cross-matched. Perioperative blood transfusions were received by 3.6% of patients (17/477); 71% of these patients were under 3 years of age. The cross-match-to-transfusion ratio was 16. CONCLUSION: This study suggests that the results of preoperative laboratory exams have limited value, apart from cases with oncology and complex preexisting conditions. Additionally, cross-matching might be excessively conducted in elective pediatric neurosurgical cases.
Subject(s)
Blood Grouping and Crossmatching/methods, Blood Transfusion/methods, Preoperative Care, Adolescent, Brain Diseases/surgery, Child, Preschool Child, Elective Surgical Procedures, Female, Humans, Infant, Newborn Infant, Infratentorial Neoplasms, Male, Pediatrics, Retrospective Studies, Young Adult
ABSTRACT
This study aimed to test the effects of a 1-h classroom-based workshop, led by medical students, on mental illness stigma amongst secondary school students. Students (aged 14-17) from three public secondary schools in British Columbia participated in the workshop. A questionnaire measuring stigma (including stereotype endorsement and desire for social distance) was administered immediately before (T1), immediately after (T2), and 1 month after the workshop (T3). A total of 279 students met the study inclusion criteria. Total scores on the stigma scale decreased by 23% between T1 and T2 (p < 0.01). This was sustained 1 month post-workshop, with a 21% stigma reduction compared to pre-intervention (p < 0.01). This effect was primarily due to improvements in scores that measured desire for social distance. There were no significant changes in scores that measured stereotype endorsement. Adolescents' stigmatizing attitudes can be effectively reduced through an easily implementable, cost-effective, 1-h classroom-based workshop led by medical students.
Subject(s)
Health Knowledge, Attitudes, and Practice, Mental Disorders/psychology, Social Stigma, Stereotyping, Students/psychology, Adolescent, Counseling, Female, Health Promotion/methods, Humans, Male, Mental Health, Program Evaluation, Psychological Distance, Schools, Surveys and Questionnaires
ABSTRACT
BACKGROUND CONTEXT: A significant proportion of patients experience poorly controlled surgical pain and fail to achieve satisfactory clinical improvement after spine surgery. However, a direct association between these variables has not been previously demonstrated. PURPOSE: To investigate the association between poor postoperative pain control and patient-reported outcomes after spine surgery. STUDY DESIGN: Ambispective cohort study. PATIENT SAMPLE: Consecutive adult patients (≥18 years old) undergoing inpatient elective cervical or thoracolumbar spine surgery. OUTCOME MEASURE: Poor surgical outcome was defined as failure to achieve a minimal clinically important difference (MCID) of 30% improvement on the Oswestry Disability Index or Neck Disability Index at follow-up (3 months, 1 year, and 2 years). METHODS: Poor pain control was defined as a mean numeric rating scale score of >4 during the first 24 hours after surgery. Multivariable mixed-effects regression was used to investigate the relationship between poor pain control and changes in surgical outcomes while adjusting for known confounders. Secondarily, the Calgary Postoperative Pain After Spine Surgery (CAPPS) Score was investigated for its ability to predict poor surgical outcome. RESULTS: Of 1294 patients, 47.8%, 37.3%, and 39.8% failed to achieve the MCID at 3 months, 1 year, and 2 years, respectively. The incidence of poor pain control was 56.9%. Multivariable analyses showed poor pain control after spine surgery was independently associated with failure to achieve the MCID (OR 2.35 [95% CI=1.59-3.46], p<.001) after adjusting for age (p=.18), female sex (p=.57), any nicotine products (p=.041), ASA physical status >2 (p<.001), ≥3 motion segment surgery (p=.008), revision surgery (p=.001), follow-up time (p<.001), and thoracolumbar surgery compared to cervical surgery (p=.004). The CAPPS score was also found to be independently predictive of poor surgical outcome. CONCLUSION: Poor pain control in the first 24 hours after elective spine surgery was an independent risk factor for poor surgical outcome. Perioperative treatment strategies to improve postoperative pain control may lead to improved patient-reported surgical outcomes.
Subject(s)
Elective Surgical Procedures, Postoperative Pain, Patient Reported Outcome Measures, Humans, Female, Male, Middle Aged, Postoperative Pain/etiology, Elective Surgical Procedures/adverse effects, Adult, Aged, Cohort Studies, Pain Measurement, Spine/surgery, Treatment Outcome
ABSTRACT
BACKGROUND: Adjacent segment disease (ASD) is a known sequela of thoracolumbar instrumented fusions. Various surgical options are available to address ASD in patients with intractable symptoms who have failed conservative measures. However, the optimal treatment strategy for symptomatic ASD has not been established. We examined several clinical outcomes utilizing different surgical interventions for symptomatic ASD. METHODS: A retrospective review was performed for a consecutive series of patients undergoing revision surgery for thoracolumbar ASD between October 2011 and February 2022. Patients were treated with endoscopic decompression (N = 17), microdiscectomy (N = 9), lateral lumbar interbody fusion (LLIF; N = 26), or open laminectomy and fusion (LF; N = 55). The primary outcomes compared between groups were re-operation rates and numeric pain scores for leg and back at 2 weeks, 10 weeks, 6 months, and 12 months postoperatively. Secondary outcomes included time to re-operation, estimated blood loss, and length of stay. RESULTS: Of the 257 patients who underwent revision surgery for symptomatic ASD, 107 patients met inclusion criteria with a minimum of 1-year follow-up. The mean age of all patients was 67.90 ± 10.51 years. There was no statistically significant difference between groups in age, gender, preoperative American Society of Anesthesiologists scoring, number of previously fused levels, or preoperative numeric leg and back pain scores. The re-operation rates were significantly lower in the LF (12.7%) and LLIF cohorts (19.2%) compared with microdiscectomy (33%) and endoscopic decompression (52.9%; P = 0.005). Only the LF and LLIF cohorts experienced significantly decreased pain scores at all 4 follow-up visits (2 weeks, 10 weeks, 6 months, and 12 months; P < 0.001 and P < 0.05, respectively) relative to preoperative scores. CONCLUSION: Symptomatic ASD often requires treatment with revision surgery. Fusion surgeries (either stand-alone lateral interbody or posterolateral with instrumentation) were most effective and durable with respect to alleviating pain and avoiding additional revisions within the first 12 months following revision surgery. CLINICAL RELEVANCE: This study emphasizes the importance of risk-stratifying patients to identify the least invasive approach that treats their symptoms and reduces the risk of future surgeries.
ABSTRACT
Degenerative Cervical Myelopathy (DCM) is the functional derangement of the spinal cord resulting from vertebral column spondylotic degeneration. Typical neurological symptoms of DCM include gait imbalance, hand/arm numbness, and upper extremity dexterity loss. Greater spinal cord compression is believed to lead to a higher rate of neurological deterioration, although clinical experience suggests a more complex mechanism involving spinal canal diameter (SCD). In this study, we utilized machine learning clustering to understand the relationship between SCD and different patterns of cord compression (i.e., compression at one disc level, two disc levels, etc.) to identify patient groups at risk of neurological deterioration. A total of 124 MRI scans from 51 non-operative DCM patients were assessed through manual scoring of cord compression and SCD measurements. Dimensionality reduction techniques and k-means clustering established patient groups that were then defined by their unique risk criteria. We found that the compression pattern is unimportant at SCD extremes (≤14.5 mm or >15.75 mm). Otherwise, severe spinal cord compression at two disc levels increases deterioration likelihood. Notably, if SCD is normal and cord compression is not severe at multiple levels, deterioration likelihood is relatively reduced, even if the spinal cord is experiencing compression. We elucidated five patient groups with their associated risks of deterioration, according to both SCD range and cord compression pattern. Overall, SCD and focal cord compression alone do not reliably predict an increased risk of neurological deterioration. Instead, the specific combination of narrow SCD with multi-level focal cord compression increases the likelihood of neurological deterioration in mild DCM patients.
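A minimal sketch of the dimensionality-reduction-plus-k-means workflow described above is given below. The feature layout (per-level compression grades plus SCD), the synthetic values, and the choice of five clusters are illustrative assumptions, not the study's actual encoding or data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Rows = scans; columns = assumed compression grade (0-3) at five disc levels plus SCD (mm).
compression = rng.integers(0, 4, size=(124, 5)).astype(float)
scd_mm = rng.normal(15.0, 1.2, size=(124, 1))
X = np.hstack([compression, scd_mm])

pipeline = make_pipeline(
    StandardScaler(),        # put compression grades and SCD on a common scale
    PCA(n_components=2),     # dimensionality reduction before clustering
    KMeans(n_clusters=5, n_init=10, random_state=0),
)
labels = pipeline.fit_predict(X)

# Summarize each cluster by mean SCD and mean total compression burden.
for k in range(5):
    members = labels == k
    print(f"cluster {k}: n={members.sum()}, "
          f"mean SCD={X[members, -1].mean():.2f} mm, "
          f"mean total compression={X[members, :-1].sum(axis=1).mean():.2f}")
```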
Subject(s)
Cervical Vertebrae, Magnetic Resonance Imaging, Spinal Cord Compression, Humans, Spinal Cord Compression/diagnostic imaging, Spinal Cord Compression/etiology, Male, Female, Middle Aged, Aged, Cervical Vertebrae/diagnostic imaging, Cervical Cord/diagnostic imaging, Spondylosis/diagnostic imaging, Spondylosis/complications, Disease Progression, Machine Learning, Adult
ABSTRACT
BACKGROUND CONTEXT: Degenerative cervical myelopathy (DCM) is the most common form of atraumatic spinal cord injury globally. Degeneration of spinal discs, bony osteophyte growth, and ligament pathology results in physical compression of the spinal cord, contributing to damage of white matter tracts and grey matter cellular populations. This results in an insidious neurological and functional decline in patients which can lead to paralysis. Magnetic resonance imaging (MRI) confirms the diagnosis of DCM and is a prerequisite to surgical intervention, the only known treatment for this disorder. Unfortunately, there is a weak correlation between features of current commonly acquired MRI scans ("community MRI," cMRI) and the degree of disability experienced by a patient. PURPOSE: This study examines the predictive ability of current MRI sequences relative to "advanced MRI" (aMRI) metrics designed to detect evidence of spinal cord injury secondary to degenerative myelopathy. We hypothesize that the utilization of higher-fidelity aMRI scans will increase the effectiveness of machine learning models predicting DCM severity and may ultimately lead to a more efficient protocol for identifying patients in need of surgical intervention. STUDY DESIGN/SETTING: Single-institution analysis of an imaging registry of patients with DCM. PATIENT SAMPLE: A total of 296 patients in the cMRI group and 228 patients in the aMRI group. OUTCOME MEASURES: Physiologic measures: the accuracy of machine learning algorithms in detecting the severity of DCM, assessed clinically using the modified Japanese Orthopaedic Association (mJOA) scale. METHODS: Patients enrolled in the Canadian Spine Outcomes Research Network registry with DCM were screened, and 296 cervical spine MRIs acquired with the cMRI protocol were compared with 228 aMRI acquisitions. aMRI acquisitions consisted of diffusion tensor imaging, magnetization transfer, T2-weighted, and T2*-weighted images. The cMRI group consisted of only T2-weighted MRI scans. Various machine learning models were applied to both MRI groups to assess the accuracy of prediction of baseline disease severity assessed clinically using the mJOA scale for cervical myelopathy. RESULTS: Using random forest classifiers, disease severity was predicted with 41.8% accuracy from cMRI scans and 73.3% from aMRI scans. Across the different predictive model variations tested, the aMRI scans consistently produced higher prediction accuracies compared to the cMRI counterparts. CONCLUSIONS: aMRI metrics perform better in machine learning models at predicting disease severity of patients with DCM. Continued work is needed to refine these models and address DCM severity class imbalance concerns, ultimately improving model confidence for clinical implementation.
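The model comparison described above (random forest classifiers predicting mJOA-based severity from conventional versus advanced MRI features) could be prototyped as in the sketch below. The feature sets, labels, and sample sizes are synthetic placeholders, not the registry variables or the reported accuracies.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 228

# Placeholder aMRI-derived metrics (e.g., DTI fractional anisotropy, MT ratio, T2* signal).
X_amri = rng.normal(size=(n, 6))
# Placeholder cMRI-derived metrics (e.g., T2-based compression grades only).
X_cmri = X_amri[:, :2] + rng.normal(scale=0.5, size=(n, 2))

# Severity classes derived from the mJOA (e.g., mild/moderate/severe); synthetic labels here.
y = rng.integers(0, 3, size=n)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
for name, X in [("cMRI", X_cmri), ("aMRI", X_amri)]:
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name} mean cross-validated accuracy: {acc:.3f}")
```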
Subject(s)
Cervical Vertebrae, Magnetic Resonance Imaging, Humans, Middle Aged, Male, Cervical Vertebrae/diagnostic imaging, Cervical Vertebrae/surgery, Female, Aged, Severity of Illness Index, Spinal Cord Diseases/diagnostic imaging, Spinal Cord Diseases/surgery, Adult, Machine Learning
ABSTRACT
INTRODUCTION: Serial change in ventricular size is recognized as an imperfect indicator of ongoing hydrocephalus in children. Potentially, other radiographic features may be useful in determining the success of hydrocephalus interventions. In this study, optic nerve sheath diameter (ONSD), optic nerve tortuosity, and optic disk bulging were assessed as indicators of hydrocephalus control in children who underwent endoscopic third ventriculostomy (ETV) or posterior fossa tumor resection. METHODS: Sixteen children underwent ETV or tumor resection for treatment of hydrocephalus. T2-weighted axial magnetic resonance images of the orbit were obtained, and the ONSD was measured posterior to the optic globe, pre- and post-intervention. Evidence of optic disk bulging and optic nerve tortuosity was also assessed. Ventricular size was estimated using the frontal and occipital horn ratio (FOR). RESULTS: There was a significant reduction in the ONSD post-ETV (n = 9) and after tumor resection (n = 7). The average preoperative ONSD was 6.21 mm versus 5.71 mm postoperatively (p = 0.0017). There was also an 88% (p = 0.011) and 60% (p = 0.23) reduction in optic disk bulging and tortuosity, respectively. The FOR normalized in the tumor resection group but not the ETV group. After intervention, all patients showed improvement in signs and symptoms of hydrocephalus. CONCLUSION: In our study population, ONSD decreased in response to measures to reduce hydrocephalus. Optic disk bulging also appears to resolve. Serial reduction in ONSD and optic disk bulging may be indicators of improved hydrocephalus following pediatric neurosurgical interventions.
Subject(s)
Hydrocephalus/diagnosis, Hydrocephalus/surgery, Magnetic Resonance Imaging, Optic Nerve/pathology, Optic Nerve/surgery, Child, Preschool Child, Female, Humans, Infant, Magnetic Resonance Imaging/methods, Male, Prospective Studies, Third Ventricle/pathology, Third Ventricle/surgery, Treatment Outcome, Ventriculostomy/methods
ABSTRACT
Background: Approximately 30% to 64% of patients experience inadequate pain control following spine surgery. The Calgary postoperative pain after spine surgery (CAPPS) score was developed to identify this subset of patients. The impact of preoperative insomnia on postoperative pain control is unknown. This study aimed to investigate the relationship between preoperative insomnia and poor pain control after spine surgery, as well as improve the predictive accuracy of the CAPPS score. Methods: A prospective cohort study was conducted in patients undergoing elective spine surgery. Poor pain control was defined as a mean numeric rating scale pain score >4 at rest within the first 24 hours after surgery. Patients were evaluated using the CAPPS score, which included 7 prognostic factors. A multivariable logistic regression model was used to examine the association between the preoperative Insomnia Severity Index (ISI) and poor pain control, adjusting for the CAPPS score. The Modified CAPPS score was derived from this model. Results: Of 219 patients, 49.7% experienced poorly controlled pain. The prevalence of clinical insomnia (ISI ≥15) was 26.9%. Preoperative ISI was independently associated with poor pain control (odds ratio [OR] 1.09, [95%CI=1.03-1.16], p=.004), after adjusting for the CAPPS score (OR 1.61, [95%CI=1.38-1.89], p<.001). The model exhibited good discrimination (c-statistic 0.80, [95%CI=0.74-0.86]) and calibration (Hosmer-Lemeshow chi-square=8.95, p=.35). The Modified CAPPS score also demonstrated good discrimination (c-statistic 0.78, [95%CI=0.72-0.84]) and calibration (Hosmer-Lemeshow chi-square=2.92, p=.57). Low-, high-, and extreme-risk groups stratified by the Modified CAPPS score had 17.3%, 49.1%, and 80.7% predicted probability of experiencing inadequate pain control, compared to 32.0%, 64.0%, and 85.1% with the original CAPPS score. Conclusions: Preoperative insomnia is prevalent and is a modifiable risk factor for poor pain control following spine surgery. Early identification and management of preoperative insomnia may lead to improved postoperative pain outcomes. Future external validation is needed to confirm the accuracy of the Modified CAPPS score.
ABSTRACT
OBJECTIVE: Endoscopic third ventriculostomy and choroid plexus cauterization (ETV+CPC) is a novel procedure for infant hydrocephalus that was developed in sub-Saharan Africa to mitigate the risks associated with permanent implanted shunt hardware. This study summarizes the hydrocephalus literature surrounding the ETV+CPC intraoperative abandonment rate, perioperative mortality rate, cerebrospinal fluid infection rate, and failure rate. METHODS: This systematic review and meta-analysis followed a prespecified protocol and abided by Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A comprehensive search strategy using MEDLINE, EMBASE, PsycINFO, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, Scopus, and Web of Science was conducted from database inception to October 2019. Studies included controlled trials, cohort studies, and case-control studies of patients with hydrocephalus younger than 18 years of age treated with ETV+CPC. Pooled estimates were calculated using DerSimonian and Laird random-effects modeling, and the significance of subgroup analyses was tested using meta-regression. The quality of the pooled outcomes was assessed using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach. RESULTS: After screening and reviewing 12,321 citations, the authors found 16 articles that met the inclusion criteria. The pooled estimate for the ETV+CPC failure rate was 0.44 (95% CI 0.37-0.51). Subgroup analysis by geographic income level showed statistical significance (p < 0.01), with lower-middle-income countries having a lower failure rate (0.32, 95% CI 0.28-0.36) than high-income countries (0.53, 95% CI 0.47-0.60). No difference in failure rate was found by hydrocephalus etiology (p = 0.09) or definition of failure (p = 0.24). The pooled estimate for the perioperative mortality rate (n = 7 studies) was 0.001 (95% CI 0.00-0.004), the intraoperative abandonment rate (n = 5 studies) was 0.04 (95% CI 0.01-0.08), and the postoperative CSF infection rate (n = 5 studies) was 0.0004 (95% CI 0.00-0.003). All pooled outcomes were found to be low-quality evidence. CONCLUSIONS: This systematic review and meta-analysis provides the most comprehensive pooled estimate for the ETV+CPC failure rate to date and demonstrates, for the first time, a statistically significant difference in failure rate by geographic income level. It also provides the first reported pooled estimates for the risk of ETV+CPC perioperative mortality, intraoperative abandonment, and CSF infection. The low quality of this evidence highlights the need for further research to improve the understanding of these critical clinical outcomes and their relevant explanatory variables and thus to appreciate which patients may benefit most from an ETV+CPC. Systematic review registration no.: CRD42020160149 (https://www.crd.york.ac.uk/prospero/).
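A compact sketch of DerSimonian-Laird random-effects pooling of study-level proportions, as used for the failure-rate estimate above, is given below using a logit transformation. The per-study event counts are invented placeholders, not the 16 included studies.

```python
import numpy as np

# Placeholder per-study failure counts and sample sizes (illustrative only).
events = np.array([30, 45, 12, 60, 25])
totals = np.array([80, 120, 40, 150, 70])

# Logit-transformed proportions and their within-study variances.
p = events / totals
y = np.log(p / (1 - p))
v = 1 / events + 1 / (totals - events)
w = 1 / v

# DerSimonian-Laird between-study variance (tau^2) from Cochran's Q.
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(y) - 1)) / c)

# Random-effects pooled estimate, back-transformed to a proportion.
w_re = 1 / (v + tau2)
y_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
pooled = 1 / (1 + np.exp(-y_re))
ci = 1 / (1 + np.exp(-(y_re + np.array([-1.96, 1.96]) * se_re)))
print(f"Pooled failure proportion: {pooled:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```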
ABSTRACT
BACKGROUND: Enhanced Recovery After Surgery (ERAS) evidence-based protocols for perioperative care have led to improvements in outcomes in numerous surgical areas, through multimodal optimization of the patient pathway, reduction of complications, improved patient experience, and reduction in the length of stay. ERAS represents a relatively new paradigm in spine surgery. PURPOSE: This multidisciplinary consensus review summarizes the literature and proposes recommendations for the perioperative care of patients undergoing lumbar fusion surgery with an ERAS program. STUDY DESIGN: This is a review article. METHODS: Under the impetus of the ERAS® Society, a multidisciplinary guideline development group was constituted by bringing together international experts involved in the practice of ERAS and spine surgery. This group identified 22 ERAS items for lumbar fusion. A systematic search in the English language was performed in MEDLINE, Embase, and the Cochrane Central Register of Controlled Trials. Systematic reviews, randomized controlled trials, and cohort studies were included, and the evidence was graded according to the Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) system. Consensus recommendation was reached by the group after a critical appraisal of the literature. RESULTS: Two hundred fifty-six articles were included to develop the consensus statements for 22 ERAS items; one ERAS item (prehabilitation) was excluded from the final summary due to very poor quality and conflicting evidence in lumbar spinal fusion. From these remaining 21 ERAS items, 28 recommendations were included. All recommendations on ERAS protocol items are based on the best available evidence. These included nine preoperative, eleven intraoperative, and six postoperative recommendations. They span topics from preoperative patient education and nutritional evaluation to intraoperative anesthetic and surgical techniques and postoperative multimodal analgesic strategies. The level of evidence for the use of each recommendation is presented. CONCLUSION: Based on the best evidence available for each ERAS item within the multidisciplinary perioperative care pathways, the ERAS® Society presents this comprehensive consensus review for perioperative care in lumbar fusion.
Subject(s)
Enhanced Recovery After Surgery, Spinal Fusion, Consensus, Humans, Perioperative Care, Preoperative Care, Spinal Fusion/adverse effects
ABSTRACT
OBJECTIVE: Thirty percent to sixty-four percent of patients experience poorly controlled pain following spine surgery, leading to patient dissatisfaction and poor outcomes. Identification of at-risk patients before surgery could facilitate patient education and personalized clinical care pathways to improve postoperative pain management. Accordingly, the aim of this study was to develop and internally validate a prediction score for poorly controlled postoperative pain in patients undergoing elective spine surgery. METHODS: A retrospective cohort study was performed in adult patients (≥ 18 years old) consecutively enrolled in the Canadian Spine Outcomes and Research Network registry. All patients underwent elective cervical or thoracolumbar spine surgery and were admitted to the hospital. Poorly controlled postoperative pain was defined as a mean numeric rating scale score for pain at rest of > 4 during the first 24 hours after surgery. Univariable analysis followed by multivariable logistic regression on 25 candidate variables, selected through a systematic review and expert consensus, was used to develop a prediction model using a random 70% sample of the data. The model was transformed into an eight-tier risk-based score that was further simplified into the three-tier Calgary Postoperative Pain After Spine Surgery (CAPPS) score to maximize clinical utility. The CAPPS score was validated using the remaining 30% of the data. RESULTS: Overall, 57% of 1300 spine surgery patients experienced poorly controlled pain during the first 24 hours after surgery. Seven significant variables associated with poor pain control were incorporated into a prediction model: younger age, female sex, preoperative daily use of opioid medication, higher preoperative neck or back pain intensity, higher Patient Health Questionnaire-9 depression score, surgery involving ≥ 3 motion segments, and fusion surgery. Notably, minimally invasive surgery, body mass index, and revision surgery were not associated with poorly controlled pain. The model was discriminative (C-statistic 0.74, 95% CI 0.71-0.77) and calibrated (Hosmer-Lemeshow goodness-of-fit, p = 0.99) at predicting the outcome. Low-, high-, and extreme-risk groups stratified using the CAPPS score had 32%, 63%, and 85% predicted probability of experiencing poorly controlled pain, respectively, which was mirrored closely by the observed incidence of 37%, 62%, and 81% in the validation cohort. CONCLUSIONS: Inadequate pain control is common after spine surgery. The internally validated CAPPS score based on 7 easily acquired variables accurately predicted the probability of experiencing poorly controlled pain after spine surgery.
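One common way to turn a multivariable logistic model into a bedside score of the kind described above is to scale each coefficient relative to the smallest one, round to integer points, and stratify summed points into risk tiers. The coefficients, point values, and cut-offs in the sketch below are placeholders for illustration only; they are not the published CAPPS weights.

```python
# Hypothetical regression coefficients (log-odds) for the seven predictors (placeholders).
coefficients = {
    "age_under_65": 0.45,
    "female_sex": 0.40,
    "daily_opioid_use": 0.85,
    "high_baseline_pain": 0.70,
    "phq9_depression": 0.50,
    "three_or_more_segments": 0.35,
    "fusion_surgery": 0.30,
}

# Convert each coefficient to integer points relative to the smallest coefficient.
base = min(coefficients.values())
points = {name: int(round(value / base)) for name, value in coefficients.items()}

def risk_tier(patient):
    """Sum the points for the factors a patient has and map to a tier (cut-offs assumed)."""
    total = sum(points[name] for name, present in patient.items() if present)
    if total <= 3:
        return total, "low risk"
    elif total <= 7:
        return total, "high risk"
    return total, "extreme risk"

example = {"age_under_65": True, "female_sex": True, "daily_opioid_use": False,
           "high_baseline_pain": True, "phq9_depression": False,
           "three_or_more_segments": False, "fusion_surgery": True}
print(points)
print(risk_tier(example))
```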
ABSTRACT
OBJECTIVE: The shunt protocol developed by the Hydrocephalus Clinical Research Network (HCRN) was shown to significantly reduce shunt infections in children. However, its effectiveness had not been validated in a non-HCRN, small- to medium-volume pediatric neurosurgery center. The present study evaluated whether the 9-step Calgary Shunt Protocol, closely adapted from the HCRN shunt protocol, reduced shunt infections in children. METHODS: The Calgary Shunt Protocol was prospectively applied at Alberta Children's Hospital from May 23, 2013, to all children undergoing any shunt procedure. The control cohort consisted of children undergoing shunt surgery between January 1, 2009, and the implementation of the Calgary Shunt Protocol. The primary outcome was the strict HCRN definition of shunt infection. Univariate analyses of the protocol, individual elements within, and known confounders were performed using Student t-test for measured variables and chi-square tests for categorical variables. Multivariable logistic regression was performed using stepwise analysis. RESULTS: Two hundred sixty-eight shunt procedures were performed. The median age of patients was 14 months (IQR 3-61), and 148 (55.2%) were male. There was a significant absolute risk reduction of 10.0% (95% CI 3.9%-15.9%) in shunt infections (12.7% vs 2.7%, p = 0.004) after implementation of the Calgary Shunt Protocol. In univariate analyses, chlorhexidine was associated with fewer shunt infections than iodine-based skin preparation solution (4.1% vs 12.3%, p = 0.02). Waiting ≥ 20 minutes between receiving preoperative antibiotics and skin incision was also associated with a reduction in shunt infection (4.5% vs 14.2%, p = 0.007). In the multivariable analysis, only the overall protocol independently reduced shunt infections (OR 0.19 [95% CI 0.06-0.67], p = 0.009), while age, etiology, procedure type, ventricular catheter type, skin preparation solution, and time from preoperative antibiotics to skin incision were not significant. CONCLUSIONS: This study externally validates the published HCRN protocol for reducing shunt infection in an independent, non-HCRN, and small- to medium-volume pediatric neurosurgery setting. Implementation of the Calgary Shunt Protocol independently reduced shunt infection risk. Chlorhexidine skin preparation and waiting ≥ 20 minutes between administration of preoperative antibiotic and skin incision may have contributed to the protocol's quality improvement success.
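The headline comparison above (an absolute risk reduction with a 95% CI plus a chi-square test) can be computed from a 2x2 table as in the sketch below. The counts are invented placeholders chosen only to show the calculation, not the study's actual table.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Placeholder counts (infections, no infections) before and after the protocol.
table = np.array([[18, 124],    # pre-protocol cohort
                  [ 3, 123]])   # post-protocol cohort

chi2_stat, p_value, _, _ = chi2_contingency(table)

risk_pre, risk_post = table[:, 0] / table.sum(axis=1)
arr = risk_pre - risk_post

# Wald 95% CI for the risk difference.
n_pre, n_post = table.sum(axis=1)
se = np.sqrt(risk_pre * (1 - risk_pre) / n_pre + risk_post * (1 - risk_post) / n_post)
ci = arr + np.array([-1.96, 1.96]) * se

print(f"ARR = {arr:.3f} (95% CI {ci[0]:.3f} to {ci[1]:.3f}), chi-square p = {p_value:.3f}")
```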
ABSTRACT
OBJECTIVES: Inadequate postoperative pain control is common and is associated with poor clinical outcomes. This study aimed to identify preoperative predictors of poor postoperative pain control in adults undergoing inpatient surgery. DESIGN: Systematic review and meta-analysis. DATA SOURCES: MEDLINE, Embase, CINAHL, and PsycINFO were searched through October 2017. ELIGIBILITY CRITERIA: Studies in any language were included if they evaluated postoperative pain using a validated instrument in adults (≥18 years) and reported a measure of association between poor postoperative pain control (defined by study authors) and at least one preoperative predictor during the hospital stay. DATA EXTRACTION AND SYNTHESIS: Two reviewers screened articles, extracted data and assessed study quality. Measures of association for each preoperative predictor were pooled using random effects models. RESULTS: Thirty-three studies representing 53 362 patients were included in this review. Significant preoperative predictors of poor postoperative pain control included younger age (OR 1.18 [95% CI 1.05 to 1.32], number of studies, n=14), female sex (OR 1.29 [95% CI 1.17 to 1.43], n=20), smoking (OR 1.33 [95% CI 1.09 to 1.61], n=9), history of depressive symptoms (OR 1.71 [95% CI 1.32 to 2.22], n=8), history of anxiety symptoms (OR 1.22 [95% CI 1.09 to 1.36], n=10), sleep difficulties (OR 2.32 [95% CI 1.46 to 3.69], n=2), higher body mass index (OR 1.02 [95% CI 1.01 to 1.03], n=2), presence of preoperative pain (OR 1.21 [95% CI 1.10 to 1.32], n=13) and use of preoperative analgesia (OR 1.54 [95% CI 1.18 to 2.03], n=6). Pain catastrophising, American Society of Anesthesiologists status, chronic pain, marital status, socioeconomic status, education, surgical history, preoperative pressure pain tolerance and orthopaedic surgery (vs abdominal surgery) were not associated with increased odds of poor pain control. Study quality was generally high, although appropriate blinding of predictor during outcome ascertainment was often limited. CONCLUSIONS: Nine predictors of poor postoperative pain control were identified. These should be recognised as potentially important factors when developing discipline-specific clinical care pathways to improve pain outcomes and to guide future surgical pain research. PROSPERO REGISTRATION NUMBER: CRD42017080682.
Subject(s)
Elective Surgical Procedures/adverse effects, Postoperative Pain/prevention & control, Preoperative Care, Risk Assessment/methods, Adult, Analgesia, Conduction Anesthesia, Humans
ABSTRACT
OBJECTIVE: Cervical disc arthroplasty (CDA) is an accepted motion-sparing technique associated with favorable patient outcomes. However, heterotopic ossification (HO) and adjacent-segment degeneration are poorly understood adverse events that can be observed after CDA. The purpose of this study was to retrospectively examine 1) the effect of the residual exposed endplate (REE) on HO, and 2) identify risk factors predicting radiographic adjacent-segment disease (rASD) in a consecutive cohort of CDA patients. METHODS: A retrospective cohort study was performed on consecutive adult patients (≥ 18 years) who underwent 1- or 2-level CDA at the University of Calgary between 2002 and 2015 with > 1-year follow-up. REE was calculated by subtracting the anteroposterior (AP) diameter of the arthroplasty device from the native AP endplate diameter measured on lateral radiographs. HO was graded using the McAfee classification (low grade, 0-2; high grade, 3 and 4). Change in AP endplate diameter over time was measured at the index and adjacent levels to indicate progressive rASD. RESULTS: Forty-five patients (58 levels) underwent CDA during the study period. The mean age was 46 years (SD 10 years). Twenty-six patients (58%) were male. The median follow-up was 29 months (IQR 42 months). Thirty-three patients (73%) underwent 1-level CDA. High-grade HO developed at 19 levels (33%). The mean REE was 2.4 mm in the high-grade HO group and 1.6 mm in the low-grade HO group (p = 0.02). On multivariable analysis, patients with REE > 2 mm had 4.5-times higher odds of developing high-grade HO (p = 0.02) than patients with REE ≤ 2 mm. No significant relationship was observed between the type of artificial disc and the development of high-grade HO (p = 0.1). rASD was more likely to develop in the lower cervical spine (p = 0.001) and increased with time (p < 0.001). The presence of an artificial disc was highly protective against degenerative changes at the index level of operation (p < 0.001) but did not influence degeneration in the adjacent segments. CONCLUSIONS: In patients undergoing CDA, high-grade HO was predicted by REE. Therefore, maximizing the implant-endplate interface may help to reduce high-grade HO and preserve motion. rASD increases in an obligatory manner following CDA and is highly linked to specific levels (e.g., C6-7) rather than the presence or absence of an adjacent arthroplasty device. The presence of an artificial disc is, however, protective against further degenerative change at the index level of operation.