ABSTRACT
BACKGROUND: Transcarotid artery revascularization (TCAR) is an interventional therapy for symptomatic internal carotid artery disease. Currently, the utilization of TCAR is contentious due to limited evidence. In this study, we evaluate the safety and efficacy of TCAR in patients with symptomatic internal carotid artery disease compared with carotid endarterectomy (CEA) and carotid artery stenting (CAS). METHODS: A systematic review was conducted, spanning from January 2000 to February 2023, encompassing studies that used TCAR for the treatment of symptomatic internal carotid artery disease. The primary outcomes included 30-day stroke or transient ischemic attack, myocardial infarction, and mortality. Secondary outcomes comprised cranial nerve injury and major bleeding. Pooled odds ratios (ORs) for each outcome were calculated to compare TCAR with CEA and CAS. Furthermore, subgroup analyses were performed based on age and degree of stenosis. In addition, a sensitivity analysis was conducted by excluding the Vascular Quality Initiative registry population. RESULTS: A total of 7 studies involving 24,246 patients were analyzed. Within this patient cohort, 4771 individuals underwent TCAR, 12,350 underwent CEA, and 7125 underwent CAS. Compared with CAS, TCAR was associated with a similar rate of stroke or transient ischemic attack (OR, 0.77 [95% CI, 0.33-1.82]) and myocardial infarction (OR, 1.29 [95% CI, 0.83-2.01]) but lower mortality (OR, 0.42 [95% CI, 0.22-0.81]). Compared with CEA, TCAR was associated with a higher rate of stroke or transient ischemic attack (OR, 1.26 [95% CI, 1.03-1.54]) but similar rates of myocardial infarction (OR, 0.9 [95% CI, 0.64-1.38]) and mortality (OR, 1.35 [95% CI, 0.87-2.10]). CONCLUSIONS: Although CEA has traditionally been considered superior to stenting for symptomatic carotid stenosis, TCAR may have some advantages over CAS. Prospective randomized trials comparing the 3 modalities are needed.
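The pooled ORs reported above combine per-study odds ratios. A minimal sketch of fixed-effect inverse-variance pooling from reported ORs and their 95% CIs is shown below; the review's actual model (e.g., random-effects) may differ, and the example inputs are hypothetical.

```python
import math

def se_from_ci(lo, hi, z=1.96):
    """Standard error of the log-OR recovered from a reported 95% CI."""
    return (math.log(hi) - math.log(lo)) / (2 * z)

def pooled_or(estimates, z=1.96):
    """Fixed-effect inverse-variance pooling of odds ratios.

    `estimates` is a list of (or_value, ci_low, ci_high) tuples.
    Returns (pooled_or, ci_low, ci_high).
    """
    logs = [math.log(o) for o, lo, hi in estimates]
    weights = [1.0 / se_from_ci(lo, hi) ** 2 for o, lo, hi in estimates]
    pooled = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    se = 1.0 / math.sqrt(sum(weights))
    return (math.exp(pooled),
            math.exp(pooled - z * se),
            math.exp(pooled + z * se))

# Two hypothetical studies reporting the same OR pool back to that OR,
# with a narrower confidence interval.
print(pooled_or([(0.5, 0.25, 1.0), (0.5, 0.25, 1.0)]))
```

Pooling on the log scale keeps the CI multiplicative, mirroring how the ORs and CIs are reported in the abstract.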
Subject(s)
Carotid Stenosis; Endarterectomy, Carotid; Stents; Humans; Endarterectomy, Carotid/methods; Endarterectomy, Carotid/adverse effects; Carotid Stenosis/surgery; Carotid Artery, Internal/surgery; Myocardial Infarction/surgery; Stroke/surgery; Endovascular Procedures/methods; Ischemic Attack, Transient/surgery; Cerebral Revascularization/methods; Treatment Outcome; Carotid Artery Diseases/surgery
ABSTRACT
BACKGROUND: It is uncertain whether antiplatelets or anticoagulants are more effective in preventing early recurrent stroke in patients with cervical artery dissection. Following the publication of the observational STOP-CAD (Stroke Prevention in Cervical Artery Dissection) study, which more than doubled the available data, we performed an updated systematic review and meta-analysis comparing antiplatelets versus anticoagulation in cervical artery dissection. METHODS: The systematic review was registered in PROSPERO (CRD42023468063). We searched 5 databases using a combination of keywords that encompass different antiplatelets and anticoagulants, as well as cervical artery dissection. We included relevant randomized trials and observational studies of dissection unrelated to major trauma. Where studies were sufficiently similar, we performed meta-analyses for efficacy (ischemic stroke) and safety (major hemorrhage, symptomatic intracranial hemorrhage, and death) outcomes using relative risks. RESULTS: We identified 11 studies (2 randomized trials and 9 observational studies) that met the inclusion criteria. These included 5039 patients (30% [1512] treated with anticoagulation and 70% [3527] treated with antiplatelets). In meta-analysis, anticoagulation was associated with a lower ischemic stroke risk (relative risk, 0.63 [95% CI, 0.43-0.94]; P=0.02; I2=0%) but a higher major bleeding risk (relative risk, 2.25 [95% CI, 1.07-4.72]; P=0.03; I2=0%). The risks of death and symptomatic intracranial hemorrhage were similar between the 2 treatments. Effect sizes were larger in randomized trials. There are insufficient data on the efficacy and safety of dual antiplatelet therapy or direct oral anticoagulants. CONCLUSIONS: In this study of patients with cervical artery dissection, anticoagulation was superior to antiplatelet therapy in reducing ischemic stroke but carried a higher major bleeding risk.
This argues for an individualized therapeutic approach that weighs the benefit of ischemic stroke reduction against the risk of bleeding. Large randomized clinical trials are required to clarify optimal antithrombotic strategies for the management of cervical artery dissection.
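The efficacy and safety comparisons above rest on relative risks computed from 2x2 event counts. A sketch of the relative risk with the standard log-scale (Katz) confidence interval follows; the event counts in the example are hypothetical, not taken from the studies.

```python
import math

def relative_risk(events_a, total_a, events_b, total_b, z=1.96):
    """Relative risk of group A vs group B from 2x2 counts, with a
    95% CI computed on the log scale (the standard Katz method)."""
    rr = (events_a / total_a) / (events_b / total_b)
    se = math.sqrt(1 / events_a - 1 / total_a + 1 / events_b - 1 / total_b)
    lo, hi = (math.exp(math.log(rr) + s * z * se) for s in (-1, 1))
    return rr, lo, hi

# Hypothetical example: 10/100 events vs 20/100 events gives RR = 0.5.
print(relative_risk(10, 100, 20, 100))
```

The CI is asymmetric around the point estimate, which is why published RRs (e.g., 0.63 [0.43-0.94]) are not centered arithmetically.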
Subject(s)
Anticoagulants; Platelet Aggregation Inhibitors; Humans; Platelet Aggregation Inhibitors/therapeutic use; Anticoagulants/therapeutic use; Anticoagulants/adverse effects; Vertebral Artery Dissection/drug therapy; Ischemic Stroke/drug therapy; Ischemic Stroke/prevention & control; Stroke/prevention & control; Stroke/drug therapy; Carotid Artery, Internal, Dissection/drug therapy
ABSTRACT
BACKGROUND CONTEXT: Clinical guidelines, developed in concordance with the literature, are often used to guide surgeons' clinical decision making. Recent advancements of large language models and artificial intelligence (AI) in the medical field come with exciting potential. OpenAI's generative AI model, known as ChatGPT, can quickly synthesize information and generate responses grounded in medical literature, which may prove to be a useful tool in clinical decision-making for spine care. The current literature has yet to investigate the ability of ChatGPT to assist clinical decision making with regard to degenerative spondylolisthesis. PURPOSE: The study aimed to compare ChatGPT's concordance with the recommendations set forth by The North American Spine Society (NASS) Clinical Guideline for the Diagnosis and Treatment of Degenerative Spondylolisthesis and assess ChatGPT's accuracy within the context of the most recent literature. METHODS: ChatGPT-3.5 and ChatGPT-4.0 were prompted with questions from the NASS Clinical Guideline for the Diagnosis and Treatment of Degenerative Spondylolisthesis, and their recommendations were graded as "concordant" or "nonconcordant" relative to those put forth by NASS. A response was considered "concordant" when ChatGPT generated a recommendation that accurately reproduced all major points made in the NASS recommendation. Any responses graded "nonconcordant" were further stratified into two subcategories, "insufficient" or "over-conclusive," to provide further insight into the grading rationale. Responses between GPT-3.5 and 4.0 were compared using chi-squared tests. RESULTS: ChatGPT-3.5 answered 13 of NASS's 28 total clinical questions in concordance with NASS's guidelines (46.4%).
Categorical breakdown is as follows: Definitions and Natural History (1/1, 100%), Diagnosis and Imaging (1/4, 25%), Outcome Measures for Medical Intervention and Surgical Treatment (0/1, 0%), Medical and Interventional Treatment (4/6, 66.7%), Surgical Treatment (7/14, 50%), and Value of Spine Care (0/2, 0%). When NASS indicated there was sufficient evidence to offer a clear recommendation, ChatGPT-3.5 generated a concordant response 66.7% of the time (6/9). However, ChatGPT-3.5's concordance dropped to 36.8% for clinical questions on which NASS did not provide a clear recommendation (7/19). A further breakdown of ChatGPT-3.5's nonconcordance with the guidelines revealed that the vast majority of its inaccurate recommendations were "over-conclusive" (12/15, 80%) rather than "insufficient" (3/15, 20%). ChatGPT-4.0 answered 19 (67.9%) of the 28 total questions in concordance with NASS guidelines (P = 0.177). When NASS indicated there was sufficient evidence to offer a clear recommendation, ChatGPT-4.0 generated a concordant response 66.7% of the time (6/9). ChatGPT-4.0's concordance held at 68.4% for clinical questions on which NASS did not provide a clear recommendation (13/19, P = 0.104). CONCLUSIONS: This study sheds light on the duality of LLM applications within clinical settings: accuracy and utility in some contexts versus inaccuracy and risk in others. ChatGPT was concordant for most clinical questions NASS offered recommendations for. However, for questions NASS did not offer best practices on, ChatGPT generated answers that were either too general or inconsistent with the literature, and it even fabricated data/citations. Thus, clinicians should exercise extreme caution when consulting ChatGPT for clinical recommendations, taking care to verify its reliability against the recent literature.
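The GPT-3.5 vs GPT-4.0 concordance comparison (13/28 vs 19/28, P = 0.177) is a chi-squared test on two proportions. A dependency-free sketch is below; the use of the Yates continuity correction here is an assumption, chosen because with those counts it reproduces a P value near the one reported.

```python
import math

def chi2_two_proportions(x1, n1, x2, n2, yates=False):
    """Pearson chi-squared test (df=1) comparing x1/n1 vs x2/n2 successes.
    Returns (chi2, two_sided_p); `yates` applies the continuity correction."""
    a, b = x1, n1 - x1
    c, d = x2, n2 - x2
    n = n1 + n2
    diff = abs(a * d - b * c)
    if yates:
        diff = max(0.0, diff - n / 2)
    chi2 = n * diff ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # For df=1, the chi-squared survival function reduces to erfc(sqrt(x/2)).
    return chi2, math.erfc(math.sqrt(chi2 / 2))

# Concordance counts from the abstract: 13/28 (GPT-3.5) vs 19/28 (GPT-4.0).
print(chi2_two_proportions(13, 28, 19, 28, yates=True))
```

With the correction this yields P of roughly 0.18, consistent with the reported P = 0.177; the study's exact test configuration is not stated in the abstract.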
ABSTRACT
PURPOSE: To categorize and trend annual out-of-pocket expenditures for arthroscopic rotator cuff repair (RCR) patients relative to total healthcare utilization (THU) reimbursement and compare drivers of patient out-of-pocket expenditures (POPE) in a granular fashion via analyses by insurance type and surgical setting. METHODS: Patients who underwent outpatient arthroscopic RCR in the United States from 2013 to 2018 were identified from the IBM MarketScan Database. Primary outcome variables were total POPE and THU reimbursement, which were calculated for all claims in the 9-month perioperative period. Trends in outcome variables over time and differences across insurance types were analyzed. Multivariable analysis was performed to investigate drivers of POPE. RESULTS: A total of 52,330 arthroscopic RCR patients were identified. Between 2013 and 2018, median POPE increased by 47.5% ($917 to $1,353), and median THU increased by 9.3% ($11,964 to $13,076). Patients with high deductible insurance plans paid $1,910 toward their THU, 52.5% more than patients with preferred provider plans ($1,253, P = .001) and 280.5% more than patients with managed care plans ($502, P = .001). All components of POPE increased over the study period, with the largest observed increase being POPE for the immediate procedure (P = .001). On multivariable analysis, out-of-network facility, out-of-network surgeon, and high-deductible insurance most significantly increased POPE. CONCLUSIONS: POPE for arthroscopic RCR increased at a higher rate than THU over the study period, demonstrating that patients are paying an increasing proportion of RCR costs. A large percentage of this increase comes from increasing POPE for the immediate procedure. Out-of-network facility status increased POPE 3 times more than out-of-network surgeon status, and future cost-optimization strategies should focus on facility-specific reimbursements in particular. 
Last, ambulatory surgery centers (ASCs) significantly reduced POPE, so performing arthroscopic RCRs at ASCs is beneficial to cost-minimization efforts. CLINICAL RELEVANCE: This study highlights that although payers have increased reimbursement for RCR, patient out-of-pocket expenditures have increased at a much higher rate. Furthermore, this study elucidates trends in and drivers of patient out-of-pocket payments for RCR, providing evidence for development of cost-optimization strategies and counseling of patients undergoing RCR.
Subject(s)
Arthroscopy; Health Expenditures; Rotator Cuff Injuries; Humans; Arthroscopy/economics; Male; Female; Health Expenditures/statistics & numerical data; Middle Aged; United States; Rotator Cuff Injuries/surgery; Rotator Cuff Injuries/economics; Ambulatory Surgical Procedures/economics; Insurance, Health, Reimbursement; Patient Acceptance of Health Care/statistics & numerical data; Aged; Rotator Cuff/surgery
ABSTRACT
PURPOSE: Predict nonhome discharge (NHD) following elective anterior cervical discectomy and fusion (ACDF) using an explainable machine learning model. METHODS: 2227 patients undergoing elective ACDF from 2008 to 2019 were identified from a single institutional database. A machine learning model was trained on preoperative variables, including demographics, comorbidity indices, and levels fused. The validation technique was repeated stratified K-fold cross-validation, with the area under the receiver operating characteristic curve (AUROC) as the performance metric. Shapley Additive Explanation (SHAP) values were calculated to provide further explainability regarding the model's decision making. RESULTS: The preoperative model performed with an AUROC of 0.83 ± 0.05. SHAP scores revealed the most pertinent risk factors to be age, Medicare insurance, and American Society of Anesthesiologists (ASA) score. Interaction analysis demonstrated that female patients over 65 with more levels fused were more likely to undergo NHD. Likewise, ASA demonstrated positive interaction effects with female sex, levels fused, and BMI. CONCLUSION: We validated an explainable machine learning model for the prediction of NHD using common preoperative variables. Adding transparency is a key step towards clinical application because it demonstrates that our model's "thinking" aligns with clinical reasoning. Interaction analysis demonstrated that patients over age 65, of female sex, with higher ASA scores, and with more levels fused were more predisposed to NHD. Age and ASA score were similar in their predictive ability. Machine learning may be used to predict NHD and can assist surgeons with patient counseling or early discharge planning.
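The AUROC metric reported here reduces, per cross-validation fold, to a rank statistic: the probability that a randomly chosen positive case outranks a randomly chosen negative one. A dependency-free sketch of that computation follows; the model, folds, and data are outside its scope, and in practice a library routine (e.g., scikit-learn's `roc_auc_score`) would be used.

```python
def auroc(labels, scores):
    """Area under the ROC curve computed directly as the probability that a
    randomly chosen positive outranks a randomly chosen negative
    (the Mann-Whitney U formulation); ties count as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfectly separated scores give an AUROC of 1.0.
print(auroc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))
```

Averaging this statistic over repeated stratified folds yields the mean ± SD form (0.83 ± 0.05) quoted in the abstract.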
Subject(s)
Patient Discharge; Spinal Fusion; Humans; Female; Aged; United States; Spinal Fusion/methods; Medicare; Diskectomy/methods; Machine Learning; Retrospective Studies
ABSTRACT
BACKGROUND: High-level evidence for direct oral anticoagulants (DOACs) in patients with cerebral venous thrombosis is lacking. We performed a systematic review and meta-analysis to assess the efficacy and safety of DOACs versus vitamin K antagonists in patients with cerebral venous thrombosis. METHODS: This systematic review was registered in PROSPERO (CRD42021228800). We searched MEDLINE (via Ovid), EMBASE, CINAHL, and the Web of Science Core Collection between January 1, 2007, and February 22, 2022. Search terms included a combination of keywords and controlled vocabulary terms for cerebral venous thrombosis, vitamin K antagonists/warfarin, and DOACs. We included both randomized and nonrandomized studies that compared vitamin K antagonists and DOACs in 5 or more patients with cerebral venous thrombosis. Where studies were sufficiently similar, we performed meta-analyses for efficacy (recurrent venous thromboembolism and complete recanalization) and safety (major hemorrhage) outcomes, using relative risks (RRs). RESULTS: Out of 10,665 records identified, we screened 254 as potentially eligible. Nineteen studies (16 observational studies [n=1735] and 3 randomized controlled trials [n=215]) met the inclusion criteria. All 3 randomized controlled trials had some concerns, and all 16 observational studies had at least moderate risk of bias. When compared with vitamin K antagonist treatment, DOACs had comparable risks of recurrent venous thromboembolism (RR, 0.85 [95% CI, 0.52-1.37]; I2=0%), major hemorrhage (RR, 0.70 [95% CI, 0.40-1.21]; I2=0%), intracranial hemorrhage (RR, 0.58 [95% CI, 0.30-1.12]; I2=0%), death (RR, 1.14 [95% CI, 0.54-2.43]; I2=1%), and complete venous recanalization (RR, 0.98 [95% CI, 0.87-1.11]; I2=0%). CONCLUSIONS: This systematic review and meta-analysis suggests that in patients with cerebral venous thrombosis, DOACs and warfarin may have comparable efficacy and safety.
Given the limitations of the studies included (low number of randomized controlled trials, modest total sample size, rare outcome events), our findings should be interpreted with caution pending confirmation by ongoing randomized controlled trials and large, prospective, observational studies.
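The I2 values quoted alongside each RR above quantify between-study heterogeneity. A sketch of Cochran's Q and the Higgins I2 statistic from per-study log-effect estimates and standard errors follows; the inputs in the example are illustrative, not the studies' data.

```python
import math

def i_squared(effects, ses):
    """Cochran's Q and the Higgins I^2 statistic (in %) from per-study
    log-effect estimates and their standard errors."""
    w = [1 / se ** 2 for se in ses]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    # I^2 = max(0, (Q - df) / Q), expressed as a percentage.
    return q, max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Identical study effects give Q = 0 and I^2 = 0%, as in the abstract's
# homogeneous pooled estimates.
print(i_squared([0.1, 0.1, 0.1], [0.2, 0.2, 0.2]))
```

An I2 of 0% means the observed spread of study effects is no larger than chance alone would produce.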
Subject(s)
Intracranial Thrombosis; Venous Thromboembolism; Venous Thrombosis; Administration, Oral; Anticoagulants/adverse effects; Fibrinolytic Agents/therapeutic use; Hemorrhage/drug therapy; Humans; Intracranial Thrombosis/drug therapy; Prospective Studies; Venous Thrombosis/drug therapy; Vitamin K; Warfarin/therapeutic use
ABSTRACT
STUDY DESIGN: Retrospective cohort study. OBJECTIVES: Prolonged ICU stay is a driver of higher costs and inferior outcomes in adult spinal deformity (ASD) patients. Machine learning (ML) models have recently been seen as a viable method of predicting preoperative risk but are often 'black boxes' that do not fully explain the decision-making process. This study aims to demonstrate that ML can achieve predictive power similar to or greater than that of traditional statistical methods while following traditional clinical decision-making processes. METHODS: Five ML models (Decision Tree, Random Forest, Support Vector Classifier, GradBoost, and a CNN) were trained on data collected from a large urban academic center to predict whether prolonged ICU stay would be required postoperatively. 535 patients who underwent posterior fusion or combined fusion for treatment of ASD were included in each model with a 70-20-10 train-test-validation split. Further analysis was performed using Shapley Additive Explanation (SHAP) values to provide insight into each model's decision-making process. RESULTS: The models' area under the receiver operating characteristic curve (AUROC) ranged from 0.67 to 0.83. The Random Forest model achieved the highest score. Based on SHAP values, this model considered length of surgery, complications, and estimated blood loss to be the greatest predictors of prolonged ICU stay. CONCLUSIONS: We developed an ML model that was able to predict whether prolonged ICU stay was required in ASD patients. Further SHAP analysis demonstrated that our model aligned with traditional clinical thinking. Thus, ML models have strong potential to assist with risk stratification and more effective, cost-efficient care.
ABSTRACT
STUDY DESIGN: Comparative Analysis and Narrative Review. OBJECTIVE: To assess and compare ChatGPT's responses to the clinical questions and recommendations proposed by The 2011 North American Spine Society (NASS) Clinical Guideline for the Diagnosis and Treatment of Degenerative Lumbar Spinal Stenosis (LSS). We explore the advantages and disadvantages of ChatGPT's responses through an updated literature review on spinal stenosis. METHODS: We prompted ChatGPT with questions from the NASS Evidence-based Clinical Guidelines for LSS and compared its generated responses with the recommendations provided by the guidelines. A review of the literature was performed via PubMed, OVID, and Cochrane on the diagnosis and treatment of lumbar spinal stenosis between January 2012 and April 2023. RESULTS: 14 questions proposed by the NASS guidelines for LSS were uploaded into ChatGPT and directly compared to the responses offered by NASS. Three questions were on the definition and history of LSS, one on diagnostic tests, seven on non-surgical interventions and three on surgical interventions. The review process found 40 articles that were selected for inclusion that helped corroborate or contradict the responses that were generated by ChatGPT. CONCLUSIONS: ChatGPT's responses were similar to findings in the current literature on LSS. These results demonstrate the potential for implementing ChatGPT into the spine surgeon's workplace as a means of supporting the decision-making process for LSS diagnosis and treatment. However, our narrative summary only provides a limited literature review and additional research is needed to standardize our findings as means of validating ChatGPT's use in the clinical space.
ABSTRACT
Background: Resident training programs in the US use the Orthopaedic In-Training Examination (OITE), developed by the American Academy of Orthopaedic Surgeons (AAOS), to assess the current knowledge of their residents and to identify the residents at risk of failing the American Board of Orthopaedic Surgery (ABOS) examination. Optimal strategies for OITE preparation are constantly being explored. There may be a role for Large Language Models (LLMs) in orthopaedic resident education. ChatGPT, an LLM launched in late 2022, has demonstrated the ability to produce accurate, detailed answers, potentially enabling it to aid in medical education and clinical decision-making. The purpose of this study is to evaluate the performance of ChatGPT on Orthopaedic In-Training Examinations using Self-Assessment Examination (SAE) questions from the AAOS database and approved literature as a proxy for the Orthopaedic Board Examination. Methods: 301 SAE questions from the AAOS database and associated AAOS literature were input into ChatGPT's interface in a question and multiple-choice format, and the answers were then analyzed to determine which answer choice was selected. A new chat was used for every question. All answers were recorded, categorized, and compared to the answer given by the OITE and SAE exams, noting whether the answer was right or wrong. Results: Of the 301 questions asked, ChatGPT was able to correctly answer 183 (60.8%) of them. The subjects with the highest percentage of correctly answered questions were basic science (81%), oncology (72.7%), shoulder and elbow (71.9%), and sports (71.4%). The questions were further subdivided into 3 groups: those about management, diagnosis, or knowledge recall. There were 86 management questions with 47 correct (54.7%), 45 diagnosis questions with 32 correct (71.1%), and 168 knowledge recall questions with 102 correct (60.7%).
Conclusions: ChatGPT has the potential to provide orthopedic educators and trainees with accurate clinical conclusions for the majority of board-style questions, although its reasoning should be carefully analyzed for accuracy and clinical validity. As such, its usefulness in a clinical educational context is currently limited but rapidly evolving. Clinical relevance: ChatGPT can access a multitude of medical data and may help provide accurate answers to clinical questions.
ABSTRACT
OBJECTIVE: Large language models like chat generative pre-trained transformer (ChatGPT) have found success in various sectors, but their application in the medical field remains limited. This study aimed to assess the feasibility of using ChatGPT to provide accurate medical information to patients, specifically evaluating how well ChatGPT versions 3.5 and 4 aligned with the 2012 North American Spine Society (NASS) guidelines for lumbar disk herniation with radiculopathy. METHODS: ChatGPT's responses to questions based on the NASS guidelines were analyzed for accuracy. Three new categories (overconclusiveness, supplementary information, and incompleteness) were introduced to deepen the analysis. Overconclusiveness referred to recommendations not mentioned in the NASS guidelines, supplementary information denoted additional relevant details, and incompleteness indicated omitted crucial information from the NASS guidelines. RESULTS: Out of 29 clinical guidelines evaluated, ChatGPT-3.5 demonstrated accuracy in 15 responses (52%), while ChatGPT-4 achieved accuracy in 17 responses (59%). ChatGPT-3.5 was overconclusive in 14 responses (48%), while ChatGPT-4 exhibited overconclusiveness in 13 responses (45%). Additionally, ChatGPT-3.5 provided supplementary information in 24 responses (83%), and ChatGPT-4 provided supplementary information in 27 responses (93%). In terms of incompleteness, ChatGPT-3.5 displayed this in 11 responses (38%), while ChatGPT-4 showed incompleteness in 8 responses (28%). CONCLUSION: ChatGPT shows promise for clinical decision-making, but both patients and healthcare providers should exercise caution to ensure safety and quality of care. While these results are encouraging, further research is necessary to validate the use of large language models in clinical settings.
ABSTRACT
STUDY DESIGN: This study analyzes patents associated with minimally invasive spine surgery (MISS) found on the Lens open online platform. OBJECTIVE: The goal of this research was to provide an overview of the most referenced patents in the field of MISS and to uncover patterns in the evolution and categorization of these patents. SUMMARY OF BACKGROUND DATA: MISS has rapidly progressed, with a core focus on minimizing surgical damage, preserving the natural anatomy, and enabling swift recovery, all while achieving outcomes that rival traditional open surgery. While prior studies have primarily concentrated on MISS outcomes, the analysis of MISS patents has been limited. METHODS: To conduct this study, we used the Lens platform to search for patents that included the terms "minimally invasive" and "spine" in their titles, abstracts, or claims. We then categorized these patents and identified the top 100 with the most forward citations. We further classified these patents into 4 categories: Spinal Stabilization Systems, Joint Implants or Procedures, Screw Delivery System or Method, and Access and Surgical Pathway Formation. RESULTS: Five hundred two MISS patents were identified initially, and 276 were retained following a screening process. Among the top 100 patents, the majority had active legal status. The largest category within the top 100 patents was Access and Surgical Pathway Formation, closely followed by Spinal Stabilization Systems and Joint Implants or Procedures. The smallest category was Screw Delivery System or Method. Notably, the majority of the top 100 patents had priority years falling between 2000 and 2009, indicating a moderate positive correlation between patent rank and priority year. CONCLUSIONS: Thus far, patents related to Access and Surgical Pathway Formation have laid the foundation for subsequent innovations in Spinal Stabilization Systems and Screw Technology. 
This study serves as a valuable resource for guiding future innovations in this rapidly evolving field.
ABSTRACT
OBJECTIVE: The objective of this study was to assess the safety and accuracy of ChatGPT recommendations in comparison to the evidence-based guidelines from the North American Spine Society (NASS) for the diagnosis and treatment of cervical radiculopathy. METHODS: ChatGPT was prompted with questions from the 2011 NASS clinical guidelines for cervical radiculopathy and evaluated for concordance. Selected key phrases within the NASS guidelines were identified. Completeness was measured as the number of overlapping key phrases between ChatGPT responses and NASS guidelines divided by the total number of key phrases. A senior spine surgeon evaluated the ChatGPT responses for safety and accuracy. ChatGPT responses were further evaluated on their readability, similarity, and consistency. Flesch Reading Ease scores and Flesch-Kincaid reading levels were measured to assess readability. The Jaccard Similarity Index was used to assess agreement between ChatGPT responses and NASS clinical guidelines. RESULTS: A total of 100 key phrases were identified across 14 NASS clinical guidelines. The mean completeness of ChatGPT-4 was 46%, while ChatGPT-3.5 yielded a completeness of 34%; ChatGPT-4 thus outperformed ChatGPT-3.5 by 12 percentage points. ChatGPT-4.0 outputs had a mean Flesch Reading Ease score of 15.24, corresponding to text that is very difficult to read and requires a college graduate education to understand. ChatGPT-3.5 outputs had a lower mean Flesch Reading Ease score of 8.73, indicating that they are even more difficult to read and require a professional education level. However, both versions of ChatGPT were more accessible than the NASS guidelines, which had a mean Flesch Reading Ease score of 4.58. Furthermore, with NASS guidelines as a reference, ChatGPT-3.5 registered a mean ± SD Jaccard Similarity Index score of 0.20 ± 0.078, while ChatGPT-4 had a mean of 0.18 ± 0.068. Based on physician evaluation, outputs from ChatGPT-3.5 and ChatGPT-4.0 were safe 100% of the time.
Thirteen of 14 (92.8%) ChatGPT-3.5 responses and 14 of 14 (100%) ChatGPT-4.0 responses were in agreement with current best clinical practices for cervical radiculopathy according to a senior spine surgeon. CONCLUSIONS: ChatGPT models were able to provide safe and accurate but incomplete responses to NASS clinical guideline questions about cervical radiculopathy. Although the authors' results suggest that improvements are required before ChatGPT can be reliably deployed in a clinical setting, future versions of the LLM hold promise as an updated reference for guidelines on cervical radiculopathy. Future versions must prioritize accessibility and comprehensibility for a diverse audience.
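The Jaccard Similarity Index and Flesch Reading Ease metrics used above are both simple formulas. The sketch below treats Jaccard similarity as overlap of lowercase word sets and uses a crude vowel-run syllable heuristic for Flesch; production readability tools use dictionary-based syllable counts, so scores will differ somewhat from the study's.

```python
import re

def jaccard(text_a, text_b):
    """Jaccard similarity of the two texts' lowercase word sets:
    |A intersect B| / |A union B|."""
    a, b = (set(re.findall(r"[a-z']+", t.lower())) for t in (text_a, text_b))
    return len(a & b) / len(a | b)

def count_syllables(word):
    """Crude syllable estimate: runs of vowels, minimum one."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences)
    - 84.6*(syllables/words). Higher scores mean easier text."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

print(jaccard("cervical radiculopathy guidelines", "cervical spine guidelines"))
print(flesch_reading_ease("The cat sat."))
```

Scores near 100 indicate very easy text; the single-digit scores reported for the NASS guidelines and ChatGPT-3.5 sit at the professional-reading end of the scale.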
Subject(s)
Radiculopathy; Humans; Radiculopathy/diagnosis; Practice Guidelines as Topic/standards; Cervical Vertebrae/surgery; Societies, Medical
ABSTRACT
STUDY DESIGN: Comparative analysis. OBJECTIVE: To evaluate Chat Generative Pre-trained Transformer's (ChatGPT's) ability to predict appropriate clinical recommendations based on the most recent clinical guidelines for the diagnosis and treatment of low back pain. BACKGROUND: Low back pain is a very common and often debilitating condition that affects many people globally. ChatGPT is an artificial intelligence model that may be able to generate recommendations for low back pain. MATERIALS AND METHODS: Using the North American Spine Society Evidence-Based Clinical Guidelines as the gold standard, 82 clinical questions relating to low back pain were entered into ChatGPT (GPT-3.5) independently. For each question, we recorded ChatGPT's answer, then used a point-answer system (the point being the guideline recommendation and the answer being ChatGPT's response) and asked ChatGPT if the point was mentioned in the answer to assess for accuracy. This accuracy assessment was repeated for each question by guideline category with one caveat: ChatGPT was first prompted to answer as an experienced orthopedic surgeon. A two-sample proportion z test was used to assess any differences between the preprompt and postprompt scenarios with alpha=0.05. RESULTS: ChatGPT's response was accurate 65% of the time (72% postprompt, P=0.41) for guidelines with clinical recommendations, 46% (58% postprompt, P=0.11) for guidelines with insufficient or conflicting data, and 49% (16% postprompt, P=0.003*) for guidelines with no adequate study to address the clinical question. For guidelines with insufficient or conflicting data, 44% (25% postprompt, P=0.01*) of ChatGPT responses wrongly suggested that sufficient evidence existed. CONCLUSION: ChatGPT was able to produce a sufficient clinical guideline recommendation for low back pain, with overall improvements if initially prompted.
However, it tended to wrongly suggest that evidence existed and, especially postprompt, often failed to acknowledge when there was not enough evidence to support an accurate recommendation.
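The preprompt vs postprompt comparisons above use a two-sample proportion z test. A minimal sketch with a pooled variance estimate follows; the counts in the example are hypothetical, since the abstract reports only percentages per category.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sample z test for proportions with a pooled variance estimate.
    Returns (z, two_sided_p)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal: erfc(|z| / sqrt(2)).
    return z, math.erfc(abs(z) / math.sqrt(2))

# Hypothetical counts: 40/82 accurate preprompt vs 13/82 postprompt.
print(two_proportion_z(40, 82, 13, 82))
```

A large gap between the two proportions relative to the pooled standard error drives the z statistic, which is how a drop like 49% to 16% reaches P < 0.05.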
Subject(s)
Low Back Pain; Orthopedic Surgeons; Humans; Low Back Pain/diagnosis; Low Back Pain/therapy; Artificial Intelligence; Spine
ABSTRACT
OBJECTIVE: Large language models, such as chat generative pre-trained transformer (ChatGPT), have great potential for streamlining medical processes and assisting physicians in clinical decision-making. This study aimed to assess the potential of ChatGPT's 2 models (GPT-3.5 and GPT-4.0) to support clinical decision-making by comparing their responses for antibiotic prophylaxis in spine surgery to accepted clinical guidelines. METHODS: ChatGPT models were prompted with questions from the North American Spine Society (NASS) Evidence-based Clinical Guidelines for Multidisciplinary Spine Care for Antibiotic Prophylaxis in Spine Surgery (2013). Their responses were then compared and assessed for accuracy. RESULTS: Of the 16 NASS guideline questions concerning antibiotic prophylaxis, 10 responses (62.5%) were accurate in ChatGPT's GPT-3.5 model and 13 (81%) were accurate in GPT-4.0. Twenty-five percent of GPT-3.5 answers were deemed overly confident, while 62.5% of GPT-4.0 answers directly used the NASS guideline as evidence for their responses. CONCLUSION: ChatGPT demonstrated an impressive ability to accurately answer clinical questions. The GPT-3.5 model's performance was limited by its tendency to give overly confident responses and its inability to identify the most significant elements in its responses. The GPT-4.0 model's responses had higher accuracy and frequently cited the NASS guideline as direct evidence. While GPT-4.0 is still far from perfect, it has shown an exceptional ability to extract the most relevant research available compared to GPT-3.5. Thus, while ChatGPT has shown far-reaching potential, scrutiny should still be exercised regarding its clinical use at this time.
ABSTRACT
STUDY DESIGN: Retrospective cohort. OBJECTIVE: The purpose of this study was to evaluate the effect of overdistraction on interbody cage subsidence. BACKGROUND: Vertebral overdistraction due to the use of large intervertebral cage sizes may increase the risk of postoperative subsidence. METHODS: Patients who underwent anterior cervical discectomy and fusion between 2016 and 2021 were included. All measurements were performed using lateral cervical radiographs at three time points: preoperative, immediate postoperative, and final follow-up (>6 months postoperatively). Anterior and posterior distraction were calculated by subtracting the preoperative disc height from the immediate postoperative disc height. Cage subsidence was calculated by subtracting the final follow-up disc height from the immediate postoperative disc height. Associations between distraction and subsidence were determined using multivariable linear regression models, controlling for cage type, cervical level, sex, age, smoking status, and osteopenia. RESULTS: Sixty-eight patients and 125 fused levels were included in the study. Of the 68 fusions, 22 were single-level, 35 were 2-level, and 11 were 3-level. The median final follow-up interval was 368 days (range: 181-1257 d). Anterior disc space subsidence was positively associated with anterior distraction (beta = 0.23; 95% CI: 0.08, 0.38; P = 0.004), and posterior disc space subsidence was positively associated with posterior distraction (beta = 0.29; 95% CI: 0.13, 0.45; P < 0.001). No significant associations were observed between anterior distraction and posterior subsidence (beta = 0.07; 95% CI: -0.06, 0.20; P = 0.270) or between posterior distraction and anterior subsidence (beta = 0.06; 95% CI: -0.14, 0.27; P = 0.541). CONCLUSIONS: We found that overdistraction of the disc space was associated with increased postoperative subsidence after anterior cervical discectomy and fusion.
Surgeons should consider choosing a smaller cage size to avoid overdistraction and minimize postoperative subsidence.
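The multivariable regression described above (an adjusted beta for distraction, controlling for covariates) can be sketched with synthetic data. This is a minimal illustration assuming ordinary least squares; the variable ranges, covariates, and the true coefficient below are made-up assumptions, not the study's data.

```python
import numpy as np

# Hypothetical synthetic data mimicking the analysis; the coefficients and
# variable ranges below are illustrative assumptions, not the study's data.
rng = np.random.default_rng(0)
n = 200

distraction = rng.uniform(0.0, 3.0, n)             # mm of anterior distraction
age = rng.uniform(40.0, 80.0, n)                   # covariate
osteopenia = rng.integers(0, 2, n).astype(float)   # binary covariate

true_beta = 0.25
subsidence = (true_beta * distraction
              + 0.01 * age
              + 0.30 * osteopenia
              + rng.normal(0.0, 0.1, n))

# Multivariable linear regression: intercept + exposure + controls, fit by OLS.
X = np.column_stack([np.ones(n), distraction, age, osteopenia])
coef, *_ = np.linalg.lstsq(X, subsidence, rcond=None)
beta_distraction = coef[1]
print(f"adjusted beta for distraction: {beta_distraction:.2f}")
```

Because the covariates enter the design matrix alongside the exposure, the reported coefficient for distraction is adjusted for them, which is the sense in which the study's betas "control for" cage type, age, and the other factors.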
ABSTRACT
BACKGROUND: Acute hip fractures are a public health problem affecting primarily older adults. Chat Generative Pretrained Transformer may be useful in providing appropriate clinical recommendations for beneficial treatment. OBJECTIVE: To evaluate the accuracy of Chat Generative Pretrained Transformer (ChatGPT)-4.0 by comparing its appropriateness scores for acute hip fractures with the American Academy of Orthopaedic Surgeons (AAOS) Appropriate Use Criteria given 30 patient scenarios. "Appropriateness" indicates the unexpected health benefits of treatment exceed the expected negative consequences by a wide margin. METHODS: Using the AAOS Appropriate Use Criteria as the benchmark, numerical scores from 1 to 9 assessed appropriateness. For each patient scenario, ChatGPT-4.0 was asked to assign an appropriate score for six treatments to manage acute hip fractures. RESULTS: Thirty patient scenarios were evaluated for 180 paired scores. Comparing ChatGPT-4.0 with AAOS scores, there was a positive correlation for multiple cannulated screw fixation, total hip arthroplasty, hemiarthroplasty, and long cephalomedullary nails. Statistically significant differences were observed only between scores for long cephalomedullary nails. CONCLUSION: ChatGPT-4.0 scores were not concordant with AAOS scores, overestimating the appropriateness of total hip arthroplasty, hemiarthroplasty, and long cephalomedullary nails, and underestimating the other three. ChatGPT-4.0 was inadequate in selecting an appropriate treatment deemed acceptable, most reasonable, and most likely to improve patient outcomes.
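The paired-score comparison above (per-treatment correlation between model and benchmark appropriateness ratings) can be sketched as follows. The scores here are simulated to track the benchmark while systematically overestimating; they are illustrative only, not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical paired appropriateness scores (1-9) across 30 patient scenarios.
# Values are illustrative only; they are not the study's data.
rng = np.random.default_rng(1)
aaos = rng.integers(1, 10, 30)                      # AAOS benchmark scores
gpt = np.clip(aaos + rng.integers(0, 3, 30), 1, 9)  # model tracks AAOS but overestimates

rho, p_value = spearmanr(aaos, gpt)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3g})")
```

A rank correlation like Spearman's rho captures whether the model orders scenarios the same way as the benchmark; a paired location test would be needed separately to detect the systematic overestimation the abstract reports.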
Subject(s)
Hip Fractures, Humans, Hip Fractures/surgery, Aged, Female, Male, Aged 80 and Over, Hip Replacement Arthroplasty, Hemiarthroplasty, Practice Guidelines as Topic, Acute Disease, Language
ABSTRACT
Miniaturized electrical stimulation (ES) implants show great promise in practice, but their real-time control by biophysical mechanistic algorithms is not feasible due to computational complexity. Here, we study the feasibility of more computationally efficient machine learning methods for controlling ES implants. To this end, we estimate the normalized twitch force of the stimulated extensor digitorum longus muscle in n = 11 Wistar rats with intra- and cross-subject calibration. After 2000 training stimulations, a random forest regressor reaches a mean absolute error of 0.03 in the intra-subject setting and 0.2 in the cross-subject setting. To the best of our knowledge, this work is the first experiment showing that machine learning can feasibly emulate complex mechanistic ES models. However, the cross-subject results motivate further research on error-reduction methods for this setting.
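The regression setup above (a random forest predicting normalized twitch force from stimulation parameters, evaluated by mean absolute error) can be sketched on synthetic data. The feature names and the recruitment-curve model below are assumptions for illustration, not the paper's experimental data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical stimulation parameters and a made-up recruitment curve; the
# feature meanings and the force model are assumptions, not the paper's data.
rng = np.random.default_rng(42)
n = 2000  # roughly matches the 2000 training stimulations in the abstract
X = rng.uniform(0.0, 1.0, (n, 3))  # e.g., amplitude, pulse width, frequency
force = 1.0 / (1.0 + np.exp(-8.0 * (X[:, 0] * X[:, 1] - 0.25)))
y = force + rng.normal(0.0, 0.02, n)  # normalized twitch force with noise

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
mae = mean_absolute_error(y_test, model.predict(X_test))
print(f"intra-subject-style MAE: {mae:.3f}")
```

The cross-subject setting in the paper corresponds to holding out all stimulations from one animal rather than a random split, which is why its error is substantially higher: the model must generalize across physiological variation it never saw during calibration.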
Subject(s)
Artificial Intelligence, Skeletal Muscle, Rats, Animals, Wistar Rats, Feasibility Studies, Skeletal Muscle/physiology, Electric Stimulation/methods, Muscle Contraction
ABSTRACT
STUDY DESIGN: Retrospective cohort study. OBJECTIVES: This study assessed the effectiveness of a popular large language model, ChatGPT-4, in predicting Current Procedural Terminology (CPT) codes from surgical operative notes. By employing a combination of prompt engineering, natural language processing (NLP), and machine learning techniques on standard operative notes, the study sought to enhance billing efficiency, optimize revenue collection, and reduce coding errors. METHODS: The model was given three different types of prompts for 50 surgical operative notes from two spine surgeons. The first trial simply asked the model to generate CPT codes for a given operative note. The second trial included three example operative notes with their associated CPT codes to prime the model, and the third trial included a list of every possible CPT code in the dataset. CPT codes generated by the model were compared with those generated by the billing department. Model evaluation was performed by calculating the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC). RESULTS: The trial that primed ChatGPT with a list of every possible CPT code performed the best, with an AUROC of .87 and an AUPRC of .67, and an AUROC of .81 and an AUPRC of .76 when examining only the most common CPT codes. CONCLUSIONS: ChatGPT-4 can aid in automating CPT billing from orthopedic surgery operative notes, driving down healthcare expenditures and enhancing billing code precision as the model evolves and fine-tuning becomes available.
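The AUROC/AUPRC evaluation described above treats CPT prediction as a multi-label problem: each (note, code) pair is a binary decision scored against the billing department's codes. A minimal sketch of that evaluation, using simulated confidence scores rather than real ChatGPT output:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

# Hypothetical multi-label evaluation: each row is an operative note, each column
# a candidate CPT code. y_true marks the billing department's codes; y_score
# stands in for model confidence (simulated here; not real ChatGPT output).
rng = np.random.default_rng(7)
n_notes, n_codes = 50, 12
y_true = (rng.uniform(size=(n_notes, n_codes)) < 0.2).astype(int)
# Scores that track the truth with uniform noise, so separation is good but imperfect.
y_score = 0.6 * y_true + rng.uniform(size=(n_notes, n_codes))

# Micro-averaged metrics over all (note, code) pairs.
auroc = roc_auc_score(y_true.ravel(), y_score.ravel())
auprc = average_precision_score(y_true.ravel(), y_score.ravel())
print(f"AUROC={auroc:.2f}  AUPRC={auprc:.2f}")
```

AUPRC is typically the more informative of the two here because the correct codes are a small fraction of the candidate list, and precision-recall curves are sensitive to that class imbalance while ROC curves are not.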