1.
Eur Spine J ; 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38489044

ABSTRACT

BACKGROUND CONTEXT: Clinical guidelines, developed in concordance with the literature, are often used to guide surgeons' clinical decision-making. Recent advances in large language models and artificial intelligence (AI) in the medical field come with exciting potential. OpenAI's generative AI model, known as ChatGPT, can quickly synthesize information and generate responses grounded in the medical literature, which may prove to be a useful tool in clinical decision-making for spine care. The current literature has yet to investigate ChatGPT's ability to assist clinical decision-making with regard to degenerative spondylolisthesis. PURPOSE: The study aimed to compare ChatGPT's concordance with the recommendations set forth by the North American Spine Society (NASS) Clinical Guideline for the Diagnosis and Treatment of Degenerative Spondylolisthesis and to assess ChatGPT's accuracy within the context of the most recent literature. METHODS: ChatGPT-3.5 and ChatGPT-4.0 were prompted with questions from the NASS Clinical Guideline for the Diagnosis and Treatment of Degenerative Spondylolisthesis, and their recommendations were graded as "concordant" or "nonconcordant" relative to those put forth by NASS. A response was considered "concordant" when ChatGPT generated a recommendation that accurately reproduced all major points made in the NASS recommendation. Any response graded "nonconcordant" was further stratified into one of two subcategories, "insufficient" or "over-conclusive," to provide further insight into the grading rationale. Responses from GPT-3.5 and GPT-4.0 were compared using chi-squared tests. RESULTS: ChatGPT-3.5 answered 13 of NASS's 28 total clinical questions in concordance with NASS's guidelines (46.4%). The categorical breakdown is as follows: Definitions and Natural History (1/1, 100%), Diagnosis and Imaging (1/4, 25%), Outcome Measures for Medical Intervention and Surgical Treatment (0/1, 0%), Medical and Interventional Treatment (4/6, 66.7%), Surgical Treatment (7/14, 50%), and Value of Spine Care (0/2, 0%). When NASS indicated there was sufficient evidence to offer a clear recommendation, ChatGPT-3.5 generated a concordant response 66.7% of the time (6/9). However, ChatGPT-3.5's concordance dropped to 36.8% for clinical questions on which NASS did not provide a clear recommendation (7/19). A further breakdown of ChatGPT-3.5's nonconcordance with the guidelines revealed that the vast majority of its inaccurate recommendations were "over-conclusive" (12/15, 80%) rather than "insufficient" (3/15, 20%). ChatGPT-4.0 answered 19 (67.9%) of the 28 total questions in concordance with NASS guidelines (P = 0.177). When NASS indicated there was sufficient evidence to offer a clear recommendation, ChatGPT-4.0 generated a concordant response 66.7% of the time (6/9). ChatGPT-4.0's concordance held at 68.4% for clinical questions on which NASS did not provide a clear recommendation (13/19, P = 0.104). CONCLUSIONS: This study sheds light on the duality of LLM applications within clinical settings: accuracy and utility in some contexts versus inaccuracy and risk in others. ChatGPT was concordant for most clinical questions for which NASS offered recommendations. However, for questions on which NASS did not offer best practices, ChatGPT generated answers that were either too general or inconsistent with the literature, and it even fabricated data and citations. Thus, clinicians should exercise extreme caution when attempting to consult ChatGPT for clinical recommendations, taking care to ensure its reliability within the context of recent literature.
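
The model-to-model comparison above comes down to a 2x2 contingency test. Below is a minimal sketch in Python, assuming the chi-squared test was run on a table of concordant versus nonconcordant counts per model; the exact table construction is an assumption, as the abstract does not spell it out.

```python
# Sketch: chi-squared comparison of concordance between the two models.
# The 2x2 table layout (concordant vs. nonconcordant per model) is an
# assumption; the counts come from the abstract (13/28 and 19/28).
from scipy.stats import chi2_contingency

table = [
    [13, 28 - 13],  # ChatGPT-3.5: concordant, nonconcordant
    [19, 28 - 19],  # ChatGPT-4.0: concordant, nonconcordant
]
chi2, p, dof, expected = chi2_contingency(table)  # Yates correction is the default for 2x2
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")  # p ≈ 0.177, matching the reported value
```

With SciPy's default continuity correction this reproduces the reported P = 0.177, consistent with the difference between 13/28 and 19/28 not reaching significance at this sample size.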

2.
J Neurosurg Spine ; : 1-11, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38941643

ABSTRACT

OBJECTIVE: The objective of this study was to assess the safety and accuracy of ChatGPT recommendations in comparison to the evidence-based guidelines from the North American Spine Society (NASS) for the diagnosis and treatment of cervical radiculopathy. METHODS: ChatGPT was prompted with questions from the 2011 NASS clinical guidelines for cervical radiculopathy and evaluated for concordance. Selected key phrases within the NASS guidelines were identified. Completeness was measured as the number of key phrases overlapping between ChatGPT responses and the NASS guidelines divided by the total number of key phrases. A senior spine surgeon evaluated the ChatGPT responses for safety and accuracy. ChatGPT responses were further evaluated for readability, similarity, and consistency. Flesch Reading Ease scores and Flesch-Kincaid reading levels were measured to assess readability. The Jaccard Similarity Index was used to assess agreement between ChatGPT responses and the NASS clinical guidelines. RESULTS: A total of 100 key phrases were identified across 14 NASS clinical guidelines. The mean completeness of ChatGPT-4.0 was 46%, while ChatGPT-3.5 yielded a completeness of 34%; ChatGPT-4.0 outperformed ChatGPT-3.5 by 12 percentage points. ChatGPT-4.0 outputs had a mean Flesch Reading Ease score of 15.24, which is very difficult to read, requiring a college graduate education to understand. ChatGPT-3.5 outputs had a lower mean Flesch Reading Ease score of 8.73, indicating that they are even more difficult to read and require a professional level of education to understand. However, both versions of ChatGPT were more accessible than the NASS guidelines, which had a mean Flesch Reading Ease score of 4.58. Furthermore, with the NASS guidelines as a reference, ChatGPT-3.5 registered a mean ± SD Jaccard Similarity Index of 0.20 ± 0.078, while ChatGPT-4.0 had a mean of 0.18 ± 0.068. Based on physician evaluation, outputs from ChatGPT-3.5 and ChatGPT-4.0 were safe 100% of the time. Thirteen of 14 (92.8%) ChatGPT-3.5 responses and 14 of 14 (100%) ChatGPT-4.0 responses were in agreement with current best clinical practices for cervical radiculopathy according to a senior spine surgeon. CONCLUSIONS: ChatGPT models were able to provide safe and accurate but incomplete responses to NASS clinical guideline questions about cervical radiculopathy. Although the authors' results suggest that improvements are required before ChatGPT can be reliably deployed in a clinical setting, future versions of the LLM hold promise as an updated reference for guidelines on cervical radiculopathy. Future versions must prioritize accessibility and comprehensibility for a diverse audience.
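
The completeness and similarity measures described above are simple to compute. Here is a minimal sketch, assuming case-insensitive substring matching for key phrases and word-set tokenization for the Jaccard Similarity Index (both are assumptions, as the abstract does not specify them); the textstat package provides the Flesch Reading Ease score.

```python
# Sketch of the three metrics; the matching and tokenization choices are
# assumptions, and the example strings are placeholders, not study data.
import textstat  # pip install textstat

def completeness(response: str, key_phrases: list[str]) -> float:
    """Fraction of guideline key phrases found in the response."""
    hits = sum(phrase.lower() in response.lower() for phrase in key_phrases)
    return hits / len(key_phrases)

def jaccard(a: str, b: str) -> float:
    """Jaccard Similarity Index over lowercase word sets."""
    set_a, set_b = set(a.lower().split()), set(b.lower().split())
    return len(set_a & set_b) / len(set_a | set_b)

guideline = "MRI is suggested for the confirmation of cervical radiculopathy."
response = "MRI is typically recommended to confirm cervical radiculopathy."
print(completeness(response, ["MRI", "cervical radiculopathy"]))
print(jaccard(response, guideline))
print(textstat.flesch_reading_ease(response))  # lower score = harder to read
```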

3.
Clin Spine Surg ; 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39092883

ABSTRACT

STUDY DESIGN: This study analyzes patents associated with minimally invasive spine surgery (MISS) found on the Lens open online platform. OBJECTIVE: The goal of this research was to provide an overview of the most referenced patents in the field of MISS and to uncover patterns in the evolution and categorization of these patents. SUMMARY OF BACKGROUND DATA: MISS has rapidly progressed, with a core focus on minimizing surgical damage, preserving the natural anatomy, and enabling swift recovery, all while achieving outcomes that rival traditional open surgery. While prior studies have primarily concentrated on MISS outcomes, the analysis of MISS patents has been limited. METHODS: To conduct this study, we used the Lens platform to search for patents that included the terms "minimally invasive" and "spine" in their titles, abstracts, or claims. We then categorized these patents and identified the top 100 with the most forward citations. We further classified these patents into 4 categories: Spinal Stabilization Systems, Joint Implants or Procedures, Screw Delivery System or Method, and Access and Surgical Pathway Formation. RESULTS: Five hundred two MISS patents were identified initially, and 276 were retained following a screening process. Among the top 100 patents, the majority had active legal status. The largest category within the top 100 patents was Access and Surgical Pathway Formation, closely followed by Spinal Stabilization Systems and Joint Implants or Procedures. The smallest category was Screw Delivery System or Method. Notably, the majority of the top 100 patents had priority years falling between 2000 and 2009, and there was a moderate positive correlation between patent rank and priority year. CONCLUSIONS: Thus far, patents related to Access and Surgical Pathway Formation have laid the foundation for subsequent innovations in Spinal Stabilization Systems and Screw Technology. This study serves as a valuable resource for guiding future innovations in this rapidly evolving field.
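
The rank-versus-year relationship reported above can be checked with a rank correlation. A minimal sketch follows; Spearman's rho is an assumed choice of statistic (the abstract does not name one), and the data are illustrative.

```python
# Sketch: correlation between citation rank and priority year.
# Spearman's rho is an assumption, and the values below are
# illustrative, not the study's patent data.
from scipy.stats import spearmanr

ranks = [1, 2, 3, 4, 5, 6]                             # 1 = most forward citations
priority_years = [2002, 2001, 2005, 2004, 2008, 2009]  # hypothetical
rho, p = spearmanr(ranks, priority_years)
print(f"rho = {rho:.2f}, p = {p:.3f}")
```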

4.
Article in English | MEDLINE | ID: mdl-39137403

ABSTRACT

BACKGROUND: Acute hip fractures are a public health problem affecting primarily older adults. Chat Generative Pretrained Transformer may be useful in providing appropriate clinical recommendations for beneficial treatment. OBJECTIVE: To evaluate the accuracy of Chat Generative Pretrained Transformer (ChatGPT)-4.0 by comparing its appropriateness scores for acute hip fractures with those of the American Academy of Orthopaedic Surgeons (AAOS) Appropriate Use Criteria across 30 patient scenarios. "Appropriateness" indicates that the expected health benefits of treatment exceed the expected negative consequences by a wide margin. METHODS: Using the AAOS Appropriate Use Criteria as the benchmark, appropriateness was assessed with numerical scores from 1 to 9. For each patient scenario, ChatGPT-4.0 was asked to assign an appropriateness score to each of six treatments for managing acute hip fractures. RESULTS: Thirty patient scenarios were evaluated, yielding 180 paired scores. Comparing ChatGPT-4.0 with AAOS scores, there was a positive correlation for multiple cannulated screw fixation, total hip arthroplasty, hemiarthroplasty, and long cephalomedullary nails. Statistically significant differences were observed only between scores for long cephalomedullary nails. CONCLUSION: ChatGPT-4.0 scores were not concordant with AAOS scores, overestimating the appropriateness of total hip arthroplasty, hemiarthroplasty, and long cephalomedullary nails and underestimating that of the other three treatments. ChatGPT-4.0 was inadequate in selecting the treatment deemed acceptable, most reasonable, and most likely to improve patient outcomes.
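
As a sketch of how paired 1-to-9 appropriateness scores can be compared per treatment, the snippet below pairs a Spearman correlation with a Wilcoxon signed-rank test. Both test choices and all score values are assumptions for illustration; the abstract does not name the statistics used.

```python
# Sketch: comparing paired appropriateness scores for one treatment.
# The tests and the score arrays are assumptions, not the study's data.
import numpy as np
from scipy.stats import spearmanr, wilcoxon

aaos_scores = np.array([7, 8, 3, 5, 9, 2, 6, 4, 8, 7])     # hypothetical
chatgpt_scores = np.array([8, 9, 5, 6, 9, 4, 7, 6, 9, 8])  # hypothetical

rho, _ = spearmanr(aaos_scores, chatgpt_scores)  # monotonic agreement
stat, p = wilcoxon(aaos_scores, chatgpt_scores)  # paired, nonparametric
print(f"rho = {rho:.2f}, Wilcoxon p = {p:.3f}")  # small p suggests systematic over/underscoring
```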


Subjects
Hip Fractures , Humans , Hip Fractures/surgery , Aged , Female , Male , Aged, 80 and over , Arthroplasty, Replacement, Hip , Hemiarthroplasty , Practice Guidelines as Topic , Acute Disease , Language
5.
Neurospine ; 21(1): 128-146, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38569639

ABSTRACT

OBJECTIVE: Large language models, such as Chat Generative Pre-trained Transformer (ChatGPT), have great potential for streamlining medical processes and assisting physicians in clinical decision-making. This study aimed to assess the potential of ChatGPT's two models (GPT-3.5 and GPT-4.0) to support clinical decision-making by comparing their responses regarding antibiotic prophylaxis in spine surgery to accepted clinical guidelines. METHODS: The ChatGPT models were prompted with questions from the North American Spine Society (NASS) Evidence-Based Clinical Guidelines for Multidisciplinary Spine Care for Antibiotic Prophylaxis in Spine Surgery (2013). Their responses were then compared and assessed for accuracy. RESULTS: Of the 16 NASS guideline questions concerning antibiotic prophylaxis, 10 responses (62.5%) were accurate for the GPT-3.5 model and 13 (81%) were accurate for GPT-4.0. Twenty-five percent of GPT-3.5 answers were deemed overly confident, while 62.5% of GPT-4.0 answers directly used the NASS guideline as evidence for their responses. CONCLUSION: ChatGPT demonstrated an impressive ability to accurately answer clinical questions. The GPT-3.5 model's performance was limited by its tendency to give overly confident responses and its inability to identify the most significant elements in its responses. The GPT-4.0 model's responses had higher accuracy and frequently cited the NASS guideline as direct evidence. While GPT-4.0 is still far from perfect, it has shown a superior ability, compared with GPT-3.5, to extract the most relevant available research. Thus, while ChatGPT has shown far-reaching potential, scrutiny should still be exercised regarding its clinical use at this time.
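
With only 16 guideline questions, interval estimates help put the 62.5% versus 81% accuracy gap in perspective. The sketch below adds Wilson 95% confidence intervals, which the abstract itself does not report.

```python
# Sketch: accuracy point estimates with Wilson 95% confidence intervals.
# The intervals are added here for context; the abstract reports only
# the raw proportions (10/16 and 13/16).
from statsmodels.stats.proportion import proportion_confint

for model, correct in [("GPT-3.5", 10), ("GPT-4.0", 13)]:
    low, high = proportion_confint(correct, 16, alpha=0.05, method="wilson")
    print(f"{model}: {correct}/16 = {correct / 16:.1%} (95% CI {low:.1%}-{high:.1%})")
```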

6.
Spine (Phila Pa 1976) ; 49(9): 640-651, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38213186

ABSTRACT

STUDY DESIGN: Comparative analysis. OBJECTIVE: To evaluate Chat Generative Pre-trained Transformer's (ChatGPT's) ability to predict appropriate clinical recommendations based on the most recent clinical guidelines for the diagnosis and treatment of low back pain. BACKGROUND: Low back pain is a very common and often debilitating condition that affects many people globally. ChatGPT is an artificial intelligence model that may be able to generate recommendations for low back pain. MATERIALS AND METHODS: Using the North American Spine Society Evidence-Based Clinical Guidelines as the gold standard, 82 clinical questions relating to low back pain were entered into ChatGPT (GPT-3.5) independently. For each question, we recorded ChatGPT's answer, then used a point-answer system (the point being the guideline recommendation and the answer being ChatGPT's response) and asked ChatGPT whether the point was mentioned in the answer, to assess accuracy. This accuracy assessment was repeated for each question by guideline category with one caveat: ChatGPT was first prompted to answer as an experienced orthopedic surgeon. A two-sample proportion z test was used to assess any differences between the pre-prompt and post-prompt scenarios, with alpha = 0.05. RESULTS: ChatGPT's response was accurate 65% of the time (72% post-prompt, P = 0.41) for guidelines with clinical recommendations, 46% (58% post-prompt, P = 0.11) for guidelines with insufficient or conflicting data, and 49% (16% post-prompt, P = 0.003*) for guidelines with no adequate study to address the clinical question. For guidelines with insufficient or conflicting data, 44% (25% post-prompt, P = 0.01*) of ChatGPT responses wrongly suggested that sufficient evidence existed. CONCLUSION: ChatGPT was able to produce sufficient clinical guideline recommendations for low back pain, with overall improvements if initially prompted. However, it tended to wrongly suggest that evidence existed and, especially post-prompt, often failed to mention when there was not enough evidence to give an accurate recommendation.
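
The pre-prompt versus post-prompt comparison described above uses a standard two-sample proportion z test. A minimal sketch follows, with hypothetical per-category counts, since the abstract reports percentages but not the number of questions in each guideline category.

```python
# Sketch: two-sample proportion z test for pre- vs. post-prompt accuracy
# in one guideline category. The counts are hypothetical placeholders.
from statsmodels.stats.proportion import proportions_ztest

accurate = [13, 14]     # accurate answers: pre-prompt, post-prompt (hypothetical)
n_questions = [20, 20]  # questions in the category (hypothetical)
z, p = proportions_ztest(accurate, n_questions)
print(f"z = {z:.2f}, p = {p:.3f}")  # significant if p < alpha = 0.05
```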


Subjects
Low Back Pain , Orthopedic Surgeons , Humans , Low Back Pain/diagnosis , Low Back Pain/therapy , Artificial Intelligence , Spine
7.
Spine J ; 23(11): 1684-1691, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37499880

ABSTRACT

BACKGROUND CONTEXT: Venous thromboembolism is a negative outcome of elective spine surgery. However, the use of thromboembolic chemoprophylaxis in this patient population is controversial due to the possible increased risk of epidural hematoma. ChatGPT is an artificial intelligence model that may be able to generate recommendations for thromboembolic prophylaxis in spine surgery. PURPOSE: To evaluate the accuracy of ChatGPT recommendations for thromboembolic prophylaxis in spine surgery. STUDY DESIGN/SETTING: Comparative analysis. PATIENT SAMPLE: None. OUTCOME MEASURES: Accuracy, over-conclusiveness, supplementary content, and incompleteness of ChatGPT responses compared with the North American Spine Society (NASS) clinical guidelines. METHODS: ChatGPT was prompted with questions from the 2009 NASS clinical guidelines for antithrombotic therapies and evaluated for concordance with the clinical guidelines. ChatGPT-3.5 responses were obtained on March 5, 2023, and ChatGPT-4.0 responses were obtained on April 7, 2023. A ChatGPT response was classified as accurate if it did not contradict the clinical guideline. Three additional categories were created to further evaluate the ChatGPT responses in comparison to the NASS guidelines: over-conclusive, supplementary, and incomplete. A response was classified as over-conclusive if ChatGPT made a recommendation where the NASS guideline did not provide one, as supplementary if it included additional relevant information not specified by the NASS guideline, and as incomplete if it failed to provide relevant information included in the NASS guideline. RESULTS: Twelve clinical guidelines were evaluated in total. Compared with the NASS clinical guidelines, ChatGPT-3.5 was accurate in 4 (33%) of its responses, while ChatGPT-4.0 was accurate in 11 (92%) responses. ChatGPT-3.5 was over-conclusive in 6 (50%) of its responses, while ChatGPT-4.0 was over-conclusive in 1 (8%) response. ChatGPT-3.5 provided supplemental information in 8 (67%) of its responses, and ChatGPT-4.0 provided supplemental information in 11 (92%) responses. Four (33%) responses from ChatGPT-3.5 were incomplete, and 4 (33%) responses from ChatGPT-4.0 were incomplete. CONCLUSIONS: ChatGPT was able to provide recommendations for thromboembolic prophylaxis with reasonable accuracy. ChatGPT-3.5 tended to cite nonexistent sources and was more likely to give specific recommendations, while ChatGPT-4.0 was more conservative in its answers. As ChatGPT is continuously updated, further validation is needed before it can be used as a guideline for clinical practice.
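
Note that the four grading categories are not mutually exclusive: for ChatGPT-4.0, 11 of 12 responses were accurate and 11 of 12 were supplementary, so the labels must overlap. A small sketch of a tally under that reading, with illustrative grades rather than the study's actual data.

```python
# Sketch: tallying non-exclusive response classifications. Each response
# carries independent boolean flags; the grades below are illustrative.
from collections import Counter

grades = [  # one dict per guideline question
    {"accurate": True, "over_conclusive": False, "supplementary": True, "incomplete": False},
    {"accurate": False, "over_conclusive": True, "supplementary": True, "incomplete": True},
    {"accurate": True, "over_conclusive": False, "supplementary": False, "incomplete": False},
]
tally = Counter(label for g in grades for label, flag in g.items() if flag)
print({label: f"{n}/{len(grades)}" for label, n in tally.items()})
```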
