Results 1 - 5 of 5
1.
J Vasc Interv Radiol; 35(7): 1066-1071, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38513754

ABSTRACT

PURPOSE: To evaluate conflicts of interest (COIs) among interventional radiologists and physicians in related specialties who mention specific devices or companies on the social media (SoMe) platform X, formerly Twitter.

MATERIALS AND METHODS: In total, 13,809 posts made on X between October 7, 2021, and December 31, 2021, were evaluated. Posts by U.S. interventional radiologists and physicians in related specialties that mentioned a specific device or company were identified. A positive COI was defined as receipt of a payment from the device manufacturer or company within the 36 months before posting. The Centers for Medicare & Medicaid Services Open Payments database was used to identify financial payments. The prevalence and value of COIs were assessed and compared between posts mentioning a device or company and a paired control group using descriptive statistics, chi-squared tests, and independent t tests.

RESULTS: Eighty posts containing mentions of 100 specific devices or companies were evaluated. COIs were present in 53% (53/100). When mentioning a specific device or product, 40% of interventional radiologists had a COI, compared with 62% of neurosurgeons. Physicians who mentioned a specific device or company were 3.7 times more likely to have a positive COI than the paired control group (53/100 vs 14/100; P < .001). Among the 31 physicians with a COI, the median amount received was $2,270. None of the positive COIs were disclosed.

CONCLUSIONS: Physicians posting on SoMe about a specific device or company were more likely to have a financial COI than authors of posts not mentioning one. No COI was disclosed in any post, limiting followers' ability to weigh potential bias.
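
The headline comparison lends itself to a quick check from the reported counts. Below is a minimal sketch (Python with SciPy, not the authors' analysis) recomputing the chi-squared test and the crude prevalence ratio; the small gap from the reported 3.7x likely reflects the authors' exact method.

```python
# A minimal sketch, not the authors' code: recomputes the reported
# comparison -- COI prevalence of 53/100 in device-mentioning posts
# vs. 14/100 in the paired control group.
from scipy.stats import chi2_contingency

# 2x2 table: rows = [device-mentioning posts, control posts],
# columns = [COI present, COI absent]
table = [[53, 47],
         [14, 86]]

chi2, p, dof, expected = chi2_contingency(table)

# Crude prevalence ratio (~3.79); the abstract reports 3.7x.
prevalence_ratio = (53 / 100) / (14 / 100)

print(f"chi2 = {chi2:.1f}, p = {p:.1e}")  # p < .001, as reported
print(f"COI prevalence ratio: {prevalence_ratio:.2f}x")
```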


Subjects
Conflict of Interest, Endovascular Procedures, Radiologists, Social Media, Conflict of Interest/economics, Humans, Radiologists/economics, Radiologists/ethics, Endovascular Procedures/economics, United States, Neurosurgeons/economics, Neurosurgeons/ethics, Disclosure, Specialization/economics, Health Care Sector/economics, Health Care Sector/ethics
2.
AJR Am J Roentgenol; 2024 Sep 25.
Article in English | MEDLINE | ID: mdl-39320355

ABSTRACT

Background: Many patients with symptomatic knee osteoarthritis (KOA) are refractory to traditional nonsurgical treatments such as intraarticular corticosteroid (CS) injection but are not yet eligible for, or decline, surgery. Genicular artery embolization (GAE) and radiofrequency ablation (RFA) are emerging adjunctive or alternative minimally invasive treatments.

Objective: To perform a cost-effectiveness analysis (CEA) comparing CS, GAE, and RFA for treatment of symptomatic KOA, using a Markov model based on a de novo network meta-analysis (NMA) of randomized controlled trials (RCTs).

Methods: CEA was conducted comparing GAE and RFA with CS using a Markov cohort state-transition model from a U.S. Medicare payer's perspective over a 4-year time horizon. The model incorporated each treatment's success rate, attrition rate, costs, and utility benefit. Utility benefit values were derived at short-term (0.5-3 months) and long-term (6-12 months) posttreatment follow-up from an NMA of published RCTs using an outcome of improved knee pain and/or function. Analyses were conducted at a willingness-to-pay threshold of $100,000 per quality-adjusted life year (QALY). Sensitivity analyses were performed, including simulations of various cost settings (i.e., office vs hospital outpatient treatment).

Results: RFA demonstrated a larger treatment effect than GAE, more pronounced at short-term (standardized mean difference [SMD], -1.6688; 95% CI, -2.7806 to -0.5571; p = .003) than at long-term (SMD, -0.3822; 95% CI, -1.9743 to 1.2100; p = .64) follow-up. Across cost settings, incremental cost-effectiveness ratios (ICERs) relative to CS were $561-1,563/QALY for GAE versus $76-429/QALY for RFA (excluding scenarios in which RFA was dominated by CS). GAE demonstrated a higher probability of cost-effectiveness than RFA (41.6-54.8% vs 18.4-29.2%, respectively). GAE was more cost-effective than RFA when the GAE clinical success rate exceeded 32.1-51.0%, the post-GAE utility value exceeded 0.562-0.617, and the GAE quarterly attrition rate was less than 8.8-17.4%. RFA was more cost-effective when baseline pretreatment utility values exceeded 0.695-0.713. Neither GAE costs nor RFA costs were sensitive parameters.

Conclusion: Across scenarios, GAE was consistently the most likely cost-effective treatment option compared with RFA and CS, although clinical success rates, attrition rates, and utility values affect its cost-effectiveness.

Clinical Impact: GAE is likely to be more cost-effective than RFA or CS for treatment of symptomatic KOA.
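
The decision rule at the heart of the CEA is the ICER compared against the willingness-to-pay threshold. The sketch below illustrates that rule; it is not the study's Markov model, and every cost and QALY total is a hypothetical placeholder chosen only to land within the reported ICER ranges.

```python
# A minimal sketch of the ICER decision rule described above. These
# cost and QALY totals are hypothetical placeholders, not the study's
# model outputs; they are chosen only to fall in the reported ranges.

WTP = 100_000  # willingness-to-pay threshold, $/QALY

def icer(cost_tx, qaly_tx, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio vs. a reference ($/QALY gained)."""
    return (cost_tx - cost_ref) / (qaly_tx - qaly_ref)

# Hypothetical 4-year totals per strategy: (total cost $, total QALYs).
strategies = {
    "CS":  (4200.0, 2.80),
    "GAE": (4350.0, 2.95),  # -> ICER ~$1,000/QALY (reported: $561-1,563)
    "RFA": (4225.0, 2.90),  # -> ICER ~$250/QALY  (reported: $76-429)
}

cost_cs, qaly_cs = strategies["CS"]
for name in ("GAE", "RFA"):
    cost, qaly = strategies[name]
    ratio = icer(cost, qaly, cost_cs, qaly_cs)
    verdict = "cost-effective" if ratio < WTP else "not cost-effective"
    print(f"{name} vs CS: ICER = ${ratio:,.0f}/QALY ({verdict} at ${WTP:,}/QALY)")
```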

3.
Arthroscopy; 2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39209078

ABSTRACT

PURPOSE: To assess the ability of ChatGPT, Bard, and BingChat to generate accurate orthopaedic diagnoses and corresponding treatments by comparing their performance on the Orthopaedic In-Training Examination (OITE) with that of orthopaedic trainees.

METHODS: OITE question sets from 2021 and 2022 were compiled into a set of 420 questions. ChatGPT (GPT-3.5), Bard, and BingChat were instructed to select one of the provided responses to each question. Composite accuracy was recorded and compared with that of human cohorts, including medical students and orthopaedic residents stratified by postgraduate year (PGY).

RESULTS: ChatGPT correctly answered 46.3% of composite questions, whereas BingChat correctly answered 52.4% and Bard 51.4%. After excluding image-associated questions, the overall accuracies of ChatGPT, BingChat, and Bard improved to 49.1%, 53.5%, and 56.8%, respectively. Medical students and PGY1-5 orthopaedic residents correctly answered 30.8%, 53.1%, 60.4%, 66.6%, 70.0%, and 71.9% of questions, respectively.

CONCLUSION: ChatGPT, Bard, and BingChat answered OITE questions with an accuracy similar to that of first-year orthopaedic surgery residents, and achieved this without the images or other supplementary media that human test takers are provided.
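
As a concrete illustration of the grading procedure described in the methods — one selected choice per item, with accuracy tallied with and without image-associated questions — here is a minimal sketch with toy data; the items and answers below are invented placeholders, not real OITE content.

```python
# A minimal sketch of the scoring logic described above: each model
# picks one provided choice per OITE item; accuracy is the share of
# matches with the key, optionally excluding image-associated items.
# The items and answers here are toy placeholders, not real OITE data.
from dataclasses import dataclass

@dataclass
class Item:
    key: str          # correct choice, e.g., "B"
    has_image: bool   # whether the item includes an image

def accuracy(items, responses, include_images=True):
    pairs = [(i, r) for i, r in zip(items, responses)
             if include_images or not i.has_image]
    return sum(r == i.key for i, r in pairs) / len(pairs)

items = [Item("B", False), Item("C", True), Item("A", False), Item("D", True)]
model_answers = ["B", "A", "A", "D"]  # one selected choice per item

print(f"composite accuracy: {accuracy(items, model_answers):.1%}")
print(f"text-only accuracy: {accuracy(items, model_answers, include_images=False):.1%}")
```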

4.
Orthopedics; 47(2): e85-e89, 2024.
Article in English | MEDLINE | ID: mdl-37757748

ABSTRACT

Advances in artificial intelligence (AI) and machine learning models such as Chat Generative Pre-trained Transformer (ChatGPT) have occurred at a remarkably fast rate. OpenAI released its newest model, GPT-4, in March 2023. It offers a wide range of medical applications and has demonstrated notable proficiency on many medical board examinations. This study assessed GPT-4's performance on the Orthopaedic In-Training Examination (OITE), which is used to prepare residents for the American Board of Orthopaedic Surgery (ABOS) Part I Examination. GPT-4's performance was also compared with that of the previous iteration, GPT-3.5, released 4 months earlier. GPT-4 correctly answered 251 of the 396 questions it attempted (63.4%), whereas GPT-3.5 correctly answered 46.3% of 410 attempted questions. GPT-4 was significantly more accurate than GPT-3.5 on orthopedic board-style questions (P<.00001). GPT-4's performance is most comparable to that of an average third-year orthopedic surgery resident, whereas GPT-3.5 performed below an average orthopedic intern. GPT-4's overall accuracy was just below the approximate threshold indicating a likely pass on the ABOS Part I Examination. These results demonstrate significant improvements in OpenAI's newest model, GPT-4. Future studies should assess potential clinical applications as AI models continue to be trained on larger data sets and offer more capabilities. [Orthopedics. 2024;47(2):e85-e89.].
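
The significance claim can be reproduced from the reported figures. A minimal sketch (Python/SciPy, not the authors' analysis) follows; GPT-3.5's correct count of ~190 is inferred from the reported 46.3% of 410, not stated directly in the abstract.

```python
# A minimal sketch recomputing the GPT-4 vs. GPT-3.5 comparison from
# the reported figures. GPT-3.5's correct count (~190) is inferred
# from 46.3% of 410; the abstract does not state it directly.
from scipy.stats import chi2_contingency

gpt4_correct, gpt4_total = 251, 396     # 63.4%, as reported
gpt35_correct, gpt35_total = 190, 410   # ~46.3%, inferred count

table = [[gpt4_correct, gpt4_total - gpt4_correct],
         [gpt35_correct, gpt35_total - gpt35_correct]]

chi2, p, _, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.1e}")  # p well below .00001, as reported
```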


Subjects
Internship and Residency, Orthopedic Procedures, Orthopedics, Humans, Orthopedics/education, Artificial Intelligence, Educational Measurement, Clinical Competence
5.
World Neurosurg; 179: e160-e165, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37597659

ABSTRACT

BACKGROUND: Artificial intelligence (AI) and machine learning have transformed health care, with applications in various specialized fields. Neurosurgery can benefit from AI in surgical planning, predicting patient outcomes, and analyzing neuroimaging data. GPT-4, an updated language model with additional training parameters, has exhibited exceptional performance on standardized exams. This study examines GPT-4's competence on neurosurgical board-style questions, comparing its performance with that of medical students and residents, to explore its potential in medical education and clinical decision-making.

METHODS: GPT-4's performance was examined on 643 Congress of Neurological Surgeons (CNS) Self-Assessment Neurosurgery Exam (SANS) board-style questions from various neurosurgery subspecialties. Of these, 477 were text-based and 166 contained images. GPT-4 refused to answer 52 questions that contained no text. The remaining 591 questions were input into GPT-4, and its performance was evaluated based on first-time responses. Raw scores were analyzed across subspecialties and question types and compared with previous findings on ChatGPT's performance against SANS users, medical students, and neurosurgery residents.

RESULTS: GPT-4 attempted 91.9% of CNS SANS questions and achieved 76.6% accuracy. The model's accuracy increased to 79.0% for text-only questions. GPT-4 outperformed ChatGPT (P < 0.001) and scored highest in the pain/peripheral nerve category (84%) and lowest in the spine category (73%). It exceeded the performance of medical students (26.3%), neurosurgery residents (61.5%), and the national average of SANS users (69.3%) across all categories.

CONCLUSIONS: GPT-4 significantly outperformed medical students, neurosurgery residents, and the national average of SANS users. The model's accuracy suggests potential applications in educational settings and clinical decision-making, enhancing provider efficiency and improving patient care.
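
The attempt and accuracy figures above can be cross-checked from the stated counts. A minimal sketch follows; the correct-answer counts are back-calculated from the reported percentages rather than given in the abstract.

```python
# A minimal sketch cross-checking the reported rates from the stated
# counts. Correct-answer counts are back-calculated from the reported
# percentages (76.6% and 79.0%); the abstract does not list them.
total_items = 643
refused = 52                       # image-only items with no text
attempted = total_items - refused  # 591

text_based = 477
correct_overall = round(0.766 * attempted)     # ~453
correct_text_only = round(0.790 * text_based)  # ~377

print(f"attempt rate:       {attempted / total_items:.1%}")         # 91.9%
print(f"overall accuracy:   {correct_overall / attempted:.1%}")     # 76.6%
print(f"text-only accuracy: {correct_text_only / text_based:.1%}")  # 79.0%
```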


Subjects
Neuralgia, Neurosurgery, Students, Medical, Humans, Artificial Intelligence, Neurosurgical Procedures