1.
J Vasc Interv Radiol; 2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38513754

ABSTRACT

PURPOSE: To evaluate conflicts of interest (COIs) among interventional radiologists and physicians in related specialties who mention specific devices or companies on the social media (SoMe) platform X, formerly Twitter.

MATERIALS AND METHODS: In total, 13,809 posts on X between October 7, 2021, and December 31, 2021, were evaluated. Posts by U.S. interventional radiologists and physicians in related specialties that mentioned a specific device or company were identified. A positive COI was defined as receipt of a payment from the device manufacturer or company within the 36 months before posting. The Centers for Medicare & Medicaid Services Open Payments database was used to identify financial payments. The prevalence and value of COIs were assessed and compared between posts mentioning a device or company and a paired control group using descriptive statistics, chi-squared tests, and independent t tests.

RESULTS: Eighty posts containing mentions of 100 specific devices or companies were evaluated. COIs were present in 53% (53/100). When mentioning a specific device or product, 40% of interventional radiologists had a COI, compared with 62% of neurosurgeons. Physicians who mentioned a specific device or company were 3.7 times more likely to have a positive COI than the paired control group (53/100 vs 14/100; P < .001). Among the 31 physicians with a COI, the median payment received was $2,270. None of the positive COIs were disclosed.

CONCLUSIONS: Physicians posting on SoMe about a specific device or company are more likely to have a financial COI than authors of posts not mentioning a specific device or company. No COI was disclosed in any of the posts, limiting followers' ability to weigh potential bias.
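The group comparison reported above (53/100 posts with a COI versus 14/100 in the paired control group) can be reproduced with a chi-squared test on a 2x2 contingency table. The sketch below is a minimal illustration of that calculation using the counts from the abstract; the use of scipy is an assumption, and this is not the authors' analysis code.

```python
# Minimal sketch of the 2x2 chi-squared comparison described in the abstract.
# Counts are taken from the reported results (53/100 vs 14/100); this is
# illustrative only, not the study's analysis code.
from scipy.stats import chi2_contingency

#                [COI present, COI absent]
device_posts  = [53, 47]   # posts mentioning a specific device or company
control_posts = [14, 86]   # paired control posts

chi2, p, dof, expected = chi2_contingency([device_posts, control_posts])

relative_risk = (53 / 100) / (14 / 100)  # ~3.8; the abstract reports 3.7 times

print(f"chi2 = {chi2:.2f}, p = {p:.2e}, relative risk = {relative_risk:.1f}")
```

With these counts the test returns a p-value well below .001, consistent with the reported P < .001.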

2.
Orthopedics; 47(2): e85-e89, 2024.
Article in English | MEDLINE | ID: mdl-37757748

ABSTRACT

Advances in artificial intelligence and machine learning models such as Chat Generative Pre-trained Transformer (ChatGPT) have occurred at a remarkable pace. OpenAI released its newest model, GPT-4, in March 2023; it offers a wide range of potential medical applications and has demonstrated notable proficiency on many medical board examinations. This study assessed GPT-4's performance on the Orthopaedic In-Training Examination (OITE), which is used to prepare residents for the American Board of Orthopaedic Surgery (ABOS) Part I Examination. GPT-4's performance was also compared with that of the previous iteration of ChatGPT, GPT-3.5, released 4 months earlier. GPT-4 correctly answered 251 of the 396 attempted questions (63.4%), whereas GPT-3.5 correctly answered 46.3% of 410 attempted questions. GPT-4 was significantly more accurate than GPT-3.5 on orthopedic board-style questions (P < .00001). GPT-4's performance is most comparable to that of an average third-year orthopedic surgery resident, whereas GPT-3.5 performed below the level of an average orthopedic intern. GPT-4's overall accuracy was just below the approximate threshold indicating a likely pass on the ABOS Part I Examination. Our results demonstrate significant improvements in OpenAI's newest model, GPT-4. Future studies should assess potential clinical applications as AI models continue to be trained on larger data sets and offer more capabilities. [Orthopedics. 2024;47(2):e85-e89.].
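The reported accuracy gap (251/396 correct for GPT-4 versus 46.3% of 410 for GPT-3.5) can be checked with a two-proportion comparison. The sketch below is illustrative only: the GPT-3.5 correct count (~190) is inferred from the reported 46.3%, the choice of a chi-squared test is an assumption, and this is not the study's analysis code.

```python
# Illustrative check of the reported GPT-4 vs GPT-3.5 accuracy gap on the OITE.
# The GPT-3.5 correct count (190) is inferred from the reported 46.3% of 410;
# the chi-squared test is an assumption, not necessarily the test used in the study.
from scipy.stats import chi2_contingency

gpt4_correct, gpt4_total = 251, 396                    # 63.4% reported
gpt35_correct, gpt35_total = round(0.463 * 410), 410   # ~190 correct, inferred

table = [
    [gpt4_correct,  gpt4_total - gpt4_correct],
    [gpt35_correct, gpt35_total - gpt35_correct],
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"GPT-4 accuracy:   {gpt4_correct / gpt4_total:.1%}")
print(f"GPT-3.5 accuracy: {gpt35_correct / gpt35_total:.1%}")
print(f"chi2 = {chi2:.2f}, p = {p:.2e}")  # p falls below .00001, matching the abstract
```

Under these inferred counts the p-value lands around 1e-6, in line with the reported P < .00001.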


Subjects
Internship and Residency, Orthopedic Procedures, Orthopedics, Humans, Orthopedics/education, Artificial Intelligence, Educational Measurement, Clinical Competence