Results 1 - 3 of 3
1.
J Vasc Interv Radiol; 35(7): 1066-1071, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38513754

ABSTRACT

PURPOSE: To evaluate conflicts of interest (COIs) among interventional radiologists and related specialists who mention specific devices or companies on the social media (SoMe) platform X, formerly Twitter.

MATERIALS AND METHODS: In total, 13,809 posts made on X between October 7, 2021, and December 31, 2021, were evaluated. Posts by U.S. interventional radiologists and related specialists that mentioned a specific device or company were identified. A positive COI was defined as receipt of a payment from the device manufacturer or company within the 36 months prior to posting. The Centers for Medicare & Medicaid Services Open Payments database was used to identify financial payments. The prevalence and value of COIs were assessed and compared between posts mentioning a device or company and a paired control group using descriptive statistics, chi-squared tests, and independent t tests.

RESULTS: Eighty posts containing mentions of 100 specific devices or companies were evaluated. COIs were present in 53% (53/100). When mentioning a specific device or product, 40% of interventional radiologists had a COI, compared with 62% of neurosurgeons. Physicians who mentioned a specific device or company were 3.7 times more likely to have a positive COI than the paired control group (53/100 vs 14/100; P < .001). Of the 31 physicians with a COI, the median amount received was $2,270. None of the positive COIs were disclosed.

CONCLUSIONS: Physicians posting on SoMe about a specific device or company were more likely to have a financial COI than authors of posts that did not mention a specific device or company. No COI was disclosed in any of the posts, limiting followers' ability to weigh potential bias.


Subjects
Conflict of Interest, Endovascular Procedures, Radiologists, Social Media, Conflict of Interest/economics, Humans, Radiologists/economics, Radiologists/ethics, Endovascular Procedures/economics, United States, Neurosurgeons/economics, Neurosurgeons/ethics, Disclosure, Specialization/economics, Health Care Sector/economics, Health Care Sector/ethics
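
As a rough illustration of the statistical comparison described in the abstract above, the sketch below rebuilds a 2x2 contingency table from the reported counts (53/100 posts with a COI among device- or company-mentioning posts vs 14/100 in the paired control group) and runs a chi-squared test. The counts come from the abstract; the use of scipy and the exact test configuration are assumptions, not the authors' code.

# Hypothetical reconstruction of the COI prevalence comparison (53/100 vs 14/100).
from scipy.stats import chi2_contingency

# Rows: posts mentioning a device/company vs paired control posts.
# Columns: COI present, COI absent.
table = [
    [53, 47],
    [14, 86],
]

chi2, p, dof, expected = chi2_contingency(table)
prevalence_ratio = (53 / 100) / (14 / 100)  # about 3.8, close to the reported 3.7

print(f"chi2 = {chi2:.1f}, p = {p:.2e}, prevalence ratio = {prevalence_ratio:.1f}")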
2.
Arthroscopy; 2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39209078

ABSTRACT

PURPOSE: To assess the ability of ChatGPT, Bard, and BingChat to generate accurate orthopaedic diagnoses or corresponding treatments by comparing their performance on the Orthopaedic In-Training Examination (OITE) with that of orthopaedic trainees.

METHODS: OITE question sets from 2021 and 2022 were compiled into a set of 420 questions. ChatGPT (GPT-3.5), Bard, and BingChat were instructed to select one of the provided responses to each question. Accuracy on the composite question set was recorded and compared with that of human cohorts, including medical students and orthopaedic residents stratified by post-graduate year.

RESULTS: ChatGPT correctly answered 46.3% of the composite OITE questions, whereas BingChat correctly answered 52.4% and Bard 51.4%. With image-associated questions excluded, the overall accuracies of ChatGPT, BingChat, and Bard improved to 49.1%, 53.5%, and 56.8%, respectively. Medical students and orthopaedic residents (PGY1-5) correctly answered 30.8%, 53.1%, 60.4%, 66.6%, 70.0%, and 71.9% of questions, respectively.

CONCLUSION: ChatGPT, Bard, and BingChat answered OITE questions with an accuracy similar to that of first-year orthopaedic surgery residents, and they did so without the images or other supplementary media provided to human test takers.
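As an illustration of the accuracy computation described in the methods above, here is a minimal sketch of scoring multiple-choice answers overall and with image-associated questions excluded. The record format and field names are hypothetical; the study's actual grading pipeline is not described in the abstract.

# Hypothetical scoring of answers to OITE-style multiple-choice questions.
def accuracy(records):
    """records: dicts with 'model_answer', 'correct_answer', and 'has_image' keys."""
    correct = sum(r["model_answer"] == r["correct_answer"] for r in records)
    return correct / len(records)

def accuracy_excluding_images(records):
    # Restrict scoring to text-only questions, as in the secondary analysis above.
    text_only = [r for r in records if not r["has_image"]]
    return accuracy(text_only)

# Toy example (not the study data):
sample = [
    {"model_answer": "B", "correct_answer": "B", "has_image": False},
    {"model_answer": "C", "correct_answer": "A", "has_image": True},
    {"model_answer": "D", "correct_answer": "D", "has_image": False},
]
print(accuracy(sample), accuracy_excluding_images(sample))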

3.
Orthopedics; 47(2): e85-e89, 2024.
Article in English | MEDLINE | ID: mdl-37757748

ABSTRACT

Advances in artificial intelligence and machine learning models such as Chat Generative Pre-trained Transformer (ChatGPT) have occurred at a remarkably fast rate. OpenAI released its newest model of ChatGPT, GPT-4, in March 2023. It offers a wide range of medical applications and has demonstrated notable proficiency on many medical board examinations. This study sought to assess GPT-4's performance on the Orthopaedic In-Training Examination (OITE), which is used to prepare residents for the American Board of Orthopaedic Surgery (ABOS) Part I Examination. GPT-4's performance was also compared with that of the previous iteration of ChatGPT, GPT-3.5, released 4 months before GPT-4. GPT-4 correctly answered 251 of 396 attempted questions (63.4%), whereas GPT-3.5 correctly answered 46.3% of 410 attempted questions. GPT-4 was significantly more accurate than GPT-3.5 on orthopedic board-style questions (P<.00001). GPT-4's performance is most comparable to that of an average third-year orthopedic surgery resident, whereas GPT-3.5 performed below an average orthopedic intern. GPT-4's overall accuracy was just below the approximate threshold that indicates a likely pass on the ABOS Part I Examination. These results demonstrate significant improvements in OpenAI's newest model, GPT-4. Future studies should assess potential clinical applications as AI models continue to be trained on larger data sets and offer more capabilities. [Orthopedics. 2024;47(2):e85-e89.].


Subjects
Internship and Residency, Orthopedic Procedures, Orthopedics, Humans, Orthopedics/education, Artificial Intelligence, Educational Measurement, Clinical Competence
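
A minimal sketch of the GPT-4 vs GPT-3.5 accuracy comparison reported above, assuming a chi-squared test on a 2x2 table of correct and incorrect answers. The GPT-4 count (251/396) is taken from the abstract; the GPT-3.5 count (190/410) is back-calculated from the reported 46.3%, and the choice of test is an assumption, since the abstract does not name the method used.

# Hypothetical reconstruction of the GPT-4 vs GPT-3.5 comparison.
from scipy.stats import chi2_contingency

gpt4_correct, gpt4_total = 251, 396    # 63.4% reported
gpt35_correct, gpt35_total = 190, 410  # ~46.3% reported (back-calculated count)

table = [
    [gpt4_correct, gpt4_total - gpt4_correct],
    [gpt35_correct, gpt35_total - gpt35_correct],
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"GPT-4 accuracy   = {gpt4_correct / gpt4_total:.1%}")
print(f"GPT-3.5 accuracy = {gpt35_correct / gpt35_total:.1%}")
print(f"chi2 = {chi2:.1f}, p = {p:.3g}")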