Results 1 - 20 of 143
1.
J Infect Dis; 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39136574

ABSTRACT

BACKGROUND: Surgical site infection (SSI) is a common and costly complication in spinal surgery. Identifying risk factors and preventive strategies is crucial for reducing SSIs. GPT-4 has evolved from a simple text-based tool into a sophisticated multimodal data expert that can be valuable for clinicians. This study explored GPT-4's applications in SSI management across various clinical scenarios. METHODS: GPT-4 was employed in various clinical scenarios related to SSIs in spinal surgery. Researchers designed specific questions for GPT-4 to generate tailored responses. Six evaluators assessed these responses for logic and accuracy using a 5-point Likert scale. Inter-rater consistency was measured with Fleiss' kappa, and radar charts visualized GPT-4's performance. RESULTS: The inter-rater consistency, measured by Fleiss' kappa, ranged from 0.62 to 0.83. The overall average scores for logic and accuracy were 24.27 ± 0.4 and 24.46 ± 0.25 on the 5-point Likert scale. Radar charts showed GPT-4's consistently high performance across the various criteria. GPT-4 demonstrated high proficiency in creating personalized treatment plans tailored to diverse clinical patient records and offered interactive patient education. It significantly improved SSI management strategies and infection prediction models and identified emerging research trends. However, it had limitations in fine-tuning antibiotic treatments and customizing patient education materials. CONCLUSIONS: GPT-4 represents a significant advancement in managing SSIs in spinal surgery, promoting patient-centered care and precision medicine. Despite some limitations in antibiotic customization and patient education, GPT-4's continuous learning, attention to data privacy and security, collaboration with healthcare professionals, and patient acceptance of AI recommendations suggest its potential to revolutionize SSI management, although further development and clinical integration are required.
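
As a pointer for readers who want to reproduce the agreement statistic reported above, the following is a minimal sketch of a Fleiss' kappa computation for six evaluators scoring responses on a 1-5 Likert scale; the ratings matrix is purely illustrative, not the study's data.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Illustrative ratings only (not the study's data): one row per GPT-4 response,
# one column per evaluator, scores on a 1-5 Likert scale.
ratings = np.array([
    [5, 5, 4, 5, 5, 4],
    [4, 4, 4, 5, 4, 4],
    [5, 5, 5, 5, 4, 5],
    [3, 4, 3, 4, 4, 3],
    [5, 4, 5, 5, 5, 5],
])

# aggregate_raters turns the subject-by-rater matrix into subject-by-category counts.
counts, categories = aggregate_raters(ratings)
print(f"Fleiss' kappa: {fleiss_kappa(counts, method='fleiss'):.2f}")
```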

2.
Neuropathol Appl Neurobiol; 50(4): e12997, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39010256

ABSTRACT

AIMS: Recent advances in artificial intelligence, particularly with large language models like GPT-4Vision (GPT-4V)-a derivative feature of ChatGPT-have expanded the potential for medical image interpretation. This study evaluates the accuracy of GPT-4V in image classification tasks of histopathological images and compares its performance with a traditional convolutional neural network (CNN). METHODS: We utilised 1520 images, including haematoxylin and eosin staining and tau immunohistochemistry, from patients with various neurodegenerative diseases, such as Alzheimer's disease (AD), progressive supranuclear palsy (PSP) and corticobasal degeneration (CBD). We assessed GPT-4V's performance using multi-step prompts to determine how textual context influences image interpretation. We also employed few-shot learning to improve GPT-4V's diagnostic performance in classifying three specific tau lesions-astrocytic plaques, neuritic plaques and tufted astrocytes-and compared the outcomes with the CNN model YOLOv8. RESULTS: GPT-4V accurately recognised staining techniques and tissue origin but struggled with specific lesion identification. The interpretation of images was notably influenced by the provided textual context, which sometimes led to diagnostic inaccuracies. For instance, when presented with images of the motor cortex, the diagnosis shifted inappropriately from AD to CBD or PSP. However, few-shot learning markedly improved GPT-4V's diagnostic capabilities, enhancing accuracy from 40% in zero-shot learning to 90% with 20-shot learning, matching the performance of YOLOv8, which required 100-shot learning to achieve the same accuracy. CONCLUSIONS: Although GPT-4V faces challenges in independently interpreting histopathological images, few-shot learning significantly improves its performance. This approach is especially promising for neuropathology, where acquiring extensive labelled datasets is often challenging.
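
The few-shot prompting described above can, in principle, be assembled against a vision-capable chat endpoint as sketched below; this is not the authors' pipeline, and the model name, file paths, and prompt wording are placeholder assumptions.

```python
import base64
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def image_part(path: str) -> dict:
    """Encode a local image as a data URL for a vision-capable chat model."""
    b64 = base64.b64encode(Path(path).read_bytes()).decode()
    return {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}}


# Hypothetical few-shot examples: (image path, tau lesion label).
examples = [
    ("examples/astrocytic_plaque_01.jpg", "astrocytic plaque"),
    ("examples/neuritic_plaque_01.jpg", "neuritic plaque"),
    ("examples/tufted_astrocyte_01.jpg", "tufted astrocyte"),
]

content = [{"type": "text", "text": "Classify the final image as astrocytic plaque, "
                                    "neuritic plaque, or tufted astrocyte."}]
for path, label in examples:
    content.append(image_part(path))
    content.append({"type": "text", "text": f"Label: {label}"})
content.append(image_part("query/unknown_lesion.jpg"))  # the image to classify

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # placeholder vision-capable model name
    messages=[{"role": "user", "content": content}],
    max_tokens=50,
)
print(response.choices[0].message.content)
```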


Subjects
Neural Networks, Computer; Neurodegenerative Diseases; Humans; Neurodegenerative Diseases/pathology; Image Interpretation, Computer-Assisted/methods; Alzheimer Disease/pathology
3.
Liver Int; 44(7): 1578-1587, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38651924

ABSTRACT

BACKGROUND AND AIMS: The Liver Imaging Reporting and Data System (LI-RADS) offers a standardized approach for imaging hepatocellular carcinoma. However, the diverse styles and structures of radiology reports complicate automatic data extraction. Large language models hold the potential for structured data extraction from free-text reports. Our objective was to evaluate the performance of Generative Pre-trained Transformer (GPT)-4 in extracting LI-RADS features and categories from free-text liver magnetic resonance imaging (MRI) reports. METHODS: Three radiologists generated 160 fictitious free-text liver MRI reports written in Korean and English, simulating real-world practice. Of these, 20 were used for prompt engineering, and 140 formed the internal test cohort. Seventy-two genuine reports, authored by 17 radiologists, were collected and de-identified for the external test cohort. LI-RADS features were extracted using GPT-4, with a Python script calculating the categories. Accuracies in each test cohort were compared. RESULTS: In the external test cohort, the accuracy for the extraction of major LI-RADS features, which encompass size, nonrim arterial phase hyperenhancement, nonperipheral 'washout', enhancing 'capsule' and threshold growth, ranged from .92 to .99. For the remaining LI-RADS features, the accuracy ranged from .86 to .97. For the LI-RADS category, the model showed an accuracy of .85 (95% CI: .76, .93). CONCLUSIONS: GPT-4 shows promise in extracting LI-RADS features, yet further refinement of its prompting strategy and advancements in its neural network architecture are crucial for reliable use in processing complex real-world MRI reports.
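
The two-step design described above (an LLM extracts structured features, then deterministic code assigns the category) could be sketched roughly as follows; the prompt, the JSON schema, and the deliberately oversimplified category rule are illustrative assumptions, not the authors' prompt or a full LI-RADS v2018 implementation.

```python
import json

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

EXTRACTION_PROMPT = """Extract the following LI-RADS major features from the liver MRI
report below and reply strictly as JSON with these keys:
size_mm (number), aphe (true/false), washout (true/false),
capsule (true/false), threshold_growth (true/false).

Report:
{report}
"""


def extract_features(report_text: str) -> dict:
    """Ask GPT-4 for structured major features; assumes a JSON-only reply."""
    reply = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[{"role": "user", "content": EXTRACTION_PROMPT.format(report=report_text)}],
    )
    return json.loads(reply.choices[0].message.content)


def lirads_category(features: dict) -> str:
    """Toy category rule for illustration only; a real implementation would
    encode the full LI-RADS v2018 diagnostic table."""
    additional = sum([features["washout"], features["capsule"], features["threshold_growth"]])
    if features["aphe"] and features["size_mm"] >= 20 and additional >= 1:
        return "LR-5"
    return "LR-3/LR-4 (full diagnostic table required)"
```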


Subjects
Liver Neoplasms; Magnetic Resonance Imaging; Humans; Liver Neoplasms/diagnostic imaging; Carcinoma, Hepatocellular/diagnostic imaging; Natural Language Processing; Radiology Information Systems; Republic of Korea; Data Mining; Liver/diagnostic imaging
4.
BMC Med Res Methodol; 24(1): 139, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38918736

ABSTRACT

BACKGROUND: Large language models (LLMs) that can efficiently screen and identify studies meeting specific criteria would streamline literature reviews. Additionally, those capable of extracting data from publications would enhance knowledge discovery by reducing the burden on human reviewers. METHODS: We created an automated pipeline utilizing the OpenAI GPT-4 32K API (version "2023-05-15") to evaluate the accuracy of GPT-4's responses to queries about published papers on HIV drug resistance (HIVDR), with and without an instruction sheet. The instruction sheet contained specialized knowledge designed to assist a person trying to answer questions about an HIVDR paper. We designed 60 questions pertaining to HIVDR and created markdown versions of 60 published HIVDR papers in PubMed. We presented the 60 papers to GPT-4 in four configurations: (1) all 60 questions simultaneously; (2) all 60 questions simultaneously with the instruction sheet; (3) each of the 60 questions individually; and (4) each of the 60 questions individually with the instruction sheet. RESULTS: GPT-4 achieved a mean accuracy of 86.9%, which was 24.0% higher than when the answers to the papers were permuted. The overall recall and precision were 72.5% and 87.4%, respectively. The standard deviation of three replicates for the 60 questions ranged from 0% to 5.3%, with a median of 1.2%. The instruction sheet did not significantly increase GPT-4's accuracy, recall, or precision. GPT-4 was more likely to provide false positive answers when the 60 questions were submitted individually compared to when they were submitted together. CONCLUSIONS: GPT-4 reproducibly answered 3600 questions about 60 papers on HIVDR with moderately high accuracy, recall, and precision. The instruction sheet's failure to improve these metrics suggests that more sophisticated approaches are necessary. Either enhanced prompt engineering or fine-tuning an open-source model could further improve an LLM's ability to answer questions about highly specialized HIVDR papers.
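
A pipeline of the kind described above could be organized roughly as sketched below, covering the four configurations (questions together or individually, with or without the instruction sheet); the model name, system prompt, and function shape are assumptions, not the authors' code.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set


def ask_about_paper(paper_md: str, questions: list[str],
                    instruction_sheet: str | None = None,
                    one_at_a_time: bool = False) -> list[str]:
    """Query GPT-4 about a markdown paper, mirroring the four configurations:
    all questions together vs. individually, with or without an instruction sheet."""
    system = "You answer questions about an HIV drug resistance paper."
    if instruction_sheet:
        system += "\n\n" + instruction_sheet

    def call(question_block: str) -> str:
        reply = client.chat.completions.create(
            model="gpt-4-32k",  # placeholder; the study used API version "2023-05-15"
            temperature=0,
            messages=[
                {"role": "system", "content": system},
                {"role": "user", "content": f"Paper:\n{paper_md}\n\nQuestions:\n{question_block}"},
            ],
        )
        return reply.choices[0].message.content

    if one_at_a_time:
        return [call(q) for q in questions]
    return [call("\n".join(f"{i + 1}. {q}" for i, q in enumerate(questions)))]
```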


Subjects
HIV Infections; Humans; Reproducibility of Results; HIV Infections/drug therapy; PubMed; Publications/statistics & numerical data; Publications/standards; Information Storage and Retrieval/methods; Information Storage and Retrieval/standards; Software
5.
J Gastroenterol Hepatol; 39(8): 1535-1543, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38627920

ABSTRACT

BACKGROUND AND AIM: Effective clinical event classification is essential for clinical research and quality improvement. The validation of artificial intelligence (AI) models like Generative Pre-trained Transformer 4 (GPT-4) for this task, and their comparison with conventional methods, remains unexplored. METHODS: We evaluated the performance of the GPT-4 model for classifying gastrointestinal (GI) bleeding episodes from 200 medical discharge summaries and compared the results with human review and an International Classification of Diseases (ICD) code-based system. The analysis included accuracy, sensitivity, and specificity evaluation, using ground truth determined by physician reviewers. RESULTS: GPT-4 exhibited an accuracy of 94.4% in identifying GI bleeding occurrences, outperforming ICD codes (accuracy 63.5%, P < 0.001). GPT-4's accuracy was either slightly lower than or statistically similar to that of individual human reviewers (Reviewer 1: 98.5%, P < 0.001; Reviewer 2: 90.8%, P = 0.170). For location classification, GPT-4 achieved accuracies of 81.7% and 83.5% for confirmed and probable GI bleeding locations, respectively, figures that were slightly lower than or comparable to those of the human reviewers. GPT-4 was highly efficient, analyzing the dataset in 12.7 min at a cost of 21.2 USD, whereas the human reviewers required 8-9 h each. CONCLUSION: Our study indicates GPT-4 offers a reliable, cost-efficient, and faster alternative to current clinical event classification methods, outperforming the conventional ICD coding system and performing comparably to individual expert human reviewers. Its implementation could facilitate more accurate and granular clinical research and quality audits. Future research should explore scalability, prompt and model tuning, and the ethical implications of high-performance AI models in clinical data processing.
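
The head-to-head metrics reported above can be tabulated from binary labels as in the following minimal sketch; the toy labels are illustrative, not the study's data.

```python
from sklearn.metrics import confusion_matrix


def bleeding_metrics(predicted: list[int], ground_truth: list[int]) -> dict:
    """Accuracy, sensitivity and specificity for binary GI-bleeding labels
    (1 = bleeding episode present, 0 = absent), judged against physician review."""
    tn, fp, fn, tp = confusion_matrix(ground_truth, predicted, labels=[0, 1]).ravel()
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
        "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
    }


# Illustrative toy labels only (not the study's data).
gpt4_labels = [1, 0, 1, 1, 0, 0, 1, 0]
icd_labels = [1, 0, 0, 1, 1, 0, 1, 0]
truth = [1, 0, 1, 1, 0, 0, 1, 1]
print("GPT-4:", bleeding_metrics(gpt4_labels, truth))
print("ICD:  ", bleeding_metrics(icd_labels, truth))
```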


Subjects
Artificial Intelligence; Gastrointestinal Hemorrhage; International Classification of Diseases; Humans; Gastrointestinal Hemorrhage/classification; Gastrointestinal Hemorrhage/etiology; Sensitivity and Specificity
6.
Philos Trans A Math Phys Eng Sci; 382(2270): 20230254, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38403056

ABSTRACT

In this paper, we experimentally evaluate the zero-shot performance of GPT-4 against prior generations of GPT on the entire uniform bar examination (UBE), including not only the multiple-choice multistate bar examination (MBE), but also the open-ended multistate essay exam (MEE) and multistate performance test (MPT) components. On the MBE, GPT-4 significantly outperforms both human test-takers and prior models, demonstrating a 26% increase over ChatGPT and beating humans in five of seven subject areas. On the MEE and MPT, which have not previously been evaluated by scholars, GPT-4 scores an average of 4.2/6.0 when compared with much lower scores for ChatGPT. Graded across the UBE components, in the manner in which a human test-taker would be, GPT-4 scores approximately 297 points, significantly in excess of the passing threshold for all UBE jurisdictions. These findings document not just the rapid and remarkable advance of large language model performance generally, but also the potential for such models to support the delivery of legal services in society. This article is part of the theme issue 'A complexity science approach to law and governance'.

7.
Neuroradiology; 66(8): 1245-1250, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38705899

ABSTRACT

We compared different LLMs, notably ChatGPT 3.5, GPT-4, and Google Bard, and tested whether their performance differs across subspecialty domains by having them execute examinations from four courses of the European Society of Neuroradiology (ESNR): anatomy/embryology, neuro-oncology, head and neck, and pediatrics. Written ESNR exams were used as input data, covering anatomy/embryology (30 questions), neuro-oncology (50 questions), head and neck (50 questions), and pediatrics (50 questions). All exams together, and each exam separately, were introduced to the three LLMs: ChatGPT 3.5, GPT-4, and Google Bard. Statistical analyses included a group-wise Friedman test followed by pair-wise Wilcoxon tests with multiple comparison corrections. Overall, there was a significant difference between the three LLMs (p < 0.0001), with GPT-4 having the highest accuracy (70%), followed by ChatGPT 3.5 (54%) and Google Bard (36%). The pair-wise comparisons showed significant differences between ChatGPT 3.5 vs GPT-4 (p < 0.0001), ChatGPT 3.5 vs Bard (p < 0.0023), and GPT-4 vs Bard (p < 0.0001). Analyses per subspecialty showed the largest difference between the best LLM (GPT-4, 70%) and the worst LLM (Google Bard, 24%) in the head and neck exam, while the difference was least pronounced in neuro-oncology (GPT-4, 62% vs Google Bard, 48%). We observed significant differences in the performance of the three LLMs on official exams organized by the ESNR. Overall, GPT-4 performed best and Google Bard worst; this difference varied by subspecialty and was most pronounced in the head and neck subspecialty.
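
The statistical comparison named above (a group-wise Friedman test followed by pair-wise Wilcoxon tests with multiple-comparison correction) can be run as in the sketch below; the per-question scores are simulated for illustration, not the ESNR exam data.

```python
from itertools import combinations

import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Illustrative per-question correctness (1/0) for three models on the same items;
# the real analysis would use the actual scored ESNR exam responses.
rng = np.random.default_rng(0)
scores = {
    "GPT-4": rng.binomial(1, 0.70, size=180),
    "ChatGPT 3.5": rng.binomial(1, 0.54, size=180),
    "Google Bard": rng.binomial(1, 0.36, size=180),
}

# Group-wise Friedman test across the three paired samples.
stat, p = friedmanchisquare(*scores.values())
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4g}")

# Pair-wise Wilcoxon signed-rank tests with a Bonferroni correction.
pairs = list(combinations(scores, 2))
for a, b in pairs:
    _, p_pair = wilcoxon(scores[a], scores[b])
    print(f"{a} vs {b}: corrected p = {min(p_pair * len(pairs), 1.0):.4g}")
```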


Subjects
Societies, Medical; Humans; Europe; Educational Measurement; Radiology/education; Neuroradiography
8.
Neuroradiology; 66(1): 73-79, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37994939

ABSTRACT

PURPOSE: The noteworthy performance of Chat Generative Pre-trained Transformer (ChatGPT), an artificial intelligence text generation model based on the GPT-4 architecture, has been demonstrated in various fields; however, its potential applications in neuroradiology remain unexplored. This study aimed to evaluate the diagnostic performance of GPT-4-based ChatGPT in neuroradiology. METHODS: We collected 100 consecutive "Case of the Week" cases from the American Journal of Neuroradiology between October 2021 and September 2023. ChatGPT generated a diagnosis from the patient's medical history and imaging findings for each case. The diagnostic accuracy rate was then determined using the published ground truth. Each case was categorized by anatomical location (brain, spine, and head & neck), and brain cases were further divided into central nervous system (CNS) tumor and non-CNS tumor groups. Fisher's exact test was conducted to compare the accuracy rates among the three anatomical locations, as well as between the CNS tumor and non-CNS tumor groups. RESULTS: ChatGPT achieved a diagnostic accuracy rate of 50% (50/100 cases). There were no significant differences between the accuracy rates of the three anatomical locations (p = 0.89). The accuracy rate was significantly lower for the CNS tumor group than for the non-CNS tumor group among the brain cases (16% [3/19] vs. 62% [36/58], p < 0.001). CONCLUSION: This study demonstrated the diagnostic performance of ChatGPT in neuroradiology. ChatGPT's diagnostic accuracy varied depending on disease etiology, and it was significantly lower for CNS tumors than for non-CNS tumors.
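
The CNS versus non-CNS tumor comparison reported above can be checked directly from the published counts with a Fisher's exact test, as in this short worked example.

```python
from scipy.stats import fisher_exact

# Contingency table built from the reported figures: correct vs incorrect
# diagnoses for CNS tumor cases (3/19) and non-CNS tumor brain cases (36/58).
table = [[3, 19 - 3],
         [36, 58 - 36]]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.3f}, p = {p_value:.4g}")  # p comes out well below 0.001
```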


Subjects
Artificial Intelligence; Neoplasms; Humans; Head; Brain; Neck
9.
Surg Endosc; 38(5): 2522-2532, 2024 May.
Article in English | MEDLINE | ID: mdl-38472531

ABSTRACT

BACKGROUND: The readability of online bariatric surgery patient education materials (PEMs) often surpasses the recommended 6th grade level. Large language models (LLMs), like ChatGPT and Bard, have the potential to revolutionize PEM delivery. We aimed to evaluate the readability of PEMs produced by U.S. medical institutions compared to LLMs, as well as the ability of LLMs to simplify their responses. METHODS: Responses to frequently asked questions (FAQs) related to bariatric surgery were gathered from top-ranked health institutions. FAQ responses were also generated from GPT-3.5, GPT-4, and Bard. The LLMs were then prompted to improve the readability of their initial responses. The readability of institutional responses, initial LLM responses, and simplified LLM responses was graded using validated readability formulas. Accuracy and comprehensiveness of initial and simplified LLM responses were also compared. RESULTS: Responses to 66 FAQs were included. All institutional and initial LLM responses had poor readability, with average reading levels ranging from 9th grade to college graduate. Simplified responses from the LLMs had significantly improved readability, with reading levels ranging from 6th grade to college freshman. When comparing simplified LLM responses, GPT-4 responses demonstrated the highest readability, with reading levels ranging from 6th to 9th grade. Accuracy was similar between initial and simplified responses from all LLMs. Comprehensiveness was similar between initial and simplified responses from GPT-3.5 and GPT-4. However, 34.8% of Bard's simplified responses were graded as less comprehensive than its initial responses. CONCLUSION: Our study highlights the efficacy of LLMs in enhancing the readability of bariatric surgery PEMs. GPT-4 outperformed the other models, generating simplified PEMs at 6th to 9th grade reading levels. Unlike GPT-3.5 and GPT-4, Bard's simplified responses were graded as less comprehensive. We advocate for future studies examining the potential role of LLMs as dynamic and personalized sources of PEMs for diverse patient populations of all literacy levels.
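
One of the validated readability formulas commonly used for grading such materials, the Flesch-Kincaid grade level, is sketched below with a simple heuristic syllable counter; this illustrates the kind of formula involved, not the exact tooling used in the study.

```python
import re


def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups, subtracting one for a silent trailing 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)


def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59


sample = ("Bariatric surgery helps people with severe obesity lose weight. "
          "Your care team will explain the risks and the recovery plan.")
print(f"Approximate grade level: {flesch_kincaid_grade(sample):.1f}")
```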


Subjects
Bariatric Surgery; Comprehension; Patient Education as Topic; Humans; Patient Education as Topic/methods; Internet; Health Literacy; Language; United States
10.
Clin Exp Nephrol; 28(5): 465-469, 2024 May.
Article in English | MEDLINE | ID: mdl-38353783

ABSTRACT

BACKGROUND: Large language models (LLMs) have driven advances in artificial intelligence. While LLMs have demonstrated high performance on general medical examinations, their performance in specialized areas such as nephrology is unclear. This study aimed to evaluate ChatGPT and Bard for their potential nephrology applications. METHODS: Ninety-nine questions from the Self-Assessment Questions for Nephrology Board Renewal from 2018 to 2022 were presented to two versions of ChatGPT (GPT-3.5 and GPT-4) and to Bard. We calculated the correct answer rates over the five years, for each year, and for each question category, and checked whether they exceeded the pass criterion. The correct answer rates were compared with those of nephrology residents. RESULTS: The overall correct answer rates for GPT-3.5, GPT-4, and Bard were 31.3% (31/99), 54.5% (54/99), and 32.3% (32/99), respectively; thus, GPT-4 significantly outperformed GPT-3.5 (p < 0.01) and Bard (p < 0.01). GPT-4 met the pass criterion in three of the five years, only barely exceeding the minimum threshold in two of them. GPT-4 demonstrated significantly higher performance on problem-solving, clinical, and non-image questions than GPT-3.5 and Bard. GPT-4's performance fell between that of third- and fourth-year nephrology residents. CONCLUSIONS: GPT-4 outperformed GPT-3.5 and Bard and met the Nephrology Board renewal standards in specific years, albeit marginally. These results highlight LLMs' potential and limitations in nephrology. As LLMs advance, nephrologists should understand their performance characteristics for future applications.


Subjects
Nephrology; Self-Assessment; Humans; Educational Measurement; Specialty Boards; Clinical Competence; Artificial Intelligence
11.
Am J Emerg Med; 81: 146-150, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38728938

ABSTRACT

INTRODUCTION: The term artificial intelligence (AI) was first coined in the 1950s, and the field has made significant progress since then, with numerous AI applications developed; GPT-4 and Gemini are two of the best known of these AI models. The Emergency Severity Index (ESI) is currently one of the most commonly used systems for effective patient triage in the emergency department. The aim of this study was to evaluate the performance of GPT-4, Gemini, and emergency medicine specialists against each other in ESI triage and to contribute to the literature on the usability of these AI programs in emergency department triage. METHODS: Our study was conducted between February 1, 2024, and February 29, 2024, with emergency medicine specialists in Turkey as well as with GPT-4 and Gemini. Ten emergency medicine specialists were included; as a limitation, the participating specialists do not frequently use the ESI triage model in daily practice. In the first phase of our study, 100 case examples involving adult or trauma patients were extracted from the sample and training cases in the ESI Implementation Handbook. In the second phase, the provided responses were categorized into three groups: correct triage, over-triage, and under-triage. In the third phase, the questions were categorized according to the correct triage responses. RESULTS: A statistically significant difference was found between the three groups in terms of correct triage, over-triage, and under-triage (p < 0.001). GPT-4 had the highest correct triage rate, with an average of 70.60 (±3.74), while Gemini had the highest over-triage rate, with an average of 35.2 (±2.93) (p < 0.001). The highest under-triage rate was observed among the emergency medicine specialists (32.90 (±11.83)). For ESI classes 1-2, Gemini had a correct triage rate of 87.77%, GPT-4 of 85.11%, and the emergency medicine specialists of 49.33%. CONCLUSION: Our study shows that both GPT-4 and Gemini can accurately triage critical and urgent patients in ESI groups 1 and 2 at a high rate, and GPT-4 was more successful in ESI triage across all patients. These results suggest that GPT-4 and Gemini could assist in accurate ESI triage of patients in emergency departments.
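
The correct/over-/under-triage categorization used above follows directly from comparing assigned and reference ESI levels, as in this minimal sketch; the example pairs are illustrative, not the study's cases.

```python
from collections import Counter


def triage_outcome(assigned: int, reference: int) -> str:
    """ESI levels run from 1 (most urgent) to 5 (least urgent), so assigning a lower
    number than the reference is over-triage and a higher number is under-triage."""
    if assigned == reference:
        return "correct"
    return "over-triage" if assigned < reference else "under-triage"


# Illustrative (assigned, reference) ESI pairs, not the study's cases.
pairs = [(2, 2), (1, 2), (3, 2), (4, 4), (5, 3), (2, 3)]
print(Counter(triage_outcome(a, r) for a, r in pairs))
```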


Subjects
Emergency Medicine; Emergency Service, Hospital; Triage; Triage/methods; Humans; Emergency Service, Hospital/organization & administration; Turkey; Artificial Intelligence; Adult; Female; Male; Severity of Illness Index
12.
Am J Emerg Med; 84: 68-73, 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39096711

ABSTRACT

INTRODUCTION: GPT-4, GPT-4o and Gemini Advanced, which are among the well-known large language models (LLMs), have the capability to recognize and interpret visual data. Only a very limited number of studies have examined the ECG performance of GPT-4, and no study in the literature has examined the success of Gemini and GPT-4o in ECG evaluation. The aim of our study was to evaluate the performance of GPT-4, GPT-4o, and Gemini in ECG evaluation, assess their usability in the medical field, and compare their accuracy rates in ECG interpretation with those of cardiologists and emergency medicine specialists. METHODS: The study was conducted from May 14, 2024, to June 3, 2024. The book "150 ECG Cases" served as a reference; it contains two sections, daily routine ECGs and more challenging ECGs. For this study, two emergency medicine specialists selected 20 ECG cases from each section, totaling 40 cases. In the next stage, the questions were evaluated by emergency medicine specialists and cardiologists. In the subsequent phase, a diagnostic question was entered daily into GPT-4, GPT-4o, and Gemini Advanced on separate chat interfaces. In the final phase, the responses provided by the cardiologists, emergency medicine specialists, GPT-4, GPT-4o, and Gemini Advanced were statistically evaluated across three categories: routine daily ECGs, more challenging ECGs, and all ECGs combined. RESULTS: Cardiologists outperformed GPT-4, GPT-4o, and Gemini Advanced in all three groups. Emergency medicine specialists performed better than GPT-4o on routine daily ECG questions and on the total set of ECG questions (p = 0.003 and p = 0.042, respectively). When comparing GPT-4o with Gemini Advanced and GPT-4, GPT-4o performed better on the total set of ECG questions (p = 0.027 and p < 0.001, respectively). On routine daily ECG questions, GPT-4o also outperformed Gemini Advanced (p = 0.004). Weak agreement was observed in the responses given by GPT-4 (p < 0.001, Fleiss kappa = 0.265) and Gemini Advanced (p < 0.001, Fleiss kappa = 0.347), while moderate agreement was observed in the responses given by GPT-4o (p < 0.001, Fleiss kappa = 0.514). CONCLUSION: While GPT-4o shows promise, especially on more challenging ECG questions, and may have potential as an assistant for ECG evaluation, its performance in routine and overall assessments still lags behind human specialists. The limited accuracy and consistency of GPT-4 and Gemini suggest that their current use in clinical ECG interpretation is risky.

13.
J Med Internet Res; 26: e52758, 2024 Aug 16.
Article in English | MEDLINE | ID: mdl-39151163

ABSTRACT

BACKGROUND: The screening process for systematic reviews is resource-intensive. Although previous machine learning solutions have reported reductions in workload, they risked excluding relevant papers. OBJECTIVE: We evaluated the performance of a 3-layer screening method using GPT-3.5 and GPT-4 to streamline the title and abstract-screening process for systematic reviews. Our goal was to develop a screening method that maximizes sensitivity for identifying relevant records. METHODS: We conducted screenings on 2 of our previous systematic reviews related to the treatment of bipolar disorder, with 1381 records from the first review and 3146 from the second. Screenings were conducted using GPT-3.5 (gpt-3.5-turbo-0125) and GPT-4 (gpt-4-0125-preview) across three layers: (1) research design, (2) target patients, and (3) interventions and controls. The 3-layer screening was conducted using prompts tailored to each study. During this process, information extraction according to each study's inclusion criteria and optimization for screening were carried out using a GPT-4-based flow without manual adjustments. Records were evaluated at each layer, and those meeting the inclusion criteria at all layers were judged as included. RESULTS: At each layer, both GPT-3.5 and GPT-4 were able to process about 110 records per minute, and the total time required for screening the first and second studies was approximately 1 hour and 2 hours, respectively. In the first study, the sensitivities/specificities of GPT-3.5 and GPT-4 were 0.900/0.709 and 0.806/0.996, respectively. Both the GPT-3.5 and GPT-4 screenings judged all 6 records used for the meta-analysis as included. In the second study, the sensitivities/specificities of GPT-3.5 and GPT-4 were 0.958/0.116 and 0.875/0.855, respectively. The sensitivities for the relevant records align with those of human evaluators: 0.867-1.000 for the first study and 0.776-0.979 for the second study. Both the GPT-3.5 and GPT-4 screenings judged all 9 records used for the meta-analysis as included. After accounting for records justifiably excluded by GPT-4, the sensitivities/specificities of the GPT-4 screening were 0.962/0.996 in the first study and 0.943/0.855 in the second study. Further investigation indicated that the cases incorrectly excluded by GPT-3.5 were due to a lack of domain knowledge, while the cases incorrectly excluded by GPT-4 were due to misinterpretations of the inclusion criteria. CONCLUSIONS: Our 3-layer screening method with GPT-4 demonstrated an acceptable level of sensitivity and specificity, supporting its practical application in systematic review screenings. Future research should aim to generalize this approach and explore its effectiveness in diverse settings, both medical and nonmedical, to fully establish its use and operational feasibility.
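
The layered include/exclude flow described above can be sketched as a sequential check in which a record is included only if every layer answers yes; the layer prompts and model name below are placeholders, not the study's tailored prompts.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# Placeholder layer prompts; the study tailored these to each review's criteria.
LAYERS = [
    "Layer 1 (research design): Is this a randomized controlled trial? Answer YES or NO.",
    "Layer 2 (target patients): Does it enroll adults with bipolar disorder? Answer YES or NO.",
    "Layer 3 (interventions/controls): Does it compare an active treatment with a control? Answer YES or NO.",
]


def screen_record(title: str, abstract: str, model: str = "gpt-4-0125-preview") -> bool:
    """Apply the three layers in sequence; exclude as soon as one layer answers NO."""
    for layer_prompt in LAYERS:
        reply = client.chat.completions.create(
            model=model,
            temperature=0,
            messages=[{"role": "user",
                       "content": f"{layer_prompt}\n\nTitle: {title}\nAbstract: {abstract}"}],
        )
        if not reply.choices[0].message.content.strip().upper().startswith("YES"):
            return False  # excluded at this layer
    return True  # met the inclusion criteria at every layer
```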


Subjects
Systematic Reviews as Topic; Humans; Language
14.
J Med Internet Res; 26: e48996, 2024 Jan 12.
Article in English | MEDLINE | ID: mdl-38214966

ABSTRACT

BACKGROUND: The systematic review of clinical research papers is a labor-intensive and time-consuming process that often involves the screening of thousands of titles and abstracts. The accuracy and efficiency of this process are critical for the quality of the review and subsequent health care decisions. Traditional methods rely heavily on human reviewers, often requiring a significant investment of time and resources. OBJECTIVE: This study aims to assess the performance of the OpenAI generative pretrained transformer (GPT) and GPT-4 application programming interfaces (APIs) in accurately and efficiently identifying relevant titles and abstracts from real-world clinical review data sets and to compare their performance against ground truth labeling by 2 independent human reviewers. METHODS: We introduce a novel workflow using the ChatGPT and GPT-4 APIs for screening titles and abstracts in clinical reviews. A Python script was created to make calls to the API with the screening criteria in natural language and a corpus of title and abstract data sets filtered by a minimum of 2 human reviewers. We compared the performance of our model against human-reviewed papers across 6 review papers, screening over 24,000 titles and abstracts. RESULTS: Our results show an accuracy of 0.91, a macro F1-score of 0.60, a sensitivity for excluded papers of 0.91, and a sensitivity for included papers of 0.76. The interrater variability between 2 independent human screeners was κ=0.46, and the prevalence- and bias-adjusted κ between our proposed method and the consensus-based human decisions was κ=0.96. On a randomly selected subset of papers, the GPT models demonstrated the ability to provide reasoning for their decisions and corrected their initial decisions upon being asked to explain their reasoning for incorrect classifications. CONCLUSIONS: Large language models have the potential to streamline the clinical review process, save valuable time and effort for researchers, and contribute to the overall quality of clinical reviews. By prioritizing the workflow and acting as an aid rather than a replacement for researchers and reviewers, models such as GPT-4 can enhance efficiency and lead to more accurate and reliable conclusions in medical research.
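
The agreement metrics reported above (accuracy, macro F1, per-class sensitivity, and prevalence- and bias-adjusted kappa) can be computed as in the following sketch; the screening labels are illustrative, not the study's data.

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score, recall_score

# Illustrative screening labels: 1 = include, 0 = exclude (not the study's data).
human = [0, 0, 1, 0, 1, 0, 0, 1, 0, 0]
model = [0, 0, 1, 0, 0, 0, 0, 1, 0, 1]

accuracy = accuracy_score(human, model)
macro_f1 = f1_score(human, model, average="macro")
sens_included = recall_score(human, model, pos_label=1)  # sensitivity for included papers
sens_excluded = recall_score(human, model, pos_label=0)  # sensitivity for excluded papers
kappa = cohen_kappa_score(human, model)
pabak = 2 * accuracy - 1  # prevalence- and bias-adjusted kappa for two classes

print(f"accuracy={accuracy:.2f}  macro-F1={macro_f1:.2f}  "
      f"sens(included)={sens_included:.2f}  sens(excluded)={sens_excluded:.2f}  "
      f"kappa={kappa:.2f}  PABAK={pabak:.2f}")
```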


Subjects
Artificial Intelligence; Biomedical Research; Systematic Reviews as Topic; Humans; Consensus; Data Analysis; Problem Solving; Natural Language Processing; Workflow
15.
J Med Internet Res; 26: e49139, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38427404

ABSTRACT

BACKGROUND: Previous work suggests that Google searches could be useful in identifying conjunctivitis epidemics. Content-based assessment of social media posts may provide additional value by serving as early indicators of conjunctivitis and other systemic infectious diseases. OBJECTIVE: We investigated whether large language models, specifically GPT-3.5 and GPT-4 (OpenAI), can provide probabilistic assessments of whether social media posts about conjunctivitis could indicate a regional outbreak. METHODS: A total of 12,194 conjunctivitis-related tweets were obtained using a targeted Boolean search in multiple languages from India, Guam (United States), Martinique (France), the Philippines, American Samoa (United States), Fiji, Costa Rica, Haiti, and the Bahamas, covering the time frame from January 1, 2012, to March 13, 2023. By providing these tweets via prompts to GPT-3.5 and GPT-4, we obtained probabilistic assessments that were validated by 2 human raters. We then calculated Pearson correlations of these time series with tweet volume and with the occurrence of known outbreaks in these 9 locations, with a time series bootstrap used to compute CIs. RESULTS: Probabilistic assessments derived from GPT-3.5 showed correlations of 0.60 (95% CI 0.47-0.70) and 0.53 (95% CI 0.40-0.65) with the 2 human raters, with higher results for GPT-4. The weekly averages of GPT-3.5 probabilities showed substantial correlations with weekly tweet volume for 44% (4/9) of the locations, with correlations ranging from 0.10 (95% CI 0.0-0.29) to 0.53 (95% CI 0.39-0.89), and with larger correlations for GPT-4. More modest correlations were found with known epidemics, with a substantial correlation only in American Samoa (0.40, 95% CI 0.16-0.81). CONCLUSIONS: These findings suggest that GPT prompting can efficiently assess the content of social media posts and indicate possible disease outbreaks to a degree of accuracy comparable to that of humans. Furthermore, we found that automated content analysis of tweets is related to tweet volume for conjunctivitis-related posts in some locations and to the occurrence of actual epidemics. Future work may improve the sensitivity and specificity of these methods for disease outbreak detection.
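
The correlation analysis described above, a Pearson correlation with a time-series (block) bootstrap CI, can be sketched as below; the weekly series are synthetic stand-ins for the GPT-derived probabilities and tweet volumes.

```python
import numpy as np
from scipy.stats import pearsonr


def block_bootstrap_ci(x, y, block=4, n_boot=2000, alpha=0.05, seed=0):
    """Moving-block bootstrap CI for the Pearson correlation of two aligned weekly series."""
    rng = np.random.default_rng(seed)
    n = len(x)
    starts = np.arange(n - block + 1)
    stats = []
    for _ in range(n_boot):
        chosen = rng.choice(starts, size=int(np.ceil(n / block)))
        idx = np.concatenate([np.arange(s, s + block) for s in chosen])[:n]
        stats.append(pearsonr(x[idx], y[idx])[0])
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return pearsonr(x, y)[0], (lo, hi)


# Synthetic weekly series standing in for GPT-derived outbreak probabilities and tweet volume.
rng = np.random.default_rng(1)
signal = rng.normal(size=80).cumsum()
prob_series = signal + rng.normal(scale=1.0, size=80)
tweet_volume = signal + rng.normal(scale=1.5, size=80)
r, (lo, hi) = block_bootstrap_ci(prob_series, tweet_volume)
print(f"Pearson r = {r:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```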


Subjects
Conjunctivitis; Epidemics; Social Media; Humans; United States; Infodemiology; Disease Outbreaks; Language
16.
J Med Internet Res; 26: e54948, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38691404

ABSTRACT

This study demonstrates that GPT-4V outperforms GPT-4 across radiology subspecialties in analyzing 207 cases with 1312 images from the Radiological Society of North America Case Collection.


Subjects
Radiology; Radiology/methods; Radiology/statistics & numerical data; Humans; Image Processing, Computer-Assisted/methods
17.
J Med Internet Res; 26: e52113, 2024 Jan 23.
Article in English | MEDLINE | ID: mdl-38261378

ABSTRACT

BACKGROUND: Large language models such as GPT-4 (Generative Pre-trained Transformer 4) are being increasingly used in medicine and medical education. However, these models are prone to "hallucinations" (ie, outputs that seem convincing while being factually incorrect). It is currently unknown how these errors by large language models relate to the different cognitive levels defined in Bloom's taxonomy. OBJECTIVE: This study aims to explore how GPT-4 performs in terms of Bloom's taxonomy using psychosomatic medicine exam questions. METHODS: We used a large data set of psychosomatic medicine multiple-choice questions (N=307) with real-world results derived from medical school exams. GPT-4 answered the multiple-choice questions using 2 distinct prompt versions: detailed and short. The answers were analyzed using a quantitative approach and a qualitative approach. Focusing on incorrectly answered questions, we categorized reasoning errors according to the hierarchical framework of Bloom's taxonomy. RESULTS: GPT-4's performance in answering exam questions yielded a high success rate: 93% (284/307) for the detailed prompt and 91% (278/307) for the short prompt. Questions answered correctly by GPT-4 had a significantly higher difficulty than questions answered incorrectly (P=.002 for the detailed prompt and P<.001 for the short prompt). Independent of the prompt, GPT-4's lowest exam performance was 78.9% (15/19), thereby always surpassing the "pass" threshold. Our qualitative analysis of incorrect answers, based on Bloom's taxonomy, showed that errors were primarily in the "remember" (29/68) and "understand" (23/68) cognitive levels; specific issues arose in recalling details, understanding conceptual relationships, and adhering to standardized guidelines. CONCLUSIONS: GPT-4 demonstrated a remarkable success rate when confronted with psychosomatic medicine multiple-choice exam questions, aligning with previous findings. When evaluated through Bloom's taxonomy, our data revealed that GPT-4 occasionally ignored specific facts (remember), provided illogical reasoning (understand), or failed to apply concepts to a new situation (apply). These errors, which were confidently presented, could be attributed to inherent model biases and the tendency to generate outputs that maximize likelihood.


Subjects
Education, Medical; Medicine; Psychosomatic Medicine; Humans; Research Design
18.
Am J Otolaryngol; 45(4): 104303, 2024.
Article in English | MEDLINE | ID: mdl-38678799

ABSTRACT

Otolaryngologists can enhance workflow efficiency, provide better patient care, and advance medical research and education by integrating artificial intelligence (AI) into their practices. GPT-4 technology is a revolutionary and contemporary example of AI that may apply to otolaryngology. The knowledge of otolaryngologists should be supplemented, not replaced, when using GPT-4 to make critical medical decisions and provide individualized patient care. In our examination, we explore the potential uses of GPT-4 technology in the field of otolaryngology, covering aspects such as potential outcomes and technical boundaries. Additionally, we delve into the intricate and intellectually challenging dilemmas that emerge when incorporating GPT-4 into otolaryngology, considering the ethical considerations inherent in its implementation. Our stance is that GPT-4 has the potential to be very helpful. Its capabilities, which include aid in clinical decision-making, patient care, and administrative task automation, present exciting possibilities for enhancing patient outcomes, boosting the efficiency of healthcare delivery, and improving patient experiences. Even though there are still certain obstacles and limitations, the progress made so far shows that GPT-4 can be a valuable tool for modern medicine. GPT-4 may play a more significant role in clinical practice as technology develops, helping medical professionals deliver high-quality care tailored to each patient's unique needs.


Subjects
Artificial Intelligence; Otolaryngology; Humans; Otolaryngology/ethics; Artificial Intelligence/ethics; Clinical Decision-Making/ethics
19.
Med Teach; 1-7, 2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38295769

ABSTRACT

PURPOSE: Generative AI will become an integral part of education in the future. The potential of this technology in different disciplines should be identified to promote effective adoption. This study evaluated the performance of ChatGPT on tutorial and case-based learning questions in physiology and biochemistry for medical undergraduates. Our study mainly focused on the performance of the GPT-3.5 version, while a subgroup of questions was comparatively assessed on GPT-3.5 and GPT-4 performance. MATERIALS AND METHODS: Answers were generated in GPT-3.5 for 44 modified essay questions (MEQs) in physiology and 43 MEQs in biochemistry. Each answer was graded by two independent examiners. Subsequently, a subset of 15 questions from each subject was selected to represent the different score categories of the GPT-3.5 answers; responses were generated in GPT-4 and graded. RESULTS: The mean score for physiology answers was 74.7 (SD 25.96). GPT-3.5 demonstrated statistically significantly better performance (p = .009) on lower-order questions of Bloom's taxonomy than on higher-order questions. Deficiencies in the application of physiological principles in a clinical context were noted as a drawback. Scores in biochemistry were relatively lower, with a mean score of 59.3 (SD 26.9) for GPT-3.5. There was no statistically significant difference in the scores for higher- and lower-order questions of Bloom's taxonomy. The deficiencies highlighted were a lack of in-depth explanations and precision. In the subset of questions where GPT-4 and GPT-3.5 were compared, GPT-4 showed better overall performance in both subjects. This difference between GPT-3.5 and GPT-4 performance was statistically significant in biochemistry but not in physiology. CONCLUSIONS: The differences in performance between the two versions, GPT-3.5 and GPT-4, across the disciplines are noteworthy. Educators and students should understand the strengths and limitations of this technology in different fields to effectively integrate it into teaching and learning.

20.
J Hand Surg Am; 2024 Jul 26.
Article in English | MEDLINE | ID: mdl-39066762

ABSTRACT

PURPOSE: Exploring the integration of artificial intelligence in clinical settings, this study examined the feasibility of using Generative Pretrained Transformer 4 (GPT-4), a large language model, as a consultation assistant in a hand surgery outpatient clinic. METHODS: The study involved 10 simulated patient scenarios with common hand conditions, in which GPT-4, enhanced through specific prompt engineering techniques, conducted medical history interviews and assisted in diagnostic processes. A panel of expert hand surgeons, each board-certified in hand surgery, evaluated GPT-4's responses using a Likert scale across five criteria, with scores ranging from 1 (lowest) to 5 (highest). RESULTS: GPT-4 achieved an average score of 4.6, reflecting good performance in documenting a medical history, as evaluated by the hand surgeons. CONCLUSIONS: These findings suggest that GPT-4 can effectively document medical histories to meet the standards of hand surgeons in a simulated environment. The findings indicate potential for future application in patient care, but the actual performance of GPT-4 in real clinical settings remains to be investigated. CLINICAL RELEVANCE: This study provides a preliminary indication that GPT-4 could be a useful consultation assistant in a hand surgery outpatient clinic, but further research is required to explore its reliability and practicality in actual practice.
