Results 1 - 2 of 2
1.
Clin Nucl Med ; 2024 Oct 10.
Article in English | MEDLINE | ID: mdl-39385369

ABSTRACT

An 80-year-old patient with hepatocellular carcinoma (HCC) underwent an 18F-FDG PET/CT scan owing to suspected lumbar metastasis identified via a CT scan performed during transarterial chemoembolization (TACE) 2 weeks earlier. The PET scan revealed segmental high uptake in the HCC and surrounding liver parenchyma, where lipiodol deposited during TACE had mostly washed out. The segmental uptake was attributed to TACE-induced inflammatory changes in the liver parenchyma around the HCC, confirmed by reduced uptake in a follow-up 18F-FDG PET/CT scan 4 months later. This highlights the need to differentiate between inflammation and viable HCC in post-TACE 18F-FDG PET/CT evaluations.

2.
Jpn J Radiol ; 42(2): 201-207, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37792149

ABSTRACT

PURPOSE: Herein, we assessed the accuracy of large language models (LLMs) in generating responses to questions in clinical radiology practice. We compared the performance of ChatGPT, GPT-4, and Google Bard using questions from the Japan Radiology Board Examination (JRBE).

MATERIALS AND METHODS: In total, 103 questions from the JRBE 2022 were used with permission from the Japan Radiological Society. These questions were categorized by pattern, required level of thinking, and topic. McNemar's test was used to compare the proportion of correct responses between the LLMs. Fisher's exact test was used to assess the performance of GPT-4 for each topic category.

RESULTS: ChatGPT, GPT-4, and Google Bard correctly answered 40.8% (42 of 103), 65.0% (67 of 103), and 38.8% (40 of 103) of the questions, respectively. GPT-4 significantly outperformed ChatGPT by 24.2% (p < 0.001) and Google Bard by 26.2% (p < 0.001). In the categorical analysis by level of thinking, GPT-4 correctly answered 79.7% of the lower-order questions, which was significantly higher than ChatGPT or Google Bard (p < 0.001). The categorical analysis by question pattern revealed GPT-4's superiority over ChatGPT (67.4% vs. 46.5%, p = 0.004) and Google Bard (39.5%, p < 0.001) in the single-answer questions. The categorical analysis by topic revealed that GPT-4 outperformed ChatGPT (40%, p = 0.013) and Google Bard (26.7%, p = 0.004). No significant differences were observed between the LLMs in the categories not mentioned above. The performance of GPT-4 was significantly better in nuclear medicine (93.3%) than in diagnostic radiology (55.8%; p < 0.001). GPT-4 also performed better on lower-order questions than on higher-order questions (79.7% vs. 45.5%, p < 0.001).

CONCLUSION: ChatGPT Plus, based on GPT-4, scored 65% when answering Japanese questions from the JRBE, outperforming ChatGPT and Google Bard. This highlights the potential of using LLMs to address advanced clinical questions in the field of radiology in Japan.


Subjects
Nuclear Medicine, Humans, Japan, Radiography
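The statistical comparisons described in the abstract above pair each model's correctness on the same 103 questions (McNemar's test) and compare unpaired correct/incorrect counts across topic categories (Fisher's exact test). The following is a minimal sketch of that style of analysis, not the authors' code: it uses Python with scipy and statsmodels, and the per-question outcomes and topic counts are invented for illustration only.

```python
# Sketch of the paired and unpaired comparisons described in the abstract.
# All data below are hypothetical; only the analysis pattern is illustrated.
import numpy as np
from scipy.stats import fisher_exact
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
n_questions = 103

# Hypothetical per-question outcomes (1 = correct, 0 = incorrect) for two models.
gpt4_correct = rng.binomial(1, 0.65, n_questions)
chatgpt_correct = rng.binomial(1, 0.41, n_questions)

# Paired 2x2 table: rows = GPT-4 correct/incorrect, columns = ChatGPT correct/incorrect.
table = np.array([
    [np.sum((gpt4_correct == 1) & (chatgpt_correct == 1)),
     np.sum((gpt4_correct == 1) & (chatgpt_correct == 0))],
    [np.sum((gpt4_correct == 0) & (chatgpt_correct == 1)),
     np.sum((gpt4_correct == 0) & (chatgpt_correct == 0))],
])

# McNemar's test uses only the discordant cells (exact binomial version here).
mcnemar_result = mcnemar(table, exact=True)
print(f"McNemar p-value: {mcnemar_result.pvalue:.4f}")

# Fisher's exact test on an unpaired 2x2 table, e.g. one model's correct vs.
# incorrect counts in two topic categories (counts invented for illustration).
topic_table = np.array([[14, 1],    # topic A: correct, incorrect
                        [29, 23]])  # topic B: correct, incorrect
odds_ratio, p_value = fisher_exact(topic_table)
print(f"Fisher exact p-value: {p_value:.4f}")
```

McNemar's test is appropriate here because both models answer the same question set, so their outcomes are paired; Fisher's exact test suits the small per-topic counts where a chi-squared approximation would be unreliable.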