1.
Oral Dis ; 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39039720

ABSTRACT

INTRODUCTION: Complex patient diagnoses in dentistry require a multifaceted approach that combines interpretation of clinical observations with an in-depth understanding of patient history and presenting problems. The present study evaluates the potential of ChatGPT (OpenAI) as a comprehensive diagnostic tool in the dental clinic by examining the chatbot's diagnostic performance on challenging patient cases retrieved from the literature.

METHODS: ChatGPT3.5 and ChatGPT4 were presented with descriptions of diagnostically challenging patient cases retrieved from the literature. Sample means were compared using a two-tailed t test, and sample proportions were compared using a two-tailed χ2 test. A p value below 0.05 was considered statistically significant.

RESULTS: When prompted to generate their own differential diagnoses, ChatGPT3.5 and ChatGPT4 achieved diagnostic accuracies of 40% and 62%, respectively. When basing their diagnostic processes on a differential diagnosis retrieved from the literature, ChatGPT3.5 and ChatGPT4 achieved diagnostic accuracies of 70% and 80%, respectively.

CONCLUSION: ChatGPT displays an impressive capacity to correctly resolve complex diagnostic challenges in dentistry. Our findings point to promising potential for the chatbot to one day serve as a comprehensive diagnostic tool in the dental clinic.
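Editor's note: the comparison above rests on a two-tailed χ2 test of sample proportions. The sketch below illustrates that test on the reported accuracies; it is not the authors' code, and because the abstract does not state how many patient cases were tested, N_CASES is a hypothetical placeholder, so the printed p values are illustrative only.

```python
# A minimal sketch of a two-tailed chi-squared test of sample proportions,
# applied to the accuracies reported in the abstract above. N_CASES is
# hypothetical: the abstract does not report the number of patient cases.
from scipy.stats import chi2_contingency

N_CASES = 50  # hypothetical sample size; not reported in the abstract

def compare_proportions(acc_a: float, acc_b: float, n: int) -> float:
    """Two-tailed chi-squared test on a 2x2 correct/incorrect table."""
    correct_a, correct_b = round(acc_a * n), round(acc_b * n)
    table = [[correct_a, n - correct_a],   # model A: correct, incorrect
             [correct_b, n - correct_b]]   # model B: correct, incorrect
    _, p_value, _, _ = chi2_contingency(table)
    return p_value

# Self-generated differentials: 40% (ChatGPT3.5) vs. 62% (ChatGPT4)
print(compare_proportions(0.40, 0.62, N_CASES))
# Literature-supplied differentials: 70% vs. 80%
print(compare_proportions(0.70, 0.80, N_CASES))
```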

3.
J Periodontol ; 95(7): 682-687, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38197146

ABSTRACT

BACKGROUND: ChatGPT's (Chat Generative Pre-Trained Transformer) remarkable capacity to generate human-like output makes it an appealing learning tool for healthcare students worldwide. Nevertheless, the chatbot's responses may contain inaccuracies, posing a serious risk of misinformation. ChatGPT's capabilities should be examined across every area of healthcare education, including dentistry and its specialties, to understand the misinformation risk associated with the chatbot's use as a learning tool. Our investigation explores ChatGPT's knowledge base in the field of periodontology by evaluating the chatbot's performance on questions from an in-service examination administered by the American Academy of Periodontology (AAP).

METHODS: ChatGPT3.5 and ChatGPT4 were evaluated on 311 multiple-choice questions from the 2023 in-service examination administered by the AAP. The examination questions were accessed through Nova Southeastern University's Department of Periodontology. Questions containing an image were excluded because ChatGPT does not accept image inputs. A two-tailed t test was used to compare independent sample means, and a two-tailed χ2 test was used to compare sample proportions. A p value below 0.05 was considered statistically significant.

RESULTS: ChatGPT3.5 and ChatGPT4 answered 57.9% and 73.6% of the questions on the 2023 Periodontics In-Service Written Examination correctly, respectively.

CONCLUSION: While ChatGPT4 was more proficient than ChatGPT3.5, both models leave considerable room for misinformation in their responses relating to periodontology. These findings encourage residents to scrutinize periodontal information generated by ChatGPT to account for the chatbot's current limitations.
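Editor's note: because this abstract reports both the question count (311) and each model's accuracy, the correct-answer counts can be reconstructed and the reported two-tailed χ2 comparison re-run. A minimal sketch, assuming both models answered the same 311 questions and rounding the reported percentages to whole counts:

```python
# Reconstructing the reported chi-squared comparison: 311 questions,
# 57.9% (ChatGPT3.5) vs. 73.6% (ChatGPT4) answered correctly.
from scipy.stats import chi2_contingency

n = 311
correct_35 = round(0.579 * n)  # 180 correct answers
correct_4 = round(0.736 * n)   # 229 correct answers

table = [[correct_35, n - correct_35],  # ChatGPT3.5: correct, incorrect
         [correct_4, n - correct_4]]    # ChatGPT4:   correct, incorrect

chi2, p, _, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.2e}")  # p is well below 0.05
```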


Subjects
Artificial Intelligence, Dental Education, Educational Measurement, Periodontics, Humans, Periodontics/education, Dental Education/methods, Educational Measurement/methods
4.
J Am Dent Assoc ; 154(11): 970-974, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37676187

ABSTRACT

BACKGROUND: Although Chat Generative Pre-trained Transformer (ChatGPT) (OpenAI) may be an appealing educational resource for students, the chatbot's responses can contain misinformation. This study evaluated the performance of ChatGPT on a board-style multiple-choice dental knowledge assessment to gauge its capacity to produce accurate dental content and, in turn, the risk of misinformation associated with dental students' use of the chatbot as an educational resource.

METHODS: ChatGPT3.5 and ChatGPT4 were asked questions obtained from 3 sources: INBDE Bootcamp, ITDOnline, and a list of board-style questions provided by the Joint Commission on National Dental Examinations. Image-based questions were excluded because ChatGPT accepts only text-based inputs. The mean performance across 3 trials was reported for each model. A 2-tailed t test was used to compare 2 independent sample means, and a 2-tailed χ2 test was used to compare 2 sample proportions. A P value less than .05 was considered statistically significant.

RESULTS: ChatGPT3.5 and ChatGPT4 answered an average of 61.3% and 76.9% of the questions correctly, respectively.

CONCLUSION: ChatGPT3.5 did not perform sufficiently well on the board-style knowledge assessment. ChatGPT4, however, displayed a competent ability to produce accurate dental content. Future research should evaluate the proficiency of emerging ChatGPT models in dentistry to assess their evolving role in dental education.

PRACTICAL IMPLICATIONS: Although ChatGPT showed an impressive ability to produce accurate dental content, these findings should encourage dental students to use ChatGPT as a supplement to their existing learning program rather than as their primary learning resource.
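Editor's note: this abstract reports mean accuracy across 3 trials per model and a 2-tailed t test on independent sample means. The individual trial scores are not published here, so the per-trial values in the sketch below are hypothetical, chosen only to average to the reported 61.3% and 76.9%:

```python
# A minimal sketch of the 2-tailed t test on per-trial accuracy. The three
# scores per model are hypothetical (the abstract reports only the means).
from scipy.stats import ttest_ind

gpt35_trials = [60.1, 61.3, 62.5]  # hypothetical % correct; mean = 61.3
gpt4_trials = [75.7, 76.9, 78.1]   # hypothetical % correct; mean = 76.9

t_stat, p_value = ttest_ind(gpt35_trials, gpt4_trials)  # two-tailed by default
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```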


Subjects
Artificial Intelligence, Language, Humans, Educational Status