1.
Eur J Orthod ; 2024 Apr 13.
Article in English | MEDLINE | ID: mdl-38613510

ABSTRACT

BACKGROUND: The increasing utilization of large language models (LLMs) in generative artificial intelligence across various medical and dental fields, and specifically orthodontics, raises questions about their accuracy. OBJECTIVE: This study aimed to assess and compare the answers offered by four LLMs: Google's Bard, OpenAI's ChatGPT-3.5 and ChatGPT-4, and Microsoft's Bing, in response to clinically relevant questions within the field of orthodontics. MATERIALS AND METHODS: Ten open-type clinical orthodontics-related questions were posed to the LLMs. The responses provided by the LLMs were assessed on a scale ranging from 0 (minimum) to 10 (maximum) points, benchmarked against robust scientific evidence, including consensus statements and systematic reviews, using a predefined rubric. After a 4-week interval from the initial evaluation, the answers were reevaluated to gauge intra-evaluator reliability. Statistical comparisons were conducted on the scores using Friedman's and Wilcoxon's tests to identify the model providing the answers with the greatest comprehensiveness, scientific accuracy, clarity, and relevance. RESULTS: Overall, no statistically significant differences were detected between the scores given by the two evaluators on either scoring occasion, so an average score was computed for every LLM. The highest-scoring answers were those of Microsoft Bing Chat (average score = 7.1), followed by ChatGPT-4 (average score = 4.7), Google Bard (average score = 4.6), and finally ChatGPT-3.5 (average score = 3.8). While Microsoft Bing Chat statistically outperformed ChatGPT-3.5 (P-value = 0.017) and Google Bard (P-value = 0.029), and ChatGPT-4 outperformed ChatGPT-3.5 (P-value = 0.011), all models occasionally produced answers lacking comprehensiveness, scientific accuracy, clarity, and relevance. LIMITATIONS: The questions asked were indicative and did not cover the entire field of orthodontics.
CONCLUSIONS: Large language models (LLMs) show great potential in supporting evidence-based orthodontics. However, their current limitations pose a potential risk of incorrect healthcare decisions if they are utilized without careful consideration. Consequently, these tools cannot serve as a substitute for the orthodontist's essential critical thinking and comprehensive subject knowledge. For effective integration into practice, further research, clinical validation, and enhancements to the models are essential. Clinicians must be mindful of the limitations of LLMs, as their imprudent utilization could have adverse effects on patient care.
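The Friedman-plus-Wilcoxon scoring comparison described above can be sketched in pure Python. The per-question scores below are hypothetical, not the study's data, and this minimal Friedman statistic assumes no tied scores within a question:

```python
def friedman_stat(blocks):
    """Friedman chi-square for k related samples.

    blocks: one list of k scores per question (assumes no ties
    within a question, so plain integer ranks suffice).
    """
    n, k = len(blocks), len(blocks[0])
    rank_sums = [0] * k
    for block in blocks:
        # Rank the k models' scores within this question (1 = lowest).
        order = sorted(range(k), key=lambda j: block[j])
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3 * n * (k + 1)

# Hypothetical 0-10 scores per question: [Bing, ChatGPT-4, Bard, ChatGPT-3.5]
scores = [
    [8, 5, 4, 3], [7, 4, 5, 3], [8, 6, 5, 4], [6, 5, 4, 3], [7, 5, 6, 4],
    [8, 4, 5, 3], [7, 6, 4, 3], [6, 4, 5, 3], [8, 5, 6, 4], [7, 5, 4, 3],
]
print(friedman_stat(scores))  # chi-square with k - 1 = 3 degrees of freedom
```

A significant Friedman result would then be followed by pairwise Wilcoxon signed-rank tests between models, as the abstract reports.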

2.
JMIR Med Educ ; 10: e51344, 2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38111256

ABSTRACT

BACKGROUND: The recent artificial intelligence tool ChatGPT seems to offer a range of benefits in academic education while also raising concerns. Relevant literature encompasses issues of plagiarism and academic dishonesty, as well as pedagogy and educational affordances; yet, to our knowledge, no real-life implementation of ChatGPT in the educational process has been reported so far. OBJECTIVE: This mixed methods study aimed to evaluate the implementation of ChatGPT in the educational process, both quantitatively and qualitatively. METHODS: In March 2023, a total of 77 second-year dental students of the European University Cyprus were divided into 2 groups and asked to compose a learning assignment on "Radiation Biology and Radiation Protection in the Dental Office," working collaboratively in small subgroups, as part of the educational semester program of the Dentomaxillofacial Radiology module. Careful planning ensured a seamless integration of ChatGPT, addressing potential challenges. One group searched the internet for scientific resources to perform the task, and the other group used ChatGPT for this purpose. Both groups developed a PowerPoint (Microsoft Corp) presentation based on their research and presented it in class. The ChatGPT group students additionally registered all interactions with the language model during the prompting process and evaluated the final outcome; they also answered an open-ended evaluation questionnaire, including questions on their learning experience. Finally, all students undertook a knowledge examination on the topic, and the grades between the 2 groups were compared statistically, whereas the free-text comments of the questionnaires were thematically analyzed. RESULTS: Of the 77 students, 39 were assigned to the ChatGPT group and 38 to the literature research group. Seventy students undertook the multiple-choice knowledge examination, and examination grades ranged from 5 to 10 on the 0-10 grading scale.
The Mann-Whitney U test showed that students of the ChatGPT group performed significantly better (P=.045) than students of the literature research group. The evaluation questionnaires revealed the benefits (human-like interface, immediate response, and wide knowledge base), the limitations (need for rephrasing the prompts to get a relevant answer, general content, false citations, and inability to provide images or videos), and the prospects (in education, clinical practice, continuing education, and research) of ChatGPT. CONCLUSIONS: Students using ChatGPT for their learning assignments performed significantly better in the knowledge examination than their fellow students who used the literature research methodology. Students adapted quickly to the technological environment of the language model, recognized its opportunities and limitations, and used it creatively and efficiently. Implications for practice: The study underscores the adaptability of students to technological innovations including ChatGPT and its potential to enhance educational outcomes. Educators should consider integrating ChatGPT into curriculum design; awareness programs are warranted to educate both students and educators about the limitations of ChatGPT, encouraging critical engagement and responsible use.
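A minimal sketch of the Mann-Whitney U comparison reported above, with hypothetical exam grades rather than the study's data; mid-ranks handle tied grades:

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for two independent samples.

    Ties receive mid-ranks; returns min(U1, U2), the value compared
    against critical-value tables.
    """
    combined = sorted(a + b)
    midrank = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        midrank[combined[i]] = (i + 1 + j) / 2  # average rank of the tie group
        i = j
    r1 = sum(midrank[x] for x in a)
    u1 = len(a) * len(b) + len(a) * (len(a) + 1) / 2 - r1
    return min(u1, len(a) * len(b) - u1)

chatgpt_group = [8, 9, 7, 10, 8]    # hypothetical grades on the 0-10 scale
literature_group = [6, 7, 5, 8, 7]
print(mann_whitney_u(chatgpt_group, literature_group))
```

A small U relative to the sample sizes indicates that one group's grades tend to rank above the other's, which is the pattern the study reports for the ChatGPT group.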


Subject(s)
Artificial Intelligence, Students, Humans, Educational Status, Learning, Education, Dental
3.
Dent J (Basel) ; 12(1)2023 Dec 26.
Article in English | MEDLINE | ID: mdl-38248214

ABSTRACT

This study aimed to evaluate the awareness, comprehension, and practices concerning forensic odontology among dental students and faculty at a Dental School in Cyprus. An online, cross-sectional, descriptive survey, employing an adapted, self-administered questionnaire, was disseminated to all dental students and faculty at the School of Dentistry, European University Cyprus, in November 2022. The survey assessed participants' demographic information and explored their awareness with questions addressing knowledge, attitudes, and practices in forensic dentistry. Of those surveyed, 47 faculty members and 304 students responded, yielding response rates of 66.2% and 80%, respectively. Statistical analysis, including Kendall's tau and χ2 tests, was employed to examine correlations and associations, with Cramér's V used to measure the strength of significant associations. The predetermined significance level was α = 0.05. Awareness levels were assessed through participants' responses to specific questions in the survey. It was revealed that 87% of faculty and 65% of students were familiar with forensic odontology. A noteworthy 94% of faculty and 85% of students recognized teeth as DNA repositories. A high percentage, 98% of faculty and 89% of students, acknowledged the role of forensic odontology in the identification of criminals and deceased individuals. Awareness of age estimation through dental eruption patterns was evident in 85% of faculty and 81.6% of students. A substantial proportion (80% of faculty) maintained dental records, while 78% of students recognized the importance of dental record-keeping in ensuring quality care. Interestingly, 57% of students and 64% of faculty were aware of the possibility of dentists testifying as expert witnesses. The majority, 95.7% of faculty and 85% of students, concurred that physical harm, scars, and behavioral alterations predominantly indicate child abuse.
The findings, revealing robust awareness among respondents, underscore the importance of enhancing faculty engagement in relevant seminars to further strengthen their knowledge. Additionally, emphasizing improved record-keeping practices for potential forensic applications emerges as a crucial aspect. These insights have implications for refining dental education in Cyprus and enhancing forensic practices by promoting ongoing professional development and emphasizing meticulous record-keeping within the dental community.
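The χ2-plus-Cramér's V analysis described above can be sketched as follows. The counts are approximated from the reported percentages (87% of 47 faculty, 65% of 304 students familiar with forensic odontology) for illustration only:

```python
import math

def chi2_and_cramers_v(table):
    """Pearson chi-square and Cramér's V for a contingency table (rows of counts)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    k = min(len(table), len(table[0]))
    return chi2, math.sqrt(chi2 / (n * (k - 1)))

# Familiarity with forensic odontology by group: [aware, not aware]
table = [[41, 6],      # faculty  (≈87% of 47)
         [198, 106]]   # students (≈65% of 304)
chi2, v = chi2_and_cramers_v(table)
print(f"chi2 = {chi2:.2f}, Cramér's V = {v:.3f}")
```

Cramér's V rescales χ2 to the 0-1 range, which is what lets the study report the strength, not just the significance, of each association.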
