1.
J Telemed Telecare. 2024 May 20:1357633X241252454.
Article in English | MEDLINE | ID: mdl-38766707

ABSTRACT

OBJECTIVE: The aim of this study was to assess the precision of a web-based tool in measuring visual acuity (VA) in ophthalmic patients, comparing it to the traditional in-clinic evaluation using a Snellen chart, considered the gold standard.

METHODS: We conducted a prospective, in-clinic validation comparing the Eyecare Visual Acuity Test® to the standard Snellen chart, with patients undergoing both tests sequentially. Patients wore their usual spectacles as needed for both tests. Inclusion criteria were age above 18 years and VA equal to or better than 1.0 logMAR (20/200) in each eye. VA measurements were converted from Snellen to logMAR, and statistical analyses included Bland-Altman analysis and descriptive statistics.

RESULTS: The study, encompassing 322 patients (644 eyes), compared the Eyecare Visual Acuity Test® with the conventional method and found a statistically nonsignificant mean difference (0.01 logMAR, P = 0.1517). Bland-Altman analysis showed narrow 95% limits of agreement (-0.23 to 0.22 logMAR), indicating concordance, supported by a significant Pearson correlation (r = 0.61, P < 0.001) between the two assessments.

CONCLUSION: The Eyecare Visual Acuity Test® demonstrates accuracy and reliability, with the potential to facilitate home monitoring, triage, and remote consultation. Future research should validate its accuracy across varied age cohorts, including pediatric and geriatric populations, as well as among individuals with specific comorbidities such as cataract, uveitis, keratoconus, age-related macular degeneration, and amblyopia.
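The analysis reported above (Snellen-to-logMAR conversion followed by Bland-Altman agreement statistics and a Pearson correlation) is a standard validation pipeline. The following is a minimal Python sketch of that pipeline; the paired measurements are hypothetical and the function and variable names are illustrative, not the study's code:

```python
# Sketch of the abstract's statistics: Snellen -> logMAR conversion,
# Bland-Altman bias and 95% limits of agreement, Pearson correlation.
# Data and names below are illustrative, not the study's.
import numpy as np
from scipy import stats

def snellen_to_logmar(numerator: float, denominator: float) -> float:
    """Convert a Snellen fraction (e.g., 20/200) to logMAR."""
    return float(np.log10(denominator / numerator))  # 20/200 -> 1.0

# Hypothetical paired logMAR measurements for the same eyes.
web_va = np.array([0.00, 0.10, 0.30, 0.18, 0.48])    # web-based tool
chart_va = np.array([0.02, 0.10, 0.22, 0.20, 0.40])  # in-clinic Snellen chart

diff = web_va - chart_va
bias = diff.mean()                       # mean difference between methods
half_width = 1.96 * diff.std(ddof=1)     # 95% limits assume ~normal differences
loa = (bias - half_width, bias + half_width)
r, p = stats.pearsonr(web_va, chart_va)  # agreement also summarized by correlation

print(f"bias = {bias:+.3f} logMAR, LoA = {loa[0]:.2f} to {loa[1]:.2f}, "
      f"r = {r:.2f} (p = {p:.3g})")
```

The 1.96 factor yields 95% limits of agreement under an approximate normality assumption on the paired differences; the narrower these limits, the stronger the between-method concordance.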

2.
Rev Assoc Med Bras (1992). 2023;69(10):e20230848.
Article in English | MEDLINE | ID: mdl-37792871

ABSTRACT

OBJECTIVE: The aim of this study was to evaluate the performance of ChatGPT-4.0 in answering the 2022 Brazilian National Examination for Medical Degree Revalidation (Revalida) and to use it as a tool to provide feedback on the quality of the examination.

METHODS: Two independent physicians entered all examination questions into ChatGPT-4.0. After comparing the outputs with the test solutions, they classified the large language model's answers as adequate, inadequate, or indeterminate. In cases of disagreement, they adjudicated and reached a consensus on ChatGPT's accuracy. Performance across medical themes and between nullified and non-nullified questions was compared using chi-square tests.

RESULTS: On the Revalida examination, ChatGPT-4.0 answered 71 questions (87.7%) correctly and 10 (12.3%) incorrectly. There was no statistically significant difference in the proportion of correct answers across medical themes (p=0.4886). The model had a lower accuracy of 71.4% on nullified questions, with no statistical difference between non-nullified and nullified groups (p=0.241).

CONCLUSION: ChatGPT-4.0 showed satisfactory performance on the 2022 Brazilian National Examination for Medical Degree Revalidation. The large language model performed worse on subjective questions and public healthcare themes. These results suggest that the overall quality of the Revalida examination questions is satisfactory and corroborate the nullification of the annulled questions.


Subjects
Artificial Intelligence, Health Personnel, Humans, Brazil, Language
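The chi-square comparison in the abstract above reduces to a test on a contingency table of correct versus incorrect answers per question group. Below is a minimal Python sketch; the 2x2 counts are inferred from the reported figures (71.4% accuracy on nullified questions is consistent with 5 of 7 correct), not taken from the paper, so the computed p-value is only indicative:

```python
# Sketch of the chi-square comparison between nullified and non-nullified
# questions. The 2x2 counts are inferred from the reported 71/10 split and
# the 71.4% accuracy on nullified items (5 of 7); they are not from the paper.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: question group; columns: [correct, incorrect].
table = np.array([
    [66, 8],  # non-nullified (inferred counts)
    [5, 2],   # nullified (inferred counts)
])

chi2, p, dof, expected = chi2_contingency(table)  # Yates correction by default
print(f"chi2 = {chi2:.3f}, p = {p:.3f}, dof = {dof}")
```

Note that chi2_contingency applies a continuity correction to 2x2 tables by default, so the result may differ slightly from the p=0.241 reported in the abstract.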
