Results 1 - 2 of 2
1.
Clin Med (Lond) ; 24(3): 100210, 2024 May.
Article in English | MEDLINE | ID: mdl-38643828

ABSTRACT

Imposter phenomenon (IP) is the internalised experience of self-doubt or mediocrity that leads an individual to believe they do not belong. IP is increasingly recognised across the medical field, from medical school to consultant level, but likely affects different groups to varying extents. The transition from medical student to junior doctor can be a time of particularly high stress, and insecurity about one's ability can trigger or exacerbate IP. Foundation doctors can arm themselves against IP by first acknowledging its existence and then actively working to dismantle these flawed misconceptions, as well as by accessing the support and resources available throughout the foundation programme.


Subjects
Students, Medical , Humans , Self Concept , Physicians , Anxiety Disorders
2.
Cureus ; 15(11): e48788, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38098921

ABSTRACT

Large language models (LLMs) have broad potential applications in medicine, such as aiding with education, providing reassurance to patients, and supporting clinical decision-making. However, there is a notable gap in understanding their applicability and performance in the surgical domain and how their performance varies across specialties. This paper aims to evaluate the performance of LLMs in answering surgical questions relevant to clinical practice and to assess how this performance varies across different surgical specialties. We used the MedMCQA dataset, a large-scale multiple-choice question-answering (MCQA) dataset consisting of clinical questions across all areas of medicine. We extracted the relevant 23,035 surgical questions and submitted them to the popular LLMs Generative Pre-trained Transformers (GPT)-3.5 and GPT-4 (OpenAI OpCo, LLC, San Francisco, CA). Generative Pre-trained Transformer is a large language model that generates human-like text by predicting subsequent words in a sentence based on the context of the words that come before it. It is pre-trained on a diverse range of texts and can perform a variety of tasks, such as answering questions, without needing task-specific training. The question-answering accuracy of GPT was calculated and compared between the two models and across surgical specialties. GPT-3.5 and GPT-4 achieved accuracies of 53.3% and 64.4%, respectively, on surgical questions, a statistically significant difference in performance. When compared to their performance on the full MedMCQA dataset, the two models performed differently: GPT-4 performed worse on surgical questions than on the dataset as a whole, while GPT-3.5 showed the opposite pattern. Significant variations in accuracy were also observed across different surgical specialties, with stronger performance in anatomy, vascular, and paediatric surgery and weaker performance in orthopaedics, ENT, and neurosurgery.
Large language models exhibit promising capabilities in addressing surgical questions, although the variability in their performance between specialties cannot be ignored. The lower performance of the latest GPT-4 model on surgical questions relative to questions across all of medicine highlights the need for targeted improvements and continuous updates to ensure relevance and accuracy in surgical applications. Further research and continuous monitoring of LLM performance in surgical domains are crucial to fully harnessing their potential and mitigating the risks of misinformation.
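The evaluation the abstract describes — scoring each model's chosen option against the answer key, then aggregating overall and per specialty — reduces to a simple accuracy computation. A minimal sketch, assuming a simplified record format (the field names `specialty`, `correct_option`, and `model_option` are illustrative, not MedMCQA's actual schema):

```python
from collections import defaultdict

def accuracy_by_specialty(records):
    """Compute answer accuracy overall and per specialty.

    Each record is a dict with 'specialty', 'correct_option', and
    'model_option' (the option the LLM chose). These field names are
    illustrative placeholders, not MedMCQA's actual schema.
    """
    totals = defaultdict(int)   # questions seen per specialty
    hits = defaultdict(int)     # correctly answered per specialty
    for r in records:
        totals[r["specialty"]] += 1
        if r["model_option"] == r["correct_option"]:
            hits[r["specialty"]] += 1
    per_specialty = {s: hits[s] / totals[s] for s in totals}
    overall = sum(hits.values()) / sum(totals.values())
    return overall, per_specialty

# Toy illustration with made-up records (not real MedMCQA data):
sample = [
    {"specialty": "Anatomy", "correct_option": "A", "model_option": "A"},
    {"specialty": "Anatomy", "correct_option": "B", "model_option": "C"},
    {"specialty": "ENT", "correct_option": "D", "model_option": "D"},
]
overall, by_spec = accuracy_by_specialty(sample)
```

Comparing the two models then amounts to running this over the same question set with each model's responses and testing the difference in proportions for significance.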
