Results 1 - 20 of 3,330
1.
BMJ Open Qual; 13(Suppl 2), 2024 May 07.
Article in English | MEDLINE | ID: mdl-38719519

ABSTRACT

INTRODUCTION: Safe practice in medicine and dentistry has been a global priority area in which large knowledge gaps are present. Patient safety strategies aim at preventing unintended damage to patients that can be caused by healthcare practitioners. One of the components of patient safety is safe clinical practice. Patient safety efforts will help in ensuring safe dental practice through early detection and limiting of non-preventable errors. A valid and reliable instrument is required to assess the knowledge of dental students regarding patient safety. OBJECTIVE: To determine the psychometric properties of a written test to assess safe dental practice in undergraduate dental students. MATERIAL AND METHODS: A test comprising 42 multiple-choice questions of the one-best-answer type was administered to final-year students (n = 52) of a private dental college. Items were developed according to National Board of Medical Examiners item-writing guidelines. The content of the test was determined in consultation with dental experts (professors or associate professors). These experts assessed each item on the test for language clarity (A: clear; B: ambiguous) and relevance (1: essential; 2: useful, not necessary; 3: not essential). Ethical approval was obtained from the concerned dental college. Statistical analysis was done in SPSS V.25, in which descriptive analysis, item analysis and Cronbach's alpha were measured. RESULT: The test scores had a reliability (calculated by Cronbach's alpha) of 0.722 before and 0.855 after removing 15 items. CONCLUSION: A reliable and valid test was developed which will help to assess the knowledge of dental students regarding safe dental practice. This can guide medical educationists to develop or improve patient safety curricula to ensure safe dental practice.
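The reliability figure above was computed with Cronbach's alpha in SPSS. As a rough illustration of the statistic itself, here is a minimal plain-Python sketch; the toy score matrix is invented for illustration and is unrelated to the study's data.

```python
def cronbach_alpha(scores):
    """Cronbach's alpha; `scores` holds one row of item scores per examinee."""
    k = len(scores[0])  # number of items

    def var(xs):  # sample variance (ddof = 1), matching SPSS
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Toy data: 4 examinees x 3 dichotomously scored items.
toy = [[1, 1, 1], [1, 1, 0], [0, 1, 0], [0, 0, 0]]
alpha = cronbach_alpha(toy)  # → 0.75 for this toy matrix
```

Recomputing alpha after dropping weak items is presumably how the reported value rose from 0.722 to 0.855, though the paper's item-selection criteria are not given here.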


Subjects
Educational Measurement; Patient Safety; Psychometrics; Humans; Psychometrics/instrumentation; Psychometrics/methods; Patient Safety/standards; Patient Safety/statistics & numerical data; Surveys and Questionnaires; Educational Measurement/methods; Educational Measurement/statistics & numerical data; Educational Measurement/standards; Reproducibility of Results; Students, Dental/statistics & numerical data; Students, Dental/psychology; Education, Dental/methods; Education, Dental/standards; Male; Female; Clinical Competence/statistics & numerical data; Clinical Competence/standards
4.
Am J Pharm Educ; 88(5): 100701, 2024 May.
Article in English | MEDLINE | ID: mdl-38641172

ABSTRACT

As first-time pass rates on the North American Pharmacy Licensure Examination (NAPLEX) continue to decrease, pharmacy educators are left questioning the dynamics causing the decline and how to respond. Institutional and student factors both influence first-time NAPLEX pass rates. Pharmacy schools established before 2000, those housed within an academic medical center, and public rather than private schools have been associated with tendencies toward higher first-time NAPLEX pass rates. However, these factors alone do not sufficiently explain the issues surrounding first-time pass rates. Changes to the NAPLEX blueprint may also have influenced first-time pass rates. The number of existing pharmacy schools combined with decreasing numbers of applicants and influences from the COVID-19 pandemic should also be considered as potential causes of decreased first-time pass rates. In this commentary, factors associated with first-time NAPLEX pass rates are discussed along with some possible responses for the Academy to consider.


Subjects
COVID-19; Education, Pharmacy; Educational Measurement; Licensure, Pharmacy; Schools, Pharmacy; Humans; Educational Measurement/standards; Schools, Pharmacy/standards; COVID-19/epidemiology; Students, Pharmacy; Pharmacists; United States
5.
JMIR Med Educ; 10: e55048, 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38686550

ABSTRACT

Background: The deployment of OpenAI's ChatGPT-3.5 and its subsequent versions, ChatGPT-4 and ChatGPT-4 With Vision (4V; also known as "GPT-4 Turbo With Vision"), has notably influenced the medical field. Having demonstrated remarkable performance in medical examinations globally, these models show potential for educational applications. However, their effectiveness in non-English contexts, particularly in Chile's medical licensing examinations-a critical step for medical practitioners in Chile-is less explored. This gap highlights the need to evaluate ChatGPT's adaptability to diverse linguistic and cultural contexts. Objective: This study aims to evaluate the performance of ChatGPT versions 3.5, 4, and 4V in the EUNACOM (Examen Único Nacional de Conocimientos de Medicina), a major medical examination in Chile. Methods: Three official practice drills (540 questions) from the University of Chile, mirroring the EUNACOM's structure and difficulty, were used to test ChatGPT versions 3.5, 4, and 4V. The 3 ChatGPT versions were provided 3 attempts for each drill. Responses to questions during each attempt were systematically categorized and analyzed to assess their accuracy rate. Results: All versions of ChatGPT passed the EUNACOM drills. Specifically, versions 4 and 4V outperformed version 3.5, achieving average accuracy rates of 79.32% and 78.83%, respectively, compared to 57.53% for version 3.5 (P<.001). Version 4V, however, did not outperform version 4 (P=.73), despite the additional visual capabilities. We also evaluated ChatGPT's performance in different medical areas of the EUNACOM and found that versions 4 and 4V consistently outperformed version 3.5. Across the different medical areas, version 3.5 displayed the highest accuracy in psychiatry (69.84%), while versions 4 and 4V achieved the highest accuracy in surgery (90.00% and 86.11%, respectively). Versions 3.5 and 4 had the lowest performance in internal medicine (52.74% and 75.62%, respectively), while version 4V had the lowest performance in public health (74.07%). Conclusions: This study reveals ChatGPT's ability to pass the EUNACOM, with distinct proficiencies across versions 3.5, 4, and 4V. Notably, advancements in artificial intelligence (AI) have not significantly led to enhancements in performance on image-based questions. The variations in proficiency across medical fields suggest the need for more nuanced AI training. Additionally, the study underscores the importance of exploring innovative approaches to using AI to augment human cognition and enhance the learning process. Such advancements have the potential to significantly influence medical education, fostering not only knowledge acquisition but also the development of critical thinking and problem-solving skills among health care professionals.
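For readers who want to sanity-check the headline comparison (79.32% vs 57.53%), a two-proportion z-test under a normal approximation can be sketched in plain Python. The response counts below are assumptions for illustration (540 questions × 3 attempts = 1,620 graded responses per version); the paper's own P values may come from a different procedure.

```python
from math import sqrt, erf

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """Pooled two-proportion z-test; returns (z, two-sided p)."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pool = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(pool * (1 - pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail via erf
    return z, p

# Assumed counts: ~79.3% vs ~57.5% correct out of 1,620 responses each.
z, p = two_proportion_z(1285, 1620, 932, 1620)
```

With samples this large, a ~22-point accuracy gap yields a z-statistic above 13, consistent with the reported P<.001.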


Subjects
Educational Measurement; Licensure, Medical; Chile; Humans; Educational Measurement/methods; Educational Measurement/standards; Clinical Competence/standards; Male; Female
6.
Curr Pharm Teach Learn; 16(6): 465-468, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38582641

ABSTRACT

BACKGROUND AND PURPOSE: To describe one institution's approach to transforming high-stakes objective structured clinical examinations (OSCEs) from norm-referenced to criterion-referenced standard setting and to evaluate the impact of these changes on OSCE performance and pass rates. EDUCATIONAL ACTIVITY AND SETTING: The OSCE writing team at the college selected a modified Angoff method appropriate for high-stakes assessments to replace the two-standard-deviation method previously used. Each member of the OSCE writing team independently reviewed the analytical checklist and calculated a passing score for active stations on OSCEs. The group then met to determine a final pass score for each station. The team also determined critical cut points for each station, when indicated. After administration of the OSCEs, scores, pass rates, and need for remediation were compared with the previous norm-referenced method. Descriptive statistics were used to summarize the data. FINDINGS: OSCE scores remained relatively unchanged after the switch to a criterion-referenced method, but the number of remediators increased by up to 2.6-fold. In the first year, the average score increased from 86.8% to 91.7% while the remediation rate increased from 2.8% to 7.4%. In the third year, the average increased from 90.9% to 92% while the remediation rate increased from 6% to 15.6%. Likewise, the fourth-year average increased from 84.9% to 87.5% while the remediation rate increased from 4.4% to 9%. SUMMARY: Transition to a modified Angoff method did not impact average OSCE scores but did increase the number of remediations.
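The modified Angoff computation itself is simple: each judge estimates, per checklist item, the probability that a borderline (minimally competent) candidate succeeds, and the cut score is the mean of the judges' summed estimates. A minimal plain-Python sketch with invented ratings; the college's actual checklists and final cut scores are not given in the abstract.

```python
def angoff_cut_score(judge_ratings):
    """judge_ratings: one list per judge of per-item probabilities (0-1)."""
    totals = [sum(items) for items in judge_ratings]  # expected marks per judge
    return sum(totals) / len(totals)                  # mean across judges

ratings = [
    [0.90, 0.80, 0.70, 0.95],  # judge 1
    [0.85, 0.75, 0.80, 0.90],  # judge 2
    [0.80, 0.80, 0.75, 0.85],  # judge 3
]
cut = angoff_cut_score(ratings)  # ≈ 3.28 out of 4 checklist marks
```

Unlike the two-standard-deviation method, this cut score is fixed before the examination, so it does not move with each cohort's score distribution — which is why remediation counts can rise even when average scores do not fall.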


Subjects
Educational Measurement; Humans; Educational Measurement/methods; Educational Measurement/statistics & numerical data; Educational Measurement/standards; Clinical Competence/standards; Clinical Competence/statistics & numerical data; Education, Pharmacy/methods; Education, Pharmacy/standards; Education, Pharmacy/statistics & numerical data
8.
J Neurosci Nurs; 56(3): 86-91, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38451926

ABSTRACT

BACKGROUND: To measure the effectiveness of an educational intervention, it is essential to develop high-quality, validated tools to assess a change in knowledge or skills after an intervention. An identified gap within the field of neurology is the lack of a universal test to examine knowledge of neurological assessment. METHODS: This instrument development study was designed to determine whether neuroscience knowledge as demonstrated in a Neurologic Assessment Test (NAT) was normally distributed across healthcare professionals who treat patients with neurologic illness. The variables of time, knowledge, accuracy, and confidence were individually explored and analyzed in SAS. RESULTS: The mean (standard deviation) time spent by 135 participants to complete the NAT was 12.9 (3.2) minutes. The mean knowledge score was 39.5 (18.2), mean accuracy was 46.0 (15.7), and mean confidence was 84.4 (24.4). Despite comparatively small standard deviations, Shapiro-Wilk scores indicate that time spent, knowledge, accuracy, and confidence are nonnormally distributed (P < .0001). The Cronbach α was 0.7816 considering all 3 measures (knowledge, accuracy, and confidence); this improved to an α of 0.8943 when only knowledge and accuracy were included in the model. The amount of time spent was positively associated with higher accuracy (r2 = 0.04, P < .05), higher knowledge was positively associated with higher accuracy (r2 = 0.6543, P < .0001), and higher knowledge was positively associated with higher confidence (r2 = 0.4348, P < .0001). CONCLUSION: The scores for knowledge, confidence, and accuracy each had a slightly skewed distribution around a point estimate with a standard deviation smaller than the mean. This suggests initial content validity in the NAT. There is adequate initial construct validity to support using the NAT as an outcome measure for projects that measure change in knowledge. Although improvements can be made, the NAT does have adequate construct and content validity for initial use.
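The associations above are reported as r2 (r-squared) values, i.e. squared Pearson correlations. As a reminder of what is being computed, here is a plain-Python Pearson correlation; the paired knowledge/accuracy scores below are invented for illustration, not the NAT data.

```python
def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

knowledge = [20, 35, 40, 55, 60]  # invented NAT-style scores
accuracy = [30, 38, 45, 50, 62]
r = pearson_r(knowledge, accuracy)
r_squared = r * r  # share of variance in accuracy explained by knowledge
```

An r2 of 0.6543, as reported for knowledge vs accuracy, corresponds to r ≈ 0.81 — a strong association — whereas the time-accuracy r2 of 0.04 is weak despite being statistically significant.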


Subjects
Health Personnel; Neurologic Examination; Humans; Neurologic Examination/standards; Neurologic Examination/methods; Health Personnel/education; Reproducibility of Results; Clinical Competence/standards; Female; Male; Adult; Neuroscience Nursing; Health Knowledge, Attitudes, Practice; Nervous System Diseases/nursing; Nervous System Diseases/diagnosis; Educational Measurement/methods; Educational Measurement/standards
9.
Med Educ; 58(6): 730-736, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38548481

ABSTRACT

OBJECTIVE: This study explored how the Syrian crisis, training conditions, and relocation influenced the National Medical Examination (NME) scores of final-year medical students. METHODS: Results of the NME were used to denote the performance of final-year medical students between 2014 and 2021. The NME is a mandatory standardised test that measures the knowledge and competence of students in various clinical subjects. We categorised the data into two periods: period-I (2014-2018) and period-II (2019-2021). Period-I represents students who trained under hostile circumstances, which refer to the devastating effects of a decade-long Syrian crisis. Period-II represents the post-hostilities phase, which is marked by a deepening economic crisis. RESULTS: Collected data included test scores for a total of 18 312 final-year medical students from nine medical schools (from six public and three private universities). NME scores improved significantly in period-II compared with period-I tests (p < 0.0001). Campus location or relocation during the crisis affected the results significantly, with higher scores from students of medical schools located in lower-risk regions compared with those from medical schools located in high-risk regions (p < 0.0001), both during and after the hostilities. Also, students of medical schools relocated to lower-risk regions scored significantly higher than those of medical schools that remained in high-risk regions (p < 0.0001), but their scores remained inferior to those of students of medical schools originally located in lower-risk regions (p < 0.0001). CONCLUSION: Academic performance of final-year medical students can be adversely affected by crises and conflicts, with a clear tendency to recovery upon crisis resolution. The study underscores the importance of maintaining and safeguarding the infrastructure of educational institutions, especially during times of crisis. Governments and educational authorities should prioritise resource allocation to ensure that medical schools have access to essential services, learning resources, and teaching personnel.


Subjects
Educational Measurement; Students, Medical; Syria; Humans; Educational Measurement/methods; Educational Measurement/standards; Clinical Competence/standards; Schools, Medical; Education, Medical, Undergraduate; Education, Medical
11.
Indian Pediatr; 61(5): 463-468, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38400729

ABSTRACT

India introduced competency-based medical education (CBME) in the year 2019. There is often confusion between terms like ability, skill, and competency. The provided curriculum encourages teaching and assessing skills rather than competencies. Though competency includes skill, it is more than a mere skill, and ignoring other aspects like communication, ethics, and professionalism can compromise the teaching of competencies as well as their intended benefits to patients and society. The focus on skills also undermines the assessment of relevant knowledge. This paper clarifies the differences between ability, skill, and competency, and re-emphasizes the role of relevant knowledge and its assessment throughout clinical training. It is also emphasized that competency assessment is not a one-shot process; rather, it must be a longitudinal process in which assessment should bring out the achievement level of the student. Many components of competencies are not assessable by purely objective methods, and there is a need to use expert subjective judgments, especially for formative and classroom assessments. A mentor adds to the success of a competency-based curriculum.


Subjects
Clinical Competence; Competency-Based Education; Curriculum; Humans; Clinical Competence/standards; India; Competency-Based Education/standards; Curriculum/standards; Educational Measurement/methods; Educational Measurement/standards; Education, Medical/standards; Education, Medical/methods
13.
Acad Med; 99(5): 534-540, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38232079

ABSTRACT

PURPOSE: Learner development and promotion rely heavily on narrative assessment comments, but narrative assessment quality is rarely evaluated in medical education. Educators have developed tools such as the Quality of Assessment for Learning (QuAL) tool to evaluate the quality of narrative assessment comments; however, scoring the comments generated in medical education assessment programs is time intensive. The authors developed a natural language processing (NLP) model for applying the QuAL score to narrative supervisor comments. METHOD: Samples of 2,500 Entrustable Professional Activities assessments were randomly extracted and deidentified from the McMaster (1,250 comments) and Saskatchewan (1,250 comments) emergency medicine (EM) residency training programs during the 2019-2020 academic year. Comments were rated using the QuAL score by 25 EM faculty members and 25 EM residents. The results were used to develop and test an NLP model to predict the overall QuAL score and QuAL subscores. RESULTS: All 50 raters completed the rating exercise. Approximately 50% of the comments had perfect agreement on the QuAL score, with the remaining resolved by the study authors. Creating a meaningful suggestion for improvement was the key differentiator between high- and moderate-quality feedback. The overall QuAL model predicted the exact human-rated score or 1 point above or below it in 87% of instances. Overall model performance was excellent, especially regarding the subtasks on suggestions for improvement and the link between resident performance and improvement suggestions, which achieved 85% and 82% balanced accuracies, respectively. CONCLUSIONS: This model could save considerable time for programs that want to rate the quality of supervisor comments, with the potential to automatically score a large volume of comments. This model could be used to provide faculty with real-time feedback or as a tool to quantify and track the quality of assessment comments at faculty, rotation, program, or institution levels.
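The subtask figures above are balanced accuracies — the mean of per-class recalls — which guards against inflated scores when label classes are imbalanced, as quality ratings typically are. A plain-Python sketch with invented labels; the study's comment data and model are not reproduced here.

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls over the classes present in y_true."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(1 for i in idx if y_pred[i] == c) / len(idx))
    return sum(recalls) / len(recalls)

# Invented, imbalanced example: four positives, two negatives.
truth = [1, 1, 1, 1, 0, 0]
preds = [1, 1, 1, 0, 0, 1]
score = balanced_accuracy(truth, preds)  # → 0.625 (recalls 0.75 and 0.50)
```

A classifier that always predicted the majority class would score 100% plain accuracy on a lopsided set but only 50% balanced accuracy, which is why the 85% and 82% figures are meaningful.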


Subjects
Competency-Based Education; Internship and Residency; Natural Language Processing; Humans; Competency-Based Education/methods; Internship and Residency/standards; Clinical Competence/standards; Narration; Educational Measurement/methods; Educational Measurement/standards; Emergency Medicine/education; Faculty, Medical/standards
14.
J Med Syst; 47(1): 86, 2023 Aug 15.
Article in English | MEDLINE | ID: mdl-37581690

ABSTRACT

ChatGPT, a language model developed by OpenAI, uses a 175 billion parameter Transformer architecture for natural language processing tasks. This study aimed to compare the knowledge and interpretation ability of ChatGPT with those of medical students in China by administering the Chinese National Medical Licensing Examination (NMLE) to both ChatGPT and medical students. We evaluated the performance of ChatGPT in three years' worth of the NMLE, which consists of four units. At the same time, the exam results were compared to those of medical students who had studied for five years at medical colleges. ChatGPT's performance was lower than that of the medical students, and ChatGPT's correct answer rate was related to the year in which the exam questions were released. ChatGPT's knowledge and interpretation ability for the NMLE were not yet comparable to those of medical students in China. It is probable that these abilities will improve through deep learning.


Subjects
Artificial Intelligence; Educational Measurement; Licensure; Medicine; Students, Medical; Humans; Asian People; China; Knowledge; Language; Medicine/standards; Licensure/standards; Students, Medical/statistics & numerical data; Educational Measurement/standards
15.
Natl Med J India; 36(5): 323-326, 2023.
Article in English | MEDLINE | ID: mdl-38759987

ABSTRACT

Background Reflective practice is an integral component of continuing professional development. However, assessing written narration is complex and difficult. A rubric is a potential tool that can overcome this difficulty. We aimed to develop, validate and estimate the inter-rater reliability of an analytical rubric used for assessing reflective narration. Methods A triangulation type of mixed-methods design (Qual: nominal group technique; Quan: analytical follow-up design; Qual: open-ended response) was adopted to achieve the study objectives. Faculty members involved in the active surveillance of Covid-19 participated in the development of the assessment rubric. The reflective narrations of medical interns were assessed by postgraduates with and without the rubric. Steps recommended by the assessment committee of the University of Hawaii were followed to develop the rubric. The content validity index and inter-rater reliability measures were estimated. Results An analytical rubric with eight criteria and four mastery levels, yielding a maximum score of 40, was developed. There was a significant difference in the mean score obtained by interns when rated without and with the developed rubric. Kendall's coefficient of concordance, a measure of agreement among more than two scorers, was higher after using the rubric. Conclusion Our attempt to develop an analytical rubric for assessing reflective narration was successful in terms of the high content validity index and better inter-rater concordance. The same process can be replicated to develop any such analytical rubric in the future.
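Kendall's coefficient of concordance (W) ranges from 0 (no agreement) to 1 (perfect agreement) across three or more raters ranking the same subjects. A minimal plain-Python sketch without the tie-correction term; the rating matrix below is invented for illustration, not the study's scores.

```python
def kendalls_w(ratings):
    """ratings: one list per rater, each scoring the same subjects."""
    m = len(ratings)     # number of raters
    n = len(ratings[0])  # number of subjects

    def to_ranks(row):   # convert raw scores to ranks, averaging ties
        order = sorted(range(len(row)), key=lambda i: row[i])
        ranks = [0.0] * len(row)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1
            for k in range(i, j + 1):
                ranks[order[k]] = avg
            i = j + 1
        return ranks

    rank_rows = [to_ranks(row) for row in ratings]
    totals = [sum(r[i] for r in rank_rows) for i in range(n)]
    mean_total = m * (n + 1) / 2
    s = sum((t - mean_total) ** 2 for t in totals)
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Invented scores: three raters, four narrations, identical orderings.
scores = [[34, 30, 38, 26], [33, 31, 37, 25], [35, 30, 38, 27]]
w = kendalls_w(scores)  # → 1.0 (identical orderings)
```

The study's finding — higher W with the rubric than without — would show up here as the rubric pulling the raters' orderings of the same narrations closer together.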


Subjects
COVID-19; Humans; Reproducibility of Results; SARS-CoV-2; Observer Variation; Internship and Residency; Educational Measurement/methods; Educational Measurement/standards
16.
Educ. med. super; 36(2), jun. 2022. ilus, tab
Article in Spanish | LILACS, CUMED | ID: biblio-1404547

ABSTRACT

Introduction: The training of medical-surgical specialists (residents) takes place in hospitals where healthcare and teaching-learning activities converge. Knowledge about this dual setting is essential for identifying opportunities to optimize the quality and effectiveness of both activities. Objective: To construct a scale for measuring perceptions of the teaching-learning environment in the clinical practice of residents training in Colombia. Methods: A Likert-type scale was designed by adapting the Association for Medical Education in Europe guide Developing Questionnaires for Educational Research, with the following steps: literature review, review of Colombian regulations regarding university hospitals, synthesis of evidence, development of items, face validation by experts, and application of the questionnaire to residents. Results: A clinical practice environment scale (EAPRAC) was constructed on the basis of activity theory and workplace-based situated learning. Initially, 46 questions were defined and, after face validation, 39 items distributed across seven domains were retained: academic processes, teaching staff, teaching-service agreements, welfare, academic infrastructure, care infrastructure, and organization and management. Application of the scale to residents revealed no comprehension problems, so it was not necessary to refine the number or content of the items. Conclusions: The constructed scale has face validity according to expert peers and residents, which allows content validity and reproducibility to be assessed in a later phase.


Subjects
Humans; Teaching; Knowledge; Learning; Health Management; Education, Medical; Educational Measurement/standards; Evaluation Studies as Topic; Hospitals/standards
17.
Med Teach; 44(4): 353-359, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35104191

ABSTRACT

Health professions education has undergone significant changes over the last few decades, including the rise of competency-based medical education, a shift to authentic workplace-based assessments, and increased emphasis on programmes of assessment. Despite these changes, there is still a commonly held assumption that objectivity always leads to, and is the only way to achieve, fairness in assessment. However, there are well-documented limitations to using objectivity as the 'gold standard' against which assessments are judged. Fairness, on the other hand, is a fundamental quality of assessment and a principle that almost no one contests. Taking a step back and changing perspectives to focus on fairness in assessment may help reset a traditional objective approach and identify an equal role for subjective human judgement in assessment alongside objective methods. This paper explores fairness as a fundamental quality of assessments. This approach legitimises human judgement and shared subjectivity in assessment decisions alongside objective methods. Widening the answer to the question 'What is fair assessment?' to include not only objectivity but also expert human judgement and shared subjectivity can add significant value in ensuring learners are better equipped to be the health professionals required of the 21st century.


Subjects
Competency-Based Education; Educational Measurement/methods; Educational Measurement/standards; Health Occupations/education; Workplace; Humans; Judgment
19.
Am J Surg; 222(6): 1112-1119, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34600735

ABSTRACT

BACKGROUND: The American Board of Surgery has mandated that chief residents complete 25 cases in the teaching assistant (TA) role. We developed a structured instrument, the Teaching Evaluation and Assessment of the Chief Resident (TEACh-R), to determine readiness and provide feedback for residents in this role. METHODS: Senior (PGY3-5) residents were scored on technical and teaching performance by faculty observers using the TEACh-R instrument in the simulation lab. Residents were provided with their TEACh-R scores and surveyed on their experience. RESULTS: Scores in technical (p < 0.01) and teaching (p < 0.01) domains increased with PGY. Higher technical, but not teaching, scores correlated with attending-rated readiness for operative independence (p = 0.02). Autonomy mismatch was inversely correlated with teaching competence (p < 0.01). Residents reported satisfaction with TEACh-R feedback and a desire for use of this instrument in operating room settings. CONCLUSION: Our TEACh-R instrument is an effective way to assess technical and teaching performance in the TA role.


Subjects
Internship and Residency/organization & administration; Teaching/standards; Educational Measurement/standards; Humans; Internship and Residency/methods; Internship and Residency/standards; Reproducibility of Results; Surveys and Questionnaires