Results 1 - 20 of 6,716
1.
J Appl Clin Med Phys; 25(5): e14354, 2024 May.
Article in English | MEDLINE | ID: mdl-38620004

ABSTRACT

PURPOSE: In 2019, a formal review and update of the current training program for medical physics residents/registrars in Australasia was conducted. The review aimed to ensure the program met current local clinical and technological requirements, to improve standardization of training across Australia and New Zealand, and to generate a dynamic curriculum and programmatic assessment model. METHODS: A four-phase project was initiated, including a consultant desktop review of the current program and stakeholder consultation. Overarching program outcomes on which to base the training model were developed, with content experts engaged to update the scientific content. Finally, assessment specialists reviewed a range of assessment models to determine appropriate assessment methods for each learning outcome, creating a model of programmatic assessment. RESULTS: The first phase identified a need for increased standardized assessment incorporating programmatic assessment. Seven clear program outcome statements were generated and used to guide and underpin the new curriculum framework. The curriculum was expanded from the previous version to include emerging technologies, while removing duplication. Finally, the proposed assessments for learning outcomes in the curriculum were assembled into the programmatic assessment model. These new assessment methods were structured to incorporate rubric scoring to provide meaningful feedback. CONCLUSIONS: An updated training program for Radiation Oncology Medical Physics registrars/residents was released in Australasia. Scientific content from the previous program was used as a foundation and revised for currency, with the ability to accommodate a dynamic curriculum model. A programmatic model of assessment was created after comprehensive review and consultation. This new model provides more structured, ongoing assessment throughout the training period. It allows for local bespoke assessment and supports supervisors through the provision of marking templates and rubrics.


Subjects
Curriculum, Medical Physics, Radiotherapy (Specialty), Radiotherapy (Specialty)/education, Humans, Medical Physics/education, Internship and Residency, Clinical Competence/standards, Australia, Graduate Medical Education/methods, Educational Measurement/methods, New Zealand
2.
BMC Med Educ; 24(1): 431, 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38649959

ABSTRACT

BACKGROUND: Generative artificial intelligence (AI) tools create content from their trained parameters through an online conversational interface. AI has opened new avenues for redefining the role boundaries of teachers and learners and has the potential to impact the teaching-learning process. METHODS: In this descriptive proof-of-concept cross-sectional study, we explored the application of three generative AI tools on the theme of drug treatment of hypertension to generate: (1) specific learning outcomes (SLOs); (2) test items (A-type and case-cluster MCQs, SAQs, and OSPEs); and (3) test standard-setting parameters for medical students. RESULTS: Analysis of AI-generated output showed substantial homology but divergence in quality and responsiveness to refined search queries. The SLOs identified key domains of antihypertensive pharmacology and therapeutics relevant to stages of the medical program, stated with appropriate action verbs per Bloom's taxonomy. Test items often had clinical vignettes aligned with the key domain stated in search queries. Some A-type MCQs had construction defects, multiple correct answers, and dubious appropriateness for the learner's stage. ChatGPT generated explanations for test items, enhancing their usefulness for learners' self-study. Integrated case-cluster items had focused clinical case vignettes, integration across disciplines, and targeted higher levels of competency. The responses of the AI tools on standard-setting varied. Individual questions for each SAQ clinical scenario were mostly open-ended. The AI-generated OSPE test items were appropriate for the learner's stage and identified relevant pharmacotherapeutic issues. The model answers supplied for both SAQs and OSPEs can aid course instructors in planning classroom lessons, identifying suitable instructional methods, and establishing grading rubrics, and can serve learners as a study guide. Key lessons learned for improving the quality of AI-generated test items are outlined. CONCLUSIONS: AI tools are useful adjuncts for planning instructional methods, identifying themes for test blueprinting, generating test items, and guiding test standard-setting appropriate to learners' stage in the medical program. However, experts need to review the content validity of AI-generated output. We expect AI tools to influence the medical education landscape, empower learners, and align competencies with curriculum implementation. AI literacy is an essential competency for health professionals.


Subjects
Artificial Intelligence, Educational Measurement, Humans, Cross-Sectional Studies, Medical Students, Curriculum, Hypertension/drug therapy, Hypertension/therapy, Undergraduate Medical Education, Proof of Concept Study, Medical Education
3.
BMC Med Educ; 24(1): 399, 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38600531

ABSTRACT

BACKGROUND: The use of simulated patients (SPs) to assess medical students' clinical performance is gaining prominence, underscored by a patient safety perspective. However, few reports have investigated the validity of such assessments. Here, we examined the validity and reliability of a standardized tool for SPs to assess medical students' medical interviews. METHODS: This longitudinal survey was conducted at Keio University School of Medicine in Japan from 2014 to 2021. To establish content validity, the simulated patient assessment tool (SPAT) was developed by several medical education specialists from 2008 to 2013. A cohort of 36 SPs assessed the performance of 831 medical students in clinical practice medical interview sessions from April 2014 to December 2021. The assessment's internal structure was analyzed using descriptive statistics (maximum, minimum, median, mean, and standard deviation) for the SPAT's 13-item total scores. Structural validity was examined with exploratory factor analysis, and internal consistency with Cronbach's alpha coefficients. Mean SPAT total scores across different SPs and scenarios were compared using one-way analysis of variance (ANOVA). Convergent validity was determined by correlating SPAT total scores with post-clinical clerkship objective structured clinical examination (post-CC OSCE) total scores using Pearson's correlation coefficient. RESULTS: Of the 831 assessment sheets, 36 with missing values were excluded, leaving 795 for analysis. Thirty-five SPs, excluding one who quit in 2014, completed the 795 assessments, for a response rate of 95.6%. Exploratory factor analysis revealed two factors: communication and physician performance. The overall Cronbach's alpha coefficient was 0.929. Significant differences in SPAT total scores were observed across SPs and scenarios via one-way ANOVA. A moderate correlation (r = .212, p < .05) was found between SPAT and post-CC OSCE total scores, indicating convergent validity. CONCLUSIONS: Evidence for the validity of SPAT was examined. These findings may be useful for standardizing SP assessment of the scenario-based clinical performance of medical students.
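The internal-consistency statistic reported above (Cronbach's alpha = 0.929) can be reproduced from any respondents-by-items score matrix. A minimal pure-Python sketch, using invented rating data rather than the SPAT dataset:

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(scores[0])          # number of items
    def var(xs):                # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical ratings: 5 respondents x 3 items on a 1-5 scale.
ratings = [
    [4, 5, 4],
    [3, 3, 3],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
]
print(round(cronbach_alpha(ratings), 3))  # → 0.944
```

Perfectly consistent items yield alpha = 1; values above roughly 0.9, as in the SPAT study, indicate strong internal consistency.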


Subjects
Medical Education, Medical Students, Humans, Educational Measurement, Reproducibility of Results, Communication, Clinical Competence
4.
MedEdPORTAL; 20: 11398, 2024.
Article in English | MEDLINE | ID: mdl-38628548

ABSTRACT

Introduction: Integrating climate change and health into a medical school curriculum is critical for future physicians who will manage health crises caused by a rapidly changing climate. Although medical schools have increasingly included climate change in the curriculum, there remains a need to address the link between the climate crisis, environmental justice, and historical policies that shape environmental health disparities in local communities. Methods: In academic years 2021-2022 (AY22) and 2022-2023 (AY23), second-year medical students participated in a 2.5-hour seminar utilizing didactic teaching and small breakout groups that included interactive mapping activities and case scenarios. Learner knowledge and attitudes were self-assessed using pre- and postcurriculum surveys and a quiz. Qualitative thematic and content analysis was used to evaluate short-answer quiz responses and feedback. Results: Of 357 students who participated in the seminar, 208 (58%) completed both the precurriculum and postcurriculum surveys. Self-assessed ability increased significantly for all educational objectives across both years. Attitudes on the importance of climate change knowledge for patient health also improved from a mean of 3.5 precurriculum to 4.2 postcurriculum (difference = 0.7, p < .01) in AY22 and from 3.6 pre- to 4.3 postcurriculum (difference = 0.7, p < .01) in AY23 on a 5-point Likert scale. Discussion: This climate change and health session highlighting the link between environmental policy and climate change health vulnerability in the local context was successful in improving students' self-assessed ability across all stated educational objectives. Students cited the interactive small-group sessions as a major strength.


Subjects
Medical Students, Humans, Medical Students/psychology, Environmental Justice, Climate Change, Curriculum, Educational Measurement
5.
S Afr Fam Pract (2004); 66(1): e1-e8, 2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38572871

ABSTRACT

The series 'Mastering your Fellowship' provides examples of the question formats encountered in the written and clinical examinations, Part A of the Fellowship of the College of Family Physicians of South Africa (FCFP [SA]) examination. The series aims to help family medicine registrars (and supervisors) prepare for this examination.


Subjects
Educational Measurement, Fellowships and Scholarships, Humans, Clinical Competence, Family Practice/education, Family Physicians
7.
Urogynecology (Phila); 30(4): 394-398, 2024 Apr 1.
Article in English | MEDLINE | ID: mdl-38564624

ABSTRACT

ABSTRACT: In the field of obstetrics and gynecology (OB/GYN), the Council on Resident Education in Obstetrics and Gynecology (CREOG) administers an annual in-training examination to all OB/GYN residents as a formative educational tool for assessing medical knowledge and promoting self-improvement. Although the CREOG examination is not designed or intended for knowledge certification, many OB/GYN subspecialty fellowship programs request and use CREOG examination scores as a metric to evaluate fellowship candidates. Among the 57 gynecology-based urogynecology fellowship programs, 30 programs (53%) request CREOG examination scores to be submitted by candidates, as of March 2023. Although the use of CREOG examination scores as an evaluation metric may constitute a minor component within the fellowship match process, this practice fundamentally contradicts the intended purpose of the examination as an educational self-assessment. In addition, it introduces the potential for bias in fellowship recruitment, lacks psychometric validity in predicting specialty board examination failure, and shifts the CREOG examination from its original intention as a low-stakes self-assessment into a high-stakes examination akin to a certification examination. For these reasons, we call upon the urogynecology community to prioritize the educational mission of the CREOG examination and reconsider the practice of requesting or using CREOG examination scores in the fellowship match process.


Subjects
Gynecology, Internship and Residency, Obstetrics, Fellowships and Scholarships, Gynecology/education, Obstetrics/education, Educational Measurement
8.
Psychometrika; 89(1): 64-83, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38565794

ABSTRACT

Rapid advances in psychology and technology open opportunities and present challenges beyond familiar forms of educational assessment and measurement. Viewing assessment through the perspectives of complex adaptive sociocognitive systems and argumentation helps extend the concepts and methods of educational measurement to new forms of assessment, such as those involving interaction in simulation environments and automated evaluation of performances. I summarize key ideas for doing so and point to the roles of measurement models and their relation to sociocognitive systems and assessment arguments. The game-based learning assessment SimCityEDU: Pollution Challenge! is used to illustrate these ideas.


Subjects
Educational Measurement, Psychometrics, Psychometrics/methods, Humans, Educational Measurement/methods, Statistical Models
9.
BMC Med Educ; 24(1): 308, 2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38504289

ABSTRACT

BACKGROUND: Health professionals are increasingly called upon, and willing, to engage in planetary health care and management. So far, however, this topic is rarely covered in medical curricula. As the need for professional communication is particularly high in this subject area, this study evaluated whether the objective structured clinical examination (OSCE) could be used as an accompanying teaching tool. METHODS: During the winter semester 2022/2023, 20 third- and fifth-year medical students voluntarily participated in a self-directed online course, three workshops, and a formal eight-station OSCE on planetary health care and management. Each examinee also took turns in the role of shadower, charged with providing feedback. Experienced examiners rated students' performance using a tablet-supported scoring system. Examiners and shadowers provided timely feedback on candidates' OSCE performance. Immediately after the OSCE, students were surveyed about their experience using a nine-point Likert scale and a videotaped group interview. Quantitative analysis included the proportional distribution of student responses to the survey and box plots of percentages of maximum OSCE scores. The group interview was analyzed qualitatively. RESULTS: Depending on the sub-theme, 60%-100% of students rated the subject of planetary health as likely to be useful in their professional lives. Similar proportions (57%-100%) were in favour of integrating planetary health into required courses. Students perceived learning success from the OSCE experience and feedback as higher than that from the online course and workshops. Even shadowers learned from observation and the feedback discussions. Examiners assessed students' OSCE performance at a median of 80% (interquartile range: 77%-83%) of the maximum score. CONCLUSIONS: The OSCE can be used as an accompanying teaching tool for advanced students on the topic of planetary health care and management. It supports learning outcomes, particularly communication skills to sensitise and empower dialogue partners and to initiate adaptation steps at the level of individual patients and local communities.


Subjects
Physical Examination, Medical Students, Humans, Curriculum, Educational Measurement, Delivery of Health Care, Clinical Competence
10.
Patient Educ Couns; 123: 108237, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38461793

ABSTRACT

OBJECTIVE: Given the importance of unhurried conversations for providing careful and kind care, we sought to create, test, and validate the Unhurried Conversations Assessment Tool (UCAT) for assessing the unhurriedness of patient-clinician consultations. METHODS: In the first two phases, the dimensions of unhurried conversation were identified and transformed into an assessment tool. In the third phase, two independent raters used UCAT to evaluate the unhurriedness of 100 consultations randomly selected from 184 videos recorded for a large research trial. UCAT's psychometric properties were evaluated using these data. RESULTS: UCAT demonstrates content validity based on the literature and expert review. Exploratory factor analysis (EFA) and reliability analyses confirm its construct validity and internal consistency. The seven formative dimensions account for 89.93% of the variance in unhurriedness, each displaying excellent internal consistency (α > 0.90). Inter-rater agreement for the overall assessment item was fair (ICC = 0.59), with individual dimension ICCs ranging from 0.26 (poor) to 0.95 (excellent). CONCLUSION: The UCAT components comprehensively assess the unhurriedness of consultations. The tool exhibits content and construct validity and can be used reliably. PRACTICE IMPLICATIONS: UCAT's design and psychometric properties make it a practical and efficient tool. Clinicians can use it for self-evaluation and training to foster unhurried conversations.
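The inter-rater ICCs reported above can be computed from first principles. A sketch of the two-way random-effects, single-measure form ICC(2,1), using invented two-rater data rather than the UCAT recordings:

```python
def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measure,
    for an n-targets x k-raters score matrix, via ANOVA mean squares."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(map(sum, ratings)) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(r[j] for r in ratings) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between targets
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    mse = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters scoring the same five consultations (hypothetical).
scores = [[3, 4], [5, 5], [2, 2], [4, 5], [1, 2]]
print(round(icc_2_1(scores), 2))  # → 0.88
```

Unlike a plain correlation, ICC(2,1) is lowered by systematic mean differences between raters, which is why it suits absolute-agreement questions like UCAT's.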


Subjects
Communication, Educational Measurement, Humans, Reproducibility of Results, Educational Measurement/methods, Psychometrics, Clinical Competence
11.
J Evid Based Soc Work (2019); 21(2): 199-213, 2024.
Article in English | MEDLINE | ID: mdl-38493306

ABSTRACT

PURPOSE: The Association of Social Work Boards (2022a) released a report evidencing test-taker demographics as the strongest predictor of professional licensure exam pass rates. The purpose of this study was to examine statistical predictors of the disparity in social work licensure exam pass rates between first-time Black/African American and White test-takers. MATERIALS AND METHODS: The study addressed the following research question: To what extent do institutional and state licensure characteristics predict race-based disparities in social work licensure exam pass rates? To answer this question, the authors built a data set in an Excel spreadsheet comprising institutional and state licensure variables drawn from publicly available and reliable sources. RESULTS: States requiring more clinical supervision hours and imposing higher licensure fees tended to report higher overall pass rates on the ASWB exam. A notable correlation was also found between states with a higher proportion of Black/African American residents and increased pass rates. Conversely, states that had established a larger number of licensure tiers typically saw lower overall pass rates. Furthermore, schools located in the Southern U.S. demonstrated significantly lower ASWB pass rates than schools in other regions of the country. DISCUSSION: Recommendations are made regarding future research efforts and professional licensure and regulation standards. CONCLUSION: Pass-rate disparities have implications for individual exam-takers and their families; for clients and constituencies; and for social work practice, research, ethics, and education.
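The state-level associations described here (for example, required supervision hours versus overall pass rate) are bivariate relationships that reduce to a least-squares fit. A minimal single-predictor sketch on invented state-level numbers, not the authors' dataset:

```python
def ols_fit(x, y):
    """Single-predictor ordinary least squares; returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# Invented example: required clinical supervision hours vs. pass rate (%).
hours = [2000, 3000, 3000, 4000]
rates = [70.0, 74.0, 76.0, 80.0]
slope, intercept = ols_fit(hours, rates)
print(slope, intercept)  # 0.005 60.0: about +0.5 points per extra 100 hours
```

A real replication would of course fit multiple predictors jointly (as regression software does) rather than one at a time.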


Subjects
Educational Measurement, Licensure, Humans, Schools
12.
MedEdPORTAL; 20: 11386, 2024.
Article in English | MEDLINE | ID: mdl-38476297

ABSTRACT

Introduction: The Accreditation Council for Graduate Medical Education (ACGME) requires emergency medicine (EM) residency training programs to monitor residents' progress using standardized milestones. The first assessment of PGY 1 resident milestones occurs midway through the first year and could miss initial deficiencies. Early assessment of PGY 1 EM resident milestones could identify at-risk residents before the standard midyear evaluations. We developed an orientation syllabus for PGY 1 residents followed by a milestone assessment, whose scores helped predict future milestone scores and American Board of Emergency Medicine (ABEM) In-Training Examination (ITE) scores for PGY 1 residents. Methods: From 2013 to 2020, we developed and implemented Milestone Evaluation Day (MED), a simulation-based day and written exam assessing PGY 1 EM residents during their first month on the 23 ACGME 1.0 milestones. MED stations included a history and physical with verbal presentation, patient simulation, vascular access, wound management, and airway management. MED, Clinical Competency Committee-generated (CCC-generated) milestone, and ABEM ITE scores were averaged and compared using Pearson's correlation coefficient. Results: Of 112 PGY 1 EM residents, 110 (98%) were analyzed over an 8-year period. We observed a moderate positive correlation between MED and CCC-generated milestone scores (r = .34, p < .001). There was a weak, statistically nonsignificant positive correlation between MED and ABEM ITE scores (r = .13, p = .17). Discussion: An early assessment of EM milestones in the PGY 1 year can assist in predicting CCC-generated milestone scores for PGY 1 residents.
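The correlations reported here are plain Pearson product-moment coefficients. A self-contained sketch with made-up score pairs, not the MED data:

```python
def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical paired scores: early (MED-style) assessment vs. midyear milestones.
med = [55, 60, 62, 70, 75]
ccc = [2.0, 2.5, 2.4, 3.0, 3.1]
r = pearson_r(med, ccc)
```

r ranges from -1 (perfect inverse) through 0 (no linear association) to +1 (perfect agreement); the study's r = .34 falls in the conventional "moderate" band.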


Subjects
Emergency Medicine, Internship and Residency, Humans, United States, Educational Measurement, Graduate Medical Education, Accreditation, Emergency Medicine/education
13.
Radiol Technol; 95(3): 175-187, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38479770

ABSTRACT

PURPOSE: To analyze a radiography program's use of the Health Education Systems Incorporated Admission Assessment (HESI A2) and the HESI Exit Exam in preparing graduates to take the American Registry of Radiologic Technologists (ARRT) certification and registration exam for radiography. METHODS: The program collected exam scores from the HESI Exit Exam for radiography and the ARRT radiography certification and registration exam over a 10-year period. The study included scores of 171 students who graduated from the radiography education program. The program administered the HESI A2 exam during the last 4 years of the study period, covering 81 students. The authors analyzed the data using mean differences, correlations, and a receiver operating characteristic (ROC) curve analysis. RESULTS: HESI A2 scores correlated 0.58 with final HESI Exit Exam scores and 0.64 with ARRT exam scores, which are very high correlations in an admissions context. Final HESI Exit Exam scores correlated 0.73 with ARRT exam scores, also a strong correlation for predicting ARRT exam success. More than 94% of students who scored above the recommended performance level of 750 on the second HESI Exit Exam passed the ARRT exam on the first attempt. The ROC curve analysis indicated that the final HESI Exit Exam score was a strong predictor of pass or fail status on the ARRT exam. DISCUSSION: The HESI A2 and Exit Exam are effective measurement tools when used with cogent testing policies. Such policies include strong proctoring practices, such as rigorous in-person testing or online proctoring with an attentive, live proctor. Having practice exam results count for a reasonable share of a course grade (e.g., less than 30%) could also be a good policy for the HESI Exit Exam. CONCLUSION: The HESI A2 and Exit Exam are effective tools for helping radiography educators select students for admission and measure student knowledge to help achieve positive certification outcomes.
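The ROC analysis described here reduces to ranking exam scores against pass/fail outcomes. A sketch using the rank (Mann-Whitney) formulation of the area under the ROC curve, with invented scores rather than the program's records:

```python
def roc_auc(scores, passed):
    """AUC as the probability that a randomly chosen passing student
    outscores a randomly chosen failing one (ties count half)."""
    pos = [s for s, p in zip(scores, passed) if p]
    neg = [s for s, p in zip(scores, passed) if not p]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical Exit Exam scores and first-attempt pass outcomes.
scores = [620, 700, 760, 810, 880, 905]
passed = [False, False, True, True, True, True]
print(roc_auc(scores, passed))  # 1.0: scores perfectly separate the groups
```

An AUC of 0.5 means the score carries no pass/fail information; values approaching 1.0 support the article's claim that the Exit Exam is a strong predictor.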


Subjects
Certification, Educational Measurement, United States, Humans, Radiography, Educational Measurement/methods, ROC Curve, Educational Status
16.
J Neurosci Nurs; 56(3): 86-91, 2024 May 1.
Article in English | MEDLINE | ID: mdl-38451926

ABSTRACT

ABSTRACT: BACKGROUND: To measure the effectiveness of an educational intervention, it is essential to develop high-quality, validated tools to assess a change in knowledge or skills after an intervention. An identified gap within the field of neurology is the lack of a universal test to examine knowledge of neurological assessment. METHODS: This instrument development study was designed to determine whether neuroscience knowledge as demonstrated in a Neurologic Assessment Test (NAT) was normally distributed across healthcare professionals who treat patients with neurologic illness. The variables of time, knowledge, accuracy, and confidence were individually explored and analyzed in SAS. RESULTS: The mean (standard deviation) time spent by 135 participants to complete the NAT was 12.9 (3.2) minutes. The mean knowledge score was 39.5 (18.2), mean accuracy 46.0 (15.7), and mean confidence 84.4 (24.4). Despite comparatively small standard deviations, Shapiro-Wilk tests indicate that time spent, knowledge, accuracy, and confidence are nonnormally distributed (P < .0001). Cronbach's α was 0.7816 considering all three measures (knowledge, accuracy, and confidence); this improved to 0.8943 when only knowledge and accuracy were included in the model. Time spent was positively associated with accuracy (r² = 0.04, P < .05), knowledge was positively associated with accuracy (r² = 0.6543, P < .0001), and knowledge was positively associated with confidence (r² = 0.4348, P < .0001). CONCLUSION: The scores for knowledge, confidence, and accuracy each had a slightly skewed distribution around a point estimate with a standard deviation smaller than the mean, suggesting initial content validity of the NAT. There is adequate initial construct validity to support using the NAT as an outcome measure for projects that measure change in knowledge. Although improvements can be made, the NAT has adequate construct and content validity for initial use.


Subjects
Health Personnel, Neurologic Examination, Humans, Neurologic Examination/standards, Neurologic Examination/methods, Health Personnel/education, Reproducibility of Results, Clinical Competence/standards, Female, Male, Adult, Neuroscience Nursing, Health Knowledge, Attitudes, Practice, Nervous System Diseases/nursing, Nervous System Diseases/diagnosis, Educational Measurement/methods, Educational Measurement/standards
17.
Clin Exp Nephrol; 28(5): 465-469, 2024 May.
Article in English | MEDLINE | ID: mdl-38353783

ABSTRACT

BACKGROUND: Large language models (LLMs) have driven advances in artificial intelligence. While LLMs have demonstrated high performance on general medical examinations, their performance in specialized areas such as nephrology is unclear. This study aimed to evaluate ChatGPT and Bard for potential nephrology applications. METHODS: Ninety-nine questions from the Self-Assessment Questions for Nephrology Board Renewal from 2018 to 2022 were presented to two versions of ChatGPT (GPT-3.5 and GPT-4) and to Bard. We calculated correct answer rates for the five years overall, for each year, and by question category, and checked whether they exceeded the pass criterion. The correct answer rates were also compared with those of nephrology residents. RESULTS: The overall correct answer rates for GPT-3.5, GPT-4, and Bard were 31.3% (31/99), 54.5% (54/99), and 32.3% (32/99), respectively; GPT-4 significantly outperformed both GPT-3.5 (p < 0.01) and Bard (p < 0.01). GPT-4 met the pass criterion in three of the five years, barely meeting the minimum threshold in two. GPT-4 performed significantly better than GPT-3.5 and Bard on problem-solving, clinical, and non-image questions. GPT-4's performance fell between that of third- and fourth-year nephrology residents. CONCLUSIONS: GPT-4 outperformed GPT-3.5 and Bard and met the Nephrology Board renewal standards in specific years, albeit marginally. These results highlight LLMs' potential and limitations in nephrology. As LLMs advance, nephrologists should understand their performance characteristics for future applications.
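The head-to-head comparison of correct-answer rates (GPT-4's 54/99 vs. GPT-3.5's 31/99) can be checked with a standard two-proportion z-test. The abstract does not state which test the authors used, so this is only an illustrative recomputation:

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with a pooled variance estimate.
    Returns (z, p_value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided normal-tail p-value via the error function.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

z, p = two_prop_z(54, 99, 31, 99)  # GPT-4 vs. GPT-3.5 correct answers
```

The resulting z is about 3.3 with p below 0.001, consistent with the reported p < 0.01.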


Subjects
Nephrology, Self-Assessment (Psychology), Humans, Educational Measurement, Specialty Boards, Clinical Competence, Artificial Intelligence
18.
CBE Life Sci Educ; 23(1): ar11, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38306615

ABSTRACT

Many students who enroll in a public U.S. 4-year college will not graduate. The odds of completing a college degree are even lower for students who have been marginalized in higher education, especially in Science, Technology, Engineering, and Math (STEM) fields. Can undergraduate research increase a student's likelihood of graduating college and close educational equity gaps in college completion? To answer this question, we drew on data from six public U.S. universities (N = 120,308 students) and used propensity score matching to generate a comparison group for analyses. We conducted logistic regressions on graduation rates and equity gaps at 4 and 6 years, comparing the matched group with undergraduate researchers in STEM (n = 2,727). Compared with like-peers, and controlling for background characteristics and prior academic performance, students who participated in undergraduate research were twice as likely to graduate in 4 years and over 10 times as likely to graduate in 6 years. We also found that equity gaps in 4-year graduation rates for students of color, low-income students, and first-generation students were cut in half for undergraduate researchers; at 6 years, these gaps were completely closed. As we seek ways to close education gaps and increase graduation rates, undergraduate research can be a meaningful practice to improve student success.
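Propensity score matching as used here pairs each undergraduate researcher with a similar non-participant before comparing outcomes. A greedy 1:1 nearest-neighbour sketch, assuming propensity scores have already been estimated (the ids and scores below are invented, not the study's data):

```python
def nn_match(treated, control, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching without replacement.
    treated/control map unit id -> estimated propensity score;
    returns a list of (treated_id, control_id) pairs."""
    pool = dict(control)
    pairs = []
    for t_id, t_ps in sorted(treated.items(), key=lambda kv: kv[1]):
        if not pool:
            break
        c_id = min(pool, key=lambda c: abs(pool[c] - t_ps))
        if abs(pool[c_id] - t_ps) <= caliper:   # enforce a caliper
            pairs.append((t_id, c_id))
            del pool[c_id]                      # match without replacement
    return pairs

researchers = {"r1": 0.30, "r2": 0.62}
comparison = {"c1": 0.31, "c2": 0.60, "c3": 0.90}
print(nn_match(researchers, comparison))  # [('r1', 'c1'), ('r2', 'c2')]
```

The caliper discards treated units with no sufficiently similar control, trading sample size for comparability; outcome models (here, logistic regressions) are then fit on the matched pairs.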


Subjects
Engineering, Students, Humans, Engineering/education, Technology/education, Educational Measurement, Mathematics
19.
Article in English | MEDLINE | ID: mdl-38387881

ABSTRACT

PURPOSE: Despite educational mandates to assess resident teaching competence, few instruments with validity evidence exist for this purpose, and existing instruments do not allow faculty to assess resident-led teaching in a large-group format or whether teaching was interactive. This study gathers validity evidence on the Resident-led Large Group Teaching Assessment Instrument (Relate), an instrument used by faculty to assess resident teaching competency. Relate comprises 23 behaviors divided into six elements: learning environment, goals and objectives, content of talk, promotion of understanding and retention, session management, and closure. METHODS: Messick's unified validity framework was used for this study. Investigators used video recordings of resident-led teaching from three pediatric residency programs to develop Relate and a rater guidebook. Faculty were trained on instrument use through frame-of-reference training. Resident teaching at all sites was video-recorded during 2018-2019, and two trained faculty raters assessed each video. Descriptive statistics on performance were obtained. Sources of validity evidence include rater training effect (response process), reliability and variability (internal structure), and impact on Milestones assessment (relations to other variables). RESULTS: Forty-eight videos, from 16 residents, were analyzed. Rater training improved inter-rater reliability from 0.04 to 0.64. The Φ-coefficient reliability was 0.50. There was a significant correlation between overall Relate performance and the pediatric teaching Milestone (r = 0.34, P = .019). CONCLUSION: Relate provides validity evidence with sufficient reliability to measure resident-led large-group teaching competence.


Subjects
Internship and Residency, Humans, United States, Child, Reproducibility of Results, Clinical Competence, Educational Measurement, Faculty
20.
BMC Med Educ; 24(1): 191, 2024 Feb 25.
Article in English | MEDLINE | ID: mdl-38403582

ABSTRACT

BACKGROUND: The global outbreak of coronavirus disease (COVID-19) led medical universities in China to conduct online teaching. This study aimed to assess the effectiveness of a blended learning approach combining online teaching and virtual reality technology in dental education, and to evaluate the acceptance of this approach among dental teachers and students. METHODS: The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) checklist was followed. Perspectives on online and virtual reality technology education were collected via questionnaires from 157 students, and opinions on online teaching from 54 teachers. Additionally, 101 students in the 2015 cohort received the traditional teaching method (TT group), while 97 students in the 2017 cohort received blended learning combining online teaching and virtual reality technology (BL group). The graduation examination results of the two groups were compared. RESULTS: The questionnaire results showed that most students were satisfied with the online course and the virtual simulation platform teaching, while teachers held conservative and neutral attitudes toward online teaching. Although the theoretical score of the BL group on the final exam was greater than that of the TT group, the difference was not significant (P = 0.805). The skill operation score of the BL group on the final exam was significantly lower than that of the TT group (P = 0.004). The overall score of the BL group was lower than that of the TT group (P = 0.018), but the difference was not statistically significant (P = 0.112). CONCLUSIONS: The blended learning approach combining online teaching and virtual reality technology plays a positive role in students' learning and is useful and effective in dental education.


Subjects
Distance Education, Humans, Cross-Sectional Studies, Distance Education/methods, Learning, Educational Measurement/methods, Dental Education/methods