Results 1 - 2 of 2
1.
BMC Med Educ ; 23(1): 286, 2023 Apr 27.
Article in English | MEDLINE | ID: mdl-37106417

ABSTRACT

BACKGROUND: The American Board of Anesthesiology piloted 3-option multiple-choice items (MCIs) for the 2020 administration of its 150-item subspecialty in-training examinations for Critical Care Medicine (ITE-CCM) and Pediatric Anesthesiology (ITE-PA). The 3-option MCIs were derived from their 4-option counterparts, administered in 2019, by removing the least effective distractor. The purpose of this study was to compare physician performance, response time, and item and exam characteristics between the 4-option and 3-option exams.

METHODS: An independent-samples t-test was used to examine differences in physician percent-correct scores; a paired t-test was used to examine differences in response time and item characteristics. The Kuder-Richardson Formula 20 (KR-20) was used to calculate the reliability of each exam form. Both the traditional method (a distractor being selected by fewer than 5% of examinees and/or showing a positive correlation with total score) and the sliding-scale method (adjusting the frequency threshold for a distractor being chosen by item difficulty) were used to identify non-functioning distractors (NFDs).

RESULTS: Physicians who took the 3-option ITE-CCM (mean = 67.7%) scored 2.1 percentage points higher than those who took the 4-option ITE-CCM (65.7%); accordingly, 3-option ITE-CCM items were significantly easier than their 4-option counterparts. No such difference was found between the 4-option and 3-option ITE-PAs (71.8% versus 71.7%). Item discrimination (4-option ITE-CCM, an average of 0.13; 3-option ITE-CCM, 0.12; 4-option ITE-PA, 0.08; 3-option ITE-PA, 0.09) and exam reliability (0.75 and 0.74 for the 4- and 3-option ITE-CCMs, respectively; 0.62 and 0.67 for the 4- and 3-option ITE-PAs, respectively) were similar between the two formats for both ITEs. On average, physicians spent 3.4 seconds less per item on 3-option items than on 4-option items for the ITE-CCM (55.5 versus 58.9) and 1.3 seconds less for the ITE-PA (46.2 versus 47.5). Using the traditional method, the percentage of NFDs dropped from 51.3% in the 4-option ITE-CCM to 37.0% in the 3-option ITE-CCM, and from 62.7% to 46.0% for the ITE-PA; using the sliding-scale method, it dropped from 36.0% to 21.7% for the ITE-CCM and from 44.9% to 27.7% for the ITE-PA.

CONCLUSIONS: Three-option MCIs function as robustly as their 4-option counterparts. The efficiency gained by spending less time on each item creates an opportunity to increase content coverage within a fixed testing period. The results should be interpreted in the context of exam content and the distribution of examinee abilities.
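The two psychometric procedures named in the METHODS section, KR-20 reliability and the traditional non-functioning-distractor rule, can be sketched as follows. This is an illustrative sketch with made-up data; the function names and toy matrix are mine, not the authors' code.

```python
import numpy as np

def kr20(scored):
    """Kuder-Richardson Formula 20 reliability for dichotomously
    scored items; `scored` is an (examinees x items) 0/1 matrix."""
    k = scored.shape[1]                          # number of items
    p = scored.mean(axis=0)                      # proportion correct per item
    total_var = scored.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1.0 - (p * (1 - p)).sum() / total_var)

def is_nonfunctioning(chose, totals, threshold=0.05):
    """Traditional NFD rule as the abstract states it: a distractor is
    non-functioning if fewer than 5% of examinees select it and/or its
    selection correlates positively with total score."""
    if chose.mean() < threshold:
        return True
    return np.corrcoef(chose, totals)[0, 1] > 0

# Toy data: 5 examinees, 4 items (1 = correct, 0 = incorrect).
scored = np.array([[1, 1, 1, 1],
                   [1, 1, 1, 0],
                   [1, 1, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, 0, 0]])
print(round(kr20(scored), 3))  # 0.907 for this toy matrix
```

A distractor picked mostly by low scorers (negative correlation with total score, selected by more than 5% of examinees) would be kept as functioning under this rule; one that almost nobody picks would be flagged for removal, which is exactly the transformation the 3-option forms applied.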


Subjects
Educational Measurement, Physical Examination, Humans, United States, Child, Educational Measurement/methods, Reproducibility of Results
2.
J Dent Educ ; 77(12): 1593-609, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24319131

ABSTRACT

How many incorrect response options (known as distractors) to use in multiple-choice questions has been the source of considerable debate in the assessment literature, especially regarding its influence on the likelihood of students guessing the correct answer. This study compared distractor use by second-year dental students in three successive oral and maxillofacial pathology classes that were given three different examination question formats and scoring schemes, resulting in different levels of academic performance. One class was given all multiple-choice questions; the other two were given half multiple-choice questions, with and without formula scoring, and half un-cued short-answer questions. A cutoff of use by at least 1 percent of the students was found to identify functioning distractors better than higher cutoffs. The average number of functioning distractors differed among the three classes and did not always correspond to differences in class scores. A greater number of functioning distractors was associated with higher question discrimination and greater question difficulty. Fewer functioning distractors fostered more effective student guessing and overestimation of academic achievement. Appropriate identification of functioning distractors is essential for improving examination quality and for better estimating actual student knowledge through retrospective use of formula scoring, in which the amount subtracted for incorrect answers is based on the harmonic mean number of functioning distractors.
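The formula-scoring correction described in the final sentence can be sketched as follows. This is an illustrative reading of the abstract, not the authors' published procedure; the function name and the example numbers are invented.

```python
from statistics import harmonic_mean

def formula_score(num_right, num_wrong, functioning_per_item):
    """Formula scoring with the penalty based on the harmonic mean
    number of functioning distractors, as the abstract describes:
    each wrong answer subtracts 1/k points, where k is the harmonic
    mean of functioning distractors across items, offsetting the
    expected score gain from blind guessing."""
    k = harmonic_mean(functioning_per_item)
    return num_right - num_wrong / k

# E.g., 40 right, 12 wrong, items averaging 3 functioning distractors:
print(formula_score(40, 12, [3, 3, 3, 3]))  # 36.0
```

Using the harmonic mean rather than the arithmetic mean weights items with few functioning distractors more heavily, which matches the paper's point that such items make guessing more profitable and so warrant a larger per-item penalty.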


Subjects
Education, Dental/standards, Educational Measurement/methods, Pathology, Oral/education, Students, Dental, Achievement, Algorithms, Educational Measurement/standards, Humans, Learning, Probability