Results 1 - 8 of 8
1.
Adv Health Sci Educ Theory Pract ; 29(4): 1309-1321, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38224412

ABSTRACT

Given the high prevalence of multiple-choice examinations with formula scoring in medical training, several studies have tried to identify factors beyond students' level of knowledge that influence their response patterns. This study aims to measure the effect of students' attitudes towards risk and ambiguity on their numbers of correct, wrong, and blank answers. In October 2018, 233 third-year medical students from the Faculty of Medicine of the University of Porto, in Porto, Portugal, completed a questionnaire assessing their attitudes towards risk and ambiguity, and their aversion to ambiguity in medicine. Simple and multiple regression models, and the respective regression coefficients, were used to measure the association between the students' attitudes and their answers in two examinations they had taken in June 2018. An intermediate level of ambiguity aversion in medicine (as opposed to a very high or very low level) was associated with a significant increase in the number of correct answers and a decrease in the number of blank answers in the first examination. In the second examination, high levels of ambiguity aversion in medicine were associated with a decrease in the number of wrong answers. Attitude towards risk, tolerance for ambiguity, and gender showed no significant association with the numbers of correct, wrong, and blank answers in either examination. Students' ambiguity aversion in medicine is correlated with their performance in multiple-choice examinations with negative marking. We therefore suggest planning and implementing counselling sessions with medical students on the possible impact of ambiguity aversion on their performance in multiple-choice questions with negative marking.


Subjects
Educational Measurement , Medical Students , Humans , Portugal , Cross-Sectional Studies , Female , Male , Medical Students/psychology , Medical Schools , Undergraduate Medical Education , Young Adult , Surveys and Questionnaires , Adult , Risk , Attitude
2.
BMC Med Educ ; 17(1): 192, 2017 Nov 09.
Article in English | MEDLINE | ID: mdl-29121888

ABSTRACT

BACKGROUND: Progress testing is an assessment tool used to periodically assess all students at the end-of-curriculum level. Because students cannot know everything, it is important that they recognize their lack of knowledge. For that reason, the formula-scoring method has usually been used. However, where partial knowledge needs to be taken into account, the number-right scoring method is used. Research comparing both methods has yielded conflicting results. As far as we know, in all these studies, Classical Test Theory or Generalizability Theory was used to analyze the data. In contrast to these studies, we will explore the use of the Rasch model to compare both methods. METHODS: A 2 × 2 crossover design was used in a study where 298 students from four medical schools participated. A sample of 200 previously used questions from the progress tests was selected. The data were analyzed using the Rasch model, which provides fit parameters, reliability coefficients, and response option analysis. RESULTS: The fit parameters were in the optimal interval ranging from 0.50 to 1.50, and the means were around 1.00. The person and item reliability coefficients were higher in the number-right condition than in the formula-scoring condition. The response option analysis showed that the majority of dysfunctional items emerged in the formula-scoring condition. CONCLUSIONS: The findings of this study support the use of number-right scoring over formula scoring. Rasch model analyses showed that tests with number-right scoring have better psychometric properties than formula scoring. However, choosing the appropriate scoring method should depend not only on psychometric properties but also on self-directed test-taking strategies and metacognitive skills.
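The analysis above rests on the dichotomous Rasch model, in which the probability of a correct response depends only on the difference between person ability and item difficulty; the reported fit parameters are residual-based statistics under this model. A minimal illustration of the response function (a generic sketch, not the authors' code):

```python
import math

def rasch_probability(theta, b):
    """Dichotomous Rasch model: probability that a person of
    ability theta answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals item difficulty, the model gives a 50% chance;
# an ability advantage of one logit raises it to about 73%.
print(rasch_probability(0.0, 0.0))            # 0.5
print(round(rasch_probability(1.0, 0.0), 3))  # 0.731
```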


Subjects
Undergraduate Medical Education , Educational Measurement/methods , Psychometrics , Cross-Over Studies , Humans , Netherlands
3.
Adv Health Sci Educ Theory Pract ; 20(5): 1325-38, 2015 Dec.
Article in English | MEDLINE | ID: mdl-25912621

ABSTRACT

Formula scoring (FS) is the use of a don't-know option (DKO) with subtraction of points for wrong answers. Its effect on the construct validity and reliability of progress test scores is a subject of discussion. Choosing a DKO may be affected not only by knowledge level but also by risk-taking tendency, and may thus introduce construct-irrelevant variance into the knowledge measurement. On the other hand, FS may result in more reliable test scores. To evaluate the impact of FS on construct validity and reliability of progress test scores, a progress test for radiology residents was divided into two tests of 100 parallel items (A and B). Each test had an FS and a number-right (NR) version: A-FS, B-FS, A-NR, and B-NR. Participants (337) were randomly divided into two groups. One group took test A-FS followed by B-NR, and the second group took B-FS followed by A-NR. Evidence for impaired construct validity was sought in a hierarchical regression analysis by investigating how much of the participants' FS-score variance was explained by the DKO score, compared to the contribution of the knowledge level (NR score), while controlling for group, gender, and training length. Cronbach's alpha was used to estimate NR- and FS-score reliability per year group. The NR score was found to explain 27% of the variance of FS [F(1,332) = 219.2, p < 0.0005]; the DKO score and the interaction of DKO and gender explained 8% [F(2,330) = 41.5, p < 0.0005], and the interaction of DKO and NR 1.6% [F(1,329) = 16.6, p < 0.0005], supporting our hypothesis that FS introduces construct-irrelevant variance into the knowledge measurement. However, NR scores showed considerably lower reliabilities than FS scores (mean year-test group Cronbach's alphas were 0.62 and 0.74, respectively). Decisions about FS with progress tests should be a careful trade-off between systematic and random measurement error.
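For reference, the two scoring rules contrasted throughout these abstracts can be sketched as follows; the 1/(k-1) penalty is the classic correction that sets the expected value of blind guessing to zero (a generic sketch, not code from the study):

```python
def number_right_score(correct, wrong, blank):
    """Number-right (NR) scoring: one point per correct answer;
    wrong and blank answers both score zero."""
    return correct

def formula_score(correct, wrong, blank, options=5):
    """Classic formula scoring (FS): each wrong answer costs
    1/(k-1) points, so blind guessing has zero expected gain;
    blanks (don't-know option) score zero."""
    return correct - wrong / (options - 1)

# A candidate who answers 60 items correctly, 20 wrongly, and
# leaves 20 blank on a 100-item, five-option test:
print(number_right_score(60, 20, 20))   # 60
print(formula_score(60, 20, 20))        # 55.0
```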


Subjects
Educational Measurement/methods , Educational Measurement/standards , Internship and Residency/methods , Internship and Residency/standards , Radiology/education , Cross-Over Studies , Female , Humans , Knowledge , Male , Reproducibility of Results , Sex Factors
4.
Psychometrika ; 82(1): 1-16, 2017 03.
Article in English | MEDLINE | ID: mdl-27844268

ABSTRACT

Under a formula-scoring instruction (FSI), test takers omit items. If students are encouraged to answer every item (under a rights-only scoring instruction, ROI), the score distribution will be different. In this paper, we formulate a simple statistical model to predict the ROI score distribution using the FSI data. The estimation error is also provided. In addition, a preliminary investigation of the probability of guessing correctly on omitted items, and of its sensitivity, is presented. Based on the data used in this paper, the probability of guessing correctly may be close to or slightly greater than the chance score.


Subjects
Educational Measurement , Models, Statistical , Probability , Humans , Mathematics/education , Psychometrics , Students
5.
Anat Sci Educ ; 8(3): 242-8, 2015.
Article in English | MEDLINE | ID: mdl-25053378

ABSTRACT

In theory, formula scoring methods increase the reliability of multiple-choice tests in comparison with number-right scoring. This study aimed to evaluate the impact of the formula scoring method in clinical anatomy multiple-choice examinations and to compare it with the number-right scoring method, hoping to reach an evidence-based decision about test scoring rules. Two hundred and ninety-eight students completed an examination in clinical anatomy which included 40 multiple-choice questions with five response options each. Among these, 245 (82.2%) examinees were assessed according to the number-right scoring method (group A), while 53 (17.8%) were assessed according to the formula scoring method (group B). The prevalence of passing was significantly higher in group A than in group B after correction of the pass and fail cutoffs for guessing (84.9% vs. 62.3%, P = 0.005), with similar reliability in both groups (0.7 vs. 0.8, P = 0.094). Pearson correlation coefficients between practical and theoretical examination scores were 0.66 [95% CI = 0.58-0.73] and 0.72 [95% CI = 0.56-0.83] for groups A and B, respectively. Based solely on the reliability and validity assessments, the test-maker could therefore use either scoring rule; however, if the test-maker also takes into account students' ability to deduce answers from partial knowledge, then the number-right scoring rule is most appropriate.


Subjects
Anatomy/education , Decision Making , Educational Measurement/standards , Research Design/standards , Adult , Choice Behavior , Undergraduate Medical Education/methods , Humans , Reproducibility of Results , Medical Students
6.
Psychometrika ; 80(4): 1105-22, 2015 Dec.
Article in English | MEDLINE | ID: mdl-25142256

ABSTRACT

We investigate the implications of penalizing incorrect answers to multiple-choice tests, from the perspective of both test-takers and test-makers. To do so, we use a model that combines a well-known item response theory model with prospect theory (Kahneman and Tversky, Prospect theory: An analysis of decision under risk, Econometrica 47:263-91, 1979). Our results reveal that when test-takers are fully informed of the scoring rule, the use of any penalty has detrimental effects for both test-takers (they are always penalized in excess, particularly those who are risk averse and loss averse) and test-makers (the bias of the estimated scores, as well as the variance and skewness of their distribution, increase as a function of the severity of the penalty).
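The break-even logic behind the penalty can be illustrated with a small sketch (not the paper's model, which combines an item response theory model with a prospect-theory value function): a risk-neutral test-taker should answer whenever the expected score from answering exceeds the zero score for omitting, so risk-averse or loss-averse test-takers omit items with positive expected value and are, as the abstract puts it, penalized in excess.

```python
def expected_answer_gain(p_correct, penalty):
    """Expected score from answering one item: +1 with probability
    p_correct, -penalty otherwise. Omitting the item scores 0."""
    return p_correct - (1 - p_correct) * penalty

# With the classic 1/(k-1) penalty on a four-option item, a
# risk-neutral test-taker breaks even exactly at chance level:
print(expected_answer_gain(0.25, 1 / 3))  # 0.0 -> indifferent
print(expected_answer_gain(0.40, 1 / 3))  # ~0.2 -> answering pays
```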


Subjects
Decision Theory , Educational Measurement/methods , Psychometrics , Test Taking Skills/psychology , Humans , Risk-Taking
7.
J Dent Educ ; 78(12): 1643-54, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25480280

ABSTRACT

Creating a learning environment that fosters student acquisition of self-assessment behaviors and skills is critically important in the education and training of health professionals. Self-assessment is a vital component of competent practice and lifelong learning. This article proposes applying a version of confidence scoring of multiple-choice questions as one avenue to address this crucial educational objective: enabling students to recognize and admit what they do not know. The confidence scoring algorithm assigns one point for a correct answer, deducts fractional points for an incorrect answer, but awards fractional points for leaving the question unanswered in admission of being unsure of the correct answer. The magnitude of the reward relative to the deduction is selected such that the expected gain from random guessing, even after elimination of all but one distractor, is never greater than the reward. Curricular implementation of this confidence scoring algorithm should motivate health professions students to develop self-assessment behaviors and enable them to acquire the skills necessary to critically evaluate the extent of their current knowledge throughout their professional careers. This is a professional development competency emphasized in the educational standards of the Commission on Dental Accreditation (CODA).
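A minimal sketch of the scoring rule described above; the particular deduction and reward values are illustrative, since the abstract only fixes the relation between them (the reward must be at least the expected gain from guessing, even with all but one distractor eliminated):

```python
def confidence_score(correct, wrong, blank, deduction=0.5, reward=0.25):
    """Confidence scoring: +1 per correct answer, -deduction per
    wrong answer, +reward per question left blank. Parameter values
    are illustrative, not the article's."""
    return correct - deduction * wrong + reward * blank

def guessing_beats_reward(p_correct, deduction, reward):
    """True if guessing with success probability p_correct has a
    higher expected value than taking the reward for a blank."""
    expected_guess = p_correct - (1 - p_correct) * deduction
    return expected_guess > reward

# Worst case for the test-maker: the student has eliminated all but
# one distractor, so a guess succeeds with probability 1/2. With
# these illustrative parameters, guessing still does not beat
# admitting uncertainty:
print(guessing_beats_reward(0.5, deduction=0.5, reward=0.25))  # False
print(confidence_score(30, 6, 4))  # 28.0
```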


Subjects
Education, Dental , Educational Measurement/methods , Self-Assessment (Psychology) , Dental Students , Achievement , Algorithms , Calibration , Competency-Based Education , Humans , Learning , Program Development , Self Concept , Self-Evaluation Programs
8.
J Dent Educ ; 77(12): 1593-609, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24319131

ABSTRACT

How many incorrect response options (known as distractors) to use in multiple-choice questions has been the source of considerable debate in the assessment literature, especially regarding the influence on the likelihood of students guessing the correct answer. This study compared distractor use by second-year dental students in three successive oral and maxillofacial pathology classes that had three different examination question formats and scoring rules, resulting in different levels of academic performance. One class was given all multiple-choice questions; the other two were given half multiple-choice questions, with and without formula scoring, and half un-cued short-answer questions. A cutoff of selection by at least 1 percent of the students was found to identify functioning distractors better than higher cutoffs. The average number of functioning distractors differed among the three classes and did not always correspond to differences in class scores. Increased numbers of functioning distractors were associated with higher question discrimination and greater question difficulty. Fewer functioning distractors fostered more effective student guessing and overestimation of academic achievement. Appropriate identification of functioning distractors is essential for improving examination quality and for better estimating actual student knowledge through retrospective use of formula scoring, where the amount subtracted for incorrect answers is based on the harmonic mean number of functioning distractors.
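The retrospective correction in the last sentence can be sketched as follows; the function name and the distractor counts are hypothetical, used only to show how a harmonic-mean-based penalty replaces the nominal option count:

```python
from statistics import harmonic_mean

def retrospective_formula_score(correct, wrong, distractor_counts):
    """Retrospective formula scoring: the penalty per wrong answer
    is 1/d, where d is the harmonic mean number of *functioning*
    distractors across the exam's items, rather than the nominal
    (number of options - 1)."""
    d = harmonic_mean(distractor_counts)
    return correct - wrong / d

# Hypothetical five-item exam whose functioning-distractor counts
# range from 1 to 4; the harmonic mean (~1.94) yields a stiffer
# penalty than the nominal count of 4 would:
print(retrospective_formula_score(30, 10, [4, 2, 3, 1, 2]))  # ~24.83
```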


Subjects
Education, Dental/standards , Educational Measurement/methods , Pathology, Oral/education , Dental Students , Achievement , Algorithms , Educational Measurement/standards , Humans , Learning , Probability