Results 1 - 3 of 3
1.
Behav Res Methods; 55(6): 3260-3280, 2023 Sep.
Article in English | MEDLINE | ID: mdl-36085544

ABSTRACT

Online learning systems can offer customized content catered to individual learners' needs, and have seen growing interest from industry and academia alike in recent years. In contrast to the traditional computerized adaptive testing setting, which has a well-calibrated item bank with new items added periodically, online learning systems have two unique features: (1) the number of items is large, and the items have likely not gone through costly field testing for calibration; and (2) an individual's ability may change as a result of learning. The Elo rating system has been recognized as an effective method for fast updating of item and person parameters in online learning systems to enable personalized learning. However, the updating parameter in Elo has to be tuned post hoc, and Elo is suitable only for the Rasch model. In this paper, we propose a moment-matching Bayesian update algorithm to estimate item and person parameters on the fly. With sequentially updated item and person parameters, a modified maximum posterior weighted information criterion (MPWI) is proposed to adaptively assign items to individuals. The Bayesian update algorithm, together with the MPWI, is validated in a simulated multiple-session online learning setting, and the results show that the new combination achieves fast and reasonably accurate parameter estimates, comparable to those obtained with random selection, match-difficulty selection, and traditional online calibration. Moreover, the combination still functions reasonably well with as little as 20% of the items in the bank pre-calibrated.


Subjects
Algorithms; Education, Distance; Humans; Bayes Theorem; Calibration; Online Systems; Psychometrics/methods
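
The abstract above takes the standard Rasch-based Elo rating system as its point of comparison. A minimal sketch of that baseline update is given below for reference; it is not the authors' moment-matching Bayesian algorithm or their MPWI selection rule, and the function name and default step size k are illustrative assumptions only.

```python
import math

def elo_update(theta, b, correct, k=0.3):
    """One Rasch-based Elo step: update the learner's ability (theta) and the
    item's difficulty (b) after a scored response (correct = 1 or 0).
    k is the step-size parameter the abstract notes must be tuned post hoc."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))   # Rasch probability of a correct response
    theta_new = theta + k * (correct - p)      # move ability toward the observed outcome
    b_new = b - k * (correct - p)              # move difficulty in the opposite direction
    return theta_new, b_new

# Example: a learner at theta = 0.0 answers an item of difficulty b = 0.5 correctly.
theta, b = elo_update(0.0, 0.5, correct=1)
print(round(theta, 3), round(b, 3))            # ability rises, difficulty drops slightly
```

This is the kind of on-the-fly updating of both person and item parameters that the paper replaces with a Bayesian alternative.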
2.
Front Psychol; 10: 620, 2019.
Article in English | MEDLINE | ID: mdl-30984068

ABSTRACT

Adaptive learning systems have received increasing attention because they can provide personalized instruction tailored to the behaviors and needs of individual learners. To reach this goal, an assessment system that monitors each learner's ability change in real time is needed. The Elo Rating System (ERS), a popular scoring algorithm for paired competitions, has recently been considered a fast and flexible method for assessing learning progress in online learning environments. However, it has been argued that a standard ERS may be problematic because of the multidimensional nature of the abilities embedded in learning materials. To handle this issue, we propose a system that incorporates a multidimensional item response theory (MIRT) model into the ERS. The basic idea is that instead of updating a single ability parameter from the Rasch model, our method simultaneously updates multiple ability parameters based on a compensatory MIRT model, resulting in a multidimensional extension of the ERS ("M-ERS"). To evaluate the approach, three simulation studies were conducted. The results suggest that an ERS that incorrectly assumes unidimensionality has substantially lower prediction accuracy than the M-ERS. Accounting for both speed and accuracy in the M-ERS is shown to perform better than using accuracy data only. An application further illustrates the method using real-life data from a popular educational platform for practicing math skills.
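
To make the M-ERS idea concrete, here is a rough sketch of a multidimensional Elo-style step under a compensatory MIRT model. It illustrates the general idea only, not the authors' exact update rule (which, per the abstract, also exploits response speed); the variable names, the fixed step size, and the gradient-style weighting by the item's discrimination vector are assumptions.

```python
import numpy as np

def m_ers_update(theta, a, d, correct, k=0.3):
    """One multidimensional Elo-style step under a compensatory MIRT model.
    theta : the learner's current ability vector
    a     : the item's discrimination (loading) vector
    d     : the item's intercept (easiness)
    correct : observed response, 1 or 0"""
    p = 1.0 / (1.0 + np.exp(-(a @ theta + d)))  # compensatory MIRT success probability
    return theta + k * (correct - p) * a        # update every dimension at once,
                                                # weighted by how much the item loads on it

theta = np.zeros(2)                    # two latent skills, e.g. addition and fractions
a, d = np.array([0.8, 0.2]), -0.5      # item mainly measures the first skill
print(m_ers_update(theta, a, d, correct=1))
```

A correct answer moves every skill dimension upward in proportion to the item's loadings, which is what distinguishes this from the unidimensional Rasch-based ERS.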

3.
Behav Res Methods; 51(2): 895-909, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30511157

ABSTRACT

Electronic learning systems have received increasing attention because they are easily accessible to many students and can personalize the learning environment in response to students' learning needs. To that end, fast and flexible algorithms that track students' ability changes in real time are desirable. Recently, the Elo rating system (ERS) has been applied and studied in both research and practical settings (Brinkhuis & Maris, 2009; Klinkenberg, Straatemeier, & van der Maas in Computers & Education, 57, 1813-1824, 2011). However, such adaptive algorithms face the cold-start problem: the system does not know a new student's ability level at the beginning of the learning stage. The cold-start problem may also occur when a student leaves the e-learning system for a while and returns (i.e., after a between-session period). Because external factors could influence the student's ability level during that period, there is again considerable uncertainty about it. To address these practical concerns, in this study we propose alternative approaches to cold-start issues in the e-learning environment. In particular, we propose making the ERS more efficient by using explanatory item response theory modeling to estimate students' ability levels on the basis of their background information and past learning trajectories. A simulation study was conducted under various conditions, and the results showed that the proposed approach substantially reduces ability estimation errors. We illustrate the approach using real data from a popular learning platform.


Subjects
Algorithms; Computer-Assisted Instruction/methods; Learning; Education, Distance/methods; Humans; Models, Theoretical; Students
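
To illustrate the cold-start idea in this last abstract, the sketch below seeds a new learner's starting ability from background covariates instead of a fixed default, before the usual Elo updates take over. The plain least-squares regression is an illustrative stand-in for the paper's explanatory item response theory model, and the variable names and toy data are assumptions.

```python
import numpy as np

def fit_cold_start(X, theta_hat):
    """Regress past learners' estimated abilities (theta_hat) on their
    background features X (e.g. grade, age, prior-session performance)."""
    X1 = np.hstack([np.ones((X.shape[0], 1)), X])          # add an intercept column
    beta, *_ = np.linalg.lstsq(X1, theta_hat, rcond=None)  # ordinary least squares
    return beta

def initial_theta(beta, x_new):
    """Predicted starting ability for a new learner with features x_new."""
    return beta[0] + beta[1:] @ x_new

# Toy data: 100 past learners, 2 background features each.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
theta_hat = 0.5 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(scale=0.3, size=100)

beta = fit_cold_start(X, theta_hat)
print(initial_theta(beta, np.array([1.0, 0.0])))  # starts near 0.5 rather than a blind 0.0
```

Seeding the starting value this way narrows the initial uncertainty, so the subsequent Elo updates have less ground to cover for new or returning learners.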