S. Fu and M. Desmarais (Canada)
Assessment, e-learning, Bayesian theory, computerized adaptive testing (CAT), multidimensional, item response theory (IRT)
Effective and efficient assessment of a learner’s proficiency has always been a high priority for intelligent e-learning environments. The fields of psychometrics and Computerized Adaptive Testing (CAT) provide a strong theoretical and practical basis for skills assessment, of which Item Response Theory (IRT) is the best-recognized approach. Assessing multiple skills at once is called for in e-learning environments, because they rely on fine-grained skill models to track a learner’s knowledge state and recommend an appropriate study path; multidimensional IRT (MIRT) has emerged as a candidate for this task. However, MIRT is computationally expensive. We propose a simpler multidimensional model based on classical Bayesian decision theory, extending Rudner’s unidimensional Bayesian decision-theoretic approach. The theoretical basis is presented for a binary classification (master/non-master) test. The model is evaluated through simulations with pseudo-random data samples spanning 6 skill dimensions. We first show that the model can exploit multidimensional test items to accelerate assessment and that it outperforms its unidimensional counterpart. We then compare its classification accuracy with both the unidimensional version and the MIRT approach. The results show that it performs much better than the existing unidimensional model and at least as well as MIRT, even though its complexity and computational burden are lower than MIRT’s.
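To make the classification idea concrete, below is a minimal sketch of a Rudner-style Bayesian master/non-master classifier extended to several skill dimensions under a naive independence assumption. The function name, priors, and item parameters are illustrative assumptions, not the paper's actual model or values.

```python
import numpy as np

# Sketch of Bayesian master/non-master classification per skill,
# in the spirit of Rudner's decision-theoretic approach, extended to
# multiple skills by assuming item responses are conditionally
# independent given the mastery state (an illustrative assumption).

def classify_skills(responses, p_correct_master, p_correct_nonmaster,
                    prior_master=0.5):
    """Return posterior P(master) for each skill given binary responses.

    responses: (n_items,) array of 0/1 answers.
    p_correct_master, p_correct_nonmaster: (n_items, n_skills) arrays of
        the probability of answering each item correctly when the skill
        is mastered / not mastered (items may load on several skills).
    """
    responses = np.asarray(responses)[:, None]              # (n_items, 1)
    # Likelihood of each observed response under the two hypotheses.
    lik_master = np.where(responses == 1,
                          p_correct_master, 1 - p_correct_master)
    lik_nonmaster = np.where(responses == 1,
                             p_correct_nonmaster, 1 - p_correct_nonmaster)
    # Bayes' rule with a product of item likelihoods, normalized per skill.
    post_master = prior_master * lik_master.prod(axis=0)
    post_nonmaster = (1 - prior_master) * lik_nonmaster.prod(axis=0)
    return post_master / (post_master + post_nonmaster)

# Toy usage: 4 items loading on 2 skills; classify as master when the
# posterior exceeds 0.5. All numbers are made up for illustration.
p_m = np.array([[0.9, 0.6], [0.8, 0.7], [0.6, 0.9], [0.7, 0.8]])
p_n = np.array([[0.3, 0.4], [0.2, 0.5], [0.4, 0.2], [0.3, 0.3]])
posterior = classify_skills([1, 1, 0, 1], p_m, p_n)
print(posterior, posterior > 0.5)
```

In a CAT setting, such posteriors would be updated after each administered item and used to decide when each skill can be confidently classified; the adaptive item-selection and stopping rules of the paper are not shown here.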