Using Bayesian techniques with item response theory to analyze mathematics tests

dc.contributor Belbas, Stavros Apostol
dc.contributor Moore, Robert L.
dc.contributor Tomek, Sara
dc.contributor Wu, Zhijian
dc.contributor.advisor Gleason, Jim
dc.contributor.author Maxwell, Mary
dc.date.accessioned 2017-03-01T16:52:54Z
dc.date.available 2017-03-01T16:52:54Z
dc.date.issued 2013
dc.identifier.other u0015_0000001_0001410
dc.identifier.other Maxwell_alatus_0004D_11706
dc.description Electronic Thesis or Dissertation
dc.description.abstract Due to the cost of a college education, final exams for college-level courses fall under the category of "high-stakes" tests. An inaccurate assessment may result in students paying thousands of dollars to retake a course, scholarships being rescinded, or students taking courses for which they are not prepared, among other undesirable consequences. Therefore, faculty at colleges and universities must understand how reliably these tests measure student knowledge. Traditionally, for large general education courses, faculty use common exams and measure their reliability using Classical Test Theory (CTT). However, the cutoff scores are arbitrarily chosen, and little is known about the accuracy of measurement at these critical scores. A solution to this dilemma is to use Item Response Theory (IRT) models to determine the instrument's reliability at various points along the student ability spectrum. Since cost is always on the mind of faculty and administrators at these schools, we compare free software (Item Response Theory Command Language) with generally accepted commercial software (Xcalibre) in the analysis of College Algebra final exams. With both programs, a Bayesian approach was used: Bayes modal estimates were obtained for item parameters, and EAP (expected a posteriori) estimates were obtained for ability parameters. Model-data fit analysis was conducted using two well-known chi-square fit statistics, with no significant difference in fit found between the programs. Parameter estimates were compared directly, and Item Response Functions were compared using a weighted version of the root mean square error (RMSE) that factors in the ability distribution of examinees; the item response functions produced by the two programs proved comparable. Furthermore, ability estimates from both programs were found to be nearly identical.
Thus, when the assumptions of IRT are met for the two- and three-parameter logistic and generalized partial credit models, the freely available software program is an appropriate choice for the analysis of College Algebra final exams.
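The ability-weighted RMSE comparison of item response functions described in the abstract can be sketched as follows. This is a minimal illustration, not the dissertation's code: the three-parameter logistic (3PL) model, the standard-normal weighting, and the item parameter values are all assumptions made here for demonstration.

```python
import numpy as np

def irf_3pl(theta, a, b, c):
    """Three-parameter logistic item response function: the probability
    of a correct response at ability theta, with discrimination a,
    difficulty b, and pseudo-guessing lower asymptote c."""
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

def weighted_rmse(p1, p2, weights):
    """RMSE between two item response functions evaluated on the same
    ability grid, weighted by the (normalized) examinee ability
    distribution so that well-populated ability levels count more."""
    w = weights / weights.sum()
    return np.sqrt(np.sum(w * (p1 - p2) ** 2))

# Ability grid and a standard-normal weighting (assumed here; in the
# actual study the weights would reflect the examinee distribution).
theta = np.linspace(-4, 4, 81)
weights = np.exp(-theta ** 2 / 2)

# Hypothetical item parameter estimates for the same item from two
# calibrations (e.g., one per program); values are illustrative only.
p_first = irf_3pl(theta, a=1.10, b=0.25, c=0.20)
p_second = irf_3pl(theta, a=1.05, b=0.30, c=0.18)

print(weighted_rmse(p_first, p_second, weights))
```

A small weighted RMSE indicates that the two calibrations imply nearly the same response probabilities over the region where most examinees' abilities lie, which is the sense in which the two programs' item response functions were judged comparable.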
dc.format.extent 202 p.
dc.format.medium electronic
dc.format.mimetype application/pdf
dc.language English
dc.language.iso en_US
dc.publisher University of Alabama Libraries
dc.relation.ispartof The University of Alabama Electronic Theses and Dissertations
dc.relation.ispartof The University of Alabama Libraries Digital Collections
dc.relation.hasversion born digital
dc.rights All rights reserved by the author unless otherwise indicated.
dc.subject.other Mathematics
dc.title Using Bayesian techniques with item response theory to analyze mathematics tests
dc.type thesis
dc.type text
thesis.degree.department University of Alabama. Dept. of Mathematics
thesis.degree.discipline Mathematics
thesis.degree.grantor The University of Alabama
thesis.degree.level doctoral
thesis.degree.name Ph.D.
