Cross-Validated Assessments of Test Scoring Models

Authors: Marc Sobel

Affiliation: Dept. of Statistics, Temple University
Abstract: On a multiple-choice test in which each item has r alternative options, a given number c of which are correct, various scoring models have been proposed. In one case the test-taker may choose a solution subset of any size and is graded both on how small the subset is and on how many correct answers it contains. In a second case the test-taker may select only solution subsets of a prespecified maximum size and is graded as above. The first case is analogous to the situation in which the test-taker is given a set of r options with each question and must select the subset of those r responses he or she believes to be correct. In the second case, when the prespecified solution subset is restricted to size at most one, the resulting scoring model corresponds to the usual model, referred to below as the standard model; the number c of correct options per item is usually known to the test-taker in this case. Scoring models are evaluated according to how well they identify the total scores of the individuals in the class of test-takers. Loss functions are constructed which penalize scoring models that produce student scores not associated with the student's true (or average) total score on the exam. Scoring models are then compared on the basis of cross-validated assessments of the loss incurred by each model. It is shown that in many cases the assessed loss for scoring models that allow students to choose more than one option per question is smaller than the assessed loss for the standard scoring model.
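
The abstract does not give the paper's actual scoring rules, loss functions, or cross-validation scheme, so the following is only a minimal sketch of how such a comparison might be set up. The simulated response process, the penalty weight in the subset score, and the linear calibration to the true total are illustrative assumptions, not the author's method.

```python
# Minimal sketch (assumptions throughout): compare a "standard" single-answer
# scoring rule with a subset-selection rule by a cross-validated squared loss
# against each student's true (expected) total score.
import numpy as np

rng = np.random.default_rng(0)
n_students, n_items, r, c = 200, 40, 5, 2       # assumed exam dimensions

# Assumed response process: a student's ability is the chance of marking any
# given correct option; wrong options are marked with the complementary chance.
ability = rng.uniform(0.4, 0.95, size=n_students)
correct = rng.binomial(c, ability[:, None], size=(n_students, n_items))
wrong = rng.binomial(r - c, 1.0 - ability[:, None], size=(n_students, n_items))
true_total = ability * c * n_items              # expected number of correct marks

def score_standard(cor, wrg):
    # Standard model: credit an item only if exactly one option is marked and it is correct.
    return ((cor == 1) & (wrg == 0)).sum(axis=1).astype(float)

def score_subset(cor, wrg):
    # Subset model (assumed form): reward correct marks, penalize incorrect ones.
    return (cor - 0.5 * wrg).sum(axis=1)

def cv_loss(score_fn, cor, wrg, target, k=5, seed=1):
    """K-fold cross-validation over items: calibrate the model score to the
    target on the training folds, assess squared loss on the held-out fold."""
    folds = np.array_split(np.random.default_rng(seed).permutation(cor.shape[1]), k)
    losses = []
    for held_out in folds:
        train = np.setdiff1d(np.arange(cor.shape[1]), held_out)
        # per-item average scores so both folds are on a comparable scale
        s_tr = score_fn(cor[:, train], wrg[:, train]) / len(train)
        s_te = score_fn(cor[:, held_out], wrg[:, held_out]) / len(held_out)
        beta = np.polyfit(s_tr, target, 1)       # linear calibration (an assumption)
        losses.append(np.mean((np.polyval(beta, s_te) - target) ** 2))
    return float(np.mean(losses))

print("CV loss, standard model:", cv_loss(score_standard, correct, wrong, true_total))
print("CV loss, subset model:  ", cv_loss(score_subset, correct, wrong, true_total))
```

Under these made-up assumptions a smaller cross-validated loss for the subset rule would point in the direction of the abstract's conclusion; the actual comparison depends entirely on the paper's own scoring and loss definitions.
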
Keywords: change-point; nonparametric regression; kernel density estimator; asymptotic normality; strong convergence rate; cross-validation