Evaluation of open items using the many-facet Rasch model
Authors:Sonia Ferreira Lopes Toffoli  Dalton Francisco de Andrade  Antonio Cezar Bornia
Institution:1. Department of Mathematics, State University of Londrina, Londrina, PR, Brazil;2. Informatics and Statistics Department, Federal University of Santa Catarina, Florianópolis, SC, Brazil;3. Production Engineering Department, Federal University of Santa Catarina, Florianópolis, SC, Brazil
Abstract:The goal of this study is to analyze the quality of the ratings assigned to two constructed-response questions used to evaluate writing ability through essays in the Portuguese language, from the perspective of the many-facet Rasch (MFR) model (J.M. Linacre, Many-facet Rasch Measurement, 2nd ed., MESA Press, Chicago, 1994). The analyzed data set comes from 350 written tests containing two open-item tasks, scored through a rating process in which two rater coordinators and a group of 42 raters marked the responses independently. The MFR analysis describes the quality of measurement related to the examinees, raters, tasks and items, and to the classification scale used in the task rating process. The findings indicate significant differences among rater severities and show that the raters cannot be treated as interchangeable. The results also suggest that the comparison between the two task difficulties needs further investigation. An additional analysis examined the structure of the classification scale used by each rater for each item; it suggests some similarities among the tasks and a need to revise some criteria of the rating process. Overall, the evaluation scale has proved efficient for classifying the examinees.
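As context for the analysis described above (the exact parameterization is not stated in this abstract), the standard many-facet Rasch formulation for a design with examinees, items/tasks, raters, and a common rating scale is usually written as

\[ \log \frac{P_{nijk}}{P_{nij(k-1)}} = \theta_n - \delta_i - \lambda_j - \tau_k \]

where \(P_{nijk}\) is the probability that examinee \(n\) receives category \(k\) rather than \(k-1\) on item \(i\) from rater \(j\), \(\theta_n\) is the examinee's ability, \(\delta_i\) the item or task difficulty, \(\lambda_j\) the rater's severity, and \(\tau_k\) the threshold between adjacent scale categories. The symbols \(\theta, \delta, \lambda, \tau\) are illustrative notation, not necessarily those used by the authors.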
Keywords:many-facet Rasch model  performance evaluation  rating reliability  open questions  rater severity  classification scale