Salim Said Bani Orabah deposited Facet Variability in the Light of Rater Training in Measuring Oral Performance: A Multifaceted Rasch Analysis in the group
Public Humanities on Humanities Commons.
Because oral assessment is inherently subjective, much attention has been paid to achieving a satisfactory degree of consistency among raters. Consistency alone, however, does not guarantee valid decisions. One issue at the core of both reliability and validity in oral performance assessment is rater training. Multifaceted Rasch Measurement (MFRM) has recently been adopted to address rater bias and inconsistency; however, no study has examined the facets of test-taker ability, rater severity, task difficulty, group expertise, scale criterion category, and test version together in a single study, along with their two-way interaction effects. Moreover, little research has investigated how long the effects of rater training endure. This study therefore explored the influence of a training program and feedback by having 20 raters score the oral production of 300 test takers on the CEP (Community English Program) test in three phases: before, immediately after, and long after the training program. The results indicated that training can raise interrater reliability and reduce measures of severity/leniency and bias. It did not, however, bring the raters to complete agreement; rather, it mainly made them more self-consistent. Although rater training can increase internal consistency among raters, it cannot eliminate individual differences: experienced raters, owing to their idiosyncratic characteristics, benefited less than inexperienced ones. The study also showed that the effects of training may not endure long after the program ends; ongoing training is therefore needed so that raters can regain consistency.
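For readers unfamiliar with MFRM, the underlying many-facet Rasch model is usually written in a form like the following (a generic sketch of the standard rating-scale formulation; the abstract does not state the exact parameterization used in this study):

log(P_njik / P_nji(k-1)) = B_n - C_j - D_i - F_k

where B_n is the ability of test taker n, C_j the severity of rater j, D_i the difficulty of task i, and F_k the threshold of scale category k relative to category k-1. Additional facets such as group expertise and test version enter the model in the same way, as further additive terms on the right-hand side.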