The effect of raters' fatigue on scoring EFL writing tasks

Amir Mahshanian, Mohammadtaghi Shahnazari

Abstract


Given the importance of language testing in general, and of scoring writing tasks in particular, the negative effect of fatigue on human raters warrants investigation. This study aimed to (1) explore the relationship between fatigue and the scoring of composition tasks written by upper-intermediate EFL learners, and (2) investigate discrepancies in the frequency of comments among EFL raters while scoring composition tasks. Four raters were selected, and each was given 28 composition tasks to score and comment on. The data were analyzed in SPSS using ANOVA, Pearson correlation coefficients, and post-hoc tests. Results suggested that the scores assigned to the first 16 tasks were significantly lower than those assigned to the last 12 tasks, and that the last four tasks received the highest scores. Based on the results obtained from the questionnaire, this variation is argued to be rooted in raters' fatigue and to result in test bias. Furthermore, the findings indicated that the frequency of comments the raters gave on the first 12 essays was significantly higher than on the last 16 essays; the highest and lowest comment frequencies were observed on the first four and the last four scored essays, respectively.
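For readers who wish to reproduce this kind of analysis outside SPSS, the sketch below shows roughly how the reported comparisons could be run in Python. It is a minimal illustration only: the file name, column names, and the grouping of tasks into consecutive blocks of four are assumptions made here for illustration, not details taken from the study.

    # Illustrative sketch only: the ratings file, its column names, and the
    # block grouping are assumed, not taken from the study's actual data.
    import pandas as pd
    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    # Hypothetical long-format file: one row per rated task, with the rater,
    # the task's position in that rater's scoring sequence (1-28), the
    # assigned score, and the number of comments.
    df = pd.read_csv("ratings.csv")  # columns: rater, position, score, comments

    # Group tasks into consecutive blocks of four by scoring position.
    df["block"] = ((df["position"] - 1) // 4 + 1).astype(str)

    # One-way ANOVA across position blocks.
    groups = [g["score"].values for _, g in df.groupby("block")]
    f_stat, p_val = stats.f_oneway(*groups)
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

    # Pearson correlation between scoring position and assigned score.
    r, p = stats.pearsonr(df["position"], df["score"])
    print(f"Pearson r(position, score) = {r:.2f}, p = {p:.4f}")

    # Post-hoc pairwise comparisons (Tukey HSD) across the blocks.
    print(pairwise_tukeyhsd(df["score"], df["block"]))

The same steps can be repeated with the "comments" column in place of "score" to examine comment frequency across the scoring sequence.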


Keywords


Assessing writing; EFL writing; fatigue; rater consistency; reliability; scoring composition tasks



DOI: https://doi.org/10.17509/ijal.v10i1.24956




This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.