Abstract: The issue of score reliability has always been contentious in the testing of language performance because of the subjectivity involved in the assessment process. Performance assessment is usually carried out by human raters, and studies have shown that such judgments often lack consistency and accuracy. This leads to a lack of standardization of marks, raising concerns about fairness to the students taking the course. One way of ensuring reliability is to mandate the use of a language proficiency rating scale. In addition to being a scoring tool, the rating scale also acts as the “de facto construct” (McNamara, 1996) and as a term of reference for stakeholders. Despite its importance, its development and use in institutional testing tend to be ad hoc (Fulcher, 2008) and are rarely researched. This paper will report on the preliminary findings of a study that investigates practices relating to scoring reliability in the assessment of ESL writing. The ultimate aim of the study is to develop guidelines for improving the reliability of scores awarded in writing assessment.