While student evaluation of courses (SEC) in higher education is an intensely researched area, the existing literature has paid little attention to rigorous econometric analysis of SEC data. Using four years (2010–2013) of evaluation results for economics courses offered at a leading Australian university, this study employed a random effects ordered probit model with a Mundlak correction to identify the factors influencing student ratings of courses, an innovative application of this model to educational data.
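As a point of reference, a random effects ordered probit with a Mundlak correction can be sketched as follows; this is a minimal illustration only, and the indexing and symbols are assumptions rather than the paper's own notation. Suppose course-year observations are indexed by $i$ (course) and $t$ (year), with observed rating category $y_{it}$ generated by a latent variable $y_{it}^{*}$:
$$
y_{it}^{*} = \mathbf{x}_{it}'\boldsymbol{\beta} + \bar{\mathbf{x}}_{i}'\boldsymbol{\gamma} + \alpha_i + \varepsilon_{it},
\qquad
y_{it} = j \ \text{ if } \ \kappa_{j-1} < y_{it}^{*} \le \kappa_{j},
$$
$$
\alpha_i \sim N(0, \sigma_\alpha^2), \qquad \varepsilon_{it} \sim N(0, 1),
$$
where the $\kappa_j$ are threshold parameters and $\bar{\mathbf{x}}_i$ denotes the within-group means of the time-varying regressors. Including these means is the Mundlak device: it allows the random effect $\alpha_i$ to be correlated with the observed covariates rather than assuming strict independence.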
Findings show that course level, student cohort, class size, instructors' course-specific experience, and instructors' linguistic background influence student ratings of courses. Lecturers' prior teaching experience in a course and an English language background attracted higher ratings, while second- and third-level courses relative to postgraduate classes, the 2010 and 2012 student cohorts relative to 2013, and larger classes attracted lower ratings.
Implications include targeted training for instructors from non-English speaking backgrounds (NESB) and for those teaching larger classes and intermediate and upper-level undergraduate courses.
This study underscores the importance of student-specific responses that capture student heterogeneity, in preference to class-average data, including students' academic performance, discipline destination, linguistic background, age, and indicators of effort. It also carries implications for survey instrument design, e.g., sub-scales and data on whether course content provides intellectual challenge, real-world applications, and problem-solving skills.