Academic Article

A new approach towards marking large-scale complex assessments: Developing a distributed marking system that uses an automatically scaffolding and rubric-targeted interface for guided peer-review.
Document Type
Article
Source
Assessing Writing; Apr 2015, Vol. 24, p1-15, 15p
Subject
Scaffolded instruction
Scoring rubrics
Professional peer review
Academic achievement
Education research
Language
English
ISSN
1075-2935
Abstract
Currently, complex tasks incur significant marking costs, which become exorbitant for courses with large numbers of students (e.g., in MOOCs). Large-scale assessments therefore depend on automated scoring systems. However, these systems tend to work best in assessments where correct responses can be explicitly defined; tasks that require deeper analysis and richer responses pose a considerable scoring challenge. Structured peer-grading can be reliable, but the diversity inherent in very large classes can be a weakness for peer-grading systems because it raises the objection that peer-reviewers may not have qualifications matching the level of the task being assessed. Distributed marking can offer a solution that handles both the volume and the complexity of these assessments. We propose a solution wherein peer scoring is assisted by a guidance system to improve peer-review and increase the efficiency of large-scale marking of complex tasks. The system involves developing an engine that automatically scaffolds the target paper based on predefined rubrics, so that relevant content and indicators of higher-level thinking skills are framed and drawn to the marker's attention. Eventually, we aim to establish that the scores produced are comparable to those produced by expert raters. [ABSTRACT FROM AUTHOR]