Academic Article

Exploring Pre-scoring Clustering for Short Answer Grading
Document Type
Conference
Source
2023 46th MIPRO ICT and Electronics Convention (MIPRO), pp. 1567-1571, May 2023
Subject
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Photonics and Electrooptics
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Scalability
Manuals
Machine learning
Natural language processing
Indexes
Task analysis
Computer crime
automatic short answer grading
ASAG
semi-automated short answer scoring
short answer grading
short text
short answer
automatic grading
natural language processing
Language
ISSN
2623-8764
Abstract
Automatic short answer grading is a topic that has gained significant popularity recently, especially due to developments in natural language processing. While automated grading in computer-supported assessment tasks has traditionally imposed significant restrictions on the answer format (e.g., multiple-choice questions), automated short answer grading could enable assessment scalability with very few answer format limitations and thereby increase the assessment tasks’ validity. Here, ‘short answer’ refers to a text of up to approximately 10 sentences. However, automatic solutions typically require a large amount of pre-graded material. In this paper, several pre-trained machine learning models were utilized to explore pre-scoring clustering for short answer grading of text in Croatian. The aim of this approach is to shorten the process of manual short answer grading by clustering similar answers, facilitating the development of automatic grading solutions. The described approach was evaluated on a dataset containing graduate students’ answers in Croatian to six questions on cyber security topics. The obtained results are promising and show how increases in cluster purity, normalized mutual information, Rand index, and adjusted Rand index measures can be achieved by fine-tuning a pre-trained model.
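The evaluation measures named in the abstract (cluster purity and adjusted Rand index) can be computed directly from a clustering assignment and the human-assigned grades. The sketch below is an illustration of these standard metrics, not the paper's implementation; the toy `clusters`/`grades` pairings in the usage comments are invented examples.

```python
from collections import Counter
from math import comb

def purity(clusters, labels):
    """Fraction of answers whose cluster's majority grade matches their own grade."""
    by_cluster = {}
    for c, y in zip(clusters, labels):
        by_cluster.setdefault(c, []).append(y)
    # For each cluster, count its most frequent grade, then average over all answers.
    return sum(Counter(ys).most_common(1)[0][1] for ys in by_cluster.values()) / len(labels)

def adjusted_rand_index(clusters, labels):
    """Adjusted Rand index between a clustering and the grade partition."""
    n = len(labels)
    contingency = Counter(zip(clusters, labels))
    sum_ij = sum(comb(v, 2) for v in contingency.values())
    sum_a = sum(comb(v, 2) for v in Counter(clusters).values())
    sum_b = sum(comb(v, 2) for v in Counter(labels).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    if max_index == expected:  # degenerate partitions (e.g., one big cluster)
        return 0.0
    return (sum_ij - expected) / (max_index - expected)

# Toy usage: a perfect clustering scores 1.0 on both measures.
# purity([0, 0, 1, 1], ["pass", "pass", "fail", "fail"])              -> 1.0
# adjusted_rand_index([0, 0, 1, 1], ["pass", "pass", "fail", "fail"]) -> 1.0
```

In a pre-scoring workflow of this kind, the grader labels one representative answer per cluster, so higher purity directly translates into fewer manual corrections.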