Academic Article

Automatic learner summary assessment for reading comprehension
Document Type
Working Paper
Subject
Computer Science - Computation and Language
Abstract
Automating the assessment of learner summaries provides a useful tool for evaluating learner reading comprehension. We present a summarization task for evaluating non-native reading comprehension and propose three novel approaches to automatically assess the learner summaries. We evaluate our models on two datasets we created and show that our models outperform traditional approaches that rely on exact word match on this task. Our best model produces quality assessments close to those of professional examiners.
Comment: NAACL 2019