Automatic Short Answer Grading Using Paragraph Vectors and Transfer Learning Models
Abstract
Grading questions is one of the duties that consumes the most time and effort of instructors. Among the different question formats, short answers measure students' depth of understanding. Considerable research has been conducted on grading short answers automatically. Recent approaches attempt to solve this problem using semantic similarity and deep learning models. Correspondingly, paragraph embedding models and transfer learning models have shown promising results on text-similarity tasks by considering the context of the text. This study investigates distributional semantics and deep learning models applied to the task of short-answer grading by computing the semantic similarity between students' submitted answers and a key answer. We analyze the effect of training two different models on a domain-specific corpus, and we suggest training the models on a domain-specific corpus instead of using pre-trained models. In the first experiment, paragraph vectors are trained on a domain-specific corpus; in the second, selected transfer learning models are fine-tuned on the domain-specific corpus. The best results, achieved by fine-tuning the roberta-large masked language model on the domain-specific corpus, are a correlation coefficient of 0.620 and an RMSE of 0.777.
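As an illustration of the first experiment's direction, the sketch below trains paragraph vectors on a small domain-specific corpus and grades a student answer by its cosine similarity to the key answer. It assumes gensim's Doc2Vec as the paragraph-vector implementation; the corpus, answers, and hyperparameters are illustrative placeholders, not the paper's settings.

```python
# Minimal sketch: paragraph vectors trained on a domain-specific corpus,
# then cosine similarity between a student answer and the key answer.
# The corpus and hyperparameters below are hypothetical examples.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from gensim.utils import simple_preprocess
import numpy as np

# Hypothetical domain-specific corpus (e.g. textbook passages).
domain_corpus = [
    "An operating system manages hardware and software resources.",
    "A process is a program in execution with its own address space.",
]
tagged = [TaggedDocument(simple_preprocess(doc), [i])
          for i, doc in enumerate(domain_corpus)]

# Train paragraph vectors on the domain-specific corpus.
model = Doc2Vec(vector_size=100, min_count=1, epochs=40)
model.build_vocab(tagged)
model.train(tagged, total_examples=model.corpus_count, epochs=model.epochs)

def grade(student_answer: str, key_answer: str) -> float:
    """Cosine similarity between inferred paragraph vectors, in [-1, 1]."""
    v1 = model.infer_vector(simple_preprocess(student_answer))
    v2 = model.infer_vector(simple_preprocess(key_answer))
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))

print(grade("A process is a running program.",
            "A process is a program in execution."))
```

In practice the similarity score would be mapped onto the grading scale before comparison against human-assigned grades.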
We compare the achieved results against baseline models and the results of earlier studies. We conclude that pre-trained paragraph vectors achieve better semantic similarity than paragraph vectors trained on a domain-specific corpus. In contrast, fine-tuning transfer learning models on a domain-specific corpus improves performance over using the pre-trained masked language models directly.
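For concreteness, the following sketch outlines the second direction under stated assumptions: fine-tuning roberta-large with the masked-language-modeling objective on a domain-specific corpus using the Hugging Face Transformers library, then scoring a student answer against the key answer by cosine similarity of mean-pooled embeddings. The file name domain_corpus.txt and all hyperparameters are hypothetical, not the paper's configuration.

```python
# Sketch: (1) fine-tune roberta-large as a masked language model on
# domain text, (2) embed answers with the fine-tuned encoder and compare.
# File names and hyperparameters are illustrative assumptions.
import torch
from transformers import (AutoModel, AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("roberta-large")

# Step 1: MLM fine-tuning on domain text (one passage per line).
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})
tokenized = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])
trainer = Trainer(
    model=AutoModelForMaskedLM.from_pretrained("roberta-large"),
    args=TrainingArguments(output_dir="roberta-domain", num_train_epochs=3),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer,
                                                  mlm_probability=0.15),
)
trainer.train()
trainer.save_model("roberta-domain")

# Step 2: embed answers with the fine-tuned encoder (mean pooling).
encoder = AutoModel.from_pretrained("roberta-domain")

def embed(text: str) -> torch.Tensor:
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state   # (1, seq, dim)
    mask = inputs["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)        # mean over tokens

student = embed("A process is a running program.")
key = embed("A process is a program in execution.")
print(torch.nn.functional.cosine_similarity(student, key).item())
```

Mean pooling over token embeddings is one common choice for sentence-level similarity; the paper's exact pooling and scoring setup may differ.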