Providing Meaningful Feedback for Autograding of Programming Assignments

SIGCSE '18: The 49th ACM Technical Symposium on Computer Science Education, Baltimore, Maryland, USA, February 2018.

Abstract
Autograding systems are increasingly being deployed to meet the challenge of teaching programming at scale. We propose a methodology for extending autograders to provide meaningful feedback for incorrect programs. Our methodology starts with the instructor identifying the concepts and skills important to each programming assignment, designing the assignment, and designing a comprehensive test suite. Tests are then applied to code submissions to learn classes of common errors and to produce classifiers that automatically categorize errors in future submissions. The instructor maps the errors to concepts and skills and writes hints that help students identify their misconceptions and mistakes. We have applied the methodology to two assignments from our Introduction to Computer Science course. We used submissions from one semester of the class to build classifiers and write hints for observed common errors. We manually validated the automatic error categorization and the potential usefulness of the hints using submissions from a second semester. We found that the hints given for erroneous submissions should be helpful in 96% or more of cases. Based on these promising results, we have deployed our hints and are currently collecting submissions and feedback from students and instructors.
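The abstract outlines an error-categorization pipeline: the test suite is run against each submission, recurring failure patterns observed in past submissions become error classes, and new submissions are matched against those classes to trigger instructor-written hints. Below is a minimal, hypothetical Python sketch of that idea, assuming each submission is summarized as a pass/fail vector over the test suite; the signature representation, the min_count threshold, and the hint table are illustrative assumptions, not the authors' actual classifiers.

```python
from collections import Counter

def signature(results):
    """results: list of booleans, one per test case (True = passed).
    The tuple of outcomes serves as the submission's failure signature."""
    return tuple(results)

def learn_error_classes(graded_submissions, min_count=5):
    """Group past submissions by failure signature; signatures that recur
    at least min_count times become candidate error classes that the
    instructor can label with concept/skill-based hints."""
    counts = Counter(signature(r) for r in graded_submissions)
    return [sig for sig, n in counts.items()
            if n >= min_count and not all(sig)]  # skip fully-passing runs

def categorize(results, hints_by_signature):
    """Map a new submission to an instructor-written hint if its failure
    signature matches a known error class; otherwise fall back to the
    raw failing-test report."""
    return hints_by_signature.get(signature(results),
                                  "No matching error class; see failing tests.")

# Hypothetical usage: a two-test suite, with a hint keyed to the class
# of submissions that pass test 1 but fail test 2.
hints = {(True, False): "Check how your code handles the empty-list case."}
print(categorize([True, False], hints))
```

One design consequence of this signature-based grouping is that hints are written once per error class rather than once per submission, which is what lets the approach scale across semesters.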
Keywords
Autograding, concepts/skills-based hints, error categorization