Week 3

Week 3 Post (6/03 - 6/09)


In the third week of my DREU Research Program, I dedicated most of my time to developing the individual autograder assigned to me. I spent the first two days writing test cases for the autograder and the following two days debugging them. After thoroughly testing my code in both admin and student views, I requested a code review from a fellow student on my research team. Based on the feedback received, I refined my code and subsequently requested a faculty review to ensure its quality.

This week, we also began working on another autograder, and I spent some time writing test cases for it. As I progress through these tasks, I am gaining a better understanding of how the autograders function. In addition to this work, I completed my UR2PhD assignments. We had the second draft of our research proposal due; we received good feedback but needed to improve some areas of the proposal. A highlight of the week was a dinner hosted by Professor Rodger at her house, a thoughtful initiative that helped me bond more with my team.

Finally, I read two research papers this week, both focused on the autograding of programming assignments:

  • “Mining autograding data in computer science education” by Vincent Gramoli, Michael Charleston, Bryn Jeffries, Irena Koprinska, Martin McGrane, Alex Radu, Anastasios Viglas, and Kalina Yacef. Discusses the impact of instant feedback and autograding in computer science education. The authors analyzed the behavior of first- to fourth-year students submitting programming assignments at the University of Sydney over three years (2013–2015). The assignments covered different languages, such as C, C++, Java, and Python, in courses ranging from fundamental to practical. They observed that instant feedback and autograding can help students and instructors even in subjects not necessarily focused on programming.

  • “Automatic Grading of Programming Assignments: An Approach Based on Formal Semantics” by Xiao Liu, Shuai Wang, Pei Wang, and Dinghao Wu. Presents an approach to automatically grading programming assignments using formal semantics. The proposed method defines the formal semantics of the programming language and uses these definitions to generate test cases that evaluate the correctness of student submissions. The goal is a reliable and consistent grading system that can handle a variety of programming tasks and languages.

Written on June 3, 2024