Abstract
Students sometimes produce working code that they do not themselves comprehend. For example, a student may apply a poorly understood code template, stumble upon a working solution through trial and error, or plagiarize. Similarly, passing an automated functional assessment does not guarantee that the student understands their code. One way to tackle these issues is to probe students’ comprehension by asking them questions about their own programs. We propose an approach to automatically generate questions about student-written program code. We moreover propose a use case for such questions in the context of automatic assessment systems: after a student’s program passes unit tests, the system poses questions to the student about the code. We suggest that these questions can enhance assessment systems, deepen student learning by acting as self-explanation prompts, and provide a window into students’ program comprehension. This discussion paper sets an agenda for future technical development and empirical research on the topic.
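To make the idea concrete, below is a minimal sketch of template-based question generation over the abstract syntax tree of a Python submission. The `generate_questions` function and its question templates are illustrative assumptions for this summary, not the generator proposed in the paper.

```python
import ast

def generate_questions(source: str) -> list[str]:
    """Generate simple comprehension questions about a student program."""
    questions = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Assign):
            # Ask about the role of each plainly assigned variable.
            for target in node.targets:
                if isinstance(target, ast.Name):
                    questions.append(
                        f"What is the purpose of the variable '{target.id}' "
                        f"assigned on line {node.lineno}?"
                    )
        elif isinstance(node, (ast.For, ast.While)):
            # Ask about the behaviour of each loop.
            questions.append(
                f"How many times does the loop starting on line {node.lineno} "
                "execute, and what does it accomplish?"
            )
    return questions

# Example: a submission that has just passed its unit tests.
student_code = """\
total = 0
for n in range(1, 11):
    total += n
print(total)
"""

for question in generate_questions(student_code):
    print(question)
```

Run on the sample submission, this sketch yields questions such as "What is the purpose of the variable 'total' assigned on line 1?", which an assessment system could pose to the student after the unit tests pass.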
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings - 2021 IEEE/ACM 29th International Conference on Program Comprehension, ICPC 2021 |
| Publisher | IEEE |
| Pages | 467-475 |
| Number of pages | 9 |
| ISBN (Electronic) | 978-1-6654-1403-6 |
| DOIs | |
| Publication status | Published - 20 May 2021 |
| MoE publication type | A4 Conference publication |
| Event | International Conference on Program Comprehension - Virtual, Online |
| Duration | 20 May 2021 → 21 May 2021 |
| Conference number | 29 |
Publication series
| Name | Proceedings/IEEE International Conference on Program Comprehension |
| --- | --- |
| ISSN (Electronic) | 2643-7171 |
Conference
| Conference | International Conference on Program Comprehension |
| --- | --- |
| Abbreviated title | ICPC |
| City | Virtual, Online |
| Period | 20/05/2021 → 21/05/2021 |
Keywords
- automatic assessment
- automatic question generation
- program comprehension
- programming education
- self-explanation