Abstract
Novice programmers have a limited understanding of the program code they produce. Their programs are often based on code snippets from examples and internet searches. Recently, and rather suddenly, artificial intelligence has transformed programming environments, which can now suggest and complete entire programs based on the available context. However, the ability to comprehend and discuss programs is essential for becoming a programmer who takes responsibility for their work and can reliably solve problems as a member of a team. Many introductory programming courses have hundreds of students per teacher, so automated systems are often used to provide immediate feedback and assessment for programming exercises. Current systems focus on the created program and its requirements. Unfortunately, their feedback helps students iterate toward acceptable code rather than acquire a deep understanding of the program. This dissertation addresses that gap.

The dissertation defines and introduces questions about learners' code (QLCs). After a student submits a program, they are asked automated, personalized QLCs about the structure and logic of their program. The dissertation describes a system for generating QLCs and contributes three open-source implementations supporting Java, JavaScript, and Python.

The empirical contributions of the dissertation are based on multiple studies that investigate, both quantitatively and qualitatively, how novice programmers answer various types of QLCs. Of the students who create a correct program, as many as 20% may answer incorrectly about concepts that are critical for reasoning systematically about their program code. More than half of the students fail to mentally trace the execution of their program. This confirms that novices' program comprehension needs improvement and that instructors may overestimate their abilities. The more incorrectly students answer QLCs, the more they tinker with their code and the less success they have in the course.

Current artificial intelligence systems respond to QLCs better than the average novice. However, they also lapse into humanlike errors, producing flawed reasoning about the code they generated, which could present an important learning opportunity for the critical use of AI in programming.
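The abstract does not detail how such questions are constructed, but as a loose illustration of the idea, the following minimal Python sketch derives a small multiple-choice QLC from a submitted program by inspecting its abstract syntax tree. This is a hypothetical example, not the dissertation's implementation; the function name `generate_variable_qlc` and the question wording are assumptions.

```python
import ast

def generate_variable_qlc(source: str):
    """Build a simple QLC asking which name in the submission is a variable.
    Hypothetical sketch: real QLC generators cover many question types."""
    tree = ast.parse(source)
    # Names that the student's program assigns to (variables, loop targets).
    assigned = {node.id for node in ast.walk(tree)
                if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store)}
    # Function names serve as plausible distractors.
    functions = {node.name for node in ast.walk(tree)
                 if isinstance(node, ast.FunctionDef)}
    if not assigned or not functions:
        return None  # nothing suitable to ask about in this submission
    return {
        "question": "Which of the following names is a variable in your program?",
        "correct": sorted(assigned),
        "distractors": sorted(functions),
    }

student_code = """
def total(prices):
    result = 0
    for price in prices:
        result += price
    return result
"""

print(generate_variable_qlc(student_code))
```

In this spirit, each question is generated from the learner's own submission, so the correct answers and distractors are personal to the program the student just wrote.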
| Translated title of the contribution | Kysymyksiä oppijan ohjelmakoodista: Automaattisen arvioinnin kehittäminen ohjelmien ymmärtämisen tueksi |
| --- | --- |
| Original language | English |
| Qualification | Doctor's degree |
| Awarding Institution | |
| Supervisors/Advisors | |
| Publisher | |
| Print ISBNs | 978-952-64-1767-7 |
| Electronic ISBNs | 978-952-64-1768-4 |
| Publication status | Published - 2024 |
| MoE publication type | G5 Doctoral dissertation (article) |
Keywords
- programming education
- introductory programming
- automated assessment
- unproductive success
- program comprehension
- fragile knowledge
- metacognition