Advances in assessment of programming skills

Research output: Thesis › Doctoral Thesis › Collection of Articles

Abstract

This thesis concerns assessment techniques used in university-level programming education. The motivation is to improve existing assessment methods so that they yield more detailed or fundamentally different kinds of information, which can be used to give students higher-quality feedback.

One central theme in this thesis is the role of program reading and tracing skills in different aspects of programming. Tracing is a critical skill in reading, writing, and debugging code, and simple tracing exercises can be used to test understanding of programming language constructs and program execution. We present results from an international study of students' competence in tracing program code at the end of their first programming course. The results highlight that while students are expected to have elementary program construction skills, some of them cannot trace the execution of programs of comparable difficulty. The effect of students' annotations and solving strategies on tracing performance was analyzed further.

Tracing exercises can also be used to test understanding of data structures and algorithms. Visual algorithm simulation is a method in which a student manipulates data structure visualizations with a mouse, trying to simulate the steps of a given algorithm. A successful simulation is evidence of understanding the core concepts of that algorithm. Automatic assessment and delivery of visual algorithm simulation exercises are implemented in the tool TRAKLA2. In this thesis we present a technique that improves TRAKLA2's assessment by interpreting errors in student simulations, using information about known misconceptions and by simulating careless errors.

Another topic studied in this thesis is whether mutation testing can be applied to evaluating the adequacy of software tests written by students. In mutation testing, the effectiveness of a test suite in discovering errors is evaluated by seeding errors into the program under test; a well-constructed test suite should find most such errors easily. Code coverage analysis, the method used in available assessment platforms, can yield results that give students a false impression of the quality of their testing.

Finally, feedback from programming exercises assessed by unit tests has traditionally been textual, but textual descriptions of complicated object hierarchies can be hard to understand. The last topic covered is the use of visualization to convey this information, either in a domain-specific visual format or as a generic visualization. An empirical study of student performance, comparing these two types of visual feedback against detailed textual feedback, is presented as part of the thesis.
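
To illustrate the kind of tracing exercise the abstract refers to (a generic sketch, not an item from the study itself), consider a short Java program whose output the student must predict by hand-executing the code, which probes their mental model of loops, conditionals, and array indexing:

```java
// A minimal tracing-style exercise (illustrative only).
// Task: without running the program, what does it print?
public class TracingExercise {
    public static void main(String[] args) {
        int[] values = {3, 1, 4, 1, 5};
        int result = 0;
        for (int i = 0; i < values.length; i++) {
            if (values[i] % 2 == 1) {  // only odd elements are summed
                result += values[i];
            }
        }
        System.out.println(result);    // a correct trace yields 10
    }
}
```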
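The assessment of visual algorithm simulation can be pictured as comparing the student's sequence of data-structure states against the sequence produced by the correct algorithm. The sketch below is a hypothetical simplification of that idea; the class and method names are assumptions, not TRAKLA2's actual API. In the spirit of the thesis's improvement, it also replays traces of known misconceived algorithm variants to suggest which misconception best explains an imperfect submission:

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of grading a visual algorithm simulation.
// Each state is a serialized snapshot of the data structure after one step.
public class SimulationGrader {

    // Basic grading: length of the prefix where the student's states
    // match the states of the correct model algorithm.
    static int matchingSteps(List<String> student, List<String> model) {
        int n = Math.min(student.size(), model.size());
        int score = 0;
        for (int i = 0; i < n && student.get(i).equals(model.get(i)); i++) {
            score++;
        }
        return score;
    }

    // Error interpretation: replay traces generated by known misconceived
    // variants of the algorithm and report the variant whose trace agrees
    // with the submission for the most steps.
    static String bestExplanation(List<String> student,
                                  Map<String, List<String>> misconceptionTraces) {
        String best = "unexplained errors";
        int bestScore = 0;
        for (Map.Entry<String, List<String>> variant : misconceptionTraces.entrySet()) {
            int score = matchingSteps(student, variant.getValue());
            if (score > bestScore) {
                bestScore = score;
                best = variant.getKey();
            }
        }
        return best;
    }
}
```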
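To make the mutation-testing idea concrete (a generic sketch, not the thesis's actual tool chain): a mutant is created by seeding a small error, such as changing `>` to `>=`, and the test suite "kills" the mutant if some test now fails. A surviving mutant exposes a weakness that line or branch coverage alone would not reveal:

```java
import java.util.function.IntPredicate;

// Generic mutation-testing sketch (illustrative only).
public class MutationDemo {
    static boolean isPositiveOriginal(int n) { return n > 0; }
    static boolean isPositiveMutant(int n)   { return n >= 0; } // seeded error: > became >=

    static boolean suitePasses(IntPredicate isPositive) {
        // A weak student test suite: both branches are covered,
        // but the n == 0 boundary is never exercised.
        return isPositive.test(5) && !isPositive.test(-3);
    }

    public static void main(String[] args) {
        System.out.println("original passes: " + suitePasses(MutationDemo::isPositiveOriginal));
        boolean mutantKilled = !suitePasses(MutationDemo::isPositiveMutant);
        // Prints false: the mutant survives, so the suite is inadequate even
        // though coverage is complete -- exactly the false confidence that
        // pure coverage analysis can give.
        System.out.println("mutant killed:   " + mutantKilled);
    }
}
```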

Details

Original language: English
Qualification: Doctor's degree
Publisher
  • Aalto University
Print ISBN: 978-952-60-4719-5
Electronic ISBN: 978-952-60-4720-1
Publication status: Published - 2012
MoE publication type: G5 Doctoral dissertation (article)

Research areas

  • teaching programming, tracing program code, visual algorithm simulation, automated assessment, visual feedback, testing, mutation analysis
