Automated Questions about Learners' Own Code Help to Detect Fragile Prerequisite Knowledge

Research output: Conference article in proceedings · Scientific · peer-reviewed

4 Citations (Scopus)
45 Downloads (Pure)

Abstract

Students can produce correctly functioning program code while having only a fragile understanding of how it actually works. Questions derived automatically from a learner's own exercise submission (QLCs) can probe whether, and how well, students understand the structure and logic of the code they just created. Prior research studied this approach in the context of a first programming course. We replicate that study on a follow-up programming course for engineering students that includes a recap of general CS1 concepts. The task was the classic rainfall problem, which 90% of the students solved. The QLCs generated from each passing submission were kept intentionally simple, yet 27% of the students failed at least one of them. Students who struggled with questions about their own program logic had a lower median of overall course points than students who answered correctly.
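The record does not reproduce the task statement or any QLC text, so as a rough illustration only, the sketch below shows one classic formulation of the rainfall problem together with the kind of simple, automatically derivable question a QLC might ask about a student's own submission. The sentinel value 99999, the function name rainfall, and the question wording are all assumptions made for this sketch, not details taken from the paper.

    # A minimal sketch of one classic formulation of the rainfall problem
    # (the exact variant used on the course is not stated in this record):
    # average the non-negative readings that appear before the sentinel 99999.

    def rainfall(readings):
        """Return the average of valid (non-negative) readings before the sentinel."""
        total = 0
        count = 0
        for value in readings:
            if value == 99999:   # the sentinel terminates the input
                break
            if value >= 0:       # negative readings are invalid and skipped
                total += value
                count += 1
        return total / count if count > 0 else 0

    # Hypothetical example of the kind of QLC that could be generated from
    # this submission: "For the input [1, -2, 3, 99999, 5], how many times
    # does the loop body add a value to total?"
    print(rainfall([1, -2, 3, 99999, 5]))  # -> 2.0 (values 1 and 3 are counted)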

Original language: English
Title: ITiCSE 2023 - Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education
Publisher: ACM
Pages: 505-511
Number of pages: 7
ISBN (electronic): 979-8-4007-0138-2
DOI - permanent links
Status: Published - 29 Jun 2023
OKM publication type: A4 Article in conference proceedings
Event: Annual Conference on Innovation and Technology in Computer Science Education - Turku, Finland
Duration: 8 Jul 2023 - 12 Jul 2023
Conference number: 28

Conference

Conference: Annual Conference on Innovation and Technology in Computer Science Education
Abbreviation: ITiCSE
Country/Territory: Finland
City: Turku
Period: 08/07/2023 - 12/07/2023
