Abstract
Much of the recent work investigating large language models and AI code generation tools in computing education has focused on assessing their capabilities for solving typical programming problems and for generating resources such as code explanations and exercises. If progress is to be made toward the lasting pedagogical change that these tools will inevitably bring, there is a need for research that explores the instructor voice, seeking to understand how instructors with a range of experiences plan to adapt. In this paper, we report the results of an interview study involving 12 instructors from Australia, Finland and New Zealand, in which we investigate educators' current practices, concerns, and planned adaptations relating to these tools. Through this empirical study, our goal is to prompt dialogue between researchers and educators to inform new pedagogical strategies in response to the rapidly evolving landscape of AI code generation tools.
Original language | English |
---|---|
Title of host publication | SIGCSE 2024 - Proceedings of the 55th ACM Technical Symposium on Computer Science Education |
Publisher | ACM |
Pages | 1223-1229 |
Number of pages | 7 |
ISBN (Electronic) | 979-8-4007-0423-9 |
DOIs | |
Publication status | Published - 7 Mar 2024 |
MoE publication type | A4 Conference publication |
Event | ACM Technical Symposium on Computer Science Education (conference number: 55), Portland, United States; duration: 20 Mar 2024 → 23 Mar 2024 |
Conference
Conference | ACM Technical Symposium on Computer Science Education |
---|---|
Abbreviated title | SIGCSE |
Country/Territory | United States |
City | Portland |
Period | 20/03/2024 → 23/03/2024 |
Keywords
- AI code generation
- generative AI
- instructor perceptions
- interview study
- large language models
- LLMs
- programming education