A Scoping Review of Student and Educator Engagement with Large Language Models in Introductory Programming Education
Kehinde D. Aruleba, S. Oyelere, G. Obaido, I. Sanusi
TLDR
A scoping review of 38 empirical studies on student and educator engagement with LLMs in CS1 contexts, offering recommendations for curricula that foster critical AI literacy, inclusive participation, and thoughtful human–AI collaboration in programming education.
Abstract
As Large Language Models (LLMs) like ChatGPT and GitHub Copilot gain traction in computing education, understanding their role in introductory programming (CS1) is essential. This scoping review synthesises 38 empirical studies published between 2022 and 2024, focusing on student and educator engagement with LLMs in CS1 contexts. Following Arksey and O’Malley’s five-stage framework and PRISMA-ScR guidelines, we identify four thematic areas: (1) varied student prompting behaviours, from surface-level code copying to iterative refinement; (2) evolving educator practices, from passive allowance to guided integration; (3) assessment-related tensions, notably the “assistance dilemma”; and (4) ethical concerns around bias, integrity, and access. While LLMs support debugging and code comprehension, their value depends on pedagogical framing and learner agency. Gaps remain in longitudinal research, diverse learner representation, and alignment with curriculum frameworks. We offer practical recommendations for scaffolded GenAI integration, prompt engineering strategies, and ethical classroom use. This review supports the development of CS1 curricula that foster critical AI literacy, inclusive participation, and thoughtful engagement with human–AI collaboration in programming education.
