How Proficient Is Generative AI in an Introductory Object-Oriented Programming Course?

M. Lepp, Joosep Kaimre

2025 · DOI: 10.5220/0013199200003932
International Conference on Computer Supported Education

TLDR

This study examines how well AI-based chatbots, including ChatGPT and Microsoft Copilot, perform in solving tasks in the "Object-Oriented Programming" course and offers valuable insights for programming instructors by highlighting the strengths and weaknesses of AI chatbots.

Abstract

In 2022, the release of ChatGPT marked a significant breakthrough in Artificial Intelligence (AI) chatbot usage, particularly impacting computer science education. AI chatbots can now generate code snippets, but their proficiency in solving various tasks remains debated. This study examines how well AI-based chatbots, including ChatGPT and Microsoft Copilot, perform in solving tasks in the "Object-Oriented Programming" course. Both tools were tested on multiple programming tasks and exam questions, and their results were compared to those of students. Currently, ChatGPT-3.5 performs below the average student, while Copilot is on par. The chatbots performed better on introductory topics, though their performance varied as task difficulty increased. They also fared better on longer programming test tasks than on shorter exam tasks. Common errors included failing to provide all possible solutions and misinterpreting implied requirements. Despite these shortcomings, both AI tools are capable of passing the course. These findings offer valuable insights for programming instructors by highlighting the strengths and weaknesses of AI chatbots, helping guide potential improvements in computer science education.
