Adapting Program Assessment for the Age of Generative AI

Michael Verdicchio

2025 · DOI: 10.1109/EDUNINE62377.2025.10981409
TLDR

This work summarizes recent literature and best practices in program assessment, offers suggestions for adaptations from the assessment perspective, along with the course-level perspective, and describes efforts to account for student use of generative AI.

Abstract

Program assessment practices not designed to account for student use of generative AI risk misrepresenting the degree of student outcome attainment across all STEM fields. Understanding how AI tools like ChatGPT and GitHub Copilot have changed student approaches to problem-solving will allow us to design and deploy more effective assessment instruments and collect more meaningful data. An assessment plan structured according to ABET accreditation criteria has opportunities at several levels to make meaningful adjustments. This work summarizes recent literature and best practices in program assessment. Next, it offers suggestions for adaptations from the assessment perspective, along with the course-level perspective. Finally, a brief experience report describes efforts to account for student use of generative AI, with example assignments discussed from both the instructor's and the student's perspective. The experience is then generalized for ETC education with recommendations for other programs to follow.
