"Can You See Me Think?" Grounding LLM Feedback in Keystrokes and Revision Patterns

Samra Zafar, Shifa Yousaf, Muhammad Shaheer Minhas

2025 · DOI: 10.48550/arXiv.2508.13543
arXiv.org · 0 Citations

TLDR

An ablation study on 52 student essays compares LLM-generated feedback based on the final essay alone with feedback that also incorporates keylogs and time-stamped snapshots of the writing process.

Abstract

As large language models (LLMs) increasingly assist in evaluating student writing, researchers have begun questioning whether these models can be cognitively grounded, that is, whether they can attend not just to the final product, but to the process by which it was written. In this study, we explore how incorporating writing process data, specifically keylogs and time-stamped snapshots, affects the quality of LLM-generated feedback. We conduct an ablation study on 52 student essays comparing feedback generated with access to only the final essay (C1) and feedback that also incorporates keylogs and time-stamped snapshots (C2). While rubric scores changed minimally, C2 feedback demonstrated significantly improved structural evaluation and greater process-sensitive justification.