Enhancing User Intent Detection in Chatbots through LLM Distillation

Oanh Thi Tran, Tho Chi Luong, Anh Quynh Do

2025 · DOI: 10.1109/MAPR67746.2025.11133834
International Conference on Multimedia Analysis and Pattern Recognition · 0 Citations

TLDR

This paper compares baseline student models, such as BiLSTM, CNN, and BERT, with counterparts integrated with knowledge distilled from a powerful fine-tuned LLM, and demonstrates remarkable improvements in classification accuracy.

Abstract

User intent detection is an important task in building intelligent conversational systems. A good intent detector helps a chatbot understand and respond to user queries more effectively. While large language models (LLMs) have demonstrated state-of-the-art performance on various NLP tasks, their high computational demands make them impractical for most real-time or resource-constrained chatbot deployments. In this paper, we explore the use of knowledge distillation to transfer the capabilities of LLMs to lightweight student models optimized for intent detection. We compare baseline student models, such as BiLSTM, CNN, and BERT, with counterparts integrated with knowledge distilled from a powerful fine-tuned LLM. Our experiments demonstrate remarkable improvements in classification accuracy on a benchmark dataset, highlighting the effectiveness of LLM-informed distillation. The resulting models achieve substantial reductions in inference time and memory usage while retaining high accuracy, making them well suited for deployment in real-world conversational systems.
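The abstract does not specify the distillation objective used; a common formulation (not necessarily the paper's) is to minimize the KL divergence between the teacher's temperature-softened class distribution and the student's. A minimal sketch in plain Python, with toy logits chosen purely for illustration:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher T yields a softer distribution,
    # exposing more of the teacher's "dark knowledge" about similar intents.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL(teacher || student) over softened distributions, scaled by T^2
    # so gradients keep a comparable magnitude across temperatures.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return kl * temperature ** 2

# Hypothetical intent logits over three classes:
teacher = [4.0, 1.0, 0.5]   # fine-tuned LLM teacher
student = [2.0, 1.5, 0.5]   # lightweight student (e.g. BiLSTM or CNN)
loss = distillation_loss(student, teacher)
```

In practice this soft-target term is typically combined with the ordinary cross-entropy loss on the gold intent labels, weighted by a mixing coefficient.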
