M2K-VDG: Model-Adaptive Multimodal Knowledge Anchor Enhanced Video-grounded Dialogue Generation

Hongcheng Liu, Pingjie Wang, Yu Wang, Yanfeng Wang

2024 · DOI: 10.48550/arXiv.2402.11875
arXiv.org · 1 Citation

TLDR

This paper proposes M2K-VDG, a model-adaptive multimodal knowledge anchor enhancement framework for hallucination reduction, and introduces the counterfactual effect for more accurate anchor token detection.

Abstract

Video-grounded dialogue generation (VDG) requires a system to generate fluent and accurate answers based on multimodal knowledge. In practice, however, the difficulty of utilizing multimodal knowledge causes serious hallucinations in VDG models. Although previous works mitigate hallucination in a variety of ways, they largely overlook the importance of the answer tokens that anchor the multimodal knowledge. In this paper, we reveal via perplexity that different VDG models suffer different hallucinations and exhibit diverse anchor tokens. Based on this observation, we propose M2K-VDG, a model-adaptive multimodal knowledge anchor enhancement framework for hallucination reduction. Furthermore, we introduce the counterfactual effect for more accurate anchor token detection. Experimental results on three popular benchmarks show the superiority of our approach over state-of-the-art methods, demonstrating its effectiveness in reducing hallucinations.
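Since this page gives only the abstract, the following is a minimal illustrative sketch, not the authors' released implementation. It assumes one plausible reading of the method: score each answer token by the counterfactual change in its negative log-likelihood when the video context is withheld, and treat high-scoring tokens as multimodal knowledge anchors. All function names and the top-k selection heuristic are hypothetical.

```python
# Minimal sketch (hypothetical, not the paper's code): scoring answer tokens
# as multimodal-knowledge "anchors" via a counterfactual perplexity difference.
from typing import List


def token_nll(logprobs: List[float]) -> List[float]:
    """Per-token negative log-likelihood of the answer."""
    return [-lp for lp in logprobs]


def anchor_scores(logprobs_with_video: List[float],
                  logprobs_without_video: List[float]) -> List[float]:
    """Counterfactual effect per token: how much does removing the video
    (the multimodal knowledge) hurt the model's confidence in this token?
    Large positive scores mark candidate anchor tokens."""
    nll_with = token_nll(logprobs_with_video)
    nll_without = token_nll(logprobs_without_video)
    return [wo - w for w, wo in zip(nll_with, nll_without)]


def detect_anchors(scores: List[float], top_k: int = 3) -> List[int]:
    """Indices of the top-k tokens most dependent on the video context."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:top_k]


# Toy usage: per-token log-probs of the same answer under two conditions.
with_video = [-0.2, -0.1, -0.4, -0.3]     # model sees the video
without_video = [-0.3, -2.1, -0.5, -1.8]  # video masked out (counterfactual)
print(detect_anchors(anchor_scores(with_video, without_video), top_k=2))  # [1, 3]
```

Tokens whose likelihood collapses without the video are the ones the answer grounds in multimodal knowledge; a framework like M2K-VDG could then upweight such tokens during training, which is the "anchor enhancement" idea the abstract describes.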
