UCEMA: Uni-modal and cross-modal encoding network based on multi-head attention for emotion recognition in conversation
Hongkun Zhao, Siyuan Liu, +3 authors, Kang Li
2024 · DOI: 10.1007/s00530-024-01561-z
Multimedia Systems · 1 Citation
TLDR
The proposed framework effectively balances intra-modal emotional orientation information, inter-modal emotional association information, and context-related cues, achieving higher recognition accuracy than current state-of-the-art (SOTA) models.
