Cloud-Based Retrieval Augmented Language Models in Multi-Hop QA Task: A Meta-Analysis

Zhenhao Xu, Peihang Jiang, Yan Gao, Jun You

2025 · DOI: 10.1109/ICAACE65325.2025.11019854

TLDR

This paper provides a comprehensive review of recent developments in RAG+LLMs for multi-hop QA, revealing the strengths and weaknesses of various approaches and highlighting the trade-offs between retrieval efficiency, model scalability, and the complexity of multi-hop reasoning.

Abstract

Large Language Models (LLMs) have revolutionized natural language processing by providing impressive capabilities in generating coherent responses across a range of tasks. However, LLMs face limitations in handling dynamic, real-time access to external knowledge, particularly in complex tasks such as multi-hop question answering (QA), which require the integration of multiple pieces of information from various sources. Retrieval-Augmented Generation (RAG) has been introduced to address these limitations by augmenting LLMs with the ability to query external knowledge bases, enhancing their capacity to answer multi-hop questions. We provide a comprehensive review of recent developments in RAG+LLMs for multi-hop QA. Our meta-analysis reveals the strengths and weaknesses of various approaches, highlighting the trade-offs between retrieval efficiency, model scalability, and the complexity of multi-hop reasoning. We then identify several promising avenues for future work, including the optimization of retrieval strategies, the development of dynamic and adaptive retrieval mechanisms, and the integration of metacognitive and self-correction processes to enhance the accuracy and efficiency of multi-hop reasoning in real-world applications. Through this review, we aim to provide valuable insights into the ongoing evolution of RAG+LLMs.
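To make the retrieve-then-reason pattern the abstract describes concrete, the following is a minimal, self-contained sketch of an iterative multi-hop RAG loop: each hop retrieves evidence and conditions the next retrieval on it. Everything here (the toy corpus, the word-overlap `retrieve` stand-in, and `multi_hop_answer`) is an illustrative assumption for exposition, not the method of any system surveyed in the paper; a real pipeline would use a dense retriever over a large index and an LLM to read the evidence.

```python
import re

# Toy two-document corpus: answering the question below requires chaining
# a fact from doc1 (birthplace) with a fact from doc2 (that city's country).
TOY_CORPUS = {
    "doc1": "Marie Curie was born in Warsaw.",
    "doc2": "Warsaw is the capital of Poland.",
}

def tokenize(text):
    """Lowercase and split into alphabetic word tokens."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query, corpus, exclude=()):
    """Rank unseen documents by word overlap with the query
    (a crude stand-in for BM25 or a dense retriever)."""
    q_tokens = tokenize(query)
    scored = [(len(q_tokens & tokenize(text)), text)
              for text in corpus.values() if text not in exclude]
    scored.sort(reverse=True)
    return [text for score, text in scored if score > 0]

def multi_hop_answer(question, corpus, hops=2):
    """Iterative RAG: retrieve, append the top document to the
    evidence, and condition the next hop's query on it."""
    query, evidence = question, []
    for _ in range(hops):
        docs = retrieve(query, corpus, exclude=evidence)
        if not docs:
            break  # no new evidence found; stop early
        evidence.append(docs[0])
        query = question + " " + docs[0]  # expand query with new evidence
    return evidence

evidence = multi_hop_answer(
    "Where was Marie Curie born, and what country is that city in?",
    TOY_CORPUS,
)
```

The first hop surfaces the birthplace document; because the query is then expanded with "Warsaw", the second hop can reach the country document, which no single-shot retrieval on the original question would rank highly. This hop-by-hop query expansion is the core trade-off the survey discusses: more hops improve evidence coverage but multiply retrieval cost.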