Unlocking Efficiency in Large Language Model Inference: A Comprehensive Survey of Speculative Decoding

Heming Xia, Zhe Yang, +6 authors, Zhifang Sui

2024 · DOI: 10.48550/arXiv.2401.07851
Annual Meeting of the Association for Computational Linguistics · 130 Citations

TLDR

This paper presents a comprehensive overview and analysis of Speculative Decoding, providing a formal definition and formulation of this promising decoding paradigm, along with a comparative analysis of leading methods under third-party testing environments.

Abstract

To mitigate the high inference latency stemming from autoregressive decoding in Large Language Models (LLMs), Speculative Decoding has emerged as a novel decoding paradigm for LLM inference. In each decoding step, this method first drafts several future tokens efficiently and then verifies them in parallel. Unlike autoregressive decoding, Speculative Decoding facilitates the simultaneous decoding of multiple tokens per step, thereby accelerating inference. This paper presents a comprehensive overview and analysis of this promising decoding paradigm. We begin by providing a formal definition and formulation of Speculative Decoding. Then, we organize in-depth discussions on its key facets, such as drafter selection and verification strategies. Furthermore, we present a comparative analysis of leading methods under third-party testing environments. We aim for this work to serve as a catalyst for further research on Speculative Decoding, ultimately contributing to more efficient LLM inference.
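To make the draft-then-verify loop concrete, below is a minimal sketch of one speculative decoding step with greedy (argmax) verification; the lossless rejection-sampling variant used for stochastic decoding follows the same structure. This is an illustrative sketch, not the paper's reference implementation: draft_model, target_model, and the draft length gamma are hypothetical stand-ins, with each model assumed to be a callable mapping a token sequence to a next-token probability distribution.

import numpy as np

def speculative_step(prefix, draft_model, target_model, gamma=4):
    """One speculative decoding step (greedy verification).

    1. The cheap draft model proposes `gamma` future tokens autoregressively.
    2. The target model scores the drafted positions (in practice, a single
       parallel forward pass; simulated here with a loop for clarity).
    3. Drafts are accepted left-to-right while they match the target model's
       greedy choice; the first mismatch is replaced by the target's token.
    """
    # --- Draft phase: propose gamma tokens with the small model ---
    draft_tokens = []
    context = list(prefix)
    for _ in range(gamma):
        token = int(np.argmax(draft_model(context)))
        draft_tokens.append(token)
        context.append(token)

    # --- Verify phase: check each drafted token against the target model ---
    accepted = []
    context = list(prefix)
    for token in draft_tokens:
        target_token = int(np.argmax(target_model(context)))
        if target_token == token:
            accepted.append(token)         # draft agrees with target: keep it
            context.append(token)
        else:
            accepted.append(target_token)  # first mismatch: take target's token
            break
    else:
        # All drafts accepted: the target pass yields one extra "bonus" token.
        accepted.append(int(np.argmax(target_model(context))))

    return accepted  # >= 1 token per step, vs. exactly 1 for autoregressive decoding

Because every accepted token matches what the target model would have produced greedily, the output sequence is identical to plain autoregressive decoding; the speedup comes from verifying gamma drafted positions in one target-model pass instead of gamma sequential passes.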
