Linformer: Self-Attention with Linear Complexity

Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, Hao Ma

2020 · ArXiv: 2006.04768
arXiv.org · 1,721 Citations

TLDR

This paper demonstrates that the self-attention mechanism of the Transformer can be approximated by a low-rank matrix, and proposes a new self-attention mechanism, which reduces the overall self-attention complexity from O(n^2) to O(n) in both time and space.

Abstract

Large transformer models have shown extraordinary success in achieving state-of-the-art results in many natural language processing applications. However, training and deploying these models can be prohibitively costly for long sequences, as the standard self-attention mechanism of the Transformer uses O(n^2) time and space with respect to sequence length. In this paper, we demonstrate that the self-attention mechanism can be approximated by a low-rank matrix. We further exploit this finding to propose a new self-attention mechanism, which reduces the overall self-attention complexity from O(n^2) to O(n) in both time and space. The resulting linear transformer, the Linformer, performs on par with standard Transformer models, while being much more memory- and time-efficient.
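
The core idea above can be sketched in a few lines: instead of forming the full n × n attention matrix, the keys and values are projected along the sequence dimension down to a fixed length k, so the score matrix is n × k. This is a minimal NumPy illustration of that low-rank trick, not the paper's full implementation; the projection matrices `E` and `F` stand in for the learned linear projections, and the single-head, unbatched setup is a simplifying assumption.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def linformer_attention(Q, K, V, E, F):
    """Single-head attention with sequence-length projection.

    Q, K, V: (n, d) query/key/value matrices.
    E, F:    (k, n) projection matrices (learned in the paper;
             random placeholders here).
    Returns an (n, d) output using O(n*k) scores instead of O(n^2).
    """
    K_proj = E @ K          # (k, d): keys compressed along sequence axis
    V_proj = F @ V          # (k, d): values compressed the same way
    d = Q.shape[-1]
    scores = softmax(Q @ K_proj.T / np.sqrt(d))  # (n, k), not (n, n)
    return scores @ V_proj  # (n, d)

rng = np.random.default_rng(0)
n, d, k = 512, 64, 32       # illustrative sizes; k << n is the point
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
E = rng.standard_normal((k, n)) / np.sqrt(n)
F = rng.standard_normal((k, n)) / np.sqrt(n)
out = linformer_attention(Q, K, V, E, F)
print(out.shape)  # (512, 64)
```

With k fixed as n grows, both the score matrix and the intermediate products scale linearly in n, which is where the O(n) time and space claim comes from.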