Recent Advances in Natural Language Processing via Large Pre-trained Language Models: A Survey

Bonan Min, Hayley Ross, +6 authors, D. Roth

2021 · DOI: 10.1145/3605943
ACM Computing Surveys · 1,288 Citations

Abstract

Large, pre-trained language models (PLMs) such as BERT and GPT have drastically changed the Natural Language Processing (NLP) field. For numerous NLP tasks, approaches leveraging PLMs have achieved state-of-the-art performance. The key idea is to learn a generic, latent representation of language from a generic task once, then share it across disparate NLP tasks. Language modeling serves as that generic task: raw text is abundant, so self-supervised training can be carried out at scale. This article presents the key fundamental concepts of PLM architectures and a comprehensive view of the shift to PLM-driven NLP techniques. It surveys work applying the pre-training then fine-tuning, prompting, and text generation approaches, discusses the limitations of PLMs, and suggests directions for future research.
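To make the paradigms the abstract names concrete, the following is a minimal sketch (not from the paper) using the Hugging Face transformers library; the model names and prompt are illustrative choices, not ones the survey prescribes.

```python
# Minimal sketch of two PLM usage paradigms, assuming the Hugging Face
# `transformers` library is installed. Model choices are illustrative.
from transformers import pipeline

# Paradigm 1: pre-train then fine-tune. We load a BERT-style encoder that
# has already been fine-tuned for sentiment classification; in the full
# workflow, one fine-tunes the pre-trained encoder on labeled task data.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("Pre-trained language models changed NLP."))

# Paradigm 2: prompting via text generation. A generative PLM (GPT-style)
# is steered with a natural-language prompt instead of weight updates.
generator = pipeline("text-generation", model="gpt2")
prompt = "Review: 'A great movie.' Sentiment (positive or negative):"
print(generator(prompt, max_new_tokens=3)[0]["generated_text"])
```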