
Training Tips for the Transformer Model

M. Popel, Ondřej Bojar

2018 · DOI: 10.2478/pralin-2018-0002
Prague Bulletin of Mathematical Linguistics · 321 Citations

TLDR

Describes experiments in neural machine translation using the Tensor2Tensor framework and the Transformer sequence-to-sequence model, confirming the general mantra "more data and larger models" and offering practical training tips.

Abstract

This article describes our experiments in neural machine translation using the recent Tensor2Tensor framework and the Transformer sequence-to-sequence model (Vaswani et al., 2017). We examine some of the critical parameters that affect the final translation quality, memory usage, training stability and training time, concluding each experiment with a set of recommendations for fellow researchers. In addition to confirming the general mantra “more data and larger models”, we address scaling to multiple GPUs and provide practical tips for improved training regarding batch size, learning rate, warmup steps, maximum sentence length and checkpoint averaging. We hope that our observations will allow others to get better results given their particular hardware and data constraints.
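Two of the knobs the abstract mentions, warmup steps and checkpoint averaging, can be illustrated briefly. The sketch below is not the authors' exact Tensor2Tensor configuration: it shows the inverse-square-root learning-rate schedule with linear warmup from the original Transformer (Vaswani et al., 2017) and a naive parameter-averaging routine over a list of checkpoints; the constants and the dict-of-arrays checkpoint representation are assumptions made for the example.

```python
import numpy as np

def transformer_lr(step, d_model=512, warmup_steps=16000):
    """Inverse-square-root schedule with linear warmup (Vaswani et al., 2017).

    The learning rate rises linearly for `warmup_steps` updates, then decays
    proportionally to 1/sqrt(step). d_model=512 and warmup_steps=16000 are
    illustrative values, not the paper's recommended settings.
    """
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

def average_checkpoints(checkpoints):
    """Naive checkpoint averaging: element-wise mean of each parameter tensor.

    `checkpoints` is assumed to be a list of dicts mapping parameter names to
    NumPy arrays -- a stand-in for however the weights are actually stored.
    """
    averaged = {}
    for name in checkpoints[0]:
        averaged[name] = np.mean([ckpt[name] for ckpt in checkpoints], axis=0)
    return averaged
```

Averaging the last few checkpoints is cheap, and the paper lists it among the practical tips for getting a better and more stable final model from a given training run.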
