Sailor: Open Language Models for South-East Asia

Longxu Dou, Qian Liu, +4 authors, Min Lin

2024 · DOI: 10.48550/arXiv.2404.03608
Conference on Empirical Methods in Natural Language Processing · 9 Citations

TLDR

Sailor is a family of open language models continually pre-trained from Qwen1.5 for South-East Asian languages; across four typical tasks — commonsense reasoning, question answering, reading comprehension, and examination — the models show strong benchmark performance, and the shared insights aim to spark wider interest in building large language models for multilingual use cases.

Abstract

We present Sailor, a family of open language models ranging from 0.5B to 14B parameters, tailored for South-East Asian (SEA) languages. Starting from Qwen1.5, Sailor models are continually pre-trained on 200B to 400B tokens, primarily covering English, Chinese, Vietnamese, Thai, Indonesian, Malay, and Lao. The training leverages several techniques, including BPE dropout for improving model robustness, aggressive data cleaning and deduplication, and small proxy models to optimize the data mixture. Experimental results on four typical tasks indicate that Sailor models demonstrate strong performance across different benchmarks, including commonsense reasoning, question answering, reading comprehension, and examination. We share our insights to spark a wider interest in developing large language models for multilingual use cases.
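The abstract mentions BPE dropout as a robustness technique but does not show how Sailor implements it. As a hedged illustration only (not the authors' code), the sketch below implements the core idea from Provilkov et al.'s BPE-dropout: during segmentation, each candidate merge is randomly skipped with probability `p`, so the same word yields varied subword segmentations at training time. The `merges` table, function name, and toy vocabulary are all hypothetical.

```python
import random

def bpe_dropout_encode(word, merges, p=0.1, rng=None):
    """Segment `word` with BPE, randomly dropping merges with probability p.

    `merges` maps a symbol pair, e.g. ("h", "e"), to its merge rank
    (lower rank = higher priority). With p=0 this is plain greedy BPE;
    with p>0 some merges are skipped, producing varied segmentations.
    """
    rng = rng or random.Random()
    tokens = list(word)
    dropped = set()  # merges skipped for this word (the "dropout")
    while True:
        pairs = {pair for pair in zip(tokens, tokens[1:])}
        applicable = [pr for pr in pairs if pr in merges and pr not in dropped]
        if not applicable:
            break
        best = min(applicable, key=lambda pr: merges[pr])
        if rng.random() < p:
            dropped.add(best)  # skip this merge for the rest of the word
            continue
        # apply the chosen merge at every position it occurs
        out, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == best:
                out.append(tokens[i] + tokens[i + 1])
                i += 2
            else:
                out.append(tokens[i])
                i += 1
        tokens = out
    return tokens

# Toy merge table (hypothetical): ranks reflect learning order.
merges = {("h", "e"): 0, ("l", "l"): 1, ("he", "ll"): 2, ("hell", "o"): 3}
print(bpe_dropout_encode("hello", merges, p=0.0))  # plain BPE: ['hello']
print(bpe_dropout_encode("hello", merges, p=1.0))  # all dropped: characters
```

In practice, tokenizer libraries expose this directly (e.g. SentencePiece's sampling mode with a dropout probability for BPE models), so a from-scratch loop like this is only for exposition.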
