No Language Left Behind: Scaling Human-Centered Machine Translation
Authors (NLLB Team): Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Jeff Wang
Abstract: Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today. However, such efforts have coalesced around a small subset of languages, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200-language barrier while ensuring safe, high-quality results, all while keeping ethical considerations in mind? In No Language Left Behind, we took on this challenge by first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers. Then, we created datasets and models aimed at narrowing the performance gap between low- and high-resource languages. More specifically, we developed a conditional compute model based on a Sparsely Gated Mixture of Experts that is trained on data obtained with novel and effective data mining techniques tailored for low-resource languages. We propose multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically, we evaluated the performance of over 40,000 different translation directions using a human-translated benchmark, Flores-200, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state of the art, laying important groundwork towards realizing a universal translation system. Finally, we open-source all contributions described in this work, accessible at https://github.com/facebookresearch/fairseq/tree/nllb.
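The conditional compute model the abstract refers to builds on sparsely gated Mixture-of-Experts layers, in which a learned gate routes each token to a small subset of expert feed-forward networks, so model capacity grows without a proportional increase in per-token compute. The sketch below is a minimal, illustrative top-2 routing layer in PyTorch; it is not the NLLB-200 implementation (which is released in fairseq at the link above), and all class and parameter names here are assumptions chosen for readability.

# Illustrative sketch of a top-2 sparsely gated Mixture-of-Experts layer
# (Shazeer et al., 2017 style). Not the NLLB-200 code; names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoELayer(nn.Module):
    def __init__(self, d_model: int, d_ffn: int, num_experts: int):
        super().__init__()
        # One small feed-forward network per expert.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ffn), nn.ReLU(), nn.Linear(d_ffn, d_model))
            for _ in range(num_experts)
        )
        # The gate scores each token against every expert.
        self.gate = nn.Linear(d_model, num_experts, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model); each token is sent to its top-2 experts only.
        gate_logits = self.gate(x)                        # (tokens, experts)
        top_vals, top_idx = gate_logits.topk(2, dim=-1)   # (tokens, 2)
        weights = F.softmax(top_vals, dim=-1)             # renormalize over the 2 chosen experts
        out = torch.zeros_like(x)
        for slot in range(2):
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Usage example: route 16 tokens of dimension 512 through 4 experts.
layer = Top2MoELayer(d_model=512, d_ffn=2048, num_experts=4)
tokens = torch.randn(16, 512)
print(layer(tokens).shape)  # torch.Size([16, 512])

Because only two experts are evaluated per token, adding more experts increases total parameter count while keeping per-token computation roughly constant, which is the property that makes this family of models attractive for massively multilingual translation.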