

Poster

ALLaM: Large Language Models for Arabic and English

M Saiful Bari · Yazeed Alnumay · Norah Alzahrani · Nouf Alotaibi · Hisham Alyahya · AlRashed · Faisal Mirza · Shaykhah Alsubaie · Hassan Alahmed · Ghadah Alabduljabbar · Raghad Alkhathran · Yousef Almushayqih · Raneem Alnajim · Salman I Alsubaihi · Maryam Al Mansour · Saad Hassan · Majed Alrubaian · Ali Alammari · Zaki Alawami · Abdulmohsen Al-Thubaity · Ahmed Abdelali · Jeril Kuriakose · Abdalghani Abujabal · Nora Al-Twairesh · Areeb Alowisheq · Haidar Khan

Hall 3 + Hall 2B #250
Fri 25 Apr 7 p.m. PDT — 9:30 p.m. PDT

Abstract:

In this work, we present ALLaM: Arabic Large Language Model, a series of large language models designed to support the ecosystem of Arabic Language Technologies (ALT). ALLaM is carefully trained with language alignment and knowledge transferability at scale in mind. The models are based on an autoregressive decoder-only architecture and are pretrained on a mixture of Arabic and English texts. We illustrate how second-language acquisition via vocabulary expansion can steer a language model towards a new language without major catastrophic forgetting in English. Furthermore, we highlight the effectiveness of using translation data and the process of knowledge encoding within the language model's latent space. Finally, we show that effective alignment with human preferences can significantly enhance the performance of a large language model (LLM) compared to larger but less-aligned models. Our methodology enables us to achieve state-of-the-art performance on various Arabic benchmarks, including MMLU Arabic, ACVA, and Arabic Exams. Our aligned models improve in both Arabic and English over their base models.
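To make the vocabulary-expansion idea concrete, below is a minimal sketch (not the authors' code) of how a tokenizer and embedding matrix can be grown to accommodate a new language before continued pretraining, assuming a Hugging Face causal LM. The checkpoint name, the token list, and the mean-initialization heuristic are all illustrative assumptions; the paper's exact procedure may differ.

```python
# Sketch of vocabulary expansion for second-language acquisition.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "gpt2"  # placeholder English-centric checkpoint, not ALLaM's base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical Arabic subword tokens to graft onto the existing vocabulary.
new_tokens = ["السلام", "اللغة", "نموذج"]
num_added = tokenizer.add_tokens(new_tokens)

# Grow the embedding matrix to cover the enlarged vocabulary. Here the new
# rows are initialized to the mean of the existing embeddings, a common
# heuristic (assumed, not taken from the paper).
model.resize_token_embeddings(len(tokenizer))
with torch.no_grad():
    emb = model.get_input_embeddings().weight
    emb[-num_added:] = emb[:-num_added].mean(dim=0, keepdim=True)

# Continued pretraining on a mixture of Arabic and English text would follow,
# which is what lets the model acquire the new language while retaining English.
```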
