---
library_name: transformers
pipeline_tag: text-generation
language:
- multilingual
tags:
- generation
- question answering
- instruction tuning
datasets:
- MBZUAI/Bactrian-X
license: cc-by-nc-4.0
---

### Model Description

This HF repository hosts an instruction fine-tuned multilingual BLOOM model trained on the parallel instruction dataset Bactrian-X, which covers 52 languages. We progressively add one language at a time during instruction fine-tuning, training 52 models in total, and then evaluate them on three multilingual benchmarks. Please refer to [our paper](https://arxiv.org/abs/2404.04850) for more details.

* Base model: [BLOOM 7B1](https://huggingface.co./bigscience/bloom-7b1)
* Instruction languages: English, Chinese, Afrikaans, Arabic, Azerbaijani, Bengali, Czech, German, Spanish, Estonian, Farsi, Finnish, French, Galician, Gujarati, Hebrew, Hindi, Croatian, Indonesian, Italian, Japanese, Georgian, Kazakh, Khmer, Korean, Lithuanian, Latvian, Macedonian, Malayalam, Mongolian, Marathi, Burmese, Nepali, Dutch, Polish, Pashto, Portuguese, Romanian, Russian, Sinhala, Slovenian, Swedish, Swahili, Tamil, Telugu
* Instruction language codes: en, zh, af, ar, az, bn, cs, de, es, et, fa, fi, fr, gl, gu, he, hi, hr, id, it, ja, ka, kk, km, ko, lt, lv, mk, ml, mn, mr, my, ne, nl, pl, ps, pt, ro, ru, si, sl, sv, sw, ta, te
* Training method: full-parameter fine-tuning

### Usage

The model checkpoint can be loaded with the `transformers` library; a minimal generation example is sketched after the citation below.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-45")
model = AutoModelForCausalLM.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-45")
```

### Citation

```
@misc{lucky52,
  title = "Lucky 52: How Many Languages Are Needed to Instruction Fine-Tune Large Language Models?",
  author = "Shaoxiong Ji and Pinzhen Chen",
  year = "2024",
  eprint = "2404.04850",
  archiveprefix = "arXiv",
  primaryclass = "cs.CL"
}
```
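
### Example Generation

The snippet below extends the loading code above into a minimal generation sketch. The plain-question prompt, greedy decoding, and `max_new_tokens` value are illustrative placeholders, not the instruction template or decoding setup used in the paper; adjust them to your task.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-45")
model = AutoModelForCausalLM.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-45")

# Illustrative prompt only: the exact instruction format used during
# fine-tuning is described in the paper and is not assumed here.
prompt = "What are the official languages of Finland?"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding with a capped output length; both are arbitrary choices.
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For GPU inference, move the model and the tokenized inputs to the same device (e.g., via `model.to("cuda")` and `inputs.to("cuda")`) before calling `generate`.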