language:
  - aar
  - ach
  - afr
  - aka
  - amh
  - bam
  - bas
  - bem
  - btg
  - eng
  - ewe
  - fon
  - fra
  - hau
  - ibo
  - kbp
  - lgg
  - lug
  - mlg
  - nyn
  - orm
  - som
  - sot
  - swa
  - tir
  - yor
  - teo
  - gez
  - wal
  - fan
  - kau
  - kin
  - kon
  - lin
  - nya
  - pcm
  - ssw
  - tsn
  - tso
  - twi
  - wol
  - xho
  - zul
  - nnb
  - swc
  - ara
pipeline_tag: text-generation
tags:
  - African MT
  - Toucan
  - Machine translation
  - UBC
  - DLNLP
extra_gated_fields:
  First Name: text
  Last Name: text
  Country: country
  Affiliation: text
  Job title:
    type: select
    options:
      - Student
      - Research Graduate
      - AI researcher
      - AI developer/engineer
      - Reporter
      - Other
  I agree to use this model for non-commercial use ONLY: checkbox
  I agree to cite the Toucan paper: checkbox
  geo: ip_location
  By clicking Submit below I accept the terms of the license: checkbox
extra_gated_button_content: Submit

This is the repository accompanying our ACL 2024 paper Toucan: Many-to-Many Translation for 150 African Language Pairs. We address a notable gap in Natural Language Processing (NLP) by introducing a collection of resources designed to improve Machine Translation (MT) for low-resource languages, with a specific focus on African languages. First, we introduce two language models (LMs), Cheetah-1.2B and Cheetah-3.7B, with 1.2 billion and 3.7 billion parameters respectively. Next, we finetune these models to create Toucan, an Afrocentric machine translation model designed to support 156 African language pairs. To evaluate Toucan, we carefully develop an extensive benchmark, dubbed AfroLingu-MT, tailored for evaluating machine translation. Toucan significantly outperforms other models, showcasing its remarkable performance on MT for African languages. Finally, we train a new model, spBLEU_1K, to enhance translation evaluation metrics, covering 1K languages, including 614 African languages. This work aims to advance the field of NLP, fostering cross-cultural understanding and knowledge exchange, particularly in regions with limited language resources such as Africa.

AfroLingu-MT Benchmark

Our collection comprises data from a total of 43 datasets, encompassing 84 unique language pairs derived from 46 different languages. We also develop a new manually translated dataset useful for evaluation in the government domain. In all, the data cover 43 African languages from five language families domiciled in 29 African countries. We also include Arabic, English, and French, since these are widely spoken in Africa.

Supported languages

Below are the supported languages:

```python
lang_names = {
    "aar": "Afar",
    "ach": "Acholi",
    "afr": "Afrikaans",
    "aka": "Akan",
    "amh": "Amharic",
    "bam": "Bambara",
    "bas": "Basaa",
    "bem": "Bemba",
    "btg": "Bete Gagnoa",
    "eng": "English",
    "ewe": "Ewe",
    "fon": "Fon",
    "fra": "French",
    "hau": "Hausa",
    "ibo": "Igbo",
    "kbp": "Kabiye",
    "lgg": "Lugbara",
    "lug": "Luganda",
    "mlg": "Malagasy",
    "nyn": "Nyankore",
    "orm": "Oromo",
    "som": "Somali",
    "sot": "Sesotho",
    "swa": "Swahili",
    "tir": "Tigrinya",
    "yor": "Yoruba",
    "teo": "Ateso",
    "gez": "Geez",
    "wal": "Wolaytta",
    "fan": "Fang",
    "kau": "Kanuri",
    "kin": "Kinyarwanda",
    "kon": "Kongo",
    "lin": "Lingala",
    "nya": "Chichewa",
    "pcm": "Nigerian Pidgin",
    "ssw": "Siswati",
    "tsn": "Setswana",
    "tso": "Tsonga",
    "twi": "Twi",
    "wol": "Wolof",
    "xho": "Xhosa",
    "zul": "Zulu",
    "nnb": "Nande",
    "swc": "Congo Swahili",
    "ara": "Arabic"
}
```
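Each record's `langcode` field identifies a language pair. As a minimal sketch, assuming the pair is encoded as a hyphen-separated `"src-tgt"` string such as `"eng-yor"` (verify the exact convention against the dataset itself), a small helper can turn a pair code into readable names:

```python
# Sketch of a helper that maps a pair code such as "eng-yor" to readable
# names, using a subset of the lang_names table above. The hyphen-separated
# "src-tgt" format is an assumption; check it against the actual records.
LANG_NAMES = {
    "eng": "English",
    "yor": "Yoruba",
    "hau": "Hausa",
    "swa": "Swahili",
}

def pair_names(langcode: str) -> str:
    """Return 'Source -> Target' for a 'src-tgt' pair code."""
    src, tgt = langcode.split("-")
    # Fall back to the raw code for languages not in the table.
    return f"{LANG_NAMES.get(src, src)} -> {LANG_NAMES.get(tgt, tgt)}"

print(pair_names("eng-yor"))  # English -> Yoruba
```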

Loading the dataset

```python
from datasets import load_dataset

afrolingu_mt = load_dataset("UBC-NLP/AfroLingu-MT")
print(afrolingu_mt)
```

Output:

```
DatasetDict({
    train: Dataset({
        features: ['langcode', 'instruction', 'input', 'output'],
        num_rows: 586261
    })
    validation: Dataset({
        features: ['langcode', 'instruction', 'input', 'output'],
        num_rows: 7437
    })
    test: Dataset({
        features: ['langcode', 'instruction', 'input', 'output'],
        num_rows: 26875
    })
})
```
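Each example carries `langcode`, `instruction`, `input`, and `output` fields. A common pattern is to restrict a split to one language pair and assemble the model prompt from the instruction and source text. The field semantics below (instruction = translation directive, input = source sentence, output = reference translation) are an assumption inferred from the feature names, and the record shown is hypothetical:

```python
# Minimal sketch, assuming `instruction` holds the translation directive,
# `input` the source sentence, and `output` the reference translation.
def build_prompt(example: dict) -> str:
    """Assemble a single prompt string from one AfroLingu-MT record."""
    return f"{example['instruction']}\n{example['input']}"

# Hypothetical record shaped like the features listed above.
example = {
    "langcode": "eng-yor",
    "instruction": "Translate the following text from English to Yoruba.",
    "input": "Good morning.",
    "output": "E kaaro.",
}

prompt = build_prompt(example)
reference = example["output"]

# After loading the dataset, a split can be narrowed to one pair, e.g.:
# test_pair = afrolingu_mt["test"].filter(lambda ex: ex["langcode"] == "eng-yor")
```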

Citation

If you use the AfroLingu-MT benchmark for your scientific publication, or if you find the resources in this repository useful, please cite our papers as follows (to be updated):

Toucan's Paper

```bibtex
@inproceedings{adebara-etal-2024-cheetah,
    title = "Cheetah: Natural Language Generation for 517 {A}frican Languages",
    author = "Adebara, Ife  and
      Elmadany, AbdelRahim  and
      Abdul-Mageed, Muhammad",
    editor = "Ku, Lun-Wei  and
      Martins, Andre  and
      Srikumar, Vivek",
    booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand and virtual meeting",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.acl-long.691",
    pages = "12798--12823",
}
```