
🇺🇿 Uzbek mGPT 1.3B

Language model for Uzbek. As its name suggests, the model has 1.3B parameters.

Uzbek belongs to the Turkic language family. It is a very rhythmic language with approximately 32 million speakers. Here are some facts about it:

  1. It is the official language of Uzbekistan.
  2. It transitioned from the Cyrillic script to the Latin script after Uzbekistan's independence, but Cyrillic is still in use among older generations.
  3. Historically, it was influenced by Persian and Arabic due to trade and Islamic scholarly traditions.

Technical details

It is one of the models derived from the base mGPT-XL (1.3B) model (see the list below), which was originally trained on 61 languages from 25 language families using the Wikipedia and C4 corpora.

We found additional data for 23 languages, most of which are considered low-resource, and decided to further tune the base model. Uzbek mGPT 1.3B was trained for another 50,000 steps with batch_size=4 and a context window of 2048 tokens on a single A100.
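The fine-tuned checkpoint should load like any other causal language model in the Hugging Face `transformers` library. A minimal sketch is below; the repository id `ai-forever/mGPT-1.3B-uzbek` and the sampling parameters are assumptions, not confirmed by this card.

```python
def generate_uzbek(prompt, model_id="ai-forever/mGPT-1.3B-uzbek", max_new_tokens=50):
    """Generate a continuation for an Uzbek prompt.

    The repo id is an assumed name; replace it with the actual
    repository shown on this model page. Imports are kept inside
    the function so merely defining it does not require the
    (large) model weights or the transformers package.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    inputs = tokenizer(prompt, return_tensors="pt")
    # Nucleus sampling is one reasonable default for free-form generation.
    outputs = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        top_p=0.95,
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

Note that the first call downloads several gigabytes of weights, so run it on a machine with sufficient disk space and memory.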

The final validation perplexity for this model is 6.84.
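For context, perplexity is simply the exponential of the mean token-level cross-entropy loss, so the reported 6.84 corresponds to a mean validation loss of ln(6.84) ≈ 1.92. A quick sanity check of that relationship:

```python
import math

reported_perplexity = 6.84
# Mean cross-entropy loss implied by the reported perplexity.
mean_loss = math.log(reported_perplexity)
# Going back the other way recovers the perplexity exactly.
recovered = math.exp(mean_loss)
print(round(mean_loss, 2), round(recovered, 2))  # 1.92 6.84
```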

Chart of the training loss and perplexity:

Other mGPT-1.3B models

Feedback

If you find a bug or have additional data to train the model on your language, please give us feedback.

The model will be improved over time. Stay tuned!
