---
license: apache-2.0
library_name: peft
base_model: Qwen/Qwen1.5-7B
model-index:
  - name: qwen_1.5_odia_7b
    results: []
---

# qwen_1.5_odia_7b (Pre-trained)

Qwen_1.5_Odia_7B is a pre-trained Odia large language model with 7 billion parameters, based on Qwen1.5-7B. The model is pre-trained on the CulturaX-Odia dataset, a filtered version of the original CulturaX dataset for Odia text, containing 49 million tokens. The CulturaX-Odia dataset is sourced from mC4 and four distinct OSCAR corpora.

For more details about the model, data, training procedure, and evaluations, please refer to the blog post.

## Model Description

- Model type: A 7B pre-trained decoder-only model
- Primary language(s): Odia and English
- License: Apache-2.0 (permits commercial use)
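
Since the card ships a PEFT adapter on top of `Qwen/Qwen1.5-7B` (see the metadata above), the adapter is attached to the base model with `peft`. Below is a minimal loading sketch; the adapter repository id is an assumption for illustration and should be replaced with the actual repo.

```python
# Minimal loading sketch, assuming a PEFT adapter on top of Qwen/Qwen1.5-7B.
# NOTE: the adapter repo id below is assumed, not confirmed by this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen1.5-7B"                # base model from the card metadata
adapter_id = "OdiaGenAI/qwen_1.5_odia_7b"  # assumed adapter repo id; replace with the actual one

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
# Attach the adapter weights on top of the (frozen) base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```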

## NOTE

This is not an instruction-tuned model, so it may not follow human instructions without one-/few-shot prompting or instruction fine-tuning. The model has no moderation mechanisms and may generate harmful or inappropriate responses. It is recommended to first fine-tune it on the task(s) you are interested in.
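
Because the model behaves as a plain language model rather than an instruction follower, a common pattern is to prime it with one or two completed examples and let it continue. A rough sketch follows, reusing `model` and `tokenizer` from the loading snippet above; the prompt text is purely illustrative.

```python
# Illustrative few-shot prompt: completed example(s) first, then the new input to continue.
import torch

prompt = (
    "English: Good morning.\n"
    "Odia: ଶୁଭ ସକାଳ।\n"
    "English: Thank you.\n"
    "Odia:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=32, do_sample=False)

# Decode only the newly generated continuation.
new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```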

## Citation Information

If you find this model useful, please consider giving 👏 and citing:

```bibtex
@misc{Qwen1.5_odia_7b,
  author       = {Sambit Sekhar and Shantipriya Parida and Debasish Dhal},
  title        = {Introducing OdiaGenAI's Qwen-Based Pre-trained LLM for Odia Language},
  year         = {2023},
  publisher    = {Hugging Face},
  journal      = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co./OdiaGenAI}},
}
```

## Contributions

- Sambit Sekhar
- Shantipriya Parida
- Debasish Dhal