Qwen-2.5-14B-Hindi
Qwen-2.5-14B-Hindi is a 14.8B-parameter pre-trained and instruction-tuned bilingual large language model for Hindi and English, trained on a mixed-language dataset.
- ~1% better performance on English tasks compared to the original model (average benchmark scores)
- ~4% better performance on Hindi tasks compared to the original model (average benchmark scores)
- Reduced bias from the ordering of choices when answering MCQs
Model Details:
- Developed by: Traversaal.ai, 1-800-LLMs
- Language(s) (NLP): Optimized for Hindi and English
- License: Apache 2.0
- Paper: TBA April 15
Intended Use
We release Qwen-2.5-14B-Hindi under the Apache 2.0 license, encouraging researchers, developers, and enterprises to experiment with and build upon the model, particularly for bilingual, multilingual and non-English applications. At the time of release, the model demonstrated state-of-the-art performance across an extensive English and Hindi evaluation suite.
Some potential downstream applications are as follows:
- Research: This model serves as a valuable tool for researchers and developers working in NLP.
- Commercial Use: It can be utilized as a foundational model for fine-tuning to meet specific industry needs.
Possible applications include:
- AI-powered Chat Assistants
- Customer Support Service
- Educational tools for language learning
Target audiences who may benefit from our model:
- Academics: Researchers focused on Hindi and multilingual NLP advancements.
- Businesses: Companies catering to Hindi-speaking and bilingual users.
- Developers: Those integrating Hindi language capabilities into applications and services.
- Educational Institutions: Schools and universities developing AI-powered learning tools.
Prompt Formats
Task | Input Format |
---|---|
Natural Language Inference | "Text1 ### Text2 ### NLI ### " |
Multiple Choice Questions | "Question ### A) a, B) b,... ### MCQ ### " |
Numeric Questions | "Question ### NUMERIC ### " |
Boolean Questions | "Question ### BOOLEAN ### " |
Questions seeking Long responses | "Question ### LONG RESPONSE ### " |
Short responses (few words) | "Input ### DIRECT RESPONSE ### " |
Coding | "Input ### CODE ### " |
Text Summarization | "Input ### SUMMARIZE ### " |
Paraphrasing/Rephrasing | "Input ### PARAPHRASE ### " |
Translation to specified language | "Input ### TRANSLATION [lang] ### " |
Text Simplification/ELI5 | "Input ### SIMPLIFY ### " |
The prompt formats above were used during training and are recommended for best results; however, the model also works well without such formatting.
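As an illustration, below is a minimal sketch of loading the model with Hugging Face transformers and querying it with one of the prompt formats above. The dtype, generation settings, and example question are assumptions for illustration, not prescribed values.

```python
# Minimal usage sketch (assumes a recent transformers release and enough GPU memory for a 14B model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "large-traversaal/Qwen-2.5-14B-Hindi"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # illustrative; pick a dtype your hardware supports
    device_map="auto",
)

# Build a prompt using one of the task formats from the table above.
question = "भारत की राजधानी क्या है?"  # "What is the capital of India?"
prompt = f"{question} ### DIRECT RESPONSE ### "

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)
```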
Out-of-Scope Use
While Qwen-2.5-14B-Hindi is a powerful bilingual model designed for Hindi and English, it is crucial to acknowledge its limitations and the potential for misuse. The model must not be used in ways that violate any applicable laws or regulations. Below are specific scenarios where its use is restricted:
Harmful or Malicious Use: The model should not be employed to create or distribute harmful, misleading, or inappropriate content, including but not limited to:
- Encouraging hate speech, violence, or discrimination
- Spreading misinformation or false narratives
- Facilitating or promoting illegal activities
Sensitive Data Handling: The model is not designed to process or generate personal, confidential, or sensitive information.
Language Constraints: While optimized for Hindi and English, the model should not be assumed to have the same proficiency in other languages.
High-Risk Decision-Making: It should not be used for critical decision-making without human oversight, especially in medical, legal, financial, or safety-related contexts.
Bias, Risks, and Limitations
While efforts have been made to minimize biases, it is likely that the model, as with all LLMs, will exhibit some bias.
The model is trained as an AI assistant for Hindi and English speakers. It is limited to producing responses for queries in these two languages and may not produce appropriate responses to queries in other languages.
By using this model, you acknowledge and accept that, as with any large language model, it may generate incorrect, misleading, and/or offensive information or content. The information is not intended as advice and should not be relied upon in any way, nor are we responsible for any content or consequences resulting from its use. We are continuously working to develop models with greater capabilities and welcome any feedback on the model.
Evaluation:
We evaluated our models on multiple well-known benchmarks to measure their effectiveness against other leading models, and the results are as follows:
Model | ARC-C | ARC-E | BoolQ | CMCQ | MMLU | Average* | MMLU-Pro | GPQA | MuSR | BBH | MATH-Hard |
---|---|---|---|---|---|---|---|---|---|---|---|
AryaBhatta-GemmaUltra-8.5B | 22.70 | 25.04 | 22.95 | 62.23 | 23.70 | 31.32 | 22.66 | 25.34 | 42.72 | 41.12 | 2.95 |
Airavata-7B | 25.09 | 30.47 | 25.31 | 62.17 | 33.20 | 35.25 | 16.35 | 27.43 | 37.57 | 36.00 | 13.60 |
sarvam-1-2B | 30.03 | 33.25 | 62.17 | 42.80 | 27.90 | 39.23 | - | - | - | - | - |
Nemotron-4-Mini-Hindi-Instruct | 55.80 | 71.63 | 62.11 | 68.10 | 43.20 | 60.17 | 25.95 | 30.87 | 41.53 | 40.11 | 2.04 |
Llama-3-Nanda-10B-Chat | 65.36 | 80.64 | 82.29 | 67.60 | 50.61 | 69.30 | 31.57 | 30.12 | 43.52 | 49.38 | 5.59 |
Krutrim-2-12b-instruct | 67.32 | 81.10 | 84.74 | 76.30 | 56.10 | 73.11 | - | - | - | - | - |
aya-expanse-8b | 74.06 | 87.08 | 86.45 | 83.30 | 56.89 | 77.56 | 30.04 | 30.29 | 37.17 | 49.42 | 7.02 |
aya-expanse-32B | 85.41 | 95.08 | 90.43 | 89.80 | 69.71 | 86.08 | 41.30 | 32.55 | 38.62 | 56.29 | 13.37 |
Our Qwen Model (14b) | 90.61 | 94.82 | 88.53 | 90.70 | 75.00 | 87.93 | 52.63 | 36.24 | 44.84 | 64.97 | 25.08 |
Our Phi Model (14b) | 97.39 | 92.24 | 87.65 | 87.40 | 75.59 | 88.05 | 52.39 | 39.77 | 49.07 | 66.97 | 23.11 |
Table 1: Scores (rounded to two decimal places) of our models and other LLMs on several English benchmarks
Model | ARC-C | ARC-E | BoolQ | CMCQ | MMLU | Average |
---|---|---|---|---|---|---|
AryaBhatta-GemmaUltra-8.5B | 22.70 | 25.08 | 22.95 | 62.17 | 23.80 | 31.34 |
Airavata-7B | 22.87 | 25.13 | 23.28 | 62.17 | 33.20 | 33.33 |
sarvam-1-2B | 32.76 | 35.06 | 62.16 | 47.10 | 24.22 | 40.26 |
Llama-3-Nanda-10B-Chat | 45.99 | 60.56 | 71.96 | 54.70 | 36.35 | 53.91 |
Nemotron-4-Mini-Hindi-4B-Instruct | 50.68 | 63.72 | 68.74 | 51.30 | 37.18 | 54.32 |
Krutrim-2-12b-instruct | 56.83 | 70.66 | 78.86 | 64.10 | 46.51 | 63.39 |
aya-expanse-8b | 57.42 | 72.90 | 80.42 | 69.00 | 43.39 | 64.63 |
aya-expanse-32B | 73.29 | 85.48 | 87.73 | 79.70 | 56.96 | 76.63 |
Our Qwen Model (14b) | 74.06 | 81.23 | 84.07 | 78.20 | 53.85 | 74.82 |
Our Phi Model (14b) | 81.74 | 89.06 | 86.02 | 78.70 | 56.39 | 78.38 |
Table 2: Scores (rounded to two decimal places) of our models and other LLMs on several Hindi benchmarks
Benchmark | Lang | Qwen-2.5-14B-Instruct | Our Qwen | Change | Phi-4 | Our Phi | Change |
---|---|---|---|---|---|---|---|
ARC-Easy | En | 95.45 | 94.82 | 🔻 0.63 | 97.31 | 97.39 | 🔼 0.08 |
 | Hi | 78.49 | 81.23 | 🔼 2.74 | 86.87 | 89.06 | 🔼 2.19 |
ARC-Challenge | En | 90.87 | 90.61 | 🔻 0.26 | 92.41 | 92.24 | 🔻 0.17 |
 | Hi | 69.62 | 74.06 | 🔼 4.44 | 79.18 | 81.74 | 🔼 2.56 |
BoolQ | En | 86.09 | 88.53 | 🔼 2.44 | 86.30 | 87.65 | 🔼 1.35 |
 | Hi | 78.89 | 84.07 | 🔼 5.18 | 82.72 | 86.02 | 🔼 3.30 |
Context-MCQ | En | 91.20 | 90.70 | 🔻 0.50 | 86.30 | 87.40 | 🔼 1.10 |
 | Hi | 77.40 | 78.20 | 🔼 0.80 | 75.70 | 78.70 | 🔼 3.00 |
MMLU | En | 74.37 | 75.00 | 🔼 0.63 | 74.67 | 75.59 | 🔼 0.92 |
 | Hi | 52.16 | 53.85 | 🔼 1.69 | 53.24 | 56.39 | 🔼 3.15 |
Average | En | 87.60 | 87.93 | 🔼 0.33 | 87.40 | 88.05 | 🔼 0.65 |
 | Hi | 71.31 | 74.82 | 🔼 3.51 | 75.54 | 78.38 | 🔼 2.84 |
Overall | | 79.46 | 81.38 | 🔼 1.92 | 81.47 | 83.22 | 🔼 1.75 |
Table 3: Performance of our models compared to the original models on each benchmark (evaluated via log-likelihoods)
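For readers unfamiliar with log-likelihood evaluation, the sketch below shows one common way to score an MCQ: compare the summed log-probabilities the model assigns to each candidate answer and pick the highest-scoring one. This is a generic illustration of the approach, not the exact evaluation code behind these tables.

```python
# Generic log-likelihood MCQ scoring sketch (not the exact harness used for these tables).
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "large-traversaal/Qwen-2.5-14B-Hindi"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
model.eval()

def choice_loglikelihood(prompt: str, choice: str) -> float:
    """Sum of log-probabilities of the choice tokens, conditioned on the prompt.

    Assumes the prompt tokenization is a prefix of the prompt+choice tokenization,
    which usually (but not always) holds for BPE tokenizers.
    """
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
    full_ids = tokenizer(prompt + choice, return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-probs of each token given the preceding context (shift by one position).
    log_probs = F.log_softmax(logits[:, :-1], dim=-1)
    targets = full_ids[:, 1:]
    token_ll = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # Keep only the positions that belong to the choice continuation.
    n_prompt = prompt_ids.shape[1]
    return token_ll[0, n_prompt - 1:].sum().item()

question = "Which planet is known as the Red Planet? ### A) Venus, B) Mars, C) Jupiter, D) Saturn ### MCQ ### "
scores = {label: choice_loglikelihood(question, label) for label in ["A", "B", "C", "D"]}
print(max(scores, key=scores.get))  # predicted label
```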
Benchmark | Lang | Qwen-2.5-14B-Instruct | Our Qwen | Change | Phi-4 | Our Phi | Change |
---|---|---|---|---|---|---|---|
MMLU-Pro | En | 49.04 | 52.63 | 🔼 3.59 | 53.78 | 52.39 | 🔻 1.39 |
MATH hard | En | 00.00 | 25.08 | N/A | 12.31 | 23.11 | 🔼 10.80 |
GPQA | En | 32.21 | 36.24 | 🔼 4.03 | 33.72 | 39.77 | 🔼 6.05 |
MuSR | En | 40.87 | 44.84 | 🔼 3.97 | 41.01 | 49.07 | 🔼 8.06 |
BigBench-Hard | En | 63.74 | 64.97 | 🔼 1.23 | 68.60 | 66.97 | 🔻 1.63 |
Average | En | 37.17 | 44.75 | 🔼 7.58 | 41.88 | 46.26 | 🔼 4.38 |
Table 4: Performance of our models compared to the original models on each benchmark (evaluated via eval-harness)
Recommendations
It is advisable for users to:
- Refrain from deploying the model in sensitive domains without human supervision.
- Cross-check factual information generated by the model for accuracy.
- Continuously assess the model to ensure compliance with ethical standards.
- Be mindful of potential biases and unintended outputs, especially in critical applications.
Model Responses vs Order of Choices in MCQs
Since benchmarks like MMLU-Pro have up to 10 choices, while most training datasets typically contain 4-5 choices, we modified the ordering and labelling of choices in 5% of the MCQ samples for better robustness: choices were re-ordered to create an imbalance opposing the original model's choice distribution, and labels were swapped from A/B/C/D to alternatives such as a/b/c/d, 1/2/3/4, or w/x/y/z. This resulted in less bias towards the earlier choices in MCQs compared to the original Phi-4. The images below show the distribution of choices selected by the model when evaluated on MMLU-Pro.
The same information for a different model, built from the base checkpoint and instruction-tuned solely on our data, is shown below:
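As a rough illustration of the augmentation described above, the sketch below re-orders and relabels the choices of an MCQ sample for a small fraction of the data. The function name, label sets, and data layout are hypothetical and do not reflect the actual training pipeline.

```python
# Hypothetical sketch of the MCQ augmentation described above: shuffle choice order and
# swap the label style for ~5% of samples. Not the actual training code.
import random

LABEL_SETS = [
    list("ABCDEFGHIJ"),
    list("abcdefghij"),
    [str(i) for i in range(1, 11)],
    list("wxyzvutsrq"),  # illustrative alternative label set
]

def augment_mcq(question: str, choices: list[str], answer_idx: int, p: float = 0.05):
    """Return a (prompt, answer_label) pair, randomly re-ordering and relabelling choices."""
    order = list(range(len(choices)))
    labels = LABEL_SETS[0]
    if random.random() < p:
        random.shuffle(order)               # re-order the choices
        labels = random.choice(LABEL_SETS)  # swap label style (a/b/c/d, 1/2/3/4, ...)
    relabelled = [f"{labels[i]}) {choices[j]}" for i, j in enumerate(order)]
    answer_label = labels[order.index(answer_idx)]
    prompt = f"{question} ### {', '.join(relabelled)} ### MCQ ### "
    return prompt, answer_label

prompt, gold = augment_mcq(
    "Which planet is known as the Red Planet?",
    ["Venus", "Mars", "Jupiter", "Saturn"],
    answer_idx=1,
)
print(prompt, "->", gold)
```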
Team
Ram Mohan Rao Kadiyala
Siddartha Pullakhandam
Siddhant Gupta
Drishti Sharma
Jebish Purbey
Kanwal Mehreen
Muhammad Arham
Hamza Farooq
Correspondence