---
license: apache-2.0
language:
- en
- hi
tags:
- multilingual
- instruction-tuning
- phi4
- efficiency
- hindi
datasets:
- 1024m/PHI-4-Hindi-Instruct-Data
model-index:
- name: Phi-4-Hindi
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU Pro (5-Shot)
      type: mmlu_pro
      config: MMLU Pro
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 52.39
      name: accuracy
    source:
      url: >-
        https://huggingface.co./datasets/open-llm-leaderboard/results/blob/main/1024m/PHI-4-Hindi/results_2025-02-06T05-43-08.878637.json
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-Shot)
      type: gpqa
      config: GPQA
      split: test
      args:
        num_few_shot: 0
    metrics:
    - type: acc
      value: 39.77
      name: accuracy (normalized)
    source:
      url: >-
        https://huggingface.co./datasets/open-llm-leaderboard/results/blob/main/1024m/PHI-4-Hindi/results_2025-02-06T05-43-08.878637.json
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-Shot)
      type: musr
      config: MuSR
      split: test
      args:
        num_few_shot: 0
    metrics:
    - type: acc
      value: 49.07
      name: accuracy (normalized)
    source:
      url: >-
        https://huggingface.co./datasets/open-llm-leaderboard/results/blob/main/1024m/PHI-4-Hindi/results_2025-02-06T05-43-08.878637.json
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Big Bench Hard (3-Shot)
      type: bbh
      config: Big Bench Hard
      split: test
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 66.97
      name: accuracy (normalized)
    source:
      url: >-
        https://huggingface.co./datasets/open-llm-leaderboard/results/blob/main/1024m/PHI-4-Hindi/results_2025-02-06T05-43-08.878637.json
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Math HARD (4-Shot)
      type: math_hard
      config: Math Hard
      split: test
      args:
        num_few_shot: 4
    metrics:
    - type: acc
      value: 23.11
      name: accuracy (exact match)
    source:
      url: >-
        https://huggingface.co./datasets/open-llm-leaderboard/results/blob/main/1024m/PHI-4-Hindi/results_2025-02-06T05-43-08.878637.json
      name: Open LLM Leaderboard
---
# Phi-4-Hindi

Phi-4-Hindi is a 14.7B-parameter pre-trained and instruction-tuned bilingual large language model for Hindi and English, trained on a mixed-language dataset.

- ~1% better performance on English tasks than the original (average benchmark scores)
- ~4% better performance on Hindi tasks than the original (average benchmark scores)
- ~10% lower emissions than the original (as reported in benchmark evaluations such as the Open LLM Leaderboard)
- Reduced bias from the ordering of choices when answering MCQs

### Model Details

- **Developed by:** [Traversaal.ai](https://huggingface.co./large-traversaal), [1-800-LLMs](https://huggingface.co./1-800-LLMs)
- **Language(s) (NLP):** Optimized for Hindi and English
- **License:** Apache 2.0
- **Paper:** TBA (April 15)

## Intended Use

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
We release Phi-4-Hindi under the Apache 2.0 license, encouraging researchers, developers, and enterprises to experiment with and build upon the model, particularly for bilingual, multilingual and non-English applications.
At the time of release, the model demonstrated state-of-the-art performance across an extensive English and Hindi evaluation suite.  

Some potential downstream applications are as follows:  
- *Research*: This model serves as a valuable tool for researchers and developers working in NLP.  
- *Commercial Use*: It can be utilized as a foundational model for fine-tuning to meet specific industry needs.  
Possible applications include:  
  - AI-powered Chat Assistants  
  - Customer Support Service  
  - Educational tools for language learning  

Target audiences who may benefit from our model:  
- *Academics*: Researchers focused on Hindi and multilingual NLP advancements.  
- *Businesses*: Companies catering to Hindi-speaking and bilingual users.  
- *Developers*: Those integrating Hindi language capabilities into applications and services.  
- *Educational Institutions*: Schools and universities developing AI-powered learning tools.  

### Prompt Formats

| Task                           | Input Format                                            |
|--------------------------------|---------------------------------------------------------|
| Natural Language Inference     | "`Text1 ### Text2 ### NLI ###`"                           |
| Multiple Choice Questions      | "`Question ### A) a, B) b,... ### MCQ ###`"               |
| Numeric Questions              | "`Question ### NUMERIC ###`"                              |
| Boolean Questions              | "`Question ### BOOLEAN ###`"                              |
| Questions seeking Long responses | "`Question ### LONG RESPONSE ###`"                      |
| Short responses (few words)    | "`Input ### DIRECT RESPONSE ###`"                         |
| Coding                         | "`Input ### CODE ###`"                                  |
| Text Summarization             | "`Input ### SUMMARIZE ###`"                               |
| Paraphrasing/Rephrasing        | "`Input ### PARAPHRASE ###`"                              |
| Translation to specified language | "`Input ### TRANSLATION [lang] ###`"                   |
| Text Simplification/ELI5       | "`Input ### SIMPLIFY ###`"                                |

The prompt formats above were used during training and are recommended for best results; however, the model also works well without such formatting.
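
As a quick-start illustration, here is a minimal generation sketch using 🤗 Transformers with the MCQ format from the table above. The repository id is assumed from the leaderboard links on this card; adjust it if the checkpoint is hosted elsewhere.

```python
# Minimal generation sketch (assumptions: the checkpoint id below matches
# the published repository, and a GPU with bfloat16 support is available).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "1024m/PHI-4-Hindi"  # assumed repository id (see leaderboard links)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# MCQ prompt format from the table above.
prompt = "Which planet is known as the Red Planet? ### A) Venus, B) Mars, C) Jupiter, D) Saturn ### MCQ ###"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

The same pattern applies to the other tasks in the table, substituting the corresponding tag (and, for translation, the target language in place of `[lang]`).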

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

While Phi-4-Hindi is a powerful bilingual model designed for Hindi and English, it is crucial to acknowledge its limitations and the potential for misuse. The model must not be used in ways that violate any applicable laws or regulations. Below are specific scenarios where its use is restricted:  

- *Harmful or Malicious Use*: The model should not be employed to create or distribute harmful, misleading, or inappropriate content, including but not limited to:  
  - Encouraging hate speech, violence, or discrimination  
  - Spreading misinformation or false narratives  
  - Facilitating or promoting illegal activities  

- *Sensitive Data Handling*: The model is not designed to process or generate personal, confidential, or sensitive information.  

- *Language Constraints*: While optimized for Hindi and English, the model should not be assumed to have the same proficiency in other languages.  

- *High-Risk Decision-Making*: It should not be used for critical decision-making without human oversight, especially in medical, legal, financial, or safety-related contexts.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

<!-- The model is trained on publicly available data which was in part curated by Inception. -->
While efforts have been made to minimize biases, it is likely that the model, as with all LLMs, will exhibit some bias.

The model is trained as an AI assistant for Hindi and English speakers. It is limited to producing responses for queries in these two languages
and may not produce appropriate responses to queries in other languages.

By using this model, you acknowledge and accept that, as with any large language model, it may generate incorrect, misleading and/or offensive information or content. 
The information is not intended as advice and should not be relied upon in any way, nor are we responsible for any of the content or consequences resulting from its use. 
We are continuously working to develop models with greater capabilities and, as such, welcome any feedback on the model.

## Evaluation
We evaluated our models on multiple well-known benchmarks to measure their effectiveness against other leading models, and the results are as follows:

| Model                           | ARC-C | ARC-E | BoolQ | CMCQ  | MMLU  | Average* | MMLU-Pro | GPQA | MuSR  | BBH   | MATH-Hard  |
|---------------------------------|-------|-------|-------|-------|-------|----------|----------|------|-------|-------|-------|
| AryaBhatta-GemmaUltra-8.5B      | 22.70 | 25.04 | 22.95 | 62.23 | 23.70 | 31.32    | 22.66    | 25.34| 42.72 | 41.12 | 2.95  |
| Airavata-7B                     | 25.09 | 30.47 | 25.31 | 62.17 | 33.20 | 35.25    | 16.35    | 27.43| 37.57 | 36.00 | 13.60 |
| sarvam-1-2B                     | 30.03 | 33.25 | 62.17 | 42.80 | 27.90 | 39.23    | -        | -    | -     | -     | -     |
| Nemotron-4-Mini-Hindi-Instruct  | 55.80 | 71.63 | 62.11 | 68.10 | 43.20 | 60.17    | 25.95    | 30.87| 41.53 | 40.11 | 2.04  |
| Llama-3-Nanda-10B-Chat          | 65.36 | 80.64 | 82.29 | 67.60 | 50.61 | 69.30    | 31.57    | 30.12| 43.52 | 49.38 | 5.59  |
| Krutrim-2-12b-instruct          | 67.32 | 81.10 | 84.74 | 76.30 | 56.10 | 73.11    | -        | -    | -     | -     | -     |
| aya-expanse-8b                  | 74.06 | 87.08 | 86.45 | 83.30 | 56.89 | 77.56    | 30.04    | 30.29| 37.17 | 49.42 | 7.02  |
| aya-expanse-32B                 | 85.41 | 95.08 | **90.43** | 89.80 | 69.71 | 86.08    | 41.30    | 32.55| 38.62 | 56.29 | 13.37 |
| **Our Qwen Model (14b)**        | 90.61 | 94.82 | 88.53 | **90.70** | 75.00 | 87.93 | **52.63** | 36.24 | 44.84 | 64.97 | **25.08** |
| **Our Phi Model (14b)**         | **92.24** | **97.39** | 87.65 | 87.40 | **75.59** | **88.05** | 52.39 | **39.77** | **49.07** | **66.97** | 23.11 |

**Table 1: Metrics (to two decimal places) of our models and other LLMs on several English benchmarks** (\*average over the first five columns: ARC-C, ARC-E, BoolQ, CMCQ, and MMLU)

| Model                              | ARC-C | ARC-E | BoolQ | CMCQ  | MMLU  | Average |
|------------------------------------|-------|-------|-------|-------|-------|---------|
| AryaBhatta-GemmaUltra-8.5B         | 22.70 | 25.08 | 22.95 | 62.17 | 23.80 | 31.34   |
| Airavata-7B                        | 22.87 | 25.13 | 23.28 | 62.17 | 33.20 | 33.33   |
| sarvam-1-2B                        | 32.76 | 35.06 | 62.16 | 47.10 | 24.22 | 40.26   |
| Llama-3-Nanda-10B-Chat             | 45.99 | 60.56 | 71.96 | 54.70 | 36.35 | 53.91   |
| Nemotron-4-Mini-Hindi-4B-Instruct  | 50.68 | 63.72 | 68.74 | 51.30 | 37.18 | 54.32   |
| Krutrim-2-12b-instruct             | 56.83 | 70.66 | 78.86 | 64.10 | 46.51 | 63.39   |
| aya-expanse-8b                     | 57.42 | 72.90 | 80.42 | 69.00 | 43.39 | 64.63   |
| aya-expanse-32B                    | 73.29 | 85.48 | **87.73** | **79.70** | **56.96** | 76.63   |
| **Our Qwen Model (14b)**           | 74.06 | 81.23 | 84.07 | 78.20 | 53.85 | 74.82 |
| **Our Phi Model (14b)**            | **81.74** | **89.06** | 86.02 | 78.70 | 56.39 | **78.38** |

**Table 2: Metrics (to two decimal places) of our models and other LLMs on several Hindi benchmarks**

| Benchmark       | Lang | Qwen-2.5-14B-Instruct | Our Qwen | Change | Phi-4 | Our Phi | Change |
|----------------|------|----------------------|----------|--------|-------|---------|--------|
| ARC-Easy      | En   | 95.45                | 94.82    | 🔻 0.63 | 97.31 | 97.39   | 🔼 0.08 |
|               | Hi   | 78.49                | 81.23    | 🔼 2.74 | 86.87 | 89.06   | 🔼 2.19 |
| ARC-Challenge | En   | 90.87                | 90.61    | 🔻 0.26 | 92.41 | 92.24   | 🔻 0.17 |
|               | Hi   | 69.62                | 74.06    | 🔼 4.44 | 79.18 | 81.74   | 🔼 2.56 |
| BoolQ         | En   | 86.09                | 88.53    | 🔼 2.44 | 86.30 | 87.65   | 🔼 1.35 |
|               | Hi   | 78.89                | 84.07    | 🔼 5.18 | 82.72 | 86.02   | 🔼 3.30 |
| Context-MCQ   | En   | 91.20                | 90.70    | 🔻 0.50 | 86.30 | 87.40   | 🔼 1.10 |
|               | Hi   | 77.40                | 78.20    | 🔼 0.80 | 75.70 | 78.70   | 🔼 3.00 |
| MMLU          | En   | 74.37                | 75.00    | 🔼 0.63 | 74.67 | 75.59   | 🔼 0.92 |
|               | Hi   | 52.16                | 53.85    | 🔼 1.69 | 53.24 | 56.39   | 🔼 3.15 |
| **Average**   | En   | **87.60**            | **87.93**| 🔼 0.33 | **87.40** | **88.05** | 🔼 0.65 |
|               | Hi   | **71.31**            | **74.82**| 🔼 3.51 | **75.54** | **78.38** | 🔼 2.84 |
| **Overall**   |      | **79.46**            | **81.38**| 🔼 1.92 | **81.47** | **83.22** | 🔼 1.75 |

**Table 3: Performance of our models compared to the originals on each benchmark (evaluations via log-likelihoods)**

| Benchmark       | Lang | Qwen-2.5-14B-Instruct | Our Qwen | Change  | Phi-4 | Our Phi | Change  |
|----------------|------|----------------------|----------|---------|-------|---------|---------|
| MMLU-Pro      | En   | 49.04                | 52.63    | 🔼 3.59  | 53.78 | 52.39   | 🔻 1.39  |
| MATH-Hard     | En   | 0.00                 | 25.08    | N/A     | 12.31 | 23.11   | 🔼 10.80 |
| GPQA          | En   | 32.21                | 36.24    | 🔼 4.03  | 33.72 | 39.77   | 🔼 6.05  |
| MuSR          | En   | 40.87                | 44.84    | 🔼 3.97  | 41.01 | 49.07   | 🔼 8.06  |
| BigBench-Hard | En   | 63.74                | 64.97    | 🔼 1.23  | 68.60 | 66.97   | 🔻 1.63  |
| **Average**   |      | **37.17**            | **44.75**| 🔼 7.58  | **41.88** | **46.26** | 🔼 4.38  |

**Table 4: Performance of our models compared to the originals on each benchmark (evaluations via eval-harness)**
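
For reference, results like those in Table 4 can in principle be reproduced with EleutherAI's lm-evaluation-harness. The sketch below is illustrative only: the task identifiers and repository id are our assumptions mapped from the table headers and leaderboard links, so verify them against the installed harness's task registry before running.

```python
# Illustrative evaluation sketch with lm-evaluation-harness (pip install lm-eval).
# Task names are assumptions mapped from the table headers; verify them
# against the installed harness's task registry before running.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=1024m/PHI-4-Hindi,dtype=bfloat16",  # assumed repo id
    tasks=[
        "leaderboard_mmlu_pro",   # MMLU-Pro (5-shot)
        "leaderboard_gpqa",       # GPQA (0-shot)
        "leaderboard_musr",       # MuSR (0-shot)
        "leaderboard_bbh",        # BigBench-Hard (3-shot)
        "leaderboard_math_hard",  # MATH-Hard (4-shot)
    ],
    batch_size=8,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```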

### Recommendations

It is advisable for users to:
- Refrain from deploying the model in sensitive domains without human supervision.
- Cross-check factual information generated by the model for accuracy.
- Continuously assess the model to ensure compliance with ethical standards.
- Be mindful of potential biases and unintended outputs, especially in critical applications.

### Emissions

We believe our use of shorter, compressed instruction-response pairs during training led the model to respond more concisely while still meeting the requirements and arriving at the correct answers, hence the better scores alongside lower emissions.

Unlike distillation from reasoning or CoT models, which produces unnecessarily long responses filled with phrases like "Next we proceed with..." or "Ok, let's do this..." when generating step-by-step solutions to math problems, we kept only the step-by-step math and discarded such fillers. For datasets containing multiple correct step-by-step solutions, we chose the shortest one to train our models; a rough sketch of this selection rule follows.
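
As an illustration, the selection rule described above amounts to something like the sketch below. The filler phrases and helper names are hypothetical examples, not our exact pipeline.

```python
# Hypothetical sketch of the curation rule: drop conversational filler
# lines, then keep the shortest of the remaining correct solutions.
FILLERS = ("next we proceed", "ok lets do this", "let's start by")

def strip_fillers(solution: str) -> str:
    """Remove lines that are purely conversational filler (illustrative list)."""
    kept = [
        line for line in solution.splitlines()
        if not line.strip().lower().startswith(FILLERS)
    ]
    return "\n".join(kept)

def pick_training_solution(correct_solutions: list[str]) -> str:
    """Of all correct step-by-step solutions, choose the shortest to train on."""
    return min((strip_fillers(s) for s in correct_solutions), key=len)
```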

![image/png](https://cdn-uploads.huggingface.co/production/uploads/645c60dd7d655680b57ddbff/vgNk0bKthxNsxO0oAdaPD.png)

### Model Responses vs Order of Choices in MCQs

Since benchmarks like MMLU-Pro have up to 10 choices while most training datasets typically have 4-5, we modified the ordering and labelling of choices in 5% of the MCQ samples for better robustness: re-ordering the choices to create an imbalance opposing the original model's choice distribution, and replacing the labels A/B/C/D with a/b/c/d, 1/2/3/4, w/x/y/z, etc.
This resulted in less bias towards the earlier choices in MCQs compared to the original Phi-4. The images below show the distribution of choices selected by the model when evaluated on MMLU-Pro; a rough sketch of the augmentation follows them.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/645c60dd7d655680b57ddbff/5DYCkLHpdk2jaTsALcwN8.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/645c60dd7d655680b57ddbff/hhNNE4s8mALYsxdVf-UCq.png)
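
A rough sketch of that augmentation is below; the 5% rate and the alternative label alphabets come from the description above, while the helper itself is illustrative.

```python
# Illustrative MCQ augmentation: with ~5% probability, shuffle the choice
# order and swap the label alphabet (alternative sets shown here cover
# 4-choice questions, as in the description above).
import random

LABEL_SETS = [list("ABCD"), list("abcd"), list("1234"), list("wxyz")]

def augment_mcq(question: str, choices: list[str], rate: float = 0.05) -> str:
    if random.random() < rate:
        choices = random.sample(choices, k=len(choices))  # re-order to fight positional bias
        labels = random.choice(LABEL_SETS)[: len(choices)]
    else:
        labels = list("ABCDEFGHIJ")[: len(choices)]       # default labelling, up to 10 choices
    options = ", ".join(f"{lab}) {ch}" for lab, ch in zip(labels, choices))
    return f"{question} ### {options} ### MCQ ###"
```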

### Team

- `Ram Mohan Rao Kadiyala`
- `Siddartha Pullakhandam`
- `Siddhant Gupta`
- `Drishti Sharma`
- `Jebish Purbey`
- `Kanwal Mehreen`
- `Muhammad Arham`
- `Hamza Farooq`

### Correspondence

 [![Gmail](https://img.shields.io/badge/Gmail-D14836?style=for-the-badge&logo=gmail&logoColor=white)](mailto:[email protected])