manishiitg committed
Commit 58f7dbc
1 Parent(s): 9006a83

Update README.md

Files changed (1)
  1. README.md +23 -1
README.md CHANGED
@@ -11,6 +11,8 @@ Model trained on Hindi and English data.
 
 Try it out: https://colab.research.google.com/drive/1A_hbsq1vrCeAh3dEMvtwxxNxcNZ1BUyW?usp=sharing
 
+For sample responses on different prompts, check out: https://github.com/manishiitg/hi-llm-eval
+
 
 #### Language Hi
 
@@ -31,29 +33,50 @@ Try it out: https://colab.research.google.com/drive/1A_hbsq1vrCeAh3dEMvtwxxNxcNZ
 | Airavata | 0.0437 | 0.0277 | 0.1165 | 0.3586 | 0.4393 | 0.2534 | 0.1630 |
 
 Task: flores Metric: chrf
+
 Task: implicit_hate Metric: chrf
+
 Task: indicsentiment Metric: accuracy
+
 Task: indicxparaphrase Metric: accuracy
+
 Task: boolq-hi Metric: accuracy
+
 Task: truthfulqa-hi Metric: accuracy
+
 Task: indic-arc-easy Metric: accuracy
+
 Task: indicwikibio Metric: bleurt
+
 Task: xlsum-hi Metric: bleurt
+
 Task: indicheadline Metric: bleurt
+
 Task: indic-arc-challenge Metric: accuracy
+
 Task: mmlu_hi Metric: average_acc
+
 Task: indicqa Metric: accuracy
+
 Task: hellaswag-indic Metric: accuracy
+
 Task: arc-easy-exact Metric: accuracy
+
 Task: hellaswag Metric: accuracy
+
 Task: arc-challenge Metric: accuracy
+
 Task: mmlu Metric: average_acc
+
 Task: xlsum Metric: bleurt
+
 Task: boolq Metric: accuracy
+
 Task: truthfulqa Metric: accuracy
 
 
 
+
 Model evaluation on OpenLLM LeaderBoard
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5dfae476da6d0311fd3d5432/ENzZwV2Z98uNlpyUz3Blp.png)
@@ -61,5 +84,4 @@ Model evaluation on OpenLLM LeaderBoard
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5dfae476da6d0311fd3d5432/SpSiu5lzA6JKJx8ICX_zd.png)
 
 
-For detailed model evaluation: https://github.com/manishiitg/hi-llm-eval
 