Taishi-N324 committed on
Commit 0b77619
1 Parent(s): 0db95cc

Update README.md

Files changed (1): README.md (+3 −3)
README.md CHANGED
@@ -81,8 +81,8 @@ The website [https://swallow-llm.github.io/](https://swallow-llm.github.io/) pro
 
 |Model|coding|extraction|humanities|math|reasoning|roleplay|stem|writing|JMTAvg|
 |---|---|---|---|---|---|---|---|---|---|
-| Qwen2-72B-Instruct | 0.5699 | 0.7858 | 0.8222 | 0.5096 | 0.7032 | 0.7963 | 0.7728 | 0.8223 | 0.7228 |
-| Qwen2.5-72B-Instruct | 0.7060 | 0.7866 | 0.8122 | 0.6968 | 0.6536 | 0.8301 | 0.8060 | 0.7841 | 0.7594 |
+| Qwen2-72B-Instruct | 0.5699 | 0.7858 | 0.8222 | 0.5096 | **0.7032** | 0.7963 | 0.7728 | **0.8223** | 0.7228 |
+| Qwen2.5-72B-Instruct | 0.7060 | 0.7866 | 0.8122 | **0.6968** | 0.6536 | **0.8301** | 0.8060 | 0.7841 | 0.7594 |
 | Llama 3 70B Instruct | 0.5969 | 0.8410 | 0.7120 | 0.4481 | 0.4884 | 0.7117 | 0.6510 | 0.6900 | 0.6424 |
 | Llama 3.1 70B Instruct | 0.5252 | 0.7846 | 0.7086 | 0.5063 | 0.6979 | 0.6888 | 0.6402 | 0.6653 | 0.6521 |
 | Llama 3 Youko 70B Instruct | 0.6632 | 0.8387 | 0.8108 | 0.4655 | 0.7013 | 0.7778 | 0.7544 | 0.7662 | 0.7222 |
@@ -91,7 +91,7 @@ The website [https://swallow-llm.github.io/](https://swallow-llm.github.io/) pro
 | Llama 3 Swallow 70B Instruct | 0.5269 | 0.7250 | 0.5690 | 0.4669 | 0.6121 | 0.6238 | 0.5533 | 0.5698 | 0.5809 |
 | Llama 3.1 Swallow 70B Instruct | 0.5676 | 0.7859 | 0.7490 | 0.5437 | 0.6383 | 0.6870 | 0.6121 | 0.6540 | 0.6547 |
 | GPT-3.5 (gpt-3.5-turbo-0125) | 0.6851 | 0.7641 | 0.7414 | 0.5522 | 0.5128 | 0.7104 | 0.6266 | 0.7361 | 0.6661 |
-| GPT-4o (gpt-4o-2024-05-13) | 0.7296 | 0.8540 | 0.8646 | 0.6641 | 0.6661 | 0.8274 | 0.8184 | 0.8085 | 0.7791 |
+| GPT-4o (gpt-4o-2024-05-13) | **0.7296** | **0.8540** | **0.8646** | 0.6641 | 0.6661 | 0.8274 | **0.8184** | 0.8085 | **0.7791** |
 
 ## Evaluation Benchmarks
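
For reference, the JMTAvg column appears to be the unweighted mean of the eight per-category Japanese MT-Bench scores; this is an observation from the table values, not something stated in the diff. A minimal sketch checking it against the GPT-4o row:

```python
# Sketch: check that JMTAvg matches the mean of the eight category scores.
# Scores are copied from the GPT-4o (gpt-4o-2024-05-13) row above; the
# "mean of categories" reading is an assumption, not stated in the commit.
scores = {
    "coding": 0.7296, "extraction": 0.8540, "humanities": 0.8646,
    "math": 0.6641, "reasoning": 0.6661, "roleplay": 0.8274,
    "stem": 0.8184, "writing": 0.8085,
}

jmt_avg = round(sum(scores.values()) / len(scores), 4)
print(jmt_avg)  # 0.7791 -- matches the JMTAvg column in the table
```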