update readme
README.md CHANGED
```diff
@@ -165,9 +165,9 @@ response, history = model.chat(tokenizer, "你好", history=None)
 
 ### 效果评测
 
-我们对BF16和Int4
+我们对BF16和Int4模型在基准评测上做了测试(使用zero-shot设置),发现量化模型效果损失较小,结果如下所示:
 
-We illustrate the
+We illustrate the zero-shot performance of both BF16 and Int4 models on the benchmark, and we find that the quantized model does not suffer from significant performance degradation. Results are shown below:
 
 | Quantization | MMLU | CEval (val) | GSM8K | Humaneval |
 | ------------- | :--------: | :----------: | :----: | :--------: |
```