Commit 8867b2a (1 parent: 5a5440a)
Committed by yangapku: update readme

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -165,9 +165,9 @@ response, history = model.chat(tokenizer, "你好", history=None)
 
 ### 效果评测
 
-我们对BF16和Int4模型在基准评测上做了测试,发现量化模型效果损失较小,结果如下所示:
+我们对BF16和Int4模型在基准评测上做了测试(使用zero-shot设置),发现量化模型效果损失较小,结果如下所示:
 
-We illustrate the model performance of both BF16 and Int4 models on the benchmark, and we find that the quantized model does not suffer from significant performance degradation. Results are shown below:
+We illustrate the zero-shot performance of both BF16 and Int4 models on the benchmark, and we find that the quantized model does not suffer from significant performance degradation. Results are shown below:
 
 | Quantization | MMLU | CEval (val) | GSM8K | Humaneval |
 | ------------- | :--------: | :----------: | :----: | :--------: |
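
For readers following the diff, the hunk header quotes the README's quickstart call `model.chat(tokenizer, "你好", history=None)`. Below is a minimal sketch of loading an Int4 chat checkpoint and issuing that call with Hugging Face Transformers; the model ID `Qwen/Qwen-7B-Chat-Int4` is an assumption for illustration, since the exact repository this commit belongs to is not shown here.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical checkpoint name for illustration; substitute the actual
# repository this commit belongs to.
model_id = "Qwen/Qwen-7B-Chat-Int4"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    trust_remote_code=True,
).eval()

# The same chat call that appears in the hunk context above (README line 165).
response, history = model.chat(tokenizer, "你好", history=None)
print(response)
```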