
hfl-rc committed
Commit 9e79ac7
1 Parent(s): 63dde78

Update README.md

Files changed (1)
  1. README.md +14 -15
README.md CHANGED
@@ -7,8 +7,6 @@ language:

  # Llama-3-Chinese-8B-GGUF

- ## Warning: llama.cpp has [breaking changes on Llama-3 pre-tokenizer](https://github.com/ggerganov/llama.cpp/pull/6920), which significantly affect performance. We will update GGUF models in the next few hours.
-
  <p align="center">
  <a href="https://github.com/ymcui/Chinese-LLaMA-Alpaca-3"><img src="https://ymcui.com/images/chinese-llama-alpaca-3-banner.png" width="600"/></a>
  </p>
@@ -23,19 +21,20 @@ Further details (performance, usage, etc.) should refer to GitHub project page:

  Metric: PPL, **lower is better**

- The model name with `-im` suffix is generated with an importance matrix, which generally gives better performance.
-
- | Quant | Size | PPL | PPL (`-im`) |
- | :---: | -------: | ------------------: | ----------------------: |
- | Q2_K | 2.96 GB | 17.7212 +/- 0.59814 | **14.9583 +/- 0.50455** |
- | Q3_K | 3.74 GB | 8.6303 +/- 0.28481 | **8.4423 +/- 0.28087** |
- | Q4_0 | 4.34 GB | 8.2513 +/- 0.27102 | **7.9077 +/- 0.25525** |
- | Q4_K | 4.58 GB | 7.8897 +/- 0.25830 | **7.8279 +/- 0.25542** |
- | Q5_0 | 5.21 GB | 7.7975 +/- 0.25639 | **7.7724 +/- 0.25625** |
- | Q5_K | 5.34 GB | 7.7062 +/- 0.25218 | **7.6902 +/- 0.25170** |
- | Q6_K | 6.14 GB | 7.6600 +/- 0.25043 | **7.6412 +/- 0.24949** |
- | Q8_0 | 7.95 GB | 7.6512 +/- 0.25064 | 7.6512 +/- 0.25064 |
- | F16 | 14.97 GB | 7.6389 +/- 0.25001 | N/A |
+ *Note: Old models have been removed due to their inferior performance.*
+
+ | Quant | Size | PPL (old model) | 👍🏻 PPL (new model) |
+ | :---: | -------: | ------------------: | ------------------: |
+ | Q2_K | 2.96 GB | 17.7212 +/- 0.59814 | 11.8595 +/- 0.20061 |
+ | Q3_K | 3.74 GB | 8.6303 +/- 0.28481 | 5.7559 +/- 0.09152 |
+ | Q4_0 | 4.34 GB | 8.2513 +/- 0.27102 | 5.5495 +/- 0.08832 |
+ | Q4_K | 4.58 GB | 7.8897 +/- 0.25830 | 5.3126 +/- 0.08500 |
+ | Q5_0 | 5.21 GB | 7.7975 +/- 0.25639 | 5.2222 +/- 0.08317 |
+ | Q5_K | 5.34 GB | 7.7062 +/- 0.25218 | 5.1813 +/- 0.08264 |
+ | Q6_K | 6.14 GB | 7.6600 +/- 0.25043 | 5.1481 +/- 0.08205 |
+ | Q8_0 | 7.95 GB | 7.6512 +/- 0.25064 | 5.1350 +/- 0.08190 |
+ | F16 | 14.97 GB | 7.6389 +/- 0.25001 | 5.1302 +/- 0.08184 |
+

  ## Others
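
For context, the GGUF quants compared in the table above can be loaded directly with the llama-cpp-python bindings. A minimal sketch, assuming a locally downloaded file named `llama-3-chinese-8b-q4_k.gguf` (the exact filename is an assumption; substitute whichever quant you fetched from this repo):

```python
# Minimal sketch (not from this repo): load a GGUF quant with llama-cpp-python
# and run a short completion. This is a base model, so plain text continuation
# is used rather than a chat template.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3-chinese-8b-q4_k.gguf",  # hypothetical local path
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload all layers to GPU if available; set 0 for CPU-only
)

# Continue a Chinese prompt ("Artificial intelligence is ...")
out = llm("人工智能是", max_tokens=64)
print(out["choices"][0]["text"])
```

Smaller quants (e.g. Q2_K, Q3_K) trade the higher PPL shown above for lower memory use; Q4_K and above stay close to the F16 baseline.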