---
license: apache-2.0
language:
  - zh
  - en
---

# Llama-3-Chinese-8B-GGUF

Warning: llama.cpp has introduced breaking changes to the Llama-3 pre-tokenizer, which significantly affect performance. We will update the GGUF models in the next few hours.

This repository contains Llama-3-Chinese-8B-GGUF (compatible with llama.cpp, ollama, tgw, etc.), the quantized version of Llama-3-Chinese-8B.

Note: this is a foundation model and is not suitable for conversation, QA, or other instruction-following use.
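As a rough illustration, the sketch below loads one of the quantized GGUF files with the llama-cpp-python bindings and runs plain text completion (not chat, since this is a base model). The file name `llama-3-chinese-8b-q4_k.gguf` and the sampling parameters are assumptions for illustration; substitute the file you actually downloaded from this repository.

```python
# Minimal sketch: plain text completion with a quantized GGUF file,
# using the llama-cpp-python bindings.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3-chinese-8b-q4_k.gguf",  # assumed local file name
    n_ctx=2048,   # context window
    n_threads=8,  # CPU threads
)

# Foundation model: use raw completion, not a chat template.
out = llm("人工智能的未来", max_tokens=64, temperature=0.8)
print(out["choices"][0]["text"])
```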

For further details (performance, usage, etc.), please refer to the GitHub project page: https://github.com/ymcui/Chinese-LLaMA-Alpaca-3

## Performance

Metric: PPL, lower is better

Models with the -im suffix are quantized using an importance matrix, which generally gives better performance.

| Quant | Size | PPL | PPL (-im) |
| :---: | :---: | :---: | :---: |
| Q2_K | 2.96 GB | 17.7212 +/- 0.59814 | 14.9583 +/- 0.50455 |
| Q3_K | 3.74 GB | 8.6303 +/- 0.28481 | 8.4423 +/- 0.28087 |
| Q4_0 | 4.34 GB | 8.2513 +/- 0.27102 | 7.9077 +/- 0.25525 |
| Q4_K | 4.58 GB | 7.8897 +/- 0.25830 | 7.8279 +/- 0.25542 |
| Q5_0 | 5.21 GB | 7.7975 +/- 0.25639 | 7.7724 +/- 0.25625 |
| Q5_K | 5.34 GB | 7.7062 +/- 0.25218 | 7.6902 +/- 0.25170 |
| Q6_K | 6.14 GB | 7.6600 +/- 0.25043 | 7.6412 +/- 0.24949 |
| Q8_0 | 7.95 GB | 7.6512 +/- 0.25064 | 7.6512 +/- 0.25064 |
| F16 | 14.97 GB | 7.6389 +/- 0.25001 | N/A |
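For reference, perplexity is the exponential of the average negative log-likelihood per token of the evaluation text. The sketch below only illustrates that relationship; it is an assumption about the metric's definition, not the evaluation script used to produce the table above.

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(mean negative log-likelihood per token).

    token_logprobs: natural-log probabilities the model assigned to
    each ground-truth token of the evaluation text.
    """
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# Toy example: higher per-token probability -> lower perplexity.
print(perplexity([math.log(0.5)] * 4))   # 2.0
print(perplexity([math.log(0.25)] * 4))  # 4.0
```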

## Others