---
license: apache-2.0
language:
  - zh
  - en
---

# Llama-3-Chinese-8B-GGUF

This repository contains Llama-3-Chinese-8B-GGUF, the quantized version of Llama-3-Chinese-8B, compatible with llama.cpp, ollama, tgw, and other GGUF-aware tools.

Note: this is a foundation model and is not suitable for conversation, QA, or other chat-style use.

For further details (performance, usage, etc.), please refer to the GitHub project page: https://github.com/ymcui/Chinese-LLaMA-Alpaca-3
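
As a quick way to load one of the GGUF files, the sketch below uses llama-cpp-python for plain text completion (appropriate for a base model). The model file name, context size, and prompt are assumptions for illustration, not part of this repository.

```python
# Minimal sketch with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-3-chinese-8b.Q4_K.gguf",  # hypothetical local file name; use the quant you downloaded
    n_ctx=2048,                                   # context window
)

# Plain text completion only: this is a foundation model, not a chat model.
out = llm("人工智能的未来发展趋势是", max_tokens=64)
print(out["choices"][0]["text"])
```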

## Performance

Metric: PPL, lower is better

| Quant | Size | PPL |
| :---: | ---: | :---: |
| Q2_K | 2.96 GB | |
| Q3_K | 3.74 GB | |
| Q4_0 | 4.34 GB | |
| Q4_K | 4.58 GB | |
| Q5_0 | 5.21 GB | |
| Q5_K | 5.34 GB | |
| Q6_K | 6.14 GB | |
| Q8_0 | 7.95 GB | |
| F16 | 14.97 GB | |
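
For context, PPL here presumably refers to standard token-level perplexity over an evaluation text, i.e. the exponentiated average negative log-likelihood (lower means the model predicts the text better):

$$
\mathrm{PPL} = \exp\!\left(-\frac{1}{N}\sum_{i=1}^{N}\log p(x_i \mid x_{<i})\right)
$$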

## Others