Supa-AI/llama-7b-hf-32768-fpf-gguf

This model was converted to GGUF format from mesolitica/llama-7b-hf-32768-fpf using llama.cpp. Refer to the original model card for more details on the model.
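
For reference, conversions like this are typically produced with llama.cpp's conversion and quantization tools. A minimal sketch, assuming a local llama.cpp checkout and a local copy of the original model (the paths and output filenames here are illustrative):

python convert_hf_to_gguf.py ./llama-7b-hf-32768-fpf --outtype f16 --outfile llama-7b-hf-32768-fpf.f16.gguf
llama-quantize llama-7b-hf-32768-fpf.f16.gguf llama-7b-hf-32768-fpf.q4_k_m.gguf q4_K_M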

Available Versions

  • llama-7b-hf-32768-fpf.q4_0.gguf (q4_0)
  • llama-7b-hf-32768-fpf.q4_1.gguf (q4_1)
  • llama-7b-hf-32768-fpf.q5_0.gguf (q5_0)
  • llama-7b-hf-32768-fpf.q5_1.gguf (q5_1)
  • llama-7b-hf-32768-fpf.q8_0.gguf (q8_0)
  • llama-7b-hf-32768-fpf.q3_k_s.gguf (q3_K_S)
  • llama-7b-hf-32768-fpf.q3_k_m.gguf (q3_K_M)
  • llama-7b-hf-32768-fpf.q3_k_l.gguf (q3_K_L)
  • llama-7b-hf-32768-fpf.q4_k_s.gguf (q4_K_S)
  • llama-7b-hf-32768-fpf.q4_k_m.gguf (q4_K_M)
  • llama-7b-hf-32768-fpf.q5_k_s.gguf (q5_K_S)
  • llama-7b-hf-32768-fpf.q5_k_m.gguf (q5_K_M)
  • llama-7b-hf-32768-fpf.q6_k.gguf (q6_K)
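
Individual files can also be fetched directly with huggingface-cli from the huggingface_hub package. A sketch, assuming huggingface_hub is installed and using the q4_k_m file as an example:

huggingface-cli download Supa-AI/llama-7b-hf-32768-fpf-gguf llama-7b-hf-32768-fpf.q4_k_m.gguf --local-dir .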

Use with llama.cpp

Replace FILENAME with one of the above filenames.

CLI:

llama-cli --hf-repo Supa-AI/llama-7b-hf-32768-fpf-gguf --hf-file FILENAME -p "Your prompt here"
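
For example, a filled-in invocation (the filename and generation length are illustrative; -n sets the number of tokens to generate):

llama-cli --hf-repo Supa-AI/llama-7b-hf-32768-fpf-gguf --hf-file llama-7b-hf-32768-fpf.q4_k_m.gguf -p "Your prompt here" -n 128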

Server:

llama-server --hf-repo Supa-AI/llama-7b-hf-32768-fpf-gguf --hf-file FILENAME -c 2048
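
Once the server is running, it can be queried over HTTP via its /completion endpoint. A minimal sketch, assuming the default host and port (localhost:8080):

curl http://localhost:8080/completion -H "Content-Type: application/json" -d '{"prompt": "Your prompt here", "n_predict": 64}'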

Model Details

  • Format: GGUF
  • Model size: 6.74B params
  • Architecture: llama
  • Quantization bit widths: 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit

