---
license: llama2
---
- llama.cpp has changed its model file format from GGML to GGUF, which breaks existing GGML model checkpoints/weights for llama.cpp users.
- This is a temporary upload of GGUF-encoded Llama-2 models, produced by running llama.cpp/convert-llama-ggmlv3-to-gguf.py on the existing GGML models, while waiting for official uploads of natively produced GGUF model checkpoints. An example conversion command is sketched below.
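For reference, converting one of the existing GGML v3 checkpoints to GGUF with that script looks roughly like the sketch below. The file names are placeholders, and the exact flag names and defaults (for example, context-length or grouped-query-attention overrides on some models) vary between llama.cpp revisions, so check `python convert-llama-ggmlv3-to-gguf.py --help` in your checkout before running it.

```sh
# Sketch only: input/output file names are placeholders, and flag names/defaults
# may differ between llama.cpp revisions; confirm with --help first.
python convert-llama-ggmlv3-to-gguf.py \
  --input  llama-2-7b-chat.ggmlv3.q4_0.bin \
  --output llama-2-7b-chat.Q4_0.gguf
```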