  • llama.cpp has changed its model encoding from GGML to GGUF, breaking existing GGML model checkpoints/weights for llama.cpp users.
  • This is a temporary upload of GGUF-encoded Llama-2 models, produced by running llama.cpp/convert-llama-ggmlv3-to-gguf.py on the GGML models (see the sketch below), while waiting for official uploads of natively produced GGUF model checkpoints.
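
For reference, a conversion along these lines can be scripted as in the minimal sketch below. The file paths are hypothetical, and the exact flag names and any extra metadata options depend on the llama.cpp revision, so check `python convert-llama-ggmlv3-to-gguf.py --help` before running.

```python
# Minimal sketch: convert a GGMLv3 checkpoint to GGUF with llama.cpp's
# conversion script. Paths and flag names are assumptions; verify them
# against the llama.cpp revision you have checked out.
import subprocess
from pathlib import Path

LLAMA_CPP_DIR = Path("llama.cpp")                  # assumed local clone of llama.cpp
GGML_MODEL = Path("llama-2-7b.ggmlv3.q4_0.bin")    # hypothetical GGML input file
GGUF_MODEL = GGML_MODEL.with_suffix(".gguf")       # GGUF output path

subprocess.run(
    [
        "python",
        str(LLAMA_CPP_DIR / "convert-llama-ggmlv3-to-gguf.py"),
        "--input", str(GGML_MODEL),    # flag names may differ between revisions
        "--output", str(GGUF_MODEL),
    ],
    check=True,
)
print(f"Wrote {GGUF_MODEL}")
```

Once upstream publishes natively produced GGUF checkpoints, those should be preferred over files converted this way.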