---
license: gpl-3.0
---

GGML 4-bit/5-bit quantized IDEA-CCNL/Ziya-LLaMA-13B-v1

  • You need a recent build of llama.cpp or llama-cpp-python (one that supports GGML format v3).
  • llama.cpp currently cannot tokenize the '<human>' and '<bot>' special tokens, so I replaced them with the 🧑 and 🤖 emojis.
    • Prompt like this (a full usage sketch follows this list):
    inputs = '🧑:' + query.strip() + '\n🤖:'
    
  • If you want to quantize Ziya to GGML yourself, overwrite its 'add_tokens.json' file with the one provided in this repository (see the sketch below).
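
Below is a minimal sketch of loading one of the quantized files with llama-cpp-python and applying the emoji prompt format described above. The model filename, context size, and generation parameters are placeholders, not values shipped with this repository.

```python
from llama_cpp import Llama

# Placeholder path -- point it at whichever 4-bit or 5-bit .bin file you downloaded.
llm = Llama(model_path="./ziya-llama-13b-v1.ggmlv3.q4_0.bin", n_ctx=2048)

query = "Hello, who are you?"
prompt = '🧑:' + query.strip() + '\n🤖:'

# Stop when the model starts a new human turn, so only the bot reply is returned.
output = llm(prompt, max_tokens=256, stop=['🧑:'], echo=False)
print(output["choices"][0]["text"].strip())
```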

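If you do re-quantize the model yourself, the only extra step this card asks for is swapping in the provided 'add_tokens.json' before running the llama.cpp conversion and quantization tools. A small sketch, with both paths assumed rather than taken from this repository:

```python
import shutil
from pathlib import Path

# Both paths are assumptions: adjust them to where you cloned the original
# IDEA-CCNL/Ziya-LLaMA-13B-v1 weights and where you saved this repo's file.
ziya_dir = Path("./Ziya-LLaMA-13B-v1")
patched_tokens = Path("./add_tokens.json")

# Overwrite the original token file before running the conversion script.
shutil.copy(patched_tokens, ziya_dir / "add_tokens.json")
```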