llama.cpp format?

#5
by joorei - opened

Did anyone convert it to ggml for llama.cpp already?

Just add the tokens with IDs 39410 to 39423 to added_tokens.json, and you will be able to convert the model to ggml for llama.cpp.
But llama.cpp can't input the `<human>` and `<bot>` tokens, so it doesn't seem to work very well.
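The added_tokens.json edit above can be sketched as a small script. The placeholder names `<token_…>` are an assumption; the real token strings (two of which are `<human>` and `<bot>`) come from this model's tokenizer config.

```python
import json
import os

# Hypothetical placeholder names for token IDs 39410-39423; substitute the
# actual token strings from the model's tokenizer (e.g. <human>, <bot>).
new_tokens = {f"<token_{i}>": i for i in range(39410, 39424)}

path = "added_tokens.json"
added = {}
if os.path.exists(path):
    # Merge into the existing file rather than overwriting other entries.
    with open(path) as f:
        added = json.load(f)
added.update(new_tokens)
with open(path, "w") as f:
    json.dump(added, f, indent=2)
```

After this, the usual convert script should pick up the extra tokens and no longer fail on the mismatched vocabulary size.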

I've quantized this model to GGML and changed the `<human>` and `<bot>` tokens to the 🧑 and 🤖 emojis; it works fine.
See https://huggingface.co./thatname/Ziya-LLaMA-13B-v1-ggml