requesting gguf version

#16
by Hasaranga85 - opened

Hi, can you please provide the gguf version of this model? So, I can use it with ollama.

hi, thanks for reaching out! Yes, I can take a pass at this over the next few days 'manually' - I tried with the gguf-my-repo space and ran into issues; if that ends up working for you, do let me know

check out the Q6_K in this repo (here)

edit: more options here: https://hf.co/pszemraj/flan-t5-large-grammar-synthesis-gguf
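Since the original ask was Ollama: a minimal Modelfile sketch for importing a downloaded GGUF locally. The filename below is an assumption (use whichever quant you actually downloaded), and note that Ollama's support for T5-family encoder-decoder GGUFs may depend on your Ollama version:

```
# Modelfile - point FROM at the quant file you downloaded (filename assumed)
FROM ./grammar-synthesis.Q6_K.gguf
```

Then `ollama create grammar-synthesis -f Modelfile` and `ollama run grammar-synthesis` with the text you want corrected.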

pszemraj changed discussion status to closed

tried "ggml-model-Q5_K_M.gguf" with llama.cpp and it is repeating the system prompt.

https://hf.co/pszemraj/flan-t5-large-grammar-synthesis-gguf

please review the demo code and/or the Colab notebook linked on the model card. This is a text2text model; it does not use a system prompt of any kind, you cannot chat with it, etc.

It does one thing and one thing only: whatever text you put in will be grammatically corrected (this is what it's doing with your "system prompt").

Thank you very much for your explanation. So this model cannot be used with llama-cli, right?
