Q8_0 GGUF

#14
by BRP - opened

Dear Matthew Andrews,
Can you please share a Q8_0 file of this model?

I don't actually think I'm going to be able to do that. My internet is a bit crappy, and I doubt I'll be able to upload a file that large. If you look at the HF bin files, they're all 2 GB each; I broke the model up that way so my bad wireless internet could handle it. I was hoping TheBloke would get to this one like he did Trion. I'll try to get a Q5_K up for you.

If you really need Q8_0, download the full model and use llama.cpp to convert it.

It runs something like:

py convert.py ./models/TimeCrystal-l2-13B --outfile ./models/TimeCrystal-l2-13B-f16.gguf --outtype f16
./quantize ./models/TimeCrystal-l2-13B-f16.gguf 7

Here 7 selects Q8_0. It's not a GPU-intensive task at all and runs quite quickly, producing a GGUF file.
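For the curious, here's a rough sketch of what the Q8_0 format does under the hood: GGML splits the weights into blocks of 32 values, and each block stores one float scale plus 32 signed 8-bit integers. This is an illustrative toy in plain Python, not llama.cpp's actual implementation:

```python
# Toy sketch of GGML-style Q8_0 block quantization (assumed scheme:
# 32-value blocks, per-block scale = max(|x|) / 127, int8 quants).
BLOCK_SIZE = 32

def q8_0_quantize(block):
    """Quantize one block of 32 floats to (scale, list of int8 values)."""
    amax = max(abs(v) for v in block)
    scale = amax / 127.0 if amax else 0.0
    quants = [round(v / scale) if scale else 0 for v in block]
    return scale, quants

def q8_0_dequantize(scale, quants):
    """Reconstruct approximate floats from a quantized block."""
    return [scale * q for q in quants]

# Round-trip a block of example weights and measure the error.
block = [0.5, -1.0, 0.25, 0.0] * 8          # 32 values
scale, quants = q8_0_quantize(block)
restored = q8_0_dequantize(scale, quants)
max_err = max(abs(a - b) for a, b in zip(block, restored))
```

The per-block error stays tiny (well under 1% of the largest weight), which is why Q8_0 is usually indistinguishable from the f16 original in practice.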

Right, so there's this. I might add a smaller one too; the internet being a bit dodgy, I try to avoid bigger uploads. But you can do it yourself pretty easily.

https://huggingface.co./BlueNipples/TimeCrystal-l2-13B-GGUF

Thank you so much for all your attention and for the instructions on the conversion as well.
For now, I'll download the version you uploaded (Q5_K_S) and test the TimeCrystal-l2 model.

All the best and thanks again!

I only just saw this. I'm doing it now.

(Feel free to ping me on future model uploads)
