When will a 2-bit model version be available?

#5
by froilo - opened


You can try this if you don't mind using an experimental build of llama.cpp; I was able to get it running with 64 GB of RAM. I even had a little headroom left to try the 3-bit version, but for some reason it wouldn't load, likely a bug.
https://huggingface.co./pmysl/c4ai-command-r-plus-GGUF
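A minimal sketch of loading one of those GGUF quants on CPU, assuming the llama-cpp-python bindings are installed and a 2-bit quant file from that repo has been downloaded locally; the file name and parameters below are placeholders, not confirmed by the thread:

```python
# Hedged example: load a downloaded 2-bit GGUF quant of Command R+ with
# llama-cpp-python and run a short completion. The model_path is hypothetical;
# substitute whichever quant file you actually downloaded from the repo above.
from llama_cpp import Llama

llm = Llama(
    model_path="c4ai-command-r-plus-IQ2_XS.gguf",  # hypothetical file name
    n_ctx=4096,      # context window; lower it if RAM is tight
    n_gpu_layers=0,  # CPU-only; raise this to offload layers to a GPU
)

out = llm("Write a one-sentence summary of Command R+.", max_tokens=64)
print(out["choices"][0]["text"])
```

Note that at the time of this thread, Command R+ support required a newer (experimental) llama.cpp build, so make sure both llama.cpp and the Python bindings are up to date.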
