IQ3_XS and IQ3_M missing in ollama deployment

#2 opened by notmebug

Thank you for your great work.
For some reason, the IQ3_XS and IQ3_M quant .gguf files seem to be missing from the "Use this model" > "Ollama" menu, although they are present in the repo's files.

Is there any way to add them? They're great quants for a 16 GB VRAM GPU.

Thank you.
