https://huggingface.co./johnsnowlabs/JSL-MedLlama-3-8B-v17-8bits
#233 opened by blankreg
Can you please make GGUFs of this medical model?
I can try, but judging from the name it is already quantized (8-bit), and pre-quantized weights are not supported by llama.cpp in their current form.
Yes, unfortunately, pre-quantized models are not supported by llama.cpp; GGUF conversion needs the original full-precision (fp16/bf16) weights.
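For reference, here is a rough sketch of what the usual GGUF workflow looks like when the full-precision weights *are* available. The repo id `johnsnowlabs/JSL-MedLlama-3-8B-v17` for an unquantized variant is an assumption; the actual unquantized model, if one exists, may be published under a different name.

```shell
# Sketch only: assumes an unquantized fp16/bf16 variant exists.
# The repo id below is hypothetical, not confirmed by this thread.

git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# Download the unquantized model (hypothetical repo id):
huggingface-cli download johnsnowlabs/JSL-MedLlama-3-8B-v17 \
    --local-dir JSL-MedLlama-3-8B-v17

# Convert the HF checkpoint to a GGUF file:
python llama.cpp/convert_hf_to_gguf.py JSL-MedLlama-3-8B-v17 \
    --outfile jsl-medllama-3-8b-v17-f16.gguf

# Optionally re-quantize inside llama.cpp (Q4_K_M shown as an example):
cmake -B build llama.cpp && cmake --build build --target llama-quantize
./build/bin/llama-quantize jsl-medllama-3-8b-v17-f16.gguf \
    jsl-medllama-3-8b-v17-Q4_K_M.gguf Q4_K_M
```

The key point is the order of operations: llama.cpp's converter consumes full-precision safetensors, and any quantization happens afterwards in GGUF space, which is why an already-8-bit checkpoint cannot be fed in directly.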
mradermacher changed discussion status to closed
Thanks anyway