Model Size

#1 opened by hsjoo2442

Hello,
I'd like to ask why the model size is stated as 11.3B in the model card.
As far as I'm aware, functionary-medium-v3.0 is based on Llama 3, which has 70B parameters.
Does the parameter count shrink with 4-bit quantization?
Any help would be appreciated.

MeetKai org

Hi, I am not sure why the quantized model shows only 11.3B parameters. However, looking at the following models quantized by the original author and major contributors of AutoAWQ, it seems that 4-bit AWQ quantization reduces the reported parameter count of Llama-3 70B by approximately 6.2x (70B / 11.3B ≈ 6.2).

https://huggingface.co./casperhansen/llama-3-70b-instruct-awq
https://huggingface.co./TechxGenus/Meta-Llama-3-70B-Instruct-AWQ

We may need to read the original AWQ paper for more details and clues as to why this is so. This is interesting. Do let me know if you have any findings!
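For what it's worth, one plausible explanation (an assumption on my part, not something I have verified against this checkpoint) is that the Hub counts tensor elements from the safetensors metadata, and the AWQ GEMM format packs eight 4-bit weights into each int32 element of its qweight tensors. The quantized linear layers would then appear roughly 8x smaller, while the embeddings, scales, and zero-points keep their full element counts. A rough back-of-envelope sketch using Llama-3 70B's published shapes:

```python
# Back-of-envelope sketch (assumptions, not verified against the checkpoint):
#   1. the Hub counts tensor *elements* from the safetensors metadata, and
#   2. AWQ packs eight 4-bit weights into each int32 element of its qweight
#      tensors, while embeddings, scales, and zero-points stay full-size.

hidden, intermediate, layers, vocab = 8192, 28672, 80, 128256  # Llama-3 70B config
kv_dim = 1024          # 8 KV heads * head_dim 128 (grouped-query attention)
group_size = 128       # typical AWQ quantization group size
pack_factor = 8        # eight 4-bit weights per 32-bit integer

# Weights in the quantized linear layers of one transformer block
per_layer = (
    hidden * hidden * 2          # q_proj + o_proj
    + hidden * kv_dim * 2        # k_proj + v_proj
    + hidden * intermediate * 3  # gate_proj, up_proj, down_proj
)
linear = per_layer * layers                  # ~68.5B 4-bit weights
embed = vocab * hidden * 2                   # embed_tokens + lm_head, left unquantized

qweight = linear / pack_factor               # packed int32 elements
scales = linear / group_size                 # one scale per quantization group
qzeros = linear / group_size / pack_factor   # packed int32 zero-points

reported = qweight + scales + qzeros + embed
print(f"reported element count ≈ {reported / 1e9:.1f}B")  # ≈ 11.3B
```

If that is indeed what is happening, no parameters are actually removed; the displayed count just reflects the packed storage format.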
