Add Smaug 72B
Tried it; I could not quantize it myself, so I used this Q5_K_S quant. The outputs weren't good. There are two possibilities:
- The model is overtrained.
- The tokenizer is broken (a quick round-trip check is sketched below).
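If the tokenizer is the problem, an encode/decode round trip usually exposes gross breakage quickly. Here is a minimal sketch using `transformers`; the `abacusai/Smaug-72B-v0.1` repo name is an assumption, and small normalization differences are normal, so only large mismatches matter:

```python
# Round-trip sanity check for a possibly broken tokenizer.
# Repo name is an assumption; swap in whichever checkpoint you are testing.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("abacusai/Smaug-72B-v0.1")

samples = [
    "Hello, world!",
    "Multi-line\ntext with\ttabs",
    "Unicode: naïve café, 日本語",
]

for text in samples:
    ids = tok.encode(text, add_special_tokens=False)
    back = tok.decode(ids)
    status = "OK" if back == text else "MISMATCH"
    print(f"{status}: {text!r} -> {back!r}")
```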
I think it is very likely overtrained and overaligned, especially given the base model they use.
Well, Smaug 34B didn't perform well either.
Update: I quantized Smaug myself and the issue persists, so it is not the quant. It seems to be inherited from https://huggingface.co./moreh/MoMo-72B-lora-1.8.6-DPO/discussions/7. Will do full tests later.
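For reference, this is roughly the llama.cpp quantization pipeline (HF checkpoint to f16 GGUF to Q5_K_S). It is a sketch, not the exact commands used here: the script and binary names vary across llama.cpp versions (convert.py vs. convert_hf_to_gguf.py, quantize vs. llama-quantize), and all paths are placeholders:

```python
# Sketch: HF checkpoint -> f16 GGUF -> Q5_K_S GGUF via llama.cpp tools.
# Assumes a llama.cpp checkout with the quantize binary already built.
import subprocess

LLAMA_CPP = "/path/to/llama.cpp"        # placeholder
MODEL_DIR = "/path/to/Smaug-72B-v0.1"   # placeholder: local HF snapshot

# 1) Convert the HF checkpoint to an f16 GGUF file.
subprocess.run(
    ["python", f"{LLAMA_CPP}/convert.py", MODEL_DIR,
     "--outtype", "f16", "--outfile", "smaug-72b-f16.gguf"],
    check=True,
)

# 2) Quantize the f16 GGUF down to Q5_K_S.
subprocess.run(
    [f"{LLAMA_CPP}/quantize",
     "smaug-72b-f16.gguf", "smaug-72b-Q5_K_S.gguf", "Q5_K_S"],
    check=True,
)
```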
Tested it again; still not very good. Too overfitted.
I think I found the reasons for such shitty performance...
> I think it is very likely overtrained and overaligned, especially given the base model they use.
It's not just the base model.
https://huggingface.co./datasets/abacusai/HellaSwag_DPO_FewShot
https://huggingface.co./datasets/abacusai/ARC_DPO_FewShot
They literally trained it on the test dataset. That's why the benchmark scores look great while it's so shit in actual human tests.
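Anyone can sanity-check the overlap: count how many benchmark contexts appear verbatim in those DPO prompts. A rough sketch with the `datasets` library; the split names and column names ("prompt" in the DPO set, "ctx" in Rowan/hellaswag) are assumptions, so adjust them to the actual schemas:

```python
# Rough contamination check: how many HellaSwag benchmark contexts
# appear verbatim inside the DPO training prompts?
from datasets import load_dataset

dpo = load_dataset("abacusai/HellaSwag_DPO_FewShot", split="train")
bench = load_dataset("Rowan/hellaswag", split="validation")

# Column names below are assumptions; print(dpo.column_names) to verify.
train_text = " ".join(row["prompt"] for row in dpo).lower()

hits = sum(1 for row in bench if row["ctx"].lower() in train_text)
print(f"{hits}/{len(bench)} benchmark contexts found in the DPO prompts")
```

Even partial overlap here would explain inflated benchmark scores without any real capability gain.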
LOL