sometimesanotion posted an update 4 days ago:
I've managed a #1 score of 41.22% average for 14B parameter models on the Open LLM Leaderboard. As of this writing, sometimesanotion/Lamarck-14B-v0.7 is #8 for all models up to 70B parameters.

It took a custom toolchain around Arcee AI's mergekit to manage the complex merges, gradients, and LoRAs required to make this happen. I really like seeing features of many quality finetunes in one solid generalist model.
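For anyone curious what a mergekit recipe with layer gradients looks like, here is a minimal Python sketch using mergekit's documented API. To be clear, this is not Lamarck's actual recipe: the donor model name, layer ranges, and gradient values below are placeholders for illustration.

```python
# A minimal sketch of driving mergekit from Python. The donor model name
# and the gradient values are illustrative placeholders, not Lamarck's recipe.
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# SLERP merge with per-layer "gradients": the interpolation factor t is
# swept across the layer stack, so early layers favor one model and late
# layers favor the other.
CONFIG_YAML = """
slices:
  - sources:
      - model: Qwen/Qwen2.5-14B-Instruct        # base model
        layer_range: [0, 48]
      - model: your-org/qwen2.5-14b-finetune    # donor (placeholder name)
        layer_range: [0, 48]
merge_method: slerp
base_model: Qwen/Qwen2.5-14B-Instruct
parameters:
  t:
    - filter: self_attn
      value: [0.0, 0.3, 0.5, 0.7, 1.0]  # gradient across attention layers
    - filter: mlp
      value: [1.0, 0.7, 0.5, 0.3, 0.0]  # inverse gradient across MLP layers
    - value: 0.5                        # default for all other tensors
dtype: bfloat16
"""

merge_config = MergeConfiguration.model_validate(yaml.safe_load(CONFIG_YAML))

run_merge(
    merge_config,
    out_path="./merged-14b",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # carry the base tokenizer over
        lazy_unpickle=True,              # lower peak RAM while loading shards
        low_cpu_memory=False,
    ),
)
```

Chaining several configs like this (plus LoRA extraction and re-application) is the kind of work a toolchain wrapped around mergekit automates.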

Appreciate your work!


Yours too! You get a lot out of the 7B parameter models!

Congratulations! BTW, I'm still waiting for your response to https://huggingface.co./bamec66557/Qwen-2.5-14B-MINUS/discussions/1#678250364248fde89ea918f7 :)


Thank you, I somehow missed that notification! You can hit me up to discuss model merges anytime. You too, @CultriX.

This is an impressive model. Try deploying it to your Friendli endpoints via the "Deploy" button at https://huggingface.co./sometimesanotion/Lamarck-14B-v0.7 and experimenting!
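If you'd rather experiment locally before setting up a hosted endpoint, a standard transformers sketch works too. This assumes enough GPU memory for a 14B model and that the model ships a chat template (Qwen2.5 derivatives do):

```python
# A minimal local-inference sketch with Hugging Face transformers; assumes
# roughly 28 GB+ of GPU memory in bfloat16 for a 14B model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "sometimesanotion/Lamarck-14B-v0.7"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # half-precision weights
    device_map="auto",           # spread layers across available GPUs
)

# Build a prompt with the model's own chat template.
messages = [{"role": "user", "content": "Summarize what model merging is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```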