Name;Quant;Size;B;C;D;S;P;Type;Family(incomplete)
alpindale/goliath-120b;Q6_K;120B;3;2;1;6;6;merge;Llama-2-70b
NousResearch/Nous-Hermes-Llama2-70b;Q6_K;70B;3;2;0;3.75;3.5;finetune;Llama-2-70b
Sao10K/Euryale-1.3-L2-70B;Q6_K;70B;0;2;0;3;5;finetune+merge;Llama-2-70b
Xwin-LM/Xwin-LM-70B-V0.1;Q6_K;70B;0;1;2;5.5;5.25;finetune;Llama-2-70b
NousResearch/Nous-Capybara-34B;F16;34B;0;0;3;3.5;2.5;finetune;
mistralai/Mixtral-8x7B-Instruct-v0.1;F16;8x7B;0;2;0;5;5.5;finetune;Mixtral-8x7B
Qwen/Qwen-72B-Chat;Q6_K;72B;3;2;0;2.5;2;finetune;
upstage/SOLAR-10.7B-Instruct-v1.0;F16;10.7B;1;0;0;0.75;4;finetune;
fblgit/una-xaberius-34b-v1beta;F16;34B;0;2;0;1.75;0.25;finetune;
fblgit/una-cybertron-7b-v3-OMA;F16;7B;0;0;0;2.5;3;finetune;
ChatGPT/12-2023;n/a;n/a;1;n/a;0;8;n/a;finetune;
elinas/chronos-70b-v2;Q6_K;70B;0;0;0;3;2.25;finetune;Llama-2-70b
migtissera/SynthIA-70B-v1.5;Q6_K;70B;0;2;0;2.5;2.25;finetune;Llama-2-70b
jondurbin/spicyboros-70b-2.2;Q6_K;70B;3;2;1;6.25;4.5;finetune;Llama-2-70b
Brillibits/Instruct_Llama70B_Dolly15k;Q6_K;70B;3;0;0;2.25;0;finetune;Llama-2-70b
DiscoResearch/DiscoLM-70b;Q6_K;70B;3;2;0;2.75;3.25;finetune;Llama-2-70b
upstage/SOLAR-0-70b-16bit;Q6_K;70B;0;2;1.5;1.5;3.25;finetune;Llama-2-70b
augtoma/qCammel-70-x;Q6_K;70B;0;2;0;5.5;1.25;finetune;Llama-2-70b
garage-bAInd/Platypus2-70B;Q6_K;70B;3;2;0;1.5;1;finetune;Llama-2-70b
jarradh/llama2_70b_chat_uncensored;Q6_K;70B;3;2;0;3.5;2.25;finetune;Llama-2-70b
KaeriJenti/kaori-70b-v1;Q6_K;70B;3;1;0;3;2;finetune;Llama-2-70b
NousResearch/Nous-Puffin-70B;Q6_K;70B;3;2;0;1.75;0;finetune;Llama-2-70b
Sao10K/WinterGoddess-1.4x-70B-L2;Q6_K;70B;2;1.5;3;2.75;5.5;finetune+merge;Llama-2-70b
lizpreciatior/lzlv_70b_fp16_hf;Q6_K;70B;3;2;0;5.75;4.5;merge;Llama-2-70b
Doctor-Shotgun/mythospice-70b;Q6_K;70B;3;1;0;5.25;0.25;merge;Llama-2-70b
tiiuae/falcon-180B-chat;Q4_K_M;180B;3;0;0;5.25;5.5;finetune;
nsfwthrowitaway69/Venus-120b-v1.0;Q6_K;120B;3;2;0;3.5;4.5;merge;Llama-2-70b
deepseek-ai/deepseek-llm-67b-chat;Q6_K;67B;3;2;0;3.5;1;finetune;
allenai/tulu-2-dpo-70b;Q6_K;70B;0;0;0;4;3;finetune;Llama-2-70b
epfl-llm/meditron-70b;Q6_K;70B;0;2;0;1.25;0;finetune;Llama-2-70b
meta-llama/Llama-2-70b-hf;Q6_K;70B;0;2;0.5;1.25;0;base;Llama-2-70b
ChuckMcSneed/Dicephal-123B;Q6_K;123B;0;0;1;2.25;2.25;base+merge;Llama-2-70b
Yukang/LongAlpaca-70B;Q6_K;70B;0;2;0;5;0.25;finetune;Llama-2-70b
Yukang/Llama-2-70b-longlora-32k;Q6_K;70B;1;2;0;0.25;0;finetune;Llama-2-70b
Qwen/Qwen-72B;Q6_K;72B;0;0;0;1;1.5;base;
Yukang/LongAlpaca-70B-lora;Q6_K;70B;3;0;0;1;0;finetune;Llama-2-70b
ChuckMcSneed/Dicephal-123B+longlora;Q6_K;123B;1;2;1;1;2.25;merge;Llama-2-70b
Xwin-LM/Xwin-LM-70B-V0.1+LongAlpaca-lora;Q6_K;70B;1;1;0;2;0.25;merge;Llama-2-70b
Xwin-LM/Xwin-LM-70B-V0.1+longlora;Q6_K;70B;0;1;3;2.75;0.5;merge;Llama-2-70b
Xwin-LM/Xwin-LM-70B-V0.1+chat-longlora;Q6_K;70B;1.5;0;0;3.25;1.25;merge;Llama-2-70b
grimulkan/aurelian-alpha0.1-70b-rope8-32K-fp16;Q6_K;70B;0;1;0;1.25;2;finetune;Llama-2-70b
ChuckMcSneed/DoubleGold-v0.1-123b-32k;Q6_K;123B;1;2;0;4.5;2.75;merge;Llama-2-70b
Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss;F16;8x7B;0;2;0;4.75;4.75;finetune;Mixtral-8x7B
ChuckMcSneed/WinterGoliath-123b;Q6_K;123B;3;2;2;5.5;6;merge;Llama-2-70b
cognitivecomputations/dolphin-2.2-70b;Q6_K;70B;0;1;1;4.5;4.5;finetune;Llama-2-70b
cognitivecomputations/MegaDolphin-120b;Q6_K;120B;0;1;1;5.75;5.25;merge;Llama-2-70b
ChuckMcSneed/PMaxxxer-v1-70b;Q6_K;70B;3;1;1;6.75;4.75;merge;Llama-2-70b
ChuckMcSneed/SMaxxxer-v1-70b;Q6_K;70B;2;1;0;7.25;4.25;merge;Llama-2-70b
ChuckMcSneed/BenchmaxxxerSP-v1-123b(private);Q6_K;123B;1.5;2;1;6.75;5.25;merge;Llama-2-70b
ChuckMcSneed/BenchmaxxxerPS-v1-123b;Q6_K;123B;3;2;1;7.25;5.25;merge;Llama-2-70b
ChuckMcSneed/BenchmaxxxerMOE-v1-123b(private);Q6_K;2x70B;2;1;3;7.25;5;merge;Llama-2-70b
ChuckMcSneed/BenchmaxxxerSS-v1-123b(private);Q6_K;123B;1.5;2;0.5;6.5;5.5;merge;Llama-2-70b
grimulkan/aurelian-v0.5-70b-rope8-32K-fp16;Q6_K;70B;1;1;0;2.5;2.25;finetune;Llama-2-70b
ChuckMcSneed/DoubleGold-v0.5-123b-32k;Q6_K;123B;1;1;2;3.25;1.5;merge;Llama-2-70b
grimulkan/lzlv-longLORA-70b-rope8-32k-fp16;Q6_K;70B;3;2;0;3.5;3.75;merge;Llama-2-70b
grimulkan/Goliath-longLORA-120b-rope8-32k-fp16;Q6_K;120B;0.5;1.5;1;3.75;4.5;merge;Llama-2-70b
sophosympatheia/Aurora-Nights-70B-v1.0;Q6_K;70B;2.5;2;0;7.25;3.5;merge;Llama-2-70b
sophosympatheia/Aurora-Nights-103B-v1.0;Q6_K;103B;3;2;0;7.25;4.25;merge;Llama-2-70b
XWIN-32k-experimental(equivalent to grimulkan/Xwin-longLORA-70b-rope8-32k-fp16?);Q6_K;70B;0;1.5;1;3.5;3.75;merge;Llama-2-70b
deepnight-research/Saily_220B;Q3_K_L;208B;0;2;1;2.75;1.75;merge;Llama-2-70b
deepnight-research/saily_100b;Q6_K;118B;0;0;0;4.5;5.25;merge;Llama-2-70b
sophosympatheia/Midnight-Rose-70B-v1.0;Q6_K;70B;0;0;0;4;5;merge;Llama-2-70b
abacusai/Smaug-34B-v0.1;F16;34B;0;2;0;1.25;2.5;finetune;
ChuckMcSneed/WinterGoddess-1.4x-70b-32k;Q6_K;70B;2.5;2;0;1.5;2.25;merge;Llama-2-70b
ChuckMcSneed/WinterGoliath-123b-32k;Q6_K;123B;3;2;1;3.75;3.75;merge;Llama-2-70b
alpindale/miquella-120b(old);Q5_K_M;120B;0;0;0;3.25;0.25;merge;miqu
miqudev/miqu-1-70b;Q5_K_M;70B;3;2;0;6.25;3.75;finetune;miqu
ICBU-NPU/FashionGPT-70B-V1.1;Q6_K;70B;0;1.5;0;1.75;4;finetune;Llama-2-70b
ibivibiv/strix-rufipes-70b;Q6_K;70B;0;2;0;1;3.25;finetune;Llama-2-70b
Chat-Error/fiction.live-Kimiko-V2-70B;Q6_K;70B;0;1.5;0;1;2.25;finetune;Llama-2-70b
MayaPH/GodziLLa2-70B;Q6_K;70B;1.5;0;1;1.75;4.75;finetune;Llama-2-70b
WizardLM/WizardLM-70B-V1.0;Q6_K;70B;0;1;0;5.5;5.5;finetune;Llama-2-70b
Mikael110/llama-2-70b-guanaco-qlora;Q6_K;70B;0;2;0;4.75;5;finetune;Llama-2-70b
ChuckMcSneed/Gembo-v1-70b;Q6_K;70B;2.5;1.5;3;7.5;5.25;merge;Llama-2-70b
Qwen/Qwen1.5-72B-Chat;Q5_K_M;72B;0;0;0;6;4.5;finetune;Qwen1.5-72B
abacusai/Smaug-72B-v0.1(Alpaca format);Q5_K_S;72B;0.5;0.5;0.5;0.75;0.5;finetune;Qwen1.5-72B
abacusai/Smaug-72B-v0.1(llama chat format);Q6_K;72B;0;0;0;0.5;2.5;finetune;Qwen1.5-72B
ShinojiResearch/Senku-70B-Full;Q6_K;70B;0;1.5;3;3.75;3.75;finetune;miqu
Undi95/Miqu-70B-Alpaca-DPO;Q6_K;70B;1.5;1;0;6;4.25;finetune;miqu
TeeZee/Kyllene-34B-v1.1;F16;34B;0;2;0;4;2.75;merge;
cloudyu/Mixtral_34Bx2_MoE_60B;Q8_0;2x34B;0;2;0;2.25;2;merge;
TomGrc/FusionNet_7Bx2_MoE_v0.1;F16;2X7B;0;0;1;3.75;2.75;finetune;
ChuckMcSneed/Gembo-v1.1-70b;Q6_K;70B;2.5;1.5;3;6.75;5.25;merge;Llama-2-70b
miqu-123b(personal test merge);Q5_K_M;123B;2;0;1;7.75;4.5;merge;miqu
wolfram/miqu-1-120b;Q5_K_M;120B;2;0;0;8;4.25;merge;miqu
berkeley-nest/Starling-LM-7B-alpha;F16;7B;1.5;0.5;0;2.25;2.5;finetune;
wolfram/miquliz-120b-v2.0;Q5_K_M;120B;1;2;1;7.25;5;merge;miqu
sophosympatheia/Midnight-Rose-70B-v2.0.3;Q6_K;70B;2.5;1.5;0;6.75;4;merge;Llama-2-70b
ValiantLabs/ShiningValiant;Q6_K;70B;2.5;1.5;0;1.5;1.75;finetune;Llama-2-70b
Sao10K/Euryale-1.3-L2-70B+xwin-lora;Q6_K;70B;2;2;1;5.5;5.5;merge;Llama-2-70b
Xwin-LM/Xwin-LM-70B-V0.1+euryale-lora;Q6_K;70B;3;2;2;6;5;merge;Llama-2-70b
ChuckMcSneed/Premerge-XE-EX-123B(private);Q6_K;123B;2;2;2.5;6.75;5.5;merge;Llama-2-70b
ChuckMcSneed/Premerge-EX-XE-123B(private);Q6_K;123B;2;2;2;5.75;6;merge;Llama-2-70b
ChuckMcSneed/Premerge-XE-XE-123B;Q6_K;123B;3;2;2.5;7.25;5.25;merge;Llama-2-70b
ChuckMcSneed/Premerge-EX-EX-123B;Q6_K;123B;2;2;1.5;7.25;6;merge;Llama-2-70b
MaziyarPanahi/WizardLM-Math-70B-v0.1;Q6_K;70B;0;2;0;3.75;3;merge;Llama-2-70b
NousResearch/Nous-Hermes-2-Llama-2-70B;Q6_K;70B;0;2;1;3.5;1.5;finetune;Llama-2-70b
NeverSleep/MiquMaid-v2-70B-DPO;Q6_K;70B;3;1;0;2.75;0.5;finetune;miqu
senseable/WestLake-7B-v2;F16;7B;0;1;1;4;3.75;finetune;
CausalLM/34b-beta;F16;34B;0;2;3;2.75;3.75;finetune;

Since the automatic open-source benchmark leaderboards got flooded with incoherent, overtrained cheater meme models, I decided to take matters into my own hands and create my own set of proprietary tests. The aim of these tests is not to see how smart a model is, but how good it is at executing commands and at creative writing, in a reasonably quantifiable way. All tests are run with temperature ≈ 0, top-p ≈ 0 and repetition penalty = 1 in koboldcpp. The model-appropriate prompt format is used, unless it doesn't work.
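For reference, a single test generation with these settings might look like the sketch below. It assumes koboldcpp's KoboldAI-compatible /api/v1/generate endpoint on the default local port; the prompt template and max_length are placeholders, not part of the actual test set.

```python
# A minimal sketch of sending one test prompt to a local koboldcpp instance with
# the settings above (temperature and top-p near 0, repetition penalty 1).
# Field names follow koboldcpp's KoboldAI-compatible API; URL, port, prompt
# template and max_length are illustrative assumptions.
import requests

payload = {
    "prompt": "### Instruction:\n<test prompt goes here>\n\n### Response:\n",
    "temperature": 0.01,  # ~0: near-greedy decoding for reproducibility
    "top_p": 0.01,        # ~0: keep only the most probable tokens
    "rep_pen": 1.0,       # repetition penalty effectively disabled
    "max_length": 512,    # arbitrary generation cap
}

resp = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=600)
print(resp.json()["results"][0]["text"])
```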

Currently I have the following tests:

B-test:

This test is designed to establish a baseline for the model. It consists of a main task and a large amount of distracting text, which the model has to ignore while still executing the task. If the model immediately refuses or fails to comply in a logical way, it fails (0/3). After the initial request, the model gets bombarded with text; it earns 1 point for reaching the first checkpoint (1/3), another point for passing the test fully (2/3), and a final point for exiting the test successfully (3/3).

C-test:

Like the B-test, but the task is simpler and the distracting text is far more annoying. Since the task is much simpler, there are fewer points to gain: the model gets 1 point for getting past the main distractions and another point for successfully exiting the task. The model is penalized for writing more than necessary, e.g. "(Note: as an AI language model...)".

D-test:

This test is designed around breaking expectations. It consists of a common math trick, but with a twist: there is no math involved, just reading. It also has an extensive section at the end to guide the model into breaking its overtrained conditioning. A model gets 1 point for getting the answer right and up to 2 points for the right reasoning.

P-test:

Poems. The model passes each poem test by writing coherently and in rhyme. 1 point for each poem, 6 in total.

After seeing Miqu-120b succeed at positive writing and fail miserably at negative writing, I decided to revise the test slightly by adjusting the ratios. Assume that all models up to and including Miqu-120b were run on the old set, and newer ones will be run on the revised set.

S-test:

Stylized writing. Models are asked to explain a concept in a distinct writing style or as if they were a character. Up to 1 point for each style; models are penalized for failing to explain the concept or to keep the style all the way through the explanation. 8 in total. Note: not very reliable due to a large human factor (±1). Take with a grain of salt.

What does each of the tests measure I dont understand111!!!11!

BCD = following commands

SP = creative writing
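Put together, the maximums implied by the test descriptions above are B=3, C=2, D=3, S=8 and P=6, i.e. 22 points in total (which matches the 19.75/22 figure mentioned further down). A small sketch of how the subtotals could be aggregated, using values from the table above:

```python
# Per-test maximums as implied by the descriptions above; total is 22.
MAX_SCORES = {"B": 3, "C": 2, "D": 3, "S": 8, "P": 6}

def subtotals(scores: dict) -> dict:
    """Aggregate one model's scores into the two groups used in this card."""
    bcd = scores["B"] + scores["C"] + scores["D"]  # following commands, max 8
    sp = scores["S"] + scores["P"]                 # creative writing, max 14
    return {"BCD": bcd, "SP": sp, "total": bcd + sp}

# Example: goliath-120b from the table above (B=3, C=2, D=1, S=6, P=6).
print(subtotals({"B": 3, "C": 2, "D": 1, "S": 6, "P": 6}))  # {'BCD': 6, 'SP': 12, 'total': 18}
```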

RESULTS

The table above shows the results visualized; you can find the raw data in the file LLM-test.csv.
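If you want to work with the raw data yourself, something like the sketch below should do, assuming LLM-test.csv uses the same semicolon-separated layout and column names as the table at the top of this card:

```python
# Load the raw results and rank models by combined score. Assumes LLM-test.csv
# has the same semicolon-separated columns as shown above:
# Name;Quant;Size;B;C;D;S;P;Type;Family(incomplete)
import pandas as pd

df = pd.read_csv("LLM-test.csv", sep=";")
df["total"] = df[["B", "C", "D", "S", "P"]].sum(axis=1)  # max possible: 22
print(df.sort_values("total", ascending=False)[["Name", "Quant", "Size", "total"]].head(10))
```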

What they show is quite interesting:

  • If a model can't pass any of the BCD tests, it is most likely braindead or very filtered (kinda same lol)
  • If a model's SP score is very low, its writing style is dry
  • Creative parent (Euryale) + creative parent (Xwin) = creative child (Goliath)
  • Creative parent (Euryale) + dry parent (Nous-Hermes) + drier parent (SynthIA) = dry-ish child (Venus)
  • Dry parent (Nous-Hermes) + creative parent (Xwin) + creative parent (Mythospice) = creative child (lzlv)
  • The cheater meme model (una-cybertron) was somewhat creative, but braindead
  • The base model self-merge (Dicephal-123B) increased creativity, but didn't add extra prompt compliance
  • All my attempts to extend the context of Xwin and Llama by using Yukang's loras have led to a drastic decrease in the creativity and coherence of the models :(
  • Miqu is currently the best 32k model according to this benchmark
  • Miqu-120b is the second model, after ChatGPT, to pass the S-test with a 100% score!

More tests?

Feel free to suggest more models for testing by opening a new discussion. Mention the model name, its size, and why you want it tested.

Limitations

  • All tests were only done once.
  • The human factor plays a huge role in the S and P tests. After redoing some of the tests, I noticed ±1 variation for the S-test and ±0.5 variation for the P-test. (Xwin is likely underrated and Spicyboros is likely overrated in the S-test.)
  • Be critical of my own models! Since I have access to the benchmark, I can game it and rig it all I want, and NOBODY can stop me.

Can it be rigged/gamed?

Not sure. I've tried to game it by merging, but didn't succeed. You can check out my first attempt here.

If my questions somehow get leaked and the models are trained on them specifically, then definitely.

Update: I made this RP model while using this benchmark as a guideline for judging right/wrong merges. It has a ridiculously high score: 19.75/22! It's not bad; in fact, it is quite interesting in practice, but still far from ChatGPT (or maybe not, I haven't used it in a while. Maybe they've lobotomized it to hell).
