---
license: apache-2.0
model-index:
  - name: 2B_or_not_2B
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: IFEval (0-Shot)
          type: HuggingFaceH4/ifeval
          args:
            num_few_shot: 0
        metrics:
          - type: inst_level_strict_acc and prompt_level_strict_acc
            value: 20.62
            name: strict accuracy
        source:
          url: >-
            https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=SicariusSicariiStuff/2B_or_not_2B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: BBH (3-Shot)
          type: BBH
          args:
            num_few_shot: 3
        metrics:
          - type: acc_norm
            value: 7.68
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=SicariusSicariiStuff/2B_or_not_2B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MATH Lvl 5 (4-Shot)
          type: hendrycks/competition_math
          args:
            num_few_shot: 4
        metrics:
          - type: exact_match
            value: 1.74
            name: exact match
        source:
          url: >-
            https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=SicariusSicariiStuff/2B_or_not_2B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GPQA (0-shot)
          type: Idavidrein/gpqa
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 0
            name: acc_norm
        source:
          url: >-
            https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=SicariusSicariiStuff/2B_or_not_2B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MuSR (0-shot)
          type: TAUR-Lab/MuSR
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 4.85
            name: acc_norm
        source:
          url: >-
            https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=SicariusSicariiStuff/2B_or_not_2B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU-PRO (5-shot)
          type: TIGER-Lab/MMLU-Pro
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 4.43
            name: accuracy
        source:
          url: >-
            https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=SicariusSicariiStuff/2B_or_not_2B
          name: Open LLM Leaderboard
---

# 2B_or_not_2B

2B or not 2B, that's the question!

The model's name is fully credited to invisietch and Shakespeare; without them, this model would not have existed.

Why_Tha_Name_Though

Regarding the question, I am happy to announce that the answer is, in fact, 2B, as stated on the original Google model card for the base model that this one was finetuned from.

If there's one thing we can count on, it is Google telling us what is true and what is misinformation. You should always trust and listen to your elders, and especially to your big brother.

This model was finetuned on a whimsical whim, on my poor laptop. It's not really poor; the GPU is a 4090 with 16GB, but it is driver-locked to 80 watts because nVidia probably does not have the resources to make better drivers for Linux. I hope nVidia manages to recover, as I have seen poor Jensen in the same old black leather jacket for years upon years. The stock is already down about 22% this month (August 11th, 2024).

Finetuning took about 4 hours while the laptop was on my lap and I was talking about books and stuff on Discord. Luckily, the laptop wasn't too hot, as 80 watts is not the 175 watts I was promised, which would surely have been hot enough to make an omelette. Always remain an optimist, fellas!
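If you want to poke at the result yourself, here is a minimal loading sketch using the 🤗 Transformers library. The dtype, device, and generation settings below are illustrative assumptions, not settings taken from this card:

```python
# Minimal sketch: load 2B_or_not_2B with Transformers.
# torch_dtype and device_map are assumptions, not card-specified settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SicariusSicariiStuff/2B_or_not_2B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 keeps a ~2B model well under 16GB of VRAM
    device_map="auto",
)

prompt = "2B or not 2B, that is the question:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

On an 80-watt-locked GPU this will be slower than advertised, but it should still run comfortably.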

2B_or_not_2B is available in the following quants:

Censorship level:

- Low - Very low
- 7.9 / 10 (10 = completely uncensored)

## Support

GPUs too expensive
- My Ko-fi page: ALL donations will go toward research resources and compute; every bit counts 🙏🏻
- My Patreon: ALL donations will go toward research resources and compute; every bit counts 🙏🏻

## Disclaimer

*This model is pretty uncensored; use responsibly.*

## Other stuff

### Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=SicariusSicariiStuff/2B_or_not_2B).

| Metric              | Value |
|---------------------|------:|
| Avg.                |  6.55 |
| IFEval (0-Shot)     | 20.62 |
| BBH (3-Shot)        |  7.68 |
| MATH Lvl 5 (4-Shot) |  1.74 |
| GPQA (0-shot)       |  0.00 |
| MuSR (0-shot)       |  4.85 |
| MMLU-PRO (5-shot)   |  4.43 |
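For reference, the average is just the mean of the six benchmark scores: (20.62 + 7.68 + 1.74 + 0.00 + 4.85 + 4.43) / 6 ≈ 6.55.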