---
license: cc-by-nc-4.0
library_name: transformers
tags:
  - mergekit
  - merge
base_model:
  - WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
  - mlabonne/NeuralDaredevil-8B-abliterated
pipeline_tag: text-generation
model-index:
  - name: llama-3-Nephilim-v1-8B
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: IFEval (0-Shot)
          type: HuggingFaceH4/ifeval
          args:
            num_few_shot: 0
        metrics:
          - type: inst_level_strict_acc and prompt_level_strict_acc
            value: 42.77
            name: strict accuracy
        source:
          url: >-
            https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=grimjim/llama-3-Nephilim-v1-8B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: BBH (3-Shot)
          type: BBH
          args:
            num_few_shot: 3
        metrics:
          - type: acc_norm
            value: 29.91
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=grimjim/llama-3-Nephilim-v1-8B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MATH Lvl 5 (4-Shot)
          type: hendrycks/competition_math
          args:
            num_few_shot: 4
        metrics:
          - type: exact_match
            value: 8.16
            name: exact match
        source:
          url: >-
            https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=grimjim/llama-3-Nephilim-v1-8B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GPQA (0-shot)
          type: Idavidrein/gpqa
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 6.94
            name: acc_norm
        source:
          url: >-
            https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=grimjim/llama-3-Nephilim-v1-8B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MuSR (0-shot)
          type: TAUR-Lab/MuSR
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 10.64
            name: acc_norm
        source:
          url: >-
            https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=grimjim/llama-3-Nephilim-v1-8B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU-PRO (5-shot)
          type: TIGER-Lab/MMLU-Pro
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 31.06
            name: accuracy
        source:
          url: >-
            https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=grimjim/llama-3-Nephilim-v1-8B
          name: Open LLM Leaderboard
---

llama-3-Nephilim-v1-8B

This is a merge of pre-trained language models created using mergekit.

Here we experiment with a SLERP merge in which the second model is given a very low weight (t=0.001) in order to subtly modulate the output of the base model.
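
Below is a minimal, illustrative sketch of per-tensor SLERP in NumPy to show why t=0.001 keeps the result very close to the base model. This is not mergekit's actual implementation; the helper name and the toy tensors are hypothetical.

```python
# Minimal sketch of spherical linear interpolation (SLERP) between two weight tensors.
# With t=0.001 the merged tensor stays almost entirely on the base-model side.
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Interpolate from v0 (t=0) toward v1 (t=1) along the great-circle path."""
    a, b = v0.ravel(), v1.ravel()
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    if theta < eps:  # nearly parallel vectors: fall back to linear interpolation
        return (1.0 - t) * v0 + t * v1
    s = np.sin(theta)
    return (np.sin((1.0 - t) * theta) / s) * v0 + (np.sin(t * theta) / s) * v1

base = np.random.randn(1024, 1024).astype(np.float32)   # stand-in for a base-model tensor
other = np.random.randn(1024, 1024).astype(np.float32)  # stand-in for the second model's tensor
merged = slerp(0.001, base, other)                       # remains very close to `base`
```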

The base model was assembled to achieve high MMLU while avoiding refusals, whereas the additional model was trained specifically (apparently as a copilot) for offensive and defensive cybersecurity. Though neither model targeted roleplay as a use case, the merge's resulting intelligence, acuity, and text generation are of interest. The merge is aggressively creative, within bounds.

Tested with temperature in the range 1.0-1.2 and minP=0.01, along with a custom Instruct prompt, Llama 3 Instruct Direct, geared toward reducing refusals during roleplay text generation without compromising overall model safety.
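
A minimal inference sketch with Hugging Face transformers using the sampling settings above; it assumes a transformers version recent enough to support min_p sampling, and the prompt is a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "grimjim/llama-3-Nephilim-v1-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a chat-formatted prompt with the model's own chat template.
messages = [{"role": "user", "content": "Describe the ruins at dawn."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.1,  # within the tested 1.0-1.2 range
    min_p=0.01,       # requires a transformers release with min_p support
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```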

Care should be taken when using this model, as harmful outputs may still be generated. Given that this model is derivative, responsible use is further mandated by the WhiteRabbitNeo Usage Restrictions Extension to the Llama-3 License. This model is further subject to CC-BY-NC-4.0 by default, meaning that commercial use is restricted, barring an alternative licensing agreement.

Built with Meta Llama 3.

WhiteRabbitNeo Extension to Llama-3 License: Usage Restrictions

You agree not to use the Model or Derivatives of the Model:

- In any way that violates any applicable national or international law or regulation or infringes upon the lawful rights and interests of any third party;
- For military use in any way;
- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
- To generate or disseminate verifiably false information and/or content with the purpose of harming others;
- To generate or disseminate inappropriate content subject to applicable regulatory requirements;
- To generate or disseminate personal identifiable information without due authorization or for unreasonable use;
- To defame, disparage or otherwise harass others;
- For fully automated decision making that adversely impacts an individual's legal rights or otherwise creates or modifies a binding, enforceable obligation;
- For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;
- To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
- For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories.

Merge Details

Merge Method

This model was merged using the SLERP merge method.

Models Merged

The following models were included in the merge:

- mlabonne/NeuralDaredevil-8B-abliterated (base)
- WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0

Configuration

The following YAML configuration was used to produce this model:

slices:
  - sources:
    - model: mlabonne/NeuralDaredevil-8B-abliterated
      layer_range: [0,32]
    - model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
      layer_range: [0,32]
merge_method: slerp
base_model: mlabonne/NeuralDaredevil-8B-abliterated
parameters:
  t:
    - value: 0.001
dtype: bfloat16
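
Assuming mergekit is installed and the YAML above is saved as config.yaml, a minimal reproduction sketch follows; the output path is arbitrary, and optional flags (e.g. for GPU use) are omitted.

```python
# Minimal sketch: run mergekit's command-line entry point on the configuration above.
# Assumes `pip install mergekit` and a local config.yaml containing the YAML shown.
import subprocess

subprocess.run(
    ["mergekit-yaml", "config.yaml", "./llama-3-Nephilim-v1-8B"],
    check=True,
)
```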

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 21.58 |
| IFEval (0-Shot)     | 42.77 |
| BBH (3-Shot)        | 29.91 |
| MATH Lvl 5 (4-Shot) |  8.16 |
| GPQA (0-shot)       |  6.94 |
| MuSR (0-shot)       | 10.64 |
| MMLU-PRO (5-shot)   | 31.06 |