---
language:
  - en
license: cc-by-nc-4.0
library_name: transformers
tags:
  - mergekit
  - merge
base_model:
  - Epiculous/Fett-uccine-7B
  - eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
  - OpenPipe/mistral-ft-optimized-1227
  - ChaoticNeutrals/Eris_7B
pipeline_tag: text-generation
model-index:
  - name: Fett-Eris-Mix-7B
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 68.77
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Fett-Eris-Mix-7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 87.33
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Fett-Eris-Mix-7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 63.65
            name: accuracy
        source:
          url: >-
            https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Fett-Eris-Mix-7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 71.91
        source:
          url: >-
            https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Fett-Eris-Mix-7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 80.82
            name: accuracy
        source:
          url: >-
            https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Fett-Eris-Mix-7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 57.47
            name: accuracy
        source:
          url: >-
            https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Fett-Eris-Mix-7B
          name: Open LLM Leaderboard
---

# merge

This is a merge of pre-trained language models created using mergekit.

## Merge Details

- This model is an attempt at making a smart RP model with the finesse of Epiculous/Fett-uccine-7B.
- From limited testing, I've found it to be my favourite of my personal 7B models. It stays fairly coherent at 8K+ context.
- I like to use the "Alpaca" format with "Universal-Light" for longer messages (see the usage sketch below). Switching to ChatML makes the messages much shorter; I'm not sure why, but sometimes that's useful.
- It hasn't shown many issues so far, but I'd be willing to try to fix any problems or bugs, as the model shows some potential.
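
For reference, a minimal usage sketch with transformers, prompting in the generic Alpaca layout. The instruction text, sampling settings, and prompt wording here are illustrative, not a template shipped with this repository:

```python
# Minimal sketch: load the merged model and prompt it in Alpaca format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "saishf/Fett-Eris-Mix-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Generic Alpaca prompt layout; the instruction below is just an example.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n"
    "Write a short scene introducing a sarcastic innkeeper.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```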

### Merge Method

This model was merged using the DARE TIES merge method using OpenPipe/mistral-ft-optimized-1227 as a base.

### Models Merged

The following models were included in the merge:

- Epiculous/Fett-uccine-7B
- eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
- ChaoticNeutrals/Eris_7B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: OpenPipe/mistral-ft-optimized-1227
    # No parameters necessary for base model
  - model: Epiculous/Fett-uccine-7B
    parameters:
      density: 0.53
      weight: 0.4
  - model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
    parameters:
      density: 0.53
      weight: 0.35
  - model: ChaoticNeutrals/Eris_7B
    parameters:
      density: 0.53
      weight: 0.25
merge_method: dare_ties
base_model: OpenPipe/mistral-ft-optimized-1227
parameters:
  int8_mask: true
dtype: bfloat16
```
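
To reproduce the merge, this config can be handed to mergekit. A rough sketch using mergekit's Python API (the `run_merge` / `MergeOptions` entry points follow mergekit's documented example; the config filename and output path below are placeholders):

```python
# Sketch: run the merge config above with mergekit (pip install mergekit).
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Assumes the YAML above was saved as "fett-eris-mix.yaml" (illustrative name).
with open("fett-eris-mix.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./Fett-Eris-Mix-7B",                # output directory (placeholder)
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer to the output
    ),
)
```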

# Open LLM Leaderboard Evaluation Results

Detailed results can be found on the [Open LLM Leaderboard](https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Fett-Eris-Mix-7B).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 71.66 |
| AI2 Reasoning Challenge (25-Shot) | 68.77 |
| HellaSwag (10-Shot)               | 87.33 |
| MMLU (5-Shot)                     | 63.65 |
| TruthfulQA (0-shot)               | 71.91 |
| Winogrande (5-shot)               | 80.82 |
| GSM8k (5-shot)                    | 57.47 |
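
These numbers come from EleutherAI's lm-evaluation-harness as run by the leaderboard. For a rough local check of a single task, something like the sketch below should work with lm-eval >= 0.4 (`pip install lm-eval`); the leaderboard pins a specific harness revision, so local scores may differ slightly:

```python
# Sketch: score ARC-Challenge (25-shot, acc_norm) locally with lm-eval.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=saishf/Fett-Eris-Mix-7B,dtype=bfloat16",
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size=8,
)
print(results["results"]["arc_challenge"])  # contains acc_norm, the metric reported above
```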