---
base_model:
  - nvidia/AceMath-7B-Instruct
  - jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.0
  - jeffmeloy/Qwen2.5-7B-olm-v1.0
  - Aashraf995/Qwen-Evo-7B
  - Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2
  - Qwen/Qwen2.5-7B-Instruct
  - Krystalan/DRT-o1-7B
library_name: transformers
tags:
  - mergekit
  - merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the SCE merge method, with Qwen/Qwen2.5-7B-Instruct as the base model.
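SCE works on task vectors, i.e. each donor model's parameter deltas from the base, keeping only the elements whose deltas vary most across donors before fusing them back into the base. The following NumPy sketch illustrates that core idea on a single tensor; the function name and the exact selection/fusion rules are simplified assumptions for illustration, not mergekit's implementation:

```python
import numpy as np

def sce_merge(base, donors, select_topk=0.314):
    """Sketch of SCE-style merging on a single weight tensor."""
    # Task vectors: each donor's delta from the base model.
    deltas = np.stack([d - base for d in donors])  # (n_models, ...)

    # Select: keep only the elements whose deltas vary most across donors.
    variance = deltas.var(axis=0)
    k = max(1, int(select_topk * variance.size))
    threshold = np.sort(variance.ravel())[-k]
    mask = variance >= threshold

    # Calculate: weight each donor by the energy of its surviving delta.
    masked = deltas * mask
    energy = (masked ** 2).sum(axis=tuple(range(1, masked.ndim)))
    weights = energy / energy.sum()

    # Fuse: weighted sum of the masked deltas, added back onto the base.
    fused = np.tensordot(weights, masked, axes=1)
    return base + fused

# Toy usage: three 4x4 "models" scattered around a shared base.
rng = np.random.default_rng(0)
base = rng.normal(size=(4, 4))
donors = [base + rng.normal(scale=0.1, size=(4, 4)) for _ in range(3)]
merged = sce_merge(base, donors, select_topk=0.5)
print(merged.shape)  # (4, 4)
```

With `select_topk=0.5`, half of the 16 elements pass the variance filter; the rest of the merged tensor stays identical to the base.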

### Models Merged

The following models were included in the merge:

- Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2
- Aashraf995/Qwen-Evo-7B
- nvidia/AceMath-7B-Instruct
- Krystalan/DRT-o1-7B
- jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.0
- jeffmeloy/Qwen2.5-7B-olm-v1.0

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2  # Best for Benchmark 1 (emphasized)
    parameters:
      density: 0.2
      weight: 0.25  # Increased weight for more influence
  - model: Aashraf995/Qwen-Evo-7B  # Best for Benchmark 2
    parameters:
      density: 0.15
      weight: 0.125
  - model: nvidia/AceMath-7B-Instruct  # Best for Benchmark 3 (math focus)
    parameters:
      density: 0.2
      weight: 0.25  # Increased weight for better math performance
  - model: Krystalan/DRT-o1-7B  # Best for Benchmark 4
    parameters:
      density: 0.15
      weight: 0.125
  - model: jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.0  # Best for Benchmark 5
    parameters:
      density: 0.15
      weight: 0.125
  - model: jeffmeloy/Qwen2.5-7B-olm-v1.0  # Best for Benchmark 6
    parameters:
      density: 0.15
      weight: 0.125

merge_method: sce
base_model: Qwen/Qwen2.5-7B-Instruct  # Replace if using a different base model
parameters:
  normalize: false
  int8_mask: true
  select_topk: 0.314  # Retains the top ~31% of high-variance elements for better performance in math and key areas
dtype: bfloat16
allow_crimes: true
```
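Since `normalize: false` leaves the listed weights unscaled, they are applied exactly as written; in this config they intentionally sum to 1.0, with the two emphasized models (Josiefied and AceMath) each carrying twice the weight of the others. A quick sanity check in plain Python:

```python
# Per-model weights copied from the mergekit config above.
weights = {
    "Josiefied-Qwen2.5-7B-Instruct-abliterated-v2": 0.25,
    "Qwen-Evo-7B": 0.125,
    "AceMath-7B-Instruct": 0.25,
    "DRT-o1-7B": 0.125,
    "Qwen2.5-7B-nerd-uncensored-v1.0": 0.125,
    "Qwen2.5-7B-olm-v1.0": 0.125,
}

total = sum(weights.values())
print(total)  # 1.0 (0.25 and 0.125 are exact in binary floating point)
```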