---
base_model:
  - jeffmeloy/Qwen2.5-7B-olm-v1.0
  - Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2
  - Aashraf995/Qwen-Evo-7B
  - fblgit/cybertron-v4-qw7B-UNAMGS
  - Qwen/Qwen2.5-7B-Instruct
  - nvidia/AceMath-7B-Instruct
  - jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.5
library_name: transformers
tags:
  - mergekit
  - merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the SCE merge method, with Qwen/Qwen2.5-7B-Instruct as the base model.
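As the config comments below note, SCE retains only the parameter positions that vary most across the donor models (the `select_topk` fraction) before fusing them. The following is a minimal NumPy sketch of that selection step under those assumptions; `sce_select` is a hypothetical helper for illustration, not mergekit's implementation:

```python
import numpy as np

def sce_select(task_vectors, topk=0.1):
    """Keep only the fraction `topk` of parameter positions whose values
    vary most across the task vectors (model minus base deltas); zero the rest.
    Simplified illustration of SCE's selection step, not mergekit's code."""
    stacked = np.stack(task_vectors)       # shape: (n_models, n_params)
    variance = stacked.var(axis=0)         # per-position variance across models
    k = max(1, int(topk * variance.size))  # how many positions to keep
    keep = np.argsort(variance)[-k:]       # indices of the top-k variance positions
    mask = np.zeros(variance.size, dtype=bool)
    mask[keep] = True
    return stacked * mask                  # masked (sparsified) task vectors

# Toy example: three "models", ten parameters each, topk=0.1 keeps 1 position.
rng = np.random.default_rng(0)
deltas = [rng.normal(size=10) for _ in range(3)]
masked = sce_select(deltas, topk=0.1)
print((masked != 0).any(axis=0).sum())  # prints 1: one position survives
```

Raising `select_topk` keeps more positions and makes the merge denser; the `0.1` used here mirrors the value in the configuration below.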

### Models Merged

The following models were included in the merge:

- Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2
- Aashraf995/Qwen-Evo-7B
- nvidia/AceMath-7B-Instruct
- fblgit/cybertron-v4-qw7B-UNAMGS
- jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.5
- jeffmeloy/Qwen2.5-7B-olm-v1.0

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2 # Best for Benchmark 1
    parameters:
      density: 0.167
      weight: 0.167
  - model: Aashraf995/Qwen-Evo-7B  # Best for Benchmark 2
    parameters:
      density: 0.167
      weight: 0.167
  - model: nvidia/AceMath-7B-Instruct # Best for Benchmark 3
    parameters:
      density: 0.167
      weight: 0.167
  - model: fblgit/cybertron-v4-qw7B-UNAMGS  # Best for Benchmark 4
    parameters:
      density: 0.167
      weight: 0.167
  - model: jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.5  # Best for Benchmark 5
    parameters:
      density: 0.167
      weight: 0.167
  - model: jeffmeloy/Qwen2.5-7B-olm-v1.0 # Best for Benchmark 6
    parameters:
      density: 0.167
      weight: 0.167

merge_method: sce
base_model: Qwen/Qwen2.5-7B-Instruct  # Replace if using a different base model
parameters:
  normalize: false
  int8_mask: true
  select_topk: 0.1  # Retains top 10% highest variance elements (adjust for better results)
dtype: bfloat16
allow_crimes: true
```
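To reproduce the merge, the configuration above can be saved to a file (here `config.yaml`, a name chosen for illustration) and passed to mergekit's `mergekit-yaml` CLI. A sketch, assuming mergekit is installed and the listed models are accessible on the Hugging Face Hub:

```shell
pip install mergekit
# Runs the SCE merge; downloads every model referenced in the config.
mergekit-yaml config.yaml ./merged-model --cuda
```

The `--cuda` flag offloads the merge computation to a GPU; omit it to merge on CPU.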