# merge

This model (marcuscedricridia/cursorr-o1.2-7b) is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview as the base model.
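DARE TIES combines two ideas: DARE randomly drops most of each fine-tune's task vector (its delta from the base model) and rescales the surviving entries, while TIES resolves sign conflicts between models before summing the deltas back onto the base. With this configuration, each contributing model keeps only `density: 0.2` of its delta, applied at `weight: 0.2`. A toy pure-Python sketch of the idea (an illustration, not mergekit's implementation):

```python
import random

def dare_prune(delta, density, rng):
    """DARE (sketch): drop each task-vector entry with probability (1 - density),
    rescaling survivors by 1/density so the expected update is unchanged."""
    return [d / density if rng.random() < density else 0.0 for d in delta]

def ties_merge(base, deltas, weights):
    """TIES (sketch): per parameter, elect the dominant sign of the weighted
    deltas, then sum only the contributions that agree with that sign."""
    merged = []
    for i, b in enumerate(base):
        contribs = [w * d[i] for w, d in zip(weights, deltas)]
        total = sum(contribs)
        sign = (total > 0) - (total < 0)  # -1, 0, or +1
        merged.append(b + sum(c for c in contribs if c * sign > 0))
    return merged

rng = random.Random(0)
pruned = dare_prune([1.0] * 1000, density=0.2, rng=rng)
# roughly 20% of entries survive, each rescaled to 5.0
merged = ties_merge([0.0], deltas=[[1.0], [1.0], [-1.0]], weights=[1.0, 1.0, 1.0])
# the two positive deltas win the sign election; the -1.0 delta is discarded
```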

### Models Merged

The following models were included in the merge:

* Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2
* Aashraf995/Qwen-Evo-7B
* nvidia/AceMath-7B-Instruct
* Krystalan/DRT-o1-7B
* jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.0
* jeffmeloy/Qwen2.5-7B-olm-v1.0

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview
  - model: Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2  # Best for Benchmark 1 (emphasized)
    parameters:
      density: 0.2
      weight: 0.2  # Increased weight for more influence
  - model: Aashraf995/Qwen-Evo-7B  # Best for Benchmark 2
    parameters:
      density: 0.2
      weight: 0.2
  - model: nvidia/AceMath-7B-Instruct  # Best for Benchmark 3 (math focus)
    parameters:
      density: 0.2
      weight: 0.2  # Increased weight for better math performance
  - model: Krystalan/DRT-o1-7B  # Best for Benchmark 4
    parameters:
      density: 0.2
      weight: 0.2
  - model: jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.0  # Best for Benchmark 5
    parameters:
      density: 0.2
      weight: 0.2
  - model: jeffmeloy/Qwen2.5-7B-olm-v1.0  # Best for Benchmark 6
    parameters:
      density: 0.2
      weight: 0.2

merge_method: dare_ties
base_model: Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview  # Replace if using a different base model
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
allow_crimes: true
```
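Note that with `normalize: false`, the six weights of 0.2 are applied as-is (summing to 1.2) rather than being rescaled to sum to 1. To reproduce the merge, the configuration above can be saved to a file and passed to mergekit's `mergekit-yaml` CLI. A minimal sketch, assuming mergekit is installed (`pip install mergekit`); the file names here are illustrative:

```python
# Build the mergekit-yaml invocation for the configuration above.
import subprocess
from pathlib import Path

config_path = Path("cursorr-merge.yml")  # hypothetical file holding the YAML above
output_dir = Path("merged-model")        # where the merged weights will be written

cmd = [
    "mergekit-yaml",    # mergekit's merge entry point
    str(config_path),
    str(output_dir),
    "--cuda",           # optional: run the merge on GPU
]
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment once config_path exists on disk
```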