# shadow-clown-7B-dare

shadow-clown-7B-dare is a DARE TIES merge of the following models, created with [mergekit](https://github.com/arcee-ai/mergekit):
- CorticalStack/pastiche-crown-clown-7b-dare-dpo
- CultriX/NeuralTrix-7B-dpo
- CorticalStack/neurotic-crown-clown-7b-ties
See the paper [*Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch*](https://arxiv.org/abs/2311.03099) for more on the method.
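As a rough intuition for what DARE does, the sketch below applies drop-and-rescale to a toy weight tensor: each delta (fine-tuned minus base weight) is dropped with probability `1 - density` and the survivors are rescaled by `1 / density`, so the expected delta is unchanged. This is an illustrative simplification, not mergekit's actual implementation, and the function name is made up here:

```python
import numpy as np

def dare_sketch(base: np.ndarray, finetuned: np.ndarray,
                density: float, seed: int = 0) -> np.ndarray:
    """Toy drop-and-rescale: keep each delta entry with probability `density`."""
    rng = np.random.default_rng(seed)
    delta = finetuned - base                 # the "task vector"
    mask = rng.random(delta.shape) < density # keep ~density of the entries
    # Rescale the surviving deltas so the expected merged weight is preserved.
    return base + np.where(mask, delta / density, 0.0)

base = np.zeros(100_000)
finetuned = np.ones(100_000)
merged = dare_sketch(base, finetuned, density=0.52)
# Each entry is either the base weight or base + delta/0.52,
# and the mean delta stays close to the original 1.0.
```

With `density: 0.52`, as in the configuration below, roughly half of each model's delta parameters survive the drop step before the TIES-style sign election combines them.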
## 🧩 Configuration
```yaml
models:
  - model: yam-peleg/Experiment26-7B
  - model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
    parameters:
      density: 0.52
      weight: 0.4
  - model: CultriX/NeuralTrix-7B-dpo
    parameters:
      density: 0.52
      weight: 0.2
  - model: CorticalStack/neurotic-crown-clown-7b-ties
    parameters:
      density: 0.52
      weight: 0.3
merge_method: dare_ties
base_model: yam-peleg/Experiment26-7B
parameters:
  int8_mask: true
dtype: bfloat16
```
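Assuming the configuration above is saved as `config.yml`, a merge like this can be reproduced with mergekit's command-line entry point (the output path is illustrative):

```shell
pip install mergekit
mergekit-yaml config.yml ./shadow-clown-7B-dare
```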