---
base_model:
- SanjiWatsuki/Kunoichi-7B
- SanjiWatsuki/Kunoichi-DPO-v2-7B
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0
---
# kuno-kunoichi-v1-DPO-v2-SLERP-7B
kuno-kunoichi-v1-DPO-v2-SLERP-7B is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
I'm hoping that the result is more robust against errors thanks to the "denseness" of the merge, as the two models likely implement comparable reasoning at least somewhat differently.
I've performed some testing with ChatML-format prompting using temperature=1.1 and minP=0.03. The model also supports Alpaca-format prompts.
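For illustration, here is a minimal sketch of ChatML-format prompting with those sampler settings, assuming llama-cpp-python and one of the GGUF quants linked below. The model filename is hypothetical, and `min_p` requires a reasonably recent llama-cpp-python build:

```python
from llama_cpp import Llama

# Hypothetical local path to a GGUF quant of this model.
llm = Llama(model_path="kuno-kunoichi-v1-DPO-v2-SLERP-7B.Q5_K_M.gguf")

# ChatML wraps each turn in <|im_start|>role ... <|im_end|> markers.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Write a short haiku about merging models.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

out = llm(
    prompt,
    max_tokens=256,
    temperature=1.1,  # sampler settings mentioned above
    min_p=0.03,
    stop=["<|im_end|>"],
)
print(out["choices"][0]["text"])
```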
Available quants:
* [GGUF-IQ-Imatrix quants helpfully provided by Lewdiculous](https://huggingface.co./Lewdiculous/kuno-kunoichi-v1-DPO-v2-SLERP-7B-GGUF-IQ-Imatrix)
* [GGUF quants](https://huggingface.co./grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B-GGUF)
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
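SLERP (spherical linear interpolation) blends weights along the great-circle arc between the two parent tensors rather than along a straight line, which preserves the magnitude of the blended weights better than plain averaging. Below is a minimal sketch of the idea on a single pair of tensors; it is not mergekit's actual implementation, which additionally handles layer slicing, parameter schedules, and model I/O:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between tensors a and b at fraction t in [0, 1]."""
    # Flatten and normalize to treat each tensor as a point on a hypersphere.
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_norm = a_flat / (a_flat.norm() + eps)
    b_norm = b_flat / (b_flat.norm() + eps)
    # Angle between the two weight directions.
    omega = torch.arccos(torch.clamp(a_norm @ b_norm, -1.0, 1.0))
    if omega.abs() < eps:
        # Nearly parallel: fall back to ordinary linear interpolation.
        return ((1 - t) * a_flat + t * b_flat).reshape(a.shape).to(a.dtype)
    so = torch.sin(omega)
    res = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return res.reshape(a.shape).to(a.dtype)
```

With `t = 0.5`, as in the configuration below, the merged weights sit at the midpoint of the arc between the two parents.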
### Models Merged
The following models were included in the merge:
* [SanjiWatsuki/Kunoichi-7B](https://huggingface.co./SanjiWatsuki/Kunoichi-7B)
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co./SanjiWatsuki/Kunoichi-DPO-v2-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: SanjiWatsuki/Kunoichi-7B
        layer_range: [0, 32]
      - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: SanjiWatsuki/Kunoichi-7B
parameters:
  t:
    - value: 0.5
dtype: float16
```
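With mergekit installed, a configuration like this is typically applied via its `mergekit-yaml` entry point, e.g. `mergekit-yaml config.yaml ./output-model-directory`. Here `t: 0.5` blends the two parents evenly across all 32 layers, with `base_model` determining which parent supplies the tokenizer and configuration.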