---
base_model:
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- SanjiWatsuki/Kunoichi-7B
library_name: transformers
tags:
- mergekit
- merge
---
# franken-kunoichi-IDUS-11B
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
The Interwoven Depth Up-Scaling merge formula was adapted from Sanji Watsuki's longcat-10.7B.
I consider this a negative result, though perhaps an interesting one. I tested casually at temperature 0.7-1.2 and min-p 0.01-0.03, with both Alpaca and ChatML prompts. The generated text is interesting for RP and mostly grammatically correct, but it veers into chaos too easily and has difficulty tracking details. Given that, the inherited 8K context length is of dubious benefit.
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [SanjiWatsuki/Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
  - model: SanjiWatsuki/Kunoichi-7B
    layer_range: [0, 8]
- sources:
  - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    layer_range: [8, 9]
- sources:
  - model: SanjiWatsuki/Kunoichi-7B
    layer_range: [8, 9]
- sources:
  - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    layer_range: [9, 10]
- sources:
  - model: SanjiWatsuki/Kunoichi-7B
    layer_range: [9, 10]
- sources:
  - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    layer_range: [10, 11]
- sources:
  - model: SanjiWatsuki/Kunoichi-7B
    layer_range: [10, 11]
- sources:
  - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    layer_range: [11, 12]
- sources:
  - model: SanjiWatsuki/Kunoichi-7B
    layer_range: [11, 12]
- sources:
  - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    layer_range: [12, 13]
- sources:
  - model: SanjiWatsuki/Kunoichi-7B
    layer_range: [12, 13]
- sources:
  - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    layer_range: [13, 14]
- sources:
  - model: SanjiWatsuki/Kunoichi-7B
    layer_range: [13, 14]
- sources:
  - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    layer_range: [14, 15]
- sources:
  - model: SanjiWatsuki/Kunoichi-7B
    layer_range: [14, 15]
- sources:
  - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    layer_range: [15, 16]
- sources:
  - model: SanjiWatsuki/Kunoichi-7B
    layer_range: [15, 16]
- sources:
  - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    layer_range: [16, 17]
- sources:
  - model: SanjiWatsuki/Kunoichi-7B
    layer_range: [16, 17]
- sources:
  - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    layer_range: [17, 18]
- sources:
  - model: SanjiWatsuki/Kunoichi-7B
    layer_range: [17, 18]
- sources:
  - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    layer_range: [18, 19]
- sources:
  - model: SanjiWatsuki/Kunoichi-7B
    layer_range: [18, 19]
- sources:
  - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    layer_range: [19, 20]
- sources:
  - model: SanjiWatsuki/Kunoichi-7B
    layer_range: [19, 20]
- sources:
  - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    layer_range: [20, 21]
- sources:
  - model: SanjiWatsuki/Kunoichi-7B
    layer_range: [20, 21]
- sources:
  - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    layer_range: [21, 22]
- sources:
  - model: SanjiWatsuki/Kunoichi-7B
    layer_range: [21, 22]
- sources:
  - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    layer_range: [22, 23]
- sources:
  - model: SanjiWatsuki/Kunoichi-7B
    layer_range: [22, 23]
- sources:
  - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    layer_range: [23, 24]
- sources:
  - model: SanjiWatsuki/Kunoichi-7B
    layer_range: [23, 24]
- sources:
  - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    layer_range: [24, 32]
merge_method: passthrough
dtype: float16
```
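The interleaved slicing above is easier to sanity-check in code. This short Python sketch (slice ranges transcribed from the YAML, not read from mergekit) tallies the depth of the merged model:

```python
# Tally the transformer depth produced by the interleaved passthrough config.
# Each parent is a 32-layer Mistral-7B derivative; the passthrough method
# simply stacks the listed layer ranges in order.

# First slice: layers 0-7 of Kunoichi-7B.
slices = [("SanjiWatsuki/Kunoichi-7B", 0, 8)]

# Middle: layers 8-23 interwoven one at a time, DPO-v2 first, then base.
for i in range(8, 24):
    slices.append(("SanjiWatsuki/Kunoichi-DPO-v2-7B", i, i + 1))
    slices.append(("SanjiWatsuki/Kunoichi-7B", i, i + 1))

# Final slice: layers 24-31 of Kunoichi-DPO-v2-7B.
slices.append(("SanjiWatsuki/Kunoichi-DPO-v2-7B", 24, 32))

total_layers = sum(end - start for _, start, end in slices)
print(total_layers)  # 48: 8 lead-in + 16 interwoven pairs + 8 tail
```

At 48 Mistral-style layers versus 32 in each parent, the merge lands at roughly 11B parameters, consistent with the model name.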