---
base_model:
- Endevor/EndlessRP-v3-7B
- sanjiwatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- undi95/Toppy-M-7B
- yam-peleg/Experiment30-7B
library_name: transformers
tags:
- mergekit
- merge
- not-for-all-audiences
license: apache-2.0
---
# Pippafeet-11B-0.2
This model is a mix of some of the "best 7B roleplaying LLMs". I selected a few models for "creativity" according to a random benchmark, one roleplaying LLM for "IQ", and finally another LLM (merged in twice) that "excels at general tasks" for its size according to a separate benchmark. My goal was to combine the "most creative" smaller roleplaying LLMs, merge them, and then enhance the result's intelligence by incorporating a "decent general model" twice, along with a "smarter" roleplaying LLM. I don't really trust benchmarks much, but I thought they would at least give it some alignment; even if a component model is overfitted to a dataset to score well, I figured a merge might partially cancel out that overfitting, and luckily that seems to have worked to some extent. This is also an iteration of my previous model, with the merge method being the only change. This iteration seems to be somewhat of a sidegrade rather than a pure upgrade, but for the most part it is more stable and generally better.
In my limited testing, this model performs really well, giving decent replies most of the time... that is, if you ignore the fatal flaws, which are unfortunately inherent to how this model was created. Since it is made by stacking the layers of other models, it likes to constantly invent new words, stutter, and generally act strange; however, if you ignore this and fill in the blanks yourself, the model is quite decent. I plan to try to remove this weirdness with a LoRA if possible, but I am not sure I will be able to, so no promises. If you have the compute to fine-tune this model, I implore you to, because I think it is a promising base.
Edit: A fine-tune is pretty much impossible because ROCm is hot garbage and I should never have bought an AMD GPU; if someone has a functional GPU, please fine-tune it for me. I might be able to do it on CPU somehow, but likely not in FP16, probably in GGUF, and slow as fuck.
Artwork source (please contact me if you would like it removed): https://twitter.com/Kumaartsu/status/1756793643384402070
Note: this model is in no way affiliated with Phase Connect, Pipkin Pippa, or the artist's artwork.
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
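Conceptually, a passthrough merge copies whole transformer layers verbatim from each donor model and concatenates them into a single, deeper stack. The sketch below illustrates the idea using the slice ranges from the configuration in this card (mergekit treats `layer_range` as half-open, `[start, end)`); it is an illustration of the layer bookkeeping, not the actual merge code.

```python
# Sketch: how a passthrough merge stacks layer slices into one deeper model.
# Slice ranges are copied from the config on this card; ranges are half-open
# [start, end), as mergekit interprets them.
slices = [
    ("yam-peleg/Experiment30-7B", 0, 16),
    ("Endevor/EndlessRP-v3-7B", 8, 24),
    ("SanjiWatsuki/Kunoichi-DPO-v2-7B", 17, 24),
    ("undi95/Toppy-M-7B", 20, 28),
    ("sanjiwatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE", 28, 30),
    ("yam-peleg/Experiment30-7B", 29, 32),
]

# Each output layer is taken verbatim from one donor model, in order.
stacked = [(model, layer)
           for model, start, end in slices
           for layer in range(start, end)]
print(len(stacked))  # 52 layers, vs. 32 in a single Mistral-7B
```

Note that some donor layers overlap (e.g. layers 8-15 appear from both Experiment30 and EndlessRP), which is part of why stacked merges like this can produce repetition and made-up words.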
### Models Merged
The following models were included in the merge:
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- sanjiwatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE
- Endevor/EndlessRP-v3-7B
- yam-peleg/Experiment30-7B
- undi95/Toppy-M-7B
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
  - model: yam-peleg/Experiment30-7B
    layer_range: [0, 16]
- sources:
  - model: Endevor/EndlessRP-v3-7B
    layer_range: [8, 24]
- sources:
  - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    layer_range: [17, 24]
- sources:
  - model: undi95/Toppy-M-7B
    layer_range: [20, 28]
- sources:
  - model: sanjiwatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE
    layer_range: [28, 30]
- sources:
  - model: yam-peleg/Experiment30-7B
    layer_range: [29, 32]
```
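The "11B" in the model name can be sanity-checked with a rough back-of-envelope estimate. This sketch assumes standard Mistral-7B dimensions (hidden size 4096, grouped-query attention with 8 KV heads of head dim 128, MLP intermediate size 14336, vocab 32000); these numbers are assumptions about the donor architecture, not values read from the checkpoint, and layer norms are ignored as negligible.

```python
# Rough parameter estimate for the 52-layer passthrough stack.
# All dimensions below are assumed standard Mistral-7B values.
hidden, inter, vocab = 4096, 14336, 32000
kv_dim = 1024  # 8 KV heads x head_dim 128 (grouped-query attention)

attn = 2 * hidden * hidden + 2 * hidden * kv_dim  # q/o + k/v projections
mlp = 3 * hidden * inter                          # gate, up, down projections
per_layer = attn + mlp                            # layer norms ignored

layers = 16 + 16 + 7 + 8 + 2 + 3                  # slice lengths from the config
embed = 2 * vocab * hidden                        # input embeddings + lm_head
total = layers * per_layer + embed
print(f"{total / 1e9:.1f}B parameters")           # roughly 11.6B
```

So 52 stacked Mistral layers land at roughly 11.6B parameters, consistent with the 11B label.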