---
base_model:
- alpindale/Mistral-7B-v0.2-hf
- cgato/Thespis-7b-v0.5-SFTTest-2Epoch
- NeverSleep/Noromaid-7B-0.4-DPO
- NurtureAI/neural-chat-7b-v3-1-16k
- cgato/Thespis-CurtainCall-7b-v0.2.2
- tavtav/eros-7b-test
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-4.0
---
# lemonade-rebase-32k-7B
This is a rebase merge that applies the recipe from KatyTheCutie/LemonadeRP-4.5.3 to the Mistral v0.2 7B base (instead of v0.1), giving a 32K context length (the 4K sliding window is removed) with RoPE theta set to 100K. No other changes were made.
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
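
As a sanity check on the context changes described above, the relevant fields should be visible in the merged model's configuration. A minimal sketch using transformers; the repo id here is a placeholder, not the actual published path:

```python
from transformers import AutoConfig

# Placeholder repo id for illustration; substitute the real repository path.
config = AutoConfig.from_pretrained("your-namespace/lemonade-rebase-32k-7B")

print(config.max_position_embeddings)  # expected: 32768 (32K context)
print(config.rope_theta)               # expected: 100000.0
print(config.sliding_window)           # expected: None (no 4K sliding window)
```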
## Merge Details

### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with [alpindale/Mistral-7B-v0.2-hf](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf) as the base.
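
Task arithmetic forms the merged weights as the base plus a weighted sum of each fine-tune's task vector (its delta from the base), i.e. `merged = base + Σ_i w_i * (model_i - base)`. A minimal per-tensor sketch of the idea, not mergekit's actual implementation (which also handles shard loading, dtype casting, and tokenizer reconciliation):

```python
import torch

def task_arithmetic_merge(base: torch.Tensor,
                          finetunes: list[torch.Tensor],
                          weights: list[float]) -> torch.Tensor:
    """Merge one weight tensor: base + sum of weighted task vectors."""
    merged = base.clone()
    for ft, w in zip(finetunes, weights):
        merged += w * (ft - base)  # task vector = fine-tune minus base
    return merged
```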
### Models Merged
The following models were included in the merge:
- cgato/Thespis-7b-v0.5-SFTTest-2Epoch
- NeverSleep/Noromaid-7B-0.4-DPO
- NurtureAI/neural-chat-7b-v3-1-16k
- cgato/Thespis-CurtainCall-7b-v0.2.2
- tavtav/eros-7b-test
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: alpindale/Mistral-7B-v0.2-hf
dtype: float16
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 32]
    model: alpindale/Mistral-7B-v0.2-hf
  - layer_range: [0, 32]
    model: NeverSleep/Noromaid-7B-0.4-DPO
    parameters:
      weight: 0.37
  - layer_range: [0, 32]
    model: cgato/Thespis-CurtainCall-7b-v0.2.2
    parameters:
      weight: 0.32
  - layer_range: [0, 32]
    model: NurtureAI/neural-chat-7b-v3-1-16k
    parameters:
      weight: 0.15
  - layer_range: [0, 32]
    model: cgato/Thespis-7b-v0.5-SFTTest-2Epoch
    parameters:
      weight: 0.38
  - layer_range: [0, 32]
    model: tavtav/eros-7b-test
    parameters:
      weight: 0.18
```
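
To reproduce the merge, save the configuration above as `config.yaml` and, with mergekit installed, run `mergekit-yaml config.yaml ./output-model-directory` (the output path is illustrative).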