---
base_model:
- alpindale/Mistral-7B-v0.2-hf
- mistralai/Mistral-7B-Instruct-v0.2
- SanjiWatsuki/Kunoichi-DPO-v2-7B
library_name: transformers
tags:
- mergekit
- merge
---
This is a 4 bpw (bits per weight) ExLlamaV2 quantization of [mpasila/Kunoichi-DPO-v2-Instruct-32k-7B](https://huggingface.co/mpasila/Kunoichi-DPO-v2-Instruct-32k-7B), made using the default calibration dataset.
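
The quant can be run with the ExLlamaV2 library. Below is a minimal inference sketch modeled on ExLlamaV2's example scripts; the local model directory is a placeholder, and class or argument names may vary slightly between ExLlamaV2 versions.

```python
# Minimal ExLlamaV2 inference sketch. Assumes `pip install exllamav2` and
# that this quant has been downloaded to a local directory (placeholder path).
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./Kunoichi-DPO-v2-Instruct-32k-7B-exl2-4bpw"  # placeholder
config.prepare()
config.max_seq_len = 32768  # the merge targets a 32k context window

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # KV cache sized for max_seq_len
model.load_autosplit(cache)               # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

# Mistral-Instruct style prompt format, since the merge includes the Instruct model.
prompt = "[INST] Summarize the DARE TIES merge method in two sentences. [/INST]"
print(generator.generate_simple(prompt, settings, 256))
```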
## Original Model card:

# Kunoichi-DPO-v2-Instruct-32k-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

This merge should give the Kunoichi-DPO-v2 model a 32k context window, though since it also incorporates the Instruct model, its behavior may change somewhat.

The merge script was copied from [ichigoberry/pandafish-2-7b-32k](https://huggingface.co/ichigoberry/pandafish-2-7b-32k).
## Merge Details
### Merge Method

This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method, with [alpindale/Mistral-7B-v0.2-hf](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf) as the base.
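
DARE TIES sparsifies each fine-tuned model's "delta" from the base (DARE), then combines the surviving deltas with TIES-style sign election. The toy sketch below is illustrative only and simplifies the normalization real implementations use; the `dare` helper and the tensors are hypothetical, while `density` and `weight` values match the config further down.

```python
# Toy illustration of DARE TIES on a single weight tensor (simplified;
# real implementations normalize differently and work per parameter tensor).
import torch

def dare(delta: torch.Tensor, density: float) -> torch.Tensor:
    """DARE: randomly keep each delta element with probability `density`,
    rescaling survivors by 1/density to preserve the expected value."""
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

base = torch.zeros(4)  # stand-in for a base-model weight tensor
deltas = [             # fine-tune minus base, one per merged model
    torch.tensor([0.2, -0.1, 0.4, 0.0]),
    torch.tensor([-0.3, 0.1, 0.2, 0.5]),
]
weights = [0.4, 0.4]  # per-model weights, as in the config below

sparse = [dare(d, density=0.53) for d in deltas]

# TIES-style sign election: per element, keep only deltas whose sign agrees
# with the sign of the weighted sum, then add the result back onto the base.
elected = sum(w * d for w, d in zip(weights, sparse)).sign()
merged = base + sum(w * d * (d.sign() == elected) for w, d in zip(weights, sparse))
print(merged)
```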
### Models Merged

The following models were included in the merge:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
### Configuration

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: alpindale/Mistral-7B-v0.2-hf
    # No parameters necessary for base model
  - model: mistralai/Mistral-7B-Instruct-v0.2
    parameters:
      density: 0.53
      weight: 0.4
  - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    parameters:
      density: 0.53
      weight: 0.4
merge_method: dare_ties
base_model: alpindale/Mistral-7B-v0.2-hf
parameters:
  int8_mask: true
dtype: bfloat16
```
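
To reproduce the merge, the configuration above can be run with mergekit. The following is a sketch of mergekit's Python entry point as shown in its documentation; the config path and output directory are placeholders, and option names may differ between mergekit versions.

```python
# Sketch: run the merge config above with mergekit (`pip install mergekit`).
# Assumes the YAML has been saved to config.yml (placeholder paths).
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Kunoichi-DPO-v2-Instruct-32k-7B",  # merged model is written here
    options=MergeOptions(
        cuda=False,           # set True to run the merge on GPU
        copy_tokenizer=True,  # copy tokenizer files into the output directory
    ),
)
```

The same configuration can also be run from the command line with `mergekit-yaml config.yml ./output-dir`.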