---
base_model: grimjim/fireblossom-32K-7B
library_name: transformers
quantized_by: grimjim
pipeline_tag: text-generation
license: cc-by-nc-4.0
---
# Fireblossom-32K-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
For this merge, I went back to Mistral 7B v0.1 as the literal base model for task arithmetic; that model can be pushed to at least 16K context length after adjusting rope theta from 10K to 100K. With the true base model in place, the deltas contributed by the merged models should be mathematically equivalent to LoRA adapters. I kept the original 32K context length claimed by Mistral 7B v0.1.
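As a rough sketch of the rope theta adjustment mentioned above, the snippet below shows one way to raise `rope_theta` at load time with the `transformers` library. The repo name is the full-weights repo listed below; the specific values and loading options are illustrative assumptions, not part of the shipped configuration.
```python
# Sketch: raise rope theta from 10K to 100K when loading, to stretch usable context.
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

repo = "grimjim/fireblossom-32K-7B"  # full-weights repo (see download options below)

config = AutoConfig.from_pretrained(repo)
config.rope_theta = 100000.0             # Mistral 7B v0.1 default is 10000.0
config.max_position_embeddings = 32768   # keep the claimed 32K window

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, config=config, torch_dtype="auto")
```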
The goal was a merged model with more varied outputs, which inherently trades some accuracy for creativity. To this end, I chose a model trained to be strong at narrative roleplay (cgato's work) along with three models that are good at reasoning (fine-tunes by HuggingFaceH4 and SanjiWatsuki). The result appears to be good at following character card instructions, perhaps to a fault.
Sampler settings: Tested lightly with temperature=0.7 and minP=0.01. For greater creativity, boost temperature.
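A minimal generation sketch with those sampler settings, assuming the `transformers` `generate` API (recent versions support `min_p`); the prompt and token budget are placeholders:
```python
# Sketch: sample with the lightly tested settings above (temperature=0.7, minP=0.01).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "grimjim/fireblossom-32K-7B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

inputs = tokenizer("Write a short scene set in a rain-soaked city.", return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.7,   # raise for greater creativity
    min_p=0.01,
    max_new_tokens=256,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```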
Download options:
* [full weights](https://huggingface.co./grimjim/fireblossom-32K-7B)
* [Q8_0 GGUF](https://huggingface.co./grimjim/fireblossom-32K-7B-GGUF)
* [8.0bpw h8 exl2](https://huggingface.co./grimjim/fireblossom-32K-7B-8.0bpw_h8_exl2)
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co./mistralai/Mistral-7B-v0.1) as a base.
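Conceptually, task arithmetic adds the weighted parameter deltas ("task vectors") of each fine-tune, measured relative to the base model, back onto the base. The sketch below illustrates that idea in plain PyTorch; it is not mergekit's actual implementation.
```python
# Conceptual sketch of task arithmetic: merged = base + sum(w_i * (model_i - base)).
import torch

def task_arithmetic_merge(base_state, finetuned_states, weights):
    """base_state: dict of tensors; finetuned_states: list of dicts; weights: list of floats."""
    merged = {}
    for name, base_param in base_state.items():
        delta = torch.zeros_like(base_param)
        for state, w in zip(finetuned_states, weights):
            delta += w * (state[name] - base_param)  # weighted task vector
        merged[name] = base_param + delta
    return merged
```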
### Models Merged
The following models were included in the merge:
* [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co./HuggingFaceH4/zephyr-7b-beta)
* [cgato/TheSpice-7b-v0.1.1](https://huggingface.co./cgato/TheSpice-7b-v0.1.1)
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co./SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [SanjiWatsuki/Kunoichi-7B](https://huggingface.co./SanjiWatsuki/Kunoichi-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    # no parameters necessary for base model
  - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    parameters:
      weight: 0.45
  - model: cgato/TheSpice-7b-v0.1.1
    parameters:
      weight: 0.05
  - model: HuggingFaceH4/zephyr-7b-beta
    parameters:
      weight: 0.05
  - model: SanjiWatsuki/Kunoichi-7B
    parameters:
      weight: 0.45
merge_method: task_arithmetic
base_model: mistralai/Mistral-7B-v0.1
dtype: float16
```
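To reproduce the merge, a configuration like the above can be saved (e.g., as `config.yaml`) and passed to mergekit's `mergekit-yaml` command line tool along with an output directory; exact flags may vary by mergekit version, so consult the mergekit README.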