---
base_model:
- grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B
- lucyknada/microsoft_WizardLM-2-7B
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
pipeline_tag: text-generation
---
# wizard-elem-to-32k-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
In theory, context length has been extended to 32K tokens; in practice, output quality degrades above 8K tokens of context.
Tested with ChatML instruct prompts, temperature 1.0, and minP 0.01, but feel free to experiment.
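If you want to try those settings programmatically, here is a minimal inference sketch using Transformers. The repository id is a placeholder for wherever this merge is hosted, and `min_p` sampling requires a reasonably recent transformers release.
```python
# Minimal sketch, assuming the merged weights are hosted on the Hub;
# "grimjim/wizard-elem-to-32k-7B" is a placeholder repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "grimjim/wizard-elem-to-32k-7B"  # placeholder; substitute the actual repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# ChatML-style prompt, matching the instruct format used for testing above.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nSummarize the plot of Hamlet in two sentences.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.0,  # sampling settings from this card
    min_p=0.01,       # requires a recent transformers release
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```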
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B](https://huggingface.co./grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B) as a base.
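As a rough illustration of the idea (not mergekit's actual code): for each parameter tensor, task arithmetic adds a weighted "task vector", the donor model's difference from the base, back onto the base weights.
```python
# Illustrative sketch of task arithmetic on a single parameter tensor
# (simplified; not mergekit's implementation).
import torch

def task_arithmetic(base: torch.Tensor, donor: torch.Tensor, weight: float = 1.0) -> torch.Tensor:
    """Return base + weight * (donor - base)."""
    task_vector = donor - base          # what the donor learned relative to the base
    return base + weight * task_vector  # weight: 1.00 in the configuration below
```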
### Models Merged
The following models were included in the merge:
* [grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B](https://huggingface.co./grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B)
* [lucyknada/microsoft_WizardLM-2-7B](https://huggingface.co./lucyknada/microsoft_WizardLM-2-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 32]
    model: grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B
  - layer_range: [0, 32]
    model: lucyknada/microsoft_WizardLM-2-7B
    parameters:
      weight: 1.00
```