---
license: apache-2.0
tags:
- MoE
- merge
- mergekit
- Mistral
- Microsoft/WizardLM-2-7B
---
# WizardLM-2-4x7B-MoE
WizardLM-2-4x7B-MoE is an experimental MoE model created with [Mergekit](https://github.com/arcee-ai/mergekit) by combining four copies of [WizardLM-2-7B](https://huggingface.co./microsoft/WizardLM-2-7B) using the `random` gate mode.
For best results, be sure to set experts per token to 4! The context length is the same as Mistral-7B-Instruct-v0.1 (8k tokens). The Vicuna-v1.1 instruction template is recommended.
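As a quick sketch of the recommended setup: the helper below formats a single-turn prompt in the standard Vicuna-v1.1 template (the function name is just for illustration). The commented loading line is an assumption that the merged model exposes a Mixtral-style `num_experts_per_tok` config field, which is how the "4 experts per token" setting would be applied with `transformers`.

```python
def build_vicuna_prompt(user_message: str) -> str:
    """Format a single-turn prompt in the Vicuna-v1.1 template."""
    system = ("A chat between a curious user and an artificial intelligence "
              "assistant. The assistant gives helpful, detailed, and polite "
              "answers to the user's questions.")
    return f"{system} USER: {user_message} ASSISTANT:"

# Assumed loading pattern (Mixtral-style MoE config; not verified against this repo):
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(
#     "Skylaude/WizardLM-2-4x7B-MoE", num_experts_per_tok=4)

print(build_vicuna_prompt("What is a mixture-of-experts model?"))
```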
# Quantized versions
EXL2 (for fast GPU-only inference): <br />
8_0bpw: https://huggingface.co./Skylaude/WizardLM-2-4x7B-MoE-exl2-8_0bpw (~25 GB VRAM) <br />
6_0bpw: https://huggingface.co./Skylaude/WizardLM-2-4x7B-MoE-exl2-6_0bpw (~19 GB VRAM) <br />
5_0bpw: https://huggingface.co./Skylaude/WizardLM-2-4x7B-MoE-exl2-5_0bpw (~16 GB VRAM) <br />
4_25bpw: https://huggingface.co./Skylaude/WizardLM-2-4x7B-MoE-exl2-4_25bpw (~14 GB VRAM) <br />
3_5bpw: https://huggingface.co./Skylaude/WizardLM-2-4x7B-MoE-exl2-3_5bpw (~12 GB VRAM) <br />
3_0bpw: https://huggingface.co./Skylaude/WizardLM-2-4x7B-MoE-exl2-3_0bpw (~11 GB VRAM)
GGUF (for mixed GPU+CPU inference or CPU-only inference): <br />
https://huggingface.co./mradermacher/WizardLM-2-4x7B-MoE-GGUF <br />
Thanks to [Michael Radermacher](https://huggingface.co./mradermacher) for making these quants!
# Evaluation
I don't expect this model to be that great since it's something that I made as an experiment. However, I will submit it to the Open LLM Leaderboard to see how it matches up against some other models (particularly WizardLM-2-7B and WizardLM-2-70B).
# Mergekit config
```
base_model: models/WizardLM-2-7B
gate_mode: random
dtype: float16
experts_per_token: 4
experts:
- source_model: models/WizardLM-2-7B
- source_model: models/WizardLM-2-7B
- source_model: models/WizardLM-2-7B
- source_model: models/WizardLM-2-7B
```
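With the config above saved to a file (say `config.yml`), a merge like this one can be reproduced with mergekit's MoE entry point. This is a sketch: the output path is a placeholder, and the four source-model paths from the config must exist locally.

```shell
# Requires mergekit (pip install mergekit) and local copies of the base model.
# Output directory name is a placeholder.
mergekit-moe config.yml ./WizardLM-2-4x7B-MoE
```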