---
license: apache-2.0
tags:
- MoE
- merge
- mergekit
- Mistral
- Microsoft/WizardLM-2-7B
---

# WizardLM-2-4x7B-MoE

WizardLM-2-4x7B-MoE is an experimental MoE model made with [Mergekit](https://github.com/arcee-ai/mergekit). It was made by combining four copies of [WizardLM-2-7B](https://huggingface.co./microsoft/WizardLM-2-7B) using mergekit's `random` gate mode.

Please be sure to set experts per token to 4 for the best results. The context length is the same as Mistral-7B-Instruct-v0.1 (8k tokens). For the instruction template, Vicuna-v1.1 is recommended.
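For frontends that don't ship a Vicuna-v1.1 preset, a prompt can be assembled by hand. A minimal sketch (the system message below is the standard Vicuna one; check it against your frontend's template settings):

```python
# Builds a Vicuna-v1.1-style prompt, the instruction template
# recommended above. The system message is the standard Vicuna one.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def vicuna_prompt(user_message: str) -> str:
    # Vicuna-v1.1 separates turns with "USER:" / "ASSISTANT:" markers;
    # generation continues after the trailing "ASSISTANT:".
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"

prompt = vicuna_prompt("What is a mixture-of-experts model?")
```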

# Quantized versions

EXL2 (for fast GPU-only inference): <br />
6_0bpw: https://huggingface.co./Skylaude/WizardLM-2-4x7B-MoE-exl2-6_0bpw (for GPUs with 20+ GB of VRAM) <br />
4_25bpw: [coming soon] (for GPUs with 16+ GB of VRAM) <br />
3_0bpw: https://huggingface.co./Skylaude/WizardLM-2-4x7B-MoE-exl2-3_0bpw (for GPUs with 12+ GB of VRAM)

GGUF (for mixed GPU+CPU inference or CPU-only inference):  <br />
https://huggingface.co./mradermacher/WizardLM-2-4x7B-MoE-GGUF <br />
Thanks to [Michael Radermacher](https://huggingface.co./mradermacher) for making these quants!

# Evaluation

I don't expect this model to be that great since it's something that I made as an experiment. However, I will submit it to the Open LLM Leaderboard to see how it matches up against some other models (particularly WizardLM-2-7B and WizardLM-2-70B). 

# Mergekit config
```yaml
base_model: models/WizardLM-2-7B
gate_mode: random
dtype: float16
experts_per_token: 4
experts:
  - source_model: models/WizardLM-2-7B
  - source_model: models/WizardLM-2-7B
  - source_model: models/WizardLM-2-7B
  - source_model: models/WizardLM-2-7B
```
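Assuming mergekit is installed with MoE support, a config like this would be run with the `mergekit-moe` command (paths here are placeholders for wherever the config and output should live):

```shell
# Sketch: merge four copies of WizardLM-2-7B into one MoE model
# using the config above, saved as config.yml.
pip install mergekit
mergekit-moe config.yml ./WizardLM-2-4x7B-MoE
```

Because `gate_mode: random` initializes the router weights randomly rather than from hidden-state prompts, all four (identical) experts start with untrained routing, which is why `experts_per_token: 4` activates every expert at inference.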