---
license: apache-2.0
tags:
- MoE
- merge
- mergekit
- Mistral
- Microsoft/WizardLM-2-7B
---

# WizardLM-2-4x7B-MoE-exl2-6_0bpw

This is a quantized version of [WizardLM-2-4x7B-MoE](https://huggingface.co./Skylaude/WizardLM-2-4x7B-MoE), an experimental MoE model made with [Mergekit](https://github.com/arcee-ai/mergekit). Quantization was done using version 0.0.18 of [ExLlamaV2](https://github.com/turboderp/exllamav2).

Please be sure to set experts per token to 4 for the best results! Context length should be the same as Mistral-7B-Instruct-v0.1 (8k tokens). For instruction templates, Vicuna-v1.1 is recommended.
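As a sketch, a single-turn Vicuna-v1.1 style prompt can be assembled as below. The system message shown is the common Vicuna default, not something mandated by this model, and the helper function name is illustrative:

```python
# Sketch: formatting a Vicuna-v1.1 style prompt (recommended template above).
# The default system message is the one commonly used with Vicuna-v1.1.

SYSTEM = ("A chat between a curious user and an artificial intelligence "
          "assistant. The assistant gives helpful, detailed, and polite "
          "answers to the user's questions.")

def vicuna_v11_prompt(user_message: str, system: str = SYSTEM) -> str:
    """Format a single-turn prompt in the Vicuna-v1.1 convention."""
    return f"{system} USER: {user_message} ASSISTANT:"

prompt = vicuna_v11_prompt("What is a Mixture-of-Experts model?")
print(prompt)
```

Multi-turn conversations follow the same pattern, appending each completed `USER: ... ASSISTANT: ...` exchange before the next `USER:` turn.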

For more information see the [original repository](https://huggingface.co./Skylaude/WizardLM-2-4x7B-MoE).