---
license: apache-2.0
tags:
  - MoE
  - merge
  - mergekit
  - Mistral
  - Microsoft/WizardLM-2-7B
---

# WizardLM-2-4x7B-MoE

Some files are still uploading. The model should be available in a couple of hours!

This is an experimental MoE model made with Mergekit by combining four copies of WizardLM-2-7B using the random gate mode. Be sure to set experts per token to 4 for the best results! Context length is the same as Mistral-7B-Instruct-v0.1 (8k tokens). The Vicuna-v1.1 instruction template is recommended.
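
Below is a minimal usage sketch with the Transformers library. The repository id, the Vicuna-v1.1 prompt wording, and the generation settings are assumptions for illustration, not part of the original card; the merged config produced by Mergekit should already carry the experts-per-token setting from the config further down.

```python
# Minimal usage sketch (assumed repo id and prompt wording; adjust as needed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Skylaude/WizardLM-2-4x7B-MoE"  # assumption: the Hub repo id for this model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # load in the checkpoint's native precision (float16 here)
    device_map="auto",    # requires `accelerate`; spreads layers across available devices
)

# Vicuna-v1.1 style prompt, as recommended above.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Explain what a Mixture-of-Experts model is. ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```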

## Quantized versions

Hopefully coming soon.

## Evaluation

I don't expect this model to be that great since it's something that I made as an experiment. However, I will submit it to the Open LLM Leaderboard to see how it matches up against some other models (particularly WizardLM-2-7B and WizardLM-2-70B).

## Mergekit config

```yaml
base_model: models/WizardLM-2-7B
gate_mode: random
dtype: float16
experts_per_token: 4
experts:
  - source_model: models/WizardLM-2-7B
  - source_model: models/WizardLM-2-7B
  - source_model: models/WizardLM-2-7B
  - source_model: models/WizardLM-2-7B
```
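
With a standard Mergekit installation, a merge like this is typically produced by pointing the MoE script at the config above, e.g. `mergekit-moe config.yaml ./WizardLM-2-4x7B-MoE`; the exact command line and output path are assumptions rather than the author's original invocation.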