---
language:
- fr
- it
- de
- es
- en
license: apache-2.0
tags:
- moe
inference: false
---

# Model Card for Mixtral-Fusion-4x7B-Instruct-v0.1
This is an experimental model created by merging the experts of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).
## How we merged experts
The merge method was changed to use slerp (spherical linear interpolation).
### Discussion
**Old merge version:** we simply took the average of every two experts' weights; the same was done for `gate.weight`.
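
Below is a minimal sketch of what a pair-wise slerp merge can look like. It assumes the transformers Mixtral layout (each expert exposes `w1`, `w2`, `w3` weights under `model.layers.{i}.block_sparse_moe.experts.{j}`) and an assumed (0,1), (2,3), (4,5), (6,7) pairing; the actual procedure and the handling of `gate.weight` are in the conversion notebook and may differ.

```python
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float = 0.5, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    a_f, b_f = a.flatten().float(), b.flatten().float()
    # Angle between the two (normalized) weight vectors.
    omega = torch.acos(torch.clamp(
        torch.dot(a_f / (a_f.norm() + eps), b_f / (b_f.norm() + eps)), -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel: fall back to linear interpolation
        merged = (1 - t) * a_f + t * b_f
    else:
        merged = (torch.sin((1 - t) * omega) / so) * a_f + (torch.sin(t * omega) / so) * b_f
    return merged.reshape(a.shape).to(a.dtype)

def merge_expert_pair(expert_a: dict, expert_b: dict) -> dict:
    """Merge two experts whose dicts map parameter names (w1/w2/w3 weights) to tensors."""
    return {name: slerp(expert_a[name], expert_b[name]) for name in expert_a}
```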
## How To Convert
Use a Colab CPU high-memory runtime.
`convert_mixtral_8x7b_to_4x7b.ipynb`
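
As a quick sanity check after running the notebook (an assumption about the resulting checkpoint, not an output the notebook is documented to print), the converted config should report 4 routed experts instead of the original 8:

```python
from transformers import AutoConfig

# num_local_experts is the Mixtral config field counting routed experts per MoE layer.
config = AutoConfig.from_pretrained("mmnga/Mixtral-Fusion-4x7B-Instruct-v0.1")
print(config.num_local_experts)  # expected: 4
```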
## Other Models
[mmnga/Mixtral-Extraction-4x7B-Instruct-v0.1](https://huggingface.co/mmnga/Mixtral-Extraction-4x7B-Instruct-v0.1)
## Usage
```sh
pip install git+https://github.com/huggingface/transformers --upgrade
pip install torch accelerate bitsandbytes flash_attn
```
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, MixtralForCausalLM
import torch

model_name_or_path = "mmnga/Mixtral-Fusion-4x7B-Instruct-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
# Load in 8-bit (requires bitsandbytes) to reduce GPU memory usage.
model = MixtralForCausalLM.from_pretrained(model_name_or_path, load_in_8bit=True)

# Mixtral-Instruct prompt format: wrap the user message in [INST] ... [/INST].
text = "[INST] What was John Holt's vision on education? [/INST] "
inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
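
Note: passing `load_in_8bit=True` directly still works on recent transformers releases, but newer versions prefer an explicit `BitsAndBytesConfig`. A sketch of the equivalent call:

```python
from transformers import BitsAndBytesConfig, MixtralForCausalLM

# Equivalent 8-bit load via the explicit quantization-config API.
quant_config = BitsAndBytesConfig(load_in_8bit=True)
model = MixtralForCausalLM.from_pretrained(
    "mmnga/Mixtral-Fusion-4x7B-Instruct-v0.1",
    quantization_config=quant_config,
    device_map="auto",
)
```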