---
license: apache-2.0
---

# Mixtral 7b 8 Expert


This is a preliminary HuggingFace implementation of the newly released MoE model by Mistral AI. Make sure to load it with `trust_remote_code=True`.

Thanks to @dzhulgakov for his early implementation (https://github.com/dzhulgakov/llama-mistral) that helped me find a working setup.

## Basic inference setup

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# The custom MoE modeling code ships with the repo, so trust_remote_code=True is required.
model = AutoModelForCausalLM.from_pretrained(
    "DiscoResearch/mixtral-7b-8expert",
    low_cpu_mem_usage=True, device_map="auto", trust_remote_code=True,
)
tok = AutoTokenizer.from_pretrained("DiscoResearch/mixtral-7b-8expert")

x = tok.encode("The mistral wind is a phenomenon ", return_tensors="pt").cuda()
x = model.generate(x, max_new_tokens=128).cpu()
print(tok.batch_decode(x))
```
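
The full-precision checkpoint is large, so loading it quantized can help on smaller GPUs. Below is a minimal sketch using transformers' bitsandbytes integration; the 4-bit settings shown here are assumptions, not part of the original setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Assumed 4-bit NF4 config (requires the bitsandbytes package); adjust to your hardware.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "DiscoResearch/mixtral-7b-8expert",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
tok = AutoTokenizer.from_pretrained("DiscoResearch/mixtral-7b-8expert")

x = tok.encode("The mistral wind is a phenomenon ", return_tensors="pt").to(model.device)
print(tok.batch_decode(model.generate(x, max_new_tokens=128)))
```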

## Conversion

Use `convert_mistral_moe_weights_to_hf.py --input_dir ./input_dir --model_size 7B --output_dir ./output` to convert the original consolidated weights to this HF setup.
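
As a quick sanity check after conversion, the output directory can be loaded directly. This is a minimal sketch that assumes the converted weights, tokenizer files, and custom modeling code all ended up in `./output`; if the tokenizer was not written there, load it from `DiscoResearch/mixtral-7b-8expert` instead.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the freshly converted checkpoint from the local output directory.
model = AutoModelForCausalLM.from_pretrained(
    "./output", device_map="auto", trust_remote_code=True
)
tok = AutoTokenizer.from_pretrained("./output")
print(model.config)
```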

Come chat about this in our Disco(rd)! :)