---
license: apache-2.0
language:
- fr
- it
- de
- es
- en
inference: false
---
# Model Card for Mixtral-Fusion-4x7B-Instruct-v0.1
This model is an experimental merge created by averaging the experts of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co./mistralai/Mixtral-8x7B-Instruct-v0.1).

# How we merged experts
We take the average of every two `experts` weights, merging the original eight experts down to four.  
The rows of `gate.weight` are averaged pairwise in the same way.
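
As a rough sketch, this is what the pairwise averaging amounts to, assuming the standard Mixtral state-dict layout (`model.layers.{i}.block_sparse_moe.experts.{e}.w1/w2/w3.weight` and `...gate.weight`) and adjacent-pair grouping; the conversion notebook linked below is the authoritative code.

~~~python
import torch

# Hypothetical sketch of the pairwise merge for one decoder layer.
# The key names and the adjacent (0,1), (2,3), ... pairing are assumptions;
# see the linked notebook for the exact conversion steps.
def merge_layer_experts(state_dict: dict, layer: int) -> None:
    prefix = f"model.layers.{layer}.block_sparse_moe"

    # The router has one output row per expert: [8, hidden] -> [4, hidden].
    gate = state_dict[f"{prefix}.gate.weight"]
    state_dict[f"{prefix}.gate.weight"] = gate.view(4, 2, -1).mean(dim=1)

    # Each expert's three projection matrices are averaged pairwise.
    for new_e in range(4):
        a, b = 2 * new_e, 2 * new_e + 1
        for w in ("w1", "w2", "w3"):
            wa = state_dict.pop(f"{prefix}.experts.{a}.{w}.weight")
            wb = state_dict.pop(f"{prefix}.experts.{b}.{w}.weight")
            state_dict[f"{prefix}.experts.{new_e}.{w}.weight"] = (wa + wb) / 2
~~~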

# How To Convert
Use a Colab CPU high-memory runtime:  
[convert_mixtral_8x7b_to_4x7b.ipynb](https://huggingface.co./mmnga/Mixtral-Fusion-4x7B-Instruct-v0.1/blob/main/notebook/convert_mixtral_8x7b_to_4x7b.ipynb)

# Usage
~~~bash
pip install git+https://github.com/huggingface/transformers --upgrade
pip install torch accelerate bitsandbytes flash_attn
~~~

~~~python
from transformers import AutoTokenizer, MixtralForCausalLM
import torch

model_name_or_path = "mmnga/Mixtral-Fusion-4x7B-Instruct-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = MixtralForCausalLM.from_pretrained(model_name_or_path, load_in_8bit=True)

# num_experts_per_tok sets how many of the merged experts are routed per token (try 1 or 2)
model.config.num_experts_per_tok = 2

# chat messages
messages = [
    {"role": "user", "content": "Tell me what's for dinner tonight."},
]

with torch.no_grad():
    token_ids = tokenizer.apply_chat_template(messages, return_tensors="pt")
    output_ids = model.generate(
        token_ids.to(model.device),
        temperature=0.5,
        do_sample=True,
        top_p=0.95,
        top_k=40,
        max_new_tokens=128,
        repetition_penalty=1.5
    )
output = tokenizer.decode(output_ids[0][token_ids.size(1):])
print(output)

~~~
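
Note that `num_experts_per_tok` controls how many of the four merged experts the router activates for each token: 2 mirrors the original Mixtral top-2 routing, while 1 halves the expert compute per token. Which setting works better for this merged model is left as an open question, as the comment above suggests.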