---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
language:
- en
license: mit
---

# Llama 3 8B Instruct MoE
Llama 3 8B Instruct converted to a mixture-of-experts (MoE) layout by randomly partitioning the FFN of each decoder layer into 8 experts of equal size. All weights are taken directly from the Llama 3 8B Instruct base model.
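
The conversion script is not included here, but the partition itself is straightforward. The sketch below shows how one decoder layer's SwiGLU FFN (the `gate_proj`/`up_proj`/`down_proj` projections Llama 3 uses in `transformers`) could be split into 8 equal experts along the intermediate dimension. The function name, the seed, and the choice to return plain `nn.ModuleDict`s are illustrative assumptions, not the exact procedure used to produce these weights.

```python
import torch
import torch.nn as nn

def split_ffn_into_experts(gate_proj: nn.Linear, up_proj: nn.Linear,
                           down_proj: nn.Linear, num_experts: int = 8,
                           seed: int = 0):
    """Randomly partition a dense SwiGLU FFN into equal-size experts.

    Shapes (Llama convention): gate_proj/up_proj are (intermediate, hidden),
    down_proj is (hidden, intermediate). Each expert receives
    intermediate // num_experts randomly chosen FFN channels.
    """
    intermediate = gate_proj.out_features
    assert intermediate % num_experts == 0, "intermediate size must split evenly"
    chunk = intermediate // num_experts

    # Random permutation of the intermediate (FFN channel) dimension.
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(intermediate, generator=g)

    experts = []
    for e in range(num_experts):
        idx = perm[e * chunk:(e + 1) * chunk]
        gate = nn.Linear(gate_proj.in_features, chunk, bias=False)
        up = nn.Linear(up_proj.in_features, chunk, bias=False)
        down = nn.Linear(chunk, down_proj.out_features, bias=False)
        with torch.no_grad():
            # Rows of gate/up and the matching columns of down move together,
            # so each expert computes an exact slice of the dense FFN.
            gate.weight.copy_(gate_proj.weight[idx, :])
            up.weight.copy_(up_proj.weight[idx, :])
            down.weight.copy_(down_proj.weight[:, idx])
        experts.append(nn.ModuleDict(
            {"gate_proj": gate, "up_proj": up, "down_proj": down}))
    return experts
```

Because SiLU and the elementwise gate act per channel, summing the outputs of all 8 experts reproduces the dense FFN exactly; a router would then sit on top to decide which experts to run per token.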