---
license: mit
language:
  - en
base_model: meta-llama/Meta-Llama-3-8B-Instruct
---

# Llama 3 8B Instruct MoE

The Llama 3 8B Instruct model converted to a Mixture-of-Experts (MoE) layout by randomly partitioning the FFN of each decoder layer into 8 equally sized experts. All weights are taken directly from the Llama 3 Instruct base model.
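
This card does not spell out the exact partitioning procedure, so the following is a minimal sketch under the straightforward reading: shuffle the intermediate dimension of Llama's SwiGLU FFN and give each of the 8 experts an equal slice of the `gate_proj`/`up_proj` rows plus the matching `down_proj` columns. The function name, seed handling, and toy dimensions below are illustrative, not taken from this repository.

```python
import torch
import torch.nn.functional as F


def split_ffn_into_experts(gate_proj, up_proj, down_proj, num_experts=8, seed=0):
    """Randomly partition a dense SwiGLU FFN into equal-size experts.

    gate_proj, up_proj: [intermediate, hidden] weights (nn.Linear convention).
    down_proj: [hidden, intermediate].
    Returns one (gate, up, down) weight tuple per expert.
    """
    intermediate = gate_proj.shape[0]
    assert intermediate % num_experts == 0, "experts must be the same size"
    chunk = intermediate // num_experts
    # Shuffle the intermediate (FFN) dimension, then hand each expert an
    # equal slice of the gate/up rows and the matching down columns.
    gen = torch.Generator().manual_seed(seed)
    perm = torch.randperm(intermediate, generator=gen)
    experts = []
    for e in range(num_experts):
        idx = perm[e * chunk:(e + 1) * chunk]
        experts.append((gate_proj[idx], up_proj[idx], down_proj[:, idx]))
    return experts


# Tiny check with toy dimensions (Llama 3 8B itself uses 4096/14336):
# summing every expert's output reproduces the dense FFN, because the
# partition only splits the sum over the intermediate dimension.
hidden, inter = 64, 256
gate = torch.randn(inter, hidden, dtype=torch.float64)
up = torch.randn(inter, hidden, dtype=torch.float64)
down = torch.randn(hidden, inter, dtype=torch.float64)
x = torch.randn(hidden, dtype=torch.float64)

dense = down @ (F.silu(gate @ x) * (up @ x))
moe = sum(d @ (F.silu(g @ x) * (u @ x))
          for g, u, d in split_ffn_into_experts(gate, up, down))
assert torch.allclose(dense, moe)
```

One consequence of slicing this way: routing every token through all 8 experts and summing their outputs reproduces the original dense FFN exactly, which the check above verifies.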