Update README.md
README.md CHANGED

@@ -3,6 +3,7 @@ language:
 - pl
 datasets:
 - s3nh/alpaca-dolly-instruction-only-polish
+inference: false
 ---
 # Model Card for Mixtral-8x7B
 The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.