rdk31 committed on
Commit 31656da
1 Parent(s): 1bfbfd8

Update README.md

Files changed (1)
  1. README.md +1 -0
README.md CHANGED
@@ -3,6 +3,7 @@ language:
  - pl
  datasets:
  - s3nh/alpaca-dolly-instruction-only-polish
+ inference: false
  ---
  # Model Card for Mixtral-8x7B
  The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
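
The added `inference: false` flag turns off the hosted inference widget on the model's Hub page. Reconstructed from the hunk above, the README front matter after this commit should look roughly like the sketch below (only the fields visible in the displayed hunk are shown; any other metadata outside the hunk is omitted):

```yaml
---
language:
- pl                                          # model targets Polish
datasets:
- s3nh/alpaca-dolly-instruction-only-polish   # Polish Alpaca/Dolly instruction dataset
inference: false                              # disable the hosted inference widget on the Hub
---
```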