---
license: apache-2.0
tags:
- mergekit
- lazymergekit
- jiayihao03/mistral-7b-instruct-Javascript-4bit
- akameswa/mistral-7b-instruct-java-4bit
- akameswa/mistral-7b-instruct-go-4bit
- jiayihao03/mistral-7b-instruct-python-4bit
---
# mixtral-4x7b-instruct-code
mixtral-4x7b-instruct-code is a Mixture of Experts (MoE) built from the following models using [mergekit](https://github.com/cg123/mergekit):
* [jiayihao03/mistral-7b-instruct-Javascript-4bit](https://huggingface.co./jiayihao03/mistral-7b-instruct-Javascript-4bit)
* [akameswa/mistral-7b-instruct-java-4bit](https://huggingface.co./akameswa/mistral-7b-instruct-java-4bit)
* [akameswa/mistral-7b-instruct-go-4bit](https://huggingface.co./akameswa/mistral-7b-instruct-go-4bit)
* [jiayihao03/mistral-7b-instruct-python-4bit](https://huggingface.co./jiayihao03/mistral-7b-instruct-python-4bit)
## 🧩 Configuration
```yaml
```
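The configuration block above is empty in this card. As a rough sketch only, a mergekit MoE config combining these four experts would typically look like the following; the `base_model`, `gate_mode`, `dtype`, and `positive_prompts` values below are illustrative assumptions, not the settings actually used for this merge:

```yaml
# Hypothetical mergekit-moe config sketch -- not the original configuration
base_model: mistralai/Mistral-7B-Instruct-v0.2  # assumed base model
gate_mode: hidden        # route tokens using hidden-state similarity to the prompts
dtype: bfloat16
experts:
  - source_model: jiayihao03/mistral-7b-instruct-Javascript-4bit
    positive_prompts:
      - "javascript"
  - source_model: akameswa/mistral-7b-instruct-java-4bit
    positive_prompts:
      - "java"
  - source_model: akameswa/mistral-7b-instruct-go-4bit
    positive_prompts:
      - "go"
  - source_model: jiayihao03/mistral-7b-instruct-python-4bit
    positive_prompts:
      - "python"
```

In this style of config, each expert's `positive_prompts` seed the router so that prompts about a given language are steered toward the matching expert.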