```yaml
base_model: mistralai/Mistral-7B-Instruct-v0.2
gate_mode: hidden   # one of "hidden", "cheap_embed", or "random"
dtype: bfloat16     # output dtype (float32, float16, or bfloat16)
experts:
  - source_model: SanjiWatsuki/Silicon-Maid-7B
    positive_prompts:
      - "roleplay"
  - source_model: mistralai/Mistral-7B-Instruct-v0.2
    positive_prompts:
      - "chat"
```

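The configuration above follows the mergekit MoE format, so the merged model can be built from it with mergekit's `mergekit-moe` tool. The snippet below is a minimal sketch, assuming mergekit is installed and that the configuration is saved as `moe-config.yaml`; the config filename and output directory are placeholders, not files from this repository.

```python
# Minimal sketch: build the MoE merge from the config above with mergekit.
# Assumes `pip install mergekit` has been run and the mergekit-moe entry
# point is on PATH; both paths below are placeholders.
import subprocess

subprocess.run(
    ["mergekit-moe", "moe-config.yaml", "./merged-moe"],
    check=True,
)
```
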
Prompt format: ChatML.
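
For reference, ChatML wraps each turn in `<|im_start|>` / `<|im_end|>` markers. The helper below is only an illustration of how a prompt for this model could be assembled; the function name and example messages are not part of this repository.

```python
# Illustration of the ChatML prompt layout (system turn is optional).
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("You are a helpful assistant.", "Introduce yourself in one sentence."))
```

Generation should be stopped at `<|im_end|>` so the model does not continue into a new turn.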

GGUF files: 12.9B params, llama architecture, available in 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit quantizations.
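
One way to try a quantized file is to download it from the Hub and load it with llama-cpp-python. This is a sketch only: the repo id and filename are placeholders that must be replaced with the actual values from this repository's file listing, and the sampling settings are arbitrary.

```python
# Sketch: fetch one GGUF quant and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="your-namespace/this-model-GGUF",  # placeholder repo id
    filename="model.Q4_K_M.gguf",              # placeholder quant file
)

llm = Llama(model_path=model_path, n_ctx=4096)

prompt = (
    "<|im_start|>user\nWrite a short roleplay opener.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```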
