---

license: mit
datasets:
- Vikhrmodels/GrandMaster-PRO-MAX
language:
- en
- ru
tags:
- mistral
- chat
- conversational
- transformers
inference:
  parameters:
    temperature: 0
pipeline_tag: text-generation
base_model:
- mistralai/Mistral-Small-24B-Instruct-2501
- ZeroAgency/Zero-Mistral-Small-24B-Instruct-2501
library_name: vllm
---

# Zero-Mistral-Small-24B-Instruct-2501 Q8_0 GGUF version



Zero-Mistral-Small is an improved version of [mistralai/Mistral-Small-24B-Instruct-2501](https://huggingface.co./mistralai/Mistral-Small-24B-Instruct-2501), adapted primarily for Russian and English.

Training included an SFT (supervised fine-tuning) stage on the [GrandMaster-PRO-MAX](https://huggingface.co./datasets/Vikhrmodels/GrandMaster-PRO-MAX) dataset.



## 📚 Model versions



- [Merged 16-bit](https://huggingface.co./ZeroAgency/Zero-Mistral-Small-24B-Instruct-2501) - the original merged 16-bit version.

- [LoRa adapter](https://huggingface.co./ZeroAgency/Zero-Mistral-Small-24B-Instruct-2501-lora) for mistralai/Mistral-Small-24B-Instruct-2501

- [F16 GGUF](https://huggingface.co./ZeroAgency/Zero-Mistral-Small-24B-Instruct-2501-F16)

- [BF16 GGUF](https://huggingface.co./ZeroAgency/Zero-Mistral-Small-24B-Instruct-2501-BF16)

- [Q8_0 GGUF](https://huggingface.co./ZeroAgency/Zero-Mistral-Small-24B-Instruct-2501-Q8_0)

- [Q4_K_M GGUF](https://huggingface.co./ZeroAgency/Zero-Mistral-Small-24B-Instruct-2501-Q4_K_M)
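
## Usage

The GGUF builds can be run locally with [llama.cpp](https://github.com/ggml-org/llama.cpp). A minimal sketch for the Q8_0 build (the exact `.gguf` filename inside the repo is not listed here, so it is left as a placeholder — check the repository's file list):

```shell
# Download the Q8_0 repository contents
huggingface-cli download ZeroAgency/Zero-Mistral-Small-24B-Instruct-2501-Q8_0 \
  --local-dir ./zero-mistral-q8

# Interactive chat; temperature 0 matches the inference parameters in this card
llama-cli -m ./zero-mistral-q8/<model-file>.gguf \
  --temp 0 -cnv -p "You are a helpful assistant."
```

The Q8_0 quantization is near-lossless relative to the 16-bit weights; for tighter memory budgets, the Q4_K_M build above trades some quality for roughly half the size.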