---
base_model: Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24
base_model_relation: quantized
license: apache-2.0
pipeline_tag: text-generation
quantized_by: qilowoq
tags:
- gptq
language:
- en
- ru
---

# Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24-4Bit-GPTQ

- Original Model: [Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24](https://huggingface.co./Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24)

## Quantization

- This model was quantized with the AutoGPTQ library using a calibration dataset of English and Russian Wikipedia articles. It achieves lower perplexity on Russian data than other GPTQ quantizations.
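
Below is a minimal sketch of the quantization procedure with AutoGPTQ. The calibration texts, bit-width settings (`group_size`, `desc_act`), and output directory are assumptions for illustration, not the confirmed settings used for this checkpoint:

```python
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from transformers import AutoTokenizer

base_id = "Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24"
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Hypothetical calibration samples; the actual calibration set was a mix of
# English and Russian Wikipedia articles.
calibration_texts = [
    "Wikipedia is a free online encyclopedia written by volunteers.",
    "Википедия — свободная интернет-энциклопедия, которую пишут добровольцы.",
]
examples = [tokenizer(text) for text in calibration_texts]

# 4-bit GPTQ config; group_size and desc_act are common defaults, not confirmed values.
quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)

model = AutoGPTQForCausalLM.from_pretrained(base_id, quantize_config)
model.quantize(examples)                  # run GPTQ calibration over the examples
model.save_quantized("Vikhr-Nemo-12B-Instruct-R-21-09-24-4Bit-GPTQ")
```

For inference, the quantized checkpoint can be loaded with `transformers` (with `auto-gptq` and `optimum` installed), e.g. `AutoModelForCausalLM.from_pretrained(<this repo id>, device_map="auto")`.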