LakoMoor committed dd9d8ec (parent: 6ef0534)

Create README.md

Files changed (1): README.md added (+54 lines)
---
library_name: llamacpp
model_name: Vikhr-Llama-3.2-1B-instruct
base_model:
- Vikhrmodels/Vikhr-Llama-3.2-1B
language:
- ru
- en
license: llama3.2
tags:
- instruct
datasets:
- Vikhrmodels/GrandMaster-PRO-MAX
pipeline_tag: text-generation
---

# 💨📱 Vikhr-Llama-3.2-1B-instruct

An instruction-tuned model based on Llama-3.2-1B-Instruct, trained on the Russian-language GrandMaster-PRO-MAX dataset. It is roughly five times more effective than the base model on ru_arena_general (19.04 vs. 4.04), which makes it well suited for deployment on low-power or mobile devices.

- [HF model](https://huggingface.co/Vikhrmodels/Vikhr-Llama-3.2-1B)

**Recommended generation temperature: 0.3**.
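
Since the card lists `llamacpp` as the library, here is a minimal llama-cpp-python sketch of chat inference at the recommended temperature. It assumes you have already downloaded a GGUF build of the model; the file name below is a placeholder, not a file name published by this repository.

```python
# Minimal sketch: run a GGUF build of Vikhr-Llama-3.2-1B-instruct with llama-cpp-python.
# The model_path is a placeholder; point it at the quantized GGUF file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Vikhr-Llama-3.2-1B-instruct-Q4_K_M.gguf",  # hypothetical file name
    n_ctx=4096,    # context window; lower it on memory-constrained devices
    n_threads=4,   # CPU threads; tune for the target (mobile/low-power) hardware
)

messages = [
    # Russian prompts, matching the model's primary target language
    {"role": "system", "content": "Ты полезный ассистент."},
    {"role": "user", "content": "Кратко объясни, что такое квантование моделей."},
]

# temperature=0.3 follows the recommendation above
out = llm.create_chat_completion(messages=messages, temperature=0.3, max_tokens=256)
print(out["choices"][0]["message"]["content"])
```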

## Metrics on ru_arena_general

| **Model** | **Score** | **95% CI** | **Avg Tokens** | **Std Tokens** | **LC Score** |
| ------------------------------------------- | --------- | --------------- | -------------- | -------------- | ------------ |
| kolibri-vikhr-mistral-0427 | 22.41 | +1.6 / -1.6 | 489.89 | 566.29 | 46.04 |
| storm-7b | 20.62 | +2.0 / -1.6 | 419.32 | 190.85 | 45.78 |
| neural-chat-7b-v3-3 | 19.04 | +2.0 / -1.7 | 927.21 | 1211.62 | 45.56 |
| **Vikhrmodels-Vikhr-Llama-3.2-1B-instruct** | **19.04** | **+1.3 / -1.6** | **958.63** | **1297.33** | **45.56** |
| gigachat_lite | 17.2 | +1.4 / -1.4 | 276.81 | 329.66 | 45.29 |
| Vikhrmodels-vikhr-qwen-1.5b-it | 13.19 | +1.4 / -1.6 | 2495.38 | 741.45 | 44.72 |
| meta-llama-Llama-3.2-1B-Instruct | 4.04 | +0.8 / -0.6 | 1240.53 | 1783.08 | 43.42 |

### Authors

- Sergei Bratchikov, [NLP Wanderer](https://t.me/nlpwanderer), [Vikhr Team](https://t.me/vikhrlabs)
- Nikolay Kompanets, [LakoMoor](https://t.me/lakomoor), [Vikhr Team](https://t.me/vikhrlabs)
- Konstantin Korolev, [Vikhr Team](https://t.me/vikhrlabs)
- Aleksandr Nikolich, [Vikhr Team](https://t.me/vikhrlabs)

```bibtex
@article{nikolich2024vikhr,
  title={Vikhr: The Family of Open-Source Instruction-Tuned Large Language Models for Russian},
  author={Aleksandr Nikolich and Konstantin Korolev and Sergey Bratchikov and Nikolay Kompanets and Artem Shelmanov},
  journal={arXiv preprint arXiv:2405.13929},
  year={2024},
  url={https://arxiv.org/pdf/2405.13929}
}
```