|
---
language:
- ru
license: apache-2.0
datasets:
- 0x7194633/fialka-v3-data
pipeline_tag: text-generation
model-index:
- name: fialka-7B-v3
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 48.55
      name: normalized accuracy
    source:
      url: https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard?query=0x7194633/fialka-7B-v3
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 71.05
      name: normalized accuracy
    source:
      url: https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard?query=0x7194633/fialka-7B-v3
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 43.06
      name: accuracy
    source:
      url: https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard?query=0x7194633/fialka-7B-v3
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 44.79
    source:
      url: https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard?query=0x7194633/fialka-7B-v3
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 69.46
      name: accuracy
    source:
      url: https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard?query=0x7194633/fialka-7B-v3
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 1.52
      name: accuracy
    source:
      url: https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard?query=0x7194633/fialka-7B-v3
      name: Open LLM Leaderboard
---
|
# Fialka v3.0 7B |
|
|
|
![Violet](https://i.imgur.com/EDwW6t6.png) |
|
## Description |
|
Fialka language models are trained to follow instructions and hold conversations in Russian. The third version is based on [ruadapt Llama 2 7B](https://huggingface.co./rccmsu/ruadapt_llama2_7b_v0.1), a Llama 2 model additionally adapted to a large Russian-language corpus, which allows Fialka to generate more accurate Russian responses.
|
## Usage |
|
The model uses the same prompt format as Zephyr:
|
```
<|user|>
Что такое мем?</s>
<|assistant|>
Мем - это культурный феномен, который может быть описан как информационный вирус. Он представляет собой абстрактный элемент культуры или идеи в виде изображения (обычно с изображением человека) и его краткого текста на основе которого можно создать мемы различных форматов: от графических до текстовых шуток; мема могут использоваться для передачи определенных идей через интернет-сообщество без необходимости использовать слова напрямую!</s>
```
|
Check out the [space](https://huggingface.co./spaces/0x7194633/fialka) to try the model in a UI without downloading it.
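For local inference, below is a minimal sketch of loading the model with the `transformers` library and prompting it in the format above; the generation parameters are illustrative assumptions rather than recommended settings.

```python
# Minimal sketch: load Fialka v3 with transformers and query it using the
# Zephyr-style prompt format shown above. Generation settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "0x7194633/fialka-7B-v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Build the prompt manually in the Zephyr format.
prompt = "<|user|>\nЧто такое мем?</s>\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=256, do_sample=True, top_p=0.9)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```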
|
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard) |
|
Detailed results can be found [here](https://huggingface.co./datasets/open-llm-leaderboard/details_0x7194633__fialka-7B-v3) |
|
|
|
| Metric                           | Value |
|----------------------------------|------:|
| Avg.                             | 46.40 |
| AI2 Reasoning Challenge (25-Shot)| 48.55 |
| HellaSwag (10-Shot)              | 71.05 |
| MMLU (5-Shot)                    | 43.06 |
| TruthfulQA (0-shot)              | 44.79 |
| Winogrande (5-shot)              | 69.46 |
| GSM8k (5-shot)                   |  1.52 |
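These scores can be approximately reproduced locally with EleutherAI's lm-evaluation-harness, which backs the leaderboard. A hedged sketch for one task is below; the leaderboard pins specific harness versions and settings, so local numbers may deviate slightly.

```python
# Hedged sketch: re-run one leaderboard task locally with EleutherAI's
# lm-evaluation-harness (pip install lm-eval). Exact leaderboard settings
# may differ, so treat the result as an approximation.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=0x7194633/fialka-7B-v3,dtype=float16",
    tasks=["arc_challenge"],  # 25-shot ARC-Challenge, as on the leaderboard
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])
```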
|
|
|
|