Update README.md
<img src="https://public.3.basecamp.com/p/rs5XqmAuF1iEuW6U7nMHcZeY/upload/download/VL-NLP-short.png" alt="logo voicelab nlp" style="width:300px;"/>
# Academic Trurl 2 -- Polish Llama 2
The new OPEN TRURL is a fine-tuned Llama 2, trained on over 1.7B tokens (970k conversational **Polish** and **English** samples) with a large context of 4096 tokens.
TRURL was trained on a large amount of Polish data.
TRURL 2 is a collection of fine-tuned generative text models with 7 billion and 13 billion parameters.
This is the repository for the 13B fine-tuned model, optimized for dialogue use cases.

This model was trained without the MMLU dataset.
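Since the model is fine-tuned from Llama 2 for dialogue, prompts presumably follow the Llama 2 chat template; the sketch below assumes that template (the `format_prompt` helper is illustrative, not part of the released model, and the exact format should be verified against the model card):

```python
def format_prompt(system: str, user: str) -> str:
    """Wrap a system message and a user turn in the Llama 2 chat template.

    Assumes the standard Llama 2 format: a [INST] ... [/INST] block with the
    system message enclosed in <<SYS>> markers.
    """
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"


# Example: build a Polish dialogue prompt for the model.
prompt = format_prompt("You are a helpful assistant.", "Jak masz na imię?")
print(prompt)
```

The formatted string can then be tokenized and passed to the model's `generate` method as with any Llama 2 checkpoint.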
# Overview