Update README.md
README.md CHANGED
@@ -9,13 +9,15 @@ inference: false
 
 # LLaMA 7B trained on the ru_turbo_alpaca, Russian instructions dataset
 
+Based on [LLaMA 7B](https://huggingface.co/decapoda-research/llama-7b-hf).
+
 An adapter only version. Merged version: [link](https://huggingface.co/IlyaGusev/llama_7b_ru_turbo_alpaca_lora_merged).
 
 Warning! The model was trained with a target capped at 256 tokens. We will update it once a version with 512 tokens is ready.
 
 Colab: [link](https://colab.research.google.com/drive/1JLoHOjDJQIa8SDqsEXrGHj4Z4aTnaajN)
 
-Training code: [
+Training code: [link](https://github.com/IlyaGusev/rulm/tree/master/self_instruct)
 
 ```python
 from peft import PeftModel, PeftConfig
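The hunk cuts off inside the README's usage snippet, just after the peft imports. As a minimal sketch of how an adapter-only LoRA checkpoint like this one is typically loaded, assuming the adapter repo id `IlyaGusev/llama_7b_ru_turbo_alpaca_lora` (inferred from the merged-version link, not stated in the hunk) and the standard peft/transformers loading pattern rather than the README's verbatim code:

```python
from peft import PeftModel, PeftConfig
from transformers import LlamaForCausalLM, LlamaTokenizer

# Assumed adapter repo id, inferred from the merged-version link above.
ADAPTER = "IlyaGusev/llama_7b_ru_turbo_alpaca_lora"

# The adapter config records the base model the LoRA was trained against
# (per the diff: decapoda-research/llama-7b-hf).
config = PeftConfig.from_pretrained(ADAPTER)

tokenizer = LlamaTokenizer.from_pretrained(config.base_model_name_or_path)
model = LlamaForCausalLM.from_pretrained(config.base_model_name_or_path)

# Attach the LoRA adapter weights on top of the base model.
model = PeftModel.from_pretrained(model, ADAPTER)
model.eval()

# The exact instruction template lives in the linked training repo; a bare
# prompt is used here for illustration. 256 matches the README's target cap.
prompt = "Почему трава зелёная?"  # "Why is grass green?"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Reading the base model id from the adapter's `PeftConfig` rather than hardcoding it keeps the snippet consistent with the "Based on LLaMA 7B" line this commit adds.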