The following training arguments were used for Llama-2 finetuning on the Ukrainian corpus of XL-SUM:
- learning-rate=2e-4,
- maximum number of tokens=512,
- 15 epochs.

LoRA (PEFT) arguments:
- rank=32,
- lora-alpha=16,
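The values above could be collected for a Hugging Face `peft`/`transformers` run roughly as sketched below. This is a minimal sketch, not the card's actual training script: only the numeric values come from the card, while the variable names and the alpha/r scaling note are assumptions based on the standard LoRA formulation.

```python
# Hyperparameters from the card, grouped as they would typically be passed
# to transformers.TrainingArguments and peft.LoraConfig (names assumed).
training_kwargs = {
    "learning_rate": 2e-4,    # from the card
    "num_train_epochs": 15,   # from the card
    "max_seq_length": 512,    # maximum number of tokens, from the card
}

lora_rank = 32     # rank = 32, from the card
lora_alpha = 16    # lora-alpha = 16, from the card

# In the standard LoRA parameterization the adapter update B @ A is scaled
# by alpha / r before being added to the frozen weight, so these values
# give a scaling factor of 16 / 32 = 0.5.
scaling = lora_alpha / lora_rank
```

With the usual alpha/r convention, setting alpha below the rank attenuates the adapter's contribution, a common choice to keep a higher-rank adapter from dominating the frozen base weights.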