Commit 8af07d6 (verified) · committed by SGaleshchuk · 1 Parent(s): 335e816

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -14,7 +14,7 @@ pipeline_tag: text-generation
 The following training arguments used for Llama-2 finetuning with Ukrainian corpora pf XL-SUM:
 - learning-rate=2e-4,
 - maximum number of tokens=512,
-- 5 epochs.
+- 15 epochs.
 Lora perf arguments:
 - rank = 32,
 - lora-alpha=16,
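For reference, the hyperparameters recorded in this diff can be gathered into one place. This is a minimal illustrative sketch, not the author's actual training script: the dictionary names below are assumptions, and only the numeric values come from the README.

```python
# Hyperparameters from the updated README (post-commit values).
# Key names are illustrative; they are not taken from the source repo.
training_config = {
    "learning_rate": 2e-4,
    "max_tokens": 512,
    "num_epochs": 15,  # changed from 5 in this commit
}

lora_config = {
    "rank": 32,       # LoRA rank (often the `r` parameter in PEFT-style configs)
    "lora_alpha": 16,
}
```

In PEFT-style fine-tuning setups these values would typically feed a LoRA adapter configuration and the trainer's optimizer/epoch settings, respectively.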