ArunIcfoss committed
Commit
ff08e48
1 Parent(s): e28e70b

End of training

Files changed (2)
  1. README.md +69 -0
  2. adapter_model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,69 @@
+ ---
+ license: cc-by-nc-4.0
+ library_name: peft
+ tags:
+ - generated_from_trainer
+ base_model: facebook/nllb-200-1.3B
+ metrics:
+ - bleu
+ - rouge
+ model-index:
+ - name: nllb-200-1.3B-ICFOSS-malayalam_Hindi_Translator
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # nllb-200-1.3B-ICFOSS-malayalam_Hindi_Translator
+
+ This model is a fine-tuned version of [facebook/nllb-200-1.3B](https://huggingface.co/facebook/nllb-200-1.3B) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.3788
+ - Bleu: 62.5154
+ - Rouge: {'rouge1': 0.42504662037099206, 'rouge2': 0.2891987093258279, 'rougeL': 0.4211514655126128, 'rougeLsum': 0.42156526904087943}
+ - Chrf: {'score': 79.24933104383702, 'char_order': 6, 'word_order': 0, 'beta': 2}
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
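Based on the adapter name, the base checkpoint, and the translation metrics, this adapter targets Malayalam→Hindi translation. The following minimal inference sketch is not part of the commit: it assumes the adapter is published as `ArunIcfoss/nllb-200-1.3B-ICFOSS-malayalam_Hindi_Translator` (hypothetical repo id) and uses NLLB's FLORES-200 language codes `mal_Mlym` (Malayalam) and `hin_Deva` (Hindi); the input sentence is a placeholder.

```python
# Minimal inference sketch (not from this commit); adapter repo id is assumed.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base_id = "facebook/nllb-200-1.3B"
adapter_id = "ArunIcfoss/nllb-200-1.3B-ICFOSS-malayalam_Hindi_Translator"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id, src_lang="mal_Mlym")
base_model = AutoModelForSeq2SeqLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA adapter

text = "ഇന്ന് നല്ല ദിവസമാണ്."  # placeholder Malayalam input
inputs = tokenizer(text, return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("hin_Deva"),  # force Hindi output
    max_new_tokens=256,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```

Since this is a PEFT repository, only the adapter weights (`adapter_model.safetensors`, about 47 MB per the file listing below) are stored here; the NLLB-200 base weights are fetched separately from `facebook/nllb-200-1.3B`.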
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0002
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - num_epochs: 5
+
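For context, here is a hedged sketch (not the author's training script) of a LoRA configuration and `Seq2SeqTrainingArguments` that match the hyperparameters listed above. The LoRA rank, alpha, dropout, target modules, output directory, and dataset are assumptions, since the commit does not record them.

```python
# Hedged training-setup sketch; LoRA settings and output_dir are assumptions.
from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments
from peft import LoraConfig, TaskType, get_peft_model

base_id = "facebook/nllb-200-1.3B"
model = AutoModelForSeq2SeqLM.from_pretrained(base_id)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,                                  # assumed rank
    lora_alpha=32,                         # assumed
    lora_dropout=0.05,                     # assumed
    target_modules=["q_proj", "v_proj"],   # assumed attention projections
)
model = get_peft_model(model, lora_config)

training_args = Seq2SeqTrainingArguments(
    output_dir="nllb-200-1.3B-ICFOSS-malayalam_Hindi_Translator",
    learning_rate=2e-4,                    # 0.0002
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="cosine",
    num_train_epochs=5,
    optim="adamw_torch",                   # Adam with betas=(0.9, 0.999), eps=1e-8 (defaults)
    predict_with_generate=True,
    evaluation_strategy="epoch",
    save_strategy="epoch",
)
# These would then be passed to a Seq2SeqTrainer together with the (unspecified)
# tokenized Malayalam-Hindi dataset and a compute_metrics function.
```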
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Bleu | Rouge | Chrf |
+ |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-----------------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------:|
+ | 0.5095 | 1.0 | 4698 | 0.4099 | 59.5376 | {'rouge1': 0.4220305313233426, 'rouge2': 0.2866519629954242, 'rougeL': 0.41646494668344247, 'rougeLsum': 0.4167340351207185} | {'score': 77.52631821685847, 'char_order': 6, 'word_order': 0, 'beta': 2} |
+ | 0.4213 | 2.0 | 9396 | 0.3842 | 61.7541 | {'rouge1': 0.4247871478803683, 'rouge2': 0.28898946927686797, 'rougeL': 0.42099815319030365, 'rougeLsum': 0.4209781732451786} | {'score': 78.54007352748269, 'char_order': 6, 'word_order': 0, 'beta': 2} |
+ | 0.3888 | 3.0 | 14094 | 0.3785 | 62.2691 | {'rouge1': 0.42665978089706913, 'rouge2': 0.28916951694997156, 'rougeL': 0.42136280849134333, 'rougeLsum': 0.4219221144613403} | {'score': 79.11003191466068, 'char_order': 6, 'word_order': 0, 'beta': 2} |
+ | 0.3764 | 4.0 | 18792 | 0.3785 | 62.4514 | {'rouge1': 0.42373682879235186, 'rouge2': 0.2891987093258279, 'rougeL': 0.41970156954196886, 'rougeLsum': 0.4201735443294585} | {'score': 79.20088697777769, 'char_order': 6, 'word_order': 0, 'beta': 2} |
+ | 0.3741 | 5.0 | 23490 | 0.3788 | 62.5154 | {'rouge1': 0.42504662037099206, 'rouge2': 0.2891987093258279, 'rougeL': 0.4211514655126128, 'rougeLsum': 0.42156526904087943} | {'score': 79.24933104383702, 'char_order': 6, 'word_order': 0, 'beta': 2} |
+
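The reported scores appear to follow the conventions of the Hugging Face `evaluate` library: sacreBLEU and chrF on a 0-100 scale, ROUGE on a 0-1 scale, and chrF returned as a dict with `char_order`, `word_order`, and `beta`. A hedged sketch of how such scores could be computed (the strings are placeholders; the evaluation set is not included in the commit):

```python
# Hedged metric-computation sketch; placeholder predictions/references only.
import evaluate

bleu = evaluate.load("sacrebleu")   # corpus BLEU, 0-100 scale
rouge = evaluate.load("rouge")      # rouge1/rouge2/rougeL/rougeLsum, 0-1 scale
chrf = evaluate.load("chrf")        # returns {'score', 'char_order', 'word_order', 'beta'}

predictions = ["नमस्ते दुनिया"]        # model outputs (placeholder)
references = [["नमस्ते दुनिया"]]       # one or more gold translations per output (placeholder)

print(bleu.compute(predictions=predictions, references=references)["score"])
print(rouge.compute(predictions=predictions, references=[refs[0] for refs in references]))
print(chrf.compute(predictions=predictions, references=references))
```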
+ ### Framework versions
+
+ - PEFT 0.10.0
+ - Transformers 4.40.2
+ - Pytorch 2.3.0+cu121
+ - Datasets 2.19.0
+ - Tokenizers 0.19.1
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:4f82e86621486b30306112f3b1a4f30a0332d63a64e186824ce701836577ad73
+ oid sha256:69508232d132e780f6aeee215edef7373f9d9c4a93f145d44e2a3914826d2e2c
  size 47294200