himanshubeniwal committed
Commit
3572dc8
1 Parent(s): 0795d1c

End of training

README.md ADDED
@@ -0,0 +1,77 @@
+ ---
+ license: apache-2.0
+ base_model: facebook/bart-base
+ tags:
+ - generated_from_trainer
+ datasets:
+ - wmt16
+ metrics:
+ - bleu
+ model-index:
+ - name: bart-base-finetuned-ro-to-en-clean
+   results:
+   - task:
+       name: Sequence-to-sequence Language Modeling
+       type: text2text-generation
+     dataset:
+       name: wmt16
+       type: wmt16
+       config: ro-en
+       split: validation
+       args: ro-en
+     metrics:
+     - name: Bleu
+       type: bleu
+       value: 15.7437
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # bart-base-finetuned-ro-to-en-clean
+
+ This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the wmt16 dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.5226
+ - Bleu: 15.7437
+ - Gen Len: 18.4167
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 1
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+ | Training Loss | Epoch | Step  | Validation Loss | Bleu    | Gen Len |
+ |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
+ | 1.0182        | 1.0   | 38145 | 1.5226          | 15.7437 | 18.4167 |
+
+
+ ### Framework versions
+
+ - Transformers 4.35.2
+ - Pytorch 2.1.1+cu121
+ - Datasets 2.15.0
+ - Tokenizers 0.15.0
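
The card records the hyperparameters but not the training script. Below is a minimal sketch of how a comparable run could be set up with `Seq2SeqTrainer`; the translation direction (ro → en, inferred from the model name), the 128-token truncation, and the preprocessing details are assumptions, not something recorded in this commit. The 38,145 steps in the results table are consistent with one epoch over the full WMT16 ro-en training split at batch size 16.

```python
# Sketch of a training setup consistent with the hyperparameters in the card.
# Assumed (not recorded in this commit): ro -> en direction from the model
# name, 128-token truncation, and the omission of a BLEU compute_metrics fn.
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
raw = load_dataset("wmt16", "ro-en")

def preprocess(batch):
    # Each example is {"translation": {"ro": ..., "en": ...}}.
    sources = [pair["ro"] for pair in batch["translation"]]
    targets = [pair["en"] for pair in batch["translation"]]
    return tokenizer(sources, text_target=targets, max_length=128, truncation=True)

tokenized = raw.map(preprocess, batched=True, remove_columns=["translation"])

args = Seq2SeqTrainingArguments(
    output_dir="bart-base-finetuned-ro-to-en-clean",
    learning_rate=2e-5,              # from the card
    per_device_train_batch_size=16,  # from the card
    per_device_eval_batch_size=16,   # from the card
    num_train_epochs=1,              # from the card
    lr_scheduler_type="linear",      # from the card
    seed=42,                         # from the card
    fp16=True,                       # "Native AMP" in the card
    predict_with_generate=True,      # generate during eval (BLEU also needs
                                     # a compute_metrics fn, omitted here)
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
```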
generation_config.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "bos_token_id": 0,
+   "decoder_start_token_id": 2,
+   "early_stopping": true,
+   "eos_token_id": 2,
+   "forced_bos_token_id": 0,
+   "forced_eos_token_id": 2,
+   "no_repeat_ngram_size": 3,
+   "num_beams": 4,
+   "pad_token_id": 1,
+   "transformers_version": "4.35.2"
+ }
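
Because this `generation_config.json` ships with the checkpoint, a plain `generate()` call picks up the decoding defaults (4-beam search, no repeated 3-grams, early stopping) without extra arguments. A minimal inference sketch follows; the hub id is an assumption inferred from the committer name and model name, not stated in the diff itself.

```python
# Minimal inference sketch. The hub id is assumed from the committer and
# model names; adjust it to wherever the checkpoint actually lives.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo_id = "himanshubeniwal/bart-base-finetuned-ro-to-en-clean"  # assumed
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

inputs = tokenizer("Aceasta este o propoziție de test.", return_tensors="pt")
# num_beams=4, no_repeat_ngram_size=3, and early_stopping=True are read
# from the generation_config.json committed above; no need to pass them.
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```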
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9c666d522232f3ab5a5a1ceb2e0ea1f9f543e711f872cd66d34b4e5c48870da1
+ oid sha256:260dabfbefc5dd582064458bd20c02238b40650cee5aa2d0a76e043717a0814b
  size 557912620
runs/Nov24_04-15-09_lingolexico/events.out.tfevents.1700779511.lingolexico.2426091.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a3f6b54ea43870b712863d4c75a9f5c6845afb264572b684e054dc81631f0153
- size 17555
+ oid sha256:c0f68619f80b2328f370dd62dbb5cecc93a034f6765f37d78c35796623c5ea77
+ size 18292
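
The updated events file holds the TensorBoard logs for this run; with the repo cloned locally, pointing TensorBoard at the `runs/` directory (`tensorboard --logdir runs`) shows the loss curve behind the single-row results table above.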