silmi224 committed on
Commit
f379d0b
1 Parent(s): 8b8af60

Training complete

README.md ADDED
@@ -0,0 +1,70 @@
+ ---
+ base_model: silmi224/finetune-led-35000
+ tags:
+ - summarization
+ - generated_from_trainer
+ model-index:
+ - name: led-risalah_data_v11
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # led-risalah_data_v11
+
+ This model is a fine-tuned version of [silmi224/finetune-led-35000](https://huggingface.co/silmi224/finetune-led-35000) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.6843
+ - Rouge1 Precision: 0.7035
+ - Rouge1 Recall: 0.1205
+ - Rouge1 Fmeasure: 0.2038
+
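The evaluation pattern above (high ROUGE-1 precision, low recall) typically means the generated summaries are much shorter than the references. The reported numbers come from the Trainer's ROUGE evaluation, which averages per-example F-scores; as an illustration only, the unigram-overlap computation behind ROUGE-1 can be sketched with a small standalone helper (`rouge1` below is hypothetical, not part of this repository):

```python
from collections import Counter

def rouge1(pred_tokens, ref_tokens):
    """ROUGE-1 precision/recall/F1 via unigram multiset overlap."""
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    precision = overlap / len(pred_tokens) if pred_tokens else 0.0
    recall = overlap / len(ref_tokens) if ref_tokens else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

# A short prediction against a long reference reproduces the pattern seen
# above: every predicted unigram appears in the reference (high precision),
# but most reference unigrams are missed (low recall).
p, r, f = rouge1(
    "the meeting was held".split(),
    "the meeting was held on monday and covered ten agenda items in detail".split(),
)
```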
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 1
+ - eval_batch_size: 1
+ - seed: 42
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 4
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 8
+ - mixed_precision_training: Native AMP
+
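The effective batch size follows directly from the hyperparameters: a per-device batch of 1 with 4 gradient-accumulation steps gives 4 examples per optimizer update. Combined with the step/epoch columns in the results table below, this also implies an approximate training-set size (an inference from the numbers, not a documented figure):

```python
# Sanity check of the training schedule implied by the hyperparameters above.
train_batch_size = 1
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps  # 4

# The results table reaches epoch 4.0 at optimizer step 70,
# i.e. 70 / 4 = 17.5 updates per epoch.
steps_per_epoch = 70 / 4

# 17.5 updates/epoch * 4 examples/update ~= 70 training examples (estimate).
approx_train_examples = steps_per_epoch * total_train_batch_size
```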
+ ### Training results
+
+ | Training Loss | Epoch  | Step | Validation Loss | Rouge1 Precision | Rouge1 Recall | Rouge1 Fmeasure |
+ |:-------------:|:------:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
+ | 2.6071        | 0.9714 | 17   | 1.8938          | 0.6021           | 0.1074        | 0.1803          |
+ | 1.745         | 2.0    | 35   | 1.7661          | 0.7095           | 0.1174        | 0.1994          |
+ | 1.5717        | 2.9714 | 52   | 1.7251          | 0.6704           | 0.1176        | 0.1968          |
+ | 1.4921        | 4.0    | 70   | 1.6772          | 0.7014           | 0.1175        | 0.1986          |
+ | 1.3932        | 4.9714 | 87   | 1.6745          | 0.7008           | 0.1187        | 0.2011          |
+ | 1.3002        | 6.0    | 105  | 1.6869          | 0.6913           | 0.1196        | 0.2012          |
+ | 1.2784        | 6.9714 | 122  | 1.6857          | 0.7114           | 0.1246        | 0.2097          |
+ | 1.1779        | 7.7714 | 136  | 1.6843          | 0.7035           | 0.1205        | 0.2038          |
+
+
+ ### Framework versions
+
+ - Transformers 4.41.2
+ - Pytorch 2.1.2
+ - Datasets 2.19.2
+ - Tokenizers 0.19.1
generation_config.json ADDED
@@ -0,0 +1,14 @@
+ {
+   "bos_token_id": 0,
+   "decoder_start_token_id": 2,
+   "early_stopping": true,
+   "eos_token_id": 2,
+   "length_penalty": 2.0,
+   "max_length": 128,
+   "min_length": 40,
+   "no_repeat_ngram_size": 3,
+   "num_beams": 2,
+   "pad_token_id": 1,
+   "transformers_version": "4.41.2",
+   "use_cache": false
+ }
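These defaults constrain every generated summary to 40-128 tokens, decoded with 2-beam search, early stopping, a length penalty of 2.0 (favouring longer candidates within the window), and 3-gram repetition blocking. A minimal stdlib sketch that parses the config verbatim and checks those constraints hold together:

```python
import json

# The generation_config.json above, reproduced verbatim so this check
# is self-contained.
config = json.loads("""{
  "bos_token_id": 0,
  "decoder_start_token_id": 2,
  "early_stopping": true,
  "eos_token_id": 2,
  "length_penalty": 2.0,
  "max_length": 128,
  "min_length": 40,
  "no_repeat_ngram_size": 3,
  "num_beams": 2,
  "pad_token_id": 1,
  "transformers_version": "4.41.2",
  "use_cache": false
}""")

# The length window must be non-empty, and early stopping only has an
# effect when beam search (num_beams > 1) is in use.
assert config["min_length"] < config["max_length"]
assert config["num_beams"] > 1 and config["early_stopping"]
```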
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1c88407518acf84cfb4c8a7bb61526fab64989ae23da7fe8c63058267914e710
+ oid sha256:d903390094f68774b9310b86e0e259a39606acb0fa40f74d56f3d1db914b7b59
  size 647614116
runs/Jul06_15-20-55_e73e6eb1ed3c/events.out.tfevents.1720279363.e73e6eb1ed3c.34.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9859e49ef265efa20467a23ce186c77a90494043cdaa8ab557f38eaed756c0b6
- size 11361
+ oid sha256:07afb198b58a3d59813a8c50b47cb62145b0136c149aa2e5bd290587ec9efd28
+ size 12162
runs/Jul06_15-20-55_e73e6eb1ed3c/events.out.tfevents.1720281920.e73e6eb1ed3c.34.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:54a8107773d8708c3bb740142954bda1a3097e5e8108a5f56e4a9c54b883e81e
+ size 535