ahmedmbutt committed
Commit af9f58b (1 parent: 1528a2d)

End of training
README.md CHANGED

```diff
@@ -7,34 +7,7 @@ metrics:
 - rouge
 model-index:
 - name: PTS-Bart-Large-CNN
-  results:
-  - task:
-      type: summarization
-      name: Summarization
-    dataset:
-      name: PTS Dataset
-      type: PTS-Dataset
-    metrics:
-    - name: Rouge1
-      type: rouge
-      value: 0.6376
-    - name: Rouge2
-      type: rouge
-      value: 0.4143
-    - name: Rougel
-      type: rouge
-      value: 0.538
-    - name: Rougelsum
-      type: rouge
-      value: 0.5387
-pipeline_tag: summarization
-datasets:
-- ahmedmbutt/PTS-Dataset
-language:
-- en
-library_name: transformers
-widget:
-- text: "I have to say that I do miss talking to a good psychiatrist- however. I could sit and argue for ages with a psychiatrist who is intelligent and kind (quite hard to find- but they do exist). Especially now that I have a PhD in philosophy and have read everything that can be found on madness- including the notes they wrote about me when I was in the hospital. Nowadays- psychiatrists have a tendency to sign me off pretty quickly when I come onto their radar. They don’t wish to deal with me- I tire them out."
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -42,16 +15,16 @@ should probably proofread and complete it, then remove this comment. -->
 
 # PTS-Bart-Large-CNN
 
-This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the PTS dataset.
+This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.2638
-- Rouge1: 0.6376
-- Rouge2: 0.4143
-- Rougel: 0.538
-- Rougelsum: 0.5387
-- Gen Len: 76.8417
+- Loss: 1.1442
+- Rouge1: 0.6591
+- Rouge2: 0.449
+- Rougel: 0.5635
+- Rougelsum: 0.5633
+- Gen Len: 78.7977
 
-<!-- ## Model description
+## Model description
 
 More information needed
 
@@ -61,7 +34,7 @@ More information needed
 
 ## Training and evaluation data
 
-More information needed -->
+More information needed
 
 ## Training procedure
 
@@ -81,14 +54,14 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
 |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
-| No log | 1.0 | 180 | 0.8748 | 0.6166 | 0.3827 | 0.5058 | 0.5055 | 77.6583 |
-| No log | 2.0 | 360 | 0.8774 | 0.6307 | 0.4064 | 0.5302 | 0.531 | 77.5111 |
-| 0.6761 | 3.0 | 540 | 0.9064 | 0.635 | 0.4052 | 0.5309 | 0.5311 | 76.2833 |
-| 0.6761 | 4.0 | 720 | 1.0386 | 0.6329 | 0.4038 | 0.5261 | 0.5262 | 78.4889 |
-| 0.6761 | 5.0 | 900 | 1.0993 | 0.6285 | 0.4016 | 0.5239 | 0.5246 | 77.0083 |
-| 0.2016 | 6.0 | 1080 | 1.2025 | 0.6351 | 0.4126 | 0.5351 | 0.5356 | 76.0722 |
-| 0.2016 | 7.0 | 1260 | 1.2399 | 0.6356 | 0.4108 | 0.5362 | 0.5368 | 78.5361 |
-| 0.2016 | 8.0 | 1440 | 1.2638 | 0.6376 | 0.4143 | 0.538 | 0.5387 | 76.8417 |
+| No log | 1.0 | 220 | 0.8235 | 0.6279 | 0.4019 | 0.5268 | 0.5267 | 82.8295 |
+| No log | 2.0 | 440 | 0.8053 | 0.6461 | 0.4278 | 0.5486 | 0.5484 | 78.6318 |
+| 0.7147 | 3.0 | 660 | 0.8889 | 0.6471 | 0.4324 | 0.5491 | 0.5488 | 79.4432 |
+| 0.7147 | 4.0 | 880 | 0.9679 | 0.6533 | 0.4391 | 0.5538 | 0.5534 | 80.2023 |
+| 0.2566 | 5.0 | 1100 | 0.9734 | 0.6563 | 0.4422 | 0.5574 | 0.5571 | 78.9727 |
+| 0.2566 | 6.0 | 1320 | 1.0504 | 0.6538 | 0.4436 | 0.559 | 0.5585 | 78.5682 |
+| 0.1136 | 7.0 | 1540 | 1.1172 | 0.6591 | 0.4474 | 0.5646 | 0.5647 | 78.6068 |
+| 0.1136 | 8.0 | 1760 | 1.1442 | 0.6591 | 0.449 | 0.5635 | 0.5633 | 78.7977 |
 
 
 ### Framework versions
@@ -96,4 +69,4 @@ The following hyperparameters were used during training:
 - Transformers 4.41.2
 - Pytorch 2.3.0+cu121
 - Datasets 2.20.0
-- Tokenizers 0.19.1
+- Tokenizers 0.19.1
```
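The Rouge1/Rouge2/Rougel values in the updated card are n-gram overlap F-measures between generated and reference summaries. As a rough illustration of what Rouge1 measures, here is a minimal sketch of unigram-overlap F1 (whitespace tokenization, no stemming; the card's own numbers come from a full ROUGE implementation, so treat this as an approximation, not the evaluation code):

```python
from collections import Counter

def rouge1_f(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 between a candidate and a reference summary."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Clipped matches: each reference unigram can be credited at most
    # as many times as it appears in the reference.
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

An identical candidate and reference score 1.0, and a candidate covering two of three reference words scores 0.8, which is the same scale as the 0.6591 Rouge1 reported above.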
model.safetensors CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:003aca389b0b6cd06bacf11784b3edf3ed4ba62cc2b1db939c81a459af5cfc12
+oid sha256:cae43481aa38024bdcee30122c7131fea498f6e9724d3739ddd8666d8411fa4a
 size 1625422896
```
runs/Jun24_14-58-37_bca9004d7cc1/events.out.tfevents.1719241118.bca9004d7cc1.270.0 CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:68275b19d79a08f3353ec7480eebf98cccec39a1c86ee0f3fee85219d7faa93d
-size 9727
+oid sha256:49d02f42f59f636db4899eff44d9f54ed3d559b2656ce74d47b071d6563ed3d1
+size 11131
```
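Both changed binaries above are stored as git-LFS pointer files: three `key value` lines (`version`, `oid` with a hash-algorithm prefix, `size` in bytes), which is why the diff touches only the `oid`/`size` lines rather than the weights themselves. A minimal sketch of parsing such a pointer (a hypothetical helper, not part of this repo or of git-lfs):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a git-LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # The oid carries its hash algorithm as a prefix, e.g. "sha256:<hex>".
    algo, _, digest = fields["oid"].partition(":")
    return {
        "version": fields["version"],
        "algo": algo,
        "oid": digest,
        "size": int(fields["size"]),  # size of the real object in bytes
    }
```

Applied to the new `model.safetensors` pointer, this yields algorithm `sha256` and size 1625422896 bytes (~1.6 GB), matching the unchanged `size` line in the diff.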