Chung-Fan committed
Commit a4a9f03
1 Parent(s): a74d290

Training done for longformer-pubmed-20k

Files changed (2):
  1. README.md +9 -8
  2. generation_config.json +1 -1
README.md CHANGED
@@ -1,4 +1,5 @@
 ---
+library_name: transformers
 license: apache-2.0
 base_model: hyesunyun/update-summarization-bart-large-longformer
 tags:
@@ -15,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [hyesunyun/update-summarization-bart-large-longformer](https://huggingface.co/hyesunyun/update-summarization-bart-large-longformer) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.9952
+- Loss: 0.9906
 
 ## Model description
 
@@ -47,14 +48,14 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:-----:|:----:|:---------------:|
-| 1.0912 | 0.75 | 500 | 0.9952 |
+| Training Loss | Epoch | Step | Validation Loss |
+|:-------------:|:------:|:----:|:---------------:|
+| 1.0835 | 0.7477 | 500 | 0.9906 |
 
 
 ### Framework versions
 
-- Transformers 4.38.2
-- Pytorch 2.2.1+cu121
-- Datasets 2.19.0
-- Tokenizers 0.15.2
+- Transformers 4.44.2
+- Pytorch 2.4.1+cu121
+- Datasets 3.0.1
+- Tokenizers 0.19.1
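
For context, the updated card describes a fine-tuned seq2seq checkpoint. Below is a minimal usage sketch with the Transformers version listed above; the repository id is an assumption inferred from the commit message (it is not stated in the diff), and the Auto classes are assumed to resolve the checkpoint's architecture from its config.

```python
# Minimal usage sketch (assumptions noted): the repo id below is inferred from
# the commit message "longformer-pubmed-20k" and the committer's namespace;
# it is not confirmed by the diff.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo_id = "Chung-Fan/longformer-pubmed-20k"  # hypothetical repository id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

article = "Text of a long PubMed article to summarize..."
inputs = tokenizer(article, return_tensors="pt", truncation=True)

# generate() uses the checkpoint's bundled generation_config.json as its
# defaults (num_beams=4, no_repeat_ngram_size=3, ...).
summary_ids = model.generate(**inputs)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```
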
generation_config.json CHANGED
@@ -10,6 +10,6 @@
   "no_repeat_ngram_size": 3,
   "num_beams": 4,
   "pad_token_id": 1,
-  "transformers_version": "4.38.2",
+  "transformers_version": "4.44.2",
   "use_cache": false
 }
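
The fields touched in generation_config.json map directly onto Transformers' GenerationConfig. A minimal sketch of the same decoding settings expressed in code, using only the values visible in the diff (the file's remaining fields are omitted here):

```python
# Sketch of the decoding settings from generation_config.json as a
# GenerationConfig object; only the fields visible in the diff are set.
from transformers import GenerationConfig

gen_config = GenerationConfig(
    num_beams=4,             # beam search with 4 beams
    no_repeat_ngram_size=3,  # never repeat the same 3-gram in the output
    pad_token_id=1,
    use_cache=False,
)

# Passed explicitly, this overrides the defaults bundled with the checkpoint:
# summary_ids = model.generate(**inputs, generation_config=gen_config)
```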