---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
- bleu
model-index:
- name: reddit_gen_final
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# reddit_gen_final

This model is a fine-tuned version of [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small) on an unknown dataset.
It achieves the following results on the evaluation set (a brief usage sketch follows the list):
- Loss: 2.5050
- Rouge: rouge1 0.5318, rouge2 0.3267, rougeL 0.4940, rougeLsum 0.4997
- Perplexity: 810.2161
- Bleu: 0.3233 (n-gram precisions 0.5457 / 0.3400 / 0.2736 / 0.2384; brevity penalty 0.9748; length ratio 0.9751; translation length 130796; reference length 134140)

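Since the card provides no usage section, here is a minimal, non-authoritative sketch of loading the model through the standard `transformers` causal-LM API. The repo id `sentientconch/reddit_gen_final` is an assumption based on the model name above; the prompt and generation settings are purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "sentientconch/reddit_gen_final"  # assumed Hub path; adjust if different
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# DialoGPT-style models mark the end of each dialogue turn with the EOS token.
prompt = "What is a good first project for learning Python?"
inputs = tokenizer(prompt + tokenizer.eos_token, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=64,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 tokenizers have no pad token
    )

# Decode only the newly generated tokens, i.e. the model's reply.
reply = tokenizer.decode(
    output_ids[0, inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(reply)
```
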
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 0.001
- train_batch_size: 1024
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 32768
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1077
- mixed_precision_training: Native AMP

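The original training script is not part of this card, so treat the following as a rough illustration only: the settings above map onto `transformers.TrainingArguments` roughly as shown below. `output_dir` is a placeholder, and the reported Adam betas and epsilon are the library defaults, so they need no explicit arguments.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="reddit_gen_final",    # placeholder, not from the card
    learning_rate=1e-3,
    per_device_train_batch_size=1024,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=32,   # 1024 * 32 = 32768 total train batch size
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1077,                   # "training_steps" above
    fp16=True,                        # "Native AMP" mixed precision
)
```
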
### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-L | Rouge-Lsum | Perplexity | Bleu   |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:----------:|:----------:|:------:|
| 3.9872        | 18.02 | 320  | 3.1407          | 0.4237  | 0.1768  | 0.3737  | 0.3807     | 1007.2803  | 0.1796 |
| 3.0112        | 37.02 | 640  | 2.6693          | 0.5007  | 0.2846  | 0.4599  | 0.4662     | 891.6387   | 0.2826 |
| 2.5776        | 56.02 | 960  | 2.5050          | 0.5318  | 0.3267  | 0.4940  | 0.4997     | 810.2161   | 0.3233 |

Bleu details per evaluation step (reference length 134140 in all cases; values rounded to four decimal places):

| Step | 1-gram | 2-gram | 3-gram | 4-gram | Brevity Penalty | Length Ratio | Translation Length |
|:----:|:------:|:------:|:------:|:------:|:---------------:|:------------:|:------------------:|
| 320  | 0.4476 | 0.1997 | 0.1295 | 0.0980 | 0.9786          | 0.9788       | 131301             |
| 640  | 0.5153 | 0.2977 | 0.2287 | 0.1940 | 0.9838          | 0.9840       | 131989             |
| 960  | 0.5457 | 0.3400 | 0.2736 | 0.2384 | 0.9748          | 0.9751       | 130796             |

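The Rouge and Bleu dictionaries in this card match the output format of the Hugging Face `evaluate` library, so the scores were presumably computed along the following lines (the prediction and reference strings below are placeholders, not the actual evaluation data):

```python
import evaluate

rouge = evaluate.load("rouge")
bleu = evaluate.load("bleu")

# Placeholder texts; the real evaluation used the held-out set.
predictions = ["a generated reply"]
references = ["the reference reply"]

rouge_scores = rouge.compute(predictions=predictions, references=references)
# bleu expects one list of reference strings per prediction.
bleu_scores = bleu.compute(
    predictions=predictions, references=[[r] for r in references]
)

print(rouge_scores)  # {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}
print(bleu_scores)   # {'bleu': ..., 'precisions': [...], 'brevity_penalty': ..., ...}
```
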
### Framework versions

- Transformers 4.28.1
- PyTorch 1.13.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2