---
license: gemma
library_name: transformers
datasets:
- jondurbin/gutenberg-dpo-v0.1
---
### exl2 quant (measurement.json in main branch)
---
### check revisions for quants
---
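
For convenience, a specific quant revision can be pulled with `huggingface_hub`. This is a minimal sketch, not part of the original card; the repo id and revision name below are placeholders, so check the repo's actual branch names first:

```python
# Minimal sketch: download one quant revision with huggingface_hub.
# The repo id and revision below are placeholders; list the repo's
# branches to see which quant revisions actually exist.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="lucyknada/gemma-2-Ifable-9B-exl2",  # placeholder repo id
    revision="6.0bpw",                           # placeholder quant branch
)
print(local_dir)
```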

# ifable/gemma-2-Ifable-9B

This model ranked first on the Creative Writing benchmark (https://eqbench.com/creative_writing.html) as of September 10, 2024.
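
For reference, here is a minimal way to try the unquantized model with `transformers`. This is a sketch rather than an official snippet from the card; it assumes the standard Gemma-2 chat-template flow:

```python
# Minimal sketch: load the model and generate with the Gemma-2 chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ifable/gemma-2-Ifable-9B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write the opening line of a gothic short story."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```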

## Training and evaluation data

- Gutenberg: https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1
- A carefully curated proprietary creative writing dataset

## Training procedure

Training method: SimPO ([princeton-nlp/SimPO](https://github.com/princeton-nlp/SimPO): Simple Preference Optimization with a Reference-Free Reward)
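
For context, SimPO replaces the DPO reference model with an implicit reward: the length-normalized log-probability of a response, scaled by β, with a target margin γ between chosen and rejected responses. A minimal sketch of the loss follows; β and γ here are illustrative defaults, not the card's actual settings:

```python
# Minimal sketch of the SimPO loss (reference-free preference optimization).
# beta scales the length-normalized log-prob reward; gamma is a target margin.
import torch
import torch.nn.functional as F

def simpo_loss(policy_chosen_logps: torch.Tensor,    # sum of log p(y_w | x)
               policy_rejected_logps: torch.Tensor,  # sum of log p(y_l | x)
               chosen_lengths: torch.Tensor,
               rejected_lengths: torch.Tensor,
               beta: float = 10.0,    # illustrative value
               gamma: float = 5.0):   # illustrative value
    # Implicit rewards: average (length-normalized) log-probs scaled by beta.
    chosen_rewards = beta * policy_chosen_logps / chosen_lengths
    rejected_rewards = beta * policy_rejected_logps / rejected_lengths
    # Bradley-Terry loss with a reward margin gamma; no reference model needed.
    return -F.logsigmoid(chosen_rewards - rejected_rewards - gamma).mean()
```

The `Sft Loss` column in the results table below suggests a small SFT term was also mixed into the objective, as some SimPO implementations support.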

It achieves the following results on the evaluation set:

- Loss: 1.0163
- Rewards/chosen: -21.6822
- Rewards/rejected: -47.8754
- Rewards/accuracies: 0.9167
- Rewards/margins: 26.1931
- Logps/rejected: -4.7875
- Logps/chosen: -2.1682
- Logits/rejected: -17.0475
- Logits/chosen: -12.0041
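
As a quick sanity check on these numbers: the rewards are consistent with β times the length-normalized log-probs, and the margin is chosen minus rejected. Note that β = 10 is inferred from the ratios above, not stated anywhere in the card:

```python
# Hypothetical consistency check on the reported eval metrics.
beta = 10.0  # inferred from Rewards/chosen / Logps/chosen; not stated in the card
assert abs(beta * -2.1682 - (-21.6822)) < 1e-3        # Rewards/chosen
assert abs(beta * -4.7875 - (-47.8754)) < 1e-3        # Rewards/rejected
assert abs((-21.6822) - (-47.8754) - 26.1931) < 1e-3  # Rewards/margins
```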

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 8e-07
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
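
The total train batch size above follows directly from per-device batch size × devices × gradient accumulation: 1 × 8 × 16 = 128. A hedged sketch of these settings as `transformers.TrainingArguments` (the card does not name the exact trainer used; the output directory and precision flag are assumptions):

```python
# Sketch: the listed hyperparameters expressed as transformers.TrainingArguments.
# 1 (per device) x 8 (GPUs) x 16 (grad accumulation) = 128 effective batch size.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="gemma-2-ifable-9b-simpo",  # placeholder
    learning_rate=8e-7,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=16,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1.0,
    seed=42,
    bf16=True,  # assumption; precision is not stated in the card
)
assert args.per_device_train_batch_size * 8 * args.gradient_accumulation_steps == 128
```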

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Sft Loss |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|
| 1.4444 | 0.9807 | 35 | 1.0163 | -21.6822 | -47.8754 | 0.9167 | 26.1931 | -4.7875 | -2.1682 | -17.0475 | -12.0041 | 0.0184 |

### Framework versions

- Transformers 4.43.4
- Pytorch 2.3.0a0+ebedce2
- Datasets 2.20.0
- Tokenizers 0.19.1

We are looking for product managers and operations managers to build applications on top of our model, as well as AI engineers to join us; we are also open to business cooperation. Contact: [email protected]