jingyeom committed
Commit 4a52241
1 Parent(s): d411254

Update README.md

Files changed (1): README.md +0 -48
README.md CHANGED
@@ -10,51 +10,3 @@ model-index:
  results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # nhn_dpo_v3_leaderboard_inst_v1.3_Open-Hermes_LDCC-SOLAR-10.7B_SFT_DPO
-
- This model is a fine-tuned version of [ENERGY-DRINK-LOVE/leaderboard_inst_v1.3_Open-Hermes_LDCC-SOLAR-10.7B_SFT](https://huggingface.co/ENERGY-DRINK-LOVE/leaderboard_inst_v1.3_Open-Hermes_LDCC-SOLAR-10.7B_SFT) on an unknown dataset.
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 5e-07
- - train_batch_size: 1
- - eval_batch_size: 8
- - seed: 42
- - distributed_type: multi-GPU
- - num_devices: 7
- - gradient_accumulation_steps: 8
- - total_train_batch_size: 56
- - total_eval_batch_size: 56
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 1
-
- ### Training results
-
-
-
- ### Framework versions
-
- - Transformers 4.38.1
- - Pytorch 2.2.1+cu118
- - Datasets 2.17.1
- - Tokenizers 0.15.2
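
For reference, the hyperparameter block removed above maps naturally onto a `transformers` `TrainingArguments` object. The sketch below is a reconstruction under assumptions: `output_dir` is hypothetical, and the card never states which trainer or training script actually consumed these values.

```python
# Minimal sketch: the deleted hyperparameters expressed as TrainingArguments.
# Assumes transformers 4.38.1 (the version listed in the removed card);
# output_dir is a hypothetical placeholder.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./dpo-output",        # hypothetical, not stated in the card
    learning_rate=5e-07,
    per_device_train_batch_size=1,    # card: train_batch_size: 1
    per_device_eval_batch_size=8,     # card: eval_batch_size: 8
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,                 # card: lr_scheduler_warmup_ratio: 0.1
    seed=42,
    adam_beta1=0.9,                   # card: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)

# With 7 GPUs (distributed_type: multi-GPU, num_devices: 7), the effective
# batch size is 1 (per device) * 8 (grad accum) * 7 (devices) = 56, which
# matches total_train_batch_size: 56 in the removed card.
```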