jingyeom committed
Commit d0ac706 • 1 Parent(s): a0198e0

Update README.md

Files changed (1)
  1. README.md +21 -29
README.md CHANGED
@@ -17,44 +17,36 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [davidkim205/nox-solar-10.7b-v4](https://huggingface.co/davidkim205/nox-solar-10.7b-v4) on an unknown dataset.
 
- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

- ### Training hyperparameters

- The following hyperparameters were used during training:
- - learning_rate: 5e-07
- - train_batch_size: 1
- - eval_batch_size: 8
- - seed: 42
- - distributed_type: multi-GPU
- - num_devices: 6
- - gradient_accumulation_steps: 8
- - total_train_batch_size: 48
- - total_eval_batch_size: 48
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 1

- ### Training results

- ### Framework versions
-
- - Transformers 4.36.2
- - Pytorch 2.0.1+cu117
- - Datasets 2.16.1
- - Tokenizers 0.15.1
+ ### Our Team
+ * Youjin Chung
+ * Jingyeom Kim

+ ## Model

+ ### Base Model
+ * [davidkim205/nox-solar-10.7b-v4](https://huggingface.co/davidkim205/nox-solar-10.7b-v4)

+ ### Hardware and Software
+ * Hardware: 8 x A100 GPUs for training our model
+ * Software: Deepspeed library & Hugging Face TRL Trainer

+ ### Dataset
+ * DPO_dataset
+   * Self-built DPO dataset (constructed from AI-Hub data)
+   * Translations of English DPO datasets such as OpenOrca DPO (ENERGY-DRINK-LOVE/translate_share_gpt_dedup_llama_SFT_1024, translated with our own model)

+ ### Training Method
+ * [DPO](https://arxiv.org/abs/2305.18290)

+ ## Benchmark

+ **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)**

+ ### 0-shot (macro F1)

+ | kobest_boolq | kobest_copa | kobest_hellaswag | kobest_sentineg |
+ | -----------: | ----------: | ---------------: | --------------: |
+ |     0.931613 |    0.740751 |         0.468602 |        0.488465 |
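
The new card states that training used [DPO](https://arxiv.org/abs/2305.18290) with the Hugging Face TRL trainer and Deepspeed, and the removed auto-generated section lists the hyperparameters (learning rate 5e-07, per-device train batch size 1, gradient accumulation 8, cosine schedule with warmup ratio 0.1, 1 epoch, seed 42). A minimal sketch of how that setup could look with `trl`'s `DPOTrainer` is below; the `beta` value, the Deepspeed config path, the `bf16` flag, and the tiny in-memory dataset are illustrative assumptions, not values taken from this repository.

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "davidkim205/nox-solar-10.7b-v4"
policy = AutoModelForCausalLM.from_pretrained(base)
ref = AutoModelForCausalLM.from_pretrained(base)   # frozen reference model for DPO
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder preference pairs; the card describes an AI-Hub-based in-house set
# plus translated OpenOrca DPO data, each with prompt / chosen / rejected columns.
train_dataset = Dataset.from_dict({
    "prompt": ["Summarize DPO in one sentence."],
    "chosen": ["DPO fine-tunes a model directly on preference pairs without a separate reward model."],
    "rejected": ["DPO is a kind of tokenizer."],
})

args = TrainingArguments(
    output_dir="nox-solar-dpo",
    num_train_epochs=1,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,
    learning_rate=5e-7,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    bf16=True,                   # assumption: mixed precision on A100s
    deepspeed="ds_config.json",  # assumption: ZeRO config used with the Deepspeed launcher
)

trainer = DPOTrainer(
    model=policy,
    ref_model=ref,
    args=args,
    beta=0.1,                    # assumption: common default DPO temperature
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```

With the six processes listed in the removed card, the per-device batch of 1 and accumulation of 8 reproduce the stated effective batch size of 48 (1 x 8 x 6).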
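
The updated card does not yet include a usage snippet. A standard `transformers` loading example is sketched below; the repository id is a placeholder, since the diff does not name the published model repo, and the plain-text prompt format is an assumption.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-org/your-dpo-model"  # placeholder: the diff does not name the published repo

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,   # assumption: half precision for a 10.7B model
    device_map="auto",
)

# "Explain in one sentence what DPO training is." (example Korean instruction)
prompt = "DPO 학습이 무엇인지 한 문장으로 설명해 주세요."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```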