---
license: apache-2.0
datasets:
- DILAB-HYU/KoQuality
language:
- ko
pipeline_tag: text-generation
tags:
- llama-2-ko
- KoQuality
base_model: hyunseoki/ko-ref-llama2-7b
---
This model is an instruction-tuned version of ko-ref-llama2-7b, trained on only 10% of the combined [KULLM, OIG, KoAlpaca] instruction datasets.

Training data: `len10_k100_mppl_n0.1.json` (152 training steps)
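
A minimal inference sketch using the Transformers causal-LM API. The repo id below is a placeholder; substitute this model's actual Hub id.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DILAB-HYU/koquality-ko-ref-llama2-7b"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```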
## Training hyperparameters
- learning_rate: 5e-5
- train_batch_size: 1
- seed: 42
- distributed_type: multi-GPU (A30 24GB) + CPU offloading (160GB)
- num_devices: 2
- gradient_accumulation_steps: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
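
As a sketch, these hyperparameters map onto Transformers' `TrainingArguments` roughly as follows; `output_dir` and the DeepSpeed config path are placeholders, since the actual training script is not published here.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./koquality-ko-ref-llama2-7b",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=32,  # effective batch = 1 * 32 * 2 GPUs = 64
    num_train_epochs=2,
    lr_scheduler_type="linear",
    seed=42,
    # Adam betas=(0.9, 0.999) and epsilon=1e-8 are the defaults
    deepspeed="ds_config_offload.json",  # placeholder DeepSpeed ZeRO/offload config
)
```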
## Framework versions
- Transformers 4.30.2
- PyTorch 2.0.1+cu117
- Datasets 2.11.0
- DeepSpeed 0.9.5