---
pipeline_tag: text-generation
license: apache-2.0
language:
- en
tags:
- T3Q-ko-solar-sft-v2.0
- nlpai-lab/kullm-v2
base_model: davidkim205/nox-solar-10.7b-v4
datasets:
- nlpai-lab/kullm-v2
model-index:
- name: T3Q-ko-solar-sft-v2.0
  results: []
---

Update @ 2024.03.18

## T3Q-ko-solar-sft-v2.0

This model is an SFT fine-tuned version of davidkim205/nox-solar-10.7b-v4, trained on the nlpai-lab/kullm-v2 dataset.

**Model Developers** Chihoon Lee (chlee10), T3Q

## Training hyperparameters

The following hyperparameters were used during training:

```python
# Hyperparameters for the dataset and number of training passes
batch_size = 16
num_epochs = 1
micro_batch = 1
gradient_accumulation_steps = batch_size // micro_batch

# Hyperparameters for the training procedure
cutoff_len = 4096
lr_scheduler = 'cosine'
warmup_ratio = 0.06
# warmup_steps = 100
learning_rate = 4e-4
optimizer = 'adamw_torch'
weight_decay = 0.01
max_grad_norm = 1.0

# LoRA config (QLoRA)
lora_r = 16
lora_alpha = 16
lora_dropout = 0.05
lora_target_modules = ["gate_proj", "down_proj", "up_proj"]

# Options controlling the inputs produced by the tokenizer
train_on_inputs = False
add_eos_token = False

# NEFTune params
noise_alpha: int = 5
```

## Framework versions

- Transformers 4.34.1
- Pytorch 2.1.0+cu121
- Datasets 2.13.0
- Tokenizers 0.14.1
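
## Illustrative training configuration

For reference, the hyperparameters above roughly map onto standard PEFT / Transformers configuration objects as sketched below. This is a minimal, illustrative sketch only: the actual training script is not published in this card, the `output_dir` is a placeholder, and the choice of trainer (e.g. TRL's `SFTTrainer` for the 4096-token cutoff and NEFTune noise) is an assumption.

```python
# Illustrative sketch: maps the hyperparameters listed above onto common
# PEFT / Transformers config objects. Not the original training script.
from peft import LoraConfig
from transformers import TrainingArguments

# QLoRA adapter configuration (lora_r / lora_alpha / lora_dropout / target modules)
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["gate_proj", "down_proj", "up_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)

# Optimizer and schedule settings
# (effective batch size 16 = micro_batch 1 x 16 gradient-accumulation steps)
training_args = TrainingArguments(
    output_dir="outputs",            # placeholder path, not from the card
    per_device_train_batch_size=1,   # micro_batch
    gradient_accumulation_steps=16,  # batch_size // micro_batch
    num_train_epochs=1,
    learning_rate=4e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.06,
    optim="adamw_torch",
    weight_decay=0.01,
    max_grad_norm=1.0,
)

# The 4096-token cutoff, prompt-token masking (train_on_inputs=False) and
# NEFTune noise (noise_alpha=5) would be handled by the SFT trainer itself,
# e.g. TRL's SFTTrainer(..., max_seq_length=4096, neftune_noise_alpha=5).
# This is an assumption; the card does not state which trainer was used.
```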