lora-midm-7b-nsmc-understanding
This model is a fine-tuned version of KT-AI/midm-bitext-S-7B-inst-v1 on the NSMC (Naver Sentiment Movie Corpus) dataset.
Model description
A LoRA adapter for KT-AI/midm-bitext-S-7B-inst-v1, fine-tuned for binary (positive/negative) sentiment classification of Korean movie reviews from NSMC.
Intended uses & limitations
More information needed
Training and evaluation data
The model was trained on the first 2,000 examples of the NSMC train split and evaluated on the first 1,000 examples of the NSMC test split (see the Modifications section below).
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 1500
- mixed_precision_training: Native AMP
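These settings map roughly onto a transformers.TrainingArguments configuration like the one below; this is a minimal sketch, and output_dir is a placeholder rather than the value used in the original run.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="lora-midm-7b-nsmc-understanding",
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,   # total train batch size 2
    max_steps=1500,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    optim="adamw_torch",             # Adam with betas=(0.9, 0.999), epsilon=1e-08
    seed=42,
    fp16=True,                       # mixed precision (Native AMP)
)
```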
Training results
Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
Test accuracy
kt-ai-midm
- Confusion Matrix:
|          | Predicted 0 | Predicted 1 |
|----------|-------------|-------------|
| Actual 0 | 443         | 49          |
| Actual 1 | 46          | 462         |

- Accuracy: 0.905
llama-2
- Confusion Matrix:
|          | Predicted 0 | Predicted 1 |
|----------|-------------|-------------|
| Actual 0 | 450         | 42          |
| Actual 1 | 56          | 452         |

- Accuracy: 0.902
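For reference, the reported accuracies follow directly from the confusion-matrix counts above (each test set contains 1,000 samples):

```python
# Accuracy recomputed from the confusion-matrix counts above.
print((443 + 462) / (443 + 49 + 46 + 462))  # kt-ai-midm: 0.905
print((450 + 452) / (450 + 42 + 56 + 452))  # llama-2:    0.902
```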
Modifications
- Data loading
  - prepare_sample_text(): change the system message and set the prompt format
  - create_datasets(): select the first 2,000 examples of the train split
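A sketch of what the two data-loading helpers might look like; the actual system message and prompt format used in the original script are not reproduced in this card, so the strings below are illustrative only.

```python
from datasets import load_dataset

def prepare_sample_text(example):
    # Illustrative system message and prompt format; the originals are not
    # documented in this card.
    system = "Classify the sentiment of the movie review as 0 (negative) or 1 (positive)."
    return f"{system}\nReview: {example['document']}\nAnswer: {example['label']}"

def create_datasets():
    # NSMC train split, restricted to the first 2,000 examples as noted above.
    train_data = load_dataset("nsmc", split="train").select(range(2000))
    return train_data
```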
- Loading the model for fine-tuning
  - script_args: set the dataset name to nsmc and the model name to KT-AI/midm-bitext-S-7B-inst-v1
  - max_steps: set the maximum number of training steps to 1500 (300 -> 1000 -> 1500 were tried; 1500 gave the highest accuracy)
  - save: set the parameters for saving checkpoints
  - push the trained adapter to the Hugging Face Hub with push_to_hub
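The fine-tuning setup might look roughly like the sketch below; the LoRA rank, alpha, and dropout values are illustrative assumptions, since they are not listed in this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "KT-AI/midm-bitext-S-7B-inst-v1"
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(base_model, trust_remote_code=True)

# Illustrative LoRA configuration; the actual r/alpha/dropout values are not
# documented in this card.
peft_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, peft_config)

# Training runs with the TrainingArguments shown earlier (max_steps=1500,
# checkpointing enabled); afterwards the adapter is uploaded with:
# model.push_to_hub("ehekaanldk/lora-midm-7b-nsmc-understanding")
```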
- Inference test
  - modify the prompt template and change the system message
  - valid_dataset: select the first 1,000 examples of the test split
  - load the fine-tuned model, then run the test
  - eval_dic: collect and print the model's outputs on valid_dataset
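A minimal inference sketch under the same assumptions; the evaluation prompt template actually used is not documented in this card, so the one below is illustrative.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "KT-AI/midm-bitext-S-7B-inst-v1"
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
model = PeftModel.from_pretrained(
    AutoModelForCausalLM.from_pretrained(base_model, trust_remote_code=True),
    "ehekaanldk/lora-midm-7b-nsmc-understanding",
)
model.eval()

# First 1,000 examples of the NSMC test split, as described above.
valid_dataset = load_dataset("nsmc", split="test").select(range(1000))

def classify(review: str) -> str:
    # Illustrative prompt; the template used in the original evaluation is not shown here.
    prompt = (
        "Classify the sentiment of the movie review as 0 (negative) or 1 (positive).\n"
        f"Review: {review}\nAnswer:"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=2)
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True).strip()
```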
- Accuracy
  - compute accuracy by comparing the model's predictions on valid_dataset against the true_labels
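Continuing the inference sketch above (classify() and valid_dataset are the assumed names from that sketch), the accuracy check could be done with scikit-learn:

```python
from sklearn.metrics import accuracy_score, confusion_matrix

# Gold labels and parsed predictions for the 1,000 validation reviews;
# this assumes the model emits a bare 0 or 1 after the prompt.
true_labels = valid_dataset["label"]
predictions = [int(classify(doc)) for doc in valid_dataset["document"]]

print(confusion_matrix(true_labels, predictions))
print("accuracy:", accuracy_score(true_labels, predictions))
```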