---
license: cc-by-nc-4.0
base_model: KT-AI/midm-bitext-S-7B-inst-v1
tags:
  - generated_from_trainer
model-index:
  - name: lora-midm-7b-nsmc-review-understanding
    results: []
datasets:
  - nsmc
---

# lora-midm-7b-nsmc-review-understanding

This model is a fine-tuned version of KT-AI/midm-bitext-S-7B-inst-v1 on the NSMC dataset.

## Model description

A model fine-tuned on the NSMC (Naver Sentiment Movie Corpus) dataset.

## Intended uses & limitations

More information needed

## Training and evaluation data

The first 2,000 examples of the NSMC train split were used as training data, and the first 1,000 examples of the NSMC test split as evaluation data.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 200
- mixed_precision_training: Native AMP
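A hypothetical reconstruction of this configuration as `transformers.TrainingArguments` (the `output_dir` name and `fp16` choice for "Native AMP" are assumptions, not taken from the original training script):

```python
# Sketch only: maps the hyperparameters listed above onto TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="lora-midm-7b-nsmc-review-understanding",  # assumed name
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size: 1 * 2 = 2
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    max_steps=200,                   # training_steps: 200
    fp16=True,                       # "Native AMP" mixed precision (assumed fp16)
)
```

The Adam betas=(0.9, 0.999) and epsilon=1e-08 listed above are the defaults of the optimizer `TrainingArguments` selects, so they need no explicit arguments here.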

### Training results

์ด 200step ๋Œ๋ ธ์Šต๋‹ˆ๋‹ค. 50step๋งˆ๋‹ค checkํ•œ ๊ฒฐ๊ณผ๋Š” ์•„๋ž˜์™€ ๊ฐ™์Šต๋‹ˆ๋‹ค.
50 step training loss: 1.6881
100 step training loss: 1.1443
150 step training loss: 1.0563
200 step training loss: 1.0446

## Experiment details and classification results

๋ฏธ์„ธํŠœ๋‹ํ•œ ๋ชจ๋ธ์— nsmc test data 1000๊ฐœ๋ฅผ ์ž…๋ ฅ์œผ๋กœ ์ฃผ์–ด ๊ธ์ • ๋˜๋Š” ๋ถ€์ • ๋‹จ์–ด๋ฅผ ์ƒ์„ฑํ•˜๋„๋ก ํ–ˆ์Šต๋‹ˆ๋‹ค.
๋‹จ์–ด ์ƒ์„ฑ ๊ฒฐ๊ณผ '๊ธ์ •' 444๊ฐœ, '๋ถ€์ •' 532๊ฐœ, ',' 4๊ฐœ, '์ •' 20๊ฐœ ์ž…๋‹ˆ๋‹ค.
์ •ํ™•๋„๋Š” ์ •๋‹ต์ˆ˜ / 1000 * 100์œผ๋กœ ๊ณ„์‚ฐํ–ˆ์œผ๋ฉฐ, ๊ฒฐ๊ณผ๋Š” 87.80% ์ž…๋‹ˆ๋‹ค.

## Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0