---
license: mit
base_model: gogamza/kobart-base-v2
tags:
- generated_from_trainer
model-index:
- name: KoBART_base_v2-trial
  results: []
---

# KoBART_base_v2-trial

This model is a fine-tuned version of [gogamza/kobart-base-v2](https://huggingface.co./gogamza/kobart-base-v2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1815

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4147        | 0.11  | 50   | 0.5490          |
| 0.5457        | 0.22  | 100  | 0.4810          |
| 0.4642        | 0.32  | 150  | 0.3971          |
| 0.4364        | 0.43  | 200  | 0.3955          |
| 0.4111        | 0.54  | 250  | 0.3851          |
| 0.3888        | 0.65  | 300  | 0.3438          |
| 0.3586        | 0.76  | 350  | 0.3290          |
| 0.3304        | 0.87  | 400  | 0.3201          |
| 0.3337        | 0.97  | 450  | 0.2992          |
| 0.2677        | 1.08  | 500  | 0.3161          |
| 0.2576        | 1.19  | 550  | 0.2981          |
| 0.2467        | 1.3   | 600  | 0.2846          |
| 0.2369        | 1.41  | 650  | 0.2674          |
| 0.226         | 1.52  | 700  | 0.2529          |
| 0.2204        | 1.62  | 750  | 0.2446          |
| 0.204         | 1.73  | 800  | 0.2400          |
| 0.2071        | 1.84  | 850  | 0.2262          |
| 0.1911        | 1.95  | 900  | 0.2153          |
| 0.1591        | 2.06  | 950  | 0.2121          |
| 0.1338        | 2.16  | 1000 | 0.2090          |
| 0.1312        | 2.27  | 1050 | 0.1986          |
| 0.1336        | 2.38  | 1100 | 0.1947          |
| 0.1205        | 2.49  | 1150 | 0.1903          |
| 0.1162        | 2.6   | 1200 | 0.1867          |
| 0.1187        | 2.71  | 1250 | 0.1840          |
| 0.1171        | 2.81  | 1300 | 0.1821          |
| 0.1149        | 2.92  | 1350 | 0.1815          |

### Framework versions

- Transformers 4.36.0
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
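
### Reproducing the training configuration (sketch)

The hyperparameters listed above map fairly directly onto `Seq2SeqTrainingArguments`. The snippet below is only a minimal reconstruction under assumptions: the output directory name, the single-device interpretation of the batch sizes, and the 50-step evaluation cadence (inferred from the results table) are not stated explicitly in this card.

```python
from transformers import Seq2SeqTrainingArguments

# Minimal sketch of the training arguments implied by this card.
# Only the values listed under "Training hyperparameters" come from the card;
# output_dir, eval/logging cadence, and per-device batch interpretation are assumptions.
training_args = Seq2SeqTrainingArguments(
    output_dir="KoBART_base_v2-trial",   # assumed, matching the model name
    learning_rate=5e-4,
    per_device_train_batch_size=64,      # assumes a single device with no gradient accumulation
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=20,
    num_train_epochs=3,
    fp16=True,                           # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=50,                       # inferred from the 50-step cadence in the results table
    logging_steps=50,
)
```

Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer configuration in Transformers, so it is not set explicitly above.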
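
### Loading the model (sketch)

Because the training data and task are not documented in this card, the following is only a generic sequence-to-sequence loading and generation example. The checkpoint path and the input text are placeholders, not values from the card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical local path to this fine-tuned checkpoint; replace with the actual location or Hub id.
model_path = "path/to/KoBART_base_v2-trial"

tokenizer = AutoTokenizer.from_pretrained("gogamza/kobart-base-v2")  # tokenizer of the base model
model = AutoModelForSeq2SeqLM.from_pretrained(model_path)

text = "여기에 입력 텍스트"  # placeholder Korean input; the intended task is not specified in this card
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
output_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```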