# indobert-sentiment-nanda
This model is a fine-tuned version of mdhugol/indonesia-bert-sentiment-classification on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.4355
- Accuracy: 0.8569
## Model description
More information needed
## Intended uses & limitations
More information needed
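Pending fuller documentation, the checkpoint can be exercised with the standard `transformers` pipeline API. A minimal sketch, assuming a hypothetical Hub repo id of `nanda/indobert-sentiment-nanda` and the three-class label scheme inherited from the base model:

```python
from transformers import pipeline

# Hypothetical repo id; substitute the actual Hub path of this checkpoint.
classifier = pipeline(
    "text-classification",
    model="nanda/indobert-sentiment-nanda",
)

# Example Indonesian input ("The food is delicious and the service is friendly").
print(classifier("Makanannya enak dan pelayanannya ramah"))
# e.g. [{'label': 'LABEL_0', 'score': 0.97}] -- the label names and their
# mapping to sentiments come from the config of the base model,
# mdhugol/indonesia-bert-sentiment-classification, and should be verified there.
```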
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the equivalent `TrainingArguments` follows the list):
- learning_rate: 1e-08
- train_batch_size: 8
- eval_batch_size: 8
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 200
- mixed_precision_training: Native AMP
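These settings map onto `transformers.TrainingArguments` roughly as follows. A minimal sketch only: the `output_dir` name is illustrative, and dataset loading plus the `Trainer` call are omitted since the training data is not documented:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is illustrative.
training_args = TrainingArguments(
    output_dir="indobert-sentiment-nanda",
    learning_rate=1e-8,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=41,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=200,
    fp16=True,  # native AMP mixed-precision training
)
```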
### Training results
Training Loss | Epoch | Step | Validation Loss | Accuracy |
---|---|---|---|---|
4.3321 | 2.2422 | 500 | 4.1707 | 0.1667 |
3.8125 | 4.4843 | 1000 | 3.6117 | 0.2037 |
3.1757 | 6.7265 | 1500 | 3.0306 | 0.2475 |
2.6361 | 8.9686 | 2000 | 2.4769 | 0.3215 |
2.1065 | 11.2108 | 2500 | 1.9885 | 0.3872 |
1.677 | 13.4529 | 3000 | 1.5943 | 0.4848 |
1.3258 | 15.6951 | 3500 | 1.3057 | 0.5673 |
1.1192 | 17.9372 | 4000 | 1.1166 | 0.6229 |
0.9727 | 20.1794 | 4500 | 0.9917 | 0.6633 |
0.8813 | 22.4215 | 5000 | 0.9079 | 0.6852 |
0.8143 | 24.6637 | 5500 | 0.8496 | 0.7239 |
0.7987 | 26.9058 | 6000 | 0.8057 | 0.7475 |
0.7569 | 29.1480 | 6500 | 0.7709 | 0.7626 |
0.7165 | 31.3901 | 7000 | 0.7430 | 0.7710 |
0.7174 | 33.6323 | 7500 | 0.7192 | 0.7929 |
0.676 | 35.8744 | 8000 | 0.6983 | 0.7997 |
0.6645 | 38.1166 | 8500 | 0.6795 | 0.8030 |
0.6448 | 40.3587 | 9000 | 0.6620 | 0.8047 |
0.6547 | 42.6009 | 9500 | 0.6464 | 0.8081 |
0.6099 | 44.8430 | 10000 | 0.6319 | 0.8148 |
0.6292 | 47.0852 | 10500 | 0.6188 | 0.8182 |
0.6004 | 49.3274 | 11000 | 0.6072 | 0.8232 |
0.5795 | 51.5695 | 11500 | 0.5967 | 0.8266 |
0.5803 | 53.8117 | 12000 | 0.5867 | 0.8316 |
0.5738 | 56.0538 | 12500 | 0.5772 | 0.8333 |
0.5677 | 58.2960 | 13000 | 0.5684 | 0.8367 |
0.5475 | 60.5381 | 13500 | 0.5606 | 0.8367 |
0.5692 | 62.7803 | 14000 | 0.5535 | 0.8367 |
0.5249 | 65.0224 | 14500 | 0.5468 | 0.8384 |
0.5288 | 67.2646 | 15000 | 0.5402 | 0.8384 |
0.5382 | 69.5067 | 15500 | 0.5337 | 0.8401 |
0.519 | 71.7489 | 16000 | 0.5278 | 0.8401 |
0.5219 | 73.9910 | 16500 | 0.5225 | 0.8401 |
0.5051 | 76.2332 | 17000 | 0.5175 | 0.8401 |
0.5289 | 78.4753 | 17500 | 0.5122 | 0.8418 |
0.4888 | 80.7175 | 18000 | 0.5085 | 0.8401 |
0.5165 | 82.9596 | 18500 | 0.5039 | 0.8418 |
0.505 | 85.2018 | 19000 | 0.4996 | 0.8418 |
0.482 | 87.4439 | 19500 | 0.4960 | 0.8418 |
0.4926 | 89.6861 | 20000 | 0.4923 | 0.8434 |
0.4916 | 91.9283 | 20500 | 0.4889 | 0.8418 |
0.4811 | 94.1704 | 21000 | 0.4861 | 0.8434 |
0.4963 | 96.4126 | 21500 | 0.4822 | 0.8485 |
0.4684 | 98.6547 | 22000 | 0.4789 | 0.8502 |
0.4907 | 100.8969 | 22500 | 0.4763 | 0.8502 |
0.4704 | 103.1390 | 23000 | 0.4741 | 0.8485 |
0.4807 | 105.3812 | 23500 | 0.4714 | 0.8502 |
0.4806 | 107.6233 | 24000 | 0.4691 | 0.8502 |
0.462 | 109.8655 | 24500 | 0.4669 | 0.8502 |
0.4747 | 112.1076 | 25000 | 0.4648 | 0.8502 |
0.4674 | 114.3498 | 25500 | 0.4628 | 0.8502 |
0.4689 | 116.5919 | 26000 | 0.4607 | 0.8502 |
0.4667 | 118.8341 | 26500 | 0.4587 | 0.8502 |
0.4566 | 121.0762 | 27000 | 0.4569 | 0.8535 |
0.4679 | 123.3184 | 27500 | 0.4551 | 0.8519 |
0.4714 | 125.5605 | 28000 | 0.4535 | 0.8519 |
0.4532 | 127.8027 | 28500 | 0.4520 | 0.8519 |
0.4621 | 130.0448 | 29000 | 0.4505 | 0.8519 |
0.4458 | 132.2870 | 29500 | 0.4491 | 0.8535 |
0.4693 | 134.5291 | 30000 | 0.4480 | 0.8535 |
0.443 | 136.7713 | 30500 | 0.4471 | 0.8535 |
0.4503 | 139.0135 | 31000 | 0.4460 | 0.8535 |
0.4449 | 141.2556 | 31500 | 0.4451 | 0.8535 |
0.4587 | 143.4978 | 32000 | 0.4440 | 0.8535 |
0.4445 | 145.7399 | 32500 | 0.4432 | 0.8535 |
0.4465 | 147.9821 | 33000 | 0.4423 | 0.8552 |
0.4483 | 150.2242 | 33500 | 0.4414 | 0.8552 |
0.4392 | 152.4664 | 34000 | 0.4409 | 0.8552 |
0.4514 | 154.7085 | 34500 | 0.4401 | 0.8552 |
0.4444 | 156.9507 | 35000 | 0.4394 | 0.8569 |
0.457 | 159.1928 | 35500 | 0.4388 | 0.8552 |
0.434 | 161.4350 | 36000 | 0.4383 | 0.8569 |
0.458 | 163.6771 | 36500 | 0.4380 | 0.8569 |
0.4369 | 165.9193 | 37000 | 0.4375 | 0.8569 |
0.4442 | 168.1614 | 37500 | 0.4371 | 0.8569 |
0.4487 | 170.4036 | 38000 | 0.4369 | 0.8569 |
0.4388 | 172.6457 | 38500 | 0.4366 | 0.8569 |
0.451 | 174.8879 | 39000 | 0.4364 | 0.8569 |
0.4446 | 177.1300 | 39500 | 0.4362 | 0.8569 |
0.4288 | 179.3722 | 40000 | 0.4360 | 0.8569 |
0.4577 | 181.6143 | 40500 | 0.4359 | 0.8569 |
0.438 | 183.8565 | 41000 | 0.4358 | 0.8569 |
0.4319 | 186.0987 | 41500 | 0.4357 | 0.8569 |
0.4457 | 188.3408 | 42000 | 0.4357 | 0.8569 |
0.4312 | 190.5830 | 42500 | 0.4356 | 0.8569 |
0.4557 | 192.8251 | 43000 | 0.4355 | 0.8569 |
0.4401 | 195.0673 | 43500 | 0.4356 | 0.8569 |
0.4468 | 197.3094 | 44000 | 0.4355 | 0.8569 |
0.4492 | 199.5516 | 44500 | 0.4355 | 0.8569 |
### Framework versions
- Transformers 4.43.3
- Pytorch 2.4.0
- Datasets 2.20.0
- Tokenizers 0.19.1