---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: sentiment-pt-pl10-4
  results: []
---

# sentiment-pt-pl10-4

This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co./indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2951
- Accuracy: 0.8847
- Precision: 0.8609
- Recall: 0.8609
- F1: 0.8609

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5448        | 1.0   | 122  | 0.5047          | 0.7243   | 0.6629    | 0.6524 | 0.6568 |
| 0.4527        | 2.0   | 244  | 0.4320          | 0.7945   | 0.7667    | 0.8121 | 0.7752 |
| 0.3603        | 3.0   | 366  | 0.3370          | 0.8471   | 0.8393    | 0.7768 | 0.7985 |
| 0.3081        | 4.0   | 488  | 0.2995          | 0.8722   | 0.8453    | 0.8471 | 0.8462 |
| 0.2793        | 5.0   | 610  | 0.3008          | 0.8747   | 0.8537    | 0.8388 | 0.8457 |
| 0.2526        | 6.0   | 732  | 0.2987          | 0.8697   | 0.8449    | 0.8378 | 0.8412 |
| 0.2478        | 7.0   | 854  | 0.3030          | 0.8772   | 0.8609    | 0.8356 | 0.8467 |
| 0.2337        | 8.0   | 976  | 0.2974          | 0.8672   | 0.8463    | 0.8260 | 0.8351 |
| 0.217         | 9.0   | 1098 | 0.2774          | 0.8722   | 0.8562    | 0.8271 | 0.8395 |
| 0.1966        | 10.0  | 1220 | 0.2846          | 0.8697   | 0.8411    | 0.8478 | 0.8443 |
| 0.199         | 11.0  | 1342 | 0.2910          | 0.8822   | 0.8639    | 0.8467 | 0.8545 |
| 0.187         | 12.0  | 1464 | 0.2871          | 0.8772   | 0.8609    | 0.8356 | 0.8467 |
| 0.1812        | 13.0  | 1586 | 0.2813          | 0.8797   | 0.8585    | 0.8474 | 0.8526 |
| 0.1633        | 14.0  | 1708 | 0.2957          | 0.8822   | 0.8555    | 0.8642 | 0.8596 |
| 0.1607        | 15.0  | 1830 | 0.2875          | 0.8922   | 0.8706    | 0.8687 | 0.8697 |
| 0.1584        | 16.0  | 1952 | 0.2859          | 0.8822   | 0.8610    | 0.8517 | 0.8561 |
| 0.1535        | 17.0  | 2074 | 0.2924          | 0.8847   | 0.8609    | 0.8609 | 0.8609 |
| 0.1432        | 18.0  | 2196 | 0.2966          | 0.8847   | 0.8599    | 0.8634 | 0.8616 |
| 0.1466        | 19.0  | 2318 | 0.2947          | 0.8822   | 0.8596    | 0.8542 | 0.8568 |
| 0.1411        | 20.0  | 2440 | 0.2951          | 0.8847   | 0.8609    | 0.8609 | 0.8609 |

### Framework versions

- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
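
### Reproducing the configuration

The hyperparameters listed above map onto a `transformers` `TrainingArguments` setup roughly as in the sketch below. This is a reconstruction from the values in this card, not the original training script; `output_dir` and the per-epoch evaluation setting are assumptions.

```python
from transformers import TrainingArguments

# Reconstructed from the hyperparameters listed in this card -- not the original script.
training_args = TrainingArguments(
    output_dir="sentiment-pt-pl10-4",  # assumption: output path
    learning_rate=5e-05,
    per_device_train_batch_size=30,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,                    # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=20.0,
    evaluation_strategy="epoch",       # assumption: the results table reports per-epoch validation
)
```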
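
## How to use

A minimal inference sketch, assuming the model is published on the Hugging Face Hub under a path like `<user>/sentiment-pt-pl10-4` (a placeholder) and that it is a sentence-level Indonesian sentiment classifier, as the base model and metrics suggest:

```python
from transformers import pipeline

# "<user>/sentiment-pt-pl10-4" is a placeholder -- substitute the actual Hub repo ID.
classifier = pipeline("text-classification", model="<user>/sentiment-pt-pl10-4")

# Indonesian for "The service was very satisfying."
print(classifier("Pelayanannya sangat memuaskan."))
# The returned labels depend on the id2label mapping saved with this model.
```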