---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: sentiment-pt-pl5-4
  results: []
---

# sentiment-pt-pl5-4

This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co./indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3030
- Accuracy: 0.8847
- Precision: 0.8589
- Recall: 0.8659
- F1: 0.8623

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5491 | 1.0 | 122 | 0.5238 | 0.7118 | 0.6569 | 0.6636 | 0.6598 |
| 0.467 | 2.0 | 244 | 0.4078 | 0.7895 | 0.7486 | 0.7685 | 0.7564 |
| 0.3738 | 3.0 | 366 | 0.3612 | 0.8496 | 0.8538 | 0.7711 | 0.7971 |
| 0.321 | 4.0 | 488 | 0.3049 | 0.8596 | 0.8406 | 0.8107 | 0.8233 |
| 0.2799 | 5.0 | 610 | 0.2869 | 0.8772 | 0.8693 | 0.8256 | 0.8431 |
| 0.2619 | 6.0 | 732 | 0.2957 | 0.8747 | 0.8523 | 0.8413 | 0.8465 |
| 0.2475 | 7.0 | 854 | 0.2894 | 0.8722 | 0.8528 | 0.8321 | 0.8413 |
| 0.2336 | 8.0 | 976 | 0.3088 | 0.8596 | 0.8374 | 0.8157 | 0.8253 |
| 0.2233 | 9.0 | 1098 | 0.2827 | 0.8772 | 0.8561 | 0.8431 | 0.8492 |
| 0.2053 | 10.0 | 1220 | 0.2771 | 0.8897 | 0.8659 | 0.8695 | 0.8676 |
| 0.1926 | 11.0 | 1342 | 0.2792 | 0.8847 | 0.8573 | 0.8709 | 0.8636 |
| 0.1837 | 12.0 | 1464 | 0.2857 | 0.8872 | 0.8687 | 0.8552 | 0.8615 |
| 0.1748 | 13.0 | 1586 | 0.2900 | 0.8922 | 0.8706 | 0.8687 | 0.8697 |
| 0.1664 | 14.0 | 1708 | 0.3100 | 0.8872 | 0.8587 | 0.8802 | 0.8681 |
| 0.1575 | 15.0 | 1830 | 0.3073 | 0.8872 | 0.8593 | 0.8777 | 0.8675 |
| 0.1553 | 16.0 | 1952 | 0.3023 | 0.8972 | 0.8781 | 0.8723 | 0.8751 |
| 0.1439 | 17.0 | 2074 | 0.3054 | 0.8847 | 0.8581 | 0.8684 | 0.8629 |
| 0.1509 | 18.0 | 2196 | 0.3081 | 0.8847 | 0.8599 | 0.8634 | 0.8616 |
| 0.1489 | 19.0 | 2318 | 0.3018 | 0.8897 | 0.8670 | 0.8670 | 0.8670 |
| 0.146 | 20.0 | 2440 | 0.3030 | 0.8847 | 0.8589 | 0.8659 | 0.8623 |

### Framework versions

- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
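
## How to use

A minimal inference sketch. The model id below is an assumption (substitute the actual Hub path of this checkpoint), and the label names depend on whether `id2label` was set during fine-tuning; since the training data is not documented, the mapping printed may be the generic `LABEL_0` / `LABEL_1`.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed model id; replace with the actual repository path of this checkpoint.
model_id = "sentiment-pt-pl5-4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Example Indonesian input ("This film is very good").
text = "Film ini sangat bagus"
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
# Prints the configured label name for the highest-scoring class.
print(model.config.id2label[predicted_id])
```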
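## Reproducing the training configuration

A hedged `TrainingArguments` sketch that mirrors the hyperparameters listed above. The Adam betas and epsilon shown in the card are the `transformers` defaults, so they are not set explicitly; dataset loading, tokenization, and the `Trainer` call are omitted because the training data is not documented, and the per-epoch evaluation strategy is an assumption based on the per-epoch results table.

```python
from transformers import TrainingArguments

# Values taken from the "Training hyperparameters" section of this card.
training_args = TrainingArguments(
    output_dir="sentiment-pt-pl5-4",
    learning_rate=5e-5,
    per_device_train_batch_size=30,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",  # assumption: metrics above are reported once per epoch
)
```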