---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: sentiment-pt-pl20-3
  results: []
---

# sentiment-pt-pl20-3

This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co./indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2974
- Accuracy: 0.8847
- Precision: 0.8609
- Recall: 0.8609
- F1: 0.8609

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.551         | 1.0   | 122  | 0.5009          | 0.7243   | 0.6557    | 0.6074 | 0.6144 |
| 0.4528        | 2.0   | 244  | 0.4118          | 0.7820   | 0.7487    | 0.7857 | 0.7578 |
| 0.3588        | 3.0   | 366  | 0.3428          | 0.8521   | 0.8440    | 0.7854 | 0.8064 |
| 0.3192        | 4.0   | 488  | 0.3117          | 0.8546   | 0.8246    | 0.8246 | 0.8246 |
| 0.2714        | 5.0   | 610  | 0.3037          | 0.8672   | 0.8359    | 0.8560 | 0.8446 |
| 0.257         | 6.0   | 732  | 0.2833          | 0.8772   | 0.8592    | 0.8381 | 0.8475 |
| 0.2405        | 7.0   | 854  | 0.2861          | 0.8847   | 0.8599    | 0.8634 | 0.8616 |
| 0.2163        | 8.0   | 976  | 0.2954          | 0.8797   | 0.8539    | 0.8574 | 0.8556 |
| 0.2135        | 9.0   | 1098 | 0.2942          | 0.8747   | 0.8510    | 0.8438 | 0.8473 |
| 0.2001        | 10.0  | 1220 | 0.3002          | 0.8822   | 0.8656    | 0.8442 | 0.8537 |
| 0.1825        | 11.0  | 1342 | 0.3011          | 0.8922   | 0.8749    | 0.8612 | 0.8676 |
| 0.1765        | 12.0  | 1464 | 0.2858          | 0.8897   | 0.8695    | 0.8620 | 0.8656 |
| 0.1674        | 13.0  | 1586 | 0.2932          | 0.8947   | 0.8698    | 0.8805 | 0.8749 |
| 0.1597        | 14.0  | 1708 | 0.2937          | 0.8872   | 0.8599    | 0.8752 | 0.8669 |
| 0.1564        | 15.0  | 1830 | 0.2963          | 0.8947   | 0.8757    | 0.8680 | 0.8717 |
| 0.142         | 16.0  | 1952 | 0.3025          | 0.8922   | 0.8734    | 0.8637 | 0.8683 |
| 0.143         | 17.0  | 2074 | 0.2951          | 0.8897   | 0.8649    | 0.8720 | 0.8683 |
| 0.1315        | 18.0  | 2196 | 0.3013          | 0.8822   | 0.8574    | 0.8592 | 0.8583 |
| 0.1378        | 19.0  | 2318 | 0.3038          | 0.8872   | 0.8658    | 0.8602 | 0.8629 |
| 0.1333        | 20.0  | 2440 | 0.2974          | 0.8847   | 0.8609    | 0.8609 | 0.8609 |

### Framework versions

- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
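
### Reproducing the training setup

The hyperparameters above map onto the Hugging Face `Trainer` API roughly as follows. This is a minimal sketch, not the original training script: the dataset and preprocessing are undocumented, `per_device_train_batch_size=30` assumes single-device training, and the per-epoch evaluation/logging strategies are inferred from the results table. The Adam betas and epsilon listed are the Transformers defaults, so they are left implicit.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed in this card. Adam betas=(0.9, 0.999)
# and epsilon=1e-08 match the library defaults (adam_beta1/adam_beta2/
# adam_epsilon), so they are not set explicitly.
args = TrainingArguments(
    output_dir="sentiment-pt-pl20-3",
    learning_rate=5e-5,
    per_device_train_batch_size=30,  # assumption: single device, so this equals train_batch_size
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20.0,
    evaluation_strategy="epoch",  # assumption: per-epoch eval, inferred from the results table
    logging_strategy="epoch",     # assumption: per-epoch logging of training loss
)
```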
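
## How to use

A minimal inference sketch using the `transformers` text-classification pipeline. The model id below is a placeholder for wherever this checkpoint is hosted, and the label names returned (e.g. `LABEL_0`/`LABEL_1`) depend on this checkpoint's config, which is not documented above.

```python
from transformers import pipeline

# Placeholder model id; replace with the actual hub id or local path of this checkpoint.
classifier = pipeline("text-classification", model="sentiment-pt-pl20-3")

# An Indonesian example: "The service was fast and very friendly!"
print(classifier("Pelayanannya cepat dan ramah sekali!"))
# Output is a list like [{'label': ..., 'score': ...}]; the label-to-sentiment
# mapping depends on the checkpoint's id2label config.
```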