---
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
datasets:
- indolem_sentiment
metrics:
- accuracy
- f1
model-index:
- name: scenario-normal-finetune-clf-data-indolem_sentiment-model-indolem-indobert-base-uncased
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: indolem_sentiment
      type: indolem_sentiment
      config: indolem_sentiment_nusantara_text
      split: validation
      args: indolem_sentiment_nusantara_text
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8922305764411027
    - name: F1
      type: f1
      value: 0.8154506437768241
---

# scenario-normal-finetune-clf-data-indolem_sentiment-model-indolem-indobert-base-uncased

This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co./indolem/indobert-base-uncased) on the indolem_sentiment dataset. It achieves the following results on the evaluation set (the final logged checkpoint, epoch 7.47 / step 3400 in the table below):
- Loss: 0.7311
- Accuracy: 0.8922
- F1: 0.8155

## Model description

More information needed

## Intended uses & limitations

More information needed. A hedged usage sketch is provided at the end of this card.

## Training and evaluation data

The model was fine-tuned on the indolem_sentiment dataset, an Indonesian sentiment-classification corpus (config `indolem_sentiment_nusantara_text`); the metrics above are reported on its validation split.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a rough `TrainingArguments` mapping is sketched at the end of this card):
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30

### Training results

Although 30 epochs were scheduled, the log below ends at epoch 7.47 (step 3400), the checkpoint whose metrics are reported above.

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 0.44  | 200  | 0.5133          | 0.7544   | 0.3718 |
| No log        | 0.88  | 400  | 0.4239          | 0.7995   | 0.6875 |
| 0.4818        | 1.32  | 600  | 0.3889          | 0.8647   | 0.7523 |
| 0.4818        | 1.76  | 800  | 0.3263          | 0.8872   | 0.8069 |
| 0.291         | 2.2   | 1000 | 0.3933          | 0.8847   | 0.8067 |
| 0.291         | 2.64  | 1200 | 0.4703          | 0.8847   | 0.7982 |
| 0.291         | 3.08  | 1400 | 0.5284          | 0.8622   | 0.7843 |
| 0.2432        | 3.52  | 1600 | 0.4924          | 0.8897   | 0.8136 |
| 0.2432        | 3.96  | 1800 | 0.4952          | 0.9023   | 0.8219 |
| 0.1982        | 4.4   | 2000 | 0.5157          | 0.9098   | 0.8421 |
| 0.1982        | 4.84  | 2200 | 0.6454          | 0.8847   | 0.8099 |
| 0.1982        | 5.27  | 2400 | 0.5636          | 0.9048   | 0.8348 |
| 0.1441        | 5.71  | 2600 | 0.6147          | 0.8872   | 0.8193 |
| 0.1441        | 6.15  | 2800 | 0.6280          | 0.8997   | 0.8198 |
| 0.1147        | 6.59  | 3000 | 0.6505          | 0.8947   | 0.8205 |
| 0.1147        | 7.03  | 3200 | 0.6547          | 0.8972   | 0.8285 |
| 0.1147        | 7.47  | 3400 | 0.7311          | 0.8922   | 0.8155 |

### Framework versions

- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3
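
## How to use

The card itself does not include a usage section, so the following is a minimal inference sketch, assuming the checkpoint was pushed to the Hugging Face Hub. The repo id below is a placeholder (replace `your-namespace` with the actual owner), and the `LABEL_0`/`LABEL_1` names depend on how the classifier head was configured during fine-tuning.

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual Hub path of this checkpoint.
model_id = "your-namespace/scenario-normal-finetune-clf-data-indolem_sentiment-model-indolem-indobert-base-uncased"

# Text-classification pipeline; tokenizer and model weights are both loaded from the repo.
classifier = pipeline("text-classification", model=model_id)

# Example Indonesian input ("The food at this restaurant is very delicious").
print(classifier("Makanan di restoran ini sangat enak"))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}] -- the label mapping depends on the training config.
```

The hyperparameters listed above translate roughly into the `transformers` Trainer API as follows. This is a reconstruction from the card, not the original training script; `output_dir` is arbitrary and any settings not listed in the card are left at their defaults.

```python
from transformers import TrainingArguments

# Rough mapping of the hyperparameters reported in this card.
args = TrainingArguments(
    output_dir="out",  # arbitrary; not stated in the card
    learning_rate=5e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```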