---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
base_model: salohnana2018/CAMEL-BERT-MSA-domianAdaption-Single-ABSA-HARD
model-index:
- name: ABSA-SentencePair-DAPT-HARDARABS-bert-base-Camel-MSA-ru2
  results: []
---

# ABSA-SentencePair-DAPT-HARDARABS-bert-base-Camel-MSA-ru2

This model is a fine-tuned version of [salohnana2018/CAMEL-BERT-MSA-domianAdaption-Single-ABSA-HARD](https://huggingface.co./salohnana2018/CAMEL-BERT-MSA-domianAdaption-Single-ABSA-HARD) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7587
- Accuracy: 0.8941
- F1: 0.8941
- Precision: 0.8941
- Recall: 0.8941

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.5254        | 1.0   | 265  | 0.4268          | 0.8483   | 0.8483 | 0.8483    | 0.8483 |
| 0.3572        | 2.0   | 530  | 0.3457          | 0.8563   | 0.8563 | 0.8563    | 0.8563 |
| 0.2477        | 3.0   | 795  | 0.5427          | 0.8795   | 0.8795 | 0.8795    | 0.8795 |
| 0.1905        | 4.0   | 1060 | 0.8314          | 0.8899   | 0.8899 | 0.8899    | 0.8899 |
| 0.1353        | 5.0   | 1325 | 1.0504          | 0.8852   | 0.8852 | 0.8852    | 0.8852 |
| 0.12          | 6.0   | 1590 | 0.7891          | 0.8842   | 0.8842 | 0.8842    | 0.8842 |
| 0.0749        | 7.0   | 1855 | 1.3696          | 0.8894   | 0.8894 | 0.8894    | 0.8894 |
| 0.097         | 8.0   | 2120 | 0.9817          | 0.8904   | 0.8904 | 0.8904    | 0.8904 |
| 0.0624        | 9.0   | 2385 | 1.0450          | 0.8847   | 0.8847 | 0.8847    | 0.8847 |
| 0.0582        | 10.0  | 2650 | 1.3148          | 0.8970   | 0.8970 | 0.8970    | 0.8970 |
| 0.0599        | 11.0  | 2915 | 1.4069          | 0.8946   | 0.8946 | 0.8946    | 0.8946 |
| 0.0451        | 12.0  | 3180 | 1.0183          | 0.8889   | 0.8889 | 0.8889    | 0.8889 |
| 0.0309        | 13.0  | 3445 | 1.3034          | 0.8932   | 0.8932 | 0.8932    | 0.8932 |
| 0.0251        | 14.0  | 3710 | 1.5148          | 0.8946   | 0.8946 | 0.8946    | 0.8946 |
| 0.0245        | 15.0  | 3975 | 1.5136          | 0.8946   | 0.8946 | 0.8946    | 0.8946 |
| 0.0153        | 16.0  | 4240 | 1.3876          | 0.8927   | 0.8927 | 0.8927    | 0.8927 |
| 0.0161        | 17.0  | 4505 | 1.6176          | 0.8885   | 0.8885 | 0.8885    | 0.8885 |
| 0.0166        | 18.0  | 4770 | 1.6110          | 0.8937   | 0.8937 | 0.8937    | 0.8937 |
| 0.0137        | 19.0  | 5035 | 1.7113          | 0.8960   | 0.8960 | 0.8960    | 0.8960 |
| 0.0111        | 20.0  | 5300 | 1.7241          | 0.8946   | 0.8946 | 0.8946    | 0.8946 |
| 0.0101        | 21.0  | 5565 | 1.6722          | 0.8970   | 0.8970 | 0.8970    | 0.8970 |
| 0.0142        | 22.0  | 5830 | 1.6423          | 0.8904   | 0.8904 | 0.8904    | 0.8904 |
| 0.0118        | 23.0  | 6095 | 1.6384          | 0.8904   | 0.8904 | 0.8904    | 0.8904 |
| 0.0083        | 24.0  | 6360 | 1.6616          | 0.8922   | 0.8922 | 0.8922    | 0.8922 |
| 0.0124        | 25.0  | 6625 | 1.9046          | 0.8951   | 0.8951 | 0.8951    | 0.8951 |
| 0.0154        | 26.0  | 6890 | 1.6547          | 0.8974   | 0.8974 | 0.8974    | 0.8974 |
| 0.0086        | 27.0  | 7155 | 1.6440          | 0.8932   | 0.8932 | 0.8932    | 0.8932 |
| 0.0077        | 28.0  | 7420 | 1.7566          | 0.8941   | 0.8941 | 0.8941    | 0.8941 |
| 0.0076        | 29.0  | 7685 | 1.7419          | 0.8937   | 0.8937 | 0.8937    | 0.8937 |
| 0.0078        | 30.0  | 7950 | 1.7587          | 0.8941   | 0.8941 | 0.8941    | 0.8941 |

### Framework versions

- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
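
### Example usage

Since the card does not document usage, here is a minimal inference sketch with `transformers`. The repo path, the sentence-pair input format (review sentence paired with an aspect term, as the "SentencePair" name suggests), and the label mapping are assumptions, not confirmed by this card.

```python
# Minimal inference sketch. Assumptions (not confirmed by this card):
# the repo lives under the same namespace as the base model, and the
# model expects (review sentence, aspect term) as a sentence pair.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "salohnana2018/ABSA-SentencePair-DAPT-HARDARABS-bert-base-Camel-MSA-ru2"  # assumed path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

sentence = "الغرفة نظيفة والخدمة ممتازة"  # example Arabic review sentence
aspect = "الخدمة"                          # aspect term as the second segment

inputs = tokenizer(sentence, aspect, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label.get(pred, pred))  # label names depend on the training config
```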
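
### Reproducing the training configuration

For reproducibility, the hyperparameters listed above map roughly onto the following `TrainingArguments`. This is a sketch under stated assumptions, not the original training script: dataset loading is omitted, `train_ds`/`eval_ds` are hypothetical placeholders, and per-epoch evaluation is inferred from the results table.

```python
# Sketch mapping the card's hyperparameters onto TrainingArguments
# (Transformers 4.38). train_ds / eval_ds are hypothetical placeholders.
from transformers import TrainingArguments, Trainer

args = TrainingArguments(
    output_dir="ABSA-SentencePair-DAPT-HARDARABS-bert-base-Camel-MSA-ru2",
    learning_rate=5e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    adam_beta1=0.9,            # Adam betas=(0.9, 0.999), as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    evaluation_strategy="epoch",  # assumption: the table reports one eval per epoch
)

# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```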
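In every row of the results table, accuracy, F1, precision, and recall are identical, which is consistent with micro-averaged scoring: in single-label multiclass classification, micro precision, recall, and F1 all reduce to accuracy. A hedged sketch of a `compute_metrics` function that would produce such numbers, assuming that averaging choice:

```python
# Sketch of a compute_metrics function yielding identical
# accuracy/F1/precision/recall, assuming micro averaging
# (the averaging mode is not confirmed by this card).
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="micro"
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```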