---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: final-lr2e-5-bs16-fullprecision
  results: []
---

# final-lr2e-5-bs16-fullprecision

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co./bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4633
- F1 Macro: 0.8276
- F1 Weighted: 0.8754
- F1: 0.7348
- Accuracy: 0.8775
- Confusion Matrix: [[2831 199] [291 679]]
- Confusion Matrix Norm: [[0.93432343 0.06567657] [0.3 0.7]]
- Classification Report:

  |              | precision | recall   | f1-score | support   |
  |:-------------|----------:|---------:|---------:|----------:|
  | 0            | 0.906791  | 0.934323 | 0.920351 | 3030.0000 |
  | 1            | 0.773349  | 0.700000 | 0.734848 | 970.0000  |
  | accuracy     | 0.877500  | 0.877500 | 0.877500 | 0.8775    |
  | macro avg    | 0.840070  | 0.817162 | 0.827600 | 4000.0000 |
  | weighted avg | 0.874431  | 0.877500 | 0.875367 | 4000.0000 |

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 12345
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Weighted | F1     | Accuracy | Confusion Matrix        | Confusion Matrix Norm                            | Classification Report |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:------:|:--------:|:-----------------------:|:------------------------------------------------:|:----------------------|
| 0.3362        | 1.0   | 1000 | 0.3034          | 0.8182   | 0.8693      | 0.7191 | 0.8722   | [[2835 195] [316 654]]  | [[0.93564356 0.06435644] [0.3257732 0.6742268]]  | precision recall f1-score support 0 0.899714 0.935644 0.917327 3030.00000 1 0.770318 0.674227 0.719076 970.00000 accuracy 0.872250 0.872250 0.872250 0.87225 macro avg 0.835016 0.804935 0.818202 4000.00000 weighted avg 0.868336 0.872250 0.869251 4000.00000 |
| 0.2352        | 2.0   | 2000 | 0.3730          | 0.8270   | 0.8730      | 0.7374 | 0.8732   | [[2781 249] [258 712]]  | [[0.91782178 0.08217822] [0.26597938 0.73402062]] | precision recall f1-score support 0 0.915104 0.917822 0.916461 3030.00000 1 0.740895 0.734021 0.737442 970.00000 accuracy 0.873250 0.873250 0.873250 0.87325 macro avg 0.827999 0.825921 0.826951 4000.00000 weighted avg 0.872858 0.873250 0.873049 4000.00000 |
| 0.1566        | 3.0   | 3000 | 0.4633          | 0.8276   | 0.8754      | 0.7348 | 0.8775   | [[2831 199] [291 679]]  | [[0.93432343 0.06567657] [0.3 0.7]]              | precision recall f1-score support 0 0.906791 0.934323 0.920351 3030.0000 1 0.773349 0.700000 0.734848 970.0000 accuracy 0.877500 0.877500 0.877500 0.8775 macro avg 0.840070 0.817162 0.827600 4000.0000 weighted avg 0.874431 0.877500 0.875367 4000.0000 |

### Framework versions

- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
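
The evaluation above is binary (a 2x2 confusion matrix over labels 0 and 1), so the checkpoint is presumably a sequence-classification head on top of bert-base-uncased. Below is a minimal inference sketch under that assumption; the model identifier (a local checkpoint directory or hub repository named `final-lr2e-5-bs16-fullprecision`) and the generic label indices are assumptions, not confirmed by this card.

```python
# Minimal inference sketch; the model path/name and label meaning are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "final-lr2e-5-bs16-fullprecision"  # local checkpoint dir or hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

inputs = tokenizer("Example input text.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(dim=-1).item()  # 0 or 1
probabilities = logits.softmax(dim=-1).squeeze().tolist()
print(predicted_class, probabilities)
```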
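
The reported F1 Macro, F1 Weighted, binary F1, accuracy, confusion matrices, and classification reports correspond to standard scikit-learn metrics. The exact `compute_metrics` wiring used for this run is not included in the card; the following is only a sketch of how equivalent numbers could be computed from label and prediction arrays.

```python
# Sketch of computing the metrics reported above with scikit-learn.
# This is not the original evaluation code for this run.
from sklearn.metrics import (
    accuracy_score,
    classification_report,
    confusion_matrix,
    f1_score,
)

def compute_eval_metrics(y_true, y_pred):
    return {
        "f1_macro": f1_score(y_true, y_pred, average="macro"),
        "f1_weighted": f1_score(y_true, y_pred, average="weighted"),
        "f1": f1_score(y_true, y_pred),  # binary F1 for the positive class (label 1)
        "accuracy": accuracy_score(y_true, y_pred),
        "confusion_matrix": confusion_matrix(y_true, y_pred),
        "confusion_matrix_norm": confusion_matrix(y_true, y_pred, normalize="true"),
        "classification_report": classification_report(y_true, y_pred, digits=6),
    }
```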