---
license: apache-2.0
base_model: bert-base-uncased
tags:
  - generated_from_trainer
metrics:
  - accuracy
model-index:
  - name: best_model-sst-2-16-13
    results: []
---

# best_model-sst-2-16-13

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.5420
- Accuracy: 0.8125
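
For a quick smoke test, the checkpoint can be loaded with the standard `transformers` text-classification pipeline. A minimal sketch, assuming the hub repository id is `simonycl/best_model-sst-2-16-13` (the full id is not stated in the card itself):

```python
# Minimal usage sketch; the repo id below is an assumption, not confirmed by this card.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="simonycl/best_model-sst-2-16-13",  # assumed hub id
)

print(classifier("a gripping, beautifully shot film"))
# -> [{'label': 'LABEL_1', 'score': ...}]
# Label names depend on the id2label mapping saved in the model config.
```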

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):

- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 150
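
These settings map onto a `transformers` `TrainingArguments` configuration roughly as follows. This is a reconstruction from the list above, not the author's training script; the output directory is a placeholder, and the model/dataset wiring is omitted:

```python
# Sketch reconstructed from the hyperparameter list above.
# output_dir is a placeholder; train/eval datasets are assumed to be prepared elsewhere.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="best_model-sst-2-16-13",
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",  # linear decay after warmup
    warmup_steps=500,
    num_train_epochs=150,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```

Note that the run logs only one optimization step per epoch (see the results table below), so the 150 total steps never complete the 500-step warmup; the effective learning rate ramps up to only about 3e-6 by the final epoch, which is consistent with the slow, monotonic decline in validation loss.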

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 0.6945 | 0.5938 |
| No log | 2.0 | 2 | 0.6945 | 0.5938 |
| No log | 3.0 | 3 | 0.6945 | 0.5938 |
| No log | 4.0 | 4 | 0.6944 | 0.5938 |
| No log | 5.0 | 5 | 0.6944 | 0.5938 |
| No log | 6.0 | 6 | 0.6944 | 0.5938 |
| No log | 7.0 | 7 | 0.6944 | 0.5938 |
| No log | 8.0 | 8 | 0.6943 | 0.5938 |
| No log | 9.0 | 9 | 0.6943 | 0.5938 |
| 0.7032 | 10.0 | 10 | 0.6943 | 0.5938 |
| 0.7032 | 11.0 | 11 | 0.6942 | 0.5938 |
| 0.7032 | 12.0 | 12 | 0.6942 | 0.5938 |
| 0.7032 | 13.0 | 13 | 0.6941 | 0.5938 |
| 0.7032 | 14.0 | 14 | 0.6940 | 0.5938 |
| 0.7032 | 15.0 | 15 | 0.6940 | 0.5938 |
| 0.7032 | 16.0 | 16 | 0.6939 | 0.5938 |
| 0.7032 | 17.0 | 17 | 0.6938 | 0.5938 |
| 0.7032 | 18.0 | 18 | 0.6937 | 0.5938 |
| 0.7032 | 19.0 | 19 | 0.6936 | 0.5938 |
| 0.709 | 20.0 | 20 | 0.6935 | 0.5938 |
| 0.709 | 21.0 | 21 | 0.6934 | 0.5938 |
| 0.709 | 22.0 | 22 | 0.6933 | 0.5938 |
| 0.709 | 23.0 | 23 | 0.6932 | 0.5938 |
| 0.709 | 24.0 | 24 | 0.6931 | 0.5938 |
| 0.709 | 25.0 | 25 | 0.6930 | 0.5938 |
| 0.709 | 26.0 | 26 | 0.6928 | 0.5938 |
| 0.709 | 27.0 | 27 | 0.6927 | 0.5938 |
| 0.709 | 28.0 | 28 | 0.6926 | 0.5938 |
| 0.709 | 29.0 | 29 | 0.6924 | 0.5938 |
| 0.6984 | 30.0 | 30 | 0.6923 | 0.5938 |
| 0.6984 | 31.0 | 31 | 0.6921 | 0.5938 |
| 0.6984 | 32.0 | 32 | 0.6920 | 0.5938 |
| 0.6984 | 33.0 | 33 | 0.6918 | 0.5938 |
| 0.6984 | 34.0 | 34 | 0.6916 | 0.5938 |
| 0.6984 | 35.0 | 35 | 0.6915 | 0.5938 |
| 0.6984 | 36.0 | 36 | 0.6913 | 0.5938 |
| 0.6984 | 37.0 | 37 | 0.6911 | 0.5938 |
| 0.6984 | 38.0 | 38 | 0.6909 | 0.5938 |
| 0.6984 | 39.0 | 39 | 0.6907 | 0.5938 |
| 0.6833 | 40.0 | 40 | 0.6905 | 0.5938 |
| 0.6833 | 41.0 | 41 | 0.6903 | 0.5938 |
| 0.6833 | 42.0 | 42 | 0.6901 | 0.5938 |
| 0.6833 | 43.0 | 43 | 0.6899 | 0.5938 |
| 0.6833 | 44.0 | 44 | 0.6897 | 0.5938 |
| 0.6833 | 45.0 | 45 | 0.6895 | 0.5938 |
| 0.6833 | 46.0 | 46 | 0.6893 | 0.5938 |
| 0.6833 | 47.0 | 47 | 0.6890 | 0.5938 |
| 0.6833 | 48.0 | 48 | 0.6888 | 0.5938 |
| 0.6833 | 49.0 | 49 | 0.6885 | 0.5938 |
| 0.6831 | 50.0 | 50 | 0.6882 | 0.5938 |
| 0.6831 | 51.0 | 51 | 0.6879 | 0.5938 |
| 0.6831 | 52.0 | 52 | 0.6876 | 0.5938 |
| 0.6831 | 53.0 | 53 | 0.6873 | 0.5938 |
| 0.6831 | 54.0 | 54 | 0.6870 | 0.5938 |
| 0.6831 | 55.0 | 55 | 0.6867 | 0.625 |
| 0.6831 | 56.0 | 56 | 0.6863 | 0.625 |
| 0.6831 | 57.0 | 57 | 0.6860 | 0.625 |
| 0.6831 | 58.0 | 58 | 0.6856 | 0.625 |
| 0.6831 | 59.0 | 59 | 0.6852 | 0.625 |
| 0.669 | 60.0 | 60 | 0.6848 | 0.625 |
| 0.669 | 61.0 | 61 | 0.6844 | 0.625 |
| 0.669 | 62.0 | 62 | 0.6839 | 0.625 |
| 0.669 | 63.0 | 63 | 0.6835 | 0.625 |
| 0.669 | 64.0 | 64 | 0.6830 | 0.625 |
| 0.669 | 65.0 | 65 | 0.6824 | 0.625 |
| 0.669 | 66.0 | 66 | 0.6819 | 0.625 |
| 0.669 | 67.0 | 67 | 0.6814 | 0.625 |
| 0.669 | 68.0 | 68 | 0.6808 | 0.625 |
| 0.669 | 69.0 | 69 | 0.6802 | 0.625 |
| 0.6556 | 70.0 | 70 | 0.6796 | 0.625 |
| 0.6556 | 71.0 | 71 | 0.6789 | 0.625 |
| 0.6556 | 72.0 | 72 | 0.6782 | 0.625 |
| 0.6556 | 73.0 | 73 | 0.6774 | 0.625 |
| 0.6556 | 74.0 | 74 | 0.6766 | 0.6562 |
| 0.6556 | 75.0 | 75 | 0.6757 | 0.6562 |
| 0.6556 | 76.0 | 76 | 0.6747 | 0.6562 |
| 0.6556 | 77.0 | 77 | 0.6736 | 0.6562 |
| 0.6556 | 78.0 | 78 | 0.6725 | 0.6562 |
| 0.6556 | 79.0 | 79 | 0.6713 | 0.6562 |
| 0.6248 | 80.0 | 80 | 0.6700 | 0.6562 |
| 0.6248 | 81.0 | 81 | 0.6687 | 0.6562 |
| 0.6248 | 82.0 | 82 | 0.6673 | 0.6562 |
| 0.6248 | 83.0 | 83 | 0.6660 | 0.6562 |
| 0.6248 | 84.0 | 84 | 0.6647 | 0.6562 |
| 0.6248 | 85.0 | 85 | 0.6635 | 0.6562 |
| 0.6248 | 86.0 | 86 | 0.6622 | 0.6562 |
| 0.6248 | 87.0 | 87 | 0.6603 | 0.5938 |
| 0.6248 | 88.0 | 88 | 0.6586 | 0.5938 |
| 0.6248 | 89.0 | 89 | 0.6574 | 0.5938 |
| 0.6013 | 90.0 | 90 | 0.6565 | 0.5938 |
| 0.6013 | 91.0 | 91 | 0.6557 | 0.5938 |
| 0.6013 | 92.0 | 92 | 0.6548 | 0.5938 |
| 0.6013 | 93.0 | 93 | 0.6541 | 0.625 |
| 0.6013 | 94.0 | 94 | 0.6538 | 0.625 |
| 0.6013 | 95.0 | 95 | 0.6537 | 0.625 |
| 0.6013 | 96.0 | 96 | 0.6531 | 0.625 |
| 0.6013 | 97.0 | 97 | 0.6522 | 0.625 |
| 0.6013 | 98.0 | 98 | 0.6518 | 0.625 |
| 0.6013 | 99.0 | 99 | 0.6515 | 0.6562 |
| 0.5622 | 100.0 | 100 | 0.6502 | 0.6562 |
| 0.5622 | 101.0 | 101 | 0.6481 | 0.6562 |
| 0.5622 | 102.0 | 102 | 0.6457 | 0.6562 |
| 0.5622 | 103.0 | 103 | 0.6434 | 0.6562 |
| 0.5622 | 104.0 | 104 | 0.6411 | 0.6562 |
| 0.5622 | 105.0 | 105 | 0.6384 | 0.7188 |
| 0.5622 | 106.0 | 106 | 0.6362 | 0.6875 |
| 0.5622 | 107.0 | 107 | 0.6338 | 0.6875 |
| 0.5622 | 108.0 | 108 | 0.6311 | 0.6875 |
| 0.5622 | 109.0 | 109 | 0.6281 | 0.6562 |
| 0.5022 | 110.0 | 110 | 0.6236 | 0.6875 |
| 0.5022 | 111.0 | 111 | 0.6193 | 0.6875 |
| 0.5022 | 112.0 | 112 | 0.6141 | 0.6562 |
| 0.5022 | 113.0 | 113 | 0.6088 | 0.6875 |
| 0.5022 | 114.0 | 114 | 0.6046 | 0.6875 |
| 0.5022 | 115.0 | 115 | 0.6024 | 0.6875 |
| 0.5022 | 116.0 | 116 | 0.6014 | 0.6875 |
| 0.5022 | 117.0 | 117 | 0.6004 | 0.6875 |
| 0.5022 | 118.0 | 118 | 0.5993 | 0.6875 |
| 0.5022 | 119.0 | 119 | 0.5982 | 0.6875 |
| 0.4576 | 120.0 | 120 | 0.5969 | 0.6875 |
| 0.4576 | 121.0 | 121 | 0.5957 | 0.6875 |
| 0.4576 | 122.0 | 122 | 0.5944 | 0.7188 |
| 0.4576 | 123.0 | 123 | 0.5929 | 0.7188 |
| 0.4576 | 124.0 | 124 | 0.5916 | 0.7188 |
| 0.4576 | 125.0 | 125 | 0.5903 | 0.7188 |
| 0.4576 | 126.0 | 126 | 0.5887 | 0.7188 |
| 0.4576 | 127.0 | 127 | 0.5873 | 0.7188 |
| 0.4576 | 128.0 | 128 | 0.5857 | 0.75 |
| 0.4576 | 129.0 | 129 | 0.5837 | 0.75 |
| 0.4105 | 130.0 | 130 | 0.5819 | 0.75 |
| 0.4105 | 131.0 | 131 | 0.5797 | 0.75 |
| 0.4105 | 132.0 | 132 | 0.5781 | 0.75 |
| 0.4105 | 133.0 | 133 | 0.5770 | 0.75 |
| 0.4105 | 134.0 | 134 | 0.5756 | 0.75 |
| 0.4105 | 135.0 | 135 | 0.5734 | 0.75 |
| 0.4105 | 136.0 | 136 | 0.5714 | 0.75 |
| 0.4105 | 137.0 | 137 | 0.5694 | 0.75 |
| 0.4105 | 138.0 | 138 | 0.5673 | 0.75 |
| 0.4105 | 139.0 | 139 | 0.5651 | 0.75 |
| 0.3744 | 140.0 | 140 | 0.5628 | 0.75 |
| 0.3744 | 141.0 | 141 | 0.5605 | 0.7812 |
| 0.3744 | 142.0 | 142 | 0.5581 | 0.7812 |
| 0.3744 | 143.0 | 143 | 0.5555 | 0.7812 |
| 0.3744 | 144.0 | 144 | 0.5532 | 0.7812 |
| 0.3744 | 145.0 | 145 | 0.5510 | 0.7812 |
| 0.3744 | 146.0 | 146 | 0.5489 | 0.7812 |
| 0.3744 | 147.0 | 147 | 0.5470 | 0.7812 |
| 0.3744 | 148.0 | 148 | 0.5453 | 0.7812 |
| 0.3744 | 149.0 | 149 | 0.5435 | 0.7812 |
| 0.3294 | 150.0 | 150 | 0.5420 | 0.8125 |

### Framework versions

- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.4.0
- Tokenizers 0.13.3
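
When reproducing this environment, the listed versions can be verified at runtime. A small sketch (note that `4.32.0.dev0` was a development build of Transformers and may not be installable from PyPI; a nearby release is likely compatible, but that is an assumption):

```python
# Print installed versions next to those listed in this card.
import datasets
import tokenizers
import torch
import transformers

for mod, expected in [
    (transformers, "4.32.0.dev0"),
    (torch, "2.0.1+cu118"),
    (datasets, "2.4.0"),
    (tokenizers, "0.13.3"),
]:
    print(f"{mod.__name__}: installed {mod.__version__}, card lists {expected}")
```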