---
license: apache-2.0
base_model: bert-large-uncased
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: sample_data
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: wnut_17
      type: wnut_17
      config: wnut_17
      split: validation
      args: wnut_17
    metrics:
    - name: Precision
      type: precision
      value: 0.7290715372907154
    - name: Recall
      type: recall
      value: 0.5729665071770335
    - name: F1
      type: f1
      value: 0.6416610850636303
    - name: Accuracy
      type: accuracy
      value: 0.9602644796236252
---

# sample_data
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the wnut_17 dataset. It achieves the following results on the evaluation set (a usage sketch follows the results):
- Loss: 0.2684
- Precision: 0.7291
- Recall: 0.5730
- F1: 0.6417
- Accuracy: 0.9603
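
The card ships without usage code, so here is a minimal inference sketch. The repo id `your-username/sample_data` is a placeholder assumption; swap in the actual model id or a local checkpoint path.

```python
# Minimal inference sketch (hypothetical repo id; replace with the real one).
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="your-username/sample_data",  # hypothetical; or a local checkpoint dir
    aggregation_strategy="simple",      # merge word pieces into whole entities
)

print(ner("Heard there's a pop-up Shake Shack opening in Brooklyn tomorrow"))
```

`aggregation_strategy="simple"` groups sub-word tokens back into entity spans, which is usually what you want for BIO-tagged WNUT entities.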
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
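
For reference, the WNUT 17 dataset named in the metadata loads directly with the `datasets` library; the metrics above are reported on its `validation` split. A minimal sketch:

```python
# Sketch: inspect the WNUT 17 splits and label scheme with the datasets library.
from datasets import load_dataset

wnut = load_dataset("wnut_17")
print(wnut)  # DatasetDict with train / validation / test splits

example = wnut["validation"][0]
print(example["tokens"])    # pre-tokenized words
print(example["ner_tags"])  # integer BIO label ids

# Human-readable tag names: O plus B-/I- tags for six entity types.
print(wnut["train"].features["ner_tags"].feature.names)
```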
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (mirrored in the `TrainingArguments` sketch after the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
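
These map onto `transformers.TrainingArguments` roughly as below. This is a sketch, not the exact training script: preprocessing, data collation, and the `Trainer` call are omitted, and the `output_dir` name is an assumption. The Adam betas and epsilon listed above are the `Trainer` optimizer defaults.

```python
# Sketch of a training setup mirroring the listed hyperparameters.
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-large-uncased",
    num_labels=13,  # WNUT 17: O plus B-/I- tags for six entity types
)

args = TrainingArguments(
    output_dir="sample_data",        # assumed from the model name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```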
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.6403        | 0.12  | 25   | 0.4914          | 0.0       | 0.0    | 0.0    | 0.9205   |
| 0.334         | 0.23  | 50   | 0.4539          | 0.0       | 0.0    | 0.0    | 0.9205   |
| 0.2346        | 0.35  | 75   | 0.3556          | 0.4118    | 0.0419 | 0.0760 | 0.9236   |
| 0.2352        | 0.47  | 100  | 0.2936          | 0.4337    | 0.2464 | 0.3143 | 0.9341   |
| 0.1725        | 0.59  | 125  | 0.2898          | 0.4983    | 0.3421 | 0.4057 | 0.9372   |
| 0.1449        | 0.7   | 150  | 0.2858          | 0.4606    | 0.3493 | 0.3973 | 0.9399   |
| 0.1548        | 0.82  | 175  | 0.2487          | 0.5699    | 0.3900 | 0.4631 | 0.9435   |
| 0.1429        | 0.94  | 200  | 0.3071          | 0.6888    | 0.3469 | 0.4614 | 0.9415   |
| 0.1506        | 1.06  | 225  | 0.2252          | 0.4820    | 0.4952 | 0.4885 | 0.9465   |
| 0.1196        | 1.17  | 250  | 0.2512          | 0.5463    | 0.4940 | 0.5188 | 0.9485   |
| 0.1062        | 1.29  | 275  | 0.2916          | 0.6395    | 0.4605 | 0.5355 | 0.9495   |
| 0.0983        | 1.41  | 300  | 0.2402          | 0.6199    | 0.5443 | 0.5796 | 0.9497   |
| 0.1068        | 1.53  | 325  | 0.2470          | 0.6018    | 0.4773 | 0.5324 | 0.9504   |
| 0.0879        | 1.64  | 350  | 0.2360          | 0.6468    | 0.5586 | 0.5995 | 0.9511   |
| 0.0928        | 1.76  | 375  | 0.2267          | 0.6126    | 0.5467 | 0.5777 | 0.9514   |
| 0.1045        | 1.88  | 400  | 0.2258          | 0.6934    | 0.5060 | 0.5851 | 0.9542   |
| 0.0933        | 2.0   | 425  | 0.2403          | 0.6954    | 0.5108 | 0.5890 | 0.9547   |
| 0.0497        | 2.11  | 450  | 0.2539          | 0.6460    | 0.5371 | 0.5865 | 0.9554   |
| 0.0607        | 2.23  | 475  | 0.3065          | 0.7293    | 0.4737 | 0.5743 | 0.9523   |
| 0.0857        | 2.35  | 500  | 0.2565          | 0.6770    | 0.4964 | 0.5728 | 0.9545   |
| 0.0513        | 2.46  | 525  | 0.2569          | 0.6931    | 0.5323 | 0.6022 | 0.9569   |
| 0.0697        | 2.58  | 550  | 0.2273          | 0.7193    | 0.5670 | 0.6341 | 0.9566   |
| 0.0446        | 2.7   | 575  | 0.2361          | 0.6348    | 0.5634 | 0.5970 | 0.9580   |
| 0.0498        | 2.82  | 600  | 0.2544          | 0.7109    | 0.5323 | 0.6088 | 0.9579   |
| 0.0464        | 2.93  | 625  | 0.2576          | 0.7237    | 0.5514 | 0.6259 | 0.9589   |
| 0.0441        | 3.05  | 650  | 0.2691          | 0.7321    | 0.5490 | 0.6275 | 0.9586   |
| 0.0524        | 3.17  | 675  | 0.2368          | 0.6947    | 0.5825 | 0.6337 | 0.9603   |
| 0.0335        | 3.29  | 700  | 0.2488          | 0.6991    | 0.5670 | 0.6262 | 0.9594   |
| 0.0349        | 3.4   | 725  | 0.2564          | 0.7084    | 0.5347 | 0.6094 | 0.9580   |
| 0.026         | 3.52  | 750  | 0.2523          | 0.7085    | 0.5610 | 0.6262 | 0.9594   |
| 0.0314        | 3.64  | 775  | 0.2647          | 0.7335    | 0.5467 | 0.6265 | 0.9584   |
| 0.0213        | 3.76  | 800  | 0.2551          | 0.7032    | 0.5754 | 0.6329 | 0.9603   |
| 0.0312        | 3.87  | 825  | 0.2470          | 0.7034    | 0.5957 | 0.6451 | 0.9606   |
| 0.0313        | 3.99  | 850  | 0.2693          | 0.7421    | 0.5610 | 0.6390 | 0.9598   |
| 0.0243        | 4.11  | 875  | 0.2699          | 0.7345    | 0.5658 | 0.6392 | 0.9598   |
| 0.0289        | 4.23  | 900  | 0.2535          | 0.7143    | 0.5682 | 0.6329 | 0.9603   |
| 0.0226        | 4.34  | 925  | 0.2581          | 0.7205    | 0.5706 | 0.6368 | 0.9602   |
| 0.0173        | 4.46  | 950  | 0.2644          | 0.7145    | 0.5718 | 0.6352 | 0.9601   |
| 0.0139        | 4.58  | 975  | 0.2705          | 0.7164    | 0.5682 | 0.6338 | 0.9600   |
| 0.0243        | 4.69  | 1000 | 0.2615          | 0.7116    | 0.5813 | 0.6399 | 0.9606   |
| 0.0222        | 4.81  | 1025 | 0.2642          | 0.7229    | 0.5742 | 0.6400 | 0.9606   |
| 0.0112        | 4.93  | 1050 | 0.2684          | 0.7291    | 0.5730 | 0.6417 | 0.9603   |
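
The precision, recall, and F1 figures above are entity-level scores of the kind `generated_from_trainer` token-classification cards typically compute with seqeval; a minimal sketch on toy tags:

```python
# Sketch: entity-level precision/recall/F1 via the seqeval metric.
import evaluate

seqeval = evaluate.load("seqeval")
predictions = [["O", "B-location", "I-location", "O"]]
references = [["O", "B-location", "O", "O"]]
print(seqeval.compute(predictions=predictions, references=references))
```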
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3