---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-uncased-tajik-ner
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: wikiann
      type: wikiann
      config: tg
      split: train+test
      args: tg
    metrics:
    - name: Precision
      type: precision
      value: 0.5042016806722689
    - name: Recall
      type: recall
      value: 0.5769230769230769
    - name: F1
      type: f1
      value: 0.5381165919282511
    - name: Accuracy
      type: accuracy
      value: 0.848129958443521
---

# bert-base-uncased-tajik-ner

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co./bert-base-uncased) on the Tajik (`tg`) configuration of the WikiANN dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2137
- Precision: 0.5042
- Recall: 0.5769
- F1: 0.5381
- Accuracy: 0.8481

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

Per the metadata above, the model was fine-tuned and evaluated on the Tajik (`tg`) configuration of the WikiANN named-entity recognition dataset; further details were not provided.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 2.0   | 50   | 0.9499          | 0.0450    | 0.0962 | 0.0613 | 0.6626   |
| No log        | 4.0   | 100  | 0.7348          | 0.1549    | 0.2115 | 0.1789 | 0.7401   |
| No log        | 6.0   | 150  | 0.6685          | 0.1916    | 0.3077 | 0.2362 | 0.8017   |
| No log        | 8.0   | 200  | 0.7875          | 0.3923    | 0.4904 | 0.4359 | 0.8036   |
| No log        | 10.0  | 250  | 0.7495          | 0.4225    | 0.5769 | 0.4878 | 0.8274   |
| No log        | 12.0  | 300  | 0.8934          | 0.4198    | 0.5288 | 0.4681 | 0.8085   |
| No log        | 14.0  | 350  | 0.9455          | 0.4758    | 0.5673 | 0.5175 | 0.8236   |
| No log        | 16.0  | 400  | 0.9469          | 0.5893    | 0.6346 | 0.6111 | 0.8410   |
| No log        | 18.0  | 450  | 0.9936          | 0.5333    | 0.6154 | 0.5714 | 0.8485   |
| 0.2726        | 20.0  | 500  | 0.9804          | 0.5000    | 0.6058 | 0.5478 | 0.8519   |
| 0.2726        | 22.0  | 550  | 1.1035          | 0.5963    | 0.6250 | 0.6103 | 0.8432   |
| 0.2726        | 24.0  | 600  | 1.0318          | 0.5856    | 0.6250 | 0.6047 | 0.8576   |
| 0.2726        | 26.0  | 650  | 1.1820          | 0.4921    | 0.5962 | 0.5391 | 0.8221   |
| 0.2726        | 28.0  | 700  | 1.1204          | 0.4878    | 0.5769 | 0.5286 | 0.8311   |
| 0.2726        | 30.0  | 750  | 1.1911          | 0.5357    | 0.5769 | 0.5556 | 0.8376   |
| 0.2726        | 32.0  | 800  | 1.1747          | 0.5259    | 0.5865 | 0.5545 | 0.8394   |
| 0.2726        | 34.0  | 850  | 1.1403          | 0.5872    | 0.6154 | 0.6009 | 0.8542   |
| 0.2726        | 36.0  | 900  | 1.1824          | 0.5370    | 0.5577 | 0.5472 | 0.8330   |
| 0.2726        | 38.0  | 950  | 1.1467          | 0.5424    | 0.6154 | 0.5766 | 0.8440   |
| 0.0030        | 40.0  | 1000 | 1.2148          | 0.5268    | 0.5673 | 0.5463 | 0.8360   |
| 0.0030        | 42.0  | 1050 | 1.3478          | 0.5273    | 0.5577 | 0.5421 | 0.8266   |
| 0.0030        | 44.0  | 1100 | 1.2137          | 0.5042    | 0.5769 | 0.5381 | 0.8481   |

### Framework versions

- Transformers 4.21.2
- PyTorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
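
The data section above is left open, but the metadata pins the dataset down. As a minimal sketch (assuming the standard `datasets` loading path for WikiANN, which matches the `wikiann`/`tg` identifiers in the card's metadata), the data can be loaded like this:

```python
from datasets import load_dataset

# Load the Tajik configuration of WikiANN; the identifiers come from the
# card's metadata (dataset: wikiann, config: tg).
dataset = load_dataset("wikiann", "tg")

print(dataset)              # DatasetDict with train/validation/test splits
print(dataset["train"][0])  # {"tokens": [...], "ner_tags": [...], ...}
```

WikiANN annotates PER, ORG, and LOC entities with IOB2 tags in the `ner_tags` column.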
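
The hyperparameters listed above map directly onto `transformers.TrainingArguments`. A minimal sketch; the output directory and evaluation cadence are assumptions (the card does not state them), with `eval_steps=50` inferred from the 50-step evaluation interval visible in the training results table:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-uncased-tajik-ner",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=200,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",  # assumed
    eval_steps=50,                # assumed
)
```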
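
For inference, the standard token-classification pipeline applies. A minimal usage sketch; the repo id below is a placeholder, since the card does not state where the checkpoint is hosted:

```python
from transformers import pipeline

# "your-username/bert-base-uncased-tajik-ner" is a placeholder repo id.
ner = pipeline(
    "token-classification",
    model="your-username/bert-base-uncased-tajik-ner",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

# "Dushanbe is the capital of Tajikistan."
print(ner("Душанбе пойтахти Тоҷикистон аст."))
```

With `aggregation_strategy="simple"`, the pipeline returns one dict per entity span (entity group, score, and character offsets) rather than one per word piece.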