malduwais committed on
Commit c837101 · 1 Parent(s): f71d63e

update model card README.md

Files changed (1)
  1. README.md +10 -10
README.md CHANGED
@@ -19,7 +19,7 @@ model-index:
  metrics:
  - name: F1
    type: f1
- value: 0.8404237430637297
+ value: 0.8513011152416358
  ---
 
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -29,8 +29,8 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.2716
- - F1: 0.8404
+ - Loss: 0.3396
+ - F1: 0.8513
 
  ## Model description
 
@@ -50,8 +50,8 @@ More information needed
 
  The following hyperparameters were used during training:
  - learning_rate: 5e-05
- - train_batch_size: 24
- - eval_batch_size: 24
+ - train_batch_size: 8
+ - eval_batch_size: 8
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
@@ -61,14 +61,14 @@ The following hyperparameters were used during training:
 
  | Training Loss | Epoch | Step | Validation Loss | F1     |
  |:-------------:|:-----:|:----:|:---------------:|:------:|
- | 0.571 | 1.0 | 191 | 0.3288 | 0.7826 |
- | 0.2554 | 2.0 | 382 | 0.2857 | 0.8261 |
- | 0.1688 | 3.0 | 573 | 0.2716 | 0.8404 |
+ | 0.5189 | 1.0 | 573 | 0.4020 | 0.7585 |
+ | 0.2723 | 2.0 | 1146 | 0.3157 | 0.8322 |
+ | 0.1837 | 3.0 | 1719 | 0.3396 | 0.8513 |
 
 
  ### Framework versions
 
  - Transformers 4.16.2
- - Pytorch 2.0.1+cu118
+ - Pytorch 2.1.0+cu121
  - Datasets 1.16.1
- - Tokenizers 0.13.3
+ - Tokenizers 0.15.0
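
For context on the change above: the hyperparameters in the diff (learning rate 5e-05, batch size 8, seed 42, Adam with betas=(0.9,0.999) and epsilon=1e-08, a linear schedule, and the three epochs visible in the training-results table) correspond to a standard Transformers token-classification fine-tune of xlm-roberta-base. The sketch below is illustrative only, not the training script behind this commit; the XTREME subset (`PAN-X.en`), the first-subtoken label alignment, and the `output_dir` name are assumptions not stated in the card.

```python
# Illustrative sketch only, not the original training script behind this commit.
# It mirrors the hyperparameters listed in the diff (lr 5e-05, batch size 8,
# seed 42, Adam betas=(0.9, 0.999) eps=1e-08, linear schedule, 3 epochs).
# Assumptions: the "PAN-X.en" XTREME subset, first-subtoken label alignment,
# and the output_dir name; none of these are stated in the card.
from datasets import load_dataset
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

raw = load_dataset("xtreme", "PAN-X.en")  # assumed language subset
label_names = raw["train"].features["ner_tags"].feature.names

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-base", num_labels=len(label_names)
)


def tokenize_and_align(batch):
    """Tokenize pre-split words; label the first sub-token of each word and
    mask the remaining sub-tokens with -100 so the loss ignores them."""
    enc = tokenizer(batch["tokens"], truncation=True, is_split_into_words=True)
    all_labels = []
    for i, tags in enumerate(batch["ner_tags"]):
        previous_word = None
        labels = []
        for word_id in enc.word_ids(batch_index=i):
            if word_id is None or word_id == previous_word:
                labels.append(-100)
            else:
                labels.append(tags[word_id])
            previous_word = word_id
        all_labels.append(labels)
    enc["labels"] = all_labels
    return enc


tokenized = raw.map(
    tokenize_and_align, batched=True, remove_columns=raw["train"].column_names
)

args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx",  # assumed name
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
    tokenizer=tokenizer,
)
trainer.train()
```

With `evaluation_strategy="epoch"`, the trainer logs one validation loss/F1 row per epoch, which is the shape of the three-row training-results table in the diff.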
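The model-index metric is only named `f1`. For PAN-X-style NER evaluation this is typically the entity-level F1 from seqeval, but that is an assumption here, not something the card states. A minimal `compute_metrics` hook along those lines, compatible with the `Trainer` sketch above:

```python
# Assumption: entity-level F1 via seqeval; the card only names the metric "f1".
import numpy as np
from seqeval.metrics import f1_score


def compute_metrics(eval_pred, label_names):
    logits, labels = eval_pred.predictions, eval_pred.label_ids
    preds = np.argmax(logits, axis=-1)
    true_seqs, pred_seqs = [], []
    for pred_row, label_row in zip(preds, labels):
        # Keep only positions with a real label; -100 marks ignored sub-tokens.
        true_seqs.append([label_names[l] for l in label_row if l != -100])
        pred_seqs.append(
            [label_names[p] for p, l in zip(pred_row, label_row) if l != -100]
        )
    return {"f1": f1_score(true_seqs, pred_seqs)}


# Wired into the Trainer from the sketch above:
# trainer = Trainer(..., compute_metrics=lambda p: compute_metrics(p, label_names))
```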