erickrribeiro committed 9b06654 (parent: c1f53c7): update model card README.md

Files changed (1): README.md (added, +89 -0)
---
license: mit
base_model: neuralmind/bert-base-portuguese-cased
tags:
- generated_from_trainer
datasets:
- glue-ptpt
metrics:
- accuracy
- f1
model-index:
- name: bert-base-portuguese-fine-tuned-mrpc
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: glue-ptpt
      type: glue-ptpt
      config: mrpc
      split: validation
      args: mrpc
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8504901960784313
    - name: F1
      type: f1
      value: 0.8920353982300885
---

# bert-base-portuguese-fine-tuned-mrpc

This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the MRPC configuration of the glue-ptpt dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2843
- Accuracy: 0.8505
- F1: 0.8920

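## How to use

The sketch below shows sentence-pair inference with the `transformers` sequence-classification API. It assumes the checkpoint is published under the Hub ID `erickrribeiro/bert-base-portuguese-fine-tuned-mrpc` (inferred from this repository's name, so adjust if it differs) and that, as in GLUE MRPC, label 1 means "paraphrase"; verify against `model.config.id2label`.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed Hub ID; replace with the actual repository path if it differs.
model_id = "erickrribeiro/bert-base-portuguese-fine-tuned-mrpc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# MRPC is a sentence-pair task: encode both sentences in a single input.
# (Both example sentences say the film premieres next week.)
inputs = tokenizer(
    "O filme estreia na próxima semana.",
    "A estreia do filme acontece na semana que vem.",
    return_tensors="pt",
    truncation=True,
)
with torch.no_grad():
    logits = model(**inputs).logits

# Map class probabilities to the model's label names.
probs = logits.softmax(dim=-1).squeeze()
print({model.config.id2label[i]: round(p.item(), 4) for i, p in enumerate(probs)})
```
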
## Model description

The base model, [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) (BERTimbau Base), is a BERT encoder pretrained on Portuguese text; this checkpoint adds a sequence-classification head and fine-tunes it for binary paraphrase detection, i.e. deciding whether two Portuguese sentences express the same meaning.

## Intended uses & limitations

The model is intended for paraphrase detection on Portuguese sentence pairs in the style of GLUE MRPC. It has only been evaluated on that task, so behaviour on other domains, longer texts, or other sentence-pair tasks is untested. Note also that validation loss rises steadily after the first epoch while training loss approaches zero (see the results table below), which suggests the final checkpoint overfits the training data.

## Training and evaluation data

Training and evaluation use the `mrpc` configuration of the glue-ptpt dataset, which appears to be a Portuguese (pt-PT) translation of the English GLUE benchmark. The metrics reported above are computed on its validation split.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

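These hyperparameters map directly onto the standard `Trainer` API. The sketch below is a reconstruction, not the original training script: it assumes the dataset loads as `load_dataset("glue-ptpt", "mrpc")` (the card does not give the exact Hub ID) and that its columns follow the GLUE MRPC schema (`sentence1`, `sentence2`, `label`). The Adam betas and epsilon listed above are the `Trainer` defaults, so they need no explicit arguments.

```python
import numpy as np
import evaluate
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "neuralmind/bert-base-portuguese-cased"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

# Assumed dataset ID and GLUE-MRPC-style columns.
raw = load_dataset("glue-ptpt", "mrpc")
encoded = raw.map(
    lambda batch: tokenizer(batch["sentence1"], batch["sentence2"], truncation=True),
    batched=True,
)

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        **accuracy.compute(predictions=preds, references=labels),
        **f1.compute(predictions=preds, references=labels),
    }

# Mirrors the hyperparameter list; Adam betas/epsilon stay at their defaults.
args = TrainingArguments(
    output_dir="bert-base-portuguese-fine-tuned-mrpc",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
    compute_metrics=compute_metrics,
)
trainer.train()
```
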
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 459  | 0.6757          | 0.8603   | 0.8966 |
| 0.2011        | 2.0   | 918  | 0.7120          | 0.8505   | 0.8897 |
| 0.1215        | 3.0   | 1377 | 0.9679          | 0.8382   | 0.8764 |
| 0.0901        | 4.0   | 1836 | 1.0548          | 0.8333   | 0.8799 |
| 0.0478        | 5.0   | 2295 | 1.3125          | 0.8260   | 0.8769 |
| 0.0312        | 6.0   | 2754 | 1.0122          | 0.8578   | 0.8953 |
| 0.0309        | 7.0   | 3213 | 1.2197          | 0.8431   | 0.8849 |
| 0.0095        | 8.0   | 3672 | 1.1705          | 0.8554   | 0.8941 |
| 0.0076        | 9.0   | 4131 | 1.3132          | 0.8480   | 0.8912 |
| 0.0014        | 10.0  | 4590 | 1.2843          | 0.8505   | 0.8920 |

### Framework versions

- Transformers 4.31.0
- PyTorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3