espejelomar committed on
Commit 29957fb · 1 Parent(s): ac0dc3c

update model card README.md

Files changed (1): README.md (+8 −14)
README.md CHANGED
@@ -1,18 +1,12 @@
 ---
 license: apache-2.0
 tags:
-- text-classification
 - generated_from_trainer
 datasets:
 - glue
 metrics:
 - accuracy
 - f1
-widget:
-- text: ["Yucaipa owned Dominick 's before selling the chain to Safeway in 1998 for $ 2.5 billion.","Yucaipa bought Dominick's in 1995 for $ 693 million and sold it to Safeway for $ 1.8 billion in 1998."]
-  example_title: Not Equivalent
-- text: ["Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier.", "With the scandal hanging over Stewart's company revenue the first quarter of the year dropped 15 percent from the same period a year earlier."]
-  example_title: Equivalent
 model-index:
 - name: platzi-distilroberta-base-mrpc-glue-omar-espejel
   results:
@@ -28,10 +22,10 @@ model-index:
       metrics:
       - name: Accuracy
         type: accuracy
-        value: 0.8553921568627451
+        value: 0.8431372549019608
       - name: F1
         type: f1
-        value: 0.897391304347826
+        value: 0.8861209964412811
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -39,11 +33,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 # platzi-distilroberta-base-mrpc-glue-omar-espejel
 
-This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue and the mrpc datasets.
+This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6342
-- Accuracy: 0.8554
-- F1: 0.8974
+- Loss: 0.6332
+- Accuracy: 0.8431
+- F1: 0.8861
 
 ## Model description
 
@@ -74,8 +68,8 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
-| 0.5004 | 1.09 | 500 | 0.6371 | 0.8113 | 0.8715 |
-| 0.33 | 2.18 | 1000 | 0.6342 | 0.8554 | 0.8974 |
+| 0.5076 | 1.09 | 500 | 0.7464 | 0.8137 | 0.8671 |
+| 0.3443 | 2.18 | 1000 | 0.6332 | 0.8431 | 0.8861 |
 
 
 ### Framework versions
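The card's changed values are the accuracy and F1 scores on the MRPC validation set (a binary paraphrase task: 1 = equivalent, 0 = not equivalent). As a rough illustration of what those two metrics measure, here is a minimal pure-Python sketch; the toy labels and predictions below are invented, not taken from the actual evaluation run:

```python
def accuracy(preds, labels):
    """Fraction of predictions that match the gold labels."""
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

def f1(preds, labels, positive=1):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(p == positive and l == positive for p, l in zip(preds, labels))
    fp = sum(p == positive and l != positive for p, l in zip(preds, labels))
    fn = sum(p != positive and l == positive for p, l in zip(preds, labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy example: 4 sentence pairs, 3 predicted correctly.
labels = [1, 1, 0, 0]
preds = [1, 1, 1, 0]
print(accuracy(preds, labels))  # 0.75
print(f1(preds, labels))        # 0.8
```

F1 is reported alongside accuracy on MRPC because the dataset is class-imbalanced (most pairs are paraphrases), so accuracy alone can overstate model quality.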