Commit 4486a84 by jantrienes (1 parent: e8c0878)

Update README.md

Files changed (1): README.md (+38 -24)
@@ -4,7 +4,7 @@ base_model: roberta-large
 tags:
 - generated_from_trainer
 datasets:
-- open_question_type
+- launch/open_question_type
 metrics:
 - f1
 model-index:
@@ -14,38 +14,52 @@ model-index:
       name: Text Classification
       type: text-classification
     dataset:
-      name: open_question_type
-      type: open_question_type
+      name: launch/open_question_type
+      type: launch/open_question_type
       config: default
       split: validation
       args: default
     metrics:
-    - name: F1
+    - name: F1 (macro avg.)
       type: f1
-      value: 0.7954091951908298
+      value: 0.8123190611646329
+  - task:
+      name: Text Classification
+      type: text-classification
+    dataset:
+      name: launch/open_question_type
+      type: launch/open_question_type
+      config: default
+      split: test
+      args: default
+    metrics:
+    - name: F1 (macro avg.)
+      type: f1
+      value: 0.80
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
 
 # roberta-large-question-classifier
 
-This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the open_question_type dataset.
-It achieves the following results on the evaluation set:
-- Loss: 1.9002
-- F1: 0.7954
-
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
+This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the [open_question_type](https://huggingface.co/datasets/launch/open_question_type) dataset.
+It achieves the following results on the test set:
+
+```
+              precision    recall  f1-score   support
+       cause       0.91      0.93      0.92        91
+  comparison       0.62      0.83      0.71        30
+     concept       0.85      0.65      0.74        54
+ consequence       0.80      0.73      0.76        11
+ disjunction       0.80      0.78      0.79        36
+     example       0.83      0.85      0.84       139
+      extent       0.82      0.94      0.87        48
+  judgmental       0.68      0.56      0.62        94
+  procedural       0.86      0.88      0.87        85
+verification       0.79      0.86      0.83        72
+    accuracy                           0.81       660
+   macro avg       0.80      0.80      0.80       660
+weighted avg       0.81      0.81      0.81       660
+```
 
 ## Training procedure
 
@@ -102,4 +116,4 @@ The following hyperparameters were used during training:
 - Transformers 4.33.2
 - Pytorch 2.1.0+cu118
 - Datasets 2.14.5
-- Tokenizers 0.13.3
+- Tokenizers 0.13.3
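
The summary rows in the new test-set report follow directly from the per-class values: the macro average is the unweighted mean of the ten per-class F1 scores, while the weighted average weights each class by its support. A minimal sketch reproducing both from the table's rounded values (so they agree with the report only to two decimal places):

```python
# Per-class (F1, support) pairs, copied from the test-set classification
# report above (rounded values as printed in the table).
report = {
    "cause": (0.92, 91), "comparison": (0.71, 30), "concept": (0.74, 54),
    "consequence": (0.76, 11), "disjunction": (0.79, 36), "example": (0.84, 139),
    "extent": (0.87, 48), "judgmental": (0.62, 94), "procedural": (0.87, 85),
    "verification": (0.83, 72),
}

total = sum(n for _, n in report.values())
# Macro average: unweighted mean over the ten classes.
macro_f1 = sum(f for f, _ in report.values()) / len(report)
# Weighted average: each class weighted by its support.
weighted_f1 = sum(f * n for f, n in report.values()) / total

print(total)                 # 660
print(f"{macro_f1:.3f}")     # 0.795 (shown as 0.80 in the report)
print(f"{weighted_f1:.4f}")  # 0.8065 (shown as 0.81 in the report)
```

Note that the model-index value for the validation split (0.8123...) is the exact score, while the test-set value (0.80) is the rounded macro average from this report.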