---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
base_model: bert-base-uncased
model-index:
- name: final-lr2e-5-bs16-fp16-2
  results: []
language:
- en
library_name: transformers
pipeline_tag: text-classification
---

# final-lr2e-5-bs16-fp16-2

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co./bert-base-uncased) on the [EDOS (Explainable Detection of Online Sexism)](https://github.com/rewire-online/edos) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4823
- F1 Macro: 0.8301
- F1 Weighted: 0.8772
- F1: 0.7388
- Accuracy: 0.8792
- Confusion Matrix (rows: true label, columns: predicted label): [[2834, 196], [287, 683]]
- Confusion Matrix (row-normalized): [[0.9353, 0.0647], [0.2959, 0.7041]]
- Classification Report:

|              | precision | recall   | f1-score | support |
|:-------------|----------:|---------:|---------:|--------:|
| 0            |  0.908042 | 0.935314 | 0.921476 |    3030 |
| 1            |  0.777019 | 0.704124 | 0.738778 |     970 |
| accuracy     |           |          | 0.879250 |    4000 |
| macro avg    |  0.842531 | 0.819719 | 0.830127 |    4000 |
| weighted avg |  0.876269 | 0.879250 | 0.877172 |    4000 |
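
As a sanity check, the class-1 scores above can be recomputed by hand from the confusion matrix, since rows are true labels and columns are predictions:

```python
# Recompute the positive-class metrics from the confusion matrix above.
tn, fp = 2834, 196  # true label 0: correctly kept / wrongly flagged
fn, tp = 287, 683   # true label 1: missed / correctly flagged

precision = tp / (tp + fp)                          # 683 / 879  ~= 0.7770
recall = tp / (tp + fn)                             # 683 / 970  ~= 0.7041
f1 = 2 * precision * recall / (precision + recall)  # ~= 0.7388
accuracy = (tp + tn) / (tp + tn + fp + fn)          # 3517 / 4000 = 0.87925

print(f"precision={precision:.4f} recall={recall:.4f} "
      f"f1={f1:.4f} accuracy={accuracy:.4f}")
```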

## Model description

This is [bert-base-uncased](https://huggingface.co./bert-base-uncased) fine-tuned as a binary sequence classifier on EDOS, with a standard two-way classification head on top of the BERT encoder: label `0` for not-sexist and label `1` for sexist text.

## Intended uses & limitations

The model is intended for classifying short English texts as sexist (`1`) or not sexist (`0`). Note the asymmetric errors in the confusion matrix above: roughly 30% of sexist examples in the evaluation set are missed, so predictions should be treated as a screening signal rather than a final judgment, and performance outside the EDOS domain is untested.
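
A minimal inference sketch using the `transformers` pipeline API. The model id below is this repository's name (prepend the owning namespace, or point to a local checkpoint directory), and the `LABEL_0`/`LABEL_1` names assume the default label mapping was kept during training:

```python
from transformers import pipeline

# Load this fine-tuned checkpoint; adjust the id or path as needed.
classifier = pipeline("text-classification", model="final-lr2e-5-bs16-fp16-2")

result = classifier("some input text to screen")
print(result)  # e.g. [{'label': 'LABEL_1', 'score': 0.93}] -- LABEL_1 = sexist
```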

## Training and evaluation data

Training and evaluation use the [EDOS](https://github.com/rewire-online/edos) dataset linked above. The evaluation split used here contains 4,000 examples: 3,030 with label `0` and 970 with label `1`.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 12345
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
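
These settings correspond roughly to the following `transformers.TrainingArguments`; this is a sketch, the output directory name is illustrative, and anything not listed keeps its default:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="final-lr2e-5-bs16-fp16-2",  # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=12345,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    fp16=True,  # "Native AMP" mixed-precision training
    # The Adam betas (0.9, 0.999) and epsilon 1e-08 listed above
    # are the optimizer defaults, so they need no explicit argument.
)
```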

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Weighted | F1     | Accuracy | Confusion Matrix          | Confusion Matrix (row-normalized)    |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:------:|:--------:|:-------------------------:|:------------------------------------:|
| 0.3333        | 1.0   | 1000 | 0.3064          | 0.8165   | 0.8672      | 0.7181 | 0.8692   | [[2811, 219], [304, 666]] | [[0.9277, 0.0723], [0.3134, 0.6866]] |
| 0.2271        | 2.0   | 2000 | 0.3905          | 0.8238   | 0.8708      | 0.7326 | 0.8710   | [[2777, 253], [263, 707]] | [[0.9165, 0.0835], [0.2711, 0.7289]] |
| 0.1435        | 3.0   | 3000 | 0.4823          | 0.8301   | 0.8772      | 0.7388 | 0.8792   | [[2834, 196], [287, 683]] | [[0.9353, 0.0647], [0.2959, 0.7041]] |

Per-class precision, recall, and F1 for each epoch follow directly from these confusion matrices; the final-epoch classification report is the one shown under the evaluation results above.
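
The extra logged columns (macro/weighted F1, confusion matrices, per-class report) point to a custom `compute_metrics` callback passed to the `Trainer`. A plausible reconstruction with scikit-learn, not necessarily the exact code used, is:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, classification_report,
                             confusion_matrix, f1_score)

def compute_metrics(eval_pred):
    """Produce the metric columns logged in the table above."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    cm = confusion_matrix(labels, preds)
    return {
        "f1_macro": f1_score(labels, preds, average="macro"),
        "f1_weighted": f1_score(labels, preds, average="weighted"),
        "f1": f1_score(labels, preds),  # positive class only
        "accuracy": accuracy_score(labels, preds),
        "confusion_matrix": cm.tolist(),
        # Normalize each row (true label) so rows sum to 1.
        "confusion_matrix_norm": (cm / cm.sum(axis=1, keepdims=True)).tolist(),
        "classification_report": classification_report(labels, preds, digits=6),
    }
```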


### Framework versions

- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
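
To confirm that a runtime environment matches these versions before loading the model, a quick check:

```python
import datasets, tokenizers, torch, transformers

# Print installed versions to compare against the list above.
for mod in (transformers, torch, datasets, tokenizers):
    print(mod.__name__, mod.__version__)
```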