abhyast committed
Commit
0b876c8
1 Parent(s): da5a9be

Success is guaranteed!

README.md ADDED
@@ -0,0 +1,78 @@
+ ---
+ license: mit
+ base_model: microsoft/MiniLM-L12-H384-uncased
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: minilm-finetuned-emotion-class-model
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # minilm-finetuned-emotion-class-model
+
+ This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.1026
+ - F1 Score: 0.6649
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 128
+ - eval_batch_size: 128
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 20
+ - mixed_precision_training: Native AMP
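With a `linear` scheduler over 20 epochs of 270 steps each (5400 steps total, per the results table below), the learning rate decays from the 2e-05 peak to zero. A minimal pure-Python sketch of that schedule, assuming zero warmup steps (the card does not state a warmup value):

```python
def linear_lr(step, base_lr=2e-05, total_steps=5400, warmup_steps=0):
    """Linear-decay learning-rate schedule: ramp up over warmup_steps,
    then decay linearly to zero at total_steps."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

# Halfway through training (step 2700 of 5400) the rate is half the peak.
halfway = linear_lr(2700)
```

This mirrors the decay shape of the `linear` lr_scheduler_type; exact per-step values in the actual run also depend on the warmup setting, which is not documented here.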
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | F1 Score |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|
+ | 1.8502        | 1.0   | 270  | 1.4798          | 0.5071   |
+ | 1.3541        | 2.0   | 540  | 1.2377          | 0.5836   |
+ | 1.1809        | 3.0   | 810  | 1.1675          | 0.6202   |
+ | 1.0891        | 4.0   | 1080 | 1.1081          | 0.6522   |
+ | 1.0205        | 5.0   | 1350 | 1.0815          | 0.6603   |
+ | 0.9624        | 6.0   | 1620 | 1.0640          | 0.6645   |
+ | 0.9185        | 7.0   | 1890 | 1.0572          | 0.6689   |
+ | 0.8811        | 8.0   | 2160 | 1.0433          | 0.6693   |
+ | 0.8531        | 9.0   | 2430 | 1.0479          | 0.6746   |
+ | 0.8208        | 10.0  | 2700 | 1.0536          | 0.6697   |
+ | 0.8014        | 11.0  | 2970 | 1.0564          | 0.6713   |
+ | 0.7798        | 12.0  | 3240 | 1.0634          | 0.6716   |
+ | 0.7568        | 13.0  | 3510 | 1.0744          | 0.6698   |
+ | 0.7414        | 14.0  | 3780 | 1.0782          | 0.6704   |
+ | 0.7265        | 15.0  | 4050 | 1.0810          | 0.6694   |
+ | 0.7128        | 16.0  | 4320 | 1.0885          | 0.6684   |
+ | 0.7054        | 17.0  | 4590 | 1.0917          | 0.6631   |
+ | 0.6927        | 18.0  | 4860 | 1.0961          | 0.6678   |
+ | 0.6848        | 19.0  | 5130 | 1.1005          | 0.6644   |
+ | 0.6742        | 20.0  | 5400 | 1.1026          | 0.6649   |
+
+
+ ### Framework versions
+
+ - Transformers 4.38.2
+ - Pytorch 2.2.1+cu121
+ - Datasets 2.18.0
+ - Tokenizers 0.15.2
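The card does not show how predictions are produced from this classifier, so here is a sketch of the usual post-processing step: a softmax over the classification head's logits followed by an argmax to pick a label. Note the six-label set below is an assumption (a label set common in emotion-classification work); the card does not document the model's actual id2label mapping, so treat these names as placeholders.

```python
import math

# Hypothetical label set -- the card does not document the real id2label mapping.
LABELS = ["sadness", "joy", "love", "anger", "fear", "surprise"]

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_label(logits, labels=LABELS):
    """Return the highest-probability label and its probability."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

# Example: logits peaking at index 1 map to the second label.
label, prob = predict_label([-1.2, 3.4, 0.1, -0.5, -2.0, 0.3])
```

In practice these logits would come from running the fine-tuned checkpoint through a sequence-classification forward pass; only the conversion from logits to a label is sketched here.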
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0343b6745fd64a48a5d62c48ce2beb49a21a08bb8a0774ae19ffab927d9246dd
+ oid sha256:dd40e25706eb4934d24304b63777e3f0eeb01a2753f3b1a12c21152433dbf6ef
  size 133478696
runs/Mar22_16-00-35_f092ea680da5/events.out.tfevents.1711123348.f092ea680da5.7301.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:6f83c45dcc9a668b70ade769f2fa28e0ae7d1b7c2f2b2689da2a8f0399385b51
- size 14572
+ oid sha256:4a207bad51d591a5aaaac2daf7dfed548f7c42e2d50f4009c73d8765a761874e
+ size 15994