Update README.md
README.md CHANGED
```diff
@@ -11,88 +11,9 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-#
+# Human Value Detection Roberta Large with Upsampled Data
 
-This model is a fine-tuned version of [FacebookAI/roberta-large](https://huggingface.co/FacebookAI/roberta-large) on the
-It achieves the following results on the evaluation set:
-- Loss: 0.0599
-- F1-macro subtask 1: 0.2635
-- F1-micro subtask 1: 0.3292
-- Roc auc macro subtask 1: 0.6016
-- F1-macro subtask 2: 0.0502
-- F1-micro subtask 2: 0.0513
-- Roc auc macro subtask 2: 0.6128
-- Self-direction: thought1: 0.0588
-- Self-direction: action1: 0.1801
-- Stimulation1: 0.1815
-- Hedonism1: 0.2645
-- Achievement1: 0.2612
-- Power: dominance1: 0.3455
-- Power: resources1: 0.2642
-- Face1: 0.1941
-- Security: personal1: 0.2454
-- Security: societal1: 0.4119
-- Tradition1: 0.4361
-- Conformity: rules1: 0.4578
-- Conformity: interpersonal1: 0.0700
-- Humility1: 0.0833
-- Benevolence: caring1: 0.1117
-- Benevolence: dependability1: 0.2426
-- Universalism: concern1: 0.3619
-- Universalism: nature1: 0.5693
-- Universalism: tolerance1: 0.2667
-- Self-direction: thought attained2: 0.0190
-- Self-direction: thought constrained2: 0.0561
-- Self-direction: action attained2: 0.0525
-- Self-direction: action constrained2: 0.0762
-- Stimulation attained2: 0.0631
-- Stimulation constrained2: 0.0095
-- Hedonism attained2: 0.0143
-- Hedonism constrained2: 0.0071
-- Achievement attained2: 0.1248
-- Achievement constrained2: 0.0840
-- Power: dominance attained2: 0.0701
-- Power: dominance constrained2: 0.0767
-- Power: resources attained2: 0.0806
-- Power: resources constrained2: 0.0807
-- Face attained2: 0.0239
-- Face constrained2: 0.0361
-- Security: personal attained2: 0.0187
-- Security: personal constrained2: 0.0425
-- Security: societal attained2: 0.1038
-- Security: societal constrained2: 0.1802
-- Tradition attained2: 0.0312
-- Tradition constrained2: 0.0338
-- Conformity: rules attained2: 0.1044
-- Conformity: rules constrained2: 0.0914
-- Conformity: interpersonal attained2: 0.0162
-- Conformity: interpersonal constrained2: 0.0294
-- Humility attained2: 0.0053
-- Humility constrained2: 0.0024
-- Benevolence: caring attained2: 0.0438
-- Benevolence: caring constrained2: 0.0137
-- Benevolence: dependability attained2: 0.0362
-- Benevolence: dependability constrained2: 0.016
-- Universalism: concern attained2: 0.0852
-- Universalism: concern constrained2: 0.0583
-- Universalism: nature attained2: 0.0423
-- Universalism: nature constrained2: 0.0516
-- Universalism: tolerance attained2: 0.0142
-- Universalism: tolerance constrained2: 0.0133
-
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
+This model is a fine-tuned version of [FacebookAI/roberta-large](https://huggingface.co/FacebookAI/roberta-large) on the upsampled training data of the ValueML dataset.
 
 ### Training hyperparameters
 
@@ -106,16 +27,6 @@ The following hyperparameters were used during training:
 - lr_scheduler_warmup_ratio: 0.2
 - num_epochs: 4
 
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | F1-macro subtask 1 | F1-micro subtask 1 | Roc auc macro subtask 1 | F1-macro subtask 2 | F1-micro subtask 2 | Roc auc macro subtask 2 | Self-direction: thought1 | Self-direction: action1 | Stimulation1 | Hedonism1 | Achievement1 | Power: dominance1 | Power: resources1 | Face1 | Security: personal1 | Security: societal1 | Tradition1 | Conformity: rules1 | Conformity: interpersonal1 | Humility1 | Benevolence: caring1 | Benevolence: dependability1 | Universalism: concern1 | Universalism: nature1 | Universalism: tolerance1 | Self-direction: thought attained2 | Self-direction: thought constrained2 | Self-direction: action attained2 | Self-direction: action constrained2 | Stimulation attained2 | Stimulation constrained2 | Hedonism attained2 | Hedonism constrained2 | Achievement attained2 | Achievement constrained2 | Power: dominance attained2 | Power: dominance constrained2 | Power: resources attained2 | Power: resources constrained2 | Face attained2 | Face constrained2 | Security: personal attained2 | Security: personal constrained2 | Security: societal attained2 | Security: societal constrained2 | Tradition attained2 | Tradition constrained2 | Conformity: rules attained2 | Conformity: rules constrained2 | Conformity: interpersonal attained2 | Conformity: interpersonal constrained2 | Humility attained2 | Humility constrained2 | Benevolence: caring attained2 | Benevolence: caring constrained2 | Benevolence: dependability attained2 | Benevolence: dependability constrained2 | Universalism: concern attained2 | Universalism: concern constrained2 | Universalism: nature attained2 | Universalism: nature constrained2 | Universalism: tolerance attained2 | Universalism: tolerance constrained2 |
-|:-------------:|:-----:|:-----:|:---------------:|:------------------:|:------------------:|:-----------------------:|:------------------:|:------------------:|:-----------------------:|:------------------------:|:-----------------------:|:------------:|:---------:|:------------:|:-----------------:|:-----------------:|:------:|:-------------------:|:-------------------:|:----------:|:------------------:|:--------------------------:|:---------:|:--------------------:|:---------------------------:|:----------------------:|:---------------------:|:------------------------:|:---------------------------------:|:------------------------------------:|:--------------------------------:|:-----------------------------------:|:---------------------:|:------------------------:|:------------------:|:---------------------:|:---------------------:|:------------------------:|:--------------------------:|:-----------------------------:|:--------------------------:|:-----------------------------:|:--------------:|:-----------------:|:----------------------------:|:-------------------------------:|:----------------------------:|:-------------------------------:|:-------------------:|:----------------------:|:---------------------------:|:------------------------------:|:-----------------------------------:|:--------------------------------------:|:------------------:|:---------------------:|:-----------------------------:|:--------------------------------:|:------------------------------------:|:---------------------------------------:|:-------------------------------:|:----------------------------------:|:------------------------------:|:---------------------------------:|:---------------------------------:|:------------------------------------:|
-| 0.0603 | 1.0 | 7183 | 0.0604 | 0.1393 | 0.2079 | 0.5512 | 0.0485 | 0.0509 | 0.6252 | 0.0 | 0.0431 | 0.0 | 0.0 | 0.2512 | 0.1886 | 0.0844 | 0.2393 | 0.0 | 0.2646 | 0.2359 | 0.3725 | 0.1083 | 0.0 | 0.0277 | 0.0385 | 0.0819 | 0.5374 | 0.1734 | 0.0188 | 0.0137 | 0.0550 | 0.0418 | 0.0530 | 0.0141 | 0.0119 | 0.0110 | 0.1262 | 0.0807 | 0.0659 | 0.0600 | 0.0806 | 0.0833 | 0.0201 | 0.0576 | 0.0161 | 0.0506 | 0.1231 | 0.1471 | 0.0345 | 0.0188 | 0.1007 | 0.0982 | 0.0157 | 0.0351 | 0.0069 | 0.0018 | 0.0476 | 0.0151 | 0.0480 | 0.0142 | 0.0836 | 0.0540 | 0.0380 | 0.0727 | 0.0120 | 0.0166 |
-| 0.0411 | 2.0 | 14366 | 0.0607 | 0.2394 | 0.3018 | 0.5948 | 0.0491 | 0.0516 | 0.6270 | 0.0664 | 0.1223 | 0.3371 | 0.3165 | 0.3128 | 0.1460 | 0.1394 | 0.2103 | 0.1218 | 0.4200 | 0.4190 | 0.4399 | 0.0972 | 0.1013 | 0.0874 | 0.1508 | 0.3057 | 0.5369 | 0.2182 | 0.0179 | 0.0375 | 0.0526 | 0.0773 | 0.0589 | 0.0104 | 0.0170 | 0.0059 | 0.1271 | 0.0824 | 0.0694 | 0.0645 | 0.0841 | 0.0744 | 0.0250 | 0.0356 | 0.0180 | 0.0406 | 0.1057 | 0.1696 | 0.0329 | 0.0286 | 0.1114 | 0.0839 | 0.0166 | 0.0295 | 0.0054 | 0.0027 | 0.0439 | 0.0208 | 0.0375 | 0.0203 | 0.0896 | 0.0542 | 0.0477 | 0.0405 | 0.0119 | 0.0158 |
-| 0.0282 | 3.0 | 21549 | 0.0599 | 0.2635 | 0.3292 | 0.6016 | 0.0502 | 0.0513 | 0.6128 | 0.0588 | 0.1801 | 0.1815 | 0.2645 | 0.2612 | 0.3455 | 0.2642 | 0.1941 | 0.2454 | 0.4119 | 0.4361 | 0.4578 | 0.0700 | 0.0833 | 0.1117 | 0.2426 | 0.3619 | 0.5693 | 0.2667 | 0.0190 | 0.0561 | 0.0525 | 0.0762 | 0.0631 | 0.0095 | 0.0143 | 0.0071 | 0.1248 | 0.0840 | 0.0701 | 0.0767 | 0.0806 | 0.0807 | 0.0239 | 0.0361 | 0.0187 | 0.0425 | 0.1038 | 0.1802 | 0.0312 | 0.0338 | 0.1044 | 0.0914 | 0.0162 | 0.0294 | 0.0053 | 0.0024 | 0.0438 | 0.0137 | 0.0362 | 0.016 | 0.0852 | 0.0583 | 0.0423 | 0.0516 | 0.0142 | 0.0133 |
-| 0.0272 | 4.0 | 28732 | 0.0630 | 0.2883 | 0.3504 | 0.6141 | 0.0492 | 0.0515 | 0.6185 | 0.0935 | 0.2240 | 0.2606 | 0.2857 | 0.3633 | 0.3567 | 0.2972 | 0.2133 | 0.3140 | 0.4242 | 0.4379 | 0.4547 | 0.0662 | 0.0385 | 0.2308 | 0.2571 | 0.3798 | 0.5229 | 0.2577 | 0.0190 | 0.0619 | 0.0525 | 0.0757 | 0.0651 | 0.0137 | 0.0158 | 0.0073 | 0.1216 | 0.0894 | 0.0744 | 0.035 | 0.0804 | 0.0828 | 0.0240 | 0.0343 | 0.0205 | 0.0392 | 0.1079 | 0.1682 | 0.0329 | 0.0220 | 0.1066 | 0.0931 | 0.0159 | 0.0265 | 0.0049 | 0.0023 | 0.0449 | 0.0135 | 0.0355 | 0.0200 | 0.0830 | 0.0601 | 0.0465 | 0.0437 | 0.0126 | 0.0152 |
-
-
 ### Framework versions
 
 - Transformers 4.37.2
```
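The updated card keeps only a short description, so a usage example is worth sketching. The snippet below is a minimal sketch, not taken from the card: the repository id is hypothetical, and the multi-label head (one independent sigmoid per human-value label) is inferred from the per-value metrics in the removed section.

```python
# Minimal usage sketch (assumptions: `repo_id` is hypothetical, and the model
# uses a multi-label classification head, as the per-value metrics suggest).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "your-org/roberta-large-human-values-upsampled"  # hypothetical id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

text = "We should protect natural habitats for future generations."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label: score each value independently with a sigmoid, then threshold.
probs = torch.sigmoid(logits).squeeze(0)
predicted = (probs > 0.5).nonzero(as_tuple=True)[0].tolist()
print([model.config.id2label[i] for i in predicted])
```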
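The evaluation section removed above reported F1-macro, F1-micro, and macro ROC AUC per subtask, which are standard multi-label scores. The card does not include its evaluation code; the following is a sketch of the conventional scikit-learn computation on binary label indicators, with toy arrays standing in for real predictions.

```python
# Conventional multi-label metric computation with scikit-learn; the toy
# arrays below are placeholders, not data from this model's evaluation.
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])   # gold label indicators
y_score = np.array([[0.9, 0.2, 0.7],                    # sigmoid outputs
                    [0.1, 0.8, 0.4],
                    [0.6, 0.7, 0.3]])
y_pred = (y_score > 0.5).astype(int)                    # thresholded predictions

print("F1-macro:", f1_score(y_true, y_pred, average="macro"))
print("F1-micro:", f1_score(y_true, y_pred, average="micro"))
print("ROC AUC macro:", roc_auc_score(y_true, y_score, average="macro"))
```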
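Only two training hyperparameters are visible in the diff context (lr_scheduler_warmup_ratio: 0.2 and num_epochs: 4); the rest of the list falls outside the shown hunk. In the Transformers 4.37.2 release noted under framework versions, they correspond to the TrainingArguments fields sketched below; output_dir and anything else not listed in the card is a placeholder assumption.

```python
# Sketch of the two hyperparameters visible in the diff, mapped onto
# transformers.TrainingArguments; everything else here is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-large-human-values",  # placeholder, not from the card
    warmup_ratio=0.2,    # card: lr_scheduler_warmup_ratio: 0.2
    num_train_epochs=4,  # card: num_epochs: 4
)
```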