PergaZuZ committed
Commit 266b586
1 parent: c43c9f9

Model save

Files changed (1): README.md (+14 -14)

README.md CHANGED
@@ -18,8 +18,8 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.8151
- - Accuracy: 0.7681
+ - Loss: 0.7926
+ - Accuracy: 0.7972
 
  ## Model description
 
@@ -39,31 +39,31 @@ More information needed
 
  The following hyperparameters were used during training:
  - learning_rate: 5e-05
- - train_batch_size: 8
- - eval_batch_size: 8
+ - train_batch_size: 16
+ - eval_batch_size: 16
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_ratio: 0.1
- - training_steps: 156
+ - training_steps: 76
 
  ### Training results
 
  | Training Loss | Epoch  | Step | Validation Loss | Accuracy |
  |:-------------:|:------:|:----:|:---------------:|:--------:|
- | 1.5758        | 0.1282 | 20   | 1.6619          | 0.2901   |
- | 1.3067        | 1.1282 | 40   | 1.6048          | 0.2990   |
- | 1.3787        | 2.1282 | 60   | 1.4723          | 0.3181   |
- | 1.1642        | 3.1282 | 80   | 1.4191          | 0.3004   |
- | 1.1172        | 4.1282 | 100  | 1.2374          | 0.3196   |
- | 0.8982        | 5.1282 | 120  | 1.0099          | 0.5655   |
- | 0.915         | 6.1282 | 140  | 0.9540          | 0.5891   |
- | 0.7809        | 7.1026 | 156  | 0.9189          | 0.5714   |
+ | 1.6087        | 0.1316 | 10   | 1.8483          | 0.2597   |
+ | 1.3273        | 1.1316 | 20   | 1.4452          | 0.2983   |
+ | 1.2351        | 2.1316 | 30   | 1.5890          | 0.2799   |
+ | 1.1635        | 3.1316 | 40   | 1.3830          | 0.2910   |
+ | 1.0374        | 4.1316 | 50   | 1.3682          | 0.3002   |
+ | 0.9699        | 5.1316 | 60   | 1.2128          | 0.5322   |
+ | 0.8748        | 6.1316 | 70   | 1.0850          | 0.5562   |
+ | 0.8748        | 7.0789 | 76   | 1.0721          | 0.5599   |
 
 
  ### Framework versions
 
  - Transformers 4.45.2
- - Pytorch 2.0.1+cu118
+ - Pytorch 2.1.1+cu118
  - Datasets 3.0.1
  - Tokenizers 0.20.1
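For reference, the linear scheduler with 10% warmup listed in the hyperparameters can be sketched as a plain function. This is a sketch of the standard linear-warmup/linear-decay rule (as implemented by Transformers' `get_linear_schedule_with_warmup`), plugging in the updated values `training_steps=76`, `learning_rate=5e-05`, `warmup_ratio=0.1`; the helper name `lr_at_step` is hypothetical:

```python
def lr_at_step(step, total_steps=76, base_lr=5e-5, warmup_ratio=0.1):
    """Linear warmup from 0 to base_lr, then linear decay back to 0.

    Mirrors the linear-with-warmup rule; values taken from the card's
    hyperparameters (helper name is illustrative, not from the repo).
    """
    warmup_steps = int(total_steps * warmup_ratio)  # 7 steps here
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(lr_at_step(0))   # warmup starts from zero
print(lr_at_step(7))   # peak learning rate at the end of warmup
print(lr_at_step(76))  # decayed to zero at the final step
```

With only 76 total steps, warmup occupies the first 7 optimizer updates, so most of the run is spent in the decay phase.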