videomae-base-finetuned-subset

This model is a fine-tuned version of MCG-NJU/videomae-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7700
  • Accuracy: 0.6713
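
The checkpoint can be loaded for video classification like any VideoMAE fine-tune. A minimal inference sketch (assuming a 16-frame RGB clip and that the checkpoint is published under Joy28/videomae-base-finetuned-subset, as listed in the model tree):

```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

# Repo id taken from the model tree below; adjust if the checkpoint lives elsewhere.
ckpt = "Joy28/videomae-base-finetuned-subset"
processor = VideoMAEImageProcessor.from_pretrained(ckpt)
model = VideoMAEForVideoClassification.from_pretrained(ckpt)

# VideoMAE-base expects 16 sampled frames; here a dummy clip of 16 RGB frames.
video = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(16)]

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(-1).item()
print(model.config.id2label[pred])
```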

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding Trainer configuration follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • training_steps: 11100
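
A hedged reconstruction of how these settings map onto Hugging Face TrainingArguments; output_dir and the evaluation/save strategies are assumptions, and the Adam settings above are already the Trainer defaults:

```python
from transformers import TrainingArguments

# Sketch only: output_dir and the evaluation/save strategies are assumptions,
# not taken from the original training script.
training_args = TrainingArguments(
    output_dir="videomae-base-finetuned-subset",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=11100,
    evaluation_strategy="epoch",   # matches the per-epoch rows in the results table
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default optimizer
# configuration, so no extra optimizer arguments are needed here.
```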

Training results

Training Loss | Epoch | Step | Validation Loss | Accuracy
1.638 0.01 112 1.5736 0.1567
1.5845 1.01 224 1.5841 0.2719
1.4522 2.01 336 1.6293 0.2350
1.3111 3.01 448 1.0450 0.6037
1.2849 4.01 560 1.3186 0.4608
1.3246 5.01 672 1.1759 0.5161
1.3801 6.01 784 1.2188 0.4608
1.3228 7.01 896 0.9895 0.6406
0.9706 8.01 1008 1.1265 0.6129
1.2483 9.01 1120 1.2352 0.5484
0.9394 10.01 1232 1.2345 0.4977
0.8285 11.01 1344 0.8702 0.6682
1.1175 12.01 1456 0.9073 0.6406
1.093 13.01 1568 0.9210 0.5576
0.8364 14.01 1680 0.9316 0.6590
0.766 15.01 1792 0.7628 0.7742
0.7702 16.01 1904 0.8982 0.6682
0.9184 17.01 2016 1.1010 0.6221
0.7309 18.01 2128 0.8245 0.6866
0.9575 19.01 2240 0.9029 0.7097
0.8233 20.01 2352 1.2445 0.5161
0.7643 21.01 2464 0.9558 0.6498
0.6722 22.01 2576 1.1864 0.5714
0.8441 23.01 2688 0.9690 0.7235
0.7971 24.01 2800 0.9349 0.6774
0.8296 25.01 2912 1.4574 0.4516
0.8613 26.01 3024 0.8688 0.7189
0.5614 27.01 3136 1.2101 0.6083
0.6971 28.01 3248 1.3006 0.4654
0.9642 29.01 3360 0.9573 0.6313
0.836 30.01 3472 1.1268 0.6221
0.7166 31.01 3584 1.2384 0.5622
0.9302 32.01 3696 1.0620 0.5991
0.7729 33.01 3808 1.3253 0.5622
0.8005 34.01 3920 1.4979 0.4931
0.8025 35.01 4032 0.9786 0.5668
0.881 36.01 4144 0.8477 0.6544
0.5343 37.01 4256 1.3107 0.6544
0.5611 38.01 4368 0.9520 0.6866
0.6824 39.01 4480 0.7909 0.7281
0.6146 40.01 4592 1.0886 0.6175
1.0098 41.01 4704 1.0434 0.6313
0.5555 42.01 4816 0.9603 0.6912
0.4578 43.01 4928 1.2341 0.5945
0.5883 44.01 5040 1.2559 0.6359
0.3579 45.01 5152 1.2459 0.5622
0.7936 46.01 5264 1.2685 0.6083
0.4331 47.01 5376 0.9118 0.7097
0.8989 48.01 5488 1.3406 0.5806
0.7674 49.01 5600 1.5231 0.5484
0.8136 50.01 5712 1.2210 0.6221
0.6583 51.01 5824 0.9262 0.7051
0.4305 52.01 5936 1.0339 0.6959
0.7197 53.01 6048 1.1948 0.6682
0.7143 54.01 6160 1.1851 0.6774
0.5441 55.01 6272 1.0351 0.6636
0.6443 56.01 6384 1.0297 0.6866
0.7747 57.01 6496 1.5174 0.5991
0.5943 58.01 6608 1.1961 0.6452
0.5781 59.01 6720 1.2187 0.7143
0.6913 60.01 6832 1.1590 0.6728
0.6186 61.01 6944 1.0495 0.7235
0.5185 62.01 7056 0.9844 0.7051
0.4077 63.01 7168 1.3194 0.6313
0.8217 64.01 7280 1.2620 0.6636
0.5273 65.01 7392 1.0395 0.7373
0.9002 66.01 7504 1.5225 0.5806
0.5763 67.01 7616 1.2559 0.6406
1.0535 68.01 7728 1.2646 0.6498
1.0064 69.01 7840 1.1533 0.6866
0.332 70.01 7952 1.0438 0.7005
0.3978 71.01 8064 1.0248 0.7051
0.4459 72.01 8176 1.0926 0.7465
0.511 73.01 8288 1.1233 0.7143
0.7933 74.01 8400 1.1535 0.7189
0.3739 75.01 8512 1.3056 0.6912
0.6976 76.01 8624 1.3159 0.6682
0.5453 77.01 8736 1.4541 0.6359
0.2915 78.01 8848 1.2601 0.7051
0.6552 79.01 8960 1.5338 0.6544
0.5067 80.01 9072 1.6630 0.6037
0.5134 81.01 9184 1.4740 0.6406
0.7271 82.01 9296 1.2171 0.7097
0.719 83.01 9408 1.3653 0.6406
0.1955 84.01 9520 1.4696 0.6544
0.5761 85.01 9632 1.3334 0.6636
0.7094 86.01 9744 1.2673 0.6912
0.5186 87.01 9856 1.3147 0.6866
0.6876 88.01 9968 1.2622 0.7051
0.4912 89.01 10080 1.3054 0.7189
0.194 90.01 10192 1.3244 0.6959
0.6916 91.01 10304 1.1800 0.7327
0.5735 92.01 10416 1.1056 0.7419
0.2122 93.01 10528 1.1070 0.7281
0.1434 94.01 10640 1.1776 0.7097
0.4681 95.01 10752 1.1505 0.7327
0.2856 96.01 10864 1.1203 0.7235
0.6509 97.01 10976 1.1502 0.7189
0.1881 98.01 11088 1.1474 0.7189
0.5577 99.0 11100 1.1473 0.7189
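
The Accuracy column above is top-1 classification accuracy on the evaluation split. A compute_metrics function along these lines (an illustration using the evaluate library, not the original script) would produce it when passed to the Trainer:

```python
import numpy as np
import evaluate

# Top-1 accuracy via the `evaluate` library; assumes logits of shape (batch, num_labels).
metric = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=1)
    return metric.compute(predictions=predictions, references=labels)
```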

Framework versions

  • Transformers 4.36.2
  • Pytorch 1.13.1
  • Datasets 2.16.1
  • Tokenizers 0.15.0