videomae-base-finetuned-subset-100epochs

This model is a fine-tuned version of MCG-NJU/videomae-base on an unknown dataset. It achieves the following results on the evaluation set (a short usage sketch follows the results):

  • Loss: 0.7077
  • Accuracy: 0.7685
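
A minimal inference sketch, assuming the checkpoint is published on the Hugging Face Hub as Joy28/videomae-base-finetuned-subset-100epochs and takes the standard 16-frame VideoMAE input; the label set depends on the (unspecified) fine-tuning dataset:

```python
import numpy as np
import torch
from transformers import VideoMAEForVideoClassification, VideoMAEImageProcessor

# Assumed Hub repo id; adjust if the weights are hosted elsewhere.
ckpt = "Joy28/videomae-base-finetuned-subset-100epochs"

processor = VideoMAEImageProcessor.from_pretrained(ckpt)
model = VideoMAEForVideoClassification.from_pretrained(ckpt)

# VideoMAE-base expects 16-frame clips by default; replace this dummy clip
# with 16 real RGB frames (H x W x 3 uint8 arrays) sampled from a video.
clip = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(16)]

inputs = processor(clip, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(-1).item()
print(model.config.id2label[pred])  # labels come from the fine-tuning dataset
```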

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch reconstructing them follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • training_steps: 5550
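
A minimal sketch reconstructing these settings with the standard Trainer API (Transformers 4.36.x). The output directory and the per-epoch evaluation strategy are assumptions; the Adam betas/epsilon are the Trainer defaults and match the values listed above.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="videomae-base-finetuned-subset-100epochs",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=5550,
    evaluation_strategy="epoch",  # assumed from the per-epoch rows in the results table
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```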

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6657 | 0.01 | 56 | 1.6248 | 0.2258 |
| 1.6109 | 1.01 | 112 | 1.5601 | 0.3917 |
| 1.5669 | 2.01 | 168 | 1.5563 | 0.3733 |
| 1.45 | 3.01 | 224 | 1.0988 | 0.5991 |
| 1.1208 | 4.01 | 280 | 1.2279 | 0.5714 |
| 1.1588 | 5.01 | 336 | 0.8424 | 0.7097 |
| 1.0834 | 6.01 | 392 | 1.1035 | 0.5346 |
| 1.2194 | 7.01 | 448 | 1.0749 | 0.4839 |
| 0.8462 | 8.01 | 504 | 0.8755 | 0.6406 |
| 1.058 | 9.01 | 560 | 0.9025 | 0.6498 |
| 1.0163 | 10.01 | 616 | 1.2588 | 0.4839 |
| 1.0639 | 11.01 | 672 | 0.8928 | 0.6359 |
| 0.9317 | 12.01 | 728 | 0.8825 | 0.6221 |
| 0.9038 | 13.01 | 784 | 0.8765 | 0.5622 |
| 0.9155 | 14.01 | 840 | 0.8431 | 0.7005 |
| 1.0731 | 15.01 | 896 | 0.8175 | 0.7005 |
| 0.6864 | 16.01 | 952 | 1.0591 | 0.5853 |
| 0.9537 | 17.01 | 1008 | 0.9703 | 0.6221 |
| 0.7499 | 18.01 | 1064 | 0.8371 | 0.5806 |
| 0.7142 | 19.01 | 1120 | 0.9132 | 0.6636 |
| 0.675 | 20.01 | 1176 | 0.7597 | 0.6728 |
| 0.604 | 21.01 | 1232 | 1.2004 | 0.5714 |
| 0.7738 | 22.01 | 1288 | 1.0633 | 0.5668 |
| 0.7651 | 23.01 | 1344 | 0.6865 | 0.6820 |
| 0.6292 | 24.01 | 1400 | 0.7607 | 0.6912 |
| 0.7387 | 25.01 | 1456 | 1.3038 | 0.5346 |
| 0.7038 | 26.01 | 1512 | 1.2832 | 0.5530 |
| 0.7565 | 27.01 | 1568 | 0.8128 | 0.7005 |
| 0.6516 | 28.01 | 1624 | 1.0893 | 0.5392 |
| 0.7074 | 29.01 | 1680 | 1.0894 | 0.5991 |
| 0.4902 | 30.01 | 1736 | 1.0695 | 0.5622 |
| 0.4563 | 31.01 | 1792 | 1.2922 | 0.5300 |
| 0.7543 | 32.01 | 1848 | 0.8960 | 0.6820 |
| 0.7467 | 33.01 | 1904 | 0.7861 | 0.7465 |
| 0.6459 | 34.01 | 1960 | 1.2835 | 0.5622 |
| 0.7296 | 35.01 | 2016 | 1.0303 | 0.5806 |
| 0.5 | 36.01 | 2072 | 0.8924 | 0.6129 |
| 0.5181 | 37.01 | 2128 | 0.8769 | 0.7235 |
| 0.5225 | 38.01 | 2184 | 0.7288 | 0.7512 |
| 0.5617 | 39.01 | 2240 | 0.6330 | 0.7926 |
| 0.677 | 40.01 | 2296 | 0.7733 | 0.7419 |
| 0.6891 | 41.01 | 2352 | 0.7463 | 0.8157 |
| 0.6662 | 42.01 | 2408 | 0.9304 | 0.7235 |
| 0.4602 | 43.01 | 2464 | 1.5115 | 0.5207 |
| 0.581 | 44.01 | 2520 | 1.2296 | 0.6175 |
| 0.5418 | 45.01 | 2576 | 1.0070 | 0.6221 |
| 0.5199 | 46.01 | 2632 | 1.1344 | 0.6083 |
| 0.6876 | 47.01 | 2688 | 0.9800 | 0.5760 |
| 0.5165 | 48.01 | 2744 | 1.3709 | 0.5069 |
| 0.5727 | 49.01 | 2800 | 0.9960 | 0.6866 |
| 0.3698 | 50.01 | 2856 | 1.2246 | 0.5484 |
| 0.5836 | 51.01 | 2912 | 0.9892 | 0.6866 |
| 0.6017 | 52.01 | 2968 | 0.9388 | 0.6590 |
| 0.4851 | 53.01 | 3024 | 1.1415 | 0.6590 |
| 0.3038 | 54.01 | 3080 | 0.9413 | 0.6959 |
| 0.6075 | 55.01 | 3136 | 1.0467 | 0.6129 |
| 0.4474 | 56.01 | 3192 | 0.8436 | 0.6866 |
| 0.3711 | 57.01 | 3248 | 0.8994 | 0.6774 |
| 0.5279 | 58.01 | 3304 | 0.8859 | 0.7189 |
| 0.6032 | 59.01 | 3360 | 1.2931 | 0.6498 |
| 0.3282 | 60.01 | 3416 | 0.9435 | 0.7143 |
| 0.3506 | 61.01 | 3472 | 1.0971 | 0.6728 |
| 0.3169 | 62.01 | 3528 | 0.9101 | 0.7512 |
| 0.438 | 63.01 | 3584 | 1.4072 | 0.6359 |
| 0.5208 | 64.01 | 3640 | 1.2648 | 0.6544 |
| 0.4563 | 65.01 | 3696 | 1.1162 | 0.6498 |
| 0.6693 | 66.01 | 3752 | 1.8558 | 0.5576 |
| 0.5599 | 67.01 | 3808 | 1.6574 | 0.5392 |
| 0.4751 | 68.01 | 3864 | 1.1883 | 0.6129 |
| 0.6489 | 69.01 | 3920 | 1.2733 | 0.6129 |
| 0.4229 | 70.01 | 3976 | 1.0994 | 0.6682 |
| 0.4194 | 71.01 | 4032 | 1.1464 | 0.6175 |
| 0.2121 | 72.01 | 4088 | 1.1798 | 0.6175 |
| 0.4106 | 73.01 | 4144 | 1.3294 | 0.5806 |
| 0.3962 | 74.01 | 4200 | 1.4209 | 0.6359 |
| 0.2963 | 75.01 | 4256 | 1.5016 | 0.5945 |
| 0.5436 | 76.01 | 4312 | 1.5647 | 0.5484 |
| 0.4115 | 77.01 | 4368 | 1.4309 | 0.6037 |
| 0.1635 | 78.01 | 4424 | 1.3660 | 0.6452 |
| 0.2931 | 79.01 | 4480 | 1.3299 | 0.6498 |
| 0.5154 | 80.01 | 4536 | 1.6550 | 0.5806 |
| 0.2993 | 81.01 | 4592 | 1.6520 | 0.5991 |
| 0.4391 | 82.01 | 4648 | 1.3823 | 0.6406 |
| 0.485 | 83.01 | 4704 | 1.4860 | 0.6037 |
| 0.3313 | 84.01 | 4760 | 1.3875 | 0.6175 |
| 0.4194 | 85.01 | 4816 | 1.4334 | 0.5899 |
| 0.4515 | 86.01 | 4872 | 1.6489 | 0.5991 |
| 0.3283 | 87.01 | 4928 | 1.4549 | 0.6083 |
| 0.1914 | 88.01 | 4984 | 1.3415 | 0.6267 |
| 0.2142 | 89.01 | 5040 | 1.6426 | 0.6267 |
| 0.3121 | 90.01 | 5096 | 1.6999 | 0.6037 |
| 0.367 | 91.01 | 5152 | 1.4683 | 0.6083 |
| 0.178 | 92.01 | 5208 | 1.4665 | 0.6267 |
| 0.3972 | 93.01 | 5264 | 1.3464 | 0.6452 |
| 0.224 | 94.01 | 5320 | 1.5009 | 0.6175 |
| 0.1848 | 95.01 | 5376 | 1.5068 | 0.6129 |
| 0.2776 | 96.01 | 5432 | 1.5383 | 0.6175 |
| 0.3506 | 97.01 | 5488 | 1.5356 | 0.6129 |
| 0.401 | 98.01 | 5544 | 1.5504 | 0.6175 |
| 0.3466 | 99.0 | 5550 | 1.5505 | 0.6175 |

Framework versions

  • Transformers 4.36.2
  • PyTorch 1.13.1
  • Datasets 2.16.1
  • Tokenizers 0.15.0