
videomae-base-finetuned-subset-0401

This model is a fine-tuned version of MCG-NJU/videomae-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6379
  • Accuracy: 0.7824
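
The card does not document preprocessing or an inference recipe, so the following is a minimal sketch of how this checkpoint could be loaded for video classification with the Transformers API; the 16-frame dummy clip and the 224x224 resolution are assumptions based on VideoMAE defaults, and a real application would read frames from a video file instead.

```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

ckpt = "Joy28/videomae-base-finetuned-subset-0401"
processor = VideoMAEImageProcessor.from_pretrained(ckpt)
model = VideoMAEForVideoClassification.from_pretrained(ckpt)

# Placeholder clip: 16 RGB frames of 224x224 (VideoMAE's default input length).
# Replace with real frames read from a video, e.g. via decord or torchvision.
video = list(np.random.randint(0, 256, (16, 224, 224, 3), dtype=np.uint8))

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```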

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • training_steps: 2775
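
A `TrainingArguments` configuration reproducing the values above might look like the sketch below. The output directory and the evaluation/save cadence are assumptions (they are not stated in the card), and the Adam betas/epsilon listed above are simply the Transformers optimizer defaults.

```python
from transformers import TrainingArguments

# Sketch of a Trainer configuration matching the listed hyperparameters.
training_args = TrainingArguments(
    output_dir="videomae-base-finetuned-subset-0401",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=2775,
    evaluation_strategy="epoch",  # assumption: the results table logs once per epoch
    save_strategy="epoch",        # assumption
)
```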

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6048 | 0.02  | 56   | 1.6213 | 0.0829 |
| 1.5891 | 1.02  | 112  | 1.5230 | 0.2811 |
| 1.4797 | 2.02  | 168  | 1.6437 | 0.1982 |
| 1.3999 | 3.02  | 224  | 0.9263 | 0.7465 |
| 1.0917 | 4.02  | 280  | 1.2308 | 0.4931 |
| 1.238  | 5.02  | 336  | 0.9406 | 0.6590 |
| 1.1525 | 6.02  | 392  | 0.8809 | 0.7051 |
| 1.0806 | 7.02  | 448  | 1.0089 | 0.5945 |
| 0.8483 | 8.02  | 504  | 0.9700 | 0.5853 |
| 0.992  | 9.02  | 560  | 1.1880 | 0.4885 |
| 0.862  | 10.02 | 616  | 0.7174 | 0.7512 |
| 1.0694 | 11.02 | 672  | 0.8598 | 0.7143 |
| 0.8885 | 12.02 | 728  | 0.8290 | 0.7097 |
| 0.8965 | 13.02 | 784  | 0.8304 | 0.7143 |
| 0.7371 | 14.02 | 840  | 0.7009 | 0.7696 |
| 0.6872 | 15.02 | 896  | 0.6768 | 0.7926 |
| 0.6022 | 16.02 | 952  | 0.7513 | 0.7373 |
| 0.9308 | 17.02 | 1008 | 0.8055 | 0.7097 |
| 0.4456 | 18.02 | 1064 | 0.7876 | 0.6728 |
| 0.6802 | 19.02 | 1120 | 0.7224 | 0.7235 |
| 0.7154 | 20.02 | 1176 | 0.7434 | 0.7051 |
| 0.503  | 21.02 | 1232 | 0.8346 | 0.6959 |
| 0.7203 | 22.02 | 1288 | 0.9694 | 0.5991 |
| 0.6799 | 23.02 | 1344 | 0.6474 | 0.7696 |
| 0.5802 | 24.02 | 1400 | 0.9573 | 0.6359 |
| 0.7047 | 25.02 | 1456 | 0.9120 | 0.6959 |
| 0.6701 | 26.02 | 1512 | 1.1690 | 0.5853 |
| 0.5514 | 27.02 | 1568 | 0.9174 | 0.6866 |
| 0.538  | 28.02 | 1624 | 0.8543 | 0.6866 |
| 0.7226 | 29.02 | 1680 | 0.7774 | 0.7465 |
| 0.4459 | 30.02 | 1736 | 0.9135 | 0.6359 |
| 0.3905 | 31.02 | 1792 | 0.8586 | 0.6728 |
| 0.7071 | 32.02 | 1848 | 0.7919 | 0.7327 |
| 0.4983 | 33.02 | 1904 | 0.7507 | 0.7512 |
| 0.5654 | 34.02 | 1960 | 0.7679 | 0.7143 |
| 0.5569 | 35.02 | 2016 | 0.8438 | 0.7097 |
| 0.3998 | 36.02 | 2072 | 0.8691 | 0.7189 |
| 0.5341 | 37.02 | 2128 | 0.8056 | 0.7604 |
| 0.4024 | 38.02 | 2184 | 0.7071 | 0.7880 |
| 0.5011 | 39.02 | 2240 | 0.8827 | 0.7005 |
| 0.5857 | 40.02 | 2296 | 0.8525 | 0.7097 |
| 0.5619 | 41.02 | 2352 | 0.8228 | 0.7512 |
| 0.6052 | 42.02 | 2408 | 0.8320 | 0.7373 |
| 0.5124 | 43.02 | 2464 | 0.8776 | 0.7419 |
| 0.3323 | 44.02 | 2520 | 0.8515 | 0.7465 |
| 0.5684 | 45.02 | 2576 | 0.9309 | 0.7097 |
| 0.4406 | 46.02 | 2632 | 0.8826 | 0.7465 |
| 0.6164 | 47.02 | 2688 | 0.8994 | 0.6959 |
| 0.4549 | 48.02 | 2744 | 0.8700 | 0.7189 |
| 0.3453 | 49.01 | 2775 | 0.8822 | 0.7189 |
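
The accuracy column above was presumably produced by a metrics callback passed to the Trainer. Since the card does not include the training script, the following is only a plausible sketch using the `evaluate` library's accuracy metric:

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # Trainer passes (logits, labels) for the evaluation set.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```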

Framework versions

  • Transformers 4.36.2
  • PyTorch 1.13.1
  • Datasets 2.16.1
  • Tokenizers 0.15.0