
videomae-base-finetuned-scratch_1

This model is a fine-tuned version of MCG-NJU/videomae-base on an unknown dataset. It achieves the following results on the evaluation set (these figures match the step-24360 checkpoint in the table below, the best by accuracy, rather than the final training step):

  • Loss: 1.6254
  • Accuracy: 0.7624
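Since the card does not yet include usage instructions, here is a minimal inference sketch, assuming the checkpoint loads with the standard `transformers` video-classification API. The dummy 16-frame clip, the 224×224 resolution, and the environment-variable gate are illustrative assumptions; the label set is unknown because the training dataset is not documented.

```python
import os

import numpy as np


def make_dummy_clip(num_frames=16, height=224, width=224):
    """Build a random RGB clip shaped the way VideoMAE's processor
    expects video input: a list of frames, each (H, W, 3) uint8."""
    return [
        np.random.randint(0, 256, (height, width, 3), dtype=np.uint8)
        for _ in range(num_frames)
    ]


# Set RUN_VIDEOMAE_DEMO=1 to actually download the checkpoint and run it.
# Requires `torch` and `transformers` (the card lists 4.39.0 / 2.2.1).
if os.environ.get("RUN_VIDEOMAE_DEMO"):
    import torch
    from transformers import AutoImageProcessor, VideoMAEForVideoClassification

    repo = "dat96/videomae-base-finetuned-scratch_1"
    processor = AutoImageProcessor.from_pretrained(repo)
    model = VideoMAEForVideoClassification.from_pretrained(repo)

    inputs = processor(make_dummy_clip(), return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Label names come from the (undocumented) fine-tuning dataset.
    print(model.config.id2label[int(logits.argmax(-1))])
```

For real predictions, replace the dummy clip with 16 frames sampled from an actual video.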

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 12
  • eval_batch_size: 12
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • training_steps: 30192
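The warmup ratio and total step count above imply int(0.1 × 30192) = 3019 warmup steps. A small sketch of the resulting learning-rate schedule, assuming it follows the usual linear-warmup-then-linear-decay shape of the `linear` scheduler:

```python
def linear_lr(step, base_lr=5e-5, total_steps=30192, warmup_ratio=0.1):
    """Linear warmup from 0 to base_lr, then linear decay back to 0."""
    warmup_steps = int(total_steps * warmup_ratio)  # 3019 steps here
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(
        0.0, (total_steps - step) / max(1, total_steps - warmup_steps)
    )


print(linear_lr(3019))   # peak: 5e-05, reached at the end of warmup
print(linear_lr(30192))  # 0.0 at the final training step
```

So the learning rate peaks at 5e-05 roughly three epochs in (step 3019, given 420 steps per epoch) and decays to zero by step 30192.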

Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|---------------|-------|-------|-----------------|----------|
| 0.5872        | 0.01  | 420   | 0.6884          | 0.5518   |
| 0.5358        | 1.01  | 840   | 0.6766          | 0.6194   |
| 0.5339        | 2.01  | 1260  | 0.6295          | 0.6430   |
| 0.4483        | 3.01  | 1680  | 0.7323          | 0.5957   |
| 0.4654        | 4.01  | 2100  | 0.7020          | 0.6486   |
| 0.3897        | 5.01  | 2520  | 0.7636          | 0.6498   |
| 0.3386        | 6.01  | 2940  | 0.8877          | 0.6610   |
| 0.3601        | 7.01  | 3360  | 0.8791          | 0.6486   |
| 0.3401        | 8.01  | 3780  | 0.7403          | 0.6633   |
| 0.3113        | 9.01  | 4200  | 0.7316          | 0.6959   |
| 0.2096        | 10.01 | 4620  | 0.9519          | 0.6982   |
| 0.1537        | 11.01 | 5040  | 0.9116          | 0.7016   |
| 0.1113        | 12.01 | 5460  | 1.0047          | 0.6971   |
| 0.3247        | 13.01 | 5880  | 1.2167          | 0.6847   |
| 0.1710        | 14.01 | 6300  | 0.9337          | 0.7027   |
| 0.3076        | 15.01 | 6720  | 1.1811          | 0.7207   |
| 0.2927        | 16.01 | 7140  | 1.0953          | 0.7218   |
| 0.1679        | 17.01 | 7560  | 1.2948          | 0.7207   |
| 0.1523        | 18.01 | 7980  | 1.3632          | 0.7016   |
| 0.1059        | 19.01 | 8400  | 1.2915          | 0.7185   |
| 0.1741        | 20.01 | 8820  | 1.2315          | 0.7432   |
| 0.0629        | 21.01 | 9240  | 1.3948          | 0.7230   |
| 0.0075        | 22.01 | 9660  | 1.1435          | 0.7376   |
| 0.1692        | 23.01 | 10080 | 1.3998          | 0.7128   |
| 0.0347        | 24.01 | 10500 | 1.4803          | 0.7027   |
| 0.0396        | 25.01 | 10920 | 1.6457          | 0.7005   |
| 0.0074        | 26.01 | 11340 | 1.5602          | 0.7050   |
| 0.1256        | 27.01 | 11760 | 1.3965          | 0.7173   |
| 0.0021        | 28.01 | 12180 | 1.4514          | 0.7342   |
| 0.0476        | 29.01 | 12600 | 1.2915          | 0.7173   |
| 0.0065        | 30.01 | 13020 | 1.3397          | 0.7095   |
| 0.0435        | 31.01 | 13440 | 1.8912          | 0.6948   |
| 0.0268        | 32.01 | 13860 | 1.5767          | 0.7286   |
| 0.0487        | 33.01 | 14280 | 1.6439          | 0.6948   |
| 0.0448        | 34.01 | 14700 | 1.5990          | 0.7354   |
| 0.0166        | 35.01 | 15120 | 1.3866          | 0.7466   |
| 0.1029        | 36.01 | 15540 | 1.7427          | 0.7106   |
| 0.0678        | 37.01 | 15960 | 1.4194          | 0.7365   |
| 0.0007        | 38.01 | 16380 | 1.9137          | 0.7072   |
| 0.0602        | 39.01 | 16800 | 1.6180          | 0.7309   |
| 0.0977        | 40.01 | 17220 | 1.5710          | 0.7354   |
| 0.0606        | 41.01 | 17640 | 1.3908          | 0.7342   |
| 0.1046        | 42.01 | 18060 | 1.7846          | 0.7252   |
| 0.0004        | 43.01 | 18480 | 1.6396          | 0.7241   |
| 0.0881        | 44.01 | 18900 | 1.6206          | 0.7196   |
| 0.0934        | 45.01 | 19320 | 1.6994          | 0.7320   |
| 0.0001        | 46.01 | 19740 | 2.0068          | 0.7162   |
| 0.0240        | 47.01 | 20160 | 1.5350          | 0.7376   |
| 0.0170        | 48.01 | 20580 | 1.8864          | 0.7162   |
| 0.1210        | 49.01 | 21000 | 1.6862          | 0.7230   |
| 0.0001        | 50.01 | 21420 | 1.8462          | 0.7365   |
| 0.0319        | 51.01 | 21840 | 1.9072          | 0.7286   |
| 0.0650        | 52.01 | 22260 | 1.6631          | 0.7556   |
| 0.0424        | 53.01 | 22680 | 1.9177          | 0.7399   |
| 0.0001        | 54.01 | 23100 | 1.8990          | 0.7365   |
| 0.0000        | 55.01 | 23520 | 2.0622          | 0.7455   |
| 0.0582        | 56.01 | 23940 | 1.4821          | 0.7444   |
| 0.0001        | 57.01 | 24360 | 1.6254          | 0.7624   |
| 0.0000        | 58.01 | 24780 | 1.8024          | 0.7545   |
| 0.0486        | 59.01 | 25200 | 1.6804          | 0.7545   |
| 0.0001        | 60.01 | 25620 | 1.7991          | 0.7523   |
| 0.0000        | 61.01 | 26040 | 1.8281          | 0.7511   |
| 0.0022        | 62.01 | 26460 | 1.8172          | 0.7500   |
| 0.0001        | 63.01 | 26880 | 1.9532          | 0.7489   |
| 0.0000        | 64.01 | 27300 | 1.9209          | 0.7477   |
| 0.0000        | 65.01 | 27720 | 1.9100          | 0.7579   |
| 0.0000        | 66.01 | 28140 | 1.9572          | 0.7534   |
| 0.0007        | 67.01 | 28560 | 2.0380          | 0.7500   |
| 0.0627        | 68.01 | 28980 | 1.8911          | 0.7579   |
| 0.0002        | 69.01 | 29400 | 1.9255          | 0.7500   |
| 0.0000        | 70.01 | 29820 | 1.9195          | 0.7568   |
| 0.0000        | 71.01 | 30192 | 1.9317          | 0.7500   |
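Note that the headline metrics (loss 1.6254, accuracy 0.7624) come from the step-24360 row, not the final step: validation loss rises while training loss goes to zero over the last ~40 epochs, so checkpoint selection matters. A small sketch of picking the best row by accuracy, using a handful of rows transcribed from the table above:

```python
# (step, validation loss, accuracy) — a few checkpoints from the table
rows = [
    (21840, 1.9072, 0.7286),
    (22260, 1.6631, 0.7556),
    (24360, 1.6254, 0.7624),
    (30192, 1.9317, 0.7500),
]

# Select the checkpoint with the highest validation accuracy.
best = max(rows, key=lambda r: r[2])
print(best)  # (24360, 1.6254, 0.7624) — matches the headline numbers
```

This mirrors what the Trainer's best-model selection does when tracking accuracy.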

Framework versions

  • Transformers 4.39.0
  • Pytorch 2.2.1+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2
Model size

  • 86.2M params
  • Tensor type: F32
  • Format: Safetensors

Model tree

  • dat96/videomae-base-finetuned-scratch_1, fine-tuned from MCG-NJU/videomae-base