|
--- |
|
license: cc-by-nc-4.0 |
|
base_model: MCG-NJU/videomae-base-finetuned-kinetics |
|
tags: |
|
- generated_from_trainer |
|
metrics: |
|
- accuracy |
|
- precision |
|
- recall |
|
model-index: |
|
- name: videomae-base-finetuned-kinetics-fight_22-01-2024 |
|
results: [] |
|
--- |
|
|
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You |
|
should probably proofread and complete it, then remove this comment. --> |
|
|
|
# videomae-base-finetuned-kinetics-fight_22-01-2024 |
|
|
|
This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co./MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset. |
|
It achieves the following results on the evaluation set: |
|
- Loss: 0.2265 |
|
- Accuracy: 0.9159 |
|
- Precision: 0.9507 |
|
- Recall: 0.8773 |
|
|
|
## Model description |
|
|
|
This model is a VideoMAE video classification model fine-tuned from [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co./MCG-NJU/videomae-base-finetuned-kinetics). The model name and the reported precision/recall metrics suggest a binary fight-detection task on short video clips, but the task and the training data were not documented by the author.
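
A minimal inference sketch is shown below. It assumes the standard `transformers` VideoMAE API; the checkpoint path is a placeholder for wherever this model is hosted, and the dummy clip stands in for 16 decoded video frames.

```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

# Placeholder path: replace with the actual repo id or a local directory containing this checkpoint.
ckpt = "videomae-base-finetuned-kinetics-fight_22-01-2024"

processor = VideoMAEImageProcessor.from_pretrained(ckpt)
model = VideoMAEForVideoClassification.from_pretrained(ckpt)

# Dummy clip: 16 RGB frames (VideoMAE's default clip length); replace with real decoded frames.
video = list(np.random.randint(0, 255, (16, 224, 224, 3), dtype=np.uint8))

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```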
|
|
|
## Intended uses & limitations |
|
|
|
More information needed |
|
|
|
## Training and evaluation data |
|
|
|
More information needed |
|
|
|
## Training procedure |
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
|
- learning_rate: 5e-07 |
|
- train_batch_size: 14 |
|
- eval_batch_size: 14 |
|
- seed: 42 |
|
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
|
- lr_scheduler_type: linear |
|
- lr_scheduler_warmup_ratio: 0.1 |
|
- training_steps: 15820 |
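
A sketch of how these settings map onto `transformers` `TrainingArguments`; the `output_dir` and the evaluation cadence are assumptions, everything else mirrors the values listed above.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="videomae-base-finetuned-kinetics-fight_22-01-2024",  # assumed
    learning_rate=5e-7,
    per_device_train_batch_size=14,
    per_device_eval_batch_size=14,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=15820,
    evaluation_strategy="steps",  # assumed; the results table reports metrics every 792 steps
    eval_steps=792,               # assumed from the evaluation cadence in the results table
)
```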
|
|
|
### Training results |
|
|
|
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | |
|
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:| |
|
| 0.6066 | 0.05 | 792 | 0.5969 | 0.7509 | 0.7817 | 0.6961 | |
|
| 0.429 | 1.05 | 1584 | 0.3898 | 0.8422 | 0.8937 | 0.7767 | |
|
| 0.184 | 2.05 | 2376 | 0.2700 | 0.8859 | 0.9281 | 0.8367 | |
|
| 0.1911 | 3.05 | 3168 | 0.2280 | 0.9028 | 0.9405 | 0.8601 | |
|
| 0.1115 | 4.05 | 3960 | 0.2218 | 0.9063 | 0.9436 | 0.8642 | |
|
| 0.1799 | 5.05 | 4752 | 0.2293 | 0.9090 | 0.9604 | 0.8532 | |
|
| 0.1282 | 6.05 | 5544 | 0.2265 | 0.9159 | 0.9507 | 0.8773 | |
|
| 0.1211 | 7.05 | 6336 | 0.2554 | 0.9087 | 0.9562 | 0.8567 | |
|
| 0.076 | 8.05 | 7128 | 0.2738 | 0.9063 | 0.9588 | 0.8491 | |
|
| 0.1152 | 9.05 | 7920 | 0.2785 | 0.9090 | 0.9541 | 0.8594 | |
|
| 0.0281 | 10.05 | 8712 | 0.2852 | 0.9118 | 0.9537 | 0.8656 | |
|
| 0.0806 | 11.05 | 9504 | 0.2994 | 0.9094 | 0.9548 | 0.8594 | |
|
| 0.0755 | 12.05 | 10296 | 0.3124 | 0.9104 | 0.9556 | 0.8608 | |
|
| 0.0986 | 13.05 | 11088 | 0.3134 | 0.9114 | 0.9462 | 0.8725 | |
|
| 0.0222 | 14.05 | 11880 | 0.3241 | 0.9111 | 0.9550 | 0.8629 | |
|
| 0.0272 | 15.05 | 12672 | 0.3269 | 0.9125 | 0.9503 | 0.8704 | |
|
| 0.0657 | 16.05 | 13464 | 0.3401 | 0.9097 | 0.9556 | 0.8594 | |
|
| 0.1083 | 17.05 | 14256 | 0.3424 | 0.9097 | 0.9549 | 0.8601 | |
|
| 0.0059 | 18.05 | 15048 | 0.3461 | 0.9094 | 0.9555 | 0.8587 | |
|
| 0.0143 | 19.05 | 15820 | 0.3462 | 0.9094 | 0.9555 | 0.8587 | |
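
The accuracy, precision, and recall columns above are standard binary-classification metrics. A typical `compute_metrics` function that would produce them with the `evaluate` library (not necessarily the author's exact code) looks like this:

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")
precision = evaluate.load("precision")
recall = evaluate.load("recall")

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair the Trainer passes at evaluation time.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy.compute(predictions=preds, references=labels)["accuracy"],
        "precision": precision.compute(predictions=preds, references=labels)["precision"],
        "recall": recall.compute(predictions=preds, references=labels)["recall"],
    }
```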
|
|
|
|
|
### Framework versions |
|
|
|
- Transformers 4.38.2 |
|
- Pytorch 2.1.0+cu121 |
|
- Datasets 2.18.0 |
|
- Tokenizers 0.15.2 |
|
|