---
library_name: transformers
base_model: motheecreator/vit-Facial-Expression-Recognition
tags:
  - generated_from_trainer
metrics:
  - accuracy
model-index:
  - name: FER-Facial-Expression-Recognition
    results: []
---

FER-Facial-Expression-Recognition

This model is a fine-tuned version of motheecreator/vit-Facial-Expression-Recognition; the fine-tuning dataset is not specified in this card. It achieves the following results on the evaluation set:

  • Loss: 0.4710
  • Accuracy: 0.8474
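
A minimal inference sketch is shown below. The repository id (carlosleao/FER-Facial-Expression-Recognition) and the input image path are assumptions for illustration; substitute the Hub id or local checkpoint path you are actually loading from.

```python
# Hedged inference sketch: loads the fine-tuned ViT classifier through the
# image-classification pipeline. The repo id below is an assumption; replace
# it with the actual Hub id or a local checkpoint directory.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="carlosleao/FER-Facial-Expression-Recognition",  # hypothetical repo id
)

# The pipeline accepts a file path, URL, or PIL.Image; "face.jpg" is illustrative.
predictions = classifier("face.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts, highest score first
```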

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the TrainingArguments sketch after this list):

  • learning_rate: 3e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 256
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 1000
  • num_epochs: 10
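
The sketch below shows how these settings map onto transformers.TrainingArguments. It is a reconstruction from the list above, not the original training script; output_dir is assumed, and any argument not listed keeps its library default.

```python
# Hedged sketch of TrainingArguments mirroring the hyperparameters listed above.
# output_dir is assumed; unlisted arguments keep their library defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="FER-Facial-Expression-Recognition",  # assumed output path
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=8,   # 32 * 8 = 256 effective train batch size
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=10,
    seed=42,
    optim="adamw_torch",             # AdamW with betas=(0.9, 0.999), eps=1e-08
)
```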

Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.8868        | 0.8959 | 100  | 1.7638          | 0.5923   |
| 1.2277        | 1.7962 | 200  | 1.1092          | 0.7253   |
| 0.8414        | 2.6965 | 300  | 0.8105          | 0.8041   |
| 0.7076        | 3.5969 | 400  | 0.6746          | 0.8256   |
| 0.6079        | 4.4972 | 500  | 0.6111          | 0.8287   |
| 0.5624        | 5.3975 | 600  | 0.5529          | 0.8379   |
| 0.5254        | 6.2979 | 700  | 0.5266          | 0.8399   |
| 0.4784        | 7.1982 | 800  | 0.4978          | 0.8433   |
| 0.4634        | 8.0985 | 900  | 0.4844          | 0.8458   |
| 0.4305        | 8.9944 | 1000 | 0.4710          | 0.8474   |
| 0.3995        | 9.8947 | 1100 | 0.4381          | 0.8564   |
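
The accuracy column is standard classification accuracy on the evaluation set. A typical way to produce it with Trainer is a compute_metrics callback like the hedged sketch below; the exact function used for this run is not documented in the card.

```python
# Hedged sketch of an accuracy compute_metrics callback for Trainer.
# Argmax over the logits gives the predicted class for each example.
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```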

Framework versions

  • Transformers 4.46.2
  • PyTorch 2.5.1+cu124
  • Datasets 3.1.0
  • Tokenizers 0.20.3