---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
  - image-classification
  - generated_from_trainer
metrics:
  - accuracy
model-index:
  - name: vit-Facial-Confidence
    results: []
---

# vit-Facial-Confidence

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the FacialConfidence dataset. It achieves the following results on the evaluation set:

- Loss: 0.2560
- Accuracy: 0.8970

## Model description

Facial Confidence is an image classification model that takes a black-and-white headshot of a person and classifies it as confident or unconfident.
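
A minimal inference sketch follows, assuming the checkpoint is published as `march18/FacialConfidence`; the repo id and image path are placeholders inferred from this card, so substitute the actual ones.

```python
from PIL import Image
from transformers import pipeline

# Repo id assumed from this card's title; replace it if the checkpoint lives elsewhere.
classifier = pipeline("image-classification", model="march18/FacialConfidence")

# "headshot.jpg" is a placeholder for a black-and-white, face-filling headshot.
image = Image.open("headshot.jpg").convert("RGB")  # the ViT processor expects 3 channels

print(classifier(image))
# Returns label/score pairs; the label names come from the model's config
# (per this card, roughly "confident" vs. "unconfident").
```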

## Intended uses & limitations

The model is intended to help with behavioral analysis tasks. It is limited to black-and-white images that are zoomed-in headshots of a single person. For best results, the input should be zoomed in on the subject's face as much as possible without cutting off any part of their head.
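
As a rough illustration of that constraint, the sketch below converts a photo to black and white and crops it to a face bounding box before classification; the crop coordinates are placeholders standing in for the output of whatever face detector you use.

```python
from PIL import Image

def prepare_headshot(path: str, face_box: tuple[int, int, int, int]) -> Image.Image:
    """Convert an image to black and white and crop it tightly around the head.

    `face_box` is a (left, top, right, bottom) pixel box, e.g. from a face
    detector; the example values below are placeholders, not part of this card.
    """
    image = Image.open(path)
    image = image.convert("L")      # grayscale, matching the training data
    image = image.crop(face_box)    # zoom in on the head without cutting it off
    return image.convert("RGB")     # back to 3 channels for the ViT image processor

headshot = prepare_headshot("photo.jpg", face_box=(120, 40, 420, 400))
```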

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
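
A hedged sketch of how these settings map onto `TrainingArguments` is below; the argument names match Transformers 4.44, while the output directory and the 100-step evaluation cadence are assumptions read off this card, not the author's actual training script.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-Facial-Confidence",  # assumed; matches the model name on this card
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=4,
    fp16=True,                  # "Native AMP" mixed precision
    eval_strategy="steps",      # the results table below logs every 100 steps
    eval_steps=100,
    logging_steps=100,
)
```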

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.6103        | 0.0557 | 100  | 0.5715          | 0.7310   |
| 0.554         | 0.1114 | 200  | 0.5337          | 0.7194   |
| 0.4275        | 0.1671 | 300  | 0.5142          | 0.7549   |
| 0.5831        | 0.2228 | 400  | 0.5570          | 0.7345   |
| 0.5804        | 0.2786 | 500  | 0.4909          | 0.7660   |
| 0.5652        | 0.3343 | 600  | 0.4956          | 0.7764   |
| 0.4513        | 0.3900 | 700  | 0.4294          | 0.7972   |
| 0.4217        | 0.4457 | 800  | 0.4619          | 0.7924   |
| 0.435         | 0.5014 | 900  | 0.4563          | 0.7901   |
| 0.3943        | 0.5571 | 1000 | 0.4324          | 0.7917   |
| 0.4136        | 0.6128 | 1100 | 0.4131          | 0.8110   |
| 0.3302        | 0.6685 | 1200 | 0.4516          | 0.8054   |
| 0.4945        | 0.7242 | 1300 | 0.4135          | 0.8164   |
| 0.3729        | 0.7799 | 1400 | 0.4010          | 0.8139   |
| 0.4865        | 0.8357 | 1500 | 0.4145          | 0.8174   |
| 0.4011        | 0.8914 | 1600 | 0.4098          | 0.8112   |
| 0.4287        | 0.9471 | 1700 | 0.3914          | 0.8181   |
| 0.3644        | 1.0028 | 1800 | 0.3948          | 0.8188   |
| 0.3768        | 1.0585 | 1900 | 0.4044          | 0.8266   |
| 0.383         | 1.1142 | 2000 | 0.4363          | 0.8064   |
| 0.4011        | 1.1699 | 2100 | 0.4424          | 0.8025   |
| 0.4079        | 1.2256 | 2200 | 0.4384          | 0.7853   |
| 0.2791        | 1.2813 | 2300 | 0.4491          | 0.8089   |
| 0.3159        | 1.3370 | 2400 | 0.3863          | 0.8274   |
| 0.4306        | 1.3928 | 2500 | 0.3944          | 0.8158   |
| 0.3386        | 1.4485 | 2600 | 0.3835          | 0.8305   |
| 0.395         | 1.5042 | 2700 | 0.3812          | 0.8261   |
| 0.3041        | 1.5599 | 2800 | 0.3736          | 0.8312   |
| 0.3365        | 1.6156 | 2900 | 0.4420          | 0.8097   |
| 0.3697        | 1.6713 | 3000 | 0.3808          | 0.8353   |
| 0.3661        | 1.7270 | 3100 | 0.4046          | 0.8084   |
| 0.3208        | 1.7827 | 3200 | 0.4042          | 0.8328   |
| 0.3511        | 1.8384 | 3300 | 0.4113          | 0.8192   |
| 0.3246        | 1.8942 | 3400 | 0.3611          | 0.8377   |
| 0.3616        | 1.9499 | 3500 | 0.4207          | 0.8231   |
| 0.2726        | 2.0056 | 3600 | 0.3650          | 0.8342   |
| 0.1879        | 2.0613 | 3700 | 0.4334          | 0.8359   |
| 0.2981        | 2.1170 | 3800 | 0.3657          | 0.8435   |
| 0.227         | 2.1727 | 3900 | 0.3948          | 0.8399   |
| 0.3184        | 2.2284 | 4000 | 0.4229          | 0.8377   |
| 0.2391        | 2.2841 | 4100 | 0.3824          | 0.8405   |
| 0.2019        | 2.3398 | 4200 | 0.4628          | 0.8345   |
| 0.1931        | 2.3955 | 4300 | 0.3848          | 0.8448   |
| 0.238         | 2.4513 | 4400 | 0.3948          | 0.8398   |
| 0.2633        | 2.5070 | 4500 | 0.3779          | 0.8440   |
| 0.1829        | 2.5627 | 4600 | 0.3901          | 0.8455   |
| 0.2286        | 2.6184 | 4700 | 0.3797          | 0.8481   |
| 0.2123        | 2.6741 | 4800 | 0.4203          | 0.8502   |
| 0.266         | 2.7298 | 4900 | 0.4073          | 0.8455   |
| 0.1768        | 2.7855 | 5000 | 0.3750          | 0.8498   |
| 0.1659        | 2.8412 | 5100 | 0.3906          | 0.8427   |
| 0.1644        | 2.8969 | 5200 | 0.3833          | 0.8466   |
| 0.241         | 2.9526 | 5300 | 0.4071          | 0.8476   |
| 0.16          | 3.0084 | 5400 | 0.3691          | 0.8530   |
| 0.0788        | 3.0641 | 5500 | 0.4656          | 0.8514   |
| 0.1244        | 3.1198 | 5600 | 0.4990          | 0.8484   |
| 0.1423        | 3.1755 | 5700 | 0.5219          | 0.8475   |
| 0.1279        | 3.2312 | 5800 | 0.5687          | 0.8515   |
| 0.0974        | 3.2869 | 5900 | 0.5386          | 0.8458   |
| 0.065         | 3.3426 | 6000 | 0.5215          | 0.8454   |
| 0.0497        | 3.3983 | 6100 | 0.5161          | 0.8483   |
| 0.1871        | 3.4540 | 6200 | 0.5148          | 0.8523   |
| 0.0891        | 3.5097 | 6300 | 0.4915          | 0.8527   |
| 0.1375        | 3.5655 | 6400 | 0.5067          | 0.8509   |
| 0.1333        | 3.6212 | 6500 | 0.5272          | 0.8532   |
| 0.2635        | 3.6769 | 6600 | 0.5170          | 0.8516   |
| 0.0375        | 3.7326 | 6700 | 0.5148          | 0.8534   |
| 0.1286        | 3.7883 | 6800 | 0.4945          | 0.8543   |
| 0.091         | 3.8440 | 6900 | 0.4948          | 0.8540   |
| 0.1088        | 3.8997 | 7000 | 0.4985          | 0.8532   |
| 0.0598        | 3.9554 | 7100 | 0.4969          | 0.8514   |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1