---
base_model: facebook/wav2vec2-base-960h
library_name: transformers
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-960h-EMOPIA-10sec-full-50epoc
  results: []
---

# wav2vec2-base-960h-EMOPIA-10sec-full-50epoc

This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co./facebook/wav2vec2-base-960h) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2688
- Accuracy: 0.8630

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step   | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 1.2025 | 1.0 | 2248 | 1.3446 | 0.5196 |
| 1.415 | 2.0 | 4496 | 1.6350 | 0.6032 |
| 1.4176 | 3.0 | 6744 | 1.6250 | 0.6584 |
| 1.384 | 4.0 | 8992 | 1.3694 | 0.7242 |
| 1.3658 | 5.0 | 11240 | 1.4331 | 0.7100 |
| 1.2763 | 6.0 | 13488 | 1.3311 | 0.7438 |
| 1.2175 | 7.0 | 15736 | 1.2727 | 0.7580 |
| 1.1276 | 8.0 | 17984 | 1.4520 | 0.7331 |
| 1.1053 | 9.0 | 20232 | 1.2134 | 0.7722 |
| 1.0314 | 10.0 | 22480 | 1.2143 | 0.7829 |
| 1.0029 | 11.0 | 24728 | 1.3312 | 0.7811 |
| 0.9108 | 12.0 | 26976 | 1.2228 | 0.8025 |
| 0.8335 | 13.0 | 29224 | 1.1526 | 0.8078 |
| 0.8514 | 14.0 | 31472 | 0.9904 | 0.8203 |
| 0.7389 | 15.0 | 33720 | 1.3000 | 0.8025 |
| 0.6993 | 16.0 | 35968 | 1.0873 | 0.8203 |
| 0.6177 | 17.0 | 38216 | 1.0856 | 0.8327 |
| 0.641 | 18.0 | 40464 | 1.3224 | 0.7972 |
| 0.611 | 19.0 | 42712 | 1.1800 | 0.8292 |
| 0.5744 | 20.0 | 44960 | 1.2937 | 0.8096 |
| 0.5008 | 21.0 | 47208 | 1.1565 | 0.8416 |
| 0.4396 | 22.0 | 49456 | 1.3663 | 0.8149 |
| 0.4313 | 23.0 | 51704 | 1.3267 | 0.8221 |
| 0.3954 | 24.0 | 53952 | 1.1824 | 0.8470 |
| 0.4217 | 25.0 | 56200 | 1.5586 | 0.8043 |
| 0.3797 | 26.0 | 58448 | 1.1746 | 0.8523 |
| 0.358 | 27.0 | 60696 | 1.1937 | 0.8452 |
| 0.2963 | 28.0 | 62944 | 1.4036 | 0.8310 |
| 0.3338 | 29.0 | 65192 | 1.3134 | 0.8505 |
| 0.2565 | 30.0 | 67440 | 1.4806 | 0.8345 |
| 0.2798 | 31.0 | 69688 | 1.5173 | 0.8310 |
| 0.2674 | 32.0 | 71936 | 1.5758 | 0.8132 |
| 0.2334 | 33.0 | 74184 | 1.3401 | 0.8559 |
| 0.2352 | 34.0 | 76432 | 1.2717 | 0.8470 |
| 0.2406 | 35.0 | 78680 | 1.6163 | 0.8256 |
| 0.2208 | 36.0 | 80928 | 1.3815 | 0.8505 |
| 0.1796 | 37.0 | 83176 | 1.3929 | 0.8577 |
| 0.2127 | 38.0 | 85424 | 1.5271 | 0.8274 |
| 0.1748 | 39.0 | 87672 | 1.5069 | 0.8416 |
| 0.1612 | 40.0 | 89920 | 1.3966 | 0.8470 |
| 0.1757 | 41.0 | 92168 | 1.4628 | 0.8470 |
| 0.1664 | 42.0 | 94416 | 1.3363 | 0.8523 |
| 0.1313 | 43.0 | 96664 | 1.4388 | 0.8434 |
| 0.1272 | 44.0 | 98912 | 1.3670 | 0.8630 |
| 0.1127 | 45.0 | 101160 | 1.4244 | 0.8541 |
| 0.1062 | 46.0 | 103408 | 1.3812 | 0.8541 |
| 0.0924 | 47.0 | 105656 | 1.4448 | 0.8541 |
| 0.0998 | 48.0 | 107904 | 1.3051 | 0.8683 |
| 0.1055 | 49.0 | 110152 | 1.2630 | 0.8701 |
| 0.1073 | 50.0 | 112400 | 1.2688 | 0.8630 |

### Framework versions

- Transformers 4.45.1
- Pytorch 2.4.1+cu118
- Datasets 3.0.1
- Tokenizers 0.20.0
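
For reference, the hyperparameters listed above roughly correspond to the `TrainingArguments` configuration below. This is a hedged reconstruction rather than the original training script: the output directory, the per-epoch evaluation setting, and the use of the `Trainer` API are assumptions inferred from the hyperparameter list and the per-epoch results table.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the configuration implied by the
# "Training hyperparameters" section; only the listed values are known.
training_args = TrainingArguments(
    output_dir="wav2vec2-base-960h-EMOPIA-10sec-full-50epoc",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    fp16=True,              # "Native AMP" mixed-precision training
    eval_strategy="epoch",  # assumed: the table reports one evaluation per epoch
)
```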
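
Since the model description and intended-use sections are still placeholders, the snippet below is only a minimal sketch of how a wav2vec2 audio-classification checkpoint such as this one is commonly loaded for inference with `transformers`. The repository id, the example file name, and the use of torchaudio for decoding are assumptions.

```python
import torch
import torchaudio  # assumed here only for loading and resampling the example clip
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

model_id = "wav2vec2-base-960h-EMOPIA-10sec-full-50epoc"  # replace with the actual repo id
extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModelForAudioClassification.from_pretrained(model_id)
model.eval()

# wav2vec2-base-960h expects mono 16 kHz audio; downmix and resample accordingly.
waveform, sample_rate = torchaudio.load("example.wav")  # hypothetical input file
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000).mean(dim=0)

inputs = extractor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_id])
```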