---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: addison6
  results: []
---

# addison6

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co./facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4278
- Wer: 0.3784

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.0707        | 1.03  | 500   | 2.8972          | 0.9998 |
| 1.8869        | 2.05  | 1000  | 1.3286          | 0.6772 |
| 1.218         | 3.08  | 1500  | 1.0999          | 0.5714 |
| 1.0183        | 4.11  | 2000  | 1.0781          | 0.5523 |
| 0.8743        | 5.13  | 2500  | 0.9991          | 0.4951 |
| 0.7561        | 6.16  | 3000  | 1.0861          | 0.4869 |
| 0.6883        | 7.19  | 3500  | 1.1915          | 0.4704 |
| 0.6185        | 8.21  | 4000  | 1.1553          | 0.4634 |
| 0.5761        | 9.24  | 4500  | 1.1144          | 0.4513 |
| 0.5315        | 10.27 | 5000  | 1.1818          | 0.4298 |
| 0.4982        | 11.29 | 5500  | 1.1973          | 0.4329 |
| 0.4719        | 12.32 | 6000  | 1.1456          | 0.4259 |
| 0.4369        | 13.35 | 6500  | 1.1701          | 0.4292 |
| 0.4083        | 14.37 | 7000  | 1.1929          | 0.4132 |
| 0.3886        | 15.4  | 7500  | 1.3307          | 0.4163 |
| 0.3752        | 16.43 | 8000  | 1.3405          | 0.4081 |
| 0.349         | 17.45 | 8500  | 1.3283          | 0.3952 |
| 0.3261        | 18.48 | 9000  | 1.2956          | 0.4128 |
| 0.3094        | 19.51 | 9500  | 1.2671          | 0.4004 |
| 0.3041        | 20.53 | 10000 | 1.3534          | 0.3964 |
| 0.2796        | 21.56 | 10500 | 1.3730          | 0.3899 |
| 0.25          | 22.59 | 11000 | 1.3952          | 0.3942 |
| 0.2303        | 23.61 | 11500 | 1.4792          | 0.3923 |
| 0.2321        | 24.64 | 12000 | 1.4228          | 0.3847 |
| 0.2           | 25.67 | 12500 | 1.4469          | 0.3837 |
| 0.2009        | 26.69 | 13000 | 1.4532          | 0.3820 |
| 0.195         | 27.72 | 13500 | 1.4329          | 0.3821 |
| 0.1804        | 28.75 | 14000 | 1.4265          | 0.3799 |
| 0.1713        | 29.77 | 14500 | 1.4278          | 0.3784 |

### Framework versions

- Transformers 4.17.0
- PyTorch 2.0.0+cu118
- Datasets 1.18.3
- Tokenizers 0.13.2
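
### How to use

Since the base checkpoint is wav2vec2-base and the evaluation metric is WER, the model is presumably a CTC-based speech recognizer. The snippet below is a minimal inference sketch under that assumption; `addison6` is used as a placeholder repo id and `sample.wav` as a placeholder audio file, and the model is assumed to expect 16 kHz mono audio like the base model.

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Placeholder repo id; replace with the actual Hub path of this checkpoint.
model_id = "addison6"

processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# wav2vec2-base expects 16 kHz mono input; resample on load.
speech, _ = librosa.load("sample.wav", sr=16000, mono=True)

inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: take the argmax per frame, then collapse repeats/blanks.
pred_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(pred_ids)[0]
print(transcription)
```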