---
language:
- ko
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
base_model: openai/whisper-small
datasets:
- Marcusxx/gwanju
model-index:
- name: gwanju_small_model
  results: []
---

# gwanju_small_model

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co./openai/whisper-small) on the Marcusxx/gwanju dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6780
- Cer: 250.8533

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- training_steps: 10000
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch   | Step  | Validation Loss | Cer      |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 0.4654        | 0.5928  | 500   | 0.4638          | 87.7337  |
| 0.333         | 1.1855  | 1000  | 0.4244          | 178.8377 |
| 0.3286        | 1.7783  | 1500  | 0.4122          | 177.5061 |
| 0.2259        | 2.3711  | 2000  | 0.4145          | 308.2453 |
| 0.2364        | 2.9638  | 2500  | 0.4093          | 108.4814 |
| 0.1643        | 3.5566  | 3000  | 0.4257          | 204.9992 |
| 0.1078        | 4.1494  | 3500  | 0.4426          | 293.1846 |
| 0.111         | 4.7421  | 4000  | 0.4517          | 169.9893 |
| 0.0702        | 5.3349  | 4500  | 0.4771          | 297.9658 |
| 0.064         | 5.9277  | 5000  | 0.4881          | 245.7345 |
| 0.0417        | 6.5205  | 5500  | 0.5119          | 193.3111 |
| 0.024         | 7.1132  | 6000  | 0.5500          | 282.1528 |
| 0.0278        | 7.7060  | 6500  | 0.5631          | 188.2193 |
| 0.014         | 8.2988  | 7000  | 0.6062          | 257.3118 |
| 0.0164        | 8.8915  | 7500  | 0.6047          | 235.9248 |
| 0.0088        | 9.4843  | 8000  | 0.6341          | 234.2000 |
| 0.0061        | 10.0771 | 8500  | 0.6508          | 239.4206 |
| 0.0064        | 10.6698 | 9000  | 0.6657          | 252.8322 |
| 0.0039        | 11.2626 | 9500  | 0.6747          | 250.7573 |
| 0.0044        | 11.8554 | 10000 | 0.6780          | 250.8533 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.2.2+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
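The card gives no usage snippet, so here is a minimal inference sketch using the `transformers` ASR pipeline. The Hub repo id `Marcusxx/gwanju_small_model` is an assumption based on the author and model names above, and the `language`/`task` generation settings follow the card's Korean (`ko`) language tag:

```python
from transformers import pipeline

# Assumed Hub repo id, inferred from this card's author and model name.
MODEL_ID = "Marcusxx/gwanju_small_model"

def transcribe(audio_path: str) -> str:
    """Transcribe a (Korean) audio file with the fine-tuned Whisper checkpoint."""
    asr = pipeline(
        "automatic-speech-recognition",
        model=MODEL_ID,
        # Force Korean transcription rather than letting Whisper auto-detect.
        generate_kwargs={"language": "korean", "task": "transcribe"},
    )
    return asr(audio_path)["text"]
```

Calling `transcribe("sample.wav")` downloads the checkpoint on first use and returns the decoded text; for long recordings, the pipeline's `chunk_length_s` argument can be used to process the audio in windows.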