gwanju2_mparameters1e-5__model

This model is a fine-tuned version of openai/whisper-medium on the Marcusxx/gwanju2 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.8405
  • CER: 19.4108
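The card does not include the evaluation script, but the CER values reported here are presumably character error rates scaled by 100, as computed by the Hugging Face `evaluate` library's "cer" metric. A minimal sketch under that assumption:

```python
import evaluate

# Sketch only: CER is assumed to be the `evaluate` library's character
# error rate, multiplied by 100 to match the values reported above.
cer_metric = evaluate.load("cer")

predictions = ["transcribed hypothesis"]  # model transcriptions
references = ["reference transcript"]     # ground-truth transcripts

cer = 100 * cer_metric.compute(predictions=predictions, references=references)
print(f"CER: {cer:.4f}")
```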

Model description

More information needed

Intended uses & limitations

More information needed
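Although this section is left open, the model is a Whisper fine-tune and should be usable for speech recognition through the standard Transformers API. A minimal inference sketch; the Korean language setting is an assumption based on the dataset name, not confirmed by the card:

```python
import numpy as np
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor

model_id = "Marcusxx/gwanju2_mparameters1e-5__model"
processor = WhisperProcessor.from_pretrained(model_id)
model = WhisperForConditionalGeneration.from_pretrained(model_id)
model.eval()

# Placeholder input: one second of silence. Replace with real speech,
# resampled to 16 kHz (the rate Whisper expects).
audio = np.zeros(16000, dtype=np.float32)

inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    generated_ids = model.generate(
        inputs.input_features,
        language="korean",  # assumption: gwanju2 appears to be a Korean speech dataset
        task="transcribe",
    )
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```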

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent Transformers configuration follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 250
  • training_steps: 50000
  • mixed_precision_training: Native AMP
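For reference, these settings map onto a `Seq2SeqTrainingArguments` object roughly as follows. This is a sketch, not the original training script; the output directory and the 1000-step evaluation cadence (inferred from the results table below) are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="gwanju2_mparameters1e-5__model",  # assumed output path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 16 * 2 = 32
    lr_scheduler_type="linear",
    warmup_steps=250,
    max_steps=50000,
    fp16=True,  # "Native AMP" mixed-precision training
    eval_strategy="steps",  # assumed: the table below reports eval every 1000 steps
    eval_steps=1000,
    # The default AdamW optimizer already uses betas=(0.9, 0.999) and eps=1e-8.
)
```

These arguments would typically be passed to a `Seq2SeqTrainer` along with the model, processor, data collator, and the Marcusxx/gwanju2 train/eval splits.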

Training results

| Training Loss | Epoch   | Step  | Validation Loss | CER      |
|--------------:|--------:|------:|----------------:|---------:|
| 0.43          | 0.6046  | 1000  | 0.4009          | 166.0129 |
| 0.2587        | 1.2092  | 2000  | 0.3832          | 55.7046  |
| 0.2636        | 1.8138  | 3000  | 0.3843          | 98.7021  |
| 0.1623        | 2.4184  | 4000  | 0.4000          | 33.8374  |
| 0.1503        | 3.0230  | 5000  | 0.4133          | 27.1959  |
| 0.1061        | 3.6276  | 6000  | 0.4223          | 25.8623  |
| 0.0638        | 4.2322  | 7000  | 0.4534          | 24.9179  |
| 0.0671        | 4.8368  | 8000  | 0.4526          | 22.6665  |
| 0.0406        | 5.4414  | 9000  | 0.4991          | 21.5252  |
| 0.0372        | 6.0459  | 10000 | 0.5183          | 22.3878  |
| 0.0273        | 6.6505  | 11000 | 0.5348          | 21.8627  |
| 0.0195        | 7.2551  | 12000 | 0.5536          | 20.8458  |
| 0.0212        | 7.8597  | 13000 | 0.5466          | 21.0162  |
| 0.0146        | 8.4643  | 14000 | 0.5870          | 21.4089  |
| 0.0126        | 9.0689  | 15000 | 0.6040          | 26.2815  |
| 0.0113        | 9.6735  | 16000 | 0.6097          | 20.9229  |
| 0.0088        | 10.2781 | 17000 | 0.6190          | 21.1083  |
| 0.0098        | 10.8827 | 18000 | 0.6309          | 20.7387  |
| 0.0069        | 11.4873 | 19000 | 0.6489          | 20.5855  |
| 0.0056        | 12.0919 | 20000 | 0.6642          | 20.5590  |
| 0.0064        | 12.6965 | 21000 | 0.6514          | 20.8815  |
| 0.0053        | 13.3011 | 22000 | 0.6716          | 20.3149  |
| 0.005         | 13.9057 | 23000 | 0.6720          | 20.3068  |
| 0.0037        | 14.5103 | 24000 | 0.6912          | 20.6039  |
| 0.0044        | 15.1149 | 25000 | 0.7038          | 20.6166  |
| 0.004         | 15.7195 | 26000 | 0.6959          | 20.3149  |
| 0.0029        | 16.3241 | 27000 | 0.7064          | 20.6454  |
| 0.0031        | 16.9287 | 28000 | 0.7188          | 20.5463  |
| 0.002         | 17.5333 | 29000 | 0.7127          | 20.7640  |
| 0.0033        | 18.1378 | 30000 | 0.7193          | 20.4496  |
| 0.0019        | 18.7424 | 31000 | 0.7109          | 20.6719  |
| 0.002         | 19.3470 | 32000 | 0.7208          | 20.1306  |
| 0.0013        | 19.9516 | 33000 | 0.7442          | 20.1490  |
| 0.0007        | 20.5562 | 34000 | 0.7357          | 19.9198  |
| 0.0012        | 21.1608 | 35000 | 0.7501          | 19.5755  |
| 0.0022        | 21.7654 | 36000 | 0.7537          | 19.5870  |
| 0.0008        | 22.3700 | 37000 | 0.7686          | 19.6031  |
| 0.0013        | 22.9746 | 38000 | 0.7702          | 20.6097  |
| 0.0005        | 23.5792 | 39000 | 0.7712          | 19.9590  |
| 0.0007        | 24.1838 | 40000 | 0.7802          | 20.2020  |
| 0.0004        | 24.7884 | 41000 | 0.8050          | 19.5536  |
| 0.0004        | 25.3930 | 42000 | 0.8012          | 19.8093  |
| 0.0003        | 25.9976 | 43000 | 0.8049          | 19.6722  |
| 0.0002        | 26.6022 | 44000 | 0.8094          | 19.5214  |
| 0.0002        | 27.2068 | 45000 | 0.8109          | 19.6296  |
| 0.0001        | 27.8114 | 46000 | 0.8206          | 19.5248  |
| 0.0001        | 28.4160 | 47000 | 0.8299          | 19.5974  |
| 0.0001        | 29.0206 | 48000 | 0.8343          | 19.5490  |
| 0.0001        | 29.6252 | 49000 | 0.8395          | 19.4661  |
| 0.0001        | 30.2297 | 50000 | 0.8405          | 19.4108  |

Framework versions

  • Transformers 4.41.2
  • Pytorch 2.2.2+cu121
  • Datasets 3.2.0
  • Tokenizers 0.19.1