---
base_model: openai/whisper-base
language:
  - en
license: apache-2.0
metrics:
  - wer
tags:
  - hf-asr-leaderboard
  - generated_from_trainer
model-index:
  - name: Whisper Small Five 20K - Chee Li
    results: []
---

# Whisper Small Five 20K - Chee Li

This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Google Fleurs dataset. It achieves the following results on the evaluation set:

- Loss: 0.5771
- Wer: 22.0375
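
This card ships without a usage snippet, so here is a minimal, hedged inference sketch using the 🤗 Transformers `pipeline` API. The repository id `CheeLi03/whisper-5b-20k` is inferred from this repo's path, and `sample.wav` is a placeholder for any audio file:

```python
# Minimal inference sketch with the automatic-speech-recognition pipeline.
# The model id is inferred from this repository and is an assumption.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="CheeLi03/whisper-5b-20k",
)

# `sample.wav` is a placeholder path to a local audio file.
result = asr("sample.wav")
print(result["text"])
```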

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `Seq2SeqTrainingArguments` sketch follows the list):

- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2500
- training_steps: 20000
- mixed_precision_training: Native AMP
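
As a reference, this is a minimal sketch of how the hyperparameters above map onto `transformers.Seq2SeqTrainingArguments`. It is not the exact training script; `output_dir`, `eval_steps`, and `predict_with_generate` are assumptions (the 1000-step evaluation interval is inferred from the results table below):

```python
# Hedged sketch: the listed hyperparameters expressed as Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-5b-20k",   # hypothetical output path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,                  # Adam betas and epsilon from the list above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=2500,
    max_steps=20000,
    fp16=True,                       # "Native AMP" mixed-precision training
    eval_strategy="steps",           # assumption: eval every 1000 steps, per the results table
    eval_steps=1000,
    predict_with_generate=True,      # assumption: needed to score WER on generated transcripts
)
```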

### Training results

| Training Loss | Epoch   | Step  | Validation Loss | Wer     |
|:-------------:|:-------:|:-----:|:---------------:|:-------:|
| 0.4014        | 1.0560  | 1000  | 0.4369          | 25.7071 |
| 0.2677        | 2.1119  | 2000  | 0.3905          | 22.1327 |
| 0.1651        | 3.1679  | 3000  | 0.3856          | 21.2139 |
| 0.1102        | 4.2239  | 4000  | 0.3920          | 20.4471 |
| 0.0514        | 5.2798  | 5000  | 0.4072          | 21.2883 |
| 0.0255        | 6.3358  | 6000  | 0.4273          | 21.4687 |
| 0.0184        | 7.3918  | 7000  | 0.4442          | 21.6251 |
| 0.01          | 8.4477  | 8000  | 0.4635          | 21.3397 |
| 0.0051        | 9.5037  | 9000  | 0.4805          | 21.3867 |
| 0.0043        | 10.5597 | 10000 | 0.4924          | 21.5508 |
| 0.0025        | 11.6156 | 11000 | 0.5054          | 21.5847 |
| 0.0023        | 12.6716 | 12000 | 0.5166          | 22.0703 |
| 0.0016        | 13.7276 | 13000 | 0.5292          | 21.7509 |
| 0.0012        | 14.7835 | 14000 | 0.5375          | 21.7925 |
| 0.001         | 15.8395 | 15000 | 0.5480          | 21.9325 |
| 0.0008        | 16.8955 | 16000 | 0.5565          | 21.8866 |
| 0.0008        | 17.9514 | 17000 | 0.5638          | 21.9423 |
| 0.0005        | 19.0074 | 18000 | 0.5709          | 21.9916 |
| 0.0005        | 20.0634 | 19000 | 0.5755          | 22.0397 |
| 0.0004        | 21.1193 | 20000 | 0.5771          | 22.0375 |
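
For reference, a WER figure like those in the table can be computed with the `evaluate` library. This is a hedged sketch with placeholder transcripts, not the card's actual evaluation script:

```python
# Hedged sketch: computing word error rate with the `evaluate` library.
import evaluate

wer_metric = evaluate.load("wer")

# Placeholder transcripts; in practice these would be the model's generated
# transcripts and the Google Fleurs reference text.
predictions = ["the quick brown fox"]
references = ["the quick brown fox jumps"]

# compute() returns a fraction; multiply by 100 to match the percentages above.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```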

### Framework versions

- Transformers 4.43.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1