---
language:
  - en
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: ./949
    results: []
---

# ./949

This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the 949 FULL dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5601
- Wer Ortho: 29.5461
- Wer: 21.9669
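
The card does not document usage, but as a minimal inference sketch the checkpoint can be loaded like any Whisper fine-tune through the `transformers` ASR pipeline. The model path below is a placeholder, not the actual repo id:

```python
# Minimal inference sketch, assuming standard Whisper fine-tune usage.
# "path/to/this-checkpoint" is a placeholder: substitute the real repo id
# or a local checkpoint directory.
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="path/to/this-checkpoint",  # placeholder
    torch_dtype=torch.float16,
    device=0 if torch.cuda.is_available() else -1,
)

# Long-form audio is transcribed in 30-second chunks.
result = asr("sample.wav", chunk_length_s=30)
print(result["text"])
```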

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after the list):

- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- training_steps: 500
- mixed_precision_training: Native AMP
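
As a sketch (not the author's actual training script), these values map onto `Seq2SeqTrainingArguments` as follows; `output_dir` and `fp16=True` are assumptions inferred from the model name and "Native AMP":

```python
# Hedged reconstruction of the listed hyperparameters.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./949",             # assumption: matches the model name on this card
    learning_rate=1e-06,
    per_device_train_batch_size=4,  # train_batch_size
    per_device_eval_batch_size=8,   # eval_batch_size
    seed=42,
    gradient_accumulation_steps=4,  # total train batch size: 4 * 4 = 16
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_steps=300,               # lr_scheduler_warmup_steps
    max_steps=500,                  # training_steps
    fp16=True,                      # assumption: "Native AMP" mixed precision
)
```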

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer Ortho | Wer     |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|
| 1.0667        | 1.8692 | 100  | 0.7607          | 37.1700   | 28.4674 |
| 0.7153        | 3.7383 | 200  | 0.6157          | 32.8982   | 24.5167 |
| 0.5672        | 5.6075 | 300  | 0.5747          | 30.5251   | 22.3872 |
| 0.4809        | 7.4766 | 400  | 0.5630          | 29.4275   | 21.7428 |
| 0.428         | 9.3458 | 500  | 0.5601          | 29.5461   | 21.9669 |
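
The card does not state how the two WER columns differ; a common convention in Whisper fine-tuning (an assumption here, not confirmed by the card) is that "Wer Ortho" is computed on raw orthographic text while "Wer" applies Whisper's basic text normalizer first:

```python
# Sketch of the assumed orthographic-vs-normalized WER convention.
import evaluate
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

wer_metric = evaluate.load("wer")
normalizer = BasicTextNormalizer()

predictions = ["Hello, world!"]  # placeholder model transcripts
references = ["hello world"]     # placeholder ground-truth transcripts

# Orthographic WER: casing and punctuation count as errors.
wer_ortho = 100 * wer_metric.compute(predictions=predictions, references=references)

# Normalized WER: both sides are cleaned before scoring.
wer = 100 * wer_metric.compute(
    predictions=[normalizer(p) for p in predictions],
    references=[normalizer(r) for r in references],
)
print(f"Wer Ortho: {wer_ortho:.2f}, Wer: {wer:.2f}")
```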

### Framework versions

- Transformers 4.44.0
- Pytorch 1.13.1+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1