---
language:
  - en
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: whisper-large-cit-do1.5-wd1e-3-lr3
    results: []
---

# whisper-large-cit-do1.5-wd1e-3-lr3

This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the SF 200 dataset. It achieves the following results on the evaluation set:

- Loss: 0.9438
- Wer: 33.1808
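
For a quick check of the checkpoint, it can be loaded with the 🤗 Transformers ASR pipeline. This is a minimal sketch, not part of the original card: the repo id and the audio file name below are placeholders, so substitute the actual checkpoint path and input.

```python
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Makkoen/whisper-large-cit-do1.5-wd1e-3-lr3",  # placeholder repo id
    torch_dtype=torch.float16,
    device=0,  # CUDA device index; use device="cpu" if no GPU is available
)

# Transcribe a local audio file (placeholder path)
print(asr("sample.wav")["text"])
```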

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 3e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 200
- mixed_precision_training: Native AMP
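
The `generated_from_trainer` tag indicates the 🤗 Trainer was used. Below is a hedged sketch of `Seq2SeqTrainingArguments` mirroring the values listed above; the original training script is not included in this card, and `output_dir` is assumed.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: reconstructs the listed hyperparameters.
# Multi-GPU distribution is handled by the launcher (e.g. accelerate/torchrun),
# not by these arguments.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-cit-do1.5-wd1e-3-lr3",  # assumed
    learning_rate=3e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # 4 per device x 4 steps = 16 effective
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=200,
    fp16=True,  # Native AMP mixed precision
)
```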

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Wer     |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 1.087         | 0.8889  | 10   | 0.9839          | 39.8169 |
| 0.8784        | 1.7778  | 20   | 0.7808          | 34.3249 |
| 0.6585        | 2.6667  | 30   | 0.6338          | 32.2654 |
| 0.435         | 3.5556  | 40   | 0.5815          | 33.8673 |
| 0.3445        | 4.4444  | 50   | 0.5508          | 31.8078 |
| 0.2314        | 5.3333  | 60   | 0.5571          | 30.4348 |
| 0.1603        | 6.2222  | 70   | 0.5791          | 30.4348 |
| 0.0927        | 7.1111  | 80   | 0.6309          | 29.5195 |
| 0.0611        | 8.0     | 90   | 0.6768          | 32.7231 |
| 0.0366        | 8.8889  | 100  | 0.7544          | 29.7483 |
| 0.0199        | 9.7778  | 110  | 0.8286          | 30.8924 |
| 0.0155        | 10.6667 | 120  | 0.7920          | 29.5195 |
| 0.0066        | 11.5556 | 130  | 0.8726          | 30.2059 |
| 0.0054        | 12.4444 | 140  | 0.8955          | 31.3501 |
| 0.0077        | 13.3333 | 150  | 0.9194          | 32.0366 |
| 0.0076        | 14.2222 | 160  | 0.9336          | 32.4943 |
| 0.0021        | 15.1111 | 170  | 0.9399          | 33.1808 |
| 0.002         | 16.0    | 180  | 0.9404          | 32.2654 |
| 0.003         | 16.8889 | 190  | 0.9399          | 32.9519 |
| 0.0018        | 17.7778 | 200  | 0.9438          | 33.1808 |
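
The `Wer` column above is the word error rate, reported as a percentage. As a minimal illustration of how such a score can be computed with the `evaluate` library (assuming the `jiwer` backend is installed; the strings below are illustrative, not from the SF 200 dataset):

```python
import evaluate  # pip install evaluate jiwer

wer_metric = evaluate.load("wer")
predictions = ["the quick brown fox"]        # model transcripts (illustrative)
references = ["the quick brown fox jumps"]   # ground-truth transcripts
score = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {100 * score:.4f}")  # multiply by 100 to match this card's units
```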

### Framework versions

- Transformers 4.41.1
- Pytorch 1.13.1+cu117
- Datasets 2.19.1
- Tokenizers 0.19.1