---
license: apache-2.0
library_name: peft
tags:
  - generated_from_trainer
base_model: openai/whisper-large-v3
model-index:
  - name: whisper-large-v3-MH-fine-tuned
    results: []
---

whisper-large-v3-MH-fine-tuned

This model is a fine-tuned version of openai/whisper-large-v3 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6849

Model description

More information needed

Intended uses & limitations

More information needed
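Because this repository contains a PEFT adapter rather than a full model checkpoint, inference requires loading the base Whisper model first and applying the adapter on top. A minimal sketch, assuming the adapter is published under a hypothetical repo id `Aysha630/whisper-large-v3-MH-fine-tuned` and that `transformers` and `peft` are installed:

```python
def load_finetuned(adapter_id="Aysha630/whisper-large-v3-MH-fine-tuned"):
    """Load the base Whisper model and apply the PEFT adapter on top.

    adapter_id is an assumption inferred from the model name above;
    adjust it to the actual repository id.
    """
    # imports kept local so the function can be defined without the
    # libraries installed
    from transformers import WhisperForConditionalGeneration, WhisperProcessor
    from peft import PeftModel

    base = WhisperForConditionalGeneration.from_pretrained(
        "openai/whisper-large-v3"
    )
    model = PeftModel.from_pretrained(base, adapter_id)
    processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")
    return model, processor
```

The processor comes from the base model, since a PEFT adapter only stores the trained low-rank weight deltas, not the tokenizer or feature extractor.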

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.001
  • train_batch_size: 10
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • num_epochs: 50
  • mixed_precision_training: Native AMP
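Note that with 50 warmup steps and 50 total optimization steps (one step per epoch, per the results table), the learning rate is ramping up for the entire run and never reaches its decay phase. A minimal sketch of the schedule, assuming Hugging Face's linear warmup-then-decay semantics:

```python
def linear_warmup_lr(step, base_lr=1e-3, warmup_steps=50, total_steps=50):
    """Linear warmup to base_lr, then linear decay to 0 at total_steps."""
    if step < warmup_steps:
        # warmup phase: ramp linearly from 0 toward base_lr
        return base_lr * step / warmup_steps
    # decay phase: ramp linearly from base_lr down to 0
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# With warmup_steps == total_steps == 50, the whole run is warmup:
# at step 25 the learning rate is only half the configured peak.
```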

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 1    | 1.0192          |
| No log        | 2.0   | 2    | 1.0085          |
| No log        | 3.0   | 3    | 0.9859          |
| No log        | 4.0   | 4    | 0.9554          |
| No log        | 5.0   | 5    | 0.9172          |
| No log        | 6.0   | 6    | 0.8728          |
| No log        | 7.0   | 7    | 0.8330          |
| No log        | 8.0   | 8    | 0.7908          |
| No log        | 9.0   | 9    | 0.7546          |
| No log        | 10.0  | 10   | 0.7167          |
| No log        | 11.0  | 11   | 0.6891          |
| No log        | 12.0  | 12   | 0.6698          |
| No log        | 13.0  | 13   | 0.6546          |
| No log        | 14.0  | 14   | 0.6372          |
| No log        | 15.0  | 15   | 0.6247          |
| No log        | 16.0  | 16   | 0.6198          |
| No log        | 17.0  | 17   | 0.6209          |
| No log        | 18.0  | 18   | 0.6249          |
| No log        | 19.0  | 19   | 0.6316          |
| No log        | 20.0  | 20   | 0.6368          |
| No log        | 21.0  | 21   | 0.6398          |
| No log        | 22.0  | 22   | 0.6463          |
| No log        | 23.0  | 23   | 0.6555          |
| No log        | 24.0  | 24   | 0.6680          |
| 0.4535        | 25.0  | 25   | 0.6789          |
| 0.4535        | 26.0  | 26   | 0.6905          |
| 0.4535        | 27.0  | 27   | 0.7006          |
| 0.4535        | 28.0  | 28   | 0.7102          |
| 0.4535        | 29.0  | 29   | 0.7208          |
| 0.4535        | 30.0  | 30   | 0.7363          |
| 0.4535        | 31.0  | 31   | 0.7512          |
| 0.4535        | 32.0  | 32   | 0.7582          |
| 0.4535        | 33.0  | 33   | 0.7685          |
| 0.4535        | 34.0  | 34   | 0.7794          |
| 0.4535        | 35.0  | 35   | 0.7849          |
| 0.4535        | 36.0  | 36   | 0.7896          |
| 0.4535        | 37.0  | 37   | 0.7917          |
| 0.4535        | 38.0  | 38   | 0.8132          |
| 0.4535        | 39.0  | 39   | 0.7889          |
| 0.4535        | 40.0  | 40   | 0.7614          |
| 0.4535        | 41.0  | 41   | 0.7371          |
| 0.4535        | 42.0  | 42   | 0.7206          |
| 0.4535        | 43.0  | 43   | 0.7066          |
| 0.4535        | 44.0  | 44   | 0.7024          |
| 0.4535        | 45.0  | 45   | 0.7128          |
| 0.4535        | 46.0  | 46   | 0.7242          |
| 0.4535        | 47.0  | 47   | 0.7137          |
| 0.4535        | 48.0  | 48   | 0.7004          |
| 0.4535        | 49.0  | 49   | 0.6883          |
| 0.0261        | 50.0  | 50   | 0.6849          |

Framework versions

  • PEFT 0.11.2.dev0
  • Transformers 4.40.2
  • Pytorch 2.2.1+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1