---
license: apache-2.0
library_name: peft
tags:
  - generated_from_trainer
base_model: openai/whisper-large-v3
model-index:
  - name: whisper-large-v3-MH-fine-tuned
    results: []
---

# whisper-large-v3-MH-fine-tuned

This model is a PEFT fine-tuned version of openai/whisper-large-v3 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 1.0926

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.001
  • train_batch_size: 10
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • num_epochs: 50
  • mixed_precision_training: Native AMP
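
Note that the run logs one optimization step per epoch, so training totals 50 steps, which equals `lr_scheduler_warmup_steps`: the learning rate spends the entire run in the warmup ramp and never reaches the decay phase. A minimal sketch of the `linear` schedule (the behavior of `transformers`' `get_linear_schedule_with_warmup`, reimplemented here for illustration):

```python
def linear_warmup_lr(step: int, base_lr: float = 1e-3,
                     warmup_steps: int = 50, total_steps: int = 50) -> float:
    """Learning rate under linear warmup followed by linear decay."""
    if step < warmup_steps:
        # Warmup phase: ramp from 0 up to base_lr over warmup_steps.
        return base_lr * step / max(1, warmup_steps)
    # Decay phase: ramp from base_lr back down to 0 over the remaining steps.
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# With warmup_steps == total_steps == 50, the LR is still ramping up at the
# final step (step 49 gives ~9.8e-4, just short of the configured 1e-3).
halfway = linear_warmup_lr(25)   # ~5e-4
final = linear_warmup_lr(49)     # ~9.8e-4
```

This means the effective learning rate was below the configured 0.001 for the whole run, which is worth keeping in mind when comparing against longer training schedules.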

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 1    | 1.5700          |
| No log        | 2.0   | 2    | 1.5532          |
| No log        | 3.0   | 3    | 1.5169          |
| No log        | 4.0   | 4    | 1.4654          |
| No log        | 5.0   | 5    | 1.4050          |
| No log        | 6.0   | 6    | 1.3413          |
| No log        | 7.0   | 7    | 1.2879          |
| No log        | 8.0   | 8    | 1.2420          |
| No log        | 9.0   | 9    | 1.1945          |
| No log        | 10.0  | 10   | 1.1463          |
| No log        | 11.0  | 11   | 1.0969          |
| No log        | 12.0  | 12   | 1.0539          |
| No log        | 13.0  | 13   | 1.0243          |
| No log        | 14.0  | 14   | 1.0018          |
| No log        | 15.0  | 15   | 0.9775          |
| No log        | 16.0  | 16   | 0.9583          |
| No log        | 17.0  | 17   | 0.9521          |
| No log        | 18.0  | 18   | 0.9549          |
| No log        | 19.0  | 19   | 0.9596          |
| No log        | 20.0  | 20   | 0.9569          |
| No log        | 21.0  | 21   | 0.9512          |
| No log        | 22.0  | 22   | 0.9512          |
| No log        | 23.0  | 23   | 0.9577          |
| No log        | 24.0  | 24   | 0.9615          |
| 1.0177        | 25.0  | 25   | 0.9770          |
| 1.0177        | 26.0  | 26   | 0.9980          |
| 1.0177        | 27.0  | 27   | 1.0184          |
| 1.0177        | 28.0  | 28   | 1.0271          |
| 1.0177        | 29.0  | 29   | 1.0317          |
| 1.0177        | 30.0  | 30   | 1.0514          |
| 1.0177        | 31.0  | 31   | 1.0524          |
| 1.0177        | 32.0  | 32   | 1.0568          |
| 1.0177        | 33.0  | 33   | 1.0673          |
| 1.0177        | 34.0  | 34   | 1.0699          |
| 1.0177        | 35.0  | 35   | 1.0774          |
| 1.0177        | 36.0  | 36   | 1.0873          |
| 1.0177        | 37.0  | 37   | 1.0838          |
| 1.0177        | 38.0  | 38   | 1.0742          |
| 1.0177        | 39.0  | 39   | 1.0757          |
| 1.0177        | 40.0  | 40   | 1.0692          |
| 1.0177        | 41.0  | 41   | 1.0705          |
| 1.0177        | 42.0  | 42   | 1.0738          |
| 1.0177        | 43.0  | 43   | 1.0765          |
| 1.0177        | 44.0  | 44   | 1.0762          |
| 1.0177        | 45.0  | 45   | 1.0806          |
| 1.0177        | 46.0  | 46   | 1.0755          |
| 1.0177        | 47.0  | 47   | 1.0789          |
| 1.0177        | 48.0  | 48   | 1.0783          |
| 1.0177        | 49.0  | 49   | 1.0856          |
| 0.2796        | 50.0  | 50   | 1.0926          |
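
Validation loss bottoms out at 0.9512 around epochs 21–22 and then climbs steadily, so the final checkpoint (1.0926) is noticeably worse than the best intermediate one, a pattern consistent with overfitting in the later epochs. A quick sanity check over the losses transcribed from the log above:

```python
# Per-epoch validation losses, transcribed from the training log above.
val_losses = [
    1.5700, 1.5532, 1.5169, 1.4654, 1.4050, 1.3413, 1.2879, 1.2420,
    1.1945, 1.1463, 1.0969, 1.0539, 1.0243, 1.0018, 0.9775, 0.9583,
    0.9521, 0.9549, 0.9596, 0.9569, 0.9512, 0.9512, 0.9577, 0.9615,
    0.9770, 0.9980, 1.0184, 1.0271, 1.0317, 1.0514, 1.0524, 1.0568,
    1.0673, 1.0699, 1.0774, 1.0873, 1.0838, 1.0742, 1.0757, 1.0692,
    1.0705, 1.0738, 1.0765, 1.0762, 1.0806, 1.0755, 1.0789, 1.0783,
    1.0856, 1.0926,
]

# First epoch (1-indexed) at which the minimum validation loss occurs.
best_epoch = min(range(1, len(val_losses) + 1), key=lambda e: val_losses[e - 1])
best_loss = val_losses[best_epoch - 1]
print(best_epoch, best_loss)  # → 21 0.9512
```

If early stopping or `load_best_model_at_end` was not used, reloading the epoch-21/22 checkpoint (where available) would likely give better evaluation loss than the final adapter.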

### Framework versions

  • PEFT 0.11.1.dev0
  • Transformers 4.40.2
  • Pytorch 2.2.1+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1