---
language:
  - fr
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: Whisper large v3 FR D&D - Joey Martig
    results: []
---

# Whisper large v3 FR D&D - Joey Martig

This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.0138
- Wer: 33.4454

## Model description

More information needed

## Intended uses & limitations

More information needed
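Pending more detail from the author, a minimal usage sketch follows. The repo id below is a hypothetical guess derived from this card's title, not a confirmed path; the pipeline construction is deferred to a function because the underlying checkpoint is large (~3 GB).

```python
# Hypothetical usage sketch. The model id is a guess based on this card's
# title; replace it with the actual Hugging Face repo path before running.
def build_transcriber(model_id: str = "joeyMartig/whisper-large-v3-fr-dnd"):
    """Return a French speech-recognition pipeline for this fine-tune."""
    from transformers import pipeline  # imported lazily: the model is large

    return pipeline(
        "automatic-speech-recognition",
        model=model_id,
        generate_kwargs={"language": "fr", "task": "transcribe"},
    )

# Example call (downloads the checkpoint on first use):
# transcriber = build_transcriber()
# print(transcriber("session_audio.wav")["text"])
```

Pinning `language` and `task` in `generate_kwargs` avoids Whisper's automatic language detection, which is sensible for a model fine-tuned on French-only audio.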

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 50
- mixed_precision_training: Native AMP
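A quick cross-check of what these hyperparameters imply, assuming the 7 optimizer steps per epoch shown in the results table and no gradient accumulation (an assumption; the card does not state it):

```python
# Back-of-the-envelope check of the training schedule.
train_batch_size = 8
steps_per_epoch = 7   # from the results table: step 7 at epoch 1.0
num_epochs = 50
warmup_steps = 50

total_steps = steps_per_epoch * num_epochs
# With no gradient accumulation, steps_per_epoch * train_batch_size
# approximates the number of training examples (the last batch may be short).
approx_train_examples = steps_per_epoch * train_batch_size

print(total_steps)            # 350, matching the final step in the table
print(approx_train_examples)  # roughly 56 examples
```

Note that the warmup (50 steps) covers about a seventh of the 350-step run, after which the linear scheduler decays the already very small 1e-06 learning rate toward zero.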

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log        | 1.0   | 7    | 1.0810          | 38.6555 |
| No log        | 2.0   | 14   | 1.0631          | 38.7395 |
| No log        | 3.0   | 21   | 0.9917          | 38.1513 |
| No log        | 4.0   | 28   | 0.9133          | 37.3109 |
| No log        | 5.0   | 35   | 0.7915          | 37.0588 |
| No log        | 6.0   | 42   | 0.7010          | 35.6303 |
| No log        | 7.0   | 49   | 0.6078          | 34.6218 |
| No log        | 8.0   | 56   | 0.5114          | 45.6303 |
| No log        | 9.0   | 63   | 0.4258          | 40.0    |
| No log        | 10.0  | 70   | 0.3484          | 33.4454 |
| No log        | 11.0  | 77   | 0.2802          | 33.4454 |
| No log        | 12.0  | 84   | 0.2228          | 33.4454 |
| No log        | 13.0  | 91   | 0.1804          | 33.4454 |
| No log        | 14.0  | 98   | 0.1436          | 36.5546 |
| No log        | 15.0  | 105  | 0.1166          | 33.4454 |
| No log        | 16.0  | 112  | 0.0932          | 36.5546 |
| No log        | 17.0  | 119  | 0.0730          | 36.5546 |
| No log        | 18.0  | 126  | 0.0573          | 36.5546 |
| No log        | 19.0  | 133  | 0.0451          | 36.5546 |
| No log        | 20.0  | 140  | 0.0390          | 33.4454 |
| No log        | 21.0  | 147  | 0.0319          | 33.4454 |
| No log        | 22.0  | 154  | 0.0287          | 33.4454 |
| No log        | 23.0  | 161  | 0.0252          | 33.4454 |
| No log        | 24.0  | 168  | 0.0224          | 33.4454 |
| No log        | 25.0  | 175  | 0.0209          | 33.4454 |
| No log        | 26.0  | 182  | 0.0199          | 33.4454 |
| No log        | 27.0  | 189  | 0.0186          | 33.4454 |
| No log        | 28.0  | 196  | 0.0179          | 33.4454 |
| No log        | 29.0  | 203  | 0.0175          | 33.4454 |
| No log        | 30.0  | 210  | 0.0168          | 33.4454 |
| No log        | 31.0  | 217  | 0.0164          | 33.4454 |
| No log        | 32.0  | 224  | 0.0161          | 33.4454 |
| No log        | 33.0  | 231  | 0.0158          | 33.4454 |
| No log        | 34.0  | 238  | 0.0156          | 33.4454 |
| No log        | 35.0  | 245  | 0.0153          | 33.4454 |
| No log        | 36.0  | 252  | 0.0151          | 33.4454 |
| No log        | 37.0  | 259  | 0.0149          | 33.4454 |
| No log        | 38.0  | 266  | 0.0148          | 33.4454 |
| No log        | 39.0  | 273  | 0.0146          | 33.4454 |
| No log        | 40.0  | 280  | 0.0145          | 33.4454 |
| No log        | 41.0  | 287  | 0.0143          | 33.4454 |
| No log        | 42.0  | 294  | 0.0143          | 33.4454 |
| No log        | 43.0  | 301  | 0.0141          | 33.4454 |
| No log        | 44.0  | 308  | 0.0141          | 33.4454 |
| No log        | 45.0  | 315  | 0.0140          | 33.4454 |
| No log        | 46.0  | 322  | 0.0139          | 33.4454 |
| No log        | 47.0  | 329  | 0.0139          | 33.4454 |
| No log        | 48.0  | 336  | 0.0138          | 33.4454 |
| No log        | 49.0  | 343  | 0.0138          | 33.4454 |
| No log        | 50.0  | 350  | 0.0138          | 33.4454 |
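For reference, the Wer column above is the word error rate in percent: word-level edit distance between the hypothesis and the reference transcript, divided by the reference length. A simplified sketch follows; the training run most likely used a library such as `evaluate`/`jiwer`, possibly with additional text normalization, so this is illustrative rather than the exact metric code.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: word-level Levenshtein distance
    divided by the number of reference words (simplified sketch)."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

# Two word errors (illustrative French phrases) out of five reference words:
print(wer("le dragon attaque le village", "le dragon attaque un visage"))
# 40.0
```

The validation WER here plateaus at 33.4454 from epoch 10 onward even as the loss keeps falling, which on a small evaluation set typically means the same handful of word errors persists across checkpoints.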

### Framework versions

- Transformers 4.43.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1