---
language:
  - fr
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: Whisper large v3 FR D&D - Joey Martig
    results: []
---

# Whisper large v3 FR D&D - Joey Martig

This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.1264
- Wer: 33.4454
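
The card ships no usage instructions; below is a minimal inference sketch using the 🤗 Transformers `pipeline` API. The repo id `joeyMartig/whisper-large-v3-fr-dnd` and the audio filename are illustrative assumptions, not values taken from this card.

```python
# Minimal inference sketch. The model id is a guess at this repo's name
# and the audio path is a placeholder; substitute your own values.
import torch
from transformers import pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"

asr = pipeline(
    "automatic-speech-recognition",
    model="joeyMartig/whisper-large-v3-fr-dnd",  # hypothetical repo id
    torch_dtype=torch.float16 if device.startswith("cuda") else torch.float32,
    device=device,
)

# Whisper is multilingual, so pin decoding to French (the fine-tuning language).
result = asr("session_audio.wav", generate_kwargs={"language": "french"})
print(result["text"])
```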

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 20
- mixed_precision_training: Native AMP
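
These values map directly onto `transformers.Seq2SeqTrainingArguments`; a hedged reconstruction is sketched below. `output_dir`, `eval_strategy`, and `predict_with_generate` are assumptions (the per-epoch evaluation table below suggests epoch-level evaluation), not settings stated in this card.

```python
# Hedged reconstruction of the training configuration from the list above.
# Anything not in that list (output_dir, eval strategy, generation during
# eval) is an assumption, marked as such below.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v3-fr-dnd",  # assumed
    learning_rate=1e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=20,
    fp16=True,                    # "Native AMP" mixed precision
    eval_strategy="epoch",        # assumed from the per-epoch results table
    predict_with_generate=True,   # assumed; needed to score WER during eval
)
```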

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log        | 1.0   | 7    | 1.0810          | 38.6555 |
| No log        | 2.0   | 14   | 1.0631          | 38.7395 |
| No log        | 3.0   | 21   | 0.9919          | 38.1513 |
| No log        | 4.0   | 28   | 0.9135          | 37.3109 |
| No log        | 5.0   | 35   | 0.7922          | 37.0588 |
| No log        | 6.0   | 42   | 0.7013          | 35.6303 |
| No log        | 7.0   | 49   | 0.6073          | 34.6218 |
| No log        | 8.0   | 56   | 0.5099          | 45.5462 |
| No log        | 9.0   | 63   | 0.4297          | 40.0    |
| No log        | 10.0  | 70   | 0.3603          | 33.4454 |
| No log        | 11.0  | 77   | 0.3024          | 33.4454 |
| No log        | 12.0  | 84   | 0.2540          | 33.4454 |
| No log        | 13.0  | 91   | 0.2196          | 33.4454 |
| No log        | 14.0  | 98   | 0.1907          | 33.4454 |
| No log        | 15.0  | 105  | 0.1695          | 36.7227 |
| No log        | 16.0  | 112  | 0.1536          | 36.7227 |
| No log        | 17.0  | 119  | 0.1421          | 33.4454 |
| No log        | 18.0  | 126  | 0.1336          | 33.4454 |
| No log        | 19.0  | 133  | 0.1282          | 33.4454 |
| No log        | 20.0  | 140  | 0.1264          | 33.4454 |
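
The `Wer` column is on a percent scale (33.4454 means roughly one word error in three). For reference, this is how the metric is conventionally computed in Whisper fine-tuning recipes with the `evaluate` library; the example strings are invented, and this is not necessarily the author's exact evaluation code.

```python
# Word error rate as conventionally reported in Whisper fine-tuning
# examples: the `evaluate` metric scaled to a percentage.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["le dragon attaque le village"]       # invented example
references = ["le dragon attaque le vieux village"]  # invented example

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # one deletion over six reference words -> 16.6667
```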

### Framework versions

- Transformers 4.43.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
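
Note that Transformers 4.43.0.dev0 is a development build not published on PyPI (it would have been installed from source). The snippet below is a convenience sketch, not part of the original card, that checks a local environment against the versions listed above.

```python
# Optional sanity check of the local environment against the versions
# reported above.
import datasets
import tokenizers
import torch
import transformers

expected = {
    "transformers": "4.43.0.dev0",
    "torch": "2.3.0+cu121",
    "datasets": "2.19.1",
    "tokenizers": "0.19.1",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    have = installed[name]
    status = "ok" if have == want else f"mismatch (card used {want})"
    print(f"{name}: {have} -> {status}")
```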