---
language:
  - fr
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: Whisper large v3 FR D&D - Joey Martig
    results: []
---

# Whisper large v3 FR D&D - Joey Martig

This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.0117
- WER: 33.4454

## Model description

More information needed

## Intended uses & limitations

More information needed

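Pending those details, the snippet below is a minimal inference sketch using the `transformers` pipeline API. The repository id and audio path are placeholders (this card does not state the actual Hub id), so substitute your own.

```python
import torch
from transformers import pipeline

# Load the fine-tuned checkpoint; the repo id below is a placeholder,
# not confirmed by this card.
asr = pipeline(
    "automatic-speech-recognition",
    model="joeyMartig/whisper-large-v3-fr-dnd",  # placeholder id
    torch_dtype=torch.float16,
    device="cuda:0",  # use "cpu" if no GPU is available
)

# Transcribe a local French audio file; forcing the language skips
# Whisper's automatic language detection.
result = asr("session.wav", generate_kwargs={"language": "french"})
print(result["text"])
```
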
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 10
- mixed_precision_training: Native AMP

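For reference, here is a hedged sketch of how these values could map onto `Seq2SeqTrainingArguments` in `transformers`; the `output_dir` is a placeholder, and the surrounding model/dataset wiring is omitted.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of training arguments matching the list above. The Adam
# betas/epsilon shown in the card are the transformers defaults,
# so they need no explicit override.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v3-fr-dnd",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=10,
    fp16=True,  # "Native AMP" mixed-precision training
)
```
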
### Training results

| Training Loss | Epoch | Step | Validation Loss | WER     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log        | 1.0   | 7    | 0.9825          | 38.1513 |
| No log        | 2.0   | 14   | 0.7112          | 35.7143 |
| No log        | 3.0   | 21   | 0.4668          | 68.2353 |
| No log        | 4.0   | 28   | 0.2396          | 33.6134 |
| No log        | 5.0   | 35   | 0.1178          | 33.4454 |
| No log        | 6.0   | 42   | 0.0526          | 33.4454 |
| No log        | 7.0   | 49   | 0.0317          | 33.4454 |
| No log        | 8.0   | 56   | 0.0165          | 33.4454 |
| No log        | 9.0   | 63   | 0.0133          | 33.4454 |
| No log        | 10.0  | 70   | 0.0117          | 33.4454 |

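The WER column appears to be reported in percent. A minimal sketch of computing the metric the same way with the `evaluate` library follows; the example strings are purely illustrative.

```python
import evaluate

wer_metric = evaluate.load("wer")

# Illustrative strings; a real run would compare model transcriptions
# against ground-truth reference transcripts.
predictions = ["le dragon attaque le village"]
references = ["le dragon attaque les villages"]

# evaluate returns WER as a fraction; scale by 100 to match the table.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```
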
### Framework versions

- Transformers 4.43.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1