---
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: whisper-large-v3-atcosim
    results: []
---

# whisper-large-v3-atcosim

This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the ATCOSIM dataset. It achieves the following results on the evaluation set:

- Loss: 0.0573
- Wer: 15.7807
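
As a minimal usage sketch (assumptions: the checkpoint is published as `jlvdoorn/whisper-large-v3-atcosim`, and `atc_sample.wav` is a stand-in for a local recording), the model can be loaded with the `transformers` ASR pipeline:

```python
from transformers import pipeline

# Hedged sketch: the repo id and audio file name below are assumptions.
asr = pipeline(
    "automatic-speech-recognition",
    model="jlvdoorn/whisper-large-v3-atcosim",  # assumed repo id
)

result = asr("atc_sample.wav")  # path to a local audio recording (assumed)
print(result["text"])
```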

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged code reconstruction follows the list):

- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- training_steps: 12500
- mixed_precision_training: Native AMP
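
These settings map onto `transformers.Seq2SeqTrainingArguments` roughly as below; `output_dir` is an assumption, the Adam betas and epsilon listed above are the optimizer defaults, and the total batch sizes (64 train / 32 eval) follow from launching on 4 GPUs rather than from an explicit argument:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: output_dir is assumed; the totals of 64/32 come from
# running the per-device sizes below across 4 devices.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-v3-atcosim",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=250,
    max_steps=12500,
    fp16=True,  # "Native AMP" mixed-precision training
)
```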

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | Wer     |
|:-------------:|:------:|:-----:|:---------------:|:-------:|
| 0.0031        | 8.33   | 1000  | 0.0372          | 54.8342 |
| 0.0005        | 16.67  | 2000  | 0.0415          | 20.1519 |
| 0.0024        | 25.0   | 3000  | 0.0392          | 10.2102 |
| 0.0           | 33.33  | 4000  | 0.0469          | 18.6609 |
| 0.0           | 41.67  | 5000  | 0.0493          | 17.3180 |
| 0.0           | 50.0   | 6000  | 0.0511          | 16.8179 |
| 0.0           | 58.33  | 7000  | 0.0526          | 16.4753 |
| 0.0           | 66.67  | 8000  | 0.0538          | 16.5725 |
| 0.0           | 75.0   | 9000  | 0.0550          | 15.9983 |
| 0.0           | 83.33  | 10000 | 0.0560          | 15.7205 |
| 0.0           | 91.67  | 11000 | 0.0568          | 15.7159 |
| 0.0           | 100.0  | 12000 | 0.0573          | 15.7807 |
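
The Wer column appears to be on a percentage scale (100 × word error rate). A minimal sketch of computing a comparable score with the `evaluate` library; the transcripts below are invented examples, not ATCOSIM data:

```python
import evaluate

# Invented example transcripts, not from the actual evaluation set.
wer = evaluate.load("wer")
score = wer.compute(
    predictions=["climb flight level three two zero"],
    references=["climb to flight level three two zero"],
)
print(f"Wer: {100 * score:.4f}")  # scaled to match the table above
```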

### Framework versions

- Transformers 4.35.0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.14.1