whisper-large-v3-cit-do005-wd0-lr5e-06-steps1200-CA

This model is a fine-tuned version of openai/whisper-large-v3 on the 3309 CA-2024-12-01 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4526
  • WER Ortho: 21.9410
  • WER: 15.1505
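
As a minimal usage sketch, assuming the checkpoint is published on the Hugging Face Hub under Makkoen/whisper-large-v3-cit-do005-wd0-lr5e-06-steps1200-CA (this card's model name) and with "sample.wav" as a placeholder audio file, transcription with the transformers ASR pipeline could look like this:

```python
# Minimal inference sketch -- the Hub id is taken from this card's title
# and "sample.wav" is a placeholder; adjust both for your setup.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Makkoen/whisper-large-v3-cit-do005-wd0-lr5e-06-steps1200-CA",
    torch_dtype="auto",  # pick FP16/FP32 based on the checkpoint and device
)

# Whisper operates on 30-second windows; chunking handles longer audio.
result = asr("sample.wav", chunk_length_s=30)
print(result["text"])
```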

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 5e-06
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 300
  • training_steps: 1200
  • mixed_precision_training: Native AMP
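
As a reproduction aid, the values above map onto transformers Seq2SeqTrainingArguments roughly as sketched below. This is an assumption, not the original training script: output_dir is a placeholder, and anything not listed on this card is left at its default.

```python
# Sketch of the training configuration implied by the list above.
# Only hyperparameters shown on this card are set explicitly.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v3-ca",  # placeholder, not from the card
    learning_rate=5e-6,
    per_device_train_batch_size=4,       # train_batch_size: 4
    per_device_eval_batch_size=8,        # eval_batch_size: 8
    gradient_accumulation_steps=4,       # 4 x 4 -> total_train_batch_size: 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=300,
    max_steps=1200,
    fp16=True,                           # Native AMP mixed precision
)
```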

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER Ortho | WER     |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|
| 0.6296        | 1.0738 | 200  | 0.4426          | 24.2755   | 17.3001 |
| 0.3923        | 2.1477 | 400  | 0.4143          | 22.7639   | 15.8469 |
| 0.2856        | 3.2215 | 600  | 0.4175          | 22.3166   | 15.5460 |
| 0.2101        | 4.2953 | 800  | 0.4351          | 22.1020   | 15.3224 |
| 0.1688        | 5.3691 | 1000 | 0.4375          | 21.9589   | 15.3740 |
| 0.1306        | 6.4430 | 1200 | 0.4526          | 21.9410   | 15.1505 |
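
The two WER columns appear to follow the usual convention for Whisper fine-tunes: "WER Ortho" is scored on the raw orthographic text, while "WER" is scored after text normalization, which is why it is consistently lower. Below is a hedged sketch of that computation using the evaluate library and Whisper's BasicTextNormalizer; the card does not include the actual evaluation script, and the example strings are placeholders.

```python
# Hedged sketch of the orthographic vs. normalized WER computation;
# the reference/prediction strings are placeholders, not card data.
import evaluate
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

wer_metric = evaluate.load("wer")
normalizer = BasicTextNormalizer()

references = ["Hello, how are you?"]  # placeholder ground-truth transcripts
predictions = ["hello how are you"]   # placeholder model outputs

# Orthographic WER: casing and punctuation differences count as errors.
wer_ortho = 100 * wer_metric.compute(references=references, predictions=predictions)

# Normalized WER: both sides are normalized before scoring.
wer = 100 * wer_metric.compute(
    references=[normalizer(r) for r in references],
    predictions=[normalizer(p) for p in predictions],
)
print(f"WER Ortho: {wer_ortho:.2f}  WER: {wer:.2f}")
```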

Framework versions

  • Transformers 4.44.0
  • PyTorch 1.13.1+cu117
  • Datasets 2.21.0
  • Tokenizers 0.19.1
