whisper-large-v3-turbo-gl-en

This model is a fine-tuned version of openai/whisper-large-v3-turbo on the juanjucm/OpenSLR-SpeechT-GL-EN dataset for Galician-to-English (GL-EN) speech translation. It achieves the following results on the test set (a usage sketch follows the results):

  • Loss: 0.9360
  • BLEU: 55.6535
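
As a quick usage reference, here is a minimal inference sketch with the transformers pipeline. The repo id follows the model-tree name on this card, the audio path is a placeholder, and passing task="translate" is the standard way to ask Whisper to emit English text:

```python
# Minimal inference sketch; repo id and audio path are illustrative.
from transformers import pipeline

pipe = pipeline(
    "automatic-speech-recognition",
    model="juanjucm/whisper-large-v3-turbo-OpenSLR-GL-EN",
)

# Whisper translates to English when task="translate" is passed at generation time.
result = pipe("galician_audio.wav", generate_kwargs={"task": "translate"})
print(result["text"])
```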

Training hyperparameters

The following hyperparameters were used during training (an illustrative reconstruction as Trainer arguments follows the list):

  • learning_rate: 5e-06
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 2
  • total_train_batch_size: 32
  • total_eval_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • training_steps: 3500
  • mixed_precision_training: Native AMP
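
These settings map onto transformers' Seq2SeqTrainingArguments roughly as sketched below. This is a hedged reconstruction, not the author's actual training script: the output_dir is a placeholder, and the per-device batch sizes are inferred from the stated totals and the two-GPU setup.

```python
# Hedged reconstruction of the listed hyperparameters; output_dir is a placeholder.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-v3-turbo-gl-en",
    learning_rate=5e-6,
    per_device_train_batch_size=16,  # 2 GPUs -> total train batch size 32
    per_device_eval_batch_size=8,    # 2 GPUs -> total eval batch size 16
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=3500,
    fp16=True,                       # "Native AMP" mixed-precision training
)
```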

Training results

Training Loss   Epoch     Step   Validation Loss   BLEU
0.2758           1.6667    250   0.7646            50.6055
0.0592           3.3333    500   0.7730            53.1258
0.0406           5.0       750   0.7860            53.3406
0.0173           6.6667   1000   0.8358            51.9789
0.0091           8.3333   1250   0.8909            54.4806
0.0071          10.0      1500   0.8862            54.2655
0.0039          11.6667   1750   0.9216            52.5119
0.0014          13.3333   2000   0.9281            54.5752
0.0013          15.0      2250   0.9471            54.5791
0.0009          16.6667   2500   0.9541            54.8725
0.0006          18.3333   2750   0.9614            53.1879
0.0006          20.0      3000   0.9701            54.6499
0.0006          21.6667   3250   0.9739            54.4341
0.0006          23.3333   3500   0.9747            54.5311
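
The card does not state which BLEU implementation was used; sacrebleu via the evaluate library is the common choice in transformers fine-tuning scripts. A self-contained sketch of that metric on toy data:

```python
# Illustrative BLEU computation with evaluate/sacrebleu (toy data, not the card's eval).
import evaluate

bleu = evaluate.load("sacrebleu")
predictions = ["the cat sat on the mat"]
references = [["the cat is sitting on the mat"]]  # one list of references per prediction
result = bleu.compute(predictions=predictions, references=references)
print(result["score"])  # corpus-level BLEU score
```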

Framework versions

  • Transformers 4.45.1
  • Pytorch 2.4.1+cu121
  • Datasets 3.0.1
  • Tokenizers 0.20.0