
Whisper Small GA-EN Speech Translation

This model is a fine-tuned version of openai/whisper-small on the IWSLT-2023, FLEURS, BiteSize, SpokenWords, Tatoeba, and Wikimedia datasets. It achieves the following results on the evaluation set (a usage sketch follows the list):

  • Loss: 1.3690
  • BLEU: 32.44
  • ChrF: 48.06
  • WER: 63.2598
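
A minimal inference sketch using the Transformers pipeline API (the file name clip.wav is a placeholder for an Irish-language recording; paths passed to the pipeline are decoded and resampled to 16 kHz automatically):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a speech-recognition pipeline.
pipe = pipeline(
    "automatic-speech-recognition",
    model="ymoslem/whisper-small-ga2en-v5.5-r",
)

# task="translate" asks Whisper to emit English text for non-English speech.
result = pipe("clip.wav", generate_kwargs={"task": "translate"})
print(result["text"])
```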

Model description

This model translates Irish (GA) speech into English (EN) text. It was fine-tuned from openai/whisper-small, has roughly 242M parameters, and is published as ymoslem/whisper-small-ga2en-v5.5-r with F32 weights in Safetensors format.

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.01
  • training_steps: 3000
  • mixed_precision_training: Native AMP
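
As a rough guide, these settings correspond to a transformers.Seq2SeqTrainingArguments configuration along the following lines. This is a sketch, not the author's actual training script; output_dir is a placeholder, and the Adam betas/epsilon listed above are the library defaults:

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-ga2en",  # placeholder path
    learning_rate=1e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.01,
    max_steps=3000,
    fp16=True,  # "Native AMP" mixed precision
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults.
)
```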

Training results

Training Loss | Epoch  | Step | Validation Loss | BLEU  | ChrF  | WER
------------- | ------ | ---- | --------------- | ----- | ----- | --------
2.3655        | 0.0438 | 100  | 1.7709          | 8.84  | 26.21 | 127.7803
1.8998        | 0.0876 | 200  | 1.5198          | 14.9  | 32.89 | 99.2796
1.5421        | 0.1313 | 300  | 1.3972          | 16.15 | 35.77 | 86.8077
1.3154        | 0.1751 | 400  | 1.3412          | 20.46 | 39.48 | 83.0707
1.1138        | 0.2189 | 500  | 1.3126          | 23.16 | 41.28 | 74.1108
0.9814        | 0.2627 | 600  | 1.3217          | 25.56 | 41.67 | 68.7528
0.8897        | 0.3065 | 700  | 1.2859          | 27.0  | 43.54 | 66.3215
0.7495        | 0.3503 | 800  | 1.2668          | 21.71 | 43.03 | 75.7767
0.7068        | 0.3940 | 900  | 1.2852          | 17.86 | 40.88 | 106.0333
0.6002        | 0.4378 | 1000 | 1.2476          | 24.0  | 44.26 | 78.4331
0.4989        | 0.4816 | 1100 | 1.2756          | 28.88 | 45.57 | 67.2670
0.4464        | 0.5254 | 1200 | 1.2756          | 27.81 | 45.53 | 66.8618
0.3883        | 0.5692 | 1300 | 1.2799          | 29.84 | 46.03 | 64.0702
0.341         | 0.6130 | 1400 | 1.2693          | 26.51 | 43.97 | 75.3715
0.2853        | 0.6567 | 1500 | 1.3310          | 26.99 | 45.58 | 74.0207
0.2611        | 0.7005 | 1600 | 1.3022          | 25.83 | 44.79 | 73.4354
0.2013        | 0.7443 | 1700 | 1.3266          | 30.78 | 46.61 | 63.6650
0.1886        | 0.7881 | 1800 | 1.2943          | 25.56 | 45.46 | 73.7055
0.1517        | 0.8319 | 1900 | 1.3193          | 28.93 | 45.09 | 64.3854
0.1288        | 0.8757 | 2000 | 1.3567          | 28.22 | 44.75 | 67.6722
0.1129        | 0.9194 | 2100 | 1.3431          | 29.55 | 46.22 | 66.2314
0.1           | 0.9632 | 2200 | 1.3365          | 31.46 | 48.14 | 64.9257
0.0505        | 1.0070 | 2300 | 1.3557          | 30.37 | 47.16 | 64.1153
0.0468        | 1.0508 | 2400 | 1.3648          | 31.57 | 48.17 | 62.0891
0.0373        | 1.0946 | 2500 | 1.3661          | 31.56 | 47.76 | 64.7456
0.0297        | 1.1384 | 2600 | 1.3638          | 31.13 | 47.74 | 64.3854
0.0283        | 1.1821 | 2700 | 1.3847          | 29.98 | 47.54 | 65.9613
0.0302        | 1.2259 | 2800 | 1.3730          | 32.32 | 48.28 | 64.0252
0.0229        | 1.2697 | 2900 | 1.3702          | 31.47 | 47.55 | 65.1508
0.0262        | 1.3135 | 3000 | 1.3690          | 32.44 | 48.06 | 63.2598
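
Metrics like those in the BLEU, ChrF, and WER columns can be computed with the Hugging Face evaluate library. A sketch, using placeholder hypothesis/reference pairs rather than real model outputs:

```python
import evaluate

# Placeholder data; real evaluation compares the model's English outputs
# against the reference translations of the evaluation set.
predictions = ["the weather is nice today"]
references = [["the weather is fine today"]]  # BLEU/ChrF accept multiple refs

bleu = evaluate.load("sacrebleu").compute(predictions=predictions, references=references)
chrf = evaluate.load("chrf").compute(predictions=predictions, references=references)
wer = evaluate.load("wer").compute(predictions=predictions,
                                   references=[r[0] for r in references])

# BLEU and ChrF are on a 0-100 scale; evaluate's WER is a fraction,
# so multiply by 100 to match the percentages in the table.
print(bleu["score"], chrf["score"], wer * 100)
```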

Framework versions

  • Transformers 4.41.2
  • Pytorch 2.2.0+cu121
  • Datasets 2.19.2
  • Tokenizers 0.19.1