---
language:
  - ga
  - en
license: apache-2.0
base_model: openai/whisper-small
tags:
  - generated_from_trainer
datasets:
  - ymoslem/IWSLT2023-GA-EN
  - ymoslem/FLEURS-GA-EN
  - ymoslem/BitesizeIrish-GA-EN
  - ymoslem/SpokenWords-GA-EN-MTed
  - ymoslem/Tatoeba-Speech-Irish
  - ymoslem/Wikimedia-Speech-Irish
metrics:
  - bleu
  - wer
model-index:
  - name: Whisper Small GA-EN Speech Translation
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: IWSLT-2023, FLEURS, BiteSize, SpokenWords, Tatoeba, and Wikimedia
          type: ymoslem/IWSLT2023-GA-EN
        metrics:
          - name: Bleu
            type: bleu
            value: 27.85
          - name: Wer
            type: wer
            value: 73.43538946420531
---

Whisper Small GA-EN Speech Translation

This model is a fine-tuned version of openai/whisper-small on the IWSLT-2023, FLEURS, BiteSize, SpokenWords, Tatoeba, and Wikimedia datasets. It achieves the following results on the evaluation set:

  • Loss: 1.4107
  • BLEU: 27.85
  • ChrF: 46.91
  • WER: 73.4354

Model description

This model performs direct speech translation from Irish (ga) audio into English (en) text. It was obtained by fine-tuning openai/whisper-small and therefore inherits Whisper Small's architecture, 16 kHz audio input format, and tokenizer.

Intended uses & limitations

The model is intended for translating spoken Irish into written English. Given the evaluation scores reported above (BLEU 27.85, WER 73.44), outputs should be reviewed before downstream use, and quality on domains not covered by the training data may be lower. A minimal inference sketch follows.
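
The snippet below is a hedged sketch of loading the model for inference with the Transformers library; the repository id and audio file name are placeholders, not values taken from this card.

```python
# Minimal inference sketch (repository id and audio file are placeholders).
import librosa
from transformers import WhisperForConditionalGeneration, WhisperProcessor

model_id = "ymoslem/whisper-small-ga2en"  # placeholder: replace with this model's actual repository id
processor = WhisperProcessor.from_pretrained(model_id)
model = WhisperForConditionalGeneration.from_pretrained(model_id)

# Whisper expects 16 kHz mono audio.
audio, _ = librosa.load("irish_speech_sample.wav", sr=16000)  # placeholder audio file
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

# The model was fine-tuned to emit English text for Irish speech input.
generated_ids = model.generate(inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```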

Training and evaluation data

The model was trained and evaluated on the Irish-English speech datasets listed in the metadata above: ymoslem/IWSLT2023-GA-EN, ymoslem/FLEURS-GA-EN, ymoslem/BitesizeIrish-GA-EN, ymoslem/SpokenWords-GA-EN-MTed, ymoslem/Tatoeba-Speech-Irish, and ymoslem/Wikimedia-Speech-Irish. A sketch of combining them is shown below.
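
As a rough sketch under assumptions (split names and column layouts are not verified against the individual repositories), the listed corpora could be combined with the datasets library like this:

```python
# Sketch of combining the listed corpora (split and column names are assumptions).
from datasets import concatenate_datasets, load_dataset

dataset_ids = [
    "ymoslem/IWSLT2023-GA-EN",
    "ymoslem/FLEURS-GA-EN",
    "ymoslem/BitesizeIrish-GA-EN",
    "ymoslem/SpokenWords-GA-EN-MTed",
    "ymoslem/Tatoeba-Speech-Irish",
    "ymoslem/Wikimedia-Speech-Irish",
]

# Assumes each repository exposes a "train" split; concatenate_datasets also
# requires matching features, so columns may need renaming or casting first.
train_sets = [load_dataset(d, split="train") for d in dataset_ids]
train_data = concatenate_datasets(train_sets)
print(train_data)
```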

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.01
  • training_steps: 3000
  • mixed_precision_training: Native AMP
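
These values map onto the Seq2SeqTrainingArguments used by the Transformers Trainer API. The block below is a configuration sketch under that assumption: the output directory and evaluation cadence are illustrative, the default AdamW optimizer already matches the listed betas and epsilon, and the model, data pipeline, and Trainer setup are omitted.

```python
# Sketch of Seq2SeqTrainingArguments mirroring the listed hyperparameters
# (output_dir and eval cadence are illustrative, not taken from this card).
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-ga2en",  # placeholder output directory
    learning_rate=1e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.01,
    max_steps=3000,
    fp16=True,                         # native AMP mixed precision
    eval_strategy="steps",             # the results table logs evaluation every 100 steps
    eval_steps=100,
    predict_with_generate=True,        # needed for BLEU/ChrF/WER during evaluation
)
```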

Training results

| Training Loss | Epoch  | Step | Validation Loss | BLEU  | ChrF  | WER      |
|---------------|--------|------|-----------------|-------|-------|----------|
| 2.3549        | 0.1312 | 100  | 1.8335          | 7.17  | 24.71 | 135.7497 |
| 1.8906        | 0.2625 | 200  | 1.5173          | 15.56 | 34.19 | 91.5353  |
| 1.653         | 0.3937 | 300  | 1.3530          | 17.17 | 36.35 | 103.7371 |
| 1.4901        | 0.5249 | 400  | 1.3334          | 24.65 | 43.44 | 78.2530  |
| 1.3551        | 0.6562 | 500  | 1.2763          | 27.04 | 43.88 | 67.4471  |
| 1.2187        | 0.7874 | 600  | 1.2618          | 27.08 | 43.98 | 69.2031  |
| 1.0359        | 0.9186 | 700  | 1.2644          | 20.82 | 40.76 | 96.8483  |
| 0.5364        | 1.0499 | 800  | 1.3258          | 24.9  | 42.9  | 65.8262  |
| 0.4892        | 1.1811 | 900  | 1.3296          | 23.82 | 42.86 | 72.3098  |
| 0.4504        | 1.3123 | 1000 | 1.3001          | 25.78 | 43.72 | 75.5065  |
| 0.4161        | 1.4436 | 1100 | 1.2948          | 27.16 | 44.31 | 67.3120  |
| 0.3953        | 1.5748 | 1200 | 1.3261          | 29.14 | 44.65 | 65.5110  |
| 0.3509        | 1.7060 | 1300 | 1.3398          | 22.75 | 44.32 | 80.1441  |
| 0.2955        | 1.8373 | 1400 | 1.3077          | 26.29 | 42.89 | 74.8762  |
| 0.2801        | 1.9685 | 1500 | 1.3206          | 25.51 | 43.39 | 76.5871  |
| 0.1084        | 2.0997 | 1600 | 1.3609          | 28.01 | 45.59 | 68.1225  |
| 0.1003        | 2.2310 | 1700 | 1.3722          | 26.4  | 42.69 | 72.8501  |
| 0.1083        | 2.3622 | 1800 | 1.3776          | 3.81  | 19.2  | 396.1279 |
| 0.0939        | 2.4934 | 1900 | 1.3729          | 28.43 | 45.61 | 69.2031  |
| 0.0909        | 2.6247 | 2000 | 1.3834          | 27.12 | 43.39 | 67.4921  |
| 0.0772        | 2.7559 | 2100 | 1.4094          | 28.44 | 44.15 | 65.5110  |
| 0.0753        | 2.8871 | 2200 | 1.3825          | 30.5  | 46.21 | 64.9257  |
| 0.0438        | 3.0184 | 2300 | 1.4198          | 30.44 | 46.18 | 62.5844  |
| 0.0257        | 3.1496 | 2400 | 1.4033          | 31.03 | 46.67 | 63.6650  |
| 0.0252        | 3.2808 | 2500 | 1.4045          | 31.2  | 46.44 | 62.4043  |
| 0.0241        | 3.4121 | 2600 | 1.3971          | 32.42 | 48.21 | 61.1436  |
| 0.0208        | 3.5433 | 2700 | 1.4129          | 30.36 | 46.28 | 65.7362  |
| 0.0186        | 3.6745 | 2800 | 1.4076          | 31.14 | 47.73 | 64.4304  |
| 0.018         | 3.8058 | 2900 | 1.4151          | 27.67 | 45.87 | 73.5254  |
| 0.0193        | 3.9370 | 3000 | 1.4107          | 27.85 | 46.91 | 73.4354  |
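
The BLEU, ChrF, and WER columns can be computed with the evaluate library; the snippet below is a rough sketch with placeholder predictions and references, not the exact evaluation code behind this card.

```python
# Rough metric-computation sketch (requires: pip install evaluate sacrebleu jiwer).
import evaluate

bleu = evaluate.load("sacrebleu")
chrf = evaluate.load("chrf")
wer = evaluate.load("wer")

predictions = ["the weather is nice today"]      # placeholder model outputs (English)
references = [["the weather is lovely today"]]   # placeholder reference translations

print("BLEU:", bleu.compute(predictions=predictions, references=references)["score"])
print("ChrF:", chrf.compute(predictions=predictions, references=references)["score"])
print("WER :", wer.compute(predictions=predictions, references=[r[0] for r in references]))
```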

Framework versions

  • Transformers 4.41.2
  • PyTorch 2.2.0+cu121
  • Datasets 2.19.2
  • Tokenizers 0.19.1