---
language:
  - ru
license: apache-2.0
base_model: openai/whisper-small
tags:
  - hf-asr-leaderboard
  - generated_from_trainer
datasets:
  - fleurs
metrics:
  - wer
model-index:
  - name: Whisper Small ru - Chee Li
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: Google Fleurs
          type: fleurs
          config: ru_ru
          split: None
          args: 'config: ru split: test'
        metrics:
          - name: Wer
            type: wer
            value: 50.354088722608225
---

# Whisper Small ru - Chee Li

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Google Fleurs dataset. It achieves the following results on the evaluation set:

- Loss: 0.2500
- Wer: 50.3541
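
For quick transcription of Russian speech, the checkpoint can be loaded with the Transformers ASR pipeline. Below is a minimal sketch; the repo id `CheeLi03/whisper-small-rus` and the audio file path are assumptions, not taken from the training setup described here.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint (repo id assumed to be CheeLi03/whisper-small-rus).
asr = pipeline(
    "automatic-speech-recognition",
    model="CheeLi03/whisper-small-rus",
)

# Transcribe a local Russian audio file (placeholder path) and print the text.
result = asr(
    "sample_ru.wav",
    generate_kwargs={"language": "russian", "task": "transcribe"},
)
print(result["text"])
```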

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
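
For reference, these values correspond roughly to the following `Seq2SeqTrainingArguments` configuration. This is a sketch only; the `output_dir` and any arguments not listed above (logging, evaluation cadence, etc.) are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the training configuration implied by the hyperparameters above.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-rus",   # assumed output path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,                          # "Native AMP" mixed-precision training
)
```

Adam's betas and epsilon match the Transformers defaults (0.9, 0.999, 1e-08), so no extra optimizer arguments are needed.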

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Wer     |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.0049        | 5.4645  | 1000 | 0.2170          | 29.2090 |
| 0.0013        | 10.9290 | 2000 | 0.2340          | 43.3993 |
| 0.0006        | 16.3934 | 3000 | 0.2457          | 49.9800 |
| 0.0004        | 21.8579 | 4000 | 0.2500          | 50.3541 |
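
The Wer column is a percentage; the `evaluate` library reports word error rate as a fraction, so it is multiplied by 100. A minimal sketch with placeholder strings:

```python
import evaluate

# Load the word-error-rate metric and score a toy prediction/reference pair.
wer_metric = evaluate.load("wer")
predictions = ["привет мир"]   # placeholder model output
references = ["привет всем"]   # placeholder ground truth
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```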

### Framework versions

- Transformers 4.44.0
- Pytorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1