---
language:
  - pt
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
  - hf-asr-leaderboard
  - generated_from_trainer
datasets:
  - google/fleurs
metrics:
  - wer
model-index:
  - name: Whisper Large V3 pt Fleurs Aug - Chee Li
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: Google Fleurs
          type: google/fleurs
          config: pt_br
          split: None
          args: 'config: pt split: test'
        metrics:
          - name: Wer
            type: wer
            value: 418.6592073715387
---

# Whisper Large V3 pt Fleurs Aug - Chee Li

This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the Google Fleurs dataset. It achieves the following results on the evaluation set:

- Loss: 0.1648
- Wer: 418.6592
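
Since this is a standard Whisper checkpoint fine-tuned with 🤗 Transformers, it can be loaded through the ASR pipeline. The sketch below is illustrative: the Hub repository id and the audio file name are assumptions, since the card does not state them.

```python
# Minimal inference sketch for this fine-tuned Whisper checkpoint.
# NOTE: the repository id below is a placeholder / assumption; replace it
# with the actual Hub id under which this model was published.
import torch
from transformers import pipeline

model_id = "your-username/whisper-large-v3-pt-fleurs-aug"  # hypothetical id

asr = pipeline(
    "automatic-speech-recognition",
    model=model_id,
    torch_dtype=torch.float16,
    device="cuda:0" if torch.cuda.is_available() else "cpu",
)

# Transcribe a local Portuguese audio file (illustrative file name).
result = asr(
    "sample_pt.wav",
    generate_kwargs={"language": "portuguese", "task": "transcribe"},
)
print(result["text"])
```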

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
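
For reference, these settings correspond roughly to the `Seq2SeqTrainingArguments` sketch below. Only the values listed above are taken from the actual run; `output_dir`, the logging/save cadence, and `report_to` are assumptions, and the Adam betas/epsilon are the library defaults.

```python
# Sketch of Seq2SeqTrainingArguments mirroring the hyperparameters above.
# output_dir, logging_steps, save_steps and report_to are illustrative;
# Adam betas/epsilon are left at their defaults (0.9, 0.999, 1e-08).
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v3-pt-fleurs-aug",  # illustrative path
    learning_rate=5e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,                  # "Native AMP" mixed precision
    eval_strategy="steps",
    eval_steps=1000,            # matches the 1000-step cadence in the results table
    save_steps=1000,            # illustrative
    logging_steps=25,           # illustrative
    predict_with_generate=True,
    report_to=["tensorboard"],  # illustrative
)
```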

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer      |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0298        | 1.2579 | 1000 | 0.1279          | 73.4662  |
| 0.0053        | 2.5157 | 2000 | 0.1516          | 315.7726 |
| 0.0058        | 3.7736 | 3000 | 0.1560          | 433.2424 |
| 0.0005        | 5.0314 | 4000 | 0.1648          | 418.6592 |

### Framework versions

- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1