---
base_model: biodatlab/whisper-th-medium-combined
datasets:
  - common_voice_17_0
library_name: transformers
license: apache-2.0
metrics:
  - wer
tags:
  - generated_from_trainer
model-index:
  - name: whisper-finetune-th
    results:
      - task:
          type: automatic-speech-recognition
          name: Automatic Speech Recognition
        dataset:
          name: common_voice_17_0
          type: common_voice_17_0
          config: th
          split: None
          args: th
        metrics:
          - type: wer
            value: 15.045342636924866
            name: Wer
---

# whisper-finetune-th

This model is a fine-tuned version of [biodatlab/whisper-th-medium-combined](https://huggingface.co/biodatlab/whisper-th-medium-combined) on the Thai (`th`) configuration of the common_voice_17_0 dataset. It achieves the following results on the evaluation set:

- Loss: 0.1015
- Wer: 15.0453
- Cer: 3.8830

## Model description

whisper-finetune-th is a Thai automatic speech recognition model: the Whisper-medium checkpoint [biodatlab/whisper-th-medium-combined](https://huggingface.co/biodatlab/whisper-th-medium-combined) fine-tuned on the Thai portion of Common Voice 17.0.

## Intended uses & limitations

The model is intended for transcribing Thai speech. The figures reported above are measured on the Common Voice 17.0 Thai evaluation split; accuracy on other domains, accents, or noisy recordings may differ and should be verified before deployment.
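
A minimal usage sketch with the `transformers` ASR pipeline; the model id below is an assumption about where this checkpoint is hosted, so adjust it to the actual repository:

```python
# Hedged usage sketch: transcribe Thai speech with the transformers ASR pipeline.
# The model id is an assumption about the checkpoint location, not a confirmed repo.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Wachiraya/whisper-finetune-th",  # assumed repo id; adjust as needed
    generate_kwargs={"language": "th", "task": "transcribe"},
)

# Any 16 kHz mono audio file works; the path here is a placeholder.
print(asr("example_thai_clip.wav")["text"])
```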

## Training and evaluation data

Fine-tuning and evaluation use the Thai (`th`) configuration of Common Voice 17.0 (`common_voice_17_0`), Mozilla's crowd-sourced speech corpus.
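
For reference, a hedged sketch of loading that data with `datasets`; the `mozilla-foundation/common_voice_17_0` repository id and the `train` split are assumptions (the card records `split: None`), and the dataset is gated on the Hub:

```python
# Hedged sketch: load the Thai (th) configuration of Common Voice 17.0.
# The repo id is an assumption (the usual Mozilla mirror); it is gated, so
# `huggingface-cli login` and accepting the dataset terms are required.
from datasets import load_dataset, Audio

common_voice = load_dataset("mozilla-foundation/common_voice_17_0", "th", split="train")

# Whisper feature extractors expect 16 kHz audio.
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16_000))
print(common_voice)
```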

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch that mirrors them follows the list):

- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
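
A hedged sketch of how these settings map onto `Seq2SeqTrainingArguments` in `transformers`; the output directory and the 1000-step evaluation/save cadence are assumptions (the cadence is inferred from the results table below), not recorded settings:

```python
# Sketch only: mirrors the hyperparameters listed above; values marked as
# assumptions were not part of the original card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-finetune-th",  # assumed output path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    warmup_steps=500,
    max_steps=4000,
    lr_scheduler_type="linear",
    fp16=True,                           # "Native AMP" mixed-precision training
    eval_strategy="steps",               # assumed from the 1000-step eval cadence
    eval_steps=1000,
    save_steps=1000,
    predict_with_generate=True,          # needed so WER/CER can be computed at eval time
)
```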

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer     | Cer    |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|
| 0.2829        | 0.4873 | 1000 | 0.1345          | 20.0856 | 5.3644 |
| 0.1548        | 0.9747 | 2000 | 0.1161          | 17.6348 | 4.5783 |
| 0.1775        | 1.4620 | 3000 | 0.1074          | 15.9448 | 4.1193 |
| 0.1477        | 1.9493 | 4000 | 0.1015          | 15.0453 | 3.8830 |
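
The Wer/Cer columns are typically computed with the `evaluate` library (requires `pip install evaluate jiwer`); the strings below are placeholders, not data from the actual evaluation set:

```python
# Hedged sketch of the WER/CER computation; example strings are placeholders.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["สวัสดีครับ"]  # model transcriptions (placeholder)
references = ["สวัสดีครับ"]   # ground-truth transcripts (placeholder)

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
cer = 100 * cer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}  CER: {cer:.4f}")
```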

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1