# Whisper tiny AR - BH
This model is a fine-tuned version of openai/whisper-tiny on the quran-ayat-speech-to-text dataset. It achieves the following results on the evaluation set:
- Loss: 0.0100
- WER (word error rate): 0.1520
- CER (character error rate): 0.0580
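As a minimal usage sketch (not part of the original card), the checkpoint can be loaded through the transformers ASR pipeline; the repository id below is taken from the model page and the audio path is a placeholder:

```python
# Minimal transcription sketch; the repo id and audio path are illustrative.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Baselhany/Whisper_tiny_tring_large_sample_with_early_stop2",
)

# Whisper expects 16 kHz mono audio; the pipeline resamples the input automatically.
result = asr(
    "ayah.wav",  # placeholder path to an Arabic recitation clip
    generate_kwargs={"language": "arabic", "task": "transcribe"},
)
print(result["text"])
```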
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a training-arguments sketch follows the list):
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
- mixed_precision_training: Native AMP
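The card does not include the training script, but the hyperparameters above map onto Seq2SeqTrainingArguments roughly as follows; the output_dir is a placeholder and the whole block is an assumption about how the run was configured:

```python
# Hypothetical reconstruction of the training configuration from the list above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-ar-bh",   # placeholder
    learning_rate=5e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,       # 16 * 4 = 64 effective train batch size
    seed=42,
    optim="adamw_torch",                 # betas=(0.9, 0.999) and eps=1e-08 are the defaults
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=15,
    fp16=True,                           # native AMP mixed precision
)
```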
### Training results
| Training Loss | Epoch   | Step | Validation Loss | WER    | CER    |
|:-------------:|:-------:|:----:|:---------------:|:------:|:------:|
| 0.0083        | 1.0     | 313  | 0.0106          | 0.1573 | 0.0532 |
| 0.0075        | 2.0     | 626  | 0.0103          | 0.1544 | 0.0535 |
| 0.0076        | 3.0     | 939  | 0.0097          | 0.1605 | 0.0581 |
| 0.0059        | 4.0     | 1252 | 0.0095          | 0.1582 | 0.0562 |
| 0.0056        | 5.0     | 1565 | 0.0094          | 0.1533 | 0.0623 |
| 0.0064        | 6.0     | 1878 | 0.0094          | 0.1736 | 0.0610 |
| 0.0052        | 7.0     | 2191 | 0.0094          | 0.1560 | 0.0560 |
| 0.0046        | 8.0     | 2504 | 0.0093          | 0.1674 | 0.0567 |
| 0.0036        | 9.0     | 2817 | 0.0096          | 0.1437 | 0.0482 |
| 0.0036        | 10.0    | 3130 | 0.0095          | 0.1522 | 0.0518 |
| 0.0032        | 11.0    | 3443 | 0.0095          | 0.1508 | 0.0520 |
| 0.0023        | 12.0    | 3756 | 0.0096          | 0.1466 | 0.0487 |
| 0.0028        | 13.0    | 4069 | 0.0096          | 0.1426 | 0.0461 |
| 0.0028        | 14.0    | 4382 | 0.0100          | 0.1508 | 0.0582 |
| 0.0023        | 14.9536 | 4680 | 0.0097          | 0.1403 | 0.0455 |
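The WER and CER columns above are word and character error rates. The evaluation script is not included in the card, but as a minimal sketch these metrics are typically computed with the `evaluate` library; the strings below are placeholders, not dataset samples:

```python
# Illustrative WER/CER computation; the strings are placeholders, not dataset samples.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["بسم الله الرحمن الرحيم"]  # model transcription (placeholder)
references = ["بسم الله الرحمن الرحيم"]   # reference transcript (placeholder)

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```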
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0