---
license: apache-2.0
base_model: DewiBrynJones/wav2vec2-xlsr-53-ft-btb-cv-cy
tags:
- automatic-speech-recognition
- ./data-configs/btb.json
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-btb-cv-ft-btb-cy-cand
  results: []
---

# wav2vec2-btb-cv-ft-btb-cy-cand

This model is a fine-tuned version of [DewiBrynJones/wav2vec2-xlsr-53-ft-btb-cv-cy](https://huggingface.co./DewiBrynJones/wav2vec2-xlsr-53-ft-btb-cv-cy) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.3402

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | Wer    |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| No log        | 0.0215 | 200   | inf             | 0.5592 |
| No log        | 0.0429 | 400   | inf             | 0.4289 |
| 2.1964        | 0.0644 | 600   | inf             | 0.4374 |
| 2.1964        | 0.0858 | 800   | inf             | 0.4944 |
| 0.8327        | 0.1073 | 1000  | inf             | 0.5150 |
| 0.8327        | 0.1287 | 1200  | inf             | 0.5634 |
| 0.8327        | 0.1502 | 1400  | inf             | 0.5355 |
| 0.91          | 0.1716 | 1600  | inf             | 0.5152 |
| 0.91          | 0.1931 | 1800  | inf             | 0.5595 |
| 0.8721        | 0.2145 | 2000  | inf             | 0.5057 |
| 0.8721        | 0.2360 | 2200  | inf             | 0.5041 |
| 0.8721        | 0.2574 | 2400  | inf             | 0.5146 |
| 0.8218        | 0.2789 | 2600  | inf             | 0.5018 |
| 0.8218        | 0.3003 | 2800  | inf             | 0.5091 |
| 0.8469        | 0.3218 | 3000  | inf             | 0.5037 |
| 0.8469        | 0.3432 | 3200  | inf             | 0.4703 |
| 0.8469        | 0.3647 | 3400  | inf             | 0.4795 |
| 0.8142        | 0.3861 | 3600  | inf             | 0.4714 |
| 0.8142        | 0.4076 | 3800  | inf             | 0.4554 |
| 0.8085        | 0.4290 | 4000  | inf             | 0.4506 |
| 0.8085        | 0.4505 | 4200  | inf             | 0.4458 |
| 0.8085        | 0.4720 | 4400  | inf             | 0.4367 |
| 0.7802        | 0.4934 | 4600  | inf             | 0.4401 |
| 0.7802        | 0.5149 | 4800  | inf             | 0.4334 |
| 0.7493        | 0.5363 | 5000  | inf             | 0.4224 |
| 0.7493        | 0.5578 | 5200  | inf             | 0.4328 |
| 0.7493        | 0.5792 | 5400  | inf             | 0.4176 |
| 0.7668        | 0.6007 | 5600  | inf             | 0.4183 |
| 0.7668        | 0.6221 | 5800  | inf             | 0.4030 |
| 0.6999        | 0.6436 | 6000  | inf             | 0.4125 |
| 0.6999        | 0.6650 | 6200  | inf             | 0.4076 |
| 0.6999        | 0.6865 | 6400  | inf             | 0.3917 |
| 0.6918        | 0.7079 | 6600  | inf             | 0.4004 |
| 0.6918        | 0.7294 | 6800  | inf             | 0.3865 |
| 0.6888        | 0.7508 | 7000  | inf             | 0.3785 |
| 0.6888        | 0.7723 | 7200  | inf             | 0.3824 |
| 0.6888        | 0.7937 | 7400  | inf             | 0.3743 |
| 0.646         | 0.8152 | 7600  | inf             | 0.3673 |
| 0.646         | 0.8366 | 7800  | inf             | 0.3667 |
| 0.6324        | 0.8581 | 8000  | inf             | 0.3662 |
| 0.6324        | 0.8795 | 8200  | inf             | 0.3601 |
| 0.6324        | 0.9010 | 8400  | inf             | 0.3535 |
| 0.6221        | 0.9224 | 8600  | inf             | 0.3526 |
| 0.6221        | 0.9439 | 8800  | inf             | 0.3487 |
| 0.6215        | 0.9654 | 9000  | inf             | 0.3481 |
| 0.6215        | 0.9868 | 9200  | inf             | 0.3447 |
| 0.6215        | 1.0083 | 9400  | inf             | 0.3410 |
| 0.5603        | 1.0297 | 9600  | inf             | 0.3405 |
| 0.5603        | 1.0512 | 9800  | inf             | 0.3412 |
| 0.5284        | 1.0726 | 10000 | inf             | 0.3402 |

### Framework versions

- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
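
### Training configuration (sketch)

The hyperparameters listed above map roughly onto the `transformers.TrainingArguments` sketch below. This is not the original training script; the output directory and the 200-step evaluation cadence (inferred from the results table) are illustrative assumptions.

```python
from transformers import TrainingArguments

# Sketch of a TrainingArguments setup mirroring the values on this card.
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults, so
# they are not set explicitly here.
training_args = TrainingArguments(
    output_dir="wav2vec2-btb-cv-ft-btb-cy-cand",  # illustrative output path
    learning_rate=3e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    max_steps=10000,
    fp16=True,           # "Native AMP" mixed-precision training
    eval_strategy="steps",
    eval_steps=200,      # assumption: matches the 200-step cadence in the table
)
```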
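
## Example usage

A minimal inference sketch using the Hugging Face Transformers `automatic-speech-recognition` pipeline. The model id and audio filename below are placeholders: point the pipeline at the repository or local directory that actually holds this checkpoint, and supply a 16 kHz mono recording as wav2vec2-style models expect.

```python
from transformers import pipeline

# Placeholder model id: replace with the actual repo id or a local
# directory containing this fine-tuned checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="wav2vec2-btb-cv-ft-btb-cy-cand",
)

# The pipeline decodes and resamples the audio file before running the
# wav2vec2 CTC model; "speech_sample.wav" is a hypothetical input file.
result = asr("speech_sample.wav")
print(result["text"])
```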