---
library_name: transformers
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: W2V2_Bert_BIG-C_BEMBA_10hr_v1
  results: []
---

# W2V2_Bert_BIG-C_BEMBA_10hr_v1

This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co./facebook/w2v-bert-2.0), apparently on a 10-hour subset of the BIG-C Bemba speech dataset (inferred from the model name; the card does not otherwise document the training data).
It achieves the following results on the evaluation set:
- Loss: inf
- WER: 0.4556
- CER: 0.1168

Note that the evaluation loss is reported as infinite, and the training log below shows the run diverging from around epoch 5 onward (validation loss rising and WER/CER saturating at 1.0); the best logged validation WER (0.5224) occurs near epoch 4.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 100
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch   | Step | Validation Loss | WER    | CER    |
|:-------------:|:-------:|:----:|:---------------:|:------:|:------:|
| 2.809         | 0.9992  | 160  | 1.1132          | 0.6643 | 0.2005 |
| 0.9734        | 1.9984  | 320  | 0.9685          | 0.5738 | 0.1782 |
| 0.8447        | 2.9977  | 480  | 0.8472          | 0.5473 | 0.1643 |
| 0.8229        | 3.9969  | 640  | 0.8477          | 0.5224 | 0.1630 |
| 0.8776        | 4.9961  | 800  | 0.9532          | 0.5940 | 0.1767 |
| 1.1449        | 5.9953  | 960  | 1.7822          | 0.9266 | 0.3271 |
| 3.2265        | 6.9945  | 1120 | 3.3887          | 0.9994 | 0.9847 |
| 3.5702        | 8.0     | 1281 | 2.9517          | 0.9993 | 0.9694 |
| 3.92          | 8.9992  | 1441 | 4.2453          | 1.0    | 1.0    |
| 3.9997        | 9.9984  | 1601 | 4.4161          | 1.0    | 1.0    |
| 4.004         | 10.9977 | 1761 | 4.4161          | 1.0    | 1.0    |
| 3.9971        | 11.9969 | 1921 | 4.4161          | 1.0    | 1.0    |
| 4.0026        | 12.9961 | 2081 | 4.4161          | 1.0    | 1.0    |
| 4.002         | 13.9953 | 2241 | 4.4161          | 1.0    | 1.0    |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.2.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
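
## Inference example

The card does not include a usage snippet, so here is a minimal transcription sketch. It assumes the checkpoint is hosted under a repo id like `your-username/W2V2_Bert_BIG-C_BEMBA_10hr_v1` (hypothetical) and that the model carries a CTC head, as is standard for `generated_from_trainer` w2v-bert-2.0 ASR fine-tunes; adjust the repo id and audio path to your setup.

```python
import torch
import torchaudio
from transformers import AutoModelForCTC, AutoProcessor

# Hypothetical repo id; replace with the actual location of this checkpoint.
MODEL_ID = "your-username/W2V2_Bert_BIG-C_BEMBA_10hr_v1"

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForCTC.from_pretrained(MODEL_ID)
model.eval()

# Load an audio clip and resample to the 16 kHz rate w2v-bert-2.0 expects.
waveform, sample_rate = torchaudio.load("bemba_sample.wav")  # hypothetical file
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

# The processor converts raw audio into the model's input features.
inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: frame-wise argmax, then collapse repeats and blanks.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```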
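
## Reproducing the training configuration

This is not the original training script; it merely restates the hyperparameters above as `transformers` `TrainingArguments` (the `generated_from_trainer` tag indicates the `Trainer` API was used). The output directory is illustrative, and `fp16=True` is an assumption based on the "Native AMP" entry.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="W2V2_Bert_BIG-C_BEMBA_10hr_v1",  # illustrative path
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,  # effective train batch size: 4 * 8 = 32
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=100,
    fp16=True,  # assumption: "Native AMP" mixed precision
    adam_beta1=0.9,    # Adam betas/epsilon match the values listed above
    adam_beta2=0.999,  # (these are also the transformers defaults)
    adam_epsilon=1e-8,
)
```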