---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
datasets:
- common_voice
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab-full
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice
type: common_voice
config: tr
split: test
args: tr
metrics:
- name: Wer
type: wer
value: 0.30497395567357777
---
# wav2vec2-large-xls-r-300m-turkish-colab-full
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co./facebook/wav2vec2-xls-r-300m) on the Turkish (`tr`) configuration of the Common Voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3991
- Wer: 0.3050
## Model description
This is a Turkish automatic speech recognition model. The base checkpoint, wav2vec2-xls-r-300m, is the 300M-parameter XLS-R model pretrained on multilingual speech in 128 languages; fine-tuning for ASR adds a character-level CTC head on top of the pretrained encoder and trains it on Common Voice Turkish transcripts.
## Intended uses & limitations
The model is intended for transcribing Turkish speech sampled at 16 kHz, the rate expected by the Wav2Vec2 feature extractor. The WER above is measured on the Common Voice `tr` test split; accuracy on out-of-domain audio (noisy, spontaneous, or telephone speech) is likely to be worse. As a character-level CTC model it typically emits text without punctuation; see the inference sketch below.
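The snippet below is a minimal inference sketch, not taken from the original card: the repository id is a placeholder for wherever this checkpoint is published, and `sample_tr.wav` is a hypothetical input file.

```python
# Minimal inference sketch with the transformers ASR pipeline.
# "<namespace>/..." is a placeholder repo id, not the card's actual path.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="<namespace>/wav2vec2-large-xls-r-300m-turkish-colab-full",
)

# Wav2Vec2 expects 16 kHz mono audio; the pipeline resamples decodable inputs.
print(asr("sample_tr.wav")["text"])
```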
## Training and evaluation data
Per the metadata above, the model was trained and evaluated on the Turkish (`tr`) configuration of the Common Voice dataset, with the reported WER measured on the `test` split. The exact split composition used for training is not recorded in the card.
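A loading sketch under that caveat: the `train+validation`/`test` arrangement below follows the common XLS-R fine-tuning recipe and is an assumption, not something the card records.

```python
# Loads the dataset named in the card's metadata ("common_voice", config "tr").
# The split choice is an assumption; see the note above.
from datasets import load_dataset

common_voice_train = load_dataset("common_voice", "tr", split="train+validation")
common_voice_test = load_dataset("common_voice", "tr", split="test")

print(common_voice_train)
print(common_voice_test)
```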
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (mirrored in the `TrainingArguments` sketch after the list):
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
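The mapping below onto `transformers.TrainingArguments` is a reconstruction from the list above, not the original training script; `output_dir`, the evaluation settings, and `fp16` are assumptions (the 400-step cadence is inferred from the results table).

```python
# Reconstruction of the listed hyperparameters as TrainingArguments.
# Values not in the list above are marked as assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-turkish-colab-full",  # assumption
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 16 * 2 = 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=30,
    # The listed Adam betas/epsilon are the transformers defaults, so no
    # optimizer overrides are needed here.
    evaluation_strategy="steps",  # assumption: matches the 400-step log cadence
    eval_steps=400,               # assumption
    fp16=True,                    # assumption: typical for this Colab recipe
)
```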
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.9196 | 3.67 | 400 | 0.6539 | 0.6524 |
| 0.3908 | 7.34 | 800 | 0.4486 | 0.4502 |
| 0.1859 | 11.01 | 1200 | 0.4015 | 0.3799 |
| 0.1228 | 14.68 | 1600 | 0.4080 | 0.3741 |
| 0.0956 | 18.35 | 2000 | 0.3930 | 0.3468 |
| 0.0757 | 22.02 | 2400 | 0.4163 | 0.3355 |
| 0.0573 | 25.69 | 2800 | 0.3983 | 0.3115 |
| 0.0463 | 29.36 | 3200 | 0.3991 | 0.3050 |
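Here, Wer is the word error rate on the evaluation set at each logged step. As an illustration of how such a value is computed (the toy strings below are hypothetical), the pinned Datasets 1.18.3 provides `load_metric`:

```python
# WER illustration using datasets.load_metric, available in the pinned
# Datasets 1.18.3 (newer versions move metrics to the `evaluate` library).
from datasets import load_metric

wer_metric = load_metric("wer")
predictions = ["merhaba dünya"]          # hypothetical model transcription
references = ["merhaba dünya nasılsın"]  # hypothetical reference transcript
# One deleted word out of three reference words: WER ≈ 0.333
print(wer_metric.compute(predictions=predictions, references=references))
```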
### Framework versions
- Transformers 4.32.1
- PyTorch 2.0.1+cu118
- Datasets 1.18.3
- Tokenizers 0.13.3