---
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- generated_from_trainer
datasets:
- nena_speech_1_0_test
metrics:
- wer
model-index:
- name: wav2vec2-large-mms-1b-urmi-christian-nointonations
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: nena_speech_1_0_test
      type: nena_speech_1_0_test
      config: urmi (christian)
      split: test
      args: urmi (christian)
    metrics:
    - name: Wer
      type: wer
      value: 1.0
---

# wav2vec2-large-mms-1b-urmi-christian-nointonations

This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co./facebook/mms-1b-all) on the nena_speech_1_0_test dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4023
- WER: 1.0
- CER: 0.3080
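
For quick experimentation, the checkpoint can be loaded through the Transformers ASR pipeline. The sketch below is minimal and hedged: the repo id is a placeholder (this card does not state where the checkpoint is hosted) and the audio path is illustrative.

```python
# Minimal inference sketch using the Transformers ASR pipeline.
# The repo id is a placeholder; substitute the actual location of this checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/wav2vec2-large-mms-1b-urmi-christian-nointonations",
)

# File input is decoded and resampled to the model's 16 kHz rate by the pipeline.
result = asr("sample.wav")
print(result["text"])
```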

## Model description

This model is [facebook/mms-1b-all](https://huggingface.co./facebook/mms-1b-all), Meta's Massively Multilingual Speech (MMS) 1B ASR checkpoint, fine-tuned for speech recognition of the Urmi (Christian) dialect of Northeastern Neo-Aramaic (NENA). The `nointonations` suffix in the model name suggests that intonation markers were stripped from the transcriptions for this run; this is inferred from the name rather than documented.

## Intended uses & limitations

The model is intended for experimental automatic speech recognition of Urmi (Christian) NENA speech. A key limitation: the final evaluation WER is 1.0, meaning the output essentially never matches the reference word-for-word, even though the CER of 0.3080 shows the transcriptions are partially correct at the character level. Treat this checkpoint as a research artifact rather than a production ASR system.

## Training and evaluation data

Training and evaluation used the `urmi (christian)` configuration of the nena_speech_1_0_test dataset; the metrics above are reported on its test split.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5
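
The list above maps onto `transformers.TrainingArguments` roughly as follows. This is a hedged reconstruction, not the original training script; `output_dir` is assumed, and everything not listed above (data collator, model setup, logging) is omitted.

```python
# Hedged reconstruction of the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-mms-1b-urmi-christian-nointonations",  # assumed
    learning_rate=1e-3,              # learning_rate: 0.001
    per_device_train_batch_size=32,  # train_batch_size
    per_device_eval_batch_size=8,    # eval_batch_size
    seed=42,
    adam_beta1=0.9,                  # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,               # epsilon=1e-08
    lr_scheduler_type="linear",
    warmup_steps=100,                # lr_scheduler_warmup_steps
    num_train_epochs=5,              # num_epochs
)
```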

### Training results

| Training Loss | Epoch | Step | Validation Loss | WER    | CER    |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 11.9124 | 0.14 | 25 | 8.0029 | 1.0 | 0.9396 |
| 4.0619 | 0.29 | 50 | 3.1702 | 1.0 | 0.9850 |
| 2.5226 | 0.43 | 75 | 1.2813 | 1.0 | 0.3749 |
| 1.7097 | 0.57 | 100 | 1.0049 | 1.0 | 0.3041 |
| 1.4508 | 0.72 | 125 | 0.8869 | 1.0 | 0.2564 |
| 1.1873 | 0.86 | 150 | 0.8490 | 0.9984 | 0.2503 |
| 1.4657 | 1.01 | 175 | 0.8487 | 1.0 | 0.2513 |
| 1.0877 | 1.15 | 200 | 0.7699 | 0.9984 | 0.2352 |
| 1.3957 | 1.29 | 225 | 0.7402 | 0.9984 | 0.2271 |
| 1.1216 | 1.44 | 250 | 0.7486 | 0.9984 | 0.2228 |
| 1.2285 | 1.58 | 275 | 0.7122 | 0.9984 | 0.2191 |
| 1.24 | 1.72 | 300 | 0.6914 | 0.9984 | 0.2208 |
| 0.9623 | 1.87 | 325 | 0.6688 | 0.9984 | 0.2132 |
| 1.2324 | 2.01 | 350 | 0.6708 | 0.9984 | 0.2117 |
| 0.9558 | 2.16 | 375 | 0.6614 | 0.9984 | 0.2071 |
| 1.2007 | 2.3 | 400 | 0.7159 | 0.9984 | 0.2183 |
| 1.0645 | 2.44 | 425 | 0.7265 | 0.9984 | 0.2104 |
| 1.1051 | 2.59 | 450 | 0.8289 | 1.0 | 0.2172 |
| 1.6129 | 2.73 | 475 | 1.5108 | 1.0 | 0.3514 |
| 2.0501 | 2.87 | 500 | 1.6020 | 1.0 | 0.4407 |
| 2.0458 | 3.02 | 525 | 1.4441 | 1.0 | 0.4181 |
| 1.621 | 3.16 | 550 | 1.2917 | 1.0 | 0.3545 |
| 1.7942 | 3.3 | 575 | 1.4151 | 0.9984 | 0.2664 |
| 1.6505 | 3.45 | 600 | 1.2550 | 1.0 | 0.3075 |
| 1.7165 | 3.59 | 625 | 1.3912 | 1.0 | 0.3056 |
| 1.8114 | 3.74 | 650 | 1.2554 | 1.0 | 0.3100 |
| 1.6019 | 3.88 | 675 | 1.5515 | 1.0 | 0.2889 |
| 2.0484 | 4.02 | 700 | 1.3666 | 1.0 | 0.2826 |
| 1.7132 | 4.17 | 725 | 1.3629 | 1.0 | 0.3414 |
| 1.8599 | 4.31 | 750 | 1.3831 | 1.0 | 0.3355 |
| 1.8653 | 4.45 | 775 | 1.4025 | 1.0 | 0.3344 |
| 1.8246 | 4.6 | 800 | 1.4007 | 1.0 | 0.3110 |
| 1.9346 | 4.74 | 825 | 1.4022 | 1.0 | 0.3082 |
| 1.732 | 4.89 | 850 | 1.4023 | 1.0 | 0.3080 |
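
The WER and CER columns above (and the summary metrics at the top of this card) can be computed with the `evaluate` library. A minimal sketch, assuming plain-text predictions and references from the test split:

```python
# Sketch: computing WER and CER with the `evaluate` library.
# `predictions` and `references` are placeholders for the model's output and
# the gold transcripts of the test split.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["model transcription ..."]
references = ["gold transcription ..."]

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```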

### Framework versions

- Transformers 4.34.1
- PyTorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1