# Icelandic-finetuned-no-data-augmentation
This model is a fine-tuned version of carlosdanielhernandezmena/wav2vec2-large-xlsr-53-icelandic-ep10-1000h on an unspecified dataset. It achieves the following results on the evaluation set (a minimal usage sketch follows the results):
- Loss: 0.1262
- Wer: 0.1544
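
Below is a hedged usage sketch showing how a checkpoint like this one is typically loaded for Icelandic speech recognition with the `transformers` ASR pipeline. The repository ID and audio file name are placeholders, not confirmed paths from this repository.

```python
# Minimal inference sketch, assuming the checkpoint follows the standard
# wav2vec2 CTC layout of the base model. The model ID and audio path are
# placeholders.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/Icelandic-finetuned-no-data-augmentation",  # placeholder repo ID
)

# wav2vec2-large-xlsr-53 models expect 16 kHz mono audio.
result = asr("icelandic_sample.wav")
print(result["text"])
```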
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
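
A hedged `TrainingArguments` sketch mirroring the values above. Argument names follow Transformers 4.35; the model, dataset, and data-collator wiring are omitted, and `output_dir`, `eval_steps`, and `logging_steps` are assumptions inferred from the results table rather than confirmed settings.

```python
# Configuration sketch matching the listed hyperparameters; output_dir is a
# placeholder and the evaluation/logging cadence is inferred from the table.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="icelandic-finetuned-no-data-augmentation",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size of 16
    num_train_epochs=10,
    lr_scheduler_type="linear",
    warmup_steps=500,
    seed=42,
    evaluation_strategy="steps",     # assumption: table shows eval every 10 steps
    eval_steps=10,
    logging_steps=50,                # assumption: training loss logged every 50 steps
)
```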
### Training results
Training Loss | Epoch | Step | Validation Loss | Wer |
---|---|---|---|---|
No log | 0.2 | 10 | 0.2849 | 0.1667 |
No log | 0.4 | 20 | 0.3085 | 0.1611 |
No log | 0.6 | 30 | 0.3264 | 0.1600 |
No log | 0.8 | 40 | 0.3149 | 0.1577 |
0.3456 | 1.0 | 50 | 0.3221 | 0.1588 |
0.3456 | 1.2 | 60 | 0.3183 | 0.1577 |
0.3456 | 1.4 | 70 | 0.3133 | 0.1510 |
0.3456 | 1.6 | 80 | 0.3069 | 0.1521 |
0.3456 | 1.8 | 90 | 0.2970 | 0.1521 |
0.2689 | 2.0 | 100 | 0.2782 | 0.1477 |
0.2689 | 2.2 | 110 | 0.2242 | 0.1488 |
0.2689 | 2.4 | 120 | 0.2257 | 0.1454 |
0.2689 | 2.6 | 130 | 0.2722 | 0.1521 |
0.2689 | 2.8 | 140 | 0.2564 | 0.1488 |
0.2312 | 3.0 | 150 | 0.1811 | 0.1521 |
0.2312 | 3.2 | 160 | 0.1762 | 0.1454 |
0.2312 | 3.4 | 170 | 0.2154 | 0.1421 |
0.2312 | 3.6 | 180 | 0.1873 | 0.1465 |
0.2312 | 3.8 | 190 | 0.2015 | 0.1465 |
0.2153 | 4.0 | 200 | 0.2402 | 0.1499 |
0.2153 | 4.2 | 210 | 0.2366 | 0.1454 |
0.2153 | 4.4 | 220 | 0.1890 | 0.1477 |
0.2153 | 4.6 | 230 | 0.1897 | 0.1499 |
0.2153 | 4.8 | 240 | 0.1777 | 0.1488 |
0.1986 | 5.0 | 250 | 0.1659 | 0.1555 |
0.1986 | 5.2 | 260 | 0.1936 | 0.1588 |
0.1986 | 5.4 | 270 | 0.2044 | 0.1532 |
0.1986 | 5.6 | 280 | 0.1958 | 0.1555 |
0.1986 | 5.8 | 290 | 0.1760 | 0.1521 |
0.1842 | 6.0 | 300 | 0.2056 | 0.1600 |
0.1842 | 6.2 | 310 | 0.1649 | 0.1532 |
0.1842 | 6.4 | 320 | 0.2269 | 0.1532 |
0.1842 | 6.6 | 330 | 0.1572 | 0.1488 |
0.1842 | 6.8 | 340 | 0.1890 | 0.1600 |
0.181 | 7.0 | 350 | 0.1757 | 0.1700 |
0.181 | 7.2 | 360 | 0.2322 | 0.1644 |
0.181 | 7.4 | 370 | 0.2644 | 0.1600 |
0.181 | 7.6 | 380 | 0.2047 | 0.1555 |
0.181 | 7.8 | 390 | 0.2406 | 0.1678 |
0.1751 | 8.0 | 400 | 0.2820 | 0.1678 |
0.1751 | 8.2 | 410 | 0.2965 | 0.1655 |
0.1751 | 8.4 | 420 | 0.2841 | 0.1667 |
0.1751 | 8.6 | 430 | 0.2527 | 0.1734 |
0.1751 | 8.8 | 440 | 0.5464 | 0.1700 |
0.1836 | 9.0 | 450 | 0.2185 | 0.1622 |
0.1836 | 9.2 | 460 | 0.2129 | 0.1857 |
0.1836 | 9.4 | 470 | 0.3367 | 0.1779 |
0.1836 | 9.6 | 480 | 0.1457 | 0.1644 |
0.1836 | 9.8 | 490 | 0.1320 | 0.1667 |
0.2141 | 10.0 | 500 | 0.1262 | 0.1544 |
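
The Wer column above is the word error rate on the evaluation set. A minimal sketch of how such a score can be computed, assuming the `evaluate` library (the transcripts are illustrative placeholders, not taken from the actual evaluation data):

```python
# WER computation sketch; the reference and prediction strings are
# hypothetical examples, not outputs of this model.
import evaluate

wer_metric = evaluate.load("wer")

references = ["hún gekk heim í gær"]   # hypothetical ground-truth transcript
predictions = ["hún gekk heim á gær"]  # hypothetical model output (one substitution)

wer = wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.2f}")  # 1 error over 5 words -> 0.20
```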
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0