xezpeleta committed
Commit 1aafed0 · verified · 1 Parent(s): 844822a

Model save

Files changed (1):
  1. README.md +47 -74
README.md CHANGED
```diff
@@ -3,50 +3,23 @@ library_name: transformers
 license: apache-2.0
 base_model: openai/whisper-large-v3
 tags:
-- whisper-event
 - generated_from_trainer
-datasets:
-- asierhv/composite_corpus_eu_v2.1
 metrics:
 - wer
 model-index:
-- name: Whisper Large Basque
-  results:
-  - task:
-      name: Automatic Speech Recognition
-      type: automatic-speech-recognition
-    dataset:
-      name: Common Voice 17.0
-      type: mozilla-foundation/common_voice_17_0
-      config: eu
-      split: test
-      args:
-        language: eu
-    metrics:
-    - name: Test WER
-      type: wer
-      value: 4.47
-  - task:
-      name: Automatic Speech Recognition
-      type: automatic-speech-recognition
-    dataset:
-      name: asierhv/composite_corpus_eu_v2.1
-      type: asierhv/composite_corpus_eu_v2.1
-    metrics:
-    - name: Wer
-      type: wer
-      value: 7.100121529400767
+- name: openai/whisper-large-v3
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# Whisper Large Basque
+# openai/whisper-large-v3
 
-This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the asierhv/composite_corpus_eu_v2.1 dataset.
+This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.1407
-- Wer: 7.1001
+- Loss: 0.1549
+- Wer: 6.5443
 
 ## Model description
 
@@ -79,51 +52,51 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Wer |
 |:-------------:|:-----:|:-----:|:---------------:|:-------:|
-| 0.2854 | 0.05 | 500 | 0.3763 | 24.9836 |
-| 0.1425 | 0.1 | 1000 | 0.3326 | 19.8654 |
-| 0.2196 | 0.15 | 1500 | 0.2802 | 16.2475 |
-| 0.2338 | 0.2 | 2000 | 0.2536 | 14.6116 |
-| 0.1383 | 0.25 | 2500 | 0.2451 | 12.8961 |
-| 0.0848 | 0.3 | 3000 | 0.2280 | 12.2464 |
-| 0.0854 | 0.35 | 3500 | 0.2152 | 11.4144 |
-| 0.1304 | 0.4 | 4000 | 0.2097 | 11.1433 |
-| 0.1328 | 0.45 | 4500 | 0.2055 | 10.6011 |
-| 0.0737 | 0.5 | 5000 | 0.2079 | 10.5357 |
-| 0.0804 | 0.55 | 5500 | 0.2133 | 10.1150 |
-| 0.0964 | 0.6 | 6000 | 0.1988 | 9.4606 |
-| 0.0811 | 0.65 | 6500 | 0.2019 | 9.4933 |
-| 0.0677 | 0.7 | 7000 | 0.1916 | 8.9231 |
-| 0.1114 | 0.75 | 7500 | 0.2029 | 9.3250 |
-| 0.1142 | 0.8 | 8000 | 0.1895 | 8.9978 |
-| 0.0466 | 0.85 | 8500 | 0.1936 | 8.8576 |
-| 0.0664 | 0.9 | 9000 | 0.1876 | 8.9698 |
-| 0.0759 | 0.95 | 9500 | 0.1827 | 8.8202 |
-| 0.0555 | 1.0 | 10000 | 0.1834 | 8.6426 |
-| 0.0603 | 0.525 | 10500 | 0.1872 | 9.3344 |
-| 0.0727 | 0.55 | 11000 | 0.1838 | 9.3624 |
-| 0.0523 | 0.575 | 11500 | 0.2022 | 8.8903 |
-| 0.0719 | 0.6 | 12000 | 0.1840 | 9.0072 |
-| 0.0505 | 0.625 | 12500 | 0.1860 | 8.5631 |
-| 0.0678 | 0.65 | 13000 | 0.1852 | 8.1238 |
-| 0.0586 | 0.675 | 13500 | 0.1888 | 8.7641 |
-| 0.0818 | 0.7 | 14000 | 0.1822 | 8.2547 |
-| 0.0583 | 0.725 | 14500 | 0.1349 | 7.8760 |
-| 0.0516 | 0.75 | 15000 | 0.1432 | 7.8386 |
-| 0.0721 | 0.775 | 15500 | 0.1439 | 7.7966 |
-| 0.0697 | 0.8 | 16000 | 0.1345 | 7.6470 |
-| 0.0459 | 0.825 | 16500 | 0.1381 | 7.4881 |
-| 0.0533 | 0.85 | 17000 | 0.1422 | 7.2871 |
-| 0.0449 | 0.875 | 17500 | 0.1426 | 7.7218 |
-| 0.0424 | 0.9 | 18000 | 0.1417 | 7.4367 |
-| 0.0714 | 0.925 | 18500 | 0.1337 | 6.9973 |
-| 0.0573 | 0.95 | 19000 | 0.1432 | 7.6657 |
-| 0.0441 | 0.975 | 19500 | 0.1408 | 7.1001 |
-| 0.0453 | 1.0 | 20000 | 0.1407 | 7.1001 |
+| 0.2854 | 0.025 | 500 | 0.4194 | 25.8898 |
+| 0.1425 | 0.05 | 1000 | 0.3923 | 20.5071 |
+| 0.2199 | 0.075 | 1500 | 0.3291 | 17.4785 |
+| 0.2343 | 0.1 | 2000 | 0.2861 | 14.1314 |
+| 0.1391 | 0.125 | 2500 | 0.2906 | 13.3134 |
+| 0.0853 | 0.15 | 3000 | 0.2688 | 12.0457 |
+| 0.0866 | 0.175 | 3500 | 0.2575 | 11.4712 |
+| 0.1311 | 0.2 | 4000 | 0.2472 | 12.4828 |
+| 0.1338 | 0.225 | 4500 | 0.2437 | 10.9904 |
+| 0.0748 | 0.25 | 5000 | 0.2557 | 10.7094 |
+| 0.0821 | 0.275 | 5500 | 0.2597 | 10.2473 |
+| 0.0988 | 0.3 | 6000 | 0.2407 | 9.4480 |
+| 0.0824 | 0.325 | 6500 | 0.2425 | 9.2232 |
+| 0.0678 | 0.35 | 7000 | 0.2301 | 9.1358 |
+| 0.1124 | 0.375 | 7500 | 0.2559 | 9.3231 |
+| 0.1122 | 0.4 | 8000 | 0.2240 | 8.5238 |
+| 0.0477 | 0.425 | 8500 | 0.2379 | 8.3177 |
+| 0.0638 | 0.45 | 9000 | 0.2354 | 8.9484 |
+| 0.0735 | 0.475 | 9500 | 0.2231 | 8.3989 |
+| 0.0548 | 0.5 | 10000 | 0.2330 | 8.5737 |
+| 0.0557 | 0.525 | 10500 | 0.2133 | 8.3614 |
+| 0.0626 | 0.55 | 11000 | 0.2084 | 8.2865 |
+| 0.0472 | 0.575 | 11500 | 0.2331 | 8.0742 |
+| 0.0636 | 0.6 | 12000 | 0.2118 | 7.9618 |
+| 0.0466 | 0.625 | 12500 | 0.2126 | 7.4685 |
+| 0.0604 | 0.65 | 13000 | 0.2160 | 7.6558 |
+| 0.0544 | 0.675 | 13500 | 0.2187 | 7.9993 |
+| 0.07 | 0.7 | 14000 | 0.2117 | 7.4372 |
+| 0.0534 | 0.725 | 14500 | 0.1381 | 7.0438 |
+| 0.046 | 0.75 | 15000 | 0.1496 | 7.0813 |
+| 0.066 | 0.775 | 15500 | 0.1525 | 7.0001 |
+| 0.0632 | 0.8 | 16000 | 0.1408 | 6.6817 |
+| 0.0437 | 0.825 | 16500 | 0.1475 | 6.5942 |
+| 0.0478 | 0.85 | 17000 | 0.1573 | 6.7941 |
+| 0.0418 | 0.875 | 17500 | 0.1565 | 6.6504 |
+| 0.0382 | 0.9 | 18000 | 0.1559 | 6.5630 |
+| 0.0658 | 0.925 | 18500 | 0.1452 | 6.5630 |
+| 0.0531 | 0.95 | 19000 | 0.1576 | 6.6629 |
+| 0.0416 | 0.975 | 19500 | 0.1550 | 6.5443 |
+| 0.0435 | 1.0 | 20000 | 0.1549 | 6.5443 |
 
 
 ### Framework versions
 
 - Transformers 4.49.0.dev0
 - Pytorch 2.6.0+cu124
-- Datasets 3.2.1.dev0
+- Datasets 3.3.1.dev0
 - Tokenizers 0.21.0
```
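
The card tracked in this diff describes a Whisper fine-tune but, after this commit, no longer carries a usage snippet. As context, here is a minimal inference sketch using the `transformers` automatic-speech-recognition pipeline; the repository id and the audio file name below are placeholders, not values taken from this commit.

```python
# Minimal sketch: transcribing Basque audio with a Whisper fine-tune.
# "xezpeleta/whisper-large-eu" and "sample_eu.wav" are placeholder names;
# substitute the actual Hub repository id of this checkpoint and a real file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="xezpeleta/whisper-large-eu",  # placeholder repo id
    chunk_length_s=30,  # split long-form audio into 30 s windows
)

result = asr(
    "sample_eu.wav",
    generate_kwargs={"language": "basque", "task": "transcribe"},
)
print(result["text"])
```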
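The WER values in the table and in the evaluation summary (6.5443 after this commit) are word error rates reported as percentages. Below is a small sketch of how such a figure is conventionally computed with the `evaluate` library; the reference and prediction strings are made-up examples, not data from this model.

```python
# Sketch: computing WER (%) in the way trainer-generated cards report it.
import evaluate

wer_metric = evaluate.load("wer")

# Hypothetical reference transcripts and model predictions.
references = ["kaixo mundua", "eguraldi ona dago gaur"]
predictions = ["kaixo mundua", "eguraldi ona dago"]

# evaluate returns a ratio of word edits; cards report it multiplied by 100.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}")
```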