Update README.md
README.md CHANGED
@@ -1,32 +1,36 @@
 ---
-license:
 tags:
 - automatic-speech-recognition
 - Finnish parliament data slow samples 300h
-- generated_from_trainer
 model-index:
-- name:
 results: []
 ---

-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->

-#
-
-This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the FINNISH PARLIAMENT DATA SLOW SAMPLES 300H - FI-FI dataset.
-It achieves the following results on the evaluation set:
 - Loss: 196.9006
 - Cer: 0.0178
 - Wer: 0.0592

 ## Model description

-More information needed

 ## Intended uses & limitations

-More information needed

 ## Training and evaluation data

@@ -70,8 +74,8 @@ The following hyperparameters were used during training:
 | 199.4458 | 7.96 | 8500 | 0.0183 | 196.9986 | 0.0606 |
 | 199.1502 | 8.43 | 9000 | 0.0178 | 197.0260 | 0.0590 |
 | 199.4437 | 8.9 | 9500 | 0.0180 | 196.9412 | 0.0595 |
-| 198.8669 | 9.36 | 10000 |
-| 199.1329 | 9.83 | 10500 |


 ### Framework versions
@@ -79,4 +83,4 @@ The following hyperparameters were used during training:
 - Transformers 4.18.0
 - Pytorch 1.12.0.dev20220305
 - Datasets 1.18.4.dev0
-- Tokenizers 0.11.6
 ---
+license: cc-by-nc-sa-4.0
 tags:
 - automatic-speech-recognition
 - Finnish parliament data slow samples 300h
 model-index:
+- name: CaptainA_XLS-R_entropy-10_v0
 results: []
+language:
+- fi
+pipeline_tag: automatic-speech-recognition
 ---

+# CaptainA_XLS-R_entropy-10_v0

+This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Finnish Parliament Corpus (210 h of slow and clean samples).
+It achieves the following results on the evaluation set (70 h of slow and clean samples from the same corpus):
 - Loss: 196.9006
 - Cer: 0.0178
 - Wer: 0.0592

 ## Model description

+This model is used in the CaptainA app.

 ## Intended uses & limitations

+The model was fine-tuned with entropy regularization to improve generalization for the Finnish L2 mispronunciation detection and diagnosis (MDD) task.
+Although the model was trained for MDD for L2 Finnish speakers, it was not fine-tuned on any L2 data, as no suitable corpus exists for Finnish L2 MDD.
+
+Therefore, it is important to note that this model **is NOT intended for Finnish L2 ASR** and will need further improvement even for the MDD task (hence model version 0).
+
+Because the model was fine-tuned on the Finnish Parliament Corpus, it inherits that corpus's biases. These biases are especially notable given that the model is intended for L2 Finnish speakers. More details can be found in the Master's thesis: [A Mobile App For Practicing Finnish Pronunciation Using Wav2vec 2.0](http://urn.fi/URN:NBN:fi:aalto-202305213302)

 ## Training and evaluation data

 | 199.4458 | 7.96 | 8500 | 0.0183 | 196.9986 | 0.0606 |
 | 199.1502 | 8.43 | 9000 | 0.0178 | 197.0260 | 0.0590 |
 | 199.4437 | 8.9 | 9500 | 0.0180 | 196.9412 | 0.0595 |
+| 198.8669 | 9.36 | 10000 | 0.0180 | 196.8834 | 0.0600 |
+| 199.1329 | 9.83 | 10500 | 0.0178 | 196.9176 | 0.0591 |


 ### Framework versions
 - Transformers 4.18.0
 - Pytorch 1.12.0.dev20220305
 - Datasets 1.18.4.dev0
+- Tokenizers 0.11.6
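The entropy regularization the updated card mentions under "Intended uses & limitations" (echoed by the "entropy-10" suffix in the model name) can be illustrated with a small sketch. This is one common confidence-penalty form that discourages over-confident per-frame posteriors; the weight `alpha=10.0` and the sign convention are assumptions for illustration, not necessarily the thesis's exact formulation.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the vocabulary axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mean_frame_entropy(logits):
    """Mean per-frame entropy (in nats) of the model's output distribution."""
    p = softmax(logits)
    return float(-(p * np.log(p + 1e-12)).sum(axis=-1).mean())

def regularized_loss(ctc_loss, logits, alpha=10.0):
    """Confidence-penalty form: subtracting the entropy term rewards
    less peaked frame posteriors. `alpha` is a hypothetical weight."""
    return ctc_loss - alpha * mean_frame_entropy(logits)

# Uniform logits over a 32-symbol vocabulary give the maximum entropy, ln(32).
logits = np.zeros((2, 50, 32))  # (batch, frames, vocab)
print(round(mean_frame_entropy(logits), 4))  # 3.4657, i.e. ln(32)
```

In practice the entropy term would be computed on the wav2vec 2.0 output logits inside the training step and combined with the CTC loss before backpropagation.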