gweltou committed on
Commit 3e128f6
1 Parent(s): 80bdb8c

Update README.md

Files changed (1):
  1. README.md +18 -38

README.md CHANGED
@@ -21,39 +21,42 @@ model-index:
      args: br
    metrics:
    - type: wer
-   value: 49.79811574697174
-   name: Wer
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
  # wav2vec2-xls-r-300m-br

- This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_15_0 dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.8887
- - Wer: 49.7981
- - Cer: 17.3877

  ## Model description

- More information needed

  ## Intended uses & limitations

- More information needed

  ## Training and evaluation data

- More information needed

  ## Training procedure

  ### Training hyperparameters

  The following hyperparameters were used during training:
- - learning_rate: 5e-05
  - train_batch_size: 8
  - eval_batch_size: 8
  - seed: 42
@@ -65,33 +68,10 @@ The following hyperparameters were used during training:
  - num_epochs: 40
  - mixed_precision_training: Native AMP

- ### Training results
-
- | Training Loss | Epoch | Step  | Validation Loss | Wer     | Cer     |
- |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
- | 5.1153        | 2.18  | 1000  | 2.8854          | 100.0   | 100.0   |
- | 1.4117        | 4.36  | 2000  | 0.9161          | 71.2786 | 25.3180 |
- | 0.7888        | 6.54  | 3000  | 0.7753          | 62.7456 | 22.0767 |
- | 0.6316        | 8.71  | 4000  | 0.7550          | 58.1786 | 20.5383 |
- | 0.5434        | 10.89 | 5000  | 0.7508          | 56.5096 | 20.1168 |
- | 0.4672        | 13.07 | 6000  | 0.7844          | 54.9125 | 19.3835 |
- | 0.4237        | 15.25 | 7000  | 0.7786          | 53.2705 | 18.5765 |
- | 0.3899        | 17.43 | 8000  | 0.8050          | 53.0552 | 18.6105 |
- | 0.3607        | 19.61 | 9000  | 0.8280          | 51.9874 | 18.3024 |
- | 0.3355        | 21.79 | 10000 | 0.7967          | 51.5388 | 17.9811 |
- | 0.3098        | 23.97 | 11000 | 0.8296          | 51.2876 | 17.9547 |
- | 0.2937        | 26.14 | 12000 | 0.8544          | 50.9915 | 17.7827 |
- | 0.2793        | 28.32 | 13000 | 0.8909          | 51.5478 | 18.1286 |
- | 0.2641        | 30.5  | 14000 | 0.8740          | 50.4800 | 17.6561 |
- | 0.2552        | 32.68 | 15000 | 0.8832          | 49.9776 | 17.4463 |
- | 0.2467        | 34.86 | 16000 | 0.8753          | 50.3096 | 17.4765 |
- | 0.2378        | 37.04 | 17000 | 0.8895          | 49.8789 | 17.3952 |
- | 0.2337        | 39.22 | 18000 | 0.8887          | 49.7981 | 17.3877 |
-

  ### Framework versions

  - Transformers 4.39.1
  - Pytorch 2.0.1+cu117
  - Datasets 2.18.0
- - Tokenizers 0.15.2
 
      args: br
    metrics:
    - type: wer
+   value: 41
+   name: WER
+ - type: cer
+   value: 14.7
+   name: CER
+ language:
+ - br
+ pipeline_tag: automatic-speech-recognition
  ---

  # wav2vec2-xls-r-300m-br

+ This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Mozilla Common Voice 15 (MCV15) Breton dataset and the [Roadennoù](https://github.com/gweltou/roadennou) dataset. It achieves the following results on the MCV15-br test set:
+ - Wer: 41.0
+ - Cer: 14.7

  ## Model description

+ This model was trained to assess the performance of wav2vec2-xls-r-300m when fine-tuned for Breton ASR.

  ## Intended uses & limitations

+ This is a research model; its use in production is not recommended.

  ## Training and evaluation data

+ The training set consists of the MCV15-br train split and 90% of the Roadennoù dataset.
+ The validation set consists of the MCV15-br validation split and the remaining 10% of the Roadennoù dataset.
+ The final test set is the MCV15-br test split.
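
As an aside, a 90/10 split like the Roadennoù one above can be reproduced deterministically with a seeded shuffle. This is an illustrative sketch only, not the author's actual split code; reusing seed 42 (the training seed listed below) for the split is an assumption.

```python
import random

def split_dataset(items, train_frac=0.9, seed=42):
    """Deterministically split a list of utterances into train/validation.

    The 90/10 ratio matches the Roadennoù split described above; using
    seed 42 for the split itself is an assumption for illustration.
    """
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# Example with placeholder utterance IDs:
utterances = [f"utt_{i:04d}" for i in range(100)]
train, valid = split_dataset(utterances)
print(len(train), len(valid))  # 90 10
```

Because the shuffle is driven by a fixed-seed `random.Random` instance, the same train/validation partition is obtained on every run.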
 
  ## Training procedure

  ### Training hyperparameters

  The following hyperparameters were used during training:
+ - learning_rate: 6e-05
  - train_batch_size: 8
  - eval_batch_size: 8
  - seed: 42

  - num_epochs: 40
  - mixed_precision_training: Native AMP
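
For readers reproducing the setup, the listed hyperparameters map onto a `transformers` `TrainingArguments` object roughly as follows. This is a hedged configuration sketch: the output directory is hypothetical, and any warmup or scheduler settings hidden in the collapsed part of the diff are omitted.

```python
from transformers import TrainingArguments

# Sketch only: maps the hyperparameters listed above onto TrainingArguments.
# Settings not visible in this diff (warmup, scheduler, ...) are omitted.
training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-300m-br",  # hypothetical output path
    learning_rate=6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=40,
    fp16=True,  # "Native AMP" mixed-precision training
)
```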
  ### Framework versions

  - Transformers 4.39.1
  - Pytorch 2.0.1+cu117
  - Datasets 2.18.0
+ - Tokenizers 0.15.2
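
For reference, the Wer and Cer figures in this card are word- and character-level edit-distance error rates expressed as percentages. The card does not say which scorer produced them; below is a generic, self-contained sketch of the metric (the Breton example strings are illustrative only), not the evaluation pipeline actually used.

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences
    (insertions, deletions, substitutions)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def error_rate(references, hypotheses, unit="word"):
    """WER (unit='word') or CER (unit='char') over a corpus, as a percentage."""
    split = str.split if unit == "word" else list
    errors = total = 0
    for ref, hyp in zip(references, hypotheses):
        r, h = split(ref), split(hyp)
        errors += edit_distance(r, h)
        total += len(r)
    return 100.0 * errors / total

# Illustrative Breton strings, not taken from the test set:
refs = ["demat deoc'h", "mont a ra"]
hyps = ["demat deoc'h", "mont a fall"]
print(error_rate(refs, hyps, "word"))  # 20.0
```

In practice, libraries such as `jiwer` or the `evaluate` package implement the same metric with additional text normalization options.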