---
license: apache-2.0
tags:
- generated_from_trainer
base_model: facebook/wav2vec2-xls-r-300m
datasets:
- common_voice_15_0
metrics:
- wer
model-index:
- name: wav2vec2-xls-r-300m-br
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: common_voice_15_0
      type: common_voice_15_0
      config: br
      split: None
      args: br
    metrics:
    - type: wer
      value: 49.79811574697174
      name: Wer
---

# wav2vec2-xls-r-300m-br

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co./facebook/wav2vec2-xls-r-300m) on the Breton (`br`) configuration of the common_voice_15_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8887
- WER: 49.7981%
- CER: 17.3877%
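
A minimal inference sketch follows. The repository id `wav2vec2-xls-r-300m-br` is taken from the model name above and may differ from the actual Hub path; the code assumes the checkpoint ships the standard `Wav2Vec2ForCTC` weights and processor files.

```python
# Inference sketch: greedy CTC decoding of a 16 kHz mono waveform.
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "wav2vec2-xls-r-300m-br"  # assumption: replace with the real Hub repo id
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

def transcribe(speech, sampling_rate=16_000):
    """`speech` is a 1-D float waveform, e.g. loaded with torchaudio or librosa."""
    inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(ids)[0]
```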

## Model description

wav2vec2-xls-r-300m-br is an automatic-speech-recognition model for Breton. It was obtained by adding a CTC head to [XLS-R 300M](https://huggingface.co./facebook/wav2vec2-xls-r-300m), a multilingual wav2vec 2.0 encoder pretrained by Meta AI on unlabeled speech in 128 languages, and fine-tuning it on labeled Breton speech from Common Voice 15.0.

## Intended uses & limitations

The model transcribes Breton speech; input audio must be mono and sampled at 16 kHz, as expected by wav2vec 2.0 feature extractors. With a word error rate of about 49.8% and a character error rate of about 17.4% on the evaluation set, it is best regarded as a research baseline rather than a production-quality transcription system.

## Training and evaluation data

The model was fine-tuned and evaluated on the Breton (`br`) configuration of the Common Voice 15.0 dataset; a loading sketch is shown below.
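
The exact Hub id of the dataset is not recorded in the card; the sketch below assumes the usual `mozilla-foundation/common_voice_15_0` repository (note that Common Voice datasets on the Hub are gated and require accepting the terms while logged in).

```python
# Sketch: load the Breton subset of Common Voice 15.0 and resample to 16 kHz.
from datasets import Audio, load_dataset

cv_br = load_dataset("mozilla-foundation/common_voice_15_0", "br", split="train")
cv_br = cv_br.cast_column("audio", Audio(sampling_rate=16_000))  # wav2vec 2.0 expects 16 kHz
print(cv_br[0]["sentence"])  # reference transcription of the first clip
```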

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
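
These values map onto `transformers.TrainingArguments` roughly as follows; this is a reconstruction from the list above, not the original training script (the Adam betas and epsilon listed are the library defaults).

```python
# Sketch: TrainingArguments reconstructed from the reported hyperparameters.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-300m-br",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 8 * 2 = 16
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=40,
    fp16=True,                      # "Native AMP" mixed precision
)
```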

### Training results

| Training Loss | Epoch | Step  | Validation Loss | WER (%) | CER (%) |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 5.1153        | 2.18  | 1000  | 2.8854          | 100.0   | 100.0   |
| 1.4117        | 4.36  | 2000  | 0.9161          | 71.2786 | 25.3180 |
| 0.7888        | 6.54  | 3000  | 0.7753          | 62.7456 | 22.0767 |
| 0.6316        | 8.71  | 4000  | 0.7550          | 58.1786 | 20.5383 |
| 0.5434        | 10.89 | 5000  | 0.7508          | 56.5096 | 20.1168 |
| 0.4672        | 13.07 | 6000  | 0.7844          | 54.9125 | 19.3835 |
| 0.4237        | 15.25 | 7000  | 0.7786          | 53.2705 | 18.5765 |
| 0.3899        | 17.43 | 8000  | 0.8050          | 53.0552 | 18.6105 |
| 0.3607        | 19.61 | 9000  | 0.8280          | 51.9874 | 18.3024 |
| 0.3355        | 21.79 | 10000 | 0.7967          | 51.5388 | 17.9811 |
| 0.3098        | 23.97 | 11000 | 0.8296          | 51.2876 | 17.9547 |
| 0.2937        | 26.14 | 12000 | 0.8544          | 50.9915 | 17.7827 |
| 0.2793        | 28.32 | 13000 | 0.8909          | 51.5478 | 18.1286 |
| 0.2641        | 30.5  | 14000 | 0.8740          | 50.4800 | 17.6561 |
| 0.2552        | 32.68 | 15000 | 0.8832          | 49.9776 | 17.4463 |
| 0.2467        | 34.86 | 16000 | 0.8753          | 50.3096 | 17.4765 |
| 0.2378        | 37.04 | 17000 | 0.8895          | 49.8789 | 17.3952 |
| 0.2337        | 39.22 | 18000 | 0.8887          | 49.7981 | 17.3877 |
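
The WER and CER figures above can be recomputed from model transcriptions with the `evaluate` library; a minimal sketch (the strings shown are illustrative, not dataset content):

```python
# Sketch: computing WER/CER with the `evaluate` library (both metrics need `jiwer`).
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["demat dit"]    # hypothetical model outputs
references = ["demat deoc'h"]  # hypothetical ground-truth sentences

print("WER:", 100 * wer_metric.compute(predictions=predictions, references=references))
print("CER:", 100 * cer_metric.compute(predictions=predictions, references=references))
```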


### Framework versions

- Transformers 4.39.1
- PyTorch 2.0.1+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2