Max200293 committed on
Commit 882617f
1 Parent(s): 42972fd

wav2vec2-classic-300m-norwegian-colab

README.md ADDED
@@ -0,0 +1,89 @@
---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
datasets:
- nb_samtale
metrics:
- wer
model-index:
- name: wav2vec2-classic-300m-norwegian-colab
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: nb_samtale
      type: nb_samtale
      config: annotations
      split: test
      args: annotations
    metrics:
    - name: Wer
      type: wer
      value: 0.7528477035956058
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# wav2vec2-classic-300m-norwegian-colab

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the nb_samtale dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2190
- Wer: 0.7528

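The checkpoint can be exercised with the `transformers` ASR pipeline. A minimal sketch, assuming the model is published as `Max200293/wav2vec2-classic-300m-norwegian-colab` and that `sample.wav` is a 16 kHz mono recording (both names are assumptions, not taken from this card):

```python
# Minimal inference sketch; the repo id and file name below are assumed.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Max200293/wav2vec2-classic-300m-norwegian-colab",  # assumed repo id
)

# Decoding a file path needs ffmpeg; a 16 kHz numpy array also works as input.
print(asr("sample.wav")["text"])
```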
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

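The card metadata above does name the evaluation data: the `nb_samtale` dataset, config `annotations`, evaluated on the `test` split. A loading sketch, assuming the bare id resolves on the Hub (the dataset may be published under a namespaced repo id, in which case the first argument needs adjusting):

```python
# Hedged sketch: load the evaluation split named in the card metadata.
from datasets import load_dataset

eval_data = load_dataset("nb_samtale", "annotations", split="test")  # id as written in the card
print(eval_data)
```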
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP

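For orientation, a sketch of how these values map onto `TrainingArguments`; `output_dir` is an assumed name, and the optimizer listed above matches the Trainer default, so it is not set explicitly:

```python
# Hedged sketch: TrainingArguments mirroring the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-classic-300m-norwegian-colab",  # assumed
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # 16 * 2 = total train batch size of 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed precision
)
```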
### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8141        | 2.57  | 400  | 3.0571          | 1.0    |
| 3.0777        | 5.14  | 800  | 2.9987          | 1.0    |
| 2.7311        | 7.72  | 1200 | 2.5705          | 0.9829 |
| 2.1302        | 10.29 | 1600 | 1.8399          | 0.9225 |
| 1.6827        | 12.86 | 2000 | 1.6372          | 0.8559 |
| 1.312         | 15.43 | 2400 | 1.8908          | 0.9172 |
| 0.9979        | 18.01 | 2800 | 1.7908          | 0.7890 |
| 0.7456        | 20.58 | 3200 | 1.8110          | 0.7720 |
| 0.592         | 23.15 | 3600 | 2.0024          | 0.7686 |
| 0.4946        | 25.72 | 4000 | 2.1173          | 0.7702 |
| 0.4093        | 28.3  | 4400 | 2.2190          | 0.7528 |

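Wer in the table is word error rate: substitutions, deletions, and insertions divided by the number of reference words, so lower is better. A sketch of computing it with the `evaluate` library on made-up strings:

```python
# Hedged sketch: word error rate with the evaluate library.
# The strings below are illustrative, not taken from nb_samtale.
import evaluate

wer_metric = evaluate.load("wer")
wer = wer_metric.compute(
    references=["det er en fin dag i dag"],
    predictions=["det er fin dag i dag"],
)
print(f"WER: {wer:.4f}")  # one deletion over seven reference words ≈ 0.1429
```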
### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
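
A quick way to check that a local environment roughly matches these versions (a sketch; the four libraries are assumed to be installed):

```python
# Hedged sketch: compare local library versions against those listed above.
import datasets
import tokenizers
import torch
import transformers

print("transformers", transformers.__version__)  # card: 4.35.2
print("torch", torch.__version__)                # card: 2.1.0+cu118
print("datasets", datasets.__version__)          # card: 2.15.0
print("tokenizers", tokenizers.__version__)      # card: 0.15.0
```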
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f72285c104cf9e50af59bad86b6550bdd186291bfdd42d89b5a005d6d8624739
+ oid sha256:17fb026c80291e1a6f01e4182b81e9efc3dfd31f2c05649d7a0f16eb8c75815d
  size 1261955080
runs/Nov25_14-51-21_acdb54445ae2/events.out.tfevents.1700924076.acdb54445ae2.1664.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3aad8e1bd228e277e25ae8fdae5951cad597dc6c34f36cd283d9896f064437b5
- size 11201
+ oid sha256:2d445a9d35bb93734826836f5745bfeec34843bded69c00a85b0d6c7d9d7a6da
+ size 11555