ngia committed · Commit af6d765 (verified) · 1 Parent(s): 939075c

End of training

README.md CHANGED
@@ -23,7 +23,7 @@ model-index:
     metrics:
     - name: Wer
       type: wer
-      value: 52.95489047928865
+      value: 51.21087255114581
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -33,8 +33,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the ASR Wolof Dataset dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.0929
-- Wer: 52.9549
+- Loss: 1.1760
+- Wer: 51.2109
 
 ## Model description
 
@@ -61,16 +61,15 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 32
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
-- num_epochs: 3
+- num_epochs: 2
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Wer     |
 |:-------------:|:-----:|:----:|:---------------:|:-------:|
-| 0.1788        | 1.0   | 450  | 1.0593          | 54.4766 |
-| 0.1082        | 2.0   | 900  | 1.0710          | 53.9887 |
-| 0.0663        | 3.0   | 1350 | 1.0929          | 52.9549 |
+| 0.0367        | 1.0   | 450  | 1.1685          | 50.4807 |
+| 0.0191        | 2.0   | 900  | 1.1760          | 51.2109 |
 
 
 ### Framework versions
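The hyperparameter bullets in the hunk above map naturally onto `transformers` `Seq2SeqTrainingArguments`. Below is a minimal sketch under assumptions: the output directory, learning rate, and the per-device batch size / gradient accumulation split behind the listed total_train_batch_size of 32 are placeholders not shown in this diff; only the cosine scheduler, the adamw_torch settings, the 2 epochs, and native AMP come from the card.

```python
# Minimal sketch: mapping the README's hyperparameter list onto
# Seq2SeqTrainingArguments. Values marked "assumed" are not in the diff.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-wolof",  # assumed output path
    per_device_train_batch_size=16,      # assumed split that yields a
    gradient_accumulation_steps=2,       # total_train_batch_size of 32
    learning_rate=1e-5,                  # assumed; not shown in this hunk
    num_train_epochs=2,                  # from the hunk (was 3 before this commit)
    lr_scheduler_type="cosine",          # from the hunk
    optim="adamw_torch",                 # from the hunk
    adam_beta1=0.9,                      # from the hunk
    adam_beta2=0.999,                    # from the hunk
    adam_epsilon=1e-8,                   # from the hunk
    fp16=True,                           # "Native AMP"; requires a CUDA device
    predict_with_generate=True,          # typical for Whisper evaluation; assumed
)
```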
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:463bdddb76ac49ef8f085878015d923eee6064305c21a142142e4c7951e1e6e2
+oid sha256:4ca87c278cccf7f76deb291404555b23970d237e8f5efe0f3953a4394f5c42f4
 size 966995080
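The `model.safetensors` entry above is a Git LFS pointer file: only the sha256 `oid` changes in this commit, while the actual ~967 MB weight blob lives in LFS storage. A minimal sketch of resolving the pointer to the real file via `huggingface_hub` follows; the repository id is a placeholder, since the diff does not name the repo.

```python
# Minimal sketch: downloading the actual weights behind the LFS pointer.
from huggingface_hub import hf_hub_download

weights_path = hf_hub_download(
    repo_id="ngia/whisper-small-wolof",  # hypothetical repo id, not named in the diff
    filename="model.safetensors",
)
print(weights_path)  # local cache path to the ~967 MB safetensors file
```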
runs/Dec02_01-02-33_7931188d900a/events.out.tfevents.1733101355.7931188d900a.30.1 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:690876109a1830993b8bf15af06f9709131a60eadfc64e55ea08b50754900940
-size 6965
+oid sha256:296f70271acc74d1eb73768078bd18953ba85f311a1a44fa3d5e55a5a1f8f286
+size 7319
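For reference, the `Wer` figures in the card (51.2109 after this commit) are word error rates reported as percentages. A minimal sketch of computing such a value with the `evaluate` library is below; the example transcripts are made up and only illustrate the convention of scaling the metric by 100.

```python
# Minimal sketch: computing WER as a percentage, as reported in the model card.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["dafa dem ndakaaru"]      # hypothetical model transcript
references = ["dafa dem ndakaaru tey"]   # hypothetical reference transcript

# evaluate returns WER as a fraction; the card reports it scaled by 100.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```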