jadorantes2 committed
Commit
5df093d
1 Parent(s): b45ae43

End of training

Files changed (2):
  1. README.md +80 -0
  2. model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,80 @@
+ ---
+ tags:
+ - generated_from_trainer
+ datasets:
+ - ami
+ metrics:
+ - wer
+ model-index:
+ - name: model_optimization
+   results:
+   - task:
+       name: Automatic Speech Recognition
+       type: automatic-speech-recognition
+     dataset:
+       name: ami
+       type: ami
+       config: ihm
+       split: None
+       args: ihm
+     metrics:
+     - name: Wer
+       type: wer
+       value: 0.24598930481283424
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # model_optimization
+
+ This model was trained from scratch on the ami dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 2.0220
+ - Wer: 0.2460
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 1e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 16
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 500
+ - training_steps: 1000
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Wer    |
+ |:-------------:|:-----:|:----:|:---------------:|:------:|
+ | 1.2804        | 50.0  | 250  | 1.8094          | 0.3636 |
+ | 0.637         | 100.0 | 500  | 2.6436          | 0.3155 |
+ | 0.4223        | 150.0 | 750  | 1.6623          | 0.2406 |
+ | 0.3273        | 200.0 | 1000 | 2.0220          | 0.2460 |
+
+
+ ### Framework versions
+
+ - Transformers 4.42.4
+ - Pytorch 2.3.1+cu121
+ - Datasets 2.20.0
+ - Tokenizers 0.19.1
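
For reference, the hyperparameters listed in the model card above map roughly onto a Hugging Face `TrainingArguments` configuration. This is a minimal sketch, not the training script from this commit: the model class, data collator, and dataset preprocessing are not recorded here, and the `output_dir` and evaluation cadence (every 250 steps, matching the results table) are assumptions.

```python
# Sketch only: reconstructs the hyperparameters from the model card above as
# Hugging Face TrainingArguments (Transformers 4.42.x). The actual training
# script, model class, and data pipeline are not part of this commit.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="model_optimization",   # assumed; matches the model name
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,     # 8 x 2 = total train batch size of 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=1000,                    # training_steps: 1000
    fp16=True,                         # "Native AMP" mixed precision
    evaluation_strategy="steps",       # assumed; results table evaluates every 250 steps
    eval_steps=250,
    save_steps=250,
    logging_steps=250,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults.
)
```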
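The reported Wer of 0.2460 is the word error rate on the evaluation split. Below is a minimal sketch of how such a score is typically computed with the `evaluate` library; the transcripts are placeholders, not outputs of this model.

```python
# Sketch: computing word error rate (WER) with the `evaluate` library.
# The prediction/reference strings are placeholders, not outputs of this model.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["the meeting starts at ten", "please mute your microphone"]
references = ["the meeting starts at ten", "please mute your microphones"]

wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # fraction of word-level errors; 0.2460 corresponds to ~24.6%
```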
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:34d31b0f4cde310d47bd6448e070b0af9fa38efe01857a07ae972b1f7715cf62
+ oid sha256:a8b8fd572f128d44962a6c1e8e255b15c5bada1f71efddebdb5b8ef635caa9c9
  size 377611120