End of training
README.md
CHANGED
@@ -16,7 +16,12 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the None dataset.
 It achieves the following results on the evaluation set:
-
+- eval_loss: 0.6723
+- eval_runtime: 24.726
+- eval_samples_per_second: 1.092
+- eval_steps_per_second: 0.162
+- epoch: 48.0
+- step: 48
 
 ## Model description
 
@@ -42,118 +47,12 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 50
-- num_epochs:
+- num_epochs: 80
 - mixed_precision_training: Native AMP
 
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:-----:|:----:|:---------------:|
-| No log | 1.0 | 1 | 1.6559 |
-| No log | 2.0 | 2 | 1.6397 |
-| No log | 3.0 | 3 | 1.6015 |
-| No log | 4.0 | 4 | 1.5469 |
-| No log | 5.0 | 5 | 1.4813 |
-| No log | 6.0 | 6 | 1.4143 |
-| No log | 7.0 | 7 | 1.3524 |
-| No log | 8.0 | 8 | 1.2961 |
-| No log | 9.0 | 9 | 1.2443 |
-| No log | 10.0 | 10 | 1.1941 |
-| No log | 11.0 | 11 | 1.1438 |
-| No log | 12.0 | 12 | 1.0963 |
-| No log | 13.0 | 13 | 1.0597 |
-| No log | 14.0 | 14 | 1.0356 |
-| No log | 15.0 | 15 | 1.0152 |
-| No log | 16.0 | 16 | 0.9937 |
-| No log | 17.0 | 17 | 0.9774 |
-| No log | 18.0 | 18 | 0.9688 |
-| No log | 19.0 | 19 | 0.9666 |
-| No log | 20.0 | 20 | 0.9624 |
-| No log | 21.0 | 21 | 0.9540 |
-| No log | 22.0 | 22 | 0.9510 |
-| No log | 23.0 | 23 | 0.9600 |
-| No log | 24.0 | 24 | 0.9656 |
-| 1.0039 | 25.0 | 25 | 0.9712 |
-| 1.0039 | 26.0 | 26 | 0.9890 |
-| 1.0039 | 27.0 | 27 | 1.0123 |
-| 1.0039 | 28.0 | 28 | 1.0186 |
-| 1.0039 | 29.0 | 29 | 1.0223 |
-| 1.0039 | 30.0 | 30 | 1.0406 |
-| 1.0039 | 31.0 | 31 | 1.0501 |
-| 1.0039 | 32.0 | 32 | 1.0564 |
-| 1.0039 | 33.0 | 33 | 1.0711 |
-| 1.0039 | 34.0 | 34 | 1.0799 |
-| 1.0039 | 35.0 | 35 | 1.0843 |
-| 1.0039 | 36.0 | 36 | 1.0861 |
-| 1.0039 | 37.0 | 37 | 1.0943 |
-| 1.0039 | 38.0 | 38 | 1.1023 |
-| 1.0039 | 39.0 | 39 | 1.1005 |
-| 1.0039 | 40.0 | 40 | 1.1138 |
-| 1.0039 | 41.0 | 41 | 1.0908 |
-| 1.0039 | 42.0 | 42 | 1.1106 |
-| 1.0039 | 43.0 | 43 | 1.1051 |
-| 1.0039 | 44.0 | 44 | 1.1027 |
-| 1.0039 | 45.0 | 45 | 1.1110 |
-| 1.0039 | 46.0 | 46 | 1.1231 |
-| 1.0039 | 47.0 | 47 | 1.1304 |
-| 1.0039 | 48.0 | 48 | 1.1325 |
-| 1.0039 | 49.0 | 49 | 1.1372 |
-| 0.2875 | 50.0 | 50 | 1.1450 |
-| 0.2875 | 51.0 | 51 | 1.1635 |
-| 0.2875 | 52.0 | 52 | 1.1798 |
-| 0.2875 | 53.0 | 53 | 1.1333 |
-| 0.2875 | 54.0 | 54 | 1.1759 |
-| 0.2875 | 55.0 | 55 | 1.2435 |
-| 0.2875 | 56.0 | 56 | 1.1802 |
-| 0.2875 | 57.0 | 57 | 1.0951 |
-| 0.2875 | 58.0 | 58 | 1.1254 |
-| 0.2875 | 59.0 | 59 | 1.1150 |
-| 0.2875 | 60.0 | 60 | 1.0558 |
-| 0.2875 | 61.0 | 61 | 1.0482 |
-| 0.2875 | 62.0 | 62 | 1.0277 |
-| 0.2875 | 63.0 | 63 | 1.0109 |
-| 0.2875 | 64.0 | 64 | 1.0255 |
-| 0.2875 | 65.0 | 65 | 1.0293 |
-| 0.2875 | 66.0 | 66 | 1.0387 |
-| 0.2875 | 67.0 | 67 | 1.0540 |
-| 0.2875 | 68.0 | 68 | 1.0771 |
-| 0.2875 | 69.0 | 69 | 1.0651 |
-| 0.2875 | 70.0 | 70 | 1.0527 |
-| 0.2875 | 71.0 | 71 | 1.0472 |
-| 0.2875 | 72.0 | 72 | 1.0476 |
-| 0.2875 | 73.0 | 73 | 1.0566 |
-| 0.2875 | 74.0 | 74 | 1.0526 |
-| 0.2752 | 75.0 | 75 | 1.0576 |
-| 0.2752 | 76.0 | 76 | 1.0647 |
-| 0.2752 | 77.0 | 77 | 1.0720 |
-| 0.2752 | 78.0 | 78 | 1.0814 |
-| 0.2752 | 79.0 | 79 | 1.1008 |
-| 0.2752 | 80.0 | 80 | 1.0758 |
-| 0.2752 | 81.0 | 81 | 1.0752 |
-| 0.2752 | 82.0 | 82 | 1.0775 |
-| 0.2752 | 83.0 | 83 | 1.0863 |
-| 0.2752 | 84.0 | 84 | 1.1308 |
-| 0.2752 | 85.0 | 85 | 1.0823 |
-| 0.2752 | 86.0 | 86 | 1.0900 |
-| 0.2752 | 87.0 | 87 | 1.0955 |
-| 0.2752 | 88.0 | 88 | 1.1000 |
-| 0.2752 | 89.0 | 89 | 1.1051 |
-| 0.2752 | 90.0 | 90 | 1.1096 |
-| 0.2752 | 91.0 | 91 | 1.1118 |
-| 0.2752 | 92.0 | 92 | 1.1115 |
-| 0.2752 | 93.0 | 93 | 1.1100 |
-| 0.2752 | 94.0 | 94 | 1.1082 |
-| 0.2752 | 95.0 | 95 | 1.1068 |
-| 0.2752 | 96.0 | 96 | 1.1059 |
-| 0.2752 | 97.0 | 97 | 1.1054 |
-| 0.2752 | 98.0 | 98 | 1.1052 |
-| 0.2752 | 99.0 | 99 | 1.1051 |
-| 0.2605 | 100.0 | 100 | 1.1050 |
-
-
 ### Framework versions
 
-- PEFT 0.11.
+- PEFT 0.11.2.dev0
 - Transformers 4.39.3
 - Pytorch 2.1.2
 - Datasets 2.18.0
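For context, the hyperparameter list in the hunk above maps onto a `Seq2SeqTrainingArguments` configuration along these lines. This is a sketch only: the learning rate, batch size, and output directory are not recorded in this diff, so those values are placeholders; only the scheduler, warmup steps, epoch count, Adam settings, and AMP flag come from the card (Trainer's default AdamW uses exactly these betas and epsilon).

```python
# Sketch only: the hyperparameters listed in the README, expressed as
# transformers Seq2SeqTrainingArguments. learning_rate, batch size, and
# output_dir are NOT in the diff above and are placeholders.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v3-ft",  # placeholder
    learning_rate=1e-3,                  # placeholder; not recorded in the card
    per_device_train_batch_size=8,       # placeholder; not recorded in the card
    lr_scheduler_type="linear",          # from the card
    warmup_steps=50,                     # from the card
    num_train_epochs=80,                 # from the card (updated value)
    fp16=True,                           # "mixed_precision_training: Native AMP"
    adam_beta1=0.9,                      # "Adam with betas=(0.9,0.999)"
    adam_beta2=0.999,
    adam_epsilon=1e-08,                  # "epsilon=1e-08"
)
```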
adapter_model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:dc9cd753315241fcd85e8aace8a41dba0ccff4439c66d0c86d320d34df19a38c
 size 62969640
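Since `adapter_model.safetensors` is a ~63 MB PEFT adapter rather than a full Whisper checkpoint, it is loaded on top of the base model. A minimal sketch, assuming `your-username/your-repo` stands in for this repository's actual id, and matching the framework versions listed in the card (PEFT 0.11.2.dev0, Transformers 4.39.3):

```python
# Minimal sketch of loading this PEFT adapter onto the base checkpoint.
# "your-username/your-repo" is a placeholder for this model repo's actual id.
import torch
from peft import PeftModel
from transformers import AutoProcessor, WhisperForConditionalGeneration

base_model = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v3", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base_model, "your-username/your-repo")
processor = AutoProcessor.from_pretrained("openai/whisper-large-v3")
model.eval()
# Optionally fold the adapter into the base weights for faster inference:
# model = model.merge_and_unload()
```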
runs/May17_14-16-51_cd9fc56ae957/events.out.tfevents.1715955429.cd9fc56ae957.34.0
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4ddbdaacea77c9c80c0061e4f86604a09151b561eb0fb14dde72980ba842c85c
+size 7616
runs/May17_14-22-11_cd9fc56ae957/events.out.tfevents.1715955743.cd9fc56ae957.34.1
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3a7b024d268492ac8b52ea9245e39aaa8db6076f0a8ef99d24d9b571bca7f6c5
+size 18993
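The two `events.out.tfevents.*` files above are TensorBoard logs for the two training runs. If you want the logged scalars programmatically rather than through the TensorBoard UI, something like the following should work; the `eval/loss` tag name is an assumption, so list the available tags first:

```python
# Sketch: read scalars back out of the TensorBoard event files added above.
# The "eval/loss" tag is an assumption; print Tags() to see what was logged.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("runs/May17_14-22-11_cd9fc56ae957")
acc.Reload()  # parse the events.out.tfevents.* file(s) in the run directory
print(acc.Tags()["scalars"])  # available scalar tags

for event in acc.Scalars("eval/loss"):  # hypothetical tag name
    print(event.step, event.value)
```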
training_args.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:c949b3062a355d3221303c499157ee9dd4a2e1ea9026195dd18b78b7de12c094
 size 5176
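Each changed binary in this commit is a Git LFS pointer recording only an `oid sha256:` and a `size`; the payload itself is fetched separately at download time. A quick sanity check that a downloaded artifact matches its pointer, shown with the hash and size of `training_args.bin` from this commit:

```python
# Sketch: verify a downloaded file against its Git LFS pointer (sha256 + size).
import hashlib
import os

def verify_lfs_object(path: str, expected_sha256: str, expected_size: int) -> bool:
    """Return True if the file's size and SHA-256 digest match the pointer."""
    if os.path.getsize(path) != expected_size:
        return False
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

# Values taken from the training_args.bin pointer above:
print(verify_lfs_object(
    "training_args.bin",
    "c949b3062a355d3221303c499157ee9dd4a2e1ea9026195dd18b78b7de12c094",
    5176,
))
```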