ChuGyouk committed
Commit
611b2ce
1 Parent(s): 497f771

End of training

README.md ADDED
@@ -0,0 +1,101 @@
---
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: whisper-large-v3-finetuned-gtzan
  results:
  - task:
      name: Audio Classification
      type: audio-classification
    dataset:
      name: GTZAN
      type: marsyas/gtzan
      config: all
      split: train
      args: all
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.94
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# whisper-large-v3-finetuned-gtzan

This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2657
- Accuracy: 0.94
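
A minimal inference sketch using the `transformers` audio-classification pipeline (the Hub repo id and the audio file path below are assumptions, not confirmed by this card):

```python
# Minimal sketch: classify a music clip with the fine-tuned checkpoint.
# "ChuGyouk/whisper-large-v3-finetuned-gtzan" is an assumed repo id.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="ChuGyouk/whisper-large-v3-finetuned-gtzan",  # assumed repo id
)

# GTZAN clips are mono 30 s audio files; any local audio path works here.
predictions = classifier("example_clip.wav", top_k=3)  # assumed file path
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```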

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP

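These settings map onto a `TrainingArguments` configuration roughly like the sketch below; the effective batch size of 16 is 1 per device × 4 GPUs × 4 gradient-accumulation steps. The output directory name is an assumption, the logging/eval/save cadence is not recorded in the card, and the Adam betas and epsilon listed above are the library defaults:

```python
# Sketch of a TrainingArguments setup matching the hyperparameters above
# (assumes the 4-GPU mixed-precision setup recorded in the card).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="whisper-large-v3-finetuned-gtzan",  # assumed name
    learning_rate=4e-5,
    per_device_train_batch_size=1,   # x 4 GPUs x 4 accumulation steps = 16 effective
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,
    num_train_epochs=10,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
)
```
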
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1646        | 0.5   | 28   | 1.8012          | 0.55     |
| 1.0152        | 1.0   | 56   | 0.8618          | 0.79     |
| 1.1129        | 1.49  | 84   | 0.7426          | 0.8      |
| 0.8163        | 1.99  | 112  | 0.8078          | 0.75     |
| 0.4374        | 2.49  | 140  | 0.6259          | 0.81     |
| 0.4607        | 2.99  | 168  | 0.5424          | 0.83     |
| 0.4225        | 3.48  | 196  | 0.3723          | 0.89     |
| 0.1769        | 3.98  | 224  | 0.3517          | 0.9      |
| 0.0927        | 4.48  | 252  | 0.3385          | 0.89     |
| 0.0159        | 4.98  | 280  | 0.3985          | 0.88     |
| 0.0119        | 5.48  | 308  | 0.4626          | 0.9      |
| 0.029         | 5.97  | 336  | 0.4292          | 0.91     |
| 0.0064        | 6.47  | 364  | 0.2710          | 0.93     |
| 0.0057        | 6.97  | 392  | 0.2665          | 0.93     |
| 0.0048        | 7.47  | 420  | 0.2784          | 0.93     |
| 0.0049        | 7.96  | 448  | 0.2550          | 0.94     |
| 0.0049        | 8.46  | 476  | 0.3011          | 0.94     |
| 0.0044        | 8.96  | 504  | 0.2759          | 0.94     |
| 0.0045        | 9.46  | 532  | 0.2661          | 0.94     |
| 0.0048        | 9.96  | 560  | 0.2657          | 0.94     |


### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:867ed3fc6849eb86258023d34a3ba3dd0f7fa31f2ce0191d4405d2c9ce250bd3
+ oid sha256:6178a17b44fb65a40e82a540b733a715c8ac9d5deaa5668c036dcd3ea6075258
  size 2549252296
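
The entry above is a Git LFS pointer: the repository stores only the SHA-256 digest (`oid`) and byte size of the weights, which is what changes in this commit. A minimal sketch (local file path assumed) for checking a downloaded copy of `model.safetensors` against the new pointer:

```python
# Sketch: verify a locally downloaded model.safetensors against the
# oid/size recorded in the updated LFS pointer (the path is assumed).
import hashlib
import os

EXPECTED_OID = "6178a17b44fb65a40e82a540b733a715c8ac9d5deaa5668c036dcd3ea6075258"
EXPECTED_SIZE = 2549252296
path = "model.safetensors"  # assumed local path

sha256 = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

print("size ok:", os.path.getsize(path) == EXPECTED_SIZE)
print("oid ok: ", sha256.hexdigest() == EXPECTED_OID)
```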
runs/Feb13_22-12-10_rocket/events.out.tfevents.1707829938.rocket.2101387.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2d908f19b2070ed4755dbb5f1812f1a7741d2657498ad4543c5d6300119bf2a8
- size 28129
+ oid sha256:9dfa7b18b5fa10ad0976ce5d751424012c94ee17e3cd7e9bb9855c554273e019
+ size 29748