lpcortez committed
Commit ed25e75
1 Parent(s): 5ae76c4

End of training
README.md CHANGED
@@ -4,7 +4,7 @@ base_model: facebook/wav2vec2-base-960h
 tags:
 - generated_from_trainer
 datasets:
-- arrow
+- audiofolder
 metrics:
 - accuracy
 model-index:
@@ -14,15 +14,15 @@ model-index:
       name: Audio Classification
       type: audio-classification
     dataset:
-      name: arrow
-      type: arrow
+      name: audiofolder
+      type: audiofolder
       config: default
       split: train
       args: default
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.3625798954096456
+      value: 0.365699032365699
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -30,10 +30,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # audio_consistency
 
-This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the arrow dataset.
+This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the audiofolder dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.0924
-- Accuracy: 0.3626
+- Loss: 1.0866
+- Accuracy: 0.3657
 
 ## Model description
 
@@ -65,14 +65,14 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| No log        | 1.0   | 216  | 1.0926          | 0.3626   |
-| No log        | 2.0   | 432  | 1.0925          | 0.3626   |
-| 1.0933        | 3.0   | 648  | 1.0924          | 0.3626   |
+| No log        | 1.0   | 375  | 1.0863          | 0.3747   |
+| 1.0859        | 2.0   | 750  | 1.0869          | 0.3747   |
+| 1.0838        | 3.0   | 1125 | 1.0866          | 0.3657   |
 
 
 ### Framework versions
 
-- Transformers 4.41.2
-- Pytorch 2.3.0+cu121
+- Transformers 4.42.4
+- Pytorch 2.3.1+cu121
 - Datasets 2.20.0
 - Tokenizers 0.19.1
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:287c79c9fab54fb17e0d5ee53935fd5fff6890d8ad161dbc5cfd95de6e4fd8b1
+oid sha256:83ccbfdc98cf545a8270e65d1c6d5580a0354bc0a77e6c24231ed8dd206b44b2
 size 378303396
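The model.safetensors entry above is a Git LFS pointer file, not the weights themselves: the repository stores a small text stub, and LFS resolves the real 378 MB blob by its sha256 oid. As a minimal sketch (values copied from this diff), such a pointer can be parsed like so:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs pointer file into a {key: value} dict.

    Each line of a pointer is "<key> <value>", split on the first space.
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


# The new model.safetensors pointer from the commit above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:83ccbfdc98cf545a8270e65d1c6d5580a0354bc0a77e6c24231ed8dd206b44b2
size 378303396
"""

info = parse_lfs_pointer(pointer)
print(info["oid"])        # sha256:83ccbfdc...
print(int(info["size"]))  # 378303396
```

Note that only the pointer changed in this commit; the blob size is identical (378303396 bytes), as expected when retraining overwrites a checkpoint of the same architecture.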
runs/Jul21_09-30-08_9d4a70b3229b/events.out.tfevents.1721554209.9d4a70b3229b.168.1 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8dd233a850722f2d5ca35fa875b54c42d5fd047ab0255065279ec62ef0481057
-size 7525
+oid sha256:af1311c1e53dddab4fa1316e0ba28887eb72d50a2a50fb421a581e9885bbf55c
+size 8202