---
base_model: microsoft/wavlm-base-plus
tags:
- generated_from_trainer
model-index:
- name: wavlm_base-plus_emodb
  results: []
---


# wavlm_base-plus_emodb

This model is a fine-tuned version of [microsoft/wavlm-base-plus](https://huggingface.co./microsoft/wavlm-base-plus) for speech emotion recognition, presumably on EmoDB (the Berlin Database of Emotional Speech) as suggested by the model name; the training dataset was not recorded in the auto-generated card.
It achieves the following results on the evaluation set:
- Loss: 1.1390
- UAR (unweighted average recall): 0.6759
- Accuracy: 0.7426
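
UAR is recall macro-averaged over the emotion classes, so minority classes weigh as much as majority ones. A minimal sketch of how these metrics can be computed with scikit-learn (the exact `compute_metrics` function used during training is not included in this card):

```python
from sklearn.metrics import accuracy_score, recall_score

def compute_metrics(y_true, y_pred):
    # UAR = recall averaged over classes with equal weight (macro recall).
    return {
        "uar": recall_score(y_true, y_pred, average="macro"),
        "acc": accuracy_score(y_true, y_pred),
    }

# Toy example with three classes: class 2 is never predicted correctly,
# so UAR (2/3) is lower than plain accuracy (3/4).
print(compute_metrics([0, 0, 1, 2], [0, 0, 1, 1]))
```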

## Model description

This model is [microsoft/wavlm-base-plus](https://huggingface.co./microsoft/wavlm-base-plus) with a classification head, fine-tuned for utterance-level speech emotion recognition. Details of the head architecture and the label set are not documented.

## Intended uses & limitations

The model is intended for emotion classification of short speech utterances. Its limitations (label set, language coverage, robustness to recording conditions) are not documented.
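
The model can be used via the `transformers` audio-classification pipeline. A minimal inference sketch; the repository id and audio file below are placeholders, and WavLM expects 16 kHz mono input:

```python
from transformers import pipeline

# "your-username/wavlm_base-plus_emodb" is a placeholder: substitute the
# actual repository id of this model.
classifier = pipeline(
    "audio-classification", model="your-username/wavlm_base-plus_emodb"
)

# The input should be 16 kHz mono audio, matching WavLM's pretraining setup.
print(classifier("speech_sample.wav"))
# [{'label': ..., 'score': ...}, ...] sorted by descending score
```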

## Training and evaluation data

The model name suggests fine-tuning and evaluation on EmoDB (the Berlin Database of Emotional Speech); split sizes and preprocessing are not documented.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
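
These settings correspond to a `transformers.TrainingArguments` configuration along the lines of the sketch below, reconstructed from the values listed above (the output directory and any arguments not listed, such as the logging and evaluation strategy, are assumptions):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wavlm_base-plus_emodb",  # assumed output directory
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size: 16 * 4 = 64
    lr_scheduler_type="linear",
    num_train_epochs=10,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8 is the Trainer default.
)
```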

### Training results

| Training Loss | Epoch | Step | Validation Loss | UAR    | Acc    |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| No log        | 0.31  | 1    | 1.3804          | 0.3148 | 0.4559 |
| No log        | 0.62  | 2    | 1.3739          | 0.2593 | 0.4118 |
| No log        | 0.92  | 3    | 1.3586          | 0.2500 | 0.4044 |
| 1.4729        | 1.23  | 4    | 1.3445          | 0.2500 | 0.4044 |
| 1.4729        | 1.54  | 5    | 1.3265          | 0.3056 | 0.4485 |
| 1.4729        | 1.85  | 6    | 1.3054          | 0.4167 | 0.5368 |
| 1.3428        | 2.15  | 7    | 1.2888          | 0.4352 | 0.5515 |
| 1.3428        | 2.46  | 8    | 1.2719          | 0.4630 | 0.5735 |
| 1.3428        | 2.77  | 9    | 1.2511          | 0.5093 | 0.6103 |
| 1.2214        | 3.08  | 10   | 1.2465          | 0.5833 | 0.6691 |
| 1.2214        | 3.38  | 11   | 1.2409          | 0.5370 | 0.6324 |
| 1.2214        | 3.69  | 12   | 1.2366          | 0.5000 | 0.6029 |
| 1.2214        | 4.0   | 13   | 1.2346          | 0.5185 | 0.6176 |
| 0.7965        | 4.31  | 14   | 1.2130          | 0.6574 | 0.7279 |
| 0.7965        | 4.62  | 15   | 1.1881          | 0.7222 | 0.7794 |
| 0.7965        | 4.92  | 16   | 1.1775          | 0.7407 | 0.7941 |
| 0.9522        | 5.23  | 17   | 1.1707          | 0.7315 | 0.7868 |
| 0.9522        | 5.54  | 18   | 1.1667          | 0.7222 | 0.7794 |
| 0.9522        | 5.85  | 19   | 1.1636          | 0.7130 | 0.7721 |
| 0.8702        | 6.15  | 20   | 1.1628          | 0.7037 | 0.7647 |
| 0.8702        | 6.46  | 21   | 1.1557          | 0.7037 | 0.7647 |
| 0.8702        | 6.77  | 22   | 1.1444          | 0.7130 | 0.7721 |
| 0.7803        | 7.08  | 23   | 1.1378          | 0.7130 | 0.7721 |
| 0.7803        | 7.38  | 24   | 1.1331          | 0.7130 | 0.7721 |
| 0.7803        | 7.69  | 25   | 1.1339          | 0.7037 | 0.7647 |
| 0.7803        | 8.0   | 26   | 1.1363          | 0.6944 | 0.7574 |
| 0.5654        | 8.31  | 27   | 1.1382          | 0.6759 | 0.7426 |
| 0.5654        | 8.62  | 28   | 1.1394          | 0.6759 | 0.7426 |
| 0.5654        | 8.92  | 29   | 1.1395          | 0.6759 | 0.7426 |
| 0.7148        | 9.23  | 30   | 1.1390          | 0.6759 | 0.7426 |


### Framework versions

- Transformers 4.32.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.13.3