---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
- tr
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: Wav2Vec2 Base Turkish by Cahya
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 7
      type: mozilla-foundation/common_voice_7_0
      args: tr
    metrics:
      - name: Test WER
        type: wer
        value: 8.147
      - name: Test CER
        type: cer
        value: 2.802
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Robust Speech Event - Dev Data
      type: speech-recognition-community-v2/dev_data
      args: tr
    metrics:
      - name: Test WER
        type: wer
        value: 28.011
      - name: Test CER
        type: cer
        value: 10.66
---
# Wav2Vec2 Base Turkish by Cahya

This model is a fine-tuned version of [cahya/wav2vec2-base-turkish-artificial-cv](https://huggingface.co./cahya/wav2vec2-base-turkish-artificial-cv) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1337
- Wer: 0.1353

 
| No. | Dataset                       | WER     | CER      |
|:---:|-------------------------------|---------|----------|
| 1   | Common Voice 6.1              |  9.437  |  3.325   |
| 2   | Common Voice 7.0              |  8.147  |  2.802   |
| 3   | Common Voice 8.0              |  8.335  |  2.336   |
| 4   | Speech Recognition Community  | 28.011  | 10.66    |
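WER and CER in the table above are edit-distance-based error rates over words and characters, respectively. A minimal self-contained sketch of how they are computed (not the evaluation script used for this model; the example strings below are invented):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences, using a rolling row."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1,         # deletion
                                   d[j - 1] + 1,     # insertion
                                   prev + (r != h))  # substitution (free on match)
    return d[len(hyp)]

def wer(ref, hyp):
    """Word error rate: word-level edit distance / reference word count."""
    r, h = ref.split(), hyp.split()
    return edit_distance(r, h) / len(r)

def cer(ref, hyp):
    """Character error rate: character-level edit distance / reference length."""
    return edit_distance(ref, hyp) / len(ref)
```

Reported values are typically multiplied by 100, so a WER of 8.147 corresponds to roughly 8 word errors per 100 reference words.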

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data
The following datasets were used for fine-tuning:
 - [Common Voice 7.0 TR](https://huggingface.co./datasets/mozilla-foundation/common_voice_7_0): the 'train', 'validation', and 'other' splits were used for training.
 - [Media Speech](https://www.openslr.org/108/)
 - [Magic Hub](https://magichub.com/)


## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 7.5e-06
- train_batch_size: 6
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 5.0
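The linear scheduler with warmup ramps the learning rate from 0 up to the peak over the first 2,000 steps, then decays it linearly back to 0 by the final step; the effective batch size is the per-device batch size times the gradient-accumulation steps. A small sketch of both (the `total_steps` value is illustrative, not taken from this run):

```python
def linear_schedule_lr(step, peak_lr=7.5e-6, warmup_steps=2000, total_steps=10000):
    """Linear warmup from 0 to peak_lr, then linear decay back to 0."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    # decay from peak at warmup_steps down to 0 at total_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

# effective (total) train batch size = per-device batch * accumulation steps
effective_batch = 6 * 4  # matches total_train_batch_size: 24
```

With gradient accumulation, the optimizer only steps every 4 forward passes, so memory use stays at the per-device batch of 6 while gradients reflect 24 examples.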

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1224        | 3.45  | 500  | 0.1641          | 0.1396 |


### Framework versions

- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3