jonatasgrosman committed on
Commit
f758bc3
1 Parent(s): acd04ba

Update README.md

Files changed (1)
  1. README.md +18 -10
README.md CHANGED
@@ -132,22 +132,30 @@ test_dataset = test_dataset.map(speech_file_to_array_fn)
  # Preprocessing the datasets.
  # We need to read the audio files as arrays
  def evaluate(batch):
- inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
-
- with torch.no_grad():
- logits = model(inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE)).logits
-
- pred_ids = torch.argmax(logits, dim=-1)
- batch["pred_strings"] = processor.batch_decode(pred_ids)
- return batch

  result = test_dataset.map(evaluate, batched=True, batch_size=8)

- print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"], chunk_size=1000)))
- print("CER: {:2f}".format(100 * cer.compute(predictions=result["pred_strings"], references=result["sentence"], chunk_size=1000)))
  ```

  **Test Result**:

- - WER: 31.40%
- - CER: 6.20%
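The removed `evaluate` function above takes the per-frame argmax of the logits and hands the resulting ids to `processor.batch_decode`. For readers unfamiliar with that step, here is a minimal pure-Python sketch of the greedy CTC decoding it performs; the toy vocabulary and blank id below are illustrative assumptions, not the model's real ones.

```python
# Greedy CTC decoding sketch: take the argmax token id per frame,
# collapse consecutive repeats, then drop the blank token.
# BLANK and VOCAB are toy assumptions for illustration only.
BLANK = 0
VOCAB = {1: "h", 2: "i"}

def greedy_ctc_decode(frame_ids):
    chars = []
    prev = None
    for i in frame_ids:
        if i != prev and i != BLANK:
            chars.append(VOCAB[i])
        prev = i
    return "".join(chars)

print(greedy_ctc_decode([0, 1, 1, 0, 2, 2, 0]))  # hi
```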
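Incidentally, the removed `print` lines use the format spec `{:2f}`, which means a minimum field width of 2 with the default six decimal places, not two decimal places; `{:.2f}` is the spec that rounds to two decimals. A quick check:

```python
# "{:2f}"  = minimum field width 2, default precision (6 decimals).
# "{:.2f}" = precision 2 - almost certainly what was intended.
print("{:2f}".format(31.401234))   # 31.401234
print("{:.2f}".format(31.401234))  # 31.40
```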
 
  # Preprocessing the datasets.
  # We need to read the audio files as arrays
  def evaluate(batch):
+     inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
+
+     with torch.no_grad():
+         logits = model(inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE)).logits
+
+     pred_ids = torch.argmax(logits, dim=-1)
+     batch["pred_strings"] = processor.batch_decode(pred_ids)
+     return batch

  result = test_dataset.map(evaluate, batched=True, batch_size=8)

+ predictions = [x.upper() for x in result["pred_strings"]]
+ references = [x.upper() for x in result["sentence"]]
+
+ print(f"WER: {wer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
+ print(f"CER: {cer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
  ```

  **Test Result**:

+ My model may report better scores than others because of some specifics of my evaluation script, so I ran the same evaluation script on other models (on 2021-04-22) to make a fairer comparison.
+
+ | Model | WER | CER |
+ | ------------- | ------------- | ------------- |
+ | jonatasgrosman/wav2vec2-large-xlsr-53-hungarian | **31.40%** | **6.20%** |
+ | anton-l/wav2vec2-large-xlsr-53-hungarian | 42.39% | 9.39% |
+ | birgermoell/wav2vec2-large-xlsr-hungarian | 46.93% | 10.31% |
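Since `evaluate` is mapped with `batched=True`, it receives a dict of columns (lists of up to `batch_size` items) rather than a single example, and must return columns of the same shape. A small pure-Python sketch of that calling convention; the `batched_map` helper is illustrative, not the `datasets` implementation.

```python
# Sketch of how Dataset.map(fn, batched=True, batch_size=n) calls fn:
# each call receives a dict mapping column names to lists of values.
def batched_map(rows, fn, batch_size):
    out = []
    for start in range(0, len(rows), batch_size):
        chunk = rows[start:start + batch_size]
        # build a columnar batch from the row dicts
        batch = {k: [r[k] for r in chunk] for k in chunk[0]}
        result = fn(batch)
        # split the returned columns back into rows
        n = len(next(iter(result.values())))
        out.extend({k: v[i] for k, v in result.items()} for i in range(n))
    return out

rows = [{"sentence": "szia"}, {"sentence": "hello"}, {"sentence": "jo"}]
upper = batched_map(rows, lambda b: {"sentence": [s.upper() for s in b["sentence"]]}, batch_size=2)
print([r["sentence"] for r in upper])  # ['SZIA', 'HELLO', 'JO']
```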
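The new script upper-cases both predictions and references before scoring because WER and CER are computed on exact matches, so a case mismatch alone counts as an error. A self-contained word-error-rate sketch (plain word-level edit distance, not the `wer` metric the README loads) shows the effect:

```python
# Word error rate = word-level Levenshtein distance / reference word count.
def word_error_rate(reference, hypothesis):
    r, h = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between r[:i] and h[:j]
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(r)][len(h)] / len(r)

ref, hyp = "jó napot kívánok", "Jó napot kívánok"
print(word_error_rate(ref, hyp))                  # 0.3333333333333333 (case mismatch = 1 substitution)
print(word_error_rate(ref.upper(), hyp.upper()))  # 0.0
```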