jonatasgrosman committed
Commit 33eddf9 · 1 Parent(s): 0781a6f
Update README.md
README.md CHANGED
@@ -132,14 +132,14 @@ test_dataset = test_dataset.map(speech_file_to_array_fn)
 # Preprocessing the datasets.
 # We need to read the audio files as arrays
 def evaluate(batch):
-
+    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
 
-
-
+    with torch.no_grad():
+        logits = model(inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE)).logits
 
-
-
-
+    pred_ids = torch.argmax(logits, dim=-1)
+    batch["pred_strings"] = processor.batch_decode(pred_ids)
+    return batch
 
 result = test_dataset.map(evaluate, batched=True, batch_size=8)
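The evaluate() hunk above relies on processor, model, and DEVICE being defined earlier in the README, outside the lines shown in this diff. A minimal sketch of the kind of setup it assumes (the MODEL_ID below is a placeholder, not taken from this commit) could look like:

import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Placeholder checkpoint name; the real model ID is defined elsewhere in the README.
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-english"

# Run on GPU when available, otherwise fall back to CPU.
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

# The processor bundles the feature extractor and the CTC tokenizer.
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID).to(DEVICE)
model.eval()  # inference only; evaluate() already wraps the forward pass in torch.no_grad()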
@@ -152,7 +152,7 @@ print(f"CER: {cer.compute(predictions=predictions, references=references, chunk_
 
 **Test Result**:
 
-
+In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I also ran the evaluation script described above on other models (on 2021-04-22). Note that the table below may show results that differ from those already reported; such differences may be caused by specific details of the other evaluation scripts used.
 
 | Model | WER | CER |
 | ------------- | ------------- | ------------- |
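The WER and CER figures in the table come from metric objects like the cer.compute call visible in the hunk header above. A rough, illustrative way to get comparable numbers from the mapped result, here using the jiwer package instead of those metric objects and assuming the reference transcriptions live in a "sentence" column, would be:

import jiwer

# Assumes evaluate() above filled "pred_strings" and the references are in "sentence".
predictions = result["pred_strings"]
references = result["sentence"]

# Word Error Rate and Character Error Rate, reported as percentages.
print(f"WER: {jiwer.wer(references, predictions) * 100:.2f}")
print(f"CER: {jiwer.cer(references, predictions) * 100:.2f}")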