---
license: apache-2.0
datasets:
- doof-ferb/vlsp2020_vinai_100h
- doof-ferb/fpt_fosd
- doof-ferb/infore1_25hours
- doof-ferb/infore2_audiobooks
- quocanh34/viet_vlsp
- linhtran92/final_dataset_500hrs_wer0
- linhtran92/viet_youtube_asr_corpus_v2
- google/fleurs
- mozilla-foundation/common_voice_16_1
- vivos
language: ["vi"]
metrics: ["wer"]
library_name: transformers
base_model: openai/whisper-tiny
pipeline_tag: automatic-speech-recognition
model-index:
- name: doof-ferb/whisper-tiny-vi
results:
- task:
type: automatic-speech-recognition
dataset:
type: mozilla-foundation/common_voice_16_1
name: Mozilla CommonVoice (Vietnamese) v16.1
config: vi
split: test
metrics:
- type: wer
value: 26.6
verified: false
- task:
type: automatic-speech-recognition
dataset:
type: google/fleurs
name: Google FLEURS (Vietnamese)
config: vi_vn
split: test
metrics:
- type: wer
value: 37.1
verified: false
- task:
type: automatic-speech-recognition
dataset:
type: vivos
name: ĐHQG TPHCM VIVOS
split: test
metrics:
- type: wer
value: 18.7
verified: false
---
Whisper Tiny fine-tuned on a large collection of Vietnamese speech datasets (listed in the metadata above).
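A minimal usage sketch with the 🤗 `transformers` ASR pipeline (the model ID is taken from this card; `audio.wav` is a placeholder for your own file):

```python
# Transcribe a Vietnamese audio file with the fine-tuned checkpoint.
from transformers import pipeline

transcriber = pipeline(
    "automatic-speech-recognition",
    model="doof-ferb/whisper-tiny-vi",
)

# force Vietnamese transcription (rather than language auto-detection)
result = transcriber(
    "audio.wav",
    generate_kwargs={"language": "vi", "task": "transcribe"},
)
print(result["text"])
```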
TODO:
- [x] train, then publish checkpoint (*no ETA*)
- [x] evaluate WER on Common Voice & FLEURS
- [ ] convert to `openai-whisper`, `whisper.cpp`, `faster-whisper`
- [ ] convert to ONNX, to try `k2-fsa/sherpa-onnx` & `zhuzilin/whisper-openvino`
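The WER figures reported above are word error rate: word-level edit distance divided by the number of reference words. A self-contained sketch of that computation (the example sentences are made up, not drawn from the actual test sets):

```python
# Word error rate: Levenshtein distance over words / reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution
            )
    return dp[-1][-1] / len(ref)

# one word dropped out of four reference words
print(wer("xin chào các bạn", "xin chào bạn"))  # → 0.25
```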
Training: 21k steps, 5% warm-up, batch size 16 per device on 2 GPUs (Kaggle free tier, T4×2).
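A hypothetical sketch of the training configuration those numbers imply, using `Seq2SeqTrainingArguments`; the actual arguments live in the linked repo, and everything not stated above (output dir, fp16, etc.) is an assumption here:

```python
# Config fragment only: 21k steps, 5% warm-up, per-device batch 16 on 2 T4s.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-vi",     # assumed name
    max_steps=21_000,
    warmup_ratio=0.05,                # 5% warm-up ≈ 1,050 steps
    per_device_train_batch_size=16,   # ×2 GPUs → effective batch 32
    fp16=True,                        # assumed; typical on T4
)
```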
All training and evaluation scripts are in my repo: https://github.com/phineas-pta/fine-tune-whisper-vi