Thaweewat committed
Commit cee719e
1 Parent(s): 9af91e1

Update README.md

Files changed (1)
  1. README.md +1 -2
README.md CHANGED
@@ -3,7 +3,6 @@ license: apache-2.0
  language:
  - th
  base_model: biodatlab/whisper-th-large-combined
- library_name: transformers
  tags:
  - whisper
  - Pytorch
@@ -13,7 +12,7 @@ tags:
 
  whisper-th-large-ct2 is the CTranslate2 format of [biodatlab/whisper-th-large-combined](https://huggingface.co/biodatlab/whisper-th-large-combined), comparable with [WhisperX](https://github.com/m-bain/whisperX) and [faster-whisper](https://github.com/SYSTRAN/faster-whisper), which enables:
 
- - 🤏 **Half the size** of Original Huggingface format.
+ - 🤏 **Half the size** of original Huggingface format.
  - ⚡️ Batched inference for **70x** real-time transcription using Whisper large-v2.
  - 🪶 A faster-whisper backend, requiring **<8GB GPU memory** for large-v2 with beam_size=5.
  - 🎯 Accurate word-level timestamps using wav2vec2 alignment.
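
For context, the CTranslate2 checkpoint this README describes is loaded through faster-whisper (or WhisperX) rather than transformers. Below is a minimal usage sketch, assuming the repo id is `Thaweewat/whisper-th-large-ct2` and using a placeholder audio file; neither is stated in this commit.

```python
# Minimal sketch: loading a CTranslate2 Whisper checkpoint via faster-whisper.
# The repo id and audio path below are assumptions, not taken from this commit.
from faster_whisper import WhisperModel

# float16 keeps large-v2 within the <8GB GPU memory figure quoted in the README.
model = WhisperModel(
    "Thaweewat/whisper-th-large-ct2",  # assumed Hugging Face repo id
    device="cuda",
    compute_type="float16",
)

# beam_size=5 matches the README's memory note; word_timestamps enables per-word timing.
segments, info = model.transcribe("thai_speech.wav", beam_size=5, word_timestamps=True)

print(f"Detected language: {info.language} (p={info.language_probability:.2f})")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```

Note that faster-whisper's word timestamps come from Whisper's own cross-attention alignment; the wav2vec2-based word-level alignment mentioned in the last bullet is what WhisperX adds on top.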