---
language:
- ht
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---

# Whisper small model for CTranslate2

This repository contains the conversion of [YassineKader/whisper-small-haitian](https://huggingface.co./YassineKader/whisper-small-haitian) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.

This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/guillaumekln/faster-whisper).

## Example
```bash
# Clone the repository.
git clone https://huggingface.co./YassineKader/faster-whisper-small-haitian
```
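
Alternatively, the converted model can be fetched with the `huggingface_hub` library instead of git. This is a minimal sketch; the `local_dir` name below is only an example and can be any directory you choose.
```python
# Optional: download the converted model with huggingface_hub instead of git.
from huggingface_hub import snapshot_download

# local_dir is an example name; any writable directory works.
model_dir = snapshot_download(
    "YassineKader/faster-whisper-small-haitian",
    local_dir="faster-whisper-small-haitian",
)
print(model_dir)
```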
```python
import ctranslate2
import librosa
import transformers
from datetime import datetime
# Load and resample the audio file.
audio, _ = librosa.load("audio1.wav", sr=16000, mono=True)
# Compute the features of the first 30 seconds of audio.
processor = transformers.WhisperProcessor.from_pretrained("YassineKader/whisper-small-haitian")
inputs = processor(audio, return_tensors="np", sampling_rate=16000)
features = ctranslate2.StorageView.from_array(inputs.input_features)
# Load the model on CPU.
model = ctranslate2.models.Whisper("faster-whisper-small-haitian")
# Detect the language.
results = model.detect_language(features)
language, probability = results[0][0]
print("Detected language %s with probability %f" % (language, probability))
print(datetime.now())  # Timestamp before transcription, to gauge generation time.
# Describe the task in the prompt.
# See the prompt format in https://github.com/openai/whisper.
prompt = processor.tokenizer.convert_tokens_to_ids(
    [
        "<|startoftranscript|>",
        language,
        "<|transcribe|>",
        "<|notimestamps|>",  # Remove this token to generate timestamps.
    ]
)
# Run generation for the 30-second window.
results = model.generate(features, [prompt])
transcription = processor.decode(results[0].sequences_ids[0])

print(datetime.now())  # Timestamp after transcription.
print(transcription)
```
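
The same converted weights can also be loaded through faster-whisper, which wraps the CTranslate2 model behind a simpler transcription API. The snippet below is a minimal sketch assuming the repository was cloned into `faster-whisper-small-haitian` as above; values such as `beam_size` and `compute_type` are only illustrative.
```python
from faster_whisper import WhisperModel

# Load the converted model on CPU; compute_type can be adjusted as needed.
model = WhisperModel("faster-whisper-small-haitian", device="cpu", compute_type="float32")

# Transcribe the audio file; segments are yielded lazily.
segments, info = model.transcribe("audio1.wav", beam_size=5)
print("Detected language %s with probability %f" % (info.language, info.language_probability))
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```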

## Conversion details

The original model was converted with the following command:

```bash
ct2-transformers-converter --model YassineKader/whisper-small-haitian --output_dir faster-whisper-small-ht --copy_files tokenizer.json --quantization float32
```

Note that the model weights are saved in FP32, matching the `--quantization float32` flag above. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
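
For example, a lower-precision compute type can be requested at load time without re-converting the model. This is a minimal sketch; `int8` is just one of the compute types CTranslate2 supports.
```python
import ctranslate2

# Load the converted model and quantize the weights on the fly at load time.
model = ctranslate2.models.Whisper(
    "faster-whisper-small-haitian",
    device="cpu",
    compute_type="int8",
)
```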

## More information

**For more information about the base Whisper small model, see its [model card](https://huggingface.co./openai/whisper-small).**