---
language:
  - ht
tags:
  - audio
  - automatic-speech-recognition
license: mit
library_name: ctranslate2
---

Whisper small model for CTranslate2

This repository contains the conversion of YassineKader/whisper-small-haitian to the CTranslate2 model format.

This model can be used in CTranslate2 or projects based on CTranslate2 such as faster-whisper.
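
For example, a minimal faster-whisper sketch (the local model path, device, compute type, and audio file name below are illustrative assumptions, not part of this repository):

from faster_whisper import WhisperModel

# Point faster-whisper at the cloned repository directory.
model = WhisperModel("faster-whisper-small-haitian", device="cpu", compute_type="float32")

# transcribe() handles chunking, language detection, and decoding internally.
segments, info = model.transcribe("audio1.wav", beam_size=5)
print("Detected language %s with probability %f" % (info.language, info.language_probability))
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))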

Example

# Clone the repository to get the converted model files.
git clone https://huggingface.co./YassineKader/faster-whisper-small-haitian

import ctranslate2
import librosa
import transformers
from datetime import datetime
# Load and resample the audio file.
audio, _ = librosa.load("audio1.wav", sr=16000, mono=True)
# Compute the features of the first 30 seconds of audio.
processor = transformers.WhisperProcessor.from_pretrained("YassineKader/whisper-small-haitian")
inputs = processor(audio, return_tensors="np", sampling_rate=16000)
features = ctranslate2.StorageView.from_array(inputs.input_features)
# Load the model on CPU.
model = ctranslate2.models.Whisper("faster-whisper-small-haitian")
# Detect the language.
results = model.detect_language(features)
language, probability = results[0][0]
print("Detected language %s with probability %f" % (language, probability))
print(datetime.now())  # Timestamp before generation, to time the transcription.
# Describe the task in the prompt.
# See the prompt format in https://github.com/openai/whisper.
prompt = processor.tokenizer.convert_tokens_to_ids(
    [
        "<|startoftranscript|>",
        language,
        "<|transcribe|>",
        "<|notimestamps|>",  # Remove this token to generate timestamps.
    ]
)
# Run generation for the 30-second window.
results = model.generate(features, [prompt])
transcription = processor.decode(results[0].sequences_ids[0])

print(datetime.now())  # Timestamp after generation.
print(transcription)
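
As noted in the prompt comment above, removing the "<|notimestamps|>" token makes the model emit timestamp tokens. A minimal sketch of that variant, reusing the features and processor from the example (decode_with_timestamps is the transformers tokenizer option that keeps the timestamp markers in the decoded text):

# Same prompt without "<|notimestamps|>", so timestamp tokens are generated.
prompt_with_timestamps = processor.tokenizer.convert_tokens_to_ids(
    ["<|startoftranscript|>", language, "<|transcribe|>"]
)
results = model.generate(features, [prompt_with_timestamps])
# Markers such as <|0.00|> are kept in the decoded output.
print(processor.decode(results[0].sequences_ids[0], decode_with_timestamps=True))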

Conversion details

The original model was converted with the following command:

ct2-transformers-converter --model YassineKader/whisper-small-haitian --output_dir faster-whisper-small-ht --copy_files tokenizer.json --quantization float32

Note that the model weights are saved in FP32 (the --quantization float32 option above). This type can be changed when the model is loaded using the compute_type option in CTranslate2.
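
For instance, a short sketch of loading the converted model with a different compute type (int8 here is just an illustrative choice):

import ctranslate2

# Weights are converted to the requested type when the model is loaded.
model = ctranslate2.models.Whisper("faster-whisper-small-haitian", device="cpu", compute_type="int8")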

More information

For more information about the original model, see its model card: https://huggingface.co./YassineKader/whisper-small-haitian