
This is a quantized version of distil-whisper-medium.en, converted with CTranslate2 to 8-bit integer (int8) weights for faster inference with minimal loss in accuracy. It is well suited to speech-to-text tasks where speed is critical.
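As a minimal usage sketch, the model should load with the faster-whisper library, which runs CTranslate2 models directly. This assumes the repository layout is faster-whisper compatible and that `audio.wav` is a local file you supply; adjust `device` and `compute_type` for your hardware.

```python
# Minimal sketch: transcribe an audio file with the int8-quantized model.
# Assumptions: the repo works with faster-whisper, and "audio.wav" exists locally.
from faster_whisper import WhisperModel

# compute_type="int8" matches the 8-bit quantization of this repository.
model = WhisperModel("Rejekts/fastest-distil-whisper-medium.en", compute_type="int8")

# transcribe() returns a generator of segments plus transcription info.
segments, info = model.transcribe("audio.wav")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```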

