---
license: apache-2.0
language:
- ja
library_name: mlx
tags:
- whisper
---

# kotoba-whisper-v1.1-mlx

This repository contains a converted `mlx-whisper` model of [kotoba-whisper-v1.1](https://huggingface.co./kotoba-tech/kotoba-whisper-v1.1), suitable for running on Apple Silicon.

Because `kotoba-whisper-v1.1` is derived from `distil-large-v3`, this model is significantly faster than [mlx-community/whisper-large-v3-mlx](https://huggingface.co./mlx-community/whisper-large-v3-mlx) with little loss of accuracy for Japanese transcription.

**CAUTION: The original model ships with a custom pipeline implementation, which this repository does NOT include. Features that depend on it, such as `stable_ts` and `punctuator`, may NOT work.**

## Usage

```sh
pip install mlx-whisper
```

```py
import mlx_whisper

mlx_whisper.transcribe(speech_file, path_or_hf_repo="kaiinui/kotoba-whisper-v1.1-mlx")
```

## Related Links

* [kotoba-whisper-v1.1](https://huggingface.co./kotoba-tech/kotoba-whisper-v1.1) (the original model)
* [mlx-whisper](https://github.com/ml-explore/mlx-examples/tree/main/whisper)
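
As a small extension of the snippet in the Usage section above, the sketch below shows one way to use the transcription result. It assumes `mlx_whisper.transcribe` follows the usual Whisper-style API: extra decode options such as `language` are passed as keyword arguments, and the return value is a dict with `"text"` and `"segments"` keys. The file name `speech.wav` is only a placeholder.

```py
import mlx_whisper

# Transcribe a local audio file; language="ja" skips automatic language detection
# (assumes decode options are forwarded as keyword arguments, as in openai-whisper).
result = mlx_whisper.transcribe(
    "speech.wav",
    path_or_hf_repo="kaiinui/kotoba-whisper-v1.1-mlx",
    language="ja",
)

# Full transcript as a single string.
print(result["text"])

# Per-segment timestamps, if the result follows the usual Whisper layout.
for seg in result.get("segments", []):
    print(f'{seg["start"]:.2f}-{seg["end"]:.2f}: {seg["text"]}')
```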