---
license: apache-2.0
language:
- th
- en
base_model:
- openai/whisper-large-v3
pipeline_tag: automatic-speech-recognition
library_name: transformers
metrics:
- wer
---

# Pathumma Whisper Large V3 (TH)

## Model Description

Additional information is needed

## Quickstart

You can transcribe audio files using the [`pipeline`](https://huggingface.co./docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) class with the following code snippet:

```python
import torch
from transformers import pipeline

# Use GPU and bfloat16 if available; otherwise fall back to CPU and float32.
device = "cuda" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.bfloat16 if torch.cuda.is_available() else torch.float32

lang = "th"
task = "transcribe"

pipe = pipeline(
    task="automatic-speech-recognition",
    model="nectec/Pathumma-whisper-th-large-v3",
    torch_dtype=torch_dtype,
    device=device,
)

# Force Thai transcription instead of letting the model auto-detect the language.
pipe.model.config.forced_decoder_ids = pipe.tokenizer.get_decoder_prompt_ids(language=lang, task=task)

text = pipe("audio_path.wav")["text"]
print(text)
```

For recordings longer than 30 seconds, a chunked long-form sketch is provided at the end of this card.

## Limitations and Future Work

Additional information is needed

## Acknowledgements

We extend our appreciation to the research teams behind the open speech models this work builds on, including AIResearch, BiodatLab, Looloo Technology, SCB 10X, and OpenAI. We thank Dr. Titipat Achakulwisut of BiodatLab for the evaluation pipeline, and ThaiSC, the NSTDA Supercomputer Center, for providing the LANTA supercomputer used for model training, fine-tuning, and evaluation.

## Pathumma Audio Team

*Pattara Tipaksorn*, Wayupuk Sommuang, Oatsada Chatthong, *Kwanchiva Thangthai*

## Citation

```
@misc{tipaksorn2024PathummaWhisper,
  title     = {{Pathumma Whisper Large V3 (TH)}},
  author    = {Pattara Tipaksorn and Wayupuk Sommuang and Oatsada Chatthong and Kwanchiva Thangthai},
  url       = {https://huggingface.co./nectec/Pathumma-whisper-th-large-v3},
  publisher = {Hugging Face},
  year      = {2024}
}
```
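
## Long-Form Transcription

The `pipeline` class also supports chunked inference for recordings longer than 30 seconds. The snippet below is a minimal sketch that reuses the `pipe` object from the Quickstart; the chunk length, batch size, and the file name `long_audio.wav` are illustrative choices, not values prescribed by this model card.

```python
# Reuses `pipe` and the forced_decoder_ids setup from the Quickstart above.
# chunk_length_s splits the audio into 30-second windows that are merged after decoding;
# batch_size controls how many windows are transcribed in parallel.
result = pipe(
    "long_audio.wav",  # illustrative path, replace with your own recording
    chunk_length_s=30,
    batch_size=8,
)
print(result["text"])
```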