---
language:
- th
- en
license: apache-2.0
library_name: transformers
base_model:
- Qwen/Qwen2.5-14B-Instruct
- Qwen/Qwen2.5-14B
pipeline_tag: text-generation
---

# Tsunami-1.0-14B-Instruct

**TSUNAMI**: Transformative Semantic Understanding and Natural Augmentation Model for Intelligence.

The full name **TSUNAMI** was created by ChatGPT.

---

### Information

**Tsunami-1.0-14B-Instruct** is a Thai large language model fine-tuned from **Qwen2.5-14B** on a Thai dataset.

---

### Author

- Pollakrit Lorprasertkul | game.pollakrit@gmail.com

---

### Performance Evaluation

Below are the benchmark results of **Tsunami-1.0-14B-Instruct** compared to similar models in its class:

| Model | Average | Thai Exam | M3Exam |
| --- | --- | --- | --- |
| Qwen2.5-14B-Instruct | 58.45 | 57.35 | 59.55 |
| Meta-Llama-3.1-70B-Instruct | 59.38 | 58.23 | 60.52 |
| llama-3-typhoon-v1.5x-70b-instruct | 59.34 | 58.76 | 59.92 |
| openthaigpt1.5-14b-instruct | 60.41 | 58.41 | 62.41 |
| **Tsunami-1.0-14B-Instruct** | **62.05** | **61.06** | **63.05** |

---

### Prompt Template

This model uses the `ChatML` prompt template:

```
<|im_start|>system
{System}<|im_end|>
<|im_start|>user
{User}<|im_end|>
<|im_start|>assistant
{Assistant}
```

---

### How to use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "Tsunami-th/Tsunami-1.0-14B-Instruct"

# Load the model and tokenizer; device_map="auto" places weights on available devices
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "สวัสดีครับ"}  # "Hello" in Thai
]

# Render the conversation with the ChatML template shown above
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

inputs = tokenizer(text, return_tensors="pt")
inputs = inputs.to(model.device)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the prompt
response = tokenizer.decode(output[0, len(inputs['input_ids'][0]):], skip_special_tokens=True)
```

---
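### Verify the prompt template

To confirm that the tokenizer applies the ChatML template shown above, you can render a conversation without tokenizing and inspect the resulting string. This is a minimal check, assuming the tokenizer shipped with the model carries the template in its chat-template config:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Tsunami-th/Tsunami-1.0-14B-Instruct")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"},
]

# Should print the ChatML-formatted prompt, ending with the
# "<|im_start|>assistant" generation prefix.
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```

---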
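### Streaming output

The usage example above decodes the completion only after `generate` returns. For interactive use, you can print tokens as they are produced with `transformers`' built-in `TextStreamer`. The sketch below reuses the same model, tokenizer, and generation settings as the example above; the streamer arguments are illustrative rather than prescribed by this card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
import torch

model_name = "Tsunami-th/Tsunami-1.0-14B-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "สวัสดีครับ"}  # "Hello" in Thai
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

# TextStreamer prints decoded tokens to stdout as they are generated;
# skip_prompt=True suppresses echoing the input prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

with torch.no_grad():
    model.generate(**inputs, max_new_tokens=512, streamer=streamer)
```

---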