gamepollakrit committed 50cf305 (1 parent: 53998c9)

Create README.md

Files changed (1): README.md (+73, -0)
---
language:
- th
- en
license: apache-2.0
library_name: transformers
base_model:
- Qwen/Qwen2.5-14B-Instruct
- Qwen/Qwen2.5-14B
pipeline_tag: text-generation
---
<img src="./Tsunami.webp" alt="Tsunami Model" width="800" style="margin-left: auto; margin-right: auto; display: block;"/>

# Tsunami-1.0-14B-Instruct
**TSUNAMI**: Transformative Semantic Understanding and Natural Augmentation Model for Intelligence.

The full name **TSUNAMI** was coined by ChatGPT.

---

### Information
**Tsunami-1.0-14B-Instruct** is a Thai large language model fine-tuned from **Qwen2.5-14B** on a Thai dataset.

---

### Author
- Pollakrit Lorprasertkul | [email protected]

---

### Prompt Template

This model uses the `ChatML` prompt template:

```
<|im_start|>system
{System}<|im_end|>
<|im_start|>user
{User}<|im_end|>
<|im_start|>assistant
{Assistant}
```
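
You normally do not assemble this string by hand: `tokenizer.apply_chat_template` renders it from a list of messages, as the usage example below does. A minimal sanity check, assuming the chat template bundled with the model's tokenizer config, is to print the rendered prompt:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Tsunami-th/Tsunami-1.0-14B-Instruct")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"},
]

# tokenize=False returns the raw prompt string; add_generation_prompt=True
# appends the opening <|im_start|>assistant tag so the model replies as the
# assistant. The output should match the ChatML layout shown above.
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```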

### How to use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "Tsunami-th/Tsunami-1.0-14B-Instruct"

# Load the model in its native precision and spread it across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "สวัสดีครับ"}
]

# Render the conversation into the ChatML prompt shown above.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

inputs = tokenizer(text, return_tensors="pt")
inputs = inputs.to(model.device)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the echoed prompt.
response = tokenizer.decode(output[0, len(inputs['input_ids'][0]):], skip_special_tokens=True)
```
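
For interactive use you may prefer to stream tokens as they are generated rather than wait for the full completion. A minimal sketch, reusing the `model`, `tokenizer`, and `inputs` objects from the snippet above:

```python
from transformers import TextStreamer

# Print decoded tokens to stdout as they arrive; skip_prompt suppresses the
# echoed input, and skip_special_tokens drops ChatML control tokens.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

with torch.no_grad():
    model.generate(**inputs, max_new_tokens=512, streamer=streamer)
```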

---