---
language:
- ms
---

# Full-Parameter Finetuning of Mistral 7B (32768 context length) on a Malaysian instructions dataset

README at https://github.com/mesolitica/malaya/tree/5.1/session/mistral#instructions-7b-16384-context-length

We use the exact Mistral Instruct chat template.

WandB: https://wandb.ai/huseinzol05/fpf-mistral-7b-hf-instructions-16k?workspace=user-huseinzol05

WandB report: https://wandb.ai/huseinzol05/fpf-tinyllama-1.1b-hf-instructions-16k/reports/Instruction-finetuning--Vmlldzo2MzQ3OTcz

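As a quick illustration of that template (with hypothetical turns, not taken from the dataset): each user message is wrapped in `[INST] ... [/INST]`, the assistant reply follows immediately, and every completed exchange is closed with `</s>`. The final, unanswered user turn ends with `[/INST]` so the model continues from there.

```python
# A minimal sketch of the Mistral Instruct prompt shape.
# The turns below are made up purely for illustration.
prompt = (
    '<s>'                                   # beginning-of-sequence token
    '[INST] Hello [/INST] Hi there!</s>'    # one completed user/assistant exchange
    '[INST] kwsp tu apa [/INST]'            # final user turn, awaiting generation
)
```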
## Dataset

Dataset gathered at https://huggingface.co/collections/mesolitica/malaysian-synthetic-dataset-656c2673fe7fe0b1e9e25fe2

Notebook to prepare the dataset at https://github.com/mesolitica/malaysian-dataset/blob/master/llm-instruction/combine-malay-no-alignment-multitasks-partial-ultrachat-v2.ipynb

## Limitations

This model is a quick demonstration that the base model can be easily fine-tuned to achieve some performance. It has minimal moderation mechanisms.

## How-to

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch
import json

def parse_mistral_chat(messages, function_call=None):
    # Build a Mistral Instruct prompt from a list of chat messages.
    user_query = messages[-1]['content']

    users, assistants = [], []
    for q in messages[:-1]:
        if q['role'] == 'user':
            users.append(q['content'])
        elif q['role'] == 'assistant':
            assistants.append(q['content'])

    texts = ['<s>']

    # Optional function-calling definitions, serialized as pretty-printed JSON.
    if function_call:
        fs = []
        for f in function_call:
            f = json.dumps(f, indent=4)
            fs.append(f)
        fs = '\n\n'.join(fs)
        texts.append(f'\n[FUNCTIONCALL]\n{fs}\n')

    # Completed exchanges each end with </s>; the final user turn stays open.
    for u, a in zip(users, assistants):
        texts.append(f'[INST] {u.strip()} [/INST] {a.strip()}</s>')

    texts.append(f'[INST] {user_query.strip()} [/INST]')
    prompt = ''.join(texts).strip()
    return prompt

# NF4 4-bit quantization config so the 7B model fits on a single GPU.
TORCH_DTYPE = 'bfloat16'
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=getattr(torch, TORCH_DTYPE)
)

tokenizer = AutoTokenizer.from_pretrained('mesolitica/malaysian-mistral-7b-32k-instructions-v3')
model = AutoModelForCausalLM.from_pretrained(
    'mesolitica/malaysian-mistral-7b-32k-instructions-v3',
    use_flash_attention_2=True,
    quantization_config=nf4_config
)

messages = [
    {'role': 'user', 'content': 'kwsp tu apa'}
]
prompt = parse_mistral_chat(messages)
# add_special_tokens=False because the prompt already contains <s>.
inputs = tokenizer([prompt], return_tensors='pt', add_special_tokens=False).to('cuda')
generate_kwargs = dict(
    inputs,
    max_new_tokens=1024,
    top_p=0.95,
    top_k=50,
    temperature=0.9,
    do_sample=True,
    num_beams=1,
)
r = model.generate(**generate_kwargs)
tokenizer.decode(r[0])
```
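Since `parse_mistral_chat` is pure string manipulation, its output can be checked without loading the model. The sketch below reproduces the function (minus unused branches) and runs it on a hypothetical multi-turn conversation with a made-up function-call definition:

```python
import json

def parse_mistral_chat(messages, function_call=None):
    # Same formatting logic as the prompt builder above.
    user_query = messages[-1]['content']

    users, assistants = [], []
    for q in messages[:-1]:
        if q['role'] == 'user':
            users.append(q['content'])
        elif q['role'] == 'assistant':
            assistants.append(q['content'])

    texts = ['<s>']

    if function_call:
        fs = '\n\n'.join(json.dumps(f, indent=4) for f in function_call)
        texts.append(f'\n[FUNCTIONCALL]\n{fs}\n')

    for u, a in zip(users, assistants):
        texts.append(f'[INST] {u.strip()} [/INST] {a.strip()}</s>')

    texts.append(f'[INST] {user_query.strip()} [/INST]')
    return ''.join(texts).strip()

# Hypothetical conversation, purely for illustration.
messages = [
    {'role': 'user', 'content': 'Hello'},
    {'role': 'assistant', 'content': 'Hi!'},
    {'role': 'user', 'content': 'kwsp tu apa'},
]
prompt = parse_mistral_chat(messages)
# -> '<s>[INST] Hello [/INST] Hi!</s>[INST] kwsp tu apa [/INST]'

# With a (made-up) function definition, a [FUNCTIONCALL] section is prepended.
fc_prompt = parse_mistral_chat(messages, function_call=[{'name': 'get_weather'}])
```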

```text
<s> [INST] kwsp tu apa [/INST]KWSP bermaksud Kumpulan Wang Simpanan Pekerja. Ia adalah sebuah institusi simpanan persaraan yang ditubuhkan oleh Kementerian Kewangan Malaysia untuk tujuan mengumpul simpanan ahli untuk dibayar pada umur persaraan, penuh atau penuh persaraan penuh. KWSP ditubuhkan pada tahun 1951 dan mula beroperasi pada tahun 1952. KWSP adalah salah satu institusi simpanan persaraan terbesar di dunia, dengan pangkalan ahli sekitar 14 juta ahli.</s>
```

In English, the model's reply says: KWSP stands for Kumpulan Wang Simpanan Pekerja (the Employees Provident Fund), a retirement savings institution set up by Malaysia's Ministry of Finance to collect members' savings for payout at retirement age; it was established in 1951, began operating in 1952, and is one of the largest retirement savings institutions in the world, with around 14 million members.