Ichsan2895 committed on
Commit
3f3b648
1 Parent(s): 69c92f2

Upload README.md

Files changed (1)
  1. README.md +139 -0
README.md ADDED
---
datasets:
- wikipedia
language:
- id
- en
pipeline_tag: text-generation
---

# Happy to announce the release of our first model, Merak-7B!

Merak-7B is a large language model for the Indonesian language.

It is based on Meta's Llama-2-7B-Chat-HF and fine-tuned on Indonesian Wikipedia articles that I cleaned beforehand.

Leveraging QLoRA (QLoRA: Efficient Finetuning of Quantized LLMs), Merak-7B is able to run with 16 GB of VRAM.
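
For readers curious how a QLoRA setup fits in that VRAM budget, here is a minimal sketch using `peft` and `bitsandbytes`: the base model is loaded in 4-bit NF4 and only small LoRA adapters are trained. The LoRA hyperparameters below (rank, alpha, target modules) are illustrative assumptions, not the values used to train Merak-7B.
```
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "meta-llama/Llama-2-7b-chat-hf"

# Load the frozen base model in 4-bit NF4, the quantization QLoRA builds on.
bnb_config = BitsAndBytesConfig(load_in_4bit=True,
                                bnb_4bit_quant_type="nf4",
                                bnb_4bit_use_double_quant=True,
                                bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(base_id,
                                             quantization_config=bnb_config,
                                             device_map="auto")
model = prepare_model_for_kbit_training(model)

# Train only small LoRA adapters on top of the quantized weights;
# these hyperparameters are examples, not Merak-7B's actual settings.
lora_config = LoraConfig(r=16,
                         lora_alpha=32,
                         lora_dropout=0.05,
                         bias="none",
                         target_modules=["q_proj", "v_proj"],
                         task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction is trainable
```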

Licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0), Merak-7B empowers AI enthusiasts and researchers alike.

Big thanks to all my friends and the communities that helped build our first model. Feel free to ask me about the model, and please share the news on your social media.

A Google Colab notebook is coming soon.

## HOW TO USE
### Installation
Make sure your system has a CUDA driver installed, along with Python 3.10 and PyTorch 2. Then install these libraries in a terminal:
```
pip install bitsandbytes==0.39.1
pip install transformers==4.31.0
pip install git+https://github.com/huggingface/peft.git
pip install accelerate==0.20.3
pip install einops==0.6.1 scipy sentencepiece datasets
```
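Before loading the model, a quick sanity check (a minimal optional snippet; it assumes at least one CUDA-capable GPU) confirms that PyTorch can actually see your GPU:
```
import torch

# Verify the CUDA driver and the PyTorch build can see the GPU;
# the usage examples below assume a CUDA device is available.
print(torch.__version__)
print(torch.cuda.is_available())      # should print True
print(torch.cuda.get_device_name(0))  # your GPU model
print(f"{torch.cuda.get_device_properties(0).total_memory / 1e9:.1f} GB VRAM")
```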
### Using BitsAndBytes 4-bit quantization (runs on a GPU with >= 10 GB of VRAM)
```
import torch
from transformers import AutoConfig, AutoModelForCausalLM, BitsAndBytesConfig, LlamaTokenizer

model_id = "Ichsan2895/Merak-7B-v1"
config = AutoConfig.from_pretrained(model_id)

# 4-bit NF4 quantization with double quantization, as in the QLoRA paper.
BNB_CONFIG = BitsAndBytesConfig(load_in_4bit=True,
                                bnb_4bit_compute_dtype=torch.bfloat16,
                                bnb_4bit_use_double_quant=True,
                                bnb_4bit_quant_type="nf4")

model = AutoModelForCausalLM.from_pretrained(model_id,
                                             quantization_config=BNB_CONFIG,
                                             device_map="auto",
                                             trust_remote_code=True)

tokenizer = LlamaTokenizer.from_pretrained(model_id)

def generate_response(question: str) -> str:
    # Merak-7B expects this prompt template.
    prompt = f"<|prompt|>{question}<|answer|>".strip()

    encoding = tokenizer(prompt, return_tensors='pt').to("cuda")
    with torch.inference_mode():
        outputs = model.generate(input_ids=encoding.input_ids,
                                 attention_mask=encoding.attention_mask,
                                 eos_token_id=tokenizer.pad_token_id,
                                 do_sample=False,  # beam search; temperature is ignored when sampling is off
                                 num_beams=2,
                                 temperature=0.3,
                                 repetition_penalty=1.2,
                                 max_length=200)

    response = tokenizer.decode(outputs[0], skip_special_tokens=True)

    # Return only the text after the <|answer|> marker.
    assistant_start = "<|answer|>"
    response_start = response.find(assistant_start)
    return response[response_start + len(assistant_start):].strip()

# "Who wrote the text of Indonesia's proclamation of independence?"
prompt = "Siapa penulis naskah proklamasi kemerdekaan Indonesia?"
print(generate_response(prompt))
```
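
If you want to watch tokens appear as they are generated, you can attach transformers' `TextStreamer` to the same call. This optional sketch reuses the `model` and `tokenizer` loaded above; streaming requires single-beam generation, so it uses greedy decoding instead of `num_beams=2`, and the sample question ("What is the capital of Indonesia?") is just an example of mine.
```
from transformers import TextStreamer

# Print tokens to stdout as they are generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True)

encoding = tokenizer("<|prompt|>Apa ibu kota Indonesia?<|answer|>",
                     return_tensors='pt').to("cuda")
with torch.inference_mode():
    model.generate(input_ids=encoding.input_ids,
                   attention_mask=encoding.attention_mask,
                   streamer=streamer,
                   max_length=200)
```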


### In my experience, you get better answers without BitsAndBytes 4-bit quantization, but it uses more VRAM
```
import torch
from transformers import AutoConfig, AutoModelForCausalLM, LlamaTokenizer

model_id = "Ichsan2895/Merak-7B-v1"
config = AutoConfig.from_pretrained(model_id)

# Full-precision load; pass torch_dtype=torch.float16 to roughly halve VRAM usage.
model = AutoModelForCausalLM.from_pretrained(model_id,
                                             device_map="auto",
                                             trust_remote_code=True)

tokenizer = LlamaTokenizer.from_pretrained(model_id)

def generate_response(question: str) -> str:
    # Merak-7B expects this prompt template.
    prompt = f"<|prompt|>{question}<|answer|>".strip()

    encoding = tokenizer(prompt, return_tensors='pt').to("cuda")
    with torch.inference_mode():
        outputs = model.generate(input_ids=encoding.input_ids,
                                 attention_mask=encoding.attention_mask,
                                 eos_token_id=tokenizer.pad_token_id,
                                 do_sample=False,  # beam search; temperature is ignored when sampling is off
                                 num_beams=2,
                                 temperature=0.3,
                                 repetition_penalty=1.2,
                                 max_length=200)

    response = tokenizer.decode(outputs[0], skip_special_tokens=True)

    # Return only the text after the <|answer|> marker.
    assistant_start = "<|answer|>"
    response_start = response.find(assistant_start)
    return response[response_start + len(assistant_start):].strip()

# "Who wrote the text of Indonesia's proclamation of independence?"
prompt = "Siapa penulis naskah proklamasi kemerdekaan Indonesia?"
print(generate_response(prompt))
```
## CITATION
```
@article{touvron2023llama,
  title   = {Llama 2: Open Foundation and Fine-Tuned Chat Models},
  author  = {Touvron, Hugo and others},
  journal = {arXiv preprint arXiv:2307.09288},
  year    = {2023}
}

@online{wikidump,
  author = {Wikimedia Foundation},
  title  = {Wikimedia Downloads},
  url    = {https://dumps.wikimedia.org}
}

@article{dettmers2023qlora,
  title   = {QLoRA: Efficient Finetuning of Quantized LLMs},
  author  = {Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
  journal = {arXiv preprint arXiv:2305.14314},
  year    = {2023}
}
```