---
datasets:
- wikipedia
language:
- id
- en
pipeline_tag: text-generation
---

# Happy to announce the release of our first model, Merak-7B!

Merak-7B is a Large Language Model for the Indonesian language.

This model is based on Meta's Llama-2-7B-Chat-HF and fine-tuned on a set of Indonesian Wikipedia articles that I cleaned beforehand.

Leveraging QLoRA (QLoRA: Efficient Finetuning of Quantized LLMs), Merak-7B can be fine-tuned and run with 16 GB of VRAM.
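
For readers curious what a QLoRA setup looks like in practice, here is a minimal sketch: the base model is loaded in 4-bit NF4 and low-rank adapters are attached with `peft`. The LoRA hyperparameters (`r`, `lora_alpha`, `target_modules`, dropout) below are illustrative assumptions, not the exact values used to train Merak-7B.

```
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 4-bit NF4 so fine-tuning fits in roughly 16 GB of VRAM.
base_id = "meta-llama/Llama-2-7b-chat-hf"
bnb_config = BitsAndBytesConfig(load_in_4bit=True,
                                bnb_4bit_quant_type="nf4",
                                bnb_4bit_use_double_quant=True,
                                bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(base_id,
                                             quantization_config=bnb_config,
                                             device_map="auto")
model = prepare_model_for_kbit_training(model)

# Attach low-rank adapters; these hyperparameters are illustrative defaults.
lora_config = LoraConfig(r=16, lora_alpha=32,
                         target_modules=["q_proj", "v_proj"],
                         lora_dropout=0.05, bias="none",
                         task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trained
```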

Licensed under Creative Commons Attribution-ShareAlike-NonCommercial (CC-BY-SA-NC 4.0), Merak-7B empowers AI enthusiasts and researchers alike.

Big thanks to all my friends and communities that helped build our first model. Feel free to ask me about the model, and please share the news on your social media.

Google Colab Notebook coming soon

## HOW TO USE
### Installation
Please make sure you have the CUDA driver, Python 3.10, and PyTorch 2 installed on your system. Then install these libraries in a terminal:
```
pip install bitsandbytes==0.39.1
pip install transformers==4.31.0
pip install git+https://github.com/huggingface/peft.git
pip install accelerate==0.20.3
pip install einops==0.6.1 scipy sentencepiece datasets
```
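
Before loading the model, a quick sanity check like the one below (my own suggestion, not part of the original instructions) confirms that PyTorch sees a CUDA-capable GPU and shows how much VRAM is available:

```
import torch

# Verify the environment before downloading several GB of weights.
print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    print("VRAM (GB):", round(torch.cuda.get_device_properties(0).total_memory / 1024**3, 1))
```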
### Using BitsAndBytes 4-bit quantization (runs on a GPU with >= 10 GB of VRAM)
```
import torch
from transformers import AutoTokenizer, AutoConfig, AutoModelForCausalLM, BitsAndBytesConfig, LlamaTokenizer
from peft import PeftModel, PeftConfig

model_id = "Ichsan2895/Merak-7B-v1"
config = AutoConfig.from_pretrained(model_id)

# Load the model with 4-bit NF4 quantization so it fits in >= 10 GB of VRAM.
BNB_CONFIG = BitsAndBytesConfig(load_in_4bit=True,
                                bnb_4bit_compute_dtype=torch.bfloat16,
                                bnb_4bit_use_double_quant=True,
                                bnb_4bit_quant_type="nf4")

model = AutoModelForCausalLM.from_pretrained(model_id,
                                             quantization_config=BNB_CONFIG,
                                             device_map="auto",
                                             trust_remote_code=True)

tokenizer = LlamaTokenizer.from_pretrained(model_id)

def generate_response(question: str) -> str:
  # Wrap the question in the prompt/answer template the model expects.
  prompt = f"<|prompt|>{question}<|answer|>".strip()

  encoding = tokenizer(prompt, return_tensors='pt').to("cuda")
  with torch.inference_mode():
    outputs = model.generate(input_ids=encoding.input_ids,
                             attention_mask=encoding.attention_mask,
                             eos_token_id=tokenizer.pad_token_id,
                             do_sample=False,
                             num_beams=2,
                             temperature=0.3,
                             repetition_penalty=1.2,
                             max_length=200)

  response = tokenizer.decode(outputs[0], skip_special_tokens=True)

  # Keep only the text generated after the answer marker.
  assistant_start = "<|answer|>"
  response_start = response.find(assistant_start)
  return response[response_start + len(assistant_start):].strip()

prompt = "Siapa penulis naskah proklamasi kemerdekaan Indonesia?"
print(generate_response(prompt))
```


### In my experience, skipping BitsAndBytes 4-bit quantization gives better answers, but it uses more VRAM
```
import torch
from transformers import AutoTokenizer, AutoConfig, AutoModelForCausalLM, BitsAndBytesConfig, LlamaTokenizer
from peft import PeftModel, PeftConfig

model_id = "Ichsan2895/Merak-7B-v1"
config = AutoConfig.from_pretrained(model_id)

# Load the unquantized model; this needs considerably more VRAM.
model = AutoModelForCausalLM.from_pretrained(model_id,
                                             device_map="auto",
                                             trust_remote_code=True)

tokenizer = LlamaTokenizer.from_pretrained(model_id)

def generate_response(question: str) -> str:
  # Wrap the question in the prompt/answer template the model expects.
  prompt = f"<|prompt|>{question}<|answer|>".strip()

  encoding = tokenizer(prompt, return_tensors='pt').to("cuda")
  with torch.inference_mode():
    outputs = model.generate(input_ids=encoding.input_ids,
                             attention_mask=encoding.attention_mask,
                             eos_token_id=tokenizer.pad_token_id,
                             do_sample=False,
                             num_beams=2,
                             temperature=0.3,
                             repetition_penalty=1.2,
                             max_length=200)

  response = tokenizer.decode(outputs[0], skip_special_tokens=True)

  # Keep only the text generated after the answer marker.
  assistant_start = "<|answer|>"
  response_start = response.find(assistant_start)
  return response[response_start + len(assistant_start):].strip()

prompt = "Siapa penulis naskah proklamasi kemerdekaan Indonesia?"
print(generate_response(prompt))
```
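
If the full float32 weights do not fit on your GPU, a middle ground (my suggestion, not something tested for this card) is to load the unquantized model in half precision, which roughly halves the VRAM requirement and typically changes answers very little:

```
import torch
from transformers import AutoModelForCausalLM, LlamaTokenizer

model_id = "Ichsan2895/Merak-7B-v1"

# Load the unquantized model in float16 instead of the default float32.
model = AutoModelForCausalLM.from_pretrained(model_id,
                                             torch_dtype=torch.float16,
                                             device_map="auto",
                                             trust_remote_code=True)
tokenizer = LlamaTokenizer.from_pretrained(model_id)
```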
## CITATION
```
@article{touvron2023llama,
  title   = {Llama 2: Open Foundation and Fine-Tuned Chat Models},
  author  = {Touvron, Hugo and others},
  journal = {arXiv preprint arXiv:2307.09288},
  year    = {2023}
}

@online{wikidump,
  author = {Wikimedia Foundation},
  title  = {Wikimedia Downloads},
  url    = {https://dumps.wikimedia.org}
}

@article{dettmers2023qlora,
  title   = {QLoRA: Efficient Finetuning of Quantized LLMs},
  author  = {Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
  journal = {arXiv preprint arXiv:2305.14314},
  year    = {2023}
}
```