---
datasets:
  - wikipedia
language:
  - id
  - en
pipeline_tag: text-generation
---

Happy to announce the release of our first model, Merak-7B!

Merak-7B is a Large Language Model for the Indonesian language.

This model is based on Meta's Llama-2-7B-Chat-HF and fine-tuned on a set of Indonesian Wikipedia articles that I cleaned beforehand.

Leveraging QLoRA (QLoRA: Efficient Finetuning of Quantized LLMs), Merak-7B is able to run with 16 GB of VRAM.
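
For readers curious how a QLoRA fine-tune of this kind is typically set up with peft and bitsandbytes, here is a minimal sketch; the base checkpoint name, rank, alpha, and target modules below are illustrative assumptions, not the actual configuration used to train Merak-7B.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 4-bit NF4 precision (the core idea of QLoRA).
bnb_config = BitsAndBytesConfig(load_in_4bit=True,
                                bnb_4bit_quant_type="nf4",
                                bnb_4bit_use_double_quant=True,
                                bnb_4bit_compute_dtype=torch.bfloat16)
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf",
                                                  quantization_config=bnb_config,
                                                  device_map="auto")
base_model = prepare_model_for_kbit_training(base_model)

# Attach small low-rank adapters; only these are trained, which is why the
# fine-tune fits in roughly 16 GB of VRAM.
# r, lora_alpha, and target_modules are illustrative, not Merak-7B's settings.
lora_config = LoraConfig(r=16,
                         lora_alpha=32,
                         lora_dropout=0.05,
                         target_modules=["q_proj", "v_proj"],
                         task_type="CAUSAL_LM")
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()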

Licensed under Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA 4.0), Merak-7B empowers AI enthusiasts and researchers alike.

Big thanks to all my friends and communities who helped build our first model. Feel free to ask me about the model, and please share the news on your social media.

Google Colab Notebook coming soon

HOW TO USE

Installation

Please make sure you have the CUDA driver, Python 3.10, and PyTorch 2 installed on your system. Then install these libraries in a terminal:

pip install bitsandbytes==0.39.1
pip install transformers==4.31.0
pip install git+https://github.com/huggingface/peft.git
pip install accelerate==0.20.3
pip install einops==0.6.1 scipy sentencepiece datasets
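
As an optional sanity check (not part of the original instructions), you can confirm that PyTorch can see your GPU before loading the model:

import torch

# Verify that a CUDA-capable GPU is visible to PyTorch.
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, VRAM: {props.total_memory / 1024**3:.1f} GB")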

Using BitsAndBytes 4-bit quantization, it runs on a GPU with >= 10 GB of VRAM:

import torch
from transformers import AutoTokenizer, AutoConfig, AutoModelForCausalLM, BitsAndBytesConfig, LlamaTokenizer
from peft import PeftModel, PeftConfig

model_id = "Ichsan2895/Merak-7B-v1"
config = AutoConfig.from_pretrained(model_id)

BNB_CONFIG = BitsAndBytesConfig(load_in_4bit=True,
                                bnb_4bit_compute_dtype=torch.bfloat16,
                                bnb_4bit_use_double_quant=True,
                                bnb_4bit_quant_type="nf4",
    )

model = AutoModelForCausalLM.from_pretrained(model_id,
                                             quantization_config=BNB_CONFIG,
                                             device_map="auto",
                                             trust_remote_code=True)

tokenizer = LlamaTokenizer.from_pretrained(model_id)

def generate_response(question: str) -> str:
  # Wrap the question in the prompt template used during fine-tuning.
  prompt = f"<|prompt|>{question}<|answer|>".strip()

  encoding = tokenizer(prompt, return_tensors='pt').to("cuda")
  with torch.inference_mode():
    outputs = model.generate(input_ids=encoding.input_ids,
                             attention_mask=encoding.attention_mask,
                             eos_token_id=tokenizer.pad_token_id,
                             do_sample=False,
                             num_beams=2,
                             temperature=0.3,
                             repetition_penalty=1.2,
                             max_length=200)

  response = tokenizer.decode(outputs[0], skip_special_tokens=True)

  # Return only the text that follows the <|answer|> marker.
  assistant_start = "<|answer|>"
  response_start = response.find(assistant_start)
  return response[response_start + len(assistant_start):].strip()

prompt = "Siapa penulis naskah proklamasi kemerdekaan Indonesia?"
print(generate_response(prompt))

In my experience, you get better answers if you skip the BitsAndBytes 4-bit quantization, but it requires more VRAM:

import torch
from transformers import AutoTokenizer, AutoConfig, AutoModelForCausalLM, BitsAndBytesConfig, LlamaTokenizer
from peft import PeftModel, PeftConfig

model_id = "Ichsan2895/Merak-7B-v1"
config = AutoConfig.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id,
                                             device_map="auto",
                                             trust_remote_code=True)

tokenizer = LlamaTokenizer.from_pretrained(model_id)

def generate_response(question: str) -> str:
  # Wrap the question in the prompt template used during fine-tuning.
  prompt = f"<|prompt|>{question}<|answer|>".strip()

  encoding = tokenizer(prompt, return_tensors='pt').to("cuda")
  with torch.inference_mode():
    outputs = model.generate(input_ids=encoding.input_ids,
                             attention_mask=encoding.attention_mask,
                             eos_token_id=tokenizer.pad_token_id,
                             do_sample=False,
                             num_beams=2,
                             temperature=0.3,
                             repetition_penalty=1.2,
                             max_length=200)

  response = tokenizer.decode(outputs[0], skip_special_tokens=True)

  # Return only the text that follows the <|answer|> marker.
  assistant_start = "<|answer|>"
  response_start = response.find(assistant_start)
  return response[response_start + len(assistant_start):].strip()

prompt = "Siapa penulis naskah proklamasi kemerdekaan Indonesia?"
print(generate_response(prompt))
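
If full precision does not fit on your GPU, a possible middle ground (a sketch of my own, not from the original instructions) is to load the weights in half precision via the torch_dtype argument of from_pretrained, which roughly halves the memory footprint compared to float32:

import torch
from transformers import AutoModelForCausalLM, LlamaTokenizer

model_id = "Ichsan2895/Merak-7B-v1"

# Load the weights in float16; quality should sit between the 4-bit and
# full-precision setups above (assumption: this trade-off suits your GPU).
model = AutoModelForCausalLM.from_pretrained(model_id,
                                             torch_dtype=torch.float16,
                                             device_map="auto",
                                             trust_remote_code=True)
tokenizer = LlamaTokenizer.from_pretrained(model_id)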

CITATION

@article{touvron2023llama,
  title   = {Llama 2: Open Foundation and Fine-Tuned Chat Models},
  author  = {Touvron, Hugo and others},
  journal = {arXiv preprint arXiv:2307.09288},
  year    = {2023}
}

@misc{wikidump,
  author = {Wikimedia Foundation},
  title  = {Wikimedia Downloads},
  url    = {https://dumps.wikimedia.org}
}

@article{dettmers2023qlora,
  title   = {QLoRA: Efficient Finetuning of Quantized LLMs},
  author  = {Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
  journal = {arXiv preprint arXiv:2305.14314},
  year    = {2023}
}