---
base_model: karakuri-ai/karakuri-lm-70b-chat-v0.1
license: llama2
language:
- ja
pipeline_tag: text-generation
---

# KARAKURI LM 70B Chat v0.1 - AWQ

- Model creator: [karakuri-ai](https://huggingface.co./karakuri-ai)
- Original model: [KARAKURI LM 70B Chat v0.1](https://huggingface.co./karakuri-ai/karakuri-lm-70b-chat-v0.1)

# Description

This repo contains AWQ model files for [KARAKURI LM 70B Chat v0.1](https://huggingface.co./karakuri-ai/karakuri-lm-70b-chat-v0.1).

## How to get the AWQ model

I created the AWQ model files using autoawq==0.2.3.

```bash
pip install autoawq==0.2.3
```

Here is the Python code used to create the AWQ model files:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "karakuri-ai/karakuri-lm-70b-chat-v0.1"
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load model
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Quantize with a Japanese Wikipedia calibration dataset
model.quantize(tokenizer, quant_config=quant_config, calib_data="mmnga/wikipedia-ja-20230720-1k")

# Save the quantized model and tokenizer
quant_path = "karakuri-lm-70b-v0.1-AWQ"
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```

## Usage

The example below runs the quantized model with [vLLM](https://github.com/vllm-project/vllm):

```python
from vllm import LLM, SamplingParams

sampling_params = SamplingParams(temperature=0.0, max_tokens=100)
llm = LLM(model="masao1211/karakuri-lm-70b-chat-v0.1-AWQ", max_model_len=4096)

system_prompt = "System prompt"
messages = [{"role": "system", "content": system_prompt}]
messages.append({"role": "user", "content": "User Prompt"})
prompt = llm.llm_engine.tokenizer.tokenizer.apply_chat_template(conversation=messages, add_generation_prompt=True, tokenize=False)
prompts = [prompt]

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
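
If you prefer not to use vLLM, the quantized files can also be loaded directly with AutoAWQ. The following is a minimal sketch, not part of the original workflow: it assumes the same autoawq==0.2.3 install from above, a CUDA GPU setup, and illustrative generation settings.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

quant_path = "masao1211/karakuri-lm-70b-chat-v0.1-AWQ"

# Load the quantized weights; device_map spreads the 70B model across available GPUs
model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(quant_path, trust_remote_code=True)

# Build the prompt with the model's chat template, as in the vLLM example
messages = [
    {"role": "system", "content": "System prompt"},
    {"role": "user", "content": "User Prompt"},
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

# Tokenize and generate; max_new_tokens mirrors the vLLM example above
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
output_ids = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```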