Meta-Llama-2-7b-chat-hf-Quantized
Different quantized versions of Meta's Llama-2-7b-chat-hf model
This repo contains a 4-bit quantized (using AutoAWQ) version of Meta's meta-llama/Llama-2-7b-chat-hf model.
AWQ (Activation-aware Weight Quantization for LLM Compression and Acceleration) was developed by MIT HAN Lab:
@inproceedings{lin2023awq,
  title={AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration},
  author={Lin, Ji and Tang, Jiaming and Tang, Haotian and Yang, Shang and Chen, Wei-Ming and Wang, Wei-Chen and Xiao, Guangxuan and Dang, Xingyu and Gan, Chuang and Han, Song},
  booktitle={MLSys},
  year={2024}
}
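For reference, below is a minimal sketch of how a checkpoint like this one can be produced with AutoAWQ. The exact quant_config used for this repo is an assumption (these values are AutoAWQ's commonly documented defaults), so treat this as illustrative rather than a reproduction recipe.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
base_model = "meta-llama/Llama-2-7b-chat-hf"
quant_path = "Llama-2-7b-chat-hf-4bit-AWQ"  # local output directory (hypothetical)
# Assumed config: 4-bit weights, group size 128, zero-point, GEMM kernels
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}
# Load the fp16 base model and its tokenizer
model = AutoAWQForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
# Run AWQ calibration and quantize the weights
model.quantize(tokenizer, quant_config=quant_config)
# Save the quantized checkpoint and tokenizer
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)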
Use the code below to get started with the model.
# Install AutoAWQ and Accelerate
!pip install autoawq
!pip install accelerate
from transformers import AutoTokenizer
from awq import AutoAWQForCausalLM
# define the model ID
model_id_llama = "alokabhishek/Llama-2-7b-chat-hf-4bit-AWQ"
# Load the tokenizer and the quantized model
tokenizer_llama = AutoTokenizer.from_pretrained(model_id_llama, use_fast=True)
model_llama = AutoAWQForCausalLM.from_quantized(model_id_llama, fuse_layers=True, trust_remote_code=False, safetensors=True)
# Set up the prompt and prompt template. Change instruction as per requirements.
prompt_llama = "Tell me a funny joke about Large Language Models meeting a Blackhole in an intergalactic Bar."
formatted_prompt = f'''[INST] <<SYS>> You are a helpful and fun-loving assistant. Always answer as jestfully as possible. <</SYS>> {prompt_llama} [/INST] '''
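# If this repo keeps the original Llama-2 chat template, the tokenizer's
# apply_chat_template can build the same [INST]/<<SYS>> string. This is an
# untested sketch, commented out so the manual template above stays in effect:
# messages = [
#     {"role": "system", "content": "You are a helpful and fun-loving assistant. Always answer as jestfully as possible."},
#     {"role": "user", "content": prompt_llama},
# ]
# formatted_prompt = tokenizer_llama.apply_chat_template(messages, tokenize=False)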
# Tokenize the prompt and move the input IDs to the GPU
tokens = tokenizer_llama(formatted_prompt, return_tensors="pt").input_ids.cuda()
# Generate output, adjust parameters as per requirements
generation_output = model_llama.generate(tokens, do_sample=True, temperature=1.7, top_p=0.95, top_k=40, max_new_tokens=512)
# Print the output
print(tokenizer_llama.decode(generation_output[0], skip_special_tokens=True))
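As an alternative, recent versions of transformers (roughly 4.35 and later) can load AWQ checkpoints directly when autoawq is installed, so the same generation can be run through a standard text-generation pipeline. This is a hedged sketch, not the canonical usage for this repo:
import torch
from transformers import pipeline
# Build a text-generation pipeline on top of the quantized checkpoint
pipe = pipeline("text-generation", model=model_id_llama, torch_dtype=torch.float16, device_map="auto")
# Reuse the same prompt and sampling parameters as above
output = pipe(formatted_prompt, do_sample=True, temperature=1.7, top_p=0.95, top_k=40, max_new_tokens=512)
print(output[0]["generated_text"])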