---
license: other
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
pipeline_tag: text-generation
tags:
- llama-3.1
- meta
- autoawq
---

> [!IMPORTANT]
> This repository is a community-driven quantized version of the original model [`meta-llama/Meta-Llama-3.1-405B-Instruct`](https://huggingface.co./meta-llama/Meta-Llama-3.1-405B-Instruct), which is the FP16 half-precision official version released by Meta AI.

## Model Information

The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.

This repository contains [`meta-llama/Meta-Llama-3.1-405B-Instruct`](https://huggingface.co./meta-llama/Meta-Llama-3.1-405B-Instruct) quantized using [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) from FP16 down to INT4, using the GEMM kernels and performing zero-point quantization with a group size of 128.

## Model Usage

> [!NOTE]
> In order to run the inference with Llama 3.1 405B Instruct AWQ in INT4, around 203 GiB of VRAM are needed just to load the model checkpoint, not including the KV cache or the CUDA graphs, meaning that somewhat more than that amount of VRAM should be available.

To use the current quantized model, support is offered for the different solutions listed below.

### 🤗 transformers

To run the inference with Llama 3.1 405B Instruct AWQ in INT4 precision, the AWQ model can be instantiated like any other causal language model via `AutoModelForCausalLM`, and the inference run as usual.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4"
prompt = [
    {"role": "system", "content": "You are a helpful assistant, that responds as a pirate."},
    {"role": "user", "content": "What's Deep Learning?"},
]

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token_id = tokenizer.eos_token_id
tokenizer.padding_side = "left"

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

# return_dict=True so that `inputs` can be unpacked into `generate` below
inputs = tokenizer.apply_chat_template(
    prompt,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
).to("cuda")

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    device_map="auto",
)

outputs = model.generate(**inputs, do_sample=True, max_new_tokens=256, eos_token_id=terminators)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
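As a rough sanity check on the ~203 GiB figure from the note above, the size of the quantized weights can be estimated with simple arithmetic. The sketch below is illustrative only: the 2.5 bytes of per-group overhead (an FP16 scale plus a packed zero-point) is an assumption about the GEMM packing layout, and layers kept in FP16 (e.g. embeddings and the LM head) plus loading buffers account for the remainder.

```python
# Back-of-envelope VRAM estimate for loading the INT4 checkpoint.
# Illustrative only: exact totals depend on which layers stay in FP16
# and on the packing layout of scales/zero-points.
GiB = 1024**3

n_params = 405e9   # ~405B parameters in total
group_size = 128   # AWQ group size used for this checkpoint

packed_weights = n_params * 0.5               # 4-bit weights -> 0.5 bytes each
group_overhead = n_params / group_size * 2.5  # assumed FP16 scale + packed zero-point per group

print(f"quantized weights: ~{(packed_weights + group_overhead) / GiB:.0f} GiB")
# -> ~196 GiB; FP16 embeddings/LM head and loading overhead bring it to ~203 GiB
```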
### AutoAWQ

Alternatively, the inference can also be run via `AutoAWQ`, even though it is built on top of 🤗 `transformers`, which remains the recommended approach as described above.

```python
import torch
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_id = "hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4"
prompt = [
    {"role": "system", "content": "You are a helpful assistant, that responds as a pirate."},
    {"role": "user", "content": "What's Deep Learning?"},
]

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token_id = tokenizer.eos_token_id
tokenizer.padding_side = "left"

# return_dict=True so that `inputs` can be unpacked into `generate` below
inputs = tokenizer.apply_chat_template(
    prompt,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
).to("cuda")

# Use `from_quantized` to load an already-quantized AWQ checkpoint
model = AutoAWQForCausalLM.from_quantized(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    device_map="auto",
    fuse_layers=True,
)

outputs = model.generate(**inputs, do_sample=True, max_new_tokens=256)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```

The AutoAWQ script has been adapted from [`AutoAWQ/examples/generate.py`](https://github.com/casper-hansen/AutoAWQ/blob/main/examples/generate.py).

### 🤗 Text Generation Inference (TGI)

Coming soon!

## Quantization Reproduction

> [!NOTE]
> In order to quantize Llama 3.1 405B Instruct using AutoAWQ, you will need an instance with enough CPU RAM to fit the whole model, i.e. ~800 GiB, and an NVIDIA GPU with 80 GiB of VRAM to quantize it.

To quantize Llama 3.1 405B Instruct, first install `torch` and `autoawq` as follows:

```bash
pip install "torch>=2.2.0,<2.3.0" autoawq --upgrade
```

Otherwise the quantization may fail, since the AutoAWQ kernels are built against PyTorch 2.2.1 and will break with PyTorch 2.3.0.

Then install the latest version of `transformers` as follows:

```bash
pip install "transformers>=4.43.0" --upgrade
```

Finally, run the following script, adapted from [`AutoAWQ/examples/quantize.py`](https://github.com/casper-hansen/AutoAWQ/blob/main/examples/quantize.py):

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Meta-Llama-3.1-405B-Instruct"
quant_path = "hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4"
quant_config = {
    "zero_point": True,
    "q_group_size": 128,
    "w_bit": 4,
    "version": "GEMM",
}

# Load the FP16 model; `low_cpu_mem_usage` lowers the peak RAM usage while
# loading, and the KV cache is disabled since it is not needed for quantization
model = AutoAWQForCausalLM.from_pretrained(
    model_path, low_cpu_mem_usage=True, use_cache=False,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Quantize
model.quantize(tokenizer, quant_config=quant_config)

# Save the quantized model and tokenizer
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)

print(f'Model is quantized and saved at "{quant_path}"')
```
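Once the script finishes, a quick sanity check is to confirm that the quantization settings were recorded in the exported `config.json`. The snippet below assumes the standard 🤗 config layout, where AutoAWQ writes its settings under the `quantization_config` key; exact key names may vary across AutoAWQ/`transformers` releases.

```python
import json
import os

# Local output directory produced by the quantization script above
quant_path = "hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4"

with open(os.path.join(quant_path, "config.json")) as f:
    quant_cfg = json.load(f).get("quantization_config", {})

print(quant_cfg)
# Expected to include bits=4, group_size=128, zero_point=True and a GEMM
# version tag, matching the `quant_config` used above (keys may vary by version)
```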