Commit 6e5043b
Parent(s): c8a3851

Update README.md

README.md CHANGED
````diff
@@ -22,7 +22,7 @@ tags:
 
 ## Model Information
 
-
+The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.
 
 This repository contains [`meta-llama/Meta-Llama-3.1-405B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct) quantized using [AutoAWQ](https://github.com/casperhansen/AutoAWQ) from FP16 down to INT4 using the GEMM kernels performing zero-point quantization with a group size of 128.
 
@@ -31,10 +31,22 @@ This repository contains [`meta-llama/Meta-Llama-3.1-405B-Instruct`](https://hug
 > [!NOTE]
 > In order to run the inference with Llama 3.1 405B Instruct AWQ in INT4, around 203 GiB of VRAM are needed only for loading the model checkpoint, without including the KV cache or the CUDA graphs, meaning that there should be a bit over that VRAM available.
 
-In order to use the current quantized model, support is offered for different solutions
+In order to use the current quantized model, support is offered for different solutions such as `transformers`, `autoawq`, or `text-generation-inference`.
 
 ### 🤗 transformers
 
+In order to run the inference with Llama 3.1 405B Instruct AWQ in INT4, both `torch` and `autoawq` need to be installed as:
+
+```bash
+pip install "torch>=2.2.0,<2.3.0" autoawq --upgrade
+```
+
+Then, the latest version of `transformers` needs to be installed, being 4.43.0 or higher, as:
+
+```bash
+pip install "transformers[accelerate]>=4.43.0" --upgrade
+```
+
 To run the inference on top of Llama 3.1 405B Instruct AWQ in INT4 precision, the AWQ model can be instantiated as any other causal language modeling model via `AutoModelForCausalLM` and run the inference normally.
 
 ```python
````
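The Python example opened at the end of the diff is truncated in this view. For reference, a minimal sketch of the inference flow the paragraph above describes, via `AutoModelForCausalLM`; the repository id and generation settings below are assumptions, not taken from this commit:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id for the quantized checkpoint; substitute the actual one.
model_id = "hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # non-quantized ops run in FP16 next to the INT4 weights
    low_cpu_mem_usage=True,
    device_map="auto",  # shard the ~203 GiB checkpoint across the available GPUs
)

# Llama 3.1 Instruct is a chat model, so build the prompt with the chat template.
messages = [{"role": "user", "content": "What's Deep Learning?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

With `device_map="auto"`, Accelerate places the layers across all visible GPUs, which is why the install step above pulls in `transformers[accelerate]`.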
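The quantization recipe named in the model description (INT4 via the GEMM kernels, zero-point quantization, group size 128) maps onto AutoAWQ's standard `quant_config`. A sketch of how such a checkpoint is produced, assuming the usual AutoAWQ quantization flow rather than anything recorded in this commit; the output path is illustrative:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Meta-Llama-3.1-405B-Instruct"
quant_path = "Meta-Llama-3.1-405B-Instruct-AWQ-INT4"  # illustrative output directory

# The recipe from the model card: 4-bit weights, zero-point, group size 128, GEMM kernels.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the FP16 model, run the AWQ calibration and quantization, and save the INT4 checkpoint.
model = AutoAWQForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```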
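Since `text-generation-inference` is listed among the supported solutions, a sketch of serving the same checkpoint with its AWQ support; the image tag, shard count, and repository id are assumptions:

```bash
docker run --gpus all --shm-size 1g -p 8080:80 \
    -v "$HOME/.cache/huggingface:/data" \
    -e HF_TOKEN="$HF_TOKEN" \
    ghcr.io/huggingface/text-generation-inference:2.2.0 \
    --model-id hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4 \
    --num-shard 8 \
    --quantize awq
```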