justheuristic committed on
Commit 88653f0
1 Parent(s): 618fe07

Update README.md

Files changed (1):
  1. README.md +3 -1
README.md CHANGED
@@ -12,7 +12,9 @@ tags:
 An official quantization of [meta-llama/Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b) using [PV-Tuning](https://arxiv.org/abs/2405.14852) on top of [AQLM](https://arxiv.org/abs/2401.06118).
 For this quantization, we used 1 codebook of 16 bits for groups of 16 weights, totalling about 1.58 bits per weight.
 
-__The 1x16g16 models require aqlm inference library v1.1.6 or newer:__ `pip install aqlm[gpu,cpu]>=1.1.6`
+__The 1x16g16 models require aqlm inference library v1.1.6 or newer:__
+
+`pip install aqlm[gpu,cpu]>=1.1.6`
 
 
 | Model | AQLM scheme | WikiText 2 PPL | Model size, Gb | Hub link |
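A usage note on the install command above (an editor's sketch, not part of the commit): as written, the unquoted form can misbehave in common shells, since `[gpu,cpu]` is a glob pattern in zsh and `>` is output redirection in bash and zsh. Quoting the whole requirement avoids both:

```shell
# Quote the requirement string: brackets would otherwise be glob-expanded
# by zsh, and ">" would be treated as a shell redirection.
pip install "aqlm[gpu,cpu]>=1.1.6"
```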