bartowski committed on
Commit 01add68 · verified · 1 parent: dbacdf5

Upload README.md with huggingface_hub

Files changed (1): README.md (+9, −11)
README.md CHANGED
@@ -1,25 +1,18 @@
 ---
 quantized_by: bartowski
 pipeline_tag: text-generation
-language:
-- en
-license_name: nvidia-open-model-license
-tags:
-- nvidia
-- llama-3
-license_link: https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
-license: other
-base_model: nvidia/Llama-3_1-Nemotron-51B-Instruct
 ---
 
 ## Llamacpp imatrix Quantizations of Llama-3_1-Nemotron-51B-Instruct
 
-Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4381">b4381</a> for quantization.
+Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4404">b4404</a> for quantization.
 
 Original model: https://huggingface.co/nvidia/Llama-3_1-Nemotron-51B-Instruct
 
 All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
 
+Run them in [LM Studio](https://lmstudio.ai/)
+
 ## Prompt format
 
 ```
@@ -33,17 +26,22 @@ Today Date: 26 Jul 2024
 {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
 ```
 
+## What's new:
+
+Fix rope
+
 ## Download a file (not the whole branch) from below:
 
 | Filename | Quant type | File Size | Split | Description |
 | -------- | ---------- | --------- | ----- | ----------- |
-| [Llama-3_1-Nemotron-51B-Instruct-Q8_0.gguf](https://huggingface.co/bartowski/Llama-3_1-Nemotron-51B-Instruct-GGUF/tree/main/Llama-3_1-Nemotron-51B-Instruct-Q8_0) | Q8_0 | 54.73GB | true | Extremely high quality, generally unneeded but max available quant. |
+| [Llama-3_1-Nemotron-51B-Instruct-f16.gguf](https://huggingface.co/bartowski/Llama-3_1-Nemotron-51B-Instruct-GGUF/tree/main/Llama-3_1-Nemotron-51B-Instruct-f16) | f16 | 103.01GB | true | Full F16 weights. |
 | [Llama-3_1-Nemotron-51B-Instruct-Q8_0.gguf](https://huggingface.co/bartowski/Llama-3_1-Nemotron-51B-Instruct-GGUF/tree/main/Llama-3_1-Nemotron-51B-Instruct-Q8_0) | Q8_0 | 54.73GB | true | Extremely high quality, generally unneeded but max available quant. |
 | [Llama-3_1-Nemotron-51B-Instruct-Q6_K_L.gguf](https://huggingface.co/bartowski/Llama-3_1-Nemotron-51B-Instruct-GGUF/blob/main/Llama-3_1-Nemotron-51B-Instruct-Q6_K_L.gguf) | Q6_K_L | 42.77GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
 | [Llama-3_1-Nemotron-51B-Instruct-Q6_K.gguf](https://huggingface.co/bartowski/Llama-3_1-Nemotron-51B-Instruct-GGUF/blob/main/Llama-3_1-Nemotron-51B-Instruct-Q6_K.gguf) | Q6_K | 42.26GB | false | Very high quality, near perfect, *recommended*. |
 | [Llama-3_1-Nemotron-51B-Instruct-Q5_K_L.gguf](https://huggingface.co/bartowski/Llama-3_1-Nemotron-51B-Instruct-GGUF/blob/main/Llama-3_1-Nemotron-51B-Instruct-Q5_K_L.gguf) | Q5_K_L | 37.11GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
 | [Llama-3_1-Nemotron-51B-Instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/Llama-3_1-Nemotron-51B-Instruct-GGUF/blob/main/Llama-3_1-Nemotron-51B-Instruct-Q5_K_M.gguf) | Q5_K_M | 36.47GB | false | High quality, *recommended*. |
 | [Llama-3_1-Nemotron-51B-Instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/Llama-3_1-Nemotron-51B-Instruct-GGUF/blob/main/Llama-3_1-Nemotron-51B-Instruct-Q5_K_S.gguf) | Q5_K_S | 35.56GB | false | High quality, *recommended*. |
+| [Llama-3_1-Nemotron-51B-Instruct-Q4_1.gguf](https://huggingface.co/bartowski/Llama-3_1-Nemotron-51B-Instruct-GGUF/blob/main/Llama-3_1-Nemotron-51B-Instruct-Q4_1.gguf) | Q4_1 | 32.41GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
 | [Llama-3_1-Nemotron-51B-Instruct-Q4_K_L.gguf](https://huggingface.co/bartowski/Llama-3_1-Nemotron-51B-Instruct-GGUF/blob/main/Llama-3_1-Nemotron-51B-Instruct-Q4_K_L.gguf) | Q4_K_L | 31.82GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
 | [Llama-3_1-Nemotron-51B-Instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/Llama-3_1-Nemotron-51B-Instruct-GGUF/blob/main/Llama-3_1-Nemotron-51B-Instruct-Q4_K_M.gguf) | Q4_K_M | 31.04GB | false | Good quality, default size for most use cases, *recommended*. |
 | [Llama-3_1-Nemotron-51B-Instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/Llama-3_1-Nemotron-51B-Instruct-GGUF/blob/main/Llama-3_1-Nemotron-51B-Instruct-Q4_K_S.gguf) | Q4_K_S | 29.48GB | false | Slightly lower quality with more space savings, *recommended*. |
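A common rule of thumb for choosing from the table above is to take the largest quant that fits in your combined RAM + VRAM with a couple of GB of headroom. A hypothetical helper, using the file sizes listed in the table (the `pick_quant` function and its defaults are illustrative, not part of this repo):

```python
# File sizes (GB) copied from the quant table above, largest first.
QUANTS = [
    ("Q8_0", 54.73),
    ("Q6_K_L", 42.77),
    ("Q6_K", 42.26),
    ("Q5_K_L", 37.11),
    ("Q5_K_M", 36.47),
    ("Q5_K_S", 35.56),
    ("Q4_1", 32.41),
    ("Q4_K_L", 31.82),
    ("Q4_K_M", 31.04),
    ("Q4_K_S", 29.48),
]

def pick_quant(budget_gb, headroom_gb=2.0):
    """Return the largest quant whose file fits in budget minus headroom,
    or None if even the smallest listed quant is too big."""
    for name, size_gb in QUANTS:
        if size_gb <= budget_gb - headroom_gb:
            return name
    return None
```

A single file can then be fetched with the Hugging Face Hub CLI, e.g. `huggingface-cli download bartowski/Llama-3_1-Nemotron-51B-Instruct-GGUF --include "Llama-3_1-Nemotron-51B-Instruct-Q4_K_M.gguf" --local-dir ./` (quants marked `Split: true` ship as multiple files and need the whole matching prefix downloaded).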