XelotX mradermacher committed on
Commit
0f9cc45
0 Parent(s):

Duplicate from mradermacher/Higgs-Llama-3-70B-GGUF

Co-authored-by: Michael Radermacher <[email protected]>

.gitattributes ADDED
@@ -0,0 +1,54 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Higgs-Llama-3-70B.Q8_0.gguf.part1of2 filter=lfs diff=lfs merge=lfs -text
+ Higgs-Llama-3-70B.Q8_0.gguf.part2of2 filter=lfs diff=lfs merge=lfs -text
+ Higgs-Llama-3-70B.IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Higgs-Llama-3-70B.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Higgs-Llama-3-70B.IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Higgs-Llama-3-70B.f16.gguf.part1of3 filter=lfs diff=lfs merge=lfs -text
+ Higgs-Llama-3-70B.f16.gguf.part2of3 filter=lfs diff=lfs merge=lfs -text
+ Higgs-Llama-3-70B.f16.gguf.part3of3 filter=lfs diff=lfs merge=lfs -text
+ Higgs-Llama-3-70B.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Higgs-Llama-3-70B.Q6_K.gguf.part1of2 filter=lfs diff=lfs merge=lfs -text
+ Higgs-Llama-3-70B.Q6_K.gguf.part2of2 filter=lfs diff=lfs merge=lfs -text
+ Higgs-Llama-3-70B.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Higgs-Llama-3-70B.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Higgs-Llama-3-70B.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ Higgs-Llama-3-70B.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Higgs-Llama-3-70B.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Higgs-Llama-3-70B.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Higgs-Llama-3-70B.IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
+ Higgs-Llama-3-70B.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
Higgs-Llama-3-70B.IQ3_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:62fb7a3c3a175a2ffd41d8f2d146c94ebc26b58e40e467923feb90a5a3780d0d
+ size 31937034240
Higgs-Llama-3-70B.IQ3_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ceb1541ae9b4ab544dd45da3b485c47abae598a5c961f6324898fcc6859d16ff
+ size 30912051200
Higgs-Llama-3-70B.IQ3_XS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9d9ccf824a3d9e8484c4ca209a43329fdcf5cd265422fffa808b8c66b6bf85d8
+ size 29307729920
Higgs-Llama-3-70B.IQ4_XS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5e8875c5d62580892dc634ef15b9da6ef036f7c843fda7b4d949338cf3375f63
+ size 38269663232
Higgs-Llama-3-70B.Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b05a4a35ac6334827e4f7be87ed2dc6121d4d4805c141969bd3ccb319410fe1c
+ size 26375108608
Higgs-Llama-3-70B.Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:67aae4768d4fc010c794d12d0e48d22c3ce72528cedede847d572ef33b93ca63
+ size 37140592640
Higgs-Llama-3-70B.Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8d1458f0db6e2b943de7bbeec1a42481bff4fe14e95e35c75f3d2c4b7f0e1ecc
+ size 34267494400
Higgs-Llama-3-70B.Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7e11a60d9effebd2687d254f91cbd294283a17e27e534f29675496a43681efed
+ size 30912051200
Higgs-Llama-3-70B.Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:357fdf94c157c49736abe093e80882a5b07353e05fb7d3f55f93b363f58e2f33
+ size 42520393728
Higgs-Llama-3-70B.Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cb1ba7dd3ed182720773f1fd217cb2151a56264736bdbb549e43077c8b4517b3
+ size 40347219968
Higgs-Llama-3-70B.Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b7858ee05072945a9bcfc626e8850d5b16d486846a2a4b3a21de3a4d6a81e610
+ size 49949816832
Higgs-Llama-3-70B.Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:631548d2e5860366cd7300a017b6f6879443559f8f042a48ad8bd55208be113a
+ size 48657446912
Higgs-Llama-3-70B.Q6_K.gguf.part1of2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2da15e302ef11f487f65aacc74e1e6871321c8c4ff58e3d97fb31d93138c51c7
+ size 28991029248
Higgs-Llama-3-70B.Q6_K.gguf.part2of2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9a0eaa29835d864b70829177218a1879a70cb852b37d47eb8c120a03395d9edb
+ size 28897114112
Higgs-Llama-3-70B.Q8_0.gguf.part1of2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:24ec21d327251212c74f7f8e6d43b344115d762e1a3bca0974e9edd0f6828253
+ size 37580963840
Higgs-Llama-3-70B.Q8_0.gguf.part2of2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fb58e167be05681273e3eb309cd9e74ec98fee1d552c3bdda47c4a4ccc00bff9
+ size 37394085888
Higgs-Llama-3-70B.f16.gguf.part1of3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:98e1d35c321e593f7ecbd83bc488d017070f87abc4167acd49a5f951aad19eac
+ size 47244640256
Higgs-Llama-3-70B.f16.gguf.part2of3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6d6e94d1b1946b6fd63e7058ad02434561643028a26e97fb102533f5b6456035
+ size 47244640256
Higgs-Llama-3-70B.f16.gguf.part3of3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9d1cdbfe2931a30b1caedfae121bc6064d1e261405e36c28f17d4f036329ddfa
+ size 46628632576
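Each `.gguf` entry above is a Git LFS pointer rather than the model file itself: the diff records only the LFS spec version, the file's SHA-256 digest (`oid`), and its byte count (`size`). A downloaded file can be checked against its pointer with standard coreutils. A minimal self-contained sketch — the `demo.bin` file is fabricated here so the commands run end-to-end; in practice you would substitute the real `.gguf` file and compare against the oid/size from its pointer:

```shell
# Stand-in file (replace with the downloaded .gguf in practice).
printf 'hello' > demo.bin

# The hash should match the pointer's "oid sha256:..." line.
sha256sum demo.bin
# -> 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824  demo.bin

# The byte count should match the pointer's "size" line (5 for this demo).
wc -c < demo.bin
```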
README.md ADDED
@@ -0,0 +1,67 @@
+ ---
+ base_model: bosonai/Higgs-Llama-3-70B
+ language:
+ - en
+ library_name: transformers
+ license: other
+ quantized_by: mradermacher
+ ---
+ ## About
+
+ <!-- ### quantize_version: 2 -->
+ <!-- ### output_tensor_quantised: 1 -->
+ <!-- ### convert_type: hf -->
+ <!-- ### vocab_type: -->
+ <!-- ### tags: -->
+ static quants of https://huggingface.co/bosonai/Higgs-Llama-3-70B
+
+ <!-- provided-files -->
+ weighted/imatrix quants are available at https://huggingface.co/mradermacher/Higgs-Llama-3-70B-i1-GGUF
+ ## Usage
+
+ If you are unsure how to use GGUF files, refer to one of [TheBloke's
+ READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
+ more details, including how to concatenate multi-part files.
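The multi-part quants in this repo (`.part1of2`, `.part1of3`, and so on) are plain byte slices of one large file, so they can be rejoined with `cat` in part order. A minimal runnable sketch — the tiny parts here are fabricated so the commands execute end-to-end; with the real downloads the inputs would be e.g. `Higgs-Llama-3-70B.Q6_K.gguf.part1of2` and `.part2of2`:

```shell
# Fabricate two tiny "parts" standing in for the real .partNofM files.
printf 'GGUF-bytes-1-' > model.gguf.part1of2
printf 'GGUF-bytes-2'  > model.gguf.part2of2

# Concatenate in part order to reassemble the original file.
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf

cat model.gguf
# -> GGUF-bytes-1-GGUF-bytes-2
```

The parts can be deleted once the joined file is verified; `llama.cpp` and similar loaders only need the reassembled `.gguf`.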
+
+ ## Provided Quants
+
+ (sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
+
+ | Link | Type | Size/GB | Notes |
+ |:-----|:-----|--------:|:------|
+ | [GGUF](https://huggingface.co/mradermacher/Higgs-Llama-3-70B-GGUF/resolve/main/Higgs-Llama-3-70B.Q2_K.gguf) | Q2_K | 26.5 | |
+ | [GGUF](https://huggingface.co/mradermacher/Higgs-Llama-3-70B-GGUF/resolve/main/Higgs-Llama-3-70B.IQ3_XS.gguf) | IQ3_XS | 29.4 | |
+ | [GGUF](https://huggingface.co/mradermacher/Higgs-Llama-3-70B-GGUF/resolve/main/Higgs-Llama-3-70B.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* |
+ | [GGUF](https://huggingface.co/mradermacher/Higgs-Llama-3-70B-GGUF/resolve/main/Higgs-Llama-3-70B.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
+ | [GGUF](https://huggingface.co/mradermacher/Higgs-Llama-3-70B-GGUF/resolve/main/Higgs-Llama-3-70B.IQ3_M.gguf) | IQ3_M | 32.0 | |
+ | [GGUF](https://huggingface.co/mradermacher/Higgs-Llama-3-70B-GGUF/resolve/main/Higgs-Llama-3-70B.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
+ | [GGUF](https://huggingface.co/mradermacher/Higgs-Llama-3-70B-GGUF/resolve/main/Higgs-Llama-3-70B.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
+ | [GGUF](https://huggingface.co/mradermacher/Higgs-Llama-3-70B-GGUF/resolve/main/Higgs-Llama-3-70B.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
+ | [GGUF](https://huggingface.co/mradermacher/Higgs-Llama-3-70B-GGUF/resolve/main/Higgs-Llama-3-70B.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
+ | [GGUF](https://huggingface.co/mradermacher/Higgs-Llama-3-70B-GGUF/resolve/main/Higgs-Llama-3-70B.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
+ | [GGUF](https://huggingface.co/mradermacher/Higgs-Llama-3-70B-GGUF/resolve/main/Higgs-Llama-3-70B.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
+ | [GGUF](https://huggingface.co/mradermacher/Higgs-Llama-3-70B-GGUF/resolve/main/Higgs-Llama-3-70B.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
+ | [PART 1](https://huggingface.co/mradermacher/Higgs-Llama-3-70B-GGUF/resolve/main/Higgs-Llama-3-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Higgs-Llama-3-70B-GGUF/resolve/main/Higgs-Llama-3-70B.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
+ | [PART 1](https://huggingface.co/mradermacher/Higgs-Llama-3-70B-GGUF/resolve/main/Higgs-Llama-3-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Higgs-Llama-3-70B-GGUF/resolve/main/Higgs-Llama-3-70B.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
+ | [PART 1](https://huggingface.co/mradermacher/Higgs-Llama-3-70B-GGUF/resolve/main/Higgs-Llama-3-70B.f16.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Higgs-Llama-3-70B-GGUF/resolve/main/Higgs-Llama-3-70B.f16.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Higgs-Llama-3-70B-GGUF/resolve/main/Higgs-Llama-3-70B.f16.gguf.part3of3) | f16 | 141.2 | 16 bpw, overkill |
+
+ Here is a handy graph by ikawrakow comparing some lower-quality quant
+ types (lower is better):
+
+ ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
+
+ And here are Artefact2's thoughts on the matter:
+ https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
+
+ ## FAQ / Model Request
+
+ See https://huggingface.co/mradermacher/model_requests for some answers to
+ questions you might have and/or if you want some other model quantized.
+
+ ## Thanks
+
+ I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
+ me use its servers and providing upgrades to my workstation to enable
+ this work in my free time.
+
+ <!-- end -->