NeoChen1024 committed
Commit 9621d61
Parent(s): a222259
Update README.md

README.md CHANGED
@@ -6,5 +6,5 @@ base_model:
 IQ4_XS (fits into 24GiB VRAM + 8192 context with q4_1 KV cache, also room for 2048 ubatch)
 IQ4_NL (fits into 24GiB VRAM + 8192 context with q4_1 KV cache)
 Q4_K_M (fits into 24GiB VRAM + 6144 context with q4_1 KV cache, also good for CPU inference on E5-26xx v3/v4)
-Q8_0 (probably isn't practical for anything unless you have big GPU array, imatrix derived from it)
+Q8_0 (probably isn't practical for anything unless you have a big GPU array; imatrix derived from it)
 BF16 (IDK if there's any use of it)
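As a rough sanity check on why a q4_1 KV cache leaves headroom at 8192 context, here is a back-of-the-envelope estimator. The model dimensions below are hypothetical placeholders, not this repo's actual config; the q4_1 size assumes llama.cpp's block layout of 20 bytes per 32 elements:

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, n_ctx, bytes_per_elem):
    # K and V each store n_ctx * n_kv_heads * head_dim values per layer
    return 2 * n_layers * n_ctx * n_kv_heads * head_dim * bytes_per_elem

F16 = 2.0           # 2 bytes per element
Q4_1 = 20 / 32      # llama.cpp q4_1 block: 20 bytes per 32 elements

# hypothetical GQA model: 48 layers, 8 KV heads, head_dim 128, 8192 context
f16_size = kv_cache_bytes(48, 8, 128, 8192, F16)
q41_size = kv_cache_bytes(48, 8, 128, 8192, Q4_1)
print(f"f16 KV cache:  {f16_size / 2**30:.2f} GiB")
print(f"q4_1 KV cache: {q41_size / 2**30:.2f} GiB")
```

Whatever the real dimensions are, q4_1 cuts the KV cache to about 31% of its f16 size, which is where the extra room for context and a large ubatch comes from.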