Update README.md
README.md CHANGED
@@ -23,7 +23,7 @@ Q_6_K is good also; if you want a better result; take this one instead of Q_5_K_
 
 Q_8_0 which is very good; needs a reasonable amount of RAM, otherwise you might expect a long wait
 
-16-bit and 32-bit are not provided here (we changed our mind; for research perspectives, all quantized GGUFs should be provided for
+16-bit and 32-bit are not provided here (we changed our mind: for research purposes, all quantized GGUFs should be provided for a holistic comparison, and that is what we are doing); since the file size is similar to the original safetensors, if you have a GPU, go ahead with the safetensors instead, the results are pretty much the same
 
 ### how to run it
 
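
GGUF files like the Q8_0 quant discussed in this change are typically run with llama.cpp. A minimal sketch, assuming a local llama.cpp build that provides the `llama-cli` binary and a hypothetical model filename `model.Q8_0.gguf` (substitute the actual GGUF from this repo):

```shell
# Run a quantized GGUF with llama.cpp's CLI.
# NOTE: the model filename below is a placeholder, not the repo's actual file.
# Q8_0 is closest to the original weights but needs the most RAM of the
# quant levels mentioned above; expect longer load times on small machines.
./llama-cli \
  -m ./model.Q8_0.gguf \
  -p "Hello, how are you?" \
  -n 128   # cap generation at 128 tokens
```

Smaller quants such as Q6_K trade a little quality for a noticeably lower memory footprint, which is the trade-off the README's list is walking through.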