Update README.md
language:
- en
---

3.0 Farewell model. Next I'm going to wait for Sao10K to break the bank again with a new 3.1 RP base.

I prefer normal GGUF quantization for Q8_0 & Q6_K; imatrix doesn't do those any favors, quite the opposite. Q6_K is recommended.

Quants Q5_K_M, Q4_K_M, and Q3_K_M were made using the imatrix option with the dataset provided by bartowski [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8).
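For anyone reproducing these files, a minimal sketch of the usual llama.cpp workflow is below. The binaries `llama-imatrix` and `llama-quantize` are standard llama.cpp tools, but the model and calibration file names are placeholders, not the exact commands used for this repo:

```bash
# Build an importance matrix from the calibration dataset (bartowski's gist linked above),
# starting from an F16 GGUF conversion of the model.
./llama-imatrix -m model-f16.gguf -f calibration_data.txt -o imatrix.dat

# The smaller K-quants are made with the imatrix...
./llama-quantize --imatrix imatrix.dat model-f16.gguf model-Q5_K_M.gguf Q5_K_M

# ...while Q8_0 / Q6_K are quantized plainly, per the note above.
./llama-quantize model-f16.gguf model-Q6_K.gguf Q6_K
```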