Add disclaimer
README.md
CHANGED
@@ -20,6 +20,8 @@ tags:
 
 # Quant Infos
 
+## ALPHA, based on experimental WIP code, expect bugs, not for the faint of heart
+
 - Not supported in llama.cpp master; Requires the latest version of the phi3 128k [branch](https://github.com/ggerganov/llama.cpp/pull/7225)
 - just bf16 for now, quants & imatrix are still in the oven will follow soon TM
 <!-- - quants done with an importance matrix for improved quantization loss -->
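For anyone wanting to try the bf16 GGUF before the change lands in llama.cpp master, the sketch below shows one way to fetch and build the phi3 128k branch from PR #7225. This is not part of the commit itself; the local branch name `phi3-128k`, the clone directory, and the plain CPU CMake build are assumptions, adjust them for your setup.

```python
# Minimal sketch: fetch and build the llama.cpp phi3 128k PR branch (#7225).
# Assumptions (not from the commit): local branch name "phi3-128k",
# clone directory "llama.cpp", CPU-only CMake build.
import subprocess
from pathlib import Path

REPO = Path("llama.cpp")

def run(cmd, cwd=None):
    # Echo the command, then run it and raise if it fails.
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

# Clone upstream llama.cpp if it is not already checked out.
if not REPO.exists():
    run(["git", "clone", "https://github.com/ggerganov/llama.cpp.git", str(REPO)])

# GitHub exposes pull requests under refs/pull/<number>/head, so the PR can be
# fetched into a local branch and checked out directly.
run(["git", "fetch", "origin", "pull/7225/head:phi3-128k"], cwd=REPO)
run(["git", "checkout", "phi3-128k"], cwd=REPO)

# Plain CPU build; add backend flags (CUDA, Metal, ...) to the configure step as needed.
run(["cmake", "-B", "build"], cwd=REPO)
run(["cmake", "--build", "build", "--config", "Release", "-j"], cwd=REPO)
```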