Update README.md
quantized_by: bartowski
---

## Llamacpp imatrix Quantizations of Astral-Fusion-8b-v0.0

Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3878">b3878</a> for quantization.

Original model: https://huggingface.co/ProdeusUnity/Astral-Fusion-8b-v0.0

All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
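For context, the imatrix workflow described above is a two-step process in llama.cpp: first compute an importance matrix from a calibration dataset, then pass it to the quantizer. A minimal sketch using the llama.cpp tools (file names and the Q4_K_M quant type here are illustrative placeholders, not taken from this card):

```shell
# Hypothetical file names; the calibration file would be the dataset
# linked above, and the input model a full-precision GGUF conversion.

# 1. Compute the importance matrix from calibration text:
./llama-imatrix -m Astral-Fusion-8b-v0.0-f16.gguf \
    -f calibration_data.txt -o imatrix.dat

# 2. Quantize, weighting tensors by the importance matrix:
./llama-quantize --imatrix imatrix.dat \
    Astral-Fusion-8b-v0.0-f16.gguf \
    Astral-Fusion-8b-v0.0-Q4_K_M.gguf Q4_K_M
```

The importance matrix biases quantization error toward weights that matter less for the calibration data, which is why imatrix quants typically outperform plain static quants at the same bit width.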