Update README.md
Added link to provided GGUFs
README.md CHANGED
@@ -13,7 +13,10 @@ license: cc-by-nc-4.0
kuno-kunoichi-v1-DPO-v2-SLERP-7B is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
I'm hoping that the result is more robust against errors when merging due to "denseness", as the two models likely implement comparable reasoning at least somewhat differently.

-I've performed some testing with ChatML format prompting using temperature=1.1 and minP=0.03. The model also supports Alpaca format
+I've performed some testing with ChatML format prompting using temperature=1.1 and minP=0.03. The model also supports Alpaca format prompts.
+
+[GGUF quants helpfully provided by Lewdiculous.](https://huggingface.co/Lewdiculous/kuno-kunoichi-v1-DPO-v2-SLERP-7B-GGUF-IQ-Imatrix)
+
## Merge Details
### Merge Method
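The merge method named in the model title is SLERP. As a generic illustration of what spherical linear interpolation does to a pair of parent weight tensors (this is the underlying math only, not the card's actual mergekit configuration):

```python
# Generic sketch of SLERP (spherical linear interpolation) between two weight
# tensors, the idea behind mergekit's slerp merge method. NOT the card's
# actual merge configuration.
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Interpolate between tensors a and b along the arc joining their directions."""
    a_flat, b_flat = a.ravel(), b.ravel()
    a_dir = a_flat / (np.linalg.norm(a_flat) + eps)
    b_dir = b_flat / (np.linalg.norm(b_flat) + eps)
    dot = np.clip(np.dot(a_dir, b_dir), -1.0, 1.0)
    omega = np.arccos(dot)  # angle between the two weight directions
    if omega < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return ((1 - t) * a_flat + t * b_flat).reshape(a.shape)
    so = np.sin(omega)
    out = (np.sin((1 - t) * omega) / so) * a_flat + (np.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape)

# Blend two toy "layers" halfway between the parent models.
layer_a = np.random.randn(4, 4)
layer_b = np.random.randn(4, 4)
merged = slerp(0.5, layer_a, layer_b)
```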
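To fetch one of the quants linked in the commit above, a sketch using huggingface_hub; the exact filename is an assumption and should be checked against the repo's file list:

```python
# Sketch: download one of the linked GGUF quants with huggingface_hub.
# The filename below is hypothetical; check the repo for the real names.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="Lewdiculous/kuno-kunoichi-v1-DPO-v2-SLERP-7B-GGUF-IQ-Imatrix",
    filename="kuno-kunoichi-v1-DPO-v2-SLERP-7B-Q4_K_M-imat.gguf",  # assumed name
)
print(path)
```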
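With a quant downloaded, a minimal sketch of the ChatML prompt layout and the sampler settings the card reports (temperature=1.1, minP=0.03), using llama-cpp-python; the model path and chat content are placeholders:

```python
# Minimal sketch: ChatML-format prompting with the sampler settings reported
# in the card (temperature=1.1, minP=0.03), via llama-cpp-python.
# The model path is a placeholder for a downloaded quant.
from llama_cpp import Llama

llm = Llama(model_path="kuno-kunoichi-v1-DPO-v2-SLERP-7B-Q4_K_M-imat.gguf", n_ctx=4096)

# ChatML wraps each turn in <|im_start|>role ... <|im_end|> markers.
chatml_prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Write a short scene set in a rainy city.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# The card notes Alpaca-format prompts also work; that layout is roughly:
#   "Below is an instruction that describes a task. ...\n\n"
#   "### Instruction:\n...\n\n### Response:\n"

out = llm(
    chatml_prompt,
    max_tokens=256,
    temperature=1.1,  # setting reported in the card
    min_p=0.03,       # setting reported in the card
    stop=["<|im_end|>"],
)
print(out["choices"][0]["text"])
```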