Update README.md
README.md CHANGED
@@ -22,12 +22,11 @@ license: apache-2.0
 
 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
 
-##
-###
+## Quantizations
+### GGUF
+- [Q8_0](https://huggingface.co/dasChronos1/Gluon-8B-Q8_0-GGUF)
 
-
-
-### Models Merged
+## Models Merged
 
 The following models were included in the merge:
 * [NeverSleep/Lumimaid-v0.2-8B](https://huggingface.co/NeverSleep/Lumimaid-v0.2-8B) + [kloodia/lora-8b-medic](https://huggingface.co/kloodia/lora-8b-medic)
@@ -35,7 +34,7 @@ The following models were included in the merge:
 * [mlabonne/Hermes-3-Llama-3.1-8B-lorablated](https://huggingface.co/mlabonne/Hermes-3-Llama-3.1-8B-lorablated) + [Azazelle/RP_Format_QuoteAsterisk_Llama3](https://huggingface.co/Azazelle/RP_Format_QuoteAsterisk_Llama3)
 * [vicgalle/Configurable-Llama-3.1-8B-Instruct](https://huggingface.co/vicgalle/Configurable-Llama-3.1-8B-Instruct) + [kloodia/lora-8b-physic](https://huggingface.co/kloodia/lora-8b-physic)
 
-
+## Configuration
 
 The following YAML configuration was used to produce this model:
 
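The YAML configuration itself falls outside the changed hunks and is not reproduced in this commit. For orientation, a mergekit config for a merge of this shape (base models with LoRA adapters applied via mergekit's `model+lora` syntax) might look roughly like the sketch below. Only the model and LoRA names come from the "Models Merged" list above; the merge method, base model, and dtype are illustrative assumptions, not taken from this commit.

```yaml
# Hypothetical sketch of a mergekit config; NOT the actual configuration
# used for this model. merge_method, base_model, and dtype are assumptions.
models:
  - model: NeverSleep/Lumimaid-v0.2-8B+kloodia/lora-8b-medic
  - model: mlabonne/Hermes-3-Llama-3.1-8B-lorablated+Azazelle/RP_Format_QuoteAsterisk_Llama3
  - model: vicgalle/Configurable-Llama-3.1-8B-Instruct+kloodia/lora-8b-physic
merge_method: model_stock
base_model: mlabonne/Hermes-3-Llama-3.1-8B-lorablated
dtype: bfloat16
```

A config like this would be run with `mergekit-yaml config.yml ./output-dir`; the `+` joins a base checkpoint with a LoRA that mergekit merges into it before the model-level merge.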