# MergeTrix-7B-GGUF

Quantized GGUF versions of MergeTrix-7B. The following files are provided:
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ------------ | ---- | ---- | ---------------- | -------- |
| mergetrix-7b.Q4_K_M.gguf | Q4_K_M | 4 | 4.37 GB |  | medium, balanced quality |
| mergetrix-7b.Q5_K_S.gguf | Q5_K_S | 5 | 5 GB |  | large, low quality loss |
| mergetrix-7b.Q5_K_M.gguf | Q5_K_M | 5 | 5.13 GB |  | large, very low quality loss |
| mergetrix-7b.Q6_K.gguf | Q6_K | 6 | 5.94 GB |  | very large, extremely low quality loss |
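As a rough sanity check on the sizes above, a quantized GGUF file weighs approximately `parameters × bits_per_weight / 8` bytes. The sketch below inverts that relation to recover the effective bits per weight implied by each file size. The 7.24B parameter count is an assumption (typical for Mistral-7B-family models, which most 7B merges are based on), not a figure from this repository.

```python
# Effective bits per weight implied by each GGUF file size above.
# PARAMS is an assumption: ~7.24e9, typical for Mistral-7B-family models.
PARAMS = 7.24e9

def implied_bpw(size_gb: float) -> float:
    """Bits per weight implied by a file of size_gb (decimal GB, 10^9 bytes)."""
    return size_gb * 1e9 * 8 / PARAMS

for name, size_gb in [("Q4_K_M", 4.37), ("Q5_K_S", 5.00),
                      ("Q5_K_M", 5.13), ("Q6_K", 5.94)]:
    print(f"{name}: ~{implied_bpw(size_gb):.2f} bits/weight")
```

Each effective value lands a little above the nominal bit width (Q6_K works out to roughly 6.56 bits/weight), which is expected: the k-quant formats store some tensors at higher precision than the headline quantization level.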
MergeTrix-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):