mradermacher committed
Commit dae0d55
Parent(s): d444d9e

auto-patch README.md

Files changed (1)
  1. README.md +4 -1
README.md CHANGED
@@ -4,6 +4,10 @@ datasets:
 - argilla/ultrafeedback-binarized-preferences-cleaned
 language:
 - en
+- de
+- es
+- fr
+- it
 library_name: transformers
 license: apache-2.0
 quantized_by: mradermacher
@@ -47,7 +51,6 @@ more details, including on how to concatenate multi-part files.
 | [GGUF](https://huggingface.co/mradermacher/notux-8x7b-v1-GGUF/resolve/main/notux-8x7b-v1.Q6_K.gguf) | Q6_K | 38.6 | very good quality |
 | [PART 1](https://huggingface.co/mradermacher/notux-8x7b-v1-GGUF/resolve/main/notux-8x7b-v1.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/notux-8x7b-v1-GGUF/resolve/main/notux-8x7b-v1.Q8_0.gguf.part2of2) | Q8_0 | 49.8 | fast, best quality |
 
-
 Here is a handy graph by ikawrakow comparing some lower-quality quant
 types (lower is better):