---
base_model: Undi95/Llamix2-MLewd-4x13B
inference: false
license: cc-by-nc-4.0
model_creator: Undi
model_name: Llamix2 MLewd 4X13B
model_type: mixtral
prompt_template: 'Below is an instruction that describes a task. Write a response
  that appropriately completes the request.


  ### Instruction:

  {prompt}


  ### Response:

  '
quantized_by: TheBloke
tags:
- not-for-all-audiences
- nsfw
---
<!-- markdownlint-disable MD041 -->

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Llamix2 MLewd 4X13B - GGUF
- Model creator: [Undi](https://huggingface.co/Undi95)
- Original model: [Llamix2 MLewd 4X13B](https://huggingface.co/Undi95/Llamix2-MLewd-4x13B)

<!-- description start -->
## Description

This repo contains GGUF format model files for [Undi's Llamix2 MLewd 4X13B](https://huggingface.co/Undi95/Llamix2-MLewd-4x13B).

<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

### Mixtral GGUF

Support for Mixtral was merged into llama.cpp on December 13th.

These Mixtral GGUFs are known to work in:

* llama.cpp as of December 13th
* KoboldCpp 1.52 and later
* LM Studio 0.2.9 and later
* llama-cpp-python 0.2.23 and later

Other clients/libraries not listed above may not yet work.

<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available

* [GPTQ models for GPU inference, with multiple quantisation parameter options](https://huggingface.co/TheBloke/Llamix2-MLewd-4x13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llamix2-MLewd-4x13B-GGUF)
* [Undi's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Undi95/Llamix2-MLewd-4x13B)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:

```

<!-- prompt-template end -->
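
As a concrete illustration, here is a minimal Python sketch of filling this template before sending it to whichever client you use (the instruction text is just an example):

```python
# Alpaca-style template used by this model
PROMPT_TEMPLATE = """Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
"""

# Substitute the user's instruction into the template
prompt = PROMPT_TEMPLATE.format(prompt="Write a story about llamas.")
```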


<!-- compatibility_gguf start -->
## Compatibility

These Mixtral GGUFs are compatible with llama.cpp from December 13th onwards. Other clients/libraries may not work yet.

## Explanation of quantisation methods

<details>
  <summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
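
As a sanity check on these numbers, here is a minimal sketch of the Q4_K arithmetic (assuming the usual layout of 4-bit weights, 6-bit block scales/mins and two fp16 super-block scales):

```python
# Q4_K super-block: 8 blocks x 32 weights = 256 weights
weights = 8 * 32
bits = weights * 4     # 4-bit quantized weights
bits += 8 * (6 + 6)    # one 6-bit scale and one 6-bit min per block
bits += 2 * 16         # fp16 super-block scale and min
print(bits / weights)  # 4.5 bpw
```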

Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->

<!-- README_GGUF.md-provided-files start -->
## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llamix2-mlewd-4x13b.Q4_0.gguf](https://huggingface.co/TheBloke/Llamix2-MLewd-4x13B-GGUF/blob/main/llamix2-mlewd-4x13b.Q4_0.gguf) | Q4_0 | 4 | 21.70 GB | 24.20 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llamix2-mlewd-4x13b.Q4_1.gguf](https://huggingface.co/TheBloke/Llamix2-MLewd-4x13B-GGUF/blob/main/llamix2-mlewd-4x13b.Q4_1.gguf) | Q4_1 | 4 | 24.10 GB | 26.60 GB | legacy; small, substantial quality loss - prefer using Q3_K_L |
| [llamix2-mlewd-4x13b.Q5_0.gguf](https://huggingface.co/TheBloke/Llamix2-MLewd-4x13B-GGUF/blob/main/llamix2-mlewd-4x13b.Q5_0.gguf) | Q5_0 | 5 | 26.49 GB | 28.99 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llamix2-mlewd-4x13b.Q5_1.gguf](https://huggingface.co/TheBloke/Llamix2-MLewd-4x13B-GGUF/blob/main/llamix2-mlewd-4x13b.Q5_1.gguf) | Q5_1 | 5 | 28.89 GB | 31.39 GB | legacy; medium, low quality loss - prefer using Q5_K_M |
| [llamix2-mlewd-4x13b.Q8_0.gguf](https://huggingface.co/TheBloke/Llamix2-MLewd-4x13B-GGUF/blob/main/llamix2-mlewd-4x13b.Q8_0.gguf) | Q8_0 | 8 | 40.91 GB | 43.41 GB | very large, extremely low quality loss - not recommended |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
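
A rough rule of thumb, read off the table above (each Max RAM figure is the file size plus about 2.5 GB of overhead) - treat this as an estimate, not a guarantee:

```python
def estimate_max_ram_gb(file_size_gb: float) -> float:
    """Approximate peak RAM with no GPU offloading: file size + ~2.5 GB overhead."""
    return file_size_gb + 2.5

print(estimate_max_ram_gb(26.49))  # ~28.99 GB for the Q5_0 file
```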



<!-- README_GGUF.md-provided-files end -->

<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: TheBloke/Llamix2-MLewd-4x13B-GGUF and below it, a specific filename to download, such as: llamix2-mlewd-4x13b.Q5_0.gguf.

Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download TheBloke/Llamix2-MLewd-4x13B-GGUF llamix2-mlewd-4x13b.Q5_0.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage (click to read)</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download TheBloke/Llamix2-MLewd-4x13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q5*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Llamix2-MLewd-4x13B-GGUF llamix2-mlewd-4x13b.Q5_0.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
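
If you prefer to stay in Python, a minimal sketch using `hf_hub_download` from the same `huggingface-hub` library does the equivalent of the single-file command above:

```python
from huggingface_hub import hf_hub_download

# Downloads one quant file into the current directory and returns its local path
model_path = hf_hub_download(
    repo_id="TheBloke/Llamix2-MLewd-4x13B-GGUF",
    filename="llamix2-mlewd-4x13b.Q5_0.gguf",
    local_dir=".",
)
print(model_path)
```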
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m llamix2-mlewd-4x13b.Q5_0.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).

## How to run in `text-generation-webui`

Note that text-generation-webui may not yet be compatible with Mixtral GGUFs. Please check compatibility first.

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).

## How to run from Python code

You can use GGUF models from Python using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) version 0.2.23 and later.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./llamix2-mlewd-4x13b.Q5_0.gguf",  # Download the model file first
  n_ctx=32768,  # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,  # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
  "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:",  # Prompt
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],  # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True  # Whether to echo the prompt
)

# Chat Completion API

llm = Llama(model_path="./llamix2-mlewd-4x13b.Q5_0.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```
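
For interactive use you will usually want to stream tokens as they arrive rather than wait for the full completion. A minimal sketch (same `llm` object as above; `stream=True` makes the call return an iterator of chunks):

```python
# Stream the completion token by token
for chunk in llm(
    "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nWrite a story about llamas.\n\n### Response:",
    max_tokens=512,
    stream=True
):
    print(chunk["choices"][0]["text"], end="", flush=True)
```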

## How to use with LangChain

Here is a guide on using llama-cpp-python with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
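
As a starting point, a minimal sketch of wiring this GGUF file into LangChain via its `LlamaCpp` wrapper (parameter values mirror the earlier examples; see the guide above for full details):

```python
from langchain_community.llms import LlamaCpp

# Point LangChain's llama.cpp wrapper at the downloaded GGUF file
llm = LlamaCpp(
    model_path="./llamix2-mlewd-4x13b.Q5_0.gguf",
    n_ctx=32768,
    n_gpu_layers=35,  # Set to 0 if you have no GPU acceleration
    temperature=0.7,
)

print(llm.invoke("Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nWrite a story about llamas.\n\n### Response:"))
```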

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros


Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: Undi's Llamix2 MLewd 4X13B


![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/Y9cKDc4heP6TcG4ZjcwPQ.png)

THIS MODEL IS MADE FOR LEWD

SEXUAL, CRUDE AND KINKY CONTENT IN OUTPUT CAN AND WILL HAPPEN. YOU'RE WARNED

This is a 4x13B MoE Llama2 model, one of the first (if not the first!).

As always, a big thanks to [Charles Goddard](https://huggingface.co/chargoddard), who is the brain behind all of these new Mixtral models, and his amazing tools!

WARNING: ALL THE "K" GGUF QUANTS OF MIXTRAL MODELS SEEM TO BE [BROKEN](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/TvjEP14ps7ZUgJ-0-mhIX.png), PREFER Q4_0, Q5_0 OR Q8_0!

<!-- description start -->
## Description

This repo contains fp16 files of Llamix2-MLewd-4x13B, a very hot MoE of Llama2 models.

<!-- description end -->
<!-- description start -->
## Models used

The list of models used and their activator/theme can be found [here](https://huggingface.co/Undi95/Llamix2-MLewd-4x13B/blob/main/config.yaml).

<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:

```

Special thanks to Sushi and Shena ♥

If you want to support me, you can [here](https://ko-fi.com/undiai).

<!-- original-model-card end -->