---
inference: false
license: other
---

<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

# Open BMB's UltraLM 13B GPTQ

These files are GPTQ 4-bit model files for [Open BMB's UltraLM 13B](https://huggingface.co/openbmb/UltraLM-13b).

They are the result of quantising the model to 4-bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/UltraLM-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/UltraLM-13B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/UltraLM-13B-fp16)

## How to easily download and use this model in text-generation-webui

Please make sure you're using the latest version of text-generation-webui. (A scripted alternative to the download steps is sketched after this list.)

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/UltraLM-13B-GPTQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished, it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `UltraLM-13B-GPTQ`.
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
   * Note that you no longer need to, and should not, set manual GPTQ parameters. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
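
If you prefer to script the download instead of using the UI, here is a minimal sketch using `huggingface_hub` (an assumption — the steps above don't require it; the local directory is just an example):

```python
# Minimal sketch: fetch the repo files with huggingface_hub instead of the UI.
# pip install huggingface_hub
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/UltraLM-13B-GPTQ",
    local_dir="models/UltraLM-13B-GPTQ",  # example path, adjust to your setup
)
```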

## How to use this GPTQ model from Python code

First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:

`pip install auto-gptq`

Then try the following example code:

```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/UltraLM-13B-GPTQ"
model_basename = "ultralm-13b-GPTQ-4bit-128g.no-act.order"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=False,
        device="cuda:0",
        use_triton=use_triton,
        quantize_config=None)  # read from the repo's quantize_config.json

# Note: check the prompt template is correct for this model.
prompt = "Tell me about AI"
prompt_template = f'''USER: {prompt}
ASSISTANT:'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```
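
As a small optional variant (not in the original example), you can stream tokens as they are generated with transformers' `TextStreamer`, assuming a recent transformers version and reusing `model`, `tokenizer` and `input_ids` from the code above:

```python
# Optional: stream tokens to stdout as they are generated.
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True)
model.generate(inputs=input_ids, streamer=streamer,
               temperature=0.7, max_new_tokens=512)
```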

## Provided files

**ultralm-13b-GPTQ-4bit-128g.no-act.order.safetensors**

This will work with AutoGPTQ, ExLlama, and the CUDA versions of GPTQ-for-LLaMa. There are reports of issues with the Triton mode of recent GPTQ-for-LLaMa. If you have issues, please use AutoGPTQ instead.

It was created with group_size 128 to increase inference accuracy, but without --act-order (desc_act), to increase compatibility and improve inference speed.

* `ultralm-13b-GPTQ-4bit-128g.no-act.order.safetensors`
  * Works with AutoGPTQ in CUDA or Triton modes.
  * LLaMa models also work with [ExLlama](https://github.com/turboderp/exllama), which usually provides much higher performance, and uses less VRAM, than AutoGPTQ.
  * Works with GPTQ-for-LLaMa in CUDA mode. May have issues with GPTQ-for-LLaMa Triton mode.
  * Works with text-generation-webui, including one-click-installers.
  * Parameters: Groupsize = 128. Act Order / desc_act = False.
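
As a quick sanity check, here is a minimal sketch (assuming `huggingface_hub` is installed; not part of the original card) that downloads the repo's `quantize_config.json` and prints the parameters listed above:

```python
# Sanity check: fetch quantize_config.json and inspect the GPTQ parameters.
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download("TheBloke/UltraLM-13B-GPTQ", "quantize_config.json")
with open(path) as f:
    config = json.load(f)

# Expect bits = 4, group_size = 128, desc_act = False, per the list above.
print(config)
```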

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

**Patreon special mentions**: zynix, ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost, Nathan LeClaire, Iucharbius, Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex, terasurfer, Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski.

Thank you to all my generous patrons and donators!

<!-- footer end -->

# Original model card: Open BMB's UltraLM 13B

# UltraLM-13b

<!-- Provide a quick summary of what the model is/does. -->

These are the delta weights for UltraLM-13b, a chat language model trained on [UltraChat](https://github.com/thunlp/UltraChat).


## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

The model is fine-tuned from LLaMA-13b with a multi-turn chat-format template, as shown below:

```
User: instruction 1<eos_token>
Assistant: response 1<eos_token>
User: instruction 2<eos_token>
Assistant: response 2<eos_token>
...
```
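
To make the template concrete, here is a hypothetical helper (not part of the original card) that assembles a multi-turn prompt in this format; it assumes newline-separated turns and LLaMA's `</s>` as the eos token, neither of which the card states explicitly:

```python
# Hypothetical helper: build a multi-turn prompt in the template above.
# Assumes turns are newline-separated and eos_token is LLaMA's "</s>".
def build_prompt(turns, next_user_input, eos_token="</s>"):
    parts = []
    for user_msg, assistant_msg in turns:
        parts.append(f"User: {user_msg}{eos_token}")
        parts.append(f"Assistant: {assistant_msg}{eos_token}")
    parts.append(f"User: {next_user_input}{eos_token}")
    parts.append("Assistant:")  # the model continues from here
    return "\n".join(parts)

print(build_prompt([("Hi!", "Hello! How can I help?")], "Tell me about AI"))
```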

- **License:** UltraLM is based on LLaMA and should be used under LLaMA's [model license](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md).
- **Finetuned from model:** LLaMA-13b
- **Finetuned on data:** [UltraChat](https://github.com/thunlp/UltraChat)

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** [UltraChat](https://github.com/thunlp/UltraChat)
- **Paper:** [arXiv](https://arxiv.org/abs/2305.14233)
- **Demo:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
To use this model, you first need to [recover](https://github.com/thunlp/UltraChat/tree/main/UltraLM) the full model from the delta weights, then perform inference following the template below:

```
[Optional]User: system prompt<eos_token>
User: user input<eos_token>
Assistant:
```
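
The UltraChat repository linked above provides the official recovery script, which you should prefer. Purely to illustrate the idea, a generic sketch of delta-weight recovery, under the assumption that the delta was computed as finetuned minus base, parameter-wise:

```python
# Generic sketch of delta-weight recovery: recovered = base + delta.
# Assumption: published weights are (finetuned - base), parameter-wise.
# Use the official script in the UltraChat repository in practice.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "path/to/llama-13b", torch_dtype=torch.float16)    # your local base weights
delta = AutoModelForCausalLM.from_pretrained(
    "openbmb/UltraLM-13b", torch_dtype=torch.float16)  # the delta weights

base_sd = base.state_dict()
delta_sd = delta.state_dict()
for name in delta_sd:
    delta_sd[name] += base_sd[name]                    # add the base back in

delta.save_pretrained("ultralm-13b-recovered")
```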