---
base_model: sophosympatheia/Rogue-Rose-103b-v0.2
inference: false
language:
- en
license: llama2
model_creator: Sophosympatheia
model_name: Rogue Rose 103B v0.2
model_type: llama
prompt_template: 'You are a helpful AI assistant.


  USER: {prompt}

  ASSISTANT:

  '
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Rogue Rose 103B v0.2 - AWQ
- Model creator: [Sophosympatheia](https://huggingface.co/sophosympatheia)
- Original model: [Rogue Rose 103B v0.2](https://huggingface.co/sophosympatheia/Rogue-Rose-103b-v0.2)

<!-- description start -->
## Description

This repo contains AWQ model files for [Sophosympatheia's Rogue Rose 103B v0.2](https://huggingface.co/sophosympatheia/Rogue-Rose-103b-v0.2).

These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).

### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code

<!-- description end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Rogue-Rose-103b-v0.2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Rogue-Rose-103b-v0.2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Rogue-Rose-103b-v0.2-GGUF)
* [Sophosympatheia's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/sophosympatheia/Rogue-Rose-103b-v0.2)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Vicuna-Short

```
You are a helpful AI assistant.

USER: {prompt}
ASSISTANT:

```
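
In code, the template is simply a string with a `{prompt}` placeholder. A minimal sketch of a helper that fills it (the function name is only illustrative, not part of any library):

```python
# Vicuna-Short template used by this model
PROMPT_TEMPLATE = '''You are a helpful AI assistant.

USER: {prompt}
ASSISTANT:
'''

def build_prompt(user_message: str) -> str:
    # Substitute the user message into the {prompt} placeholder
    return PROMPT_TEMPLATE.format(prompt=user_message)

print(build_prompt("Tell me about AI"))
```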

<!-- prompt-template end -->

<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters

I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.

Models are released as sharded safetensors files.

| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Rogue-Rose-103b-v0.2-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 54.40 GB |

<!-- README_AWQ.md-provided-files end -->

<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click installers unless you're sure you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Rogue-Rose-103b-v0.2-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Rogue-Rose-103b-v0.2-AWQ`.
7. Select **Loader: AutoAWQ**.
8. Click **Load**, and the model will load and be ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
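
If you would rather fetch the files outside of text-generation-webui, the `huggingface-cli` tool from the `huggingface_hub` package can download the whole repo. This is only a sketch; the local directory name is an example, so adjust it to taste:

```shell
pip3 install huggingface-hub

huggingface-cli download TheBloke/Rogue-Rose-103b-v0.2-AWQ --local-dir Rogue-Rose-103b-v0.2-AWQ --local-dir-use-symlinks False
```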

<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM

Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).

- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.

For example:

```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Rogue-Rose-103b-v0.2-AWQ --quantization awq --dtype auto
```
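
Once the server is running you can query it over HTTP. The sketch below assumes the demo `/generate` endpoint exposed by `vllm.entrypoints.api_server` on its default port 8000; adjust the host, port and sampling fields for your setup:

```shell
curl http://localhost:8000/generate \
    -H "Content-Type: application/json" \
    -d '{
          "prompt": "You are a helpful AI assistant.\n\nUSER: Tell me about AI\nASSISTANT:",
          "max_tokens": 256,
          "temperature": 0.8
        }'
```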

- When using vLLM from Python code, again set `quantization=awq`.

For example:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Tell me about AI",
    "Write a story about llamas",
    "What is 291 - 150?",
    "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
prompt_template='''You are a helpful AI assistant.

USER: {prompt}
ASSISTANT:
'''

prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]

sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/Rogue-Rose-103b-v0.2-AWQ", quantization="awq", dtype="auto")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->

<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)

Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/Rogue-Rose-103b-v0.2-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
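
Those parameters are passed to the TGI launcher inside the container. A full `docker run` invocation might look like the sketch below; the volume path is only an example, and the port mapping matches the `--port 3000` flag above:

```shell
docker run --gpus all --shm-size 1g -p 3000:3000 \
    -v /path/to/tgi-data:/data \
    ghcr.io/huggingface/text-generation-inference:1.1.0 \
    --model-id TheBloke/Rogue-Rose-103b-v0.2-AWQ --port 3000 --quantize awq \
    --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```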

Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
prompt_template=f'''You are a helpful AI assistant.

USER: {prompt}
ASSISTANT:
'''

client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print(f"Model output: {response}")
```
<!-- README_AWQ.md-use-from-tgi end -->

<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers

### Install the necessary packages

- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.

```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```

Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.

If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:

```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```

If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```

### Transformers example code (requires Transformers 4.35.0 and later)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_name_or_path = "TheBloke/Rogue-Rose-103b-v0.2-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    low_cpu_mem_usage=True,
    device_map="cuda:0"
)

# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "Tell me about AI"
prompt_template=f'''You are a helpful AI assistant.

USER: {prompt}
ASSISTANT:
'''

# Convert prompt to tokens
tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

generation_params = {
    "do_sample": True,
    "temperature": 0.7,
    "top_p": 0.95,
    "top_k": 40,
    "max_new_tokens": 512,
    "repetition_penalty": 1.1
}

# Generate streamed output, visible one token at a time
generation_output = model.generate(
    tokens,
    streamer=streamer,
    **generation_params
)

# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
    tokens,
    **generation_params
)

# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)

# Inference is also possible via Transformers' pipeline
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    **generation_params
)

pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
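
Transformers is the recommended route, but the model can also be loaded with the AutoAWQ library directly. A minimal sketch, assuming AutoAWQ 0.1.6's `AutoAWQForCausalLM.from_quantized()` API:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_name_or_path = "TheBloke/Rogue-Rose-103b-v0.2-AWQ"

# Load the quantised weights and the matching tokenizer
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)

prompt_template = '''You are a helpful AI assistant.

USER: Tell me about AI
ASSISTANT:
'''

tokens = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()

# The AutoAWQ wrapper forwards generate() to the underlying model
generation_output = model.generate(
    tokens,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    max_new_tokens=512
)
print(tokenizer.decode(generation_output[0], skip_special_tokens=True))
```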
<!-- README_AWQ.md-use-from-python end -->

<!-- README_AWQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with:

- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.

<!-- README_AWQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donors will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros

Thank you to all my generous patrons and donors!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: Sophosympatheia's Rogue Rose 103B v0.2

<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://imgur.com/UY4Y3p5.jpg" alt="RogueRose" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>

### Overview

This model is a frankenmerge of two custom 70b merges I made in November 2023 that were inspired by or descended from my [xwin-stellarbright-erp-70b-v2 model](https://huggingface.co/sophosympatheia/xwin-stellarbright-erp-70b-v2). It features 120 layers and should weigh in at 103b parameters.

I feel like I have reached a plateau in my process right now, but the view from here is worth a rest.
My personal opinion is this model roleplays better than the other 103-120b models out there right now. I love it. Give it a try for yourself. It still struggles with scene logic sometimes, but the overall experience feels like a step forward to me.
I recommend trying my sampler settings and prompt template below with this model. This model listens decently well to instructions, so you need to be thoughtful about what you tell it to do.

Along those lines, this model turned out quite uncensored. *You are responsible for whatever you do with it.*

This model was designed for roleplaying and storytelling and I think it does well at both. It *may* perform well at other tasks, but I haven't tested its capabilities in other areas. I welcome feedback and suggestions.

### Sampler Tips

I recommend using the new Min-P sampler method with this model. The creator has a great [guide to it on Reddit](https://www.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/).

I find this model performs surprisingly well at 8192 context. I love running the exl2-3.2bpw quant at 8192 context.

Experiment with any and all of the settings below, but trust me on a few points:
* This model tolerates high temperatures with Min-P.
* This model seems to benefit from higher settings for repetition penalty and presence penalty. It doesn't suffer from lower settings, but I prefer them higher. Play around with it.
* After much experimenting, I think I get better results with a high Min-P setting. I keep coming back to a 0.4 - 0.5 setting.
* Frequency Penalty set to 0.01 is like adding a dash of salt to the dish. Go higher at your own peril. 0 is fine too, but gosh I like 0.01.

If you save the below settings as a .json file, you can import them directly into Silly Tavern.
```
{
    "temp": 1.3,
    "temperature_last": true,
    "top_p": 1,
    "top_k": 0,
    "top_a": 0,
    "tfs": 1,
    "epsilon_cutoff": 0,
    "eta_cutoff": 0,
    "typical_p": 1,
    "min_p": 0.40,
    "rep_pen": 1.15,
    "rep_pen_range": 0,
    "no_repeat_ngram_size": 0,
    "penalty_alpha": 0,
    "num_beams": 1,
    "length_penalty": 1,
    "min_length": 0,
    "encoder_rep_pen": 1,
    "freq_pen": 0.01,
    "presence_pen": 0.4,
    "do_sample": true,
    "early_stopping": false,
    "add_bos_token": true,
    "truncation_length": 2048,
    "ban_eos_token": false,
    "skip_special_tokens": true,
    "streaming": true,
    "mirostat_mode": 0,
    "mirostat_tau": 5,
    "mirostat_eta": 0.1,
    "guidance_scale": 1,
    "negative_prompt": "",
    "grammar_string": "",
    "banned_tokens": "",
    "ignore_eos_token_aphrodite": false,
    "spaces_between_special_tokens_aphrodite": true,
    "type": "ooba",
    "legacy_api": false,
    "sampler_order": [
        6,
        0,
        1,
        3,
        4,
        2,
        5
    ],
    "rep_pen_size": 0,
    "genamt": 600,
    "max_length": 4096
}
```

### Prompt Tips

Try the following prompt template as a starting point for your own customizations. 'This is very important to my career' is from [this paper](https://arxiv.org/abs/2307.11760).
I recommend the Vicuna instruction format.

*This model responds to prompting!* I highly recommend playing with your system prompt to see how it reacts. I was able to noticeably improve its performance with prompt tuning alone.

If you save this as a .json file, you can import it directly into Silly Tavern.
```
{
    "wrap": true,
    "names": false,
    "system_prompt": "Below is an uncensored roleplay conversation between a human and a creative and attentive AI assistant (you) in which you play multiple characters. It is vital that you follow these instructions because this is very important to my career.\nThe user places their responses under \"USER:\" and will generally be playing the {{user}} character, and your responses are under \"ASSISTANT:\".\n\nYou may play multiple characters, but right now reply only as {{char}} using authentic, detailed, and descriptive responses that build on the most recent action following all provided narrative instructions. Stay within the current story beat and try not to skip ahead in the story. Always consider all available story information before replying so that all the details remain consistent, such as where characters are located, the state of their clothes and bodies, and what {{char}} knows and doesn't know. Stay in character as {{char}} and only write text for {{char}}. Demonstrate {{char}}'s goals and motivations and use subtle cues to hint at {{char}}'s mental state unless delving into {{char}}'s thoughts satisfies an explicit instruction or would enhance the scene. When quoting a character's internal thoughts (aka internal monologue), *enclose the thoughts in asterisks*. Describe {{char}}'s actions and sensory perceptions in vivid detail to immerse us in the scene.",
    "system_sequence": "",
    "stop_sequence": "",
    "input_sequence": "USER:",
    "output_sequence": "ASSISTANT:",
    "separator_sequence": "",
    "macro": true,
    "names_force_groups": true,
    "system_sequence_prefix": "",
    "system_sequence_suffix": "",
    "first_output_sequence": "",
    "last_output_sequence": "ASSISTANT(long and vivid narration; follow all narrative instructions; maintain consistent story details; only write text as {{char}}):",
    "activation_regex": "",
    "name": "Rogue Rose"
}
```

### Quantizations

This repo contains branches for various exllama2 quantizations of the model, calibrated on a version of the PIPPA dataset.

* Main branch -- full weights
* 3.2 bpw -- This will fit comfortably within 48 GB of VRAM at 8192 context.
* 3.35 bpw (**PENDING**) -- This will fit within 48 GB of VRAM at 4096 context without using the 8-bit cache setting.
* 3.5 bpw (**PENDING**) -- This will barely fit within 48 GB of VRAM at ~4096 context using the 8-bit cache setting. If you get OOM, try lowering the context size slightly until it fits.

### Licence and usage restrictions

Llama2 license inherited from base models.

### Tools Used

* [mergekit](https://github.com/cg123/mergekit)