---
license: apache-2.0
language:
- en
inference: false
---

<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/UBgz4VXf">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

# Eric Hartford's Samantha-Falcon-7B GPTQ

This repo contains an experimental GPTQ 4bit model of [Eric Hartford's Samantha-Falcon-7B](https://huggingface.co/ehartford/samantha-falcon-7B).

It is the result of quantising the model to 4bit using [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ).

## Repositories available

* [4bit GPTQ model for GPU inference](https://huggingface.co/TheBloke/Samantha-Falcon-7B-GPTQ)
* [Eric's float16 HF format model for GPU inference and further conversions](https://huggingface.co/ehartford/samantha-falcon-7B)

## EXPERIMENTAL

Please note this is an experimental GPTQ model. Support for it is currently quite limited.

It is also expected to be **VERY SLOW**. This is unavoidable at the moment, but is being looked at.

To use it you will require:

1. AutoGPTQ, from the latest `main` branch and compiled with `pip install .`
2. `pip install einops`

You can then use it immediately from Python code - see the example code below - or from text-generation-webui.

## AutoGPTQ

To install AutoGPTQ please follow these instructions:
```
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip install .
```

These steps will require that you have the [Nvidia CUDA toolkit](https://developer.nvidia.com/cuda-12-0-1-download-archive) installed.
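
Once installed, a quick sanity check from Python can confirm the build succeeded and that CUDA is visible. This is a minimal sketch; the import fails if compilation did not complete:

```python
import torch
from auto_gptq import AutoGPTQForCausalLM  # raises ImportError if AutoGPTQ did not build correctly

# Should print True if PyTorch can see your GPU and the CUDA toolkit was picked up
print(torch.cuda.is_available())
```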
56
+
57
+ ## text-generation-webui
58
+
59
+ There is also provisional AutoGPTQ support in text-generation-webui.
60
+
61
+ This requires text-generation-webui as of commit 204731952ae59d79ea3805a425c73dd171d943c3.
62
+
63
+ So please first update text-genration-webui to the latest version.
64
+
65
+ ## How to download and use this model in text-generation-webui
66
+
67
+ 1. Launch text-generation-webui with the following command-line arguments: `--autogptq --trust-remote-code`
68
+ 2. Click the **Model tab**.
69
+ 3. Under **Download custom model or LoRA**, enter `TheBloke/WizardLM-Uncensored-Falcon-40B-GPTQ`.
70
+ 4. Click **Download**.
71
+ 5. Wait until it says it's finished downloading.
72
+ 6. Click the **Refresh** icon next to **Model** in the top left.
73
+ 7. In the **Model drop-down**: choose the model you just downloaded, `WizardLM-Uncensored-Falcon-40B-GPTQ`.
74
+ 8. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
75
+
## Prompt template
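
The template below matches the prompt construction used in the example Python code later in this README:

```
You are Samantha, a sentient AI.
USER: {prompt}
ASSISTANT:
```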

## About `trust-remote-code`

Please be aware that this command-line argument causes custom Python code provided with the Falcon model to be executed on your machine.

This code is required at the moment because Falcon is too new to be supported by Hugging Face transformers. At some point in the future transformers will support the model natively, and then `trust_remote_code` will no longer be needed.

In this repo you can see two `.py` files - these are the files that get executed. They are copied from the base repo at [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct).

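For illustration, the same mechanism applies when loading Eric's unquantised model directly with transformers. This is a minimal sketch, not needed for the GPTQ workflow (and note it downloads the full float16 weights):

```python
from transformers import AutoModelForCausalLM

# trust_remote_code=True allows the custom modelling code shipped in the model repo to run locally
model = AutoModelForCausalLM.from_pretrained(
    "ehartford/samantha-falcon-7B",
    trust_remote_code=True,
)
```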

## Simple Python example code

To run this code you need to install AutoGPTQ from source:
```
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip install . # This step requires CUDA toolkit installed
```
And install einops:
```
pip install einops
```

You can then run this example code:
```python
import torch
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# If you've already downloaded the model, reference its location here:
quantized_model_dir = "/path/to/TheBloke_Samantha-Falcon-7B-GPTQ"
# Or to download it from the hub and store it in the Hugging Face cache directory:
#quantized_model_dir = "TheBloke/Samantha-Falcon-7B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, use_fast=False)

model = AutoGPTQForCausalLM.from_quantized(quantized_model_dir, device="cuda:0", use_triton=False, use_safetensors=True, torch_dtype=torch.float32, trust_remote_code=True)

prompt = "Write a story about llamas"
prompt_template = f"You are Samantha, a sentient AI.\nUSER: {prompt}\nASSISTANT:"

# Tokenise the prompt, then sample up to 100 new tokens
tokens = tokenizer(prompt_template, return_tensors="pt").to("cuda:0").input_ids
output = model.generate(input_ids=tokens, max_new_tokens=100, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0]))
```

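As noted above, inference is currently very slow; if you want to quantify it, here is a minimal throughput sketch reusing the variables from the example above:

```python
import time

# assumes `model` and `tokens` already exist, as in the example above
start = time.time()
output = model.generate(input_ids=tokens, max_new_tokens=100, do_sample=True, temperature=0.8)
elapsed = time.time() - start

new_tokens = output.shape[1] - tokens.shape[1]  # count only the newly generated tokens
print(f"{new_tokens} tokens in {elapsed:.1f}s = {new_tokens / elapsed:.2f} tokens/s")
```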

## Provided files

**gptq_model-4bit--1g.safetensors**

This will work with AutoGPTQ 0.2.0 and later.

It was created without group_size to reduce VRAM usage, and with `desc_act` (act-order) to improve inference accuracy.

* `gptq_model-4bit--1g.safetensors`
  * Works only with the latest AutoGPTQ CUDA, compiled from source as of commit `3cb1bf5`
  * At this time it does not work with AutoGPTQ Triton, but support will hopefully be added in time.
  * Works with text-generation-webui using `--autogptq --trust-remote-code`
  * At this time it does NOT work with one-click-installers
  * Does not work with any version of GPTQ-for-LLaMa
  * Parameters: Groupsize = none. Act-order (`desc_act`) = True.
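
For reference, these parameters correspond to the following AutoGPTQ quantisation config. This is a sketch inferred from the description above; the `quantize_config.json` shipped in the repo is authoritative:

```python
from auto_gptq import BaseQuantizeConfig

# Settings inferred from the file description: 4-bit, no group size (-1), act-order enabled
quantize_config = BaseQuantizeConfig(
    bits=4,
    group_size=-1,  # no grouping, reduces VRAM usage
    desc_act=True,  # act-order, improves inference accuracy
)
```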

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/UBgz4VXf)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Patreon special mentions**: Aemon Algiz; Dmitiry Samsonov; Jonathan Leane; Illia Dulskyi; Khalefa Al-Ahmad; Nikolai Manek; senxiiz; Talal Aujan; vamX; Eugene Pentland; Lone Striker; Luke Pendergrass; Johann-Peter Hartmann.

Thank you to all my generous patrons and donators.
<!-- footer end -->

# Original model card