Update README.md
README.md CHANGED
@@ -14,7 +14,7 @@ inference: false
 <p><a href="https://discord.gg/UBgz4VXf">Chat & support: my new Discord server</a></p>
 </div>
 <div style="display: flex; flex-direction: column; align-items: flex-end;">
-<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute?
+<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
 </div>
 </div>
 
@@ -25,12 +25,6 @@ This repo contains an experimantal GPTQ 4bit model for [Falcon-40B-Instruct](htt
 
 It is the result of quantising to 4bit using [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ).
 
-## Need support? Want to discuss? I now have a Discord!
-
-Join me at: https://discord.gg/UBgz4VXf
-
-Want to support me and help pay my cloud computing bill? I also now have a Patreon! https://www.patreon.com/TheBlokeAI
-
 ## EXPERIMENTAL
 
 Please note this is an experimental GPTQ model. Support for it is currently quite limited.
 
@@ -133,15 +127,26 @@ It was created with no groupsize to reduce VRAM requirements as much as possible
 * Does not work with any version of GPTQ-for-LLaMa
 * Parameters: Groupsize = 64. No act-order.
 
-##
-
-
-
-
-
-
+## Discord
+
+For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/UBgz4VXf)
+
+## Thanks, and how to contribute.
+
+Thanks to the [chirper.ai](https://chirper.ai) team!
+
+I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
+
+If you're able and willing to contribute, it'd be most gratefully received and will help me to keep providing models, and work on new AI projects.
+
+Donaters will get priority support on any and all AI/LLM/model questions, plus other benefits.
+
+* Patreon: https://patreon.com/TheBlokeAI
 * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
+**Patreon special mentions**: Aemon Algiz; Talal Aujan; Jonathan Leane; Illia Dulskyi; Khalefa Al-Ahmad;
+senxiiz. Thank you all, and to all my other generous patrons and donaters.
+
 # ✨ Original model card: Falcon-40B-Instruct
 
 # ✨ Falcon-40B-Instruct