Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:

```

<!-- prompt-template end -->
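If you build prompts in code, the template above can be filled with an f-string; a minimal sketch (the example instruction is illustrative):

```python
# Fill the Alpaca-style prompt template shown above.
prompt = "Tell me about AI"  # example instruction

prompt_template = f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
"""
```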
<!-- licensing start -->
## Licensing

The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license.

As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.

In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Gryphe's MythoLogic 13B](https://huggingface.co/Gryphe/MythoLogic-13b).

<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/MythoLogic-13B-GGUF and below it, a specific filename to download, such as: mythologic-13b.Q4_K_M.gguf.
Then click Download.

### On the command line

First, install the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub>=0.17.1
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download TheBloke/MythoLogic-13B-GGUF mythologic-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
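The same download can also be scripted with the `huggingface_hub` library installed above; a minimal sketch:

```python
from huggingface_hub import hf_hub_download

# Download the GGUF file to the current directory and return its local path.
model_path = hf_hub_download(
    repo_id="TheBloke/MythoLogic-13B-GGUF",
    filename="mythologic-13b.Q4_K_M.gguf",
    local_dir=".",
)
print(model_path)
```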
<details>

To accelerate downloads on fast connections, install `hf_transfer`:

```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/MythoLogic-13B-GGUF mythologic-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
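If you script downloads from Python instead of the shell, the same flag can be set in-process; a minimal sketch (note it should be set before `huggingface_hub` is imported, since the flag is read at import time):

```python
import os

# hf_transfer is only used if this is set before huggingface_hub is imported.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="TheBloke/MythoLogic-13B-GGUF",
    filename="mythologic-13b.Q4_K_M.gguf",
    local_dir=".",
)
```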
</details>

## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 32 -m mythologic-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
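If you'd rather drive `llama.cpp` from Python, the `llama-cpp-python` bindings expose equivalent options; a minimal sketch mirroring the command above (this package is not otherwise covered in this README, so treat the parameter choices as assumptions):

```python
from llama_cpp import Llama

# Mirrors the ./main example: 32 GPU layers, 4096-token context.
llm = Llama(
    model_path="mythologic-13b.Q4_K_M.gguf",
    n_gpu_layers=32,
    n_ctx=4096,
)

output = llm(
    "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nTell me about AI\n\n### Response:",
    max_tokens=512,
    temperature=0.7,
    repeat_penalty=1.1,
)
print(output["choices"][0]["text"])
```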

Alternatively, you can run the model from Python with `ctransformers`. On macOS, install it with Metal GPU acceleration enabled (a plain `pip install ctransformers>=0.2.24` works on other systems):

```shell
CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
```

Then load and run the model:

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/MythoLogic-13B-GGUF", model_file="mythologic-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)

print(llm("AI is going to"))
```
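For interactive use, `ctransformers` can also stream tokens as they are generated; a minimal sketch reusing the `llm` object above:

```python
# Stream the completion token by token instead of waiting for the full text.
for token in llm("AI is going to", stream=True):
    print(token, end="", flush=True)
print()
```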