TheBloke committed
Commit 014b4f2
1 Parent(s): 1816da6

Upload README.md

Files changed (1): README.md (+5 -12)
README.md CHANGED
@@ -100,15 +100,8 @@ Below is an instruction that describes a task. Write a response that appropriate
 ```
 
 <!-- prompt-template end -->
-<!-- licensing start -->
-## Licensing
 
-The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license.
 
-As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
-
-In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [digitous' 13B HyperMantis](https://huggingface.co/digitous/13B-HyperMantis).
-<!-- licensing end -->
 <!-- compatibility_gguf start -->
 ## Compatibility
 
@@ -167,7 +160,7 @@ The following clients/libraries will automatically download models for you, prov
 
 ### In `text-generation-webui`
 
-Under Download Model, you can enter the model repo: TheBloke/13B-HyperMantis-GGUF and below it, a specific filename to download, such as: 13B-HyperMantis.q4_K_M.gguf.
+Under Download Model, you can enter the model repo: TheBloke/13B-HyperMantis-GGUF and below it, a specific filename to download, such as: 13B-HyperMantis.Q4_K_M.gguf.
 
 Then click Download.
 
@@ -182,7 +175,7 @@ pip3 install huggingface-hub>=0.17.1
 Then you can download any individual model file to the current directory, at high speed, with a command like this:
 
 ```shell
-huggingface-cli download TheBloke/13B-HyperMantis-GGUF 13B-HyperMantis.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+huggingface-cli download TheBloke/13B-HyperMantis-GGUF 13B-HyperMantis.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
 <details>
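For reference, the corrected `huggingface-cli` command in this hunk maps directly onto the `huggingface_hub` Python API; a minimal sketch, assuming the same `huggingface-hub>=0.17.1` install the README requires:

```python
# Minimal sketch: the huggingface-cli download above, done via the Python API.
# Assumes huggingface-hub>=0.17.1, as installed earlier in the README.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/13B-HyperMantis-GGUF",
    filename="13B-HyperMantis.Q4_K_M.gguf",
    local_dir=".",                 # download into the current directory
    local_dir_use_symlinks=False,  # mirrors the CLI flag
)
print(path)  # local path of the downloaded .gguf file
```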
@@ -205,7 +198,7 @@ pip3 install hf_transfer
 And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
 
 ```shell
-HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/13B-HyperMantis-GGUF 13B-HyperMantis.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/13B-HyperMantis-GGUF 13B-HyperMantis.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
 Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
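The same accelerator can also be enabled from Python rather than the shell, as long as the variable is set before `huggingface_hub` starts any downloads; a minimal sketch, assuming `hf_transfer` is installed as above:

```python
# Minimal sketch: enable hf_transfer from Python instead of the shell.
# Assumes pip3 install hf_transfer has been run, per the step above.
import os

# Must be set before huggingface_hub performs the download.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="TheBloke/13B-HyperMantis-GGUF",
    filename="13B-HyperMantis.Q4_K_M.gguf",
    local_dir=".",
)
```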
@@ -218,7 +211,7 @@ Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running
 Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
 
 ```shell
-./main -ngl 32 -m 13B-HyperMantis.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
+./main -ngl 32 -m 13B-HyperMantis.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
 ```
 
 Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
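The same flags carry over if you prefer to drive `llama.cpp` from Python via the `llama-cpp-python` bindings; a minimal sketch, assuming `pip install llama-cpp-python` (that package and its parameter names are an assumption, not part of this hunk):

```python
# Minimal sketch, assuming the llama-cpp-python package is installed.
# Mirrors the ./main flags above: 32 GPU layers, 4096-token context.
from llama_cpp import Llama

llm = Llama(
    model_path="./13B-HyperMantis.Q4_K_M.gguf",
    n_gpu_layers=32,  # set to 0 if you don't have GPU acceleration
    n_ctx=4096,
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short poem about llamas.\n\n### Response:"
)
out = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(out["choices"][0]["text"])
```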
@@ -258,7 +251,7 @@ CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
 from ctransformers import AutoModelForCausalLM
 
 # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
-llm = AutoModelForCausalLM.from_pretrained("TheBloke/13B-HyperMantis-GGUF", model_file="13B-HyperMantis.q4_K_M.gguf", model_type="llama", gpu_layers=50)
+llm = AutoModelForCausalLM.from_pretrained("TheBloke/13B-HyperMantis-GGUF", model_file="13B-HyperMantis.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
 
 print(llm("AI is going to"))
 ```
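Since this model expects the Alpaca prompt template shown in the first hunk above, a hedged extension of the ctransformers example that wraps a request in that template (the instruction text and generation parameters are illustrative):

```python
# Sketch: the README's ctransformers example combined with the Alpaca-style
# prompt template this README documents. Parameter values are illustrative.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/13B-HyperMantis-GGUF",
    model_file="13B-HyperMantis.Q4_K_M.gguf",
    model_type="llama",
    gpu_layers=50,  # set to 0 if no GPU acceleration is available
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what GGUF is in one sentence.\n\n### Response:"
)
print(llm(prompt, max_new_tokens=128, temperature=0.7))
```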
 