Update README.md
README.md CHANGED
@@ -1,6 +1,10 @@
---
inference: false
license: other
+datasets:
+- ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split
+language:
+- en
---

<!-- header start -->
@@ -34,6 +38,15 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/WizardLM-33B-V1.0-Uncensored-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/WizardLM-33b-V1.0-Uncensored)

+## Prompt template
+
+```
+You are a helpful AI assistant.
+
+USER: <prompt>
+ASSISTANT:
+```
+
<!-- compatibility_ggml start -->
## Compatibility

@@ -87,7 +100,7 @@ Refer to the Provided Files table below to see what files use which methods, and
I use the following command line; adjust for your tastes and needs:

```
-./main -t 10 -ngl 32 -m wizardlm-33b-v1.0-uncensored.ggmlv3.
+./main -t 10 -ngl 32 -m wizardlm-33b-v1.0-uncensored.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "USER: Write a story about llamas\nASSISTANT:"
```
If you're able to use full GPU offloading, you should use `-t 1` to get best performance.
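
The prompt template added above is the single-turn Vicuna style. As a quick illustration, here is a minimal Python sketch of how a prompt in that format might be assembled; the `build_prompt` helper and its argument names are illustrative assumptions, not part of the model card:

```python
# Minimal sketch (assumed helper, not from the model card): build a prompt in
# the USER:/ASSISTANT: format documented in the new "Prompt template" section.
def build_prompt(user_message: str,
                 system_message: str = "You are a helpful AI assistant.") -> str:
    # System line, blank line, then the user turn and an open assistant tag.
    return f"{system_message}\n\nUSER: {user_message}\nASSISTANT:"

print(build_prompt("Write a story about llamas"))
```

Running this prints the system line, a blank line, then `USER: Write a story about llamas` and a trailing `ASSISTANT:` for the model to complete, matching the template above.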
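Likewise, the updated `./main` invocation can be driven from Python; the flag values below mirror the example command above, while the `subprocess` wrapper itself is only a sketch and assumes `./main` and the model file sit in the working directory:

```python
import subprocess

# Sketch: launch llama.cpp's ./main with the README's example flags.
# Flag values are copied from the command above; adjust -t and -ngl for
# your hardware (the card suggests -t 1 with full GPU offloading).
cmd = [
    "./main",
    "-t", "10",                # CPU threads
    "-ngl", "32",              # layers offloaded to the GPU
    "-m", "wizardlm-33b-v1.0-uncensored.ggmlv3.q4_K_M.bin",
    "--color",
    "-c", "2048",              # context size
    "--temp", "0.7",
    "--repeat_penalty", "1.1",
    "-n", "-1",                # -1 = no fixed cap on generated tokens
    "-p", "USER: Write a story about llamas\nASSISTANT:",
]
subprocess.run(cmd, check=True)
```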