## ***See [our collection](https://huggingface.co/collections/unsloth/deepseek-r1-all-versions-678e1c48f5d2fce87892ace5) for versions of DeepSeek-R1 including GGUF and original formats.***
### Instructions to run this model in llama.cpp:
You can also view more detailed instructions here: [unsloth.ai/blog/deepseek-r1](https://unsloth.ai/blog/deepseek-r1)
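The `--model` path in step 3 below assumes the GGUF shards are already on disk. One way to fetch them is with `huggingface-cli` (a minimal sketch; the repo id and include pattern are assumptions that simply mirror that path, so adjust them to wherever your files actually live):

```bash
# Sketch: fetch only the Q2_K_XS shards referenced in step 3.
# Repo id and pattern are assumptions mirroring the --model path below.
pip install "huggingface_hub[cli]"
huggingface-cli download unsloth/DeepSeek-V3-GGUF \
    --include "DeepSeek-R1-Distill-Llama-8B-Q2_K_XS/*" \
    --local-dir unsloth/DeepSeek-V3-GGUF
```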
1. Use K quantization, not V quantization, for the cache (i.e. `--cache-type-k`, not `--cache-type-v`).
2. Do not forget the `<|User|>` and `<|Assistant|>` tokens in your prompt, or use a chat template formatter (see the sketch after these steps).
3. Example with a Q5_0-quantized K cache (a V-quantized cache doesn't work):
```bash
./llama.cpp/llama-cli \
    --model unsloth/DeepSeek-V3-GGUF/DeepSeek-R1-Distill-Llama-8B-Q2_K_XS/DeepSeek-R1-Distill-Llama-8B-Q2_K_XS-00001-of-00005.gguf \
    --cache-type-k q5_0 \
    --threads 16 \
    --prompt '<|User|>What is 1+1?<|Assistant|>'
```
Example output:
```txt
The sum of 1 and 1 is **2**. Here's a simple step-by-step breakdown:

1. **Start with the number 1.**
2. **Add another 1 to it.**
3. **The result is 2.**

So, **1 + 1 = 2**. [end of text]
```
4. If you have a GPU with 24GB of VRAM (an RTX 4090, for example), you can offload 5 layers to the GPU for faster processing. If you have multiple GPUs, you can probably offload more layers.
```bash
./llama.cpp/llama-cli \
    --model DeepSeek-R1-Distill-Llama-8B-F16.gguf \
    --cache-type-k q8_0 \
    --n-gpu-layers 5 \
    --prompt '<|User|>What is 1+1?<|Assistant|>' \
    --threads 32 \
    -no-cnv
```
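As an alternative to hand-writing the `<|User|>` and `<|Assistant|>` tokens from step 2, llama-cli can apply the model's built-in chat template for you in conversation mode. A minimal sketch, reusing the flags from the example above and assuming your llama.cpp build supports the `-cnv` flag (the counterpart of `-no-cnv`):

```bash
# Sketch: -cnv starts an interactive chat that formats each turn with the
# model's chat template, so no hand-written special tokens are needed.
./llama.cpp/llama-cli \
    --model DeepSeek-R1-Distill-Llama-8B-F16.gguf \
    --cache-type-k q8_0 \
    --threads 32 \
    -cnv
```

Type your message at the interactive prompt; llama-cli inserts the template tokens around each turn for you.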
# Finetune LLMs 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.1 (8B) here: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb