Add instructions to run R1-AWQ on SGLang
README.md
AWQ of DeepSeek R1.

This quant modified some of the model code to fix an overflow issue when using float16.

## Serving with vLLM

To serve using vLLM with 8x 80GB GPUs, use the following command:
```sh
VLLM_WORKER_MULTIPROC_METHOD=spawn python -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 12345 --max-model-len 65536 --max-num-batched-tokens 65536 --trust-remote-code --tensor-parallel-size 8 --gpu-memory-utilization 0.97 --dtype float16 --served-model-name deepseek-reasoner --model cognitivecomputations/DeepSeek-R1-AWQ
```
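Once the server is up, you can sanity-check the OpenAI-compatible endpoint with a plain `curl` request. This is a minimal sketch: the port and served model name come from the command above, and the prompt is only a placeholder.

```sh
# Query vLLM's OpenAI-compatible chat endpoint.
# Port 12345 and the name "deepseek-reasoner" match the serve command above.
curl http://localhost:12345/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-reasoner",
    "messages": [{"role": "user", "content": "What is 2 + 2?"}],
    "max_tokens": 128
  }'
```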

Inference speed with batch size 1 and short prompt:

Note:
- Inference speed will be better than FP8 at low batch size but worse than FP8 at high batch size; this is the nature of low-bit quantization.
- vLLM now supports MLA for AWQ, so you can run this model with its full context length on just 8x 80GB GPUs.

## Serving with SGLang
```sh
python3 -m sglang.launch_server --model cognitivecomputations/DeepSeek-R1-AWQ --tp 8 --trust-remote-code --dtype half
```
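Once launched, SGLang exposes the same OpenAI-compatible API. A minimal sketch, assuming the default port of 30000 (override with `--port` if needed):

```sh
# Query SGLang's OpenAI-compatible chat endpoint.
# SGLang listens on port 30000 by default.
curl http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "cognitivecomputations/DeepSeek-R1-AWQ",
    "messages": [{"role": "user", "content": "What is 2 + 2?"}],
    "max_tokens": 128
  }'
```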

Note:
- AWQ does not support BF16, so the `--dtype half` flag is required when serving an AWQ quant.
- For more information about running DeepSeek-R1 with SGLang, see their [documentation](https://docs.sglang.ai/references/deepseek.html).