Running the model with vLLM does not actually work

#12
by aikitoria - opened

This blog post: https://unsloth.ai/blog/deepseekr1-dynamic

Claims that the 1.58bpw model can be run with vLLM. However, this is not the case. Attempting to run the model with the following command, per the vLLM documentation:

vllm serve /raid/models/DeepSeek-R1-UD-IQ1_S.gguf --tokenizer deepseek-ai/DeepSeek-R1

Produces the following error message:

ValueError: GGUF model with architecture deepseek2 is not supported yet.
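
For reference, the error is raised inside transformers' GGUF loader rather than in vLLM itself, and the same check can be reproduced directly in Python with the helper that vLLM ends up calling (same file path as above; this assumes transformers is installed with its GGUF extra, i.e. the gguf package):

from transformers.modeling_gguf_pytorch_utils import load_gguf_checkpoint

# Parses only the GGUF metadata (no tensor data) and tries to map the GGUF
# architecture to a transformers config; for architecture "deepseek2" this
# raises the same ValueError shown above.
config = load_gguf_checkpoint(
    "/raid/models/DeepSeek-R1-UD-IQ1_S.gguf",
    return_tensors=False,
)["config"]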

Unsloth AI org

whoopsies ok we'll remove it

Unsloth AI org

btw even when you merged it using llama.cpp it didn't work?

I have not tried running it with llama.cpp. I was hoping to use vLLM, as it seemed to be the only library capable of running a GGUF model with tensor parallelism.
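
For what it's worth, vLLM's GGUF support does combine with tensor parallelism for architectures that transformers already maps, so something like the following should work for a Llama-family GGUF (the file and tokenizer repo here are just placeholders, not this R1 quant):

vllm serve ./tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf \
    --tokenizer TinyLlama/TinyLlama-1.1B-Chat-v1.0 \
    --tensor-parallel-size 2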

I am also trying to get this working on my end. I can reproduce this issue running the standard vllm/vllm-openai:latest Docker image (see the docker-compose.yaml file copied below).

I've been trying to get a Docker Compose setup going for easy local inference with the 1.58bpw DeepSeek-R1-UD-IQ1_S GGUF model. I have 128 GB of quad-channel DDR4 and 4x RTX 3090s (224 GB of RAM+VRAM total) running under Ubuntu, so after reading the blog post I also thought vLLM seemed like the ideal way to host DeepSeek R1 locally. But so far, no luck. Any help getting to the bottom of this would be greatly appreciated.

I started by combining the GGUF split files per the docs:

$ ./llama-gguf-split  --merge /models/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf /models/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S.gguf
gguf_merge: /models/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf -> /models/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S.gguf
gguf_merge: reading metadata /models/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf done
gguf_merge: reading metadata /models/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00002-of-00003.gguf done
gguf_merge: reading metadata /models/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00003-of-00003.gguf done
gguf_merge: writing tensors /models/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf done
gguf_merge: writing tensors /models/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00002-of-00003.gguf done
gguf_merge: writing tensors /models/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00003-of-00003.gguf done
gguf_merge: /models/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S.gguf merged from 3 split with 1025 tensors.

After that, I run the docker-compose.yaml file below (just put it in an empty directory) with docker compose up:

docker-compose.yaml

version: "3.8"

services:
  vllm:
    image: vllm/vllm-openai:latest

    # Use host networking so the service can be accessed via the host’s network.
    network_mode: host

    # Use host IPC (helps with PyTorch shared memory usage).
    ipc: host

    # If your Docker environment supports GPU device reservations in compose:
    deploy:
      resources:
        reservations:
          devices:
            - driver: "nvidia"
              count: "all"
              capabilities: ["gpu"]

    # Mount the merged single-file GGUF from the host machine into the container.
    # Adjust the host-side path as needed.
    volumes:
      - /models/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S.gguf:/models/DeepSeek-R1-UD-IQ1_S.gguf

    # The image's entrypoint already runs vLLM's OpenAI-compatible API server,
    # so 'command' only needs to supply the server arguments.
    #  - Point --model at the single-file GGUF mounted above.
    #  - --tensor-parallel-size <N> shards the model across N GPUs.
    command: >
      --model /models/DeepSeek-R1-UD-IQ1_S.gguf
      --port 5000
      --tensor-parallel-size 4
      --max-model-len 32768
      --enforce-eager
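
If the engine actually started (it does not here), the OpenAI-compatible server on port 5000 could then be queried like this; note that, by default, the model name must match the --model value:

curl http://localhost:5000/v1/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "/models/DeepSeek-R1-UD-IQ1_S.gguf", "prompt": "Hello", "max_tokens": 16}'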

Output:

vllm-1  | Traceback (most recent call last):
vllm-1  |   File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
vllm-1  |     self.run()
vllm-1  |   File "/usr/lib/python3.12/multiprocessing/process.py", line 108, in run
vllm-1  |     self._target(*self._args, **self._kwargs)
vllm-1  |   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 389, in run_mp_engine
vllm-1  |     raise e
vllm-1  |   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 378, in run_mp_engine
vllm-1  |     engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
vllm-1  |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1  |   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 116, in from_engine_args
vllm-1  |     engine_config = engine_args.create_engine_config(usage_context)
vllm-1  |                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1  |   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 1047, in create_engine_config
vllm-1  |     model_config = self.create_model_config()
vllm-1  |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1  |   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 972, in create_model_config
vllm-1  |     return ModelConfig(
vllm-1  |            ^^^^^^^^^^^^
vllm-1  |   File "/usr/local/lib/python3.12/dist-packages/vllm/config.py", line 282, in __init__
vllm-1  |     hf_config = get_config(self.model, trust_remote_code, revision,
vllm-1  |                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1  |   File "/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/config.py", line 201, in get_config
vllm-1  |     config_dict, _ = PretrainedConfig.get_config_dict(
vllm-1  |                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1  |   File "/usr/local/lib/python3.12/dist-packages/transformers/configuration_utils.py", line 591, in get_config_dict
vllm-1  |     config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
vllm-1  |                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1  |   File "/usr/local/lib/python3.12/dist-packages/transformers/configuration_utils.py", line 682, in _get_config_dict
vllm-1  |     config_dict = load_gguf_checkpoint(resolved_config_file, return_tensors=False)["config"]
vllm-1  |                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vllm-1  |   File "/usr/local/lib/python3.12/dist-packages/transformers/modeling_gguf_pytorch_utils.py", line 387, in load_gguf_checkpoint
vllm-1  |     raise ValueError(f"GGUF model with architecture {architecture} is not supported yet.")
vllm-1  | ValueError: GGUF model with architecture deepseek2 is not supported yet.
vllm-1 exited with code 1

Hopefully this is reproducible enough for folks to see if anyone else hits the same issue. If anyone has a fix, or can verify whether this dynamic-quant R1 model is supposed to work with vLLM, I'd greatly appreciate it.

Looks like this is also being discussed on the vLLM GitHub repo here.

Unsloth AI org

oh amazing this is really interesting
