Deploying a production-ready service with Unsloth GGUF quants on your AWS account (4 x L40S)

#171
by samagra-tensorfuse - opened

Hi People

In the past few weeks we have been doing tons of PoCs with enterprises trying to deploy DeepSeek R1. The most popular combination was the Unsloth GGUF
quants on 4xL40S.

We just dropped the guide to deploy it on serverless GPUs on your own cloud: https://tensorfuse.io/docs/guides/integrations/llama_cpp

Single-request throughput - 24 tok/sec

Context size - 5k

We also ran multiple experiments to figure out the right trade-off between context size and tokens per second. You can modify the "--n-gpu-layers" and "--ctx-size" parameters to measure tokens per second for each scenario. Here are the results -

  • GPU layers 30, context 10k, speed 6.3 t/s
  • GPU layers 40, context 10k, speed 8.5 t/s
  • GPU layers 50, context 10k, speed 12 t/s
  • At GPU layers > 50, a 10k context window will not fit.
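
In case it is useful, here is roughly what those knobs look like on a llama.cpp server launch. This is a sketch, not the exact command from the Tensorfuse guide: the model path is a placeholder for the Unsloth GGUF, and apart from "--n-gpu-layers" and "--ctx-size" the flags are just standard llama.cpp server options.

```bash
# Sketch of a llama.cpp server launch, not the exact command from the guide.
# The model path is a placeholder for the Unsloth DeepSeek R1 GGUF file.
./llama-server \
  --model /models/deepseek-r1-unsloth-1.58bit.gguf \
  --n-gpu-layers 50 \
  --ctx-size 10240 \
  --host 0.0.0.0 \
  --port 8080
# --n-gpu-layers controls how many layers are offloaded to the GPUs and
# --ctx-size sets the context window. Per the list above, 50 layers with a
# 10k context was the largest combination that still fit on 4xL40S.
```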

Is it FP8 based, or Q4 based?

If I had DeepSeek R1 running locally at 6.3 t/s with a 10k context, I'd be happy and would probably touch online models rarely, if at all.
Unfortunately that's not possible on consumer PCs, but on the other hand, for servers it sounds too slow... 🤷‍♂️

@ghostplant It is a 1.58 bit dynamic quant.

@MrDevolver You can increase the speed by increasing the number of GPUs. The max I have achieved is around 70 tok/sec on L40S.

I also tried running on CPU-only machines and was getting 5 tokens per second. If you have a decent Mac, you can run it at 6.3 tokens per second.
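
For anyone wondering how the multi-GPU part is usually wired up, llama.cpp's stock flags look roughly like the sketch below. This is purely illustrative (it assumes 8 visible GPUs and equal split ratios); it is not the exact configuration behind the ~70 tok/sec number.

```bash
# Illustrative only: llama.cpp's standard multi-GPU options, assuming 8 GPUs.
# --split-mode layer distributes whole layers across cards, and --tensor-split
# sets the share each GPU gets (equal shares here). With more VRAM available,
# more (or all) layers can be offloaded via --n-gpu-layers.
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 ./llama-server \
  --model /models/deepseek-r1-unsloth-1.58bit.gguf \
  --n-gpu-layers 99 \
  --ctx-size 10240 \
  --split-mode layer \
  --tensor-split 1,1,1,1,1,1,1,1
```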

Does the Q1 quant still outperform o1-mini? If not, why not use the 32B distilled version, which only costs 1 GPU?

Does Q1 still outperform o1-mini? If not, why not use the 32B version?

Imho, a low quant of a bigger model is still better than the highest quant of a smaller model.

Is there any formal comparison showing its performance against Distilled-Qwen-32B?

Subjectively, the 2.5-bit quantization handily outperforms the Llama 70B distil in reasoning quality. The 70B distil is much faster, though...
