A Step-by-step deployment guide with ollama

#16
by snowkylin - opened

Just want to share my deployment process in case it's useful.

https://snowkylin.github.io/blogs/a-note-on-deepseek-r1.html

Unsloth AI org

How did you manage to run the model directly using the ollama run command? :)

Did you merge the GGUFs yourself?

Yes, I merged them with llama-gguf-split in llama.cpp. You can find the details here.
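For anyone following along, the merge-then-run flow looks roughly like this. This is a sketch, not the exact commands from the blog post: the shard filenames, the model tag, and the quant variant are placeholders you'd replace with your own downloaded files.

```shell
# Merge the split GGUF shards into one file with llama.cpp's gguf-split tool.
# Pass the FIRST shard; the tool locates the remaining shards automatically.
# (Filenames below are illustrative placeholders.)
./llama-gguf-split --merge \
  DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
  DeepSeek-R1-merged.gguf

# Create a minimal ollama Modelfile pointing at the merged GGUF.
cat > Modelfile <<'EOF'
FROM ./DeepSeek-R1-merged.gguf
EOF

# Register the model with ollama, then run it.
ollama create deepseek-r1-merged -f Modelfile
ollama run deepseek-r1-merged
```

The key point is that ollama's `FROM` directive in a Modelfile can take a local GGUF path, so once the shards are merged into a single file, `ollama create` imports it like any other local model.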
