how to install and use on local machine windows

#23
by Merk0701234 - opened

Hello brothers, how do I install and use the model? I am not familiar with this, and I do not see a requirements.txt.

It is very large; you can try to use the API instead.

OpenRouter or deepseek.com

$2 ≈ 1 million tokens

I am using deepseek.com, and as a Chinese user I get the free $1 quota.

For commercial purposes, I think you can use it as you like. You don't have to worry about anything; you are just trying to make money, and DeepSeek is doing the same.
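
If you go the API route, here is a minimal sketch of calling R1 through an OpenAI-compatible endpoint instead of running the model locally. It assumes the `openai` Python package is installed (`pip install openai`) and that you have an API key from DeepSeek or OpenRouter; the base URL and model names below follow their published docs, but double-check them against the current documentation before relying on them.

```python
# Sketch: query DeepSeek R1 over the API rather than hosting the full model yourself.
from openai import OpenAI

client = OpenAI(
    api_key="sk-...",                     # your DeepSeek (or OpenRouter) API key
    base_url="https://api.deepseek.com",  # for OpenRouter use https://openrouter.ai/api/v1
)

response = client.chat.completions.create(
    model="deepseek-reasoner",            # R1 on DeepSeek; on OpenRouter the id is "deepseek/deepseek-r1"
    messages=[{"role": "user", "content": "Explain what a requirements.txt file is."}],
)

print(response.choices[0].message.content)
```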

OK, thank you brothers, I really like it and am currently downloading it. It is indeed huge, more than 1.4 TB. I have an RTX 4090, so I think it will run well. But where is the requirements.txt to run the R1 model? I do not understand how I should run this on Windows. I have downloaded it, but I do not know how to run it.

Merk0701234 changed discussion title from how to install and use to how to install and use on local machine windows


Have you run DeepSeek V3 or similar models on your RTX 4090?

I'm amazed at your idea.

However, you can try https://github.com/deepseek-ai/DeepSeek-V3?tab=readme-ov-file#6-how-to-run-locally
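
That README lists several inference frameworks for running the full model. As a rough sketch only, and assuming a multi-GPU server rather than a single 4090, offline inference with vLLM might look like the snippet below; the exact arguments depend on your vLLM version and the checkpoint format, so treat it as illustrative.

```python
# Hypothetical sketch of offline inference with vLLM on a multi-GPU node.
# This will NOT fit on a single RTX 4090: the full R1 weights are hundreds of GB.
from vllm import LLM, SamplingParams

llm = LLM(
    model="deepseek-ai/DeepSeek-R1",  # Hugging Face repo id of the full model
    tensor_parallel_size=8,           # assumes 8 data-center GPUs; match your hardware
    trust_remote_code=True,
)

params = SamplingParams(temperature=0.6, max_tokens=512)
outputs = llm.generate(["Why is the sky blue?"], params)
print(outputs[0].outputs[0].text)
```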

@Merk0701234 How would it fit on a single RTX 4090?


The only possibility is BitNet, a Microsoft project.

Or use a variant of R1 instead of the full R1, such as a Llama distillation based on R1 (see the sketch after this post).

If someone could convert this model into a 1-bit form usable by the BitNet project, the VRAM requirements would drop dramatically and deployment would be much cheaper, but that would probably take graduate-student-level effort.
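
Following the suggestion above to use a distilled variant rather than the full R1, here is a hedged sketch of loading one of the published R1 distillations with the `transformers` library on a single 4090. The repo id and memory figures are assumptions to verify against the model card; an 8B model in bf16 needs roughly 16 GB of VRAM, which a 24 GB 4090 can handle. Assumes `pip install torch transformers accelerate`.

```python
# Sketch: run an R1 distilled variant (not the full R1) on one RTX 4090.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Llama-8B"  # assumed repo id; check the model card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision so the 8B model fits in 24 GB
    device_map="auto",           # place layers on the GPU automatically
)

messages = [{"role": "user", "content": "How do I read a file in Python?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

This runs the same way on Windows as on Linux, since it only needs Python, CUDA-enabled PyTorch, and the downloaded weights; there is no requirements.txt in the model repo because the weights are framework files, not an application.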
