msr2000 committed
Commit 5d85599 · 1 Parent(s): 164af3f

Update README.md

Files changed (1)
  1. README.md +28 -3
README.md CHANGED
@@ -1,4 +1,5 @@
---
+ license: mit
library_name: transformers
---
# DeepSeek-R1
@@ -59,6 +60,8 @@ we introduce DeepSeek-R1, which incorporates cold-start data before RL.
DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.

+ **NOTE: Before running DeepSeek-R1 series models locally, we kindly recommend reviewing the [Usage Recommendation](#usage-recommendations) section.**
+
<p align="center">
<img width="80%" src="figures/benchmark.jpg">
</p>
@@ -95,7 +98,7 @@ To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSe
</div>

DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base.
- For more details regrading the model architecture, please refer to [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository.
+ For more details regarding the model architecture, please refer to [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository.

### DeepSeek-R1-Distill Models

@@ -194,7 +197,20 @@ For instance, you can easily start a service using [vLLM](https://github.com/vll
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
```

- **NOTE: We recommend setting an appropriate temperature (between 0.5 and 0.7) when running these models, otherwise you may encounter issues with endless repetition or incoherent output.**
+ You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang)
+
+ ```bash
+ python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --trust-remote-code --tp 2
+ ```
+
+ ### Usage Recommendations
+
+ **We recommend adhering to the following configurations when utilizing the DeepSeek-R1 series models, including benchmarking, to achieve the expected performance:**
+
+ 1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs.
+ 2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.**
+ 3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}."
+ 4. When evaluating model performance, it is recommended to conduct multiple tests and average the results.

## 7. License
This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE).
@@ -205,8 +221,17 @@ DeepSeek-R1 series support commercial use, allow for any modifications and deriv

## 8. Citation
```
+ @misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
+ title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
+ author={DeepSeek-AI and Daya Guo and Dejian Yang and Haowei Zhang and Junxiao Song and Ruoyu Zhang and Runxin Xu and Qihao Zhu and Shirong Ma and Peiyi Wang and Xiao Bi and Xiaokang Zhang and Xingkai Yu and Yu Wu and Z. F. Wu and Zhibin Gou and Zhihong Shao and Zhuoshu Li and Ziyi Gao and Aixin Liu and Bing Xue and Bingxuan Wang and Bochao Wu and Bei Feng and Chengda Lu and Chenggang Zhao and Chengqi Deng and Chenyu Zhang and Chong Ruan and Damai Dai and Deli Chen and Dongjie Ji and Erhang Li and Fangyun Lin and Fucong Dai and Fuli Luo and Guangbo Hao and Guanting Chen and Guowei Li and H. Zhang and Han Bao and Hanwei Xu and Haocheng Wang and Honghui Ding and Huajian Xin and Huazuo Gao and Hui Qu and Hui Li and Jianzhong Guo and Jiashi Li and Jiawei Wang and Jingchang Chen and Jingyang Yuan and Junjie Qiu and Junlong Li and J. L. Cai and Jiaqi Ni and Jian Liang and Jin Chen and Kai Dong and Kai Hu and Kaige Gao and Kang Guan and Kexin Huang and Kuai Yu and Lean Wang and Lecong Zhang and Liang Zhao and Litong Wang and Liyue Zhang and Lei Xu and Leyi Xia and Mingchuan Zhang and Minghua Zhang and Minghui Tang and Meng Li and Miaojun Wang and Mingming Li and Ning Tian and Panpan Huang and Peng Zhang and Qiancheng Wang and Qinyu Chen and Qiushi Du and Ruiqi Ge and Ruisong Zhang and Ruizhe Pan and Runji Wang and R. J. Chen and R. L. Jin and Ruyi Chen and Shanghao Lu and Shangyan Zhou and Shanhuang Chen and Shengfeng Ye and Shiyu Wang and Shuiping Yu and Shunfeng Zhou and Shuting Pan and S. S. Li and Shuang Zhou and Shaoqing Wu and Shengfeng Ye and Tao Yun and Tian Pei and Tianyu Sun and T. Wang and Wangding Zeng and Wanjia Zhao and Wen Liu and Wenfeng Liang and Wenjun Gao and Wenqin Yu and Wentao Zhang and W. L. Xiao and Wei An and Xiaodong Liu and Xiaohan Wang and Xiaokang Chen and Xiaotao Nie and Xin Cheng and Xin Liu and Xin Xie and Xingchao Liu and Xinyu Yang and Xinyuan Li and Xuecheng Su and Xuheng Lin and X. Q. Li and Xiangyue Jin and Xiaojin Shen and Xiaosha Chen and Xiaowen Sun and Xiaoxiang Wang and Xinnan Song and Xinyi Zhou and Xianzu Wang and Xinxia Shan and Y. K. Li and Y. Q. Wang and Y. X. Wei and Yang Zhang and Yanhong Xu and Yao Li and Yao Zhao and Yaofeng Sun and Yaohui Wang and Yi Yu and Yichao Zhang and Yifan Shi and Yiliang Xiong and Ying He and Yishi Piao and Yisong Wang and Yixuan Tan and Yiyang Ma and Yiyuan Liu and Yongqiang Guo and Yuan Ou and Yuduan Wang and Yue Gong and Yuheng Zou and Yujia He and Yunfan Xiong and Yuxiang Luo and Yuxiang You and Yuxuan Liu and Yuyang Zhou and Y. X. Zhu and Yanhong Xu and Yanping Huang and Yaohui Li and Yi Zheng and Yuchen Zhu and Yunxian Ma and Ying Tang and Yukun Zha and Yuting Yan and Z. Z. Ren and Zehui Ren and Zhangli Sha and Zhe Fu and Zhean Xu and Zhenda Xie and Zhengyan Zhang and Zhewen Hao and Zhicheng Ma and Zhigang Yan and Zhiyu Wu and Zihui Gu and Zijia Zhu and Zijun Liu and Zilin Li and Ziwei Xie and Ziyang Song and Zizheng Pan and Zhen Huang and Zhipeng Xu and Zhongyu Zhang and Zhen Zhang},
+ year={2025},
+ eprint={2501.12948},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL},
+ url={https://arxiv.org/abs/2501.12948},
+ }

```

## 9. Contact
- If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
+ If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
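
A minimal usage sketch of the recommendations added in this commit, not an official example: it assumes the vLLM server started with the `vllm serve` command in the diff is exposing its default OpenAI-compatible API at `http://localhost:8000/v1`, and the sample question is a placeholder. The request uses temperature 0.6, no system prompt, and the math directive inside the user message.

```bash
# Hedged example request against the vLLM server launched above
# (assumes the default OpenAI-compatible endpoint on localhost:8000).
# Follows the added usage recommendations: temperature 0.6, no system
# prompt, and the \boxed{} directive placed inside the user message.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    "temperature": 0.6,
    "messages": [
      {
        "role": "user",
        "content": "Please reason step by step, and put your final answer within \\boxed{}. How many prime numbers are less than 30?"
      }
    ]
  }'
```

The same request shape should also work against the SGLang server from the diff, since `sglang.launch_server` exposes an OpenAI-compatible endpoint as well; adjust the base URL and port accordingly.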