DeepScaleR-1.5B-Preview-Reproduce

Overview

This model is a reproduction of the agentica-project/deepscaler project. We reproduced the results from that repository on 8x 80GB A800 GPUs, achieving an average score of 56.4 across the five evaluation benchmarks.
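
For quick local inference with the reproduced checkpoint, the sketch below (an assumption, not part of the original repository) serves it with vLLM's OpenAI-compatible server; the 24K max length mirrors the final training stage.

# Minimal inference sketch (assumption, not from the original repo): serve the
# reproduced checkpoint with vLLM's OpenAI-compatible server.
# Adjust --max-model-len to your hardware; 24576 matches the final training stage.
pip install vllm
vllm serve junnyu/DeepScaleR-1.5B-Preview-Reproduce --max-model-len 24576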

Training

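# Expose all 8 GPUs and select the xformers attention backend for vLLM rollouts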
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export VLLM_ATTENTION_BACKEND=XFORMERS

# Run 8K context length training, 560 steps
export MODEL_PATH="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
nohup bash run_deepscaler_1.5b_8k.sh --model $MODEL_PATH > stage1.log 2>&1 &

# Run 16K context length training, 250 steps
export MODEL_PATH="./checkpoints/deepscaler/deepscaler-1.5b-8k/actor/global_step_560"
nohup bash run_deepscaler_1.5b_16k.sh --model $MODEL_PATH > stage2.log 2>&1 &

# Run 24K context length training, 190 steps
export MODEL_PATH="./checkpoints/deepscaler/deepscaler-1.5b-16k/actor/global_step_250"
nohup bash run_deepscaler_1.5b_24k.sh --model $MODEL_PATH > stage3.log 2>&1 &

# Continue 24K context length training from step 190, 480 steps
export MODEL_PATH="./checkpoints/deepscaler/deepscaler-1.5b-24k/actor/global_step_190"
nohup bash run_deepscaler_1.5b_24k.sh --model $MODEL_PATH > stage3-continue.log 2>&1 &
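
The stages above are launched by hand once the previous stage finishes. As a convenience, a hedged helper (an assumption, not part of the repository's scripts) can block until the previous stage's final actor checkpoint appears before starting the next one, for example between stage 1 and stage 2:

# Optional helper (assumption, not in the original scripts): wait for the 8K
# stage to write its final actor checkpoint before launching the 16K stage.
CKPT="./checkpoints/deepscaler/deepscaler-1.5b-8k/actor/global_step_560"
until [ -d "$CKPT" ]; do
  sleep 300  # re-check every 5 minutes
done
echo "Stage 1 checkpoint found: $CKPT"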

Evaluation

| Model | AIME 2024 | MATH 500 | AMC 2023 | Minerva Math | OlympiadBench | Avg. |
|---|---|---|---|---|---|---|
| Qwen-2.5-7B-Instruct | 13.3 | 79.8 | 50.6 | 34.6 | 40.7 | 43.8 |
| rStar-Math-7B | 26.7 | 78.4 | 47.5 | - | 47.1 | - |
| Eurus-2-7B-PRIME | 26.7 | 79.2 | 57.8 | 38.6 | 42.1 | 48.9 |
| Qwen2.5-7B-SimpleRL | 26.7 | 82.4 | 62.5 | 39.7 | 43.3 | 50.9 |
| DeepSeek-R1-Distill-Qwen-1.5B | 28.8 | 82.8 | 62.9 | 26.5 | 43.3 | 48.9 |
| Still-1.5B | 32.5 | 84.4 | 66.7 | 29.0 | 45.4 | 51.6 |
| DeepScaleR-1.5B-Preview | 43.1 | 87.8 | 73.6 | 30.2 | 50.0 | 57.0 |
| 🎉 DeepScaleR-1.5B-Preview-Reproduce | 40.4 | 87.9 | 72.0 | 31.5 | 50.2 | 56.4 |
| O1-Preview | 40.0 | 81.4 | - | - | - | - |
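
The Avg. column is the unweighted mean of the five benchmark scores; for the reproduced model, (40.4 + 87.9 + 72.0 + 31.5 + 50.2) / 5 = 56.4, which matches the average reported in the overview.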

Citation

@misc{deepscaler2025,
  title={DeepScaleR: Surpassing O1-Preview with a 1.5B Model by Scaling RL},
  author={Michael Luo and Sijun Tan and Justin Wong and Xiaoxiang Shi and William Y. Tang and Manan Roongta and Colin Cai and Jeffrey Luo and Tianjun Zhang and Li Erran Li and Raluca Ada Popa and Ion Stoica},
  year={2025},
  howpublished={\url{https://pretty-radio-b75.notion.site/DeepScaleR-Surpassing-O1-Preview-with-a-1-5B-Model-by-Scaling-RL-19681902c1468005bed8ca303013a4e2}},
  note={Notion Blog}
}