By applying PPO with a reward model trained on only 5% of the LeanStatement_RL dataset, our model achieved further gains, reaching a pass@1 compilation success rate of 95.3% (465/488) on the MiniF2F benchmark and 76.0% (284/374) on the ProofNet benchmark. Together with our SFT results, these findings establish a new state of the art in mathematics autoformalization as of December 2024.