Reasoning abilities, especially those for solving complex math problems, are crucial components of general intelligence. Recent advances by proprietary companies, such as the o-series models from OpenAI, have made remarkable progress on reasoning tasks. However, the complete technical details remain undisclosed; the only techniques widely believed to be adopted are reinforcement learning (RL) and long chains of thought.
We propose a new RL framework, termed OREAL, to pursue the performance limit that can be achieved through Outcome REwArd-based reinforcement Learning for mathematical reasoning tasks, where only binary outcome rewards are easily accessible.
- We theoretically prove that behavior cloning on positive trajectories from best-of-N (BoN) sampling is sufficient to learn the KL-regularized optimal policy in binary feedback environments (see the objective sketched after this list).
- This formulation further implies that the rewards of negative samples should be reshaped to ensure gradient consistency between positive and negative samples.
- To alleviate the long-standing difficulty of sparse rewards in RL, which is further exacerbated by the partial correctness of long chains of thought in reasoning tasks, we further apply a token-level reward model to sample important tokens in reasoning trajectories for learning.
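For context, the first point above refers to the standard KL-regularized RL objective and its closed-form optimal policy. The notation below (reward `r`, regularization strength `β`, reference policy `π_ref`) is ours; the precise statement and proof are in the paper:

```math
\max_{\pi}\;\mathbb{E}_{y\sim\pi(\cdot\mid x)}\big[r(x,y)\big]\;-\;\beta\,\mathrm{KL}\big(\pi(\cdot\mid x)\,\|\,\pi_{\mathrm{ref}}(\cdot\mid x)\big),
\qquad
\pi^{*}(y\mid x)\;\propto\;\pi_{\mathrm{ref}}(y\mid x)\,\exp\big(r(x,y)/\beta\big).
```

With a binary reward $r(x,y)\in\{0,1\}$, $\pi^{*}$ only upweights correct trajectories by a constant factor $e^{1/\beta}$, which gives the intuition for recovering it by imitating the positive trajectories selected through BoN sampling.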
A simplified pseudocode sketch of the OREAL training objective is given below. It is illustrative only, not the official implementation: the exact reward reshaping, token weighting, and sampling strategy follow the paper, and names such as `oreal_loss` are hypothetical.
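```python
# Illustrative sketch only; not the official OREAL implementation.
# Function and argument names are hypothetical.
import torch


def oreal_loss(logprobs, mask, is_correct, token_weights, neg_scale=1.0):
    """Combine behavior cloning on verified-correct samples with a reshaped
    penalty on incorrect samples, weighted per token.

    logprobs:      (B, T) log pi(y_t | x, y_<t) for B sampled responses
    mask:          (B, T) 1 for valid response tokens, 0 for padding
    is_correct:    (B,)   binary outcome reward from the answer verifier
    token_weights: (B, T) importance weights from a token-level reward model
    neg_scale:     scale of the reshaped reward on incorrect samples, chosen
                   to keep gradients consistent with the positive (BC) term
    """
    w = token_weights * mask                  # per-token weights
    pos = is_correct.unsqueeze(-1)            # (B, 1), broadcast over T
    denom = w.sum().clamp(min=1.0)

    # Behavior cloning on positive trajectories: maximize their log-likelihood.
    pos_loss = -(pos * w * logprobs).sum() / denom

    # Reshaped reward on negative trajectories: push probability mass away
    # from them instead of ignoring them.
    neg_loss = neg_scale * ((1.0 - pos) * w * logprobs).sum() / denom

    return pos_loss + neg_loss


if __name__ == "__main__":
    B, T = 2, 8  # two sampled responses, eight tokens each
    loss = oreal_loss(
        logprobs=torch.full((B, T), -1.0),
        mask=torch.ones(B, T),
        is_correct=torch.tensor([1.0, 0.0]),
        token_weights=torch.ones(B, T),
    )
    print(loss)
```

In an actual training loop, the responses for each prompt would come from BoN sampling against the current policy, `is_correct` from a rule-based verifier on the final answer, and `token_weights` from the token-level reward model described above.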
With OREAL, a 7B model can, for the first time, obtain 94.0 pass@1 accuracy on MATH-500 through RL, on par with 32B models. OREAL-32B also surpasses previous 32B models trained by distillation, reaching 95.0 pass@1 accuracy on MATH-500.
Our OREAL models are available on Hugging Face 🤗:
| Model | Hugging Face Repo |
| --- | --- |
| OREAL-7B | Model Link |
| OREAL-32B | Model Link |
We also release the SFT versions of the models, so that you can build your own RL pipeline on top of them (a minimal loading example follows the table below). :)
| Model | Hugging Face Repo |
| --- | --- |
| OREAL-7B-SFT | Model Link |
| OREAL-32B-SFT | Model Link |
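As a quick start, below is a minimal sketch of loading one of the SFT checkpoints with Hugging Face `transformers` for inference. The model path is a placeholder (use the repo links in the tables above), and it assumes the released checkpoints ship a chat template:

```python
# Minimal loading sketch (assumes `transformers` and `torch` are installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/OREAL-7B-SFT"  # placeholder: use the repo id from the table above
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype="auto", trust_remote_code=True)

# Ask a simple math question using the model's chat template (assumed to exist).
messages = [{"role": "user", "content": "What is 17 * 24?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```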
```bibtex
@misc{lyu2025exploringlimitoutcomereward,
      title={Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning},
      author={Chengqi Lyu and Songyang Gao and Yuzhe Gu and Wenwei Zhang and Jianfei Gao and Kuikun Liu and Ziyi Wang and Shuaibin Li and Qian Zhao and Haian Huang and Weihan Cao and Jiangning Liu and Hongwei Liu and Junnan Liu and Songyang Zhang and Dahua Lin and Kai Chen},
      year={2025},
      eprint={2502.06781},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.06781},
}
```
This project is released under the Apache 2.0 license.