Step-Controlled DPO: Leveraging Stepwise Error for Enhanced Mathematical Reasoning

Z Lu, A Zhou, K Wang, H Ren, W Shi, J Pan, M Zhan
arXiv preprint arXiv:2407.00782, 2024. arxiv.org
Direct Preference Optimization (DPO) has proven effective at improving the performance of large language models (LLMs) on downstream tasks such as reasoning and alignment. In this work, we propose Step-Controlled DPO (SCDPO), a method for automatically providing stepwise error supervision by creating negative samples of mathematical reasoning rationales that start making errors at a specified step. By applying these samples in DPO training, SCDPO can better align the model to understand reasoning errors and output accurate reasoning steps. We apply SCDPO to both code-integrated and chain-of-thought solutions, empirically showing that it consistently improves the performance compared to naive DPO on three different SFT models, including one existing SFT model and two models we finetuned. Qualitative analysis of the credit assignment of SCDPO and DPO demonstrates the effectiveness of SCDPO at identifying errors in mathematical solutions. We then apply SCDPO to an InternLM2-20B model, resulting in a 20B model that achieves high scores of 88.5% on GSM8K and 58.1% on MATH, rivaling all other open-source LLMs, showing the great potential of our method.
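To make the core idea concrete, below is a minimal sketch of how step-controlled negative samples and a DPO preference pair might be constructed. The names `sample_continuation`, `make_step_controlled_pair`, and `PreferencePair` are illustrative assumptions, not the paper's actual code; the paper's exact sampling and error-verification procedure is not reproduced here, only the general pattern of keeping a correct prefix up to a chosen step and re-sampling the rest (e.g. at a higher temperature) so errors begin at that step.

```python
import math
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class PreferencePair:
    prompt: str
    chosen: str       # fully correct rationale
    rejected: str     # rationale whose errors start at `error_step`
    error_step: int


def make_step_controlled_pair(
    prompt: str,
    correct_steps: List[str],
    error_step: int,
    sample_continuation: Callable[[str, float], List[str]],  # hypothetical decoder hook
    temperature: float = 1.2,
) -> PreferencePair:
    """Keep the first `error_step` steps of a verified-correct rationale and
    re-sample the remainder (here, simply at a higher temperature) so the
    rejected sample starts going wrong at a controlled step."""
    prefix = correct_steps[:error_step]
    context = prompt + "\n" + "\n".join(prefix)
    continuation = sample_continuation(context, temperature)
    rejected = "\n".join(prefix + continuation)
    chosen = "\n".join(correct_steps)
    return PreferencePair(prompt, chosen, rejected, error_step)


def dpo_loss(
    logp_chosen: float,
    logp_rejected: float,
    ref_logp_chosen: float,
    ref_logp_rejected: float,
    beta: float = 0.1,
) -> float:
    """Standard DPO objective on one pair: -log sigmoid(beta * margin), where
    the margin compares policy vs. reference log-likelihood ratios."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

In this reading, the stepwise control lives entirely in how the rejected sample is generated; the training objective itself remains the usual DPO loss over (chosen, rejected) pairs.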