---
license: apache-2.0
language:
- en
metrics:
- accuracy
base_model:
- Qwen/Qwen2.5-Math-7B-Instruct
library_name: transformers
pipeline_tag: question-answering
datasets:
- MATH
- GSM8K
---

Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


SuperCorrect-7B - GGUF
- Model creator: https://huggingface.co./BitStarWalkin/
- Original model: https://huggingface.co./BitStarWalkin/SuperCorrect-7B/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [SuperCorrect-7B.Q2_K.gguf](https://huggingface.co./RichardErkhov/BitStarWalkin_-_SuperCorrect-7B-gguf/blob/main/SuperCorrect-7B.Q2_K.gguf) | Q2_K | 2.81GB |
| [SuperCorrect-7B.Q3_K_S.gguf](https://huggingface.co./RichardErkhov/BitStarWalkin_-_SuperCorrect-7B-gguf/blob/main/SuperCorrect-7B.Q3_K_S.gguf) | Q3_K_S | 3.25GB |
| [SuperCorrect-7B.Q3_K.gguf](https://huggingface.co./RichardErkhov/BitStarWalkin_-_SuperCorrect-7B-gguf/blob/main/SuperCorrect-7B.Q3_K.gguf) | Q3_K | 3.55GB |
| [SuperCorrect-7B.Q3_K_M.gguf](https://huggingface.co./RichardErkhov/BitStarWalkin_-_SuperCorrect-7B-gguf/blob/main/SuperCorrect-7B.Q3_K_M.gguf) | Q3_K_M | 3.55GB |
| [SuperCorrect-7B.Q3_K_L.gguf](https://huggingface.co./RichardErkhov/BitStarWalkin_-_SuperCorrect-7B-gguf/blob/main/SuperCorrect-7B.Q3_K_L.gguf) | Q3_K_L | 3.81GB |
| [SuperCorrect-7B.IQ4_XS.gguf](https://huggingface.co./RichardErkhov/BitStarWalkin_-_SuperCorrect-7B-gguf/blob/main/SuperCorrect-7B.IQ4_XS.gguf) | IQ4_XS | 3.96GB |
| [SuperCorrect-7B.Q4_0.gguf](https://huggingface.co./RichardErkhov/BitStarWalkin_-_SuperCorrect-7B-gguf/blob/main/SuperCorrect-7B.Q4_0.gguf) | Q4_0 | 4.13GB |
| [SuperCorrect-7B.IQ4_NL.gguf](https://huggingface.co./RichardErkhov/BitStarWalkin_-_SuperCorrect-7B-gguf/blob/main/SuperCorrect-7B.IQ4_NL.gguf) | IQ4_NL | 4.16GB |
| [SuperCorrect-7B.Q4_K_S.gguf](https://huggingface.co./RichardErkhov/BitStarWalkin_-_SuperCorrect-7B-gguf/blob/main/SuperCorrect-7B.Q4_K_S.gguf) | Q4_K_S | 4.15GB |
| [SuperCorrect-7B.Q4_K.gguf](https://huggingface.co./RichardErkhov/BitStarWalkin_-_SuperCorrect-7B-gguf/blob/main/SuperCorrect-7B.Q4_K.gguf) | Q4_K | 4.36GB |
| [SuperCorrect-7B.Q4_K_M.gguf](https://huggingface.co./RichardErkhov/BitStarWalkin_-_SuperCorrect-7B-gguf/blob/main/SuperCorrect-7B.Q4_K_M.gguf) | Q4_K_M | 4.36GB |
| [SuperCorrect-7B.Q4_1.gguf](https://huggingface.co./RichardErkhov/BitStarWalkin_-_SuperCorrect-7B-gguf/blob/main/SuperCorrect-7B.Q4_1.gguf) | Q4_1 | 4.54GB |
| [SuperCorrect-7B.Q5_0.gguf](https://huggingface.co./RichardErkhov/BitStarWalkin_-_SuperCorrect-7B-gguf/blob/main/SuperCorrect-7B.Q5_0.gguf) | Q5_0 | 4.95GB |
| [SuperCorrect-7B.Q5_K_S.gguf](https://huggingface.co./RichardErkhov/BitStarWalkin_-_SuperCorrect-7B-gguf/blob/main/SuperCorrect-7B.Q5_K_S.gguf) | Q5_K_S | 4.95GB |
| [SuperCorrect-7B.Q5_K.gguf](https://huggingface.co./RichardErkhov/BitStarWalkin_-_SuperCorrect-7B-gguf/blob/main/SuperCorrect-7B.Q5_K.gguf) | Q5_K | 5.07GB |
| [SuperCorrect-7B.Q5_K_M.gguf](https://huggingface.co./RichardErkhov/BitStarWalkin_-_SuperCorrect-7B-gguf/blob/main/SuperCorrect-7B.Q5_K_M.gguf) | Q5_K_M | 5.07GB |
| [SuperCorrect-7B.Q5_1.gguf](https://huggingface.co./RichardErkhov/BitStarWalkin_-_SuperCorrect-7B-gguf/blob/main/SuperCorrect-7B.Q5_1.gguf) | Q5_1 | 5.36GB |
| [SuperCorrect-7B.Q6_K.gguf](https://huggingface.co./RichardErkhov/BitStarWalkin_-_SuperCorrect-7B-gguf/blob/main/SuperCorrect-7B.Q6_K.gguf) | Q6_K | 5.82GB |
| [SuperCorrect-7B.Q8_0.gguf](https://huggingface.co./RichardErkhov/BitStarWalkin_-_SuperCorrect-7B-gguf/blob/main/SuperCorrect-7B.Q8_0.gguf) | Q8_0 | 7.54GB |
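
To try one of these files locally, you can load it with a GGUF runtime such as `llama-cpp-python`. The snippet below is a minimal sketch, not an official recipe: it assumes `llama-cpp-python` and `huggingface_hub` are installed, and the Q4_K_M file is picked only as an example size/quality tradeoff.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quantized file from this repo (Q4_K_M as an example tradeoff).
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/BitStarWalkin_-_SuperCorrect-7B-gguf",
    filename="SuperCorrect-7B.Q4_K_M.gguf",
)

# Load the GGUF file and run a short chat completion.
llm = Llama(model_path=gguf_path, n_ctx=4096)
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is 15% of 240?"}],
    max_tokens=256,
)
print(output["choices"][0]["message"]["content"])
```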




Original model description:
---
license: apache-2.0
language:
- en
metrics:
- accuracy
base_model:
- Qwen/Qwen2.5-Math-7B-Instruct
library_name: transformers
pipeline_tag: question-answering
---
## SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights

> [SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights](https://arxiv.org/abs/2410.09008)
> [Ling Yang\*](https://yangling0818.github.io/), [Zhaochen Yu\*](https://github.com/BitCodingWalkin), [Tianjun Zhang](https://tianjunz.github.io/), [Minkai Xu](https://minkaixu.com/), [Joseph E. Gonzalez](https://people.eecs.berkeley.edu/~jegonzal/), [Bin Cui](https://cuibinpku.github.io/), [Shuicheng Yan](https://yanshuicheng.info/)
>
> Peking University, Skywork AI, UC Berkeley, Stanford University 

<p align="left">
  <a href='https://arxiv.org/abs/2410.09008'>
  <img src='https://img.shields.io/badge/Arxiv-2410.09008-A42C25?style=flat&logo=arXiv&logoColor=A42C25'></a> 
</p>

## Introduction

![image](intro.png)

This repo provides the official implementation of **SuperCorrect**, a novel two-stage fine-tuning method that improves both the reasoning accuracy and the self-correction ability of LLMs.

Notably, our **SuperCorrect-7B** model significantly surpasses the powerful **DeepSeekMath-7B by 7.8%/5.3% and Qwen2.5-Math-7B by 15.1%/6.3% on the MATH/GSM8K benchmarks**, achieving new SOTA performance among all 7B models.

<div align="left">
🚨 Unlike conventional CoT prompting, we equip LLMs with our pre-defined hierarchical thought template ([Buffer of Thought (BoT)](https://github.com/YangLing0818/buffer-of-thought-llm)) to conduct more deliberate reasoning. Note that our evaluation relies on the pure mathematical reasoning abilities of LLMs, rather than leveraging programmatic methods such as PoT and ToRA.
</div>

## Examples

![image](example1.png)

<div align="left">
<b>
🚨 For a more concise and clear presentation, we omit some XML tags; the full layout is sketched below.
</b>
</div>
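
For reference, the full tag layout requested by our hierarchical prompt (see the Quick Start below) looks roughly like this sketch; the number of steps and the tag contents vary per problem:

```xml
<Step1> first reasoning step </Step1>
<Step2>
  reasoning for a tricky step
  <Key> detailed explanation and analysis as a helpful annotation </Key>
</Step2>
<!-- ...more steps... -->
<Generalized> common solution pattern for similar problems </Generalized>
<Answer> final answer </Answer>
```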

### Model details
*You can check our [Github repo](https://github.com/YangLing0818/SuperCorrect-llm) for more details.*

## Quick Start

### Requirements

* Since our current model is based on the Qwen2.5-Math series, `transformers>=4.37.0` is required; the latest version is recommended.

> [!WARNING]
>
> <div align="center">
> <b>
> 🚨 This is required because `transformers` has only integrated the Qwen2 code since version `4.37.0`.
> </b>
> </div>
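
As a quick sanity check, you can verify the installed version before loading the model; this small snippet is a convenience sketch, not part of the official code:

```python
import transformers
from packaging import version

# Qwen2 support landed in transformers 4.37.0, so fail fast on older installs.
assert version.parse(transformers.__version__) >= version.parse("4.37.0"), (
    f"transformers {transformers.__version__} is too old; install >= 4.37.0"
)
```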

### Inference

#### 🤗 Hugging Face Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "BitStarWalkin/SuperCorrect-7B"
device = "cuda" 

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Find the distance between the foci of the ellipse \[9x^2 + \frac{y^2}{9} = 99.\]"
hierarchical_prompt = "Solve the following math problem in a step-by-step XML format, each step should be enclosed within tags like <Step1></Step1>. For each step enclosed within the tags, determine if this step is challenging and tricky, if so, add detailed explanation and analysis enclosed within <Key> </Key> in this step, as helpful annotations to help you thinking and remind yourself how to conduct reasoning correctly. After all the reasoning steps, summarize the common solution and reasoning steps to help you and your classmates who are not good at math generalize to similar problems within <Generalized></Generalized>. Finally present the final answer within <Answer> </Answer>."
# Hierarchical thought (HT) prompting: the template goes in the system message
messages = [
    {"role": "system", "content": hierarchical_prompt},
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
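
Since the hierarchical prompt asks the model to wrap its final result in `<Answer>` tags, a small post-processing step can recover just the answer. The helper below is a hypothetical convenience added for illustration, not part of the official pipeline:

```python
import re

def extract_answer(response: str):
    """Hypothetical helper: return the text inside <Answer>...</Answer>, if any."""
    match = re.search(r"<Answer>(.*?)</Answer>", response, re.DOTALL)
    return match.group(1).strip() if match else None

print(extract_answer(response))
```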

## Performance

We evaluate our SuperCorrect-7B on two widely used English math benchmarks, GSM8K and MATH. All evaluations use our zero-shot hierarchical-thought-based prompting method.

![image](table.png)

## Citation

```bibtex
@inproceedings{yang2025supercorrect,
  title={SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights},
  author={Yang, Ling and Yu, Zhaochen and Zhang, Tianjun and Xu, Minkai and Gonzalez, Joseph E and Cui, Bin and Yan, Shuicheng},
  booktitle={International Conference on Learning Representations},
  year={2025}
}

@article{yang2024buffer,
  title={Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models},
  author={Yang, Ling and Yu, Zhaochen and Zhang, Tianjun and Cao, Shiyi and Xu, Minkai and Zhang, Wentao and Gonzalez, Joseph E and Cui, Bin},
  journal={Advances in Neural Information Processing Systems},
  year={2024}
}
```

## Acknowledgements

SuperCorrect is a two-stage fine-tuned model built on several excellent open-source models, including [Qwen2.5-Math](https://github.com/QwenLM/Qwen2.5-Math), [DeepSeek-Math](https://github.com/deepseek-ai/DeepSeek-Math), and the [Llama3 series](https://github.com/meta-llama/llama3). Our evaluation method builds on the codebases of outstanding works such as [Qwen2.5-Math](https://github.com/QwenLM/Qwen2.5-Math) and [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). We also want to express our gratitude to amazing works such as [BoT](https://github.com/YangLing0818/buffer-of-thought-llm), which provided the idea of thought templates.