nielsr (HF staff) committed on
Commit 570bd68 · verified · 1 Parent(s): fd1ae1f

Add pipeline tag, link to code, and refine tags


This PR adds the `pipeline_tag` to the model card, ensuring the model can be found at https://huggingface.co./models?pipeline_tag=question-answering. It also adds a link to the code repository for reproducibility and refines the tags to be more specific to the model's capabilities.

Files changed (1)
  1. README.md +162 -36
README.md CHANGED
@@ -1,25 +1,26 @@
1
  ---
2
- license: apache-2.0
3
  language:
4
  - en
 
 
5
  metrics:
6
  - accuracy
7
- base_model: BitStarWalkin/SuperCorrect-7B
8
- library_name: transformers
9
  tags:
10
- - llama-cpp
11
- - gguf-my-repo
 
 
12
  ---
13
 
14
  # Triangle104/SuperCorrect-7B-Q4_K_M-GGUF
15
- This model was converted to GGUF format from [`BitStarWalkin/SuperCorrect-7B`](https://huggingface.co/BitStarWalkin/SuperCorrect-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
16
- Refer to the [original model card](https://huggingface.co/BitStarWalkin/SuperCorrect-7B) for more details on the model.
17
 
18
  ---
19
  Model details:
20
  -
21
 
22
-
23
  SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights Ling Yang*, Zhaochen Yu*, Tianjun Zhang, Minkai Xu, Joseph E. Gonzalez, Bin Cui, Shuicheng Yan
24
 
25
  Peking University, Skywork AI, UC Berkeley, Stanford University
@@ -27,27 +28,42 @@ Model details:
27
  Introduction
28
  -
29
 
30
- This repo provides the official implementation of SuperCorrect a novel two-stage fine-tuning method for improving both reasoning accuracy and self-correction ability for LLMs.
31
 
32
  Notably, our SuperCorrect-7B model significantly surpasses powerful DeepSeekMath-7B by 7.8%/5.3% and Qwen2.5-Math-7B by 15.1%/6.3% on MATH/GSM8K benchmarks, achieving new SOTA performance among all 7B models.
33
- 🚨 Unlike other LLMs, we incorporate LLMs with our pre-defined hierarchical thought template ([Buffer of Thought (BoT)](https://github.com/YangLing0818/buffer-of-thought-llm)) to conduct more deliberate reasoning than conventional CoT. It should be noted that our evaluation methods relies on pure mathematical reasoning abilities of LLMs, instead of leverage other programming methods such as PoT and ToRA.
34
- Examples
 
 
 
35
 
36
- 🚨 For more concise and clear presentation, we omit some XML tags.
37
- Model details
38
 
39
- You can check our Github repo for more details.
40
- Quick Start
41
- Requirements
 
 
 
 
42
 
43
- Since our current model is based on Qwen2.5-Math series, transformers>=4.37.0 is needed for Qwen2.5-Math models. The latest version is recommended.
44
 
45
- 🚨 This is a must because `transformers` integrated Qwen2 codes since `4.37.0`.
46
 
47
- Inference
48
- -
49
- 🤗 Hugging Face Transformers
 
 
 
 
50
 
 
 
 
 
 
51
  from transformers import AutoModelForCausalLM, AutoTokenizer
52
 
53
  model_name = "BitStarWalkin/SuperCorrect-7B"
@@ -60,7 +76,7 @@ model = AutoModelForCausalLM.from_pretrained(
60
  )
61
  tokenizer = AutoTokenizer.from_pretrained(model_name)
62
 
63
- prompt = "Find the distance between the foci of the ellipse \[9x^2 + \frac{y^2}{9} = 99.\]"
64
  hierarchical_prompt = "Solve the following math problem in a step-by-step XML format, each step should be enclosed within tags like <Step1></Step1>. For each step enclosed within the tags, determine if this step is challenging and tricky, if so, add detailed explanation and analysis enclosed within <Key> </Key> in this step, as helpful annotations to help you thinking and remind yourself how to conduct reasoning correctly. After all the reasoning steps, summarize the common solution and reasoning steps to help you and your classmates who are not good at math generalize to similar problems within <Generalized></Generalized>. Finally present the final answer within <Answer> </Answer>."
65
  # HT
66
  messages = [
@@ -85,29 +101,139 @@ generated_ids = [
85
 
86
  response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
87
  print(response)
 
88
 
89
- Performance
90
- -
91
- We evaluate our SupperCorrect-7B on two widely used English math benchmarks GSM8K and MATH. All evaluations are tested with our evaluation method which is zero-shot hierarchical thought based prompting.
92
 
93
- Citation
94
- -
95
- @article{yang2024supercorrect,
96
- title={SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights}
97
  author={Yang, Ling and Yu, Zhaochen and Zhang, Tianjun and Xu, Minkai and Gonzalez, Joseph E and Cui, Bin and Yan, Shuicheng},
98
- journal={arXiv preprint arXiv:2410.09008},
99
- year={2024}
100
  }
 
101
  @article{yang2024buffer,
102
  title={Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models},
103
  author={Yang, Ling and Yu, Zhaochen and Zhang, Tianjun and Cao, Shiyi and Xu, Minkai and Zhang, Wentao and Gonzalez, Joseph E and Cui, Bin},
104
- journal={arXiv preprint arXiv:2406.04271},
105
  year={2024}
106
  }
 
107
 
108
- Acknowledgements
109
- -
110
- Our SuperCorrect is a two-stage fine-tuning model which based on several extraordinary open-source models like Qwen2.5-Math, DeepSeek-Math, Llama3-Series. Our evaluation method is based on the code base of outstanding works like Qwen2.5-Math and lm-evaluation-harness. We also want to express our gratitude for amazing works such as BoT which provides the idea of thought template.
111
 
112
  ---
113
  ## Use with llama.cpp
@@ -148,4 +274,4 @@ Step 3: Run inference through the main binary.
148
  or
149
  ```
150
  ./llama-server --hf-repo Triangle104/SuperCorrect-7B-Q4_K_M-GGUF --hf-file supercorrect-7b-q4_k_m.gguf -c 2048
151
- ```
 
1
  ---
2
+ base_model: BitStarWalkin/SuperCorrect-7B
3
  language:
4
  - en
5
+ library_name: transformers
6
+ license: apache-2.0
7
  metrics:
8
  - accuracy
9
+ pipeline_tag: question-answering
 
10
  tags:
11
+ - mathematical-reasoning
12
+ - llm
13
+ - qwen
14
+ - llama
15
  ---
16
 
17
  # Triangle104/SuperCorrect-7B-Q4_K_M-GGUF
18
+ This model was converted to GGUF format from [`BitStarWalkin/SuperCorrect-7B`](https://huggingface.co/BitStarWalkin/SuperCorrect-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. The conversion process is described in detail in the "Use with llama.cpp" section below. Refer to the [original model card](https://huggingface.co/BitStarWalkin/SuperCorrect-7B) for more details on the model's training and original performance.
 
19
 
20
  ---
21
  Model details:
22
  -
23
 
 
24
  SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights Ling Yang*, Zhaochen Yu*, Tianjun Zhang, Minkai Xu, Joseph E. Gonzalez, Bin Cui, Shuicheng Yan
25
 
26
  Peking University, Skywork AI, UC Berkeley, Stanford University
 
28
  Introduction
29
  -
30
 
31
+ This repo provides the official implementation of SuperCorrect, a novel two-stage fine-tuning method for improving both reasoning accuracy and self-correction ability for LLMs.
32
 
33
  Notably, our SuperCorrect-7B model significantly surpasses powerful DeepSeekMath-7B by 7.8%/5.3% and Qwen2.5-Math-7B by 15.1%/6.3% on MATH/GSM8K benchmarks, achieving new SOTA performance among all 7B models.
34
+ 🚨 Unlike other LLMs, we equip LLMs with our pre-defined hierarchical thought template ([Buffer of Thoughts (BoT)](https://github.com/YangLing0818/buffer-of-thought-llm)) to conduct more deliberate reasoning than conventional CoT. Note that our evaluation methods rely on the pure mathematical reasoning abilities of LLMs, rather than leveraging programmatic methods such as PoT and ToRA.
35
+
36
+ Code: https://github.com/YangLing0818/SuperCorrect-llm
37
+
38
+ ## Quick Start
39
 
40
+ ### Installation
 
41
 
42
+ ```bash
43
+ git clone https://github.com/YangLing0818/SuperCorrect
44
+ cd SuperCorrect
45
+ conda create -n SuperCorrect python==3.10
46
+ conda activate SuperCorrect
47
+ pip install -r requirements.txt
48
+ ```
49
 
50
+ ### Requirements
51
 
52
+ * Since our current model is based on the Qwen2.5-Math series, `transformers>=4.37.0` is required for Qwen2.5-Math models. The latest version is recommended; see the install sketch after the warning below.
53
 
54
+ > [!WARNING]
55
+ >
56
+ > <div align="center">
57
+ > <b>
58
+ > 🚨 This is required because `transformers` has included the Qwen2 code since version `4.37.0`.
59
+ > </b>
60
+ > </div>
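+
+ A minimal way to satisfy this requirement (standard `pip` usage; a sketch, not from the original card):
+
+ ```bash
+ pip install --upgrade "transformers>=4.37.0"
+ python -c "import transformers; print(transformers.__version__)"  # expect >= 4.37.0
+ ```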
61
 
62
+ ### Inference with Different Libraries
63
+
64
+ #### 🤗 Hugging Face Transformers
65
+
66
+ ```python
67
  from transformers import AutoModelForCausalLM, AutoTokenizer
68
 
69
  model_name = "BitStarWalkin/SuperCorrect-7B"
 
76
  )
77
  tokenizer = AutoTokenizer.from_pretrained(model_name)
78
 
79
+ prompt = "Find the distance between the foci of the ellipse \\[9x^2 + \\frac{y^2}{9} = 99.\\]"
80
  hierarchical_prompt = "Solve the following math problem in a step-by-step XML format, each step should be enclosed within tags like <Step1></Step1>. For each step enclosed within the tags, determine if this step is challenging and tricky, if so, add detailed explanation and analysis enclosed within <Key> </Key> in this step, as helpful annotations to help you thinking and remind yourself how to conduct reasoning correctly. After all the reasoning steps, summarize the common solution and reasoning steps to help you and your classmates who are not good at math generalize to similar problems within <Generalized></Generalized>. Finally present the final answer within <Answer> </Answer>."
81
  # HT
82
  messages = [
 
101
 
102
  response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
103
  print(response)
104
+ ```
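+
+ Since the hierarchical prompt asks the model to wrap the final answer in `<Answer></Answer>` tags, a small helper can pull it out of `response`. This is a sketch not included in the original card; `extract_answer` is a hypothetical name:
+
+ ```python
+ import re
+
+ def extract_answer(response: str):
+     # Return the text inside the first <Answer>...</Answer> block, or None if absent.
+     match = re.search(r"<Answer>(.*?)</Answer>", response, re.DOTALL)
+     return match.group(1).strip() if match else None
+
+ print(extract_answer(response))
+ ```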
105
 
106
+ #### 🔥 vLLM
 
 
107
 
108
+ ```python
109
+ import os
110
+ from vllm import LLM, SamplingParams
111
+ model_name = 'BitStarWalkin/SuperCorrect-7B'
112
+ hierarchical_prompt = "Solve the following math problem in a step-by-step XML format, each step should be enclosed within tags like <Step1></Step1>. For each step enclosed within the tags, determine if this step is challenging and tricky, if so, add detailed explanation and analysis enclosed within <Key> </Key> in this step, as helpful annotations to help you thinking and remind yourself how to conduct reasoning correctly. After all the reasoning steps, summarize the common solution and reasoning steps to help you and your classmates who are not good at math generalize to similar problems within <Generalized></Generalized>. Finally present the final answer within <Answer> </Answer>."
113
+ prompts = [
114
+ "For what positive value of $t$ is $|{-4+ti}| = 6$?",
115
+ "Find the distance between the foci of the ellipse \\[9x^2 + \\frac{y^2}{9} = 99.\\]",
116
+ "The fourth term of a geometric series is $24$ and the eleventh term is $3072$. What is the common ratio?"
117
+ ]
118
+ combined_prompts = [hierarchical_prompt + '\n' + prompt for prompt in prompts]
119
+ sampling_params = SamplingParams(temperature=0, top_p=1, max_tokens=1024)
120
+ llm = LLM(model=model_name, trust_remote_code=True)
121
+ outputs = llm.generate(combined_prompts, sampling_params)
122
+
123
+ # Print the outputs.
124
+ for output in outputs:
125
+ prompt = output.prompt
126
+ generated_text = output.outputs[0].text
127
+ print(f"Prompt: {prompt}")
128
+ print(f"Generated text: {generated_text}")
129
+ ```
130
+
131
+ Here we also provide inference code with [vLLM](https://github.com/vllm-project/vllm), a fast and easy-to-use library for LLM inference and serving.
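+
+ For serving rather than offline batch inference, vLLM also provides an OpenAI-compatible server. A minimal sketch (standard vLLM usage, not from the original card):
+
+ ```bash
+ # Launches an OpenAI-compatible endpoint (port 8000 by default).
+ python -m vllm.entrypoints.openai.api_server \
+     --model BitStarWalkin/SuperCorrect-7B \
+     --trust-remote-code
+ ```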
132
+
133
+
134
+ ### 1. Our evaluation
135
+
136
+ Here we provide two different evaluation methods: an **online version**, which uses GPT-4o to conduct a fairer and more robust judgement, and an **offline version**, which uses a programmatic method to verify the final results. Both methods aim to provide more accurate and stricter evaluation results, as the final answers in the MATH dataset are not always numeric or pure expressions. We currently provide the online version for evaluation and will release the offline version soon.
137
+
138
+
139
+ ```bash
140
+ API_KEY="Input your key here"
141
+ MODEL_NAME_OR_PATH="BitStarWalkin/SuperCorrect-7B"
142
+ export CUDA_VISIBLE_DEVICES="0"
143
+ bash evaluation.sh $API_KEY $MODEL_NAME_OR_PATH
144
+
145
+ ```
146
+
147
+
148
+
149
+ ### 2. Evaluation with [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness)
150
+
151
+ ```bash
152
+ lm_eval --model hf \
153
+ --model_args pretrained="Qwen2.5-Math-7B-Instruct" \
154
+ --tasks minerva_math \
155
+ --log_samples \
156
+ --output_path Qwen2.5-Math-7B-Instruct-lm-evaluation \
157
+ --batch_size 12
158
+
159
+ lm_eval --model hf \
160
+ --model_args pretrained="SuperCorrect-7B" \
161
+ --tasks minerva_math \
162
+ --log_samples \
163
+ --output_path SuperCorrect-7B-lm-evaluation \
164
+ --batch_size 12
165
+ ```
166
+ Evaluation results produced by lm-evaluation-harness:
167
+
168
+ | Qwen2.5-Math-7B-Instruct | Version | Filter | n-shot | Metric | | Value | | Stderr |
169
+ | ----------------------------------- | ------: | ------ | -----: | ----------- | ---- | -----: | ---- | -----: |
170
+ | minerva_math | 1 | none | 4 | exact_match | ↑ | 0.5034 | ± | 0.0064 |
171
+ | - minerva_math_algebra | 1 | none | 4 | exact_match | ↑ | 0.7009 | ± | 0.0133 |
172
+ | - minerva_math_counting_and_prob | 1 | none | 4 | exact_match | ↑ | 0.5232 | ± | 0.0230 |
173
+ | - minerva_math_geometry | 1 | none | 4 | exact_match | ↑ | 0.4635 | ± | 0.0228 |
174
+ | - minerva_math_intermediate_algebra | 1 | none | 4 | exact_match | ↑ | 0.2237 | ± | 0.0139 |
175
+ | - minerva_math_num_theory | 1 | none | 4 | exact_match | ↑ | 0.4667 | ± | 0.0215 |
176
+ | - minerva_math_prealgebra | 1 | none | 4 | exact_match | ↑ | 0.7394 | ± | 0.0149 |
177
+ | - minerva_math_precalc | 1 | none | 4 | exact_match | ↑ | 0.2143 | ± | 0.0176 |
178
+
179
+
180
+
181
+ | SuperCorrect-7B | Version | Filter | n-shot | Metric | | Value | | Stderr |
182
+ | ------------------------------------ | ------: | ------ | -----: | ----------- | ---- | -----: | ---- | -----: |
183
+ | minerva_math | 1 | none | 4 | exact_match | ↑ | 0.6188 (**+0.1154**) | ± | 0.0065 |
184
+ | - minerva_math_algebra | 1 | none | 4 | exact_match | ↑ | 0.7936 (**+0.0927**) | ± | 0.0118 |
185
+ | - minerva_math_counting_and_prob | 1 | none | 4 | exact_match | ↑ | 0.5802 (**+0.0570**) | ± | 0.0227 |
186
+ | - minerva_math_geometry | 1 | none | 4 | exact_match | ↑ | 0.5261 (**+0.0626**) | ± | 0.0228 |
187
+ | - minerva_math_intermediate_algebra | 1 | none | 4 | exact_match | ↑ | 0.4385 (**+0.2148**) | ± | 0.0165 |
188
+ | - minerva_math_num_theory | 1 | none | 4 | exact_match | ↑ | 0.6167 (**+0.1500**) | ± | 0.0209 |
189
+ | - minerva_math_prealgebra | 1 | none | 4 | exact_match | ↑ | 0.7715 (**+0.0321**) | ± | 0.0142 |
190
+ | - minerva_math_precalc | 1 | none | 4 | exact_match | ↑ | 0.4103 (**+0.1960**) | ± | 0.0211 |
191
+
192
+
193
+ | Summary | Version | Filter | n-shot | Metric | | Value | | Stderr |
194
+ | ------------ | ------: | ------ | ------ | ----------- | ---- | -----: | ---- | -----: |
195
+ | Qwen2.5-Math-7B-Instruct | 1 | none | 4| exact_match | ↑ | 0.5034 | ± | 0.0064 |
196
+ | SuperCorrect-7B | 1 | none | 4| exact_match | ↑ | 0.6188 (**+0.1154**) | ± | 0.0065 |
197
+
198
+ ### 3. Evaluation with [Qwen2.5-Math-Evaluation](https://github.com/QwenLM/Qwen2.5-Math)
199
+ ```bash
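+ # PROMPT_TYPE is assumed to be exported beforehand with a prompt type supported by the
+ # Qwen2.5-Math evaluation script; the original card does not specify its value.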
200
+ export CUDA_VISIBLE_DEVICES="0"
201
+ MODEL_NAME_OR_PATH="Qwen/Qwen2.5-Math-7B-Instruct"
202
+ bash sh/eval.sh $PROMPT_TYPE $MODEL_NAME_OR_PATH
203
+
204
+ export CUDA_VISIBLE_DEVICES="0"
205
+ MODEL_NAME_OR_PATH="BitStarWalkin/SuperCorrect-7B"
206
+ bash sh/eval.sh $PROMPT_TYPE $MODEL_NAME_OR_PATH
207
+ ```
208
+ Evaluation results produced by Qwen2.5-Math-Eval:
209
+ | Model | MATH Accuracy (%) |
210
+ | ---------------- | ----------------- |
211
+ | Qwen2.5-Math | 80.6 |
212
+ | **SuperCorrect** | **82.1** |
213
+ | **Our Improvement** | **+1.5** |
214
+
215
+
216
+ ## Citation
217
+
218
+ ```bibtex
219
+ @inproceedings{yang2025supercorrect,
220
+ title={SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights},
221
  author={Yang, Ling and Yu, Zhaochen and Zhang, Tianjun and Xu, Minkai and Gonzalez, Joseph E and Cui, Bin and Yan, Shuicheng},
222
+ booktitle={International Conference on Learning Representations},
223
+ year={2025}
224
  }
225
+
226
  @article{yang2024buffer,
227
  title={Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models},
228
  author={Yang, Ling and Yu, Zhaochen and Zhang, Tianjun and Cao, Shiyi and Xu, Minkai and Zhang, Wentao and Gonzalez, Joseph E and Cui, Bin},
229
+ journal={Advances in Neural Information Processing Systems},
230
  year={2024}
231
  }
232
+ ```
233
 
234
+ ## Acknowledgements
235
+
236
+ Our SuperCorrect is a two-stage fine-tuned model built on several extraordinary open-source models such as [Qwen2.5-Math](https://github.com/QwenLM/Qwen2.5-Math), [DeepSeek-Math](https://github.com/deepseek-ai/DeepSeek-Math), and [Llama3-Series](https://github.com/meta-llama/llama3). Our evaluation method is based on the codebases of outstanding works such as [Qwen2.5-Math](https://github.com/QwenLM/Qwen2.5-Math) and [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). We also want to express our gratitude for amazing works such as [BoT](https://github.com/YangLing0818/buffer-of-thought-llm), which provides the idea of the thought template.
237
 
238
  ---
239
  ## Use with llama.cpp
 
274
  or
275
  ```
276
  ./llama-server --hf-repo Triangle104/SuperCorrect-7B-Q4_K_M-GGUF --hf-file supercorrect-7b-q4_k_m.gguf -c 2048
277
+ ```
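+
+ As an alternative to the llama.cpp CLI/server commands above, the same GGUF file can be loaded from Python with the llama-cpp-python bindings. A minimal sketch, not part of the original card (requires `pip install llama-cpp-python huggingface_hub`):
+
+ ```python
+ from llama_cpp import Llama
+
+ # Download the quantized weights from this repo and load them.
+ llm = Llama.from_pretrained(
+     repo_id="Triangle104/SuperCorrect-7B-Q4_K_M-GGUF",
+     filename="supercorrect-7b-q4_k_m.gguf",
+     n_ctx=2048,
+ )
+
+ out = llm.create_chat_completion(
+     messages=[{"role": "user", "content": "For what positive value of $t$ is $|-4+ti| = 6$?"}],
+     max_tokens=512,
+ )
+ print(out["choices"][0]["message"]["content"])
+ ```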