nielsr (HF staff) committed
Commit b74e55e · verified · 1 Parent(s): e9906dc

Update pipeline tag, base model, and add description


This PR updates the model card with the correct `pipeline_tag` to reflect the model's question-answering capabilities. It also corrects the `base_model` to point to the correct repository, and adds a short description at the beginning for clarity. The `tags` field is also adjusted for better accuracy.

Files changed (1)
  1. README.md +129 -67
README.md CHANGED
@@ -1,55 +1,56 @@
1
  ---
2
- license: apache-2.0
 
3
  language:
4
  - en
 
 
5
  metrics:
6
  - accuracy
7
- base_model: BitStarWalkin/SuperCorrect-7B
8
- library_name: transformers
9
  tags:
10
- - llama-cpp
11
- - gguf-my-repo
 
 
12
  ---
13
 
14
- # Triangle104/SuperCorrect-7B-Q5_K_M-GGUF
15
- This model was converted to GGUF format from [`BitStarWalkin/SuperCorrect-7B`](https://huggingface.co/BitStarWalkin/SuperCorrect-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
16
- Refer to the [original model card](https://huggingface.co/BitStarWalkin/SuperCorrect-7B) for more details on the model.
 
 
 
17
 
18
  ---
19
  Model details:
20
- -
21
- SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights
22
 
23
  SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights. Ling Yang*, Zhaochen Yu*, Tianjun Zhang, Minkai Xu, Joseph E. Gonzalez, Bin Cui, Shuicheng Yan
24
 
25
  Peking University, Skywork AI, UC Berkeley, Stanford University
26
 
27
  Introduction
28
- -
29
- This repo provides the official implementation of SuperCorrect, a novel two-stage fine-tuning method for improving both the reasoning accuracy and self-correction ability of LLMs.
30
 
31
  Notably, our SuperCorrect-7B model significantly surpasses the powerful DeepSeekMath-7B by 7.8%/5.3% and Qwen2.5-Math-7B by 15.1%/6.3% on the MATH/GSM8K benchmarks, achieving new SOTA performance among all 7B models.
32
  🚨 Unlike other LLMs, we equip LLMs with our pre-defined hierarchical thought template ([Buffer of Thought (BoT)](https://github.com/YangLing0818/buffer-of-thought-llm)) to conduct more deliberate reasoning than conventional CoT. Note that our evaluation method relies on the pure mathematical reasoning abilities of LLMs rather than leveraging other programming-based methods such as PoT and ToRA.
33
 
34
  Examples
35
- -
36
- 🚨 For a more concise and clear presentation, we omit some XML tags.
37
  Model details
38
 
39
  You can check our GitHub repo for more details.
40
 
41
  Quick Start
42
- -
43
- Requirements
44
- -
45
  Since our current model is based on the Qwen2.5-Math series, `transformers>=4.37.0` is required for Qwen2.5-Math models. The latest version is recommended.
46
 
47
  🚨 This is a must because `transformers` has included the Qwen2 code since version `4.37.0`.
48
 
49
  Inference
50
- -
51
- 🤗 Hugging Face Transformers
52
 
 
53
  from transformers import AutoModelForCausalLM, AutoTokenizer
54
 
55
  model_name = "BitStarWalkin/SuperCorrect-7B"
@@ -62,7 +63,7 @@ model = AutoModelForCausalLM.from_pretrained(
62
  )
63
  tokenizer = AutoTokenizer.from_pretrained(model_name)
64
 
65
- prompt = "Find the distance between the foci of the ellipse \[9x^2 + \frac{y^2}{9} = 99.\]"
66
  hierarchical_prompt = "Solve the following math problem in a step-by-step XML format, each step should be enclosed within tags like <Step1></Step1>. For each step enclosed within the tags, determine if this step is challenging and tricky, if so, add detailed explanation and analysis enclosed within <Key> </Key> in this step, as helpful annotations to help you thinking and remind yourself how to conduct reasoning correctly. After all the reasoning steps, summarize the common solution and reasoning steps to help you and your classmates who are not good at math generalize to similar problems within <Generalized></Generalized>. Finally present the final answer within <Answer> </Answer>."
67
  # HT
68
  messages = [
@@ -87,67 +88,128 @@ generated_ids = [
87
 
88
  response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
89
  print(response)
 
90
 
91
- Performance
92
- -
93
- We evaluate our SuperCorrect-7B on two widely used English math benchmarks, GSM8K and MATH. All evaluations are tested with our evaluation method, which is zero-shot hierarchical-thought-based prompting.
94
 
95
- Citation
96
- -
97
- @article{yang2024supercorrect,
98
- title={SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights},
99
- author={Yang, Ling and Yu, Zhaochen and Zhang, Tianjun and Xu, Minkai and Gonzalez, Joseph E and Cui, Bin and Yan, Shuicheng},
100
- journal={arXiv preprint arXiv:2410.09008},
101
- year={2024}
102
- }
103
- @article{yang2024buffer,
104
- title={Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models},
105
- author={Yang, Ling and Yu, Zhaochen and Zhang, Tianjun and Cao, Shiyi and Xu, Minkai and Zhang, Wentao and Gonzalez, Joseph E and Cui, Bin},
106
- journal={arXiv preprint arXiv:2406.04271},
107
- year={2024}
108
- }
109
 
110
- Acknowledgements
111
- -
112
- Our SuperCorrect is a two-stage fine-tuned model which is based on several extraordinary open-source models such as Qwen2.5-Math, DeepSeek-Math, and Llama3-Series. Our evaluation method is based on the code base of outstanding works such as Qwen2.5-Math and lm-evaluation-harness. We also want to express our gratitude for amazing works such as BoT, which provides the idea of the thought template.
113
 
114
- ---
115
- ## Use with llama.cpp
116
- Install llama.cpp through brew (works on Mac and Linux)
117
 
118
- ```bash
119
- brew install llama.cpp
120
 
121
- ```
122
- Invoke the llama.cpp server or the CLI.
123
 
124
- ### CLI:
125
  ```bash
126
- llama-cli --hf-repo Triangle104/SuperCorrect-7B-Q5_K_M-GGUF --hf-file supercorrect-7b-q5_k_m.gguf -p "The meaning to life and the universe is"
127
  ```
128
 
129
- ### Server:
 
130
  ```bash
131
- llama-server --hf-repo Triangle104/SuperCorrect-7B-Q5_K_M-GGUF --hf-file supercorrect-7b-q5_k_m.gguf -c 2048
132
  ```
133
 
134
- Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
135
-
136
- Step 1: Clone llama.cpp from GitHub.
137
- ```
138
- git clone https://github.com/ggerganov/llama.cpp
139
  ```
140
 
141
- Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
142
- ```
143
- cd llama.cpp && LLAMA_CURL=1 make
144
- ```
145
 
146
- Step 3: Run inference through the main binary.
147
- ```
148
- ./llama-cli --hf-repo Triangle104/SuperCorrect-7B-Q5_K_M-GGUF --hf-file supercorrect-7b-q5_k_m.gguf -p "The meaning to life and the universe is"
149
- ```
150
- or
151
- ```
152
- ./llama-server --hf-repo Triangle104/SuperCorrect-7B-Q5_K_M-GGUF --hf-file supercorrect-7b-q5_k_m.gguf -c 2048
153
  ```
1
  ---
2
+ base_model: YangLing0818/SuperCorrect-7B
3
+ pipeline_tag: question-answering
4
  language:
5
  - en
6
+ library_name: transformers
7
+ license: apache-2.0
8
  metrics:
9
  - accuracy
 
 
10
  tags:
11
+ - llama
12
+ - qwen
13
+ - mathematical-reasoning
14
+ - gguf
15
  ---
16
 
17
+ # SuperCorrect-7B: A Fine-Tuned LLM for Enhanced Mathematical Reasoning
18
+
19
+ SuperCorrect-7B is a 7B-parameter large language model fine-tuned for improved mathematical reasoning and self-correction capabilities. It uses a two-stage framework that incorporates hierarchical thought templates and cross-model collaborative direct preference optimization. The model significantly outperforms other 7B models on the MATH and GSM8K benchmarks.
20
+
21
+ This model was converted to GGUF format from [`YangLing0818/SuperCorrect-7B`](https://huggingface.co/YangLing0818/SuperCorrect-7B).
22
+ Refer to the [original model card](https://huggingface.co/YangLing0818/SuperCorrect-7B) for more details on the model.
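+ If you want to try the GGUF file directly, the sketch below shows one way to do so with llama.cpp; the `--hf-repo`/`--hf-file` values are taken from this quantized upload, and the prompt and context size are only examples, so adjust them to your setup.
+
+ ```bash
+ # Chat with the quantized model via llama.cpp's CLI
+ # (assumes llama.cpp is installed, e.g. `brew install llama.cpp` on macOS/Linux)
+ llama-cli --hf-repo Triangle104/SuperCorrect-7B-Q5_K_M-GGUF --hf-file supercorrect-7b-q5_k_m.gguf \
+   -p "Find the distance between the foci of the ellipse 9x^2 + y^2/9 = 99."
+
+ # Or start llama.cpp's built-in HTTP server with a 2048-token context window
+ llama-server --hf-repo Triangle104/SuperCorrect-7B-Q5_K_M-GGUF --hf-file supercorrect-7b-q5_k_m.gguf -c 2048
+ ```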
23
 
24
  ---
25
  Model details:
26
+ - SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights
 
27
 
28
  SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights. Ling Yang*, Zhaochen Yu*, Tianjun Zhang, Minkai Xu, Joseph E. Gonzalez, Bin Cui, Shuicheng Yan
29
 
30
  Peking University, Skywork AI, UC Berkeley, Stanford University
31
 
32
  Introduction
33
+ - This repo provides the official implementation of SuperCorrect, a novel two-stage fine-tuning method for improving both the reasoning accuracy and self-correction ability of LLMs.
 
34
 
35
  Notably, our SuperCorrect-7B model significantly surpasses the powerful DeepSeekMath-7B by 7.8%/5.3% and Qwen2.5-Math-7B by 15.1%/6.3% on the MATH/GSM8K benchmarks, achieving new SOTA performance among all 7B models.
36
  🚨 Unlike other LLMs, we equip LLMs with our pre-defined hierarchical thought template ([Buffer of Thought (BoT)](https://github.com/YangLing0818/buffer-of-thought-llm)) to conduct more deliberate reasoning than conventional CoT. Note that our evaluation method relies on the pure mathematical reasoning abilities of LLMs rather than leveraging other programming-based methods such as PoT and ToRA.
37
 
38
  Examples
39
+ - 🚨 For a more concise and clear presentation, we omit some XML tags.
 
40
  Model details
41
 
42
  You can check our GitHub repo for more details.
43
 
44
  Quick Start
45
+ - Requirements
 
 
46
  Since our current model is based on the Qwen2.5-Math series, `transformers>=4.37.0` is required for Qwen2.5-Math models. The latest version is recommended.
47
 
48
  🚨 This is a must because `transformers` has included the Qwen2 code since version `4.37.0`.
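  For example, one minimal way to satisfy this requirement (assuming a pip-based environment) is:

  ```bash
  # Install or upgrade transformers to a release that includes the Qwen2 architecture (>= 4.37.0)
  pip install --upgrade "transformers>=4.37.0"
  ```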
49
 
50
  Inference
51
+ - 🤗 Hugging Face Transformers
 
52
 
53
+ ```python
54
  from transformers import AutoModelForCausalLM, AutoTokenizer
55
 
56
  model_name = "BitStarWalkin/SuperCorrect-7B"
 
63
  )
64
  tokenizer = AutoTokenizer.from_pretrained(model_name)
65
 
66
+ prompt = "Find the distance between the foci of the ellipse \\[9x^2 + \\frac{y^2}{9} = 99.\\]"
67
  hierarchical_prompt = "Solve the following math problem in a step-by-step XML format, each step should be enclosed within tags like <Step1></Step1>. For each step enclosed within the tags, determine if this step is challenging and tricky, if so, add detailed explanation and analysis enclosed within <Key> </Key> in this step, as helpful annotations to help you thinking and remind yourself how to conduct reasoning correctly. After all the reasoning steps, summarize the common solution and reasoning steps to help you and your classmates who are not good at math generalize to similar problems within <Generalized></Generalized>. Finally present the final answer within <Answer> </Answer>."
68
  # HT
69
  messages = [
 
88
 
89
  response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
90
  print(response)
91
+ ```
92
 
93
+ #### 🔥 vLLM
 
 
94
 
95
+ ```python
96
+ import os
97
+ from vllm import LLM, SamplingParams
98
+ model_name = 'BitStarWalkin/SuperCorrect-7B'
99
+ hierarchical_prompt = "Solve the following math problem in a step-by-step XML format, each step should be enclosed within tags like <Step1></Step1>. For each step enclosed within the tags, determine if this step is challenging and tricky, if so, add detailed explanation and analysis enclosed within <Key> </Key> in this step, as helpful annotations to help you thinking and remind yourself how to conduct reasoning correctly. After all the reasoning steps, summarize the common solution and reasoning steps to help you and your classmates who are not good at math generalize to similar problems within <Generalized></Generalized>. Finally present the final answer within <Answer> </Answer>."
100
+ prompts = [
101
+ "For what positive value of $t$ is $|{-4+ti}| = 6$?",
102
+ "Find the distance between the foci of the ellipse \\[9x^2 + \\frac{y^2}{9} = 99.\\]",
103
+ "The fourth term of a geometric series is $24$ and the eleventh term is $3072$. What is the common ratio?"
104
+ ]
105
+ combined_prompts = [hierarchical_prompt + '\n' + prompt for prompt in prompts]
106
+ sampling_params = SamplingParams(temperature=0, top_p=1, max_tokens=1024)
107
+ llm = LLM(model=model_name, trust_remote_code=True)
108
+ outputs = llm.generate(combined_prompts, sampling_params)
109
+
110
+ # Print the outputs.
111
+ for output in outputs:
112
+ prompt = output.prompt
113
+ generated_text = output.outputs[0].text
114
+ print(f"Prompt: {prompt}")
115
+ print(f"Generated text: {generated_text}")
116
+ ```
117
 
118
+ Here we also provide inference code with [vLLM](https://github.com/vllm-project/vllm). vLLM is a fast and easy-to-use library for LLM inference and serving.
 
 
119
120
 
121
+ ### 1. Our evaluation
 
122
 
123
+ Here we provide two different evaluation methods: an **online version**, which utilizes GPT-4o to conduct a fairer and more robust judgement, and an **offline version**, which uses a programmatic method to verify the final results. Both methods aim to provide more accurate and stricter evaluation results, since the final answers in the MATH dataset are not always numeric or pure expressions. We currently provide the online version; the offline version will be released soon.
 
124
 
 
125
  ```bash
126
+ API_KEY="Input your key here"
127
+ MODEL_NAME_OR_PATH="BitStarWalkin/SuperCorrect-7B"
128
+ export CUDA_VISIBLE_DEVICES="0"
129
+ bash evaluation.sh $API_KEY $MODEL_NAME_OR_PATH
130
  ```
131
 
132
+ ### 2. Evaluation with [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness)
133
+
134
  ```bash
135
+ lm_eval --model hf \
136
+ --model_args pretrained="Qwen2.5-Math-7B-Instruct" \
137
+ --tasks minerva_math \
138
+ --log_samples \
139
+ --output_path Qwen2.5-Math-7B-Instruct-lm-evaluation \
140
+ --batch_size 12
141
+
142
+ lm_eval --model hf \
143
+ --model_args pretrained="SuperCorrect-7B" \
144
+ --tasks minerva_math \
145
+ --log_samples \
146
+ --output_path SuperCorrect-7B-lm-evaluation \
147
+ --batch_size 12
148
  ```
149
+ Evaluation results produced by lm-evaluation-harness:
150
+
151
+ | Qwen2.5-Math-7B-Instruct | Version | Filter | n-shot | Metric | | Value | | Stderr |
152
+ | ----------------------------------- | ------: | ------ | -----: | ----------- | ---- | -----: | ---- | -----: |
153
+ | minerva_math | 1 | none | 4 | exact_match | ↑ | 0.5034 | ± | 0.0064 |
154
+ | - minerva_math_algebra | 1 | none | 4 | exact_match | ↑ | 0.7009 | ± | 0.0133 |
155
+ | - minerva_math_counting_and_prob | 1 | none | 4 | exact_match | ↑ | 0.5232 | ± | 0.0230 |
156
+ | - minerva_math_geometry | 1 | none | 4 | exact_match | ↑ | 0.4635 | ± | 0.0228 |
157
+ | - minerva_math_intermediate_algebra | 1 | none | 4 | exact_match | ↑ | 0.2237 | ± | 0.0139 |
158
+ | - minerva_math_num_theory | 1 | none | 4 | exact_match | ↑ | 0.4667 | ± | 0.0215 |
159
+ | - minerva_math_prealgebra | 1 | none | 4 | exact_match | ↑ | 0.7394 | ± | 0.0149 |
160
+ | - minerva_math_precalc | 1 | none | 4 | exact_match | ↑ | 0.2143 | ± | 0.0176 |
161
+
162
+ | SuperCorrect-7B | Version | Filter | n-shot | Metric | | Value | | Stderr |
163
+ | ------------------------------------ | ------: | ------ | -----: | ----------- | ---- | -----: | ---- | -----: |
164
+ | minerva_math | 1 | none | 4 | exact_match | ↑ | 0.6188 (**+0.1154**) | ± | 0.0065 |
165
+ | - minerva_math_algebra | 1 | none | 4 | exact_match | ↑ | 0.7936 (**+0.0927**) | ± | 0.0118 |
166
+ | - minerva_math_counting_and_prob | 1 | none | 4 | exact_match | ↑ | 0.5802 (**+0.0570**) | ± | 0.0227 |
167
+ | - minerva_math_geometry | 1 | none | 4 | exact_match | ↑ | 0.5261 (**+0.0626**) | ± | 0.0228 |
168
+ | - minerva_math_intermediate_algebra | 1 | none | 4 | exact_match | ↑ | 0.4385 (**+0.2148**) | ± | 0.0165 |
169
+ | - minerva_math_num_theory | 1 | none | 4 | exact_match | ↑ | 0.6167 (**+0.1500**) | ± | 0.0209 |
170
+ | - minerva_math_prealgebra | 1 | none | 4 | exact_match | ↑ | 0.7715 (**+0.0321**) | ± | 0.0142 |
171
+ | - minerva_math_precalc | 1 | none | 4 | exact_match | ↑ | 0.4103 (**+0.1960**) | ± | 0.0211 |
172
+
173
+ | Summary | Version | Filter | n-shot | Metric | | Value | | Stderr |
174
+ | ------------ | ------: | ------ | ------ | ----------- | ---- | -----: | ---- | -----: |
175
+ | Qwen2.5-Math-7B-Instruct | 1 | none | 4| exact_match | ↑ | 0.5034 | ± | 0.0064 |
176
+ | SuperCorrect-7B | 1 | none | 4| exact_match | ↑ | 0.6188 (**+0.1154**) | ± | 0.0065 |
177
+
178
+ ### 3. Evaluation with [Qwen2.5-Math-Evaluation](https://github.com/QwenLM/Qwen2.5-Math)
179
+ ```bash
180
+ export CUDA_VISIBLE_DEVICES="0"
181
+ MODEL_NAME_OR_PATH="Qwen/Qwen2.5-Math-7B-Instruct"
182
+ bash sh/eval.sh $PROMPT_TYPE $MODEL_NAME_OR_PATH
183
 
184
+ export CUDA_VISIBLE_DEVICES="0"
185
+ MODEL_NAME_OR_PATH="BitStarWalkin/SuperCorrect-7B"
186
+ bash sh/eval.sh $PROMPT_TYPE $MODEL_NAME_OR_PATH
 
 
187
  ```
188
+ Evaluation results produced by Qwen2.5-Math-Eval:
189
+ | Model | MATH Accuracy (%) |
190
+ | ---------------- | ----------------- |
191
+ | Qwen2.5-Math | 80.6 |
192
+ | **SuperCorrect** | **82.1** |
193
+ | **Our Improvement** | **+1.5** |
194
 
195
+ ## Citation
196
 
197
+ ```bibtex
198
+ @inproceedings{yang2025supercorrect,
199
+ title={SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights},
200
+ author={Yang, Ling and Yu, Zhaochen and Zhang, Tianjun and Xu, Minkai and Gonzalez, Joseph E and Cui, Bin and Yan, Shuicheng},
201
+ booktitle={International Conference on Learning Representations},
202
+ year={2025}
203
+ }
204
+
205
+ @article{yang2024buffer,
206
+ title={Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models},
207
+ author={Yang, Ling and Yu, Zhaochen and Zhang, Tianjun and Cao, Shiyi and Xu, Minkai and Zhang, Wentao and Gonzalez, Joseph E and Cui, Bin},
208
+ journal={Advances in Neural Information Processing Systems},
209
+ year={2024}
210
+ }
211
  ```
212
+
213
+ ## Acknowledgements
214
+
215
+ Our SuperCorrect is a two-stage fine-tuned model which is based on several extraordinary open-source models such as [Qwen2.5-Math](https://github.com/QwenLM/Qwen2.5-Math), [DeepSeek-Math](https://github.com/deepseek-ai/DeepSeek-Math), and [Llama3-Series](https://github.com/meta-llama/llama3). Our evaluation method is based on the code base of outstanding works such as [Qwen2.5-Math](https://github.com/QwenLM/Qwen2.5-Math) and [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). We also want to express our gratitude for amazing works such as [BoT](https://github.com/YangLing0818/buffer-of-thought-llm), which provides the idea of the thought template.