Add code snippet and other sections

#2
by nielsr - opened
Files changed (1)
  1. README.md +67 -3
README.md CHANGED
@@ -16,19 +16,58 @@ tags:
This model was converted to GGUF format from [`BitStarWalkin/SuperCorrect-7B`](https://huggingface.co/BitStarWalkin/SuperCorrect-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/BitStarWalkin/SuperCorrect-7B) for more details on the model.

- ---
-
## Model Description
SuperCorrect is a novel two-stage fine-tuning method for improving both the reasoning accuracy and the self-correction ability of LLMs. It equips LLMs with a pre-defined hierarchical thought template (Buffer of Thought, BoT) to conduct more deliberate reasoning. Notably, SuperCorrect-7B significantly surpasses the powerful DeepSeekMath-7B by 7.8%/5.3% and Qwen2.5-Math-7B by 15.1%/6.3% on the MATH/GSM8K benchmarks, achieving new SOTA performance among all 7B models. It relies on the pure mathematical reasoning abilities of LLMs instead of leveraging program-aided methods such as PoT and ToRA.

## Paper

- [SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights](https://hf.co/papers/2410.09008)
+ [SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights](https://huggingface.co/papers/2410.09008)

## Code

https://github.com/YangLing0818/SuperCorrect-llm

+ ## Use with Transformers
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "BitStarWalkin/SuperCorrect-7B"
+ device = "cuda"
+
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     torch_dtype="auto",
+     device_map="auto"
+ )
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+
+ # Use a raw string so LaTeX backslashes (e.g. \frac) are not treated as escapes.
+ prompt = r"Find the distance between the foci of the ellipse \[9x^2 + \frac{y^2}{9} = 99.\]"
+ # Hierarchical thought (HT) template: elicits XML-tagged, step-by-step reasoning.
+ hierarchical_prompt = "Solve the following math problem in a step-by-step XML format, each step should be enclosed within tags like <Step1></Step1>. For each step enclosed within the tags, determine if this step is challenging and tricky, if so, add detailed explanation and analysis enclosed within <Key> </Key> in this step, as helpful annotations to help you thinking and remind yourself how to conduct reasoning correctly. After all the reasoning steps, summarize the common solution and reasoning steps to help you and your classmates who are not good at math generalize to similar problems within <Generalized></Generalized>. Finally present the final answer within <Answer> </Answer>."
+ messages = [
+     {"role": "system", "content": hierarchical_prompt},
+     {"role": "user", "content": prompt}
+ ]
+
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True
+ )
+ model_inputs = tokenizer([text], return_tensors="pt").to(device)
+
+ generated_ids = model.generate(
+     **model_inputs,
+     max_new_tokens=1024
+ )
+ # Strip the prompt tokens so only the newly generated tokens are decoded.
+ generated_ids = [
+     output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+ ]
+
+ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+ print(response)
+ ```
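
Since the hierarchical prompt constrains the output to XML-style tags, the final answer can be recovered programmatically. A minimal follow-up sketch, assuming the model honors the `<Step1>`/`<Answer>` conventions requested above; it reuses `response` from the snippet:

```python
import re

# Pull the content of a single tag out of the model's response.
def extract_tag(text: str, tag: str):
    match = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
    return match.group(1).strip() if match else None

steps = re.findall(r"<Step\d+>", response)
print(f"Parsed {len(steps)} reasoning steps")
print("Final answer:", extract_tag(response, "Answer"))
```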

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
@@ -68,4 +107,29 @@ Step 3: Run inference through the main binary.
or
```
./llama-server --hf-repo Triangle104/SuperCorrect-7B-Q6_K-GGUF --hf-file supercorrect-7b-q6_k.gguf -c 2048
```
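
Once `llama-server` is up, it can also be queried over its OpenAI-compatible HTTP API. A minimal sketch using only the Python standard library, assuming the server's default port 8080:

```python
import json
import urllib.request

# llama-server exposes an OpenAI-compatible /v1/chat/completions endpoint.
payload = {
    "messages": [
        {"role": "user", "content": "Find the distance between the foci of the ellipse 9x^2 + y^2/9 = 99."}
    ],
    "max_tokens": 1024,
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```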
+
+ ## Acknowledgements
+
+ SuperCorrect is a two-stage fine-tuned model built on several outstanding open-source models, including Qwen2.5-Math, DeepSeek-Math, and the Llama3 series. Our evaluation method is based on the code bases of excellent works such as Qwen2.5-Math and lm-evaluation-harness. We also want to express our gratitude for amazing works such as BoT, which provides the idea of thought templates.
+
+ ## Performance
+
+ We evaluate SuperCorrect-7B on two widely used English math benchmarks, GSM8K and MATH. All evaluations use our evaluation method, zero-shot hierarchical thought-based prompting.
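
As an illustration only (this is not the authors' evaluation harness), zero-shot scoring under this prompting scheme reduces to extracting the `<Answer>` tag and comparing it to the gold label after light normalization; the normalization rules below are assumptions:

```python
import re

def normalize(answer: str) -> str:
    # Crude normalization: strip whitespace and a trailing period.
    return re.sub(r"\s+", "", answer).rstrip(".")

def exact_match(predicted: str, gold: str) -> bool:
    return normalize(predicted) == normalize(gold)

# Example: whitespace and trailing punctuation are ignored.
print(exact_match(" 4\\sqrt{22}. ", "4\\sqrt{22}"))  # True
```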
+
+ ## Citation
+
+ ```bibtex
+ @article{yang2024supercorrect,
+   title={SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights},
+   author={Yang, Ling and Yu, Zhaochen and Zhang, Tianjun and Xu, Minkai and Gonzalez, Joseph E and Cui, Bin and Yan, Shuicheng},
+   journal={arXiv preprint arXiv:2410.09008},
+   year={2024}
+ }
+ @article{yang2024buffer,
+   title={Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models},
+   author={Yang, Ling and Yu, Zhaochen and Zhang, Tianjun and Cao, Shiyi and Xu, Minkai and Zhang, Wentao and Gonzalez, Joseph E and Cui, Bin},
+   journal={arXiv preprint arXiv:2406.04271},
+   year={2024}
+ }
+ ```