---
license: apache-2.0
language:
- en
- zh
base_model:
- Qwen/Qwen2.5-14B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- trl
- vlm
- sft
- code
- math
---
![ccccccccccccc.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/Ii0oEprS2lm6Zoama7CPe.png)

# **Gauss-Opus-14B-R999**

> Gauss-Opus-14B-R999 is based on the Qwen 2.5 14B architecture and is designed to strengthen mathematical and constructive reasoning. The model is optimized for advanced problem-solving, logical structuring, and mathematical comprehension, and it excels at numerical reasoning, theorem proving, and multi-step calculations. Fine-tuned on specialized datasets in mathematics, physics, and formal logic, it delivers structured, high-accuracy outputs with a strong emphasis on precision and clarity.

## **Key Improvements**
1. **Enhanced Mathematical Reasoning**: Optimized for algebra, calculus, number theory, and logical deduction, providing precise and structured solutions.
2. **Improved Instruction Following**: Interprets and follows complex mathematical proofs, equations, and problem-solving instructions with high accuracy.
3. **Versatile Adaptability**: Handles diverse reasoning tasks, including step-by-step solutions, mathematical proofs, and constructive problem-solving.
4. **Long-Context Support**: Accepts up to 128K tokens of input context and can generate up to 8K tokens in a single output, making it well suited to detailed mathematical derivations (a minimal long-generation sketch follows this list).
5. **Multilingual Proficiency**: Supports over 29 languages, including English, Chinese, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more, ensuring broad accessibility.
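
The limits in item 4 map directly onto `generate` arguments. Below is a minimal sketch of a long-input, long-output run; it assumes the model fits in available GPU memory, and `long_problem.txt` is a hypothetical placeholder for a lengthy problem statement:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Gauss-Opus-14B-R999"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)

# "long_problem.txt" is a placeholder for any long problem statement or
# document; the advertised context window is up to 128K input tokens.
with open("long_problem.txt") as f:
    problem = f.read()

messages = [{"role": "user", "content": problem}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

# 8192 matches the stated single-output limit of 8K tokens.
output_ids = model.generate(**inputs, max_new_tokens=8192)
answer = tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(answer)
```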

## **Quickstart with transformers**

Here is a code snippet using `apply_chat_template` that shows how to load the tokenizer and model and generate content:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Gauss-Opus-14B-R999"

# Load the model in its native precision and spread it across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Raw string so the LaTeX command \int is not treated as an escape sequence.
prompt = r"Solve the integral \int x^2 dx and explain the steps."
messages = [
    {"role": "system", "content": "You are a mathematical assistant specialized in problem-solving and theorem proving."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated answer is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
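
For long derivations it can be convenient to stream tokens as they are generated instead of waiting for the full completion. A small optional sketch using the `TextStreamer` utility from `transformers`, reusing the `model` and `tokenizer` objects loaded above:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

messages = [
    {"role": "system", "content": "You are a mathematical assistant specialized in problem-solving and theorem proving."},
    {"role": "user", "content": "Prove that the sum of two even integers is even."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

_ = model.generate(**model_inputs, max_new_tokens=512, streamer=streamer)
```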

## **Intended Use**
1. **Mathematical Problem-Solving**:
   Designed for high-precision mathematical reasoning, step-by-step calculations, and structured solutions.

2. **Theorem Proving and Logical Reasoning**:
   Useful for verifying mathematical proofs, formal logic derivations, and theorem-based reasoning (an illustrative proof-checking prompt is sketched after this list).

3. **STEM Education and Research**:
   Ideal for educators, researchers, and students requiring assistance with complex problem-solving and mathematical modeling.

4. **Algorithm Development and Optimization**:
   Supports structured reasoning in algorithmic problem-solving, coding optimizations, and computational logic.

5. **Long-Form Explanatory Content**:
   Can generate detailed mathematical articles, research summaries, and explanatory guides with structured step-by-step reasoning.

6. **Multilingual Mathematical Assistance**:
   Supports mathematical discussions, translations, and problem explanations across multiple languages for global accessibility.
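
For the proof-verification use case, the system message can constrain the output format. The prompt wording below is illustrative (an assumption, not an official recommendation); it reuses the `model` and `tokenizer` from the quickstart:

```python
# Illustrative proof-checking prompt; the system message wording is an assumption.
messages = [
    {"role": "system", "content": "You are a careful mathematical referee. Check each step of the proof, flag any gaps, and end your answer with VALID or INVALID."},
    {"role": "user", "content": "Claim: if n^2 is even for an integer n, then n is even.\n"
                                "Proof: suppose n is odd, so n = 2k + 1. Then n^2 = 4k^2 + 4k + 1, which is odd, a contradiction."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
verdict_ids = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(verdict_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```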

## **Limitations**
1. **Hardware Requirements**:
   Requires high-memory GPUs or TPUs because of its large parameter count and long-context support (a 4-bit loading sketch is given after this list).

2. **Potential Bias in Training Data**:
   Although optimized for accuracy, the model may inherit biases from its training data in certain problem-solving approaches.

3. **Complexity in Abstract Theories**:
   May struggle with highly abstract or unsolved mathematical problems that require intuitive leaps beyond computational logic.

4. **Error Propagation in Extended Proofs**:
   Small errors in early steps can compound across multi-step proofs and long-form mathematical derivations.

5. **Prompt Sensitivity**:
   The quality of responses depends on how well the problem is structured and framed in the input prompt.
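
To ease the hardware requirement in item 1, the 14B weights can typically be loaded in 4-bit precision. A minimal sketch assuming a CUDA GPU and the `bitsandbytes` package are available; quantization trades some numerical fidelity for memory, so verify accuracy on your own problems:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "prithivMLmods/Gauss-Opus-14B-R999"

# 4-bit NF4 quantization roughly quarters the weight memory footprint
# compared with 16-bit loading.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
```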