Daemontatox committed
Commit 527adbb · verified · 1 Parent(s): 519d640

Update README.md

Files changed (1):
  1. README.md +66 -5
README.md CHANGED
@@ -14,10 +14,71 @@ datasets:
  library_name: transformers
  ---

- ![image](./image.webp)
- # Uploaded model

- - **Developed by:** Daemontatox
- - **License:** apache-2.0
- - **Finetuned from model :** unsloth/deepseek-r1-distill-qwen-32b
+ # Zirel-R1: Optimized for Fast and Essential Reasoning
+
+ ![Zirel Logo](./image.webp)
+
+ ## Model Overview
+
+ **Zirel-R1** is a **reasoning-optimized** model designed for **short, fast, and necessary reasoning**, avoiding long, unnecessary computation. It surpasses **Cogito-R1** and **PathfinderAI S1** in efficiency, making it well suited to applications that require structured logical inference and quick decision-making.
+
+ - **Developed by:** Daemontatox
+ - **Model Series:** Zirel
+ - **Base Model:** `unsloth/deepseek-r1-distill-qwen-32b`
+ - **License:** Apache-2.0
+ - **Languages:** English
+ - **Finetuned on:** `Daemontatox/math_conv`
+ - **Library:** Transformers
+
+ ## Key Features
+
+ ✅ **Fast and Concise Reasoning** – Delivers precise answers with minimal computational overhead.
+ ✅ **Optimized for Short-Form Problem Solving** – Excels at extracting core insights efficiently.
+ ✅ **Enhanced Logical Inference** – Well suited to structured decision-making, math reasoning, and controlled AI workflows.
+
+ ## Usage
+
+ Load the model with `transformers`:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "Daemontatox/Zirel-R1"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
+
+ prompt = "What is the next number in the sequence: 2, 4, 8, 16?"
+ # Move inputs to wherever device_map placed the model, rather than hard-coding "cuda".
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ output = model.generate(**inputs, max_new_tokens=50)
+ print(tokenizer.decode(output[0], skip_special_tokens=True))
+ ```
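DeepSeek-R1 distills typically emit their chain of thought inside `<think>…</think>` tags before the final answer. If an application only needs the concise answer, the reasoning trace can be stripped in post-processing. A minimal sketch — the sample string below is illustrative of the output shape, not an actual generation from this model:

```python
import re

def strip_reasoning(text: str) -> str:
    """Remove a <think>...</think> reasoning trace, keeping only the final answer."""
    # Non-greedy match across newlines so only the think span is dropped.
    answer = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL)
    return answer.strip()

# Illustrative output shape for the sequence prompt above:
raw = "<think>2, 4, 8, 16 doubles each step, so the next term is 32.</think>\nThe next number is 32."
print(strip_reasoning(raw))  # -> The next number is 32.
```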
+
+ ## Performance
+
+ - **Speed:** 🚀 Optimized for rapid inference and low-latency responses.
+ - **Accuracy:** 🎯 Fine-tuned on high-quality mathematical and reasoning datasets.
+ - **Efficiency:** ⚡ Processes only the information necessary to reach an answer.
+
+ ## Citation
+
+ If you use Zirel-R1, please cite:
+
+ ```bibtex
+ @misc{daemontatox2025zirel,
+   author    = {Daemontatox},
+   title     = {Zirel-R1: Optimized for Fast and Essential Reasoning},
+   year      = {2025},
+   publisher = {Hugging Face},
+   url       = {https://huggingface.co/Daemontatox/Zirel-R1}
+ }
+ ```
+
+ ## License
+
+ This model is released under the Apache-2.0 License.