---
base_model:
- Pinkstack/PARM-V2-phi-4-4k-CoT-PyTorch
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
- code
- phi3
- cot
- o1
- reasoning
license: mit
license_link: https://huggingface.co./microsoft/phi-4/resolve/main/LICENSE
language:
- en
- multilingual
pipeline_tag: text-generation
---
[Phi-4 Technical Report](https://arxiv.org/pdf/2412.08905)

# We are very proud to announce PHI-4 CoT, but you can just call it o1 mini 😉

Please check the examples we provided: https://huggingface.co./Pinkstack/PARM-V2-phi-4-16k-CoT-o1-gguf#%F0%9F%A7%80-examples

This is the best model we've published so far! It answers in two steps, Reasoning -> Final answer, just like o1 mini and other similar reasoning AI models. This model is our new flagship.

Please note that this is an experimental CoT model; if you run into any issues, please report them! A system prompt is very important, but the model will answer in two steps (Reasoning -> Final answer) regardless.

# 🧀 Which quant is right for you? (all tested!)
- ***Q3:*** Use this quant on most high-end devices, such as an RTX 2080 Ti. Responses are very high quality, but it's slightly slower than Q4. (Runs at ~1 token per second or less on a Samsung Z Fold 5 smartphone.)
- ***Q4:*** Use this quant on high-end modern devices, such as an RTX 3080, or on any GPU, TPU, or CPU that is powerful enough and has at least 15 GB of available memory. (This is the quant we personally use on servers and high-end computers.) Recommended.
- ***Q8:*** Use this quant only on very high-end modern devices that can handle its demands. It is very powerful, but Q4 is more well rounded. Not recommended.

The model uses this prompt format (a modified phi-4 prompt):
```
{{ if .System }}<|system|>
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|user|>
{{ .Prompt }}<|im_end|>
{{ end }}<|assistant|>{{ .CoT }}<|CoT|>
{{ .Response }}<|FinalAnswer|><|im_end|>
```
(A runnable usage sketch is included at the bottom of this card.)

# 🧀 Examples:
(q4_k_m on a 10 GB RTX 3080 with 64 GB of memory, running inside MSTY; all examples use "You are a friendly ai assistant." as the system prompt.)

**example 1:**
![example1](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/NoLJREYFU8LdMwynyLLMG.png)
**example 2:**
![example2](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/c4h-nw0DPTrQgX-_tvBoT.png)
**example 3:**
![example3part1.png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/Dcd6-wbpDQuXoulHaqATo.png)
![example3part2.png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/CoBYmYiRt9Z4IDFoOwHxc.png)

All generated locally and pretty quickly too! 😲
Due to our very limited resources we weren't able to evaluate this model (yet); if you evaluate it, please do let us know!

# 🧀 Information
- ⚠️ A low temperature must be used to ensure it won't fail at reasoning; we use 0.3 - 0.8!
- ⚠️ Due to the current prompt format, it may sometimes emit `<|FinalAnswer|>` without providing a final answer after it; you can ignore this or modify the prompt format.
- This is our flagship model, with top-tier reasoning rivaling gemini-flash-exp-2.0-thinking and o1 mini. Results are overall similar to both of them. We are not comparing to QwQ, as its much longer outputs waste tokens.

# 🧀 Uploaded model
- **Developed by:** Pinkstack
- **License:** MIT
- **Finetuned from model:** Pinkstack/PARM-V1-phi-4-4k-CoT-pytorch

This Phi-4 model was trained with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
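
# 🧀 Usage sketch (llama-cpp-python)
To make the prompt format above concrete, here is a minimal sketch using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The GGUF filename, the example question, and the 0.6 temperature are illustrative assumptions (any value in the recommended 0.3 - 0.8 range should work); the template is rendered by hand exactly as in the prompt-format block above.

```python
# Minimal sketch, assuming llama-cpp-python is installed
# (pip install llama-cpp-python) and a quant has been downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="PARM-V2-phi-4-16k-CoT-o1.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=16384,
)

# Render the modified phi-4 template by hand: a system turn, a user turn,
# then an open assistant turn. The model then generates its reasoning,
# the <|CoT|> marker, the final answer, and finally <|FinalAnswer|>.
prompt = (
    "<|system|>\nYou are a friendly ai assistant.<|im_end|>\n"
    "<|user|>\nHow many days are in a leap year?<|im_end|>\n"
    "<|assistant|>"
)

out = llm(
    prompt,
    max_tokens=1024,
    temperature=0.6,           # stay inside the recommended 0.3 - 0.8 range
    stop=["<|FinalAnswer|>"],  # cut generation once the final answer is complete
)
print(out["choices"][0]["text"])
```

Using `<|FinalAnswer|>` as a stop sequence also sidesteps the quirk noted above: if the model emits the token without a final answer after it, generation simply ends there and the token is left out of the returned text.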