llama.cpp Fixes (to GGUF), and System Prompt to invoke "thinking"
To repair for use with llama.cpp / create GGUFs:
- Rename all model safetensors files to remove the "ft-" prefix.
- Fix "model.safetensors.index.json" => remove the "ft-" from all entries (a search/replace in Notepad works, or see the sketch below this list).
The 14B model likely has the same issue too (?) - it uses the same format.
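For anyone who prefers to script the fix, here is a minimal sketch of the rename plus index rewrite. It assumes the model has been downloaded to a local folder; the path is a placeholder, and the shard filenames are just whatever matches "ft-*.safetensors" in that folder.

```python
# Minimal sketch: strip the "ft-" prefix from the safetensors shards and
# from the weight_map entries in model.safetensors.index.json.
import json
from pathlib import Path

MODEL_DIR = Path("path/to/model")  # placeholder: local directory holding the downloaded model

# 1) Rename the shard files, e.g. "ft-model-00001-of-000NN.safetensors" -> "model-00001-of-000NN.safetensors"
for shard in MODEL_DIR.glob("ft-*.safetensors"):
    shard.rename(shard.with_name(shard.name.removeprefix("ft-")))

# 2) Rewrite the index so every weight_map entry points at the renamed files
index_path = MODEL_DIR / "model.safetensors.index.json"
index = json.loads(index_path.read_text())
index["weight_map"] = {
    tensor: filename.removeprefix("ft-")
    for tensor, filename in index["weight_map"].items()
}
index_path.write_text(json.dumps(index, indent=2))
```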
Operation:
Tested a Q2_K quant in LM Studio with this system prompt:
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside &lt;think&gt; &lt;/think&gt; tags, and then provide your solution or response to the problem.
It seemed to work well with the model (using the Jinja template).
Higher temperatures seemed to invoke more reasoning.
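If you want to test the same setup programmatically, here is a minimal sketch that sends the system prompt to LM Studio's OpenAI-compatible local server. The port, model identifier, user question, and temperature value are assumptions; match them to whatever LM Studio shows for the loaded quant.

```python
# Minimal sketch: send the "deep thinking" system prompt to a local LM Studio
# server via its OpenAI-compatible API and nudge the temperature up.
from openai import OpenAI

# LM Studio's local server defaults to port 1234; the api_key is ignored locally.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

SYSTEM_PROMPT = (
    "You are a deep thinking AI, you may use extremely long chains of thought "
    "to deeply consider the problem and deliberate with yourself via systematic "
    "reasoning processes to help come to a correct solution prior to answering. "
    "You should enclose your thoughts and internal monologue inside <think> </think> "
    "tags, and then provide your solution or response to the problem."
)

response = client.chat.completions.create(
    model="local-model",  # placeholder: use the identifier LM Studio reports for the loaded quant
    temperature=1.0,      # higher temps seemed to invoke more reasoning
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "How many r's are in the word strawberry?"},  # example prompt
    ],
)
print(response.choices[0].message.content)
```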
OpenPipe:
This naming issue will severely limit the users of your model.
Quanters like mradermacher will not pick it up, and likewise "GGUF my repo" will crash and burn.
After it (and the 14B?) is fixed, submit a ticket at mradermacher's repo to auto-quant the model to GGUF.
They will create the 32B in GGUF and GGUF-imatrix (and the 14B too).