
Qwenstein 2.5 32B Instruct

Qwenstein 2.5 32B Instruct is a normalized, denoised Fourier interpolation of the following models:

output_base_model: "Qwen/Qwen2.5-32B"
finetune_merge:
  - { "model": "maldv/Qwentile2.5-32B-Instruct", "base": "Qwen/Qwen2.5-32B", "alpha": 1.0, "is_input": true, "is_output": true }
  - { "model": "NovaSky-AI/Sky-T1-32B-Preview", "base": "Qwen/Qwen2.5-32B", "alpha": 0.7 }
  - { "model": "Sao10K/32B-Qwen2.5-Kunou-v1", "base": "Qwen/Qwen2.5-32B", "alpha": 0.6 }
  - { "model": "6cf/QwQ-32B-Preview-IdeaWhiz-v1", "base": "Qwen/Qwen2.5-32B", "alpha": 0.7 }

In other words, all of these models get warped and interpolated in signal space, and then jammed back on top of the base model.
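The general idea above can be sketched in a few lines. This is a hypothetical illustration only, not the actual merge recipe: the function name `fourier_merge`, the per-tensor treatment, the mean-filter denoising step, and the normalization by total alpha are all assumptions made for the sake of the example.

```python
import numpy as np

def fourier_merge(base, finetunes, alphas, denoise_threshold=0.0):
    """Illustrative sketch: blend each finetune's delta from the shared
    base in frequency space, optionally zero out low-magnitude
    components (a crude 'denoise'), normalize by the total alpha,
    and add the merged delta back onto the base weights."""
    spectrum_sum = np.zeros(base.size, dtype=np.complex128)
    for ft, alpha in zip(finetunes, alphas):
        delta = ft - base                      # task vector vs. the base
        spectrum = np.fft.fft(delta.ravel())   # warp into signal space
        if denoise_threshold > 0.0:
            # drop frequency components below the threshold (assumed scheme)
            spectrum[np.abs(spectrum) < denoise_threshold] = 0.0
        spectrum_sum += alpha * spectrum       # interpolate by alpha
    spectrum_sum /= sum(alphas)                # normalize the blend
    merged_delta = np.fft.ifft(spectrum_sum).real.reshape(base.shape)
    return base + merged_delta                 # jam it back onto the base

# Toy usage: with one finetune at alpha 1.0 and no denoising,
# the merge reproduces that finetune exactly.
base = np.zeros((4, 4))
ft = base + np.arange(16, dtype=float).reshape(4, 4)
merged = fourier_merge(base, [ft], [1.0])
```

In a real merge each weight tensor would be processed this way across all of the finetuned checkpoints, with the per-model alpha values from the config above.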

What is this?

This is my second attempt to make Qwentile more intelligent.

Is it?

Yeah, it's pretty good! It isn't quite smart enough to solve the "If you have one bucket that holds two gallons and another bucket that holds five gallons, how do you fill one of the buckets with exactly 4 gallons?" problem, because like every other model it wants to fill the 5-gallon bucket first. But when I offered my proposed solution, it recognized it was correct and didn't get stuck on its own invalid logic.

It also has pretty strong LaTeX capability.

Citation

If you find our work helpful, feel free to give us a cite.

@misc{qwenstein2.5-32b-instruct,
    title = {Qwenstein 2.5 32B Instruct},
    url = {https://huggingface.co./maldv/Qwenstein2.5-32B-Instruct},
    author = {Praxis Maldevide},
    month = {January},
    year = {2025}
}
