
Qwen2.5-32B-Marigold-v0-pre-release

Runner-up to v0

Severian's notes: Of the three candidate models, this one came second. I personally preferred some of the responses I got while testing this model over actual v0, and some of our testers felt the same. It's a little more unhinged than v0, so if that's what you're looking for, consider giving this one a try.

Recommended settings

Context/instruct template: ChatML for the base experience. Mistral V7 isn't supposed to work and results in some system-token bleed, but the prose can also turn dramatic and expressive with it; try it at your own risk.
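For anyone formatting turns by hand rather than through a frontend, ChatML wraps each message in <|im_start|>/<|im_end|> tags. Below is a minimal sketch of that layout; the message contents and {{char}}/{{user}} placeholders are made up for illustration, not taken from the model card.

```python
# Minimal sketch of the ChatML turn layout the recommended template implies.
# The actual prompt text here is a placeholder, not an official example.
CHATML_EXAMPLE = (
    "<|im_start|>system\n"
    "You are {{char}}, roleplaying with {{user}}.<|im_end|>\n"
    "<|im_start|>user\n"
    "Hello there.<|im_end|>\n"
    "<|im_start|>assistant\n"  # the model generates from here
)
```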

Samplers: temperature 0.9, min_p 0.05, top_a 0.3, TFS 0.75, repetition_penalty 1.03, plus DRY if your backend supports it.

A virt-io derivative prompt worked best during our testing, but feel free to use what you like.
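As a reference for wiring those numbers up, here is a minimal sketch using Hugging Face transformers; the repo id and the tiny system prompt are assumptions for illustration, and only the samplers transformers exposes (temperature, min_p, repetition_penalty) are set here, since top_a, TFS, and DRY need a backend that implements them, such as KoboldCpp or text-generation-webui.

```python
# Minimal sketch, assuming the weights load with standard transformers and
# that the finetune keeps the Qwen2.5 tokenizer's ChatML chat template.
# Requires accelerate for device_map="auto" and a recent transformers
# version for min_p support.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "trashpanda-org/Qwen2.5-32B-Marigold-v0-exp"  # repo id may differ
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    # Swap in your own system prompt (e.g. a virt-io derivative).
    {"role": "system", "content": "You are a roleplay partner."},
    {"role": "user", "content": "Hello there."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.9,          # recommended
    min_p=0.05,               # recommended
    repetition_penalty=1.03,  # recommended
)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```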

Thank you!

Big thanks to the folks in the trashpanda-org Discord for testing and sending over some logs!

(datasets to be attributed here later)

Reviews

Feels rather tame compared to v0. However, I still enjoyed this model and its reasoning. Had zero issues with it latching onto the character's personalization. Impersonation was slightly heavy with this model, but it wasn't anything too terrible. I enjoyed it very much!

— Mooth

A peek into Hasnonname's thoughts during testing

(screenshots of Hasnonname's testing commentary)

Some logs

(chat log screenshots)

Model size: 32.8B params · Safetensors · BF16