Why train on anthracite-org/kalo-opus-instruct-22k-no-refusal?

#3 opened by finding1

Why is this trained on anthracite-org/kalo-opus-instruct-22k-no-refusal?

That dataset does not seem to have anything to do with removing alignment. The questions and answers are not on controversial topics. The "no refusal" in the name seems to mean that the dataset contains no question-answer pairs where the answer is a refusal, but it appears to achieve this simply by containing no controversial questions in the first place.

Ai Closer org

It was not a rigorous choice; their Magnum finetune is the best Qwen finetune I have tried. This is not a purely decensorship model. 🤔 Maybe I should update my model card.
