---
library_name: transformers
license: apache-2.0
base_model:
  - nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated
datasets:
  - nbeerbower/Arkhaios-DPO
  - nbeerbower/Purpura-DPO
---


> 🧪 **Just Another Model Experiment**
>
> This is one of many experimental iterations I'm sharing publicly while I mess around with training parameters and ideas. It's not a "real" release, just me being transparent about my learning process. Feel free to look under the hood, but don't expect anything production-ready!

# Mistral-Nemo-Prism-12B

Mahou-1.5-mistral-nemo-12B-lorablated fine-tuned on Arkhaios-DPO and Purpura-DPO.

The goal was to reduce archaic language and purple prose in a completely uncensored model.
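
A minimal inference sketch with `transformers`. The Hub id `nbeerbower/Mistral-Nemo-Prism-12B`, the sampling settings, and the example prompt are assumptions for illustration, not part of this card:

```python
# Hypothetical usage sketch; assumes the model is published on the Hub
# under nbeerbower/Mistral-Nemo-Prism-12B with a chat template in the tokenizer.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/Mistral-Nemo-Prism-12B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

messages = [
    {"role": "user", "content": "Describe a rainy street in plain, modern prose."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```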

## Method

ORPO-tuned on 2x A100 GPUs for 5 epochs.

The learning rate was lowered to 3e-6 for this version. In addition, a system prompt was introduced to augment the training prompts and encourage responses that match the data.
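
For reference, a hedged sketch of what such a run might look like with `trl`'s `ORPOTrainer` (recent `trl` versions). Only the base model, the two datasets, the 3e-6 learning rate, and the 5 epochs come from this card; the dataset schema, batch sizes, `beta`, and the system-prompt handling are assumptions:

```python
# Sketch of an ORPO run with the stated hyperparameters; not the exact script.
from datasets import concatenate_datasets, load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto")

# Preference data; assumes both sets share the DPO-style
# "prompt" / "chosen" / "rejected" columns.
dataset = concatenate_datasets([
    load_dataset("nbeerbower/Arkhaios-DPO", split="train"),
    load_dataset("nbeerbower/Purpura-DPO", split="train"),
])

SYSTEM_PROMPT = "..."  # the card mentions a system prompt; its exact text isn't published

def add_system_prompt(row):
    # Prepend the system prompt to each training prompt (mechanism assumed).
    row["prompt"] = f"{SYSTEM_PROMPT}\n\n{row['prompt']}"
    return row

dataset = dataset.map(add_system_prompt)

config = ORPOConfig(
    output_dir="mistral-nemo-prism-12b",
    learning_rate=3e-6,               # lowered for this version, per the card
    num_train_epochs=5,               # per the card
    per_device_train_batch_size=1,    # assumption; run was scaled across 2x A100
    gradient_accumulation_steps=8,    # assumption
    beta=0.1,                         # ORPO's lambda weighting; assumption
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```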