RedMist137 / DPO-Zephyr-7B

Safetensors model generated from Trainer (base model: RedMist137/AIHF_DPO_iter0). Tags: opt, alignment-handbook, trl, dpo. License: other.
DPO-Zephyr-7B / merges.txt (revision ef5e318)
Latest commit by RedMist137: "Training in progress, step 100" (85cbd5e, verified, 21 days ago)
merges.txt: 456 kB. File too large to display; check the raw version instead.