RedMist137/DPO-Zephyr-7B
Safetensors · RedMist137/AIHF_DPO_iter0 · opt · alignment-handbook · trl · dpo · Generated from Trainer
License: other
main / DPO-Zephyr-7B / config.json

Commit History
End of training
ef5e318 · verified · RedMist137 committed 20 days ago

Training in progress, step 100
a2cc2d6 · verified · RedMist137 committed 21 days ago

End of training
be374ad · verified · RedMist137 committed 21 days ago

Training in progress, step 100
85cbd5e · verified · RedMist137 committed 21 days ago

Training in progress, step 100
b91e871 · verified · RedMist137 committed 25 days ago