tpo-alignment/Mistral-Instruct-7B-TPO-y4
Safetensors · mistral · alignment-handbook · Generated from Trainer
Dataset: princeton-nlp/mistral-instruct-ultrafeedback
arXiv: 2405.16681
License: mit
Commit History: Mistral-Instruct-7B-TPO-y4 / README.md (branch: main)
Update README.md · 60721f9 (verified) · sahsaeedi committed 10 days ago
Update README.md · a6a94bb (verified) · sahsaeedi committed 10 days ago
Upload MistralForCausalLM · 6f2386d (verified) · sahsaeedi committed on Jan 23