Online-DPO
A collection by trl-lib, updated 17 days ago
trl-lib/pythia-1b-deduped-tldr-online-dpo · Updated Aug 2, 2024 · 2
trl-lib/pythia-1b-deduped-tldr-sft · Updated Aug 2, 2024 · 92
trl-lib/pythia-6.9b-deduped-tldr-online-dpo · Updated Aug 2, 2024 · 4
trl-lib/pythia-2.8b-deduped-tldr-sft · Updated Aug 2, 2024 · 4
trl-lib/pythia-2.8b-deduped-tldr-rm · Updated Aug 2, 2024 · 2
trl-lib/pythia-6.9b-deduped-tldr-sft · Updated Aug 2, 2024 · 2
trl-lib/pythia-6.9b-deduped-tldr-rm · Updated Aug 2, 2024 · 4
trl-lib/pythia-1b-deduped-tldr-offline-dpo · Text Generation · Updated Aug 2, 2024 · 75
trl-lib/pythia-2.8b-deduped-tldr-offline-dpo · Text Generation · Updated Aug 2, 2024 · 8
trl-lib/pythia-6.9b-deduped-tldr-offline-dpo · Text Generation · Updated Aug 2, 2024 · 6
trl-lib/pythia-2.8b-deduped-tldr-online-dpo · Text Generation · Updated Aug 2, 2024 · 4
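
The checkpoints listed above are regular causal language model weights on the Hub. Below is a minimal sketch of loading one of them with the transformers library; the model ID comes from the list above, while the TL;DR-style prompt format is an assumption based on the summarization task these models were trained on, not something stated on this page.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model ID taken from the collection listing above.
model_id = "trl-lib/pythia-1b-deduped-tldr-online-dpo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Assumed TL;DR-style prompt; adjust to the format the checkpoint expects.
prompt = "POST: A Reddit post to be summarized goes here.\nTL;DR:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```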