Abstract
Recent advances in human preference alignment have significantly enhanced multimodal generation and understanding. A key approach is training reward models to guide preference optimization. However, existing models are often task-specific, limiting their adaptability across diverse visual applications. We also argue that jointly learning to assess multiple tasks may foster a synergistic effect, where improved image understanding enhances image generation assessment, and refined image evaluation benefits video assessment through better frame analysis. To this end, this paper proposes UnifiedReward, the first unified reward model for multimodal understanding and generation assessment, enabling both pairwise ranking and pointwise scoring, which can be employed for vision model preference alignment. Specifically, (1) we first develop UnifiedReward on our constructed large-scale human preference dataset, covering both image and video generation/understanding tasks. (2) Then, it is utilized to automatically construct high-quality preference pair data from the outputs of vision models, finely filtering those outputs through pair ranking and point sifting. (3) Finally, these data are used for preference alignment of the vision models through Direct Preference Optimization (DPO). Experimental results demonstrate that jointly learning to assess diverse visual tasks yields substantial mutual benefits. We apply our pipeline to both image and video understanding/generation tasks, significantly improving performance in each domain.
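The data-construction step (2) can be illustrated with a minimal sketch. Here `reward_score` is a hypothetical stand-in for UnifiedReward's pointwise scorer (in practice, a call to the trained reward model); the candidate names, the `margin` threshold, and the dictionary output format are illustrative assumptions, not the paper's exact interface.

```python
def reward_score(prompt: str, output: str) -> float:
    # Toy stand-in for the reward model's pointwise score; in practice this
    # would run UnifiedReward on the (prompt, output) pair.
    toy_scores = {"out_a": 0.9, "out_b": 0.4, "out_c": 0.2}
    return toy_scores.get(output, 0.0)

def build_preference_pair(prompt: str, candidates: list, margin: float = 0.2):
    """Rank candidate outputs by pointwise reward (pair ranking), then keep
    the (best, worst) pair only if their score gap exceeds `margin`
    (point sifting). Pairs with too small a gap are discarded, since they
    provide an unreliable training signal for DPO."""
    ranked = sorted(candidates, key=lambda o: reward_score(prompt, o), reverse=True)
    chosen, rejected = ranked[0], ranked[-1]
    if reward_score(prompt, chosen) - reward_score(prompt, rejected) < margin:
        return None  # sifted out: preference too weak
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

pair = build_preference_pair("a cat on a skateboard", ["out_a", "out_b", "out_c"])
```

The surviving (`chosen`, `rejected`) pairs then form the training set for the DPO stage in step (3).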
Community
🌟Project page: https://codegoat24.github.io/UnifiedReward/
📖Paper: https://arxiv.org/pdf/2503.05236
💥Github: https://github.com/CodeGoat24/UnifiedReward
🤗Models: https://huggingface.co./collections/CodeGoat24/unifiedreward-models-67c3008148c3a380d15ac63a
🤗Datasets: https://huggingface.co./collections/CodeGoat24/unifiedreward-training-data-67c300d4fd5eff00fa7f1ede
The following papers were recommended by the Semantic Scholar API
- IPO: Iterative Preference Optimization for Text-to-Video Generation (2025)
- Improving Video Generation with Human Feedback (2025)
- MJ-VIDEO: Fine-Grained Benchmarking and Rewarding Video Preferences in Video Generation (2025)
- Harness Local Rewards for Global Benefits: Effective Text-to-Video Generation Alignment with Patch-level Reward Models (2025)
- InternLM-XComposer2.5-Reward: A Simple Yet Effective Multi-Modal Reward Model (2025)
- Temporal Preference Optimization for Long-Form Video Understanding (2025)
- HuViDPO: Enhancing Video Generation through Direct Preference Optimization for Human-Centric Alignment (2025)
Models citing this paper: 5
Datasets citing this paper: 8
Spaces citing this paper: 0