VisionReward: Fine-Grained Multi-Dimensional Human Preference Learning for Image and Video Generation
📃 Paper • 🖼 Dataset • 🤗 HF Repo • 🌐 Blog (Chinese)
VisionReward is a fine-grained, multi-dimensional reward model designed to capture human preferences in images and videos. It decomposes subjective judgments into interpretable dimensions and combines them through weighted scoring, delivering precise and comprehensive evaluations. VisionReward is particularly strong at video preference prediction, setting a new benchmark by thoroughly analyzing dynamic video features.
✨ Key Highlights:
- New Reward Model & SOTA Performance: VisionReward, a fine-grained, multi-dimensional, interpretable reward model, achieves 64.0 (Tau) / 72.1 (Diff) on the Video Preference Test Set, surpassing VideoScore by 17.2% and setting a new state of the art!
- Fine-Grained Multi-Dimensional Dataset: A rich, high-quality dataset with detailed annotations drives VisionReward’s precise understanding of human preferences across images and videos.
- Multi-Objective Preference Optimization (MPO): Achieves stable and controllable RLHF, enabling the generation model to consider and balance multiple dimensions of human preference simultaneously.
| 📋 Model | 🧠 Base Model | 🤗 HF Link | 🤖 MS Link |
|---|---|---|---|
| VisionReward-Image | cogvlm2-llama3-chat-19B | 🤗 Huggingface | 🤖 ModelScope (coming soon) |
| VisionReward-Video | cogvlm2-video-llama3-chat | 🤗 Huggingface | 🤖 ModelScope (coming soon) |
| 📋 Dataset | 📝 Annotation | 🤗 HF Link | 🤖 MS Link |
|---|---|---|---|
| VisionRewardDB-Image | 48K * 60 (dimensions) | 🤗 Huggingface | 🤖 ModelScope (coming soon) |
| VisionRewardDB-Video | 33K * 64 (dimensions) | 🤗 Huggingface | 🤖 ModelScope (coming soon) |
Run the following command to install dependencies:

```bash
pip install -r requirements.txt
```
Perform a checklist query using the commands below. The available image and video questions can be found in `VisionReward_Image/VisionReward_image_qa.txt` and `VisionReward_Video/VisionReward_video_qa.txt`, respectively.
```bash
# For Image QA
python inference-image.py --bf16 --question [[your_question]]
# Input: image_path + prompt + question
# Output: yes/no

# For Video QA
python inference-video.py --question [[your_question]]
# Input: video_path + prompt + question
# Output: yes/no
```
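For batch evaluation it can be convenient to iterate over the entire checklist instead of passing one question at a time. The sketch below is not part of the repository: it only reads the image question file and prints one inference command per question, assuming the file stores one question per line; how `inference-image.py` receives the image path and prompt is left to the script itself.

```python
# Minimal sketch (not repo code): build one checklist query per question.
# Assumes VisionReward_image_qa.txt stores one question per line.
import shlex
from pathlib import Path

questions = [
    line.strip()
    for line in Path("VisionReward_Image/VisionReward_image_qa.txt").read_text().splitlines()
    if line.strip()
]

for question in questions:
    # Only the --question flag changes between runs; supply the image path
    # and prompt to inference-image.py as in the commands above.
    print(f"python inference-image.py --bf16 --question {shlex.quote(question)}")
```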
Calculate scores for images/videos with the following commands. The corresponding weights are in `VisionReward_Image/weight.json` and `VisionReward_Video/weight.json`.
```bash
# Scoring an Image
python inference-image.py --bf16 --score
# Input: image_path + prompt
# Output: score

# Scoring a Video
python inference-video.py --score
# Input: video_path + prompt
# Output: score
```
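Conceptually, the overall score is a weighted combination of the per-question checklist judgments, with the weights taken from `weight.json`. The snippet below is only an illustration of that idea and assumes a flat question-to-weight mapping, which may differ from the actual file layout; the real aggregation is implemented inside `inference-image.py` / `inference-video.py`.

```python
# Illustrative only: combine binary checklist answers into a weighted score.
# Assumes weight.json maps each checklist question to a scalar weight.
import json

def weighted_score(answers: dict, weight_path: str) -> float:
    with open(weight_path) as f:
        weights = json.load(f)
    # "yes" counts as 1.0, "no" as 0.0; unanswered questions contribute nothing.
    return sum(w * float(bool(answers.get(q, False))) for q, w in weights.items())

# Hypothetical answers, keyed by the same question strings used in weight.json.
demo_answers = {"Is the image clear?": True, "Is the image free of artifacts?": True}
print(weighted_score(demo_answers, "VisionReward_Image/weight.json"))
```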
Directly compare the quality of two videos, leveraging the weights in `VisionReward_Video/weight.json`.
```bash
python inference-video.py --compare
# Input: video_path1 + video_path2 + prompt
# Output: better_video
```
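One way to read the comparison mode is that each video is scored against the shared prompt and the higher-scoring one is preferred; `--compare` wraps this in a single call. The sketch below is an assumption about that behavior, not the repository's implementation.

```python
# Conceptual sketch (assumption, not repo code): prefer the video whose
# VisionReward score for the shared prompt is higher.
def better_video(path_a: str, score_a: float, path_b: str, score_b: float) -> str:
    return path_a if score_a >= score_b else path_b

# Hypothetical scores produced by the scoring mode above.
print(better_video("video1.mp4", 0.72, "video2.mp4", 0.65))  # -> video1.mp4
```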
If you find VisionReward helpful, please cite us:
```bibtex
@misc{xu2024visionrewardfinegrainedmultidimensionalhuman,
      title={VisionReward: Fine-Grained Multi-Dimensional Human Preference Learning for Image and Video Generation},
      author={Jiazheng Xu and Yu Huang and Jiale Cheng and Yuanming Yang and Jiajun Xu and Yuan Wang and Wenbo Duan and Shen Yang and Qunlin Jin and Shurun Li and Jiayan Teng and Zhuoyi Yang and Wendi Zheng and Xiao Liu and Ming Ding and Xiaohan Zhang and Xiaotao Gu and Shiyu Huang and Minlie Huang and Jie Tang and Yuxiao Dong},
      year={2024},
      eprint={2412.21059},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.21059},
}
```