DanceGRPO is the first unified RL-based framework for visual generation.
We build DanceGRPO on top of FastVideo, a scalable and efficient framework for video and image generation.
DanceGRPO has the following features:
- Support Stable Diffusion
- Support FLUX
- Support HunyuanVideo (todo)
- Download the Stable Diffusion v1.4 checkpoints from here to "./data/stable-diffusion-v1-4".
- Download the FLUX checkpoints from here to "./data/flux".
- Download the HPS-v2.1 checkpoint (HPS_v2.1_compressed.pt) from here to "./hps_ckpt".
- Download the CLIP H-14 checkpoint (open_clip_pytorch_model.bin) from here to "./hps_ckpt".
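
If you prefer scripting these downloads, a minimal sketch using `huggingface_hub` is shown below. The Hugging Face repo IDs are our assumptions about the standard releases of these checkpoints (they are not pinned by this repo), so double-check them against the links above.

```python
# Hypothetical download helper; the Hugging Face repo IDs below are assumptions,
# not pinned by this repository, so verify them against the links above.
from huggingface_hub import snapshot_download, hf_hub_download

# Diffusion model weights (full repos). FLUX.1-dev is gated, so you may need to
# accept its license and run `huggingface-cli login` first.
snapshot_download("CompVis/stable-diffusion-v1-4", local_dir="./data/stable-diffusion-v1-4")
snapshot_download("black-forest-labs/FLUX.1-dev", local_dir="./data/flux")

# Reward-model weights (single files) used for the HPS-v2.1 reward.
hf_hub_download("xswu/HPSv2", "HPS_v2.1_compressed.pt", local_dir="./hps_ckpt")
hf_hub_download("laion/CLIP-ViT-H-14-laion2B-s32B-b79K",
                "open_clip_pytorch_model.bin", local_dir="./hps_ckpt")
```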
```bash
./env_setup.sh fastvideo
```

```bash
# for Stable Diffusion, with 8 H800s
bash scripts/finetune/finetune_sd_grpo.sh
```

```bash
# for FLUX, preprocessing with 8 H800s
bash scripts/preprocess/preprocess_flux_rl_embeddings.sh
# for FLUX, training with 16 H800s
bash scripts/finetune/finetune_flux_grpo.sh
```

We show the (moving average) reward curves of Stable Diffusion (left) and FLUX (right). The FLUX training (200 iterations) completes within 12 hours on 16 H800s.
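
The curves are smoothed with a simple moving average; a minimal sketch of that smoothing (assuming one scalar reward is logged per training iteration, with an illustrative file name and window size) is shown below.

```python
# Minimal sketch: plot a moving-average reward curve from per-iteration rewards.
# Assumes rewards were logged as one scalar per training iteration; the file
# name and window size are illustrative, not taken from this repo.
import numpy as np
import matplotlib.pyplot as plt

rewards = np.loadtxt("rewards.txt")   # one reward value per line
window = 10                           # moving-average window (iterations)
kernel = np.ones(window) / window
smoothed = np.convolve(rewards, kernel, mode="valid")

plt.plot(np.arange(len(smoothed)) + window - 1, smoothed)
plt.xlabel("iteration")
plt.ylabel("moving-average reward")
plt.savefig("reward_curve.png", dpi=150)
```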
We provide more visualization examples (base, 80-iteration RLHF, 160-iteration RLHF) in "./assets/flux_visualization". The visualization script can be found in "./scripts/visulization/vis_flux.py".
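
For a quick comparison outside the provided script, the sketch below samples the same prompt and seed from the base and an RLHF-tuned FLUX checkpoint with diffusers' FluxPipeline. It assumes both checkpoints are stored in diffusers format; the second path and the prompt are illustrative, not the exact ones used by "./scripts/visulization/vis_flux.py".

```python
# Minimal sketch: compare base vs. RLHF-tuned FLUX samples on a fixed seed.
# Assumes both checkpoints are in diffusers format; paths and prompt are
# illustrative, not the exact ones used by scripts/visulization/vis_flux.py.
import torch
from diffusers import FluxPipeline

prompt = "a photo of a corgi wearing sunglasses on the beach"

for name, path in [("base", "./data/flux"), ("rlhf_160", "./data/flux_rlhf_160")]:
    pipe = FluxPipeline.from_pretrained(path, torch_dtype=torch.bfloat16).to("cuda")
    image = pipe(
        prompt,
        height=1024,
        width=1024,
        guidance_scale=3.5,
        num_inference_steps=50,
        generator=torch.Generator("cuda").manual_seed(0),
    ).images[0]
    image.save(f"{name}.png")
    del pipe
    torch.cuda.empty_cache()
```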
More discussion on FLUX can be found in "./fastvideo/README.md".
We learned from and reused code from the following projects:
If you use DanceGRPO for your research, please cite our paper:
```bibtex
@article{xue2025dancegrpo,
  title={DanceGRPO: Unleashing GRPO on Visual Generation},
  author={Xue, Zeyue and Wu, Jie and Gao, Yu and Kong, Fangyuan and Zhu, Lingting and Chen, Mengzhao and Liu, Zhiheng and Liu, Wei and Guo, Qiushan and Huang, Weilin and others},
  journal={arXiv preprint arXiv:2505.07818},
  year={2025}
}
```
