DanceGRPO

DanceGRPO is the first unified RL-based framework for visual generation.

This is the official implementation of the paper DanceGRPO: Unleashing GRPO on Visual Generation. We develop DanceGRPO on top of FastVideo, a scalable and efficient framework for video and image generation.

Key Features

DanceGRPO has the following features:

  • Support for Stable Diffusion
  • Support for FLUX
  • Support for HunyuanVideo

Updates

  • [2025.05.12]: 🔥 We released the paper on arXiv!
  • [2025.05.28]: 🔥 We released the training scripts for FLUX and Stable Diffusion!
  • [2025.07.03]: 🔥 We released the training scripts for HunyuanVideo!

Getting Started

Downloading checkpoints

Create the destination folders first (e.g., with "mkdir").

For image generation,

  1. Download the Stable Diffusion v1.4 checkpoints from here to "./data/stable-diffusion-v1-4".
  2. Download the FLUX checkpoints from here to "./data/flux".
  3. Download the HPS-v2.1 checkpoint (HPS_v2.1_compressed.pt) from here to "./hps_ckpt".
  4. Download the CLIP H-14 checkpoint (open_clip_pytorch_model.bin) from here to "./hps_ckpt".

For video generation,

  1. Download the HunyuanVideo checkpoints from here to "./data/HunyuanVideo".
  2. Download the Qwen2-VL-2B-Instruct checkpoints from here to "./Qwen2-VL-2B-Instruct".
  3. Download the VideoAlign checkpoints from here to "./videoalign_ckpt".
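The destination folders named in the download steps above can be created in one go, for example:

```shell
# Create the checkpoint folders referenced in the download steps.
# -p creates parent directories as needed and is a no-op if they exist.
mkdir -p ./data/stable-diffusion-v1-4 ./data/flux ./hps_ckpt
mkdir -p ./data/HunyuanVideo ./Qwen2-VL-2B-Instruct ./videoalign_ckpt
```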

Installation

./env_setup.sh fastvideo

Training

# for Stable Diffusion, with 8 H800 GPUs
bash scripts/finetune/finetune_sd_grpo.sh   
# for FLUX, preprocessing with 8 H800 GPUs
bash scripts/preprocess/preprocess_flux_rl_embeddings.sh
# for FLUX, training with 16 H800 GPUs for better convergence,
# or you can use finetune_flux_grpo_8gpus.sh with 8 H800 GPUs, but with relatively slower convergence
bash scripts/finetune/finetune_flux_grpo.sh   

For the open-source image-generation version, we use the prompts from the HPD dataset for training, as listed in "./prompts.txt".

# for HunyuanVideo, preprocessing with 8 H800 GPUs
bash scripts/preprocess/preprocess_hunyuan_rl_embeddings.sh
# for HunyuanVideo, using the following script for training with 16/32 H800 GPUs
bash scripts/finetune/finetune_hunyuan_grpo.sh   

For the open-source video-generation version, we filtered prompts from the VidProM dataset for training, as listed in "./video_prompts.txt".

Image Generation Rewards

We show the moving-average reward curves (also reported in reward.txt or hps_reward.txt) for Stable Diffusion (left/upper) and FLUX (right/lower). FLUX training (200 iterations) completes within 12 hours on 16 H800 GPUs.

  1. We provide more visualization examples (base, 80 iters rlhf, 160 iters rlhf) in "./assets/flux_visualization".
  2. Here is the visualization script "./scripts/visualization/vis_flux.py" for FLUX. First, run rm -rf ./data/flux/transformer/* to clear the directory, then copy the files from a trained checkpoint (e.g., checkpoint-160-0) into ./data/flux/transformer. After that, you can run the visualization. Results from a 160-iteration run are already provided in this repo.
  3. More discussion on FLUX can be found in "./fastvideo/README.md".
  4. (Thanks to a community contribution from @Jinfa Huang: if you change train_batch_size and train_sp_batch_size from 1 to 2 and gradient_accumulation_steps from 4 to 12, you can train FLUX on 8 H800 GPUs and finish training within a day.)
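The moving-average smoothing used for the reward curves above can be sketched as follows; the window size is an arbitrary choice here, and the one-value-per-line format of reward.txt is an assumption:

```python
def moving_average(values, window=10):
    """Smooth a reward series with a trailing moving average.

    Each output point averages up to `window` most recent values,
    so early points use shorter windows instead of being dropped.
    """
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Hypothetical usage: rewards = [float(x) for x in open("reward.txt")]
```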

Video Generation Rewards

We show the moving-average reward curves (also reported in vq_reward.txt) for HunyuanVideo trained with 16 or 32 H800 GPUs.

With 16 H800 GPUs,

With 32 H800 GPUs,

  1. For the open-source version, our goal is to reduce training cost, so we reduce the number of frames, sampling steps, and GPUs compared with the settings in the paper. The reward curves therefore differ, but the VQ improvements are similar (50%–60%).
  2. For visualization, run rm -rf ./data/HunyuanVideo/transformer/* to clear the directory, then copy the files from a trained checkpoint (e.g., checkpoint-100-0) into ./data/HunyuanVideo/transformer. After that, you can run the visualization script "./scripts/visualization/vis_hunyuanvideo.sh".
  3. Although training with 16 H800 GPUs yields rewards similar to 32 H800 GPUs, we find that 32 H800 GPUs leads to better visualization results.
  4. We plot de-normalized rewards, using the formula VQ = VQ * 2.2476 + 3.6757 following here.
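The de-normalization above is a simple affine map; a minimal sketch, with the constants taken from the formula cited:

```python
def denormalize_vq(vq_norm):
    """De-normalize a VQ reward for plotting (VQ = VQ * 2.2476 + 3.6757)."""
    return vq_norm * 2.2476 + 3.6757
```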

Multi-reward Training

The Multi-reward training code and reward curves can be found here.

Important Discussion and Results with More Reward Models

Thanks to @Yi-Xuan XU for the issue; the results of more reward models and better visualization (how to avoid grid patterns) on FLUX can be found here. We also support PickScore for FLUX via --use_pickscore.

Acknowledgement

We learned and reused code from the following projects:

We thank the authors for their contributions to the community!

Citation

If you use DanceGRPO for your research, please cite our paper:

@article{xue2025dancegrpo,
  title={DanceGRPO: Unleashing GRPO on Visual Generation},
  author={Xue, Zeyue and Wu, Jie and Gao, Yu and Kong, Fangyuan and Zhu, Lingting and Chen, Mengzhao and Liu, Zhiheng and Liu, Wei and Guo, Qiushan and Huang, Weilin and others},
  journal={arXiv preprint arXiv:2505.07818},
  year={2025}
}

About

A Fork for implementing flux-kontext RL training in DanceGRPO
