I'd like to ask about the training and inference speed of Wan-I2V-14B-480P. My setup is 4×A6000 GPUs (48 GB each). After installing DiffSynth-Studio, I ran the example code (a rough sketch of my inference call is included below) and observed the following performance:
- Wan-1.3B-T2V: ~5 minutes per generated video
- Wan-14B-I2V-480P:
  - ~50 minutes for 81 frames (bfloat16, 50 denoising steps)
  - ~37 minutes for 21 frames
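For reference, my inference call looks roughly like the sketch below. It follows the pattern of the repo's Wan example scripts, but the checkpoint paths, class names (`ModelManager`, `WanVideoPipeline`, `save_video`), and argument names are written from memory and may differ slightly between DiffSynth-Studio versions, so please read it as an approximation of what I ran rather than the exact script:

```python
import torch
from PIL import Image
from diffsynth import ModelManager, WanVideoPipeline, save_video

# Checkpoint paths are illustrative; point them at wherever the
# Wan2.1-I2V-14B-480P weights were downloaded.
model_manager = ModelManager(torch_dtype=torch.bfloat16, device="cuda")
model_manager.load_models([
    "models/Wan-AI/Wan2.1-I2V-14B-480P/diffusion_pytorch_model.safetensors",
    "models/Wan-AI/Wan2.1-I2V-14B-480P/models_t5_umt5-xxl-enc-bf16.pth",
    "models/Wan-AI/Wan2.1-I2V-14B-480P/Wan2.1_VAE.pth",
    "models/Wan-AI/Wan2.1-I2V-14B-480P/models_clip_open-clip-xlm-roberta-large-vit-huge-14.pth",
])
pipe = WanVideoPipeline.from_model_manager(model_manager, device="cuda")

# Single reference image, 81 frames, 50 denoising steps, bfloat16:
# this is the configuration that takes ~50 minutes for me.
image = Image.open("input.jpg")
video = pipe(
    prompt="a short description of the scene",
    input_image=image,
    num_frames=81,
    num_inference_steps=50,
    seed=0,
)
save_video(video, "output.mp4", fps=15)
```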
My questions:
1. Baseline validation: are these inference times normal for this hardware?
2. Inference acceleration: is multi-GPU parallel inference supported? I couldn't find related documentation. (A naive workaround I'm considering is sketched after this list.)
3. Training acceleration: the current speed of ~50 min per iteration makes training impractical. Are there optimization strategies?
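The closest workaround I can think of for question 2 is naive data parallelism: spawn one process per GPU and let each process build its own pipeline and generate a different video. This does not make a single 81-frame generation any faster; it only raises throughput when several videos are needed. The sketch below makes the same assumptions about the DiffSynth-Studio API as the snippet above, and `CKPT_PATHS`, the input image, and the prompts are placeholders:

```python
import torch
import torch.multiprocessing as mp
from PIL import Image
from diffsynth import ModelManager, WanVideoPipeline, save_video

# Same checkpoint files as in the single-GPU sketch above.
CKPT_PATHS = [
    "models/Wan-AI/Wan2.1-I2V-14B-480P/diffusion_pytorch_model.safetensors",
    "models/Wan-AI/Wan2.1-I2V-14B-480P/models_t5_umt5-xxl-enc-bf16.pth",
    "models/Wan-AI/Wan2.1-I2V-14B-480P/Wan2.1_VAE.pth",
    "models/Wan-AI/Wan2.1-I2V-14B-480P/models_clip_open-clip-xlm-roberta-large-vit-huge-14.pth",
]

def generate_one(gpu_id: int, prompt: str) -> None:
    """Build an independent pipeline on one GPU and generate one video."""
    device = f"cuda:{gpu_id}"
    model_manager = ModelManager(torch_dtype=torch.bfloat16, device=device)
    model_manager.load_models(CKPT_PATHS)
    pipe = WanVideoPipeline.from_model_manager(model_manager, device=device)
    video = pipe(
        prompt=prompt,
        input_image=Image.open("input.jpg"),
        num_frames=81,
        num_inference_steps=50,
        seed=gpu_id,
    )
    save_video(video, f"output_gpu{gpu_id}.mp4", fps=15)

if __name__ == "__main__":
    mp.set_start_method("spawn")  # required when using CUDA in child processes
    prompts = ["prompt 0", "prompt 1", "prompt 2", "prompt 3"]  # one per GPU
    procs = [mp.Process(target=generate_one, args=(i, p)) for i, p in enumerate(prompts)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```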
Thank you for your help!