
[fix] fix for prior preservation and mixed precision sampling #11873


Open

Brvcket wants to merge 1 commit into main

Conversation


@Brvcket commented on Jul 6, 2025

This PR fixes four issues (a sketch of the adjusted sampling call follows the list):

  1. When sampling with prior preservation, the prompt was not passed to the correct argument position, causing a generation error during sampling.
  2. When sampling with prior preservation, the pipeline the transformer was passed to was created without the `torch_dtype` argument, causing issues with mixed precision.
  3. When sampling with prior preservation, the prompt embeddings were passed in fp32 without being cast to the training dtype, leading to mixed-precision mismatches.
  4. When `args.train_text_encoder` is not set, `prompt_embeds` and `pooled_prompt_embeds` were `None`, leading to the same error reported in #10722 (`RuntimeError: The size of tensor a (4608) must match the size of tensor b (5120) at non-singleton dimension 2` during DreamBooth training with prior preservation).
