Fix a bug in from_pretrained when loading optional components
#4745
Conversation
fix from_pretrained when loading optional components with a None value
The documentation is not available anymore as the PR was closed or merged.
Great find & fix!
You're totally right. The problem is that for newer pipelines, we started allowing optional components to be None, e.g. as done here:
diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py
Line 102 in 0a0fe69
safety_checker: StableDiffusionSafetyChecker,
feature_extractor: Optional[CLIPImageProcessor] = None,
Your fix is very nice - could you please add a test for this? Maybe both for the StableDiffusionUpscale and StableDiffusionXL pipelines (both have optional components).
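The requested regression test could look roughly like the following self-contained sketch. `ToyPipeline`, its save format, and the test name are hypothetical stand-ins for the diffusers API, kept minimal to show only the save/reload round-trip of an optional component that is explicitly None:

```python
import json
import os
import tempfile


class ToyPipeline:
    """Hypothetical stand-in for a diffusers pipeline; not the real API."""

    def __init__(self, unet, feature_extractor=None, watermarker=None):
        # Register every component, including ones that are explicitly None.
        self.components = {
            "unet": unet,
            "feature_extractor": feature_extractor,
            "watermarker": watermarker,
        }

    def save_pretrained(self, path):
        # Toy persistence format: component name -> value (or None).
        with open(os.path.join(path, "model_index.json"), "w") as f:
            json.dump(self.components, f)

    @classmethod
    def from_pretrained(cls, path, **passed):
        with open(os.path.join(path, "model_index.json")) as f:
            index = json.load(f)
        kwargs = {}
        for name, stored in index.items():
            # The crux of the fix: an explicitly passed None must win and
            # still be treated as a component, not silently dropped.
            kwargs[name] = passed[name] if name in passed else stored
        return cls(**kwargs)


def test_none_optional_component_roundtrip():
    with tempfile.TemporaryDirectory() as tmp:
        pipe = ToyPipeline(unet="unet-weights", feature_extractor=None)
        pipe.save_pretrained(tmp)
        loaded = ToyPipeline.from_pretrained(tmp)
        assert loaded.components["feature_extractor"] is None
        assert loaded.components["unet"] == "unet-weights"


test_none_optional_component_roundtrip()
```

A real test would of course build the actual pipelines with dummy components rather than this toy class; the point is the save/reload assertion on a None optional component.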
Co-authored-by: Patrick von Platen <[email protected]>
@@ -689,3 +690,26 @@ def test_stable_diffusion_xl_multi_prompts(self):

        # ensure the results are not equal
        assert np.abs(image_slice_1.flatten() - image_slice_3.flatten()).max() > 1e-4

    @require_torch_gpu
Why do we need a GPU here? This should also work on CPU, no?
@@ -422,3 +423,42 @@ def test_stable_diffusion_pipeline_with_sequential_cpu_offloading(self):
        mem_bytes = torch.cuda.max_memory_allocated()
        # make sure that less than 2.9 GB is allocated
        assert mem_bytes < 2.9 * 10**9

    def test_stable_diffusion_upscale_pipeline_from_save_pretrained(self):
Can we use a dummy model here so as not to put too much burden on the tests? (cc @DN6)
Cool, good to merge!
@sayakpaul can you maybe also give this a review?
Important one. Thank you.
…face#4745) * fix --------- Co-authored-by: yiyixuxu <yixu310@gmail,com> Co-authored-by: Patrick von Platen <[email protected]>
The script below currently fails but will be fixed with this PR (see more details about the error in #4686).

Currently, when we call from_pretrained(), optional components are treated the same as optional kwargs: they are passed to __init__() directly, but they should be handled as modules: https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_utils.py#L1016
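The module-vs-kwarg distinction the PR draws can be sketched as follows. This is a minimal illustration, not the real pipeline_utils code: `split_passed_objects` and both name sets are invented for the example. Names listed in `expected_modules` are pipeline components and must be kept even when their value is None; everything else is an ordinary __init__ kwarg.

```python
def split_passed_objects(expected_modules, optional_modules, passed):
    """Split user-passed kwargs into pipeline components and plain kwargs.

    Illustrative only; the real diffusers logic lives in pipeline_utils.
    """
    init_modules = {}
    init_kwargs = {}
    for name, value in passed.items():
        if name in expected_modules:
            # The buggy variant skipped this branch when `value` was None,
            # so explicitly passed None components were lost.
            init_modules[name] = value
        else:
            init_kwargs[name] = value
    # Optional components that were never passed default to None.
    for name in optional_modules:
        init_modules.setdefault(name, None)
    return init_modules, init_kwargs


modules, kwargs = split_passed_objects(
    expected_modules={"unet", "feature_extractor"},
    optional_modules={"feature_extractor"},
    passed={"feature_extractor": None, "requires_safety_checker": False},
)
# feature_extractor stays registered as a (None) module rather than
# being forwarded to __init__ as a plain kwarg.
```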