```diff
         Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
         compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
         """
-        depr_message = f"Calling `enable_vae_slicing()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.enable_slicing()`."
-        deprecate(
-            "enable_vae_slicing",
-            "0.40.0",
-            depr_message,
-        )
         self.vae.enable_slicing()

     def disable_vae_slicing(self):
         r"""
         Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
         computing decoding in one step.
         """
-        depr_message = f"Calling `disable_vae_slicing()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.disable_slicing()`."
```
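The wrappers being deleted here were thin forwarders: warn that the pipeline-level method is deprecated, then call the corresponding method on the VAE. A minimal sketch of that pattern with toy stand-in classes (not the actual diffusers implementation, which uses its internal `deprecate` helper):

```python
import warnings


class ToyVAE:
    """Stand-in for AutoencoderKL: tracks whether sliced decoding is enabled."""

    def __init__(self):
        self.use_slicing = False

    def enable_slicing(self):
        self.use_slicing = True

    def disable_slicing(self):
        self.use_slicing = False


class ToyPipeline:
    """Stand-in pipeline showing the deprecated wrapper pattern being removed."""

    def __init__(self):
        self.vae = ToyVAE()

    def enable_vae_slicing(self):
        # Deprecated wrapper: warn, then forward to the VAE.
        warnings.warn(
            f"Calling `enable_vae_slicing()` on a `{self.__class__.__name__}` is "
            "deprecated; use `pipe.vae.enable_slicing()` instead.",
            FutureWarning,
        )
        self.vae.enable_slicing()


# The recommended call goes straight to the VAE:
pipe = ToyPipeline()
pipe.vae.enable_slicing()
```

After the removal, only the direct `pipe.vae.enable_slicing()` path remains.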
```diff
         compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
         processing larger images.
         """
-        depr_message = f"Calling `enable_vae_tiling()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.enable_tiling()`."
-        deprecate(
-            "enable_vae_tiling",
-            "0.40.0",
-            depr_message,
-        )
         self.vae.enable_tiling()

     def disable_vae_tiling(self):
         r"""
         Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
         computing decoding in one step.
         """
-        depr_message = f"Calling `disable_vae_tiling()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.disable_tiling()`."
-        deprecate(
-            "disable_vae_tiling",
-            "0.40.0",
-            depr_message,
-        )
         self.vae.disable_tiling()

     # Copied from diffusers.pipelines.flux.pipeline_flux.FluxPipeline.prepare_latents
```
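Both features trade compute for peak memory: sliced decoding processes the batch one slice at a time and concatenates the results, so memory scales with the slice rather than the whole batch. A schematic sketch of that idea, with plain Python lists standing in for latent tensors (not the actual AutoencoderKL code):

```python
def decode_full(latents):
    # Stand-in for a memory-hungry full-batch VAE decode.
    return [x * 2 for x in latents]


def decode_sliced(latents, slice_size=1):
    # Decode `slice_size` samples at a time, then concatenate.
    # Peak memory scales with the slice, not the batch; the result
    # is identical to decoding everything at once.
    out = []
    for i in range(0, len(latents), slice_size):
        out.extend(decode_full(latents[i : i + slice_size]))
    return out


batch = [1, 2, 3, 4]
assert decode_sliced(batch, slice_size=2) == decode_full(batch)
```

Tiling applies the same trick along the spatial dimensions of a single large image instead of along the batch dimension.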
```diff
@@ -688,11 +663,11 @@ def __call__(
                 their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed
                 will be used.
             guidance_scale (`float`, *optional*, defaults to 3.5):
-                Guidance scale as defined in [Classifier-Free Diffusion
-                Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2.
-                of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting
-                `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to
-                the text `prompt`, usually at the expense of lower image quality.
+                Embedded guidance scale is enabled by setting `guidance_scale > 1`. Higher `guidance_scale` encourages
+                a model to generate images more aligned with `prompt`, at the expense of lower image quality.
+
+                Guidance-distilled models approximate true classifier-free guidance for `guidance_scale > 1`. Refer to
+                the [paper](https://huggingface.co/papers/2210.03142) to learn more.
             num_images_per_prompt (`int`, *optional*, defaults to 1):
                 The number of images to generate per prompt.
             generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
```
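For context on the docstring change: classic classifier-free guidance runs a conditional and an unconditional prediction and extrapolates between them, while a guidance-distilled model takes `guidance_scale` as a model input (embedded guidance) and needs only one forward pass. The classic combination that the distilled model approximates, sketched with plain numbers standing in for noise predictions:

```python
def cfg_combine(noise_uncond, noise_cond, guidance_scale):
    # w = guidance_scale in the CFG formulation:
    # w > 1 pushes the prediction past the conditional branch,
    # strengthening alignment with the prompt.
    return noise_uncond + guidance_scale * (noise_cond - noise_uncond)


# guidance_scale == 1 reduces to the conditional prediction alone.
assert cfg_combine(0.25, 0.75, 1.0) == 0.75

# guidance_scale > 1 extrapolates beyond it.
assert cfg_combine(0.25, 0.75, 3.5) == 2.0
```

A distilled model bakes this two-pass computation into a single pass conditioned on the scale, which is why the docstring no longer describes `guidance_scale` in terms of the two-branch formula.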
```diff
@@ -701,7 +676,7 @@ def __call__(
             latents (`torch.Tensor`, *optional*):
                 Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
                 generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
-                tensor will be generated by sampling using the supplied random `generator`.
+                tensor will be generated by sampling using the supplied random `generator`.
             prompt_embeds (`torch.Tensor`, *optional*):
                 Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
                 provided, text embeddings will be generated from `prompt` input argument.
```
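The `generator` argument exists so the initial latents are reproducible: the same seed yields the same Gaussian noise, and therefore the same image for a fixed prompt and settings. A stdlib sketch of that contract, with `random.Random` standing in for `torch.Generator` and `gauss()` for `torch.randn`:

```python
import random


def make_latents(seed, shape=(2, 4)):
    # Same seed -> identical Gaussian draws, mirroring
    # torch.randn(shape, generator=torch.Generator().manual_seed(seed)).
    rng = random.Random(seed)
    n = 1
    for dim in shape:
        n *= dim
    return [rng.gauss(0.0, 1.0) for _ in range(n)]


# Two runs with the same seed produce identical "latents";
# a different seed produces different ones.
assert make_latents(42) == make_latents(42)
assert make_latents(42) != make_latents(7)
```

Passing `latents` directly skips this sampling step entirely, which is what lets callers reuse one noise tensor across several prompts.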
```diff
@@ -904,31 +879,49 @@ def __call__(
         # broadcast to batch dimension in a way that's compatible with ONNX/Core ML
```