[WIP][Docs] Use DiffusionPipeline Instead of Child Classes when Loading Pipeline (huggingface#2809)
* Change the docs to load checkpoints with the parent DiffusionPipeline class via from_pretrained() instead of a child class (e.g. StableDiffusionPipeline) where possible.
* Run make style to fix style issues.
* Change more docs to use DiffusionPipeline rather than a subclass.
---------
Co-authored-by: Patrick von Platen <[email protected]>
````diff
@@ -72,13 +75,13 @@ For even additional memory savings, you can use a sliced version of attention th
 each head which can save a significant amount of memory.
 </Tip>
 
-To perform the attention computation sequentially over each head, you only need to invoke [`~StableDiffusionPipeline.enable_attention_slicing`] in your pipeline before inference, like here:
+To perform the attention computation sequentially over each head, you only need to invoke [`~DiffusionPipeline.enable_attention_slicing`] in your pipeline before inference, like here:
 
 ```Python
 import torch
-from diffusers import StableDiffusionPipeline
+from diffusers import DiffusionPipeline
 
-pipe = StableDiffusionPipeline.from_pretrained(
+pipe = DiffusionPipeline.from_pretrained(
     "runwayml/stable-diffusion-v1-5",
 
     torch_dtype=torch.float16,
````
````diff
@@ -402,10 +405,10 @@ To leverage it just make sure you have:
 We aim at generating a beautiful photograph of an *old warrior chief* and will later try to find the best prompt to generate such a photograph. For now, let's keep the prompt simple:
@@ -88,7 +88,7 @@ The default run we did above used full float32 precision and ran the default num
````