
Commit 663c654

[WIP][Docs] Use DiffusionPipeline Instead of Child Classes when Loading Pipeline (huggingface#2809)

* Change the docs to use the parent DiffusionPipeline class when loading a checkpoint using from_pretrained() instead of a child class (e.g. StableDiffusionPipeline) where possible.
* Run make style to fix style issues.
* Change more docs to use DiffusionPipeline rather than a subclass.

Co-authored-by: Patrick von Platen <[email protected]>

1 parent 920a15c commit 663c654
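The substitution is behavior-preserving because `DiffusionPipeline.from_pretrained` resolves the concrete pipeline class from the checkpoint's `model_index.json`. A minimal sketch of that behavior (not part of the commit):

```python
from diffusers import DiffusionPipeline

# DiffusionPipeline reads the checkpoint's model_index.json and instantiates
# the matching pipeline subclass, so the generic class loads the same object.
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
print(type(pipe).__name__)  # StableDiffusionPipeline, resolved automatically
```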

File tree

7 files changed: +25 −24 lines changed


docs/source/en/optimization/fp16.mdx

Lines changed: 9 additions & 6 deletions
````diff
@@ -45,7 +45,10 @@ torch.backends.cuda.matmul.allow_tf32 = True
 To save more GPU memory and get more speed, you can load and run the model weights directly in half precision. This involves loading the float16 version of the weights, which was saved to a branch named `fp16`, and telling PyTorch to use the `float16` type when loading them:
 
 ```Python
-pipe = StableDiffusionPipeline.from_pretrained(
+import torch
+from diffusers import DiffusionPipeline
+
+pipe = DiffusionPipeline.from_pretrained(
     "runwayml/stable-diffusion-v1-5",
 
     torch_dtype=torch.float16,
````
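The hunk cuts off mid-call. A complete version of the snippet, with the closing lines and a sample prompt assumed, would look roughly like:

```python
import torch
from diffusers import DiffusionPipeline

# Load the fp16 weights and run the model in half precision;
# assumes a CUDA device is available.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Sample prompt (an assumption; the hunk ends before the generation step).
image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```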
````diff
@@ -72,13 +75,13 @@ For even additional memory savings, you can use a sliced version of attention th
 each head which can save a significant amount of memory.
 
 </Tip>
 
-To perform the attention computation sequentially over each head, you only need to invoke [`~StableDiffusionPipeline.enable_attention_slicing`] in your pipeline before inference, like here:
+To perform the attention computation sequentially over each head, you only need to invoke [`~DiffusionPipeline.enable_attention_slicing`] in your pipeline before inference, like here:
 
 ```Python
 import torch
-from diffusers import StableDiffusionPipeline
+from diffusers import DiffusionPipeline
 
-pipe = StableDiffusionPipeline.from_pretrained(
+pipe = DiffusionPipeline.from_pretrained(
     "runwayml/stable-diffusion-v1-5",
 
     torch_dtype=torch.float16,
````
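This hunk also ends before the call the prose describes. Continuing from the `pipe` loaded above, the missing tail is presumably just:

```python
# Compute attention one head at a time instead of all heads at once,
# trading a small amount of speed for a significant memory saving.
pipe.enable_attention_slicing()

# Sample prompt is an assumption; the hunk stops before generation.
image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```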
````diff
@@ -402,10 +405,10 @@ To leverage it just make sure you have:
 - Cuda available
 - [Installed the xformers library](xformers).
 ```python
-from diffusers import StableDiffusionPipeline
+from diffusers import DiffusionPipeline
 import torch
 
-pipe = StableDiffusionPipeline.from_pretrained(
+pipe = DiffusionPipeline.from_pretrained(
     "runwayml/stable-diffusion-v1-5",
     torch_dtype=torch.float16,
 ).to("cuda")
````

docs/source/en/optimization/mps.mdx

Lines changed: 2 additions & 2 deletions
````diff
@@ -35,9 +35,9 @@ The snippet below demonstrates how to use the `mps` backend using the familiar `
 We strongly recommend you use PyTorch 2 or better, as it solves a number of problems like the one described in the previous tip.
 
 ```python
-from diffusers import StableDiffusionPipeline
+from diffusers import DiffusionPipeline
 
-pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
+pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
 pipe = pipe.to("mps")
 
 # Recommended if your computer has < 64 GB of RAM
````
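The hunk stops at the comment. A sketch of the continuation the mps document likely contains, assuming the attention-slicing recommendation and the PyTorch 1.13 warmup workaround described in its tips:

```python
# Slice attention to keep peak memory down on machines with < 64 GB of RAM.
pipe.enable_attention_slicing()

prompt = "a photo of an astronaut riding a horse on mars"

# On PyTorch 1.13 the first mps inference pass can produce broken results; a
# throwaway one-step "warmup" call works around it (PyTorch 2 does not need this).
_ = pipe(prompt, num_inference_steps=1)

image = pipe(prompt).images[0]
```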

docs/source/en/optimization/torch2.0.mdx

Lines changed: 6 additions & 8 deletions
````diff
@@ -35,9 +35,9 @@ pip install --upgrade torch torchvision diffusers
 
 ```Python
 import torch
-from diffusers import StableDiffusionPipeline
+from diffusers import DiffusionPipeline
 
-pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
+pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
 pipe = pipe.to("cuda")
 
 prompt = "a photo of an astronaut riding a horse on mars"
````

````diff
@@ -48,10 +48,10 @@ pip install --upgrade torch torchvision diffusers
 
 ```Python
 import torch
-from diffusers import StableDiffusionPipeline
+from diffusers import DiffusionPipeline
 from diffusers.models.attention_processor import AttnProcessor2_0
 
-pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
+pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
 pipe.unet.set_attn_processor(AttnProcessor2_0())
 
 prompt = "a photo of an astronaut riding a horse on mars"
````

````diff
@@ -68,11 +68,9 @@ pip install --upgrade torch torchvision diffusers
 
 ```python
 import torch
-from diffusers import StableDiffusionPipeline
+from diffusers import DiffusionPipeline
 
-pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to(
-    "cuda"
-)
+pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
 pipe.unet = torch.compile(pipe.unet)
 
 batch_size = 10
````
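The last hunk ends right after `batch_size = 10`. A self-contained sketch of the full `torch.compile` example; the prompt and the use of `num_images_per_prompt` to consume `batch_size` are assumptions about the original snippet:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Compile the UNet, the component that dominates inference time. The first
# call pays the compilation cost; later calls reuse the compiled graph.
pipe.unet = torch.compile(pipe.unet)

batch_size = 10
prompt = "a photo of an astronaut riding a horse on mars"

# Generating a batch amortizes the one-time compilation overhead.
images = pipe(prompt, num_images_per_prompt=batch_size).images
```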

docs/source/en/quicktour.mdx

Lines changed: 1 addition & 1 deletion
````diff
@@ -141,7 +141,7 @@ Different schedulers come with different denoising speeds and quality trade-offs
 ```py
 >>> from diffusers import EulerDiscreteScheduler
 
->>> pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
+>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
 >>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)
 ```
 
````
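A runnable version of the same scheduler swap; the `compatibles` check at the end is an extra illustration, not part of the commit:

```python
from diffusers import DiffusionPipeline, EulerDiscreteScheduler

pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Schedulers share a common config format, so one scheduler can be rebuilt
# from another's config in a single line.
pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)

# The compatibles property lists the scheduler classes that can be swapped in.
print(pipeline.scheduler.compatibles)
```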

docs/source/en/stable_diffusion.mdx

Lines changed: 3 additions & 3 deletions
````diff
@@ -47,9 +47,9 @@ Let's load the pipeline.
 ## Speed Optimization
 
 ``` python
-from diffusers import StableDiffusionPipeline
+from diffusers import DiffusionPipeline
 
-pipe = StableDiffusionPipeline.from_pretrained(model_id)
+pipe = DiffusionPipeline.from_pretrained(model_id)
 ```
 
 We aim at generating a beautiful photograph of an *old warrior chief* and will later try to find the best prompt to generate such a photograph. For now, let's keep the prompt simple:
````
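`model_id` is defined earlier in that document; based on the checkpoint used throughout these docs, it is presumably:

```python
# Assumed definition; the snippet above references model_id without showing it.
model_id = "runwayml/stable-diffusion-v1-5"
```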
````diff
@@ -88,7 +88,7 @@ The default run we did above used full float32 precision and ran the default num
 ``` python
 import torch
 
-pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
+pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
 pipe = pipe.to("cuda")
 ```
````
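That document benchmarks configurations against each other. A sketch of a seeded run that keeps such comparisons reproducible, continuing from the `pipe` above; the prompt and seed are assumptions:

```python
import torch

# A fixed seed makes repeated runs generate the same image, so speed and
# quality comparisons between configurations stay apples-to-apples.
generator = torch.Generator("cuda").manual_seed(0)
image = pipe("portrait photo of an old warrior chief", generator=generator).images[0]
```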

docs/source/en/training/dreambooth.mdx

Lines changed: 2 additions & 2 deletions
````diff
@@ -457,11 +457,11 @@ If you have **`"accelerate>=0.16.0"`** installed, you can use the following code
 inference from an intermediate checkpoint:
 
 ```python
-from diffusers import StableDiffusionPipeline
+from diffusers import DiffusionPipeline
 import torch
 
 model_id = "path_to_saved_model"
-pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
+pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
 
 prompt = "A photo of sks dog in a bucket"
 image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
````
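The snippet presumably finishes by writing the sample to disk:

```python
# Save the generated sample; the filename is an assumption matching the prompt.
image.save("dog-bucket.png")
```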

docs/source/en/using-diffusers/using_safetensors.mdx

Lines changed: 2 additions & 2 deletions
````diff
@@ -75,9 +75,9 @@ And we're equipped with dealing with it.
 Then in order to use the model, even before the branch gets accepted by the original author you can do:
 
 ```python
-from diffusers import StableDiffusionPipeline
+from diffusers import DiffusionPipeline
 
-pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", revision="refs/pr/22")
+pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", revision="refs/pr/22")
 ```
 
 or you can test it directly online with this [space](https://huggingface.co/spaces/diffusers/check_pr).
````
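Not part of the commit, but a natural follow-up sketch: once the PR revision loads, the weights can be kept in safetensors format locally via `save_pretrained`; the target directory name is an assumption.

```python
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", revision="refs/pr/22")

# Re-serialize the pipeline locally; safe_serialization=True writes
# .safetensors files instead of pickled .bin weights.
pipe.save_pretrained("stable-diffusion-2-1-safetensors", safe_serialization=True)
```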
