
Commit 8244146

[Docs] add missing output image (huggingface#7425)
add missing output image
1 parent 3e1097c commit 8244146

1 file changed: +4 −3


docs/source/en/tutorials/using_peft_for_inference.md

Lines changed: 4 additions & 3 deletions
@@ -45,7 +45,7 @@ Make sure to include the token `toy_face` in the prompt and then you can perform
 ```python
 prompt = "toy_face of a hacker with a hoodie"
 
-lora_scale= 0.9
+lora_scale = 0.9
 image = pipe(
     prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0)
 ).images[0]

@@ -114,7 +114,7 @@ To return to only using one adapter, use the [`~diffusers.loaders.UNet2DConditio
 pipe.set_adapters("toy")
 
 prompt = "toy_face of a hacker with a hoodie"
-lora_scale= 0.9
+lora_scale = 0.9
 image = pipe(
     prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0)
 ).images[0]

@@ -127,11 +127,12 @@ Or to disable all adapters entirely, use the [`~diffusers.loaders.UNet2DConditio
 pipe.disable_lora()
 
 prompt = "toy_face of a hacker with a hoodie"
-lora_scale= 0.9
 image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0]
 image
 ```
 
+![no-lora](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_20_1.png)
+
 ## Manage active adapters
 
 You have attached multiple adapters in this tutorial, and if you're feeling a bit lost on what adapters have been attached to the pipeline's components, use the [`~diffusers.loaders.LoraLoaderMixin.get_active_adapters`] method to check the list of active adapters:
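For context, the hunks above come from the diffusers LoRA/PEFT inference tutorial (`using_peft_for_inference.md`), and the surrounding flow can be sketched as below. The `set_adapters`, `disable_lora`, and `get_active_adapters` calls appear in the diff context above; the checkpoint IDs, weight filenames, and adapter names are assumptions added only to make the sketch self-contained, and `load_lora_weights` is included for the same reason rather than being part of this change.

```python
import torch
from diffusers import DiffusionPipeline

# Base pipeline plus two LoRA adapters; the checkpoint IDs, weight filenames,
# and adapter names here are illustrative assumptions, not part of this commit.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy")
pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")

# Activate a single adapter and scale its contribution, as in the hunks above.
pipe.set_adapters("toy")
lora_scale = 0.9
image = pipe(
    "toy_face of a hacker with a hoodie",
    num_inference_steps=30,
    cross_attention_kwargs={"scale": lora_scale},
    generator=torch.manual_seed(0),
).images[0]

# List the adapters that are currently active on the pipeline.
print(pipe.get_active_adapters())  # e.g. ["toy"]

# Turn off all LoRA adapters again.
pipe.disable_lora()
```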

0 commit comments

Comments
 (0)