
Commit 6c314ad

sayakpaul and stevhliu authored
[Docs] add doc entry to explain lora fusion and use of different scales. (huggingface#4893)
* add doc entry to explain lora fusion and use of different scales.
* Apply suggestions from code review

Co-authored-by: Steven Liu <[email protected]>
1 parent 946bb53 · commit 6c314ad

File tree

1 file changed: +36 -0 lines changed


docs/source/en/training/lora.md

Lines changed: 36 additions & 0 deletions
@@ -301,6 +301,42 @@ You can call [`~diffusers.loaders.LoraLoaderMixin.fuse_lora`] on a pipeline to m
To undo `fuse_lora`, call [`~diffusers.loaders.LoraLoaderMixin.unfuse_lora`] on a pipeline.

## Working with different LoRA scales when using LoRA fusion

If you need a `scale` value to control the influence of the LoRA parameters on the outputs when working with `fuse_lora()`, you should specify it as `lora_scale` within `fuse_lora()`. Passing the `scale` parameter to `cross_attention_kwargs` when you call the pipeline won't work.

To use a different `lora_scale` with `fuse_lora()`, first call `unfuse_lora()` on the corresponding pipeline, reload the LoRA weights, and then call `fuse_lora()` again with the desired `lora_scale`.

```python
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda")
lora_model_id = "hf-internal-testing/sdxl-1.0-lora"
lora_filename = "sd_xl_offset_example-lora_1.0.safetensors"
pipe.load_lora_weights(lora_model_id, weight_name=lora_filename)

# This uses a default `lora_scale` of 1.0.
pipe.fuse_lora()

generator = torch.manual_seed(0)
images_fusion = pipe(
    "masterpiece, best quality, mountain", output_type="np", generator=generator, num_inference_steps=2
).images

# To work with a different `lora_scale`, first reverse the effects of `fuse_lora()`.
pipe.unfuse_lora()

# Then reload the LoRA weights and fuse them again with the new scale.
pipe.load_lora_weights(lora_model_id, weight_name=lora_filename)
pipe.fuse_lora(lora_scale=0.5)

generator = torch.manual_seed(0)
images_fusion = pipe(
    "masterpiece, best quality, mountain", output_type="np", generator=generator, num_inference_steps=2
).images
```
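For contrast, when the LoRA weights are only loaded and not fused, the scale is instead passed at inference time through `cross_attention_kwargs`, which is exactly the path that stops working once `fuse_lora()` has been called. Below is a minimal sketch of that unfused case, reusing the same example checkpoint as above; treat it as an illustration rather than part of the fusion workflow itself.

```python
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda")
pipe.load_lora_weights("hf-internal-testing/sdxl-1.0-lora", weight_name="sd_xl_offset_example-lora_1.0.safetensors")

# No `fuse_lora()` here: the LoRA layers stay separate, so the scale can be
# supplied per call through `cross_attention_kwargs`.
generator = torch.manual_seed(0)
images = pipe(
    "masterpiece, best quality, mountain",
    output_type="np",
    generator=generator,
    num_inference_steps=2,
    cross_attention_kwargs={"scale": 0.5},
).images
```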
## Supporting different LoRA checkpoints from Diffusers

🤗 Diffusers supports loading checkpoints from popular LoRA trainers such as [Kohya](https://github.com/kohya-ss/sd-scripts/) and [TheLastBen](https://github.com/TheLastBen/fast-stable-diffusion). In this section, we outline the current API's details and limitations.
