
Commit 493f952

Authored by younesbelkada, patrickvonplaten, sayakpaul, and BenjaminBossan
[PEFT / LoRA] PEFT integration - text encoder (huggingface#5058)
Squashed commit message:

* more fixes
* up
* up
* style
* add in setup
* oops
* more changes
* v1 rzfactor CI
* Apply suggestions from code review (Co-authored-by: Patrick von Platen <[email protected]>)
* few todos
* protect torch import
* style
* fix fuse text encoder
* Update src/diffusers/loaders.py (Co-authored-by: Sayak Paul <[email protected]>)
* replace with `recurse_replace_peft_layers`
* keep old modules for BC
* adjustments on `adjust_lora_scale_text_encoder`
* nit
* move tests
* add conversion utils
* remove unneeded methods
* use class method instead
* oops
* use `base_version`
* fix examples
* fix CI
* fix weird error with python 3.8
* fix
* better fix
* style
* Apply suggestions from code review (Co-authored-by: Sayak Paul <[email protected]>, Patrick von Platen <[email protected]>)
* Apply suggestions from code review (Co-authored-by: Patrick von Platen <[email protected]>)
* add comment
* Apply suggestions from code review (Co-authored-by: Sayak Paul <[email protected]>)
* conv2d support for recurse remove
* added docstrings
* more docstring
* add deprecate
* revert
* try to fix merge conflicts
* v1 tests
* add new decorator
* add saving utilities test
* adapt tests a bit
* add save / from_pretrained tests
* add saving tests
* add scale tests
* fix deps tests
* fix lora CI
* fix tests
* add comment
* fix
* style
* add slow tests
* slow tests pass
* style
* Update src/diffusers/utils/import_utils.py (Co-authored-by: Benjamin Bossan <[email protected]>)
* Apply suggestions from code review (Co-authored-by: Benjamin Bossan <[email protected]>)
* circumvents pattern finding issue
* left a todo
* Apply suggestions from code review (Co-authored-by: Patrick von Platen <[email protected]>)
* update hub path
* add lora workflow
* fix

---------

Co-authored-by: Patrick von Platen <[email protected]>
Co-authored-by: Sayak Paul <[email protected]>
Co-authored-by: Benjamin Bossan <[email protected]>
1 parent b32555a commit 493f952

File tree

46 files changed (+1193, −175 lines)

New file — GitHub Actions workflow (path not shown in this view)

Lines changed: 67 additions & 0 deletions

```yaml
name: Fast tests for PRs - PEFT backend

on:
  pull_request:
    branches:
      - main

concurrency:
  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

env:
  DIFFUSERS_IS_CI: yes
  OMP_NUM_THREADS: 4
  MKL_NUM_THREADS: 4
  PYTEST_TIMEOUT: 60

jobs:
  run_fast_tests:
    strategy:
      fail-fast: false
      matrix:
        config:
          - name: LoRA
            framework: lora
            runner: docker-cpu
            image: diffusers/diffusers-pytorch-cpu
            report: torch_cpu_lora

    name: ${{ matrix.config.name }}

    runs-on: ${{ matrix.config.runner }}

    container:
      image: ${{ matrix.config.image }}
      options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/

    defaults:
      run:
        shell: bash

    steps:
      - name: Checkout diffusers
        uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Install dependencies
        run: |
          apt-get update && apt-get install libsndfile1-dev libgl1 -y
          python -m pip install -e .[quality,test]
          python -m pip install git+https://github.com/huggingface/accelerate.git
          python -m pip install -U git+https://github.com/huggingface/transformers.git
          python -m pip install -U git+https://github.com/huggingface/peft.git

      - name: Environment
        run: |
          python utils/print_env.py

      - name: Run fast PyTorch LoRA CPU tests with PEFT backend
        if: ${{ matrix.config.framework == 'lora' }}
        run: |
          python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
            -s -v \
            --make-reports=tests_${{ matrix.config.report }} \
            tests/lora/test_lora_layers_peft.py
```
src/diffusers/loaders.py

Lines changed: 198 additions & 118 deletions
Large diffs are not rendered by default.

src/diffusers/models/lora.py

Lines changed: 19 additions & 12 deletions

```diff
@@ -25,18 +25,25 @@
 logger = logging.get_logger(__name__)  # pylint: disable=invalid-name


-def adjust_lora_scale_text_encoder(text_encoder, lora_scale: float = 1.0):
-    for _, attn_module in text_encoder_attn_modules(text_encoder):
-        if isinstance(attn_module.q_proj, PatchedLoraProjection):
-            attn_module.q_proj.lora_scale = lora_scale
-            attn_module.k_proj.lora_scale = lora_scale
-            attn_module.v_proj.lora_scale = lora_scale
-            attn_module.out_proj.lora_scale = lora_scale
-
-    for _, mlp_module in text_encoder_mlp_modules(text_encoder):
-        if isinstance(mlp_module.fc1, PatchedLoraProjection):
-            mlp_module.fc1.lora_scale = lora_scale
-            mlp_module.fc2.lora_scale = lora_scale
+def adjust_lora_scale_text_encoder(text_encoder, lora_scale: float = 1.0, use_peft_backend: bool = False):
+    if use_peft_backend:
+        from peft.tuners.lora import LoraLayer
+
+        for module in text_encoder.modules():
+            if isinstance(module, LoraLayer):
+                module.scaling[module.active_adapter] = lora_scale
+    else:
+        for _, attn_module in text_encoder_attn_modules(text_encoder):
+            if isinstance(attn_module.q_proj, PatchedLoraProjection):
+                attn_module.q_proj.lora_scale = lora_scale
+                attn_module.k_proj.lora_scale = lora_scale
+                attn_module.v_proj.lora_scale = lora_scale
+                attn_module.out_proj.lora_scale = lora_scale
+
+        for _, mlp_module in text_encoder_mlp_modules(text_encoder):
+            if isinstance(mlp_module.fc1, PatchedLoraProjection):
+                mlp_module.fc1.lora_scale = lora_scale
+                mlp_module.fc2.lora_scale = lora_scale


 class LoRALinearLayer(nn.Module):
```
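The new PEFT branch stores the scale per adapter in a `scaling` dict keyed by the active adapter name, while the legacy path writes a plain `lora_scale` attribute on each patched projection. A minimal runnable sketch of the two paths, using hypothetical toy stand-ins (`ToyLoraLayer`, `ToyPatchedProjection`, `adjust_scale`) in place of the real `peft.tuners.lora.LoraLayer` and diffusers' `PatchedLoraProjection`:

```python
class ToyLoraLayer:
    """Stand-in for peft's LoraLayer: scale lives in a dict keyed by adapter name."""
    def __init__(self):
        self.active_adapter = "default"
        self.scaling = {"default": 1.0}


class ToyPatchedProjection:
    """Stand-in for diffusers' PatchedLoraProjection: scale is a plain attribute."""
    def __init__(self):
        self.lora_scale = 1.0


def adjust_scale(modules, lora_scale, use_peft_backend=False):
    # Simplified version of the branching in adjust_lora_scale_text_encoder
    for module in modules:
        if use_peft_backend:
            # PEFT backend: update the scaling entry for the active adapter
            module.scaling[module.active_adapter] = lora_scale
        else:
            # Legacy path: set the lora_scale attribute directly
            module.lora_scale = lora_scale


peft_layers = [ToyLoraLayer(), ToyLoraLayer()]
adjust_scale(peft_layers, 0.5, use_peft_backend=True)
print(peft_layers[0].scaling["default"])  # 0.5

legacy_layers = [ToyPatchedProjection()]
adjust_scale(legacy_layers, 0.25)
print(legacy_layers[0].lora_scale)  # 0.25
```

The per-adapter dict is what lets the PEFT backend later support multiple adapters with independent scales, which a single flat attribute cannot express.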

src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -303,7 +303,7 @@ def encode_prompt(
             self._lora_scale = lora_scale

             # dynamically adjust the LoRA scale
-            adjust_lora_scale_text_encoder(self.text_encoder, lora_scale)
+            adjust_lora_scale_text_encoder(self.text_encoder, lora_scale, self.use_peft_backend)

         if prompt is not None and isinstance(prompt, str):
             batch_size = 1
```

src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion_img2img.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -301,7 +301,7 @@ def encode_prompt(
             self._lora_scale = lora_scale

             # dynamically adjust the LoRA scale
-            adjust_lora_scale_text_encoder(self.text_encoder, lora_scale)
+            adjust_lora_scale_text_encoder(self.text_encoder, lora_scale, self.use_peft_backend)

         if prompt is not None and isinstance(prompt, str):
             batch_size = 1
```

src/diffusers/pipelines/controlnet/pipeline_controlnet.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -291,7 +291,7 @@ def encode_prompt(
             self._lora_scale = lora_scale

             # dynamically adjust the LoRA scale
-            adjust_lora_scale_text_encoder(self.text_encoder, lora_scale)
+            adjust_lora_scale_text_encoder(self.text_encoder, lora_scale, self.use_peft_backend)

         if prompt is not None and isinstance(prompt, str):
             batch_size = 1
```

src/diffusers/pipelines/controlnet/pipeline_controlnet_img2img.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -315,7 +315,7 @@ def encode_prompt(
             self._lora_scale = lora_scale

             # dynamically adjust the LoRA scale
-            adjust_lora_scale_text_encoder(self.text_encoder, lora_scale)
+            adjust_lora_scale_text_encoder(self.text_encoder, lora_scale, self.use_peft_backend)

         if prompt is not None and isinstance(prompt, str):
             batch_size = 1
```

src/diffusers/pipelines/controlnet/pipeline_controlnet_inpaint.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -442,7 +442,7 @@ def encode_prompt(
             self._lora_scale = lora_scale

             # dynamically adjust the LoRA scale
-            adjust_lora_scale_text_encoder(self.text_encoder, lora_scale)
+            adjust_lora_scale_text_encoder(self.text_encoder, lora_scale, self.use_peft_backend)

         if prompt is not None and isinstance(prompt, str):
             batch_size = 1
```

src/diffusers/pipelines/controlnet/pipeline_controlnet_inpaint_sd_xl.py

Lines changed: 2 additions & 2 deletions

```diff
@@ -315,8 +315,8 @@ def encode_prompt(
             self._lora_scale = lora_scale

             # dynamically adjust the LoRA scale
-            adjust_lora_scale_text_encoder(self.text_encoder, lora_scale)
-            adjust_lora_scale_text_encoder(self.text_encoder_2, lora_scale)
+            adjust_lora_scale_text_encoder(self.text_encoder, lora_scale, self.use_peft_backend)
+            adjust_lora_scale_text_encoder(self.text_encoder_2, lora_scale, self.use_peft_backend)

         prompt = [prompt] if isinstance(prompt, str) else prompt
```

src/diffusers/pipelines/controlnet/pipeline_controlnet_sd_xl.py

Lines changed: 2 additions & 2 deletions

```diff
@@ -288,8 +288,8 @@ def encode_prompt(
             self._lora_scale = lora_scale

             # dynamically adjust the LoRA scale
-            adjust_lora_scale_text_encoder(self.text_encoder, lora_scale)
-            adjust_lora_scale_text_encoder(self.text_encoder_2, lora_scale)
+            adjust_lora_scale_text_encoder(self.text_encoder, lora_scale, self.use_peft_backend)
+            adjust_lora_scale_text_encoder(self.text_encoder_2, lora_scale, self.use_peft_backend)

         prompt = [prompt] if isinstance(prompt, str) else prompt
```
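Every pipeline change in this commit follows the same call-site pattern: `encode_prompt` forwards the pipeline's `use_peft_backend` flag, and SDXL-style pipelines repeat the call for their second text encoder. A runnable sketch of that pattern, where `ToyPipeline` and `record_call` are hypothetical stand-ins for the real pipeline classes and `adjust_lora_scale_text_encoder`:

```python
def record_call(text_encoder, lora_scale, use_peft_backend=False):
    # Stand-in for adjust_lora_scale_text_encoder: record the arguments
    # so the call pattern can be inspected.
    return (text_encoder, lora_scale, use_peft_backend)


class ToyPipeline:
    def __init__(self, text_encoder, text_encoder_2=None, use_peft_backend=False):
        self.text_encoder = text_encoder
        self.text_encoder_2 = text_encoder_2
        self.use_peft_backend = use_peft_backend

    def encode_prompt(self, lora_scale):
        calls = []
        # dynamically adjust the LoRA scale, forwarding the backend flag
        calls.append(record_call(self.text_encoder, lora_scale, self.use_peft_backend))
        if self.text_encoder_2 is not None:
            # SDXL-style pipelines adjust their second text encoder too
            calls.append(record_call(self.text_encoder_2, lora_scale, self.use_peft_backend))
        return calls


print(ToyPipeline("enc1", use_peft_backend=True).encode_prompt(0.8))
# [('enc1', 0.8, True)]
print(ToyPipeline("enc1", "enc2").encode_prompt(0.8))
# [('enc1', 0.8, False), ('enc2', 0.8, False)]
```

Threading the flag through each call site keeps one `adjust_lora_scale_text_encoder` implementation working for both the legacy patched-layer path and the new PEFT backend.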
