Commit 43f1090

stevhliu and sayakpaul authored
[docs] Network alpha docstring (huggingface#9238)
fix docstring Co-authored-by: Sayak Paul <[email protected]>
1 parent c291617 commit 43f1090

File tree

1 file changed: +18 −6 lines


src/diffusers/loaders/lora_pipeline.py

Lines changed: 18 additions & 6 deletions
@@ -280,7 +280,9 @@ def load_lora_into_text_encoder(
                 A standard state dict containing the lora layer parameters. The key should be prefixed with an
                 additional `text_encoder` to distinguish between unet lora layers.
             network_alphas (`Dict[str, float]`):
-                See `LoRALinearLayer` for more details.
+                The value of the network alpha used for stable learning and preventing underflow. This value has the
+                same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this
+                link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning).
             text_encoder (`CLIPTextModel`):
                 The text encoder model to load the LoRA layers into.
             prefix (`str`):
@@ -753,7 +755,9 @@ def load_lora_into_text_encoder(
                 A standard state dict containing the lora layer parameters. The key should be prefixed with an
                 additional `text_encoder` to distinguish between unet lora layers.
             network_alphas (`Dict[str, float]`):
-                See `LoRALinearLayer` for more details.
+                The value of the network alpha used for stable learning and preventing underflow. This value has the
+                same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this
+                link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning).
             text_encoder (`CLIPTextModel`):
                 The text encoder model to load the LoRA layers into.
             prefix (`str`):
@@ -1249,7 +1253,9 @@ def load_lora_into_text_encoder(
                 A standard state dict containing the lora layer parameters. The key should be prefixed with an
                 additional `text_encoder` to distinguish between unet lora layers.
             network_alphas (`Dict[str, float]`):
-                See `LoRALinearLayer` for more details.
+                The value of the network alpha used for stable learning and preventing underflow. This value has the
+                same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this
+                link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning).
             text_encoder (`CLIPTextModel`):
                 The text encoder model to load the LoRA layers into.
             prefix (`str`):
@@ -1735,7 +1741,9 @@ def load_lora_into_text_encoder(
                 A standard state dict containing the lora layer parameters. The key should be prefixed with an
                 additional `text_encoder` to distinguish between unet lora layers.
             network_alphas (`Dict[str, float]`):
-                See `LoRALinearLayer` for more details.
+                The value of the network alpha used for stable learning and preventing underflow. This value has the
+                same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this
+                link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning).
             text_encoder (`CLIPTextModel`):
                 The text encoder model to load the LoRA layers into.
             prefix (`str`):
@@ -1968,7 +1976,9 @@ def load_lora_into_transformer(cls, state_dict, network_alphas, transformer, ada
                 into the unet or prefixed with an additional `unet` which can be used to distinguish between text
                 encoder lora layers.
             network_alphas (`Dict[str, float]`):
-                See `LoRALinearLayer` for more details.
+                The value of the network alpha used for stable learning and preventing underflow. This value has the
+                same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this
+                link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning).
             unet (`UNet2DConditionModel`):
                 The UNet model to load the LoRA layers into.
             adapter_name (`str`, *optional*):
@@ -2061,7 +2071,9 @@ def load_lora_into_text_encoder(
                 A standard state dict containing the lora layer parameters. The key should be prefixed with an
                 additional `text_encoder` to distinguish between unet lora layers.
             network_alphas (`Dict[str, float]`):
-                See `LoRALinearLayer` for more details.
+                The value of the network alpha used for stable learning and preventing underflow. This value has the
+                same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this
+                link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning).
             text_encoder (`CLIPTextModel`):
                 The text encoder model to load the LoRA layers into.
             prefix (`str`):
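The new docstring describes `network_alpha` as a stabilizer against underflow. A minimal sketch of the idea, assuming the standard kohya-ss convention that the LoRA update is rescaled by `alpha / rank` (`SimpleLoRALinear` below is a hypothetical illustration, not the actual diffusers class):

```python
import torch

class SimpleLoRALinear(torch.nn.Module):
    """Hypothetical LoRA layer showing how `network_alpha` is typically used."""

    def __init__(self, in_features, out_features, rank=4, network_alpha=None):
        super().__init__()
        # Low-rank decomposition: down-project to `rank`, then up-project back.
        self.down = torch.nn.Linear(in_features, rank, bias=False)
        self.up = torch.nn.Linear(rank, out_features, bias=False)
        # The alpha/rank scale keeps the effective learning rate comparable
        # across ranks and lets the stored weights stay large enough to avoid
        # underflow in low-precision checkpoints.
        self.scale = (network_alpha / rank) if network_alpha is not None else 1.0

    def forward(self, x):
        return self.up(self.down(x)) * self.scale

layer = SimpleLoRALinear(8, 8, rank=4, network_alpha=2.0)
print(layer.scale)  # 0.5
```

Because the scale divides by `rank`, a trainer can change the rank without retuning the learning rate, which is why the docstring points at the kohya-ss `--network_alpha` option for the full rationale.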
