Commit ae2fc01

Wrap lines in docstring (huggingface#5190)
Parent: 16d56c4

File tree

1 file changed: +2 -1 lines


src/diffusers/models/attention_flax.py

Lines changed: 2 additions & 1 deletion
@@ -132,7 +132,8 @@ class FlaxAttention(nn.Module):
         use_memory_efficient_attention (`bool`, *optional*, defaults to `False`):
             enable memory efficient attention https://arxiv.org/abs/2112.05682
         split_head_dim (`bool`, *optional*, defaults to `False`):
-            Whether to split the head dimension into a new axis for the self-attention computation. In most cases, enabling this flag should speed up the computation for Stable Diffusion 2.x and Stable Diffusion XL.
+            Whether to split the head dimension into a new axis for the self-attention computation. In most cases,
+            enabling this flag should speed up the computation for Stable Diffusion 2.x and Stable Diffusion XL.
         dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32):
             Parameters `dtype`
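For reference, below is a minimal usage sketch of the module whose docstring this commit rewraps. It is not part of the commit: the constructor keywords follow the attribute names in the docstring above, and the shapes are illustrative assumptions.

# A minimal usage sketch, NOT part of this commit. It assumes the constructor
# keyword names listed in the docstring (query_dim, heads, dim_head,
# use_memory_efficient_attention, split_head_dim, dtype) and illustrative shapes.
import jax
import jax.numpy as jnp

from diffusers.models.attention_flax import FlaxAttention

attn = FlaxAttention(
    query_dim=320,
    heads=8,
    dim_head=40,
    use_memory_efficient_attention=False,  # memory-efficient attention, https://arxiv.org/abs/2112.05682
    split_head_dim=True,  # per the docstring, often faster for Stable Diffusion 2.x / SDXL
    dtype=jnp.float32,
)

# Flax modules are initialized with example inputs, then applied with the params.
hidden_states = jnp.ones((1, 64, 320))  # (batch, sequence_length, query_dim), illustrative
params = attn.init(jax.random.PRNGKey(0), hidden_states)
out = attn.apply(params, hidden_states)
print(out.shape)  # expected: (1, 64, 320)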
