`docs/source/en/tutorials/fast_diffusion.md` (7 additions, 7 deletions)
Diffusion models are known to be slower than their counterparts, GANs, because of the iterative and sequential reverse diffusion process. Recent works try to address this limitation with:
* progressive timestep distillation (such as [LCM LoRA](../using-diffusers/inference_with_lcm_lora))
* model compression (such as [SSD-1B](https://huggingface.co/segmind/SSD-1B))
* reusing adjacent features of the denoiser (such as [DeepCache](https://github.com/horseee/DeepCache))

In this tutorial, we instead focus on leveraging PyTorch 2 to accelerate the inference latency of a text-to-image diffusion pipeline. We will use [Stable Diffusion XL (SDXL)](../using-diffusers/sdxl) as a case study, but the techniques we discuss should extend to other text-to-image diffusion pipelines.

## Setup

Make sure you're on the latest version of `diffusers`:

```bash
pip install -U diffusers
```
_This tutorial doesn't present the benchmarking code and focuses on how to perform …_

## Baseline

Let's start with a baseline. Disable reduced precision and [`scaled_dot_product_attention`](../optimization/torch2.0):

```python
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0"
).to("cuda")

# Run the attention ops without scaled_dot_product_attention
pipe.unet.set_default_attn_processor()
pipe.vae.set_default_attn_processor()
```
_(We later ran the experiments in float16 and found out that the recent versions …)_

* The benefits of using the bfloat16 numerical precision as compared to float16 are hardware-dependent. Modern generations of GPUs tend to favor bfloat16.
* Furthermore, in our experiments, we found bfloat16 to be much more resilient when used with quantization in comparison to float16.

We have a [dedicated guide](../optimization/fp16) for running inference in reduced precision.
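The resilience of bfloat16 comes down to its dynamic range: it keeps float32's 8-bit exponent and gives up mantissa bits instead. NumPy has no native bfloat16, so the sketch below (an illustration, not diffusers code) contrasts float16, which overflows early, with float32, whose exponent range bfloat16 shares:

```python
import numpy as np

# float16 has a 5-bit exponent, so its largest finite value is 65504;
# anything bigger overflows to infinity.
overflowed = np.float16(70000.0)
print(np.isinf(overflowed))  # True

# float32 has an 8-bit exponent -- the same width bfloat16 uses --
# so the same value is represented without overflow.
safe = np.float32(70000.0)
print(np.isfinite(safe))  # True
```

This wider dynamic range is part of why bfloat16 tends to tolerate the large intermediate values that quantization can introduce better than float16 does.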
## Running attention efficiently

Attention blocks are intensive to run. But with PyTorch's [`scaled_dot_product_attention`](../optimization/torch2.0), we can run them efficiently.
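For intuition, the computation that `scaled_dot_product_attention` fuses into a single optimized kernel is the standard attention formula, softmax(QKᵀ/√d)·V. Here is a minimal NumPy sketch of that math (for illustration only; it has none of the memory-efficient tricks of the PyTorch kernels):

```python
import numpy as np

def sdpa_reference(q, k, v):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = q.shape[-1]
    scores = q @ k.swapaxes(-2, -1) / np.sqrt(d_k)
    # Numerically stable softmax over the last axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
q = rng.standard_normal((2, 8, 64))  # (batch, seq_len, head_dim)
k = rng.standard_normal((2, 8, 64))
v = rng.standard_normal((2, 8, 64))
out = sdpa_reference(q, k, v)
print(out.shape)  # (2, 8, 64)
```

PyTorch's fused backends (Flash Attention, memory-efficient attention) compute the same result without ever materializing the full attention-weights matrix, which is where the speed and memory savings come from.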
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16
).to("cuda")
```
It provides a minor boost from 2.54 seconds to 2.52 seconds.

<Tip warning={true}>

Support for `fuse_qkv_projections()` is limited and experimental. As such, it's not available for many non-SD pipelines such as [Kandinsky](../using-diffusers/kandinsky). You can refer to [this PR](https://github.com/huggingface/diffusers/pull/6179) to get an idea about how to support this kind of computation.

</Tip>
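The idea behind fusing QKV projections can be sketched with plain matrices: concatenating the three projection weight matrices lets one larger matmul (one kernel launch on a GPU) replace three smaller ones, while producing identical results. A NumPy sketch of the concept (not the diffusers implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 64))     # (tokens, hidden_dim)
w_q = rng.standard_normal((64, 64))  # separate Q, K, V projection weights
w_k = rng.standard_normal((64, 64))
w_v = rng.standard_normal((64, 64))

# Unfused: three separate matmuls (three kernel launches on a GPU).
q, k, v = x @ w_q, x @ w_k, x @ w_v

# Fused: concatenate the weights once, then do a single larger matmul
# and split the result back into Q, K, V.
w_qkv = np.concatenate([w_q, w_k, w_v], axis=1)  # (64, 192)
q2, k2, v2 = np.split(x @ w_qkv, 3, axis=1)

print(np.allclose(q, q2) and np.allclose(k, k2) and np.allclose(v, v2))  # True
```

Launching one larger GEMM amortizes kernel-launch overhead and usually utilizes the hardware better than three small ones, which is where the modest latency win comes from.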