Compile and offloading quantized models

Optimizing models often involves trade-offs between inference speed and memory usage. For instance, while caching can boost inference speed, it also increases memory consumption since it needs to store the outputs of intermediate attention layers. A more balanced optimization strategy combines quantization, torch.compile, and various offloading methods.

For image generation, combining quantization and model offloading often gives the best trade-off between quality, speed, and memory. Group offloading is less effective for image generation because it is usually not possible to fully overlap data transfers when the compute kernel finishes faster, which leaves some communication overhead between the CPU and GPU.

For video generation, combining quantization and group offloading tends to be better because video models are more compute-bound.

The table below compares optimization strategy combinations and their impact on latency and memory usage for Flux.

| combination | latency (s) | memory usage (GB) |
|---|---|---|
| quantization | 32.602 | 14.9453 |
| quantization, torch.compile | 25.847 | 14.9448 |
| quantization, torch.compile, model CPU offloading | 32.312 | 12.2369 |

These results were benchmarked on Flux with an RTX 4090, with the transformer and text_encoder_2 components quantized. Refer to the benchmarking script if you're interested in evaluating your own model, or see the timing sketch after the first example below.

This guide will show you how to compile and offload a quantized model with bitsandbytes. Make sure you are using PyTorch nightly and the latest version of bitsandbytes.

pip install -U bitsandbytes

Quantization and torch.compile

Start by quantizing a model to reduce the memory required for storage and compiling it to accelerate inference.

Set the Dynamo option capture_dynamic_output_shape_ops = True to handle dynamic outputs when compiling bitsandbytes models.

import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig

torch._dynamo.config.capture_dynamic_output_shape_ops = True

# quantize
pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={"load_in_4bit": True, "bnb_4bit_quant_type": "nf4", "bnb_4bit_compute_dtype": torch.bfloat16},
    components_to_quantize=["transformer", "text_encoder_2"],
)
pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
).to("cuda")

# compile
pipeline.transformer.to(memory_format=torch.channels_last)
pipeline.transformer.compile(mode="max-autotune", fullgraph=True)
pipeline("""
    cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California
    highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain
"""
).images[0]
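
To evaluate your own setup, a minimal sketch like the one below times a generation and reads the peak GPU memory. It assumes pipeline is the quantized, compiled pipeline created above; the single warm-up call keeps torch.compile's one-time compilation cost out of the measurement.

import time
import torch

prompt = "cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California"

# warm-up call so torch.compile's one-time compilation cost isn't measured
pipeline(prompt)

torch.cuda.synchronize()
torch.cuda.reset_peak_memory_stats()

# timed call
start = time.perf_counter()
pipeline(prompt)
torch.cuda.synchronize()
latency = time.perf_counter() - start

peak_memory_gb = torch.cuda.max_memory_allocated() / 1024**3
print(f"latency: {latency:.3f} s, peak memory: {peak_memory_gb:.4f} GB")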

Quantization, torch.compile, and offloading

In addition to quantization and torch.compile, try offloading if you need to reduce memory usage further. Offloading keeps layers or whole model components on the CPU and only moves them to the GPU when they are needed for computation.

Increase the Dynamo cache_size_limit when offloading to avoid excessive recompilation, and set capture_dynamic_output_shape_ops = True to handle dynamic outputs when compiling bitsandbytes models.

Model CPU offloading

Model CPU offloading moves an individual pipeline component, like the transformer model, to the GPU when it is needed for computation. Otherwise, it is offloaded to the CPU.

import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig

torch._dynamo.config.cache_size_limit = 1000
torch._dynamo.config.capture_dynamic_output_shape_ops = True

# quantize
pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={"load_in_4bit": True, "bnb_4bit_quant_type": "nf4", "bnb_4bit_compute_dtype": torch.bfloat16},
    components_to_quantize=["transformer", "text_encoder_2"],
)
pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
).to("cuda")

# model CPU offloading
pipeline.enable_model_cpu_offload()

# compile
pipeline.transformer.compile()
pipeline(
    "cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California, highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain"
).images[0]
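
Group offloading

Group offloading moves groups of internal layers onto the GPU right before they are needed and back to the CPU afterwards, and it can overlap data transfers with computation when CUDA streams are used. The sketch below applies it to the same quantized pipeline. It assumes the enable_group_offload and apply_group_offloading helpers available in recent Diffusers releases, and the offload_type, num_blocks_per_group, and use_stream values are illustrative choices rather than benchmarked settings.

import torch
from diffusers import DiffusionPipeline
from diffusers.hooks import apply_group_offloading
from diffusers.quantizers import PipelineQuantizationConfig

torch._dynamo.config.cache_size_limit = 1000
torch._dynamo.config.capture_dynamic_output_shape_ops = True

# quantize
pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={"load_in_4bit": True, "bnb_4bit_quant_type": "nf4", "bnb_4bit_compute_dtype": torch.bfloat16},
    components_to_quantize=["transformer", "text_encoder_2"],
)
pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
)  # group offloading manages device placement, so don't call .to("cuda")

# group offloading: stream-based offloading for the transformer,
# simpler block- or leaf-level offloading for the remaining components
onload_device = torch.device("cuda")
offload_device = torch.device("cpu")

pipeline.transformer.enable_group_offload(
    onload_device=onload_device,
    offload_device=offload_device,
    offload_type="leaf_level",
    use_stream=True,
)
apply_group_offloading(pipeline.text_encoder, onload_device=onload_device, offload_type="block_level", num_blocks_per_group=2)
apply_group_offloading(pipeline.text_encoder_2, onload_device=onload_device, offload_type="block_level", num_blocks_per_group=2)
apply_group_offloading(pipeline.vae, onload_device=onload_device, offload_type="leaf_level")

# compile
pipeline.transformer.compile()
pipeline(
    "cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California, highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain"
).images[0]

Stream-based offloading is applied to the transformer because it does the bulk of the compute, which gives prefetching the best chance to hide transfer latency.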