CUTLASS-powered quantized BLAS library for low-bit deep learning on NVIDIA Blackwell GPUs.
QuTLASS is a high-performance library designed for low-precision kernel support in deep learning quantization, built on top of NVIDIA CUTLASS. It introduces narrow-precision microscaling routines tailored for quantized Large Language Model (LLM) inference and training on NVIDIA Blackwell GPUs.
- Microscaling in Blackwell
- What’s New in v0.2
- Features from Previous Versions
- Getting Started
- Usage Examples
- Benchmarks
- Citation
The new Blackwell architecture supports native matrix multiplication with microscaling, using scale factors in the form:

$$D = (A \cdot \mathrm{SF}_A)\,(B \cdot \mathrm{SF}_B) + C$$

Here, the scale factors are applied along the inner (K) dimension, with one scale factor shared per group of elements (32 for the MXFP formats, 16 for NVFP4).
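For intuition, the sketch below is a plain PyTorch reference of these block-scaled matmul semantics, not a QuTLASS API: it assumes a group size of 32 along K (as in the MXFP formats), and the tensor names and dequantize-then-multiply structure are illustrative only.

```python
# Reference-only sketch of block-scaled matmul semantics (not a QuTLASS API).
# Assumes one scale factor per group of 32 elements along the K dimension.
import torch

GROUP = 32  # MXFP group size (NVFP4 uses 16)

def dequantize_ref(q: torch.Tensor, sf: torch.Tensor) -> torch.Tensor:
    # q:  [M, K]         quantized values (held in float here for simplicity)
    # sf: [M, K // GROUP] per-group scale factors (powers of two in e8m0)
    scales = sf.repeat_interleave(GROUP, dim=1)  # expand scales along K
    return q.float() * scales.float()

def microscaled_matmul_ref(aq, a_sf, bq, b_sf, c=None):
    # TN layout: D = (A * SF_A) (B * SF_B)^T [+ C]
    d = dequantize_ref(aq, a_sf) @ dequantize_ref(bq, b_sf).T
    return d if c is None else d + c
```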
- FlashInfer backend support for B200 GPUs
- Quantization-Aware Training (QAT) via MXFP types:
- Quartet clipping mask computation integrated in quantization routines
- Prototype backward kernels for MXFP4 (`sm_120`) and MXFP8 (`sm_100`)
- Integrated CUTLASS MXFP8 backward GEMM kernels (TN and NN layouts)
- Updated Transformers Integration for QAT (#41897)
- Nanochat-QAT Integration (#1)
- Support for `sm_100` GPUs (e.g., NVIDIA B200).
- NVFP4 Microscaling:
- Full W4A4 quantization support.
- Online rotations:
- Fused transform + quantization + scale computation.
- Rotation matrices loaded at runtime, allowing any transformation to be applied.
- NVFP4 Matmul Kernels:
- CUTLASS-backed NVFP4:NVFP4 with block-scale reordering.
- Quantization:
- Abs-Max supported.
- Multiple rotation sizes (16/32/64/128) supported for both MXFP4 and NVFP4.
- vLLM Integration (PR #24440)
- MXFP4 microscaling support, with:
- Weight and Activation quantization (W4A4)
- Online rotations: fused kernel for online transforms, quantization, and scale computation.
- Transformations matching the microscaling group sizes (i.e., 32 for MXFP4).
- Compatible with any rotation matrix (e.g., Identity, Hadamard, DCT), since the matrices are loaded at runtime.
- Multiple quantization schemes:
- Quartet (i.e., Quest-like).
- Abs-Max.
- Matmul kernels:
- CUTLASS-backed MXFP4:MXFP4 kernel with block-scale reordering.
- Prototype kernel for small batch sizes (no reordering required).
- Transformers Integration (PR #38696)
Note: QuTLASS is under active development and not yet fully optimized.
- NVIDIA Blackwell GPU (compute capabilities supported: `sm_120a` and `sm_100a`)
- Compatible drivers: CUDA 12.8 or newer
- Install requirements: `pip install -r requirements.txt`
- Install QuTLASS (in editable mode): run `pip install --no-build-isolation -e .` in the root folder of this repository.
Note: To generate accurate quantized models using MXFP4 or NVFP4 formats, refer to the FP-Quant repository.
Correctness tests can be executed via `python tests/mxfp4_test.py`, and benchmarks via `python benchmarks/bench_mxfp4.py`.
The fused quantization kernel can be invoked directly through `qutlass.fusedQuantizeMx(a, h, method)`. Here, `a` is the input tensor to quantize, `h` is the Hadamard matrix, and `method` is the quantization scheme, specified as `Literal["quest", "abs_max"]`.
The kernel interface is defined in `qutlass/csrc/fused_quantize_mx.cu`.
The outputs are `aq`, the quantized data in FP4 (e2m1), and `a_sf`, the corresponding scaling factors in FP8 (e8m0).
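A minimal usage sketch follows, assuming bf16 CUDA inputs, a rotation matrix sized to the MXFP4 group size of 32, that `method` can be passed by keyword, and that the function returns the pair `(aq, a_sf)` in that order (the return convention is an assumption, not documented above).

```python
import torch
import qutlass
from scipy.linalg import hadamard  # only used to build the rotation matrix

M, K, GROUP = 128, 4096, 32
a = torch.randn(M, K, dtype=torch.bfloat16, device="cuda")

# Rotation matrix matching the MXFP4 group size; a normalized Hadamard here,
# but any rotation (Identity, DCT, ...) can be supplied, since it is loaded
# at runtime.
h = torch.as_tensor(hadamard(GROUP) / GROUP ** 0.5,
                    dtype=torch.bfloat16, device="cuda")

# Fused rotation + quantization + scale computation.
aq, a_sf = qutlass.fusedQuantizeMx(a, h, method="quest")
# aq:   quantized data in FP4 (e2m1)
# a_sf: per-group scale factors in FP8 (e8m0)
```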
The matmul kernel can be called via `qutlass.matmul_mxf4_bf16_tn(aq, bq, a_sf, b_sf, alpha)`. Its implementation can be found in `qutlass/csrc/gemm.cu`.
To use this matmul kernel, the scaling factors must first be rearranged into a block-scaled swizzle format.
The `qutlass.to_blocked` function, located in `qutlass/utils.py`, handles this reordering.
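Continuing the quantization sketch above, here is a hedged end-to-end example: quantize both operands, reorder their scale factors, and call the CUTLASS-backed kernel. Treating `to_blocked` as taking the scale tensor alone and `alpha` as a global scalar output scale are assumptions.

```python
# Continues the quantization sketch above (a, h, aq, a_sf already defined).
N = 4096
b = torch.randn(N, K, dtype=torch.bfloat16, device="cuda")
bq, b_sf = qutlass.fusedQuantizeMx(b, h, method="quest")

# Rearrange the scale factors into the block-scaled swizzle format expected
# by the CUTLASS kernel (qutlass.to_blocked, defined in qutlass/utils.py).
a_sf_blocked = qutlass.to_blocked(a_sf)
b_sf_blocked = qutlass.to_blocked(b_sf)

# TN-layout MXFP4:MXFP4 matmul with BF16 output. alpha is assumed here to be
# a global output scale; 1.0 applies no extra scaling.
out = qutlass.matmul_mxf4_bf16_tn(aq, bq, a_sf_blocked, b_sf_blocked, 1.0)
```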
In addition to the CUTLASS-powered MXFP4 matmul kernel above, we provide a custom prototype kernel that can be called via `qutlass.matmul_ada_mxf4_bf16_tn(...)`.
This implementation is located in `qutlass/csrc/gemm_ada.cu` and does not require the preceding call to `to_blocked`.
Optimization efforts for this kernel have primarily targeted small batch sizes; for larger batch sizes, `qutlass.matmul_mxf4_bf16_tn` is recommended.
This applies also to NVFP4, which is functionally equivalent aside from minor naming changes.
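For completeness, a hedged call sketch for the prototype kernel: its exact argument list is not spelled out above, so mirroring the CUTLASS-backed kernel's signature (minus the `to_blocked` reordering) is purely an assumption.

```python
# Hypothetical call, assuming the argument list mirrors matmul_mxf4_bf16_tn;
# the raw (non-reordered) scale factors are passed directly.
out_small = qutlass.matmul_ada_mxf4_bf16_tn(aq, bq, a_sf, b_sf, 1.0)
```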
The following results illustrate the performance of QuTLASS MXFP4 across various batch sizes. Ideal performance refers to pure matrix multiplication in FP4, without any overhead from quantization. Actual performance includes the full pipeline: Hadamard rotation, data quantization, scale computation, and block-scale reordering.
The following results show the inference speedup of QuTLASS MXFP4 over PyTorch BF16 in Transformers, as a function of batch size and sequence length on 8B and 14B-parameter models.
MXFP4 delivers consistent performance gains across all batch sizes, with speedups increasing progressively as the batch size grows.
To generate recipes for efficient and accurate weight + activation quantization for low-bit MXFP and NVFP formats, please refer to FP-Quant.
The following results show some QAT performance using QuTLASS. Using our Transformers integration, an MXFP4:MXFP8 QAT scheme applied to Llama-3.1-8B recovers over half of the lost performance after only ~100M training tokens, while training 30% faster than BF16 pseudo-quantization QAT.
For efficient and accurate QAT recipes for low-bit MXFP formats, see nanochat-qat and FP-Quant.
@misc{qutlass2025,
  title={QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning},
  author={Roberto L. Castro and Dan Alistarh},
  year={2025},
  publisher={GitHub},
  howpublished={\url{https://github.com/IST-DASLab/qutlass}},
}