32 releases

| Version | Date |
|---|---|
| 0.9.2-alpha.2 | Dec 16, 2025 |
| 0.9.2-alpha.1 | Oct 23, 2025 |
| 0.9.1 | May 1, 2025 |
| 0.8.4 | Mar 15, 2025 |
| 0.3.1 | Nov 12, 2023 |
#24 in Machine learning
144,923 downloads per month
Used in 356 crates (198 directly)
1.5MB
36K SLoC
Contains (Zip file, 2KB) tests/fortran_tensor_3d.pth, (Zip file, 2KB) tests/test.pt, (Zip file, 2KB) tests/test_with_key.pt
candle
Minimalist ML framework for Rust
lib.rs:
ML framework for Rust
```rust
use candle_core::{Device, Tensor};

// Build two matrices on the CPU backend and multiply them;
// the `?` operator assumes a function returning `candle_core::Result`.
let a = Tensor::arange(0f32, 6f32, &Device::Cpu)?.reshape((2, 3))?;
let b = Tensor::arange(0f32, 12f32, &Device::Cpu)?.reshape((3, 4))?;
let c = a.matmul(&b)?; // `c` has shape (2, 4)
```
Features
- Simple syntax (looks and feels like PyTorch)
- CPU and CUDA backends (and M1 support); see the device-selection sketch after this list
- Serverless (CPU-only) deployments that are small and fast
- Model training
- Distributed computing (NCCL)
- Models out of the box (Llama, Whisper, Falcon, ...)
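Switching backends is a matter of picking a `Device`. Here is a minimal sketch, assuming candle-core's `Device::cuda_if_available` helper (which falls back to the CPU backend when no CUDA device is present, and needs the crate's CUDA feature for GPU use):

```rust
use candle_core::{Device, Tensor};

fn main() -> candle_core::Result<()> {
    // Pick the CUDA backend when available, otherwise fall back to the CPU.
    let device = Device::cuda_if_available(0)?;
    // The same tensor code runs unchanged on either backend.
    let x = Tensor::arange(0f32, 4f32, &device)?.reshape((2, 2))?;
    let y = x.matmul(&x)?;
    println!("{y}");
    Ok(())
}
```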
FAQ
- Why Candle?
Candle stems from the need to reduce binary size in order to make serverless inference possible: full frameworks like PyTorch are very large, while Candle keeps the whole engine small.
It also simply removes Python from production workloads. Python can add real overhead in more complex workflows, and the GIL is a notorious source of headaches.
Finally, Rust is cool, and a lot of the HF ecosystem already has Rust crates, such as safetensors and tokenizers.
Other Crates
Candle consists of a number of crates. This crate holds the core data structures, but you may wish to look at the docs for the other crates, which can be found here:
- candle-core. Core data structures and data types.
- candle-nn. Building blocks for neural nets (see the sketch after this list).
- candle-datasets. Rust access to commonly used datasets like MNIST.
- candle-examples. Examples of Candle in use.
- candle-onnx. Loading and using ONNX models.
- candle-pyo3. Access to Candle from Python.
- candle-transformers. Candle implementation of many published transformer models.
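As a rough illustration of how the crates fit together, here is a minimal sketch combining candle-core tensors with a candle-nn layer. It assumes candle-nn's `Linear::new` constructor and the `Module` trait; the weights are random rather than loaded from a checkpoint.

```rust
use candle_core::{Device, Tensor};
use candle_nn::{Linear, Module};

fn main() -> candle_core::Result<()> {
    let dev = Device::Cpu;
    // A 4 -> 2 linear layer built from plain tensors (no VarBuilder, no bias).
    let w = Tensor::randn(0f32, 1f32, (2, 4), &dev)?;
    let layer = Linear::new(w, None);
    // Run a single input row through the layer.
    let x = Tensor::randn(0f32, 1f32, (1, 4), &dev)?;
    let y = layer.forward(&x)?;
    println!("{:?}", y.shape()); // [1, 2]
    Ok(())
}
```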
Dependencies
~8–18MB
~317K SLoC