
`model2vec-rs` is a Rust crate providing an efficient implementation for inference with Model2Vec static embedding models. Model2Vec is a technique for creating compact and fast static embedding models from sentence transformers, achieving large reductions in model size and large gains in inference speed. This Rust crate is optimized for performance, making it suitable for applications requiring fast embedding generation.
You can utilize `model2vec-rs` in two ways:
- As a library in your Rust projects
- As a standalone Command-Line Interface (CLI) tool for quick terminal-based inference
Integrate `model2vec-rs` into your Rust application to generate embeddings within your code.
a. Add `model2vec-rs` as a dependency:

```bash
cargo add model2vec-rs
```
b. Load a model and generate embeddings:
```rust
use anyhow::Result;
use model2vec_rs::model::StaticModel;

fn main() -> Result<()> {
    // Load a model from the Hugging Face Hub or a local path.
    // Arguments: (repo_or_path, hf_token, normalize_embeddings, subfolder_in_repo)
    let model = StaticModel::from_pretrained(
        "minishlab/potion-base-8M", // Model ID from Hugging Face or local path to a model directory
        None, // Optional: Hugging Face API token for private models
        None, // Optional: bool to override the model's default normalization. `None` uses the model's config.
        None, // Optional: subfolder if the model files are not at the root of the repo/path
    )?;

    let sentences = vec![
        "Hello world".to_string(),
        "Rust is awesome".to_string(),
    ];

    // Generate embeddings using default parameters
    // (default max_length: Some(512), default batch_size: 1024)
    let embeddings = model.encode(&sentences);
    // `embeddings` is a Vec<Vec<f32>>
    println!("Generated {} embeddings.", embeddings.len());

    // To generate embeddings with custom arguments:
    let custom_embeddings = model.encode_with_args(
        &sentences,
        Some(256), // Optional: custom max token length for truncation
        512,       // Custom batch size for processing
    );
    println!("Generated {} custom embeddings.", custom_embeddings.len());

    Ok(())
}
```
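Since `encode` returns a plain `Vec<Vec<f32>>`, you can work with the embeddings using ordinary Rust. As a minimal sketch (the `cosine_similarity` helper below is not part of the crate, just an illustration), you could compare the two sentences above like this:

```rust
// Hypothetical helper, not part of model2vec-rs: cosine similarity between two embeddings.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 {
        return 0.0;
    }
    dot / (norm_a * norm_b)
}

// Usage with the embeddings from the example above:
// let sim = cosine_similarity(&embeddings[0], &embeddings[1]);
// println!("Similarity between the two sentences: {sim:.4}");
```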
To use `model2vec-rs` as a standalone CLI tool instead:

a. Install the CLI tool:
This command compiles the crate in release mode (for speed) and installs the `model2vec-rs` executable to Cargo's binary directory, `~/.cargo/bin/`.

```bash
cargo install model2vec-rs
```
Ensure `~/.cargo/bin/` is in your system's `PATH` so you can run `model2vec-rs` from any directory.
b. Generate embeddings via CLI:
The compiled binary installed via `cargo install` is significantly faster (often >10x) than running via `cargo run -- ...` without release mode.
- Encode a single sentence:

```bash
model2vec-rs encode "Hello world" "minishlab/potion-base-8M"
```
Embeddings will be printed to the console in JSON format. This command should take less than 0.1s to execute.
- Encode multiple lines from a file and save the results to an output file:

```bash
echo -e "This is the first sentence.\nThis is another sentence." > my_texts.txt
model2vec-rs encode my_texts.txt "minishlab/potion-base-8M" --output embeddings_output.json
```
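The exact JSON layout written by `--output` is determined by the CLI; assuming it is a JSON array of per-text embedding vectors (an assumption worth checking against your actual output, and requiring `serde_json` as a dependency), it could be read back in Rust like this:

```rust
use anyhow::Result;

fn main() -> Result<()> {
    // Assumption: the output file contains a JSON array of float arrays,
    // one embedding per input line. Verify against your actual output.
    let raw = std::fs::read_to_string("embeddings_output.json")?;
    let embeddings: Vec<Vec<f32>> = serde_json::from_str(&raw)?;
    println!(
        "Loaded {} embeddings of dimension {}",
        embeddings.len(),
        embeddings.first().map_or(0, |e| e.len())
    );
    Ok(())
}
```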
c. (Alternative for developers) Running the CLI from a cloned repository:
```bash
# Clone and navigate to the repository directory
git clone https://github.com/MinishLab/model2vec-rs.git
cd model2vec-rs

# Build and run with release optimizations (recommended for better performance):
cargo run --release -- encode "Hello world" "minishlab/potion-base-8M"

# For quicker development cycles (slower execution):
cargo run -- encode "Hello world" "minishlab/potion-base-8M"

# Alternatively, build the executable first:
cargo build --release
# Then run with:
./target/release/model2vec-rs encode "Hello world" "minishlab/potion-base-8M"
```
Key features:

- Fast Inference: Optimized Rust implementation for fast embedding generation.
- Hugging Face Hub Integration: Load pre-trained Model2Vec models directly from the Hugging Face Hub using model IDs, or use models from local paths.
- Model Formats: Supports models with `f32`, `f16`, and `i8` weight types stored in `safetensors` files.
- Batch Processing: Encodes multiple sentences in batches.
- Configurable Encoding: Allows customization of maximum sequence length and batch size during encoding.
Model2Vec is a technique to distill large sentence transformer models into highly efficient static embedding models. This process significantly reduces model size and computational requirements for inference. For a detailed understanding of how Model2Vec works, including the distillation process and model training, please refer to the main Model2Vec Python repository and its documentation.
This `model2vec-rs` crate provides a Rust-based engine specifically for inference with these Model2Vec models.
A variety of pre-trained Model2Vec models are available on the Hugging Face Hub (MinishLab collection). These can be loaded by `model2vec-rs` using their Hugging Face model ID or by providing a local path to the model files.
| Model | Language | Distilled From (Original Sentence Transformer) | Params | Task |
|---|---|---|---|---|
| potion-base-32M | English | bge-base-en-v1.5 | 32.3M | General |
| potion-base-8M | English | bge-base-en-v1.5 | 7.5M | General |
| potion-base-4M | English | bge-base-en-v1.5 | 3.7M | General |
| potion-base-2M | English | bge-base-en-v1.5 | 1.8M | General |
| potion-retrieval-32M | English | bge-base-en-v1.5 | 32.3M | Retrieval |
| M2V_multilingual_output | Multilingual | LaBSE | 471M | General |
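Any of the models above can be passed to `StaticModel::from_pretrained` either by its Hugging Face ID or as a local directory containing the model files. A brief sketch (the local path shown is purely illustrative):

```rust
use model2vec_rs::model::StaticModel;

fn main() -> anyhow::Result<()> {
    // Load the retrieval-tuned model by its Hugging Face ID...
    let retrieval_model = StaticModel::from_pretrained(
        "minishlab/potion-retrieval-32M", None, None, None,
    )?;

    // ...or load a model from a local directory (illustrative path).
    let local_model = StaticModel::from_pretrained(
        "./models/potion-base-8M", None, None, None,
    )?;

    let queries = vec!["How do static embeddings work?".to_string()];
    let _query_embeddings = retrieval_model.encode(&queries);
    let _doc_embeddings = local_model.encode(&queries);
    Ok(())
}
```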
We compared the performance of the Rust implementation with the Python version of Model2Vec. The benchmark was run single-threaded on a CPU.
| Implementation | Throughput |
|---|---|
| Rust | 8000 samples/second |
| Python | 4650 samples/second |
The Rust version is roughly 1.7× faster than the Python version.
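The exact numbers depend on hardware, model, and batch settings; as a rough sketch (not the harness used for the table above), you could measure throughput on your own machine with a simple timing loop:

```rust
use std::time::Instant;
use model2vec_rs::model::StaticModel;

fn main() -> anyhow::Result<()> {
    let model = StaticModel::from_pretrained("minishlab/potion-base-8M", None, None, None)?;

    // Use any representative texts; these are placeholders.
    let texts: Vec<String> = (0..10_000)
        .map(|i| format!("Sample sentence number {i} for benchmarking."))
        .collect();

    let start = Instant::now();
    let embeddings = model.encode(&texts);
    let elapsed = start.elapsed().as_secs_f64();

    println!(
        "Encoded {} texts in {:.2}s ({:.0} samples/second)",
        embeddings.len(),
        elapsed,
        embeddings.len() as f64 / elapsed
    );
    Ok(())
}
```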
- `model2vec-rs` (this crate): High-performance Rust engine for Model2Vec inference, roughly 1.7x faster than the Python implementation.
- `model2vec` (Python): Handles model distillation, training, and fine-tuning, as well as (slower) Python-based inference.
This crate is licensed under the MIT license.
If you use the Model2Vec methodology or models in your research or work, please cite the original Model2Vec project:
```bibtex
@article{minishlab2024model2vec,
  author = {Tulkens, Stephan and {van Dongen}, Thomas},
  title = {Model2Vec: Fast State-of-the-Art Static Embeddings},
  year = {2024},
  url = {https://github.com/MinishLab/model2vec}
}
```