# paddle-inference-rs
Rust bindings for the PaddlePaddle inference library, providing safe and ergonomic access to PaddlePaddle's C API for deep learning inference.
## Features
- **Safe Rust API**: Type-safe wrappers around PaddlePaddle's C inference API
- **Cross-platform**: Supports Windows, Linux, and macOS
- **Async support**: Optional async/await support for inference operations
- **Memory safe**: Proper resource management with RAII patterns
- **Zero-cost abstractions**: Minimal overhead compared to direct C API usage
## Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
paddle-inference-rs = "0.1.0"
```
### Prerequisites

You need to have the PaddlePaddle inference library installed. The crate expects the following directory layout:
```text
paddle/
├── include/
│   ├── pd_common.h
│   ├── pd_config.h
│   ├── pd_inference_api.h
│   ├── pd_predictor.h
│   ├── pd_tensor.h
│   ├── pd_types.h
│   └── pd_utils.h
└── lib/
    ├── paddle_inference_c.dll   (Windows)
    ├── paddle_inference_c.so    (Linux)
    └── paddle_inference_c.dylib (macOS)
```
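At runtime the dynamic linker must also be able to locate the shared library. On Linux, for example, one common approach (assuming the layout above; adjust the path to your install location) is:

```sh
export LD_LIBRARY_PATH=/path/to/paddle/lib:$LD_LIBRARY_PATH
```

Windows resolves DLLs via `PATH`, and macOS uses `DYLD_LIBRARY_PATH` analogously.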
## Usage

### Basic Example
```rust
use paddle_inference_rs::{Config, Predictor, PrecisionType};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create configuration
    let config = Config::new()?
        .set_model("model_dir", "model_file", "params_file")?
        .enable_gpu(0)?
        .set_precision(PrecisionType::Float32)?
        .enable_memory_optim()?;

    // Create predictor
    let predictor = Predictor::create(config)?;

    // Get input and output names
    let input_names = predictor.get_input_names();
    let output_names = predictor.get_output_names();

    // Prepare input data
    let mut input_tensor = predictor.get_input_handle(&input_names[0])?;
    input_tensor.reshape(&[1, 3, 224, 224])?;

    // Copy data to the tensor (example with zeroed placeholder data)
    let input_data = vec![0.0f32; 1 * 3 * 224 * 224];
    input_tensor.copy_from_cpu(&input_data)?;

    // Run inference
    predictor.run()?;

    // Get output
    let output_tensor = predictor.get_output_handle(&output_names[0])?;
    let output_shape = output_tensor.get_shape()?;
    // Shape dimensions come back as integers; cast to usize for the buffer length
    let output_len: usize = output_shape.iter().map(|&d| d as usize).product();
    let mut output_data = vec![0.0f32; output_len];
    output_tensor.copy_to_cpu(&mut output_data)?;

    println!("Inference completed successfully!");
    println!("Output shape: {:?}", output_shape);
    println!("First five output values: {:?}", &output_data[0..5]);
    Ok(())
}
```
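For a classification model, the flat output buffer holds one score per class, so picking the top-1 prediction is a plain-Rust reduction over `output_data` (no crate API involved):

```rust
/// Index of the highest score in a flat score buffer (top-1 class).
fn argmax(scores: &[f32]) -> Option<usize> {
    scores
        .iter()
        .enumerate()
        .max_by(|a, b| a.1.total_cmp(b.1))
        .map(|(i, _)| i)
}

// Usage: if let Some(class) = argmax(&output_data) { println!("Top-1 class: {class}"); }
```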
### Advanced Example with Async
```rust
use paddle_inference_rs::{Config, Predictor};
use tokio::task;

async fn async_inference() -> Result<(), Box<dyn std::error::Error>> {
    let config = Config::new()?
        .set_model("model_dir", "model_file", "params_file")?
        .enable_gpu(0)?;

    // Run the blocking inference call on tokio's dedicated blocking thread pool.
    // The explicit return type lets `?` box the crate's error inside the closure
    // (this assumes the error type implements Error + Send + Sync and Config is Send).
    task::spawn_blocking(move || -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
        let predictor = Predictor::create(config)?;
        predictor.run()?;
        Ok(())
    })
    .await??;

    Ok(())
}
```
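To drive it from a binary, wrap it in a tokio runtime (this assumes `tokio = { version = "1", features = ["full"] }` in `Cargo.toml`):

```rust
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    async_inference().await
}
```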
## API Overview

### Config
- Model configuration and optimization settings
- Hardware backend selection (CPU/GPU/XPU)
- Precision settings (FP32/FP16/INT8)
- Memory optimization options
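For a CPU-only deployment, a minimal configuration sketch using only the methods shown in the examples above (this assumes omitting `enable_gpu` keeps inference on the CPU backend, and that the crate's error type implements `std::error::Error`):

```rust
use paddle_inference_rs::Config;

// Minimal CPU-oriented configuration sketch; assumes omitting enable_gpu()
// leaves the predictor on the CPU backend.
fn cpu_config() -> Result<Config, Box<dyn std::error::Error>> {
    Ok(Config::new()?
        .set_model("model_dir", "model_file", "params_file")?
        .enable_memory_optim()?)
}
```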
### Predictor
- Main inference interface
- Input/output tensor management
- Batch inference support
- Thread-safe operations
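A common pattern for concurrent inference is one predictor per worker thread; a hedged sketch, assuming `Config` and `Predictor` can be created and used independently on each thread and that the crate's error type is `Send + Sync`:

```rust
use paddle_inference_rs::{Config, Predictor};
use std::thread;

// Sketch: one predictor per worker thread. Each worker builds its own
// config and predictor, then runs inference independently.
fn spawn_workers(n: usize) {
    let handles: Vec<_> = (0..n)
        .map(|_| {
            thread::spawn(|| -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
                let config = Config::new()?
                    .set_model("model_dir", "model_file", "params_file")?;
                let predictor = Predictor::create(config)?;
                predictor.run()?;
                Ok(())
            })
        })
        .collect();
    for h in handles {
        let _ = h.join();
    }
}
```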
### Tensor
- Multi-dimensional data container
- Data type support (Float32, Int32, Int64, UInt8, Int8)
- Shape manipulation and data copying
- LoD (level-of-detail) support for variable-length sequences
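The basic example copies `f32` data; assuming `copy_from_cpu` is generic over the supported element types listed above, feeding `Int64` data (e.g. token IDs for an NLP model) would look like this hypothetical helper:

```rust
use paddle_inference_rs::Predictor;

// Hypothetical helper: feed Int64 token IDs into a named input tensor.
// Assumes copy_from_cpu is generic over the supported element types.
fn set_token_ids(
    predictor: &Predictor,
    name: &str,
    ids: &[i64],
) -> Result<(), Box<dyn std::error::Error>> {
    let mut input = predictor.get_input_handle(name)?;
    input.reshape(&[1, ids.len() as i32])?;
    input.copy_from_cpu(ids)?;
    Ok(())
}
```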
## Building from Source

1. Clone the repository:

   ```sh
   git clone https://github.com/your-username/paddle-inference-rs.git
   cd paddle-inference-rs
   ```

2. Build with cargo:

   ```sh
   cargo build --release
   ```

3. For binding generation (requires bindgen):

   ```sh
   cargo build --features gen
   ```
## Platform Support

- **Windows**: Requires Visual Studio build tools and PaddlePaddle Windows binaries
- **Linux**: Requires gcc/clang and PaddlePaddle Linux binaries
- **macOS**: Requires Xcode command line tools and PaddlePaddle macOS binaries
## Performance
The library provides near-native performance with minimal overhead:
- <1% overhead compared to direct C API calls
- Zero-copy data transfer when possible
- Efficient memory management with RAII
- Thread-safe operations for concurrent inference
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments
- PaddlePaddle team for the excellent inference library
- Rust community for amazing tools and libraries
- Contributors and users of this crate
## Support
If you encounter any issues or have questions:
- Check the documentation
- Search existing issues
- Create a new issue with detailed information
## Version Compatibility
| paddle-inference-rs | PaddlePaddle | Rust |
|---|---|---|
| 0.1.x | 2.4+ | 1.65+ |
Made with ❤️ for the Rust and AI communities