High-performance S3-compatible storage server written in Rust, optimized for speed and reliability.
- S3 API Compatibility: Complete implementation of core S3 operations
  - Bucket operations: Create, Delete, List, Head
  - Object operations: PUT, GET, DELETE, HEAD
  - Multipart uploads: Initiate, Upload Parts, Complete, Abort
  - Query operations: Versioning, ACL, Location, Batch Delete
- AWS Signature V4: Complete authentication implementation
- Chunked Transfer Encoding: Full support for AWS chunked transfers with signatures
- Async I/O: Built on Tokio and Axum for maximum concurrency
- Disk Persistence: Reliable filesystem-based storage
- CORS Support: Full cross-origin resource sharing support
- Zero-Copy Operations: Efficient memory usage for large files
- Exceptional Performance: 20,000+ operations per second
Also check the Web UI here: https://github.com/vibecoder-host/ironbucket-ui
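Because IronBucket exposes the standard S3 API with Signature V4 authentication, any AWS SDK can point at it. Below is a minimal sketch using the official Rust SDK (assumed crates: aws-config, aws-sdk-s3, tokio); the endpoint, credentials, bucket, and key are placeholders and should match your own configuration (see the configuration section below).

```rust
use aws_sdk_s3 as s3;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Credentials and endpoint must match your IronBucket configuration.
    let creds = s3::config::Credentials::new("root", "xxxxxxxxxxxxxxxxxxxxx", None, None, "ironbucket");
    let base = aws_config::defaults(aws_config::BehaviorVersion::latest())
        .endpoint_url("http://172.17.0.1:20000")
        .region(aws_config::Region::new("us-east-1"))
        .credentials_provider(creds)
        .load()
        .await;
    // Path-style addressing, since buckets are not resolved through DNS on a local endpoint.
    let conf = s3::config::Builder::from(&base).force_path_style(true).build();
    let client = s3::Client::from_conf(conf);

    // Create a bucket, upload an object, and read it back.
    client.create_bucket().bucket("my-bucket").send().await?;
    client
        .put_object()
        .bucket("my-bucket")
        .key("hello.txt")
        .body(s3::primitives::ByteStream::from_static(b"hello from ironbucket"))
        .send()
        .await?;
    let obj = client.get_object().bucket("my-bucket").key("hello.txt").send().await?;
    let bytes = obj.body.collect().await?.into_bytes();
    println!("read {} bytes back", bytes.len());
    Ok(())
}
```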
📊 View Benchmark Results (Tool: MinIO Warp, Server: 8 cores / 16GB RAM)
./warp mixed --host=172.17.0.1:20000 --access-key=XXX --secret-key=XXX \
  --obj.size=100KB --duration=60s --autoterm

- Total Throughput: 28,561 obj/s | 16.71 MB/s (mixed workload)
- PUT Operations: 4,284 obj/s | 4.09 MB/s
- GET Operations: 12,852 obj/s | 12.26 MB/s
- DELETE Operations: 2,856 obj/s
- STAT Operations: 8,568 obj/s
- Latency: < 2ms median response time (P50), 15ms (P99)

- Total Throughput: 26,627 obj/s | 152.34 MB/s (mixed workload)
- PUT Operations: 3,993 obj/s | 38.08 MB/s
- GET Operations: 11,981 obj/s | 114.26 MB/s
- DELETE Operations: 2,663 obj/s
- STAT Operations: 7,989 obj/s
- Latency: < 2ms median response time (P50), 18ms (P99)

- Total Throughput: 19,307 obj/s | 1104.77 MB/s (mixed workload)
- PUT Operations: 2,896 obj/s | 276.23 MB/s
- GET Operations: 8,687 obj/s | 828.54 MB/s
- DELETE Operations: 1,930 obj/s
- STAT Operations: 5,792 obj/s
- Latency: < 3ms median response time (P50), 24ms (P99)

- Total Throughput: 5,067 obj/s | 2.898 GB/s (mixed workload)
- PUT Operations: 759 obj/s | 724.29 MB/s
- GET Operations: 2,280 obj/s | 2.18 GB/s
- DELETE Operations: 507 obj/s
- STAT Operations: 1,520 obj/s
- Latency: < 5ms median response time (P50), 36ms (P99)

- Total Throughput: 735 obj/s | 4.23 GB/s (mixed workload)
- PUT Operations: 110 obj/s | 1049.50 MB/s
- GET Operations: 330 obj/s | 3.16 GB/s
- DELETE Operations: 73 obj/s
- STAT Operations: 221 obj/s
- Latency: < 25ms median response time (P50), 140ms (P99)
# Clone the repository
cd /opt/app/ironbucket
# Start IronBucket with Docker Compose
docker-compose up -d
# Verify it's running
docker-compose ps
# Check logs
docker-compose logs -f ironbucket

# Build from source
cargo build --release
# Run with environment variables
STORAGE_PATH=/s3 ./target/release/ironbucket

Configuration via environment variables or .env file:
# Storage
STORAGE_PATH=/s3 # Directory for object storage
MAX_FILE_SIZE=5368709120 # Max file size (5GB default)
# Server
PORT=9000 # Server port
RUST_LOG=ironbucket=info # Logging level
# Authentication (S3 compatible)
ACCESS_KEY=root
SECRET_KEY=xxxxxxxxxxxxxxxxxxxxx
services:
  ironbucket:
    build: .
    ports:
      - "172.17.0.1:20000:9000"
    volumes:
      - ./s3:/s3
    environment:
      - STORAGE_PATH=/s3
      - RUST_LOG=ironbucket=warn,tower_http=warn
    restart: always

| Operation | Endpoint | Description |
|---|---|---|
| List Buckets | GET / | List all buckets |
| Create Bucket | PUT /{bucket} | Create a new bucket |
| Delete Bucket | DELETE /{bucket} | Delete an empty bucket |
| Head Bucket | HEAD /{bucket} | Check if bucket exists |
| List Objects | GET /{bucket} | List objects in bucket |
| Get Location | GET /{bucket}?location | Get bucket location |
| Get Versioning | GET /{bucket}?versioning | Get versioning status |
| Get ACL | GET /{bucket}?acl | Get bucket ACL |
| List Uploads | GET /{bucket}?uploads | List multipart uploads |
| Batch Delete | POST /{bucket}?delete | Delete multiple objects |
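These bucket endpoints map one-to-one onto SDK calls. A small sketch using aws-sdk-s3, assuming a client configured as in the quick-start example above (the bucket name is a placeholder):

```rust
use aws_sdk_s3::{Client, Error};

// Sketch: list all buckets (GET /) and the objects in one bucket (GET /{bucket}).
// `client` is an aws-sdk-s3 Client configured for IronBucket as in the earlier example.
async fn list_everything(client: &Client, bucket: &str) -> Result<(), Error> {
    let buckets = client.list_buckets().send().await?;
    for b in buckets.buckets() {
        println!("bucket: {}", b.name().unwrap_or_default());
    }

    let objects = client.list_objects().bucket(bucket).send().await?;
    for obj in objects.contents() {
        println!("key: {} size: {:?}", obj.key().unwrap_or_default(), obj.size());
    }
    Ok(())
}
```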
| Operation | Endpoint | Description |
|---|---|---|
| Put Object | PUT /{bucket}/{key} | Upload an object |
| Get Object | GET /{bucket}/{key} | Download an object |
| Delete Object | DELETE /{bucket}/{key} | Delete an object |
| Head Object | HEAD /{bucket}/{key} | Get object metadata |
| Get Object ACL | GET /{bucket}/{key}?acl | Get object ACL |
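Likewise for object metadata and ACLs; a sketch under the same assumptions (client, bucket, and key are placeholders):

```rust
use aws_sdk_s3::{Client, Error};

// Sketch: HEAD /{bucket}/{key} for metadata and GET /{bucket}/{key}?acl for the ACL.
// `client` is an aws-sdk-s3 Client configured for IronBucket as in the earlier example.
async fn inspect_object(client: &Client, bucket: &str, key: &str) -> Result<(), Error> {
    let head = client.head_object().bucket(bucket).key(key).send().await?;
    println!("size: {:?} etag: {:?}", head.content_length(), head.e_tag());

    let acl = client.get_object_acl().bucket(bucket).key(key).send().await?;
    for grant in acl.grants() {
        println!("grantee: {:?} permission: {:?}", grant.grantee(), grant.permission());
    }
    Ok(())
}
```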
| Operation | Endpoint | Description |
|---|---|---|
| Initiate Upload | POST /{bucket}/{key}?uploads | Start multipart upload |
| Upload Part | PUT /{bucket}/{key}?partNumber=N&uploadId=ID | Upload a part |
| Complete Upload | POST /{bucket}/{key}?uploadId=ID | Complete multipart upload |
| Abort Upload | DELETE /{bucket}/{key}?uploadId=ID | Abort multipart upload |
| List Parts | GET /{bucket}/{key}?uploadId=ID | List uploaded parts |
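Clients usually drive the multipart endpoints through an SDK rather than raw HTTP. A hedged sketch of the flow with aws-sdk-s3, using a single part for brevity (real uploads split the payload into parts of at least 5 MiB except the last); the client is assumed configured as in the quick-start example:

```rust
use aws_sdk_s3::primitives::ByteStream;
use aws_sdk_s3::types::{CompletedMultipartUpload, CompletedPart};
use aws_sdk_s3::{Client, Error};

// Sketch of the multipart flow: initiate, upload parts, then complete.
async fn multipart_upload(client: &Client, bucket: &str, key: &str, data: Vec<u8>) -> Result<(), Error> {
    // POST /{bucket}/{key}?uploads
    let started = client.create_multipart_upload().bucket(bucket).key(key).send().await?;
    let upload_id = started.upload_id().unwrap_or_default();

    // PUT /{bucket}/{key}?partNumber=1&uploadId=ID
    // A real upload would loop here, sending one numbered part per chunk of `data`.
    let part = client
        .upload_part()
        .bucket(bucket)
        .key(key)
        .upload_id(upload_id)
        .part_number(1)
        .body(ByteStream::from(data))
        .send()
        .await?;

    // POST /{bucket}/{key}?uploadId=ID with the collected part numbers and ETags.
    let completed = CompletedMultipartUpload::builder()
        .parts(
            CompletedPart::builder()
                .part_number(1)
                .e_tag(part.e_tag().unwrap_or_default())
                .build(),
        )
        .build();
    client
        .complete_multipart_upload()
        .bucket(bucket)
        .key(key)
        .upload_id(upload_id)
        .multipart_upload(completed)
        .send()
        .await?;
    Ok(())
}
```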
# Configure credentials
export AWS_ACCESS_KEY_ID=root
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxx
export AWS_ENDPOINT=http://172.17.0.1:20000
# Create a bucket
aws --endpoint-url $AWS_ENDPOINT s3 mb s3://my-bucket
# Upload a file
aws --endpoint-url $AWS_ENDPOINT s3 cp file.txt s3://my-bucket/
# List objects
aws --endpoint-url $AWS_ENDPOINT s3 ls s3://my-bucket/
# Download a file
aws --endpoint-url $AWS_ENDPOINT s3 cp s3://my-bucket/file.txt ./downloaded.txt
# Delete a file
aws --endpoint-url $AWS_ENDPOINT s3 rm s3://my-bucket/file.txt
# Remove bucket
aws --endpoint-url $AWS_ENDPOINT s3 rb s3://my-bucket

# Download warp
wget https://github.com/minio/warp/releases/download/v0.7.11/warp_0.7.11_Linux_x86_64.tar.gz
tar -xzf warp_0.7.11_Linux_x86_64.tar.gz
# Run mixed benchmark
./warp mixed \
--host=localhost:20000 \
--access-key=root \
--secret-key=xxxxxxxxxxxxxxxxxxxxx \
--autoterm \
--duration=60s \
--concurrent=50
# Run specific operation benchmarks
./warp get --host=localhost:20000 ... # Test GET performance
./warp put --host=localhost:20000 ... # Test PUT performance
./warp delete --host=localhost:20000 ... # Test DELETE performance

- Axum Web Framework: High-performance async HTTP server
- Tokio Runtime: Async I/O and task scheduling
- Chunked Transfer Parser: Handles AWS chunked encoding with signatures
- Storage Layer: Direct filesystem operations with optional caching
- Auth Middleware: AWS Signature V4 validation
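For orientation only, here is a toy sketch of that layering with Axum 0.7 and Tokio. This is not the IronBucket source; the route shape, port, and /s3 storage root are assumptions.

```rust
use axum::{body::Bytes, extract::Path, http::StatusCode, routing::put, Router};

// Toy PUT/GET object handlers: Axum routes the request, Tokio performs async filesystem I/O.
// IronBucket additionally runs SigV4 auth middleware and a chunked-transfer decoder before storage.
async fn put_object(Path((bucket, key)): Path<(String, String)>, body: Bytes) -> StatusCode {
    if tokio::fs::create_dir_all(format!("/s3/{bucket}")).await.is_err() {
        return StatusCode::INTERNAL_SERVER_ERROR;
    }
    match tokio::fs::write(format!("/s3/{bucket}/{key}"), &body).await {
        Ok(()) => StatusCode::OK,
        Err(_) => StatusCode::INTERNAL_SERVER_ERROR,
    }
}

async fn get_object(Path((bucket, key)): Path<(String, String)>) -> Result<Bytes, StatusCode> {
    tokio::fs::read(format!("/s3/{bucket}/{key}"))
        .await
        .map(Bytes::from)
        .map_err(|_| StatusCode::NOT_FOUND)
}

#[tokio::main]
async fn main() {
    // Real S3 keys can contain slashes, so a production router would use a wildcard segment.
    let app = Router::new().route("/:bucket/:key", put(put_object).get(get_object));
    let listener = tokio::net::TcpListener::bind("0.0.0.0:9000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```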
# Check storage usage
du -sh /opt/app/ironbucket/s3/
# Clean up all storage
rm -rf /opt/app/ironbucket/s3/*
# View logs
docker-compose logs -f ironbucket
# Check container status
docker-compose ps
# Monitor performance
docker stats ironbucket

# Run in development mode
RUST_LOG=debug cargo run
# Run tests
cargo test
# Format code
cargo fmt
# Check for issues
cargo clippy

# Check what's using port 20000
netstat -tlnp | grep 20000
# Stop IronBucket
docker-compose down

# Fix permissions
sudo chown -R $USER:$USER /opt/app/ironbucket/s3

# Stop services
docker-compose down
# Clear storage
rm -rf s3/* redis-data/*
# Restart
docker-compose up -d

- Object versioning support (Completed; see the client sketch after this list)
- Bucket policies and IAM integration (Completed)
- Server-side encryption (Completed - AES-256-GCM)
- CORS configuration support (Completed)
- Object lifecycle management
- Bucket analytics and metrics
- Replication
- Event notifications
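Versioning is listed above as completed. Assuming IronBucket also accepts the standard PutBucketVersioning request (the endpoint table documents only the GET /{bucket}?versioning query), a client could toggle it like this with aws-sdk-s3, using the same client configuration as the quick-start example:

```rust
use aws_sdk_s3::types::{BucketVersioningStatus, VersioningConfiguration};
use aws_sdk_s3::{Client, Error};

// Sketch: enable versioning on a bucket, then read the status back.
async fn enable_versioning(client: &Client, bucket: &str) -> Result<(), Error> {
    client
        .put_bucket_versioning()
        .bucket(bucket)
        .versioning_configuration(
            VersioningConfiguration::builder()
                .status(BucketVersioningStatus::Enabled)
                .build(),
        )
        .send()
        .await?;

    let current = client.get_bucket_versioning().bucket(bucket).send().await?;
    println!("versioning status: {:?}", current.status());
    Ok(())
}
```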
Comprehensive documentation is available in the /doc folder:
- Installation Guide - Complete installation instructions for various platforms
- Configuration Guide - Detailed configuration options and environment variables
- Security Guide - Security best practices and authentication setup
- API Reference - Complete S3 API endpoint documentation
- CLI Usage - Command-line interface guide and examples
- Node.js SDK - Node.js integration and AWS SDK usage
- Python SDK - Python boto3 integration guide
- Rust SDK - Rust AWS SDK integration examples
- Performance Guide - Performance tuning and optimization tips
- Troubleshooting - Common issues and solutions
- Documentation Index - Overview of all documentation
Contributions are welcome! Please ensure:
- Code follows Rust best practices
- All tests pass
- Performance benchmarks show no regression
- Documentation is updated
- GitHub Issues: Report bugs
- Discussions: Ask questions
- Security: Report vulnerabilities
This project is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0) - see the LICENSE file for details.