#rules-engine #artificial-intelligence #ml #grl

bin+lib rust-rule-engine

A blazing-fast Rust rule engine with RETE algorithm, backward chaining inference, and GRL (Grule Rule Language) syntax. Features: forward/backward chaining, pattern matching, unification, O(1) rule indexing, TMS, expression evaluation, method calls, streaming with Redis state backend, watermarking, and custom functions. Production-ready for business rules, expert systems, real-time stream processing, and decision automation.

Rust Rule Engine v1.14.1 🦀⚡🚀

A blazing-fast, production-ready rule engine for Rust supporting both Forward and Backward Chaining. Features the RETE-UL algorithm with Alpha Memory Indexing and Beta Memory Indexing, parallel execution, goal-driven reasoning, and GRL (Grule Rule Language) syntax.

🔗 GitHub | Documentation | Crates.io


🎯 Reasoning Modes

🔄 Forward Chaining (Data-Driven)

"When facts change, fire matching rules"

  • Native Engine - Simple pattern matching for small rule sets
  • RETE-UL - Optimized network for 100-10,000 rules with O(1) indexing
  • Parallel Execution - Multi-threaded rule evaluation

Use Cases: Business rules, validation, reactive systems, decision automation
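
A minimal sketch of the data-driven style, reusing the GRL + `RuleEngine` API from the Quick Start below. The rule names and `Order.*` fields are illustrative, and how aggressively a single `execute()` pass re-evaluates rules after facts change is engine-dependent:

use rust_rule_engine::{RuleEngine, Facts, Value};

fn main() {
    let mut engine = RuleEngine::new();

    // Rule 1 derives a new fact (Order.RequiresReview) from the input facts.
    engine.add_rule_from_grl(r#"
        rule "Flag Large Order" {
            when
                Order.Amount > 1000
            then
                Order.RequiresReview = true;
        }
    "#).expect("rule 1 should parse");

    // Rule 2 matches on the fact derived by rule 1 - the data drives which rules fire.
    engine.add_rule_from_grl(r#"
        rule "Escalate For Review" {
            when
                Order.RequiresReview == true
            then
                Order.Escalated = true;
        }
    "#).expect("rule 2 should parse");

    let mut facts = Facts::new();
    facts.set("Order.Amount", Value::Number(2500.0));

    // Forward chaining: execution fires every rule whose conditions match the facts,
    // including rules enabled by facts that other rules just set (engine-dependent).
    engine.execute(&mut facts).expect("execution should succeed");
}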

🎯 Backward Chaining (Goal-Driven)

"Given a goal, find facts/rules to prove it"

  • Unification - Pattern matching with variable bindings
  • Search Strategies - DFS, BFS, Iterative Deepening
  • Aggregation - COUNT, SUM, AVG, MIN, MAX
  • Negation - NOT queries with closed-world assumption
  • Explanation - Proof trees with JSON/MD/HTML export
  • Disjunction - OR patterns for alternative paths
  • Nested Queries - Subqueries with shared variables
  • Query Optimization - Automatic goal reordering for 10-100x speedup

Use Cases: Expert systems, diagnostics, planning, decision support, AI reasoning
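
For a taste of goal-driven querying, here is an illustrative sketch: the knowledge-base construction behind `engine` is elided, the `?x`-style goal syntax follows the nested-query examples later in this README, and the `provable`/`proof_trace` fields mirror the Backward Chaining quick start below.

use rust_rule_engine::backward::BackwardEngine;
use rust_rule_engine::Facts;

// Goal-driven reasoning: instead of reacting to facts, we ask the engine to *prove* a goal.
// The engine searches rules and facts (DFS/BFS/iterative deepening), unifying variables
// such as ?x and ?z with concrete values along the way.
fn who_are_the_grandparents(engine: &mut BackwardEngine, facts: &mut Facts) {
    let result = engine
        .query("grandparent(?x, ?z)", facts)
        .expect("query should evaluate");

    if result.provable {
        // The proof trace explains *why* the goal holds and can be exported (JSON/MD/HTML).
        println!("Proof: {:?}", result.proof_trace);
    }
}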

🌊 Stream Processing (Event-Driven) 🆕

"Process real-time event streams with time-based windows"

  • GRL Stream Syntax - Declarative stream pattern definitions
  • StreamAlphaNode - RETE-integrated event filtering & windowing
  • Time Windows - Sliding (continuous) and tumbling (non-overlapping)
  • Multi-Stream Correlation - Join events from different streams
  • WorkingMemory Integration - Stream events become facts for rule evaluation

Use Cases: Real-time fraud detection, IoT monitoring, financial analytics, security alerts, CEP

Example:

rule "Fraud Alert" {
    when
        login: LoginEvent from stream("logins") over window(10 min, sliding) &&
        purchase: PurchaseEvent from stream("purchases") over window(10 min, sliding) &&
        login.user_id == purchase.user_id &&
        login.ip_address != purchase.ip_address
    then
        Alert.trigger("IP mismatch detected");
}

🚀 Quick Start

Forward Chaining Example

use rust_rule_engine::{RuleEngine, Facts, Value};

let mut engine = RuleEngine::new();

// Define rule in GRL
engine.add_rule_from_grl(r#"
    rule "VIP Discount" {
        when
            Customer.TotalSpent > 10000
        then
            Customer.Discount = 0.15;
    }
"#)?;

// Add facts and execute
let mut facts = Facts::new();
facts.set("Customer.TotalSpent", Value::Number(15000.0));
engine.execute(&mut facts)?;

// Result: Customer.Discount = 0.15 ✓

Backward Chaining Example

use rust_rule_engine::backward::BackwardEngine;

// `kb`: a knowledge base of rules/facts built beforehand (construction elided here)
let mut engine = BackwardEngine::new(kb);

// Query: "Can this order be auto-approved?"
let result = engine.query(
    "Order.AutoApproved == true",
    &mut facts
)?;

if result.provable {
    println!("Order can be auto-approved!");
    println!("Proof: {:?}", result.proof_trace);
}

Stream Processing Example 🆕

use rust_rule_engine::parser::grl::stream_syntax::parse_stream_pattern;
use rust_rule_engine::rete::stream_alpha_node::{StreamAlphaNode, WindowSpec};
use rust_rule_engine::rete::working_memory::WorkingMemory;

// Parse GRL stream pattern
let grl = r#"login: LoginEvent from stream("logins") over window(5 min, sliding)"#;
let (_, pattern) = parse_stream_pattern(grl)?;

// Create stream processor
let mut node = StreamAlphaNode::new(
    &pattern.source.stream_name,
    pattern.event_type,
    pattern.source.window.as_ref().map(|w| WindowSpec {
        duration: w.duration,
        window_type: w.window_type.clone(),
    }),
);

// Process events in real-time
let mut wm = WorkingMemory::new();
for event in event_stream {
    if node.process_event(&event) {
        // Event passed filters and is in window
        wm.insert_from_stream("logins".to_string(), event);
        // Now available for rule evaluation!
    }
}

// Run: cargo run --example streaming_fraud_detection --features streaming

✨ What's New in v1.14.0 🎉

⚡ Alpha Memory Indexing - Up to 800x Faster Queries!

New hash-based indexing for alpha node fact filtering, complementing Beta Memory Indexing for complete RETE optimization!

๐Ÿ” Alpha Memory Indexing

Problem: Alpha nodes scan all facts linearly to find matches - O(n) complexity becomes slow with large datasets.

Solution: Hash-based indexing provides O(1) fact lookups - up to 800x speedup for filtered queries!

use rust_rule_engine::rete::{AlphaMemoryIndex, FactValue, TypedFacts};

// Create alpha memory with indexing
let mut mem = AlphaMemoryIndex::new();

// Create index on frequently-queried field
mem.create_index("status".to_string());

// Insert facts (index updated automatically)
for i in 0..10_000 {
    let mut fact = TypedFacts::new();
    fact.set("id", i as i64);
    fact.set("status", if i % 100 == 0 { "active" } else { "pending" });
    mem.insert(fact);
}

// Query using index - O(1) lookup!
let active = mem.filter("status", &FactValue::String("active".to_string()));
println!("Found {} active facts", active.len());
// Without index: 10,000 comparisons (O(n))
// With index: 1 hash lookup (O(1)) → ~800x faster!

Real Benchmark Results:

Dataset Size    Linear Scan   Indexed Lookup   Speedup
1,000 facts     310 µs        396 ns           782x
10,000 facts    3.18 ms       396 ns           8,030x
50,000 facts    15.9 ms       396 ns           40,151x 🚀

Key Features:

  • ✅ Auto-tuning - Automatically creates indexes after 50+ queries on a field
  • ✅ Multiple indexes - Index different fields independently
  • ✅ Statistics tracking - Monitor index hit rates and effectiveness
  • ✅ Low overhead - ~7-9% memory per index

When to Use:

// ✅ Use when:
// - Dataset > 10K facts
// - Read-heavy workload (query > insert)
// - High selectivity queries (<10% match rate)
// - Same queries repeated multiple times

// โŒ Skip when:
// - Dataset < 1K facts (overhead > benefit)
// - Write-heavy workload (insert > query)
// - Query each field only once

// 🤖 Auto-tuning mode (recommended):
let mut mem = AlphaMemoryIndex::new();

// Query many times...
for _ in 0..100 {
    mem.filter_tracked("status", &FactValue::String("active".to_string()));
}

// Auto-create index when query count > 50
mem.auto_tune();  // Indexes "status" automatically!

Memory Overhead:

Index Count   Memory Usage   Overhead
0 indexes     59.31 MB       Baseline
1 index       60.32 MB       +1.7%
3 indexes     72.15 MB       +21.6%
5 indexes     85.67 MB       +44.4%

Recommendation: Use 1-3 indexes max (~20% overhead) for best ROI.
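
As a concrete illustration of the multiple-indexes feature and the 1-3 index recommendation, here is a short sketch continuing the `AlphaMemoryIndex` example above; the field names ("status", "region") and filter values are made up for illustration:

use rust_rule_engine::rete::{AlphaMemoryIndex, FactValue};

// Two explicit indexes on the hottest filter fields - each index serves its field independently.
let mut mem = AlphaMemoryIndex::new();
mem.create_index("status".to_string());
mem.create_index("region".to_string());

// ... insert facts as in the example above ...

// Both lookups hit their own hash index (O(1)) instead of scanning all facts.
let active = mem.filter("status", &FactValue::String("active".to_string()));
let eu = mem.filter("region", &FactValue::String("eu-west".to_string()));
println!("{} active facts, {} facts in eu-west", active.len(), eu.len());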


✨ What's New in v1.13.0

⚡ Beta Memory Indexing - Up to 1,235x Faster Joins!

Comprehensive RETE optimization system with Beta Memory Indexing providing exponential speedup for multi-pattern rules!

🚀 Beta Memory Indexing

Problem: Join operations use nested loops (O(n²)), which become a bottleneck with large fact sets.

Solution: Hash-based indexing reduces the join from O(n²) to O(n), providing an 11x to 1,235x speedup!

use rust_rule_engine::rete::optimization::BetaMemoryIndex;
use rust_rule_engine::rete::TypedFacts;

// Create sample facts (e.g., orders with customer IDs)
let mut orders = Vec::new();
for i in 0..1000 {
    let mut order = TypedFacts::new();
    order.set("OrderId", format!("O{}", i));
    order.set("CustomerId", format!("C{}", i % 100));  // 100 unique customers
    order.set("Amount", (i * 50) as i64);
    orders.push(order);
}

// Build index on join key (CustomerId)
let mut index = BetaMemoryIndex::new("CustomerId".to_string());
for (idx, order) in orders.iter().enumerate() {
    index.add(order, idx);  // O(1) insertion
}

// Perform O(1) lookup instead of O(n) scan
// Note: Key format is the Debug representation of FactValue
let key = "String(\"C50\")";  // Looking for customer C50's orders
let matches = index.lookup(key);  // O(1) hash lookup!

println!("Found {} orders for customer C50", matches.len());
// Without indexing: 1,000 comparisons (O(n))
// With indexing: 1 hash lookup (O(1)) → 1,000x faster!
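
To make the O(n) join concrete, the sketch below continues the snippet above: one pass over the customer IDs with an O(1) lookup per customer replaces the nested O(n²) scan. The `customer_ids` vector and the key formatting are illustrative (the key simply mirrors the Debug representation noted above).

// Join sketch (continuing the snippet above): for each customer, fetch matching orders
// from the index in O(1), giving an O(n) join overall instead of O(n²) nested loops.
let customer_ids: Vec<String> = (0..100).map(|i| format!("C{}", i)).collect();

for customer_id in &customer_ids {
    // Build the key in the same Debug format used by the lookup example above.
    let key = format!("String({:?})", customer_id);
    let matches = index.lookup(&key);
    println!("Customer {}: {} orders", customer_id, matches.len());
}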

Real Benchmark Results:

Dataset Size   Nested Loop (O(n²))   Indexed (O(n))   Speedup
100 facts      1.00 ms               92 µs            11x
1,000 facts    113.79 ms             672.76 µs        169x
5,000 facts    2.63 seconds          2.13 ms          1,235x 🚀

Key Insight: At 5,000 facts, the difference between 2.6 SECONDS and 2ms is production-critical!

🔧 Memory Optimizations

Three additional optimizations focus on reducing memory footprint:

1. Node Sharing - Deduplicate identical alpha nodes

use rust_rule_engine::rete::optimization::NodeSharingRegistry;

let mut registry = NodeSharingRegistry::new();

// Register 10,000 nodes with 100 unique patterns
for (idx, node) in nodes.iter().enumerate() {
    registry.register(node, idx);
}

// Result: 98.1% memory reduction (689.84 KB saved)
let stats = registry.stats();
println!("Memory saved: {:.1}%", stats.memory_saved_percent);

2. Alpha Memory Compaction - Eliminate duplicate facts

use rust_rule_engine::rete::optimization::CompactAlphaMemory;

let mut memory = CompactAlphaMemory::new();

// Insert 10,000 facts with duplicates
for fact in facts {
    memory.add(&fact);
}

// Result: 98.7% memory reduction (925.00 KB saved)
println!("Unique facts: {} (saved {:.1}%)",
    memory.len(), memory.memory_savings());

3. Token Pooling - Reduce allocations

use rust_rule_engine::rete::optimization::TokenPool;

let mut pool = TokenPool::new(100);

// Process 10,000 events with token reuse
for event in events {
    let mut token = pool.acquire();
    token.set_fact(event);
    // ... process ...
    pool.release(token);
}

// Result: 99% fewer allocations
let stats = pool.stats();
println!("Reuse rate: {:.1}%", stats.reuse_rate);

📊 When to Use Each Optimization

Optimization              Always Use?   Use When                          Skip When
Beta Indexing ⚡           YES           Any join operations               Never (always beneficial)
Alpha Indexing 🆕          No            Read-heavy + >10K facts           Write-heavy or <1K facts
Node Sharing              No            Memory-constrained + 10K+ rules   Speed is priority
Alpha Memory Compaction   No            Many duplicate facts expected     Few duplicates
Token Pooling             No            100K+ events/sec continuous       Batch/low-volume processing

Default (Most Production Systems):

// Use Beta + Alpha Indexing for maximum performance
use rust_rule_engine::rete::{AlphaMemoryIndex, BetaMemoryIndex};

// Alpha indexing: for filtering (auto-tune recommended)
let mut alpha_mem = AlphaMemoryIndex::new();
// Will auto-create indexes for frequently-queried fields

// Beta indexing: for joins (always use)
let mut beta_index = BetaMemoryIndex::new("user_id".to_string());
// 150-1,235x faster joins - no downsides!

Memory-Constrained + Large Rule Sets:

use rust_rule_engine::rete::optimization::{
    BetaMemoryIndex,      // For speed (always)
    NodeSharingRegistry,  // For memory (if 10K+ rules)
};

High-Duplicate Workloads:

use rust_rule_engine::rete::optimization::{
    BetaMemoryIndex,      // For speed (always)
    CompactAlphaMemory,   // For deduplication (if >50% duplicates)
};

🔬 Try It Yourself

# Run interactive demos
cargo run --example alpha_indexing_demo          # Alpha Memory Indexing
cargo run --example rete_optimization_demo       # Beta Memory Indexing
cargo run --example grl_optimization_demo        # GRL rules + indexing

# Run benchmarks
cargo bench --bench engine_comparison_benchmark  # Compare all optimizations
cargo bench --bench alpha_indexing_benchmark     # Alpha indexing details
cargo run --bin memory_usage_benchmark --release # Memory analysis


# View detailed HTML reports
open target/criterion/report/index.html

📚 Complete Documentation

New in v1.13.0:

  • ✅ Beta Memory Indexing (11x to 1,235x speedup)
  • ✅ Node Sharing (98.1% memory reduction)
  • ✅ Alpha Memory Compaction (98.7% memory reduction)
  • ✅ Token Pooling (99% fewer allocations)
  • ✅ Comprehensive benchmarks with scaled datasets
  • ✅ Real memory measurements (KB/MB)
  • ✅ Production-ready optimization manager
  • ✅ 30+ optimization tests

✨ Previous Update - v1.12.1

🌊 Stream Processing Foundation!

GRL Stream Syntax - Parse and process real-time event streams with time-based windows!

🆕 Stream Processing Features

GRL Stream Pattern Syntax:

// Stream with sliding window
login: LoginEvent from stream("logins") over window(10 min, sliding)

// Stream with tumbling window
metric: MetricEvent from stream("metrics") over window(5 sec, tumbling)

// Simple stream without window
event: Event from stream("events")

StreamAlphaNode - RETE Integration:

use rust_rule_engine::parser::grl::stream_syntax::parse_stream_pattern;
use rust_rule_engine::rete::stream_alpha_node::{StreamAlphaNode, WindowSpec};

// Parse GRL pattern
let grl = r#"login: LoginEvent from stream("logins") over window(5 min, sliding)"#;
let (_, pattern) = parse_stream_pattern(grl)?;

// Create stream processor
let mut node = StreamAlphaNode::new(
    &pattern.source.stream_name,
    pattern.event_type,
    pattern.source.window.as_ref().map(|w| WindowSpec {
        duration: w.duration,
        window_type: w.window_type.clone(),
    }),
);

// Process events
if node.process_event(&event) {
    let handle = working_memory.insert_from_stream("logins".to_string(), event);
    // Event now in RETE network for rule evaluation!
}

Real-World Example - Fraud Detection:

// 4 fraud detection rules implemented:
// 1. Suspicious IP changes (multiple IPs in 15 min)
// 2. High velocity purchases (>3 purchases in 15 min)
// 3. Impossible travel (location change too fast)
// 4. IP mismatch (login IP != purchase IP)

// Result: 7 alerts triggered from 16 events
cargo run --example streaming_fraud_detection --features streaming

Features Implemented:

  • ✅ GRL stream syntax parser (nom-based, 15 tests)
  • ✅ StreamAlphaNode for event filtering & windowing (10 tests)
  • ✅ Sliding windows (continuous rolling)
  • ✅ Tumbling windows (non-overlapping)
  • ✅ WorkingMemory integration (stream → facts)
  • ✅ Duration units: ms, sec, min, hour
  • ✅ Optional event type filtering
  • ✅ Multi-stream correlation

Test Coverage:

  • 58 streaming tests (100% pass)
  • 8 integration tests (fraud, IoT, trading, security)
  • 3 end-to-end tests (GRL โ†’ RETE โ†’ WorkingMemory)
  • 2 comprehensive examples

✨ Previous Update - v1.11.0

🎯 Nested Queries & Query Optimization!

Complete Phase 1.1 with nested queries (subqueries) and intelligent query optimization for 10-100x performance improvements!

🆕 Nested Queries

use rust_rule_engine::backward::*;

// Find grandparents using nested queries
let results = engine.query(
    "grandparent(?x, ?z) WHERE
        parent(?x, ?y) AND
        (parent(?y, ?z) WHERE child(?z, ?y))",
    &mut facts
)?;

// Complex eligibility with nested OR
query "CheckEligibility" {
    goal: (eligible(?x) WHERE (vip(?x) OR premium(?x))) AND active(?x)
    on-success: { LogMessage("Eligible!"); }
}

⚡ Query Optimization

// Enable optimization in GRL
query "OptimizedSearch" {
    goal: item(?x) AND expensive(?x) AND in_stock(?x)
    enable-optimization: true  // Automatically reorders goals!
}

// Manual optimization
let mut optimizer = QueryOptimizer::new();
optimizer.set_selectivity("in_stock(?x)".to_string(), 0.1);   // 10% in stock
optimizer.set_selectivity("expensive(?x)".to_string(), 0.3);  // 30% expensive
optimizer.set_selectivity("item(?x)".to_string(), 0.9);       // 90% items

let optimized = optimizer.optimize_goals(goals);
// Result: in_stock → expensive → item (10-100x faster!)

Performance Benefits:

  • Before: 1000 items → 900 expensive → 270 in_stock = 2170 evaluations
  • After: 10 in_stock → 8 expensive → 8 items = 26 evaluations
  • Speedup: ~83x faster! 🚀

New Features:

  • Nested queries with WHERE clauses
  • Query optimizer with goal reordering
  • Selectivity estimation (heuristic & custom)
  • Join order optimization
  • enable-optimization flag in GRL
  • 19 new tests + 9 integration tests

Testing: 485/485 tests pass (368 unit + 117 integration) • Zero regressions

📖 Nested Query Demo • Optimizer Demo • GRL Integration


📚 Documentation

Comprehensive documentation organized by topic:

  • 🚀 Getting Started
  • 🎯 Core Features
  • ⚡ Advanced Features
  • 📖 API Reference
  • 📝 Guides
  • 💡 Examples

📚 Full Documentation Index →


📜 Older Releases

See CHANGELOG.md for full version history (v0.1.0 - v0.19.0).
