2 releases

| Version | Released |
|---|---|
| 0.1.1 | Nov 14, 2025 |
| 0.1.0 | Nov 14, 2025 |
#928 in Machine learning · Used in 2 crates · 280KB · 6K SLoC
LLM Orchestrator Core - Workflow orchestration engine for LLM pipelines.
This crate provides the core functionality for defining, validating, and executing multi-step LLM workflows with support for DAG-based execution, parallel processing, and fault tolerance.
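To make the DAG-based execution model concrete, here is a generic sketch of how dependency-ordered scheduling works using Kahn's topological sort. This is an illustration only, not the crate's internal scheduler; the step ids are hypothetical.

```rust
use std::collections::{HashMap, VecDeque};

/// Compute a valid execution order for workflow steps.
/// `deps` maps each step id to the ids it depends on.
/// Returns `None` if the graph contains a cycle.
fn execution_order(deps: &HashMap<&str, Vec<&str>>) -> Option<Vec<String>> {
    // Count unresolved dependencies for every step.
    let mut indegree: HashMap<&str, usize> =
        deps.keys().map(|&s| (s, deps[s].len())).collect();
    // Reverse edges: dependency -> steps that wait on it.
    let mut dependents: HashMap<&str, Vec<&str>> = HashMap::new();
    for (&step, ds) in deps {
        for &d in ds {
            dependents.entry(d).or_default().push(step);
        }
    }
    // Steps with zero dependencies are ready immediately
    // (and could run in parallel).
    let mut ready: VecDeque<&str> = indegree
        .iter()
        .filter(|(_, &n)| n == 0)
        .map(|(&s, _)| s)
        .collect();
    let mut order = Vec::new();
    while let Some(step) = ready.pop_front() {
        order.push(step.to_string());
        for &next in dependents.get(step).map(|v| v.as_slice()).unwrap_or(&[]) {
            let n = indegree.get_mut(next).unwrap();
            *n -= 1;
            if *n == 0 {
                ready.push_back(next);
            }
        }
    }
    // Emitting fewer steps than were given means a cycle was detected.
    (order.len() == deps.len()).then(|| order)
}

fn main() {
    let mut deps = HashMap::new();
    deps.insert("fetch", vec![]);
    deps.insert("summarize", vec!["fetch"]);
    deps.insert("classify", vec!["fetch"]);
    deps.insert("report", vec!["summarize", "classify"]);
    let order = execution_order(&deps).expect("workflow has a cycle");
    // "fetch" must come first and "report" last; the two middle
    // steps have no ordering constraint between them.
    assert_eq!(order.first().map(String::as_str), Some("fetch"));
    assert_eq!(order.last().map(String::as_str), Some("report"));
    println!("{:?}", order);
}
```

Steps whose dependencies are all satisfied sit in the ready queue together, which is where a parallel executor would fan them out.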
Example
```rust
use llm_orchestrator_core::{Workflow, ExecutionContext};
use serde_json::json;
use std::collections::HashMap;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load workflow from YAML
    let yaml = r#"
name: "simple-workflow"
steps:
  - id: "step1"
    type: "llm"
    provider: "openai"
    model: "gpt-4"
    prompt: "Say hello to {{ name }}"
    output: ["greeting"]
"#;
    let workflow = Workflow::from_yaml(yaml)?;

    // Validate workflow
    workflow.validate()?;

    // Create execution context
    let mut inputs = HashMap::new();
    inputs.insert("name".to_string(), json!("World"));
    let ctx = ExecutionContext::new(inputs);

    // Render template with context
    let prompt = ctx.render_template("Say hello to {{ name }}")?;
    assert_eq!(prompt, "Say hello to World");
    Ok(())
}
```
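Building on the single-step example, a fan-out pipeline over several steps might be described as below. Any field not shown in the example above, in particular `depends_on`, is an assumption for illustration and may not match the crate's actual schema.

```yaml
name: "summarize-and-classify"
steps:
  - id: "fetch"
    type: "llm"
    provider: "openai"
    model: "gpt-4"
    prompt: "Extract the main text from: {{ url }}"
    output: ["text"]
  - id: "summarize"
    type: "llm"
    provider: "openai"
    model: "gpt-4"
    prompt: "Summarize: {{ text }}"
    depends_on: ["fetch"]   # assumed field name, not confirmed by the crate
    output: ["summary"]
  - id: "classify"
    type: "llm"
    provider: "openai"
    model: "gpt-4"
    prompt: "Classify the topic of: {{ text }}"
    depends_on: ["fetch"]   # assumed field name, not confirmed by the crate
    output: ["topic"]
```

Since `summarize` and `classify` both depend only on `fetch`, a DAG executor is free to run them in parallel.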
Dependencies: ~20–42MB, ~627K SLoC