§Meridian
Meridian is a framework for building local-first, context-aware applications with LLM-powered agents.
§Intuitive interfaces
Meridian is used to build local applications where humans and AI work together to solve problems. To make this easy, we provide three core interfaces for interacting with humans and AI:
- LLM Prompts
- Human-in-the-loop questions
- Classic LLM calls
§LLM Prompts
LLM prompts are used to interact with LLMs. They are simple to create and provide a constructor pattern that lets you build prompts of varying complexity. As Meridian matures we’ll continue to improve the prompt interface to enable more sophisticated prompt engineering.
We provide some utility methods on the prompts themselves to make it easy to send prompts directly to an LLM.
Examples
use meridian::{
    prompts::{UserPrompt, SystemPrompt, BasePrompt},
    llms::{Model, ProviderModel},
};

// Simple user prompt
let prompt = UserPrompt("Explain what a Rust macro is".to_string());
let response = prompt.get_response(ProviderModel::Anthropic(Model::Claude3Sonnet))?;

// Prompt with system message
let system = SystemPrompt("You are a Rust programming expert.".to_string());
let user = UserPrompt("How do I implement the Display trait?".to_string());

// Add additional instructions
let prompt_with_instructions = user.with_additional_instructions(
    "Include practical examples in your explanation"
);
§Human-in-the-loop questions
Human-in-the-loop questions block the application from proceeding until a user has provided the necessary information.
Questions can be embedded in a survey, which bundles questions together and provides a way to ask a user a series of interdependent questions.
Examples
use meridian::question_and_answers::{Question, QuestionType};
let question = Question::new(
    "project_name",
    "What should we name this project?",
    QuestionType::Text
);

let answer = question.ask()?;
println!("Project will be named: {}", answer);
§Classic LLM calls
Classic LLM calls are the most basic interface: a way to interact with an LLM directly, without any additional context.
When you need more control over prompts and the LLMs themselves, you can call LLMs using patterns that will be familiar if you’ve used frameworks like LangChain.
Examples
use meridian::llms::{
    anthropic::{AnthropicClient, AnthropicMessage, AnthropicMessageContent},
    open_ai::{OpenAIClient, OpenAIMessage},
    messages::Role,
    Model,
};

// Using Anthropic
let client = AnthropicClient::new()
    .with_model(Model::Claude3Sonnet);

let messages = client.get_completion(vec![
    AnthropicMessage::new(
        vec![AnthropicMessageContent::Text {
            text: "Explain async/await".to_string()
        }],
        Role::User
    )
])?;

// Using OpenAI
let client = OpenAIClient::new()
    .with_model(Model::Gpt4);

let response = client.get_completion(vec![
    OpenAIMessage::new("Explain traits", Role::User)
])?;
§Context Management
Context is critical to solving complex problems with LLMs. Meridian provides a context manager that lets you manage the context you intend to pass to an LLM. Context is stored in memory and can be persisted to disk if needed. We plan to add vector search capabilities in the future to enable more sophisticated context management; for now, you manage the context yourself as you build your application.
Examples
use meridian::{
    context::ContextSession,
    prompts::{UserPrompt, BasePrompt},
};

let mut context = ContextSession::new();

// Add code context
context.add_file("src/main.rs", "fn main() { println!(\"Hello\"); }");

// Use context in prompts
let prompt = UserPrompt("Explain this code".to_string())
    .with_context(&context);
§In-memory context
Examples
use meridian::context::ContextSession;
let mut context = ContextSession::new();
context.add_memory("Previous conversation about error handling");
context.add_file("Cargo.toml", "...");
§Persistence
Examples
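Meridian’s persistence API isn’t spelled out in this section, so the sketch below models the idea in plain Rust using only the standard library: context entries are written to disk and read back. The `save_context`/`load_context` helpers and the tab-separated file format are illustrative assumptions, not Meridian’s actual on-disk format.

```rust
use std::fs;
use std::io::Write;

// Hypothetical stand-in for persisting context entries to disk;
// Meridian's real ContextSession persistence API may differ.
fn save_context(path: &str, entries: &[(String, String)]) -> std::io::Result<()> {
    let mut file = fs::File::create(path)?;
    for (name, body) in entries {
        // One record per line: "name<TAB>body", newlines escaped (illustration only).
        writeln!(file, "{}\t{}", name, body.replace('\n', "\\n"))?;
    }
    Ok(())
}

fn load_context(path: &str) -> std::io::Result<Vec<(String, String)>> {
    let text = fs::read_to_string(path)?;
    Ok(text
        .lines()
        .filter_map(|line| {
            let (name, body) = line.split_once('\t')?;
            Some((name.to_string(), body.replace("\\n", "\n")))
        })
        .collect())
}

fn main() -> std::io::Result<()> {
    let entries = vec![(
        "src/main.rs".to_string(),
        "fn main() { println!(\"Hello\"); }".to_string(),
    )];
    save_context("context_session.tsv", &entries)?;
    let restored = load_context("context_session.tsv")?;
    assert_eq!(restored, entries);
    Ok(())
}
```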
§Session management
Examples
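Meridian’s session-management surface isn’t documented here. As a hedged illustration of the concept (named, resumable context sessions), this plain-Rust sketch keeps per-session context in a map; `SessionStore` and `open` are hypothetical names, not part of Meridian.

```rust
use std::collections::HashMap;

// Hypothetical model of resumable sessions: each session id maps to the
// context accumulated under that id.
#[derive(Default)]
struct SessionStore {
    sessions: HashMap<String, Vec<String>>,
}

impl SessionStore {
    // Returns the session's context, creating the session if it is new.
    fn open(&mut self, id: &str) -> &mut Vec<String> {
        self.sessions.entry(id.to_string()).or_default()
    }
}

fn main() {
    let mut store = SessionStore::default();
    store.open("refactor-task").push("Discussed error handling".to_string());
    // Re-opening the same id resumes the accumulated context.
    store.open("refactor-task").push("Chose an error enum design".to_string());
    assert_eq!(store.open("refactor-task").len(), 2);
}
```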
§LLMs
§Providers
§Open AI
Examples
§Anthropic
Examples
§Retries
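The retry policy Meridian applies to provider calls isn’t detailed in this section. As a sketch of the usual technique (retrying transient failures with exponential backoff), in plain Rust with no Meridian APIs:

```rust
use std::{thread, time::Duration};

// Generic retry loop with exponential backoff. `max_attempts` must be > 0.
// The policy Meridian actually applies to LLM calls may differ.
fn retry<T, E>(
    mut attempt: impl FnMut() -> Result<T, E>,
    max_attempts: u32,
    base_delay: Duration,
) -> Result<T, E> {
    let mut delay = base_delay;
    for tries_left in (0..max_attempts).rev() {
        match attempt() {
            Ok(value) => return Ok(value),
            // Out of attempts: surface the last error.
            Err(err) if tries_left == 0 => return Err(err),
            Err(_) => {
                thread::sleep(delay);
                delay *= 2; // double the wait between attempts
            }
        }
    }
    unreachable!("max_attempts must be > 0")
}

fn main() {
    let mut calls = 0;
    // A flaky operation that succeeds on the third call.
    let result = retry(
        || {
            calls += 1;
            if calls < 3 { Err("transient failure") } else { Ok("response") }
        },
        5,
        Duration::from_millis(1),
    );
    assert_eq!(result, Ok("response"));
    assert_eq!(calls, 3);
}
```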
§Error Handling
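The crate’s error type isn’t shown in this section. The sketch below illustrates the common shape of such an API: an error enum callers can match on to decide between backing off and surfacing the failure. `LlmError` and its variants are hypothetical, not Meridian’s actual type.

```rust
use std::fmt;

// Hypothetical error enum; Meridian's real error type may differ.
#[derive(Debug)]
enum LlmError {
    RateLimited { retry_after_secs: u64 },
    Provider(String),
}

impl fmt::Display for LlmError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            LlmError::RateLimited { retry_after_secs } => {
                write!(f, "rate limited; retry after {retry_after_secs}s")
            }
            LlmError::Provider(msg) => write!(f, "provider error: {msg}"),
        }
    }
}

// Display + Debug satisfy the std Error trait's requirements.
impl std::error::Error for LlmError {}

fn main() {
    let err = LlmError::RateLimited { retry_after_secs: 30 };
    // Callers match on the variant to choose a recovery strategy.
    match &err {
        LlmError::RateLimited { .. } => { /* back off and retry */ }
        LlmError::Provider(_) => { /* surface to the user */ }
    }
    assert_eq!(err.to_string(), "rate limited; retry after 30s");
}
```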
§Tools
Examples
use meridian::{
    llms::{tools::{Toolkit, ToolChoice}, Model, ProviderModel},
    prompts::{UserPrompt, ToolPrompt},
};

let toolkit = Toolkit::new()
    .with_tool("search_code", "Search through the codebase")
    .with_tool("run_tests", "Execute test suite");

let prompt = UserPrompt("Find all async functions".to_string());
let result = prompt.get_work_output(
    &ProviderModel::OpenAI(Model::Gpt4),
    &toolkit,
    &ToolChoice::Auto
)?;
§Logging
A robust logging system lets you follow your application’s interactions with LLMs.
§Acknowledgements
Made with ❤️ by the folks at fiveonefour
§Contributions
If you’re interested in contributing to Meridian, please reach out to us at [email protected]
§License
Meridian is licensed under the MIT License