# Simple LLM Client
A Rust crate for interacting with Large Language Model APIs, streamlining content creation, research, and information synthesis for use in retrieval-augmented generation (RAG) applications.
## Features

- **Perplexity AI Integration**: Connect to the Perplexity AI API for research-oriented completions
- **Markdown Output**: Automatically format responses as Markdown with proper citation formatting
- **Streaming Support**: Stream responses in real time or receive complete responses
- **Citation Handling**: Extract and format citations from AI responses
- **Multiple Provider Support**: Future releases will add more providers (OpenAI, Anthropic, Google, etc.)
## Installation

Add this to your Cargo.toml:

```toml
[dependencies]
simple-llm-client = "0.2"
```
For local development, you can instead point the dependency at a local checkout:

```toml
[dependencies]
simple-llm-client = { path = "path/to/llm_client" }
```
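The examples below are async and use the Tokio runtime, so your project also needs tokio as a dependency. The `full` feature set shown here is the simplest choice; a narrower set such as `macros` and `rt-multi-thread` also works:

```toml
[dependencies]
simple-llm-client = "0.2"
tokio = { version = "1", features = ["full"] }
```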
## Usage

### Basic Example
```rust
use simple_llm_client::perplexity::{chat_completion, models::ChatMessage};
use std::error::Error;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let messages = vec![
        ChatMessage {
            role: "system".to_string(),
            content: "Be precise and concise.".to_string(),
        },
        ChatMessage {
            role: "user".to_string(),
            content: "How many stars are there in our galaxy?".to_string(),
        },
    ];

    // Stream the response to stdout
    chat_completion("sonar-pro", messages).await?;

    Ok(())
}
```
### Markdown Output Example
```rust
use simple_llm_client::perplexity::{chat_completion_markdown, models::ChatMessage};
use std::{error::Error, path::Path};

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let messages = vec![
        ChatMessage {
            role: "system".to_string(),
            content: "Be precise and concise. Return the response as markdown.".to_string(),
        },
        ChatMessage {
            role: "user".to_string(),
            content: "Explain the difference between fusion and fission.".to_string(),
        },
    ];

    // Save the formatted response to a file
    chat_completion_markdown("sonar-pro", messages, Some(Path::new("./output")), "research_result.md").await?;

    Ok(())
}
```
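The exact file contents depend on the model's answer, but given the citation-formatting feature described above, the saved Markdown might look roughly like this (illustrative only; placeholder URLs):

```markdown
Fusion combines light nuclei into heavier ones, releasing energy [1],
while fission splits heavy nuclei such as uranium-235 [2].

## Citations

1. https://example.com/fusion-overview
2. https://example.com/fission-basics
```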
## Configuration

The crate requires an API key for each provider you use, set in your environment or in a .env file:

```text
PERPLEXITY_API_KEY=your_api_key_here
OPENAI_API_KEY=your_api_key_here
```

For the examples to work correctly, create a .env file in the project root containing your API keys.
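If you load the .env file yourself (for instance, when the key is not already exported in your shell), a common pattern is the `dotenvy` crate. This is a minimal sketch assuming your binary, rather than the library, is responsible for loading the file:

```rust
// Minimal sketch: load .env before making any API calls.
// Assumes `dotenvy` has been added to [dependencies].
fn main() {
    // Reads ./.env and injects its variables into the process environment.
    dotenvy::dotenv().ok();

    // Verify the key is visible before calling the client.
    let key = std::env::var("PERPLEXITY_API_KEY")
        .expect("PERPLEXITY_API_KEY must be set in the environment or .env");
    println!("loaded key with {} characters", key.len());
}
```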
## Directory Structure

When using the file output functionality, the output directory must exist. The example code creates it automatically, but you can also create it by hand:

```sh
mkdir -p output  # Create the output directory if it doesn't exist
```
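Programmatically, ensuring the directory exists is a single standard-library call, which is presumably what the example code does:

```rust
use std::fs;

fn main() -> std::io::Result<()> {
    // Equivalent to `mkdir -p output`: creates the directory and any
    // missing parents, and succeeds if it already exists.
    fs::create_dir_all("./output")?;
    Ok(())
}
```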
## Roadmap

- **Additional Providers**: Support for other AI research and generation APIs in future releases:
  - Anthropic (Claude models)
  - Google (Gemini models)
  - Others based on community demand
- **Advanced Formatting Options**: Customizable output formatting and templates
- **Citation Style Options**: Support for different citation styles (APA, MLA, etc.)
- **Context Management**: Tools for managing conversation context and history
- **Multi-provider Research**: Aggregate and compare responses from multiple providers
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Examples

The crate includes several examples to help you get started.

### Running Examples

To run any example, use the `cargo run --example` command:
```sh
# Test the Perplexity implementation
cargo run --example perplexity --features perplexity

# Test the OpenAI implementation
cargo run --example openai --features openai
```
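The `--features` flags above suggest the provider integrations are gated behind Cargo features. To use a provider from your own project rather than from the bundled examples, enable the matching feature on the dependency (feature names assumed to match the example flags):

```toml
[dependencies]
simple-llm-client = { version = "0.2", features = ["perplexity", "openai"] }
```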
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.