PAGANS (Prompts Aligned to Guidelines and Normalization System) is a prompt optimization framework built on OpenRouter that optimizes prompts for different LLM model families. By applying model-specific optimization strategies, PAGANS aligns your prompts with each model family's characteristics and best practices.
- 🚀 Fast Optimization: Optimize prompts in under 10 seconds
- 🎯 Model Family Optimization: Specialized optimization strategies for OpenAI, Anthropic, and Google model families
- 🔗 OpenRouter Integration: Seamless integration with OpenRouter for access to all major model families
- 🔄 Async Support: Full asynchronous API for high-throughput optimization
- 📊 Performance Metrics: Track optimization time and token usage
- 🔧 Easy Integration: Simple API that integrates seamlessly into existing workflows
- 🧠 Smart Caching: Built-in caching to avoid redundant optimizations
- ⚡ Batch Processing: Optimize multiple prompts concurrently
- 📈 Model Comparison: Compare optimization results across different model families
- 🎨 Family-Aware Prompts: Automatically adapts prompts to each model family's unique characteristics
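The smart-caching feature above implies memoizing results keyed on the prompt and target model. A minimal sketch of that idea (hypothetical illustration only; PAGANS' actual cache internals may differ):

```python
# Hypothetical illustration of prompt-optimization caching: repeated
# (prompt, target_model) pairs reuse the stored result instead of
# triggering another optimization call. Not PAGANS' actual internals.
cache: dict[tuple[str, str], str] = {}

def cached_optimize(prompt: str, target_model: str, do_optimize) -> str:
    key = (prompt, target_model)
    if key not in cache:
        cache[key] = do_optimize(prompt, target_model)  # expensive API call
    return cache[key]
```

This is why passing `use_cache=True` (the default) makes repeated optimizations of the same prompt effectively free.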
PAGANS excels at understanding the unique characteristics of different LLM model families and optimizing prompts accordingly:

**OpenAI:**
- Emphasizes clear structure and explicit instructions
- Benefits from chain-of-thought prompting techniques
- Optimized for conversational and creative tasks

**Anthropic:**
- Focuses on safety and helpfulness guidelines
- Works best with detailed context and examples
- Excels at analysis and reasoning tasks

**Google:**
- Leverages multimodal capabilities when available
- Optimized for factual accuracy and research
- Performs well with structured data and technical content
The optimization process automatically detects the target model family and applies the most effective strategies for that specific family, ensuring optimal performance across all supported models.
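Conceptually, family detection can be as simple as matching the model name's prefix. The following is an illustrative sketch of that idea; the `FAMILY_PREFIXES` dict and `detect_family` helper are hypothetical, not part of the PAGANS public API:

```python
# Illustrative sketch of model-family detection by name prefix.
# Hypothetical helper names, not PAGANS' actual implementation.
FAMILY_PREFIXES = {"gpt": "openai", "claude": "anthropic", "gemini": "google"}

def detect_family(target_model: str) -> str:
    name = target_model.split("/")[-1]  # accept "openai/gpt-4o" or "gpt-4o"
    for prefix, family in FAMILY_PREFIXES.items():
        if name.startswith(prefix):
            return family
    raise ValueError(f"Unrecognized model family: {target_model!r}")
```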
Install from PyPI:

```bash
pip install pagans
```

Or install from source:

```bash
git clone https://github.com/abubakarsiddik31/pagans.git
cd pagans
uv sync
```

For development (includes dev dependencies):

```bash
git clone https://github.com/abubakarsiddik31/pagans.git
cd pagans
uv sync --dev
```

```python
import asyncio

from pagans import PAGANSOptimizer


async def main():
    # Initialize PAGANS optimizer with your OpenRouter API key
    optimizer = PAGANSOptimizer(api_key="your-openrouter-api-key")

    # Your original prompt
    prompt = "Write a Python function to calculate fibonacci numbers"

    # Optimize for a specific model family
    result = await optimizer.optimize(
        prompt=prompt,
        target_model="claude-sonnet-4",  # Short model name
    )

    print(f"Original: {result.original}")
    print(f"Optimized: {result.optimized}")
    print(f"Target Model: {result.target_model}")
    print(f"Model Family: {result.target_family.value}")
    print(f"Optimization Time: {result.optimization_time:.2f}s")

    await optimizer.close()


asyncio.run(main())
```

PAGANS separates the target model (what you're optimizing for) from the optimizer model (what does the optimization work):
```python
# The optimizer model (from environment) does the actual optimization work.
# The target model is what you're optimizing FOR.
result = await optimizer.optimize(
    prompt="Explain quantum computing",
    target_model="claude-sonnet-4",  # What you're optimizing FOR
)

# PAGANS automatically detects the model family and applies
# the right optimization strategy.
result2 = await optimizer.optimize(
    prompt="Explain quantum computing",
    target_model="gpt-4o",  # Different model family, different strategy
)
```

Configure the optimizer model via environment variables:

```bash
export OPENROUTER_API_KEY="your-api-key-here"
export PAGANS_OPTIMIZER_MODEL="openai/gpt-4o-mini"  # Model that does the optimization work
```

```python
import asyncio

from pagans import PAGANSOptimizer


async def main():
    # Configuration loaded from environment variables
    optimizer = PAGANSOptimizer()

    result = await optimizer.optimize(
        prompt="Explain quantum computing",
        target_model="claude-3.5-sonnet",  # What you're optimizing FOR
    )

    print(f"Optimized for: {result.target_model}")
    print(f"Optimization: {result.optimized}")

    await optimizer.close()


asyncio.run(main())
```

The optimizer can also be used as an async context manager, which closes the underlying client automatically:

```python
import asyncio

from pagans import PAGANSOptimizer


async def main():
    async with PAGANSOptimizer() as optimizer:
        result = await optimizer.optimize(
            prompt="Create a REST API in Node.js",
            target_model="gemini-2.5-pro",
        )
        print(result.optimized)


asyncio.run(main())
```

Use these convenient short names with PAGANS:
| Family | Short Names | Available via OpenRouter |
|---|---|---|
| OpenAI | `gpt-4o`, `gpt-4o-mini`, `gpt-5` | ✅ |
| Anthropic | `claude-opus-4`, `claude-opus-4.1`, `claude-sonnet-4`, `claude-sonnet-3.7`, `claude-3.5-sonnet` | ✅ |
| Google | `gemini-2.5-pro`, `gemini-2.5-flash` | ✅ |

You can also use full OpenRouter model names:

| Family | Full Model Names |
|---|---|
| OpenAI | `openai/gpt-4o`, `openai/gpt-4o-mini`, `openai/gpt-5` |
| Anthropic | `anthropic/claude-3.5-sonnet`, `anthropic/claude-opus-4-20250514`, `anthropic/claude-sonnet-4-20250514` |
| Google | `google/gemini-2.5-pro`, `google/gemini-2.5-flash` |
✨ Key Feature: PAGANS automatically detects the model family and applies the appropriate optimization strategy for optimal results!
See the examples/ directory for complete working examples:
```bash
# Set your OpenRouter API key (get one at https://openrouter.ai/)
export OPENROUTER_API_KEY="your-openrouter-api-key-here"

# Run the basic example
uv run examples/openai_example.py

# Run examples for different model families
uv run examples/anthropic_example.py  # Claude models
uv run examples/google_example.py     # Gemini models

# Provider factory example (legacy)
uv run examples/provider_factory_example.py
```

Batch-optimize multiple prompts concurrently:

```python
import asyncio

from pagans import PAGANSOptimizer


async def batch_optimize():
    prompts = [
        "Write a Python function",
        "Explain machine learning",
        "Create a React component",
    ]

    async with PAGANSOptimizer() as optimizer:
        results = await optimizer.optimize_multiple(
            prompts=prompts,
            target_model="gpt-4o",
            max_concurrent=3,
        )

        for result in results:
            print(f"Optimized: {result.optimized}")


asyncio.run(batch_optimize())
```

Compare optimization results across model families:

```python
import asyncio

from pagans import PAGANSOptimizer


async def compare_models():
    prompt = "Write a function to reverse a string"

    async with PAGANSOptimizer() as optimizer:
        results = await optimizer.compare_optimizations(
            prompt=prompt,
            target_models=[
                "gpt-4o",             # OpenAI family
                "claude-3.5-sonnet",  # Anthropic family
                "gemini-2.5-pro",     # Google family
            ],
        )

        for model, result in results.items():
            print(f"{model}: {result.optimized[:50]}...")


asyncio.run(compare_models())
```

Customize client behavior at construction time:

```python
import asyncio

from pagans import PAGANSOptimizer


async def custom_config():
    optimizer = PAGANSOptimizer(
        api_key="your-api-key",
        base_url="https://custom.openrouter.url",
        timeout=30.0,
        max_retries=5,
        retry_delay=2.0,
    )

    result = await optimizer.optimize(
        prompt="Your prompt here",
        target_model="gpt-4o",
    )

    await optimizer.close()


asyncio.run(custom_config())
```

`__init__(api_key=None, base_url=None, optimizer_model=None, timeout=30.0, max_retries=3, retry_delay=1.0)`
Initialize the PAGANSOptimizer.
Parameters:
- `api_key` (str, optional): OpenRouter API key
- `base_url` (str, optional): OpenRouter base URL
- `optimizer_model` (str, optional): Model that does the optimization work
- `timeout` (float): Request timeout in seconds
- `max_retries` (int): Maximum number of retry attempts
- `retry_delay` (float): Delay between retry attempts, in seconds
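`max_retries` and `retry_delay` describe a standard retry loop. A generic sketch of that pattern, to show how the two parameters interact (illustrative only, not PAGANS' actual request code):

```python
import time

def with_retries(fn, max_retries=3, retry_delay=1.0):
    # Try the call once, then up to max_retries more times,
    # sleeping retry_delay seconds between attempts.
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries: surface the last error
            time.sleep(retry_delay)
```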
`async optimize(prompt, target_model=None, optimization_notes=None, use_cache=True)`

Optimize a prompt for a specific target model.
Parameters:
- `prompt` (str): The original prompt to optimize
- `target_model` (str, optional): Target model name
- `optimization_notes` (str, optional): Additional optimization notes
- `use_cache` (bool): Whether to use caching
Returns: OptimizationResult
`async optimize_multiple(prompts, target_model=None, optimization_notes=None, use_cache=True, max_concurrent=3)`
Optimize multiple prompts concurrently.
Parameters:
- `prompts` (list[str]): List of prompts to optimize
- `target_model` (str, optional): Target model name
- `optimization_notes` (str, optional): Additional optimization notes
- `use_cache` (bool): Whether to use caching
- `max_concurrent` (int): Maximum concurrent optimizations
Returns: list[OptimizationResult]
**OptimizationResult** attributes:

- `original` (str): Original prompt
- `optimized` (str): Optimized prompt
- `target_model` (str): Target model name
- `target_family` (ModelFamily): Model family enum
- `provider` (str, optional): Provider used (always OpenRouter)
- `optimization_notes` (str, optional): Optimization notes
- `tokens_used` (int, optional): Tokens used in optimization
- `optimization_time` (float, optional): Time taken in seconds
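For reference, the result object's shape can be modeled roughly as the dataclass below. This is a sketch inferred from the attribute list; the exact field types and the `ModelFamily` enum values are assumptions, not PAGANS' actual definitions:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ModelFamily(Enum):
    # Assumed enum values, mirroring the three supported families
    OPENAI = "openai"
    ANTHROPIC = "anthropic"
    GOOGLE = "google"

@dataclass
class OptimizationResult:
    original: str
    optimized: str
    target_model: str
    target_family: ModelFamily
    provider: Optional[str] = None
    optimization_notes: Optional[str] = None
    tokens_used: Optional[int] = None
    optimization_time: Optional[float] = None
```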
Set the following environment variables:

- `OPENROUTER_API_KEY`: Your OpenRouter API key (required)
- `PAGANS_OPTIMIZER_MODEL`: The model that does the optimization work (optional, defaults to `openai/gpt-4o-mini`)
- `OPENROUTER_BASE_URL`: Custom OpenRouter base URL (optional)

Alternatively, create a `.env` file in your project root:

```bash
OPENROUTER_API_KEY=your-api-key-here
OPENROUTER_BASE_URL=https://openrouter.ai/api/v1
PAGANS_OPTIMIZER_MODEL=openai/gpt-4o-mini
```
This project uses uv for dependency management and running commands.
Run the test suite:

```bash
uv run pytest
```

Run with coverage:

```bash
uv run pytest --cov=pagans --cov-report=html
```

We're actively working on exciting new features! Here's what's on our roadmap:
- 🤖 Extended Model Families - Support for Meta Llama, Mistral, Cohere, and Perplexity models via OpenRouter
- ⚡ Advanced Caching - Redis-based distributed caching with semantic similarity matching
- 🎨 Custom Optimization Strategies - Allow users to define their own optimization approaches
- 🔄 A/B Testing Framework - Compare optimization strategies and automatically select the best approach
- 📊 Performance Analytics - Track optimization success rates, cost savings, and performance metrics
- 🛠️ CLI Tool - Command-line interface for batch processing and automation
- 🌐 Web Dashboard - Simple web UI for testing and managing optimizations
- 🔌 Framework Integrations - Native support for LangChain, LlamaIndex, and Haystack
- 📱 IDE Extensions - VS Code extension for in-editor prompt optimization
Want to contribute to any of these features? Check out our agents.md for developer guidelines!
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
For detailed development guidelines, see agents.md.
This project is licensed under the MIT License - see the LICENSE file for details.