AI Flow N8N is an agentic workflow platform built on n8n.
AI Flow N8N automates complex business processes with large language models (LLMs) and human-in-the-loop capabilities. The platform uses n8n for workflow orchestration, integrates with multiple LLM providers, and implements a Retrieval-Augmented Generation (RAG) system for context-aware AI interactions.
- Flexible LLM Integration: Support for multiple AI providers (OpenAI, Anthropic, local models); a minimal interface sketch follows this list
- Document Processing: Intelligent parsing, chunking, and embedding of various document formats
- RAG Database: Vector-based retrieval system for contextual data
- Human-in-the-Loop: Interactive review and feedback workflows
- Containerized Deployment: Easy setup via Docker for portability
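As a rough illustration of the flexible-integration point, providers could sit behind a common interface. The class and method names below are illustrative, not the project's actual API, and only an OpenAI-backed variant is shown.

```python
# A minimal sketch of a provider-agnostic LLM interface. Class and method
# names are illustrative; only the OpenAI-backed variant is shown.
from abc import ABC, abstractmethod

from openai import OpenAI


class LLMProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return the model's completion for a plain-text prompt."""


class OpenAIProvider(LLMProvider):
    def __init__(self, model: str = "gpt-4o-mini"):
        self.client = OpenAI()  # reads OPENAI_API_KEY from the environment
        self.model = model

    def complete(self, prompt: str) -> str:
        resp = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content
```

An Anthropic or local-model provider would implement the same `complete` contract, so workflows could switch providers through configuration alone.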
The first use case, an SOW generator, assists sales engineers in creating Statements of Work by:
- Analyzing meeting notes and project requirements
- Retrieving similar approved SOWs from the RAG database
- Generating draft SOW sections with appropriate context (see the sketch after this list)
- Facilitating human review and editing
- Creating supporting materials (battlecards, talking points)
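The retrieval-and-draft step in the middle of that list could look roughly like the sketch below; `complete` stands in for any LLM call, and the function name and prompt wording are assumptions rather than the project's actual code.

```python
# A sketch of draft generation with retrieved context. `complete` stands in
# for any LLM call; the prompt wording is illustrative.
from typing import Callable


def draft_sow_section(
    section: str,
    meeting_notes: str,
    similar_sows: list[str],
    complete: Callable[[str], str],
) -> str:
    context = "\n---\n".join(similar_sows)
    prompt = (
        f"Draft the '{section}' section of a Statement of Work.\n\n"
        f"Meeting notes:\n{meeting_notes}\n\n"
        f"Excerpts from similar approved SOWs:\n{context}\n\n"
        "Match the tone and structure of the excerpts, and flag any "
        "assumptions so a human reviewer can confirm them."
    )
    return complete(prompt)
```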
The second use case, account research, helps account executives prepare for client meetings by:
- Collecting information from public sources (see the collection sketch after this list)
- Analyzing company details, news, and market position
- Identifying potential opportunities aligned with services
- Generating comprehensive research reports
- Providing strategic talking points
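The collection step could be as simple as fetching pages and stripping markup. The sketch below uses `requests` and BeautifulSoup; which URLs to fetch would be workflow configuration, not fixed in code.

```python
# A sketch of the public-source collection step using requests and
# BeautifulSoup; the URL list is workflow configuration, not fixed here.
import requests
from bs4 import BeautifulSoup


def fetch_page_text(url: str) -> str:
    """Fetch a public page and return its visible text."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    return soup.get_text(separator=" ", strip=True)
```

The collected text would then be summarized by the LLM layer and merged into the research report.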
The system comprises several containerized services:
- n8n Workflow Engine: Core orchestration system
- LLM Integration Service: Unified API for AI model access
- Document Processing Service: Handles parsing and chunking
- RAG Query Service: Manages semantic search and context enhancement (a service sketch follows this section)
- Vector Database: Stores and retrieves document embeddings
- Document Storage: Persists files and results
For a detailed view of component interactions, see the System Architecture Diagram.
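To make the RAG Query Service concrete, here is a minimal sketch of what its search endpoint might look like, assuming FastAPI and a Qdrant vector database. The route, collection name, and `vector-db` hostname are assumptions, and `embed` is a placeholder for a call to the LLM integration service.

```python
# A minimal sketch of the RAG query service's search endpoint. The route,
# collection name, and "vector-db" hostname are assumptions.
from fastapi import FastAPI
from pydantic import BaseModel
from qdrant_client import QdrantClient

app = FastAPI()
client = QdrantClient(host="vector-db", port=6333)


class Query(BaseModel):
    text: str
    top_k: int = 5


def embed(text: str) -> list[float]:
    """Placeholder: delegate to the LLM integration service's embedding API."""
    raise NotImplementedError


@app.post("/search")
def search(query: Query):
    hits = client.search(
        collection_name="documents",
        query_vector=embed(query.text),
        limit=query.top_k,
    )
    return [{"score": hit.score, "payload": hit.payload} for hit in hits]
```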
The project implementation is divided into eight main phases:
- Environment Setup: Docker infrastructure and base configuration
- LLM Integration Layer: Multi-provider AI service
- Document Processing Pipeline: Text extraction and chunking
- RAG Database Implementation: Vector search system
- Basic Workflow Templates: Reusable n8n components
- Use Case Implementation: SOW and Research workflows
- Testing and Refinement: Performance optimization
- Documentation and Deployment: Production readiness
For detailed implementation steps, see the Implementation Plan.
This project utilizes various Model Context Protocol (MCP) services:
- LLM Orchestration: LangChain, LlamaIndex
- Vector Databases: Qdrant, Chroma
- Document Processing: Unstructured (a parse-and-chunk sketch follows this list)
- Testing: Ragas
For a complete list, see MCP Options.
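As a sketch of how the document-processing side might use Unstructured, the snippet below parses any supported format and applies naive fixed-size chunking. The chunk sizes and helper names are illustrative.

```python
# A sketch of the parse-and-chunk step using the `unstructured` library.
# Chunk sizes and function names are illustrative.
from unstructured.partition.auto import partition


def chunk_text(text: str, size: int = 800, overlap: int = 100) -> list[str]:
    """Split text into fixed-size character chunks with overlap."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]


def parse_and_chunk(path: str) -> list[str]:
    elements = partition(filename=path)  # file type is detected automatically
    text = "\n\n".join(el.text for el in elements if el.text)
    return chunk_text(text)
```

Real pipelines usually chunk on semantic boundaries (headings, paragraphs) rather than raw character counts; the fixed window here just keeps the example short.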
The project includes a comprehensive testing approach:
- Unit testing of individual components (an example test follows this list)
- Integration testing of service interactions
- End-to-end workflow validation
- Performance benchmarking
- Security validation
For details, see the Testing Framework.
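For the unit-testing level, a test of the chunking helper sketched earlier might look like this. The helper is repeated inline so the example runs standalone; in the repository it would be imported from the document-processing package.

```python
# An illustrative pytest unit test for the chunking helper sketched above.
# The helper is repeated inline so this example runs standalone.
def chunk_text(text: str, size: int, overlap: int) -> list[str]:
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]


def test_chunks_cover_whole_text():
    text = "abcdefghij" * 10  # 100 characters
    chunks = chunk_text(text, size=30, overlap=5)
    # every chunk except possibly the last has the requested size
    assert all(len(c) == 30 for c in chunks[:-1])
    # dropping each chunk's 5-character overlap prefix restores the input
    rebuilt = chunks[0] + "".join(c[5:] for c in chunks[1:])
    assert rebuilt == text
```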
To run the platform locally you will need:
- Docker and Docker Compose
- Python 3.10+
- Git
```bash
# Clone the repository
git clone https://github.com/tc45/ai_flow_n8n.git
cd ai_flow_n8n

# Set up environment variables
cp .env.example .env
# Edit .env with your API keys and configuration

# Start the services
docker-compose up -d
```
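Before moving on, a quick reachability check can confirm the stack came up. Only the n8n URL is documented in this README, so the sketch below probes just that.

```python
# Quick post-start sanity check. Only the n8n URL is documented in this
# README, so only that service is probed.
import requests

resp = requests.get("http://localhost:5678", timeout=5)
print("n8n reachable:", resp.ok)
```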
Once the services are running, complete the initial configuration:
- Access the n8n interface at http://localhost:5678
- Set up authentication
- Import the base workflow templates
- Configure LLM API credentials
- Upload initial documents to the RAG database (a hypothetical upload call is sketched below)
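The upload in that last step would go through the document-processing service. The call below is purely hypothetical: the port, the `/ingest` route, and the payload shape are assumptions, since the actual API is defined by that service rather than documented here.

```python
# A hypothetical document upload. The port and /ingest route are
# assumptions, not part of this README.
import requests

with open("approved_sow_example.pdf", "rb") as f:
    resp = requests.post("http://localhost:8002/ingest", files={"file": f}, timeout=30)
resp.raise_for_status()
```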
The project uses Poetry for Python dependency management:
```bash
# Install development dependencies
poetry install

# Run tests
poetry run pytest

# Format code
poetry run black .
```
This project is licensed under the MIT License - see the LICENSE file for details.
Contributions are welcome! Please feel free to submit a Pull Request.