A comprehensive, production-ready Docker Compose setup for deploying a full AI application stack with enterprise-grade security, monitoring, and observability features.
- Overview
- Architecture
- Services Included
- Security Features
- Prerequisites
- Quick Start
- Configuration
- Monitoring & Observability
- Service Access
- Security Configuration
- Troubleshooting
- Contributing
- License
This project provides a complete, secure, and monitored AI application stack deployment featuring:
- 15+ AI Services: From LLM hosting to workflow automation
- Enterprise Security: Authentication, encryption, network segmentation
- Comprehensive Monitoring: Health checks, resource monitoring, log aggregation
- Production Ready: SSL/TLS, secrets management, backup strategies
- Easy Deployment: Single-command setup with automated security hardening
Perfect for developers, researchers, and organizations looking to deploy AI applications with enterprise-grade reliability and security.
```
┌──────────────────────────────────────────────────────────────────┐
│                 🌐 Nginx Reverse Proxy (SSL/TLS)                  │
│              ┌─────────────────────────────────────┐              │
│              │        Monitoring Dashboard         │              │
│              │    (Health + Resources + Logs)      │              │
│              └─────────────────────────────────────┘              │
└──────────────────────────────────────────────────────────────────┘
                                │
                ┌───────────────┼───────────────┐
                │               │               │
      ┌─────────▼─────────┐ ┌───▼───┐ ┌────────▼─────────┐
      │  AI Applications  │ │Vector │ │   AI Services    │
      │                   │ │  DB   │ │                  │
      │ • Dify (API/Web)  │ │Qdrant │ │ • Ollama         │
      │ • N8N Workflow    │ │       │ │ • LiteLLM        │
      │ • Flowise Builder │ └───────┘ │ • OpenWebUI      │
      │ • Supabase        │           │ • OpenMemory     │
      └───────────────────┘           └──────────────────┘
                │                               │
                └───────────────┬───────────────┘
                                │
                ┌───────────────▼───────────────┐
                │        Infrastructure         │
                │                               │
                │ • PostgreSQL Database         │
                │ • Redis Cache                 │
                │ • Docker Secrets              │
                │ • Network Segmentation        │
                └───────────────────────────────┘
```
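Once the stack is running, the network segmentation shown above can be inspected from the host. The exact network names are generated by Compose, so list them first; the commands below are a sketch rather than part of the project's tooling:

```bash
# List the Docker networks created for the stack
docker network ls

# Show which containers are attached to a given network
docker network inspect <network-name> --format '{{range .Containers}}{{.Name}}{{"\n"}}{{end}}'
```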
- Dify: Open-source LLM application development platform
- Ollama: Local LLM runner with model management
- Ollama WebUI: Web interface for Ollama model management
- LiteLLM: LLM API proxy and load balancer
- OpenWebUI: Modern web interface for LLMs
- OpenMemory: AI memory and context management
- Qdrant: High-performance vector database
- PostgreSQL: Relational database
- Redis: Cache and session store
- Supabase: Open-source Firebase alternative
- Adminer: Web-based database management (optional)
- Monitoring Dashboard: Comprehensive health, resource, and log monitoring
- Nginx Reverse Proxy: SSL/TLS termination and load balancing
- Security Hardening: Firewall rules, secret management, encryption
- Multi-level Authentication: HTTP Basic Auth for all web interfaces
- Secure Credentials: Cryptographically generated passwords and API keys
- Session Management: Secure session handling with Redis
- Reverse Proxy: Nginx with SSL/TLS termination
- Security Headers: XSS, CSRF, HSTS, Content-Type protection
- Rate Limiting: API rate limiting and brute force protection
- Network Segmentation: Isolated Docker networks
- Firewall Rules: Host-level iptables with service-specific restrictions
- Docker Secrets: Encrypted secret files for sensitive data
- Environment Isolation: Secrets not exposed in environment variables
- Automated Generation: Cryptographically secure random credentials
- HTTPS Everywhere: SSL/TLS for all web interfaces
- Database Encryption: PostgreSQL with secure authentication
- Redis Encryption: Password-protected Redis connections
- Self-signed Certificates: Development-ready (replace with CA certs for production)
- Security Logging: Comprehensive audit logs with rotation
- Health Monitoring: Real-time service health checks with visual dashboards
- Resource Monitoring: CPU, memory, network, and disk usage tracking
- Log Aggregation: Centralized container log viewing with filtering
- Access Logging: Detailed security audit trails
- Prometheus Metrics: Standard metrics endpoint for external monitoring (see the example after this list)
- Alerting System: Automated alerts for service failures and high resource usage
- Historical Trends: Metrics history and trend analysis with charts
- Request Tracing: Performance monitoring and request duration tracking
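As a quick check that the metrics endpoint is reachable, it can be scraped with curl once the stack is up. This is a sketch: the path is the one listed under Service Access below, `-k` accepts the self-signed development certificate, and the basic-auth credentials are assumed to be the `MONITORING_*` values from `.env`:

```bash
# Fetch Prometheus-format metrics from the monitoring service
curl -k -u "admin:your-monitoring-password" https://localhost/monitoring/metrics
```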
- OS: Linux (Ubuntu 20.04+, CentOS 8+, Debian 10+)
- CPU: 4+ cores recommended
- RAM: 16GB+ recommended (32GB+ for multiple LLMs)
- Disk: 100GB+ SSD storage
- Network: Stable internet connection
- Docker Engine: 20.10+ (installed automatically if missing)
- Docker Compose: 2.0+ (installed automatically if missing)
- sudo access: Required for security hardening
- SSL Certificates: CA-signed certificates for production
- External Storage: For data persistence and backups
- Monitoring Tools: External monitoring integration
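A quick way to confirm the requirements above before installing:

```bash
docker --version          # Docker Engine 20.10+
docker compose version    # Docker Compose 2.0+
nproc                     # CPU core count
free -h                   # Installed RAM
df -h /                   # Free disk space
```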
Get the complete AI Stack Build up and running with a single command:
```bash
curl -fsSL https://raw.githubusercontent.com/steelburn/ai-stack-build/main/install.sh | bash
```

This automated installer will:
- ✅ Check system requirements (Docker, Git, etc.)
- 📥 Clone/update the repository
- 🔧 Set up environment configuration
- 🔐 Generate secure credentials and secrets
- 🐳 Configure and start all Docker services
- 📊 Launch the monitoring dashboard
Post-installation:
- Access monitoring: http://localhost/monitoring
- Check status: `make status`
- View logs: `make logs`
If you prefer manual setup or need more control:
```bash
git clone <repository-url>
cd ai-stack-build

# One-command setup (recommended)
make setup

# Or run manually
./setup.sh

# Generate secure secrets (highly recommended)
./generate-secrets.sh
./generate-docker-secrets.sh

# Generate SSL certificates
./generate-ssl.sh

# Apply host-level security hardening
sudo ./harden-security.sh

# Start all services
make up
# Or: docker-compose up -d
```

- Monitoring Dashboard: https://localhost/monitoring/
- Resource Monitor: https://localhost/monitoring/resources
- Alert Dashboard: https://localhost/monitoring/alerts
- Metrics Trends: https://localhost/monitoring/trends
- Prometheus Metrics: https://localhost/monitoring/metrics
- Dify: https://localhost/dify/
- OpenWebUI: https://localhost/openwebui/
- N8N: https://localhost/n8n/
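Once the containers are up, a quick smoke test from the host confirms the reverse proxy and the monitoring dashboard are answering. The `-k` flag accepts the self-signed development certificate; substitute the `MONITORING_*` credentials from your `.env`:

```bash
# Expect an HTTP response (typically 401 until credentials are supplied)
curl -k -I https://localhost/monitoring/

# Authenticated request; prints the final status code
curl -k -u "admin:your-monitoring-password" -o /dev/null -w "%{http_code}\n" https://localhost/monitoring/
```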
All configuration is centralized in the `.env` file. Copy the example and adjust it:

```bash
cp .env.example .env
# Edit .env with your preferred settings
```

Key settings:

```bash
# Database Configuration
POSTGRES_USER=postgres
POSTGRES_PASSWORD=your-secure-db-password
POSTGRES_DB=dify
# Redis Configuration
REDIS_PASSWORD=your-secure-redis-password
# AI Service Authentication
WEBUI_AUTH_USERNAME=admin
WEBUI_AUTH_PASSWORD=your-secure-password
# Monitoring Credentials
MONITORING_USERNAME=admin
MONITORING_PASSWORD=your-monitoring-password
# SSL/TLS (for production)
SSL_CERT_PATH=/path/to/cert.pem
SSL_KEY_PATH=/path/to/key.pem
```

Pull common Ollama models after startup:

```bash
docker exec -it ai-stack-ollama-1 ollama pull llama3.2
docker exec -it ai-stack-ollama-1 ollama pull codellama
```

LiteLLM:
- Access dashboard: http://localhost:4000/ui
- Configure model routing and load balancing
- Set up API keys and rate limits
- Default credentials: admin / password (change in .env)
- Access: https://localhost/n8n/
- Import/export workflows via UI
- Default credentials: admin / password (change in .env)
- Access: https://localhost/flowise/
- Build AI workflows visually
The monitoring dashboard provides real-time health status for all services:
- Service Status: Up/Down indicators with response times
- Health Checks: Automated endpoint monitoring
- Auto-refresh: Updates every 30 seconds
- Error Details: Specific error messages and diagnostics
Access: https://localhost/monitoring/ (requires authentication)
Comprehensive resource usage tracking:
- CPU Usage: Real-time CPU utilization with visual indicators
- Memory Usage: RAM usage with limits and percentages
- Network I/O: RX/TX byte counts
- Disk I/O: Read/write operations
- Container Status: Running/stopped/exited states
Access: https://localhost/monitoring/resources/
Centralized container log viewing:
- Real-time Logs: Live container log streaming
- Log Filtering: Search and filter capabilities
- Syntax Highlighting: Color-coded log levels (ERROR, WARN, INFO, DEBUG)
- Log History: Configurable log retention
- Per-Service Logs: Individual service log access
Access: Click "View Logs" for any service in the monitoring dashboard
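The same logs are also available from the host without the dashboard; the service name (here `ollama`, as an example) must match your `docker-compose.yml`:

```bash
# Follow the last 100 log lines of a single service
docker-compose logs -f --tail=100 ollama
```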
The monitoring system supports multiple configuration methods:
```json
{
  "my-service": {
    "url": "http://my-service:8080/health",
    "name": "My Custom Service"
  }
}
```

Or via environment variables:

```bash
SERVICE_1_NAME=My Service
SERVICE_1_URL=http://my-service:8080/health
SERVICE_2_NAME=Another Service
SERVICE_2_URL=http://another-service:3000/health
```

By default, the following services are monitored:
- Dify API, Web, and Worker
- Ollama, LiteLLM, OpenWebUI
- N8N, Flowise, OpenMemory
- Qdrant, PostgreSQL, Redis
| Service | URL | Authentication | Notes |
|---|---|---|---|
| Monitoring Dashboard | https://localhost/monitoring/ | HTTP Basic Auth | Service health & resources |
| Dify | https://localhost/dify/ | Via Dify | LLM application platform |
| OpenWebUI | https://localhost/openwebui/ | Built-in Auth | Web interface for LLMs |
| Ollama WebUI | https://localhost/ollama-webui/ | None | Model management interface |
| N8N | https://localhost/n8n/ | HTTP Basic Auth | Workflow automation |
| Flowise | https://localhost/flowise/ | Built-in Auth | AI workflow builder |
| LiteLLM Dashboard | https://localhost/litellm/ui/ | API Key | LLM proxy management |
| Database Admin (Adminer) | https://localhost/adminer/ | HTTP Basic Auth | PostgreSQL management (when enabled) |
| Service | Endpoint | Authentication |
|---|---|---|
| Ollama API | https://localhost/ollama/api/generate | None |
| LiteLLM API | https://localhost/litellm/chat/completions | API Key |
| OpenMemory API | https://localhost/openmemory/api/v1/memories/ | None |
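For reference, the two LLM endpoints above can be exercised with curl. This is a sketch: `-k` accepts the self-signed certificate, the model names must match what you have pulled or configured, and the LiteLLM key is whatever you set in its configuration:

```bash
# Ollama: plain generation request through the proxy
curl -k https://localhost/ollama/api/generate \
  -d '{"model": "llama3.2", "prompt": "Hello!", "stream": false}'

# LiteLLM: OpenAI-compatible chat completion through the proxy
curl -k https://localhost/litellm/chat/completions \
  -H "Authorization: Bearer sk-your-litellm-key" \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.2", "messages": [{"role": "user", "content": "Hello!"}]}'
```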
| Service | Purpose | Access |
|---|---|---|
| Qdrant | Vector Database | Internal services only |
| PostgreSQL | Primary Database | Internal services only |
| Redis | Cache & Sessions | Internal services only |
| Supabase | Alternative Database | Internal services only |
Security Note: All user-facing services are protected behind the Nginx reverse proxy with SSL/TLS encryption, rate limiting, and security headers. Direct port access has been removed for security.
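You can verify from the host that the internal services are not directly exposed (a quick check; 6333 is Qdrant's default REST port):

```bash
# Published ports per container; the internal services should list none
docker ps --format 'table {{.Names}}\t{{.Ports}}'

# Qdrant should not answer directly from the host
curl --max-time 3 http://localhost:6333/ || echo "not reachable from the host (expected)"
```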
- Change Default Credentials

  ```bash
  # Edit .env file
  MONITORING_USERNAME=your-admin-user
  MONITORING_PASSWORD=your-secure-password
  WEBUI_AUTH_USERNAME=your-user
  WEBUI_AUTH_PASSWORD=your-secure-password
  ```

- SSL Certificates for Production

  ```bash
  # Replace self-signed certificates
  cp your-ca-cert.pem nginx/ssl/cert.pem
  cp your-private-key.pem nginx/ssl/key.pem
  ```

- Review Firewall Rules

  ```bash
  sudo iptables -L   # Check current rules
  # Customize harden-security.sh if needed
  ```

- Secret Management

  ```bash
  ./generate-secrets.sh          # Regenerate secrets
  ./generate-docker-secrets.sh   # Update Docker secrets
  ```
- `generate-secrets.sh`: Generate cryptographically secure passwords
- `generate-docker-secrets.sh`: Create Docker secret files
- `generate-ssl.sh`: Generate self-signed SSL certificates
- `harden-security.sh`: Apply host-level security hardening
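If you need to create or rotate a single secret by hand, the same effect can be achieved with openssl. This is an illustrative sketch only; the `secrets/` directory and file name below are assumptions, and the scripts above remain the supported path:

```bash
mkdir -p secrets
openssl rand -base64 32 > secrets/postgres_password.txt   # hypothetical file name
chmod 600 secrets/postgres_password.txt
```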
| Service | Auth Type | Config Location |
|---|---|---|
| Monitoring | HTTP Basic | .env (MONITORING_*) |
| OpenWebUI | Built-in | .env (WEBUI_AUTH_*) |
| N8N | HTTP Basic | .env (N8N_BASIC_*) |
| Flowise | Built-in | .env (FLOWISE_*) |
| LiteLLM | HTTP Basic | .env (UI_USERNAME/UI_PASSWORD) |
| Database Admin (Adminer) | HTTP Basic | .env (ADMINER_*) |
The stack includes optional web-based database management via Adminer. This feature is disabled by default for security reasons.
- Set Environment Variables

  ```bash
  # Edit .env file
  ENABLE_DATABASE_ADMIN=true
  ADMINER_USERNAME=your-db-admin-user
  ADMINER_PASSWORD=your-secure-db-admin-password
  ```

- Start with Database Admin Profile

  ```bash
  # Start all services including database admin
  docker-compose --profile db-admin up -d
  # Or use the Makefile
  make up-db-admin
  ```

- Access Database Admin
  - URL: https://localhost/adminer/
  - Authentication Required: Use the `ADMINER_USERNAME` and `ADMINER_PASSWORD` you configured
  - Auto-Connection: Adminer will automatically connect to the PostgreSQL database with your configured credentials:
    - System: PostgreSQL (pre-selected)
    - Server: `db` (pre-filled)
    - Username: Your `POSTGRES_USER` (pre-filled)
    - Password: Your `POSTGRES_PASSWORD` (pre-filled)
    - Database: Your `POSTGRES_DB` (pre-filled)
- Database admin is only accessible when explicitly enabled
- HTTP Basic Authentication required for all access
- Pre-configured connection eliminates manual entry of credentials
- Only enable in development/staging environments
- Use strong passwords for both Adminer auth and database access
- Monitor access logs when enabled
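To confirm the profile gating works as described, check that no Adminer container is running while the profile is disabled (assuming the container name contains "adminer"):

```bash
# No container should be listed until the db-admin profile is enabled
docker ps --filter "name=adminer"
```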
```bash
make help           # Show all available commands
make setup          # Complete automated setup
make up             # Start all services
make down           # Stop all services
make restart        # Restart all services
make logs           # View all logs
make status         # Show service status
make clean          # Stop and remove containers/volumes
make pull-models    # Pull common Ollama models
make backup         # Backup data volumes
make restore        # Restore from backup
make update         # Update all images
make security       # Run security hardening
```

```bash
# Check all services
docker-compose ps
# Check specific service
docker-compose ps monitoring
# View service logs
docker-compose logs monitoring
make logs SERVICE=monitoring
```

SSL certificate warnings:
- Cause: Self-signed certificates in development
- Solution: Add to browser exceptions or use CA certificates
- Production: Replace with proper SSL certificates
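To see exactly which certificate Nginx is serving (useful for deciding whether the warning is expected):

```bash
# Print the subject, issuer, and validity period of the served certificate
openssl s_client -connect localhost:443 -servername localhost </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
```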
```bash
# Check credentials in .env
grep -E "(USERNAME|PASSWORD)" .env
# Verify secret files
ls -la secrets/
cat secrets/monitoring_username
```

```bash
# Check network connectivity
docker-compose exec monitoring ping dify-api
# Verify service health
curl -k https://localhost/dify/
```

```bash
# Check system resources
free -h
df -h
docker system df
# Monitor container resources
docker stats
```

```bash
# Check database connectivity
docker-compose exec db pg_isready -U postgres -d dify
# View database logs
docker-compose logs db
```

If containers hit shared-memory limits, raise Docker's default shared memory size and cap container log growth in /etc/docker/daemon.json, then restart the Docker daemon:

```json
{
  "default-shm-size": "1G",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

```bash
# Clean up Docker
docker system prune -a --volumes
# Monitor disk usage
du -sh /var/lib/docker/volumes/
```

```bash
# Backup all volumes
make backup
# Manual backup
docker run --rm -v ai-stack_db_data:/data -v $(pwd)/backup:/backup alpine tar czf /backup/db-$(date +%Y%m%d).tar.gz -C /data .

# Restore from backup
make restore BACKUP=db-20241201.tar.gz
```

We welcome contributions! Please see our contributing guidelines:

```bash
# Fork and clone
git clone https://github.com/your-username/ai-stack-build.git
cd ai-stack-build
# Create feature branch
git checkout -b feature/your-feature
# Make changes and test
make test
make up
# Submit pull request
git push origin feature/your-feature
```

- Use descriptive commit messages
- Update documentation for new features
- Test security implications of changes
- Follow Docker best practices
- Include health checks for new services
- Use GitHub Issues for bugs and feature requests
- Include system information and logs
- Describe steps to reproduce
- Suggest potential solutions
This project is licensed under the MIT License - see the LICENSE file for details.
- Dify - LLM application platform
- Ollama - Local LLM hosting
- LiteLLM - LLM API management
- Qdrant - Vector database
- N8N - Workflow automation
- Flowise - AI workflow builder
- OpenWebUI - LLM web interface
- Documentation: This README and inline comments
- Issues: GitHub Issues for bugs and feature requests
- Discussions: GitHub Discussions for questions and ideas
- Security: Report security issues privately
Happy AI Building! 🤖✨