A web-based tool for generating production-ready Elasticsearch cluster configurations with Docker Compose.
- Smart Configuration: Automatic optimization based on CPU cores and RAM
- Node Role Management: Support for Master, Data, and Ingest nodes
- Production Ready: Pre-configured settings for production environments
- Split-brain Prevention: Automatic calculation of minimum master nodes
- Visualization: Cluster topology and request flow diagrams
- Configuration Management: Save/load cluster configurations
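The split-brain calculation follows the standard quorum rule for master-eligible nodes, floor(n/2) + 1. A minimal sketch of the arithmetic (the exact formula the tool applies is assumed here, not confirmed):

```shell
#!/bin/sh
# Quorum needed to prevent split-brain: floor(master_eligible / 2) + 1.
# The default node count below is an example; pass your own as the first argument.
master_eligible=${1:-3}
quorum=$(( master_eligible / 2 + 1 ))
echo "master-eligible nodes: $master_eligible -> minimum masters: $quorum"
```

For 3 master-eligible nodes this yields 2; for 5 it yields 3. Elasticsearch 7+ manages the voting quorum automatically, but the same arithmetic still guides how many master-eligible nodes to provision.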
1. Install dependencies:
    pip install -r requirements.txt
2. Run the application:
    streamlit run streamlit_app.py
3. Open the web interface at http://localhost:8501
The tool generates two different output structures depending on the deployment mode:
elasticsearch-cluster-dev/
├── docker-compose.yml # Single file with all nodes
├── start.sh # Simple startup script
└── README.md # Development documentation
Features:
- All nodes in one Docker Compose file
- Container networking (no host mapping needed)
- Quick startup and easy debugging
- Perfect for local development and testing
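For orientation, a single node's service entry in the generated development docker-compose.yml might look roughly like the following sketch (the image version, node names, and settings are illustrative assumptions, not the tool's exact output):

```yaml
services:
  els01:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.4  # example version
    environment:
      - node.name=els01
      - cluster.name=es-dev                        # example cluster name
      - discovery.seed_hosts=els02,els03           # container names, no host mapping
      - cluster.initial_master_nodes=els01,els02,els03
      - ES_JAVA_OPTS=-Xms2g -Xmx2g                 # sized from the configured RAM
      - xpack.security.enabled=false               # acceptable for local dev only
    ports:
      - "9200:9200"
```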
elasticsearch-cluster-prod/
├── README.md # Complete cluster documentation
├── cluster-init.sh # Global system validation
├── start-all.sh # Start all nodes sequentially
├── stop-all.sh # Stop all nodes
└── nodes/
├── els01/ # Node 1 folder
│ ├── docker-compose.yml # Individual node configuration
│ ├── run.sh # Node startup with validation
│ └── README.md # Node-specific documentation
├── els02/ # Node 2 folder
│ ├── docker-compose.yml
│ ├── run.sh
│ └── README.md
└── els03/ # Node 3 folder (if configured)
├── docker-compose.yml
├── run.sh
└── README.md
Features:
- Ultra-streamlined: All ES + JVM settings in docker-compose.yml
- Zero configuration redundancy
- Individual node folders for easy management
- Production-ready with comprehensive validation
- Scalable for large clusters
Requirements:
- Python 3.7+
- Docker and Docker Compose
- Minimum 2 CPU cores per node
- Minimum 4GB RAM per node
Configurable options:
- Cluster name and domain settings
- Node count and roles
- Hardware specifications
- Elasticsearch version
- X-Pack features
- Network settings
1. Extract and prepare files:
    unzip elasticsearch-cluster-dev_*.zip
    cd elasticsearch-cluster-dev/
2. Make scripts executable:
    chmod +x start.sh
3. Start the cluster:
    ./start.sh
   Or manually with Docker Compose:
    docker-compose up -d
4. Verify cluster health:
    curl http://localhost:9200/_cluster/health?pretty
    curl http://localhost:9200/_cat/nodes?v
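Right after startup the cluster may briefly report yellow or red while shards allocate. A small polling helper like the following can wait for green (an illustrative sketch, not part of the generated scripts; `ES_URL` is an assumed variable):

```shell
#!/bin/sh
# Poll _cluster/health until the status is "green", or give up after 30 tries.
ES_URL=${ES_URL:-http://localhost:9200}

status_of() {  # extract the "status" field from health JSON on stdin
  grep -o '"status" *: *"[a-z]*"' | grep -o '[a-z]*"$' | tr -d '"'
}

wait_for_green() {
  for i in $(seq 1 30); do
    s=$(curl -s "$ES_URL/_cluster/health" | status_of)
    [ "$s" = "green" ] && { echo "cluster is green"; return 0; }
    echo "attempt $i: status=${s:-unreachable}, retrying..."
    sleep 5
  done
  echo "timed out waiting for green" >&2
  return 1
}
```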
1. Extract and prepare files:
    unzip elasticsearch-cluster-prod_*.zip
    cd elasticsearch-cluster-prod/
2. Make all scripts executable:
    chmod +x cluster-init.sh start-all.sh stop-all.sh
    chmod +x nodes/*/run.sh
3. Validate system requirements:
    ./cluster-init.sh
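cluster-init.sh validates the host before any node starts. The exact checks are generated by the tool; the sketch below shows the kind of validation involved (the vm.max_map_count floor of 262144 is Elasticsearch's documented bootstrap requirement; everything else here is illustrative):

```shell
#!/bin/sh
# Illustrative host validation of the kind cluster-init.sh performs.
need_map_count=262144   # Elasticsearch's bootstrap check requires at least this

map_count_ok() {  # usage: map_count_ok <current_value>
  [ "$1" -ge "$need_map_count" ]
}

current=$(sysctl -n vm.max_map_count 2>/dev/null || echo 0)
if map_count_ok "$current"; then
  echo "vm.max_map_count=$current OK"
else
  echo "vm.max_map_count=$current too low; fix: sysctl -w vm.max_map_count=$need_map_count" >&2
fi

command -v docker >/dev/null 2>&1 || echo "warning: docker not found in PATH" >&2
```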
4. Start the entire cluster:
    ./start-all.sh
Alternatively, deploy nodes one at a time:
1. Prepare and validate the system:
    ./cluster-init.sh
2. Deploy nodes individually:
    # Start the first node
    cd nodes/els01/
    ./run.sh
    # Start the second node (in a new terminal)
    cd ../els02/
    ./run.sh
    # Continue for additional nodes...
Or navigate to each node directory and run Docker Compose directly:
    cd nodes/els01/
    docker-compose up -d
    cd ../els02/
    docker-compose up -d
    # Repeat for all nodes...
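The per-node steps above can be wrapped in a small loop. This is a convenience sketch (not one of the generated scripts); it assumes the nodes/ layout shown earlier:

```shell
#!/bin/sh
# Start every generated node folder in order, stopping on the first failure.
start_all() {
  for node in nodes/*/; do
    [ -d "$node" ] || continue        # no node folders: nothing to do
    echo "starting ${node%/}..."
    ( cd "$node" && docker-compose up -d ) || return 1
  done
}
```

Run it from the extracted elasticsearch-cluster-prod/ directory; it mirrors what start-all.sh does sequentially.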
# Check cluster health
curl http://NODE_IP:9200/_cluster/health?pretty
# List all nodes
curl http://NODE_IP:9200/_cat/nodes?v
# Check cluster settings
curl http://NODE_IP:9200/_cluster/settings?pretty
# Stop entire cluster (production mode)
./stop-all.sh
# Stop development cluster
docker-compose down
# View logs (development)
docker-compose logs -f
# View logs (production - specific node)
cd nodes/els01/ && docker-compose logs -f

This project is provided as-is for educational and production use.