This comprehensive DevOps project demonstrates how to set up a robust, multi-environment infrastructure using Terraform for provisioning and Ansible for configuration management. The project covers creating infrastructure for development, staging, and production environments, with a focus on automation, scalability, and best practices.
The project involves:
- Installing Terraform and Ansible
- Setting up AWS infrastructure
- Creating dynamic inventories
- Configuring Nginx across multiple environments
- Automating infrastructure management
Follow these steps to install Terraform on Ubuntu:
1. Update the package list:

   sudo apt-get update

2. Install dependencies:

   sudo apt-get install -y gnupg software-properties-common

3. Add HashiCorp's GPG key:

   curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg

4. Add the HashiCorp repository:

   echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list

5. Install Terraform:

   sudo apt-get update && sudo apt-get install terraform

6. Verify the installation:

   terraform --version
Ansible simplifies configuration management and automation. To install it:
1. Add the Ansible PPA:

   sudo apt-add-repository ppa:ansible/ansible

2. Update the package list:

   sudo apt update

3. Install Ansible:

   sudo apt install ansible

4. Verify the installation:

   ansible --version
To keep your infrastructure code and server configuration scripts organized, create two separate directories: one for Terraform and another for Ansible.
1. Navigate to your project directory (or create a new one):

   mkdir <your-project-name> && cd <your-project-name>

2. Create a directory for Terraform:

   mkdir terraform

3. Create a directory for Ansible:

   mkdir ansible

4. Verify the directory structure:

   tree

Your project structure should look like this:

<your-project-name>/
├── terraform/
└── ansible/
With this structure, you can separate your Terraform scripts (infrastructure provisioning) and Ansible playbooks (server configuration) efficiently.
Inside the Terraform directory, create an infra directory and add basic configurations to each Terraform file to provision the essential AWS resources.
1. Navigate to the Terraform directory:

   cd terraform

2. Create the infra directory:

   mkdir infra && cd infra

3. Create and populate the Terraform files. The following files make up the infrastructure pattern used in this project:

   a. bucket.tf (S3 bucket configuration): refer to the source code provided above
   b. dynamodb.tf (DynamoDB table for state locking): refer to the source code provided above
   c. ec2.tf (EC2 instance configuration): refer to the source code provided above
   d. output.tf (output definitions): refer to the source code provided above
   e. variable.tf (variable declarations): refer to the source code provided above

4. Verify the file structure and content:

   tree

Your structure should look like this:

infra/
├── bucket.tf
├── dynamodb.tf
├── ec2.tf
├── output.tf
└── variable.tf

Each file contains the sample resource configurations used in this project. You can modify the values in variable.tf to fit your project's requirements.
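The project's actual .tf files are the ones referenced above. As a rough orientation only, here is a minimal sketch of what the infra module might contain; the variable names, resource names, and the security group (added so the later SSH and webpage steps would work) are assumptions for illustration, not the project's exact code.

```hcl
# variable.tf -- input variables for the module (names assumed for illustration)
variable "env" { type = string }
variable "ami_id" { type = string }             # an Ubuntu AMI for your region
variable "instance_type" { type = string }      # e.g. "t2.micro"
variable "instance_count" { type = number }     # servers per environment
variable "bucket_name" { type = string }
variable "dynamodb_table_name" { type = string }

# bucket.tf -- one S3 bucket per environment
resource "aws_s3_bucket" "env_bucket" {
  bucket = var.bucket_name
  tags   = { Environment = var.env }
}

# dynamodb.tf -- DynamoDB table for Terraform state locking
resource "aws_dynamodb_table" "state_lock" {
  name         = var.dynamodb_table_name
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

# ec2.tf -- key pair, security group and EC2 instances for the environment
resource "aws_key_pair" "deployer" {
  key_name   = "devops-key-${var.env}"
  public_key = file("${path.root}/devops-key.pub")
}

resource "aws_security_group" "web_sg" {
  name = "${var.env}-web-sg"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "web" {
  count                  = var.instance_count
  ami                    = var.ami_id
  instance_type          = var.instance_type
  key_name               = aws_key_pair.deployer.key_name
  vpc_security_group_ids = [aws_security_group.web_sg.id]
  tags                   = { Name = "${var.env}-server-${count.index + 1}" }
}

# output.tf -- expose the public IPs so the Ansible inventories can be generated
output "public_ips" {
  value = aws_instance.web[*].public_ip
}
```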
Move back up to the terraform directory:

cd ..

The main.tf file will include the configuration to call your infra module and create resources for the dev, stg, and prod environments.

- Refer to the source code provided above

In this main.tf, you define three modules (dev, stg, prod) that all use the same infra module but can be customized with different settings, such as the EC2 instance type, AMI, S3 bucket name, and DynamoDB table name. It also exposes the public IPs of each environment's instances as outputs.
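As a reference for how the three environments could be wired up, here is a minimal sketch of such a main.tf that matches the module sketch above; the module names, bucket and table names, and AMI placeholder are illustrative assumptions, not the project's actual values.

```hcl
module "dev_infra" {
  source              = "./infra"
  env                 = "dev"
  ami_id              = "ami-xxxxxxxxxxxxxxxxx" # replace with a valid Ubuntu AMI
  instance_type       = "t2.micro"
  instance_count      = 2
  bucket_name         = "my-devops-project-dev-bucket"
  dynamodb_table_name = "my-devops-project-dev-lock"
}

module "stg_infra" {
  source              = "./infra"
  env                 = "stg"
  ami_id              = "ami-xxxxxxxxxxxxxxxxx"
  instance_type       = "t2.micro"
  instance_count      = 2
  bucket_name         = "my-devops-project-stg-bucket"
  dynamodb_table_name = "my-devops-project-stg-lock"
}

module "prod_infra" {
  source              = "./infra"
  env                 = "prod"
  ami_id              = "ami-xxxxxxxxxxxxxxxxx"
  instance_type       = "t2.medium" # prod can use a larger instance type
  instance_count      = 2
  bucket_name         = "my-devops-project-prod-bucket"
  dynamodb_table_name = "my-devops-project-prod-lock"
}

# Display the public IPs of each environment's instances
output "dev_public_ips" {
  value = module.dev_infra.public_ips
}

output "stg_public_ips" {
  value = module.stg_infra.public_ips
}

output "prod_public_ips" {
  value = module.prod_infra.public_ips
}
```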
This file configures the AWS provider and sets the region and access credentials.
- Refer to the source code provided above
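A minimal sketch of what providers.tf might look like is shown below; the region is an assumption, and credentials are normally supplied outside this file.

```hcl
provider "aws" {
  region = "us-east-1" # adjust to the region you deploy into

  # Credentials are typically picked up from the AWS CLI profile or from the
  # AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables, so they
  # do not need to be hard-coded here.
}
```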
This file pins the Terraform version and the required AWS provider, and it is also where the backend configuration for state management can live.
- Refer to the source code provided above
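The version constraints below are assumptions for illustration; the backend block is shown commented out because the S3 bucket and DynamoDB table it would point at are created by this same configuration.

```hcl
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  # Once the S3 bucket and DynamoDB table exist, a remote backend can be
  # configured here for state storage and locking, for example:
  # backend "s3" {
  #   bucket         = "my-devops-project-dev-bucket"
  #   key            = "terraform.tfstate"
  #   region         = "us-east-1"
  #   dynamodb_table = "my-devops-project-dev-lock"
  # }
}
```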
Note: this project uses the key name devops-key. You can create the key with any name, but make sure to replace the old name everywhere it appears.
To create the SSH key pair for accessing the EC2 instances, use the ssh-keygen command:

ssh-keygen -t rsa -b 2048 -f devops-key -N ""

This generates two files:

- devops-key (private key)
- devops-key.pub (public key)
At this point, your Terraform project structure should look like this:
├── devops-key # Private SSH key for EC2 access
├── devops-key.pub # Public SSH key for EC2 access
├── infra
│ ├── bucket.tf
│ ├── dynamodb.tf
│ ├── ec2.tf
│ ├── output.tf
│ └── variable.tf
├── main.tf # Defines environment-based modules
├── providers.tf # AWS provider configuration
├── terraform.tf          # Version constraints and backend configuration for state management

Run the following commands to initialize, plan, and apply your Terraform setup:
a. terraform init : initialize Terraform with the required providers and modules
b. terraform plan : review the changes that will be applied
c. terraform apply : apply the changes and provision the infrastructure
As shown below, all the instances, S3 buckets, and DynamoDB tables defined in the configuration are now created and running, provisioned entirely through Terraform:
Before using the private key, protect it by setting proper file permissions so that other users cannot access it. Run the following command to restrict access:

chmod 400 devops-key   # Set read-only permissions for the owner

This ensures that the private key (devops-key) is only readable by you, preventing others from accessing or modifying it.
After provisioning, you can SSH into the EC2 instances using the generated devops-key:

ssh -i devops-key ubuntu@<your-ec2-ip>

With the Terraform steps done, it's time to set up Ansible.
First, navigate to the ansible directory you created earlier and create the inventory directories:

mkdir -p inventories/dev inventories/prod inventories/stg

- Refer to the source code provided above for the initial dev, prod, and stg inventories.

The resulting structure:

inventories
├── dev
├── prod
└── stg

If you're not already in the Ansible directory, navigate to it first:

cd ../ansible

Create the playbooks directory inside the Ansible directory:

mkdir playbooks

Now, navigate into the playbooks directory:

cd playbooks

Create the install_nginx_playbook.yml file with the following content to install Nginx and render a webpage using the nginx-role:
- Refer to the source code provided above
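The playbook itself is the one referenced above. A minimal sketch of what it might contain, assuming the servers group name used later in the inventories and the nginx-role created in the next step, is:

```yaml
---
- name: Install and configure Nginx
  hosts: servers
  become: true

  roles:
    - nginx-role
```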
After completing the above steps, your Ansible directory structure should look like this:
ansible
├── inventories
│ ├── dev
│ ├── prod
│ └── stg
├── playbooks
│   └── install_nginx_playbook.yml

Here are the steps to initialize the nginx-role using Ansible Galaxy, which will generate the necessary folder structure for managing all tasks, files, handlers, templates, and variables related to the Nginx role.
If you're not already in the playbooks directory, navigate to it:
cd ansible/playbooks

Now, use the ansible-galaxy command to initialize the nginx-role:

ansible-galaxy role init nginx-role

This will create the following directory structure within the nginx-role folder:
nginx-role
├── README.md
├── defaults
│ └── main.yml
├── files
├── handlers
│ └── main.yml
├── meta
│ └── main.yml
├── tasks
│ └── main.yml
├── templates
├── tests
│ ├── inventory
│ └── test.yml
└── vars
    └── main.yml

Now that your role structure is ready, you can add your custom tasks and files.
Edit the tasks/main.yml file under the nginx-role/tasks/ directory. This file will contain all the steps to install, configure, and manage the Nginx service. Here's the content for your tasks/main.yml:
- Refer to the source code provided above
This will ensure that:

- Nginx is installed at the latest version.
- The Nginx service is enabled and starts automatically.
- The index.html file is copied to the /var/www/html directory, which is where the default Nginx webpage is served from.
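A minimal sketch of such a tasks file, assuming Ubuntu targets and the apt module, is shown below; the task names and file ownership are illustrative assumptions.

```yaml
---
- name: Install the latest version of Nginx
  apt:
    name: nginx
    state: latest
    update_cache: yes

- name: Ensure Nginx is enabled and started
  service:
    name: nginx
    state: started
    enabled: yes

- name: Copy the custom index.html to the web root
  copy:
    src: index.html
    dest: /var/www/html/index.html
    owner: www-data
    group: www-data
    mode: "0644"
```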
You can add an index.html file under the nginx-role/files/ directory and customize it as needed. Here's a simplified version with basic content:
- Refer to the source code provided above
Note: You can replace this HTML content with your own custom webpage content as needed. The goal here is to serve a simple webpage as part of the Nginx configuration.
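For illustration only, a simple index.html could look like this:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Multi-Environment DevOps Project</title>
  </head>
  <body>
    <h1>Nginx is up and running!</h1>
    <p>Deployed with Terraform and Ansible.</p>
  </body>
</html>
```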
8. To add the update_inventories.sh script to your Ansible directory and integrate it with your existing setup, follow these steps:
In your ansible directory, create a new file named update_inventories.sh with the following content. This script will dynamically update the inventory files for dev, stg, and prod environments based on the IPs fetched from the Terraform outputs.
- Refer to the source code provided above
This script will:

- Navigate to the Terraform directory and fetch the public IPs of the instances for the dev, stg, and prod environments.
- Dynamically generate or update the corresponding inventory files in the ansible/inventories directory.
- Add common variables for all servers in each environment's inventory file.
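The original script is the one referenced above. A minimal sketch of how such a script could work is shown below; it assumes the Terraform outputs are named dev_public_ips, stg_public_ips, and prod_public_ips (as in the main.tf sketch), that jq is installed, and that the script is run from the ansible directory.

```bash
#!/bin/bash
# update_inventories.sh -- regenerate Ansible inventory files from Terraform outputs
# Assumed output names: dev_public_ips, stg_public_ips, prod_public_ips
set -euo pipefail

TERRAFORM_DIR="../terraform"     # assumed location of the Terraform project
INVENTORY_DIR="./inventories"
SSH_KEY="$HOME/devops-key"       # path to the private key generated earlier

for env in dev stg prod; do
  echo "Updating inventory for ${env}..."

  # Fetch the list of public IPs for this environment from the Terraform outputs
  ips=$(terraform -chdir="$TERRAFORM_DIR" output -json "${env}_public_ips" | jq -r '.[]')

  # Rewrite the inventory for this environment
  # (assumes dev/stg/prod are plain inventory files; if you created them as
  # directories, write to "${INVENTORY_DIR}/${env}/hosts" instead)
  inventory_file="${INVENTORY_DIR}/${env}"
  {
    echo "[servers]"
    i=1
    for ip in $ips; do
      echo "server${i} ansible_host=${ip}"
      i=$((i + 1))
    done
    echo ""
    echo "[servers:vars]"
    echo "ansible_user=ubuntu"
    echo "ansible_ssh_private_key_file=${SSH_KEY}"
    echo "ansible_python_interpreter=/usr/bin/python3"
  } > "$inventory_file"
done

echo "All inventories updated."
```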
After adding the script, your ansible directory should look like this:
ansible
├── inventories
│ ├── dev
│ ├── prod
│ └── stg
├── playbooks
│ ├── install_nginx_playbook.yml
│ └── nginx-role
│ ├── README.md
│ ├── defaults
│ │ └── main.yml
│ ├── files
│ │ └── index.html
│ ├── handlers
│ │ └── main.yml
│ ├── meta
│ │ └── main.yml
│ ├── tasks
│ │ └── main.yml
│ ├── templates
│ ├── tests
│ │ ├── inventory
│ │ └── test.yml
│ └── vars
│ └── main.yml
└── update_inventories.sh

Before running the update_inventories.sh script, ensure that it is executable. You can do this by running the following command:

chmod +x update_inventories.sh

You can now execute the script to update the inventory files with the IPs fetched from Terraform:

./update_inventories.sh

After running the script, check the inventories directory. The dev, stg, and prod inventory files should now be updated with the IPs of your servers and the necessary variables.
Example contents of the dev inventory file:
[servers]
server1 ansible_host=192.168.1.10
server2 ansible_host=192.168.1.11
[servers:vars]
ansible_user=ubuntu
ansible_ssh_private_key_file=/home/amitabh/devops-key
ansible_python_interpreter=/usr/bin/python3

Repeat this process for the stg and prod environments as well.
Now that your inventory files are updated, you can reference them when running your Ansible playbooks using the -i option (from the ansible directory):

- For the dev inventory:

  ansible-playbook -i inventories/dev playbooks/install_nginx_playbook.yml

- For the stg inventory:

  ansible-playbook -i inventories/stg playbooks/install_nginx_playbook.yml

- For the prod inventory:

  ansible-playbook -i inventories/prod playbooks/install_nginx_playbook.yml

Each command executes the playbook against the corresponding inventory (dev, stg, or prod).
Step 7: Verify on all the servers whether the HTML page is visible (for each inventory: dev, stg, and prod).
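For example, you can check each server from the command line; the IP below is a placeholder, so substitute the public IPs from your inventories.

```bash
# Fetch the page served by Nginx on one of the servers; repeat for every IP
# in the dev, stg, and prod inventories.
curl http://<your-ec2-public-ip>

# Alternatively, open http://<your-ec2-public-ip> in a browser and confirm
# that the custom index.html is displayed.
```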
At this point, the complete project structure looks like this:

.
├── README.md
├── ansible
│ ├── inventories
│ │ ├── dev
│ │ ├── prod
│ │ └── stg
│ ├── playbooks
│ │ ├── install_nginx_playbook.yml
│ │ └── nginx-role
│ │ ├── README.md
│ │ ├── defaults
│ │ │ └── main.yml
│ │ ├── files
│ │ │ └── index.html
│ │ ├── handlers
│ │ │ └── main.yml
│ │ ├── meta
│ │ │ └── main.yml
│ │ ├── tasks
│ │ │ └── main.yml
│ │ ├── templates
│ │ ├── tests
│ │ │ ├── inventory
│ │ │ └── test.yml
│ │ └── vars
│ │ └── main.yml
│ └── update_inventories.sh
└── terraform
├── infra
│ ├── bucket.tf
│ ├── dynamodb.tf
│ ├── ec2.tf
│ ├── output.tf
│ └── variable.tf
├── main.tf
├── providers.tf
├── terraform.tf
├── terraform.tfstate
└── terraform.tfstate.backup

After successfully implementing and managing your infrastructure across multiple environments with Terraform and Ansible, it's time to clean up and destroy all the resources that were provisioned. This step ensures that no resources are left running, which helps avoid unnecessary costs.
To destroy the infrastructure, follow these simple steps:
1. Navigate to the Terraform directory. Go to the directory where your Terraform configuration files are located; this is typically where your main.tf file and the other Terraform scripts are present.

   cd /path/to/terraform/directory

2. Run terraform destroy. Execute the following command to destroy all the resources that were created by Terraform. The --auto-approve flag ensures that you won't be prompted to confirm the destruction.

   terraform destroy --auto-approve
This command will:

- Destroy all EC2 instances
- Delete all S3 buckets
- Remove any databases or other resources provisioned during the setup
Once the command finishes executing, your infrastructure will be completely torn down, and you will have successfully cleaned up all resources.

This is the final step to ensure that you have a well-managed infrastructure setup that can be recreated anytime using Terraform and Ansible.
Note: Be cautious when running terraform destroy as it will remove all resources, and data in your infrastructure will be lost. Always ensure that you’ve backed up any important data before performing the destruction.
Congratulations on successfully implementing and managing a multi-environment infrastructure with Terraform and Ansible! Here's a quick recap of what you've achieved:
- Infrastructure Setup with Terraform:
  - You began by defining your infrastructure using Terraform, which included provisioning EC2 instances, S3 buckets, and databases across multiple environments: development, staging, and production.
  - You followed best practices in managing these resources using Terraform's modular approach and state management.

- Automating Server Configuration with Ansible:
  - After setting up your infrastructure, you leveraged Ansible for configuration management. You initialized and structured an Nginx role using Ansible Galaxy, allowing you to efficiently manage the installation and configuration of Nginx across all environments.
  - You also created dynamic inventories for each environment, making it easy to manage server configurations in a scalable way.

- Environment-Specific Configurations:
  - By dynamically fetching IPs from Terraform outputs and updating your Ansible inventories, you ensured that each environment had its own specific configuration, enabling streamlined management of resources across dev, staging, and production environments.

- Simplified Infrastructure Management:
  - With Ansible, you automated the installation, configuration, and updates of necessary software (like Nginx), reducing manual effort and human error.
  - The use of Terraform and Ansible together allowed you to achieve both infrastructure provisioning and configuration management in a clean, reproducible, and automated way.

- Final Cleanup:
  - As a final step, you executed the terraform destroy command to tear down the infrastructure that was created. This ensured that you could clean up all resources, including instances, databases, and storage, once the project was completed.
This project has provided you with hands-on experience in managing infrastructure and configurations for multiple environments using industry-standard tools like Terraform and Ansible. You have successfully automated your infrastructure management, from provisioning to configuration, across different environments.
You can now apply these skills to any real-world scenario, ensuring that infrastructure is managed efficiently, securely, and consistently across any environment.