
Commit d536602

Merge pull request hashicorp#81 from hashicorp/openshift-3.11
openshift 3.11
2 parents ca3d986 + decec49 commit d536602

7 files changed: 43 additions & 21 deletions

infrastructure-as-code/k8s-cluster-openshift-aws/README.md

Lines changed: 5 additions & 5 deletions
@@ -1,12 +1,12 @@
 # OpenShift Cluster in AWS
-This guide provisions an OpenShift Origin 3.7 cluster in AWS with 1 master node, 1 client node, and 1 bastion host. It uses ansible-playbook to deploy OpenShift to the master and client nodes from the bastion host after using Terraform to provision the AWS infrastructure. It is based on a [terraform-aws-openshift](https://github.com/dwmkerr/terraform-aws-openshift) repository created by Dave Kerr.
+This guide provisions an OpenShift Origin 3.11 cluster in AWS with 1 master node, 1 client node, and 1 bastion host. It uses ansible-playbook to deploy OpenShift to the master and client nodes from the bastion host after using Terraform to provision the AWS infrastructure. It is based on a [terraform-aws-openshift](https://github.com/dwmkerr/terraform-aws-openshift) repository created by Dave Kerr.
 
 While the original repository required the user to manually run ansible-playbook after provisioning the AWS infrastructure with Terraform, this guide uses a Terraform [remote-exec provisioner](https://www.terraform.io/docs/provisioners/remote-exec.html) to do that. It also uses several additional remote-exec and local-exec provisioners to automate the rest of the deployment, retrieve the OpenShift cluster keys, and write them to outputs. This is important since it allows workspaces that deploy pods and services to the cluster to do that via workspace state sharing without any manual copying of the cluster keys.
 
 ## Reference Material
 * [OpenShift Origin](https://www.openshift.org/): the open source version of OpenShift, Red Hat's commercial implementation of Kubernetes.
 * [Kubernetes](https://kubernetes.io/): the open source system for automating deployment and management of containerized applications.
-* [openshift-ansible](https://github.com/openshift/openshift-ansible/tree/release-3.7): Ansible roles and playbooks for installing and managing OpenShift 3.7 clusters with Ansible.
+* [openshift-ansible](https://github.com/openshift/openshift-ansible/tree/release-3.11): Ansible roles and playbooks for installing and managing OpenShift 3.11 clusters with Ansible.
 * [ansible-playbook](https://docs.ansible.com/ansible/2.4/ansible-playbook.html): the actual Ansible tool used to deploy the OpenShift cluster. This is used in the install-from-bastion.sh script.
 
 ## Estimated Time to Complete
@@ -16,7 +16,7 @@ While the original repository required the user to manually run ansible-playbook
 Our target persona is a developer or operations engineer who wants to provision an OpenShift cluster into AWS.
 
 ## Challenge
-The [advanced installation method](https://docs.openshift.com/container-platform/3.7/install_config/install/advanced_install.html) for OpenShift uses ansible-playbook to deploy OpenShift. Before doing that, the deployer must first provision some infrastructure and then configure an Ansible inventory file with suitable settings. Typically, ansible-playbook would be manually run on a bastion host even if a tool like Terraform had been used to provision the infrastructure.
+The [installation method](https://docs.openshift.com/container-platform/3.11/install/index.html) for OpenShift uses ansible-playbook to deploy OpenShift. Before doing that, the deployer must first provision some infrastructure and then configure an Ansible inventory file with suitable settings. Typically, ansible-playbook would be manually run on a bastion host even if a tool like Terraform had been used to provision the infrastructure.
 
 ## Solution
 This guide combines and completely automates the two steps mentioned above:
@@ -64,14 +64,14 @@ EOF
 
 1. If you do not already have a Terraform Enterprise (TFE) account, self-register for an evaluation at https://app.terraform.io/account/new.
 1. After getting access to your TFE account, create an organization for yourself. You might also want to review the [Getting Started](https://www.terraform.io/docs/enterprise/getting-started/index.html) documentation.
-1. Connect your TFE organization to GitHub. See the [Configuring Github Access](https://www.terraform.io/docs/enterprise/vcs/github.html)documentation.
+1. Connect your TFE organization to GitHub. See the [Configuring GitHub Access](https://www.terraform.io/docs/enterprise/vcs/github.html) documentation.
 
 If you want to use open source Terraform instead of TFE, you can create a copy of the included openshift.tfvars.example file, calling it openshift.auto.tfvars, set values for the variables in it, run `terraform init`, and then run `terraform apply`.
 
 ### Step 3: Configure a Terraform Enterprise Workspace
 1. Fork this repository by clicking the Fork button in the upper right corner of the screen and selecting your own personal GitHub account or organization.
 1. Create a workspace in your TFE organization called k8s-cluster-openshift.
-1. Configure the workspace to connect to the fork of this repository in your own Github account.
+1. Configure the workspace to connect to the fork of this repository in your own GitHub account.
 1. Set the Terraform Working Directory to "infrastructure-as-code/k8s-cluster-openshift-aws".
 1. On the Variables tab of your workspace, add the following variables to the Terraform variables: key_name, private_key_data, vault_addr, vault_user, and vault_k8s_auth_path. The first of these must be the name of the key pair you created above. The second must be the actual contents of the private key you downloaded as a pem file. Be sure to mark this variable as sensitive so that it will not be visible after you save your variables. Set vault_addr to the address of your Vault server (e.g., "http://<your_vault_dns>:8200") and vault_user to your username on your Vault server. Finally, set vault_k8s_auth_path to something like "\<your username\>-openshift". (A sketch of these variables as a tfvars file appears after this diff.)
 1. HashiCorp SEs should also set the owner and ttl variables which are used by the AWS Lambda reaper function that terminates old EC2 instances.

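For reference, the variables named in the steps above could be captured in an openshift.auto.tfvars file for the open source Terraform path roughly as follows. This is a sketch: every value is a hypothetical placeholder, and only the variable names come from the README.

```hcl
# openshift.auto.tfvars -- all values below are hypothetical placeholders;
# the variable names come from the README steps above.
key_name            = "my-keypair"                        # name of the EC2 key pair created earlier
private_key_data    = "-----BEGIN RSA PRIVATE KEY-----\n...\n-----END RSA PRIVATE KEY-----"
vault_addr          = "http://vault.example.com:8200"     # address of your Vault server
vault_user          = "jdoe"                              # your username on the Vault server
vault_k8s_auth_path = "jdoe-openshift"                    # "<your username>-openshift"
```

In TFE these would instead be entered on the workspace's Variables tab, with private_key_data marked sensitive.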
infrastructure-as-code/k8s-cluster-openshift-aws/main.tf

Lines changed: 1 addition & 0 deletions
@@ -131,6 +131,7 @@ resource "null_resource" "configure_k8s" {
 
   provisioner "remote-exec" {
     inline = [
+      "sleep 180",
       "kubectl create -f vault-reviewer.yaml",
       "kubectl create -f vault-reviewer-rbac.yaml",
       "kubectl get serviceaccount vault-reviewer -o yaml > vault-reviewer-service.yaml",

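The added `sleep 180` gives the cluster time to come up before the kubectl commands run. A more adaptive variant (a sketch, not part of this commit; it assumes kubectl is already configured for the connecting user on the master) would poll until the API answers:

```hcl
provisioner "remote-exec" {
  inline = [
    # Poll the API instead of sleeping a fixed 180 seconds; give up after ~10 minutes.
    "for i in $(seq 1 60); do kubectl get nodes > /dev/null 2>&1 && break; sleep 10; done",
    "kubectl create -f vault-reviewer.yaml",
    "kubectl create -f vault-reviewer-rbac.yaml",
  ]
}
```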
infrastructure-as-code/k8s-cluster-openshift-aws/modules/openshift/01-amis.tf

Lines changed: 3 additions & 3 deletions
@@ -1,6 +1,6 @@
 # Define the RHEL 7.2 AMI by:
-# RedHat, Latest, x86_64, EBS, HVM, RHEL 7.2
-data "aws_ami" "rhel7_2" {
+# RedHat, Latest, x86_64, EBS, HVM, RHEL 7.5
+data "aws_ami" "rhel7_5" {
   most_recent = true
 
   owners = ["309956199498"] // Red Hat's account ID.
@@ -22,7 +22,7 @@ data "aws_ami" "rhel7_2" {
 
   filter {
     name = "name"
-    values = ["RHEL-7.2*"]
+    values = ["RHEL-7.5*"]
   }
 }

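Assembled from the two hunks above, the data source after this change looks roughly like the following sketch (any filters that sit between the two hunks are not shown in this diff, so they are omitted here):

```hcl
# Most recent RHEL 7.5 AMI published by Red Hat (account 309956199498).
data "aws_ami" "rhel7_5" {
  most_recent = true

  owners = ["309956199498"] // Red Hat's account ID.

  filter {
    name   = "name"
    values = ["RHEL-7.5*"]
  }
}
```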
infrastructure-as-code/k8s-cluster-openshift-aws/modules/openshift/05-nodes.tf

Lines changed: 2 additions & 2 deletions
@@ -7,7 +7,7 @@ data "template_file" "setup-master" {
 
 // Launch configuration for the master
 resource "aws_instance" "master" {
-  ami = "${data.aws_ami.rhel7_2.id}"
+  ami = "${data.aws_ami.rhel7_5.id}"
   # Master nodes require at least 16GB of memory.
   instance_type = "m4.xlarge"
   subnet_id = "${aws_subnet.public-subnet.id}"
@@ -51,7 +51,7 @@ data "template_file" "setup-node" {
 
 // Create the two nodes.
 resource "aws_instance" "node1" {
-  ami = "${data.aws_ami.rhel7_2.id}"
+  ami = "${data.aws_ami.rhel7_5.id}"
   instance_type = "${var.amisize}"
   subnet_id = "${aws_subnet.public-subnet.id}"
   iam_instance_profile = "${aws_iam_instance_profile.openshift-instance-profile.id}"

infrastructure-as-code/k8s-cluster-openshift-aws/modules/openshift/07-bastion.tf

Lines changed: 1 addition & 0 deletions
@@ -12,6 +12,7 @@ data "template_file" "inventory" {
     master_ip = "${aws_instance.master.public_ip}"
     private_key = "${var.private_key_data}"
     name_tag_prefix = "${var.name_tag_prefix}"
+    region = "${var.region}"
   }
 }

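With the added `region` variable, the full data source would look something like this sketch (the `template` argument and file path are assumptions; only the `vars` block comes from the hunk above):

```hcl
data "template_file" "inventory" {
  # Assumed path: the bastion install script shown later in this commit.
  template = "${file("${path.module}/files/install-from-bastion.sh")}"

  vars {
    master_ip       = "${aws_instance.master.public_ip}"
    private_key     = "${var.private_key_data}"
    name_tag_prefix = "${var.name_tag_prefix}"
    region          = "${var.region}"
  }
}
```

This is how `${region}` becomes available to the new `openshift_clusterid` line that install-from-bastion.sh writes into the Ansible inventory below.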
infrastructure-as-code/k8s-cluster-openshift-aws/modules/openshift/files/install-from-bastion.sh

Lines changed: 30 additions & 11 deletions
@@ -5,19 +5,26 @@ exec > /home/ec2-user/install-openshift.log 2>&1
 
 # Install dev tools and Ansible 2.2
 sudo yum install -y "@Development Tools" python2-pip openssl-devel python-devel gcc libffi-devel
-sudo pip install -Iv ansible==2.4.3.0
+sudo pip install -Iv ansible==2.6.5
 
 # Clone the openshift-ansible repo, which contains the installer.
-git clone -b release-3.7 https://github.com/openshift/openshift-ansible
+#git clone -b release-3.11 https://github.com/openshift/openshift-ansible
+
+# Use a specific commit, since a later one made it illegal to use the openshift_hostname variable.
+git clone -n https://github.com/openshift/openshift-ansible
+cd openshift-ansible
+git checkout 7c8b4f07a46089640bc61b8a9b35fd8b5ed86245
+cd ..
+
 
 # Set up bastion to SSH to other servers
 echo "${private_key}" > /home/ec2-user/.ssh/private-key.pem
 chmod 400 /home/ec2-user/.ssh/private-key.pem
 #chown ec2-user:ec2-user /home/ec2-user/.ssh/private-key.pem
 eval $(ssh-agent)
 ssh-add /home/ec2-user/.ssh/private-key.pem
-ssh-keyscan -t rsa -H master.${name_tag_prefix}-openshift.local >> /home/ec2-user/.ssh/known_hosts
-ssh-keyscan -t rsa -H node1.${name_tag_prefix}-openshift.local >> /home/ec2-user/.ssh/known_hosts
+#ssh-keyscan -t rsa -H master.${name_tag_prefix}-openshift.local >> /home/ec2-user/.ssh/known_hosts
+#ssh-keyscan -t rsa -H node1.${name_tag_prefix}-openshift.local >> /home/ec2-user/.ssh/known_hosts
 
 # Create inventory.cfg file
 #cat > /home/inventory.cfg << EOF
@@ -45,7 +52,7 @@ ansible_become=true
 openshift_deployment_type=origin
 
 # OpenShift Release
-openshift_release=v3.7
+openshift_release=v3.11
 
 # We need a wildcard DNS setup for our public access to services, fortunately
 # we can use the superb xip.io to get one for free.
@@ -54,13 +61,22 @@ openshift_public_hostname=${master_ip}.xip.io
 openshift_master_default_subdomain=${master_ip}.xip.io
 
 # Use an htpasswd file as the identity provider.
-openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
+openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
+#, 'filename': '/etc/origin/master/htpasswd'
 
 # Uncomment the line below to enable metrics for the cluster.
 # openshift_hosted_metrics_deploy=true
 
-openshift_enable_service_catalog=false
-template_service_broker_install=false
+# Set the cluster_id.
+openshift_clusterid="openshift-cluster-${region}"
+
+# Define the standard set of node groups, as per:
+# https://github.com/openshift/openshift-ansible#node-group-definition-and-mapping
+openshift_node_groups=[{'name': 'node-config-master', 'labels': ['node-role.kubernetes.io/master=true']}, {'name': 'node-config-infra', 'labels': ['node-role.kubernetes.io/infra=true']}, {'name': 'node-config-compute', 'labels': ['node-role.kubernetes.io/compute=true']}, {'name': 'node-config-master-infra', 'labels': ['node-role.kubernetes.io/infra=true,node-role.kubernetes.io/master=true']}, {'name': 'node-config-all-in-one', 'labels': ['node-role.kubernetes.io/infra=true,node-role.kubernetes.io/master=true,node-role.kubernetes.io/compute=true']}]
+
+
+#openshift_enable_service_catalog=false
+#template_service_broker_install=false
 
 #openshift_template_service_broker_namespaces=['openshift']
@@ -79,15 +95,18 @@ master.${name_tag_prefix}-openshift.local openshift_hostname=master.${name_tag_p
 
 # host group for nodes, includes region info
 [nodes]
-master.${name_tag_prefix}-openshift.local openshift_hostname=master.${name_tag_prefix}-openshift.local openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_schedulable=true
-node1.${name_tag_prefix}-openshift.local openshift_hostname=node1.${name_tag_prefix}-openshift.local openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
+master.${name_tag_prefix}-openshift.local openshift_hostname=master.${name_tag_prefix}-openshift.local openshift_node_group_name='node-config-master-infra' openshift_schedulable=true
+node1.${name_tag_prefix}-openshift.local openshift_hostname=node1.${name_tag_prefix}-openshift.local openshift_node_group_name='node-config-compute'
 EOF
 
 # Change ownership of file to ec2-user
 #sudo chown ec2-user:ec2-user /home/ec2-user/inventory.cfg
 
 # Run the playbook.
-ANSIBLE_HOST_KEY_CHECKING=False /usr/local/bin/ansible-playbook -i ~/inventory.cfg ~/openshift-ansible/playbooks/byo/config.yml
+#ANSIBLE_HOST_KEY_CHECKING=False /usr/local/bin/ansible-playbook -i ~/inventory.cfg ~/openshift-ansible/playbooks/byo/config.yml
+
+ANSIBLE_HOST_KEY_CHECKING=False /usr/local/bin/ansible-playbook -i ~/inventory.cfg ./openshift-ansible/playbooks/prerequisites.yml
+ANSIBLE_HOST_KEY_CHECKING=False /usr/local/bin/ansible-playbook -i ~/inventory.cfg ./openshift-ansible/playbooks/deploy_cluster.yml
 
 # uncomment for verbose! -vvv

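For context, the README states that a Terraform remote-exec provisioner drives this script rather than the user running ansible-playbook by hand. The wiring would look roughly like the following sketch; the resource name, connection details, and file destination are assumptions, not content of this commit:

```hcl
resource "null_resource" "install_openshift" {
  connection {
    type        = "ssh"
    user        = "ec2-user"
    host        = "${aws_instance.bastion.public_ip}"   // assumed bastion resource name
    private_key = "${var.private_key_data}"
  }

  # Copy the rendered script (with master_ip, private_key, name_tag_prefix,
  # and region interpolated) onto the bastion, then run it there.
  provisioner "file" {
    content     = "${data.template_file.inventory.rendered}"
    destination = "/home/ec2-user/install-from-bastion.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /home/ec2-user/install-from-bastion.sh",
      "/home/ec2-user/install-from-bastion.sh",
    ]
  }
}
```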
infrastructure-as-code/k8s-cluster-openshift-aws/scripts/postinstall-master.sh

Lines changed: 1 addition & 0 deletions
@@ -6,6 +6,7 @@
 sleep 120
 
 # Create an htpasswd file, we'll use htpasswd auth for OpenShift.
+sudo mkdir -p /etc/origin/master
 sudo htpasswd -cb /etc/origin/master/htpasswd admin 123
 oc adm policy add-cluster-role-to-user cluster-admin admin
