Commit 5a89a06

committed
added self-serve-infrastructure/k8s-services-openshift
1 parent ef0f905 commit 5a89a06

File tree

7 files changed: +276 -1 lines changed

Lines changed: 35 additions & 0 deletions
@@ -0,0 +1,35 @@
# OpenShift Pods and Services

Terraform configuration for deploying OpenShift pods and services to an existing OpenShift cluster.

## Introduction

This Terraform configuration deploys two pods exposed as services and is meant to be used in Terraform Enterprise (TFE). The first pod runs a Python application called "cats-and-dogs-frontend" that lets users vote for their favorite type of pet. It stores its data in the second pod, "cats-and-dogs-backend", which runs a Redis database. The Terraform configuration replicates what a user could do with the [Kubernetes CLI](https://kubernetes.io/docs/tasks/tools/install-kubectl/), `kubectl`.

It uses the kubernetes_pod and kubernetes_service resources of Terraform's Kubernetes Provider to deploy the pods and services into an OpenShift cluster previously provisioned by Terraform. It also uses the terraform_remote_state data source to copy the outputs of the targeted cluster's TFE workspace directly into the Kubernetes Provider block, avoiding the need to manually copy those outputs into variables of the TFE services workspace. The vault_addr, vault_user, and vault_k8s_auth_backend outputs from the cluster workspace are consumed the same way. Note that it also uses a remote-exec provisioner to create an OpenShift project (namespace) called "cats-and-dogs" and a Kubernetes service account called "cats-and-dogs" which the pods use. After doing that, it uses additional provisioners to retrieve the JWT token of the cats-and-dogs service account from OpenShift.
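The token retrieval performed by those provisioners boils down to pulling the base64-encoded `token` field out of the service account's secret and decoding it. Here is a simplified, self-contained sketch: the secret YAML below is a fabricated stand-in for what `kubectl get secret <secret-name> -o yaml` returns on the cluster, and the grep pattern is simplified relative to the pipeline in the actual provisioner.

```shell
# Stand-in for the secret YAML that would be fetched from the cluster.
# The data values are fabricated base64 strings for illustration only.
cat > cats-and-dogs-secret.yaml <<'EOF'
apiVersion: v1
data:
  ca.crt: Y2EtY2VydA==
  namespace: Y2F0cy1hbmQtZG9ncw==
  token: ZXhhbXBsZS1qd3QtdG9rZW4=
kind: Secret
EOF

# Extract the base64-encoded token field and decode it into a file,
# mirroring the cut/sed/base64 pipeline used by the remote-exec provisioner.
grep '^  token:' cats-and-dogs-secret.yaml | cut -d ':' -f 2 | sed 's/ //' | base64 -d > cats-and-dogs-token
cat cats-and-dogs-token
```

On the real cluster, the secret name is first discovered from the service account (the provisioner greps `cats-and-dogs-token` out of the service account's YAML) before the secret itself is fetched.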
Another important aspect of this configuration is that both the frontend application and the Redis database get the redis password from a Vault server after using the Kubernetes JWT token of the cats-and-dogs service account to authenticate against Vault's [Kubernetes Auth Method](https://www.vaultproject.io/docs/auth/kubernetes.html). This has two benefits: the redis password is not stored in the Terraform code, and neither the application developers nor the DBAs managing Redis ever learn what it is. Only the security team that stores the password in Vault knows it. The redis_db password is stored in the Vault server under "secret/<vault_user>/kubernetes/cats-and-dogs", where \<vault_user\> is the Vault username.
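The login flow against Vault's Kubernetes auth method can be sketched with Vault's HTTP API. This is an illustrative sketch, not the applications' actual code: the JWT value and the role name "cats-and-dogs" are assumptions, and the real VAULT_ADDR and auth backend path come from the pods' environment variables.

```shell
# Build the Kubernetes auth login payload from the service account JWT.
# Both values below are placeholders for illustration.
JWT="example-jwt-token"   # in the pods this arrives via the K8S_TOKEN env variable
LOGIN_PAYLOAD=$(printf '{"jwt": "%s", "role": "cats-and-dogs"}' "$JWT")
echo "$LOGIN_PAYLOAD"

# With a reachable Vault server, the login and secret read would look like:
#   curl -s -X POST "$VAULT_ADDR/v1/auth/$VAULT_K8S_BACKEND/login" -d "$LOGIN_PAYLOAD"
# The auth.client_token in the response would then authorize a read of
#   secret/$VAULT_USER/kubernetes/cats-and-dogs
```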
## Deployment Prerequisites

1. First deploy an OpenShift cluster with Terraform by using the Terraform code in the [k8s-cluster-openshift-aws](../../infrastructure-as-code/k8s-cluster-openshift-aws) directory of this repository and pointing a TFE workspace at it.
1. We assume that you have already satisfied all the prerequisites for deploying an OpenShift cluster described by the above link.
1. We also assume that you have already forked this repository and cloned your fork to your laptop.
## Deployment Steps

Execute the following steps to deploy the pods and services to your OpenShift cluster:

1. Create a new TFE workspace called k8s-services-openshift.
1. Configure your workspace to connect to the fork of this repository in your own GitHub account.
1. Set the Terraform Working Directory to "self-serve-infrastructure/k8s-services-openshift".
1. Set the tfe_organization variable in your workspace to the name of the TFE organization containing your OpenShift cluster workspace.
1. Set the k8s_cluster_workspace variable in your workspace to the name of the workspace you used to deploy your OpenShift cluster.
1. Queue a plan for the services workspace in TFE.
1. Confirm that you want to apply the plan.
1. Finally, enter the cats_and_dogs_dns output in a browser. You should see the "Pets Voting App" page.
1. Vote for your favorite pets.
## Cleanup

Execute the following steps to delete the cats-and-dogs pods and services from your OpenShift cluster:

1. Define an environment variable CONFIRM_DESTROY with value 1 on the Variables tab of your services workspace.
1. Queue a Destroy plan in TFE from the Settings tab of your services workspace.
1. On the Latest Run tab of your services workspace, make sure that the Plan was successful and then click the "Confirm and Apply" button to actually remove the cats-and-dogs pods and services.

self-serve-infrastructure/k8s-services-openshift/cats-and-dogs-token

Whitespace-only changes.
Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cats-and-dogs
  namespace: cats-and-dogs
Lines changed: 216 additions & 0 deletions
@@ -0,0 +1,216 @@
terraform {
  required_version = ">= 0.11.5"
}

data "terraform_remote_state" "k8s_cluster" {
  backend = "atlas"
  config {
    name = "${var.tfe_organization}/${var.k8s_cluster_workspace}"
  }
}

provider "kubernetes" {
  host                   = "${data.terraform_remote_state.k8s_cluster.k8s_endpoint}"
  client_certificate     = "${base64decode(data.terraform_remote_state.k8s_cluster.k8s_master_auth_client_certificate)}"
  client_key             = "${base64decode(data.terraform_remote_state.k8s_cluster.k8s_master_auth_client_key)}"
  cluster_ca_certificate = "${base64decode(data.terraform_remote_state.k8s_cluster.k8s_master_auth_cluster_ca_certificate)}"
}

resource "null_resource" "service_account" {

  provisioner "file" {
    source      = "cats-and-dogs.yaml"
    destination = "~/cats-and-dogs.yaml"
  }

  provisioner "remote-exec" {
    inline = [
      "oc new-project cats-and-dogs --description=\"cats and dogs project\" --display-name=\"cats-and-dogs\"",
      "kubectl create -f cats-and-dogs.yaml",
      "kubectl get serviceaccount cats-and-dogs -o yaml > cats-and-dogs-service.yaml",
      "kubectl get secret $(grep \"cats-and-dogs-token\" cats-and-dogs-service.yaml | cut -d ':' -f 2 | sed 's/ //') -o yaml > cats-and-dogs-secret.yaml",
      "sed -n 6,6p cats-and-dogs-secret.yaml | cut -d ':' -f 2 | sed 's/ //' | base64 -d > cats-and-dogs-token"
    ]
  }

  connection {
    host         = "${data.terraform_remote_state.k8s_cluster.master_public_dns}"
    type         = "ssh"
    agent        = false
    user         = "ec2-user"
    private_key  = "${var.private_key_data}"
    bastion_host = "${data.terraform_remote_state.k8s_cluster.bastion_public_dns}"
  }
}

resource "null_resource" "get_service_account_token" {
  provisioner "remote-exec" {
    inline = [
      "scp -o StrictHostKeyChecking=no -i ~/.ssh/private-key.pem ec2-user@${data.terraform_remote_state.k8s_cluster.master_public_dns}:~/cats-and-dogs-token cats-and-dogs-token"
    ]

    connection {
      host        = "${data.terraform_remote_state.k8s_cluster.bastion_public_dns}"
      type        = "ssh"
      agent       = false
      user        = "ec2-user"
      private_key = "${var.private_key_data}"
    }
  }

  provisioner "local-exec" {
    command = "echo \"${var.private_key_data}\" > private-key.pem"
  }

  provisioner "local-exec" {
    command = "chmod 400 private-key.pem"
  }

  provisioner "local-exec" {
    command = "scp -o StrictHostKeyChecking=no -i private-key.pem ec2-user@${data.terraform_remote_state.k8s_cluster.bastion_public_dns}:~/cats-and-dogs-token cats-and-dogs-token"
  }

  depends_on = ["null_resource.service_account"]
}

data "null_data_source" "retrieve_token_from_file" {
  inputs = {
    cats_and_dogs_token = "${file("cats-and-dogs-token")}"
  }
  depends_on = ["null_resource.get_service_account_token"]
}

resource "kubernetes_pod" "cats-and-dogs-backend" {
  metadata {
    name      = "cats-and-dogs-backend"
    namespace = "cats-and-dogs"
    labels {
      App = "cats-and-dogs-backend"
    }
  }
  spec {
    service_account_name = "cats-and-dogs"
    container {
      image             = "rberlind/cats-and-dogs-backend:k8s-auth"
      image_pull_policy = "Always"
      name              = "cats-and-dogs-backend"
      command           = ["/app/start_redis.sh"]
      env = {
        name  = "VAULT_ADDR"
        value = "${data.terraform_remote_state.k8s_cluster.vault_addr}"
      }
      env = {
        name  = "VAULT_K8S_BACKEND"
        value = "${data.terraform_remote_state.k8s_cluster.vault_k8s_auth_backend}"
      }
      env = {
        name  = "VAULT_USER"
        value = "${data.terraform_remote_state.k8s_cluster.vault_user}"
      }
      env = {
        name  = "K8S_TOKEN"
        value = "${data.null_data_source.retrieve_token_from_file.outputs["cats_and_dogs_token"]}"
      }
      port {
        container_port = 6379
      }
    }
  }
}

resource "kubernetes_service" "cats-and-dogs-backend" {
  metadata {
    name      = "cats-and-dogs-backend"
    namespace = "cats-and-dogs"
  }
  spec {
    selector {
      App = "${kubernetes_pod.cats-and-dogs-backend.metadata.0.labels.App}"
    }
    port {
      port        = 6379
      target_port = 6379
    }
  }
}

resource "kubernetes_pod" "cats-and-dogs-frontend" {
  metadata {
    name      = "cats-and-dogs-frontend"
    namespace = "cats-and-dogs"
    labels {
      App = "cats-and-dogs-frontend"
    }
  }
  spec {
    service_account_name = "cats-and-dogs"
    container {
      image             = "rberlind/cats-and-dogs-frontend:k8s-auth"
      image_pull_policy = "Always"
      name              = "cats-and-dogs-frontend"
      env = {
        name  = "REDIS"
        value = "cats-and-dogs-backend"
      }
      env = {
        name  = "VAULT_ADDR"
        value = "${data.terraform_remote_state.k8s_cluster.vault_addr}"
      }
      env = {
        name  = "VAULT_K8S_BACKEND"
        value = "${data.terraform_remote_state.k8s_cluster.vault_k8s_auth_backend}"
      }
      env = {
        name  = "VAULT_USER"
        value = "${data.terraform_remote_state.k8s_cluster.vault_user}"
      }
      env = {
        name  = "K8S_TOKEN"
        value = "${data.null_data_source.retrieve_token_from_file.outputs["cats_and_dogs_token"]}"
      }
      port {
        container_port = 80
      }
    }
  }

  depends_on = ["kubernetes_service.cats-and-dogs-backend"]
}

resource "kubernetes_service" "cats-and-dogs-frontend" {
  metadata {
    name      = "cats-and-dogs-frontend"
    namespace = "cats-and-dogs"
  }
  spec {
    selector {
      App = "${kubernetes_pod.cats-and-dogs-frontend.metadata.0.labels.App}"
    }
    port {
      port        = 80
      target_port = 80
    }
    type = "LoadBalancer"
  }
}

resource "null_resource" "expose_route" {

  provisioner "remote-exec" {
    inline = [
      "oc expose service cats-and-dogs-frontend --hostname=cats-and-dogs-frontend.${data.terraform_remote_state.k8s_cluster.master_public_ip}.xip.io"
    ]
  }

  connection {
    host         = "${data.terraform_remote_state.k8s_cluster.master_public_dns}"
    type         = "ssh"
    agent        = false
    user         = "ec2-user"
    private_key  = "${var.private_key_data}"
    bastion_host = "${data.terraform_remote_state.k8s_cluster.bastion_public_dns}"
  }

  depends_on = ["kubernetes_service.cats-and-dogs-frontend"]
}
Lines changed: 7 additions & 0 deletions
@@ -0,0 +1,7 @@
output "cats_and_dogs_dns" {
  value = "http://cats-and-dogs-frontend.${data.terraform_remote_state.k8s_cluster.master_public_ip}.xip.io"
}

output "cats_and_dogs_token" {
  value = "${data.null_data_source.retrieve_token_from_file.outputs["cats_and_dogs_token"]}"
}
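The cats_and_dogs_dns output relies on the xip.io wildcard DNS service, which resolves any hostname of the form \<name\>.\<ip\>.xip.io to the embedded IP address, so no DNS records need to be created for the exposed route. A minimal shell sketch of the same URL construction; the IP below is an example placeholder, not a real cluster output:

```shell
# Example only: in the real configuration, master_public_ip comes from the
# cluster workspace's outputs via terraform_remote_state.
MASTER_PUBLIC_IP="203.0.113.10"
CATS_AND_DOGS_DNS="http://cats-and-dogs-frontend.${MASTER_PUBLIC_IP}.xip.io"
echo "$CATS_AND_DOGS_DNS"
```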
Lines changed: 12 additions & 0 deletions
@@ -0,0 +1,12 @@
variable "tfe_organization" {
  description = "TFE organization"
  default     = "RogerBerlind"
}

variable "k8s_cluster_workspace" {
  description = "workspace to use for the k8s cluster"
}

variable "private_key_data" {
  description = "contents of the private key"
}

self-serve-infrastructure/k8s-services/README.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ This Terraform configuration deploys two pods exposed as services. It is meant t
 
 It uses the kubernetes_pod and kubernetes_service resources of Terraform's Kubernetes Provider to deploy the pods and services into a Kubernetes cluster previously provisioned by Terraform. It also uses the terraform_remote_state data source to copy the outputs of the targeted cluster's TFE workspace directly into the Kubernetes Provider block, avoiding the need to manually copy the outputs into variables of the TFE services workspace. It also uses the vault_addr, vault_user, and vault_k8s_auth_backend outputs from the cluster workspace. Note that it also creates a Kubernetes service account called "cats-and-dogs" which the pods use.
 
-Another important aspect of this configuration is that both the frontend application and the redis database get the redis password from a Vault server after using the Kubernetes JWT token of the cats-and-dogs service account to authenticate against Vault's Kubernetes auth backend. This has the benefits that the redis password is not stored in the Terraform code and that neither the application developers nor the DBAs managing Redis need to know what the redis password is. Only the security team that stores the password in Vault know it. The redis_db password is stored in the Vault server under "secret/<vault_user>/kubernetes/cats-and-dogs" where \<vault_user\> is the Vault username.
+Another important aspect of this configuration is that both the frontend application and the redis database get the redis password from a Vault server after using the Kubernetes JWT token of the cats-and-dogs service account to authenticate against Vault's Kubernetes auth method. This has the benefits that the redis password is not stored in the Terraform code and that neither the application developers nor the DBAs managing Redis need to know what the redis password is. Only the security team that stores the password in Vault know it. The redis_db password is stored in the Vault server under "secret/<vault_user>/kubernetes/cats-and-dogs" where \<vault_user\> is the Vault username.
 
 ## Deployment Prerequisites
 