Commit b6211df

Set up Vagrant for K3s install and add documentation on usage

1 parent: e180892
File tree: 3 files changed (+151 −14 lines)

README.md

Lines changed: 58 additions & 7 deletions
@@ -15,26 +15,77 @@ To do so, ***you will refactor this application into a microservice architecture
* [SQLAlchemy](https://www.sqlalchemy.org/) - Database ORM
* [PostgreSQL](https://www.postgresql.org/) - Relational database
* [PostGIS](https://postgis.net/) - Spatial plug-in for PostgreSQL enabling geographic queries
* [Vagrant](https://www.vagrantup.com/) - Tool for managing virtual deployed environments
* [VirtualBox](https://www.virtualbox.org/) - Hypervisor allowing you to run multiple operating systems
* [K3s](https://k3s.io/) - Lightweight distribution of K8s to easily develop against a local cluster
## Running the app

The project has been set up so that you should be able to get it up and running with Kubernetes.
### Prerequisites

We will install the tools needed to get our environment set up properly.

1. [Install Docker](https://docs.docker.com/get-docker/)
2. [Set up a DockerHub account](https://hub.docker.com/)
3. [Set up `kubectl`](https://rancher.com/docs/rancher/v2.x/en/cluster-admin/cluster-access/kubectl/)
4. [Install VirtualBox](https://www.virtualbox.org/wiki/Downloads) with at least version 6.0
5. [Install Vagrant](https://www.vagrantup.com/docs/installation) with at least version 2.0
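Before provisioning anything, it can help to confirm that the installed versions meet the minimums above. A minimal sketch of such a check -- the `version_ge` helper is our own convenience, not part of Vagrant or VirtualBox, and it relies on GNU `sort -V` for version-aware ordering:

```shell
#!/bin/sh
# Succeeds (exit 0) when version $1 >= version $2, using sort -V
# so that e.g. 6.1.40 compares correctly against 6.0.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

# Example checks against the minimums from the prerequisites list.
# In practice you would feed in `vagrant --version` / `VBoxManage --version`.
version_ge "2.2.19" "2.0" && echo "Vagrant version OK"
version_ge "6.1.40" "6.0" && echo "VirtualBox version OK"
version_ge "5.2.44" "6.0" || echo "VirtualBox too old"
```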
### Environment Setup

To run the application, you will need a K8s cluster running locally and `kubectl` to interface with it. We will be using Vagrant with VirtualBox to run K3s.
#### Initialize K3s

In this project's root, run `vagrant up`:

```bash
$ vagrant up
```

The command will take a while and will leverage VirtualBox to load an [OpenSuse](https://www.opensuse.org/) OS and automatically install [K3s](https://k3s.io/). When we are taking a break from development, we can run `vagrant suspend` to conserve some of our system's resources and `vagrant resume` when we want to bring our resources back up. Some useful Vagrant commands can be found in [this cheatsheet](https://gist.github.com/wpscholar/a49594e2e2b918f4d0c4).
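Since the Vagrantfile in this commit defines a single machine named `master`, you can check its state before suspending or resuming. A sketch of pulling the state column out of `vagrant status` output with awk -- the sample text below is illustrative, not captured from a real run:

```shell
#!/bin/sh
# Extract the state of the "master" machine from `vagrant status` output.
# In a real session you would pipe `vagrant status` in; here a hypothetical
# sample stands in so the parsing can be shown on its own.
sample='Current machine states:

master                    running (virtualbox)'

state=$(printf '%s\n' "$sample" | awk '$1 == "master" { print $2 }')
echo "$state"
```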
#### Set up `kubectl`

After `vagrant up` is done, SSH into the Vagrant environment and retrieve the Kubernetes config file used by `kubectl`. We want to copy the contents of this file into our local environment so that `kubectl` knows how to communicate with the K3s cluster.

```bash
$ vagrant ssh
```

You will now be connected inside of the virtual OS. Run `sudo cat /etc/rancher/k3s/k3s.yaml` to print out the contents of the file. You should see output similar to what is shown below. Note that the output below is just for reference: every configuration is unique, and you should _NOT_ copy the output shown here.

Copy the output of your own command into your clipboard -- we will be pasting it somewhere soon!
```bash
$ sudo cat /etc/rancher/k3s/k3s.yaml

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJWekNCL3FBREFnRUNBZ0VBTUFvR0NDcUdTTTQ5QkFNQ01DTXhJVEFmQmdOVkJBTU1HR3N6Y3kxelpYSjIKWlhJdFkyRkFNVFU1T1RrNE9EYzFNekFlRncweU1EQTVNVE13T1RFNU1UTmFGdzB6TURBNU1URXdPVEU1TVROYQpNQ014SVRBZkJnTlZCQU1NR0dzemN5MXpaWEoyWlhJdFkyRkFNVFU1T1RrNE9EYzFNekJaTUJNR0J5cUdTTTQ5CkFnRUdDQ3FHU000OUF3RUhBMElBQk9rc2IvV1FEVVVXczJacUlJWlF4alN2MHFseE9rZXdvRWdBMGtSN2gzZHEKUzFhRjN3L3pnZ0FNNEZNOU1jbFBSMW1sNXZINUVsZUFOV0VTQWRZUnhJeWpJekFoTUE0R0ExVWREd0VCL3dRRQpBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUFvR0NDcUdTTTQ5QkFNQ0EwZ0FNRVVDSVFERjczbWZ4YXBwCmZNS2RnMTF1dCswd3BXcWQvMk5pWE9HL0RvZUo0SnpOYlFJZ1JPcnlvRXMrMnFKUkZ5WC8xQmIydnoyZXpwOHkKZ1dKMkxNYUxrMGJzNXcwPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    password: 485084ed2cc05d84494d5893160836c9
    username: admin
```
Type `exit` to leave the virtual OS and you will find yourself back in your computer's session. Create the file `~/.kube/config` (or replace it if it already exists) and paste the contents of the `k3s.yaml` output into it.

Afterwards, you can test that `kubectl` works by running a command like `kubectl describe services`. It should not return any errors.
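A quick structural check on the pasted file can catch copy/paste mistakes before you reach for `kubectl`. A sketch using grep/awk on a pared-down kubeconfig -- `check_kubeconfig` and the heredoc sample are our own stand-ins for your real `~/.kube/config`:

```shell
#!/bin/sh
# Sanity-check a kubeconfig: it should name a current context and point
# at the K3s API server that Vagrant forwards to https://127.0.0.1:6443.
check_kubeconfig() {
    grep -q '^current-context:' "$1" || { echo "missing current-context"; return 1; }
    awk '/^ *server:/ { print $2 }' "$1"
}

# Hypothetical stand-in for ~/.kube/config, trimmed to the fields we check.
cat > /tmp/sample-kubeconfig.yaml <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
current-context: default
kind: Config
EOF

check_kubeconfig /tmp/sample-kubeconfig.yaml
```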
### Steps

1. `kubectl apply -f deployment/db-configmap.yaml` - Set up environment variables for the pods
2. `kubectl apply -f deployment/db-secret.yaml` - Set up secrets for the pods
3. `kubectl apply -f deployment/postgres.yaml` - Set up a Postgres database running PostGIS
4. `kubectl apply -f deployment/udaconnect-api` - Set up the service and deployment for the API
5. `kubectl apply -f deployment/udaconnect-app` - Set up the service and deployment for the web app
6. `sh scripts/run_db_command.sh <POD_NAME>` - Seed your database against the `postgres` pod (`kubectl get pods` will give you the `POD_NAME`)

Manually applying each of the individual `yaml` files is cumbersome, but going through each step provides some context on the contents of the starter project. In practice, we would reduce the number of steps by running the command against the directory to apply all of its contents: `kubectl apply -f deployment/`.
Note: The first time you run this project, you will need to seed the database with dummy data. Use the command `sh scripts/run_db_command.sh <POD_NAME>` against the `postgres` pod (`kubectl get pods` will give you the `POD_NAME`). Subsequent runs of `kubectl apply` for making changes to deployments or services shouldn't require you to seed the database again!
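The ordering of the apply steps matters: the ConfigMap and Secret must exist before the pods that reference them, and the database before the services that connect to it. A small sketch that captures that order in one place, with a dry-run flag so the commands can be previewed without a cluster -- the script itself is our own, not part of the starter project (the seeding script still runs separately, since it needs a pod name):

```shell
#!/bin/sh
# Apply the starter manifests in dependency order: config and secrets
# first, then the database, then the API and web app.
apply_all() {
    # With DRY_RUN=1, print the commands instead of invoking kubectl.
    for m in deployment/db-configmap.yaml \
             deployment/db-secret.yaml \
             deployment/postgres.yaml \
             deployment/udaconnect-api \
             deployment/udaconnect-app; do
        if [ "${DRY_RUN:-0}" = "1" ]; then
            echo "kubectl apply -f $m"
        else
            kubectl apply -f "$m"
        fi
    done
}

# Preview the commands without touching a cluster:
DRY_RUN=1
apply_all
```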
### Verifying it Works

Once the project is up and running, you should be able to see 3 deployments and 3 services in Kubernetes:
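To make the check concrete: with `--no-headers`, `kubectl get` prints one line per resource, so counting lines gives the deployment and service totals. A sketch with an illustrative listing standing in for real `kubectl get deployments --no-headers` output (the columns are hypothetical, not captured from a cluster):

```shell
#!/bin/sh
# Count resources from `kubectl get ... --no-headers` style output:
# one line per resource, so counting non-empty lines gives the total.
count_resources() {
    grep -c .
}

# Hypothetical stand-in for `kubectl get deployments --no-headers`.
sample='postgres 1/1 1 1 5m
udaconnect-api 1/1 1 1 4m
udaconnect-app 1/1 1 1 3m'

printf '%s\n' "$sample" | count_resources
```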

Vagrantfile

Lines changed: 93 additions & 0 deletions
@@ -0,0 +1,93 @@
```ruby
# -*- mode: ruby -*-
# vi: set ft=ruby :
default_box = "generic/opensuse42"

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.

  # Every Vagrant development environment requires a box. You can search for
  # boxes at https://vagrantcloud.com/search.

  config.vm.define "master" do |master|
    master.vm.box = default_box
    master.vm.hostname = "master"
    master.vm.network 'private_network', ip: "192.168.0.200", virtualbox__intnet: true
    master.vm.network "forwarded_port", guest: 22, host: 2222, id: "ssh", disabled: true
    master.vm.network "forwarded_port", guest: 22, host: 2000 # Master Node SSH
    master.vm.network "forwarded_port", guest: 6443, host: 6443 # API Access
    for p in 30000..30100 # expose NodePort IPs
      master.vm.network "forwarded_port", guest: p, host: p, protocol: "tcp"
    end
    master.vm.provider "virtualbox" do |v|
      v.memory = "3072"
      v.name = "master"
    end
    master.vm.provision "shell", inline: <<-SHELL
      sudo zypper refresh
      sudo zypper --non-interactive install bzip2
      sudo zypper --non-interactive install etcd
      curl -sfL https://get.k3s.io | sh -
    SHELL
  end

  # Disable automatic box update checking. If you disable this, then
  # boxes will only be checked for updates when the user runs
  # `vagrant box outdated`. This is not recommended.
  # config.vm.box_check_update = false

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine. In the example below,
  # accessing "localhost:8080" will access port 80 on the guest machine.
  # NOTE: This will enable public access to the opened port
  # config.vm.network "forwarded_port", guest: 80, host: 8080

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine and only allow access
  # via 127.0.0.1 to disable public access
  # config.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1"

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  # config.vm.network "private_network", ip: "192.168.33.10"

  # Create a public network, which generally matched to bridged network.
  # Bridged networks make the machine appear as another physical device on
  # your network.
  # config.vm.network "public_network"

  # Share an additional folder to the guest VM. The first argument is
  # the path on the host to the actual folder. The second argument is
  # the path on the guest to mount the folder. And the optional third
  # argument is a set of non-required options.
  # config.vm.synced_folder "../data", "/vagrant_data"

  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:
  #
  # config.vm.provider "virtualbox" do |vb|
  #   # Display the VirtualBox GUI when booting the machine
  #   vb.gui = true
  #
  #   # Customize the amount of memory on the VM:
  #   vb.memory = "1024"
  # end
  #
  # View the documentation for the provider you are using for more
  # information on available options.

  # Enable provisioning with a shell script. Additional provisioners such as
  # Ansible, Chef, Docker, Puppet and Salt are also available. Please see the
  # documentation for more information about their specific syntax and use.
  # config.vm.provision "shell", inline: <<-SHELL
  #   apt-get update
  #   apt-get install -y apache2
  # SHELL
end
```
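The `30000..30100` loop in the Vagrantfile forwards the low end of the default Kubernetes NodePort range (30000-32767) straight through to the host, so a Service exposed on a NodePort in that window is reachable at `localhost` on the same port. A helper sketch for checking whether a given NodePort falls inside the forwarded window -- `port_forwarded` is our own convenience, not a Vagrant feature:

```shell
#!/bin/sh
# The Vagrantfile forwards guest ports 30000-30100 to the same host ports.
port_forwarded() {
    [ "$1" -ge 30000 ] && [ "$1" -le 30100 ]
}

port_forwarded 30050 && echo "reachable at http://localhost:30050"
port_forwarded 31000 || echo "31000 is outside the forwarded range"
```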

deployment/udaconnect-api.yaml

Lines changed: 0 additions & 7 deletions
```diff
@@ -34,13 +34,6 @@ spec:
       - image: isjustintime/udaconnect-api:latest
         name: udaconnect-api
         imagePullPolicy: Always
-        resources:
-          requests:
-            memory: "32Mi"
-            cpu: "256m"
-          limits:
-            memory: "64Mi"
-            cpu: "512m"
         env:
         - name: DB_USERNAME
           valueFrom:
```
