
kubernetes

Starting up machines

  1. Two controller machines
  2. Two worker machines
  3. A load balancer

Manage all the work on the Kubernetes machines from a local (client) machine.

Install client tools

  1. cfssl
  2. kubectl

Generate certificates and distribute them

Generate all the certs needed locally using cfssl, then copy them to the Kubernetes servers: https://github.com/mennahany93/kubernetes/tree/master/Certs
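As a minimal sketch of what that step looks like (assuming a ca-config.json with a "kubernetes" signing profile and per-component CSR files such as ca-csr.json and admin-csr.json, as in the Certs directory above), the CA and one client certificate can be generated like this:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare admin

The resulting .pem files are then copied to the controllers and workers with scp.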

Generate kubeconfigs

Kubeconfig files are used by a service or a user to authenticate itself; they enable a Kubernetes client to locate and authenticate with the API server: https://github.com/mennahany93/kubernetes/tree/master/kubeconfigs
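As an example, the admin kubeconfig can be generated roughly like this (the certificate file names come from the cert step; ${KUBERNETES_ADDRESS} is assumed to be the load balancer address):

kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://${KUBERNETES_ADDRESS}:6443 \
  --kubeconfig=admin.kubeconfig

kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem \
  --embed-certs=true \
  --kubeconfig=admin.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=admin \
  --kubeconfig=admin.kubeconfig

kubectl config use-context default --kubeconfig=admin.kubeconfig

The same pattern is repeated for the kubelet, kube-proxy, controller manager, and scheduler kubeconfigs.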

Generate Encryption key

Kubernetes offers the ability to encrypt sensitive data at rest. However, in order to use this feature it is necessary to provide Kubernetes with a data encryption config containing an encryption key.

Create this key, put it into a config file, then copy it to the Kubernetes controller servers.

Generate the Kubernetes data encryption config file containing the encryption key:

ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)

cat > encryption-config.yaml << EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF

Copy the file to both controller servers:

scp encryption-config.yaml user@<controller 1 public ip>:~/
scp encryption-config.yaml user@<controller 2 public ip>:~/

Setting up etcd

etcd is a distributed key-value store that provides a way to store data across a cluster of machines: https://github.com/mennahany93/kubernetes/tree/master/etcd
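Once etcd is up on both controllers, cluster membership can be checked with etcdctl; the certificate paths below are assumptions based on a typical etcd install under /etc/etcd:

sudo ETCDCTL_API=3 etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem

Both etcd members should be listed as started.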

Setting up control plane components

  1. kube-apiserver
  2. controller manager
  3. kube-scheduler

Installing control plane binaries

sudo mkdir -p /etc/kubernetes/config

wget -q --show-progress --https-only --timestamping \
  "https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kube-apiserver" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kube-controller-manager" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kube-scheduler" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kubectl"

chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl

sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/

Setting up kube-apiserver

https://github.com/mennahany93/kubernetes/tree/master/kube-apiserver

Setting up the controller manager

https://github.com/mennahany93/kubernetes/tree/master/controller_manager

Setting up kube-scheduler

https://github.com/mennahany93/kubernetes/tree/master/kube-scheduler
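With all three components installed and running, the control plane can be sanity-checked from either controller (admin.kubeconfig is the admin kubeconfig generated earlier):

kubectl get componentstatuses --kubeconfig admin.kubeconfig

etcd, the scheduler, and the controller manager should all report Healthy.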

Enable http health checks

Part of Kelsey Hightower's original Kubernetes the Hard Way guide involves setting up an nginx proxy on each controller to expose the Kubernetes API /healthz endpoint over http, so that plain-http health checks (such as a load balancer's) can reach it even though the API server itself only serves https. Set up that proxy on each controller:

sudo apt-get install -y nginx

Create an nginx configuration for the health check proxy:

cat > kubernetes.default.svc.cluster.local << EOF
server {
  listen      80;
  server_name kubernetes.default.svc.cluster.local;

  location /healthz {
     proxy_pass                    https://127.0.0.1:6443/healthz;
     proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
  }
}
EOF

Set up the proxy configuration so that it is loaded by nginx:

sudo mv kubernetes.default.svc.cluster.local /etc/nginx/sites-available/kubernetes.default.svc.cluster.local
sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local /etc/nginx/sites-enabled/
sudo systemctl restart nginx
sudo systemctl enable nginx

You can verify that everything is working like so:

curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
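If the proxy is working, the response should look roughly like this (exact header values will differ):

HTTP/1.1 200 OK
Server: nginx
Content-Type: text/plain; charset=utf-8

ok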

Creating a ClusterRole

To set up a new Kubernetes cluster from scratch, you need to assign permissions that allow the Kubernetes API to access various functionality within the worker kubelets.

  1. Configure RBAC for kubelet authorization with these commands. Note that these commands only need to be run on one controller node. Create a role with the necessary permissions:
cat << EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
EOF
  2. Bind the role to the kubernetes user:
cat << EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
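
You can confirm the role and binding were created with:

kubectl get clusterrole system:kube-apiserver-to-kubelet --kubeconfig admin.kubeconfig
kubectl get clusterrolebinding system:kube-apiserver --kubeconfig admin.kubeconfig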

Setting up a Kube API Frontend Load Balancer

In order to achieve redundancy for your Kubernetes cluster, you need to load balance usage of the Kubernetes API across multiple controller nodes. On the load balancer machine:

sudo apt-get install -y nginx
sudo systemctl enable nginx
sudo mkdir -p /etc/nginx/tcpconf.d
sudo vi /etc/nginx/nginx.conf

Add the following to the end of nginx.conf:

include /etc/nginx/tcpconf.d/*;

Set up some environment variables for the load balancer config file:

CONTROLLER0_IP=<controller 0 private ip>
CONTROLLER1_IP=<controller 1 private ip>

Create the load balancer nginx config file:

cat << EOF | sudo tee /etc/nginx/tcpconf.d/kubernetes.conf
stream {
    upstream kubernetes {
        server $CONTROLLER0_IP:6443;
        server $CONTROLLER1_IP:6443;
    }

    server {
        listen 6443;
        listen 443;
        proxy_pass kubernetes;
    }
}
EOF

Reload the nginx configuration:

sudo nginx -s reload

You can verify that the load balancer is working like so:

curl -k https://localhost:6443/version
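A healthy response is the API server's version JSON, something like the following (trimmed; the real response also includes build and platform fields):

{
  "major": "1",
  "minor": "10",
  "gitVersion": "v1.10.2"
}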

Setting up worker node components

  1. worker node binaries
  2. containerd
  3. kubelet
  4. kube-proxy

Install worker binaries

sudo apt-get -y install socat conntrack ipset

wget -q --show-progress --https-only --timestamping \
  https://github.com/kubernetes-incubator/cri-tools/releases/download/v1.0.0-beta.0/crictl-v1.0.0-beta.0-linux-amd64.tar.gz \
  https://storage.googleapis.com/kubernetes-the-hard-way/runsc \
  https://github.com/opencontainers/runc/releases/download/v1.0.0-rc5/runc.amd64 \
  https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \
  https://github.com/containerd/containerd/releases/download/v1.1.0/containerd-1.1.0.linux-amd64.tar.gz \
  https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kubectl \
  https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kube-proxy \
  https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kubelet

sudo mkdir -p \
  /etc/cni/net.d \
  /opt/cni/bin \
  /var/lib/kubelet \
  /var/lib/kube-proxy \
  /var/lib/kubernetes \
  /var/run/kubernetes

chmod +x kubectl kube-proxy kubelet runc.amd64 runsc

sudo mv runc.amd64 runc

sudo mv kubectl kube-proxy kubelet runc runsc /usr/local/bin/

sudo tar -xvf crictl-v1.0.0-beta.0-linux-amd64.tar.gz -C /usr/local/bin/

sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/

sudo tar -xvf containerd-1.1.0.linux-amd64.tar.gz -C /

Install containerd

https://github.com/mennahany93/kubernetes/tree/master/Containerd

Worker components

https://github.com/mennahany93/kubernetes/tree/master/Kubelet

https://github.com/mennahany93/kubernetes/tree/master/Kubeproxy
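Once the kubelet and kube-proxy services are running on both workers, confirm from one of the controllers that the nodes registered with the API server:

kubectl get nodes --kubeconfig admin.kubeconfig

Both workers should be listed; they may report NotReady until pod networking is set up in the Networking step below.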

Set up kubectl for remote access

In a separate shell, open up an ssh tunnel to port 6443 on your Kubernetes API load balancer:

ssh -L 6443:localhost:6443 user@<your Load balancer cloud server public IP>

You can configure your local kubectl in your main shell like so. The server is set to https://localhost:6443 because the ssh tunnel forwards that port to the load balancer:

cd ~/kthw

kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://localhost:6443

kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem

kubectl config set-context kubernetes-the-hard-way \
  --cluster=kubernetes-the-hard-way \
  --user=admin

kubectl config use-context kubernetes-the-hard-way

Verify that everything is working with:

kubectl get pods
kubectl get nodes
kubectl version
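
kubectl get nodes should list both workers, for example (hostnames here are placeholders):

NAME       STATUS   ROLES    AGE   VERSION
worker-0   Ready    <none>   1h    v1.10.2
worker-1   Ready    <none>   1h    v1.10.2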

Networking

https://github.com/mennahany93/kubernetes/tree/master/Networking
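Once the pod network is deployed, a quick way to smoke-test pod-to-pod connectivity is to start two pods and fetch one from the other (the images here are just examples):

kubectl run nginx --image=nginx
kubectl run busybox --image=busybox:1.28 --command -- sleep 3600
kubectl get pods -o wide

kubectl exec -ti <busybox pod name> -- wget -qO- <nginx pod IP>

The wget should return the nginx welcome page.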

Deploying the DNS Cluster

https://github.com/mennahany93/kubernetes/tree/master/DNS-cluster
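After the DNS add-on is deployed, verify cluster DNS from a test pod (reusing the busybox pod from the networking check, or creating one the same way):

kubectl exec -ti <busybox pod name> -- nslookup kubernetes

The lookup should resolve the kubernetes service name to its cluster IP.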
