CCIT-K8'S

Kubernetes is an open-source container orchestration platform developed by Google, designed to automate the deployment, management, and scaling of containerized applications. It consists of a cluster with master and worker nodes, where various components like API Server, ETCD, and Controllers manage tasks and resources. Kubernetes offers features such as auto-scaling, self-healing, and supports multiple deployment strategies, making it a robust solution for managing containerized applications compared to alternatives like Docker Swarm.


KUBERNETES (K8S):

It is an open-source container orchestration platform.

It automates many of the manual processes involved in deploying, managing,
and scaling containerized applications.
Kubernetes was developed by Google in the Go language.
Google donated K8s to the CNCF in 2014.
The first version was released in 2015.

WHY KUBERNETES:

Containers are a good and easy way to bundle and run your applications. In a
production environment, you need to manage the containers that run the applications
and ensure that there is no downtime. In Docker we used Docker Swarm for this, but
Docker Swarm has drawbacks, so we moved to Kubernetes.
ARCHITECTURE:
CLUSTER:

It is a group of servers.
It will have both master and worker nodes.
The master node assigns tasks to the worker nodes.
The worker nodes perform the tasks.
We have 4 components in the master node:

1.API Server
2.ETCD
3. Controllers-manager
4. Schedulers
We have 4 components in the worker node:

1. Kubelet
2. Kube-Proxy
3. Pod
4. Container
API SERVER:
It accepts requests from the user and stores them in ETCD.
ETCD:
It is like the database of our K8s cluster.
It stores the requests from the API server in KEY-VALUE format.
SCHEDULER:
It searches for pending tasks in ETCD.
If any pending task is found, it schedules it on a worker node.
It decides on which worker node the task should be executed, by
communicating with the kubelet on each worker node.

CONTROLLER MANAGER:
It performs the operations scheduled by the scheduler.
It controls container creation in the worker nodes.
KUBELET:

It is an agent that ensures the pod on its node is running.

KUBE-PROXY:
It maintains the network connection between worker and master nodes.

POD:
A group of one or more containers.

CONTAINER:

It is a lightweight, isolated environment; unlike a virtual machine,
it does not carry its own OS.
It is used to run the applications on the worker nodes.
KUBERNETES CLUSTER SETUP:
There are multiple ways to set up a Kubernetes cluster.

1. SELF-MANAGED K8S CLUSTER

a. minikube (single-node cluster)
b. kubeadm (multi-node cluster)
c. KOPS

2. MANAGED K8S CLUSTER

a. AWS EKS
b. Azure AKS
c. GCP GKE
d. IBM IKS
WHY KUBERNETES OVER DOCKER SWARM:

Earlier we used Docker Swarm as the container orchestration tool to
manage multiple containerized applications in our environments.

FEATURE        DOCKER SWARM       KUBERNETES
Setup          Easy               Complex
Auto Scaling   No auto scaling    Auto scaling
Community      Good community     Larger community (documentation, support and resources)
GUI            No GUI             GUI (Dashboard)

MINIKUBE:

It is a tool used to set up a single-node K8s cluster.

It contains the API server, ETCD database and a container runtime.
It helps you run containerized applications.
It is used for development, testing, and experimentation on a local machine.
Here master and worker run on the same machine.
It is platform independent.
By default it will create one node only.
Installing Minikube is simple compared to other tools.

NOTE: But we don't implement this in real time (production).


MINIKUBE SETUP:
REQUIREMENTS:

2 CPUs or more
2GB of free memory
20GB of free disk space
Internet connection
Container or virtual machine manager, such as: Docker.
UPDATE SERVER:
sudo apt update -y
sudo apt upgrade -y
INSTALL DOCKER:
sudo apt install curl wget apt-transport-https -y
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
INSTALL MINIKUBE:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo mv minikube-linux-amd64 /usr/local/bin/minikube
sudo chmod +x /usr/local/bin/minikube
minikube version
INSTALL KUBECTL:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
kubectl version --client --output=yaml
minikube start --driver=docker --force
KUBECTL:
kubectl is the CLI used to interact with a Kubernetes cluster.
We can create and manage pods, services, deployments, and other resources.
We can also monitor, troubleshoot, scale and update pods.
To perform these tasks it communicates with the Kubernetes API server.
It has many options and commands to work with.
The configuration of kubectl is in the $HOME/.kube directory.
(The latest version at the time of writing was 1.26.)

SYNTAX:
kubectl [command] [TYPE] [NAME] [flags]
kubectl api-resources : to list all api resources
POD:
It is the smallest unit of deployment in K8s.
It is a group of one or more containers.
Pods are ephemeral (short-lived objects).
Mostly we use a single container inside a pod, but if required we can
create multiple containers inside the same pod.
When we create a pod, the containers inside it share the same network
namespace and can share the same storage volumes.
While creating a pod, we must specify the image, along with any necessary
configuration and resource limits.
K8s does not manage containers directly; it manages pods.
We can create a pod in two ways:
1. Imperative (command)
2. Declarative (manifest file)
POD CREATION:
IMPERATIVE:

The imperative way uses a kubectl command to create the pod.

This method is useful for quickly creating and modifying pods.
SYNTAX: kubectl run pod_name --image=image_name
COMMAND: kubectl run pod-1 --image=nginx

kubectl : command-line tool
run : action
pod-1 : name of pod
nginx : name of image
DECLARATIVE:

In the declarative way we create a manifest file with a YAML extension.
This file contains the desired state of a pod.
It takes care of creating, updating, or deleting the resources.
This manifest file needs to follow YAML indentation.
A YAML file consists of KEY-VALUE pairs.
Here we use the create or apply command to execute the manifest file.

SYNTAX: kubectl create/apply -f file_name

CREATE: if you are creating the object for the first time, use create.
APPLY: if we change anything in the file and the changes need to be applied to the resources.
MANIFEST FILE:
apiVersion: the API version used to communicate with the master node
kind: the type of resource
metadata: data about the pod
spec: the specification of the container(s)
name: name of the container
image: container image
ports: to expose the application

- : denotes an item in a list (array)
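Putting the fields above together, a minimal pod manifest might look like this (the pod name and image are placeholders; the original handout did not include the file itself):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-1               # data about the pod
  labels:
    app: nginx
spec:
  containers:
    - name: nginx           # "-" starts a list item: one entry per container
      image: nginx
      ports:
        - containerPort: 80 # port on which the application is exposed
```

Save it as pod.yml and run: kubectl apply -f pod.yml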
THE DRAWBACK:
With both of the above methods we are able to create a pod,
but what if we delete the pod?

Once you delete the pod, you can no longer access it,
which creates a lot of difficulty in real time.

NOTE: Self-healing is not available here.

To overcome this we use Kubernetes components such as RC, RS,
DEPLOYMENTS, DAEMON SETS, SERVICES etc.
REPLICATION CONTROLLER:
A replication controller runs a specific number of pods as per our
requirement.
It is responsible for managing the pod lifecycle.
It makes sure that pods are always up and running.
If there are too many pods, the RC terminates the extra pods.
If there are too few, the RC creates new pods.
The replication controller has self-healing capability:
if a pod fails, is terminated or deleted, then a new pod gets created
automatically. Replication controllers use labels to identify the pods that they
are managing.
We need to specify the desired number of replicas in the YAML file.
SELECTOR: It selects resources based on labels.
Labels are key-value pairs.

This helps to target the pods carrying, for example, the label app=nginx.
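As a sketch, a replication controller manifest with the selector and replica count described above might look like this (the names and replica count are illustrative):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 3                # desired number of pods
  selector:
    app: nginx               # equality-based selector: manage pods with this label
  template:                  # pod template used to create the replicas
    metadata:
      labels:
        app: nginx           # must match the selector above
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
```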
TO EXECUTE : kubectl create -f file_name.yml
TO GET : kubectl get rc
TO DESCRIBE : kubectl describe rc/nginx
TO SCALE UP : kubectl scale rc rc_name --replicas 5
TO SCALE DOWN : kubectl scale rc rc_name --replicas 2
(drops from 5 to 2)
TO DELETE REPLICATION CONTROLLER : kubectl delete rc rc_name

IF WE DELETE THE RC, THE PODS ALSO GET DELETED. IF WE WANT TO DELETE
ONLY THE RC AND KEEP THE PODS:
kubectl delete rc rc_name --cascade=orphan
kubectl get rc
kubectl get pod
Now we deleted the RC but the pods are still present. If we want to assign these pods to
another RC, use the same selector that we used in the previous RC file.
THE DRAWBACK:
RC supports only equality-based selectors.
ex: env=prod

Here we can assign only one value to a key (equality-based selector).

We are not using RC these days, because RC has been replaced by RS (ReplicaSet).
REPLICASET:
It is nothing but a group of identical pods. If one pod crashes, the
replica set automatically gives one more pod immediately.
It uses labels to identify the pods.
The difference between RC and RS is the selector, and RS offers more advanced
functionality.
The key difference is that the replication controller only supports equality-
based selectors, whereas the replica set also supports set-based selectors.
It monitors the number of replicas of a pod running in the cluster,
creating or deleting pods as needed.
It also provides better control over the scaling of pods.
A ReplicaSet identifies new pods to acquire by using its selector.
We can provide multiple values for the same key.
ex: env in (prod,test)
replicas: number of pod copies we need to create
matchLabels: labels we want to match with pods
template: this is the template of the pod
Note: give apiVersion: apps/v1 on top

Labels: mandatory to create an RS (if you have 100
pods in a cluster, we can say which pods we need
to take care of by using labels). If you label some
pods as raham, then all the pods with label raham
will be managed.
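The set-based selector mentioned above (env in (prod,test)) can be written with matchExpressions; a sketch, with placeholder names:

```yaml
apiVersion: apps/v1          # note: apps/v1, not v1
kind: ReplicaSet
metadata:
  name: nginx-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
    matchExpressions:        # set-based selector: multiple values for one key
      - key: env
        operator: In
        values: [prod, test]
  template:
    metadata:
      labels:
        app: nginx
        env: prod            # satisfies both matchLabels and matchExpressions
    spec:
      containers:
        - name: nginx
          image: nginx
```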
COMMANDS:
TO EXECUTE : kubectl create -f replicaset-nginx.yaml
TO LIST : kubectl get replicaset (or: kubectl get rs)
TO GET INFO : kubectl get rs -o wide
TO GET IN YAML : kubectl get rs -o yaml
TO GET ALL RESOURCES : kubectl get all

Now delete a pod and list again; it will be recreated automatically.

TO DELETE A POD : kubectl delete po pod_name
TO DELETE RS : kubectl delete rs rs_name
TO SHOW LABELS OF PODS : kubectl get po --show-labels
TO SHOW POD IN YAML : kubectl get pod -o yaml
THE DRAWBACK:
A replica set is a lower-level object which focuses on maintaining the desired
number of replica pods.

To manage replica sets we use a higher-level object called a Deployment, which
provides additional functionality like rolling updates and rollbacks.

Deployments use ReplicaSets under the hood to manage the actual pods that run
the application.
DEPLOYMENT:
It has the features of a ReplicaSet plus some extra features like updating and
rolling back to a particular version.
The best part of a Deployment is that we can do this without downtime.
You can update the container image or configuration of the application.
Deployments also provide features such as versioning, which allows you to
track the history of changes to the application.
It has a pause feature, which allows you to temporarily suspend updates to
the application.
Scaling can be done manually or automatically based on metrics such as CPU
utilization or requests per second.
A Deployment creates a ReplicaSet, and the ReplicaSet creates the pods.
If you delete the Deployment, it deletes the ReplicaSet, and the ReplicaSet
deletes the pods.
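A minimal Deployment manifest, as a sketch (names are placeholders; the structure mirrors the ReplicaSet, with the Deployment managing the ReplicaSet for you):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:                  # pod template handed to the underlying ReplicaSet
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
```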
MANIFEST FILE:
TO EXECUTE : kubectl create -f deployment-nginx.yaml
TO LIST : kubectl get deployment (or: kubectl get deploy)
TO GET INFO : kubectl get deploy -o wide
TO GET IN YAML : kubectl get deploy -o yaml
TO GET ALL INFO : kubectl describe deploy

Now delete a pod and list again; it will be recreated automatically.

TO DELETE A POD : kubectl delete po pod_name
TO CHECK THE LOGS : kubectl logs pod_name
TO DELETE DEPLOY : kubectl delete deploy deploy_name
When you inspect the Deployments in your cluster, the following fields are
displayed:

NAME lists the names of the Deployments in the namespace.
READY displays how many replicas of the application are available to your
users. It follows the pattern ready/desired.
UP-TO-DATE displays the number of replicas that have been updated to
achieve the desired state.
AVAILABLE displays how many replicas of the application are available to
your users.
AGE displays the amount of time that the application has been running.
UPDATING A DEPLOYMENT:
Updating the image from nginx:1.14.2 to nginx:1.16.1:

kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1
or
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
or
kubectl edit deployment/nginx-deployment

To watch the rollout progress:

kubectl rollout status deployment/nginx-deployment

To roll back from 1.16.1 to 1.14.2:

kubectl rollout undo deployment/nginx-deployment

Now run kubectl get rs : you will see the new ReplicaSet.
kubectl get pod
You see that the number of new replicas is 3, and old replicas is 0.
Next time you want to update these pods, you only need to update the
Deployment's pod template again.
While updating pods, at least 75% will be available and at most 25% will be
unavailable during the update.

Let's assume we have 4 pods: while updating them, at least 3 of them remain
available while 1 is being updated at a time.
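The 75%/25% behaviour described above comes from the Deployment's rolling-update strategy; as a sketch, the defaults can be written out explicitly in the Deployment spec:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%  # at most a quarter of the pods down during an update
      maxSurge: 25%        # at most a quarter of extra pods above the desired count
```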
DEPLOYMENT SCALING:

Let's assume we have 3 pods and we want to scale to 10 pods:

kubectl scale deployment/nginx-deployment --replicas=10
Assume we have HPA (Horizontal Pod Autoscaling) enabled in the cluster.
If we want to give the min and max count of replicas:
kubectl autoscale deployment/nginx-deployment --min=10 --max=15
Now if we want to assign a CPU threshold for it:
kubectl autoscale deployment/nginx-deployment --min=10 --max=15 --cpu-percent=80
KUBERNETES SERVICES:
A Service is a method for exposing pods in your cluster.
Each pod gets its own IP address, but we need a stable address to reach the pods.
If you want to access a pod from inside the cluster, we use ClusterIP.
If the service is of type NodePort or LoadBalancer, it can also be accessed
from outside the cluster.
It enables the pods to be decoupled from the network topology, which makes
it easier to manage and scale applications.

TYPES:
CLUSTER-IP
NODE PORT
LOAD BALANCER
COMPONENTS OF SERVICES

A service is defined using a Kubernetes manifest file that describes its properties
and specifications. Some of the key properties of a service include:

Selector: A label selector that defines the set of pods that the service will
route traffic to.
Port: The port number on which the service will listen for incoming traffic.
TargetPort: The port number on which the pods are listening for traffic.
Type: The type of the service, such as ClusterIP, NodePort, LoadBalancer, or
ExternalName.
TYPES OF SERVICES
ClusterIP: A ClusterIP service provides a stable IP address and DNS name for
pods within a cluster. This type of service is only accessible within the cluster
and is not exposed externally.
NodePort: A NodePort service provides a way to expose a service on a static
port on each node in the cluster. This type of service is accessible both within
the cluster and externally, using the node's IP address and the NodePort.
LoadBalancer: A LoadBalancer service provides a way to expose a service
externally, using a cloud provider's load balancer. This type of service is
typically used when an application needs to handle high traffic loads and
requires automatic scaling and load balancing capabilities.
ExternalName: An ExternalName service provides a way to give a service a
DNS name that maps to an external service or endpoint. This type of service is
typically used when an application needs to access an external service, such
as a database or API, using a stable DNS name.
CLUSTER-IP:

To deploy the application we create a container, which runs inside a pod.
After the container is created, we are still not able to access the application,
because we cannot reach pod IPs and ports from outside the cluster.
To avoid this we create Services.
Here we use the ClusterIP service to expose the application.
By using this we can access the application inside the cluster only.
ClusterIP assigns one IP to the service for accessing the pods.

NODEPORT:

If we want to access the application from outside, we need to use NodePort.
Just replace ClusterIP with NodePort and run:
kubectl apply -f filename.yml
Now the application is exposed from anywhere (inside & outside).
We need to give the public IP of the node where the pod is running.
NodePort range = 30000 - 32767.
Here I have defined the port number as 30001;
if we don't specify the port, it will be assigned automatically.
NodePort exposes the service on a static port on each node.
NodePort services are typically used for smaller
applications with a lower traffic volume.

LOADBALANCER:

To avoid that limitation we use the LoadBalancer service.
Just replace NodePort with LoadBalancer.
With LoadBalancer we can expose the application externally
with the help of a cloud provider's load balancer.
It is used when an application needs to handle high
traffic loads and requires automatic scaling and load
balancing capabilities.
After the LoadBalancer service is created, the cloud
provider will create the load balancer.
Its address can be used by clients outside the cluster
to access the service.
The LoadBalancer service also automatically distributes
incoming traffic across the pods that match the selector
defined in the YAML manifest.
Access: public_ip:port, or the LB URL, e.g.
http://a0dce056441c04035918de4bfb5bff97-40528368.us-east-1.elb.amazonaws.com/
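The original slides refer to "this code" without showing it; a sketch of such a service manifest (NodePort variant, with placeholder names and ports; change type to ClusterIP or LoadBalancer for the other variants):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort           # ClusterIP (default) / NodePort / LoadBalancer
  selector:
    app: nginx             # route traffic to pods with this label
  ports:
    - port: 80             # port the service listens on
      targetPort: 80       # port the pods listen on
      nodePort: 30001      # static port on each node (30000 - 32767)
```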
NAMESPACE:
Namespaces are used to group components like pods, services, and
deployments.
This can be useful for separating environments, such as development, staging,
and production, or for separating different teams or applications.
In real time, all the frontend pods are mapped to one namespace and backend
pods are mapped to another namespace.
It represents a cluster inside the cluster.
You can have multiple namespaces within one Kubernetes cluster, and they
are all logically isolated from one another.
Namespaces provide a logical separation of cluster resources between
multiple users, teams, projects, and even customers.
Within the same namespace, pods can communicate with each other directly.
Namespaces provide a logical separation between the environments (Dev,
QA, Test, and Prod) with many users, or projects.
Namespaces are only hidden from each other but are not fully isolated from
each other.
A service in one namespace can talk to a service in another
namespace if the target is addressed with the full name, which
includes the service/object name followed by the namespace.
The names of resources within one namespace must be unique.
When you delete a namespace, all the resources in it get deleted.
kubectl get namespaces : to list the namespaces

NAME            STATUS   AGE
default         Active   31m
kube-public     Active   31m
kube-system     Active   31m

default : when we create resources like pods, services, deployments, they all
get stored in the default namespace.
kube-public : the namespace for resources that are publicly available to all.
kube-system : the namespace for objects created by the Kubernetes system.
kube-node-lease : it holds the lease objects associated with each node, which
improve the performance of the node heartbeats as the cluster scales.
COMMANDS:
kubectl get ns : used to get namespaces
kubectl create ns mustafa : used to create a new namespace
kubectl config set-context --current --namespace=mustafa : to switch to the namespace
kubectl config view --minify | grep namespace : to verify the namespace

CREATE A POD IN A NS (IMPERATIVE):

1. Create a namespace - kubectl create namespace mustafa
2. Create a pod in the ns - kubectl run nginx --image=nginx --namespace mustafa
(or)
kubectl run nginx --image=nginx -n mustafa
3. To check the pods: kubectl get pod -n mustafa
(or)
kubectl get pod --namespace mustafa
CREATE A POD IN A NS (DECLARATIVE):

When you delete a namespace, all the resources in it get deleted.
kubectl get pods -n mustafa : used to get pods from the namespace
kubectl describe pod nginx -n mustafa : describe a pod in the namespace
To delete a pod in the namespace:
kubectl delete pod nginx -n mustafa

Before deploying this file we need to create the mustafa namespace.
Here we just mention the namespace in the normal manifest file.
You might also like