@@ -19,6 +19,49 @@ We can check on which nodes the Pods are running with
 kubectl get pods -o=custom-columns=NAME:.metadata.name,NODE:.spec.nodeName
 ----
 
+To make sure that at least four Pods are always running, we create a PodDisruptionBudget with
+
+[source, bash]
+----
+kubectl create -f https://k8spatterns.io/SingletonService/pdb.yml
+----
+
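+For reference, here is a minimal sketch of what such a PodDisruptionBudget could look like (the actual pdb.yml may differ; the name `random-generator-pdb` is an assumption):
+
+[source, yaml]
+----
+apiVersion: policy/v1beta1   # policy/v1 on Kubernetes 1.21+
+kind: PodDisruptionBudget
+metadata:
+  name: random-generator-pdb
+spec:
+  minAvailable: 4            # at least four Pods must stay available during voluntary disruptions
+  selector:
+    matchLabels:
+      app: random-generator
+----
+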
+Now let's drain a node and see how the Pods are relocated.
+Let's say that the second node is called `node01` (which is the case in a Katacoda setup).
+The `--ignore-daemonsets` flag is needed so that the drain proceeds even though DaemonSet-managed Pods cannot be evicted:
+
+[source, bash]
+----
+kubectl drain --ignore-daemonsets node01 >/dev/null 2>&1 &
+----
+
+We drain the node in the background so that we can watch how the Pods are relocated:
+
+[source, bash]
+----
+watch kubectl get pods -o=custom-columns=NAME:.metadata.name,NODE:.spec.nodeName
+----
+
+As you can see, at least four Pods are running across the two nodes at all times, until eventually all six Pods end up on the master node.
+
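+While the drain is in progress, you can also inspect the budget itself; the `ALLOWED DISRUPTIONS` column shows how many more Pods may be evicted voluntarily:
+
+[source, bash]
+----
+kubectl get poddisruptionbudgets
+----
+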
+You can undo the drain operation with
+
+[source, bash]
+----
+kubectl patch node node01 -p '{"spec":{"unschedulable":false}}'
+----
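+
+The same effect can be achieved with the dedicated uncordon command:
+
+[source, bash]
+----
+kubectl uncordon node01
+----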
+
+and spread the Pods over both nodes again by scaling the Deployment down and back up:
+
+[source, bash]
+----
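+# Scaling down to 0 and back up forces all Pods to be rescheduled across both nodes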
+kubectl scale deployment random-generator --replicas 0
+kubectl scale deployment random-generator --replicas 6
+----
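+
+Afterward, you can verify that the Pods are spread over both nodes again:
+
+[source, bash]
+----
+kubectl get pods -o=custom-columns=NAME:.metadata.name,NODE:.spec.nodeName
+----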
+
 
 For the rest of the example, we watch the Pods in the console by putting the following command into the background:
 
@@ -49,11 +92,3 @@ kubectl set image deployment random-generator random-generator=k8spatterns/rando
 The PDB uses a Pod selector which matches on the label `app=random-generator`.
 
 Let's create such Pods with
-
-
-[source, bash]
-----
-kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":false}}'
-
-kubectl get pods -o=custom-columns=NAME:.metadata.name,NODE:.spec.nodeName
-----