
Problem with provisioning PV in the kind cluster #7


Closed · accessd opened this issue Aug 6, 2024 · 5 comments

Comments

accessd (Contributor) commented Aug 6, 2024

Software that I'm using:
Docker Desktop for Mac, version 4.33.0 (160616)
kind v0.23.0 go1.22.2 darwin/amd64

I'm trying to apply manifests:

(devbox) ➜ PersistentVolume (main) ✗ t 02-apply-manual-pv-pvc-pod-kind
task: [02-apply-manual-pv-pvc-pod-kind] kubectl apply -f kind/PersistentVolume.manual-kind.yaml
persistentvolume/manual-kind unchanged
task: [02-apply-manual-pv-pvc-pod-kind] kubectl apply -f kind/PersistentVolumeClaim.manual-pv-kind.yaml
persistentvolumeclaim/manual-pv-kind unchanged
task: [02-apply-manual-pv-pvc-pod-kind] kubectl apply -f kind/Pod.manual-pv-and-pvc-kind.yaml
pod/manual-pv-and-pvc created

Then, checking the status of the pod and the PVC, we get the following:

(devbox) ➜ PersistentVolume (main) ✗ k get pods -o wide
NAME                READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
manual-pv-and-pvc   0/1     Pending   0          11m   <none>   <none>   <none>           <none>
(devbox) ➜ PersistentVolume (main) ✗ k get pvc
NAME             STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
manual-pv-kind   Pending                                      standard       <unset>                 12m

Output of k describe pvc:

Name:          manual-pv-kind
Namespace:     04--persistentvolume
StorageClass:  standard
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: rancher.io/local-path
               volume.kubernetes.io/selected-node: kind-worker2
               volume.kubernetes.io/storage-provisioner: rancher.io/local-path
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       manual-pv-and-pvc
Events:
  Type     Reason                Age                  From                                                                                              Message
  ----     ------                ----                 ----                                                                                              -------
  Normal   WaitForFirstConsumer  12m (x2 over 12m)    persistentvolume-controller                                                                       waiting for first consumer to be created before binding
  Normal   Provisioning          2m54s (x7 over 11m)  rancher.io/local-path_local-path-provisioner-988d74bc-rkbdg_255ad579-9fff-4949-a268-e9f1c32dbe84  External provisioner is provisioning volume for claim "04--persistentvolume/manual-pv-kind"
  Warning  ProvisioningFailed    2m54s (x7 over 11m)  rancher.io/local-path_local-path-provisioner-988d74bc-rkbdg_255ad579-9fff-4949-a268-e9f1c32dbe84  failed to provision volume with StorageClass "standard": claim.Spec.Selector is not supported
  Normal   ExternalProvisioning  2m2s (x41 over 11m)  persistentvolume-controller                                                                       Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.

So we get the error claim.Spec.Selector is not supported; I think it's specific to the "standard" storage class.
If the selector block is removed, everything provisions successfully (a sketch of the trimmed PVC is at the end of this comment):

(devbox) ➜ PersistentVolume (main) ✗ k describe pods manual-pv-and-pvc
Name:             manual-pv-and-pvc
Namespace:        04--persistentvolume
Priority:         0
Service Account:  default
Node:             kind-worker2/172.25.0.3
Start Time:       Tue, 06 Aug 2024 20:18:19 +0300
Labels:           <none>
Annotations:      <none>
Status:           Running
IP:               10.244.1.23
IPs:
  IP:  10.244.1.23
Containers:
  nginx:
    Container ID:   containerd://53c7a2a03b4b465df619034eec8b0616f4cff685087a7c5a56f6276e0a195640
    Image:          nginx:1.26.0
    Image ID:       docker.io/library/nginx@sha256:192e88a0053c178683ca139b9d9a2afb0ad986d171fae491949fe10970dd9da9
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 06 Aug 2024 20:18:19 +0300
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /some/mount/path from storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x4zrm (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  manual-pv-kind
    ReadOnly:   false
  kube-api-access-x4zrm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  3m49s  default-scheduler  Successfully assigned 04--persistentvolume/manual-pv-and-pvc to kind-worker2
  Normal  Pulled     3m49s  kubelet            Container image "nginx:1.26.0" already present on machine
  Normal  Created    3m49s  kubelet            Created container nginx
  Normal  Started    3m49s  kubelet            Started container nginx

However, I can't see the /some/path/in/container path mounted inside the container.
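
For reference, removing the selector amounts to a trimmed-down claim along these lines; this is only a sketch, with the namespace, capacity, and access mode assumed from the describe output above rather than copied from the repo:

# Sketch of the PVC with the selector block removed (values assumed).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: manual-pv-kind
  namespace: 04--persistentvolume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
  storageClassName: standard
  # The selector/matchLabels block that triggered
  # "claim.Spec.Selector is not supported" is simply omitted here.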

sidpalas (Owner) commented Aug 6, 2024

Thanks for raising!

Can you post the output of describing the storageclass?

k describe storageclass standard

accessd (Author) commented Aug 7, 2024

(devbox) ➜ PersistentVolume (main) ✗ k describe storageclass standard
Name:            standard
IsDefaultClass:  Yes
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"standard"},"provisioner":"rancher.io/local-path","reclaimPolicy":"Delete","volumeBindingMode":"WaitForFirstConsumer"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner:           rancher.io/local-path
Parameters:            <none>
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     WaitForFirstConsumer
Events:                <none>

sidpalas (Owner) commented Aug 7, 2024

I think the root cause here is that I didn't specify that the pod needed to be scheduled onto the same node that the PV was manually created on (kind-worker).

Because of this, sometimes the pod would be scheduled there and it would work as expected but other times the k8s scheduler would place it onto kind-worker2 and it would get stuck in a Pending state forever.

There are two things I could do to fix this:

  1. Add an affinity to the pod spec to guarantee the pod would also be scheduled onto kind-worker (this is what you would probably do in the real world; see the sketch below)
  2. Add a second PV on kind-worker2 so that regardless of which node the pod is scheduled on there will be one available. This one is more fun because you can see the contents of the file on the host inside the pod after it starts up.

I went with (2), implemented in this commit 2bad7c2
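
For completeness, option (1) would boil down to a pod-level nodeAffinity along these lines. This is a sketch for illustration, not the actual manifest from the repo; the container and volume names are assumed from the describe output earlier in the thread:

apiVersion: v1
kind: Pod
metadata:
  name: manual-pv-and-pvc
  namespace: 04--persistentvolume
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              # Pin the pod to the node where the PV data was manually created.
              - key: kubernetes.io/hostname
                operator: In
                values:
                  - kind-worker
  containers:
    - name: nginx
      image: nginx:1.26.0
      volumeMounts:
        - name: storage
          mountPath: /some/mount/path
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: manual-pv-kind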

@accessd -- can you confirm if this solves the problem for you? (and close the issue if so!)

Thanks again for raising it!

accessd (Author) commented Aug 7, 2024

can you confirm if this solves the problem for you?

Yep! It works! Thank you!

accessd closed this as completed Aug 7, 2024
thangnh0608 commented

Hi @sidpalas,
I created multiple PVCs with different names but the same label selector, and they're stuck in Pending. I think it's because one PV (on node 1) is already bound via ReadWriteOnce, and Kubernetes keeps trying to bind the new PVCs to that same PV instead of the other available one (on node 2). My question is: why doesn't Kubernetes bind the new PVCs to the other PV instead of the one that is already bound? And as a best practice, should we set the same label on every PV?

Thank you for the awesome tutorial!

Here is the PVC definition that I tried to create:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: manual-pv-kind-5
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
  selector:
    matchLabels:
      name: manual-kind
  storageClassName: standard
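
For context, a PVC selector like the one above is only satisfied by a pre-existing PV that carries the matching label. A PV this claim could bind to would look roughly like the following sketch; the name, capacity, and hostPath are hypothetical values for illustration, not the repo's actual manifest:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: manual-kind-example   # hypothetical name
  labels:
    # This label is what the PVC's spec.selector.matchLabels must match.
    name: manual-kind
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  hostPath:
    path: /some/path/on/host   # assumed location on the kind node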

Here is the output:

$ kubectl get pv

NAME                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                 STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
manual-kind-worker    100Mi      RWO            Retain           Bound       04--persistentvolume/manual-pv-kind   standard       <unset>                          17h
manual-kind-worker2   100Mi      RWO            Retain           Available                                         standard       <unset>                          17h

$ kubectl get pvc

NAME               STATUS    VOLUME               CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
manual-pv-kind     Bound     manual-kind-worker   100Mi      RWO            standard       <unset>                 17h
manual-pv-kind-5   Pending                                                  standard       <unset>                 17m
manual-pv-kind-6   Pending                                                  standard       <unset>                 116s

