32 changes: 0 additions & 32 deletions asciidoc/edge-book/releasenotes.adoc
@@ -60,22 +60,6 @@ If deploying new clusters, please follow <<guides-kiwi-builder-images>> to build
* When using `toolbox` in SUSE Linux Micro 6.1, the default container image does not contain some tools which were included in the previous 5.5 version. The workaround is to configure toolbox to use the previous `suse/sle-micro/5.5/toolbox` container image; see `toolbox --help` for options to configure the image (a sketch follows below).
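
The following is a minimal sketch of pointing toolbox at the older image; the `-i`/`--image` flag and the `~/.toolboxrc` keys shown here are assumptions based on common toolbox builds, so verify them against `toolbox --help` on the host:

[,bash]
----
# One-off: start toolbox with the 5.5 image (flag name assumed, check `toolbox --help`)
toolbox -i registry.suse.com/suse/sle-micro/5.5/toolbox

# Persistent: configure the image via ~/.toolboxrc (keys assumed)
cat <<'EOF' > ~/.toolboxrc
REGISTRY="registry.suse.com"
IMAGE="suse/sle-micro/5.5/toolbox"
EOF
----
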
* In some cases, rolling upgrades via CAPI can result in Machines stuck in the `Deleting` state; this will be resolved via a future update (see https://github.com/rancher/cluster-api-provider-rke2/issues/655[Upstream RKE2 provider issue 655])
* Due to fixes related to https://nvd.nist.gov/vuln/detail/CVE-2025-1974[CVE-2025-1974] as mentioned in 3.3.0, SUSE Linux Micro 6.1 *must* be updated to include kernel `>=6.4.0-26-default` or `>=6.4.0-30-rt` (real-time kernel) due to required SELinux kernel patches. If not applied, the ingress-nginx pod will remain in a `CrashLoopBackOff` state. To apply the kernel update, run `transactional-update` on the host itself (to update all packages), or `transactional-update pkg update kernel-default` (or `kernel-rt`) to update just the kernel, then reboot the host (see the example below). If deploying new clusters, please follow <<guides-kiwi-builder-images>> to build fresh images containing the latest kernel.
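
For example, the kernel can be updated and the host rebooted as follows:

[,bash]
----
# Either update all packages in a new snapshot...
transactional-update
# ...or update just the kernel package (use kernel-rt for the real-time kernel)
transactional-update pkg update kernel-default
# Reboot into the new snapshot
reboot
----
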
* A bug in the Kubernetes Job Controller has been identified that, under certain conditions, can cause RKE2/K3s nodes to stay in the `NotReady` state (see the https://github.com/rancher/rke2/issues/8357[#8357 RKE2 issue]). The errors can look like:

[,bash]
----
E0605 23:11:18.489721 1 job_controller.go:631] "Unhandled Error" err="syncing job: tracking status: adding uncounted pods to status: Operation cannot be fulfilled on jobs.batch \"helm-install-rke2-ingress-nginx\": StorageError: invalid object, Code: 4, Key: /registry/jobs/kube-system/helm-install-rke2-ingress-nginx, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 0aa6a781-7757-4c61-881a-cb1a4e47802c, UID in object meta: 6a320146-16b8-4f83-88c5-fc8b5a59a581" logger="UnhandledError"
----

As a workaround, the `kube-controller-manager` pod can be restarted with `crictl` as follows:

[,bash]
----
# Point crictl at the RKE2/K3s containerd socket
export CONTAINER_RUNTIME_ENDPOINT=unix:///run/k3s/containerd/containerd.sock
# Find the kube-controller-manager container ID
export KUBEMANAGER_POD=$(/var/lib/rancher/rke2/bin/crictl ps --label io.kubernetes.container.name=kube-controller-manager --quiet)
# Stop and remove the container so that it gets recreated
/var/lib/rancher/rke2/bin/crictl stop ${KUBEMANAGER_POD} && \
/var/lib/rancher/rke2/bin/crictl rm ${KUBEMANAGER_POD}
----

* On RKE2/K3s versions 1.31 and 1.32, under certain conditions related to `overlayfs`, files written to the `/etc/cni` directory (used to store CNI configurations) may not trigger a notification to `containerd` (see the https://github.com/rancher/rke2/issues/8356[#8356 RKE2 issue]). This in turn causes the RKE2/K3s deployment to get stuck waiting for the CNI to start, and the RKE2/K3s nodes to stay in the `NotReady` state. This can be seen at node level with `kubectl describe node <affected_node>`:

@@ -242,22 +226,6 @@ If deploying new clusters, please follow <<guides-kiwi-builder-images>> to build
* When using RKE2 1.32.3, which resolves https://nvd.nist.gov/vuln/detail/CVE-2025-1974[CVE-2025-1974], SUSE Linux Micro 6.1 *must* be updated to include kernel `>=6.4.0-26-default` or `>=6.4.0-30-rt` (real-time kernel) due to required SELinux kernel patches. If not applied, the ingress-nginx pod will remain in a `CrashLoopBackOff` state. To apply the kernel update, run `transactional-update` on the host itself (to update all packages), or `transactional-update pkg update kernel-default` (or `kernel-rt`) to update just the kernel, then reboot the host. If deploying new clusters, please follow <<guides-kiwi-builder-images>> to build fresh images containing the latest kernel.
* When configuring networking via nm-configurator, certain configurations which identify interfaces by MAC currently do not work; this will be resolved in a future update (see https://github.com/suse-edge/nm-configurator/issues/163[Upstream NM Configurator Issue])
* For long-running Metal^3^ management clusters, certificate expiry can cause the baremetal-operator connection to Ironic to fail, requiring a manual pod restart as a workaround (see https://github.com/suse-edge/charts/issues/178[SUSE Edge charts issue]); a sketch follows below.
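
The following is a minimal sketch of that manual restart; the namespace and deployment name are assumptions that depend on how the Metal^3^ chart was installed, so adjust them to the environment:

[,bash]
----
# Namespace and deployment name are assumed; verify with: kubectl get deployments -A | grep baremetal
kubectl -n metal3-system rollout restart deployment metal3-baremetal-operator
# Wait for the restarted pod to become Ready
kubectl -n metal3-system rollout status deployment metal3-baremetal-operator
----
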
* A bug in the Kubernetes Job Controller has been identified that, under certain conditions, can cause RKE2/K3s nodes to stay in the `NotReady` state (see the https://github.com/rancher/rke2/issues/8357[#8357 RKE2 issue]). The errors can look like:

[,bash]
----
E0605 23:11:18.489721 1 job_controller.go:631] "Unhandled Error" err="syncing job: tracking status: adding uncounted pods to status: Operation cannot be fulfilled on jobs.batch \"helm-install-rke2-ingress-nginx\": StorageError: invalid object, Code: 4, Key: /registry/jobs/kube-system/helm-install-rke2-ingress-nginx, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 0aa6a781-7757-4c61-881a-cb1a4e47802c, UID in object meta: 6a320146-16b8-4f83-88c5-fc8b5a59a581" logger="UnhandledError"
----

As a workaround, the `kube-controller-manager` pod can be restarted with `crictl` as follows:

[,bash]
----
# Point crictl at the RKE2/K3s containerd socket
export CONTAINER_RUNTIME_ENDPOINT=unix:///run/k3s/containerd/containerd.sock
# Find the kube-controller-manager container ID
export KUBEMANAGER_POD=$(/var/lib/rancher/rke2/bin/crictl ps --label io.kubernetes.container.name=kube-controller-manager --quiet)
# Stop and remove the container so that it gets recreated
/var/lib/rancher/rke2/bin/crictl stop ${KUBEMANAGER_POD} && \
/var/lib/rancher/rke2/bin/crictl rm ${KUBEMANAGER_POD}
----

* On RKE2/K3s versions 1.31 and 1.32, under certain conditions related to `overlayfs`, files written to the `/etc/cni` directory (used to store CNI configurations) may not trigger a notification to `containerd` (see the https://github.com/rancher/rke2/issues/8356[#8356 RKE2 issue]). This in turn causes the RKE2/K3s deployment to get stuck waiting for the CNI to start, and the RKE2/K3s nodes to stay in the `NotReady` state. This can be seen at node level with `kubectl describe node <affected_node>`:
