There are occasions where relying on known MAC addresses is not an option. In these cases, we can opt for the so-called _unified configuration_, which allows us to specify settings in an `_all.yaml` file that will then be applied across all provisioned nodes.
-We will build and provision an edge node using a different configuration structure. Follow all steps starting from <<config_dir_creation>> up until <<default_network_definition>>.
+We will build and provision an edge node using a different configuration structure. Follow all steps starting from <<image-config-dir-creation>> up until <<default-network-definition>>.
In this example we define a desired state of two Ethernet interfaces (eth0 and eth1) - one using DHCP, and one assigned a static IP address.
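For reference, such a desired state is written in nmstate syntax. A minimal sketch of what an `_all.yaml` file under the `network/` directory could contain for this example (the static address and prefix length are illustrative placeholders, not values from the guide):

[,yaml]
----
interfaces:
  - name: eth0
    type: ethernet
    state: up
    ipv4:
      enabled: true
      dhcp: true
  - name: eth1
    type: ethernet
    state: up
    ipv4:
      enabled: true
      dhcp: false
      address:
        - ip: 192.168.100.10
          prefix-length: 24
----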
@@ -856,7 +856,7 @@ its limitation stems from the fact that using it is much less convenient when bo
> NOTE: It is recommended to use the default network configuration via files describing the desired network states under the `/network` directory.
Only opt for custom scripting when that behaviour is not applicable to your use case.
-We will build and provision an edge node using a different configuration structure. Follow all steps starting from <<config_dir_creation>> up until <<default_network_definition>>.
+We will build and provision an edge node using a different configuration structure. Follow all steps starting from <<image-config-dir-creation>> up until <<default-network-definition>>.
In this example, we will create a custom script that applies a static configuration for the `eth0` interface on all provisioned nodes and removes and disables the wired connections automatically created by NetworkManager. This is beneficial in situations where you want to make sure that every node in your cluster has an identical networking configuration, so you do not need to be concerned with the MAC address of each node prior to image creation.
asciidoc/day2/downstream-cluster-helm.adoc (5 additions & 5 deletions)
@@ -187,7 +187,7 @@ For K3s, see link:https://docs.k3s.io/installation/registry-mirror[Embedded Regi
[NOTE]
====
-The below upgrade procedure utilises Rancher's <<components-fleet,Fleet>> functionality. Users of a third-party GitOps workflow should retrieve the chart versions supported by each Edge release from the <<release_notes>> and populate these versions in their third-party GitOps workflow.
+The below upgrade procedure utilises Rancher's <<components-fleet,Fleet>> functionality. Users of a third-party GitOps workflow should retrieve the chart versions supported by each Edge release from the <<release-notes>> and populate these versions in their third-party GitOps workflow.
====
This section focuses on the following Helm upgrade procedure use-cases:
@@ -321,9 +321,9 @@ Once deployed with Fleet, for Helm chart upgrades, see <<day2-helm-upgrade-fleet
[#day2-helm-upgrade-fleet-managed-chart]
==== I would like to upgrade a Fleet managed Helm chart
-. Determine the version to which you need to upgrade your chart so that it is compatible with an Edge 3.X.Y release. The Helm chart versions for each Edge release can be viewed in the <<release_notes>>.
+. Determine the version to which you need to upgrade your chart so that it is compatible with an Edge 3.X.Y release. The Helm chart versions for each Edge release can be viewed in the <<release-notes>>.
-. In your Fleet-monitored Git repository, edit the Helm chart's `fleet.yaml` file with the correct chart *version* and *repository* from the <<release_notes>> (an example snippet is shown after these steps).
+. In your Fleet-monitored Git repository, edit the Helm chart's `fleet.yaml` file with the correct chart *version* and *repository* from the <<release-notes>> (an example snippet is shown after these steps).
. After committing and pushing the changes to your repository, an upgrade of the desired Helm chart will be triggered.
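For reference, the `fleet.yaml` edit from the steps above typically only touches the `helm` block of the file. A minimal sketch (the chart name, repository URL and version below are illustrative placeholders; take the real values from the release notes for your Edge release):

[,yaml]
----
defaultNamespace: longhorn-system
helm:
  releaseName: longhorn
  chart: longhorn
  repo: https://charts.rancher.io
  version: 104.2.0+up1.7.2  # replace with the chart version listed for your Edge release
----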
@@ -493,7 +493,7 @@ Executing these steps should result in a successfully created `GitRepo` resource.
Fleet will then deploy all the Kubernetes resources from the Bundle to the specified downstream clusters. One of these resources will be a SUC Plan that will trigger the chart upgrade. For a full list of the resources that will be deployed and the workflow of the upgrade process, refer to the <<day2-helm-upgrade-eib-chart-overview, overview>> section.
-To track the upgrade process itself, refer to the <<monitor_suc_plans, Monitor SUC Plans>> section.
+To track the upgrade process itself, refer to the <<monitor-suc-plans, Monitor SUC Plans>> section.
asciidoc/day2/downstream-cluster-k8s.adoc (3 additions & 3 deletions)
@@ -144,7 +144,7 @@ Kubernetes version upgrade steps:
The above *SUC Plans* will be deployed in the `cattle-system` namespace of each downstream cluster.
====
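For orientation, a Kubernetes upgrade *SUC Plan* is a System Upgrade Controller `Plan` resource. A heavily trimmed sketch of what such a plan can look like for RKE2 control-plane nodes (the version, names and node selector are illustrative, not the exact content shipped by `suse-edge/fleet-examples`):

[,yaml]
----
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: rke2-upgrade-control-plane
  namespace: cattle-system
spec:
  concurrency: 1
  version: v1.30.5+rke2r1  # illustrative target Kubernetes version
  serviceAccountName: system-upgrade-controller
  cordon: true
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: In
        values: ["true"]
  upgrade:
    image: rancher/rke2-upgrade  # K3s clusters would use rancher/k3s-upgrade
----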
-. On the downstream cluster, *SUC* picks up the newly deployed *SUC Plans* and deploys an *_Update Pod_* on each node that matches the *node selector* defined in the *SUC Plan*. For information on how to monitor the *SUC Plan Pod*, refer to <<monitor_suc_plans>>.
+. On the downstream cluster, *SUC* picks up the newly deployed *SUC Plans* and deploys an *_Update Pod_* on each node that matches the *node selector* defined in the *SUC Plan*. For information on how to monitor the *SUC Plan Pod*, refer to <<monitor-suc-plans>>.
. Depending on which *SUC Plans* you have deployed, the *Update Pod* will run either a https://hub.docker.com/r/rancher/rke2-upgrade/tags[rke2-upgrade] or a https://hub.docker.com/r/rancher/k3s-upgrade/tags[k3s-upgrade] image and will execute the following workflow on *each* cluster node:
@@ -175,7 +175,7 @@ A *GitRepo* resource, that ships the needed `Kubernetes upgrade` *SUC Plans*, ca
. By <<k8s-upgrade-suc-plan-deployment-git-repo-manual, manually deploying>> the resource to your `management cluster`.
-Once deployed, to monitor the Kubernetes upgrade process of the nodes of your targeted cluster, refer to the <<monitor_suc_plans>> documentation.
+Once deployed, to monitor the Kubernetes upgrade process of the nodes of your targeted cluster, refer to the <<monitor-suc-plans>> documentation.
asciidoc/day2/downstream-cluster-os.adoc (3 additions & 3 deletions)
@@ -133,7 +133,7 @@ OS package update steps:
The above resources will be deployed in the `cattle-system` namespace of each downstream cluster.
====
-. On the downstream cluster, *SUC* picks up the newly deployed *SUC Plans* and deploys an *_Update Pod_* on each node that matches the *node selector* defined in the *SUC Plan*. For information on how to monitor the *SUC Plan Pod*, refer to <<monitor_suc_plans>>.
+. On the downstream cluster, *SUC* picks up the newly deployed *SUC Plans* and deploys an *_Update Pod_* on each node that matches the *node selector* defined in the *SUC Plan*. For information on how to monitor the *SUC Plan Pod*, refer to <<monitor-suc-plans>>.
. The *Update Pod* (deployed on each node) *mounts* the `os-pkg-update` Secret and *executes* the `update.sh` script that the Secret ships.
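For orientation only, the `os-pkg-update` Secret essentially wraps the update script as a data key. A simplified sketch of its shape (the script body below is a placeholder; the actual `update.sh` shipped by the Edge fleets differs):

[,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: os-pkg-update
  namespace: cattle-system
type: Opaque
stringData:
  update.sh: |
    #!/bin/sh
    # Placeholder: run a transactional package update, then reboot into the new snapshot
    transactional-update cleanup up
    reboot
----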
@@ -175,7 +175,7 @@ A *GitRepo* resource, that ships the needed `OS package update` *SUC Plans*, can
. By <<os-pkg-suc-plan-deployment-git-repo-manual, manually deploying>> the resource to your `management cluster`.
-Once deployed, to monitor the OS package update process of the nodes of your targeted cluster, refer to the <<monitor_suc_plans>> documentation.
+Once deployed, to monitor the OS package update process of the nodes of your targeted cluster, refer to the <<monitor-suc-plans>> documentation.
[#os-pkg-suc-plan-deployment-git-repo-rancher]
===== GitRepo creation - Rancher UI
@@ -251,7 +251,7 @@ A *Bundle* resource, that ships the needed `OS package update` *SUC Plans*, can
. By <<os-pkg-suc-plan-deployment-bundle-manual, manually deploying>> the resource to your `management cluster`.
-Once deployed, to monitor the OS package update process of the nodes of your targeted cluster, refer to the <<monitor_suc_plans>> documentation.
+Once deployed, to monitor the OS package update process of the nodes of your targeted cluster, refer to the <<monitor-suc-plans>> documentation.
asciidoc/day2/downstream-cluster-suc.adoc (4 additions & 4 deletions)
@@ -50,7 +50,7 @@ If using the `suse-edge/fleet-examples` repository, make sure you are using the
* By <<day2-suc-dep-gitrepo-manual, manually deploying>> the resources to your `management cluster`
-Once created, `Fleet` will be responsible for picking up the resource and deploying the *SUC* resources to all your *target* clusters. For information on how to track the deployment process, see <<monitor_suc_deployment>>.
+Once created, `Fleet` will be responsible for picking up the resource and deploying the *SUC* resources to all your *target* clusters. For information on how to track the deployment process, see <<monitor-suc-deployment>>.
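For the manual deployment option, the *GitRepo* resource is a standard Fleet object. A minimal sketch (the repository path, branch and target selector are illustrative; use the values matching your Edge release and Fleet workspace):

[,yaml]
----
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: system-upgrade-controller
  namespace: fleet-default
spec:
  repo: https://github.com/suse-edge/fleet-examples.git
  branch: main                               # pin to the branch/tag of your Edge release
  paths:
    - fleets/day2/system-upgrade-controller  # illustrative path within the repository
  targets:
    - clusterSelector: {}                    # adjust to match only the intended clusters
----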
[#day2-suc-dep-gitrepo-rancher]
===== GitRepo deployment - Rancher UI
@@ -134,7 +134,7 @@ If using the `suse-edge/fleet-examples` repository, make sure you are using the
* By <<day2-suc-dep-bundle-manual, manually deploying>> the resources to your `management cluster`
-Once created, `Fleet` will be responsible for picking up the resource and deploying the *SUC* resources to all your *target* clusters. For information on how to track the deployment process, see <<monitor_suc_deployment>>.
+Once created, `Fleet` will be responsible for picking up the resource and deploying the *SUC* resources to all your *target* clusters. For information on how to track the deployment process, see <<monitor-suc-deployment>>.
[#day2-suc-dep-bundle-rancher]
===== Bundle creation - Rancher UI
@@ -226,7 +226,7 @@ Use the above mentioned resources to populate the data that your third-party Git
This section covers how to monitor the lifecycle of the *SUC* deployment and any deployed *SUC* Plans using the Rancher UI.
-[#monitor_suc_deployment]
+[#monitor-suc-deployment]
==== Monitor SUC deployment
To check the *SUC* pod logs for a specific cluster:
asciidoc/day2/downstream-clusters-introduction.adoc (1 addition & 1 deletion)
@@ -56,7 +56,7 @@ For use-cases, where a third party GitOps tool usage is desired, see:
. For `Kubernetes distribution upgrades` - <<k8s-upgrade-suc-plan-deployment-third-party>>
-. For `Helm chart upgrades` - retrieve the chart version supported by the desired Edge release from the <<release_notes>> page and populate the chart version and URL in your third party GitOps tool
+. For `Helm chart upgrades` - retrieve the chart version supported by the desired Edge release from the <<release-notes>> page and populate the chart version and URL in your third party GitOps tool