
Commit 2a37351

Validation fixes (suse-edge#320)
* Headlines in Appendix need to be Level 1
* SEO improvement: replacing underscores with dashes in IDs
1 parent 70f105a commit 2a37351

8 files changed: +22, -23 lines


asciidoc/components/networking.adoc

Lines changed: 4 additions & 4 deletions
@@ -65,7 +65,7 @@ The EIB container image is publicly available and can be downloaded from the SUS
 podman pull registry.suse.com/edge/edge-image-builder:1.0.2
 ----
 
-=== Creating the image configuration directory [[config_dir_creation]]
+=== Creating the image configuration directory [[image-config-dir-creation]]
 
 Let's start with creating the configuration directory:
 
@@ -128,7 +128,7 @@ The configuration directory at this point should look like the following:
 └── SLE-Micro.x86_64-5.5.0-Default-GM.raw
 ----
 
-=== Defining the network configurations [[default_network_definition]]
+=== Defining the network configurations [[default-network-definition]]
 
 The desired network configurations are not part of the image definition file that we just created.
 We'll now populate those under the special `network/` directory. Let's create it:
@@ -705,7 +705,7 @@ Wired Connection 300ed658-08d4-4281-9f8c-d1b8882d29b9 ethernet eth0 /var/r
 There are occasions where relying on known MAC addresses is not an option. In these cases we can opt for the so-called _unified configuration_
 which allows us to specify settings in an `_all.yaml` file which will then be applied across all provisioned nodes.
 
-We will build and provision an edge node using different configuration structure. Follow all steps starting from <<config_dir_creation>> up until <<default_network_definition>>.
+We will build and provision an edge node using different configuration structure. Follow all steps starting from <<image-config-dir-creation>> up until <<default-network-definition>>.
 
 In this example we define a desired state of two Ethernet interfaces (eth0 and eth1) - one using DHCP, and one assigned a static IP address.
 
@@ -856,7 +856,7 @@ its limitation stems from the fact that using it is much less convenient when bo
 > NOTE: It is recommended to use the default network configuration via files describing the desired network states under the `/network` directory.
 Only opt for custom scripting when that behaviour is not applicable to your use case.
 
-We will build and provision an edge node using different configuration structure. Follow all steps starting from <<config_dir_creation>> up until <<default_network_definition>>.
+We will build and provision an edge node using different configuration structure. Follow all steps starting from <<image-config-dir-creation>> up until <<default-network-definition>>.
 
 In this example, we will create a custom script which applies static configuration for the `eth0` interface on all provisioned nodes,
 as well as removing and disabling the automatically created wired connections by NetworkManager. This is beneficial in situations where you want to make sure that every node in your cluster has an identical networking configuration, and as such you do not need to be concerned with the MAC address of each node prior to image creation.
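
For context on the _unified configuration_ referenced above: `network/_all.yaml` carries one nmstate-style desired state that Edge Image Builder applies to every provisioned node. A minimal sketch of the described example, eth0 on DHCP and eth1 with a static address (the address values are illustrative assumptions, not taken from the commit):

[,yaml]
----
interfaces:
- name: eth0        # first interface: dynamic addressing
  type: ethernet
  state: up
  ipv4:
    enabled: true
    dhcp: true
- name: eth1        # second interface: static addressing
  type: ethernet
  state: up
  ipv4:
    enabled: true
    dhcp: false
    address:
    - ip: 192.168.100.50    # placeholder address
      prefix-length: 24
----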

asciidoc/day2/downstream-cluster-helm.adoc

Lines changed: 5 additions & 5 deletions
@@ -187,7 +187,7 @@ For K3s, see link:https://docs.k3s.io/installation/registry-mirror[Embedded Regi
 
 [NOTE]
 ====
-The below upgrade procedure utilises Rancher's <<components-fleet,Fleet>> funtionality. Users using a third-party GitOps workflow should retrieve the chart versions supported by each Edge release from the <<release_notes>> and populate these versions to their third-party GitOps workflow.
+The below upgrade procedure utilises Rancher's <<components-fleet,Fleet>> funtionality. Users using a third-party GitOps workflow should retrieve the chart versions supported by each Edge release from the <<release-notes>> and populate these versions to their third-party GitOps workflow.
 ====
 
 This section focuses on the following Helm upgrade procedure use-cases:
@@ -321,9 +321,9 @@ Once deployed with Fleet, for Helm chart upgrades, see <<day2-helm-upgrade-fleet
 [#day2-helm-upgrade-fleet-managed-chart]
 ==== I would like to upgrade a Fleet managed Helm chart
 
-. Determine the version to which you need to upgrade your chart so that it is compatible with an Edge 3.X.Y release. Helm chart version per Edge release can be viewed from the <<release_notes>>.
+. Determine the version to which you need to upgrade your chart so that it is compatible with an Edge 3.X.Y release. Helm chart version per Edge release can be viewed from the <<release-notes>>.
 
-. In your Fleet monitored Git repository, edit the Helm chart's `fleet.yaml` file with the correct chart *version* and *repository* from the <<release_notes>>.
+. In your Fleet monitored Git repository, edit the Helm chart's `fleet.yaml` file with the correct chart *version* and *repository* from the <<release-notes>>.
 
 . After commiting and pushing the changes to your repository, this will trigger an upgrade of the desired Helm chart
 
@@ -493,7 +493,7 @@ Executing this steps should result in a successfully created `GitRepo` resource.
 
 Fleet will then deploy all the Kubernetes resources from the Bundle to the specified downstream clusters. One of this resources will be a SUC Plan that will trigger the chart upgrade. For a full list of the resoruces that will be deployed and the workflow of the upgrade process, refer to the <<day2-helm-upgrade-eib-chart-overview, overview>> section.
 
-To track the upgrade process itself, refer to the <<monitor_suc_plans, Monitor SUC Plans>> section.
+To track the upgrade process itself, refer to the <<monitor-suc-plans, Monitor SUC Plans>> section.
 
 [#day2-helm-upgrade-eib-chart-example]
 ===== Example
@@ -681,7 +681,7 @@ image::day2_helm_chart_upgrade_example_gitrepo.png[]
 
 Now we need to monitor the upgrade procedures on the clusters:
 
-. Check the status of the *Upgrade Pods*, following the directions from the <<monitor_suc_plans, SUC plan monitor>> section.
+. Check the status of the *Upgrade Pods*, following the directions from the <<monitor-suc-plans, SUC plan monitor>> section.
 
 .. A successfully completed *Upgrade Pod* that has been working on an `intialiser` node should hold logs similar to:
 +
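
The Fleet-managed upgrade step above edits `fleet.yaml` to pin the chart *version* and *repository*. A minimal sketch of the relevant fields (the chart name, repository URL, and version below are illustrative placeholders; the real values come from the release notes):

[,yaml]
----
defaultNamespace: longhorn-system    # namespace the chart installs into
helm:
  releaseName: longhorn
  chart: longhorn                    # placeholder chart name
  repo: https://charts.longhorn.io   # placeholder repository URL
  version: 1.6.1                     # placeholder version from the release notes
----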

asciidoc/day2/downstream-cluster-k8s.adoc

Lines changed: 3 additions & 3 deletions
@@ -144,7 +144,7 @@ Kubernetes version upgrade steps:
 The above *SUC Plans* will be deployed in the `cattle-system` namespace of each downstream cluster.
 ====
 
-. On the downstream cluster, *SUC* picks up the newly deployed *SUC Plans* and deploys an *_Update Pod_* on each node that matches the *node selector* defined in the *SUC Plan*. For information how to monitor the *SUC Plan Pod*, refer to <<monitor_suc_plans>>.
+. On the downstream cluster, *SUC* picks up the newly deployed *SUC Plans* and deploys an *_Update Pod_* on each node that matches the *node selector* defined in the *SUC Plan*. For information how to monitor the *SUC Plan Pod*, refer to <<monitor-suc-plans>>.
 
 . Depending on which *SUC Plans* you have deployed, the *Update Pod* will run either a https://hub.docker.com/r/rancher/rke2-upgrade/tags[rke2-upgrade] or a https://hub.docker.com/r/rancher/k3s-upgrade/tags[k3s-upgrade] image and will execute the following workflow on *each* cluster node:
 
@@ -175,7 +175,7 @@ A *GitRepo* resource, that ships the needed `Kubernetes upgrade` *SUC Plans*, ca
 
 . By <<k8s-upgrade-suc-plan-deployment-git-repo-manual, manually deploying>> the resource to your `management cluster`.
 
-Once deployed, to monitor the Kubernetes upgrade process of the nodes of your targeted cluster, refer to the <<monitor_suc_plans>> documentation.
+Once deployed, to monitor the Kubernetes upgrade process of the nodes of your targeted cluster, refer to the <<monitor-suc-plans>> documentation.
 
 [#k8s-upgrade-suc-plan-deployment-git-repo-rancher]
 ===== GitRepo creation - Rancher UI
@@ -273,7 +273,7 @@ A *Bundle* resource, that ships the needed `Kubernetes upgrade` *SUC Plans*, can
 
 . By <<k8s-upgrade-suc-plan-deployment-bundle-manual, manually deploying>> the resource to your `management cluster`.
 
-Once deployed, to monitor the Kubernetes upgrade process of the nodes of your targeted cluster, refer to the <<monitor_suc_plans>> documentation.
+Once deployed, to monitor the Kubernetes upgrade process of the nodes of your targeted cluster, refer to the <<monitor-suc-plans>> documentation.
 
 [#k8s-upgrade-suc-plan-deployment-bundle-rancher]
 ===== Bundle creation - Rancher UI
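
For orientation, the *SUC Plans* these hunks link to are `Plan` resources from the system-upgrade-controller API; SUC schedules an Update Pod on every node matching the plan's *node selector*. A minimal sketch of such a plan (the version, image, selector, and service account values are illustrative assumptions):

[,yaml]
----
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: rke2-upgrade-control-plane
  namespace: cattle-system           # namespace noted in the first hunk
spec:
  concurrency: 1                     # upgrade one node at a time
  version: v1.28.9+rke2r1            # placeholder Kubernetes version
  serviceAccountName: system-upgrade
  nodeSelector:                      # which nodes receive an Update Pod
    matchExpressions:
    - key: node-role.kubernetes.io/control-plane
      operator: Exists
  cordon: true                       # cordon the node before upgrading
  upgrade:
    image: rancher/rke2-upgrade      # image mentioned in the workflow step
----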

asciidoc/day2/downstream-cluster-os.adoc

Lines changed: 3 additions & 3 deletions
@@ -133,7 +133,7 @@ OS pacakge update steps:
 The above resources will be deployed in the `cattle-system` namespace of each downstream cluster.
 ====
 
-. On the downstream cluster, *SUC* picks up the newly deployed *SUC Plans* and deploys an *_Update Pod_* on each node that matches the *node selector* defined in the *SUC Plan*. For information how to monitor the *SUC Plan Pod*, refer to <<monitor_suc_plans>>.
+. On the downstream cluster, *SUC* picks up the newly deployed *SUC Plans* and deploys an *_Update Pod_* on each node that matches the *node selector* defined in the *SUC Plan*. For information how to monitor the *SUC Plan Pod*, refer to <<monitor-suc-plans>>.
 
 . The *Update Pod* (deployed on each node) *mounts* the `os-pkg-update` Secret and *executes* the `update.sh` script that the Secret ships.
 
@@ -175,7 +175,7 @@ A *GitRepo* resource, that ships the needed `OS package update` *SUC Plans*, can
 
 . By <<os-pkg-suc-plan-deployment-git-repo-manual, manually deploying>> the resource to your `management cluster`.
 
-Once deployed, to monitor the OS package update process of the nodes of your targeted cluster, refer to the <<monitor_suc_plans>> documentation.
+Once deployed, to monitor the OS package update process of the nodes of your targeted cluster, refer to the <<monitor-suc-plans>> documentation.
 
 [#os-pkg-suc-plan-deployment-git-repo-rancher]
 ===== GitRepo creation - Rancher UI
@@ -251,7 +251,7 @@ A *Bundle* resource, that ships the needed `OS package update` *SUC Plans*, can
 
 . By <<os-pkg-suc-plan-deployment-bundle-manual, manually deploying>> the resource to your `management cluster`.
 
-Once deployed, to monitor the OS package update process of the nodes of your targeted cluster, refer to the <<monitor_suc_plans>> documentation.
+Once deployed, to monitor the OS package update process of the nodes of your targeted cluster, refer to the <<monitor-suc-plans>> documentation.
 
 [#os-pkg-suc-plan-deployment-bundle-rancher]
 ===== Bundle creation - Rancher UI
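
Both delivery options ultimately hand the *SUC Plans* to Fleet. A minimal sketch of a `GitRepo` resource pointing Fleet at such plans (the repository path, branch, and target label are illustrative assumptions, not values from this commit):

[,yaml]
----
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: os-pkg-update
  namespace: fleet-default          # Fleet workspace for downstream clusters
spec:
  repo: https://github.com/suse-edge/fleet-examples.git
  branch: main                      # placeholder; pin the release branch you need
  paths:
  - fleets/day2/os-pkg-update       # placeholder path to the SUC Plan fleet
  targets:
  - clusterSelector:                # which downstream clusters receive the plans
      matchLabels:
        env: prod                   # placeholder label
----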

asciidoc/day2/downstream-cluster-suc.adoc

Lines changed: 4 additions & 4 deletions
@@ -50,7 +50,7 @@ If using the `suse-edge/fleet-examples` repository, make sure you are using the
 
 * By <<day2-suc-dep-gitrepo-manual, manually deploying>> the resources to your `management cluster`
 
-Once created, `Fleet` will be responsible for picking up the resource and deploying the *SUC* resources to all your *target* clusters. For information on how to track the deployment process, see <<monitor_suc_deployment>>.
+Once created, `Fleet` will be responsible for picking up the resource and deploying the *SUC* resources to all your *target* clusters. For information on how to track the deployment process, see <<monitor-suc-deployment>>.
 
 [#day2-suc-dep-gitrepo-rancher]
 ===== GitRepo deployment - Rancher UI
@@ -134,7 +134,7 @@ If using the `suse-edge/fleet-examples` repository, make sure you are using the
 
 * By <<day2-suc-dep-bundle-manual, manually deploying>> the resources to your `management cluster`
 
-Once created, `Fleet` will be responsible for pickuping the resource and deploying the *SUC* resources to all your *target* clusters. For information on how to track the deployment process, see <<monitor_suc_deployment>>.
+Once created, `Fleet` will be responsible for pickuping the resource and deploying the *SUC* resources to all your *target* clusters. For information on how to track the deployment process, see <<monitor-suc-deployment>>.
 
 [#day2-suc-dep-bundle-rancher]
 ===== Bundle creation - Rancher UI
@@ -226,7 +226,7 @@ Use the above mentioned resoruces to populate the data that your third-party Git
 
 This section covers how to monitor the lifecycle of the *SUC* deployment and any deployed *SUC* Plans using the Rancher UI.
 
-[#monitor_suc_deployment]
+[#monitor-suc-deployment]
 ==== Monitor SUC deployment
 
 To check the *SUC* pod logs for a specific cluster:
@@ -250,7 +250,7 @@ image::day2-monitor-suc-deployment-2.png[]
 +
 image::day2-monitor-suc-deployment-3.png[]
 
-[#monitor_suc_plans]
+[#monitor-suc-plans]
 ==== Monitor SUC Plans
 
 [IMPORTANT]
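
The two monitoring anchors renamed here describe the Rancher UI flow; roughly the same state can also be inspected from the command line against the relevant cluster. A sketch assuming the default SUC namespace and labels:

[,bash]
----
# Monitor SUC deployment: follow the controller's pod logs
kubectl logs -n cattle-system \
  -l upgrade.cattle.io/controller=system-upgrade-controller --follow

# Monitor SUC Plans: list the plans and the Update Pods they spawn
kubectl get plans -n cattle-system
kubectl get pods -n cattle-system -l upgrade.cattle.io/plan
----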

asciidoc/day2/downstream-clusters-introduction.adoc

Lines changed: 1 addition & 1 deletion
@@ -56,7 +56,7 @@ For use-cases, where a third party GitOps tool usage is desired, see:
 
 . For `Kubernetes distribution upgrades` - <<k8s-upgrade-suc-plan-deployment-third-party>>
 
-. For `Helm chart upgrades` - retrieve the chart version supported by the desired Edge release from the <<release_notes>> page and populate the chart version and URL in your third party GitOps tool
+. For `Helm chart upgrades` - retrieve the chart version supported by the desired Edge release from the <<release-notes>> page and populate the chart version and URL in your third party GitOps tool
 ====
 
 ==== System-upgrade-controller (SUC)

asciidoc/edge-book/edge.adoc

Lines changed: 1 addition & 2 deletions
@@ -146,8 +146,7 @@ include::../product/atip-lifecycle.adoc[]
 //--------------------------------------------
 
 [appendix]
-
-= Release Notes
+== Release Notes
 
 //--------------------------------------------
 // Release Notes
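
The heading fix follows AsciiDoc's sectioning model: a single `=` marks the level-0 document title, so any section inside the book, appendices included, starts at `==` (level 1). A minimal sketch of the intended hierarchy (the book title line is illustrative):

[,asciidoc]
----
= SUSE Edge Documentation

[appendix]
== Release Notes
----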

asciidoc/edge-book/releasenotes.adoc

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-[#release_notes]
+[#release-notes]
 
 = Abstract
 ifdef::env-github[]
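
This `[#release-notes]` block anchor is the target that the `<<release_notes>>` cross-references updated throughout this commit now reach as `<<release-notes>>`; using dashes keeps the ID aligned with the dash-separated HTML fragment URLs that the SEO note in the commit message is after. The pairing in a nutshell (the referencing sentence is illustrative):

[,asciidoc]
----
[#release-notes]
= Abstract

Supported component versions are listed in the <<release-notes,Release Notes>>.
----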
