10 changes: 5 additions & 5 deletions asciidoc/components/akri.adoc
@@ -115,20 +115,20 @@ See <<components-rancher-dashboard-extensions>> for installation guidance.

Once the extension is installed, you can navigate to any Akri-enabled managed cluster using the cluster explorer. Under the *Akri* navigation group you can see the *Configurations* and *Instances* sections.

image::akri-extension-configurations.png[]
image::akri-extension-configurations.png[scaledwidth=100%]

The configurations list provides information about the `Configuration Discovery Handler` and the number of instances. Clicking the name opens a configuration detail page.

image::akri-extension-configuration-detail.png[]
image::akri-extension-configuration-detail.png[scaledwidth=100%]

You can also edit or create a new *Configuration*. The extension allows you to select a discovery handler, set up a broker pod or job, customize the configuration and instance services, and set the configuration capacity.

image::akri-extension-configuration-edit.png[]
image::akri-extension-configuration-edit.png[scaledwidth=100%]
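
The edit screen described above maps onto the fields of an Akri `Configuration` resource. A rough sketch of such a resource, assuming Akri's `akri.sh/v0` schema (the name, udev rule, and broker image are illustrative):

[source,yaml]
----
apiVersion: akri.sh/v0
kind: Configuration
metadata:
  name: akri-udev-video            # illustrative name
spec:
  discoveryHandler:
    name: udev                     # the discovery handler selected in the UI
    discoveryDetails: |-
      udevRules:
      - 'KERNEL=="video[0-9]*"'
  brokerSpec:
    brokerPodSpec:                 # the broker pod set up in the UI
      containers:
        - name: broker
          image: registry.example.com/broker:latest  # illustrative image
  capacity: 1                      # the configuration capacity set in the UI
----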

Discovered devices are listed in the *Instances* list.

image::akri-extension-instances-list.png[]
image::akri-extension-instances-list.png[scaledwidth=100%]

Clicking the *Instance* name opens a detail page that allows you to view the workloads and the instance service.

image::akri-extension-instance-detail.png[]
image::akri-extension-instance-detail.png[scaledwidth=100%]
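
The pattern applied throughout these files is the AsciiDoc block image macro with a `scaledwidth` attribute. A minimal sketch of the syntax (alt text is illustrative):

[source,asciidoc]
----
// no attributes: the image renders at its intrinsic size
image::akri-extension-configurations.png[]

// alt text is the first positional attribute; named attributes follow
image::akri-extension-configurations.png[Configurations list,scaledwidth=100%]
----

`scaledwidth` is read by the DocBook/PDF converters and sizes the image relative to the page width; for HTML output, sizing is typically controlled with the `width` attribute instead.
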
10 changes: 5 additions & 5 deletions asciidoc/components/fleet.adoc
@@ -35,7 +35,7 @@ Fleet shines as an integrated part of Rancher. Clusters managed with Rancher aut

Fleet comes preinstalled in Rancher and is managed by the *Continuous Delivery* option in the Rancher UI.

image::fleet-dashboard.png[]
image::fleet-dashboard.png[scaledwidth=100%]

The Continuous Delivery section consists of the following items:

@@ -77,13 +77,13 @@ helm:

3. The Repository creation wizard guides you through the creation of the Git repo. Provide *Name*, *Repository URL* (referencing the Git repository created in the previous step) and select the appropriate branch or revision. In the case of a more complex repository, specify *Paths* to use multiple directories in a single repository.
+
image::fleet-create-repo1.png[]
image::fleet-create-repo1.png[scaledwidth=100%]

4. Click `Next`.

5. In the next step, you can define where the workloads will get deployed. Cluster selection offers several basic options: you can select no clusters, all clusters, or directly choose a specific managed cluster or cluster group (if defined). The "Advanced" option allows you to edit the selectors directly via YAML.
+
image::fleet-create-repo2.png[]
image::fleet-create-repo2.png[scaledwidth=100%]

6. Click `Create`. The repository gets created. From now on, the workloads are installed and kept in sync on the clusters matching the repository definition.
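
A repository created through this wizard corresponds to a Fleet `GitRepo` resource. A minimal sketch of the equivalent YAML (name, URL, path, and labels are illustrative):

[source,yaml]
----
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: sample-repo               # illustrative name
  namespace: fleet-default        # Fleet's default workspace for downstream clusters
spec:
  repo: https://github.com/example/fleet-examples  # the Repository URL from step 3
  branch: main
  paths:
    - simple                      # optional subdirectory; omit to use the repository root
  targets:
    - clusterSelector:            # the cluster selection from step 5
        matchLabels:
          env: dev                # illustrative label
----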

@@ -93,12 +93,12 @@ The "Advanced" navigation section provides overviews of lower-level Fleet resour

To find bundles relevant to a specific repository, go to the Git repo detail page and click the `Bundles` tab.

image::fleet-repo-bundles.png[]
image::fleet-repo-bundles.png[scaledwidth=100%]

For each cluster the bundle is applied to, a BundleDeployment resource is created. To view BundleDeployment details, click the `Graph` button in the upper right of the Git repo detail page.
A graph of *Repo > Bundles > BundleDeployments* is loaded. Click the BundleDeployment in the graph to see its details and click the `Id` to view the BundleDeployment YAML.

image::fleet-repo-graph.png[]
image::fleet-repo-graph.png[scaledwidth=100%]

For additional Fleet troubleshooting tips, see https://fleet.rancher.io/troubleshooting[here].

8 changes: 4 additions & 4 deletions asciidoc/components/rancher-dashboard-extensions.adoc
@@ -34,17 +34,17 @@ Each extension is distributed via its own OCI artifact. They are available from
SUSE Edge Helm charts repository URL:
`oci://registry.suse.com/edge/charts`
+
image::dashboard-extensions-create-oci-repository.png[]
image::dashboard-extensions-create-oci-repository.png[scaledwidth=100%]

. You can see that the extension repository is added to the list and is in the `Active` state.
+
image::dashboard-extensions-repositories-list.png[]
image::dashboard-extensions-repositories-list.png[scaledwidth=100%]

. Navigate back to the *Extensions* in the *Configuration* section of the navigation sidebar.
+
In the *Available* tab you can see the extensions available for installation.
+
image::dashboard-extensions-available-extensions.png[]
image::dashboard-extensions-available-extensions.png[scaledwidth=100%]

. On the extension card click `Install` and confirm the installation.
+
@@ -122,7 +122,7 @@ For more information, see <<components-fleet>> and the `https://github.com/suse-

Once the extensions are installed, they are listed in the *Extensions* section under the *Installed* tab. Since they are not installed via Apps/Marketplace, they are marked with the `Third-Party` label.

image::installed-dashboard-extensions.png[]
image::installed-dashboard-extensions.png[scaledwidth=100%]
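
The repository added in the installation steps above corresponds to a Rancher `ClusterRepo` resource. A minimal sketch, assuming the `catalog.cattle.io/v1` schema (the resource name is illustrative; the URL is the one given above):

[source,yaml]
----
apiVersion: catalog.cattle.io/v1
kind: ClusterRepo
metadata:
  name: suse-edge-charts          # illustrative name
spec:
  url: oci://registry.suse.com/edge/charts  # SUSE Edge Helm charts repository URL
----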

== KubeVirt Dashboard Extension

10 changes: 5 additions & 5 deletions asciidoc/day2/fleet-helm-upgrade.adoc
@@ -529,7 +529,7 @@ As mentioned previously this will trigger the `helm-controller` which will perfo

Below you can find a diagram of the above description:

image::fleet-day2-{cluster-type}-helm-eib-upgrade.png[]
image::fleet-day2-{cluster-type}-helm-eib-upgrade.png[scaledwidth=100%]

[#{cluster-type}-day2-fleet-helm-upgrade-procedure-eib-deployed-chart-upgrade-steps]
===== Upgrade Steps
@@ -884,7 +884,7 @@ endif::[]
. Deploy the Bundle through the Rancher UI:
+
.Deploy Bundle through Rancher UI
image::day2_helm_chart_upgrade_example_1.png[]
image::day2_helm_chart_upgrade_example_1.png[scaledwidth=100%]
+
From here, select *Read from File* and find the `bundle.yaml` file on your system.
+
@@ -895,13 +895,13 @@ Select *Create*.
. After a successful deployment, your Bundle would look similar to:
+
.Successfully deployed Bundle
image::day2_helm_chart_upgrade_example_2.png[]
image::day2_helm_chart_upgrade_example_2.png[scaledwidth=100%]

After the `Bundle` is successfully deployed, monitor the upgrade process as follows:

. Verify the logs of the `Upgrade Pod`:
+
image::day2_helm_chart_upgrade_example_3_{cluster-type}.png[]
image::day2_helm_chart_upgrade_example_3_{cluster-type}.png[scaledwidth=100%]

. Now verify the logs of the Pod created by the helm-controller for the upgrade:

@@ -911,7 +911,7 @@ image::day2_helm_chart_upgrade_example_3_{cluster-type}.png[]
+
.Logs for successfully upgraded Longhorn chart
+
image::day2_helm_chart_upgrade_example_4_{cluster-type}.png[]
image::day2_helm_chart_upgrade_example_4_{cluster-type}.png[scaledwidth=100%]

. Verify that the `HelmChart` version has been updated by navigating to Rancher's `HelmCharts` section (`More Resources -> HelmCharts`). Select the namespace where the chart was deployed; for this example, it would be `kube-system`.

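The `bundle.yaml` selected via *Read from File* above is a Fleet `Bundle` resource. A minimal sketch of the general shape, with an illustrative embedded manifest standing in for the real upgrade resources:

[source,yaml]
----
apiVersion: fleet.cattle.io/v1alpha1
kind: Bundle
metadata:
  name: longhorn-chart-upgrade     # illustrative name
  namespace: fleet-default
spec:
  resources:
    - name: configmap.yaml         # manifests are embedded verbatim as strings
      content: |
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: upgrade-settings
          namespace: kube-system
        data:
          chartVersion: "104.2.0"  # illustrative value
  targets:
    - clusterSelector: {}          # illustrative: target all clusters
----
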
2 changes: 1 addition & 1 deletion asciidoc/day2/fleet-k8s-upgrade.adoc
@@ -96,7 +96,7 @@ Once the `K8s SUC plans` are deployed, the workflow looks like this:

Below you can find a diagram of the above description:

image::fleet-day2-{cluster-type}-k8s-upgrade.png[]
image::fleet-day2-{cluster-type}-k8s-upgrade.png[scaledwidth=100%]
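
The `K8s SUC plans` mentioned above are `Plan` resources consumed by the system-upgrade-controller. A minimal sketch with illustrative names, namespace, and target version (`rancher/rke2-upgrade` is the upgrade image Rancher publishes for RKE2):

[source,yaml]
----
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: control-plane-plan        # illustrative name
  namespace: cattle-system        # illustrative namespace
spec:
  concurrency: 1                  # upgrade one node at a time
  version: v1.30.3+rke2r1         # illustrative target Kubernetes version
  serviceAccountName: system-upgrade
  cordon: true                    # cordon the node before upgrading
  nodeSelector:
    matchExpressions:
      - {key: node-role.kubernetes.io/control-plane, operator: Exists}
  upgrade:
    image: rancher/rke2-upgrade
----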

[#{cluster-type}-day2-fleet-k8s-upgrade-requirements]
=== Requirements
2 changes: 1 addition & 1 deletion asciidoc/day2/fleet-os-upgrade.adoc
@@ -87,7 +87,7 @@ Once the OS upgrade process finishes, the corresponding node will be `rebooted`

Below you can find a diagram of the above description:

image::fleet-day2-{cluster-type}-os-upgrade.png[]
image::fleet-day2-{cluster-type}-os-upgrade.png[scaledwidth=100%]

[#{cluster-type}-day2-fleet-os-upgrade-requirements]
=== Requirements
6 changes: 3 additions & 3 deletions asciidoc/edge-book/welcome.adoc
@@ -36,7 +36,7 @@ SUSE Edge is comprised of both existing SUSE and Rancher components along with a

==== Management Cluster

image::suse-edge-management-cluster.svg[]
image::suse-edge-management-cluster.svg[scaledwidth=100%]

* *Management*: This is the centralized part of SUSE Edge that is used to manage the provisioning and lifecycle of connected downstream clusters. The management cluster typically includes the following components:
** Multi-cluster management with <<components-rancher,Rancher Prime>>, enabling a common dashboard for downstream cluster onboarding and ongoing lifecycle management of infrastructure and applications, also providing comprehensive tenant isolation and `IDP` (Identity Provider) integrations, a large marketplace of third-party integrations and extensions, and a vendor-neutral API.
@@ -49,7 +49,7 @@ image::suse-edge-management-cluster.svg[]

==== Downstream Clusters

image::suse-edge-downstream-cluster.svg[]
image::suse-edge-downstream-cluster.svg[scaledwidth=100%]

* *Downstream*: This is the distributed part of SUSE Edge that is used to run the user workloads at the Edge, i.e. the software that is running at the edge location itself, and is typically comprised of the following components:
** A choice of Kubernetes distributions, with secure and lightweight distributions like <<components-k3s,K3s>> and <<components-rke2,RKE2>> (`RKE2` is hardened, certified and optimized for usage in government and regulated industries).
@@ -60,7 +60,7 @@ image::suse-edge-downstream-cluster.svg[]

=== Connectivity

image::suse-edge-connected-architecture.svg[]
image::suse-edge-connected-architecture.svg[scaledwidth=100%]

The above image provides a high-level architectural overview for *connected* downstream clusters and their attachment to the management cluster. The management cluster can be deployed on a wide variety of underlying infrastructure platforms, in both on-premises and cloud capacities, depending on networking availability between the downstream clusters and the target management cluster. The only requirement for this to function is that the API and callback URLs are accessible over the network that connects downstream cluster nodes to the management infrastructure.

2 changes: 1 addition & 1 deletion asciidoc/guides/air-gapped-eib-deployments.adoc
@@ -400,7 +400,7 @@ replicaset.apps/system-upgrade-controller-56696956b 1 1 1

And when we go to `\https://192.168.100.50.sslip.io` and log in with the `adminadminadmin` password that we set earlier, we are greeted with the Rancher dashboard:

image::air-gapped-rancher.png[]
image::air-gapped-rancher.png[scaledwidth=100%]

== SUSE Security Installation [[suse-security-install]]

2 changes: 1 addition & 1 deletion asciidoc/guides/clusterclass.adoc
@@ -32,7 +32,7 @@ The implementation of ClusterClass yields several key advantages that address th
* Improved Scalability and Automation Capabilities
* Declarative Management and Robust Version Control

image::clusterclass.png[]
image::clusterclass.png[scaledwidth=100%]



2 changes: 1 addition & 1 deletion asciidoc/integrations/nvidia-slemicro.adoc
@@ -46,7 +46,7 @@ We recommend that you ensure that the driver version that you are selecting is c
====
To find the NVIDIA open-driver versions, either run `zypper se -s nvidia-open-driver` on the target machine _or_ search the SUSE Customer Center for the "nvidia-open-driver" in {link-nvidia-open-driver}[SUSE Linux Micro {version-operatingsystem} for {x86-64}].

image::scc-packages-nvidia.png[SUSE Customer Centre]
image::scc-packages-nvidia.png[SUSE Customer Centre,scaledwidth=100%]
====

When you have confirmed that an equivalent version is available in the NVIDIA repos, you are ready to install the packages on the host operating system. For this, we need to open up a `transactional-update` session, which creates a new read/write snapshot of the underlying operating system so we can make changes to the immutable platform (for further instructions on `transactional-update`, see {link-micro-transactional-updates}[here]):
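A sketch of the session this paragraph introduces (the search command is the one given in the text above; `transactional-update shell` opens the read/write snapshot):

[source,shell]
----
# Confirm which open-driver versions are available in the repositories
zypper se -s nvidia-open-driver

# Open a transactional-update session: a new read/write snapshot of the
# otherwise immutable system; changes take effect on the next reboot
transactional-update shell
----
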
8 changes: 4 additions & 4 deletions asciidoc/product/atip-architecture.adoc
@@ -22,7 +22,7 @@ This page explains the architecture and components used in SUSE Telco Cloud.

The following diagram shows the high-level architecture of SUSE Telco Cloud:

image::product-atip-architecture1.png[]
image::product-atip-architecture1.png[scaledwidth=100%]


=== Components
@@ -54,7 +54,7 @@ Directed network provisioning is the workflow that enables the deployment of a n

Use the <<components-eib,Edge Image Builder>> to create a new `ISO` image with the management stack included. You can then use this `ISO` image to install a new management cluster on VMs or bare-metal.

image::product-atip-architecture2.png[]
image::product-atip-architecture2.png[scaledwidth=100%]

NOTE: For more information about how to deploy a new management cluster, see the <<atip-management-cluster,SUSE Telco Cloud Management Cluster guide>>.

@@ -67,7 +67,7 @@ Once we have the management cluster up and running, we can use it to deploy a si

The following diagram shows the high-level workflow to deploy it:

image::product-atip-architecture3.png[]
image::product-atip-architecture3.png[scaledwidth=100%]

NOTE: For more information about how to deploy a downstream cluster, see the <<atip-automated-provisioning,SUSE Telco Cloud Automated Provisioning guide>>.

@@ -79,7 +79,7 @@ Once we have the management cluster up and running, we can use it to deploy a hi

The following diagram shows the high-level workflow to deploy it:

image::product-atip-architecture4.png[]
image::product-atip-architecture4.png[scaledwidth=100%]

NOTE: For more information about how to deploy a downstream cluster, see the <<atip-automated-provisioning,SUSE Telco Cloud Automated Provisioning guide>>.

4 changes: 2 additions & 2 deletions asciidoc/product/atip-automated-provision.adoc
@@ -608,7 +608,7 @@ This is the simplest way to automate the provisioning of a downstream cluster.

The following diagram shows the workflow used to automate the provisioning of a single-node downstream cluster using directed network provisioning:

image::atip-automated-singlenode1.png[]
image::atip-automated-singlenode1.png[scaledwidth=100%]

There are two different steps to automate the provisioning of a single-node downstream cluster using directed network provisioning:

@@ -919,7 +919,7 @@ This is the simplest way to automate the provisioning of a downstream cluster. T

The following diagram shows the workflow used to automate the provisioning of a multi-node downstream cluster using directed network provisioning:

image::atip-automate-multinode1.png[]
image::atip-automate-multinode1.png[scaledwidth=100%]

1. Enroll the three bare-metal hosts to make them available for the provisioning process.
2. Provision the three bare-metal hosts to install and configure the operating system and the Kubernetes cluster using `MetalLB`.
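Enrolling a host (step 1 above) amounts to registering a `BareMetalHost` resource together with its BMC credentials. A minimal sketch, assuming the `metal3.io/v1alpha1` schema (names, MAC address, and BMC endpoint are illustrative):

[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: node-1-bmc-credentials        # illustrative name
type: Opaque
data:
  username: YWRtaW4=                  # base64-encoded, illustrative
  password: cGFzc3dvcmQ=              # base64-encoded, illustrative
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: node-1                        # illustrative name
spec:
  online: true
  bootMACAddress: "00:11:22:33:44:55" # illustrative MAC address
  bmc:
    address: redfish-virtualmedia://192.168.100.10/redfish/v1/Systems/1  # illustrative endpoint
    credentialsName: node-1-bmc-credentials
----
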
2 changes: 1 addition & 1 deletion asciidoc/product/atip-management-cluster.adoc
@@ -40,7 +40,7 @@ For more information about `Metal^3^`, see: <<components-metal3,Metal^3^>>

The following steps are necessary to set up the management cluster (using a single node):

image::product-atip-mgmtcluster1.png[]
image::product-atip-mgmtcluster1.png[scaledwidth=100%]

The following are the main steps to set up the management cluster using a declarative approach:

2 changes: 1 addition & 1 deletion asciidoc/product/atip-requirements.adoc
@@ -47,7 +47,7 @@ The hardware requirements for SUSE Telco Cloud are as follows:

As a reference for the network architecture, the following diagram shows a typical network architecture for a Telco environment:

image::product-atip-requirements1.svg[]
image::product-atip-requirements1.svg[scaledwidth=100%]

The network architecture is based on the following components:

20 changes: 10 additions & 10 deletions asciidoc/quickstart/elemental.adoc
@@ -19,7 +19,7 @@ This approach can be useful in scenarios where the devices that you want to cont

== High-level architecture

image::quickstart-elemental-architecture.svg[]
image::quickstart-elemental-architecture.svg[scaledwidth=100%]

== Resources needed

@@ -124,23 +124,23 @@ helm install -n cattle-elemental-system \

. To use the Elemental UI, log in to your Rancher instance, click the three-line menu in the upper left:
+
image::installing-elemental-extension-1.png[Installing Elemental extension1]
image::installing-elemental-extension-1.png[Installing Elemental extension 1,scaledwidth=100%]
+
. From the "Available" tab on this page, click "Install" on the Elemental card:
+
image::installing-elemental-extension-2.png[Installing Elemental extension 2]
image::installing-elemental-extension-2.png[Installing Elemental extension 2,scaledwidth=100%]
+
. Confirm that you want to install the extension:
+
image::installing-elemental-extension-3.png[Installing Elemental extension 3]
image::installing-elemental-extension-3.png[Installing Elemental extension 3,scaledwidth=100%]
+
. After it installs, you will be prompted to reload the page.
+
image::installing-elemental-extension-4.png[Installing Elemental extension 4]
image::installing-elemental-extension-4.png[Installing Elemental extension 4,scaledwidth=100%]
+
. Once you reload, you can access the Elemental extension through the "OS Management" global app.
+
image::accessing-elemental-extension.png[Accessing Elemental extension]
image::accessing-elemental-extension.png[Accessing Elemental extension,scaledwidth=100%]

== Configure Elemental [[configure-elemental]]

@@ -192,25 +192,25 @@ UI Extension::
+
. From the OS Management extension, click "Create Registration Endpoint":
+
image::click-create-registration.png[Click Create Registration]
image::click-create-registration.png[Click Create Registration,scaledwidth=100%]
+
. Give this configuration a name.
+
image::create-registration-name.png[Add Name]
image::create-registration-name.png[Add Name,scaledwidth=100%]
+
[NOTE]
====
You can ignore the Cloud Configuration field, as the data here is overridden in the following steps by Edge Image Builder.
====
. Next, scroll down and click "Add Label" for each label you want to be on the resource that gets created when a machine registers. This is useful for distinguishing machines.
+
image::create-registration-labels.png[Add Labels]
image::create-registration-labels.png[Add Labels,scaledwidth=100%]
+
. Click "Create" to save the configuration.

. Once the registration is created, you should see the Registration URL listed and can click "Copy" to copy the address:
+
image::get-registration-url.png[Copy URL]
image::get-registration-url.png[Copy URL,scaledwidth=100%]
+
[TIP]
====
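The registration endpoint created in the UI steps above corresponds to a `MachineRegistration` resource. A minimal sketch, assuming the `elemental.cattle.io/v1beta1` schema (name and labels are illustrative):

[source,yaml]
----
apiVersion: elemental.cattle.io/v1beta1
kind: MachineRegistration
metadata:
  name: my-nodes                  # illustrative name
  namespace: fleet-default
spec:
  machineInventoryLabels:         # the labels added via "Add Label" above
    location: lab-1               # illustrative label for distinguishing machines
----
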
2 changes: 1 addition & 1 deletion asciidoc/quickstart/metal3.adoc
@@ -32,7 +32,7 @@ cluster bare-metal servers, including automated inspection, cleaning and provisi

== High-level architecture

image::quickstart-metal3-architecture.svg[]
image::quickstart-metal3-architecture.svg[scaledwidth=100%]

== Prerequisites

2 changes: 1 addition & 1 deletion asciidoc/tips/elemental.adoc
@@ -128,7 +128,7 @@ This can be mitigated by one of the following approaches:

_Example with UTM on MacOS_

image::tpm.png[TPM]
image::tpm.png[TPM,scaledwidth=100%]

* Emulate TPM by using a negative value for the TPM seed in the `MachineRegistration` resource
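
A sketch of how this might look in the `MachineRegistration` spec, assuming Elemental's kebab-case registration config keys (treat the field names as an assumption to verify against your Elemental version):

[source,yaml]
----
spec:
  config:
    elemental:
      registration:
        emulate-tpm: true
        emulated-tpm-seed: -1   # negative value for the TPM seed, as described above
----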
