Dell EMC PowerEdge MX SmartFabric Configuration and Troubleshooting Guide
Abstract
This document provides the steps for configuring and troubleshooting the
Dell EMC PowerEdge MX networking switches in SmartFabric mode. It
includes examples for ethernet connections to Dell EMC Networking,
Cisco Nexus, and Fibre Channel networks.
September 2019
The information in this publication is provided “as is.” Dell Inc. makes no representations or warranties of any kind with respect to the information in this
publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.
Use, copying, and distribution of any software described in this publication requires an applicable software license.
© 2019 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other
trademarks may be trademarks of their respective owners.
Dell believes the information in this document is accurate as of its publication date. The information is subject to change without notice.
The Dell EMC PowerEdge MX is a unified, high-performance data center infrastructure. PowerEdge MX
provides the agility, resiliency, and efficiency to optimize a wide variety of traditional and new, emerging data
center workloads and applications. With its kinetic architecture and agile management, PowerEdge MX
dynamically configures compute, storage, and fabric, increases team effectiveness, and accelerates
operations. The responsive design delivers the innovation and longevity that customers need for their IT and
digital business transformations.
As part of the PowerEdge MX platform, Dell EMC SmartFabric OS10 includes SmartFabric Services.
SmartFabric Services is a network automation and orchestration solution that is fully integrated with the MX
Platform.
This document provides information about SmartFabric OS10 SmartFabric Services running on the
PowerEdge MX platform. It also provides examples for the deployment of two PowerEdge MX7000
chassis and the setup and configuration of SmartFabric Services, and demonstrates connectivity with
different upstream switch options, including Dell EMC Networking, Cisco Nexus, and Fibre Channel
networks.
Note: The examples in this document assume that the MX7000 chassis are configured in a Multi-Chassis
Management group and that no errors have been found. Additionally, this guide assumes the reader has a basic
understanding of the PowerEdge MX platform.
Scalable Fabric – Exclusive to the MX7000 platform, this architecture comprises the Dell EMC
Networking MX9116n Fabric Switching Engine (FSE) and the Dell EMC Networking MX7116n Fabric
Expander Module (FEM), allowing a fabric to span up to ten MX7000 chassis. This creates a single network
fabric enabling efficient east/west traffic flows between participating chassis. Scalable Fabric is supported in
both SmartFabric and Full Switch modes.
SmartFabric mode - SmartFabric mode leverages SmartFabric Services (see below) to create a Layer 2
network leveraging one to ten MX7000 chassis. Switches operating in SmartFabric mode are administered
through the OpenManage Enterprise - Modular (OME-M) GUI interfaces that provide complete lifecycle
management of the network fabric.
Full Switch mode – When operating in Full Switch mode, the switch can perform any functionality supported
by the version of SmartFabric OS10 running on the switch. Most of the configuration is performed using the
CLI, not the OME-M GUI.
SmartFabric Services (SFS) – In PowerEdge MX, SFS technology provides the underlying network
automation and orchestration to support all automated network operations. SFS is the underlying technology
for all Dell EMC SmartFabric OS10 automation efforts including PowerEdge MX, Isilon back-end storage
networking, VxRail network automation, and so on.
Table 1 outlines what this document is and is not. Also, this guide assumes a basic understanding of the
PowerEdge MX platform.
Dell EMC PowerEdge MX SmartFabric Configuration and Troubleshooting Guide - is/is not

This guide is:
• A reference for the most used features of the SmartFabric operating mode
• A secondary reference to the Release Notes

This guide is not/does not:
• A guide for all features of the MX7000 platform
• Take precedence over the Release Notes
Note: For a general overview and details of PowerEdge MX networking concepts, see the Dell EMC PowerEdge
MX Network Architecture Guide.
Bold Monospace Text – Commands entered at the CLI prompt, or used to highlight information in CLI
output
1.2 Attachments
This document in .pdf format includes one or more file attachments. To access attachments in Adobe Acrobat
Reader, click the paper clip icon in the left pane to open the Attachments panel, then double-click the attachment.
• Full Switch mode (Default) – All switch-specific SmartFabric OS10 capabilities are available
• SmartFabric mode – Switches operate as a Layer 2 I/O aggregation fabric and are managed
through the OpenManage Enterprise - Modular (OME-M) console
The following SmartFabric OS10 CLI commands have been added specifically for the PowerEdge MX
platform:
Note: For more information, see the SmartFabric OS10 User Guide for PowerEdge MX I/O Modules on the
Support for Dell EMC Networking MX9116n - Manuals and documents and Support for Dell EMC Networking
MX5108- Manuals and documents web pages.
Full Switch mode is typically used when a desired feature or function is not available when operating in
SmartFabric mode. For more information about Dell EMC SmartFabric OS10 operations, see Dell EMC
Networking OS Info Hub.
In the PowerEdge M1000e and FX2 platforms, I/O Aggregation (IOA) was implemented to simplify the
process to connect blade servers to upstream networks, so server administrators and generalists could
manage uplinks, downlinks, and VLAN assignments without needing to be fluent with the CLI.
1. I/O Aggregation
• Plug-and-play fabric deployment
• Single interface to manage all switches in the fabric
2. Lifecycle management
4. Failure remediation
• Dynamically adjusts bandwidth across all inter-switch links in the event of a link failure
• Automatically detects fabric misconfigurations or link-level failure conditions
• Automatically heals the fabric on failure condition removal
Note: In SmartFabric mode, MX series switches operate entirely as a Layer 2 network fabric. Layer 3 protocols
are not supported.
When operating in SmartFabric mode, the CLI is restricted to SmartFabric OS10 show commands and the
following subset of CLI configuration commands:
Table 2 outlines the differences between the two operating modes and applies to both the MX9116n FSE and
the MX5108n switches.
clock
fc alias
fc zone
fc zoneset
hostname
host-description
interface
ip nameserver
ip ssh server
ip telnet server
login concurrent-session
login statistics
logging
management route
ntp
snmp-server
tacacs-server
username
spanning-tree
vlan
Full Switch mode: All switch interfaces are assigned to VLAN 1 by default and are in the same Layer 2
bridge domain.
SmartFabric mode: Layer 2 bridging is disabled by default. Interfaces must join a bridge domain (VLAN)
before being able to forward frames.

Full Switch mode: All configuration changes are saved in the running configuration by default. To display
the current configuration, use the show running-configuration command.
SmartFabric mode: Verify configuration changes using feature-specific show commands, such as show
interface and show vlan, instead of show running-configuration.
By default, a switch is in Full Switch mode. When that switch is added to a fabric, it automatically changes to
SmartFabric mode. When you change from Full Switch to SmartFabric mode, all Full Switch CLI
configurations are deleted except for the subset of CLI commands supported in SmartFabric mode.
Note: There is no CLI command to switch between operating modes. Delete the fabric to change from
SmartFabric to Full Switch mode.
The CLI command show switch-operating-mode displays the currently configured operating mode
of the switch. This information is also available on the switch landing page in the OME-M GUI.
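For example (the output below is representative of a switch operating in SmartFabric mode):

OS10# show switch-operating-mode
Switch-Operating-Mode : Smart Fabric Mode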
Note: If the servers in the chassis have dual-port NICs, only QSFP28-DD port 1 on the FEM needs to be
connected. Do not connect QSFP28-DD port 2.
To verify the auto-discovered Fabric Expander Modules, enter the show discovered-expanders
command.
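Representative output is shown below; the service tag, chassis, and port-group values are placeholders for
illustration:

OS10# show discovered-expanders
Service   Model         Type   Chassis      Chassis-slot   Port-group
tag                            service-tag
----------------------------------------------------------------------
D10DXC2   MX7116n FEM   1      SKY003Q      A1             1/1/1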
If the FSE is in SmartFabric mode, the attached FEM is automatically configured: a virtual slot ID is
assigned, and virtual ports on the Fabric Expander Module are created and mapped to the corresponding
8x25GbE breakout interfaces on the Fabric Switching Engine.
An FSE in Full Switch mode automatically discovers the FEM when these conditions are met:
• The FEM is connected to the FSE by attaching a cable between the QSFP28-DD ports on both
devices
• The QSFP28-DD port-group interface connecting to the FEM is in 8x25GbE FEM mode
• At least one blade server is inserted into the MX7000 chassis containing the FEM
Note: If the FSE is in Full Switch mode, you must manually configure the unit ID of the FEM. See the OS10
Enterprise Edition User Guide — PowerEdge MX I/O Modules for implementation.
Once the FSE discovers the FEM, it creates virtual ports by mapping each 8x25GbE FEM breakout interface
in port groups 1 to 10 to a FEM virtual port. Table 3 shows an example of this mapping.
When a QSFP28-DD port group is mapped to a FEM, in the show interface status output, the eight
interfaces display dormant instead of up until a virtual port starts to transmit server traffic:
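For example (truncated, representative output; the port names shown are placeholders for the FEM-facing
breakout interfaces):

OS10# show interface status
Port            Description  Status    Speed   Duplex   Mode   Vlan   Tagged-Vlans
Eth 1/1/17:1                 dormant
Eth 1/1/17:2                 dormant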
You can also use the show interface command to display the Fabric Engine physical port-to-Fabric
Expander virtual port mapping, and the operational status of the line:
Note: If you move a FEM by cabling it to a different QSFP28-DD port on the Fabric Engine, all software
configurations on virtual ports are maintained. Only the QSFP28-DD breakout interfaces that map to the virtual
ports change.
With the critical need for high availability in modern data centers and enterprise networks, VLT plays a vital
role, providing rapid convergence, seamless traffic flow, efficient load balancing, and loop-free capabilities.
With instantaneous synchronization of MAC and ARP entries, both nodes remain active-active and continue
to forward data traffic seamlessly.
For more information on VLT, see the Virtual Link Trunking chapter in the OS10 Enterprise Edition User
Guide - PowerEdge MX I/O Modules and Virtual Link Trunking (VLT) in Dell EMC OS10 Enterprise Edition
Best Practices and Deployment Guide.
Configuring FC connectivity in SmartFabric mode is simple and is almost identical across the three
connectivity types.
[Figure: FC connectivity topology - spine switches and FC switches providing FC SAN A and FC SAN B paths to storage controllers A and B]
This example demonstrates Fibre Channel directly attaching to the Dell EMC Unity 500F storage array.
MX9116n FSE universal ports 44:1 and 44:2 are required for FC connections and operate in F_port mode,
which allows for an FC storage array to be connected directly to the MX9116n FSE. The uplink type enables
F_port functionality on the MX9116n unified ports, converting FCoE traffic to native FC traffic and passing that
traffic to a directly attached FC storage array.
[Figure: Fibre Channel (F_Port) direct connect network to Dell EMC Unity - FC SAN A and FC SAN B across MX7000 chassis 1 and 2]
When operating in FSB mode, the switch snoops FIP packets on FCoE-enabled VLANs and discovers the
following information:
Using the discovered information, the switch installs ACL entries that provide security and point-to-point link
emulation.
[Figure: FCoE (FSB) topology - MX5108n switches in FSB mode (Leaf 1 and Leaf 2) connect through VLT to S4148U-ON ToR switches in NPG mode, carrying FCoE SAN A/B to FC switches, FC SAN A/B, and storage controllers A and B]
Table 4 lists the network types and related settings. The QoS group is the numerical value for the queues
available in SmartFabric mode. Available queues include 2 through 5. Queues 1, 6, and 7 are reserved.
Note: In SmartFabric mode, an administrator cannot change the default weights for the queues.
General Purpose (Platinum) – Used for extremely high priority data traffic – QoS group 5
2.8.1 Templates
A template is a set of system configuration settings referred to as attributes. A template may contain a small
set of attributes for a specific purpose, or all the attributes for a full system configuration. Templates allow for
multiple servers to be configured quickly and automatically without the risk of human error.
Networks (VLANs) are assigned to NICs as part of the server template. When the template is deployed, those
networks are programmed on the fabric for the servers associated with the template.
Note: Network assignment through template only functions for servers connected to a SmartFabric. If a template
with network assignments is deployed to a server connected to a switch in Full Switch mode, the network
assignments are ignored.
• Most frequently, templates are created by getting the current system configuration from a server that
has been configured to the exact specifications required (referred to as a “Reference Server”).
• Templates may be cloned or copied and edited.
• A template can be created by importing a Server Configuration Profile (SCP) file. The SCP file may
be from a server or exported by OpenManage Essentials, OpenManage Enterprise, or OME-M.
• OME-M comes prepopulated with several templates for specific purposes.
Devices come with unique manufacturer-assigned identity values preinstalled, such as a factory-assigned
MAC address. Those identities are fixed and never change. However, devices can assume a set of alternate
identity values, called a “virtual identity”. A virtual identity functions on the network using that identity, as if the
virtual identity was its factory-installed identity. The use of virtual identity is the basis for stateless operations.
OME-M console uses identity pools to manage the set of values that can be used as virtual identities for
discovered devices. It controls the assignment of virtual identity values, selecting values for individual
deployments from pre-defined ranges of possible values. This allows the customer to control the set of values
which can be used for identities. The customer doesn’t have to enter all needed identity values with every
deployment request, or remember which values have or have not been used. Identity pools make
configuration deployment and migration much easier to manage.
Identity pools are used in conjunction with template deployment and profile operations. They provide sets of
values that can be used for virtual identity attributes for deployment. After a template is created, an identity
pool may be associated with it. Doing this directs the identity pool to get identity values whenever the
template is deployed to a target device. The same identity pool can be associated with, or used by, any
number of templates. Only one identity pool can be associated with a template.
Each template will have specific virtual identity needs, based on its configuration. For example, one template
may have iSCSI configured, so it needs the appropriate virtual identities for iSCSI operations. Another
template may not have iSCSI configured, but may have FCoE configured, so it will need virtual identities for
FCoE operations but not for iSCSI operations, etc.
2.8.3 Deployment
Deployment is the process of applying a full or partial system configuration on a specific target device. In
OME-M, templates are the basis for all deployments. Templates contain the system configuration attributes
that get sent to the target server, then the iDRAC on the target device applies the attributes contained in the
template and reboots the server if necessary. Often, templates contain virtual identity attributes. As mentioned
above, identity attributes must have unique values on the network. Identity Pools facilitate the assignment and
management of unique virtual identities.
Note: SmartFabric mode can be enabled on a single chassis having two MX9116n FSEs or two MX5108n
switches. For a SmartFabric implemented using a single chassis, creating an MCM group is not mandatory but
recommended. The chassis must be in an MCM group for a SmartFabric containing more than one MX chassis.
A minimum of one physical uplink from each MX switch to each upstream switch is required, and the
uplinks must be connected in a “mesh” or “bowtie” design.
Note: The upstream switch ports must be in a single LACP LAG as shown in the figure below. Creating multiple
LAGs within a single uplink results in a network loop.
Note: Dell EMC recommends using RSTP instead of RPVST+ when more than 64 VLANs are required in a fabric
to avoid performance problems.
Use caution when connecting an RPVST+ environment to an existing RSTP environment. RPVST+ creates a
spanning tree topology per VLAN and uses the default VLAN, typically VLAN 1, for the Common Spanning
Tree (CST) with RSTP.
For non-native VLANs, all bridge protocol data unit (BPDU) traffic is tagged and forwarded by the upstream,
RSTP-enabled switch on the associated VLAN. These BPDUs use a protocol-specific multicast address.
Any other RPVST+ tree attached to the RSTP tree might process these packets, potentially leading to
unexpected spanning tree topologies.
Note: When connecting to an existing environment that is not using RPVST+, Dell EMC recommends changing
the MX switches to the existing spanning tree protocol before connecting a SmartFabric OS10 switch. This
ensures that the same type of spanning tree is run on the SmartFabric OS10 MX switches and the upstream
switches.
To switch from RPVST+ to RSTP, use the spanning-tree mode rstp command:
To validate the STP configuration, use the show spanning-tree brief command:
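A minimal sequence combining both steps is shown below; the first line of the show output is representative:

OS10# configure terminal
OS10(config)# spanning-tree mode rstp
OS10(config)# end
OS10# show spanning-tree brief
Spanning tree enabled protocol rstp with force-version rstp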
Note: STP is required. Operating a SmartFabric with STP disabled creates network loops and is not supported.
For example, the QSFP28 interfaces that belong to port groups 13, 14, 15, and 16 on MX9116n FSE are
typically used for uplink connections. By default, the ports are set to 1x100GbE. The QSFP28 interface
supports the following Ethernet breakout configurations:
The MX9116n FSE also supports Fibre Channel (FC) capabilities via Universal Ports on port-groups 15 and
16. For more information on configuring FC storage on the MX9116n FSE, see Section 10.3 and 10.4.
For more information on interface breakouts, see OS10 User Guide - PowerEdge MX I/O Modules Release
10.4.0E(R3S).
Note: The cabling shown in this section, Section 3.5, is the VLTi connections between the MX switches.
For the MX5108n, ports 9 and 10 are used. Port 10 operates at 40GbE instead of 100GbE because all VLTi
links must run at the same speed.
Note: The VLTi ports are not user selectable and the connection topology is enforced by the SmartFabric engine.
To see the status of VLT backup link, run show vlt domain-id backup-link.
For example:
OS10# show vlt 255 backup-link
VLT Backup Link
------------------------
Destination : fde1:53ba:e9a0:de14:2204:fff:fe00:a267
Peer Heartbeat status : Up
Heartbeat interval : 30
Heartbeat timeout : 90
Destination VRF : default
• Switch dependent: Also referred to as LACP, 802.3ad, or Dynamic Link Aggregation, this teaming
method uses the LACP protocol to understand the teaming topology. This teaming method provides
Active-Active teaming and requires the switch to support LACP teaming.
• Switch independent: This method uses the operating system and NIC device drivers on the server
to team the NICs. Each NIC vendor may provide slightly different implementations with different pros
and cons.
NIC Partitioning (NPAR) can impact how NIC teaming operates. Based on restrictions implemented by the
NIC vendors related to NIC partitioning, certain configurations will preclude certain types of teaming.
The following restrictions are in place for both Full Switch and SmartFabric modes:
• If NPAR is NOT in use, both Switch Dependent (LACP) and Switch Independent teaming methods
are supported
• If NPAR IS in use, only Switch Independent teaming methods are supported. Switch Dependent
teaming is NOT supported
If Switch Dependent (LACP) teaming is used, the following restrictions are in place:
• The iDRAC shared LAN on motherboard (LOM) feature can only be used if the “Failover” option on
the iDRAC is enabled
• If the host OS is Windows, the LACP timer MUST be set to “slow” (also referred to as “normal”)
Refer to the network adapter or operating system documentation for detailed NIC teaming instructions.
Note: If using VMware ESXi and LACP, it is recommended to use VMware ESXi 6.7.0 Update 2.
Table 6 shows the options that the MX platform provides for NIC teaming.
1. If the MTU is not explicitly set on an interface, the MTU is 9216 bytes.
2. If the MTU has been explicitly set on an interface, the MTU is the specified value.
3. If an FCoE VLAN is assigned to an interface, the MTU is set to 2500 bytes, even if the MTU had
been manually set to a different value before the FCoE VLAN was assigned. It is recommended
that you set the MTU back to 9216 bytes after the FCoE VLAN has been assigned.
1. Interconnecting switches in Slots A1/A2 with switches in Slots B1/B2 regardless of chassis is not
supported.
2. When operating with multiple chassis, switches in Slots A1/A2 or Slots B1/B2 in one chassis must be
interconnected only with other Slots A1/A2 or Slots B1/B2 switches respectively. Connecting switches
that reside in Slots A1/A2 in one chassis with switches in Slots B1/B2 in another is not supported.
3. Uplinks must be symmetrical. If one switch in a SmartFabric has two uplinks, the other switch must have
two uplinks of the same speed.
4. You cannot have a pair of switches in SmartFabric mode uplink to another pair of switches in SmartFabric
mode. A SmartFabric can uplink to a pair of switches in Full Switch mode.
5. VLANs 4001 to 4020 are reserved for internal switch communication and must not be assigned to an
interface.
• All MX7000 chassis and management modules are cabled correctly and in a Multi-Chassis
Management group.
• The VLTi cables between switches have been connected.
• OpenManage Enterprise - Modular is at version 1.10.00 and OS10 is at version 10.5.0.1.
Note: All server, network, and chassis hardware has been updated to the latest firmware. See Appendix D for the
minimum recommended firmware versions.
• For Management Module cabling, see Dell EMC PowerEdge MX Networking Architecture Guide.
• For VLTi cabling of different IOM placements, see Figure 8, Figure 9, and Figure 10.
For information on cabling the MX chassis to the upstream switches, see the example topologies in Section
10 in this document.
For further information on cabling PowerEdge MX in general, see Dell EMC PowerEdge MX Networking
Architecture Guide.
Note: VLAN 1 is created as the default VLAN when the first fabric is created.
To define VLANs using the OME-M console, perform the following steps:
Figure 13 shows VLAN 1 and VLAN 10 after being created using the steps above.
A standard Ethernet uplink carries assigned VLANs on all physical uplinks. When implementing FCoE, traffic
for SAN path A and SAN path B must be kept separate. The storage arrays have two separate controllers,
creating two paths, SAN path A and SAN path B, connected to the MX9116n FSEs. For storage traffic to be
redundant, two separate VLANs are created for that traffic.
Using the same process as above, create two additional VLANs for FCoE traffic.
VLAN attributes: Name | Description | Network Type | VLAN ID | SAN
Note: To create VLANs for FCoE, from the Network Type list, select Storage – FCoE, and then click Finish.
VLANs to be used for FCoE must be configured as the Storage – FCoE network type.
Note: From the Summary window a list of the physical cabling requirements can be printed.
The SmartFabric deploys. This process can take several minutes to complete. During this time, all related
switches reboot, and the operating mode changes to SmartFabric mode.
Note: After the fabric is created, the fabric health will be critical until at least one uplink is created.
Figure 16 shows the new SmartFabric object and some basic information about the fabric.
To configure the Ethernet breakout on port groups using OME-M Console, perform the following steps:
6. Choose Configure Breakout. In the Configure Breakout dialog box, select HardwareDefault.
7. Click Finish.
8. Once the job is completed, choose Configure Breakout. In the Configure Breakout dialog box,
select the required Breakout Type. In this example, the Breakout Type for port-group1/1/13 is
selected as 1x40GE. Click Finish.
9. Configure the remaining breakout types on additional uplink port groups as needed.
After initial deployment, the new fabric shows an Uplink Count of zero and displays a warning icon. The lack
of a fabric uplink results in a failed health check. To create the uplink, follow these steps:
7. Click Finish.
At this point, SmartFabric creates the uplink object, and the status for the fabric changes to OK.
On the MX9116n FSE, port-group 1/1/15 and 1/1/16 are universal ports capable of connecting to FC devices
at a variety of speeds depending on the optic being used. In this example, we are configuring the universal
port speed as 4x16G FC. To enable FC capabilities, perform the following steps on each MX9116n FSE.
Note: See SmartFabric mode - MX Port-Group Configuration Errors video for more information on configuration
errors.
5. Click the port-group 1/1/16 check box, then click Configure breakout.
6. In Configure breakout panel, select HardwareDefault as the breakout type.
7. Click Finish.
8. To set the port group 1/1/16 to 4X16GFC, select the port-group 1/1/16 check box, then click
Configure breakout.
9. In Configure breakout panel, select 4X16GFC as the breakout type.
10. Click Finish.
Note: When enabled, Fibre Channel ports are set administratively down by default. Select the ports and click
the Toggle Admin State button, then click Finish to administratively bring the ports up.
Note: The steps in this section allow you to connect to an existing FC switch via NPG mode, or directly attach a
FC storage array. The uplink type is the only setting within the MX chassis that distinguishes between the two
configurations.
For the examples shown in Sections 10.3 and 10.4, the uplink attributes are defined below.
Uplink attributes: Uplink Name | Description | Ports | VLAN (Tagged)
Note: Do not assign the same FCoE VLAN to both switches. They must be kept separate.
• Scenario 1: SmartFabric deployment with Dell EMC PowerSwitch Z9100-ON upstream switches
• Scenario 2: SmartFabric connected to Cisco Nexus 3232C switches
• Scenario 3: Connect MX9116n FSE to Fibre Channel Storage – NPIV Proxy gateway mode
• Scenario 4: Connect MX9116n FSE to Fibre Channel Storage – FC Direct attach mode
• Scenario 5: Connect MX5108n to Fibre Channel Storage - FSB mode
• Scenario 6: Configure Boot from SAN
Note: iDRAC steps in this section may vary depending on hardware, software and browser versions used. See
the Installation and Service Manual for your PowerEdge server for instructions on connecting to the iDRAC.
From the link, select your server, then Manuals and documents.
Reset the CNAs to their factory defaults using the steps in this section. Resetting CNAs to factory default is
only necessary if the CNAs installed have been modified from their factory default settings.
1. From the OME-M console, select the server to use to access the storage.
2. Launch the server Virtual Console.
3. From the Virtual Console, select Next Boot then BIOS Setup.
4. Reboot the server.
5. From the System Setup Main Menu, select Device Settings.
6. From the Device Settings page, select the first CNA port.
7. From the Main Configuration page, click the Default button.
8. Click Yes to load the default settings, and then click OK.
9. Click Finish. Note whether a message indicates that a reboot is required for changes to take effect.
10. Click Yes to save changes, then click OK.
11. Repeat the steps in this section for each CNA port listed on the Device Settings page.
If required per step 9, reboot the system and return to System Setup to configure NIC partitioning.
If the system is already in System Setup from the previous section, skip to step 4.
1. Using a web browser, connect to the iDRAC server and launch the Virtual Console.
2. From the Virtual Console, click Next Boot menu then select BIOS Setup.
3. Select the option to reboot the server.
4. On the System Setup Main Menu, select Device Settings.
5. Select the first CNA port.
6. Select Device Level Configuration.
7. Set the Virtualization Mode to NPAR, if not already set, and then click Back.
8. Select NIC Partitioning Configuration, then Partition 1 Configuration, and set the NIC +
RDMA Mode to Disabled.
9. Click Back.
10. Select Partition 2 Configuration and set the NIC Mode to Disabled.
11. Set the FCoE Mode to Enabled, then click Back.
12. If present, select Partition 3 Configuration and set all modes to Disabled, then click Back.
13. If present, select Partition 4 Configuration and set all modes to Disabled, then click Back.
14. Click Back, and then Finish.
15. When prompted to save changes, click Yes and then click OK in the Success window.
Before creating the template, select a server to be the reference server and configure the hardware to the
exact settings required for the implementation.
Note: In SmartFabric mode, you must use a template to deploy a server and to configure networking.
A job starts and the new server template displays on the list. When complete, the Completed successfully
status displays.
1. From the Deploy pane, select the template to be associated with VLANs. In this example, MX740c
with Intel mezzanine server template is selected.
2. Click Edit Network.
3. In the Edit Network window, complete the following:
• Optionally, from the Identity Pool list, choose the desired identity pool. In this example,
Ethernet ID Pool is selected.
• Optionally, choose the desired NIC teaming option.
1. From the Deploy pane, select the template to be associated with VLANs. In this example, MX740c
with FCoE CNA server template is selected.
2. Click Edit Network.
3. In the Edit Network window, complete the following:
• To choose FCoE VLANs, from the Identity Pool list, choose Ethernet CNA.
• For NIC in Mezzanine 1A Port 1, from the Tagged Network list, choose FC A1.
• For NIC in Mezzanine 1A Port 2, from the Tagged Network list, choose FC A2.
• Click Finish.
1. From the Deploy pane, select the template to be deployed. In this example, MX740c with FCoE CNA
server template is selected.
2. Click Deploy Template.
3. In the Deploy Template window, complete the following:
• Click the Select button to choose which slots or compute sleds to deploy the template to.
• Select the Do not forcefully reboot the host OS option.
• Click Next. Choose Run Now. Click Finish.
The interfaces on the switch are updated automatically. SmartFabric configures each interface with an
untagged VLAN and any tagged VLANs. Additionally, SmartFabric deploys associated QoS settings. See
Section 2.7 for more information.
To monitor the deployment progress, go to Monitor > Jobs > Select Job > View Details. This shows the
progress of the server template deployment.
SmartFabric
Fabric components include uplinks, switches, servers, and ISL links. Uplinks connect the MX9116n
switches to the upstream switches. In this example, the uplink is named Uplink1.
Uplinks
Note: Fabric Expander Modules are transparent and therefore do not appear on the Fabric Details page.
Switches
Servers are the compute sleds that are part of the fabric. In this example, two PowerEdge MX740c compute
sleds are part of the fabric.
Servers
ISL links are the VLT interconnects between the two switches. The ISL links must be connected on port
groups 11 and 12 on MX9116n switches, and on ports 9 and 10 on MX5108n switches. This is a
requirement; failure to connect the defined ports results in a fabric validation error.
• Uplinks
• Switches
• Servers
• ISL Links
Editing the fabric discussed in this section includes editing the fabric name and description.
To edit the name of the fabric that was created, follow the steps below:
Note: The uplink type cannot be modified once the fabric is created. If the uplink type needs to be changed after
the fabric is created, delete the uplink and create a new uplink with the desired uplink type.
6. Click Next.
Note: Care should be taken to modify the uplink ports on both MX switches. Select the IOM to display the
respective uplink switch ports.
5. Choose the desired server. In this example, the PowerEdge MX740c with service tag 8XQP0T2 is
selected.
6. Choose Edit Networks.
7. Choose the NIC teaming option from LACP, No Teaming, and Other.
8. Modify the VLAN selections as required by defining the tagged and untagged VLANs.
9. Select VLANs on Tagged and Untagged Network for each Mezzanine card port.
10. Click Save.
Modify VLANs
Note: At this time, only one server can be selected at a time in the GUI.
Server preparation is the same as described in Section 5. Determine the FC WWPNs for the compute
sleds and storage array as discussed in Appendix B.
sleds and storage array as discussed in Appendix B.
These examples assume that the storage array has been successfully connected to the MX9116n FSE’s FC
uplinks and there are no errors.
Below are examples of the steps and commands to configure FC Zoning. For more information on Dell EMC
SmartFabric OS10 Fibre Channel capabilities and commands, see the Dell EMC SmartFabric OS10 User
Guide.
The WWNs for the servers are obtained via OME-M console.
Server and storage adapter WWPNs, or their aliases, are combined into zones to allow communication
between devices in the same zone. Dell EMC recommends single-initiator zoning. In other words, no more
than one server HBA port per zone. For high availability, each server HBA port should be zoned to at least
one port from SP A and one port from SP B. In this example, one zone is created for each server HBA port.
The zone contains the server port and the two storage processor ports connected to the same MX9116n FSE.
A zone set is a collection of zones. A zone set named zoneset1 is created on each switch, and the zones are
added to it.
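The commands below are a representative sketch of this workflow on MX9116n-A1, combining zone creation,
zone set creation, and activation; the same workflow is repeated on MX9116n-A2 with the WWPNs seen by
that switch. The server WWPN and the vfabric ID are placeholders for illustration, while the storage WWPNs
are the SP A 0 and SP B 0 ports from Table 9; the configuration-mode prompts are abbreviated:

OS10# configure terminal
OS10(config)# fc zone zone1
OS10(conf-fc-zone)# member wwn 20:01:00:0e:1e:c2:cd:ec
OS10(conf-fc-zone)# member wwn 50:06:01:66:47:e0:1b:19
OS10(conf-fc-zone)# member wwn 50:06:01:6e:47:e0:1b:19
OS10(conf-fc-zone)# exit
OS10(config)# fc zoneset zoneset1
OS10(conf-fc-zoneset)# member zone1
OS10(conf-fc-zoneset)# exit
OS10(config)# vfabric 1
OS10(conf-vfabric)# zoneset activate zoneset1
OS10(conf-vfabric)# end
OS10# write memory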
To connect a non-MX device to a switch running in SmartFabric mode, perform the following steps:
1. Open the OME-M console. To configure breakout on the port group, refer to Section 4.4.
2. Once the breakout on the port group is done, select the port. Make sure the port is not in use for
any other purpose.
3. Click Edit VLANs, select Default VLAN 1 as the Untagged Network as shown in the example below,
and choose any other VLAN for the Tagged Network.
4. Click Finish.
5. Repeat these steps for any other port or IOM.
• Identify the faulty IOM to replace and disconnect the cables connected to it.
• Insert the new IOM in the same slot as the failed IOM. The IOM should be in SmartFabric mode and
must be up and running.
• The model of the new IOM must be the same as that of the failed IOM.
• The new IOM should have the same version of SmartFabric OS10 as the old IOM.
• Confirm that the new IOM has been recognized by OME-M before proceeding further.
admin@MX9116N-A1:/opt/dell/os10/bin$sfs_master_details.py
• If you run the command on the master IOM, the output is I am the Master. If you run the
command on a member IOM, it returns the IPv6 address of the master IOM.
For Example:
admin@MX9116N-A1:/opt/dell/os10/bin$sfs_master_details.py
I am the Master
admin@MX9116N-A1:/opt/dell/os10/bin$sfs_master_details.py
Master IP Address: fde1:53ba:e9a0:de14:2204:fff:fe01:cd90
• To log in to the master IOM from a member IOM, run the ssh admin@<master IP address>
command. You can find the service tag by running the show license status command on the IOM.
The system displays the following error when the IOM is not part of the SmartFabric:
Password:
No Fabric found for specified nodes. Please recheck and issue this command
again.
If the IOM is part of the fabric, the module replacement process is initiated.
For Example:
Node replacement work-flow is initiated, the node C23RPK2 will reboot into
Fabric mode.
• Chassis information
• Recent Alerts
• Recent Activity
• IOM Subsystems
The Blink LED drop down button provides an option to turn on or turn off the ID LED on the IOM. To turn on
the ID LED, choose:
This activates a blinking blue LED and provides easy identification. To turn off the blinking ID LED, choose:
• FRU
• Device Management Info
• Installed Software
• Port Information
Hardware Tab
Port Information
Firmware Tab
• Acknowledge
• Unacknowledged
• Ignore
• Export
• Delete
Alerts Tab
• Network
• Management
• Monitoring
• Advanced Settings
Settings Tab
The Network option includes configuring IPv4, IPv6, DNS Server and Management VLAN settings.
Network Settings
Note: Although the GUI lists the field name as Root Password, it denotes the linuxadmin password. To log on
to the CLI of the MX switch, use the default credentials: username admin and password admin.
Management Settings
Monitoring Settings
The Advanced Settings tab offers the option for time configuration replication and alert replication. Select the
Replicate Time Configuration from Chassis check box to replicate the time settings configured in the
chassis to the IOM. Select the Replicate Alert Destination Configuration from Chassis check box to
replicate the alert destination settings configured in the chassis to the IOM.
Note: In SmartFabric mode, do not configure interfaces using the CLI. Use the OME-M GUI instead.
1. From the switch management page, choose Hardware > Port Information.
2. To configure MTU, select the port listed under the respective port-group.
3. Click Configure MTU. Enter MTU size in bytes.
4. Click Finish.
Configure MTU
5. To configure auto negotiation, select the port listed under the respective port group. Click Toggle
AutoNeg. This toggles the port's auto negotiation between Disabled and Enabled. Click Finish.
6. To configure the administrative state (shut/no shut) of a port, select the port listed under the
respective port group. Click Toggle Admin State. This toggles the port's administrative state
between Disabled and Enabled.
7. Click Finish.
Note: If an IOM is in SmartFabric mode, all switches that are part of the fabric will be updated. Do not select both
switches in the fabric to be updated.
If an IOM is in Full Switch mode, the firmware upgrade is completed only on the specific IOMs selected.
To upgrade the IOMs that are part of a fabric, follow the steps below:
2. Once the file is uploaded, select the check box next to the file and click Next.
3. Select Update Now and then click Finish.
The firmware upgrade job can be monitored by navigating to Monitor > Jobs > Select Job > View Details.
SmartFabric cabling
The Overview tab shows the current inventory, including switches, servers, and interconnects between the
MX9116n FSEs in the fabric. Figure 63 shows the SmartFabric switch in a healthy state. Figure 64 shows the
participating servers in a healthy state.
Figure 65 shows the Topology tab and the VLTi created by the SmartFabric mode.
Figure 66 displays the wiring diagram table from the Topology tab.
Figure 67 shows ethernet 1/1/1, 1/1/3, 1/71/1, and 1/72/1 in the correct operational status (Up). These
interfaces correspond to the MX740c compute sleds in slots 1 and 2 in both chassis. The figure also shows
the VLT connection (port channel 1000) and the uplinks (port channel 1) to the Z9100-ON leaf switches.
The iDRAC MAC address can be verified by selecting iDRAC Settings > Overview > Current Network
Settings from the iDRAC GUI of a compute sled. An example is shown as follows:
Subsequently, viewing the LLDP neighbors shows the iDRAC MAC address in addition to the NIC MAC
address of the respective mezzanine card.
In the example deployment validation of LLDP neighbors, Ethernet1/1/1, ethernet 1/1/3, and
ethernet 1/1/71-1/1/72 represent the two MX740c sleds in one chassis. The first entry is the iDRAC
for the compute sled. The iDRAC uses connectivity to the mezzanine card to advertise LLDP information. The
second entry is the mezzanine card itself.
Ethernet 1/71/1 and ethernet 1/71/2 represent the MX740c compute sleds connected to the
MX7116n FEM in the other chassis.
Interfaces ethernet1/1/37-1/1/40 are the VLTi interfaces for the SmartFabric. Lastly,
ethernet1/1/41-1/1/42 are the links in a port channel connected to the Z9100-ON leaf switches.
For cases where breakout of port groups is required, the breakout must be configured after the SmartFabric
creation and before adding the uplinks.
You may encounter the following errors if the recommended order of steps is not followed:
• Configuration of the breakout requires you to create the SmartFabric first. When attempting to
configure breakout before creating a SmartFabric, the following error displays:
• Configuration of the breakout requires you to select the HardwareDefault breakout type first. If the
breakout type is directly selected without first selecting HardwareDefault, the following error displays:
• Once the uplinks are added, they are most often associated with tagged or untagged VLANs. When
attempting to configure the breakout on the uplink port-groups after adding uplinks associated with
VLANs to the fabric, the following error displays:
• The following example shows that STP on the upstream Cisco Nexus 3232C switches is configured
to run MST:
• The recommended course of action is to change the STP type to RPVST+ on the upstream Cisco
Nexus switches.
• Another course of action is to change the spanning tree type on the MX switches operating in
SmartFabric mode to match the STP type on the upstream switches. This can be done using the
SmartFabric OS10 CLI. The available STP types are as follows:
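RSTP, RPVST+, and MST are the spanning tree types supported by SmartFabric OS10; the help listing and
mode change below are representative:

OS10(config)# spanning-tree mode ?
mst     Multiple spanning tree mode
rpvst   Rapid per-VLAN spanning tree mode
rstp    Rapid spanning tree mode
OS10(config)# spanning-tree mode mst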
The following example shows a mismatch of the VLT domain IDs on VLT peer switches. To resolve this issue,
ensure that a single VLT domain is used across the VLT peers.
The following example shows a mismatch of the vPC domain IDs on vPC peer switches. To resolve this
issue, ensure that a single vPC domain is used across the vPC peers.
• If there is no link indicated on the FSE port, toggle the auto-negotiation settings for that port.
• Ensure that the compute sled is properly seated in the compute slot in the MX chassis.
• Make sure that the compute sled is turned on.
The OME-M console is used to disable or enable auto negotiation on MX switch ports. The following steps
illustrate disabling auto negotiation on ports 41 and 42 of an MX9116n.
The OME-M console can also be used to disable or enable the ports on MX switches. The following steps
illustrate setting the administrative state on ports 41 and 42 of an MX9116n.
For example, if auto negotiation was disabled on the Cisco Nexus upstream switches, the setting can be
turned back on. To enable auto negotiation on an Ethernet interface on Cisco Nexus switches, follow the
steps below:
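A minimal sketch of those steps follows, using interface ethernet 1/2 to match the verification output below
(the prompt name is a placeholder):

switch# configure terminal
switch(config)# interface ethernet 1/2
switch(config-if)# negotiate auto
switch(config-if)# end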
The following example shows interface ethernet 1/2 that has auto negotiation enabled on the interface:
Checking interface 1 reveals that the ports are not receiving the LACP PDUs as shown in the following
example:
Note: On Dell EMC PowerSwitches, use the show interface status command to view the interfaces and
associated status information. Use show interface ethernet <interface-number> to view the
interface details.
In this example, the errors listed above occurred because an uplink was not created on the fabric.
The resolution is to add the uplinks and verify that the fabric turns healthy.
• Ensure that the firmware and drivers are up to date on the CNAs.
• Check the storage guide to ensure that the CNAs are supported by the storage used in the
deployment. For the qualified support matrix, see the Dell EMC E-Lab Navigator and the Dell EMC
Storage Compatibility Matrix for SC Series, PS Series, and FS Series.
• Verify that port group breakout mode is appropriately configured.
• Ensure that the FC interfaces broken out on the unified ports of the MX9116n switches are brought
administratively up once the ports are changed from Ethernet to FC.
• MX9116n switches operating in SmartFabric mode support various commands to verify the
configuration. Use the following commands to verify FC configurations from MX9116n CLI:
OS10# show fc
alias      Show FC alias
ns         Show FC NS switch parameters
statistics Show FC statistics
switch     Show FC switch parameters
zone       Show FC zone
zoneset    Show FC zoneset
Use the following commands to verify FCoE configurations from MX9116n CLI:
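A representative set of FCoE verification commands is listed below; show vfabric and show fcoe
sessions are described in the following paragraphs:

OS10# show vfabric
OS10# show fcoe sessions
OS10# show fcoe enode
OS10# show fcoe fcf
OS10# show fcoe system
OS10# show fcoe vlan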
The show vfabric command output provides a variety of information including the default zone mode, the
active zone set, and interfaces that are members of the vfabric.
The show fcoe sessions command shows active FCoE sessions. The output includes MAC addresses,
Ethernet interfaces, the FCoE VLAN ID, FC IDs, and WWPNs of logged-in CNAs.
Note: Due to the width of the command output, each line of output is shown on two lines below.
Note: For more information on FC and FCoE, see the Dell EMC SmartFabric OS10 User Guide.
The show smartfabric personality command is used on a node to view the configured SmartFabric
Services personality. The possible values are PowerEdge MX, Isilon, VxRail, and L3 fabric.
----------------------------------------------------------
CLUSTER DOMAIN ID : 50
VIP : fde1:53ba:e9a0:de14:0:5eff:fe00:150
ROLE : MASTER
SERVICE-TAG : CBJXLN2
----------------------------------------------------------
The show smartfabric details command is used to see all configured fabric details. The output shows
which nodes are part of the fabric, the status of the fabric, and the design type associated with the fabric.
[Figure: Scalable Fabric VLTi cabling - two MX9116n FSEs connected by a VLTi, with MX7116n FEMs in the other chassis]
Note: See Dell EMC PowerEdge MX Network Architecture Guide for more information on QSFP28-DD cables.
Note: The MX IOMs run Rapid Per-VLAN Spanning Tree Plus (RPVST+) by default. RPVST+ runs RSTP on each
VLAN while RSTP runs a single instance of spanning tree across the default VLAN. The Dell EMC PowerSwitch
Z9100-ON used in this example runs SmartFabric OS10 and has RPVST+ enabled by default. See Spanning
Tree Protocol recommendations for more information.
Use the following commands to set the hostname, and to configure the OOB management interface and
default gateway.
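A minimal sketch for Z9100-ON Leaf 1 follows; the hostname, OOB management address, and default
gateway are example values, and Leaf 2 is configured the same way with its own address:

OS10# configure terminal
OS10(config)# hostname Z9100-Leaf1
Z9100-Leaf1(config)# interface mgmt1/1/1
Z9100-Leaf1(conf-if-ma-1/1/1)# no ip address dhcp
Z9100-Leaf1(conf-if-ma-1/1/1)# ip address 100.67.169.35/24
Z9100-Leaf1(conf-if-ma-1/1/1)# no shutdown
Z9100-Leaf1(conf-if-ma-1/1/1)# exit
Z9100-Leaf1(config)# management route 0.0.0.0/0 100.67.169.254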
Note: Use spanning-tree {vlan vlan-id priority priority-value} command to set the bridge
priority for the upstream switches. The bridge priority ranges from 0 to 61440 in increments of 4096. For example,
to make Z9100-ON Leaf 1 as the root bridge for VLAN 10, enter the command spanning-tree vlan 10
priority 4096.
Configure the VLT between switches using the following commands. VLT configuration involves setting a
discovery interface range and discovering the VLT peer in the VLTi.
Z9100-ON Leaf 1:
vlt-domain 1
 backup destination 100.67.162.34
 discovery-interface ethernet1/1/29-1/1/31

Z9100-ON Leaf 2:
vlt-domain 1
 backup destination 100.67.169.35
 discovery-interface ethernet1/1/29-1/1/31
Configure the port channels that connect to the downstream switches. The LACP protocol is used to create
the dynamic LAG. Trunk ports allow tagged VLANs to traverse the trunk link. In this example, the trunk is
configured to allow VLAN 10.
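A representative sketch follows, entered on both leaf switches; VLAN 10 matches this example, while the
port channel number and member interfaces are placeholders:

OS10(config)# interface vlan10
OS10(conf-if-vl-10)# no shutdown
OS10(conf-if-vl-10)# exit
OS10(config)# interface port-channel1
OS10(conf-if-po-1)# switchport mode trunk
OS10(conf-if-po-1)# switchport trunk allowed vlan 10
OS10(conf-if-po-1)# vlt-port-channel 1
OS10(conf-if-po-1)# no shutdown
OS10(conf-if-po-1)# exit
OS10(config)# interface range ethernet 1/1/1-1/1/2
OS10(conf-range-eth1/1/1-1/1/2)# channel-group 1 mode active
OS10(conf-range-eth1/1/1-1/1/2)# no shutdown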
Finally, on both switches:

end
write memory
Interface
Name Role PortID Prio Cost Sts Cost Link-type Edge
--------------------------------------------------------------------------------
port-channel1 Root 128.2517 128 50 FWD 0 AUTO No
VLAN 10
Executing IEEE compatible Spanning Tree Protocol
Root ID Priority 32778, Address 4c76.25e8.e840
Root Bridge hello time 2, max age 20, forward delay 15
Bridge ID Priority 32778, Address 4c76.25e8.f2c0
Configured hello time 2, max age 20, forward delay 15
Flush Interval 200 centi-sec, Flush Invocations 5
Flush Indication threshold 0 (MAC flush optimization is disabled)
Interface Designated
Interface Designated
Name PortID Prio Cost Sts Cost Bridge ID PortID
--------------------------------------------------------------------------------
port-channel1 128.2517 128 50 FWD 1 32768 2004.0f00
Interface
Name Role PortID Prio Cost Sts Cost Link-type Edge
--------------------------------------------------------------------------------
[Figure: SmartFabric connected to Cisco Nexus 3232C switches - the two Nexus 3232C switches are joined by a vPC peer-link; the MX9116n FSEs are connected by a VLTi, with MX7116n FEMs in the other chassis]
Note: See Dell EMC PowerEdge MX Network Architecture Guide for more information on QSFP28-DD cables.
Note: While this configuration example is specific to the Cisco Nexus 3232C switch, the same concepts apply to
other Cisco Nexus and IOS switches.
The switches start at their factory default settings, as described in Appendix C.4.
Note: The MX IOMs run Rapid Per-VLAN Spanning Tree Plus (RPVST+) by default. Ensure the Cisco and Dell
switches are configured to use compatible STP protocols. The mode of STP on the Cisco switch can be set using
the spanning-tree mode command, which is shown below. See Spanning Tree Protocol recommendations for
more information. In this deployment example, the default VLAN is VLAN 1 and the created VLAN is VLAN 10.
See the Cisco Nexus 3000 Series NX-OS configuration guide for more details.
1. Set switch hostname, management IP address, enable features and spanning tree
2. Configure vPC between the switches
3. Configure the VLANs
4. Configure the downstream port channels to connect to the MX switches
Enter the following commands to set the hostname, enable required features, and enable RPVST spanning
tree mode. Configure the management interface and default gateway.
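A minimal sketch for the first Nexus 3232C follows; the hostname, management address, and gateway are
example values, and the second switch is configured the same way:

switch# configure terminal
switch(config)# hostname NX3232C-Leaf1
NX3232C-Leaf1(config)# feature vpc
NX3232C-Leaf1(config)# feature lacp
NX3232C-Leaf1(config)# spanning-tree mode rapid-pvst
NX3232C-Leaf1(config)# interface mgmt0
NX3232C-Leaf1(config-if)# ip address 100.67.162.100/24
NX3232C-Leaf1(config-if)# no shutdown
NX3232C-Leaf1(config-if)# exit
NX3232C-Leaf1(config)# vrf context management
NX3232C-Leaf1(config-vrf)# ip route 0.0.0.0/0 100.67.162.254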
Enter the following commands to create a virtual port channel (vPC) domain and assign the keepalive
destination to the peer switch management IP. Then create a port channel for the vPC peer link and assign
the appropriate switchport interfaces.
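A representative sketch follows; the vPC domain ID, keepalive destination, port channel number, and member
interfaces are placeholders:

NX3232C-Leaf1(config)# vpc domain 255
NX3232C-Leaf1(config-vpc-domain)# peer-keepalive destination 100.67.162.101
NX3232C-Leaf1(config-vpc-domain)# exit
NX3232C-Leaf1(config)# interface port-channel100
NX3232C-Leaf1(config-if)# switchport mode trunk
NX3232C-Leaf1(config-if)# vpc peer-link
NX3232C-Leaf1(config-if)# exit
NX3232C-Leaf1(config)# interface ethernet 1/31-32
NX3232C-Leaf1(config-if-range)# switchport mode trunk
NX3232C-Leaf1(config-if-range)# channel-group 100 mode active
NX3232C-Leaf1(config-if-range)# no shutdown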
Enter the following commands to configure the port channels to connect to the downstream MX9116n FSEs.
Then, exit configuration mode and save the configuration.
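A representative sketch for the vPC port channel facing the MX9116n FSEs follows; VLAN 10 matches this
example, while the port channel number and member interfaces are placeholders:

NX3232C-Leaf1(config)# vlan 10
NX3232C-Leaf1(config-vlan)# exit
NX3232C-Leaf1(config)# interface port-channel1
NX3232C-Leaf1(config-if)# switchport mode trunk
NX3232C-Leaf1(config-if)# switchport trunk allowed vlan 10
NX3232C-Leaf1(config-if)# vpc 1
NX3232C-Leaf1(config-if)# exit
NX3232C-Leaf1(config)# interface ethernet 1/1-2
NX3232C-Leaf1(config-if-range)# switchport mode trunk
NX3232C-Leaf1(config-if-range)# switchport trunk allowed vlan 10
NX3232C-Leaf1(config-if-range)# channel-group 1 mode active
NX3232C-Leaf1(config-if-range)# no shutdown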
Finally, on both switches:

end
copy running-config startup-config
Trunk ports on switches allow tagged traffic to traverse the links. All flooded traffic for a VLAN is sent
across trunk ports to all switches, even if those switches do not carry the associated VLAN, consuming
network bandwidth with unnecessary traffic. VLAN (VTP) pruning is a feature that can be used to eliminate
this unnecessary traffic by pruning VLANs.
Pruning restricts flooded traffic to only those trunk ports with associated VLANs, optimizing the use of
network bandwidth. If the existing environment is configured for Cisco VTP or VLAN pruning, ensure that the
Cisco upstream switches are configured appropriately. See the Cisco Nexus 3000 Series NX-OS
configuration guides for additional information.
Note: Do not use switchport trunk allowed vlan all on the Cisco interfaces. VLANs must be explicitly
assigned to the interface.
[Figure: SmartFabric with FC NPIV Proxy Gateway - spine switches for Ethernet uplinks and existing FC switches providing FC SAN A and FC SAN B paths to storage controllers A and B]
This scenario shows attachment to a brownfield FC switch infrastructure. Configuration of the existing FC
switches is beyond the scope of this document.
Note: The MX5108n Ethernet Switch does not support this feature.
This example assumes that an existing SmartFabric has been created and is fully operational. For instructions
on creating a SmartFabric, see Section 4.3.
To configure NPG mode on an existing SmartFabric, the following steps are completed using the OME-M
console:
1. Connect the MX9116n FSE to the FC SAN. Note that the cables do NOT “criss-cross” between
the switches.
2. Define the FCoE VLANs to use in the fabric. For instructions on defining VLANs, see Section 4.2.1.
3. Create identity pools if desired. See Section 5.3 for more information on how to create identity
pools.
4. Configure the physical switch ports for FC operation. See Section 4.6 for instructions.
5. Create the FC Gateway uplinks. For instructions on creating uplinks, see Section 4.7.
6. Create and deploy the appropriate server templates to the compute sleds. See Sections 5.2 to 5.6
for more information.
Once the server operating system loads the FCoE driver, the WWN will appear on the fabric and on the FC
SAN. At that point, your system is now ready to connect to Fibre Channel storage. See Appendix B for setting
up storage logical unit numbers (LUNs).
Note: Due to the width of the command output, each line of output is shown on two lines below.
As you can see, the steps to configure NPG mode and FC Direct Attach mode on the MX9116n FSE differ
only in which type of uplink is selected.
[Figure: Fibre Channel (F_Port) direct connect network to Dell EMC Unity - FC SAN A and FC SAN B across MX7000 chassis 1 and 2]
This example shows directly attaching a Dell EMC Unity 500F storage array to the MX9116n FSE using
universal ports 44:1 and 44:2.
Note: The MX5108n Ethernet Switch does not support this feature.
This example assumes that an existing SmartFabric has been created and is fully operational. For instructions
on creating a SmartFabric, see Section 4.3.
To configure FC Direct Attach mode on an existing SmartFabric, the following steps are completed using the
OME-M console:
1. Connect the storage array to the MX9116n FSEs. Note that each storage controller is
connected to each MX9116n FSE. Define the FCoE VLANs to use in the fabric. For
instructions on defining VLANs, see Section 4.2.1.
2. Create Identity Pools if desired. See Section 5.3 for more information on how to create
identity pools.
3. Configure the physical switch ports for FC operation. See Section 4.6 for instructions.
4. Create the FC Direct Attached uplinks. See Section 4.7 on creating Uplinks.
5. Create and deploy the appropriate server templates to the compute sleds. See Section 5.2 to
5.6 for more information.
6. Configure zones and zonesets. See Section 6.5 for instructions.
Once the server operating system loads the FCoE driver, the WWN appears on the fabric and on the FC
SAN. At that point, your system is ready to connect to Fibre Channel storage. See Appendix B for setting up
storage logical unit numbers (LUNs).
Note: Due to the width of the command output, each line of output is shown on two lines below.
Dell EMC SmartFabric OS10 uses a FIP Snooping Bridge (FSB) to detect and manage FCoE traffic and
discovers the following information:
Using the discovered information, the switch installs ACL entries that provide security and point-to-point link
emulation to ensure that FCoE traffic is handled appropriately.
Note: The examples in this chapter use the Dell EMC Networking MX5108n. The same instructions may also be
applied and used with the MX9116n.
The FSB switch can connect to an upstream switch operating in NPG mode:
[Figure: FCoE (FSB) Network to Dell EMC Unity through NPG mode switch - MX5108n switches in FSB mode (Leaf 1 and Leaf 2) connect through VLT to S4148U-ON ToR switches in NPG mode, carrying FCoE SAN A/B to FC switches, FC SAN A/B, and storage controllers A and B]
[Figure: FCoE (FSB) Network to Dell EMC Unity through F_port mode switch - MX5108n switches in FSB mode (Leaf 1 and Leaf 2) in the MX7000 chassis carry FCoE SAN A/B to Unity 500F storage controllers A and B]
Note: See the Dell EMC SmartFabric OS10 Documentation for configuring NPG mode globally on the S4148U-
ON switches.
To configure FCoE mode on an existing SmartFabric, the following steps are completed using the
OME-M console:
1. Connect the MX switch to the S4148U-ON. Note that the cables do NOT “criss-cross” between
the switches.
2. Define FCoE VLANs to use in the fabric. For instructions, see Section 4.2.1 for defining VLANs.
3. Create Identity Pools if desired. See Section 5.3 for more information.
4. Create the FCoE uplinks. See Section 4.7 on creating Uplinks.
5. Create and deploy the appropriate server templates to the compute sleds. See Section 5.2 to 5.6 for
more information.
6. Configure the S4148U-ON switch. See Dell EMC Networking Fibre Channel Deployment with S4148U-
ON in F_port Mode for more information.
Once the server operating system loads the FCoE driver, the WWN displays on the fabric and on the FC
SAN. At that point, your system is now ready to connect to Fibre Channel storage. See Appendix B for setting
up storage logical unit numbers (LUNs).
Figure 87 shows the example topology used in this chapter to demonstrate Boot from SAN. The required
steps are provided to configure NIC partitioning, system BIOS, an FCoE LUN, and an OS install media device
required for Boot from SAN.
[Figure 87: Boot from SAN example topology - MX5108n switches in FSB mode (Leaf 1 and Leaf 2) connect through VLT to S4148U-ON ToR switches in NPG mode, carrying FCoE SAN A/B to FC switches, FC SAN A/B, and storage controllers A and B]
Note: See the OS10 User Guide document for configuring NPG mode globally on the S4148U-ON switches.
Note: This is only done on CNA ports that carry converged traffic. In this example, these are the two 25GbE
QLogic CNA ports on each server that attach to the switches internally through an orthogonal connection.
1. Connect to the server's iDRAC in a web browser and launch the virtual console.
2. In the virtual console, select BIOS Setup from the Next Boot menu.
3. Reboot the server.
16. If present, select Partition 3 Configuration in NIC Partitioning Configuration. Set all modes to
Disabled and then click Back.
17. If present, select Partition 4 Configuration in NIC Partitioning Configuration. Set all modes to
Disabled and then click Back.
18. Select FCoE Configuration.
19. Set the Virtual LAN ID (30 is used in this example).
20. Set Connect 1 to Enabled.
21. Set the World Wide Port Name Target 1 to the WWPN of the connected Unity port.
FCoE configuration
Note: As previously documented, this server configuration may be used to generate a template to deploy to other
servers with identical hardware. When a template is not used, repeat the steps in this chapter for each MX server
sled that requires access to the FC storage.
1. Connect to the server’s iDRAC in a web browser and launch the virtual console.
2. In the virtual console, from the Virtual Media menu, select Connect Virtual Media.
3. In the virtual console, from the Virtual Media menu, select Map CD/DVD.
4. Click Browse to find the location of the OS install media then click Map Device.
5. In the virtual console, from the Next Boot menu, select Lifecycle Controller.
6. Reboot the server.
4. Click Next.
5. Click the Manual Install check box, then click Next.
6. Click Next on the Insert OS Media screen.
For detailed information on the hardware components of the MX platform, see the Dell EMC
PowerEdge MX Networking Architecture Guide.
The Fibre Channel Ports page is displayed as shown in Figure 100. A zoomed-in view of the area inside the
red box is shown in Figure 101.
Two WWNs are listed for each port. The World Wide Node Name (WWNN), outlined in black, identifies this
Unity storage array (the node). The WWPNs, outlined in blue, identify the individual ports. Record the
WWPNs as shown in Table 9.
Storage processor  Port  WWPN
SP A               0     50:06:01:66:47:E0:1B:19
SP A               1     50:06:01:67:47:E0:1B:19
SP B               0     50:06:01:6E:47:E0:1B:19
SP B               1     50:06:01:6F:47:E0:1B:19
1. Connect to the first server's iDRAC in a web browser and log in.
2. Select System, then click Network Devices.
3. Click the CNA. In this example, it is NIC Mezzanine 1A. Under Ports and Partitioned Ports, the
FCoE partition for each port is displayed as shown in Figure 102:
4. The first FCoE partition is Port 1, Partition 2. Click the icon to view the MAC Addresses as
shown in Figure 103:
Note: A convenient method is to copy and paste these values into a text file.
The FCoE WWPNs and MAC addresses used in this deployment example are shown in Table 10.
Note: Additional hosts may be added as needed by clicking the icon from the Hosts tab.
The newly created LUN is now visible on the LUNs tab as shown in Figure 105. In this example, a LUN
named FC-80GB that is 80GB in size has been created.
LUN Created
Note: To modify host access at any time, check the box next to the LUN to select it. Click the pencil icon, and
select the Host Access tab.
At this point, the OME-M console removes the MCM group. To manage the chassis, use the individual IP
addresses assigned to each.
Optionally, after the reset process is complete, use the LCD screen to reassign a static IP address.
After the next reboot the switch loads with default configuration settings.
For example:
Connect to I/O Module 1 serial console:
racadm connect -m switch-1
To exit the IOM or server console and switch back to the racadm shell, use the command ~.
MX-series components
Qty Item Version
4 Dell EMC PowerEdge M9002m modules 1.10.00
4 Dell EMC PowerEdge MX740c compute sleds A01
OS10 Enterprise Edition User Guide for PowerEdge MX IO Modules Release 10.4.0E R3S
Dell EMC OME-M v1.00.01 for PowerEdge MX7000 Chassis User's Guide
Dell EMC Networking Layer 3 Leaf-Spine Deployment and Best Practices with OS10
Dell EMC encourages readers to provide feedback on the quality and usefulness of this publication by
sending an email to [email protected]