
Dell VxRail VP-760 and VS-760

Hardware Requirements and Specifications

December 2024
Rev. 6
Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2023 - 2024 Dell Inc. or its subsidiaries. All rights reserved. Dell Technologies, Dell, and other trademarks are trademarks of Dell Inc. or its
subsidiaries. Other trademarks may be trademarks of their respective owners.
Contents
Revision history..........................................................................................................................................................................5

Chapter 1: Introduction................................................................................................................. 6
Dell Technologies Support.................................................................................................................................................6
Register for a Dell Technologies Support account................................................................................................ 6
Support resources......................................................................................................................................................... 6
Use SolVe Online for VxRail procedures.................................................................................................................. 6
Locate your VxRail serial number.....................................................................................................................................7
Locate your VxRail serial number in VxRail Manager............................................................................................ 7
Locate your physical VxRail serial number............................................................................................................... 7
Access VxRail content using the QRL...................................................................................................................... 7

Chapter 2: Overview......................................................................................................................9
Front view of the system.................................................................................................................................................. 9
Rear view of the system................................................................................................................................................... 11
Inside the system............................................................................................................................................................... 14
System information labels................................................................................................................................................ 16
Rail sizing and rack compatibility matrix......................................................................................................................20

Chapter 3: Technical specifications............................................................................................. 21


Chassis physical design.....................................................................................................................................................21
Processor specifications..................................................................................................................................................22
PSU specifications............................................................................................................................................................ 22
Supported operating systems........................................................................................................................................ 23
Cooling fan specifications............................................................................................................................................... 23
System battery specifications........................................................................................................................................24
Expansion card riser specifications............................................................................................................................... 24
Memory specifications..................................................................................................................................................... 25
Storage controller specifications...................................................................................................................................25
Drives................................................................................................................................................................................... 26
Ports and connector specifications.............................................................................................................................. 26
Video specifications.......................................................................................................................................................... 27
Environmental specifications.......................................................................................................................................... 27
Thermal restriction matrix......................................................................................................................................... 29

Chapter 4: Initial setup and configuration....................................................................................37


Set up the system............................................................................................................................................................. 37
iDRAC configuration......................................................................................................................................................... 37
Options to set up iDRAC IP address....................................................................................................................... 38

Chapter 5: Pre-operating system management applications........................................................ 39


Manage the pre-operating system applications.........................................................................................................39
Set up the system....................................................................................................................................................... 39
Dell Lifecycle Controller.............................................................................................................................................56
Boot Manager...............................................................................................................................................................56

PXE boot........................................................................................................................................................................57

Chapter 6: Configuration information..........................................................................................58


Configuration validation...................................................................................................................................................58
Error messages............................................................................................................................................................ 59

Chapter 7: Component replacement guidelines............................................................................60


Use SolVe Online for VxRail procedures......................................................................................................................60
Supported hardware components.................................................................................................................................60
System memory guidelines.............................................................................................................................................. 61
General memory module installation guidelines....................................................................................................62
Expansion card installation guidelines.......................................................................................................................... 63
Drive backplane..................................................................................................................................................................76

Chapter 8: Jumpers and connectors............................................................................................ 78


System board jumpers and connectors........................................................................................................................78
System board jumper settings....................................................................................................................................... 80
Disable system and software password features......................................................................................................80

Chapter 9: System diagnostics and indicator codes..................................................................... 82


Status LED indicators.......................................................................................................................................................82
System health and system ID indicator codes........................................................................................................... 83
iDRAC Direct LED indicator codes................................................................................................................................ 84
iDRAC Quick Sync 2 indicator codes............................................................................................................................84
NIC indicator codes.......................................................................................................................................................... 85
PSU indicator codes......................................................................................................................................................... 85
Drive indicator codes........................................................................................................................................................86
Use system diagnostics....................................................................................................................................................87
Dell Embedded System Diagnostics........................................................................................................................88
System board diagnostic LED indicators............................................................................................................... 89
Enhanced Preboot System Assessment................................................................................................................ 89

Chapter 10: Additional support.................................................................................................... 90


Automated support with secure connect gateway...................................................................................................90

Revision history
Table 1. Revision history
Date Revision Description of change
December 2024 6 Removed support for 256 GB RDIMM with 5600 MT/s.
July 2024 5 Added features for VxRail 8.0.230 and 8.0.240, including support for VxRail VS-760.
April 2024 4 Minor updates.
December 2023 3 Added features for VxRail 8.0.120.
August 2023 2 Minor updates and corrections.
August 2023 1 Initial release.

1
Introduction
This document provides an overview of the system, diagnostic tools, and guidelines describing high-level operations.
The target audience for this document includes customers, field personnel, and partners who want to operate and maintain a
VxRail. This document is designed for people familiar with:
● Dell systems and software
● VMware virtualization products
● Data center appliances and infrastructure
For the most up-to-date list of VxRail documentation, see the VxRail Documentation Quick Reference List.

Dell Technologies Support


Create a Support account to access support resources for your VxRail. If you already have an account, register your VxRail to
access the available resources. You can link your Support account with VxRail Manager and access support resources without
having to log in separately.

Register for a Dell Technologies Support account


Create a Support account to obtain VxRail documentation and software updates.
If you already have an account, link your Support account with VxRail Manager and access resources without having to log in
separately.
After you register, you can:
● Access or download the SolVe Desktop application for customized procedures to replace hardware components and upgrade
software components.
● Link your Support account with VxRail Manager to access resources.
For information about how to access a Support account or to upgrade an existing account, see KB 21768.
1. Go to Dell Technologies Support.
2. Click Create an Account and follow the steps to create an account.
It may take approximately 48 hours to receive a confirmation of account creation.

Support resources
Support resources are available for your VxRail.
Use the following resources to obtain support for your VxRail:
● In the VMware vSphere Web Client, select VxRail. Use the Support functions on the VxRail Dashboard.
● Go to Dell Technologies Support.

Use SolVe Online for VxRail procedures


To avoid potential data loss, always use SolVe Online for VxRail to generate procedures before you replace any hardware
components or upgrade software.
CAUTION: If you do not use SolVe Online for VxRail to generate procedures to replace hardware components or
perform software upgrades, data loss may occur for VxRail.
You must have a Dell Technologies Support account to use SolVe Online for VxRail.

Locate your VxRail serial number
If you contact Dell Technologies Support for your VxRail, provide the VxRail serial number, also known as the Product Serial
Number Tag (PSNT).
Identify the VxRail serial number in VMware vSphere Web Client or locate the serial number that is printed on the physical
VxRail.

Locate your VxRail serial number in VxRail Manager


The PSNT is the VxRail serial number in VxRail Manager.
1. On the VMware vSphere Web Client, select the Inventory icon.
2. Select the VxRail cluster and click the Monitor tab.
3. Expand VxRail, and click Physical View to view the serial number.

Locate your physical VxRail serial number


Locate the serial number on your VxRail.
1. On the upper right corner of the VxRail chassis, locate the luggage tag.
2. Pull out the blue-tabbed luggage tag.
3. Locate the serial number label on the pull-out tag.
The Product Serial Number Tag (PSNT) is the 14-digit number that is on the front edge of the luggage tag.

Access VxRail content using the QRL


Use the Service Tag or QRL code on the Dell QRL site to access VxRail information for VxRail 15G and later models.
If your VxRail has a QRL that is added to the luggage tag, you can use this tag to obtain factory configuration and warranty
information. You can also enter the Service Tag to access information.
1. On the VxRail luggage tag, locate the QRL or Service Tag.

Figure 1. QRL code

2. Using the camera on your phone or laptop, scan the QRL code to access information that is specific to your VxRail. You can
also go to qrl.dell.com and enter the Service Tag information.
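The Service Tag can also be read programmatically. The following is a minimal sketch, not a Dell-provided tool, that queries the standard Redfish API exposed by iDRAC9 to retrieve the Service Tag so it can be entered at qrl.dell.com. The IP address is a placeholder, the credentials shown are the documented defaults, and the SKU field name follows common iDRAC9 Redfish usage.

import requests

def get_service_tag(idrac_ip, username, password):
    # Query the ComputerSystem resource; on iDRAC9 the "SKU" property carries the Service Tag.
    url = f"https://{idrac_ip}/redfish/v1/Systems/System.Embedded.1"
    response = requests.get(url, auth=(username, password), verify=False)  # verify certificates in production
    response.raise_for_status()
    return response.json().get("SKU", "")

if __name__ == "__main__":
    # Placeholder address; default credentials unless secure default access is used.
    print("Service Tag:", get_service_tag("192.168.0.120", "root", "calvin"))
    print("Enter the tag at qrl.dell.com to view configuration and warranty information.")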

2
Overview
The 16G VxRail platforms provide enhanced processor, memory, and PCIe capabilities.
The VxRail VP-760 includes All Flash, Hybrid, and All NVMe storage variants. It is a 2U server that supports:
● Up to two fourth-generation Intel Xeon scalable processors, with up to 56 cores
● Up to two fifth-generation Intel Xeon scalable processors with up to 64 cores
● Up to 16 DIMM slots per processor
● Two redundant AC or DC power supply units
● The following drive configurations:
○ Up to 24 x 2.5-inch drives.
○ Up to 24 x 2.5-inch drives with 4 x 2.5-inch (rear) SAS, SATA, or NVMe SSD drives.
● The following GPU configurations:
○ Up to six single-wide GPUs
○ Up to two double-wide GPUs
The VxRail VS-760 is a 2U server with Hybrid storage variant that supports:
● Up to two fourth-generation Intel Xeon scalable processors, with up to 56 cores
● Up to 16 DIMM slots per processor
● Two redundant AC or DC power supply units
● Up to 12 x 3.5-inch SAS HDD drives with 4 x 2.5-inch (rear) SAS or NVMe SSD drives.
Do not install GPUs, network cards, or other PCIe devices on your system that Dell Technologies has not tested or validated.
CAUTION: The use of unauthorized or unapproved hardware can damage your system and invalidate the system
warranty.
For information about how to hot swap an NVMe PCIe SSD U.2 device, see Dell Express Flash NVMe PCIe User's Guide.
In this document, drives indicates all instances of SAS, SATA, and NVMe drives unless specified otherwise.

Front view of the system


This section provides a description of the indicators, buttons, and connectivity options available on the front side of VxRail
VP-760 and VxRail VS-760.

Figure 2. Front view of VxRail VP-760 with 24 x 2.5-inch drive system

Figure 3. Front view of VxRail VS-760 with 12 x 3.5-inch drive system

Table 2. Features that are available on the front of the system


Item, ports/panels/slots, and description:
1. Left control panel: Includes the system health, system ID, and status LED indicators.
2. Drives: Allow you to install drives that are supported on your system. For drive slot numbers, see System information labels.
3. Right control panel: Includes the power button, VGA port, USB port, iDRAC Direct (Micro-AB USB) port, and the iDRAC Direct status LED.
4. Information tag: A slide-out label panel that contains the Service Tag, NIC, MAC address, and other system information. The Information tag also contains the iDRAC secure default password for systems that have secure default access to iDRAC.

Figure 4. Left control panel

Table 3. Indicators that are found on the left control panel


Item, indicator, and description:
1. Status LED indicators: Indicate the status of the system. For more information, see Status LED indicators.
2. System health and system ID indicator: Indicates the system health. For more information, see System health and system ID indicator codes.

For more information about the indicator codes, see System diagnostics and indicator codes.

Figure 5. Right control panel

Table 4. Overview of the buttons and ports on the right control panel
Item, port or button, and description:
1. Power button: Indicates whether the system is powered on or off. Press the power button to manually power the system on or off, or to gracefully shut down an ACPI-compliant operating system.
2. USB 2.0 port: A 4-pin, USB 2.0-compliant port that allows you to connect USB devices to the system.
3. iDRAC Direct (Micro-AB USB) port: Provides access to the iDRAC Direct features. For more information, see the Integrated Dell Remote Access Controller User's Guide. Use a USB to micro USB (type AB) cable to connect iDRAC Direct to your laptop or tablet.
NOTE: The cable length should not exceed 0.91 m (3 ft). Using a cable that exceeds 0.91 m (3 ft) may affect performance.
4. VGA port: Use this port to connect a display device to the system.

For more information about ports, panels, and slots, see Technical specifications.

Rear view of the system


This section provides an overview of the connectivity options available on the rear side of VxRail.

Figure 6. Rear view of VxRail VP-760

Table 5. Rear view
Item Ports, panels, or slots Icon Description
1 PCIe expansion card riser 1 N/A It allows you to connect PCI Express expansion cards.
(slot 1 and slot 2)
2 BOSS module N/A BOSS module is used for internal system boot.
3 PCIe expansion card riser 2 N/A The expansion card riser enables you to connect PCI Express
(slot 3 and slot 6) expansion cards.
4 PCIe expansion card riser 3 N/A It allows you to connect PCI Express expansion cards.
(slot 4 and slot 5)
5 VGA port Use this port to connect a display device to the system.

6 PCIe expansion card riser 4 N/A It allows you to connect PCI Express expansion cards.
(slot 7 and slot 8)
7 Power supply unit 2 It is the secondary PSU of the system.
(PSU2)
8 USB 2.0 port 4-pin, 2.0-compliant port to connect USB devices to the system.

9 USB 3.0 port 9-pin and 3.0-compliant port to connect USB devices to the system.

10 Dedicated iDRAC9 This port allows you to remotely access iDRAC. For more information,
Ethernet port see Integrated Dell Remote Access Controller User's Guide.
11 System Identification (ID) button: Press the button on the front or back of the system to identify a system in a rack, to reset the iDRAC, or to access the BIOS using the Step-through mode. When pressed, the System ID LED on the back panel blinks until either the front button is pressed or the rear button is pressed again. You can also press the button to switch between On and Off modes.

If the server stops responding during POST, press and hold the System ID button for more than five seconds to enter the BIOS progress mode.

To reset the iDRAC, press and hold the System ID button for more than 15 seconds. If this option is disabled, enable it by pressing F2 during the system boot process and entering the iDRAC Setup page.

12 OCP NIC card (optional) N/A The NIC ports are integrated on the OCP card which is connected to
the system board. The OCP NIC card supports OCP 3.0.
13 NIC ports (optional) The NIC ports that are integrated on the LOM card provide network
connectivity which is connected to the system board.
14 Power supply unit 1 (PSU1) PSU1 is the primary PSU of the system.

Figure 7. Rear view of VxRail VP-760 and VxRail VS-760 with 4 x 2.5-inch rear drive module

Table 6. Rear view of the system with 4 x 2.5-inch rear drive module
Item Ports, panels, or slots Icon Description
1 Rear drive module N/A It allows you to install the supported rear drives.
2 PCIe expansion card riser 2 N/A The expansion card riser enables you to connect PCI Express
(slot 3 and slot 6) expansion cards.
3 BOSS module N/A BOSS module is used for internal system boot.
4 VGA port It allows you to connect a display device to the system.

5 PCIe expansion card riser 4 N/A The expansion card riser enables you to connect PCI Express
(slot 7 and slot 8) expansion cards.
6 Power supply unit 2 (PSU2) PSU2 is the secondary PSU of the system.

7 USB 2.0 port 4-pin, 2.0-compliant port to connect USB devices to the system.

8 USB 3.0 port 9-pin and 3.0-compliant port to connect USB devices to the system.

9 Dedicated iDRAC9 Ethernet This port allows you to remotely access iDRAC. For more information,
port see Integrated Dell Remote Access Controller User's Guide.
10 System Identification (ID) Press the button on the front or back of the system to identify a
button system in a rack, to reset the iDRAC, or to access the BIOS using the
Step-through mode. When pressed, the System ID LED on the back
panel blinks until either the front button is pressed, or the rear button
is pressed again. You can also press the button to switch between
On or Off mode.

If the server stops responding during POST, press and hold the
System ID button for more than five seconds to enter the BIOS
progress mode.
To reset the iDRAC, press and hold the System ID button for more
than 15 seconds.
If this option is disabled, you can enable it by pressing F2 during the
system boot process, and entering the iDRAC Setup page.

11 OCP NIC card (optional) N/A The NIC ports are integrated on the OCP card which is connected to
the system board. The OCP NIC card supports OCP 3.0.
12 NIC ports (optional) The NIC ports that are integrated on the LOM card provide network
connectivity which is connected to the system board.
13 Power supply unit 1 (PSU1) PSU1 is the primary PSU of the system.

For more information about ports, panels, and slots, see Technical specifications.

Inside the system
This section provides an overview of the internal components of the system.

Figure 8. Components inside the system

Table 7. Description of the components inside the system


Item Description
1 Backplane
2 Rear mounting front PERC module
3 Cooling fans
4 Air shroud
5 Memory DIMM sockets
6 Expansion riser 4
7 Expansion riser 3
8 Intrusion switch module
9 Power supply unit (PSU2)
10 Power supply unit (PSU1)

Table 7. Description of the components inside the system (continued)
Item Description
11 Rear handle
12 Expansion riser 1
13 Expansion riser 2
14 System board
15 Cooling fan cage assembly
16 Backplane
17 Express Service Tag

Figure 9. Components inside the system with full-length risers and GPU shroud

Table 8. Description of the components inside the system with full-length risers and GPU shroud
Item Description
1 Backplane
2 Rear mounting front PERC module
3 Cooling fans

Table 8. Description of the components inside the system with full-length risers and GPU
shroud (continued)
Item Description
4 GPU air shroud
5 Expansion riser 4
6 Expansion riser 3
7 Intrusion switch module
8 Power supply unit (PSU2)
9 Power supply unit (PSU1)
10 Rear handle
11 Expansion riser 1
12 Expansion riser 2
13 System board
14 Cooling fan cage assembly
15 Backplane
16 Express Service Tag

System information labels


You can find the system information labels at the back of the system cover.

Figure 10. Memory information label

Figure 11. Electrical overview label

Figure 12. LED behavior label

Figure 13. Icon legend label

Figure 14. System tasks label

Figure 15. Heat sink label

Figure 16. BOSS-N1 label

Figure 17. Caution label

Figure 18. Express service tag label

Rail sizing and rack compatibility matrix


For specific information about the rail solutions compatible with your system, see Dell Enterprise Systems Rail Sizing and Rack
Compatibility Matrix.
The matrix provides the following information:
● Details about rail types and their functions
● Rail adjustability range for various types of rack mounting flanges
● Rail depth with and without cable management accessories
● Types of racks that are supported for various types of rack mounting flanges

3
Technical specifications
The technical and environmental specifications of your system are outlined in this section.

Chassis physical design


This section provides an overview of the chassis dimensions and weight limitations of VxRail VP-760 and VxRail VS-760.

Figure 19. Chassis dimensions

Table 9. Chassis dimensions


Xa: 482.0 mm (18.97 in)
Xb: 434.0 mm (17.08 in)
Y: 86.8 mm (3.41 in)
Za: 35.84 mm (1.41 in) with bezel; 22.0 mm (0.86 in) without bezel
Zb: 700.7 mm (27.58 in), ear to rear wall. Zb is the nominal rear wall external surface where the system board I/O connectors reside.
Zc: 736.29 mm (28.98 in), ear to PSU handle
Table 10. Maximum weight limitations
System configuration Maximum weight (with all drives/SSDs)
A server with fully populated drives 36.1 kg (79.58 lbs)
A server with no installed PSU or drives 25.1 kg (55.33 lbs)

Processor specifications
The VxRail VP-760 supports up to two fourth-generation or fifth-generation Intel Xeon scalable processors.
The VxRail VS-760 supports up to two fourth-generation Intel Xeon scalable processors.

PSU specifications
The VxRail VP-760 and VxRail VS-760 supports up to two AC or DC PSUs.

Table 11. PSU specifications


1100 W mixed mode, Titanium: heat dissipation 4100 BTU/hr maximum. AC input 100-240 VAC at 50/60 Hz: 1100 W at high line (200-240 VAC), 1050 W at low line (100-120 VAC), 12-6.3 A. DC input 240 VDC: 1100 W, 5.2 A.
1400 W mixed mode, Platinum: heat dissipation 5250 BTU/hr maximum. AC input 100-240 VAC at 50/60 Hz: 1400 W at high line, 1050 W at low line, 12-8 A. DC input 240 VDC: 1400 W, 6.6 A.
1800 W mixed mode, Titanium: heat dissipation 6750 BTU/hr maximum. AC input 200-240 VAC at 50/60 Hz: 1800 W at high line, low line not supported, 10 A. DC input 240 VDC: 1800 W, 8.2 A.
2400 W mixed mode, Platinum: heat dissipation 9000 BTU/hr maximum. AC input 100-240 VAC at 50/60 Hz: 2400 W at high line, 1400 W at low line, 16-13.5 A. DC input 240 VDC: 2400 W, 11.2 A.
2800 W mixed mode, Titanium: heat dissipation 10500 BTU/hr maximum. AC input 200-240 VAC at 50/60 Hz: 2800 W at high line, low line not supported, 15.6 A. DC input 240 VDC: 2800 W, 13.6 A.
1100 W LVDC: heat dissipation 4265 BTU/hr maximum. DC input -48 VDC to -60 VDC: 1100 W, 27 A.
Heat dissipation is calculated using the PSU wattage rating.


When selecting or upgrading the system configuration, to ensure optimum power utilization, verify the system power
consumption using the Enterprise Infrastructure Planning Tool.
NOTE: If AC 2400 W PSUs operate at low line 100-120 VAC, the power rating per PSU is degraded to 1400 W. If AC 1400
W or 1100 W PSUs operate at low line 100-120 VAC, the power rating per PSU is degraded to 1050 W.
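As a minimal sketch of the low-line derating rule in the note above (a hypothetical helper, not a Dell tool), the usable rating of an AC PSU can be looked up from its nominal wattage and line voltage:

# Low-line (100-120 VAC) ratings from the note above; 1800 W and 2800 W PSUs are high line only.
LOW_LINE_RATING = {2400: 1400, 1400: 1050, 1100: 1050}

def ac_psu_rating(nominal_watts, line_voltage):
    """Return the usable output rating, in watts, for one AC PSU at the given line voltage."""
    if 100 <= line_voltage <= 120:
        return LOW_LINE_RATING.get(nominal_watts, nominal_watts)
    return nominal_watts

print(ac_psu_rating(2400, 110))  # 1400: a 2400 W PSU degrades to 1400 W at low line
print(ac_psu_rating(2400, 230))  # 2400: full rating at high line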

Figure 20. PSU power cord connectors

Table 12. PSU power cables list


Form factor Output Power cord
Redundant 60 mm (2.36 in) 1100 W AC C13
1100 W -48 LVDC C13
1400 W AC C13
1800 W AC C15
Redundant 86 mm (3.39 in) 2400 W AC C19
2800 W AC C21

C19 power cable combined with C20 to C21 jumper power cable can be used to adapt a 2800 W PSU.
C13 power cable combined with C14 to C15 jumper power cable can be used to adapt an 1800 W PSU.

Supported operating systems


For information about the supported operating systems, see Server Operating Systems.

Cooling fan specifications


The VxRail VP-760 and VxRail VS-760 requires various cooling components that are based on processor TDP, storage modules,
rear drives, GPU, and persistent memory to maintain optimum thermal performance.
The VxRail VP-760 and VxRail VS-760 uses an air cooling option that supports up to six standard (STD), High-Performance Silver
(HPR) grade, or High-Performance Gold (VHP) grade cooling fans.

Table 13. Cooling fan specifications


Fan type, abbreviation, and label color:
Standard fans: STD, no label
High-Performance Silver fans: HPR, silver label
High-Performance Gold fans: VHP, gold label

See Thermal restriction matrix for required fan support with air-cooled configurations.

System battery specifications
The VxRail VP-760 and VxRail VS-760 uses one CR 2032 3.0 V Lithium coin-cell battery.

Expansion card riser specifications


The VxRail VP-760 and VxRail VS-760 supports up to eight PCI Express (PCIe) slots (six full-height slots and two low-profile
slots) on the system board.

Table 14. Supported expansion card slots on the system board

The table lists, for each slot, the supported card form factor with the regular air shroud and with the GPU shroud, and the PCIe link width provided by each riser configuration (R1P, R1Q, R1R, R2A, R3A, R3B, R4P, R4Q, R4R). Riser configurations not listed for a slot do not provide that slot.

Slot 1: Full height, half length (regular shroud); full height, full length (GPU shroud). R1Q: x8; R1R: x16 (Gen5).
Slot 2: Full height, half length (regular shroud); full height, full length (GPU shroud). R1P: x16 (Gen5, double-width GPU); R1Q: x8 (Gen5); R1R: x16 (Gen5).
Slot 3: Low profile, half length (both shrouds). R2A: x16.
Slot 4: Full height, half length (both shrouds). R3B: x8.
Slot 5: Full height, half length (regular shroud); full height, full length (GPU shroud). R3A: x16; R3B: x8.
Slot 6: Low profile, half length (both shrouds). R2A: x16.
Slot 7: Full height, half length (regular shroud); full height, full length (GPU shroud). R4P: x16 (Gen5, double-width GPU); R4Q: x8 (Gen5).
Slot 8: Full height, half length (both shrouds). R4Q: x8 (Gen5); R4R: x8 (Gen5).

Memory specifications
The VxRail VP-760 and VxRail VS-760 supports several memory specifications for optimized operation.
The following table provides the supported memory specifications for fourth-generation Intel Xeon Scalable processors:

Table 15. Memory specifications for fourth-generation Intel Xeon Scalable processors
Columns: DIMM rank and capacity (all DDR5 RDIMM), then minimum and maximum system capacity for single-processor and dual-processor configurations.

Single rank, 16 GB: single processor 16 GB minimum / 256 GB maximum; dual processors 32 GB minimum / 512 GB maximum.
Dual rank, 32 GB: single processor 32 GB / 512 GB; dual processors 64 GB / 1 TB.
Dual rank, 64 GB: single processor 64 GB / 1 TB; dual processors 128 GB / 2 TB.
Quad rank, 128 GB: single processor 128 GB / 2 TB; dual processors 256 GB / 4 TB.
Octa rank, 256 GB (a): single processor 256 GB / 4 TB; dual processors 512 GB / 8 TB.

a. 256 GB RDIMM is supported with VP-760 only.

The VxRail VP-760 and VxRail VS-760 with fourth-generation Intel Xeon Scalable processors supports 32 (288-pin) memory
module sockets at 4800 MT/s.
The following table provides the supported memory specifications for fifth-generation Intel Xeon Scalable processors:

Table 16. Memory specifications for fifth-generation Intel Xeon Scalable processors
Columns: DIMM rank and capacity (all DDR5 RDIMM), then minimum and maximum system capacity for single-processor and dual-processor configurations.

Single rank, 16 GB: single processor 16 GB minimum / 256 GB maximum; dual processors 32 GB minimum / 512 GB maximum.
Dual rank, 32 GB: single processor 32 GB / 512 GB; dual processors 64 GB / 1 TB.
Dual rank, 64 GB: single processor 64 GB / 1 TB; dual processors 128 GB / 2 TB.
Dual rank, 96 GB: single processor 96 GB / 1.5 TB; dual processors 192 GB / 3 TB.
Quad rank, 128 GB: single processor 128 GB / 2 TB; dual processors 256 GB / 4 TB.
Octa rank, 256 GB (a): single processor 256 GB / 4 TB; dual processors 512 GB / 8 TB.

a. 256 GB RDIMM with fifth-generation Intel Xeon Scalable processors supported with VP-760 only.

The VxRail VP-760 with fifth-generation Intel Xeon Scalable processors supports 32 (288-pin) memory module sockets at 5600
MT/s. It supports 256 GB RDIMM at 4800 MT/s only.
DDR4 memories are not supported.
Memory DIMM slots are not hot pluggable.

NOTE: The processor may reduce the performance of the rated DIMM speed.
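The maximum system capacity figures above follow directly from the slot count: 16 DIMM slots per processor, two processors, and the capacity of the largest supported RDIMM. A quick arithmetic check:

slots_per_processor = 16
processors = 2
for dimm_gb in (16, 32, 64, 96, 128, 256):
    total_gb = slots_per_processor * processors * dimm_gb
    print(f"{dimm_gb} GB RDIMM x {slots_per_processor * processors} slots = {total_gb} GB ({total_gb / 1024:g} TB)")
# 256 GB x 32 slots = 8192 GB (8 TB), matching the dual-processor maximum in the tables above.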

Storage controller specifications


The VxRail VP-760 and VxRail VS-760 system supports the following controller cards.
● Internal boot:
○ BOSS-N1: HWRAID 2 x M.2 NVMe SSDs
○ USB
● SAS HBA: HBA355i

● Internal controller: PERC H755 (supported with VP-760 only)

Drives
VxRail VP-760 supports the following drive configurations:
● Up to 24 2.5-inch hot-swappable SAS, SATA, or NVMe drives.
● Up to 24 2.5-inch in the front and four 2.5-inch at the rear hot-swappable SAS, SATA, or NVMe drives.
For more information about how to hot swap NVMe PCIe SSD U.2 device, see the Dell Express Flash NVMe PCIe SSD User's
Guide.
VxRail VP-760 with 24 x 2.5-inch drives with GPU supports the following disk group configurations:
● One disk group with one cache drive and up to five capacity drives.
● One disk group with one cache drive and up to seven capacity drives.
● Two disk groups with two cache drives and up to 10 capacity drives.
● Two disk groups with two cache drives and up to 14 capacity drives.
● Three disk groups with three cache drives and up to 15 capacity drives.
● Three disk groups with three cache drives and up to 21 capacity drives.
● Four disk groups with four cache drives and up to 20 capacity drives.
VxRail VP-760 with 24 x 2.5-inch (front) and 4 x 2.5-inch (rear) drives support the following disk group configurations:
● One disk group with one cache drive and up to six capacity drives.
● Two disk groups with two cache drives and up to 12 capacity drives.
● Three disk groups with three cache drives and up to 18 capacity drives.
● Four disk groups with four cache drives and up to 24 capacity drives.
VxRail VP-760 vSAN ESA (Express Storage Architecture) with 24 x 2.5-inch drives support:
● Up to 24 mixed use NVMe drives.
● A minimum of four drives.
VxRail VS-760 with 12 x 3.5-inch (front) and 4 x 2.5-inch (rear) drives support the following disk group configurations:
● One disk group with one cache drive and up to six capacity drives.
● Two disk groups with two cache drives and up to 12 capacity drives.
● Three disk groups with three cache drives and up to nine capacity drives.
● Four disk groups with four cache drives and up to 12 capacity drives.
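Based on the options listed above for the VP-760 with 24 x 2.5-inch front and 4 x 2.5-inch rear drives, each vSAN OSA disk group pairs one cache drive with up to six capacity drives, and up to four groups are supported. The following is a minimal sketch of a layout check; the helper name and structure are illustrative only.

MAX_DISK_GROUPS = 4
MAX_CAPACITY_DRIVES_PER_GROUP = 6

def layout_is_supported(capacity_drives_per_group):
    # capacity_drives_per_group lists the capacity-drive count of each disk group;
    # every group also needs exactly one cache drive.
    if not 1 <= len(capacity_drives_per_group) <= MAX_DISK_GROUPS:
        return False
    return all(1 <= count <= MAX_CAPACITY_DRIVES_PER_GROUP for count in capacity_drives_per_group)

print(layout_is_supported([6, 6, 6, 6]))  # True: four cache drives and 24 capacity drives
print(layout_is_supported([7]))           # False: more than six capacity drives in one group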

Ports and connector specifications


This section describes the port and connector specifications for the VxRail VP-760 and VxRail VS-760.

Table 17. USB specifications


Front: one USB 2.0-compliant port; one iDRAC Direct port (Micro-AB USB 2.0-compliant).
Rear: one USB 2.0-compliant port; one USB 3.0-compliant port.
Internal (optional): one internal USB 3.0-compliant port.

The micro USB 2.0 compliant port can only be used as an iDRAC Direct or a management port.
The VxRail VP-760 and VxRail VS-760 supports up to two NIC ports embedded on the LOM card, and up to four ports
integrated on the OCP card.

Table 18. NIC port specifications
Feature Specifications
LOM card (required) Two 1 GbE
OCP card (OCP 3.0) (optional) ● Two 10 GbE
● Four 10 GbE
● Two 25 GbE
● Four 25 GbE

The VxRail VP-760 and VxRail VS-760 supports the installation of a LOM card, an OCP card, or both.
NOTE: The system board supports an OCP PCIe with a width of x8. If you install an OCP PCIe with a width of x16, it is
downgraded to x8 width.
The VxRail VP-760 and VxRail VS-760 supports the following:
● One optional data terminal equipment (DTE) 9-pin serial connector card that is 16550-compliant. The optional serial
connector card is installed similar to an expansion card filler bracket.
● DB-15 VGA port on front panel and on rear I/O board.

Video specifications
The VxRail VP-760 and VxRail VS-760 supports an integrated Matrox G200 graphics controller with 16 MB of video frame
buffer.

Table 19. Video specifications


Resolution Refresh rate (Hz) Color depth (bits)
1024 x 768 60 8, 16, 32
1280 x 800 60 8, 16, 32
1280 x 1024 60 8, 16, 32
1360 x 768 60 8, 16, 32
1440 x 900 60 8, 16, 32
1600 x 900 60 8, 16, 32
1600 x 1200 60 8, 16, 32
1680 x 1050 60 8, 16, 32
1920 x 1080 60 8, 16, 32
1920 x 1200 60 8, 16, 32

Environmental specifications
This section provides the physical and environmental specifications for the VxRail VP-760 and VxRail VS-760.
For additional information about environmental certifications, see the Product Environmental Datasheet.

Table 20. Continuous operation specifications for ASHRAE A2


Temperature Specifications
Temperature range for altitudes <= 900 m (<= 2953 ft): 10–35°C (50–95°F) with no direct sunlight on the equipment
Humidity percent range (non-condensing always): 8% RH with -12°C (10.4°F) minimum dew point up to 80% RH with 21°C (69.8°F) maximum dew point

Table 20. Continuous operation specifications for ASHRAE A2 (continued)
Temperature Specifications
Operational altitude derating: Maximum temperature is reduced by 1°C/300 m (1.8°F/984 ft) above 900 m (2953 ft).
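Applied linearly, the derating rule above works out as in the following worked example (not a Dell calculator):

def max_ambient_c(altitude_m, base_limit_c=35.0):
    # 1 degree C of derating for every 300 m above 900 m, per the ASHRAE A2 row above.
    derating = max(0.0, (altitude_m - 900.0) / 300.0)
    return base_limit_c - derating

print(max_ambient_c(900))             # 35.0 degrees C at or below 900 m
print(max_ambient_c(2100))            # 31.0 degrees C (1200 m above the threshold gives 4 degrees of derating)
print(round(max_ambient_c(3050), 1))  # 27.8 degrees C at the maximum operational altitude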

Table 21. Common environmental specifications for ASHRAE A2


Temperature Specifications
Maximum temperature gradient (applies to both operation and non-operation): 20°C in an hour* (36°F in an hour) and 5°C in 15 minutes (9°F in 15 minutes); 5°C in an hour* (9°F in an hour) for tape hardware
Non-operational temperature limits -40°C to 65°C (-40°F to 149°F)
Non-operational humidity limits 5% to 95% RH with 27°C (80.6°F) maximum dew point
Maximum non-operational altitude 12,000 meters (39,370 ft)
Maximum operational altitude 3,050 meters (10,006 ft)

*Per ASHRAE thermal guidelines, these are not instantaneous rates of temperature change for tape hardware.

Table 22. Maximum vibration specifications


Maximum vibration Specifications
Operating 0.21 Grms at 5 Hz to 500 Hz for 10 minutes (all operation orientations)
Storage 1.88 Grms at 10 Hz to 500 Hz for 15 minutes (all six sides tested)

Table 23. Maximum shock pulse specifications


Maximum shock pulse Specifications
Operating: Six consecutively performed shock pulses in the positive and negative x, y, and z axes of 6 G for up to 11 milliseconds
Storage: Six consecutively performed shock pulses in the positive and negative x, y, and z axes (one pulse on each side of the system) of 71 G for up to 2 milliseconds

Particulate and gaseous contamination specifications


When the levels of particulate or gaseous pollution exceed the specified limitations and result in equipment damage or failure,
you must rectify the environmental conditions. The customer is responsible for the remediation of environmental conditions. To
avoid damage to equipment or failure due to particulate or gaseous contamination, consider the limitations that are specified in
the following tables:

Table 24. Particulate contamination specifications


Particulate Specifications
contamination
Air filtration Data center air filtration as defined by ISO Class 8 per ISO 14644-1 with a 95% upper confidence limit.
This condition applies to data center environments only.
Air filtration requirements do not apply to IT equipment designed to be used outside a data center in
environments such as an office or factory floor.
Air entering the data center must have MERV11 or MERV13 filtration.

Conductive dust Air must be free of conductive dust, zinc whiskers, or other conductive particles. This condition applies
to data center and non-data center environments.

Corrosive dust ● Air must be free of corrosive dust.


● Residual dust present in the air must have a deliquescent point less than 60% relative humidity.
This condition applies to data center and non-data center environments.

Table 24. Particulate contamination specifications (continued)
Walk-Up Edge Data Center or Cabinet (sealed, closed-loop environment): Filtration is not required for cabinets that are opened six times or less per year; otherwise, Class 8 filtration per ISO 14644-1, as defined above, is required.
In environments commonly above ISA-71 Class G1, or that may have known challenges, special filters may be required.

Table 25. Gaseous contamination specifications


Gaseous contamination Specifications
Copper coupon corrosion rate <300 Å/month per Class G1 as defined by ANSI/ISA71.04-2013
Silver coupon corrosion rate <200 Å/month as defined by ANSI/ISA71.04-2013

Thermal restriction matrix


The information available here describes the thermal restrictions of VxRail VP-760 and VxRail VS-760.

Table 26. Label reference


Label Description
STD Standard
HPR (Silver) High-Performance Silver (HPR Silver) fan
HPR (Gold) High-Performance Gold (HPR Gold) fan
HSK Heat sink
LP Low profile
FH Full height
DPC DIMM per channel

Table 27. Processor and heat sink matrix


Heat sink Processor TDP
STD HSK ≤ 165 W (supporting only 2.5-inch drives and non-GPU configuration).
2U HPR HSK 125 W-250 W (supports 3.5-inch drives and non-GPU configuration)
165 W–350 W (supporting 2.5-inch drives and non-GPU configuration).
L-type HSK Supports all GPU and FPGA configurations

All GPU/FPGA cards require a 1U L-type HSK and a GPU shroud.


The most restrictive component in a configuration determines the supported ambient temperature for that configuration. For
example, if the processor supports an ambient temperature of 35°C (95°F), the DIMM supports 35°C (95°F), and the GPU
supports 30°C (86°F), the combined configuration can only support 30°C (86°F).
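In other words, the supported ambient temperature is the minimum across the per-component limits, as in this short restatement of the example above:

component_limits_c = {"processor": 35, "DIMM": 35, "GPU": 30}
limiting_component = min(component_limits_c, key=component_limits_c.get)
# Prints "GPU 30": the combined configuration supports 30 degrees C (86 degrees F).
print(limiting_component, component_limits_c[limiting_component])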

Thermal restriction matrix for fourth-generation Intel Xeon Scalable processors
Table 28. Thermal restriction matrix for VxRail VP-760 air-cooled configuration
Columns: CPU, TDP/cTDP, cores, T-case maximum center (°C), required fan for 24 x 2.5-inch SAS with no rear drives, required fan for 24 x 2.5-inch SAS with 2.5-inch rear drives and rear fan, required fan for 24 x 2.5-inch NVMe with no rear drives, and supported ambient temperature.
5415+ 150 W 1 8 78 STD fan HPR SLVR fan HPR GOLD fan 35°C (95°F)
4410Y 12 78
5416S 16 78
5418N 165 W 1 24 84 STD fan HPR SLVR fan HPR GOLD fan 35°C (95°F)
4416+ 20 82
6426Y 185 W 1 16 72 STD fan HPR SLVR fan HPR GOLD fan 35°C (95°F)
5418Y 24 80
6428N 32 85
6434 205 W 1 8 96 STD fan HPR SLVR fan HPR GOLD fan 35°C (95°F)
5420+ 28 84
6438Y+ 32 76
6438M 32 84
6438N 32 84
6442Y 225 W 1 24 79 STD fan HPR SLVR fan HPR GOLD fan 35°C (95°F)
6448Y 32 79
6444Y 270 W 2 16 75 HPR SLVR HPR SLVR fan HPR GOLD fan 35°C (95°F)
fan
8462Y+ 300 W 2 32 81 HPR SLVR HPR SLVR fan HPR GOLD fan 35°C (95°F)
fan
6454S 270 W 2 32 71 HPR SLVR HPR SLVR fan HPR GOLD fan 35°C (95°F)
fan
6430 32 71
8471N 300 W 2 52 76 HPR SLVR HPR SLVR fan HPR GOLD fan 35°C (95°F)
fan
8470N 52 76
8460Y+ 40 75
8452Y 36 75

NOTE: The platform supports Maximum (MAX) and Mainstream (MS) system boards.
● 1 supports MS system board (CPU TDP < 250 W).
● 2 supports MAX system board (CPU TDP => 250 W).
For more information, see System board jumpers and connectors section.

NOTE: *Supported ambient temperature is 30°C (86°F).

Table 29. Thermal restriction matrix for VxRail VS-760 air cooled configuration (12 x 3.5-inch front drives; 2.5-inch rear drives with rear fan)
Columns: CPU, TDP/cTDP, cores, T-case maximum center (°C), required fan (HPR GOLD, limited to 70%^), and supported ambient temperature.
3408U 125 W 1 8 79 HPR GOLD 35°C (95°F)
5415+ 150 W1 8 78 HPR GOLD 35°C (95°F)
4410Y 12 78
5416S 16 78
5418N 165 W 1 24 84 HPR GOLD 35°C (95°F)
5411N 24 84
4416+ 20 82
6426Y 185 W 1 16 72 HPR GOLD 35°C (95°F)
5418Y 24 80
5412U 24 80
6428N 32 85
6421N 32 85
6434 205 W1 8 96 HPR GOLD 35°C (95°F)
5420+ 28 84
6438Y+ 32 76
6438M 32 84
6438N 32 84
6442Y 225 W 1 24 79 HPR GOLD* 35°C (95°F)
6448Y 32 79
6414U 250 W 2 32 76 HPR GOLD* 35°C (95°F)

NOTE: The platform supports Maximum (MAX) and Mainstream (MS) system boards.
● 1 supports MS system board (CPU TDP < 250 W)
● 2 supports MAX system board (CPU TDP = 250 W)
For more information, see System board jumpers and connectors section.

NOTE: ^The fan speed in the 3.5-inch chassis is limited to 70% due to the drive dynamic profile.

NOTE: *Supported ambient temperature is 30°C (86°F).

Table 30. Thermal restriction matrix for memory with air-cooled configuration (non-GPU)
Rear storage configurations, in column order: 24 x 2.5-inch SAS with no rear drives, 24 x 2.5-inch SAS with 2.5-inch rear drives and rear fan, and 24 x 2.5-inch NVMe with no rear drives.
Columns: DIMM configuration, power per DIMM (2 DPC), then the supported ambient temperature for each configuration with the STD fan (CPU TDP <= 250 W), the HPR SLVR fan (CPU TDP up to 350 W), and the HPR GOLD fan (CPU TDP up to 350 W), respectively.
256 GB RDIMM 12.7 W 30°C (86°F) 35°C (95°F) 35°C (95°F)
128 GB RDIMM 8.9 W 30°C (86°F) 35°C (95°F) 35°C (95°F)

Table 30. Thermal restriction matrix for memory with air-cooled configuration (non-GPU) (continued)
64 GB RDIMM 6.9 W 35°C (95°F) 35°C (95°F) 35°C (95°F)
32 GB RDIMM 4.1 W 35°C (95°F) 35°C (95°F) 35°C (95°F)
16 GB RDIMM 3W 35°C (95°F) 35°C (95°F) 35°C (95°F)
DIMM configuration, power per DIMM (2 DPC), then the supported ambient temperatures with the HPR SLVR fan (CPU TDP up to 350 W) and the HPR GOLD fan (CPU TDP up to 350 W):
256 GB RDIMM 12.7 W 35°C (95°F) 35°C (95°F) Not supported
128 GB RDIMM 8.9 W 35°C (95°F) 35°C (95°F) 35°C (95°F)
64 GB RDIMM 6.9 W 35°C (95°F) 35°C (95°F) 35°C (95°F)
32 GB RDIMM 4.1 W 35°C (95°F) 35°C (95°F) 35°C (95°F)
16 GB RDIMM 3W 35°C (95°F) 35°C (95°F) 35°C (95°F)

Table 31. Thermal restriction matrix for GPU configurations


Configurations: 24 x 2.5-inch SAS with no rear drives and 24 x 2.5-inch NVMe with no rear drives, both with the HPR GOLD fan and 1U HPR L-type HSK.
Columns: CPU, TDP/cTDP, cores, T-case maximum center (°C), then the supported ambient temperature for each configuration.
5415+ 150 W 1 8 78 35°C (95°F) 35°C (95°F)
4410Y 12 78
5416S 16 78
5418N 165 W 1 24 84 35°C (95°F) 35°C (95°F)
4416+ 20 82
6426Y 185 W 1 16 72 35°C (95°F) 35°C (95°F)
5418Y 24 80
6428N 32 85
6434 205 W 1 8 96 35°C (95°F) 30°C (86°F)
5420+ 28 84
6438Y+ 32 76
6438M 32 84
6438N 32 84
6442Y 225 W 1 24 79 35°C (95°F) 35°C (95°F)
6448Y 32 79
6444Y 270 W 2 32 75 35°C (95°F) 35°C (95°F)
8462Y+ 300 W 2 32 81 30°C (86°F) 30°C (86°F)
6454S 270 W 2 32 71 30°C (86°F) 30°C (86°F)
6430 32 71
8471N 300 W 2 52 76 30°C (86°F) 30°C (86°F)

Table 31. Thermal restriction matrix for GPU configurations (continued)
8470N 52 76
8460Y+ 40 75
8452Y 36 75

NOTE: The platform supports Maximum (MAX) and Mainstream (MS) system boards.
● 1 supports MS system board (CPU TDP < 250 W).
● 2 supports MAX system board (CPU TDP => 250 W).
For more information, see System board jumpers and connectors section.

NOTE: *Supported ambient temperature is 30°C (86°F).

NOTE: GPU configuration supports only high-performance Gold (HPR Gold) fan.

Table 32. Thermal restriction matrix with Optimized Ecological upgrade for air-cooled configuration
Columns: CPU, TDP/cTDP, cores, T-case maximum center (°C), Fan/HSK for 24 x 2.5-inch SAS with no rear drives, Fan/HSK for 24 x 2.5-inch SAS with 2.5-inch rear drives and rear fan, Fan/HSK for 24 x 2.5-inch NVMe with no rear drives, and supported ambient temperature.

5415+: 150 W, 8 cores, T-case 78°C; STD fan/2U HPR HSK, HPR SLVR fan/2U HPR HSK, HPR GOLD fan/STD HSK; 35°C (95°F)
4410Y: 150 W, 12 cores, T-case 78°C; STD fan/2U HPR HSK, HPR SLVR fan/2U HPR HSK, HPR GOLD fan/STD HSK; 35°C (95°F)
5416S: 150 W, 16 cores, T-case 78°C; STD fan/2U HPR HSK, HPR SLVR fan/2U HPR HSK, HPR GOLD fan/STD HSK; 35°C (95°F)
5418N: 165 W, 24 cores, T-case 84°C; STD fan/2U HPR HSK, HPR SLVR fan/2U HPR HSK, HPR GOLD fan/STD HSK; 35°C (95°F)
4416+: 165 W, 20 cores, T-case 82°C; STD fan/2U HPR HSK, HPR SLVR fan/2U HPR HSK, HPR GOLD fan/STD HSK; 35°C (95°F)

Table 33. Thermal restriction matrix for memory for air-cooled configuration (GPU)
Configuration 24 x 2.5-inch SAS* 24 x 2.5-inch NVMe*
DIMM Configuration 2DPC/Power HPR GOLD fan with 1U HPR L-Type HSK
128 GB RDIMM 8.9 W 35°C (95°F) 35°C (95°F)
64 GB RDIMM 6.9 W 35°C (95°F) 35°C (95°F)
32 GB RDIMM 4.1 W 35°C (95°F) 35°C (95°F)
16 GB RDIMM 3W 35°C (95°F) 35°C (95°F)

NOTE: *In 24 x 2.5-inch SAS/NVMe configuration, for CPU TDP 270 W - 300 W and specific Low Temperature-case CPUs
supported ambient temperature is 30°C (86°F).

Thermal restriction matrix for fifth-generation Intel Xeon Scalable processors
Table 34. Thermal restriction matrix for air cooled configuration
Columns: CPU, TDP/cTDP, cores, T-case maximum center (°C), required fan for 24 x 2.5-inch SAS with no rear drives, required fan for 24 x 2.5-inch SAS with 2.5-inch rear drives and rear fan, required fan for 24 x 2.5-inch NVMe with no rear drives, and supported ambient temperature.
4509Y: 125 W (1), 8 cores, T-case 84°C; STD, HPR SLVR, HPR GOLD; 35°C (95°F)
4510: 150 W (1), 12 cores, T-case 84°C; STD, HPR SLVR, HPR GOLD; 35°C (95°F)
4514Y: 150 W (1), 16 cores, T-case 79°C; STD, HPR SLVR, HPR GOLD; 35°C (95°F)
5512U: 185 W (1), 28 cores, T-case 89°C; STD, HPR SLVR, HPR GOLD; 35°C (95°F)
6534: 195 W (1), 8 cores, T-case 64°C; STD, HPR SLVR, HPR GOLD; 35°C (95°F)
6526Y: 195 W (1), 16 cores, T-case 82°C; STD, HPR SLVR, HPR GOLD; 35°C (95°F)
6542Y: 250 W (1), 24 cores, T-case 83°C; STD, HPR SLVR, HPR GOLD; 35°C (95°F)
6548Y+: 250 W (1), 32 cores, T-case 83°C; STD, HPR SLVR, HPR GOLD; 35°C (95°F)
6548N: 250 W (1), 32 cores, T-case 83°C; STD, HPR SLVR, HPR GOLD; 35°C (95°F)
8562Y+: 300 W (2), 32 cores, T-case 81°C; HPR SLVR, HPR SLVR, HPR GOLD; 35°C (95°F)
8558U: 300 W (2), 48 cores, T-case 78°C; HPR SLVR, HPR SLVR, HPR GOLD; 35°C (95°F)
8568Y+: 350 W (2), 48 cores, T-case 81°C; HPR SLVR, HPR SLVR, HPR GOLD*; 35°C (95°F)
8580: 350 W (2), 60 cores, T-case 81°C; HPR SLVR, HPR SLVR, HPR GOLD*; 35°C (95°F)
8592+: 350 W (2), 64 cores, T-case 81°C; HPR SLVR, HPR SLVR, HPR GOLD*; 35°C (95°F)

NOTE: The platform supports Maximum (MAX) and Mainstream (MS) system boards.
● 1 supports MS system board (CPU TDP < 250 W)
● 2 supports MAX system board (CPU TDP ≥ 250 W)
For more information, see System board jumpers and connectors section.

NOTE: *Supported ambient temperature is 30°C (86°F).

Table 35. Thermal restriction matrix for memory with air cooled configuration (non-GPU)
Rear storage configurations, in column order: 24 x 2.5-inch SAS with no rear drives, 24 x 2.5-inch SAS with 2.5-inch rear drives and rear fan, and 24 x 2.5-inch NVMe with no rear drives.
Columns: DIMM configuration, power per DIMM (2 DPC), then the supported ambient temperature for each configuration with the STD fan (CPU TDP <= 250 W), the HPR SLVR fan (CPU TDP up to 350 W), and the HPR GOLD fan (CPU TDP up to 350 W), respectively.
256 GB RDIMM* 12.7 W 30°C (86°F) 35°C (95°F) 35°C (95°F)
128 GB RDIMM 8.9 W 30°C (86°F) 35°C (95°F) 35°C (95°F)
96 GB RDIMM 8.3 W 30°C (86°F) 35°C (95°F) 35°C (95°F)
64 GB RDIMM 6.9 W 35°C (95°F) 35°C (95°F) 35°C (95°F)
32 GB RDIMM 4.1 W 35°C (95°F) 35°C (95°F) 35°C (95°F)

Table 35. Thermal restriction matrix for memory with air cooled configuration (non-GPU) (continued)
16 GB RDIMM 3W 35°C (95°F) 35°C (95°F) 35°C (95°F)
DIMM configuration, power per DIMM (2 DPC), then the supported ambient temperatures with the HPR SLVR fan (CPU TDP up to 350 W) and the HPR GOLD fan (CPU TDP up to 350 W):
256 GB RDIMM* 12.7 W 35°C (95°F) 35°C (95°F) 35°C (95°F)
128 GB RDIMM 8.9 W 35°C (95°F) 35°C (95°F) 35°C (95°F)
96 GB RDIMM 8.3 W 35°C (95°F) 35°C (95°F) 35°C (95°F)
64 GB RDIMM 6.9 W 35°C (95°F) 35°C (95°F) 35°C (95°F)
32 GB RDIMM 4.1 W 35°C (95°F) 35°C (95°F) 35°C (95°F)
16 GB RDIMM 3W 35°C (95°F) 35°C (95°F) 35°C (95°F)

NOTE: *256 GB RDIMM is supported with VP-760 only.

Table 36. Supported ambient temperature for processors with GPU


Configurations: 24 x 2.5-inch SAS with no rear drives and 24 x 2.5-inch NVMe with no rear drives, both with the HPR GOLD fan and 1U HPR L-type HSK.
Columns: CPU, TDP/cTDP, cores, T-case maximum center (°C), then the supported ambient temperature for each configuration.
4509Y 125 W 1 8 84 35°C 35°C
4510 150 W1 12 84 35°C 35°C
4514Y 16 79
5512U 185 W 1 28 89 35°C 35°C
6534 195 W 1 8 64 35°C 35°C
6526Y 16 82
6542Y 250 W 1 24 83 35°C 35°C
6548Y+ 32 83
6548N 32 83
8562Y+ 300 W 2 32 81 30°C 30°C
8558U 300 W 2 48 78 30°C 30°C
8568Y+ 350 W 2 48 81 Not supported Not supported
8580 60 81
8592+ 64 81

NOTE: The platform supports Maximum (MAX) and Mainstream (MS) system boards.
● 1 supports MS system board (CPU TDP < 250 W)
● 2 supports MAX system board (CPU TDP ≥ 250 W)
For more information, see System board jumpers and connectors section.

NOTE: *Supported ambient temperature is 30°C (86°F).

Table 37. Thermal restriction matrix for memory with air cooled configuration (GPU)
Configuration 24 x 2.5-inch SAS 24 x 2.5-inch NVMe
DIMM Configuration 2DPC/Power HPR GOLD fan with 1U HPR L-Type HSK
128 GB RDIMM 8.9 W 35°C (95°F) 35°C (95°F)
96 GB RDIMM 8.3 W 35°C (95°F) 35°C (95°F)
64 GB RDIMM 6.9 W 35°C (95°F) 35°C (95°F)
32 GB RDIMM 4.1 W 35°C (95°F) 35°C (95°F)
16 GB RDIMM 3W 35°C (95°F) 35°C (95°F)

Common thermal restrictions for both fourth-generation and fifth-generation Intel Scalable processors
Table 38. Thermal restriction matrix of the supported GPU
Configuration 24 x 2.5-inch SAS 24 x 2.5-inch NVMe
Rear storage No rear drives No rear drives
GPU HPR GOLD fan with 1U HPR L-Type HSK
A40 (Max 2) 30°C (86°F) 30°C (86°F)
A16 (Max 2) 35°C (95°F) 35°C (95°F)
A30 (Max 2) 35°C (95°F) 35°C (95°F)
A2 (Max 6) 35°C (95°F) 35°C (95°F)
H100 (Max 2) 35°C (95°F) 35°C (95°F)
L4 (Max 4) 35°C (95°F) 35°C (95°F)
L40 (Max 2) 35°C (95°F) 35°C (95°F)
L40S (Max 2) 35°C (95°F) 35°C (95°F)

4
Initial setup and configuration
To install and deploy VxRail, you can purchase deployment services from Dell Technologies or select the VxRail self-deployment
option (no installation services).
If you are using VxRail deployment services from Dell Technologies, do not rack the VxRail or connect power. Contact your Dell
Technologies account team or reseller to arrange for deployment by Dell Technologies certified technicians.

Self-deployment
For self-deployment guidance and preparatory instructions, see KB 187954. You must have extensive network experience,
understanding of VxRail infrastructure planning, and deployment knowledge to perform a VxRail self-deployment. Go to the
VxRail Configuration Portal to perform self-deployment.
Contact your sales representative for Dell Technologies Services if you are:
● Uncertain you can complete the end-to-end deployment process.
● Unable to complete the deployment.
During the VxRail deployment, iDRAC creates a vxadmin or PTAdmin account. This account provides hardware information to
the VxRail Manager and is required for the VxRail Manager and the cluster to function properly.
Do not modify or delete the vxadmin or PTAdmin account.
CAUTION: If the vxadmin or PTAdmin account is modified or deleted, VxRail Manager and the cluster may not
function properly.

Set up the system


To set up the system, perform the following steps.
1. Unpack the system.
2. Install the system into the rack.
For more information, see the Dell PowerEdge manuals that are relevant to your rail and cable management solution.
3. Connect the peripherals to the system and the system to the electrical outlet.
4. Power on the system.
For more information about setting up the system, see the Getting Started Guide that is included with your system. You can
also go to Dell Technologies support and search for your product.

iDRAC configuration
The Integrated Dell Remote Access Controller (iDRAC) allows administrators to be more productive and improves the overall
availability of Dell products. iDRAC alerts administrators to issues, enables remote management, and reduces the need for
physical access.
You can log in to iDRAC as the following users:
● iDRAC user
● Microsoft Active Directory user
● LDAP user
If secure default access to iDRAC is used, the iDRAC secure default password is available on the back of the appliance
Information tag. If you have not opted for secure default access to iDRAC, then the default username and password are root
and calvin. You can also log in by using Dell SSO or Smart Card.
The following prerequisites are required to log in to iDRAC:



● You must have iDRAC credentials.
● The iDRAC IP address is preconfigured for DHCP. You can change it to a static IP address by logging in to iDRAC.
● To access iDRAC, connect the network cable to Ethernet connector 1 on the system board.
● Change the default username and password after setting up the iDRAC IP address.
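
Once the prerequisites are in place, reachability and credentials can be checked out of band through the Redfish REST API that iDRAC exposes. The following is a minimal sketch only, not part of the Dell deployment procedure; it assumes the Python requests package, the default Dell resource ID System.Embedded.1, and placeholder address and credentials.

    import requests

    IDRAC = "https://192.168.0.120"   # placeholder iDRAC IP address
    CREDS = ("root", "calvin")        # replace with your iDRAC credentials

    # Query the ComputerSystem resource; a successful response confirms that the
    # credentials are valid and that iDRAC is reachable over the network.
    resp = requests.get(
        f"{IDRAC}/redfish/v1/Systems/System.Embedded.1",
        auth=CREDS,
        verify=False,                 # iDRAC ships with a self-signed certificate
    )
    resp.raise_for_status()
    print("Power state:", resp.json().get("PowerState"))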

Options to set up iDRAC IP address


To enable communication between your system and iDRAC, you must first configure the network settings that are based on
your network infrastructure.
By default, the Network settings option is set to DHCP. For static IP configuration, you must request the settings at the time
of purchase.
To set up the iDRAC IP address, use one of the interfaces in the following table.

Table 39. Interfaces to set up iDRAC IP address


Interface Documentation links
iDRAC Settings utility From the Browse All Products widget, select the iDRAC software that you are using from
the Remote Enterprise Systems Management column. From the Documentation section,
locate the Integrated Dell Remote Access Controller User's Guide. You can also go to Dell
Support and search for your specific product.

To determine the most recent iDRAC release for your platform and for the latest
documentation version, see KB 000178115.

OpenManage Deployment Toolkit   From the Browse All Products widget, select OpenManage Deployment Toolkit from
                                the Enterprise Systems Management column, then select the appropriate version. In the
                                Documentation section, select the Dell OpenManage Deployment Toolkit User's Guide.
iDRAC Direct From the Browse All Products widget, select the iDRAC software that you are using from
the Remote Enterprise Systems Management column. From the Documentation section,
locate the Integrated Dell Remote Access Controller User's Guide. You can also go to Dell
Support and search for your specific product.

To determine the most recent iDRAC release for your platform and for the latest
documentation version, see KB 000178115.

Lifecycle Controller From the Browse All Products widget, select the Lifecycle Controller software that
you are using from the Remote Enterprise Systems Management column. From the
Documentation section, locate the Dell Lifecycle Controller User's Guide. You can go to
Dell Support and search for your specific product in the Identify your product box.

To access iDRAC, use one of the following:


● Connect the Ethernet cable to the iDRAC dedicated network port.
● Use the iDRAC Direct port by using a micro USB (type AB) cable.
If you have opted for a system in which shared LOM mode has been enabled, you can access iDRAC through the shared LOM
mode.
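
As an illustration only, the sketch below switches the iDRAC network configuration from DHCP to a static address through the standard Redfish EthernetInterface schema rather than one of the interfaces in the table above. It assumes the default Dell manager ID iDRAC.Embedded.1 and that the firmware exposes the standard DHCPv4 and IPv4StaticAddresses properties; the addresses and credentials are placeholders, and the session is expected to drop once the new address takes effect.

    import requests

    IDRAC = "https://192.168.0.120"   # current iDRAC address (placeholder)
    CREDS = ("root", "calvin")

    # Enumerate the iDRAC NIC rather than hard-coding its member ID.
    nics = requests.get(
        f"{IDRAC}/redfish/v1/Managers/iDRAC.Embedded.1/EthernetInterfaces",
        auth=CREDS, verify=False,
    ).json()["Members"]
    nic_uri = nics[0]["@odata.id"]

    # Disable DHCP and apply a static IPv4 address (placeholder values).
    payload = {
        "DHCPv4": {"DHCPEnabled": False},
        "IPv4StaticAddresses": [{
            "Address": "192.168.0.130",
            "SubnetMask": "255.255.255.0",
            "Gateway": "192.168.0.1",
        }],
    }
    resp = requests.patch(f"{IDRAC}{nic_uri}", json=payload, auth=CREDS, verify=False)
    print(resp.status_code)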



5
Pre-operating system management applications
You can manage basic settings and features of the VxRail without booting into the operating system by using the system
firmware.
Dell Technologies optimizes these settings for your VxRail during installation and configuration. To ensure the best
performance, do not change any basic settings or features set by Dell Technologies.

CAUTION: Performance may be impacted if settings and features configured by Dell Technologies are changed.

Manage the pre-operating system applications


VxRail contains options to manage the pre-operating system applications.
The following options are available:
● System Setup
● Boot Manager
● Dell Lifecycle Controller
● Preboot Execution Environment (PXE)

Set up the system


Using the System Setup option, configure the BIOS settings, iDRAC settings, and device settings of your VxRail.
You can access the system setup by using any of the following interfaces:
● User interface: From the iDRAC Dashboard, click Configurations > BIOS Settings.
● Text browser: To enable the text browser, use the Console Redirection.
To view System Setup, power on the system, press F2, and click System Setup Main Menu. If the operating system begins
to load before you press F2, wait for the system to finish booting, and then restart the system and try again.
The following table describes the options on the System Setup Main Menu screen.

Table 40. Options on the System Setup Main Menu screen


Option Description
System BIOS To configure the BIOS settings.
iDRAC Settings The iDRAC settings utility is an interface to set up and configure the iDRAC parameters by
using UEFI (Unified Extensible Firmware Interface). You can enable or disable various iDRAC
parameters by using the iDRAC settings utility.
Device Settings To configure device settings for devices such as storage controllers or network cards.
Service Tag Settings To configure the System Service Tag.

System BIOS
In the BIOS, access System BIOS to view the available options.
1. To view the System BIOS screen, power on the system and press F2.
2. Click System Setup Main Menu and then System BIOS.
The following table provides the details of the options that are available in the System BIOS:



Table 41. Options on the System BIOS screen
Option Description
System Information Provides information about the system such as the system model name, BIOS version,
and Service Tag.
Memory Settings Specifies information and options that are related to the installed memory.
Processor Settings Specifies information and options that are related to the processor such as speed and
cache size.
SATA Settings Specifies options to enable or disable the embedded SATA controller and ports.
NVMe Settings Specifies options to change the NVMe settings. If the system contains the NVMe
drives that you want to configure in a RAID array, you must set both this field and the
Embedded SATA field on the SATA Settings menu to RAID mode. You might also
need to change the Boot Mode setting to UEFI. Otherwise, you should set this field to
Non-RAID mode.
Boot Settings Specifies options to set the boot mode (BIOS or UEFI). It enables you to modify
UEFI and BIOS boot settings.
Network Settings Specifies options to manage the UEFI network settings and boot protocols.

Legacy network settings are managed from the Device Settings menu.

Network settings are not supported in BIOS boot mode.

Integrated Devices Specifies options to manage integrated device controllers and ports, and their related
features and options.
Serial Communication Specifies options to manage the serial ports and their related features and options.
System Profile Settings Specifies options to change the processor power management settings and memory
frequency.
System Security Specifies options to configure the system security settings, such as system password,
setup password, Trusted Platform Module (TPM) security, and UEFI secure boot. It also
manages the power button on the system.
Redundant OS Control Sets the redundant operating system information for redundant operating system
control.
Miscellaneous Settings Specifies options to change the system date and time.
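
The current values of these BIOS options can also be read without rebooting into System Setup, through the Redfish Bios resource that iDRAC exposes. The sketch below is illustrative only; attribute names vary by BIOS release, so it prints whatever the system reports rather than assuming specific keys, and the address and credentials are placeholders.

    import requests

    IDRAC = "https://192.168.0.120"   # placeholder iDRAC IP address
    CREDS = ("root", "calvin")

    bios = requests.get(
        f"{IDRAC}/redfish/v1/Systems/System.Embedded.1/Bios",
        auth=CREDS, verify=False,
    ).json()

    # "Attributes" maps BIOS option names to their current values.
    for name, value in sorted(bios.get("Attributes", {}).items()):
        print(f"{name} = {value}")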

System information
In the BIOS, access System Information to view several details.
1. To view the System Information screen, power on the system and press F2.
2. Click System Setup Main Menu, System BIOS, and then System Information.

Table 42. System information details


Option Description
System Model Name It provides the system model name.
System BIOS Version It specifies the BIOS version that is installed on the system.
System Management Engine It displays the current version of the Management Engine firmware.
Version
System Service Tag It provides the system Service Tag information.
System Manufacturer It specifies the name of the system manufacturer.
System Manufacturer Contact It provides the contact information of the system manufacturer.
Information



Table 42. System information details (continued)
Option Description
System CPLD Version It displays the current version of the system Complex Programmable Logic Device
(CPLD) firmware.
UEFI Compliance Version It specifies the UEFI compliance level of the system firmware.
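
The same details can be collected remotely from the Redfish ComputerSystem resource, as in this hedged sketch; Model, BiosVersion, and SKU are standard Redfish properties, and on Dell systems the SKU field typically carries the Service Tag. The address and credentials are placeholders.

    import requests

    IDRAC = "https://192.168.0.120"   # placeholder iDRAC IP address
    CREDS = ("root", "calvin")

    system = requests.get(
        f"{IDRAC}/redfish/v1/Systems/System.Embedded.1",
        auth=CREDS, verify=False,
    ).json()

    print("Model:       ", system.get("Model"))
    print("BIOS version:", system.get("BiosVersion"))
    print("Service Tag: ", system.get("SKU"))   # Dell reports the Service Tag in SKU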

Memory settings
In the BIOS, access Memory Settings to view details.
1. To view the Memory Settings screen, power on the system and press F2.
2. Click System Setup Main Menu, System BIOS, and then Memory Settings.

Table 43. Memory Settings details


Option Description
System Memory Size It specifies the size of the system memory.
System Memory Type It specifies the type of memory that is installed in the system.
System Memory Speed It specifies the speed of the system memory.
Video Memory It specifies the size of the video memory.
System Memory Testing It allows you to control whether the system memory tests are run during
system boot. The two options available are Enabled and Disabled. By
default, this option is set to Disabled.
Memory Operating Mode By default, this option is set to Optimizer Mode. The Fault Resilient Mode
and NUMA Fault Resilient Mode options are available for support when the
Advanced RAS capability processor is installed on the system.
Current State of Memory Operating It specifies the current state of the memory operating mode.
Mode
Fault Resilient Mode Memory Size (%) This option allows you to define the percent of total memory size that the
fault resilient mode uses, when in Memory Operating Mode. If the Fault
Resilient Mode option is not selected, it is unavailable and not used by Fault
Resilient Mode.
Node Interleaving It enables or disables the Node interleaving option. It specifies if the
Non-Uniform Memory Architecture (NUMA) is supported. If this field is
set to Enabled, memory interleaving is supported if a symmetric memory
configuration is installed. If the field is set to Disabled, the system supports
NUMA (asymmetric) memory configurations. This option is set to Enabled by
default.
ADDDC Settings When the Adaptive Double DRAM Device Correction (ADDDC) option is
enabled, failing DRAMs are dynamically mapped out. When set to Enabled,
this option impacts the system performance under certain workloads. This
feature is applicable for x4 DIMMs only. By default, this option is set to
Enabled.
Memory Training When the option is set to Fast and the memory configuration is not changed,
the system uses previously saved memory training parameters to train the
memory subsystems. System boot time is also reduced. If the memory
configuration is changed, the Retrain at Next boot is automatically enabled,
forces a single full memory training step, and then returns to Fast afterward.
When the option is set to Retrain at Next boot, the system performs the
one-time full memory training step at the next power on, and slows the
boot time on the next boot. When the option is set to Enabled, the system
performs the force full memory training steps every time the system powers
on. This option also slows the boot process each time.



Table 43. Memory Settings details (continued)
Option Description
Memory Map Out This option controls the DIMM slots on the system. This option is set to
Enabled by default. It allows you to disable system installed DIMMs.
Correctable Error Logging It enables or disables correctable error logging. By default, this option is set
to Disabled.
DIMM Self-Healing (Post Package Repair)   It enables or disables Post Package Repair (PPR) on uncorrectable memory
on Uncorrectable Memory Error             errors. By default, this option is set to Enabled.
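
For a quick out-of-band view of the installed DIMMs (for example, to compare against the thermal restriction tables earlier in this document), the Redfish Memory collection can be walked as in the sketch below. CapacityMiB, MemoryDeviceType, and OperatingSpeedMhz are standard Redfish Memory properties; the address and credentials are placeholders.

    import requests

    IDRAC = "https://192.168.0.120"   # placeholder iDRAC IP address
    CREDS = ("root", "calvin")

    def get(path):
        # Helper for authenticated Redfish GET requests.
        return requests.get(f"{IDRAC}{path}", auth=CREDS, verify=False).json()

    for member in get("/redfish/v1/Systems/System.Embedded.1/Memory")["Members"]:
        dimm = get(member["@odata.id"])
        print(dimm.get("Id"),
              dimm.get("MemoryDeviceType"),
              dimm.get("CapacityMiB"), "MiB",
              dimm.get("OperatingSpeedMhz"), "MHz")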

Processor settings
Access processor settings.
1. To view the Processor Settings screen, power on the system and press F2.
2. Click System Setup Main Menu, System BIOS, and then Processor Settings.

Table 44. Processor settings details


Option Description
Logical Processor Each processor core supports up to two logical processors. If this option is set to Enabled,
the BIOS displays all the logical processors. If this option is set to Disabled, the BIOS
displays only one logical processor per core. This option is set to Enabled by default.
CPU Interconnect Speed It allows you to govern the frequency of the communication links among the processors in
the system.
NOTE: The standard and basic bin processors support lower link frequencies.

The options available are Maximum data rate, 16.0 GT/s, 14.4 GT/s, and 12.8 GT/s.
By default, this option is set to Maximum data rate. The maximum data rate indicates
that the BIOS runs the communication links at the maximum frequency that the processors
support. You can also select specific frequencies that the processors support, which can
vary. For the best performance, you should select Maximum data rate. Any reduction
in the communication link frequency affects the performance of non-local memory access
and cache coherency traffic. In addition, it can slow access to non-local I/O devices from
a particular processor. If power-saving considerations outweigh performance, reduce the
frequency of the processor communication links. Before reducing the frequency, you must
localize the memory and I/O access to the nearest NUMA node to minimize the impact to
system performance.
Virtualization Technology It enables or disables the virtualization technology for the processor. By default, this option
is set to Enabled.
Directory Mode It enables or disables the directory mode. This option is set to Enabled by default.
Kernel DMA Protection By default, this option is set to Disabled. When the option is set to Enabled, the BIOS and
operating system use virtualization technology to enable direct memory access protection
for DMA-capable peripheral devices.
Adjacent Cache Line It optimizes the system for applications that need high utilization of sequential memory
Prefetch access. This option is set to Enabled by default. You can disable this option for
applications that need high utilization of random memory access.
Hardware Prefetcher It enables or disables the hardware prefetcher. This option is set to Enabled by default.
DCU Streamer Prefetcher It enables or disables the DCU streamer prefetcher. This option is set to Enabled by
default.
DCU IP Prefetcher It enables or disables the DCU IP prefetcher. This option is set to Enabled by default.
Sub NUMA Cluster It enables or disables the Sub-NUMA Cluster. This option is set to Disabled by default.



Table 44. Processor settings details (continued)
Option Description
MADT Core Enumeration It specifies the MADT Core Enumeration. This option is set to Round Robin by default.
The Linear option supports industry-standard core enumeration, whereas the Round Robin
option supports Dell-optimized core enumeration.
UMA Based Clustering This field is read-only and shows as Quadrant when the Sub NUMA Cluster is disabled, or
displays as Disabled, when the Sub NUMA Cluster is either 2-way or 4-way.
UPI Prefetch It enables you to get the memory read started early on the DDR bus. The Ultra Path
Interconnect (UPI) Rx path spawns the speculative memory that is read to the Integrated
Memory Controller (iMC) directly. This option is set to Enabled by default.
XPT Prefetch This option is set to Enabled by default.
LLC Prefetch It enables or disables the LLC Prefetch on all threads. This option is set to Enabled by
default.
Dead Line LLC Alloc It enables or disables the Dead Line LLC Alloc. This option is set to Enabled by default.
You can enable this option to enter the dead lines in LLC or disable the option to not enter
the dead lines in LLC.
Directory AtoS It enables or disables the Directory AtoS. AtoS optimization reduces remote read latencies
for repeat read accesses without intervening writes. This option is set to Disabled by
default.
AVX P1 It enables you to reconfigure the processor TDP levels during POST based on the power
and thermal delivery capabilities of the system. TDP verifies the maximum heat that the
cooling system must dissipate. This option is set to Normal by default.
NOTE: This option is only available on certain processor SKUs.

Intel SST-BF It enables Intel SST-BF. This option is displayed if Performance Per Watt (operating
system) or Custom (when OSPM is enabled) system profiles are selected. This option
is set to Disabled by default.
Intel SST-CP It enables Intel SST-CP. This option displays if Performance Per Watt (operating
system) or Custom (when OSPM is enabled) system profiles are selected. This option
is displayed and selectable for each system profile mode. This option is set to Disabled by
default.
NOTE: This option is hidden if the processor installed does not support SST
capabilities.

x2APIC Mode It enables or disables x2APIC mode. This option is set to Enabled by default.
NOTE: For two processors with a 64 core configuration, x2APIC mode is not
switchable if 256 threads are enabled (BIOS settings: All CCD, cores, and logical
processors are enabled).

AVX ICCP Pre-Grant It enables or disables the AVX ICCP Pre-Grant License. This option is set to Disabled by
License default.

Table 45. Options for Dell Controlled Turbo Settings


Option Description
Dell Controlled Turbo It controls the turbo engagement. Enable this option only when System Profile is set to
Settings Performance or Custom, and CPU Power Management is set to Performance. This item
can be selected for each system profile mode. This option is set to Disabled by default.
NOTE: Depending on the number of installed processors, there might be up to two
processor listings.

Dell AVX Scaling It enables you to configure the Dell AVX scaling technology. This option is set to 0 by
Technology default. Enter the value from 0 to 12 bins. When the Dell-controlled Turbo feature is
enabled, the value that is entered decreases the Dell AVX Scaling Technology frequency.
Optimizer Mode It enables or disables the CPU performance. When this option is set to Auto, set the
CPU Power Management to Max Performance. When set to Enabled, enables the CPU



Table 45. Options for Dell Controlled Turbo Settings (continued)
Option Description
Power Management settings. When set to Disabled, the CPU Power Management option
is disabled. This option is set to Auto by default.
Number of cores per It controls the number of enabled cores in each processor. This option is set to All by
Processor default.
CPU Physical Address It limits the CPU physical address to 46 bits to support older Hyper-V. When enabled,
Limit TME-MT is automatically disabled. By default, this option is set to Enabled.
AMP Prefetch This option enables one of the Mid-Level Cache (MLC) AMP hardware Prefetcher. This
option is set to Disabled by default.
Homeless Prefetch This option allows the L1 (DCU) to prefetch, when the FB is full. Auto maps to hardware
default setting. This option is set to Auto by default.
Uncore Frequency RAPL This setting controls whether the Running Average Power Limit (RAPL) balancer is enabled
or not. If enabled, it activates the uncore power budgeting. This option is set to Enabled by
default.
Processor Core Speed It specifies the maximum core frequency of the processor.
Processor Bus Speed It specifies the bus speed of the processor.
NOTE: The processor bus speed option displays only when both processors are
installed.

Local Machine Check It enables or disables the local machine check exception. This exception is an extension
Exception of the MCA Recovery mechanism. This exception provides the capability to deliver
Uncorrected Recoverable (UCR) Software Recoverable Action Required (SRAR) errors to
one or more specific logical processor threads that receive corrupted data. When enabled,
the UCR SRAR Machine Check Exception is delivered only to the affected thread instead
of the system. This feature supports operating system recovery for cases of multiple
recoverable faults that are detected close, which would otherwise result in a fatal machine
check event. The feature is available only on Advanced RAS processors. This option is set
to Disabled by default.
CPU Crash Log Support This field controls the Intel CPU Crash Log feature for collection of previous crash data
from shared SRAM of out-of-band Management Service Module at post reset. By default,
this option is set to Disabled.
Processor n Depending on the number of processors, there might be up to n processors listed.

Table 46. Processor n details


Option Description
Family-Model-Stepping It describes the family, model, and steppings of the processor as defined by
Intel.
Brand It specifies the brand name.
Level 2 Cache It lists the total L2 cache.
Level 3 Cache It provides the total L3 cache.
Number of Cores It specifies the number of cores per processor.
Microcode It displays the processor microcode version.
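
A similar hedged sketch confirms the processor inventory without entering System Setup; Model, TotalCores, TotalThreads, and MaxSpeedMHz are standard Redfish Processor properties, and the connection details are placeholders.

    import requests

    IDRAC = "https://192.168.0.120"   # placeholder iDRAC IP address
    CREDS = ("root", "calvin")

    def get(path):
        # Helper for authenticated Redfish GET requests.
        return requests.get(f"{IDRAC}{path}", auth=CREDS, verify=False).json()

    for member in get("/redfish/v1/Systems/System.Embedded.1/Processors")["Members"]:
        cpu = get(member["@odata.id"])
        print(cpu.get("Model"),
              "| cores:", cpu.get("TotalCores"),
              "| threads:", cpu.get("TotalThreads"),
              "| max MHz:", cpu.get("MaxSpeedMHz"))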

SATA settings
In the BIOS, access SATA Settings to view details.
1. To view the SATA Settings screen, power on the system and press F2.
2. Click System Setup Main Menu, System BIOS, and then SATA Settings.



Table 47. SATA settings
Option Description
Embedded SATA It sets the embedded SATA option to Off, AHCI mode, or RAID modes. By default, this
option is set to AHCI Mode.
NOTE: Change the Boot Mode setting to UEFI when necessary. Otherwise, set the field
to Non-RAID mode. In the RAID mode, the ESXi and Ubuntu operating systems are not
supported.

Security Freeze Lock Sends Security Freeze Lock command to the embedded SATA drives during POST. This
option is applicable only for AHCI Mode. By default, this option is set to Enabled.
Write Cache It enables or disables the write cache command for the embedded SATA drives during POST. This option
is applicable only for AHCI Mode. This option is set to Disabled by default.
Port n This option sets the drive type of the selected device. For AHCI Mode, BIOS support is
always enabled.

Table 48. Port n


Options Descriptions
Model Drive model of the selected device.
Drive Type Type of drive that is attached to the SATA port.
Capacity Describes the total capacity of the drive. This field is undefined for removable media devices
such as optical drives.

NVMe settings
In the BIOS, access NVMe Settings to view details.
1. To view the NVMe Settings screen, power on the system and press F2.
2. Click System Setup Main Menu, System BIOS, and then NVMe Settings.

Table 49. NVMe settings


Option Description
NVMe Mode It enables or disables the boot mode. By default, this option is set to Non-RAID mode.
BIOS NVMe Driver It sets the drive type to boot the NVMe driver. The available options are Dell Qualified
Drives and All Drives. By default, this option is set to Dell Qualified Drives.

Boot settings
You can use the Boot Settings screen to set the boot mode to either UEFI or BIOS. You can also specify the boot order. The
Boot Settings support only UEFI mode.
UEFI: The Unified Extensible Firmware Interface (UEFI) is a new interface between operating systems and platform firmware.
The interface consists of data tables with platform-related information, boot, and runtime service calls that are available to the
operating system and its loader.
The following benefits are available when the Boot Mode is set to UEFI:
● Support for drive partitions larger than 2 TB.
● Enhanced security (for example, UEFI Secure Boot).
● Faster boot time
Use UEFI boot mode only to boot from NVMe drives.
BIOS: The BIOS boot mode is the legacy boot mode that is maintained for backward compatibility.
1. To view the Boot Settings screen, power on the system and press F2.
2. Click System Setup Main Menu, System BIOS, and then Boot Settings.



Table 50. Boot setting details
Option Description
Boot Mode This option allows you to set the boot mode of the system. If the operating system supports
UEFI, you can set this option to UEFI. Setting this field to BIOS allows compatibility with
non-UEFI operating systems. This option is set to UEFI by default.
CAUTION: Switching the boot mode may prevent the system from booting when
the operating system is not installed in the same boot mode.

NOTE: Setting this field to UEFI disables the BIOS Boot Settings menu.

Boot Sequence Retry It enables or disables the Boot sequence retry feature, or resets the system. When this option
is set to Enabled and the system fails to boot, the system repeats the boot sequence after
30 seconds. When this option is set to Reset and the system fails to boot, the system reboots
immediately. This option is set to Enabled by default.
Generic USB Boot It enables or disables the generic USB boot placeholder. This option is set to Disabled by
default.
Hard-disk Drive It enables or disables the Hard-disk drive placeholder. This option is set to Disabled by default.
Placeholder
Clean all Sysprep When this option is set to None, the BIOS does nothing. When set to Yes, the BIOS deletes the
and SysPrepOrder SysPrep #### and SysPrepOrder variables. Once removal of the variables is complete, the
variables options reset to None. This setting is only available in UEFI Boot Mode and is set to None by
default.
UEFI Boot Settings It specifies the UEFI boot sequence. It enables or disables the UEFI boot options.
NOTE: This option controls the UEFI boot order. The first option in the list is attempted
first.

Choose system boot mode


System Setup enables you to specify the boot mode for installing your operating system. UEFI boot mode (the default) is an
enhanced 64-bit boot interface. When you have configured your system to boot to UEFI mode, it replaces the system BIOS.
1. From the System Setup Main Menu, click Boot Settings, and select Boot Mode.
2. Select the UEFI boot mode that you want the system to boot into.
CAUTION: If the operating system is not installed in the same boot mode, switching the boot mode may
prevent the system from booting.

3. After the system boots in the specified boot mode, install your operating system from that mode.
NOTE: Operating systems must be UEFI-compatible to be installed from the UEFI boot mode. DOS and 32-bit operating
systems do not support UEFI and can only be installed from the BIOS boot mode.
For the latest information about supported operating systems, see Server operating systems.
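
The active boot mode can also be confirmed out of band. In the sketch below, the attribute name BootMode is an assumption based on common Dell BIOS attribute naming, so the code prints a fallback message if the attribute is absent; the connection details are placeholders.

    import requests

    IDRAC = "https://192.168.0.120"   # placeholder iDRAC IP address
    CREDS = ("root", "calvin")

    attrs = requests.get(
        f"{IDRAC}/redfish/v1/Systems/System.Embedded.1/Bios",
        auth=CREDS, verify=False,
    ).json().get("Attributes", {})

    # "BootMode" is an assumed attribute name; inspect the full dictionary if it differs.
    print("Boot mode:", attrs.get("BootMode", "<attribute not found>"))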

Change the boot order


To boot from a USB drive or an optical drive, you must change the boot order.
If you already have the BIOS set for Boot Mode, the following instructions may vary. Changing the drive boot sequence is only
supported in BIOS boot mode.
1. On the System Setup Main Menu screen, click System BIOS, Boot Settings, UEFI Boot, and then UEFI Boot
Sequence.
2. Use the arrow keys to select a boot device, and then use the plus + and minus - keys to move the device up or down in the
order.
3. Click Exit, and then click Yes to save the settings.
You can also enable or disable the boot order devices as needed.
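
When a single boot from an alternate device is all that is needed, a one-time boot override through the standard Redfish Boot object avoids editing the persistent boot sequence. The sketch below is illustrative only and uses placeholder connection details.

    import requests

    IDRAC = "https://192.168.0.120"   # placeholder iDRAC IP address
    CREDS = ("root", "calvin")

    # Request a one-time boot from PXE on the next restart; other standard targets
    # include "Usb", "Cd", and "Hdd".
    payload = {"Boot": {"BootSourceOverrideTarget": "Pxe",
                        "BootSourceOverrideEnabled": "Once"}}
    resp = requests.patch(
        f"{IDRAC}/redfish/v1/Systems/System.Embedded.1",
        json=payload, auth=CREDS, verify=False,
    )
    print(resp.status_code)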



Network settings
In the BIOS, you can access Network Settings to view details. Network settings are not supported in BIOS boot mode.
1. To view the Network Settings screen, power on the system and press F2.
2. Click System Setup Main Menu, System BIOS, and then Network Settings.

Table 51. Network setting options


Option Description
UEFI PXE Settings It controls the configuration of the UEFI PXE device.
Number of PXE It specifies the number of PXE devices. This option is set to 4 by default.
Devices
PXE Device n (n = 1 This option enables or disables the device. When enabled, a UEFI PXE boot option is created for
to 4) the device.
PXE Device n It controls the configuration of the PXE device.
Settings (n = 1 to 4)
UEFI HTTP Settings It controls the configuration of the UEFI HTTP device.
HTTP Device n (n = 1 This option enables or disables the device. When enabled, a UEFI HTTP boot option is created for
to 4) the device.
HTTP Device n It controls the configuration of the HTTP device.
Settings (n = 1 to 4)
UEFI iSCSI Settings It controls the configuration of the iSCSI device.
iSCSI Initiator Name It specifies the name of the iSCSI initiator in IQN format.
iSCSI Device1 This option enables or disables the iSCSI device. When disabled, a UEFI boot option is created for
the iSCSI device automatically. This option is set to Disabled by default.
iSCSI Device1 It controls the configuration of the iSCSI device.
Settings
UEFI NVMe-oF It controls the configuration of the NVMe-oF devices.
Settings
NVMe-oF This option enables or disables the NVMe-oF feature. When enabled, it allows you to configure
the host and target parameters that are needed for fabric connection. This option is set to
Disabled by default.
NVMe-oF Host NQN This field specifies the name of the NVMe-oF host NQN. Input is allowed using
the following format: nqn.yyyy-mm.<Reserved Domain Name>:<Unique String>.
Leave this field empty if you intend to use the system-generated value with the
nqn.1988-11.com.dell:<Model name>.<Model number>.<Service Tag> format.

NVMe-oF Host Id This field specifies a 16-byte value of the NVMe-oF host identifier that uniquely identifies this
host with the controller in the NVM subsystem. The input that is allowed is a hexadecimal-
encoded string that uses the 00112233-4455-6677-8899-aabbccddeeff format. To use
the system-generated value, leave the field empty.
NOTE: A value of all FFFFFFFF-FFFF-FFFF-FFFF-FFFFFFFFFFFF is not allowed.

Host Security Key This field specifies the Host security key path.
Path
NVMe-oF SubSystem This field controls the parameters for the NVMe-oF subsystem n connections.
Settings

Integrated devices
In the BIOS, access Integrated Devices to view details.
1. To view the Integrated Devices screen, power on the system and press F2.



2. Click System Setup Main Menu, System BIOS, and then Integrated Devices.

Table 52. Integrated Devices details


Option Description
User Accessible USB Ports It configures the user accessible USB ports. Select Only Back Ports On to disable
the front USB ports. Select All Ports Off to disable all front and back USB ports.
Select All Ports Off (Dynamic) to disable all front and back USB ports during
POST. An authorized user can enable or disable the front ports dynamically
without resetting the system. This option is set to All Ports On by default.
The USB keyboard and mouse still function in certain USB ports during the boot
process, depending on the selection. After the boot process is complete, the USB
ports will be enabled or disabled as per the setting.
iDRAC Direct USB Port iDRAC exclusively manages the iDRAC Direct USB port with no host visibility. You
can set this option to ON or OFF. When set to OFF, iDRAC does not detect any
USB devices that are installed in this managed port. This option is set to On by
default.
Integrated Network Card1 It enables or disables the integrated network card. When this option is set to
Disabled, the card is not available to the operating system.
NOTE: If set to Disabled (operating system), the Integrated NICs might still be
available for shared network access by iDRAC.

Embedded NIC1 and NIC2 It enables or disables the operating system interface of the Embedded NIC1 and
NIC2 controller. If set to Disabled (OS), the NIC may still be available for shared
network access by the embedded management controller. Configure the Embedded
NIC1 and NIC2 option by using the NIC management utilities of the system. This
option is set to Enabled by default.
I/OAT DMA Engine It enables or disables the I/O Acceleration Technology (I/OAT) option. I/OAT are
DMA features that accelerate network traffic and lower CPU utilization. Enable this
option only if the hardware and software support the feature. By default, this option
is set to Disabled.
Embedded Video Controller It enables or disables the use of Embedded Video Controller as the primary display.
When set to Enabled, the Embedded Video Controller is the primary display even
if add-in graphic cards are installed. When set to Disabled, an add-in graphics card
is used as the primary display. The BIOS output displays to both the primary add-in
video and the embedded video during the POST and preboot environment. The
embedded video is disabled right before the operating system boots. This option is
set to Enabled by default.
NOTE: When multiple add-in graphics cards are installed in the system, the first
card that is discovered during the PCI enumeration is set as the primary video.
To control which card is recognized as the primary video card, rearrange the
cards in the slots.

I/O Snoop HoldOff Response It selects the number of cycles PCI I/O can withhold snoop requests from the
CPU, to allow time to complete its own write to LLC. This setting can help improve
performance on workloads where throughput and latency are critical. The options
available are 256 Cycles, 512 Cycles, 1K Cycles, 2K Cycles, 4K Cycles, 8K
Cycles, 16K Cycles, 32K Cycles, 64K Cycles and 128K Cycles. This option is
set to 2K Cycles by default.
Current State of Embedded It displays the current state of the embedded video controller. The Current State
Video Controller of Embedded Video Controller option is a read-only field. If the Embedded Video
Controller is the only display option in the system and no other add-in graphics cards
are installed, the Embedded Video Controller is automatically used as the primary
display even if the Embedded Video Controller setting is set to Disabled.
SR-IOV Global Enable It enables or disables the BIOS configuration of Single Root I/O Virtualization (SR-
IOV) devices. This option is set to Disabled by default.
OS Watchdog Timer If your system stops responding, this watchdog timer aids in the recovery of
your operating system. When this option is set to Enabled, the operating system



Table 52. Integrated Devices details (continued)
Option Description
initializes the timer. When this option is set to Disabled (the default), the timer does not
affect the system.
Empty Slot Unhide It enables or disables the root ports of all the empty slots that are accessible to the
BIOS and operating system. This option is set to Disabled by default.
Slot Disablement It enables or disables the available PCIe slots on your system. The slot disablement
feature controls the configuration of the PCIe cards that are installed in the specified
slot. Slots must be disabled only when the installed peripheral card prevents booting
into the operating system or causes delays in system startup. If the slot is disabled,
both the Option ROM and UEFI drivers are disabled. Only slots that are present on
the system are available for control. When this option is set to boot driver disabled,
both the Option ROM and UEFI driver from the slot do not run during POST. The
system does not boot from the card, and the preboot services are not available.
However, the card is available to the operating system. Slot n: Enables the slot, disables
the slot, or disables only the boot driver for PCIe slot n. This option is set to Enabled by
default.
Slot Bifurcation Auto Discovery Bifurcation Settings allows Platform Default Bifurcation, Auto
Discovery of Bifurcation, and Manual bifurcation Control.
This option is set to Platform Default Bifurcation by default. The slot bifurcation
field is accessible when set to Manual bifurcation Control and is unavailable when
set to Platform Default Bifurcation and Auto Discovery of Bifurcation.
NOTE: Slot bifurcation is supported on PCIe slots only; it does not support slot
types from Paddle card to Riser or Slimline connector to Riser.

Serial communications
The serial port is optional in VxRail. The Serial Communication option is applicable only if the serial COM port is installed in the
system.
1. To view the Serial Communication screen, power on the system and press F2.
2. Click System Setup Main Menu, System BIOS, and then Serial Communication.

Table 53. Serial communication details


Option Description
Serial Communication It enables the serial communication options. Selects serial communication devices (Serial Device
1 and Serial Device 2) in the BIOS. You can also enable the BIOS console redirection and specify
the port address. The options available for System without serial COM port (DB9) are On
without Console Redirection, On with Console Redirection, Off. By default, this option is
set to On without Console Redirection.
Serial Port Address It enables you to set the port address for serial devices. This option is set to either COM1 or
COM2 for the serial device (COM1=0x3F8,COM2=0x2F8) and set to COM1 by default.
NOTE: For the Serial Over LAN (SOL) feature, you can only use Serial Device 2. To use
console redirection by SOL, configure the same port address for console redirection and the
serial device.

NOTE: Every time the system boots, the BIOS syncs the serial MUX setting that is saved
in iDRAC. The serial MUX setting can be independently changed in iDRAC. Loading the BIOS
default settings from within the BIOS setup utility may not always revert the serial MUX
setting to the default setting of Serial Device 1.

External Serial This option allows you to associate the External Serial Connector to Serial Device 1, Serial
Connector Device 2, or the Remote Access Device. This option is set to Serial Device 1 by default.



Table 53. Serial communication details (continued)
Option Description

NOTE: Only Serial Device 2 can be used for Serial Over LAN (SOL). To use console
redirection by SOL, configure the same port address for console redirection and the serial
device.

NOTE: Every time the system boots, the BIOS syncs the serial MUX setting that is saved
in iDRAC. The serial MUX setting can be independently changed in iDRAC. Loading the BIOS
default settings from within the BIOS setup utility may not always revert this setting to the
default setting of Serial Device 1.

Failsafe Baud Rate It specifies the failsafe baud rate for console redirection. The BIOS attempts to determine the
baud rate automatically. This failsafe baud rate is used only if the attempt fails, and the value
must not be changed. This option is set to 115200 by default.
Remote Terminal Type It sets the remote console terminal type. This option is set to VT100/VT220 by default.
Redirection After Boot It enables or disables the BIOS console redirection when the operating system is loaded. This
option is set to Enabled by default.

System profile settings


In the BIOS, access System Profile Settings to view details.
1. To view the System Profile Settings screen, power on the system and press F2.
2. Click System Setup Main Menu, System BIOS, and then System Profile Settings.

Table 54. System profile settings


Option Description
System Profile It sets the system profile. When the System Profile option is set to a mode other than
Custom, the BIOS automatically sets the rest of the options. You can change the rest of
the options only when the mode is set to Custom. This option is set to Performance Per
Watt (DAPC) by default. Other options include Custom, Performance, Performance Per
Watt (OS), and Workstation Performance.
NOTE: All the parameters on the system profile setting screen are available only when
the System Profile option is set to Custom.

CPU Power Management It sets the CPU power management. This option is set to System DBPM (DAPC) by
default. Other options include Maximum Performance, OS DBPM.
Memory Frequency It sets the speed of the system memory. You can select Maximum Performance, Maximum
Reliability, or a specific speed. This option is set to Maximum Performance by default.
Turbo Boost It enables or disables the processor to operate in the turbo boost mode. This option is set to
Enabled by default.
Energy Efficient Turbo Energy-Efficient Turbo (EET) is a mode of operation where the processor core frequency
is adjusted within the turbo range based on the workload. This option is set to Enabled by
default.
C1E It enables or disables the processor to switch to a minimum performance state when it is idle.
This option is set to Enabled by default.
C States It enables or disables the processor to operate in all available power states. C States allow
the processor to enter lower power states when idle. When set to Enabled (operating
system controlled), or Autonomous (if hardware control is supported), the processor
operates in all available Power States to save power. The Enabled and Autonomous
settings may increase memory latency and frequency jitter. This option is set to Enabled
by default.
Memory Patrol Scrub It sets the memory patrol scrub mode. This option is set to Standard by default.
Memory Refresh Rate It sets the memory refresh rate to either 1x or 2x. This option is set to 1x by default.



Table 54. System profile settings (continued)
Option Description
Uncore Frequency It enables you to select the Uncore Frequency option. Dynamic mode enables the
processor to optimize power resources across cores and uncores during runtime. The
Energy Efficiency Policy option influences the optimization of the uncore frequency to
either save power, or to optimize the performance.
Energy Efficient Policy It enables you to select the Energy Efficient Policy option. The CPU uses the setting to
manipulate the internal behavior of the processor and determines whether to target higher
performance or better power savings. This option is set to Balanced Performance by
default.
Monitor/Mwait It enables the Monitor/Mwait instructions in the processor. This option is set to Enabled for
all system profiles, except Custom by default.
NOTE: This option can be disabled when System Profile is set to Custom.

NOTE: When the C States option is set to Enabled in the Custom mode, changing the
Monitor/Mwait setting does not impact the system power or performance.

Workload Profile This option allows the user to specify the targeted workload of a server and allows
performance optimization that is based on the workload type. This option is set to Not
Configured by default.
CPU Interconnect Bus It enables or disables the CPU Interconnect Bus Link Power Management. This option is set
Link Power Management to Enabled by default.
PCI ASPM L1 Link Power It enables or disables the PCI ASPM L1 Link Power Management. This option is set to
Management Enabled by default.
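
To confirm the configured profile without rebooting into System Setup, the BIOS attributes can be inspected as in the earlier sketches. The attribute name SysProfile used below is an assumption based on common Dell BIOS attribute naming; the connection details are placeholders.

    import requests

    IDRAC = "https://192.168.0.120"   # placeholder iDRAC IP address
    CREDS = ("root", "calvin")

    attrs = requests.get(
        f"{IDRAC}/redfish/v1/Systems/System.Embedded.1/Bios",
        auth=CREDS, verify=False,
    ).json().get("Attributes", {})

    # "SysProfile" is an assumed attribute name for the System Profile setting.
    print("System profile:", attrs.get("SysProfile", "<attribute not found>"))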

System security
In the BIOS, access System Security to view details.
1. To view the System Security screen, power on the system and press F2.
2. Click System Setup Main Menu, System BIOS, and then System Security.

Table 55. System security details


Option Description
CPU AES-NI It improves the speed of applications by performing encryption and decryption by using the
Advanced Encryption Standard Instruction Set (AES-NI). This option is set to Enabled by default.
System Password It sets the system password. This option is read-only if the password jumper is not installed in the
system.
Setup Password It sets the setup password. This option is read-only if the password jumper is not installed in the
system.
Password Status It locks the system password. This option is set to Unlocked by default.
TPM Information It indicates the type of Trusted Platform Module, if present.

Table 56. TPM 2.0 security information


TPM information Description
TPM Security NOTE: The TPM menu is available only when the TPM module is installed.

It allows you to control the reporting mode of the TPM. When set to Off, the presence of the TPM
is not reported to the operating system. When set to On, the presence of the TPM is reported to the
operating system. The TPM Security option is set to Off by default.
When TPM 2.0 is installed, the TPM Security option is set to On or Off. This option is set to Off by
default.



Table 56. TPM 2.0 security information (continued)
TPM information Description
TPM Information It indicates the type of Trusted Platform Module, if present.
TPM Firmware It indicates the firmware version of the TPM.
TPM Hierarchy It enables, disables, or clears the storage and endorsement hierarchies. When set to Enabled, the
storage and endorsement hierarchies can be used.
When set to Disabled, the storage and endorsement hierarchies cannot be used.
When set to Clear, the storage and endorsement hierarchies are cleared of any values, and then
reset to Enabled.
TPM Advanced It specifies TPM Advanced Settings details.
Settings

Table 57. System security details


Option Description
Intel TXT It enables you to set the Intel Trusted Execution Technology (TXT) option. Virtualization technology
and TPM Security must be enabled with Preboot measurements to enable the Intel TXT option. This
option is set to Off by default. It is set On for Secure Launch (Firmware Protection) support on
Windows 2022.
Memory It enables or disables the Intel Total Memory Encryption (TME) and Multitenant (Intel TME-MT).
Encryption When the option is set to Disabled, BIOS disables both TME and MK-TME technology. When the
option is set to Single Key, the BIOS enables the TME technology. When the option is set to
Multiple Keys, BIOS enables the TME-MT technology. This option is set to Disabled by default.
This setting can be enabled only if the CPU Physical Address Limit is disabled.
TME Encryption It allows the option to bypass the Intel Total Memory Encryption. This option is set to Disabled by
Bypass default.
Intel SGX It enables you to set the Intel Software Guard Extension (SGX) option. To enable the Intel SGX
option, the processor:
● Must be SGX capable.
● Memory population must be compatible (minimum x8 identical DIMM1 to DIMM8 per CPU
socket).
● Do not support persistent memory configuration.
● Memory operating mode must be set to Optimizer mode.
● Memory encryption must be enabled.
● Node interleaving must be disabled.
When this option is set to Off, BIOS disables the SGX technology. When this option is set to On, the
BIOS enables the SGX technology. This option is set to Off by default.
Power Button It enables or disables the power button on the front of the system. This option is set to Enabled by
default.
AC Power It sets how the system behaves after AC power is restored to the system. This option is set to Last
Recovery by default.
NOTE: The host system will not power on until the iDRAC Root of Trust (RoT) function is
completed. Host power-on is delayed by 90 seconds after AC power is applied.

AC Power It sets the time delay for the system to power on after AC power is restored to the system. This
Recovery Delay option is set to Immediate by default. When this option is set to Immediate, there is no delay for
power-up. When this option is set to Random, the system creates a random delay for power-up.
When this option is set to User Defined, the system powers on after the manually defined delay time.
User Defined It sets the User Defined Delay option when the User Defined option for AC Power Recovery
Delay (120 s to Delay is selected. The AC recovery time adds approximately 50 seconds to the iDRAC root of trust
600 s) time.
UEFI Variable This option provides various degrees of securing UEFI variables. When set to Standard (the
Access default), UEFI variables are accessible in the operating system per the UEFI specification. When



Table 57. System security details (continued)
Option Description
set to Controlled, selected UEFI variables are protected in the environment. New UEFI boot entries
are placed at the end of the current boot order.
In-Band When set to Disabled, the Management Engine (ME), HECI devices, and the system IPMI devices
Manageability are hidden from the operating system. Hiding the ME and the devices from the operating system
Interface prevents changes to the ME power capping settings, and blocks access to all in-band management
tools. All management should then be performed out-of-band. This option is set to Enabled by
default.
NOTE: The BIOS update requires HECI devices to be operational, and DUP updates require IPMI
interface to be operational. Set this setting to Enabled to avoid updating errors.

SMM Security It enables or disables the UEFI SMM security mitigation protections. It is set to Disabled by default.
Mitigation
Secure Boot It enables Secure Boot, where the BIOS authenticates each preboot image by using the certificates
in the Secure Boot Policy. Secure Boot is set to Disabled by default.
Secure Boot When the Secure Boot policy is set to Standard, the BIOS uses the system manufacturer key and
Policy certificates to authenticate preboot images. When the Secure Boot policy is set to Custom, the
BIOS uses the user-defined key and certificates. The secure Boot policy is set to Standard by
default.
Secure Boot Mode It configures how the BIOS uses the Secure Boot Policy Objects, such as PK, KEK, db, or dbx.
If the current mode is set to Deployed Mode, the available options are User Mode and Deployed
Mode. If the current mode is set to User Mode, the available options are User Mode, Audit Mode,
and Deployed Mode.
Below are the details of different boot modes available in the Secure Boot Mode option.
● User Mode: In User Mode, PK must be installed, and BIOS performs signature verification on
programmatic attempts to update policy objects. The BIOS allows unauthenticated programmatic
transitions between modes.
● Audit Mode: In Audit Mode, PK is not present. BIOS does not authenticate programmatic
update to the policy objects and transitions between modes. The BIOS performs a signature
verification on preboot images. The results are logged in the image Execution Information Table,
but runs the images whether they pass or fail verification. Audit Mode is useful for programmatic
determination of a working set of policy objects.
● Deployed Mode: Deployed Mode is the most secure mode. In Deployed Mode, PK must be
installed and the BIOS performs signature verification on programmatic attempts to update policy
objects. Deployed Mode restricts the programmatic mode transitions.
Secure Boot It specifies the list of certificates and hashes that secure boot uses to authenticate images.
Policy Summary
Secure Boot It configures the Secure Boot Custom Policy. To enable this option, set the Secure Boot Policy to
Custom Policy Custom option.
Settings
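
Secure Boot status can be read through the standard Redfish SecureBoot resource, as in this hedged sketch; SecureBootEnable and SecureBootCurrentBoot are standard schema properties, and the connection details are placeholders.

    import requests

    IDRAC = "https://192.168.0.120"   # placeholder iDRAC IP address
    CREDS = ("root", "calvin")

    sb = requests.get(
        f"{IDRAC}/redfish/v1/Systems/System.Embedded.1/SecureBoot",
        auth=CREDS, verify=False,
    ).json()

    print("Secure Boot enabled:     ", sb.get("SecureBootEnable"))
    print("Secure Boot current boot:", sb.get("SecureBootCurrentBoot"))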

Create a system and setup password


Ensure that the password jumper is enabled. The password jumper enables or disables the system password and setup password
features. For more information, see the System board jumper settings section.
If the password jumper setting is disabled, the existing system password and setup password are deleted and you need not
provide the system password to boot the system.
1. To enter System Setup, press F2 immediately after turning on or rebooting your system.
2. On the System Setup Main Menu screen, click System BIOS and then System Security.
3. On the System Security screen, verify that Password Status is set to Unlocked.
4. In the System Password field, type your system password, and press Enter or Tab.
A password can have up to 32 characters.
5. Reenter the system password, and click OK.



6. In the Setup Password field, type your setup password and press Enter or Tab.
7. Reenter the setup password, and click OK.
8. Press Esc to return to the System BIOS screen. Press Esc again.
A message prompts you to save the changes.
Password protection does not take effect until the system reboots.

Secure your system


If you have assigned a setup password, the system accepts your setup password as an alternate system password.
1. Power on or reboot your system.
2. Enter the system password and press Enter.
When Password Status is set to Locked, enter the system password and press Enter when prompted at reboot.
NOTE: If an incorrect system password is typed, the system displays a message and prompts you to reenter your password.
You have three attempts to type the correct password. After the third unsuccessful attempt, the system displays an error
message that the system has stopped functioning and must be turned off. Even after you turn off and restart the system,
the error message displays until the correct password is entered.

Delete or change system and setup passwords


If the Password Status is set to Locked, you cannot delete or change an existing system or setup password.
1. To enter System Setup, press F2 immediately after powering on or restarting your system.
2. On the System Setup Main Menu screen, click System BIOS, and then System Security.
3. On the System Security screen, verify that Password Status is set to Unlocked.
4. In the System Password field, alter or delete the existing system password, and then press Enter or Tab.
5. In the Setup Password field, alter or delete the existing setup password, and then press Enter or Tab.
If you change the system and setup password, a message prompts you to reenter the new password. If you delete the
system and setup password, a message prompts you to confirm the deletion.
6. Press Esc to return to the System BIOS screen, and then press Esc again. You are prompted to save the changes.
7. Select Setup Password, change, or delete the existing setup password and press Enter or Tab.
If you change the system password or setup password, a message prompts you to reenter the new password. If you delete
the system password or setup password, a message prompts you to confirm the deletion.

Enable or disable Setup Password


You can use the password status option with the Setup Password option to protect the system password from unauthorized
changes.
1. If Setup Password is set to Enabled, type the correct setup password before modifying the system setup options.
If you do not type the correct password in three attempts, the system displays the following message:
Invalid Password! Number of unsuccessful password attempts: <x> System Halted! Must power
down.

Even after you power off and restart the system, the error message is displayed until the correct password is typed.

2. If System Password is not set to Enabled and is not locked through the Password Status option, you can assign a
system password. For more information, see the System Security screen section.
You cannot disable or change an existing system password.

Redundant operating system control


1. To access the Redundant OS Control, power on the system and press F2.
2. Click System Setup Main Menu, System BIOS, and then Redundant OS Control.



Table 58. Redundant operating system control details
Option Description
Redundant OS Location   It enables you to select a backup disk from the following devices:
                        ● None
                        ● IDSDM
                        ● SATA Ports in AHCI mode
                        ● BOSS PCIe Cards (Internal M.2 Drives)
                        ● Internal USB
                        ● Internal SD card
                        NOTE: RAID configurations and NVMe cards are not included, as the BIOS is not able to
                        distinguish between individual drives in those configurations.
Redundant OS State      This option is disabled if Redundant OS Location is set to None.
                        When set to Visible, the backup disk is visible to the boot list and operating system. When set
                        to Hidden, the backup disk is disabled and is not visible to the boot list and the operating
                        system; the BIOS disables the device in hardware, and it is not accessible by the operating
                        system. This option is set to Visible by default.
Redundant OS Boot       This option is disabled if Redundant OS Location is set to None or if Redundant OS State is
                        set to Hidden.
                        When set to Visible, the backup disk is visible to the boot list and operating system. When set
                        to Hidden, the backup disk is disabled and is not visible to the boot list and the operating
                        system; the BIOS disables the device in hardware, and it is not accessible by the operating
                        system. This option is set to Visible by default.

Access miscellaneous settings


In the BIOS, access Miscellaneous Settings to view details.
1. To view the Miscellaneous Settings screen, power on the system and press F2.
2. Click System Setup Main Menu, System BIOS, and then Miscellaneous Settings.
The following table describes the options available for miscellaneous settings:

Table 59. Miscellaneous Settings details

System Time: Enables you to set the time on the system.
System Date: Enables you to set the date on the system.
Time Zone: Enables you to select the required time zone.
Daylight Savings Time: Enables or disables Daylight Savings Time. This option is set to Disabled by default.
Asset Tag: Displays the asset tag and enables you to modify it for security and tracking purposes.
Keyboard NumLock: Sets whether the system boots with NumLock enabled or disabled. This option is set to On by default and does not apply to 84-key keyboards.
F1/F2 Prompt on Error: Enables or disables the F1/F2 prompt on error. The F1/F2 prompt also includes keyboard errors. This option is set to Enabled by default.
Load Legacy Video Option ROM: Determines whether the system BIOS loads the legacy video (INT 10h) option ROM from the video controller. This option is set to Disabled by default and cannot be set to Enabled when the boot mode is UEFI and Secure Boot is enabled.
Dell Wyse P25/P45 BIOS Access: Enables or disables Dell Wyse P25/P45 BIOS access. This option is set to Enabled by default.
Power Cycle Request: Enables or disables the Power Cycle Request. This option is set to None by default.
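For settings such as the asset tag, an out-of-band alternative to the BIOS screen is the standard Redfish ComputerSystem resource exposed by iDRAC. The sketch below is illustrative only; the iDRAC address and credentials are placeholders.

# Sketch: set the system asset tag through the standard Redfish ComputerSystem
# resource rather than the BIOS Miscellaneous Settings screen.
import requests

IDRAC = "https://192.0.2.10"   # hypothetical iDRAC address
AUTH = ("root", "calvin")      # replace with your iDRAC credentials

def set_asset_tag(tag):
    url = f"{IDRAC}/redfish/v1/Systems/System.Embedded.1"
    # AssetTag is a standard, writable ComputerSystem property in the Redfish schema.
    resp = requests.patch(url, json={"AssetTag": tag}, auth=AUTH, verify=False)
    return resp.status_code

if __name__ == "__main__":
    print(set_asset_tag("LAB-RACK-42"))   # expect 200 on success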

iDRAC settings
The iDRAC settings are an interface to set up and configure the iDRAC parameters by using UEFI. You can enable or disable
various iDRAC parameters by using the iDRAC settings.
NOTE: Accessing some of the features in the iDRAC settings requires an iDRAC Enterprise license upgrade.

For more information about using iDRAC, see the Integrated Dell Remote Access Controller User's Guide at Dell iDRAC manuals.
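As a quick illustration, basic iDRAC information can also be read remotely through the standard Redfish Managers collection rather than the UEFI screen. The address and credentials in this sketch are placeholders.

# Sketch: read basic iDRAC information through the Redfish Managers collection,
# as an out-of-band alternative to the UEFI iDRAC Settings screen.
import requests

IDRAC = "https://192.0.2.10"   # hypothetical iDRAC address
AUTH = ("root", "calvin")      # replace with your iDRAC credentials

def get_idrac_summary():
    url = f"{IDRAC}/redfish/v1/Managers/iDRAC.Embedded.1"
    resp = requests.get(url, auth=AUTH, verify=False)
    resp.raise_for_status()
    data = resp.json()
    return {
        "FirmwareVersion": data.get("FirmwareVersion"),
        "Model": data.get("Model"),
        "Health": data.get("Status", {}).get("Health"),
    }

if __name__ == "__main__":
    print(get_idrac_summary())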

Device settings
The Device Settings enables you to configure device parameters such as storage controllers or network cards.

Service Tag settings


The Service Tag Settings enables you to configure the system Service Tag.

Dell Lifecycle Controller


Dell Lifecycle Controller provides advanced embedded systems management capabilities including system deployment,
configuration, update, maintenance, and diagnosis. Dell Lifecycle Controller is delivered as part of the iDRAC out-of-band
solution and Dell system embedded Unified Extensible Firmware Interface (UEFI) applications.
The Dell Lifecycle Controller provides advanced embedded system management throughout the life cycle of the system. The
Dell Lifecycle Controller is started during the boot sequence and functions independently of the operating system.
NOTE: Certain platform configurations may not support the full set of features of the Dell Lifecycle Controller.

For more information about setting up the Dell Lifecycle Controller, configuring hardware and firmware, and deploying the
operating system, see the Dell Lifecycle Controller documentation on the Dell Technologies Support Site.

Boot Manager
The Boot Manager option enables you to select boot options and diagnostic utilities.
To enter Boot Manager, power on the system and press F11.
The following table describes the available boot manager options:

Table 60. Options on the Boot Manager screen


Option Description
Continue Normal Boot The system attempts to boot to devices starting with the first item in the boot order. If
the boot attempt fails, the system goes to the next item in the boot order until the boot
is successful, or no more boot options are found.
One-shot Boot Menu Enables you to access the boot menu, where you can select a one-time boot device to
boot from.
Launch System Setup Enables you to access System Setup.
Launch Lifecycle Controller Exits the Boot Manager and invokes the Dell Lifecycle Controller program.

System Utilities Enables you to launch the System Utilities menu such as Launch Diagnostics, BIOS
update File Explorer, Reboot System.

PXE boot
You can use the Preboot Execution Environment (PXE) option to boot and configure networked systems remotely.
To access the PXE boot option, boot the system and press F12 during POST instead of using the standard boot sequence from BIOS Setup. This option does not display a menu or allow management of network devices.
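As an alternative to pressing F12 at the console, a one-time PXE boot can usually be requested out of band through the standard Redfish boot-override properties, for example when reimaging nodes remotely. The following sketch uses placeholder iDRAC details and still requires a server restart for the override to apply.

# Sketch: request a one-time PXE boot using the standard Redfish
# BootSourceOverride properties instead of pressing F12 at the console.
import requests

IDRAC = "https://192.0.2.10"   # hypothetical iDRAC address
AUTH = ("root", "calvin")      # replace with your iDRAC credentials

def set_one_time_pxe_boot():
    url = f"{IDRAC}/redfish/v1/Systems/System.Embedded.1"
    payload = {
        "Boot": {
            "BootSourceOverrideTarget": "Pxe",     # standard Redfish value
            "BootSourceOverrideEnabled": "Once",   # applies to the next boot only
        }
    }
    resp = requests.patch(url, json=payload, auth=AUTH, verify=False)
    return resp.status_code

if __name__ == "__main__":
    print(set_one_time_pxe_boot())   # expect 200 on success; reboot the server to apply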



6
Configuration information
This section outlines the minimum system configuration necessary to run power-on self-test (POST). It also describes the
system management configuration validation.
The components that are listed below are the minimum configuration to POST:
● One processor in processor socket 1
● One memory module (DIMM) in slot A1
● One power supply unit
● System board + LOM/OCP card + RIO card

Configuration validation
When the system is powered on, information about the following components is obtained from the CPLD and backplane memory maps and then analyzed:
● Installed cables
● Risers
● Backplanes
● Power supplies
● Floating card (fPERC, adapter PERC, BOSS)
● Processor
This information forms a unique configuration. iDRAC maintains qualified configurations that are stored in a table. iDRAC
compares the new configuration to the configurations in the table.
One or more sensors are assigned to each of the configuration elements. During POST, any configuration validation error
is logged in the System Event Log (SEL) or LifeCycle (LC) log. The reported events are categorized in the following
configuration validation error table:

Table 61. Configuration validation error

Config Error
● Description: A configuration element within the closest match contains something that is unexpected and does not match any Dell qualified configuration.
● Possible causes and recommendations: Wrong configuration. The elements that are reported in HWC8010 errors indicate that an element is incorrectly assembled. Check the placement of the element, such as the cable or riser, in the system.
● Examples: Config Error: Backplane cable CTRS_SRC_SA1 and BP-DST_SA1; Config Error: SL Cable PLANAR_SL7 and CTRL_DST_PA1

Config Missing
● Description: iDRAC found a configuration element missing within the closest match detected.
● Possible causes and recommendations: A missing or damaged cable, device, or part. A missing element or cable is reported in HWC8010 error logs. Install the missing element, such as the cable or riser.
● Examples: Config Missing: Float card front PERC/HBA adapter PERC/HBA; Config Missing: SL cable PLANAR_SL8 and CTRL_DST_PA1

Comm Error
● Description: A configuration element is not responding to iDRAC using the management interface while running an inventory check.
● Possible causes and recommendations: System management sideband communication. Unplug AC power, reseat the element, and replace the element if the problem persists.
● Example: Comm Error: Backplane 2
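Because configuration validation events are written to the SEL and LC logs, they can be reviewed remotely as well as at the console. The sketch below lists recent SEL entries through the iDRAC Redfish log service; the service path shown is an assumption and can vary by iDRAC firmware release, and the address and credentials are placeholders.

# Sketch: list recent System Event Log (SEL) entries through the iDRAC Redfish
# log service, one way to spot HWC8010/HWC8011 configuration events remotely.
import requests

IDRAC = "https://192.0.2.10"   # hypothetical iDRAC address
AUTH = ("root", "calvin")      # replace with your iDRAC credentials

def recent_sel_entries(limit=10):
    # Assumed log-service path; it may differ between iDRAC firmware releases.
    url = f"{IDRAC}/redfish/v1/Managers/iDRAC.Embedded.1/LogServices/Sel/Entries"
    resp = requests.get(url, auth=AUTH, verify=False)
    resp.raise_for_status()
    members = resp.json().get("Members", [])
    # Newest entries are typically at the end of the collection.
    return [(m.get("Created"), m.get("Severity"), m.get("Message")) for m in members[-limit:]]

if __name__ == "__main__":
    for created, severity, message in recent_sel_entries():
        print(created, severity, message)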

Error messages
The error messages that are displayed on the screen during POST or captured in the system event log (SEL), or LifeCycle (LC)
log.

Table 62. Error message HWC8010


Error code HWC8010
Message The System Configuration Check operation resulted in the following issue involving the indicated component type.

Arguments Riser, floating card (fPERC, adapter PERC, BOSS), backplane, processor, cable, or other components.
Detailed Description The issue that is identified in the message is observed in the System Configuration Check operation.
Recommended Do the following and retry the operation:
Response Action 1. Disconnect the input power.
2. Check for proper cable connection and component placement. If the issue persists, contact the
service provider.
Category System Health (HWC = Hardware Config)
Severity Critical
Trap/EventID 2329

Table 63. Error message HWC8011


Error code HWC8011
Message The System Configuration Check operation resulted in multiple issues involving the indicated component type.

Arguments Riser, floating card (fPERC, adapter PERC, BOSS), backplane, processor, cable, or other components.
Detailed Description Multiple issues are observed in the System Configuration Check operation.
Recommended Do the following and retry the operation:
Response Action 1. Disconnect the input power.
2. Check for proper cable connection and component placement. If the issue persists, contact the
service provider.
Category System Health (HWC = Hardware Config)
Severity Critical

7
Component replacement guidelines
You can add or replace hardware components on your VxRail, such as solid-state drives (SSDs), power supply units (PSUs), and system memory.
See the Supported hardware components table for the components that you can replace. In addition to these components, some hardware components require you to contact Dell Technologies Support to arrange for repair or replacement.
Before you proceed with the replacement, go to SolVe and generate the replacement procedure for the component that you want to replace. For more information about how to use SolVe, see Using SolVe Online for VxRail procedures.
To ensure optimal performance, follow the guidelines that are mentioned in this section before installing or replacing any
component in your VxRail.

Use SolVe Online for VxRail procedures


To avoid potential data loss, always use SolVe Online for VxRail to generate procedures before you replace any hardware
components or upgrade software.
CAUTION: If you do not use SolVe Online for VxRail to generate procedures to replace hardware components or
perform software upgrades, data loss may occur for VxRail.
You must have a Dell Technologies Support account to use SolVe Online for VxRail.

Supported hardware components


See SolVe Online for VxRail for hardware-specific information.

Table 64. FRU and CRU components


Hardware Components Customer Replaceable Unit (CRU) Field Replaceable Unit (FRU)
BOSS-N1 Yes No
PCIe Network Interface Cards Yes No
Power Supply Unit Yes No
Processor No Yes
SSD (NVMe) Yes No
SSD (SAS or SATA) Yes No
Integrated Storage Controller Card (HBA355i) Yes No
GPU Yes No
Air Shroud Yes No
Cooling Fan Yes No
System Board No Yes
System Memory Yes No
System Battery Yes No
Backplane Yes No



NOTE: The table is not an exhaustive list of components.

System memory guidelines


The VxRail VP-760 and VxRail VS-760 support DDR5 registered DIMMs (RDIMMs).
Your system memory is organized into eight channels per processor (two memory sockets per channel), for a total of 16 memory sockets per processor and 32 memory sockets per system.

Figure 21. Memory channels

The following table describes how the memory channels are organized:

Table 65. Memory channels

Processor 1:
● Channel A: slots A1 and A9
● Channel B: slots A7 and A15
● Channel C: slots A3 and A11
● Channel D: slots A5 and A13
● Channel E: slots A4 and A12
● Channel F: slots A6 and A14
● Channel G: slots A2 and A10
● Channel H: slots A8 and A16

Processor 2:
● Channel A: slots B1 and B9
● Channel B: slots B7 and B15
● Channel C: slots B3 and B11
● Channel D: slots B5 and B13
● Channel E: slots B4 and B12
● Channel F: slots B6 and B14
● Channel G: slots B2 and B10
● Channel H: slots B8 and B16

The following table describes the supported memory matrix:

Table 66. Supported memory matrix

RDIMMs rated at DDR5 (1.1 V), 4800 MT/s:
● 1R, 16 GB: 4800 MT/s at 1 DIMM per channel (DPC), 4400 MT/s at 2 DPC
● 2R, 32 GB or 64 GB: 4800 MT/s at 1 DPC, 4400 MT/s at 2 DPC
● 4R, 128 GB: 4800 MT/s at 1 DPC, 4400 MT/s at 2 DPC
● 8R, 256 GB: 4800 MT/s at 1 DPC, 4400 MT/s at 2 DPC

RDIMMs rated at DDR5 (1.1 V), 5600 MT/s:
● 1R, 16 GB: 5600 MT/s at 1 DPC, 4400 MT/s at 2 DPC
● 2R, 32 GB, 64 GB, or 96 GB: 5600 MT/s at 1 DPC, 4400 MT/s at 2 DPC
● 4R, 128 GB: 5600 MT/s at 1 DPC, 4400 MT/s at 2 DPC

NOTE: 256 GB RDIMM with 4800 MT/s is supported with VP-760 only.

NOTE: 5600 MT/s RDIMMs are applicable for fifth-generation Intel Xeon Scalable processors only.

NOTE: The processor may reduce the performance of the rated DIMM speed.
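The operating speeds in Table 66 follow one rule: a channel with one DIMM runs at the DIMM's rated speed, and a channel with two DIMMs drops to 4400 MT/s. A small sketch of that rule:

# Sketch: the operating-speed rule from Table 66 expressed as a helper.
# 1 DIMM per channel (DPC) runs at the rated speed (4800 or 5600 MT/s);
# 2 DPC drops the channel to 4400 MT/s.
def operating_speed(rated_mts, dimms_per_channel):
    if rated_mts not in (4800, 5600):
        raise ValueError("Table 66 lists only 4800 MT/s and 5600 MT/s RDIMMs")
    if dimms_per_channel == 1:
        return rated_mts
    if dimms_per_channel == 2:
        return 4400
    raise ValueError("Each channel has two sockets, so DPC must be 1 or 2")

print(operating_speed(5600, 1))  # 5600
print(operating_speed(5600, 2))  # 4400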

General memory module installation guidelines


To ensure optimal performance of your system, observe the following general guidelines when configuring your system memory.
If your system memory configuration fails to observe these guidelines, your system might not boot, stop responding during
memory configuration, or operate with reduced memory.
The memory bus may operate at speeds of 5600 MT/s or 4800 MT/s depending on the following factors:
● Selected system profile. For example, Performance, Performance Per Watt Optimized (OS), or Custom (can be run at high
speed or lower).
● Maximum supported DIMM speed of the processors.
● Maximum supported speed of the DIMMs.
NOTE: MT/s indicates DIMM speed in megatransfers per second.

NOTE: Fault Resilient Memory supports only eight and sixteen DIMMs per processor.

The system supports Flexible Memory Configuration, enabling the system to be configured and run in any valid chipset
architectural configuration. The following are the recommended guidelines for installing memory modules:
● All DIMMs must be DDR5.
● All DDR5 DIMMs must be the same speed per processor socket.
● Mixing of DIMMs is not allowed.



● If memory modules with different speeds are installed, they operate at the speed of the slowest installed memory module.
● Populate memory module sockets only if a processor is installed.
○ For single-processor systems, sockets A1 to A16 are available.
○ For dual-processor systems, sockets A1 to A16 and sockets B1 to B16 are available.
○ A minimum of one DIMM must be populated for each installed processor.
● In Optimizer Mode, the DRAM controllers operate independently in the 64-bit mode and provide optimized memory
performance.

Table 67. Memory population information

● Single processor: populate slots in the order A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16. 1, 2, 4, 6, 8, 12, or 16 DIMMs are allowed.
● Dual processor (start with processor 1; the processor 1 and processor 2 population should match): populate slots in the order A1, B1, A2, B2, A3, B3, A4, B4, A5, B5, A6, B6, A7, B7, A8, B8, A9, B9, A10, B10, A11, B11, A12, B12, A13, B13, A14, B14, A15, B15, A16, B16. 2, 4, 8, 12, 16, 24, or 32 DIMMs are supported per system.
● Populate all the sockets with white release tabs first, followed by the sockets with black release tabs.
● Unbalanced or odd memory configurations result in a performance loss, and the system may not identify the memory
modules being installed. Always populate memory channels identically with equal DIMMs for the best performance.
● Supported RDIMM configurations are 1, 2, 4, 6, 8, 12, or 16 DIMMs per processor.
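The population counts above reduce to a simple check that can be scripted before ordering or installing DIMMs. The following sketch validates only the counts and the processor 1/processor 2 matching rule; it does not model socket order or release-tab color.

# Sketch: sanity-check a planned DIMM population against the counts and
# matching rules listed above (VP-760/VS-760 RDIMM guidelines).
ALLOWED_PER_PROCESSOR = {1, 2, 4, 6, 8, 12, 16}

def validate_population(dimms_cpu1, dimms_cpu2=None):
    """Return a list of guideline violations; an empty list means the plan looks valid."""
    issues = []
    if dimms_cpu1 not in ALLOWED_PER_PROCESSOR:
        issues.append(f"Processor 1: {dimms_cpu1} DIMMs is not a supported count")
    if dimms_cpu2 is not None:
        if dimms_cpu2 not in ALLOWED_PER_PROCESSOR:
            issues.append(f"Processor 2: {dimms_cpu2} DIMMs is not a supported count")
        if dimms_cpu1 != dimms_cpu2:
            issues.append("Dual-processor systems: processor 1 and processor 2 population should match")
    return issues

print(validate_population(8, 8))   # []
print(validate_population(3))      # reports an unsupported count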

Expansion card installation guidelines


The following figure shows the expansion card slot connectors on the VxRail VP-760 and VxRail VS-760 system board:

Figure 22. Expansion card slot connectors


1. Riser 4 slot 2. Riser 3 slot
3. Riser 2 slot 4. Riser 1 slot



Figure 23. Riser 1P - Full length (FL)
1. Slot 2

Figure 24. Riser 1R


1. Slot 1
2. Slot 2



Figure 25. Riser 1Q
1. Slot 1
2. Slot 2

Figure 26. Riser 2A


1. Slot 6
2. Slot 3



Figure 27. Riser 3A
1. Slot 5

Figure 28. Riser 3B


1. Slot 4
2. Slot 5



Figure 29. Riser 4P
1. Slot 7

Figure 30. Riser 4P - Full length (FL)


1. Slot 7



Figure 31. Riser 4Q
1. Slot 7
2. Slot 8

Figure 32. Riser 4R


1. Slot 7
2. Slot 8

NOTE: The expansion-card slots are not hot-swappable.

The following table provides guidelines for installing expansion cards to ensure proper cooling and mechanical fit. The expansion
cards with the highest priority should be installed first using the slot priority indicated. All the other expansion cards should be
installed in the card priority and slot priority order.

Table 68. Expansion card riser configurations
Each entry lists the riser, PCIe slot, form factor, controlling processor, and the electrical bandwidth or physical connector of the slot.

Configuration 2: 4 x8 FH (Gen5) + 2 x8 FH + 2 x16 LP
● R1Q slot 1: full height, processor 1, PCIe Gen5 x8 (x16 connector)
● R1Q slot 2: full height, processor 1, PCIe Gen5 x8 (x16 connector)
● R2A slot 3: low profile, processor 1, PCIe Gen4 x16 (x16 connector)
● R2A slot 6: low profile, processor 2, PCIe Gen4 x16 (x16 connector)
● R3B slot 4: full height, processor 2, PCIe Gen4 x8 (x16 connector)
● R3B slot 5: full height, processor 2, PCIe Gen4 x8 (x16 connector)
● R4Q slot 7: full height, processor 2, PCIe Gen5 x8 (x16 connector)
● R4Q slot 8: full height, processor 2, PCIe Gen5 x8 (x16 connector)

Configuration 3-2: 2 x16 LP + 2 x8 FH + 2 x16 DW (Gen5)
● R1P slot 2: full height (DW), processor 1, PCIe Gen5 x16 (x16 connector)
● R2A slot 3: low profile, processor 1, PCIe Gen4 x16 (x16 connector)
● R2A slot 6: low profile, processor 2, PCIe Gen4 x16 (x16 connector)
● R3B slot 4: full height, processor 2, PCIe Gen4 x8 (x16 connector)
● R3B slot 5: full height, processor 2, PCIe Gen4 x8 (x16 connector)
● R4P slot 7: full height (DW), processor 2, PCIe Gen5 x16 (x16 connector)

Configuration 5-1: 2 x16 LP + 2 x16 FH + 2 x16 FH (Gen5)
● R1R slot 1: full height, processor 1, PCIe Gen4 x16 (x16 connector)
● R1R slot 2: full height, processor 1, PCIe Gen5 x16 (x16 connector)
● R2A slot 3: low profile, processor 1, PCIe Gen4 x16 (x16 connector)
● R2A slot 6: low profile, processor 2, PCIe Gen4 x16 (x16 connector)
● R3A slot 5: full height, processor 2, PCIe Gen4 x16 (x16 connector)
● R4P slot 7: full height, processor 2, PCIe Gen5 x16 (x16 connector)

Configuration 6: 2 x16 LP + 2 x8 FH (Gen5)
● R2A slot 3: low profile, processor 1, PCIe Gen4 x16 (x16 connector)
● R2A slot 6: low profile, processor 2, PCIe Gen4 x16 (x16 connector)
● R4Q slot 7: full height, processor 2, PCIe Gen5 x8 (x16 connector)
● R4Q slot 8: full height, processor 2, PCIe Gen5 x8 (x16 connector)

Configuration 9: 3 x8 FH (Gen5) + 1 x16 LP
● R1Q slot 1: full height, processor 1, PCIe Gen5 x8 (x16 connector)
● R1Q slot 2: full height, processor 1, PCIe Gen5 x8 (x16 connector)
● R2A slot 3: low profile, processor 1, PCIe Gen4 x16 (x16 connector)
● R4R slot 7: full height, processor 1, PCIe Gen5 x8 (x16 connector)

Configuration 10-2: 1 x16 DW (Gen5) + 2 x16 LP + 1 x8 FH (Gen5) + 1 x16 FH (Gen5)
● R1P slot 2: full height (DW), processor 1, PCIe Gen5 x16 (x16 connector)
● R2A slot 3: low profile, processor 1, PCIe Gen4 x16 (x16 connector)
● R2A slot 6: low profile, processor 1, PCIe Gen4 x16 (x16 connector)
● R4R slot 7: full height, processor 1, PCIe Gen5 x8 (x16 connector)

Configuration 13: 1 x16 LP (Gen5)
● R2A slot 3: low profile, processor 1, PCIe Gen5 x16 (x16 connector)

VxRail VP-760 supports the following riser configurations:


● Configuration 2: R1Q+R2A+R3B+R4Q
● Configuration 3-2: R1P+R2A+R3B+R4P (FL)
● Configuration 5-1: R1R+R2A+R3A+R4P (HL)
● Configuration 6: R2A+R4Q

● Configuration 9: R1Q+R2A+R4R
● Configuration 10-2: R1P+R2A+R4R (FL)
VxRail VS-760 supports the following riser configurations:
● Configuration 6: R2A+R4Q
● Configuration 13: R2A

Table 69. Configuration 2: R1Q+R2A+R3B+R4Q


Card type Slot priority Maximum number of cards
Inventec (VGA) 8, 4 1
Inventec (Serial) 8, 4 1
Inventec (LOM Card) Integrated slot 1
Intel (OCP: 25Gb) Integrated slot 1
Broadcom (OCP: 25Gb) Integrated slot 1
Mellanox (OCP: 25Gb) Integrated slot 1
Broadcom (OCP: 10Gb) Integrated slot 1
Intel (OCP: 10Gb) Integrated slot 1
Foxconn (BOSS) Integrated slot 1
Foxconn (Front PERC11 H755) Integrated slot 1
Foxconn (Front PERC11 HBA355i) Integrated slot 1
NVIDIA (GPU A2) 7, 8, 4, 5, 1, 2 6
Mellanox (NIC: 100Gb) 6, 3 2
Broadcom (NIC: 100Gb) 6, 3 2
Intel (NIC: 100Gb) 6, 3 2
Mellanox (NIC: 25Gb) 5, 4, 7, 2, 1, 8 6
Mellanox (NIC: 25Gb) 6, 3 2
Intel (NIC: 25Gb) 5, 4, 7, 1, 2, 8 6
Intel (NIC: 25Gb) 6, 3 2
Intel (NIC: 25Gb) 5, 4, 7, 1, 2 5
Broadcom (Emulex) (HBA: FC64) 5, 4, 7, 1, 2, 8 6
Broadcom (Emulex) (HBA: FC64) 6, 3 2
Broadcom (Emulex) (HBA: FC32) 5, 4, 7, 1, 2, 8 6
Broadcom (Emulex) (HBA: FC32) 6, 3 2
Qlogic (Marvell) (HBA: FC32) 5, 4, 7, 1, 2, 8 6
Qlogic (Marvell) (HBA: FC32) 6, 3 2
Broadcom (NIC: 25Gb) 5, 4, 7, 1, 2, 8 6
Broadcom (NIC: 25Gb) 6, 3 2
Broadcom (NIC: 10Gb) 5, 4, 7, 1, 2, 8 6
Broadcom (NIC: 10Gb) 6, 3 2
Intel (NIC: 10Gb) 5, 4, 7, 2, 1, 8 6
Intel (NIC: 10Gb) 6, 3 2



Table 70. Configuration 3-2: R1P+R2A+R3B+R4P (FL)
Card type Slot priority Maximum number of cards
Inventec (VGA) 4 1
Inventec (Serial) 4 1
Inventec (LOM Card) Integrated slot 1
Intel (OCP: 25Gb) Integrated slot 1
Broadcom (OCP: 25Gb) Integrated slot 1
Mellanox (OCP: 25Gb) Integrated slot 1
Broadcom (OCP: 10Gb) Integrated slot 1
Intel (OCP: 10Gb) Integrated slot 1
Foxconn (BOSS) Integrated slot 1
Foxconn (Front PERC11 H755) Integrated slot 1
Foxconn (Front PERC11 HBA355i) Integrated slot 1
NVIDIA (GPU H100) 7, 2 2
NVIDIA (GPU L40S) 7, 2 2
NVIDIA (GPU L40) 7, 2 2
NVIDIA (GPU L4) 7, 2 2
NVIDIA (GPU A40) 7, 2 2
NVIDIA (GPU A30) 7, 2 2
NVIDIA (GPU A16) 7, 2 2
Mellanox (NIC: 100Gb) 7, 2 2
Mellanox (NIC: 100Gb) 6, 3 2
Broadcom (NIC: 100Gb) 6, 3 2
Broadcom (NIC: 100Gb) 7, 2 2
Intel (NIC: 100Gb) 6, 3 2
Intel (NIC: 100Gb) 7, 2 2
Intel (NIC: 25Gb) 7, 2 2
Broadcom (NIC: 25Gb) 7, 2 2
Mellanox (NIC: 25Gb) 5, 4, 7, 2 4
Mellanox (NIC: 25Gb) 6, 3 2
Intel (NIC: 25Gb) 5, 4, 7, 2 4
Intel (NIC: 25Gb) 6, 3 2
Broadcom (Emulex) (HBA: FC64) 5, 4, 7, 2 4
Broadcom (Emulex) (HBA: FC64) 6, 3 2
Broadcom (Emulex) (HBA: FC32) 5, 4, 7, 2 4
Broadcom (Emulex) (HBA: FC32) 6, 3 2
Qlogic (Marvell) (HBA: FC32) 5, 4, 7, 2 4
Qlogic (Marvell) (HBA: FC32) 6, 3 2
Broadcom (NIC: 25Gb) 5, 4, 7, 2 4

Broadcom (NIC: 25Gb) 6, 3 2
Broadcom (NIC: 10Gb) 5, 4, 7, 2 4
Broadcom (NIC: 10Gb) 6, 3 2
Broadcom (NIC: 10Gb) 5, 4, 7, 6, 3, 2 6
Intel (NIC: 10Gb) 5, 4, 7, 2 4
Intel (NIC: 10Gb) 6, 3 2

Table 71. Configuration 5-1: R1R+R2A+R3A+R4P (HL)


Card type Slot priority Maximum number of cards
Inventec (LOM card) Integrated slot 1
Broadcom (OCP: 100Gb) Integrated slot 1
Mellanox (OCP: 100Gb) Integrated slot 1
Intel (OCP: 25Gb) Integrated slot 1
Broadcom (OCP: 25Gb) Integrated slot 1
Mellanox (OCP: 25Gb) Integrated slot 1
Intel (OCP: 25Gb) Integrated slot 1
Broadcom (OCP: 25Gb) Integrated slot 1
Intel (OCP: 10Gb) Integrated slot 1
Broadcom (OCP: 10Gb) Integrated slot 1
Intel (OCP: 10Gb) Integrated slot 1
Broadcom (OCP: 10Gb) Integrated slot 1
Foxconn (BOSS-N1) Integrated slot 1
NVIDIA (GPU L4) 7, 5, 1, 2 4
Intel (GPU ATS-M) 7, 5, 1, 2 4
NVIDIA (GPU A2) 7, 5, 1, 2 4
Mellanox (FH NIC: 100Gb, 2P, Q56) 7, 5, 1, 2 4
Mellanox (LP NIC: 100Gb, 2P, Q56) 6, 3 2
Broadcom (FH NIC: 100Gb) 7, 5, 1, 2 4
Broadcom (LP NIC: 100Gb) 6, 3 2
Intel (FH NIC: 100Gb) 7, 5, 1, 2 4
Intel (LP NIC: 100Gb) 6, 3 2
Intel (FH, 2P, COMMs card: 100Gb) 7, 5, 1, 2 4
Intel (NIC: 25Gb) 7, 5, 1, 2 4
Intel (NIC: 25Gb) 7, 5, 1, 2 4
Broadcom (NIC: 25Gb) 7, 5, 1, 2 4
Intel (NIC: 25Gb) 6, 3 2
Mellanox (NIC: 25Gb) 7, 5, 1, 2 4
Mellanox (NIC: 25Gb) 6, 3 2

Intel (NIC: 25Gb) 7, 5, 1, 2 4
Intel (NIC: 25Gb) 6, 3 2
Broadcom (Emulex) (HBA: FC64) 7, 5, 1, 2 4
Broadcom (Emulex) (HBA: FC64) 6, 3 2
Broadcom (Emulex) (HBA: FC32) 7, 5, 1, 2 4
Broadcom (Emulex) (HBA: FC32) 6, 3 2
Qlogic (Marvell) (HBA: FC32) 7, 5, 1, 2 4
Qlogic (Marvell) (HBA: FC32) 6, 3 2
Broadcom (NIC: 25Gb) 7, 5, 1, 2 4
Broadcom (NIC: 25Gb) 6, 3 2
Broadcom (NIC: 10Gb) 7, 5, 1, 2 4
Broadcom (NIC: 10Gb) 6, 3 2
Broadcom (NIC: 10Gb) 7, 5, 1, 2 4
Broadcom (NIC: 10Gb) 6, 3 2
Intel (NIC: 10Gb) 7, 5, 1, 2 4
Intel (NIC: 10Gb) 6, 3 2
Intel (NIC: 10Gb) 7, 5, 1, 2 4
Intel (NIC: 10Gb) 6, 3 2

Table 72. Configuration 6: R2A+R4Q


Card type Slot priority Maximum number of cards
Inventec (VGA) 8 1
Inventec (Serial) 8 1
Inventec (LOM Card) Integrated slot 1
Intel (OCP: 25Gb) Integrated slot 1
Broadcom (OCP: 25Gb) Integrated slot 1
Mellanox (OCP: 25Gb) Integrated slot 1
Broadcom (OCP: 10Gb) Integrated slot 1
Intel (OCP: 10Gb) Integrated slot 1
Foxconn (BOSS) Integrated slot 1
Foxconn (Front PERC11 H755) Integrated slot 1
Foxconn (Front PERC11 HBA355i) Integrated slot 1
Mellanox (NIC: 100Gb) 6, 3 2
Broadcom (NIC: 100Gb) 6, 3 2
Intel (NIC: 100Gb) 6, 3 2
Mellanox (NIC: 25Gb) 7, 8 2
Mellanox (NIC: 25Gb) 6, 3 2
Intel (NIC: 25Gb) 7, 8 2

Intel (NIC: 25Gb) 7 1
Intel (NIC: 25Gb) 6, 3 2
Broadcom (Emulex) (HBA: FC64) 7, 8 2
Broadcom (Emulex) (HBA: FC64) 6, 3 2
Broadcom (Emulex) (HBA: FC32) 7, 8 2
Broadcom (Emulex) (HBA: FC32) 6, 3 2
Qlogic (Marvell) (HBA: FC32) 7, 8 2
Qlogic (Marvell) (HBA: FC32) 6, 3 2
Broadcom (NIC: 25Gb) 7, 8 2
Broadcom (NIC: 25Gb) 6, 3 2
Broadcom (NIC: 10Gb) 7, 8 2
Broadcom (NIC: 10Gb) 6, 3 2
Broadcom (NIC: 10Gb) 7, 3, 6 3
Intel (NIC: 10Gb) 7, 8 2
Intel (NIC: 10Gb) 6, 3 2

Table 73. Configuration 9: R1Q+R2A+R4R


Card type Slot priority Maximum number of cards
Inventec (VGA) 8 1
Inventec (Serial) 8 1
Inventec (LOM Card) Integrated slot 1
Intel (OCP: 25Gb) Integrated slot 1
Broadcom (OCP: 25Gb) Integrated slot 1
Mellanox (OCP: 25Gb) Integrated slot 1
Broadcom (OCP: 10Gb) Integrated slot 1
Intel (OCP: 10Gb) Integrated slot 1
Foxconn (BOSS) Integrated slot 1
Foxconn (Front PERC11 H755) Integrated slot 1
Foxconn (Front PERC11 HBA355i) Integrated slot 1
NVIDIA (GPU A2) 7, 1, 2 3
Mellanox (NIC: 100Gb) 3 1
Broadcom (NIC: 100Gb) 3 1
Intel (NIC: 100Gb) 3 1
Mellanox (NIC: 25Gb) 7, 1, 2 3
Mellanox (NIC: 25Gb) 3 1
Intel (NIC: 25Gb) 7, 1, 2 3
Intel (NIC: 25Gb) 3 1
Broadcom (Emulex) (HBA: FC64) 7, 1, 2 3

Broadcom (Emulex) (HBA: FC64) 3 1
Broadcom (Emulex) (HBA: FC32) 7, 1, 2 3
Broadcom (Emulex) (HBA: FC32) 3 1
Qlogic (Marvell) (HBA: FC32) 7, 1, 2 3
Qlogic (Marvell) (HBA: FC32) 3 1
Broadcom (NIC: 25Gb) 7, 1, 2 3
Broadcom (NIC: 25Gb) 3 1
Broadcom (NIC: 10Gb) 7, 1, 2 3
Broadcom (NIC: 10Gb) 3 1
Broadcom (NIC: 10Gb) 7, 1, 3, 2 4
Intel (NIC: 10Gb) 7, 1, 2 3
Intel (NIC: 10Gb) 3 1

Table 74. Configuration 10-2: R1P+R2A+R4R (FL)


Card type Slot priority Maximum number of cards
Inventec (VGA) 8 1
Inventec (Serial) 8 1
Inventec (LOM Card) Integrated slot 1
Intel (OCP: 25Gb) Integrated slot 1
Broadcom (OCP: 25Gb) Integrated slot 1
Mellanox (OCP: 25Gb) Integrated slot 1
Broadcom (OCP: 10Gb) Integrated slot 1
Intel (OCP: 10Gb) Integrated slot 1
Foxconn (BOSS) Integrated slot 1
Foxconn (Front PERC11 H755) Integrated slot 1
Foxconn (Front PERC11 HBA355i) Integrated slot 1
NVIDIA (GPU H100) 2 1
NVIDIA (GPU L40S) 2 1
NVIDIA (GPU L40) 2 1
NVIDIA (GPU L4) 2 1
NVIDIA (GPU A40) 2 1
NVIDIA (GPU A30) 2 1
NVIDIA (GPU A16) 2 1
Mellanox (NIC: 100Gb) 2 1
Mellanox (NIC: 100Gb) 3 1
Broadcom (NIC: 100Gb) 3 1
Broadcom (NIC: 100Gb) 2 1
Intel (NIC: 100Gb) 3 1

Intel (NIC: 100Gb) 2 1
Intel (NIC: 25Gb) 2 1
Broadcom (NIC: 25Gb) 2 1
Mellanox (NIC: 25Gb) 7, 2 2
Mellanox (NIC: 25Gb) 3 1
Intel (NIC: 25Gb) 7, 2 2
Intel (NIC: 25Gb) 3 1
Broadcom (Emulex) (HBA: FC64) 7, 2 2
Broadcom (Emulex) (HBA: FC64) 3 1
Broadcom (Emulex) (HBA: FC32) 7, 2 2
Broadcom (Emulex) (HBA: FC32) 3 1
Qlogic (Marvell) (HBA: FC32) 7, 2 2
Qlogic (Marvell) (HBA: FC32) 3 1
Broadcom (NIC: 25Gb) 7, 2 2
Broadcom (NIC: 25Gb) 3 1
Broadcom (NIC: 10Gb) 7, 2 2
Broadcom (NIC: 10Gb) 3 1
Broadcom (NIC: 10Gb) 7, 3, 2 3
Intel (NIC: 10Gb) 7, 2 2
Intel (NIC: 10Gb) 3 1

Table 75. Configuration 13: R2A


Card type Slot priority Maximum number of cards
Inventec (LOM Card) Integrated slot 1
Intel (OCP: 25Gb) Integrated slot 1
Broadcom (OCP: 25Gb) Integrated slot 1
Mellanox (OCP: 25Gb) Integrated slot 1
Broadcom (OCP: 10Gb) Integrated slot 1
Intel (OCP: 10Gb) Integrated slot 1
Foxconn (BOSS-N1) Integrated slot 1
Foxconn (Front PERC11 HBA355i) 3 1
NVIDIA (OCP: 100Gb) Integrated slot 1
Broadcom (OCP: 100Gb) Integrated slot 1
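The card priority and slot priority tables above amount to a greedy assignment: take each card in card-priority order and place it in the first free slot of its slot-priority list. The sketch below illustrates the idea with a small excerpt from Table 69; the card names and priorities used here are sample data, not a complete configuration.

# Sketch: the card-priority / slot-priority rule from the tables above expressed
# as a greedy assignment. The sample data is a small excerpt from Table 69
# (Configuration 2); extend it with the rows for your configuration.
def assign_slots(cards, slot_priority):
    """Assign each card (listed in card-priority order) to its first free priority slot."""
    used = set()
    placement = {}
    for card in cards:
        for slot in slot_priority[card]:
            if slot not in used:
                used.add(slot)
                placement[card] = slot
                break
        else:
            placement[card] = None   # no free slot left for this card
    return placement

# Excerpt of Table 69 slot priorities (Configuration 2).
SLOT_PRIORITY = {
    "NVIDIA GPU A2": [7, 8, 4, 5, 1, 2],
    "Mellanox NIC 100Gb": [6, 3],
    "Broadcom HBA FC32": [5, 4, 7, 1, 2, 8],
}

# Cards listed in installation (card-priority) order.
print(assign_slots(["NVIDIA GPU A2", "Mellanox NIC 100Gb", "Broadcom HBA FC32"], SLOT_PRIORITY))
# {'NVIDIA GPU A2': 7, 'Mellanox NIC 100Gb': 6, 'Broadcom HBA FC32': 5}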

Drive backplane
This section provides an overview of the supported drive backplanes.
Depending on your VxRail model, it supports one of the following drive backplanes:



● VxRail VP-760 supports 24 x 2.5-inch SAS, SATA, or NVMe drive backplane.
● VxRail VS-760 supports a 12 x 3.5-inch SAS or SATA drive backplane.

Figure 33. 12 x 3.5-inch drive backplane


1. BP_DST_SB1
2. BP_DST_SA1
3. BP_PWR_1 (backplane power cable to system board)

Figure 34. 24 x 2.5-inch drive backplane (front view)

Figure 35. 24 x 2.5-inch drive backplane (top view)


1. BP_CTRL
2. BP_PWR_1 (backplane power and signal cable to system board)
3. BP_DST_PA1 (PCIe/NVMe connector)
4. BP_PWR_2 (backplane power and signal cable to system board)
5. BP_DST_PB1 (PCIe/NVMe connector)
6. BP_PWR_CTRL
7. BP_DST_PA2 (PCIe/NVMe connector)
8. BP_DST_PB2 (PCIe/NVMe connector)
9. BP_DST_SB1
10. BP_SRC_SA2
11. BP_DST_SA1



8
Jumpers and connectors
This topic provides information about jumpers and switches. It also describes the connectors on the various boards in the
system. Jumpers on the system board help to disable system components and reset the passwords. To install components and
cables correctly, you must know the connectors on the system board.

System board jumpers and connectors


This section provides an overview of the system board jumpers and connectors of the VxRail VP-760 and VxRail VS-760.

Figure 36. System board jumpers and connectors

Table 76. System board jumpers and connectors


Item Connector Description
1. Rear_I/O_connector Rear I/O connector
2. J_R3_PCIE_PWR Riser 3 power connector
3. IO_RISER3 (CPU2) Riser 3
4. B9, B1, B15, B7, B11, B3, B13, B5 DIMMs for CPU 2 channels A, B, C, D

5. SL10_PCH_SA1 1 SATA connector 10
6. IO_RISER2_A (CPU1) and IO_RISER2_B (CPU2) Riser 2
7. TPM TPM connector
8. OCP OCP NIC 3.0 connector
9. SL13_CPU1_PB7 PCIe connector 13
10. BATTERY Coin cell battery
11. LOM_Connector LOM connector
12. Internal USB Internal USB connector
13. SL11_CPU1_PB7 PCIe connector 11
14. IO_RISER1 (CPU1) Riser 1
15. SIG_PWR_0 Power connector 0 - use for BP only
16. BOSS_PWR BOSS card power
17. PSU1_SIG PUCK sideband signal for Riser 1 GPU
18. SL12_PCH_PA6 PCIe connector 12
19. FRONT_VIDEO Front VGA
20. PWR1_A For power cable
21. PWR1_B For Riser 1 GPU power
22. CPU 1 Processor 1
23. A9, A1, A15, A7, A11, A3, A13, A5 DIMMs for CPU 1 channels A, B, C, D
24. SL8_CPU1_PA4 PCIe connector 8
25. RGT_CP Right control panel connector
26. FAN_2U6 Fan 6 connector
27. SIG_PWR_2 Power connector 2 - use for BP only
28. SL7_CPU1_PB4 PCIe connector 7
29. FAN_2U5 Fan 5 connector
30. SL4_CPU1_PB2 PCIe connector 4
31. FAN_2U4 Fan 4 connector
32. SL3_CPU1_PA2 PCIe connector 3
33. SIG_PWR_1 Power connector 1 - use for BP only
34. SL6_CPU2_PA3 PCIe connector 6
35. FAN_2U3 Fan 3 connector
36. SL5_CPU2_PB3 PCIe connector 5
37. FAN_2U2 Fan 2 connector
38. SL2_CPU2_PB1 PCIe connector 2
39. FAN_2U1 Fan 1 connector
40. SL1_CPU2_PA1 PCIe connector 1
41. PWRD_EN and NVRAM_CLR Jumper

42. LFT_CP Left control panel connector
43. A8, A16, A2, A10, A6, A14, A4, A12 DIMMs for CPU 1 channels H, G, F, E
44. CPU 2 Processor 2
45. B8, B16, B2, B10, B6, B14, B4, B12 DIMMs for CPU 2 channels H, G, F, E
46. PWR2_B For Riser 4 GPU power
47. PWR2_A For power cable
48. PSU2_SIG PUCK sideband signal for Riser 4 GPU
49. IO_RISER4 (CPU2) Riser 4
50. SL9_CPU2_PA5 1 PCIe connector 9
51. BAT_SIG Battery signal connector

NOTE: The platform supports Maximum (MAX) and Mainstream (MS) system boards.
● The SL9_CPU2_PA5 and SL10_PCH_SA1 connectors are available only on the MAX system board.
● The MS system board supports CPUs with a TDP of less than 250 W.
● The MAX system board supports CPUs with a TDP of 250 W or higher.

System board jumper settings


For information about resetting the password jumper to disable a password, see Disable system and software password
features.

Table 77. System board jumper settings


PWRD_EN
● Jumper on pins 2 and 4 (default): The BIOS password feature is enabled.
● Jumper on pins 4 and 6: The BIOS password feature is disabled. Existing passwords are cleared at the next system boot, and you are not allowed to set a new password.

NVRAM_CLR
● Default position: The BIOS configuration settings are retained at system boot.
● Clear position: The BIOS configuration settings are cleared at system boot.

Use caution when changing the BIOS settings. The BIOS interface is designed for advanced users.
CAUTION: Changes to the settings could prevent your system from starting correctly and may result in data
loss.

Disable system and software password features


The software security features of the system include a system password and a setup password. The password jumper enables or
disables password features and clears any existing passwords.
Only certified service technicians can perform many of the repairs. You should perform only troubleshooting and simple repairs
as authorized in your product documentation, or as directed by the online or telephone service and support team.
CAUTION: The warranty does not cover damage due to servicing that Dell has not authorized. Read and follow
the safety instructions that are shipped with your product.



1. Power off the system and all the peripherals that are attached.
2. Disconnect the system power cable from the electrical outlet.
3. Disconnect all the peripherals that are attached to the system.
4. Remove the system cover.
5. Move the jumper on the system board from pins 2 and 4, to pins 4 and 6.
NOTE: The existing passwords are not erased until the system boots with the jumper on pins 4 and 6.

6. After moving the jumpers, replace the system cover.


7. Reconnect the peripherals, and connect the system to the electrical outlet.
8. Power on the system.
9. Power the system off.
10. Disconnect the system power cable from the electrical outlet.
11. Disconnect all the peripherals that are attached to the system.
12. Remove the system cover.
13. Move the jumper on the system board from pins 4 and 6, to pins 2 and 4.
NOTE: If a new system or setup password is entered with the jumper remaining on pins 4 and 6, the system disables the
password the next time it boots.

14. After moving the jumpers, replace the system cover.


15. Reconnect the peripherals, and connect the system to the electrical outlet.
16. Power on the system.
17. When prompted, assign a new system or setup password.



9
System diagnostics and indicator codes
The diagnostic indicators on the system front panel display system status during system startup.
The following section contains information about the chassis LEDs, and the indicator codes for your VxRail.

Status LED indicators


The status LED indicators are located on the chassis, and they indicate the condition of the system. If any error occurs, the
indicators turn solid amber in color.
The following figure and table describes the status LED indicators and their corrective actions:

Figure 37. Status LED indicators

Table 78. Status LED indicators and descriptions


Icon Description Condition Corrective action
Drive The indicator turns solid ● Check the system event log to determine if the drive has an
indicator amber if there is a drive error.
error. ● Run the appropriate Online Diagnostics test. Restart the
system and run embedded diagnostics (ePSA).
● If the drives are configured in a RAID array, restart the
system, and enter the host adapter configuration utility.
Temperature The indicator turns solid Ensure that none of the following conditions exist:
indicator amber if the system ● A cooling fan has been removed or has failed.
experiences a thermal error ● System cover, air shroud, or back filler bracket have been
(for example, the ambient removed.
temperature is out of range
● The ambient temperature is too high.
or there is a fan failure).
● External airflow is obstructed.
If the problem persists, see Dell Technologies Support.
Electrical The indicator turns solid Check the system event log or system messages for the
indicator amber if the system specific issue. If it is due to a problem with the PSU, check
experiences an electrical the LED on the PSU. Reseat the PSU. If the problem persists,
error (for example, voltage see Dell Technologies Support.
out of range, or a failed
power supply unit (PSU) or
voltage regulator).

Memory The indicator turns solid Check the system event log or system messages for the
indicator amber if a memory error location of the failed memory. Reseat the memory module. If
occurs. the problem persists, see Dell Technologies Support.
PCIe The indicator turns solid Restart the system. Update any required drivers for the PCIe
indicator amber if a PCIe card card. Reinstall the card. If the problem persists, see Dell
experiences an error. Technologies Support.

System health and system ID indicator codes


The system health and system ID indicator are located on the left control panel of the system.
The following figure and table describes the system health and system ID indicator codes:

Figure 38. System health and system ID indicator

Table 79. System health and system ID indicator codes


System health and System ID Description
indicator code
Solid blue The system is powered on, is healthy, and the system ID mode is not active. Press the
System health and System ID button to switch to System ID mode.
Blinking blue The System ID mode is active. Press the System health and System ID button to
switch to System health mode.
Solid amber The system is in Fail-safe mode. If the problem persists, see Dell Technologies
Support.
Blinking amber Indicates that the system is experiencing a fault. Check the System Event Log for specific error messages. For event and error message information, go to the Quick Resource Locator, click Look Up Error Code, enter the error code, and then click Look it up.

iDRAC Direct LED indicator codes


The iDRAC Direct LED indicator lights up to indicate that the port is connected and is being used as a part of the iDRAC
subsystem.
You can configure iDRAC Direct by using a USB to micro USB (type AB) cable, which you can connect to your laptop or tablet.
Cable length should not exceed 0.91 m (3 ft). Cable quality can affect the performance.
The following table describes the iDRAC Direct LED indicator codes when the iDRAC Direct port is active:

Table 80. iDRAC Direct LED indicator codes


iDRAC Direct LED indicator code Condition
Solid green for two seconds: The laptop or tablet is connected.
Blinking green (on for two seconds and off for two seconds): The connected laptop or tablet is recognized.
LED indicator off: The laptop or tablet is unplugged.

iDRAC Quick Sync 2 indicator codes


iDRAC Quick Sync 2 module (optional) is on the left control panel of the system.
The following figure and table describes the conditions and corrective actions for the iDRAC Quick Sync 2 indicators:

Figure 39. iDRAC Quick Sync 2 indicator

Table 81. iDRAC Quick Sync 2 indicators and descriptions


iDRAC Quick Sync 2 Condition Corrective action
indicator code
Off (default state) Indicates that the iDRAC Quick Sync 2 If the LED fails to power on, reseat the left control
feature is powered off. Press the iDRAC panel flex cable and check. If the problem persists,
Quick Sync 2 button to power on the see Dell Technologies Support.
iDRAC Quick Sync 2 feature.
Solid white Indicates that iDRAC Quick Sync 2 is If the LED fails to power off, restart the system. If
ready to communicate. Press the iDRAC the problem persists, see Dell Technologies Support.
Quick Sync 2 button to power off.
Blinks white rapidly Indicates data transfer activity. If the indicator continues to blink indefinitely, see Dell
Technologies Support.
Blinks white slowly Indicates that the firmware update is in If the indicator continues to blink indefinitely, see Dell
progress. Technologies Support.

Blinks white five times Indicates that the iDRAC Quick Sync 2 Check if the iDRAC Quick Sync 2 feature is
rapidly and then powers off feature is disabled. disabled by iDRAC. If the problem persists, see Dell
Technologies Support.
Solid amber Indicates that the system is in fail-safe Restart the system. If the problem persists, see Dell
mode. Technologies Support.
Blinking amber Indicates that the iDRAC Quick Sync 2 Restart the system. If the problem persists, see Dell
hardware is not responding properly. Technologies Support.

NIC indicator codes


Each NIC on the back of the system has indicators that provide information about the activity and link status. The activity LED
indicator shows if data is flowing through the NIC. The link LED indicator shows the speed of the connected network.
The following figure and table describes the condition of each NIC indicator:

Figure 40. NIC indicator


1. Link LED indicator
2. Activity LED indicator

Table 82. NIC indicator codes


NIC indicator codes Condition
The link and activity indicators are off. NIC is not connected to the network.
The link indicator is green, and the activity NIC is connected to a valid network at its maximum port speed, and data is being
indicator is blinking green. sent or received.
The link indicator is amber, and the NIC is connected to a valid network at less than its maximum port speed, and data
activity indicator is blinking green. is being sent or received.
The link indicator is green, and the activity NIC is connected to a valid network at its maximum port speed, and data is not
indicator is off. being sent or received.
The link indicator is amber, and the NIC is connected to a valid network at less than its maximum port speed, and data
activity indicator is off. is not being sent or received.
The link indicator is blinking green, and NIC identity is enabled through the NIC configuration utility.
activity is off.

PSU indicator codes


AC and DC PSUs have an illuminated translucent handle that serves as an indicator. The indicator shows if power is present or if
a power fault has occurred.
The following figure and table describes the conditions of the PSU indicators:



Figure 41. PSU
1. AC PSU handle
2. Socket
3. Release latch

Table 83. PSU status indicator codes


Power indicator Condition
codes
Green A valid power source is connected to the PSU, and the PSU is operational.
Blinking amber An issue with the PSU.
Not powered on Power is not connected to the PSU.
Blinking green PSU firmware update is in process. Do not disconnect the power cable or unplug the PSU when
updating firmware.
CAUTION: If the firmware update is interrupted, the PSUs do not function.

Blinking green and When hot-plugging a PSU, it blinks green five times at a rate of 4 Hz and powers off. It indicates a PSU
powers off mismatch due to efficiency, feature set, health status, or supported voltage.
If two PSUs are installed, verify that:
● Both PSUs have the same type of label. For example, Extended Power Performance (EPP) label.
● The PSUs are of the same type and have the same maximum output power.
Do not mix PSUs from previous generations of PowerEdge servers, even if the PSUs have the same
power rating.
CAUTION: Mixed PSUs may cause a PSU mismatch condition or failure to power on the
system.
When correcting a PSU mismatch, replace the PSU with the blinking indicator. Do not swap the PSU to
make a matched pair.
CAUTION: If the PSU is swapped, an erroneous condition may occur and cause an
unexpected system shutdown.
To change from a high output configuration to a low output configuration or conversely, you must
power off the system. AC PSUs support both 240 V and 120 V input voltages except for Titanium
PSUs, which support only 240 V.
CAUTION: When two identical PSUs receive different input voltages, they can output
different wattages and trigger a mismatch.

Drive indicator codes


The LEDs on the drive carrier indicate the state of each drive.
Each drive carrier has two LEDs:
● An activity LED (green)



● A status LED (bicolor, green, and amber)
Whenever you access the drive, the activity LED blinks.
The following figure and table describes the condition of the drive indicators:

Figure 42. Drive indicators


1. Drive activity LED indicator
2. Drive status LED indicator
3. Drive capacity label
If the drive is in the AHCI mode, the status LED indicator does not power on. Storage Spaces Direct manages the drive status
indicator behavior. Not all drive status indicators may be used.

Table 84. Drive indicator codes


Drive status indicator code Condition
Blinks green twice per second An identified drive is preparing for removal.
Not powered on The drive is ready for removal.
NOTE: The drive status indicator remains off until all drives are initialized
after the system is powered on. Drives are not ready for removal during
this time.

Blinks green, amber, and then powers off An unexpected drive failure has occurred.
Blinks amber four times per second The drive has failed.
Blinks green slowly The drive is rebuilding.
Solid green The drive is online.
Blinks green for three seconds, amber for The rebuild has stopped.
three seconds, and then powers off after six
seconds.

Use system diagnostics


If you experience an issue with the system, run the system diagnostics before contacting Dell for technical support. You can run system diagnostics to test the system hardware without using additional equipment or risking data loss. If you are unable to fix the issue, support personnel can use the diagnostics results to help you solve the issue.



Dell Embedded System Diagnostics
The Dell Embedded System Diagnostics is also known as Enhanced Pre-boot System Assessment (ePSA) diagnostics.
If you experience an issue with the system, run the system diagnostics before contacting Dell for technical support. The system
diagnostics test allows you to troubleshoot the system hardware without requiring more equipment or risking data loss. If you
are unable to fix the issue yourself, service and support personnel can use the diagnostics results to help resolve the issue.
The Dell Embedded System Diagnostics provide a set of options for particular device groups or devices that allow you to:
● Run tests automatically or in an interactive mode.
● Repeat the tests.
● Display or save test results.
● Run thorough tests to introduce additional test options to provide extra information about the failed device(s).
● View status messages that indicate the tests are completed successfully.
● View error messages that indicate issues that are encountered during testing.
The following test options are available:
● Run the Dell Embedded System Diagnostics from the Boot Manager. For more information, see Run the Dell Embedded
System Diagnostics from Boot Manager.
● Run the Dell Embedded System Diagnostics from the Dell Lifecycle Controller. For more information, see Run the Dell
Embedded System Diagnostics from the Dell Lifecycle Controller.

Run the Dell Embedded System Diagnostics from Boot Manager


If your system does not boot, run the Dell Embedded System Diagnostics (ePSA) from the Boot Manager.
1. During the boot process, press F11.
2. Use the up arrow and down arrow keys to select System Utilities > Launch Diagnostics.

Run the Dell Embedded System Diagnostics from the Dell Lifecycle
Controller
If your system does not boot, run the Dell Embedded System Diagnostics (ePSA) from the Dell Lifecycle Controller.
1. During the boot cycle, press F10.
2. Select Hardware Diagnostics > Run Hardware Diagnostics.
The ePSA Pre-boot System Assessment window displays, lists the devices that are detected in the system, and then runs
the diagnostic test on the detected devices.

System diagnostic controls


This section describes the details of the options available on the System diagnostic controls screen.
The following table describes the system diagnostic control details:

Table 85. System diagnostic controls


Menu Description
Configuration Displays the configuration and status information of all detected devices.
Results Displays the results of all tests that are run.
System health Provides the current overview of the system performance.
Event log Displays a time-stamped log of the results of all tests run on the system. This is
displayed if at least one event description is recorded.



System board diagnostic LED indicators
The system board LED indicators provide the status of the system when it is powered on, which help identify POST and
hardware issues.
For information about the different LED indicator sequences and description, see the interactive LED pattern decoder tool:
Blink.

Enhanced Preboot System Assessment


If you experience an issue with the system, run the system diagnostics before contacting Dell for technical assistance. The
system diagnostics test allows you to troubleshoot the system hardware without requiring more equipment or risking data loss.
If you are unable to fix the issue yourself, service and support personnel can use the diagnostics results to help resolve the
issue.

Dell Embedded System Diagnostics


The Embedded System Diagnostics, also known as Enhanced Preboot System Assessment (ePSA) diagnostics, provides a set of
options for particular device groups or devices that allow you to:
● Run tests automatically or in an interactive mode.
● Repeat tests
● Display or save test results.
● Run a thorough test to introduce more test options and provide extra information about the failed devices.
● View status messages that inform you if tests are completed successfully.
● View error messages that inform you of issues encountered during testing.
The following test options are available:
● Run the Dell Embedded System Diagnostics from the Boot Manager
● Run the Embedded System Diagnostics from the Dell Lifecycle Controller
To run the Dell Embedded System Diagnostics from the Boot Manager, see the Run the Embedded System Diagnostics from
Boot Manager section.
To run the Embedded System Diagnostics from the Dell Lifecycle Controller, see the Run the Embedded System Diagnostics
from the Dell Lifecycle Controller section.



10
Additional support
Dell offers recycling services and automated support with secure connect gateway.
Product take back and recycling services are offered for this product in certain countries. To dispose of system components,
see How to Recycle.

Automated support with secure connect gateway


Secure connect gateway automates technical support for your Dell server, storage, and networking devices.
Secure connect gateway provides the following benefits:
● Automated issue detection: The secure connect gateway monitors your devices and automatically detects hardware issues,
both proactively and predictively.
● Automated case creation: When an issue is detected, the secure connect gateway automatically opens a support case with
Dell Technical Support.
● Automated diagnostic collection: Secure connect gateway automatically collects system state information from your devices
and uploads it securely to Dell. Dell Technical Support uses this information to troubleshoot the issue.
● Proactive contact: A Dell Technical Support agent contacts you about the support case and helps you resolve the issue.
The benefits vary depending on the Dell Service entitlement that you purchased. For more information, see Secure Connect
Gateway.
