PowerFlex 4.0 Administration Student Guide
Configure Hosts
Configure Storage Target (SDT) Service
Add an NVMe Target Using PowerFlex Manager
Register NVMe Host Initiator
Add an NVMe Host Using PowerFlex Manager
Export/Share Filesystems
Components of File System Storage
Create a File System for SMB Share
Create a File System for NFS Export
Reconfiguring MDMs
MDM in a PowerFlex Cluster
MDM Roles
Reconfiguring MDM Roles
Reconfigure MDM Roles in a PowerFlex Cluster
Appendix
Glossary
Logical Groups
After the Protection Domain is created, the administrator can add SDSs,
fault sets, storage pools, and acceleration pools to the Protection Domain.
Replication can also be set up to ensure that the data is protected and
saved to a remote cluster.
Rename:
Change the name of a Protection Domain. The administrator can start the
Protection Domain with a test name. After validation, the administrator
renames the Protection Domain to match the production environment naming
conventions for the PowerFlex system.
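As a command-line alternative, the rename can also be performed with the SCLI. The following is an illustrative sketch; the names are placeholders and the option names reflect earlier ScaleIO/PowerFlex SCLI syntax, so verify them with scli --help for your version:
scli --rename_protection_domain --protection_domain_name Test-PD --new_name PD-Prod-01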
Network Throttling:
Change the settings for network data transfer between SDSs. Within a
Protection Domain, the SDS nodes transfer data between themselves.
Data consists of user data being replicated as part of the RAID protection,
and data copied for internal rebalancing and recovery from failures.
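The same limits can also be adjusted from the SCLI. The command and option names below are assumptions based on earlier ScaleIO/PowerFlex releases, and the MB/s values are examples only; confirm the exact syntax with scli --help before use:
scli --set_sds_network_limits --protection_domain_name PD-Prod-01 --rebuild_limit_mb_per_second 400 --rebalance_limit_mb_per_second 200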
When you inactivate a Protection Domain, the data remains on the SDSs.
It is preferable to remove a Protection Domain when the user no longer
needs it.
Prerequisites:
Ensure that all SDSs, storage pools, acceleration pools, and fault sets
have been removed from the Protection Domain before removing the
Protection Domain from the system.
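Once the Protection Domain is empty, it can also be inactivated and removed from the command line. A minimal sketch with a placeholder name; verify the option names with scli --help for your version:
scli --inactivate_protection_domain --protection_domain_name PD-Prod-01
scli --remove_protection_domain --protection_domain_name PD-Prod-01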
Fault Sets
Fault Sets are logical entities that contain a group of SDSs within a
protection domain. A Fault Set is defined for a set of servers that are likely
to fail together, for example an entire rack full of servers.
Fault Set data mirroring example: FS-1 data remains available even if the Fault Set fails; FS-2 data is mirrored on Node 06.
PowerFlex maintains a copy of all data chunks from within a Fault Set on
SDSs outside that Fault Set. Mirroring Fault Set data ensures that another
copy is always available even if all of the servers within the defined
Fault Set fail simultaneously.
• Ensure that a Storage Pool and Fault Sets (with a minimum of three
Fault units) exist or add new ones.
• Fault Sets must be created before adding SDSs to the system. An
SDS can only be added to a Fault Set during its creation.
Steps:
1. On the menu bar, click Block then Fault Sets.
2. In the right pane, click +Create Fault Set.
3. In the Create Fault Set dialog box, enter a name and select the
protection domain, and click Create.
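The same operation is available from the SCLI. A minimal sketch with placeholder names; verify the option names with scli --help for your version:
scli --add_fault_set --protection_domain_name PD-Prod-01 --fault_set_name FS-1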
Ensure that any configured SDSs have been removed from the Fault Set
before attempting to delete it.
Steps:
1. On the menu bar, click Block then Fault Sets.
2. From the list of Fault Sets, select the Fault Set to be deleted.
3. Select More Actions then Delete.
4. In the Delete Fault Set dialog box, verify that the desired fault set will
be deleted, and click Delete.
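A hedged SCLI equivalent for deleting an empty Fault Set (names are placeholders and the option names are assumed from earlier releases; confirm with scli --help):
scli --remove_fault_set --protection_domain_name PD-Prod-01 --fault_set_name FS-1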
To try creating and deleting the Fault Sets, use the simulation below.
For more information, see the Add Storage Pools section of the
PowerFlex 4.0 Administration Guide.
4: Use the zero-padded policy when the storage pool data layout is fine
granularity. The zero-padded policy ensures that every read from an area
that is previously not written to returns zeros.
5: For fine granularity storage pools, inline compression allows you to gain
more effective capacity.
For more information, refer to the Configure Storage Pool Settings section
of the PowerFlex 4.0 Administration Guide.
Enable the background device scanner to check for errors on the devices
in the specified storage pool.
For more information, see the Enable the Background Device Scanner
section of the PowerFlex 4.0 Administration Guide.
To learn about priority settings for storage pools, review the numbered
items.
2: Limit the number of allowed concurrent I/Os to the value entered in the
Concurrent I/O limit field.
3: Limit the number of allowed concurrent I/Os to the values entered in the
Concurrent I/O limit and Bandwidth I/O limit fields, regardless of user I/O.
For more information, see the Configure IOPS and Bandwidth section of
the PowerFlex 4.0 Administration Guide.
Acceleration Pools
The Storage Data Server (SDS) manages the capacity of a single server
and acts as a back-end for data access. An SDS is installed on each
server contributing storage devices to PowerFlex. These devices are
accessed through the SDS.
Before adding an SDS, ensure that the following prerequisites are met:
3: When devices are added to an SDS, PowerFlex checks that the device
is clean before adding it. If the device is not clean, an error message is
returned, and the command fails for that device. If you would like to
overwrite existing data on the device by forcing the command, set Force
Clean SDS to YES. Select YES with caution because all data on the
device will be destroyed.
1: A read and write test is run on the device before its capacity is used
when Test and Activate Device is selected.
2: Devices will be tested, but not used when Test Only is selected.
3: The device capacity will be used without any device testing when
Activate without test is selected.
4: This value is the maximum test run time in seconds. The test stops
when it reaches either this limit, or the time it takes to complete 128 MB of
data read/write, whichever is first. When Activate without test is
selected, this timeout is ignored.
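For reference, an SDS and its devices can also be added from the SCLI. This is an illustrative sketch only; the IP address, names, and device path are placeholders, and the device-test behavior described in the numbered items above is controlled by additional options not shown here (check scli --help for the exact flags in your version):
scli --add_sds --sds_ip 192.168.100.21 --protection_domain_name PD-Prod-01 --storage_pool_name SP-01 --sds_name SDS-01 --device_path /dev/sdb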
• Two nodes from the same protection domain cannot be put into
maintenance mode at the same time.
• Protected and Instant Maintenance Modes cannot simultaneously be
active within a single protection domain.
• All SDSs concurrently in protected maintenance mode must belong to
the same fault set per protection domain.
For more information, see the Instant Maintenance Mode section of the
PowerFlex 4.0 Technical Overview.
Advantages:
Disadvantages:
For more information, see the Protected Maintenance Mode section of the
PowerFlex 4.0 Technical Overview.
When the system Auto-Aborts PMM, the system generates an alert and
changes the state to Maintenance Aborted by System. This state is
changed only after the system finishes aborting the Enter PMM phase.
Users can exit the Maintenance Aborted by System state by starting a
new PMM or IMM or selecting the Exit Maintenance Mode option in the
More Actions menu.
To try putting an SDS into maintenance mode, use the simulation below.
PowerFlex Manager enables you to put a node in service mode when you
must perform maintenance operations on the node. When you put a node
in service mode, you can specify whether you are performing short-term
maintenance or long-term maintenance work.
Configure Hosts
The NVMe target provides I/O and discovery services to NVMe hosts
configured on the PowerFlex system.
Once the NVMe targets have been configured, add hosts to PowerFlex,
and then map volumes to the hosts. Connect hosts to NVMe targets,
preferably using the discovery feature.
Hosts are entities that consume PowerFlex storage for application usage.
There are two methods of consuming PowerFlex block storage: using the
SDC kernel driver or using NVMe over TCP connectivity. A host is either
an SDC or an NVMe host.
• Ensure that you have the host NVMe Qualified Name (NQN). If you do
not know the NQN, see the host operating system documentation or the
example following this list.
• Ensure that the host is connected to the Ethernet switch.
• Ensure that the host is configured with the correct VLAN ID and routing
rules.
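On a Linux host with the nvme-cli package installed, the host NQN can usually be read from /etc/nvme/hostnqn, and the discovery service can then be used to connect to the NVMe targets. The target address below is a placeholder for an SDT IP address, and 8009 is the standard NVMe/TCP discovery port; adapt both to your environment:
cat /etc/nvme/hostnqn
nvme discover -t tcp -a 192.168.100.50 -s 8009
nvme connect-all -t tcp -a 192.168.100.50 -s 8009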
To try adding an NVMe host using PowerFlex Manager, use the simulation
below.
After storage devices are added to the storage pool, a PowerFlex volume
can be created. A PowerFlex volume is considered to be similar to a
Logical Unit Number (LUN) from a physical storage array. To start
allocating volumes, the system requires that there be at least three SDS
nodes. Each SDS should be in a separate fault unit, and each device must
have a minimum of 240 GB of free storage capacity. For users and applications on
hosts to have access to a volume, the volume must be mapped to an SDC
host.
A volume name must contain fewer than 32 characters, contain only
alphanumeric and punctuation characters, and be unique. The size of the
volume that is created can range from a minimum of 8 GB to a maximum
of 1 PB.
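For reference, a hedged SCLI equivalent for creating a volume (names are placeholders, the size must be a multiple of 8 GB, and option names should be verified with scli --help for your version):
scli --add_volume --protection_domain_name PD-Prod-01 --storage_pool_name SP-01 --volume_name Vol-01 --size_gb 16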
To try adding volumes, use the simulation below.
Mapping exposes the volume to the specified host, creating a block device
on the host. You can map a volume to one or more hosts. Volumes can be
mapped to either an SDC or NVMe host, but not both simultaneously.
Ensure that you know which type of hosts are being used for each volume,
to avoid mixing host types.
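A minimal SCLI sketch for mapping a volume to an SDC host (the volume name and SDC IP address are placeholders; verify the option names with scli --help):
scli --map_volume_to_sdc --volume_name Vol-01 --sdc_ip 192.168.100.31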
To try mapping volumes, use the simulation below.
Setting bandwidth and IOPS (input/output operations per second) limits for
volumes lets you control the quality of service. Bandwidth and IOPS limits
are set on a per-host basis.
Ensure that the volumes are mapped before you set the limits.
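A hedged SCLI sketch for setting per-host limits on a mapped volume; the command and option names are assumptions carried over from earlier releases and the values are examples only (verify with scli --help):
scli --set_sdc_volume_limits --volume_name Vol-01 --sdc_ip 192.168.100.31 --limit_iops 5000 --limit_bandwidth_in_kbps 102400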
To try setting volume limits, use the simulation below.
When volumes are no longer needed, they can be removed from PowerFlex.
Before they can be removed, they must be unmapped from the hosts they
are connected to.
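A minimal SCLI sketch of the unmap-then-remove sequence (placeholder names; verify the option names with scli --help for your version):
scli --unmap_volume_from_sdc --volume_name Vol-01 --sdc_ip 192.168.100.31
scli --remove_volume --volume_name Vol-01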
To try unmapping volumes, use the simulation below.
PowerFlex Snapshot
Regular Snapshot
PowerFlex enables you to create snapshots of volumes. Some of the key
features of regular snapshots include:
Secure Snapshot
For more information, see the Dell PowerFlex 4.0.x Administration Guide
and select the Snapshots section.
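For reference, a snapshot of a volume can also be taken from the SCLI. A minimal sketch with placeholder names; verify the option names with scli --help for your version:
scli --snapshot_volume --volume_name Vol-01 --snapshot_name Vol-01-snap-01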
Setting bandwidth and IOPS limits for snapshots lets you control the
quality of service. Bandwidth and IOPS limits are set on a per-host basis.
Ensure that the snapshots are mapped before you set the limits.
A snapshot lock prevents a snapshot from being deleted. You can lock
auto snapshots (snapshots that are created automatically through a
snapshot policy) to prevent them from being deleted by the auto-removal
process. You can unlock the snapshots later so that they are
automatically removed.
Delete a Snapshot
Ensure that the snapshot that you are removing is not mapped to any
hosts. If the snapshot is mapped, unmap it before removing it. In addition,
ensure that the snapshot is not the source volume of any snapshot policy.
You must remove the volume from the snapshot policy before you can
remove the snapshot.
Snapshots and their source volume are organized into a Volume Tree or
vTree. A vTree includes the root volume and all the descendant snapshots
resulting from that volume. You can migrate a volume tree (vTree) for a
snapshot to a different storage pool. Volumes undergoing migration
remain available for I/O.
NOTE: vTree migration is a long process and can take days or weeks,
depending on the size of the vTree.
Snapshot Policies
To review detailed steps for creating snapshot policies using SCLI, see
the Dell Technologies Infohub Site.
For more information, see the Dell PowerFlex 4.0.x Administration Guide
and select the Snapshot Policies section.
Snapshot Policies contain a few attributes and elements, which allow the
system to automatically take snapshots of specified volumes based on
specified retention schedules.
To try creating snapshots and setting IOPS limits for the snapshot, use the
simulation below.
Replicated writes are sent from the source journal to the target. The
target system accumulates received data in the target journal until a
complete, consistent image is received and can be applied to the target
volumes.
Configure Replication
The following process assumes admin user level credentials for command
line access to both systems:
1. Log in to the source system using the SCLI:
scli --login --username <M&O UI user> --password <M&O UI password> --management_system_ip <M&O UI IP>
2. Extract the root certificate:
scli --extract_root_ca --certificate_file <FILE_PATH>
3. Copy the certificate file to the target system (using SCP or similar tool).
4. On the target system, perform steps 1 and 2.
5. Copy the target system's certificate file to the source system.
6. On the source system, add the certificate for the target system using the SCLI:
scli --add_trusted_ca <PATH_TO_LOCAL_COPY_OF_TARGET_CERT>
7. On the target system, add the source system's certificate using the SCLI:
scli --add_trusted_ca <PATH_TO_LOCAL_COPY_OF_SOURCE_CERT>
The amount of capacity that is needed for the journal is based on the
following factors:
• Before configuring journal capacity, ensure that there is enough space
in the storage pool.
• When the total storage capacity in the system increases, a small
percentage is needed for the journal capacity.
• As application workload increases, more journal capacity must be
added accordingly.
• The journal capacity depends on the change rate of the dataset, and
the Recovery Point Objective (RPO). Data writes are accumulated in
the journal until half the RPO time is reached, to ensure a consistent
copy is maintained between the volumes.
• The journal capacity must sustain an outage, as determined by the
application WAN bandwidth requirement multiplied by the expected
WAN outage. If an application has a heavy I/O load, larger capacity
should be used. Similarly, if a longer outage is expected, a greater
capacity should be allocated.
Example:
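As an illustrative calculation (the figures are assumptions, not from the guide): if an application generates about 100 MB/s of replicated writes and the journal must sustain a one-hour WAN outage, that storage pool needs roughly 100 MB/s × 3,600 s, or about 360 GB, of journal capacity.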
If there are replicated volumes in more than one storage pool in the
protection domain, this calculation should be repeated for each storage
pool. The allocated journal capacity in the protection domain must at least
equal the sum of the size per application pool.
Customer Scenario:
On the source system, add the connection information of the target
system (remote site) to enable replication between the peer systems. Prior
to performing the steps below, the system ID of both source and target
systems must be obtained.
The system ID is displayed immediately after login to the SCLI. It can also
be obtained by running the command scli --query_all.
Steps:
1. On the source system, on the menu bar, click Protection > Peer
Systems.
2. Click Add Peer System.
3. In the Add Peer System dialog box, enter the connection information of
the peer system:
a. Enter the peer system's name. Enter the system ID of the remote
site.
b. Accept the default, or enter the port number that will be used to
connect the systems.
c. Enter the MDM IP address of the remote site.
d. Either enter the remote system's virtual IP address, or enter both
primary and secondary MDM IP addresses, using the Add IP
option.
e. Click Add Peer to initiate a connection with the peer system.
4. Repeat steps 1–3 on the target system.
Important:
• Volumes in an RCG pair must be exactly the same size.
• Protection Domains must be configured on both source
and target systems.
When configuring an RCG pair, a Recovery Point Objective (RPO) is set. This RPO
defines the maximum amount of time during which data can be lost.
Setting a low RPO ensures that minimal data is lost should the data
transfer from source to target be interrupted.
Tip: The data loss exposure is half the RPO value. If one
minute is set as the RPO, no more than 30 seconds of data
will be lost. Dell highly recommends that the RPO is set low
to ensure minimal data loss. The minimum amount of time
this feature allows is 15 seconds.
To learn about the replication I/O workflow, watch the video below.
• Restore replication
• Test Failover
• Stop a failover test
• Freeze an RCG
• Unfreeze an RCG
• Activate an RCG
• Terminate replication in an RCG
• Delete an RCG
• Pause creation of an initial copy
• Resume creation of initial copy
• Set copying priority
• Unpair from RCG
A NAS server must be running on the system before file storage can be
provisioned. Management of file services can be performed by selecting
the File tab within PowerFlex Manager.
1:
The NAS Servers page contains details about existing NAS servers and is
used to:
2:
The File Systems page contains details about existing file systems and is
used to:
3:
The SMB Shares page contains details about existing SMB shares and is
used to create, modify, or remove SMB shares.
4:
The NFS Exports page contains details about existing NFS exports and is
used to create, modify, or remove NFS Exports. Host access is also
managed from this page.
To learn the other details of creating a NAS server, review the numbered
items.
1: When selecting the sharing protocol that the NAS server uses, the
options are SMB, NFSv3, and NFSv4. If SMB and an NFS protocol are
both selected, the NAS server is enabled to support multiprotocol.
2: With SMB selected, the wizard prompts for the type of Windows
server. The two options are Join to the Active Directory Domain or
Standalone.
• At least one NTP server must be configured (two NTP servers per
domain is the recommendation).
• A Unix Directory Service (UDS) is configured.
• One or more DNS servers are configured.
• An Active Directory (AD) or custom realm must be added for Kerberos
authentication.
5: For user mapping, a default account can be enabled for both a Windows
and a Linux user, or automatic user mapping can be selected.
PowerFlex file system includes quota support. File system quotas allow
administrators to place limits on the amount of space that can be
consumed to regulate file system storage consumption. Quotas are
supported on SMB, NFS, FTP, NDMP, and multiprotocol file systems.
PowerFlex file supports user quotas, quota trees, and user quotas on tree
quotas. All three types of quotas can co-exist on the same file system and
can be used together to achieve fine-grained control over storage usage.
When multiple limits apply, the first limit that users reach is the one
that is enforced.
Quotas are disabled by default. The administrators can set quotas on a file
system from the File > File System > [selected file system] > Details >
Quotas tab. Then, set the following type of quota for a file system:
Type: User quotas
Description: User quotas are set at a file system level to limit the amount of space a user may consume on a file system.
In this scenario, the user is writing the data to a directory with a tree quota
on it. The file system usage is increasing and climbing towards the soft
limit.
In this scenario, the user has reached the soft limit but is allowed to
continue using space until the grace period ends. The user is alerted
that the soft limit has been reached until the grace period is over.
• The quota grace period is used to set a specific grace period for each
tree quota on a file system.
• The grace period counts down the time between the soft and hard
limits and alerts the user about the time remaining before the hard
limit is met.
In this scenario, the grace period expires and the user cannot write to the
file system until more space has been added, even if the hard limit has not
been met.
• Administrators can set an expiration date for the grace period. The
default is seven days; alternatively, it can be set to an infinite amount
of time so that the grace period does not expire.
• Once the grace period expiration date is met, the grace period is not
applied to the file system directory.
In this scenario, the hard limit is reached for a quota tree, and no user can
write data to the tree or the file system until more space becomes
available.
To try configuring NAS Quotas for file systems in PowerFlex Manager, use
the simulation below.
Export/Share Filesystems
A file system represents a set of storage resources that provide network
file storage.
To learn about the components of file system storage, select each tab.
NAS Server
File System
Share or Export
Snapshots
Snapshots are point-in-time captures which save the state of the file
system including all files and data within it. Snapshots can be used to
restore the entire file system to a previous state.
Up to 126 snapshots per file system can be taken. Manual snapshots that
are created with PowerFlex Manager are retained for one week after
creation (unless configured otherwise).
The default access type for file snapshots is read-only. File Snapshot
Access Types:
NDMP
For more information about using Dell NetWorker for the NDMP backups,
see the NetWorker Administration Guide.
Reconfiguring MDMs
The Meta Data Manager (MDM) is the authority that controls and tracks
the association of physical to logical storage and presents it for use by
clients and applications. It also maintains data protection based on a
mirroring strategy and ensures that storage demand is distributed evenly
across available resources.
MDM Roles
To learn more about the MDM roles, select each of the tabs below.
MDM
Primary MDM
A primary MDM is the MDM in the cluster that controls the SDSs and
SDCs. The primary MDM contains and updates the MDM repository, the
database that stores the SDS configuration, and how data is distributed
between the SDSs in the system. This repository is constantly replicated
to the secondary MDMs, so they can take over with no delay. Every MDM
cluster has one primary MDM.
Secondary MDM
A secondary MDM is an MDM in the cluster that is ready to take over the
primary MDM role if ever necessary. In a three-node cluster, there is one
secondary MDM.
Tie-breaker MDM
Standby MDM
Manager MDM
PowerFlex does not encrypt the data that is stored on SDS devices.
CloudLink is installed to encrypt storage devices before they are added to
the PowerFlex Storage Pool.
CloudLink encrypts the SDS devices with unique keys that are controlled
by enterprise security administrators. CloudLink Center provides
centralized, policy-based management for these keys, enabling single-
screen security monitoring and management across one or more
PowerFlex deployments.
Step 1
On the menu bar, click Settings, then click License Management, and
select Other Software Licenses.
Step 2
Step 3
Deploy CloudLink
PowerFlex Manager
Backup
Restore
./restore-PFMP.sh [BACKUP_LOCATION] [ENCRYPTION_PASSWORD] [CIFS_USERNAME] [CIFS_PASSWORD]
The restore process prints status information until the restore is
complete.
Restoring an earlier configuration restarts PowerFlex Manager and
deletes any data created after the backup to which you are restoring.
Any running jobs could be terminated as well.
For a cluster, the backup operation backs up the primary vCenter Server
instance. Administrators reconstruct the cluster after the restore operation
completes successfully.
• Go to the VMware Tech Zone and search for the vCenter Server
Backup and Restore documentation to learn how to perform the
backup and restore.
iDRAC
The backup file is exported to either a local drive, network share drive, or
to a local file using HTTP or HTTPS file transfer: Configuration > Server
Configuration Profile > Export.
Network Switches
Customers are responsible for providing the backup location and also
maintaining the backup for switches. It is recommended that customers
store their backup in a separate shared location.
CloudLink Center
The default location for CloudLink backup configuration data storage is the
local desktop. However, an administrator can change the backup storage
location to a configured FTP or SFTP server. CloudLink Center can be set
to automatically generate backups at a designated schedule. CloudLink
Center backups can also be generated manually.
The external passwords that are configured with PowerFlex must be kept
in sync with those configured on the components themselves. The
components include iDRAC, Server BIOS, vCenter (and ESXi), and host
passwords.
When the administrator first logs into PowerFlex Manager, they must set
their password. Administrators can also change their password at any time
after the first login.
1. Click the user icon in the upper right corner of PowerFlex Manager.
2. Click Change password.
• Local Users - create, modify, delete, or reset the password for a local
user. Review the User Roles available in PowerFlex Manager.
• LDAP Users and Groups - Add LDAP users or groups or modify
LDAP users or groups.
• Directory Services - Add, modify, and remove a directory service. The
PowerFlex Manager accesses the directory services to authenticate
users.
Use the Dell PowerFlex Manager 4.0.x application Online Help to
review how to add users as an administrator.
The steps to change the password of the CloudLink secadmin user using
the CloudLink Center and PowerFlex Manager are:
• ASM deployer
• iDRAC life cycle
• Dell PowerSwitch
• VMware ESXi
• CloudLink Center
• VMware ESXi
• Cisco Nexus switch
• NAS
Zero-Padding
Each Storage Pool can work in one of the following modes:
Zero-padding enabled: Ensures that every read from an
area previously not written to returns zeros. Some
applications might depend on this behavior. Zero padding
also ensures that reading from a volume will not return
information that was previously deleted from the volume.
This behavior incurs some performance overhead on the
first write to every area of the volume since the area needs
to be filled with zeros first. Fine granularity (FG) storage pools are
always zero padded.
Zero-padding disabled (the default only for medium granularity (MG)
pools): A read from
an area previously not written to will return unknown
content. This content might change on subsequent reads.
Zero padding must be enabled if you plan to use any other
application that assumes that when reading from areas not
written to before, the storage will return zeros or consistent
data.
Modify an SDS
3: IP registration box - this is where the MDMs from the target system are
entered.
4: IP registration list - this is where the registered MDMs from the target
system are listed.
CloudLink Center
CloudLink Center is a policy-based key manager that
delivers encryption keys to the storage data servers
(SDS) devices and monitors security-related events on
the SDS machines.
• The CloudLink Agent is installed only on the storage data server (SDS)
nodes. Encryption Agents work with the operating system of the node
where the SDS is installed to encrypt local devices.
• The CloudLink Encryption Agent orchestrates the encryption and
decryption process by enabling underlying encryption technology
present in the SDS operating system.
• The CloudLink Encryption Agent communicates with CloudLink Center
to request encryption keys and provide updates on security events.
The steps to change the iDRAC password using the iDRAC web interface are:
The table summarizes the activities that are performed by each user role.
3: Local protection domain that contains the storage pool being replicated.
Expiration Time
The expiration time determines the retention interval.
IOPS
IOPS (input/output operations per second) is the standard unit of
measurement for the maximum number of reads and writes.
Secured Flag
The secured flag determines if the expiration field is active.