
Dell EMC VMAX All Flash Product Guide

VMAX 250F, 450F, 850F, 950F


with HYPERMAX OS
Revision 13
September 2019
Copyright © 2016-2019 Dell Inc. or its subsidiaries. All rights reserved.

Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS-IS.” DELL MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND
WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. USE, COPYING, AND DISTRIBUTION OF ANY DELL SOFTWARE DESCRIBED
IN THIS PUBLICATION REQUIRES AN APPLICABLE SOFTWARE LICENSE.

Dell Technologies, Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property
of their respective owners. Published in the USA.

Dell EMC
Hopkinton, Massachusetts 01748-9103
1-508-435-1000 In North America 1-866-464-7381
www.DellEMC.com

CONTENTS

Figures 7

Tables 9

Preface 11
Revision history................................................................................................. 17

Chapter 1 VMAX All Flash with HYPERMAX OS 19


Introduction to VMAX All Flash with HYPERMAX OS....................................... 20
VMAX All Flash hardware specifications...............................................22
Software packages .......................................................................................... 23
HYPERMAX OS................................................................................................ 25
What's new in HYPERMAX OS 5977.1125.1125.................................... 25
HYPERMAX OS emulations..................................................................26
Container applications .........................................................................28
Data protection and integrity................................................................ 31
Inline compression................................................................................38

Chapter 2 Management Interfaces 41


Management interface versions........................................................................ 42
Unisphere for VMAX......................................................................................... 42
Workload Planner.................................................................................42
FAST Array Advisor..............................................................................43
Unisphere 360...................................................................................................43
Solutions Enabler.............................................................................................. 43
Mainframe Enablers.......................................................................................... 44
Geographically Dispersed Disaster Restart (GDDR)..........................................44
SMI-S Provider................................................................................................. 45
VASA Provider.................................................................................................. 45
eNAS management interface ........................................................................... 45
Storage Resource Management (SRM)............................................................ 45
vStorage APIs for Array Integration.................................................................. 46
SRDF Adapter for VMware vCenter Site Recovery Manager............................ 47
SRDF/Cluster Enabler ......................................................................................47
Product Suite for z/TPF................................................................................... 47
SRDF/TimeFinder Manager for IBM i................................................................48
AppSync........................................................................................................... 48

Chapter 3 Open Systems Features 51


HYPERMAX OS support for open systems....................................................... 52
Backup and restore using ProtectPoint and Data Domain................................. 52
Backup.................................................................................................52
Restore................................................................................................ 53
ProtectPoint agents.............................................................................54
Features used for ProtectPoint backup and restore.............................54
ProtectPoint and traditional backup.....................................................54
More information................................................................................. 54


VMware Virtual Volumes...................................................................................55


vVol components................................................................................. 55
vVol scalability..................................................................................... 55
vVol workflow...................................................................................... 56

Chapter 4 Mainframe Features 59


HYPERMAX OS support for mainframe............................................................ 60
IBM Z Systems functionality support................................................................ 60
IBM 2107 support.............................................................................................. 61
Logical control unit capabilities..........................................................................61
Disk drive emulations........................................................................................ 62
Cascading configurations..................................................................................62

Chapter 5 Provisioning 63
Thin provisioning...............................................................................................64
Pre-configuration for thin provisioning.................................................64
Thin devices (TDEVs).......................................................................... 65
Thin device oversubscription................................................................66
Open Systems-specific provisioning.................................................... 66
CloudArray as an external tier........................................................................... 68

Chapter 6 Native local replication with TimeFinder 69


About TimeFinder..............................................................................................70
Interoperability with legacy TimeFinder products................................. 71
Targetless snapshots............................................................................ 71
Secure snaps........................................................................................72
Provision multiple environments from a linked target........................... 72
Cascading snapshots............................................................................73
Accessing point-in-time copies............................................................ 73
Mainframe SnapVX and zDP............................................................................. 73

Chapter 7 Remote replication 75


Native remote replication with SRDF................................................................ 76
SRDF 2-site solutions........................................................................... 77
SRDF multi-site solutions..................................................................... 79
Interfamily compatibility.......................................................................80
SRDF device pairs................................................................................80
Dynamic device personalities............................................................... 84
SRDF modes of operation.................................................................... 85
SRDF groups........................................................................................86
Director boards, links, and ports...........................................................87
SRDF consistency................................................................................ 87
Data migration..................................................................................... 88
More information................................................................................. 89
SRDF/Metro.....................................................................................................90
Deployment options............................................................................. 90
SRDF/Metro Resilience....................................................................... 90
Disaster recovery facilities................................................................... 92
More information................................................................................. 93
RecoverPoint.................................................................................................... 93
Remote replication using eNAS.........................................................................94


Chapter 8 Blended local and remote replication 95


Integration of SRDF and TimeFinder.................................................................96
R1 and R2 devices in TimeFinder operations..................................................... 96
SRDF/AR..........................................................................................................96
SRDF/AR 2-site configurations............................................................97
SRDF/AR 3-site configurations............................................................98
TimeFinder and SRDF/A................................................................................... 99
TimeFinder and SRDF/S................................................................................... 99

Chapter 9 Data Migration 101


Overview......................................................................................................... 102
Data migration for open systems..................................................................... 103
Non-Disruptive Migration overview.................................................... 103
Open Replicator..................................................................................107
PowerPath Migration Enabler.............................................................108
Data migration using SRDF/Data Mobility.......................................... 109
Space and zero-space reclamation..................................................... 109
Data migration for mainframe..........................................................................109
Volume migration using z/OS Migrator............................................... 110
Dataset migration using z/OS Migrator...............................................110

Chapter 10 CloudArray for VMAX All Flash 113


About CloudArray.............................................................................................114
CloudArray physical appliance..........................................................................115
Cloud provider connectivity............................................................................. 115
Dynamic caching.............................................................................................. 115
Security and data integrity...............................................................................115
Administration..................................................................................................115

Appendix A Mainframe Error Reporting 117


Error reporting to the mainframe host..............................................................118
SIM severity reporting..................................................................................... 118
Environmental errors........................................................................... 119
Operator messages.............................................................................122

Appendix B Licensing 125


eLicensing....................................................................................................... 126
Capacity measurements......................................................................127
Open systems licenses.....................................................................................128
License suites..................................................................................... 128
Individual licenses............................................................................... 132
Ecosystem licenses.............................................................................132

FIGURES

1 VMAX All Flash scale up and out........................................................................................21


2 D@RE architecture, embedded.........................................................................................33
3 D@RE architecture, external............................................................................................ 33
4 Inline compression and over-subscription..........................................................................38
5 Data flow during a backup operation to Data Domain........................................................53
6 Auto-provisioning groups.................................................................................................. 67
7 SnapVX targetless snapshots............................................................................................72
8 SnapVX cascaded snapshots.............................................................................................73
9 zDP operation................................................................................................................... 74
10 R1 and R2 devices .............................................................................................................81
11 R11 device in concurrent SRDF......................................................................................... 82
12 R21 device in cascaded SRDF........................................................................................... 83
13 R22 devices in cascaded and concurrent SRDF/Star........................................................84
14 Migrating data and removing a secondary (R2) array....................................................... 88
15 SRDF/Metro.....................................................................................................................90
16 Disaster recovery for SRDF/Metro................................................................................... 92
17 SRDF/AR 2-site solution...................................................................................................97
18 SRDF/AR 3-site solution...................................................................................................98
19 Non-Disruptive Migration zoning .................................................................................... 104
20 Open Replicator hot (or live) pull.....................................................................................108
21 Open Replicator cold (or point-in-time) pull.................................................................... 108
22 z/OS volume migration.................................................................................................... 110
23 z/OS Migrator dataset migration...................................................................................... 111
24 CloudArray deployment for VMAX All Flash......................................................................114
25 z/OS IEA480E acute alert error message format (call home failure)............................... 122
26 z/OS IEA480E service alert error message format (Disk Adapter failure)........................122
27 z/OS IEA480E service alert error message format (SRDF Group lost/SIM presented
against unrelated resource)............................................................................................. 123
28 z/OS IEA480E service alert error message format (mirror-2 resynchronization)............ 123
29 z/OS IEA480E service alert error message format (mirror-1 resynchronization)............. 123
30 eLicensing process.......................................................................................................... 126

TABLES

1 Typographical conventions used in this content................................................................ 15


2 Revision history................................................................................................................. 17
3 Symbol legend for VMAX All Flash software features/software package..........................23
4 VMAX All Flash software features by model......................................................................23
5 HYPERMAX OS emulations...............................................................................................26
6 eManagement resource requirements...............................................................................28
7 eNAS configurations by array .......................................................................................... 30
8 Unisphere tasks................................................................................................................ 42
9 vVol architecture component management capability.......................................................55
10 vVol-specific scalability ....................................................................................................56
11 Logical control unit maximum values................................................................................. 61
12 Maximum LPARs per port..................................................................................................61
13 RAID options..................................................................................................................... 64
14 SRDF 2-site solutions........................................................................................................77
15 SRDF multi-site solutions ................................................................................................. 79
16 SIM severity alerts........................................................................................................... 119
17 Environmental errors reported as SIM messages............................................................. 119
18 VMAX All Flash product title capacity types.................................................................... 127
19 VMAX All Flash license suites ......................................................................................... 128
20 Individual licenses for open systems environment............................................................132
21 Individual licenses for open systems environment............................................................132

Preface

As part of an effort to improve its product lines, Dell EMC periodically releases revisions of its
software and hardware. Therefore, some functions described in this document might not be
supported by all versions of the software or hardware currently in use. The product release notes
provide the most up-to-date information on product features.
Contact your Dell EMC representative if a product does not function properly or does not function
as described in this document.
Note: This document was accurate at publication time. New versions of this document might
be released on Dell EMC Online Support (https://www.dell.com/support/home). Check to
ensure that you are using the latest version of this document.
Purpose
This document introduces the features of the VMAX All Flash 250F, 450F, 850F, 950F arrays
running HYPERMAX OS 5977.
Audience
This document is intended for use by customers and Dell EMC representatives.
Related documentation
The following documentation portfolios contain documents related to the hardware platform and
manuals needed to manage your software and storage system configuration. Also listed are
documents for external components that interact with the VMAX All Flash array.
Hardware platform documents:
Dell EMC VMAX All Flash Site Planning Guide for VMAX 250F, 450F, 850F, 950F with HYPERMAX OS
Provides planning information regarding the purchase and installation of a VMAX 250F, 450F,
850F, 950F with HYPERMAX OS.

Dell EMC VMAX Best Practices Guide for AC Power Connections


Describes the best practices to assure fault-tolerant power to a VMAX3 Family array or VMAX
All Flash array.

Dell EMC VMAX Power-down/Power-up Procedure


Describes how to power-down and power-up a VMAX3 Family array or VMAX All Flash array.

Dell EMC VMAX Securing Kit Installation Guide


Describes how to install the securing kit on a VMAX3 Family array or VMAX All Flash array.

E-Lab™ Interoperability Navigator (ELN)


Provides a web-based interoperability and solution search portal. You can find the ELN at
https://elabnavigator.EMC.com.

Unisphere documents:
EMC Unisphere for VMAX Release Notes
Describes new features and any known limitations for Unisphere for VMAX.

EMC Unisphere for VMAX Installation Guide


Provides installation instructions for Unisphere for VMAX.


EMC Unisphere for VMAX Online Help


Describes the Unisphere for VMAX concepts and functions.

EMC Unisphere for VMAX Performance Viewer Installation Guide


Provides installation instructions for Unisphere for VMAX Performance Viewer.

EMC Unisphere for VMAX Database Storage Analyzer Online Help


Describes the Unisphere for VMAX Database Storage Analyzer concepts and functions.

EMC Unisphere 360 for VMAX Release Notes


Describes new features and any known limitations for Unisphere 360 for VMAX.

EMC Unisphere 360 for VMAX Installation Guide


Provides installation instructions for Unisphere 360 for VMAX.

EMC Unisphere 360 for VMAX Online Help


Describes the Unisphere 360 for VMAX concepts and functions.

Solutions Enabler documents:


Dell EMC Solutions Enabler, VSS Provider, and SMI-S Provider Release Notes
Describes new features and any known limitations.
Dell EMC Solutions Enabler Installation and Configuration Guide
Provides host-specific installation instructions.

Dell EMC Solutions Enabler CLI Reference Guide


Documents the SYMCLI commands, daemons, error codes and option file parameters provided
with the Solutions Enabler man pages.

Dell EMC Solutions Enabler Array Controls and Management CLI User Guide
Describes how to configure array control, management, and migration operations using
SYMCLI commands for arrays running HYPERMAX OS and PowerMaxOS.

Dell EMC Solutions Enabler Array Controls and Management CLI User Guide
Describes how to configure array control, management, and migration operations using
SYMCLI commands for arrays running Enginuity.

Dell EMC Solutions Enabler SRDF Family CLI User Guide


Describes how to configure and manage SRDF environments using SYMCLI commands.

SRDF Interfamily Connectivity Information


Defines the versions of PowerMaxOS, HYPERMAX OS and Enginuity that can make up valid
SRDF replication and SRDF/Metro configurations, and can participate in Non-Disruptive
Migration (NDM).

Dell EMC Solutions Enabler TimeFinder SnapVX CLI User Guide


Describes how to configure and manage TimeFinder SnapVX environments using SYMCLI
commands.

Dell EMC Solutions Enabler SRM CLI User Guide


Provides Storage Resource Management (SRM) information related to various data objects
and data handling facilities.

Dell EMC SRDF/Metro vWitness Configuration Guide


Describes how to install, configure and manage SRDF/Metro using vWitness.


Dell EMC Events and Alerts for PowerMax and VMAX User Guide
Documents the SYMAPI daemon messages, asynchronous errors and message events,
SYMCLI return codes, and how to configure event logging.

Embedded NAS (eNAS) documents:


EMC VMAX Embedded NAS Release Notes
Describes the new features and identifies any known functionality restrictions and performance
issues that may exist in the current version.

EMC VMAX Embedded NAS Quick Start Guide


Describes how to configure eNAS on a VMAX3 or VMAX All Flash storage system.

EMC VMAX Embedded NAS File Auto Recovery with SRDF/S


Describes how to install and use EMC File Auto Recovery with SRDF/S.

Dell EMC PowerMax eNAS CLI Reference Guide


A reference for command line users and script programmers that provides the syntax, error
codes, and parameters of all eNAS commands.

ProtectPoint documents:
Dell EMC ProtectPoint Solutions Guide
Provides ProtectPoint information related to various data objects and data handling facilities.

Dell EMC File System Agent Installation and Administration Guide


Shows how to install, configure and manage the ProtectPoint File System Agent.

Dell EMC Database Application Agent Installation and Administration Guide


Shows how to install, configure, and manage the ProtectPoint Database Application Agent.

Dell EMC Microsoft Application Agent Installation and Administration Guide


Shows how to install, configure, and manage the ProtectPoint Microsoft Application Agent.

Note: ProtectPoint has been renamed to Storage Direct and it is included in PowerProtect,
Data Protection Suite for Apps, or Data Protection Suite Enterprise Software Edition.
Mainframe Enablers documents:
Dell EMC Mainframe Enablers Installation and Customization Guide
Describes how to install and configure Mainframe Enablers software.

Dell EMC Mainframe Enablers Release Notes


Describes new features and any known limitations.

Dell EMC Mainframe Enablers Message Guide


Describes the status, warning, and error messages generated by Mainframe Enablers
software.

Dell EMC Mainframe Enablers ResourcePak Base for z/OS Product Guide
Describes how to configure VMAX system control and management using the EMC Symmetrix
Control Facility (EMCSCF).

Dell EMC Mainframe Enablers AutoSwap for z/OS Product Guide


Describes how to use AutoSwap to perform automatic workload swaps between VMAX
systems when the software detects a planned or unplanned outage.


Dell EMC Mainframe Enablers Consistency Groups for z/OS Product Guide
Describes how to use Consistency Groups for z/OS (ConGroup) to ensure the consistency of
data remotely copied by SRDF in the event of a rolling disaster.

Dell EMC Mainframe Enablers SRDF Host Component for z/OS Product Guide
Describes how to use SRDF Host Component to control and monitor remote data replication
processes.

Dell EMC Mainframe Enablers TimeFinder SnapVX and zDP Product Guide
Describes how to use TimeFinder SnapVX and zDP to create and manage space-efficient
targetless snaps.

Dell EMC Mainframe Enablers TimeFinder/Clone Mainframe Snap Facility Product Guide
Describes how to use TimeFinder/Clone, TimeFinder/Snap, and TimeFinder/CG to control
and monitor local data replication processes.

Dell EMC Mainframe Enablers TimeFinder/Mirror for z/OS Product Guide


Describes how to use TimeFinder/Mirror to create Business Continuance Volumes (BCVs)
which can then be established, split, re-established and restored from the source logical
volumes for backup, restore, decision support, or application testing.
Dell EMC Mainframe Enablers TimeFinder Utility for z/OS Product Guide
Describes how to use the TimeFinder Utility to condition volumes and devices.

Geographically Dispersed Disaster Recovery (GDDR) documents:


Dell EMC GDDR for SRDF/S with ConGroup Product Guide
Describes how to use Geographically Dispersed Disaster Restart (GDDR) to automate
business recovery following both planned outages and disaster situations.

Dell EMC GDDR for SRDF/S with AutoSwap Product Guide


Describes how to use Geographically Dispersed Disaster Restart (GDDR) to automate
business recovery following both planned outages and disaster situations.

Dell EMC GDDR for SRDF/Star Product Guide


Describes how to use Geographically Dispersed Disaster Restart (GDDR) to automate
business recovery following both planned outages and disaster situations.

Dell EMC GDDR for SRDF/Star with AutoSwap Product Guide


Describes how to use Geographically Dispersed Disaster Restart (GDDR) to automate
business recovery following both planned outages and disaster situations.

Dell EMC GDDR for SRDF/SQAR with AutoSwap Product Guide


Describes how to use Geographically Dispersed Disaster Restart (GDDR) to automate
business recovery following both planned outages and disaster situations.

Dell EMC GDDR for SRDF/A Product Guide


Describes how to use Geographically Dispersed Disaster Restart (GDDR) to automate
business recovery following both planned outages and disaster situations.

Dell EMC GDDR Message Guide


Describes the status, warning, and error messages generated by GDDR.

Dell EMC GDDR Release Notes


Describes new features and any known limitations.

z/OS Migrator documents:


Dell EMC z/OS Migrator Product Guide


Describes how to use z/OS Migrator to perform volume mirror and migrator functions as well
as logical migration functions.

Dell EMC z/OS Migrator Message Guide


Describes the status, warning, and error messages generated by z/OS Migrator.

Dell EMC z/OS Migrator Release Notes


Describes new features and any known limitations.

z/TPF documents:
Dell EMC ResourcePak for z/TPF Product Guide
Describes how to configure VMAX system control and management in the z/TPF operating
environment.

Dell EMC SRDF Controls for z/TPF Product Guide


Describes how to perform remote replication operations in the z/TPF operating environment.

Dell EMC TimeFinder Controls for z/TPF Product Guide


Describes how to perform local replication operations in the z/TPF operating environment.
Dell EMC z/TPF Suite Release Notes
Describes new features and any known limitations.

Typographical conventions
Dell EMC uses the following type style conventions in this document:

Table 1 Typographical conventions used in this content

Bold               Used for names of interface elements, such as names of windows, dialog boxes,
                   buttons, fields, tab names, key names, and menu paths (what the user
                   specifically selects or clicks)

Italic             Used for full titles of publications referenced in text

Monospace          Used for:
                   l System code
                   l System output, such as an error message or script
                   l Pathnames, filenames, prompts, and syntax
                   l Commands and options

Monospace italic   Used for variables

Monospace bold     Used for user input

[]                 Square brackets enclose optional values

|                  Vertical bar indicates alternate selections - the bar means “or”

{}                 Braces enclose content that the user must specify, such as x or y or z

...                Ellipses indicate nonessential information omitted from the example

Where to get help


Dell EMC support, product, and licensing information can be obtained as follows:


Product information
Dell EMC technical support, documentation, release notes, software updates, or information
about Dell EMC products can be obtained at https://www.dell.com/support/home
(registration required) or https://www.dellemc.com/en-us/documentation/vmax-all-flash-
family.htm.

Technical support
To open a service request through the Dell EMC Online Support (https://www.dell.com/
support/home) site, you must have a valid support agreement. Contact your Dell EMC sales
representative for details about obtaining a valid support agreement or to answer any
questions about your account.

Additional support options


l Support by Product — Dell EMC offers consolidated, product-specific information on the
Web at: https://support.EMC.com/products
The Support by Product web pages offer quick links to Documentation, White Papers,
Advisories (such as frequently used Knowledgebase articles), and Downloads, as well as
more dynamic content, such as presentations, discussion, relevant Customer Support
Forum entries, and a link to Dell EMC Live Chat.
l Dell EMC Live Chat — Open a Chat or instant message session with a Dell EMC Support
Engineer.

eLicensing support
To activate your entitlements and obtain your VMAX license files, visit the Service Center on
Dell EMC Online Support (https://www.dell.com/support/home), as directed on your License
Authorization Code (LAC) letter emailed to you.
l For help with missing or incorrect entitlements after activation (that is, expected
functionality remains unavailable because it is not licensed), contact your Dell EMC
Account Representative or Authorized Reseller.
l For help with any errors applying license files through Solutions Enabler, contact the Dell
EMC Customer Support Center.
l If you are missing a LAC letter, or require further instructions on activating your licenses
through the Online Support site, contact Dell EMC's worldwide Licensing team at
[email protected] or call:
n North America, Latin America, APJK, Australia, New Zealand: SVC4EMC
(800-782-4362) and follow the voice prompts.
n EMEA: +353 (0) 21 4879862 and follow the voice prompts.

Your comments
Your suggestions help us improve the accuracy, organization, and overall quality of the
documentation. Send your comments and feedback to: [email protected]


Revision history
The following table lists the revision history of this document.

Table 2 Revision history

Revision   Description and/or change                                                 Operating system

13         Revised content:                                                          PowerMaxOS 5978.444.444
           l Updated links

12         Updated with new and changed features related to the latest release of    PowerMaxOS 5978.444.444
           the PowerMaxOS

11         Revised content:                                                          HYPERMAX OS 5977.1125.1125
           l Clarify the recommended maximum distance between arrays using SRDF/S

10         Revised content:                                                          HYPERMAX OS 5977.1125.1125
           l Update system capacities

09         Revised content:                                                          HYPERMAX OS 5977.1125.1125
           l Update section on using ProtectPoint for backup and restore operations
           l Add hardware compression to table of SRDF features

08         Revised content:                                                          HYPERMAX OS 5977.1125.1125
           l Update descriptions of the All Flash arrays
           l Add PowerMax and PowerMaxOS to the SRDF chapter

07         New content:                                                              HYPERMAX OS 5977.1125.1125
           l RecoverPoint
           l VMAX 950F support
           l Secure snaps
           l Data at Rest Encryption

06         Revised content:                                                          HYPERMAX OS 5977.952.892
           l Power consumption and heat dissipation numbers for the VMAX 250F
           l SRDF/Metro array witness overview

05         New content:                                                              HYPERMAX OS 5977.952.892
           l VMAX 250F support
           l Inline Compression
           l Mainframe support
           l Non disruptive migration
           l Virtual Witness (vWitness)

04         Removed "RPQ" requirement from Third Party racking.                       HYPERMAX OS 5977.810.784

03         Updated Licensing appendix.                                               HYPERMAX OS 5977.810.784

02         Updated values in the power and heat dissipation specification table.     HYPERMAX OS 5977.691.684 + Q1 2016 Service Pack

01         First release of the VMAX All Flash with EMC HYPERMAX OS 5977 for         HYPERMAX OS 5977.691.684 + Q1 2016 Service Pack
           VMAX 450F, 450FX, 850F, and 850FX.

CHAPTER 1
VMAX All Flash with HYPERMAX OS

This chapter introduces VMAX All Flash systems and the HYPERMAX OS operating environment.

l Introduction to VMAX All Flash with HYPERMAX OS............................................................20


l Software packages ...............................................................................................................23
l HYPERMAX OS.....................................................................................................................25


Introduction to VMAX All Flash with HYPERMAX OS


VMAX All Flash is a range of storage arrays that use only high-density flash drives. The range
contains four models that combine high scale, low latency and rich data services:
l VMAX 250F with a maximum capacity of 1.16 PBe (Petabytes effective)
l VMAX 450F with a maximum capacity of 2.3 PBe
l VMAX 850F with a maximum capacity of 4.4 PBe
l VMAX 950F with a maximum capacity of 4.42 PBe
Each VMAX All Flash array is made up of one or more building blocks known as V-Bricks (in an
open systems array) or zBricks (in a mainframe array). A V-Brick or zBrick consists of:
l An engine with two directors (the redundant data storage processing unit)
l Flash capacity in Drive Array Enclosures (DAEs):
n VMAX 250F: Two 25-slot DAEs with a minimum base capacity of 13TBu
n VMAX 450F, VMAX 850F: Two 120-slot DAEs with a minimum base capacity of 53TBu
n VMAX 950F (open or mixed systems): Two 120-slot DAEs with a minimum base capacity of
53TBu
n VMAX 950F (mainframe systems): Two 120-slot DAEs with a minimum base capacity of
13TBu
l Multiple software packages are available: F and FX packages for open system arrays and zF
and zFX for mainframe arrays.
Customers can increase the initial configuration by adding 11 TBu (250F) or 13 TBu (450F, 850F,
950F) capacity packs that bundle all required flash capacity and software. In open system arrays,
capacity packs are known as Flash capacity packs. In mainframe arrays, they are known as
zCapacity packs. In addition, customers can also scale out the initial configuration by adding
additional V-Bricks or zBricks to increase performance, connectivity, and throughput.
l VMAX 250F All Flash arrays scale from one to two V-Bricks
l VMAX 450F All Flash arrays scale from one to four V-Bricks/zBricks
l VMAX 850F/950F All Flash arrays scale from one to eight V-Bricks/zBricks
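As a hypothetical sizing illustration (actual configurations are quoted through Dell EMC sizing tools): a VMAX 250F that starts with one V-Brick at the 13 TBu base capacity and adds three 11 TBu Flash capacity packs reaches 13 + (3 x 11) = 46 TBu of usable flash, while adding a second V-Brick scales out the engine resources (performance, connectivity, and throughput) available to drive that capacity.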
Independent and linear scaling of both capacity and performance enables VMAX All Flash to be
extremely flexible at addressing varying workloads. The following illustrates scaling opportunities
for VMAX All Flash open system arrays.


Figure 1 VMAX All Flash scale up and out

[Figure shows a V-Brick scaling up by stacking Flash packs (11/13 TBu each*) and scaling out by adding V-Bricks (11/53 TBu each*). Captions: START SMALL, GET BIG; LINEAR SCALE TB's AND IOPS; EASY TO SIZE, CONFIG, ORDER.
* Depending on the VMAX model]

The All Flash arrays:


l Use the powerful Dynamic Virtual Matrix Architecture.
l Deliver high levels of performance and scale. For example, VMAX 950F arrays deliver 6.74M
IOPS (RRH) with less than 0.5 ms latency at 150 GB/sec bandwidth. VMAX 250F, 450F, 850F,
950F arrays deliver consistently low response times (< 0.5ms).
l Provide mainframe (VMAX 450F, 850F, 950F) and open systems (including IBM i) host
connectivity for mission-critical storage needs
l Use the HYPERMAX OS hypervisor to provide file system storage with eNAS and embedded
management services for Unisphere. Embedded Network Attached Storage (eNAS) on page
29 and Embedded Management on page 28 have more information on these features.
l Provide data services such as:
n SRDF remote replication technology with the latest SRDF/Metro functionality
n SnapVX local replication services based on SnapVX infrastructure
n Data protection and encryption
n Access to hybrid cloud
About TimeFinder on page 70 and About CloudArray on page 114 have more information on these
features.
l Use the latest Flash drive technology in V-Bricks/zBricks and capacity packs to deliver a top-
tier, diamond service level.


VMAX All Flash hardware specifications


Detailed specifications of the VMAX All Flash hardware, including capacity, cache memory, I/O
protocols, and I/O connections are available at:
l VMAX 250F and 950F: https://www.emc.com/collateral/specification-sheet/h16051-vmax-
all-flash-250f-950f-ss.pdf
l VMAX 450F and 850F: https://www.emc.com/collateral/specification-sheet/h16052-vmax-
all-flash-450f-850f-ss.pdf


Software packages
VMAX All Flash arrays are available with multiple software packages (F/FX for open system arrays,
and zF/zFX for mainframe arrays) containing standard and optional features.

Table 3 Symbol legend for VMAX All Flash software features/software package

Standard feature with that model/software package.
Optional feature with that model/software package.

Table 4 VMAX All Flash software features by model

Software/Feature VMAX model and software packages

250F 450F 850F, 950F

F FX F FX zF zFX F FX zF zFX See:

HYPERMAX OS HYPERMAX OS on page


25

Embedded Managementa Management Interfaces


on page 41

Mainframe Essentials Plus Mainframe Features on


page 59

SnapVX About TimeFinder on


page 70

AppSync Starter Pack AppSync on page 48

Compression Inline compression on


page 38

Non-Disruptive Migration Non-Disruptive


Migration overview on
page 103

SRDF Remote replication on


page 75

SRDF/Metro SRDF/Metro on page


90

Embedded Network Embedded Network


Attached Storage (eNAS) Attached Storage
(eNAS) on page 29

Unisphere 360 Unisphere 360 on page


43

SRM Storage Resource


Management (SRM) on
page 45

Data at Rest Encryption Data at Rest Encryption


(D@RE) on page 31


CloudArray Enabler CloudArray for


VMAX All Flash on page
113

PowerPath® b b b PowerPath Migration


Enabler on page 108

AppSync Full Suite AppSync on page 48

ProtectPoint ProtectPoint agents on page 54

AutoSwap and zDP Mainframe SnapVX and zDP on page 73

GDDR Geographically Dispersed Disaster Restart (GDDR) on page 44

a. eManagement includes: embedded Unisphere, Solutions Enabler, and SMI-S.


b. The FX package includes 75 PowerPath licenses. Additional licenses are available separately.


HYPERMAX OS
This section highlights the features of the HYPERMAX OS.

What's new in HYPERMAX OS 5977.1125.1125


This section describes new functionality and features that are provided by HYPERMAX OS
5977.1125.1125 for VMAX All Flash arrays.
RecoverPoint
HYPERMAX OS 5977.1125.1125 introduces support for RecoverPoint on VMAX storage arrays.
RecoverPoint is a comprehensive data protection solution that is designed to provide production
data integrity at local and remote sites. RecoverPoint also provides the ability to recover data from
any point in time using journaling technology.
RecoverPoint on page 93 provides more information.
Secure snaps
Secure snaps is an enhancement to the current snapshot technology. Secure snaps prevent
administrators or other high-level users from intentionally or unintentionally deleting snapshot
data. In addition, secure snaps are also immune to automatic failure resulting from running out of
Storage Resource Pool (SRP) or Replication Data Pointer (RDP) space on the array.
Secure snaps on page 72 provides more information.
Support for VMAX 950F
The VMAX 950F All Flash array is designed to meet the needs of the high-end enterprise space. VMAX
950F scales from one to eight V-Bricks/zBricks and provides a maximum of 4 PB effective
capacity.
Introduction to VMAX All Flash with HYPERMAX OS on page 20 provides more information.
Data at Rest Encryption
Data at Rest Encryption (D@RE) now supports the OASIS Key Management Interoperability
Protocol (KMIP) and can integrate with external servers that also support this protocol. This
release has been validated to interoperate with the following KMIP-based key managers:
l Gemalto SafeNet KeySecure
l IBM Security Key Lifecycle Manager
Data at Rest Encryption on page 31 provides more information.
Mixed FBA/CKD drive support for VMAX 950F arrays
HYPERMAX OS 5977.1125.1125 introduces support for mixed FBA and CKD drive configurations.


HYPERMAX OS emulations
HYPERMAX OS provides emulations (executables) that perform specific data service and control
functions in the HYPERMAX environment. The following table lists the available emulations.

Table 5 HYPERMAX OS emulations

Area                Emulation            Description                                               Protocol   Speed(a)

Back-end            DS                   Back-end connection in the array that communicates        SAS        12 Gb/s (VMAX 250F)
                                         with the drives. DS is also known as an internal                     6 Gb/s (VMAX 450F, 850F, and 950F)
                                         drive controller.

                    DX                   Back-end connections that are not used to connect to      FC         16 or 8 Gb/s
                                         hosts. Used by ProtectPoint and CloudArray.
                                         ProtectPoint links Data Domain to the array. DX ports
                                         must be configured for the FC protocol.

Management          IM                   Separates infrastructure tasks and emulations. By         N/A
                                         separating these tasks, emulations can focus on
                                         I/O-specific work only, while IM manages and executes
                                         common infrastructure tasks, such as environmental
                                         monitoring, Field Replacement Unit (FRU) monitoring,
                                         and vaulting.

                    ED                   Middle layer used to separate front-end and back-end      N/A
                                         I/O processing. It acts as a translation layer between
                                         the front-end, which is what the host knows about, and
                                         the back-end, which is the layer that reads, writes,
                                         and communicates with physical storage in the array.

Host connectivity   FA - Fibre Channel   Front-end emulation that:                                            FC - 16 or 8 Gb/s
                    SE - iSCSI           l Receives data from the host (network) and commits                  SE - 10 Gb/s
                    EF - FICON(b)          it to the array                                                    EF - 16 Gb/s
                                         l Sends data from the array to the host/network

Remote replication  RF - Fibre Channel   Interconnects arrays for SRDF.                                       RF - 8 Gb/s SRDF
                    RE - GbE                                                                                  RE - 1 GbE SRDF
                                                                                                              RE - 10 GbE SRDF

a. The 8 Gb/s module auto-negotiates to 2/4/8 Gb/s and the 16 Gb/s module auto-negotiates to 16/8/4 Gb/s using
optical SFP and OM2/OM3/OM4 cabling.
b. Only on VMAX 450F, 850F, and 950F arrays.


Container applications
HYPERMAX OS provides an open application platform for running data services. It includes a
lightweight hypervisor that enables multiple operating environments to run as virtual machines on
the storage array.
Application containers are virtual machines that provide embedded applications on the storage
array. Each container virtualizes the hardware resources that are required by the embedded
application, including:
l Hardware needed to run the software and embedded application (processor, memory, PCI
devices, power management)
l VM ports, to which LUNs are provisioned
l Access to necessary drives (boot, root, swap, persist, shared)

Embedded Management
The eManagement container application embeds management software (Solutions Enabler, SMI-S,
Unisphere for VMAX) on the storage array, enabling you to manage the array without requiring a
dedicated management host.
With eManagement, you can manage a single storage array and any SRDF attached arrays. To
manage multiple storage arrays with a single control pane, use the traditional host-based
management interfaces: Unisphere and Solutions Enabler. To this end, eManagement allows you to
link-and-launch a host-based instance of Unisphere.
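As a simple illustration, the SYMCLI commands documented for host-based Solutions Enabler work the same way against the embedded instance. The commands below are a minimal sketch only; the array ID 1234 is a hypothetical placeholder:

symcfg list
(Lists the local array and any SRDF-attached arrays visible to this Solutions Enabler instance.)

symcfg list -sid 1234 -v
(Displays detailed configuration information for the array with ID 1234.)

symdev list -sid 1234
(Lists the devices configured on that array.)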
eManagement is typically preconfigured and enabled at the factory. However, starting with
HYPERMAX OS 5977.945.890, eManagement can be added to arrays in the field. Contact your
support representative for more information.
Embedded applications require system memory. The following table lists the amount of memory
unavailable to other data services.

Table 6 eManagement resource requirements

VMAX All Flash CPUs Memory Devices supported


model

VMAX 250F 4 16 GB 200K

VMAX 450F 4 16 GB 200K

VMAX 850F, 950F 4 20 GB 400K

Virtual machine ports


Virtual machine (VM) ports are associated with virtual machines to avoid contention with physical
connectivity. VM ports are addressed as ports 32-63 on each director FA emulation.
LUNs are provisioned on VM ports using the same methods as provisioning physical ports.
A VM port can be mapped to one VM only. However, a VM can be mapped to multiple ports.
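For illustration only, the following SYMCLI sketch applies the same auto-provisioning group steps used for physical ports to a VM port. The array ID, group names, and director/port value are hypothetical placeholders, and the storage group and initiator group are assumed to already exist:

symaccess -sid 1234 create -name emgmt_pg -type port -dirport 1D:32
(Creates a port group that contains VM port 32 on director 1D.)

symaccess -sid 1234 create view -name emgmt_mv -sg emgmt_sg -pg emgmt_pg -ig emgmt_ig
(Creates a masking view that provisions the LUNs in the storage group to the VM port through the port group.)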


Embedded Network Attached Storage (eNAS)


eNAS is fully integrated into the VMAX All Flash array. eNAS provides flexible and secure multi-
protocol file sharing (NFS 2.0, 3.0, 4.0/4.1, and CIFS/SMB 3.0) and multiple file server identities
(CIFS and NFS servers). eNAS enables:
l File server consolidation/multi-tenancy
l Built-in asynchronous file level remote replication (File Replicator)
l Built-in Network Data Management Protocol (NDMP)
l VDM Synchronous replication with SRDF/S and optional automatic failover manager File Auto
Recovery (FAR)
l Anti-virus
eNAS provides file data services for:
l Consolidating block and file storage in one infrastructure
l Eliminating the gateway hardware, reducing complexity and costs
l Simplifying management
Consolidated block and file storage reduces costs and complexity while increasing business agility.
Customers can leverage data services across block and file storage including storage provisioning,
dynamic Host I/O Limits, and Data at Rest Encryption.

eNAS solutions and implementation


eNAS runs on standard array hardware and is typically pre-configured at the factory. There is a
one-off setup of the Control Station and Data Movers, containers, control devices, and required
masking views as part of the factory pre-configuration. Additional front-end I/O modules are
required to implement eNAS. However, starting with HYPERMAX OS 5977.945.890, eNAS can be
added to arrays in the field. Contact your support representative for more information.
eNAS uses the HYPERMAX OS hypervisor to create virtual instances of NAS Data Movers and
Control Stations on VMAX All Flash controllers. Control Stations and Data Movers are distributed
within the VMAX All Flash array based upon the number of engines and their associated mirrored
pair.
By default, VMAX All Flash arrays have:
l Two Control Station virtual machines
l Two or more Data Mover virtual machines. The number of Data Movers differs for each model
of the array. All configurations include one standby Data Mover.


eNAS configurations
The storage capacity required for arrays supporting eNAS is at least 680 GB. This table lists eNAS
configurations and front-end I/O modules.

Table 7 eNAS configurations by array

Component                              Description            VMAX 250F   VMAX 450F   VMAX 850F, 950F

Data mover virtual machines(a)         Maximum number         4           4           8(b)
                                       Max capacity/DM        512 TB      512 TB      512 TB
                                       Logical cores(c)       12/24       12/24       16/32/48/64(b)
                                       Memory (GB)(c)         48/96       48/96       48/96/144/192(b)
                                       I/O modules (Max)(c)   12          12(d)       24(d)

Control Station virtual machines (2)   Logical cores          2           2           2
                                       Memory (GB)            8           8           8

NAS Capacity/Array                     Maximum                1.15 PB     1.5 PB      3.5 PB

a. Data movers are added in pairs and must have the same configuration.
b. The 850F and 950F can be configured through Sizer with a maximum of four data movers.
However, six and eight data movers can be ordered by RPQ. As the number of data movers
increases, the maximum number of I/O cards, logical cores, memory, and maximum
capacity also increases.
c. For 2, 4, 6, and 8 data movers, respectively.
d. A single 2-port 10GbE Optical I/O module is required by each Data Mover for initial All-Flash
configurations. However, that I/O module can be replaced with a different I/O module
(such as a 4-port 1GbE or 2-port 10GbE copper) using the normal replacement capability
that exists with any eNAS Data Mover I/O module. In addition, additional I/O modules can
be configured through an I/O module upgrade/add as long as standard rules are followed (no
more than 3 I/O modules per Data Mover, all I/O modules must occupy the same slot on
each director on which a Data Mover resides).

Replication using eNAS


The replication methods available for eNAS file systems are:
l Asynchronous file system level replication using VNX Replicator for File.
l Synchronous replication with SRDF/S using File Auto Recovery (FAR) with the optional File
Auto Recover Manager (FARM).
l Checkpoint (point-in-time, logical images of a production file system) creation and
management using VNX SnapSure.
Note: SRDF/A, SRDF/Metro, and TimeFinder are not available with eNAS.


Data protection and integrity


HYPERMAX OS provides facilities to ensure data integrity and to protect data in the event of a
system failure or power outage:
l RAID levels
l Data at Rest Encryption
l Data erasure
l Block CRC error checks
l Data integrity checks
l Drive monitoring and correction
l Physical memory error correction and error verification
l Drive sparing and direct member sparing
l Vault to flash

RAID levels
VMAX All Flash arrays provide the following RAID levels:
l VMAX 250F: RAID5 (7+1) (Default), RAID5 (3+1) and RAID6 (6+2)
l VMAX 450F, 850F, 950F: RAID5 (7+1) and RAID6 (14+2)

Data at Rest Encryption


Securing sensitive data is an important IT issue that has regulatory and legislative implications.
Several of the most important data security threats relate to protection of the storage
environment. Drive loss and theft are primary risk factors. Data at Rest Encryption (D@RE)
protects data by adding back-end encryption to an entire array.
D@RE provides hardware-based encryption for VMAX All Flash arrays using I/O modules that
incorporate AES-XTS inline data encryption. These modules encrypt and decrypt data as it is being
written to or read from a drive. This protects your information from unauthorized access even
when drives are removed from the array.
D@RE can use either an internal embedded key manager, or one of these external, enterprise-
grade key managers:
l SafeNet KeySecure by Gemalto
l IBM Security Key Lifecycle Manager
D@RE accesses an external key manager using the Key Management Interoperability Protocol
(KMIP). The EMC E-Lab Interoperability Matrix (https://www.emc.com/products/
interoperability/elab.htm) lists the external key managers for each version of HYPERMAX OS.
When D@RE is active, all configured drives are encrypted, including data drives, spares, and drives
with no provisioned volumes. Vault data is encrypted on Flash I/O modules.
D@RE provides:
l Secure replacement for failed drives that cannot be erased.
For some types of drive failures, data erasure is not possible. Without D@RE, if the failed drive
is repaired, data on the drive may be at risk. With D@RE, deletion of the applicable keys makes
the data on the failed drive unreadable.
l Protection against stolen drives.
When a drive is removed from the array, the key stays behind, making data on the drive
unreadable.


l Faster drive sparing.


The drive replacement script destroys the keys associated with the removed drive, quickly
making all data on that drive unreadable.
l Secure array retirement.
Simply delete all copies of keys on the array, and all remaining data is unreadable.
In addition, D@RE:
l Is compatible with all array features and all supported drive types or volume emulations
l Provides encryption without degrading performance or disrupting existing applications and
infrastructure

Enabling D@RE
D@RE is a licensed feature that is installed and configured at the factory. Upgrading an existing
array to use D@RE is possible, but is disruptive. The upgrade requires re-installing the array, and
may involve a full data backup and restore. Before upgrading, plan how to manage any data
already on the array. Dell EMC Professional Services offers services to help you implement D@RE.

D@RE components
Embedded D@RE (Figure 2 on page 33) uses the following components, all of which reside on
the primary Management Module Control Station (MMCS):
l RSA Embedded Data Protection Manager (eDPM)— Embedded key management platform,
which provides onboard encryption key management functions, such as secure key generation,
storage, distribution, and audit.
l RSA BSAFE® cryptographic libraries— Provide security functionality for RSA eDPM Server
(embedded key management) and the EMC KTP client (external key management).
l Common Security Toolkit (CST) Lockbox— Hardware- and software-specific encrypted
repository that securely stores passwords and other sensitive key manager configuration
information. The lockbox binds to a specific MMCS.
External D@RE (Figure 3 on page 33) uses the same components as embedded D@RE, and adds
the following:
l EMC Key Trust Platform (KTP)— Also known as the KMIP Client, this component resides on
the MMCS and communicates with external key managers using the OASIS Key Management
Interoperability Protocol (KMIP) to manage encryption keys.
l External Key Manager— Provides centralized encryption key management capabilities such as
secure key generation, storage, distribution, audit, and enabling Federal Information
Processing Standard (FIPS) 140-2 level 3 validation with High Security Module (HSM).
l Cluster/Replication Group— Multiple external key managers sharing configuration settings
and encryption keys. Configuration and key lifecycle changes made to one node are replicated
to all members within the same cluster or replication group.


Figure 2 D@RE architecture, embedded

[Figure: in the embedded configuration, host I/O arrives over the SAN at the directors and encrypting I/O modules; the RSA eDPM client and server on the MMCS exchange key management traffic over IP; data is encrypted with a unique key per physical drive, while storage configuration and management traffic remain separate from the encrypted data path.]

Figure 3 D@RE architecture, external

[Figure: the external configuration uses the same data path as the embedded configuration, with the Key Trust Platform (KMIP client) on the MMCS exchanging TLS-authenticated KMIP traffic with an external key manager, and a unique key per physical drive.]

External Key Managers


D@RE's external key management is provided by Gemalto SafeNet KeySecure and IBM Security
Key Lifecycle Manager. Keys are generated and distributed using industry standards (NIST 800-57
and ISO 11770). With D@RE, there is no need to replicate keys across volume snapshots or remote
sites. D@RE external key managers can be used with either:
l FIPS 140-2 level 3 validated HSM, in the case of Gemalto SafeNet KeySecure
l FIPS 140-2 level 1 validated software, in the case of IBM Security Key Lifecycle Manager


Encryption keys must be highly available when they are needed, and tightly secured. Keys, and the
information required to use keys (during decryption), must be preserved for the lifetime of the
data. This is critical for encrypted data that is kept for many years.
Key accessibility is vital in high-availability environments. D@RE caches the keys locally, so a
connection to the Key Manager is necessary only for operations such as the initial installation of
the array, replacement of a drive, or drive upgrades.
Lifecycle events involving keys (generation and destruction) are recorded in the array's Audit Log.
Key protection
The local keystore file is encrypted with a 256-bit AES key derived from a randomly generated
password file. This password file is secured in the Common Security Toolkit (CST) Lockbox, which
uses RSA BSAFE technology. The Lockbox is protected using MMCS-specific stable system values
(SSVs) of the primary MMCS. These are the same SSVs that protect Secure Service Credentials
(SSC).
Compromising the MMCS’s drive or copying Lockbox/keystore files off the array causes the SSV
tests to fail. Compromising the entire MMCS only gives an attacker access if they also
successfully compromise SSC.
There are no backdoor keys or passwords to bypass D@RE security.
Key operations
D@RE provides a separate, unique Data Encryption Key (DEK) for each physical drive in the array,
including spare drives. To ensure that D@RE uses the correct key for a given drive:
l DEKs stored in the array include a unique key tag and key metadata; this information is
included with the key material when the DEK is wrapped (encrypted) for use in the array (a
generic key-wrap sketch follows this list).
l During encryption I/O, the expected key tag associated with the drive is supplied separately
from the wrapped key.
l During key unwrap, the encryption hardware checks that the key unwrapped correctly and that
it matches the supplied key tag.
l Information in a reserved system LBA (Physical Information Block, or PHIB) verifies the key
used to encrypt the drive and ensures the drive is in the correct location.
l During initialization, the hardware performs self-tests to ensure that the encryption/
decryption logic is intact.
The self-test prevents silent data corruption due to encryption hardware failures.
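The wrapping format, key tags, and metadata used by HYPERMAX OS are internal to the array. As a generic illustration of the underlying idea only, the sketch below wraps a DEK under a key-encrypting key with standard AES key wrap (RFC 3394) and shows that a tampered blob fails the unwrap integrity check. All names and values are invented for the example, and the third-party Python cryptography package is assumed.

```python
# Generic AES key-wrap concept (RFC 3394) -- illustrative only, not the HYPERMAX OS format.
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap, InvalidUnwrap

kek = os.urandom(32)        # key-encrypting key protecting the keystore (example value)
dek = os.urandom(32)        # per-drive Data Encryption Key (example value)

wrapped_dek = aes_key_wrap(kek, dek)      # stored (encrypted) form of the DEK

# On unwrap, a corrupted or substituted blob fails the built-in integrity check,
# analogous to the key-tag verification described in the list above.
try:
    assert aes_key_unwrap(kek, wrapped_dek) == dek
    aes_key_unwrap(kek, b"\x00" * len(wrapped_dek))
except InvalidUnwrap:
    print("unwrap integrity check failed, as expected for a tampered blob")
```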

Audit logs
The audit log records major activities on an array, including:
l Host-initiated actions
l Physical component changes
l Actions on the MMCS
l D@RE key management events
l Attempts blocked by security controls (Access Controls)
The Audit Log is secure and tamper-proof so event contents cannot be altered. Users with
Auditor access can view, but not modify, the log.

Data erasure
Dell EMC Data Erasure uses specialized software to erase information on arrays. It mitigates the
risk of information dissemination, and helps secure information at the end of the information
lifecycle. Data erasure:


l Protects data from unauthorized access


l Ensures secure data migration by making data on the source array unreadable
l Supports compliance with internal policies and regulatory requirements
Data Erasure overwrites data at the lowest application-addressable level on the drives. The number
of overwrites is configurable from three (the default) to seven, using a combination of random
patterns, on the selected arrays.
An optional certification service is available to provide a certificate of erasure. Drives that fail
erasure are delivered to customers for final disposal.
For individual Flash drives, Secure Erase operations erase all physical flash areas on the drive
which may contain user data.
The available data erasure services are:
l Dell EMC Data Erasure for Full Arrays — Overwrites data on all drives in the system when
replacing, retiring or re-purposing an array.
l Dell EMC Data Erasure/Single Drives — Overwrites data on individual drives.
l Dell EMC Disk Retention — Enables organizations that must retain all media to retain failed
drives.
l Dell EMC Assessment Service for Storage Security — Assesses your information protection
policies and suggests a comprehensive security strategy.
All erasure services are performed on-site in the security of the customer’s data center and include
a Data Erasure Certificate and report of erasure results.

Block CRC error checks


HYPERMAX OS provides:
l Industry-standard, T10 Data Integrity Field (DIF) block cyclic redundancy check (CRC) for
track formats.
For open systems, this enables host-generated DIF CRCs to be stored with user data by the
arrays and used for end-to-end data integrity validation.
l Additional protections for address/control fault modes for increased levels of protection
against faults. These protections are defined in user-definable blocks supported by the T10
standard.
l Address and write status information in the extra bytes in the application tag and reference tag
portion of the block CRC.

Data integrity checks


HYPERMAX OS validates the integrity of data at every possible point during the lifetime of that
data. From the time data enters an array, it is continuously protected by error detection metadata.
This metadata is checked by hardware and software mechanisms any time data is moved within
the array. This allows the array to provide true end-to-end integrity checking and protection
against hardware or software faults.
The protection metadata is appended to the data stream, and contains information describing the
expected data location as well as the CRC representation of the actual data contents. The
expected values to be found in protection metadata are stored persistently in an area separate
from the data stream. The protection metadata is used to validate the logical correctness of data
being moved within the array any time the data transitions between protocol chips, internal
buffers, internal data fabric endpoints, system cache, and system drives.
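One concrete example of such a check is the T10 DIF guard tag described under Block CRC error checks: a 16-bit CRC stored with each block. The sketch below computes a CRC-16 using the T10 DIF polynomial (0x8BB7) over a sample block; it illustrates the principle only and is not the array's internal metadata format.

```python
# CRC-16 with the T10 DIF polynomial (0x8BB7), bit-by-bit; illustrative only.
def crc16_t10_dif(data: bytes) -> int:
    crc = 0x0000
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x8BB7) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

block = bytes(range(256)) * 2              # example 512-byte block
guard_tag = crc16_t10_dif(block)           # value stored alongside the block
print(f"guard tag: 0x{guard_tag:04X}")

# Any later corruption of the block changes the recomputed CRC.
corrupted = bytes([block[0] ^ 0x01]) + block[1:]
assert crc16_t10_dif(corrupted) != guard_tag
```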


Drive monitoring and correction


HYPERMAX OS monitors medium defects by both examining the result of each disk data transfer
and proactively scanning the entire disk during idle time. If a block on the disk is determined to be
bad, the director:
1. Rebuilds the data in the physical storage, if necessary.
2. Rewrites the data in physical storage, if necessary.
The director keeps track of each bad block detected on a drive. If the number of bad blocks
exceeds a predefined threshold, the array proactively invokes a sparing operation to replace the
defective drive, and then alerts Customer Support to arrange for corrective action, if necessary.
With the deferred service model, immediate action is not always required.

Physical memory error correction and error verification


HYPERMAX OS corrects single-bit errors and reports an error code once the single-bit errors
reach a predefined threshold. In the unlikely event that physical memory replacement is required,
the array notifies Customer Support, and a replacement is ordered.

Drive sparing and direct member sparing


When HYPERMAX OS 5977 detects a drive is about to fail or has failed, it starts a direct member
sparing (DMS) process. Direct member sparing looks for available spares within the same engine
that are of the same block size, capacity and speed, with the best available spare always used.
With direct member sparing, the invoked spare is added as another member of the RAID group.
During a drive rebuild, the option to directly copy the data from the failing drive to the invoked
spare drive is available. The failing drive is removed only when the copy process is complete. Direct
member sparing is automatically initiated upon detection of drive-error conditions.
Direct member sparing provides the following benefits:
l The array can copy the data from the failing RAID member (if available), removing the need to
read the data from all of the members and doing the rebuild. Copying to the new RAID member
is less CPU intensive.
l If a failure occurs in another member, the array can still recover the data automatically from
the failing member (if available).
l More than one spare for a RAID group is supported at the same time.

Vault to flash
VMAX All Flash arrays initiate a vault operation when the system is powered down or goes offline,
or if environmental conditions occur, such as the loss of a data center due to an air conditioning
failure.
Each array comes with Standby Power Supply (SPS) modules. On a power loss, the array uses the
SPS power to write the system mirrored cache to flash storage. Vaulted images are fully
redundant; the contents of the system mirrored cache are saved twice to independent flash
storage.

The vault operation


When a vault operation starts:
l During the save part of the vault operation, the VMAX All Flash array stops all I/O. When the
system mirrored cache reaches a consistent state, directors write the contents to the vault
devices, saving two copies of the data. The array then completes the power down, or, if power
down is not required, remains in the offline state.


l During the restore part of the operation, the array's startup program initializes the hardware
and the environmental system, and restores the system mirrored cache contents from the
saved data (while checking data integrity).
The system resumes normal operation when the SPS modules have sufficient charge to complete
another vault operation, if required. If any condition is not safe, the system does not resume
operation and notifies Customer Support for diagnosis and repair. This allows Customer Support to
communicate with the array and restore normal system operations.

Vault configuration considerations


l To support vault to flash, the VMAX All Flash arrays require the following number of flash I/O
modules:
n VMAX 250F: two to six per engine/V-Brick
n VMAX 450F: four to eight per engine/V-Brick/zBrick
n VMAX 850F, 950F: four to eight per engine/V-Brick/zBrick
l The size of the flash module is determined by the amount of system cache and metadata
required for the configuration.
l The vault space is for internal use only and cannot be used for any other purpose when the
system is online.
l The total capacity of all vault flash partitions is sufficient to keep two logical copies of the
persistent portion of the system mirrored cache.


Inline compression
HYPERMAX OS 5977.945.890 introduced support for inline compression on VMAX All Flash
arrays. Inline compression compresses data as it is written to flash drives.
Inline compression is a feature of storage groups. When enabled (this is the default setting), new
I/O to a storage group is compressed when written to disk, while existing data on the storage
group starts to compress in the background. After turning off compression, new I/O is no longer
compressed, and existing data remains compressed until it is written again, at which time it
decompresses.
Inline compression, deduplication, and over-subscription complement each other. Over-
subscription allows presenting larger than needed devices to hosts without having the physical
drives to fully allocate the space represented by the thin devices (Thin device oversubscription on
page 66 has more information on over-subscription). Inline compression further reduces the data
footprint by increasing the effective capacity of the array.
The example in Figure 4 on page 38 shows this. Here, 1.3 PB of host-attached devices (TDEVs) is
over-provisioned to 1.0 PB of back-end devices (TDATs) that reside on 1.0 PB of Flash drives.
Following data compression, the data blocks are compressed by a ratio of 2:1, reducing the number
of Flash drives by half. Basically, with compression enabled, the array requires half as many drives
to support a given front-end capacity.
Figure 4 Inline compression and over-subscription

[Figure shows two views: without compression, 1.3 PB of front-end TDEVs is over-subscribed (1.3:1) to 1.0 PB of back-end TDATs residing on 1.0 PB of Flash drives; with a 2:1 compression ratio, the same 1.3 PB front end and 1.0 PB back end require only 0.5 PB of Flash drives.]
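The capacity arithmetic behind the figure is straightforward; the following back-of-the-envelope sketch simply restates the example numbers (1.3 PB of TDEVs, a 1.3:1 over-subscription ratio, and a 2:1 compression ratio) to show how the required flash capacity is derived. It is an illustration, not a sizing tool.

```python
# Back-of-the-envelope capacity math for the example above (not a sizing tool).
front_end_pb = 1.3          # host-addressable TDEV capacity presented (PB)
oversubscription = 1.3      # front-end : back-end ratio
compression_ratio = 2.0     # effective data reduction with inline compression enabled

back_end_pb = front_end_pb / oversubscription          # 1.0 PB of TDATs
flash_needed_pb = back_end_pb / compression_ratio      # 0.5 PB of Flash drives

print(f"back-end capacity: {back_end_pb:.1f} PB, flash required: {flash_needed_pb:.1f} PB")
```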

Compression is pre-configured on new VMAX All Flash arrays at the factory. Existing VMAX All
Flash arrays in the field can have compression added to them. Contact your Support
Representative for more information.
Further characteristics of compression are:
l All supported data services, such as SnapVX, SRDF, and encryption are supported with
compression.


l Compression is available on open systems (FBA) only (including eNAS). It is not available for
CKD arrays, including those with a mix of FBA and CKD devices. Any open systems array with
compression enabled cannot have CKD devices added to it.
l ProtectPoint operations are still supported to Data Domain arrays, and CloudArray can run on a
compression-enabled array as long as it is in a separate SRP.
l Compression is switched on and off through Solutions Enabler and Unisphere.
l Compression efficiency can be monitored for SRPs, storage groups, and volumes.
l Activity Based Compression: the most active tracks are held in cache and not compressed until
they move from cache to disk. This feature helps improve the overall performance of the array
while reducing wear on the flash drives.

CHAPTER 2
Management Interfaces

This chapter introduces the tools for managing arrays.

l Management interface versions............................................................................................ 42


l Unisphere for VMAX..............................................................................................................42
l Unisphere 360....................................................................................................................... 43
l Solutions Enabler...................................................................................................................43
l Mainframe Enablers...............................................................................................................44
l Geographically Dispersed Disaster Restart (GDDR).............................................................. 44
l SMI-S Provider..................................................................................................................... 45
l VASA Provider...................................................................................................................... 45
l eNAS management interface ................................................................................................45
l Storage Resource Management (SRM)................................................................................ 45
l vStorage APIs for Array Integration...................................................................................... 46
l SRDF Adapter for VMware vCenter Site Recovery Manager.................................................47
l SRDF/Cluster Enabler .......................................................................................................... 47
l Product Suite for z/TPF........................................................................................................47
l SRDF/TimeFinder Manager for IBM i.................................................................................... 48
l AppSync................................................................................................................................48


Management interface versions


The following components provide management capabilities for HYPERMAX OS 5977.1125.1125:
l Unisphere for VMAX V8.4
l Solutions Enabler V8.4
l Mainframe Enablers V8.2
l GDDR V5.0
l SMI-S V8.4
l SRDF/CE V4.2.1
l SRA V6.3
l VASA Provider V8.4

Unisphere for VMAX


Unisphere for VMAX is a web-based application that provides provisioning, management, and
monitoring of arrays.
With Unisphere you can perform the following tasks:

Table 8 Unisphere tasks

Section Allows you to:

Home View and manage functions such as array usage, alert settings,
authentication options, system preferences, user authorizations, and
link and launch client registrations.

Storage View and manage storage groups and storage tiers.

Hosts View and manage initiators, masking views, initiator groups, array
host aliases, and port groups.

Data Protection View and manage local replication, monitor and manage replication
pools, create and view device groups, and monitor and manage
migration sessions.

Performance Monitor and manage array dashboards, perform trend analysis for
future capacity planning, and analyze data.

Databases Troubleshoot database and storage issues, and launch Database Storage Analyzer.

System View and display dashboards, active jobs, alerts, array attributes, and
licenses.

Support View online help for Unisphere tasks.

Unisphere also has a Representational State Transfer (REST) API. With this API you can access
performance and configuration information, and provision storage arrays. You can use the API in
any programming environment that supports standard REST clients, such as web browsers and
programming platforms that can issue HTTP requests.
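As a minimal sketch of calling the REST API from Python (using the third-party requests package), the snippet below issues an authenticated GET. The host name, port, credentials, and the example resource path are illustrative assumptions; consult the Unisphere REST API documentation for the exact resource URIs available in your version.

```python
# Minimal REST GET against a Unisphere server (host, credentials, and URI are examples only).
import requests

UNISPHERE = "https://unisphere.example.com:8443"   # assumed host and port
AUTH = ("smc_user", "smc_password")                # assumed credentials

# Example resource path (assumption) -- check the REST API documentation for your release.
resp = requests.get(
    f"{UNISPHERE}/univmax/restapi/system/version",
    auth=AUTH,
    verify=False,          # example only; use a trusted certificate in practice
)
resp.raise_for_status()
print(resp.json())
```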

Workload Planner
Workload Planner displays performance metrics for applications. Use Workload Planner to:


l Model the impact of migrating a workload from one storage system to another.
l Model proposed new workloads.
l Assess the impact of moving one or more workloads off of a given array running HYPERMAX
OS.
l Determine current and future resource shortfalls that require action to maintain the requested
workloads.

FAST Array Advisor


The FAST Array Advisor wizard guides you through the steps to determine the impact on
performance of migrating a workload from one array to another.
If the wizard determines that the target array can absorb the added workload, it automatically
creates all the auto-provisioning groups required to duplicate the source workload on the target
array.

Unisphere 360
Unisphere 360 is an on-premise management solution that provides a single window across arrays
running HYPERMAX OS at a single site. Use Unisphere 360 to:
l Add a Unisphere server to Unisphere 360 to allow for data collection and reporting of
Unisphere management storage system data.
l View the system health, capacity, alerts and capacity trends for your Data Center.
l View all storage systems from all enrolled Unisphere instances in one place.
l View details on performance and capacity.
l Link and launch to Unisphere instances running V8.2 or higher.
l Manage Unisphere 360 users and configure authentication and authorization rules.
l View details of visible storage arrays, including current and target storage

Solutions Enabler
Solutions Enabler provides a comprehensive command line interface (SYMCLI) to manage your
storage environment.
SYMCLI commands are invoked from a management host, either interactively on the command
line, or using scripts.
SYMCLI is built on functions that use system calls to generate low-level I/O SCSI commands.
Configuration and status information is maintained in a host database file, reducing the number of
enquiries from the host to the arrays.
Use SYMCLI to:
l Configure array software (For example, TimeFinder, SRDF, Open Replicator)
l Monitor device configuration and status
l Perform control operations on devices and data objects
Solutions Enabler also has a Representational State Transfer (REST) API. Use this API to access
performance and configuration information, and provision storage arrays. It can be used in any
programming environment that supports standard REST clients, such as web browsers and
programming platforms that can issue HTTP requests.
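Because SYMCLI commands are ordinary executables, they can be driven directly from scripts. The sketch below simply shells out to the symcfg list command from Python; it assumes Solutions Enabler is installed and the SYMCLI binaries are on the PATH, and it does no output parsing.

```python
# Run a SYMCLI command from a script (assumes Solutions Enabler is installed and on the PATH).
import subprocess

result = subprocess.run(
    ["symcfg", "list"],        # list the arrays visible to this management host
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```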


Mainframe Enablers
The Dell EMC Mainframe Enablers are software components that allow you to monitor and manage
arrays running HYPERMAX OS in a mainframe environment:
l ResourcePak Base for z/OS
Enables communication between mainframe-based applications (provided by Dell EMC or
independent software vendors) and PowerMax/VMAX arrays.
l SRDF Host Component for z/OS
Monitors and controls SRDF processes through commands executed from a host. SRDF
maintains a real-time copy of data at the logical volume level in multiple arrays located in
physically separate sites.
l Dell EMC Consistency Groups for z/OS
Ensures the consistency of data remotely copied by SRDF feature in the event of a rolling
disaster.
l AutoSwap for z/OS
Handles automatic workload swaps between arrays when an unplanned outage or problem is
detected.
l TimeFinder SnapVX
With Mainframe Enablers V8.0 and higher, SnapVX creates point-in-time copies directly in the
Storage Resource Pool (SRP) of the source device, eliminating the concepts of target devices
and source/target pairing. SnapVX point-in-time copies are accessible to the host through a
link mechanism that presents the copy on another device. TimeFinder SnapVX and
HYPERMAX OS support backward compatibility to traditional TimeFinder products, including
TimeFinder/Clone, TimeFinder VP Snap, and TimeFinder/Mirror.
l Data Protector for z Systems (zDP™)
With Mainframe Enablers V8.0 and higher, zDP is deployed on top of SnapVX. zDP provides a
granular level of application recovery from unintended changes to data. zDP achieves this by
providing automated, consistent point-in-time copies of data from which an application-level
recovery can be conducted.
l TimeFinder/Clone Mainframe Snap Facility
Produces point-in-time copies of full volumes or of individual datasets. TimeFinder/Clone
operations involve full volumes or datasets where the amount of data at the source is the same
as the amount of data at the target. TimeFinder VP Snap leverages clone technology to create
space-efficient snaps for thin devices.
l TimeFinder/Mirror for z/OS
Allows the creation of Business Continuance Volumes (BCVs) and provides the ability to
ESTABLISH, SPLIT, RE-ESTABLISH and RESTORE from the source logical volumes.
l TimeFinder Utility
Conditions SPLIT BCVs by relabeling volumes and (optionally) renaming and recataloging
datasets. This allows BCVs to be mounted and used.

Geographically Dispersed Disaster Restart (GDDR)


GDDR automates business recovery following both planned outages and disaster situations,
including the total loss of a data center. Using the VMAX All Flash architecture and the foundation
of SRDF and TimeFinder replication families, GDDR eliminates any single point of failure for
disaster restart plans in mainframe environments. GDDR intelligence automatically adjusts disaster
restart plans based on triggered events.
GDDR does not provide replication and recovery services itself. Rather, GDDR monitors and
automates the services that other Dell EMC products and third-party products provide and that are

required for continuous operations or business restart. GDDR facilitates business continuity by
generating scripts that can be run on demand. For example, scripts to restart business applications
following a major data center incident, or resume replication following unplanned link outages.
Scripts are customized when invoked by an expert system that tailors the steps based on the
configuration and the event that GDDR is managing. Through automatic event detection and end-
to-end automation of managed technologies, GDDR removes human error from the recovery
process and allows it to complete in the shortest time possible.
The GDDR expert system is also invoked to automatically generate planned procedures, such as
moving compute operations from one data center to another. Being able to move from scheduled
DR test weekend activities to regularly scheduled data center swaps without disrupting application
workloads is the gold standard for high-availability compute operations.

SMI-S Provider
Dell EMC SMI-S Provider supports the SNIA Storage Management Initiative (SMI), an ANSI
standard for storage management. This initiative has developed a standard management interface
that resulted in a comprehensive specification (SMI-Specification or SMI-S).
SMI-S defines the open storage management interface, to enable the interoperability of storage
management technologies from multiple vendors. These technologies are used to monitor and
control storage resources in multivendor or SAN topologies.
Solutions Enabler components required for SMI-S Provider operations are included as part of the
SMI-S Provider installation.

VASA Provider
The VASA Provider enables VMAX All Flash management software to inform vCenter of how
VMFS storage, including vVols, is configured and protected. These capabilities are defined by Dell
EMC and include characteristics such as disk type, type of provisioning, storage tiering and remote
replication status. This allows vSphere administrators to make quick and informed decisions about
virtual machine placement. VASA offers the ability for vSphere administrators to complement their
use of plugins and other tools to track how devices hosting VMFS volume are configured to meet
performance and availability needs.

eNAS management interface


You manage eNAS block and file storage using the Unisphere File Dashboard. Link and launch
enables you to run the block and file management GUI within the same session.
The configuration wizard helps you create storage groups (automatically provisioned to the Data
Movers) quickly and easily. Creating a storage group creates a storage pool in Unisphere that can
be used for file level provisioning tasks.

Storage Resource Management (SRM)


SRM provides comprehensive monitoring, reporting, and analysis for heterogeneous block, file,
and virtualized storage environments.
Use SRM to:
l Visualize applications to storage dependencies
l Monitor and analyze configurations and capacity growth
l Optimize your environment to improve return on investment


Virtualization enables businesses to simplify management, control costs, and guarantee uptime.
However, virtualized environments also add layers of complexity to the IT infrastructure that
reduce visibility and can complicate the management of storage resources. SRM addresses these
layers by providing visibility into the physical and virtual relationships to ensure consistent service
levels.
As you build out a cloud infrastructure, SRM helps you ensure storage service levels while
optimizing IT resources — both key attributes of a successful cloud deployment.
SRM is designed for use in heterogeneous environments containing multi-vendor networks, hosts,
and storage devices. The information it collects and the functionality it manages can reside on
technologically disparate devices in geographically diverse locations. SRM moves a step beyond
storage management and provides a platform for cross-domain correlation of device information
and resource topology, and enables a broader view of your storage environment and enterprise
data center.
SRM provides a dashboard view of the storage capacity at an enterprise level through Watch4net.
The Watch4net dashboard view displays information to support decisions regarding storage
capacity.
The Watch4net dashboard consolidates data from multiple ProSphere instances spread across
multiple locations. It gives a quick overview of the overall capacity status in the environment, raw
capacity usage, usable capacity, used capacity by purpose, usable capacity by pools, and service
levels.

vStorage APIs for Array Integration


VMware vStorage APIs for Array Integration (VAAI) optimize server performance by offloading
virtual machine operations to arrays running HYPERMAX OS.
The storage array performs the selected storage tasks, freeing host resources for application
processing and other tasks.
In VMware environments, storage arrays support the following VAAI components:
l Full Copy — (Hardware Accelerated Copy) Faster virtual machine deployments, clones,
snapshots, and VMware Storage vMotion® operations by offloading replication to the storage
array.
l Block Zero — (Hardware Accelerated Zeroing) Initializes file system block and virtual drive
space more rapidly.
l Hardware-Assisted Locking — (Atomic Test and Set) Enables more efficient meta data
updates and assists virtual desktop deployments.
l UNMAP — Enables more efficient space usage for virtual machines by reclaiming space on
datastores that is unused and returns it to the thin provisioning pool from which it was
originally drawn.
l VMware vSphere Storage APIs for Storage Awareness (VASA).
VAAI is native in HYPERMAX OS and does not require additional software, unless eNAS is also
implemented. If eNAS is implemented on the array, support for VAAI requires the VAAI plug-in for
NAS. The plug-in is available from the Dell EMC support website.


SRDF Adapter for VMware vCenter Site Recovery Manager


Dell EMC SRDF Adapter is a Storage Replication Adapter (SRA) that extends the disaster restart
management functionality of VMware vCenter Site Recovery Manager 5.x to arrays running
HYPERMAX OS.
SRA allows Site Recovery Manager to automate storage-based disaster restart operations on
storage arrays in an SRDF configuration.

SRDF/Cluster Enabler
Cluster Enabler (CE) for Microsoft Failover Clusters is a software extension of failover clusters
functionality. Cluster Enabler enables Windows Server 2012 (including R2) Standard and
Datacenter editions running Microsoft Failover Clusters to operate across multiple connected
storage arrays in geographically distributed clusters.
SRDF/Cluster Enabler (SRDF/CE) is a software plug-in module to Dell EMC Cluster Enabler for
Microsoft Failover Clusters software. The Cluster Enabler plug-in architecture consists of a CE
base module component and separately available plug-in modules, which provide your chosen
storage replication technology.
SRDF/CE supports:
l Synchronous and asynchronous mode (SRDF modes of operation on page 85 summarizes
these modes)
l Concurrent and cascaded SRDF configurations (SRDF multi-site solutions on page 79
summarizes these configurations)

Product Suite for z/TPF


The Dell EMC Product Suite for z/TPF is a suite of components that monitor and manage arrays
running HYPERMAX OS from a z/TPF host. z/TPF is an IBM mainframe operating system
characterized by high-volume transaction rates with significant communications content. The
following software components are distributed separately and can be installed individually or in any
combination:
l SRDF Controls for z/TPF
Monitors and controls SRDF processes with functional entries entered at the z/TPF Prime
CRAS (computer room agent set).
l TimeFinder Controls for z/TPF
Provides a business continuance solution consisting of TimeFinder SnapVX, TimeFinder/Clone,
and TimeFinder/Mirror.
l ResourcePak for z/TPF
Provides PowerMax and VMAX configuration and statistical reporting and extended features
for SRDF Controls for z/TPF and TimeFinder Controls for z/TPF.


SRDF/TimeFinder Manager for IBM i


Dell EMC SRDF/TimeFinder Manager for IBM i is a set of host-based utilities that provides an IBM
i interface to SRDF and TimeFinder.
This feature allows you to configure and control SRDF or TimeFinder operations on arrays
attached to IBM i hosts, including:
l SRDF: Configure, establish and split SRDF devices, including:
n SRDF/A
n SRDF/S
n Concurrent SRDF/A
n Concurrent SRDF/S
l TimeFinder:
n Create point-in-time copies of full volumes or individual data sets.
n Create point-in-time snapshots of images.
Extended features
SRDF/TimeFinder Manager for IBM i extended features provide support for the IBM independent
ASP (IASP) functionality.
IASPs are sets of switchable or private auxiliary disk pools (up to 223) that can be brought online/
offline on an IBM i host without affecting the rest of the system.
When combined with SRDF/TimeFinder Manager for IBM i, IASPs let you control SRDF or
TimeFinder operations on arrays attached to IBM i hosts, including:
l Display and assign TimeFinder SnapVX devices.
l Execute SRDF or TimeFinder commands to establish and split SRDF or TimeFinder devices.
l Present one or more target devices containing an IASP image to another host for business
continuance (BC) processes.
Access to extended features control operations is available:
l From the SRDF/TimeFinder Manager menu-driven interface.
l From the command line using SRDF/TimeFinder Manager commands and associated IBM i
commands.

AppSync
Dell EMC AppSync offers a simple, SLA-driven, self-service approach for protecting, restoring,
and cloning critical Microsoft and Oracle applications and VMware environments. After defining
service plans, application owners can protect, restore, and clone production data quickly with
item-level granularity by using the underlying Dell EMC replication technologies. AppSync also
provides an application protection monitoring service that generates alerts when the SLAs are not
met.
AppSync supports the following applications and storage arrays:
l Applications — Oracle, Microsoft SQL Server, Microsoft Exchange, and VMware VMFS and
NFS datastores and File systems.
l Replication Technologies—SRDF, SnapVX, RecoverPoint, XtremIO Snapshot, VNX Advanced
Snapshots, VNXe Unified Snapshot, and ViPR Snapshot.


Note: For VMAX All Flash arrays, AppSync is available in a starter bundle. The AppSync Starter
Bundle provides the license for a scale-limited, yet fully functional version of AppSync. For
more information, refer to the AppSync Starter Bundle with VMAX All Flash Product Brief
available on the Dell EMC Online Support Website.

CHAPTER 3
Open Systems Features

This chapter introduces the open systems features of VMAX All Flash arrays.

l HYPERMAX OS support for open systems............................................................................52


l Backup and restore using ProtectPoint and Data Domain..................................................... 52
l VMware Virtual Volumes....................................................................................................... 55


HYPERMAX OS support for open systems


HYPERMAX OS provides FBA device emulations for open systems and D910 for IBM i.
Any logical device manager software installed on a host can be used with the storage devices.
HYPERMAX OS increases scalability limits from previous generations of arrays, including:
l Maximum device size is 64 TB
l Maximum host addressable devices is 64,000 for each array
l Maximum storage groups, port groups, and masking views is 64,000 for each array
l Maximum devices addressable through each port is 4,000
HYPERMAX OS does not support meta devices, thus it is much more difficult to reach this
limit.
Open Systems-specific provisioning on page 66 has more information on provisioning storage in
an open systems environment.
The Dell EMC Support Matrix in the E-Lab Interoperability Navigator at http://elabnavigator.emc.com
has the most recent information on HYPERMAX open systems capabilities.

Backup and restore using ProtectPoint and Data Domain


Dell EMC ProtectPoint provides data backup and restore facilities for a VMAX All Flash array. A
remote Data Domain array stores the backup copies of the data.
ProtectPoint uses existing features of the VMAX All Flash and Data Domain arrays to create
backup copies and to restore backed up data if necessary. There is no need for any specialized or
additional hardware and software.
This section is a high-level summary of ProtectPoint's backup and restore facilities. It also shows
where to get detailed information about the product, including instructions on how to configure
and manage it.
ProtectPoint has been renamed to Storage Direct and it is included in PowerProtect, Data
Protection Suite for Apps, or Data Protection Suite Enterprise Software Edition.

Backup
A LUN is the basic unit of backup in ProtectPoint. For each LUN, ProtectPoint creates a backup
image on the Data Domain array. You can group backup images to create a backup set. One use of
the backup set is to capture all the data for an application as a point-in-time image.

Backup process
To create a backup of a LUN, ProtectPoint:
1. Uses SnapVX to create a local snapshot of the LUN on the VMAX All Flash array (the primary
storage array).
Once this is created, ProtectPoint and the application can proceed independently of each other,
and the backup process has no further impact on the application.
2. Copies the snapshot to a vdisk on the Data Domain array where it is deduplicated and
cataloged.
On the primary storage array the vdisk appears as a FAST.X encapsulated LUN. The copy of
the snapshot to the vdisk uses existing SnapVX link copy and VMAX All Flash destaging
technologies.


Once the vdisk contains all the data for the LUN, Data Domain converts the data into a static
image. This image then has metadata added to it and Data Domain catalogs the resultant backup
image.
Figure 5 Data flow during a backup operation to Data Domain

Incremental data copy


The first time that ProtectPoint backs up a LUN, it takes a complete copy of its contents using a
SnapVX snapshot. While taking this snapshot, the application assigned to the LUN is paused for a
short period of time. This ensures that ProtectPoint has a copy of the LUN that is application
consistent. To create the first backup image of the LUN, ProtectPoint copies the entire snapshot
to the Data Domain array.
For each subsequent backup of the LUN, ProtectPoint copies only those parts of the LUN that
have changed. This makes best use of the communication links and minimizes the time needed to
create the backup.

Restore
ProtectPoint provides two forms of data restore:
l Object level restore from a selected backup image
l Full application rollback restore

Object level restore


For an object level restore, Data Domain puts the static image from the selected backup image on
a vdisk. As with the backup process, this vdisk on the Data Domain array appears as a FAST.X
encapsulated LUN on the VMAX All Flash array. The administrator can now mount the file system
of the encapsulated LUN, and restore one or more objects to their final destination.

Full application rollback restore


In a full application rollback restore, all the static images in a selected backup set are made
available as vdisks on the Data Domain array and available as FAST.X encapsulated LUNs on the
VMAX All Flash array. From there, the administrator can restore data from the encapsulated LUNs
to their original devices.


ProtectPoint agents
ProtectPoint has three agents, each responsible for backing up and restoring a specific type of
data:
File system agent
Provides facilities to back up, manage, and restore application LUNs.

Database application agent


Provides facilities to back up, manage, and restore DB2 databases, Oracle databases, or SAP
with Oracle database data.

Microsoft application agent


Provides facilities to back up, manage, and restore Microsoft Exchange and Microsoft SQL
databases.

Features used for ProtectPoint backup and restore


ProtectPoint uses existing features of HYPERMAX OS and Data Domain to provide backup and
restore services:
l HYPERMAX OS:
n SnapVX
n FAST.X encapsulated devices
l Data Domain:
n Block services for ProtectPoint
n vdisk services
n FastCopy

ProtectPoint and traditional backup


The ProtectPoint workflow can provide data protection in situations where more traditional
approaches cannot successfully meet the business requirements. This is often due to small or non-
existent backup windows, demanding recovery time objective (RTO) or recovery point objective
(RPO) requirements, or a combination of both.
Unlike traditional backup and recovery, ProtectPoint does not rely on a separate process to
identify the data that needs to be backed up and additional actions to move that data to backup
storage. Instead of using dedicated hardware, host, and network resources, ProtectPoint uses
existing application and storage capabilities to create point-in-time copies of large data sets. The
copies are transported across a storage area network (SAN) to Data Domain systems to protect
the copies while providing deduplication to maximize storage efficiency.
ProtectPoint minimizes the time required to protect large data sets, and allows backups to fit into
the smallest of backup windows to meet demanding RTO or RPO requirements.

More information
There is more information about ProtectPoint, its components, how to configure them, and how to
use them in:
l ProtectPoint Solutions Guide
l File System Agent Installation and Administration Guide


l Database Application Agent Installation and Administration Guide


l Microsoft Application Agent Installation and Administration Guide

VMware Virtual Volumes


VMware Virtual Volumes (vVols) are a storage object developed by VMware to simplify
management and provisioning in virtualized environments. With vVols, the management process
moves from the LUN (data store) to the virtual machine (VM). This level of granularity allows
VMware and cloud administrators to assign specific storage attributes to each VM, according to its
performance and storage requirements. Storage arrays running HYPERMAX OS implement vVols.

vVol components
To support management capabilities of vVols, the storage/vCenter environment requires the
following:
l EMC VMAX VASA Provider – The VASA Provider (VP) is a software plug-in that uses a set of
out-of-band management APIs (VASA version 2.0). The VASA Provider exports storage array
capabilities and presents them to vSphere through the VASA APIs. VVols are managed by way
of vSphere through the VASA Provider APIs (create/delete) and not with the Unisphere for
VMAX user interface or Solutions Enabler CLI. After vVols are setup on the array, Unisphere
and Solutions Enabler only support VVol monitoring and reporting.
l Storage Containers (SC) – Storage containers are chunks of physical storage used to logically
group VVols. SCs are based on the grouping of Virtual Machine Disks (VMDKs) into specific
Service Levels. SC capacity is limited only by hardware capacity. At least one SC per storage
system is required, but multiple SCs per array are allowed. SCs are created and managed on
the array by the Storage Administrator. Unisphere and Solutions Enabler CLI support
management of SCs.
l Protocol Endpoints (PE) – Protocol endpoints are the access points from the hosts to the
array. PEs are compliant with FC and replace the use of LUNs and mount points. vVols are
"bound" to a PE, and the bind and unbind operations are managed through the VP APIs, not
with the Solutions Enabler CLI. Existing multi-path policies and NFS topology requirements can
be applied to the PE. PEs are created and managed on the array by the Storage Administrator.
Unisphere and Solutions Enabler CLI support management of PEs.

Table 9 vVol architecture component management capability

Functionality                                             Component

vVol device management (create, delete)                   VASA Provider APIs / Solutions Enabler APIs
vVol bind management (bind, unbind)

Protocol Endpoint device management (create, delete)      Unisphere/Solutions Enabler CLI
Protocol Endpoint-vVol reporting (list, show)
Storage Container management (create, delete, modify)
Storage container reporting (list, show)

vVol scalability
The vVol scalability limits are:


Table 10 vVol-specific scalability

Requirement Value

Number of vVols/Array 64,000

Number of Snapshots/Virtual Machinea 12

Number of Storage Containers/Array 16

Number of Protocol Endpoints/Array 1/ESXi Host

Maximum number of Protocol Endpoints/Array 1,024

Number of arrays supported /VP 1

Number of vCenters/VP 2

Maximum device size 16 TB

a. vVol Snapshots are managed through vSphere only. You cannot use Unisphere or Solutions
Enabler to create them.

vVol workflow
Requirements
Install and configure these applications:
l Unisphere for VMAX V8.2 or later
l Solutions Enabler CLI V8.2 or later
l VASA Provider V8.2 or later
Instructions for installing Unisphere and Solutions Enabler are in their respective installation
guides. Instructions on installing the VASA Provider are in the Dell EMC PowerMax VASA Provider
Release Notes .
Procedure
The creation of a vVol-based virtual machine involves both the storage administrator and the
VMware administrator:
Storage administrator
The storage administrator uses Unisphere or Solutions Enabler to create the storage and
present it to the VMware environment:
1. Create one or more storage containers on the storage array.
This step defines how much storage and from which service level the VMware user can
provision.
2. Create Protocol Endpoints and provision them to the ESXi hosts.

VMware administrator
The VMware administrator uses the vSphere Web Client to deploy the VM on the storage
array:
1. Add the VASA Provider to the vCenter.
This allows vCenter to communicate with the storage array.
2. Create a vVol datastore from the storage container.


3. Create the VM storage policies.


4. Create the VM in the vVol datastore, selecting one of the VM storage policies.

CHAPTER 4
Mainframe Features

This chapter introduces the mainframe-specific features of VMAX All Flash arrays.

l HYPERMAX OS support for mainframe.................................................................................60


l IBM Z Systems functionality support.................................................................................... 60
l IBM 2107 support...................................................................................................................61
l Logical control unit capabilities.............................................................................................. 61
l Disk drive emulations.............................................................................................................62
l Cascading configurations...................................................................................................... 62


HYPERMAX OS support for mainframe


VMAX 450F, 850F, and 950F arrays can be ordered with the zF and zFX software packages to
support mainframe.
VMAX All Flash arrays provide the following mainframe features:
l Mixed FBA and CKD drive configurations.
l Support for 64, 128, 256 FICON single and multi mode ports, respectively.
l Support for CKD 3380/3390 and FBA devices.
l Mainframe (FICON) and OS FC/iSCSI connectivity.
l High capacity flash drives.
l Up to 16 Gb/s FICON host connectivity.
l Support for Forward Error Correction, Query Host Access, and FICON Dynamic Routing.
l T10 DIF protection for CKD data along the data path (in cache and on disk) to improve
performance for multi-record operations.
l D@RE external key managers. Data at Rest Encryption on page 31 provides more information
on D@RE and external key managers.

IBM Z Systems functionality support


VMAX All Flash arrays support the latest IBM Z Systems enhancements, ensuring that the array
can handle the most demanding mainframe environments:
l zHPF, including support for single track, multi track, List Prefetch, bi-directional transfers,
QSAM/BSAM access, and Format Writes
l zHyperWrite
l Non-Disruptive State Save (NDSS)
l Compatible Native Flash (Flash Copy)
l Concurrent Copy
l Multi-subsystem Imaging
l Parallel Access Volumes (PAV)
l Dynamic Channel Management (DCM)
l Dynamic Parallel Access Volumes/Multiple Allegiance (PAV/MA)
l Peer-to-Peer Remote Copy (PPRC) SoftFence
l Extended Address Volumes (EAV)
l Persistent IU Pacing (Extended Distance FICON)
l HyperPAV
l PDS Search Assist
l Modified Indirect Data Address Word (MIDAW)
l Multiple Allegiance (MA)
l Sequential Data Striping
l Multi-Path Lock Facility
l Product Suite for z/TPF


l HyperSwap
Note: A VMAX All Flash array can participate in a z/OS Global Mirror (XRC) configuration only
as a secondary.

IBM 2107 support


When VMAX All Flash arrays emulate an IBM 2107, they externally represent the array serial
number as an alphanumeric number in order to be compatible with IBM command output.
Internally, the arrays retain a numeric serial number for IBM 2107 emulations. HYPERMAX OS
handles correlation between the alphanumeric and numeric serial numbers.

Logical control unit capabilities


The following table lists logical control unit (LCU) maximum values:

Table 11 Logical control unit maximum values

Capability Maximum value

LCUs per director slice (or port) 255 (within the range of 00 to FE)

LCUs per splita 255

Splits per array 16 (0 to 15)

Devices per split 65,280

LCUs per array 512

Devices per LCU 256

Logical paths per port 2,048

Logical paths per LCU per port (see Table 12 on page 61) 128

Array system host address per array (base and alias) 64K

I/O host connections per array engine 32

a. A split is a logical partition of the storage array, identified by unique devices, SSIDs, and
host serial number. The maximum storage array host address per array is inclusive of all
splits.

The following table lists the maximum LPARs per port based on the number of LCUs with active
paths:
Table 12 Maximum LPARs per port

LCUs with active paths per Maximum volumes Array maximum LPARs per
port supported per port port

16 4K 128

32 8K 64

64 16K 32

128 32K 16



255 64K 8
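The rows in Table 12 follow from the per-port limits in Table 11 (2,048 logical paths per port, 128 logical paths per LCU per port, and 256 devices per LCU). The quick check below reproduces the table values under that reading; the derivation itself is an assumption, and the 4K/8K/16K/32K/64K figures are these products rounded.

```python
# Reproduce the Table 12 rows from the Table 11 per-port limits (illustrative check only).
LOGICAL_PATHS_PER_PORT = 2048
MAX_PATHS_PER_LCU_PER_PORT = 128
DEVICES_PER_LCU = 256

for lcus in (16, 32, 64, 128, 255):
    volumes = lcus * DEVICES_PER_LCU                                    # max volumes per port
    lpars = min(MAX_PATHS_PER_LCU_PER_PORT, LOGICAL_PATHS_PER_PORT // lcus)
    print(f"{lcus:>3} LCUs -> {volumes:>6} volumes, {lpars:>3} LPARs per port")
```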

Disk drive emulations


When VMAX All Flash arrays are configured to mainframe hosts, the data recording format is
Extended CKD (ECKD). The supported CKD emulations are 3380 and 3390.

Cascading configurations
Cascading configurations greatly enhance FICON connectivity between local and remote sites by
using switch-to-switch extensions of the CPU to the FICON network. These cascaded switches
communicate over long distances using a small number of high-speed lines called interswitch links
(ISLs). A maximum of two switches may be connected together within a path between the CPU
and the storage array.
Use of the same switch vendor is required for a cascaded configuration. To support cascading,
each switch vendor requires specific models, hardware features, software features, configuration
settings, and restrictions. Specific IBM CPU models, operating system release levels, host
hardware, and HYPERMAX OS levels are also required.
The Dell EMC Support Matrix, available through E-Lab™ Interoperability Navigator (ELN) at
http://elabnavigator.emc.com has the most up-to-date information on switch support.

CHAPTER 5
Provisioning

This chapter introduces storage provisioning.

l Thin provisioning................................................................................................................... 64
l CloudArray as an external tier............................................................................................... 68


Thin provisioning
VMAX All Flash arrays are configured in the factory with thin provisioning pools ready for use. Thin
provisioning improves capacity utilization and simplifies storage management. It also enables
storage to be allocated and accessed on demand from a pool of storage that services one or many
applications. LUNs can be “grown” over time as space is added to the data pool with no impact to
the host or application. Data is widely striped across physical storage (drives) to deliver better
performance than standard provisioning.
Note: Data devices (TDATs) are pre-configured and created at the factory, while the host-
addressable storage devices (TDEVs) are created by either the customer or customer support,
depending on the environment.
Thin provisioning increases capacity utilization and simplifies storage management by:
l Enabling more storage to be presented to a host than is physically consumed
l Allocating storage only as needed from a shared thin provisioning pool
l Making data layout easier through automated wide striping
l Reducing the steps required to accommodate growth
Thin provisioning allows you to:
l Create host-addressable thin devices (TDEVs) using Unisphere or Solutions Enabler
l Add the TDEVs to a storage group
l Run application workloads on the storage groups
When hosts write to TDEVs, the physical storage is automatically allocated from the default
Storage Resource Pool.
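As a minimal Solutions Enabler sketch (the array ID, device count, size, device number, and
storage group name are illustrative assumptions, not taken from this guide), TDEVs can be
created and grouped as follows:
symconfigure -sid 001 -cmd "create dev count=4, size=100 GB, emulation=FBA, config=TDEV;" commit
symsg -sid 001 create StorageGroup1
symsg -sid 001 -sg StorageGroup1 add dev 001A0
The new TDEVs draw their physical allocations from the default Storage Resource Pool as hosts
write to them.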

Pre-configuration for thin provisioning


VMAX All Flash arrays are custom-built and pre-configured with array-based software
applications, including a factory pre-configuration for thin provisioning that includes:
l Data devices (TDAT) — an internal device that provides physical storage used by thin devices.
l Virtual provisioning pool — a collection of data devices of identical emulation and protection
type, all of which reside on drives of the same technology type and speed. The drives in a data
pool are from the same disk group.
l Disk group — a collection of physical drives within the array that share the same drive
technology and capacity. RAID protection options are configured at the disk group level. Dell
EMC strongly recommends that you use one or more of the RAID data protection schemes for
all data devices.

Table 13 RAID options

RAID 5
Provides distributed parity and striped data across all drives in the RAID group. Options include:
l RAID 5 (3 + 1) — Consists of four drives with parity and data striped across each device.
l RAID 5 (7 + 1) — Consists of eight drives with parity and data striped across each device.
Configuration considerations:
l RAID 5 (3 + 1) provides 75% data storage capacity. Only available with VMAX 250F arrays.
l RAID 5 (7 + 1) provides 87.5% data storage capacity.
l Withstands failure of a single drive within the RAID 5 group.

RAID 6
Provides striped drives with double distributed parity (horizontal and diagonal), the highest
level of availability. Options include:
l RAID 6 (6 + 2) — Consists of eight drives with dual parity and data striped across each device.
l RAID 6 (14 + 2) — Consists of 16 drives with dual parity and data striped across each device.
Configuration considerations:
l RAID 6 (6 + 2) provides 75% data storage capacity. Only available with VMAX 250F arrays.
l RAID 6 (14 + 2) provides 87.5% data storage capacity.
l Withstands failure of two drives within the RAID 6 group.

l Storage Resource Pools — one (default) Storage Resource Pool is pre-configured on the
array. This process is automatic and requires no setup. You cannot modify Storage Resource
Pools, but you can list and display their configuration. You can also generate reports detailing
the demand storage groups are placing on the Storage Resource Pools.

Thin devices (TDEVs)


Note: On VMAX All Flash arrays the thin device is the only device type for front end devices.

Thin devices (TDEVs) have no storage allocated until the first write is issued to the device.
Instead, the array allocates only a minimum allotment of physical storage from the pool, and maps
that storage to a region of the thin device including the area targeted by the write.
These initial minimum allocations are performed in units called thin device extents. Each extent for
a thin device is 1 track (128 KB).
When a read is performed on a device, the data being read is retrieved from the appropriate data
device to which the thin device extent is allocated. Reading an area of a thin device that has not
been mapped does not trigger allocation operations. Reading an unmapped block returns a block in
which each byte is equal to zero.
When more storage is required to service existing or future thin devices, data devices can be
added to existing thin storage groups.


Thin device oversubscription


A thin device can be presented for host use before mapping all of the reported capacity of the
device.
The sum of the reported capacities of the thin devices using a given pool can exceed the available
storage capacity of the pool. When it does, the thin devices are said to be "oversubscribed".
Over-subscription allows presenting larger than needed devices to hosts and applications without
having the physical drives to fully allocate the space represented by the thin devices.

Open Systems-specific provisioning

HYPERMAX host I/O limits for open systems


On open systems, you can define host I/O limits and associate a limit with a storage group. The
I/O limit definitions contain the operating parameters for I/Os per second (IOPS), bandwidth
(MB/s), or both.
When an I/O limit is associated with a storage group, the limit is divided equally among all the
directors in the masking view associated with the storage group. All devices in that storage group
share that limit.
When applications are configured, you can associate the limits with storage groups that contain a
list of devices. A single storage group can only be associated with one limit and a device can only
be in one storage group that has limits associated.
There can be up to 4096 host I/O limits.
Consider the following when using host I/O limits:
l Cascaded host I/O limits control both parent and child storage group limits in a cascaded
storage group configuration.
l Offline and failed director redistribution of quota keeps all available quota usable instead of
losing the quota allocations of offline and failed directors.
l Dynamic host I/O limits support dynamic redistribution of steady state unused director
quota.
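For example, a host I/O limit can be set on a storage group with Solutions Enabler. This is a
minimal sketch; the array ID, storage group name, and limit values are illustrative:
symsg -sid 001 -sg StorageGroup1 set -iops_max 10000
symsg -sid 001 -sg StorageGroup1 set -bw_max 500
The IOPS limit is expressed in I/Os per second and the bandwidth limit in MB/s; all devices in the
storage group share the configured limit.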

Auto-provisioning groups on open systems


You can auto-provision groups on open systems to reduce complexity, execution time, labor cost,
and the risk of error.
Auto-provisioning groups enables users to group initiators, front-end ports, and devices together,
and to build masking views that associate the devices with the ports and initiators.
When a masking view is created, the necessary mapping and masking operations are performed
automatically to provision storage.
After a masking view exists, any changes to its grouping of initiators, ports, or storage devices
automatically propagate throughout the view, automatically updating the mapping and masking as
required.
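The following Solutions Enabler sketch builds a masking view from its component groups. The
group names, WWN, director ports, and device numbers are illustrative assumptions:
symaccess -sid 001 create -name Host1_IG -type initiator -wwn 10000000c9aabbcc
symaccess -sid 001 create -name Host1_PG -type port -dirport 1D:4,2D:4
symaccess -sid 001 create -name Host1_SG -type storage devs 001A0:001A3
symaccess -sid 001 create view -name Host1_MV -sg Host1_SG -pg Host1_PG -ig Host1_IG
Creating the view performs the mapping and masking needed to present the devices in Host1_SG
to the initiators in Host1_IG through the ports in Host1_PG.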


Components of an auto-provisioning group


Figure 6 Auto-provisioning groups (diagram: a masking view associates an initiator group of host
HBAs, a port group of front-end director ports, and a storage group of devices)

Initiator group
A logical grouping of Fibre Channel initiators. An initiator group is limited to either a parent,
which can contain other (child) initiator groups, or a child, which contains host initiators.
Mixing initiators and child initiator groups in the same group is not supported.

Port group
A logical grouping of Fibre Channel front-end director ports. A port group can contain up to 32
ports.

Storage group
A logical grouping of thin devices. LUN addresses are assigned to the devices within the
storage group when the view is created, whether the group is cascaded or stand-alone. Often
there is a correlation between a storage group and a host application. One or more storage
groups may be assigned to an application to simplify management of the system. Storage
groups can also be shared among applications.

Cascaded storage group


A parent storage group comprised of multiple storage groups (parent storage group members)
that contain child storage groups comprised of devices. By assigning child storage groups to
the parent storage group members and applying the masking view to the parent storage
group, the masking view inherits all devices in the corresponding child storage groups.

Masking view
An association between one initiator group, one port group, and one storage group. When a
masking view is created, if a group within the view is a parent, the contents of its child groups
are used. For example, the initiators from the child initiator groups and the devices from the
child storage groups. Depending on the server and application requirements, each server or
group of servers may have one or more masking views that associate a set of thin devices to
an application, server, or cluster of servers.


CloudArray as an external tier


VMAX All Flash can integrate with the CloudArray storage product for the purposes of migration.
By enabling this technology, customers can seamlessly archive older application workloads out to
the cloud, freeing up valuable capacity for newer workloads. Once the older applications are
archived, they are directly available for retrieval at any time.
Manage the CloudArray configuration using the CloudArray management console (setup, cache
encryption, monitoring) and the traditional management interfaces (Unisphere, Solutions Enabler,
API).

CHAPTER 6
Native local replication with TimeFinder

This chapter introduces the local replication features.

l About TimeFinder.................................................................................................................. 70
l Mainframe SnapVX and zDP..................................................................................................73


About TimeFinder
Dell EMC TimeFinder delivers point-in-time copies of volumes that can be used for backups,
decision support, data warehouse refreshes, or any other process that requires parallel access to
production data.
Previous VMAX families offered multiple TimeFinder products, each with their own characteristics
and use cases. These traditional products required a target volume to retain snapshot or clone
data.
HYPERMAX OS introduces TimeFinder SnapVX which provides the best aspects of the traditional
TimeFinder offerings combined with increased scalability and ease-of-use.
TimeFinder SnapVX emulates the following legacy replication products:
l FBA devices:
n TimeFinder/Clone
n TimeFinder/Mirror
n TimeFinder VP Snap
l Mainframe (CKD) devices:
n TimeFinder/Clone
n TimeFinder/Mirror
n TimeFinder/Snap
n Dell EMC Dataset Snap
n IBM FlashCopy (Full Volume and Extent Level)
TimeFinder SnapVX dramatically decreases the impact of snapshots and clones:
l For snapshots, this is done by using redirect on write technology (ROW).
l For clones, this is done by storing changed tracks (deltas) directly in the Storage Resource
Pool of the source device - sharing tracks between snapshot versions and also with the source
device, where possible.
There is no need to specify a target device and source/target pairs. SnapVX supports up to 256
snapshots per volume. Each snapshot can have a name and an automatic expiration date.
Access to snapshots
With SnapVX, a snapshot can be accessed by linking it to a host accessible volume (known as a
target volume). Target volumes are standard VMAX All Flash TDEVs. Up to 1024 target volumes
can be linked to the snapshots of the source volumes. The 1024 links can all be to the same
snapshot of the source volume, or they can be multiple target volumes linked to multiple snapshots
from the same source volume. However, a target volume may be linked only to one snapshot at a
time.
Snapshots can be cascaded from linked targets, and targets can be linked to snapshots of linked
targets. There is no limit to the number of levels of cascading, and the cascade can be broken.
SnapVX links to targets in the following modes:
l Nocopy Mode (Default): SnapVX does not copy data to the linked target volume but still
makes the point-in-time image accessible through pointers to the snapshot. The target device
is modifiable and retains the full image in a space-efficient manner even after unlinking from
the point-in-time.
l Copy Mode: SnapVX copies all relevant tracks from the snapshot's point-in-time image to the
linked target volume. This creates a complete copy of the point-in-time image that remains
available after the target is unlinked.


If an application needs to find a particular point-in-time copy among a large set of snapshots,
SnapVX enables you to link and relink until the correct snapshot is located.
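For example, a snapshot can be linked to a target storage group with Solutions Enabler. This is a
minimal sketch; the array ID, storage group names, and snapshot name are illustrative:
symsnapvx -sid 001 -sg StorageGroup1 -lnsg StorageGroup1_TGT -snapshot_name sg1_snap link
Adding the -copy option to the link operation creates a full copy of the point-in-time data on the
linked target devices instead of the default nocopy behavior.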

Interoperability with legacy TimeFinder products


TimeFinder SnapVX and HYPERMAX OS emulate legacy TimeFinder and IBM FlashCopy
replication products to provide backwards compatibility. You can run your legacy replication
scripts and jobs on VMAX All Flash arrays running TimeFinder SnapVX and HYPERMAX OS
without altering them.
Arrays that run PowerMaxOS 5978.444.444 and later enable coexistence and interoperability of
SnapVX with legacy TimeFinder products. On such an array, a device can simultaneously be the
source of a SnapVX operation and the source of one of these legacy TimeFinder products:
l TimeFinder/Clone
l TimeFinder/Mirror
l TimeFinder VP Snap
The target device of a legacy TimeFinder product cannot be the source device for SnapVX.
Similarly, the target device of SnapVX cannot be the source device for a legacy TimeFinder
product.
Uses for the coexistence of SnapVX with legacy TimeFinder products include:
l A site wants to keep its current, legacy configuration in place while trying out SnapVX.
l Moving to SnapVX may require the deletion of existing legacy sessions and that violates local
business policies.
Note: Coexistence of SnapVX and legacy TimeFinder products is not available when the source
of a SnapVX session is undergoing a restore operation.

Targetless snapshots
With the TimeFinder SnapVX management interfaces you can take a snapshot of an entire
VMAX All Flash Storage Group using a single command. VMAX All Flash supports up to 64K
storage groups, which is enough even in the most demanding environments to provide one for
each application. The storage group construct already exists in most cases because storage
groups are created for masking views. TimeFinder SnapVX uses this existing structure, reducing
the administration required to maintain the application and its replication environment.
Creation of SnapVX snapshots does not require preconfiguration of extra volumes. In turn, this
reduces the amount of cache that SnapVX snapshots use and simplifies implementation. Snapshot
creation and automatic termination can easily be scripted.
The following Solutions Enabler example creates a snapshot with a 2-day retention period. The
command can be scheduled to run as part of a script to create multiple versions of the snapshot.
Each snapshot shares tracks where possible with the other snapshots and the source devices. Use
a cron job or scheduler to run the snapshot script on a schedule to create up to 256 snapshots of
the source volumes; enough for a snapshot every 15 minutes with 2 days of retention:
symsnapvx -sid 001 -sg StorageGroup1 -name sg1_snap establish -ttl -delta 2
If a restore operation is required, any of the snapshots created by this example can be specified.
When the storage group transitions to a restored state, the restore session can be terminated. The
snapshot data is preserved during the restore process and can be used again should the snapshot
data be required for a future restore.
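A restore and its subsequent termination might look like the following sketch, using the same
illustrative names as the earlier establish example:
symsnapvx -sid 001 -sg StorageGroup1 -snapshot_name sg1_snap restore
symsnapvx -sid 001 -sg StorageGroup1 -snapshot_name sg1_snap terminate -restored
The terminate -restored operation ends only the restore session; the snapshot itself remains
available until it expires or is explicitly terminated.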


Secure snaps
Secure snaps prevent administrators or other high-level users from deleting snapshot data,
intentionally or not. Secure snaps are also immune to automatic failure resulting from running
out of Storage Resource Pool (SRP) or Replication Data Pointer (RDP) space on the array.
When the administrator creates a secure snapshot, they assign it an expiration date and time. The
administrator can express the expiration either as a delta from the current date or as an absolute
date. Once the expiration date passes, and if the snapshot has no links, HYPERMAX OS
automatically deletes the snapshot. Before its expiration, administrators can only extend the
expiration date; they cannot shorten the date or delete the snapshot. If a secure snapshot expires,
and it has a volume linked to it, or an active restore session, the snapshot is not deleted. However,
it is no longer considered secure.
Note: Secure snapshots may only be terminated after they expire or by customer-authorized
Dell EMC support. Refer to Knowledgebase article 498316 for more information.
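As an illustrative Solutions Enabler sketch (the array ID, storage group, snapshot name, and
retention period are assumptions), a secure snapshot with a 7-day expiration could be created
with:
symsnapvx -sid 001 -sg StorageGroup1 -name sg1_secure establish -secure -delta 7
The -secure option can also be paired with an absolute expiration date instead of a delta from
the current date.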

Provision multiple environments from a linked target


Use SnapVX to create multiple test and development environments using linked snapshots. To
access a point-in-time copy, create a link from the snapshot data to a host mapped target device.
Each linked storage group can access the same snapshot, or each can access a different snapshot
version in either no copy or copy mode. Changes to the linked volumes do not affect the snapshot
data. To roll back a test or development environment to the original snapshot image, perform a
relink operation.
Figure 7 SnapVX targetless snapshots

Note: Unmount target volumes before issuing the relink command to ensure that the host
operating system does not cache any filesystem data. If accessing through VPLEX, ensure that
you follow the procedure outlined in the technical note VPLEX: Leveraging Array Based and
Native Copy Technologies, available on the Dell EMC support website.
Once the relink is complete, volumes can be remounted.
Snapshot data is unchanged by the linked targets, so the snapshots can also be used to restore
production data.
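A relink that rolls a linked test environment back to the original snapshot image might look like
this sketch (the names and array ID are illustrative):
symsnapvx -sid 001 -sg StorageGroup1 -lnsg TestSG -snapshot_name sg1_snap relink
As noted above, unmount the target volumes before the relink and remount them afterward.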


Cascading snapshots
Presenting sensitive data to test or development environments often requires that the source of
the data be disguised beforehand. Cascaded snapshots provides this separation and disguise, as
shown in the following image.
Figure 8 SnapVX cascaded snapshots

If no change to the data is required before presenting it to the test or development environments,
there is no need to create a cascaded relationship.

Accessing point-in-time copies


To access a point-in time-copy, create a link from the snapshot data to a host mapped target
device. The links may be created in Copy mode for a permanent copy on the target device, or in
NoCopy mode for temporary use. Copy mode links create full-volume, full-copy clones of the data
by copying it to the target device’s Storage Resource Pool. NoCopy mode links are space-saving
snapshots that only consume space for the changed data that is stored in the source device’s
Storage Resource Pool.
HYPERMAX OS supports up to 1,024 linked targets per source device.
Note: When a target is first linked, all of the tracks are undefined. This means that the target
does not know where in the Storage Resource Pool the track is located, and host access to the
target must be derived from the SnapVX metadata. A background process eventually defines
the tracks and updates the thin device to point directly to the track location in the source
device’s Storage Resource Pool.

Mainframe SnapVX and zDP


Data Protector for z Systems (zDP) is a mainframe software solution that is layered on SnapVX on
VMAX All Flash arrays. Using zDP you can recover from logical data corruption with minimal data
loss. zDP achieves this by providing multiple, frequent, consistent point-in-time copies of data
automatically. You can then use these copies to recover an application or the environment to a
point prior to the logical corruption.
By providing easy access to multiple different point-in-time copies of data (with a granularity of
minutes), precise recovery from logical data corruption can be performed using application-based
recovery procedures. zDP results in minimal data loss compared to other methods such as restoring
data from daily or weekly backups.
As shown in Figure 9 on page 74, you can use zDP to create and manage multiple point-in-time
snapshots of volumes. Each snapshot is a pointer-based, point-in-time image of a single volume.
These images are created using the SnapVX feature of HYPERMAX OS. SnapVX is a space-
efficient method for making snapshots of thin devices and consuming additional storage capacity
only when changes are made to the source volume.
There is no need to copy each snapshot to a target volume as SnapVX separates the capturing of a
point-in-time copy from its usage. Capturing a point-in-time copy does not require a target
volume. Using a point-in-time copy from a host requires linking the snapshot to a target volume.
There can be up to 256 snapshots of each source volume.
Figure 9 zDP operation

These snapshots share allocations to the same track image whenever possible while ensuring they
each continue to represent a unique point-in-time image of the source volume. Despite the space
efficiency achieved through shared allocation to unchanged data, additional capacity is required to
preserve the pre-update images of changed tracks captured by each point-in-time snapshot.
The process of implementing zDP has two phases — the planning phase and the implementation
phase.
l The planning phase is done in conjunction with your EMC representative who has access to
tools that can help size the capacity needed for zDP if you are currently a VMAX All Flash user.
l The implementation phase uses the following methods for z/OS:
n A batch interface that allows you to submit jobs to define and manage zDP.
n A zDP run-time environment that executes under SCF to create snapsets.
For details on zDP usage, refer to the TimeFinder SnapVX and zDP Product Guide. For details on
zDP usage in z/TPF, refer to the TimeFinder Controls for z/TPF Product Guide.

CHAPTER 7
Remote replication

This chapter introduces the remote replication facilities.

l Native remote replication with SRDF.....................................................................................76


l SRDF/Metro......................................................................................................................... 90
l RecoverPoint........................................................................................................................ 93
l Remote replication using eNAS............................................................................................. 94


Native remote replication with SRDF


The Dell EMC Symmetrix Remote Data Facility (SRDF) family of products offers a range of array-
based disaster recovery, parallel processing, and data migration solutions for Dell EMC storage
systems, including:
l PowerMaxOS for PowerMax 2000 and 8000 arrays and for VMAX All Flash 450F and 950F
arrays
l HYPERMAX OS for VMAX All Flash 250F, 450F, 850F, and 950F arrays
l HYPERMAX OS for VMAX 100K, 200K, and 400K arrays
l Enginuity for VMAX 10K, 20K, and 40K arrays
SRDF disaster recovery solutions use “active, remote” mirroring and dependent-write logic to
create consistent copies of data. Dependent-write consistency ensures transactional consistency
when the applications are restarted at the remote location. You can tailor your SRDF solution to
meet various Recovery Point Objectives and Recovery Time Objectives.
Using SRDF, you can create complete solutions to:
l Create real-time or dependent-write-consistent copies at 1, 2, or 3 remote arrays.
l Move data quickly over extended distances.
l Provide 3-site disaster recovery with zero data loss recovery, business continuity protection
and disaster-restart.
You can integrate SRDF with other Dell EMC products to create complete solutions to:
l Restart operations after a disaster with zero data loss and business continuity protection.
l Restart operations in cluster environments. For example, Microsoft Cluster Server with
Microsoft Failover Clusters.
l Monitor and automate restart operations on an alternate local or remote server.
l Automate restart operations in VMware environments.


SRDF 2-site solutions


The following table describes SRDF 2-site solutions.
Table 14 SRDF 2-site solutions

SRDF/Synchronous (SRDF/S)
Maintains a real-time copy of production data at a physically separated array. Host writes to the
R1 at the primary site are mirrored synchronously, over a limited distance, to the R2 at the
secondary site.
l No data exposure.
l Ensured consistency protection with SRDF/Consistency Group.
l Recommended maximum distance of 200 km (125 miles) between arrays as application latency
may rise to unacceptable levels at longer distances. (a)

SRDF/Asynchronous (SRDF/A)
Maintains a dependent-write consistent copy of the data on a remote secondary site. The sites
can be an unlimited distance apart. The copy of the data at the secondary site is seconds behind
the primary site.

SRDF/Data Mobility (SRDF/DM)
Enables the fast transfer of data from R1 to R2 devices over extended distances.
l Uses adaptive copy mode to transfer data.
l Designed for migration or data replication purposes, not for disaster restart solutions.

SRDF/Automated Replication (SRDF/AR)
l Combines SRDF and TimeFinder to optimize bandwidth requirements and provide a
long-distance disaster restart solution.
l Operates in 2-site solutions that use SRDF/DM in combination with TimeFinder.

SRDF/Cluster Enabler (CE)
l Integrates SRDF/S or SRDF/A with Microsoft Failover Clusters (MSCS) to automate or
semi-automate site failover.
l Complete solution for restarting operations in cluster environments (MSCS with Microsoft
Failover Clusters).
l Expands the range of cluster storage and management capabilities while ensuring full
protection of the SRDF remote replication.

SRDF and VMware Site Recovery Manager
Completely automates storage-based disaster restart operations for VMware environments in
SRDF topologies.
l The Dell EMC SRDF Adapter enables VMware Site Recovery Manager to automate
storage-based disaster restart operations in SRDF solutions.
l Can address configurations in which data are spread across multiple storage arrays or SRDF
groups.
l Requires that the adapter is installed on each array to facilitate the discovery of arrays and to
initiate failover operations.
l Implemented with:
n SRDF/S
n SRDF/A
n SRDF/Star
n TimeFinder

a. In some circumstances, using SRDF/S over distances greater than 200 km may be feasible.
Contact your Dell EMC representative for more information.


SRDF multi-site solutions


The following table describes SRDF multi-site solutions.

Table 15 SRDF multi-site solutions

SRDF/Automated Replication (SRDF/AR)
l Combines SRDF and TimeFinder to optimize bandwidth requirements and provide a
long-distance disaster restart solution.
l Operates in a 3-site environment that uses a combination of SRDF/S, SRDF/DM, and
TimeFinder.

Concurrent SRDF
3-site disaster recovery and advanced multi-site business continuity protection.
l Data on the primary site is concurrently replicated to 2 secondary sites.
l Replication to the remote site can use SRDF/S, SRDF/A, or adaptive copy.

Cascaded SRDF
3-site disaster recovery and advanced multi-site business continuity protection.
Data on the primary site (Site A) is synchronously mirrored to a secondary site (Site B), and then
asynchronously mirrored from the secondary site to a tertiary site (Site C).

SRDF/Star
3-site data protection and disaster recovery configuration with zero data loss recovery, business
continuity protection and disaster restart.
l Available in 2 configurations:
n Cascaded SRDF/Star
n Concurrent SRDF/Star
l Differential synchronization allows rapid reestablishment of mirroring among surviving sites in
a multi-site disaster recovery implementation.
l Implemented using SRDF consistency groups (CG) with SRDF/S and SRDF/A.

Interfamily compatibility
SRDF supports connectivity between different operating environments and arrays. Arrays running
HYPERMAX OS can connect to legacy arrays running older operating environments. In mixed
configurations where arrays are running different versions, SRDF features of the lowest version
are supported.
VMAX All Flash arrays can connect to:
l PowerMax arrays running PowerMaxOS
l VMAX 250F, 450F, 850F, and 950F arrays running HYPERMAX OS
l VMAX 100K, 200K, and 400K arrays running HYPERMAX OS
l VMAX 10K, 20K, and 40K arrays running Enginuity 5876 with an Enginuity ePack
Note: When you connect between arrays running different operating environments, limitations
may apply. Information about which SRDF features are supported, and applicable limitations
for 2-site and 3-site solutions is in the SRDF Interfamily Connectivity Information.
This interfamily connectivity allows you to add the latest hardware platform/operating
environment to an existing SRDF solution, enabling technology refreshes.

SRDF device pairs


An SRDF device is a logical device paired with another logical device that resides in a second array.
The arrays are connected by SRDF links.
Encapsulated Data Domain devices used for ProtectPoint cannot be part of an SRDF device pair.


Note: ProtectPoint has been renamed to Storage Direct and it is included in the PowerProtect,
Data Protection Suite for Apps, or Data Protection Suite Enterprise Edition software.

R1 and R2 devices
An R1 device is the member of the device pair at the source (production) site. R1 devices are
generally Read/Write accessible to the application host.
An R2 device is the member of the device pair at the target (remote) site. During normal operations,
host I/O writes to the R1 device are mirrored over the SRDF links to the R2 device. In general, data
on R2 devices is not available to the application host while the SRDF relationship is active. In SRDF
synchronous mode, however, an R2 device can be in Read Only mode that allows a host to read
from the R2.
In a typical environment:
l The application production host has Read/Write access to the R1 device.
l An application host connected to the R2 device has Read Only (Write Disabled) access to the
R2 device.
Figure 10 R1 and R2 devices (diagram: the production host has an active Read/Write path to the
R1 device; R1 data copies across the SRDF links to the Write Disabled R2 device, which an
optional remote host reaches Read Only over a recovery path)
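Creating and establishing device pairs is done with the symrdf createpair operation. This is a
minimal sketch; the array ID, SRDF group number, and pairs file are illustrative, and the file
lists local and remote device numbers, one pair per line:
symrdf createpair -sid 001 -rdfg 10 -file device_pairs.txt -type RDF1 -establish
The -establish option starts synchronization from the R1 devices to their R2 partners as soon as
the pairs are created.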


R11 devices
R11 devices operate as the R1 device for two R2 devices. Links to both R2 devices are active.
R11 devices are typically used in 3-site concurrent configurations where data on the R11 site is
mirrored to two secondary (R2) arrays:
Figure 11 R11 device in concurrent SRDF (diagram: an R11 device at source Site A is mirrored to
R2 devices at target Site B and target Site C)


R21 devices
R21 devices have a dual role and are used in cascaded 3-site configurations where:
l Data on the R1 site is synchronously mirrored to a secondary (R21) site, and then
l Asynchronously mirrored from the secondary (R21) site to a tertiary (R2) site:
Figure 12 R21 device in cascaded SRDF (diagram: the production host writes to the R1 at Site A,
which is mirrored over SRDF links to the R21 at Site B and then to the R2 at Site C)

The R21 device acts as an R2 device that receives updates from the R1 device, and as an R1
device that sends updates to the R2 device.
When the R1->R21->R2 SRDF relationship is established, no host has write access to the R21
device.
In arrays that run Enginuity, the R21 device can be diskless. That is, it consists solely of cache
memory and does not have any associated storage device. It acts purely to relay changes in the R1
device to the R2 device. This capability requires the use of thick devices. Systems that run
PowerMaxOS or HYPERMAX OS contain thin devices only, so setting up a diskless R21 device is
not possible on arrays running those environments.


R22 devices
R22 devices:
l Have two R1 devices, only one of which is active at a time.
l Are typically used in cascaded SRDF/Star and concurrent SRDF/Star configurations to
decrease the complexity and time required to complete failover and failback operations.
l Let you recover without removing old SRDF pairs and creating new ones.
Figure 13 R22 devices in cascaded and concurrent SRDF/Star (diagram: in both topologies Site A
and Site B are connected with SRDF/S, and the R22 device at Site C has SRDF/A links from both
sites, only one of which is active at a time)

Dynamic device personalities


SRDF devices can dynamically swap “personality” between R1 and R2. After a personality swap:
l The R1 in the device pair becomes the R2 device, and
l The R2 becomes the R1 device.
Swapping R1/R2 personalities allows the application to be restarted at the remote site without
interrupting replication if an application fails at the production site. After a swap, the R2 side (now
R1) can control operations while being remotely mirrored at the primary (now R2) site.
An R1/R2 personality swap is not supported:
l If the R2 device is larger than the R1 device.
l If the device to be swapped is participating in an active SRDF/A session.
l In SRDF/EDP topologies diskless R11 or R22 devices are not valid end states.
l If the device to be swapped is the target device of any TimeFinder or EMC Compatible flash
operations.


SRDF modes of operation


The SRDF mode of operation determines:
l How R1 devices are remotely mirrored to R2 devices across the SRDF links
l How I/O operations are processed
l When the acknowledgment is returned to the application host that issued an I/O write
command
In SRDF there are three principal modes:
l Synchronous
l Asynchronous
l Adaptive copy

Synchronous mode
Synchronous mode maintains a real-time mirror image of data between the R1 and R2 devices over
distances up to 200 km (125 miles). Host data is written to both arrays in real time. The application
host does not receive the acknowledgment until the data has been stored in the cache of both
arrays.

Asynchronous mode
Asynchronous mode maintains a dependent-write consistent copy between the R1 and R2 device
over unlimited distances. On receiving data from the application host, SRDF on the R1 side of the
link writes that data to its cache. Also it batches the data received into delta sets. Delta sets are
transferred to the R2 device in timed cycles. The application host receives the acknowledgment
once data is successfully written to the cache on the R1 side.

Adaptive copy modes


Adaptive copy modes:
l Transfer large amounts of data without impact on the application host.
l Accumulate write requests that are destined for the R2 device on the R1 side, but not in cache
memory.
l Use a background copy process to send the outstanding write requests to the R2 device.
l Allow the R1 and R2 devices to be out of synchronization by a user-defined maximum skew
value. Once the skew value is exceeded, SRDF transfers the batched data to the R2 device.
l Send the acknowledgment to the application host once the data is successfully written to
cache on the R1 side.
Unlike asynchronous mode, the adaptive copy modes do not guarantee a dependent-write copy of
data on the R2 devices.
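The replication mode of a storage group's SRDF pairs can be changed with Solutions Enabler.
This is a minimal sketch; the array ID, storage group, and SRDF group number are illustrative:
symrdf -sid 001 -sg StorageGroup1 -rdfg 10 set mode sync
symrdf -sid 001 -sg StorageGroup1 -rdfg 10 set mode async
symrdf -sid 001 -sg StorageGroup1 -rdfg 10 set mode acp_disk
Each command switches the named pairs to synchronous, asynchronous, or adaptive copy disk
mode respectively.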


SRDF groups
An SRDF group defines the logical relationship between SRDF devices and directors on both sides
of an SRDF link.
Group properties
The properties of an SRDF group are:
l Label (name)
l Set of ports on the local array used to communicate over the SRDF links
l Set of ports on the remote array used to communicate over the SRDF links
l Local group number
l Remote group number
l One or more pairs of devices
The devices in the group share the ports and associated CPU resources of the port's directors.
Types of group
There are two types of SRDF group:
l Static: defined in the local array's configuration file.
l Dynamic: defined using SRDF management tools, with their properties stored in the array's
cache memory.
On arrays running PowerMaxOS or HYPERMAX OS all SRDF groups are dynamic.
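For example, a dynamic SRDF group can be created with Solutions Enabler. This is a minimal
sketch; the array IDs, group numbers, label, and director:port identifiers are illustrative:
symrdf addgrp -label site_a_b -sid 001 -rdfg 10 -dir 1E:8 -remote_sid 002 -remote_rdfg 10 -remote_dir 1E:8
The label names the group, while the local and remote -rdfg numbers and director ports define
the resources that the group uses on each array.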


Director boards, links, and ports


SRDF links are the logical connections between SRDF groups and their ports. The ports are
physically connected by cables, routers, extenders, switches and other network devices.
Note: Two or more SRDF links per SRDF group are required for redundancy and fault
tolerance.
The relationship between the resources on a director (CPU cores and ports) varies depending on
the operating environment.

HYPERMAX OS
On arrays running HYPERMAX OS:
l The relationship between the SRDF emulation and resources on a director is configurable:
n One director/multiple CPU cores/multiple ports
n Connectivity (ports in the SRDF group) is independent of compute power (number of CPU
cores). You can change the amount of connectivity without changing compute power.
l Each director has up to 16 front end ports, any or all of which can be used by SRDF. Both the
SRDF Gigabit Ethernet and SRDF Fibre Channel emulations can use any port.
l The data path for devices in an SRDF group is not fixed to a single port. Instead, the path for
data is shared across all ports in the group.

Mixed configurations: HYPERMAX OS and Enginuity 5876


For configurations where one array is running Enginuity 5876, and the other array is running
HYPERMAX OS, the following rules apply:
l On the 5876 side, an SRDF group can have the full complement of directors, but no more than
16 ports on the HYPERMAX OS side.
l You can connect to 16 directors using one port each, 2 directors using 8 ports each or any
other combination that does not exceed 16 per SRDF group.

SRDF consistency
Many applications, especially database systems, use dependent write logic to ensure data
integrity. That is, each write operation must complete successfully before the next can begin.
Without write dependency, write operations could get out of sequence resulting in irrecoverable
data loss.
SRDF implements write dependency using the consistency group (also known as SRDF/CG). A
consistency group consists of a set of SRDF devices that use write dependency. For each device
in the group, SRDF ensures that write operations propagate to the corresponding R2 devices in
the correct order.
However, if the propagation of any write operation to any R2 device in the group cannot complete,
SRDF suspends propagation to all of the group's R2 devices. This suspension maintains the integrity of
the data on the R2 devices. While the R2 devices are unavailable, SRDF continues to store write
operations on the R1 devices. It also maintains a list of those write operations in their time order.
When all R2 devices in the group become available, SRDF propagates the outstanding write
operations, in the correct order, for each device in the group.
SRDF/CG is available for both SRDF/S and SRDF/A.
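As a minimal sketch, a consistency group can be built and enabled with Solutions Enabler (the
group name, array ID, and device number are illustrative assumptions):
symcg create ProdConsGrp -type RDF1 -rdf_consistency
symcg -cg ProdConsGrp -sid 001 add dev 001A0
symcg -cg ProdConsGrp enable
Once enabled, consistency protection suspends propagation for every device in the group if any
of its R2 devices cannot be reached.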


Data migration
Data migration is the one-time movement of data from one array to another. Once the movement
is complete, the data is accessed from the secondary array. A common use of migration is to
replace an older array with a new one.
Dell EMC support personnel can assist with the planning and implementation of migration projects.
SRDF multisite configurations enable migration to occur in any of these ways:
l Replace R2 devices.
l Replace R1 devices.
l Replace both R1 and R2 devices simultaneously.
For example, this diagram shows the use of concurrent SRDF to replace the secondary (R2) array
in a 2-site configuration:
Figure 14 Migrating data and removing a secondary (R2) array (diagram: an original 2-site R1 to
R2 configuration, an interim 3-site configuration in which an R11 device replicates to two
secondary arrays, and a final 2-site configuration that uses the new secondary array)

Here:
l The top section of the diagram shows the original, 2-site configuration.
l The lower left section of the diagram shows the interim, 3-site configuration with data being
copied to two secondary arrays.


l The lower right section of the diagram shows the final, 2-site configuration where the new
secondary array has replaced the original one.
The Dell EMC SRDF Introduction contains more information about using SRDF to migrate data.

More information
Here are other Dell EMC documents that contain more information about the use of SRDF in
replication and migration:
SRDF Introduction
SRDF and NDM Interfamily Connectivity Information
SRDF/Cluster Enabler Plug-in Product Guide
Using the Dell EMC Adapter for VMWare Site Recovery Manager Technical Book
Dell EMC SRDF Adapter for VMware Site Recovery Manager Release Notes


SRDF/Metro
In traditional SRDF configurations, only the R1 devices are Read/Write accessible to the
application hosts. The R2 devices are Read Only and Write Disabled.
In SRDF/Metro configurations, however:
l Both the R1 and R2 devices are Read/Write accessible to the application hosts.
l Application hosts can write to both the R1 and R2 side of the device pair.
l R2 devices assume the same external device identity as the R1 devices. The identity includes
the device geometry and device WWN.
This shared identity means that R1 and R2 devices appear to application hosts as a single, virtual
device across two arrays.

Deployment options
SRDF/Metro can be deployed in either a single, multipathed host environment or in a clustered
host environment:
Figure 15 SRDF/Metro (diagram: multi-path and clustered configurations in which hosts have
Read/Write access to both the R1 at Site A and the R2 at Site B, connected by SRDF links)

Hosts can read and write to both the R1 and R2 devices:


l In a single host configuration, a single host issues I/O operations. Multipathing software directs
parallel reads and writes to each array.
l In a clustered host configuration, multiple hosts issue I/O operations. Those hosts access both
sides of the SRDF device pair. Each cluster node has dedicated access to one of the storage
arrays.
l In both configurations, writes to the R1 and R2 devices are synchronously copied to the paired
device in the other array. SRDF/Metro software resolves any write conflicts to maintain
consistent images on the SRDF device pairs.
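An SRDF/Metro pair is created in much the same way as other SRDF pairs, with the addition of
the -metro option. This is a minimal sketch; the array ID, SRDF group, and pairs file are
illustrative:
symrdf createpair -sid 001 -rdfg 20 -file metro_pairs.txt -type RDF1 -metro -establish
By default the pair uses a witness if one is configured; the -use_bias option selects the Device
Bias method instead.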

SRDF/Metro Resilience
If either of the devices in an SRDF/Metro configuration becomes Not Ready, or connectivity
between the devices is lost, SRDF/Metro must decide which side remains available to the
application host. There are two mechanisms that SRDF/Metro can use: Device Bias and Witness.
Device Bias
Device pairs for SRDF/Metro are created with a bias attribute. By default, the create pair
operation sets the bias to the R1 side of the pair. That is, if a device pair becomes Not Ready (NR)
on the SRDF link, the R1 (bias side) remains accessible to the hosts, and the R2 (nonbias side)
becomes inaccessible. However, if there is a failure on the R1 side, the host loses all connectivity
to the device pair. The Device Bias method cannot make the R2 device available to the host.
Witness
A witness is a third party that mediates between the two sides of an SRDF/Metro pair to help:
l Decide which side remains available to the host
l Avoid a "split brain" scenario when both sides attempt to remain accessible to the host despite
the failure
The witness method allows for intelligently choosing on which side to continue operations when
the bias-only method may not result in continued host availability to a surviving, nonbiased array.
There are two forms of the Witness mechanism:
l Array Witness: The operating environment of a third array is the mediator.
l Virtual Witness (vWitness): A daemon running on a separate, virtual machine is the mediator.
When both sides run PowerMaxOS 5978, SRDF/Metro takes these criteria into account when
selecting the side to remain available to the hosts (in priority order):
1. The side that has connectivity to the application host (requires PowerMaxOS 5978.444.444)
2. The side that has a SRDF/A DR leg
3. Whether the SRDF/A DR leg is synchronized
4. The side that has more than 50% of the RA or FA directors that are available
5. The side that is currently the bias side
The first of these criteria that one array has, and the other does not, stops the selection process.
The side with the matched criteria is the preferred winner.


Disaster recovery facilities


Devices in SRDF/Metro groups can simultaneously be part of device groups that replicate data to
a third, disaster-recovery site.
Either or both sides of the Metro region can be replicated. You can choose whichever
configuration suits your business needs. The following diagram shows the possible
configurations:
Note: When the SRDF/Metro session is using a witness, the R1 side of the Metro pair can
change based on the witness determination of the preferred side.
Figure 16 Disaster recovery for SRDF/Metro (diagram: in single-sided replication, either the R11
at Site A or the R21 at Site B replicates to an R2 at Site C using SRDF/A or Adaptive Copy Disk;
in double-sided replication, both sides of the Metro pair replicate to their own disaster recovery
sites)


Note that the device names differ from a standard SRDF/Metro configuration. This reflects the
change in the devices' function when disaster recovery facilities are in place. For instance, when
the R2 side is replicated to a disaster recovery site, its name changes to R21 because it is both the:
l R2 device in the SRDF/Metro configuration
l R1 device in the disaster-recovery configuration

More information
Here are other Dell EMC documents that contain more information on SRDF/Metro:
SRDF Introduction
SRDF/Metro vWitness Configuration Guide
SRDF Interfamily Connectivity Information

RecoverPoint
HYPERMAX OS 5977.1125.1125 introduced support for RecoverPoint on VMAX storage arrays.
RecoverPoint is a comprehensive data protection solution designed to provide production data
integrity at local and remote sites. RecoverPoint also provides the ability to recover data from a
point in time using journaling technology.
The primary reasons for using RecoverPoint are:
l Remote replication to heterogeneous arrays
l Protection against local and remote data corruption
l Disaster recovery
l Secondary device repurposing
l Data migrations
RecoverPoint systems support local and remote replication of data that applications are writing to
SAN-attached storage. The systems use existing Fibre Channel infrastructure to integrate
seamlessly with existing host applications and data storage subsystems. For remote replication,
the systems use existing Fibre Channel connections to send the replicated data over a WAN, or
use Fibre Channel infrastructure to replicate data asynchronously. The systems provide failover of
operations to a secondary site in the event of a disaster at the primary site.
Previous implementations of RecoverPoint relied on a splitter to track changes made to protected
volumes. The current implementation relies on a cluster of RecoverPoint nodes, provisioned with
one or more RecoverPoint storage groups, leveraging SnapVX technology, on the storage array.
Volumes in the RecoverPoint storage groups are visible to all the nodes in the cluster, and available
for replication to other storage arrays.
RecoverPoint allows data replication of up to 8,000 LUNs for each RecoverPoint cluster and up to
eight different RecoverPoint clusters attached to one array. Supported array types include
PowerMax, VMAX All Flash, VMAX3, VMAX, VNX, VPLEX, and XtremIO.
RecoverPoint is licensed and sold separately. For more information about RecoverPoint and its
capabilities see the Dell EMC RecoverPoint Product Guide.


Remote replication using eNAS


File Auto Recovery (FAR) allows you to manually failover or move a virtual Data Mover (VDM)
from a source eNAS system to a destination eNAS system. The failover or move leverages block-
level SRDF synchronous replication, so it incurs zero data loss in the event of an unplanned
operation. This feature consolidates VDMs, file systems, file system checkpoint schedules, CIFS
servers, networking, and VDM configurations into their own separate pools. This feature works for
a recovery where the source is unavailable. For recovery support in the event of an unplanned
failover, there is an option to recover and clean up the source system and make it ready as a future
destination.
The manually initiated failover and reverse operations can be performed using EMC File Auto
Recovery Manager (FARM). FARM can automatically failover a selected sync-replicated VDM on a
source eNAS system to a destination eNAS system. FARM can also monitor sync-replicated VDMs
and trigger automatic failover based on Data Mover, File System, Control Station, or IP network
unavailability that would cause the NAS client to lose access to data.

CHAPTER 8
Blended local and remote replication

This chapter introduces TimeFinder integration with SRDF.

l Integration of SRDF and TimeFinder..................................................................................... 96


l R1 and R2 devices in TimeFinder operations..........................................................................96
l SRDF/AR.............................................................................................................................. 96
l TimeFinder and SRDF/A....................................................................................................... 99
l TimeFinder and SRDF/S....................................................................................................... 99


Integration of SRDF and TimeFinder


You can use TimeFinder and SRDF products to complement each other when you require both
local and remote replication. For example, you can use TimeFinder to create local gold copies of
SRDF devices for recovery operations and for testing disaster recovery solutions.
The key benefits of TimeFinder integration with SRDF include:
l Remote controls simplify automation—Use Dell EMC host-based control software to transfer
commands across the SRDF links. A single command from the host to the primary array can
initiate TimeFinder operations on both the primary and secondary arrays.
l Consistent data images across multiple devices and arrays—SRDF/CG guarantees that a
dependent-write consistent image of production data on the R1 devices is replicated across the
SRDF links.
You can use TimeFinder/CG in an SRDF configuration to create dependent-write consistent local
and remote images of production data across multiple devices and arrays.
Note: Using an SRDF/A single session guarantees dependent-write consistency across the
SRDF links and does not require SRDF/CG. SRDF/A MSC mode requires host software to
manage consistency among multiple sessions.
Note: Some TimeFinder operations are not supported on devices that SRDF protects. The Dell
EMC Solutions Enabler TimeFinder SnapVX CLI User Guide has further information.
The rest of this chapter summarizes the ways of integrating SRDF and TimeFinder.

R1 and R2 devices in TimeFinder operations


You can use TimeFinder to create local replicas of R1 and R2 devices. The following rules apply:
l You can use R1 devices and R2 devices as TimeFinder source devices.
l R1 devices can be the target of TimeFinder operations as long as there is no host accessing the
R1 during the operation.
l R2 devices can be used as TimeFinder target devices if SRDF replication is not active (writing
to the R2 device). To use R2 devices as TimeFinder target devices, first suspend the SRDF
replication session.
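For illustration only, the following is a minimal SYMCLI sketch of one conservative sequence for creating a gold copy of R2 devices. It assumes a device group named prod_dg that contains the SRDF pairs and a storage group named r2_sg on a secondary array with the placeholder ID 0456; exact option names vary by Solutions Enabler release, so verify the syntax in the Dell EMC Solutions Enabler TimeFinder SnapVX CLI User Guide before use.

# Suspend SRDF replication so the R2 devices are not receiving writes (assumed group names)
symrdf -g prod_dg suspend

# Take a SnapVX snapshot of the R2 devices on the secondary array
symsnapvx -sid 0456 -sg r2_sg establish -name r2_gold_copy

# Resume SRDF replication once the snapshot exists
symrdf -g prod_dg resume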

SRDF/AR
SRDF/AR combines SRDF and TimeFinder to provide a long-distance disaster restart solution.
SRDF/AR can be deployed over 2 or 3 sites:
l In 2-site configurations, SRDF/DM is deployed with TimeFinder.
l In 3-site configurations, SRDF/DM is deployed with a combination of SRDF/S and TimeFinder.
The time to create the new replicated consistent image is determined by the time that it takes to
replicate the deltas.


SRDF/AR 2-site configurations


The following image shows a 2-site configuration where the production device (R1) on the primary
array (Site A) is also a TimeFinder target device:
Figure 17 SRDF/AR 2-site solution

[The figure shows a host at Site A and a host at Site B; TimeFinder and the R1 device at Site A, the SRDF link between the sites, and the R2 device with a TimeFinder background copy at Site B.]

In this configuration, data on the SRDF R1/TimeFinder target device is replicated across the SRDF
links to the SRDF R2 device.
The SRDF R2 device is also a TimeFinder source device. TimeFinder replicates this device to a
TimeFinder target device. You can map the TimeFinder target device to the host connected to the
secondary array at Site B.
In a 2-site configuration, SRDF operations are independent of production processing on both the
primary and secondary arrays. You can utilize resources at the secondary site without interrupting
SRDF operations.
Use SRDF/AR 2-site configurations to:
l Reduce required network bandwidth using incremental resynchronization between the SRDF
target sites.
l Reduce network cost and improve resynchronization time for long-distance SRDF
implementations.


SRDF/AR 3-site configurations


SRDF/AR 3-site configurations provide a zero data loss solution at long distances in the event that
the primary site is lost.
The following image shows a 3-site configuration where:
l Site A and Site B are connected using SRDF in synchronous mode.
l Site B and Site C are connected using SRDF in adaptive copy mode.
Figure 18 SRDF/AR 3-site solution

[The figure shows Site A (host and R1), Site B (R2 and R1 with TimeFinder), and Site C (R2 with a TimeFinder copy and host); Site A and Site B are connected by SRDF/S, and Site B and Site C by SRDF adaptive copy.]

If Site A (primary site) fails, the R2 device at Site B provides a restartable copy with zero data
loss. Site C provides an asynchronous restartable copy.
If both Site A and Site B fail, the device at Site C provides a restartable copy with controlled data
loss. The amount of data loss is a function of the replication cycle time between Site B and Site C.
SRDF and TimeFinder control commands to R1 and R2 devices for all sites can be issued from Site
A. No controlling host is required at Site B.
Use SRDF/AR 3-site configurations to:
l Reduce required network bandwidth using incremental resynchronization between the
secondary SRDF target site and the tertiary SRDF target site.
l Reduce network cost and improve resynchronization time for long-distance SRDF
implementations.
l Provide disaster recovery testing, point-in-time backups, decision support operations, third-
party software testing, and application upgrade testing or the testing of new applications.

Requirements/restrictions
In a 3-site SRDF/AR multi-hop configuration, SRDF/S host I/O to Site A is not acknowledged until
Site B has acknowledged it. This can cause a delay in host response time.


TimeFinder and SRDF/A


In SRDF/A solutions, device pacing:
l Prevents cache utilization bottlenecks when the SRDF/A R2 devices are also TimeFinder
source devices.
l Allows R2 or R22 devices at the middle hop to be used as TimeFinder source devices.
Note: Device write pacing is not required in configurations that include HYPERMAX OS and
Enginuity 5876.

TimeFinder and SRDF/S


SRDF/S solutions support any type of TimeFinder copy sessions running on R1 and R2 devices as
long as the conditions described in R1 and R2 devices in TimeFinder operations on page 96 are
met.

CHAPTER 9
Data Migration

This chapter introduces data migration solutions.

l Overview..............................................................................................................................102
l Data migration for open systems......................................................................................... 103
l Data migration for mainframe.............................................................................................. 109


Overview
Data migration is a one-time movement of data from one array (the source) to another array (the
target). Typical examples are data center refreshes where data is moved from an old array after
which the array is retired or re-purposed. Data migration is not data movement due to replication
(where the source data is accessible after the target is created) or data mobility (where the target
is continually updated).
After a data migration operation, applications that access the data reference it at the new location.
To plan a data migration, consider the potential impact on your business, including the:
l Type of data to be migrated
l Site location(s)
l Number of systems and applications
l Amount of data to be moved
l Business needs and schedules
HYPERMAX OS provides migration facilities for:
l Open systems
l IBM System i
l Mainframe


Data migration for open systems


The data migration features available for open system environments are:
l Non-disruptive migration
l Open Replicator
l PowerPath Migration Enabler
l Data migration using SRDF/Data Mobility
l Space and zero-space reclamation

Non-Disruptive Migration overview


Non-Disruptive Migration (NDM) provides a method for migrating data from a source array to a
target array without application host downtime across a metro distance, typically within a data
center. For NDM array operating system version support, please consult the NDM support matrix,
or the SRDF Interfamily Connectivity Guide.
If regulatory or business requirements for DR (disaster recovery) dictate the use of SRDF/S
during migration, contact Dell EMC for required ePacks for SRDF/S configuration.
The NDM operations involved in a typical migration are:
l Environment setup – Configures source and target array infrastructure for the migration
process.
l Create – Duplicates the application storage environment from source array to target array.
l Cutover – Switches the application data access from the source array to the target array and duplicates the application data on the source array to the target array.
l Commit – Removes application resources from the source array and releases the resources
used for migration. Application permanently runs on the target array.
l Environment remove – Removes the migration infrastructure created by the environment setup.
Some key features of NDM are:
l Simple process for migration:
1. Select storage group to migrate.
2. Create the migration session.
3. Discover paths to the host.
4. Cutover (or Ready Target) the storage group to the VMAX3 or VMAX All Flash array.
5. Monitor for synchronization to complete.
6. Commit the migration.
l Allows for inline compression on VMAX All Flash array during migration.
l Maintains snapshot and disaster recovery relationships on the source array; these relationships are not migrated.
l Allows for non-disruptive revert to source array.
l Allows up to 50 concurrent migration sessions.
l Requires no license since it is part of HYPERMAX OS.
l Requires no additional hardware in the data path.
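As an illustration of the workflow above, the following hedged sketch uses the Solutions Enabler symdm command from the control host. The array IDs and the storage group name are placeholders, and option names may differ between releases, so treat it as a sketch rather than a definitive procedure and confirm the syntax in the Solutions Enabler CLI documentation.

# Set up the migration environment between the source and target arrays (placeholder SIDs)
symdm environment -src_sid 0123 -tgt_sid 0456 -setup

# Duplicate the application storage environment for storage group app_sg on the target array
symdm create -src_sid 0123 -tgt_sid 0456 -sg app_sg

# After host paths to the target array are discovered, switch data access to the target
symdm cutover -src_sid 0123 -tgt_sid 0456 -sg app_sg

# When synchronization completes, commit the migration and remove the environment
symdm commit -src_sid 0123 -tgt_sid 0456 -sg app_sg
symdm environment -src_sid 0123 -tgt_sid 0456 -remove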
The following graphic shows the connections required between the host (single or cluster) and the
source and target array, and the SRDF connection between the two arrays.


Figure 19 Non-Disruptive Migration zoning

The application host connection to both arrays uses FC, and the SRDF connection between the arrays uses FC or GigE.
The migration controls should be run from a control host and not from the application host. The
control host should have visibility to both the source array and target array.
The following devices and components are not supported with NDM:
l CKD devices
l eNAS data
l ProtectPoint, FAST.X relationships and associated data

Environmental requirements for Non-Disruptive Migration


The following configurations are required for a successful data migration:
Array configuration
l The target array must be running HYPERMAX OS 5977.811.784 or higher. This includes VMAX3
Family arrays and VMAX All Flash arrays.
l The source array must be a VMAX array running Enginuity 5876 with required ePack (contact
Dell EMC for required ePack).
l SRDF is used for data migration, so zoning of SRDF ports between the source and target
arrays is required. Note that an SRDF license is not required, as there is no charge for NDM.
l The NDM RDF group is configured with a minimum of two paths on different directors for
redundancy and fault tolerance. If more paths are found, up to eight paths are configured.
l If SRDF is not normally used in the migration environment, it may be necessary to install and
configure RDF directors and ports on both the source and target arrays and physically
configure SAN connectivity.
Host configuration
l The migration controls should be run from a control host and not from the application host.
l Both the source and the target arrays must be visible to the controlling host that runs the migration commands.


Pre-migration rules and restrictions for NDM


In addition to general configuration requirements of the migration environment, the following rules
and restrictions apply before starting a migration.
l A Storage Group is the data container that is migrated, and the requirements that apply to the
group and its devices are:
n Storage groups must have masking views. All devices in the group on the source array must
be visible only through a masking view. Each device must be mapped only to a port that is
part of the masking view.
n Multiple masking views on a storage group using the same initiator group are valid only
when:
– Port groups on the target array exist for each masking view, and
– Ports in the port group are selected
n A storage group must be a parent or stand-alone group. A child storage group with a
masking view on the child group is not supported.
n If the selected storage group is a parent, its child groups are also migrated.
n The names of storage groups and their children (if any) must not exist on the target array.
n Gatekeeper devices in a storage group are not migrated.
l Devices cannot:
n Have a mobility ID
n Have a nonbirth identity, when the source array runs Enginuity 5876
n Have the BCV attribute
n Be encapsulated
n Be RP devices
n Be Data Domain devices
n Be vVOL devices
n Be R2 or Concurrent SRDF devices
n Be masked to FCoE (in the case of source arrays), iSCSI, non-ACLX, or NVMe over FC
ports
n Be part of another data migration operation
n Be part of an ORS relationship
n Be in other masked storage groups
n Have a device status of Not Ready
l Devices can be part of TimeFinder sessions.
l Devices can act as R1 devices but cannot be part of an SRDF/Star or SRDF/SQAR configuration.
l The names of masking groups to migrate must not exist on the target array.
l The names of initiator groups to migrate may exist on the target array. However, the
aggregate set of host initiators in the initiator groups that the masking groups use must be the
same. Also, the effective ports flags on the host initiators must have the same setting on both
arrays.
l The names of port groups to migrate may exist on the target array, as long as the groups on
the target array are in the logging history table for at least one port.


l The status of the target array must be as follows:


n If a target-side Storage Resource Pool (SRP) is specified for the migration, that SRP must exist on the target array.
n The SRP to be used for target-side storage must have enough free capacity to support the
migration.
n The target side must be able to support the additional devices required to receive the
source-side data.
n All initiators provisioned to an application on the source array must also be logged into ports
on the target array.
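For example, a hedged sketch of confirming some of these prerequisites from the control host before starting a migration; the group names, view name, and array IDs are placeholders, and output details vary by Solutions Enabler release.

# Confirm the storage group to be migrated and any child groups on the source array
symsg -sid 0123 show app_sg

# Confirm the masking view (initiator group, port group, storage group) for the application
symaccess -sid 0123 show view app_mv

# Confirm that the target-side SRP exists and has free capacity for the migration
symcfg -sid 0456 list -srp -detail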

Migration infrastructure - RDF device pairing


RDF device pairing is done during the create operation, with the following actions occurring on the
device pairs.
l NDM creates RDF device pairs, in a DM RDF group, between devices on the source array and
the devices on the target array.
l Once device pairing is complete, NDM controls the data flow between both sides of the migration process.
l Once the migration is complete, the RDF pairs are deleted when the migration is committed.
l Other RDF pairs may exist in the DM RDF group if another migration is still in progress.
Due to differences in device attributes between the source and target array, the following rules
apply during migration:
l Any source array device that has an odd number of cylinders is migrated to a device on the
target array that has Geometry Compatibility Mode (GCM).
l Any source array meta device is migrated to a non-meta device on the target array.
Once the copying of data to the target array has begun, the target devices can have SRDF mirrors
(R2 devices) added to them for remote replication. However, the mirror devices cannot be:
l Enabled for MSC or Synchronous SRDF Consistency
l Part of an SRDF/Star, SRDF/SQAR, or SRDF/Metro configuration


Open Replicator
Open Replicator enables copying data (full or incremental copies) from qualified arrays within a
storage area network (SAN) infrastructure to or from arrays running HYPERMAX OS. Open
Replicator uses the Solutions Enabler SYMCLI symrcopy command.
Use Open Replicator to migrate and back up/archive existing data between arrays running HYPERMAX OS and third-party storage arrays within the SAN infrastructure without interfering with host applications and ongoing business operations.
Use Open Replicator to:
l Pull from source volumes on qualified remote arrays to a volume on an array running HYPERMAX OS.
l Perform online data migrations from qualified storage to an array running HYPERMAX OS with minimal disruption to host applications.
NOTICE Open Replicator cannot copy a volume that is in use by TimeFinder.

Open Replicator operations


Open Replicator uses the following terminology:
Control
The recipient array and its devices are referred to as the control side of the copy operation.

Remote
The donor Dell EMC arrays or third-party arrays on the SAN are referred to as the remote
array/devices.

Hot
The Control device is Read/Write online to the host while the copy operation is in progress.
Note: Hot push operations are not supported on arrays running HYPERMAX OS.

Cold
The Control device is Not Ready (offline) to the host while the copy operation is in progress.

Pull
A pull operation copies data to the control device from the remote device(s).

Push
A push operation copies data from the control device to the remote device(s).

Pull operations
On arrays running HYPERMAX OS, Open Replicator (through the Solutions Enabler SYMCLI symrcopy command) supports up to 512 pull sessions.
For pull operations, the volume can be in a live state during the copy process. The local hosts and
applications can begin to access the data as soon as the session begins, even before the data copy
process has completed.
These features enable rapid and efficient restoration of remotely vaulted volumes and migration
from other storage platforms.


Copy on First Access ensures the appropriate data is available to a host operation when it is
needed. The following image shows an Open Replicator hot pull.
Figure 20 Open Replicator hot (or live) pull

[The figure shows STD devices on the array running HYPERMAX OS being populated by a hot pull of a point-in-time (PiT) copy from a remote array while the control devices remain online to the host.]

The pull can also be performed in cold mode to a static volume. The following image shows an
Open Replicator cold pull.
Figure 21 Open Replicator cold (or point-in-time) pull
[The figure shows a cold (point-in-time) pull between STD devices on the array running HYPERMAX OS and Target devices on a remote array.]
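A hedged sketch of a cold pull session using the symrcopy command described above follows. The device-pairs file format, the example WWN, and the option names shown are illustrative, so verify them against the Solutions Enabler migration CLI documentation before use.

# device_pairs.txt pairs each control (local) device with a remote device, for example:
#   symdev=0456:00123 wwn=60060160B4E02F00A1B2C3D4E5F60789

# Create a cold pull session (the control devices are Not Ready to the host during the copy)
symrcopy create -file device_pairs.txt -copy -pull -cold -name or_pull_session

# Start the copy and monitor progress until it completes
symrcopy activate -file device_pairs.txt
symrcopy query -file device_pairs.txt

# Remove the session after the copy completes
symrcopy terminate -file device_pairs.txt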

PowerPath Migration Enabler


Dell EMC PowerPath is host-based software that provides automated data path management and
load-balancing capabilities for heterogeneous server, network, and storage deployed in physical
and virtual environments. PowerPath includes a migration tool called PowerPath Migration Enabler
(PPME). PPME enables non-disruptive or minimally disruptive data migration between storage
systems or within a single storage system.
PPME allows applications continued data access throughout the migration process. PPME
integrates with other technologies to minimize or eliminate application downtime during data
migration.
PPME works in conjunction with underlying technologies, such as Open Replicator, SnapVX, and
Host Copy.
Note: PowerPath Multipathing must be installed on the host machine.

The following documentation provides additional information:


l Dell EMC Support Matrix PowerPath Family Protocol Support
l Dell EMC PowerPath Migration Enabler User Guide


Data migration using SRDF/Data Mobility


SRDF/Data Mobility (DM) uses SRDF's adaptive copy mode to transfer large amounts of data
without impact to the host.
SRDF/DM supports data replication or migration between two or more arrays running HYPERMAX
OS. Adaptive copy mode enables applications using the primary volume to avoid propagation
delays while data is transferred to the remote site. SRDF/DM can be used for local or remote
transfers.
Data migration on page 88 has more information about using SRDF to migrate data.
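A minimal sketch, assuming a device group named mig_dg that contains the SRDF pairs used for the migration; adaptive copy disk mode lets the bulk transfer proceed without adding propagation delay to host writes. Exact usage may vary by Solutions Enabler release.

# Put the SRDF pairs into adaptive copy disk mode for bulk data movement
symrdf -g mig_dg set mode acp_disk

# Start copying data to the remote array and watch the invalid-track counts drain to zero
symrdf -g mig_dg establish
symrdf -g mig_dg query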

Space and zero-space reclamation


Space reclamation reclaims unused space following a replication or migration activity from a regular device to a thin device, where software tools, such as Open Replicator and Open Migrator, copied all-zero, unused space to a target thin volume.
Space reclamation deallocates data chunks that contain all zeros. Space reclamation is most
effective for migrations from standard, fully provisioned devices to thin devices. Space
reclamation is non-disruptive and can be executed while the targeted thin device is fully available
to operating systems and applications.
Zero-space reclamation provides instant zero detection during Open Replicator and SRDF
migration operations by reclaiming all-zero space, including both host-unwritten extents (or
chunks) and chunks that contain all zeros due to file system or database formatting.
Solutions Enabler and Unisphere can be used to initiate and monitor the space reclamation
process.

Data migration for mainframe


For mainframe environments, z/OS Migrator provides non-disruptive migration from any vendor
storage to VMAX All Flash or VMAX arrays. z/OS Migrator can also migrate data from one VMAX
or VMAX All Flash array to another VMAX or VMAX All Flash array. With z/OS Migrator, you can:
l Introduce new storage subsystem technologies with minimal disruption of service.
l Reclaim z/OS UCBs by simplifying the migration of datasets to larger volumes (combining
volumes).
l Facilitate data migration while applications continue to run and fully access data being
migrated, eliminating application downtime usually required when migrating data.
l Eliminate the need to coordinate application downtime across the business, and eliminate the
costly impact of such downtime on the business.
l Improve application performance by facilitating the relocation of poor performing datasets to
lesser used volumes/storage arrays.
l Ensure all metadata always accurately reflects the location and status of datasets being
migrated.
Refer to the Dell EMC z/OS Migrator Product Guide for detailed product information.


Volume migration using z/OS Migrator


EMC z/OS Migrator is a host-based data migration facility that performs traditional volume
migrations as well as host-based volume mirroring. Together, these capabilities are referred to as
the volume mirror and migrator functions of z/OS Migrator.
Figure 22 z/OS volume migration

Volume level data migration facilities move logical volumes in their entirety. z/OS Migrator volume
migration is performed on a track for track basis without regard to the logical contents of the
volumes involved. Volume migrations end in a volume swap which is entirely non-disruptive to any
applications using the data on the volumes.

Volume migrator
Volume migration provides host-based services for data migration at the volume level on
mainframe systems. It provides migration from third-party devices to devices on Dell EMC arrays
as well as migration between devices on Dell EMC arrays.

Volume mirror
Volume mirroring provides mainframe installations with volume-level mirroring from one device on
a Dell EMC array to another. It uses host resources (UCBs, CPU, and channels) to monitor channel
programs scheduled to write to a specified primary volume and clones them to also write to a
specified target volume (called a mirror volume).
After achieving a state of synchronization between the primary and mirror volumes, Volume Mirror
maintains the volumes in a fully synchronized state indefinitely, unless interrupted by an operator
command or by an I/O failure to a Volume Mirror device. Mirroring is controlled by the volume
group. Mirroring may be suspended consistently for all volumes in the group.

Dataset migration using z/OS Migrator


In addition to volume migration, z/OS Migrator provides for logical migration, that is, the migration
of individual datasets. In contrast to volume migration functions, z/OS Migrator performs dataset
migrations with full awareness of the contents of the volume, and the metadata in the z/OS
system that describe the datasets on the logical volume.


Figure 23 z/OS Migrator dataset migration

Thousands of datasets can either be selected individually or wild-carded. z/OS Migrator automatically manages all metadata during the migration process while applications continue to run.

CHAPTER 10
CloudArray for VMAX All Flash

This chapter introduces CloudArray for VMAX All Flash.

l About CloudArray................................................................................................................. 114


l CloudArray physical appliance.............................................................................................. 115
l Cloud provider connectivity..................................................................................................115
l Dynamic caching.................................................................................................................. 115
l Security and data integrity................................................................................................... 115
l Administration...................................................................................................................... 115


About CloudArray
Dell EMC CloudArray is a storage software technology that integrates cloud-based storage into
traditional enterprise IT environments. Traditionally, as data volumes increase, organizations must
choose between growing the storage environment, supplementing it with some form of secondary
storage, or simply deleting cold data.
CloudArray combines the resource efficiency of the cloud with on-site storage, allowing
organizations to scale their infrastructure and plan for future data growth. CloudArray makes cloud
object storage look, act, and feel like local storage, seamlessly integrating with existing applications and giving a virtually unlimited tier of storage in one easy package. By connecting storage
systems to high-capacity cloud storage, CloudArray enables a more efficient use of high
performance primary arrays while leveraging the cost efficiencies of cloud storage.
CloudArray offers a rich set of features to enable cloud integration and protection for
VMAX All Flash data:
l CloudArray’s local drive caching ensures recently accessed data is available at local speeds
without the typical latency associated with cloud storage.
l CloudArray provides support for more than 20 different public and private cloud providers,
including Amazon, EMC ECS, Google Cloud, and Microsoft Azure.
l 256-bit AES encryption provides security for all data that leaves CloudArray, both in-flight to
and at rest in the cloud.
l File and block support enables CloudArray to integrate the cloud into the storage environment
regardless of the data storage level.
l Data compression and bandwidth scheduling reduce cloud capacity demands and limit network
impact.
The following figure illustrates a typical CloudArray deployment for VMAX All Flash.
Figure 24 CloudArray deployment for VMAX All Flash


CloudArray physical appliance


The physical appliance supplies the physical connection capability from the VMAX All Flash to
cloud storage using Fibre Channel controller cards. FAST.X presents the physical appliance as an
external device.
The CloudArray physical appliance is a 2U server that consists of:
l Up to 40TB usable local cache (12x4TB drives in a RAID-6 configuration)
l 192GB RAM
l 2x2 port 8Gb Fibre Channel cards configured in add-in slots on the physical appliance

Cloud provider connectivity


CloudArray connects directly with more than 20 public and private cloud storage providers.
CloudArray converts the cloud’s object-based storage to one or more local volumes.

Dynamic caching
CloudArray addresses bandwidth and latency issues typically associated with cloud storage by
taking advantage of local, cache storage. The disk-based cache provides local performance for
active data and serves as a buffer for read-write operations. Each volume can be associated with
its own, dedicated cache, or can operate off of a communal pool. The amount of cache assigned to
each volume can be individually configured. A volume’s performance depends on the amount of
data kept locally in the cache and the type of disk used for the cache.
For more information on CloudArray cache and configuration guidelines, see the CloudArray Best
Practices whitepaper on EMC.com.

Security and data integrity


CloudArray employs both in-flight and at-rest encryption to ensure data security. Each volume
can be encrypted using 256-bit AES encryption prior to replicating to the cloud. CloudArray also
encrypts the data and metadata separately, storing the different encryption keys locally to prevent
any unauthorized access.
CloudArray’s encryption is a critical component in ensuring data integrity. CloudArray segments its
cache into cache pages and, as part of the encryption process, generates and assigns a unique
hash to each cache page. The hash remains with the cache page until that page is retrieved for
access by a requesting initiator. When the page is decrypted, the hash must match the value
generated by the decryption algorithm. If the hash does not match, then the page is declared
corrupt. This process helps prevent any data corruption from propagating to an end user.

Administration
CloudArray is configured using a browser-based graphical user interface. With this interface
administrators can:
l Create, modify, or expand volumes, file shares and caches
l Monitor and display CloudArray health, performance, and cache status
l Apply software updates


l Schedule and configure snapshots and bandwidth throttling


CloudArray also utilizes an online portal that enables users to:
l Download CloudArray licenses and software updates
l Configure alerts and access CloudArray product documentation
l Store a copy of the CloudArray configuration file for disaster recovery retrieval

APPENDIX A
Mainframe Error Reporting

This appendix lists the mainframe environmental errors.

l Error reporting to the mainframe host.................................................................................. 118


l SIM severity reporting.......................................................................................................... 118


Error reporting to the mainframe host


HYPERMAX OS can detect and report the following error types to the mainframe host in the
storage systems:
l Data Check — HYPERMAX OS detected an error in the bit pattern read from the disk. Data
checks are due to hardware problems when writing or reading data, media defects, or random
events.
l System or Program Check — HYPERMAX OS rejected the command. This type of error is
indicated to the processor and is always returned to the requesting program.
l Overrun — HYPERMAX OS cannot receive data at the rate it is transmitted from the host.
This error indicates a timing problem. Resubmitting the I/O operation usually corrects this
error.
l Equipment Check — HYPERMAX OS detected an error in hardware operation.
l Environmental — HYPERMAX OS internal test detected an environmental error. Internal
environmental tests monitor, check, and report failures of the critical hardware components.
They run at the initial system power-up, upon every software reset event, and at least once
every 24 hours during regular operations.
If an environmental test detects an error condition, it sets a flag to indicate a pending error and
presents a unit check status to the host on the next I/O operation. The test that detected the
error condition is then scheduled to run more frequently. If a device-level problem is detected, it is
reported across all logical paths to the device experiencing the error. Subsequent failures of that
device are not reported until the failure is fixed.
If a second failure is detected for a device while there is a pending error-reporting condition in
effect, HYPERMAX OS reports the pending error on the next I/O and then the second error.
HYPERMAX OS reports error conditions to the host and to the Dell EMC Customer Support
Center. When reporting to the host, HYPERMAX OS presents a unit check status in the status
byte to the channel whenever it detects an error condition such as a data check, a command
reject, an overrun, an equipment check, or an environmental error.
When presented with a unit check status, the host retrieves the sense data from the storage array
and, if logging action has been requested, places it in the Error Recording Data Set (ERDS). The
EREP (Environment Recording, Editing, and Printing) program prints the error information. The
sense data identifies the condition that caused the interruption and indicates the type of error and
its origin. The sense data format depends on the mainframe operating system. For 2105, 2107, or
3990 controller emulations, the sense data is returned in the SIM format.

SIM severity reporting


HYPERMAX OS supports SIM severity reporting that enables filtering of SIM severity alerts
reported to the multiple virtual storage (MVS) console.
l All SIM severity alerts are reported by default to the EREP (Environmental Record Editing and
Printing program).
l ACUTE, SERIOUS, and MODERATE alerts are reported by default to the MVS console.
The following table lists the default settings for SIM severity reporting.


Table 16 SIM severity alerts

Severity         Description
SERVICE          No system or application performance degradation is expected. No system or
                 application outage has occurred.
MODERATE         Performance degradation is possible in a heavily loaded environment. No system or
                 application outage has occurred.
SERIOUS          A primary I/O subsystem resource is disabled. Significant performance degradation
                 is possible. System or application outage may have occurred.
ACUTE            A major I/O subsystem resource is disabled, or damage to the product is possible.
                 Performance may be severely degraded. System or application outage may have
                 occurred.
REMOTE SERVICE   EMC Customer Support Center is performing service/maintenance operations on the
                 system.
REMOTE FAILED    The Service Processor cannot communicate with the EMC Customer Support Center.

Environmental errors
The following table lists the environmental errors in SIM format for HYPERMAX OS 5977 or
higher.

Table 17 Environmental errors reported as SIM messages

Hex code | Severity level | Description | SIM reference code
04DD | MODERATE | MMCS health check error. | 24DD
043E | MODERATE | An SRDF Consistency Group was suspended. | E43E
044D | MODERATE | An SRDF path was lost. | E44D
044E | SERVICE | An SRDF path is operational after a previous failure. | E44E
0461 | NONE | The M2 is resynchronized with the M1 device. This event occurs once the M2 device is brought back to a Ready state. (a) | E461
0462 | NONE | The M1 is resynchronized with the M2 device. This event occurs once the M1 device is brought back to a Ready state. (a) | E462
0463 | SERIOUS | One of the back-end directors failed into the IMPL Monitor state. | 2463
0465 | NONE | Device resynchronization process has started. (a) | E465
0467 | MODERATE | The remote storage system reported an SRDF error across the SRDF links. | E467
046D | MODERATE | An SRDF group is lost. This event happens, for example, when all SRDF links fail. | E46D
046E | SERVICE | An SRDF group is up and operational. | E46E
0470 | ACUTE | OverTemp condition based on memory module temperature. | 2470
0471 | ACUTE | The Storage Resource Pool has exceeded its upper threshold value. | 2471
0473 | SERIOUS | A periodic environmental test (env_test9) detected the mirrored device in a Not Ready state. | E473
0474 | SERIOUS | A periodic environmental test (env_test9) detected the mirrored device in a Write Disabled (WD) state. | E474
0475 | SERIOUS | An SRDF R1 remote mirror is in a Not Ready state. | E475
0476 | SERVICE | Service Processor has been reset. | 2476
0477 | REMOTE FAILED | The Service Processor could not call the EMC Customer Support Center (failed to call home) due to communication problems. | 1477
047A | MODERATE | AC power lost to Power Zone A or B. | 247A
047B | MODERATE | Drop devices after RDF Adapter dropped. | E47B
01BA, 02BA, 03BA, 04BA | ACUTE | Power supply or enclosure SPS problem. | 24BA
047C | ACUTE | The Storage Resource Pool has Not Ready or Inactive TDATs. | 247C
047D | MODERATE | Either the SRDF group lost an SRDF link or the SRDF group is lost locally. | E47D
047E | SERVICE | An SRDF link recovered from failure. The SRDF link is operational. | E47E
047F | REMOTE SERVICE | The Service Processor successfully called the EMC Customer Support Center (called home) to report an error. | 147F
0488 | SERIOUS | Replication Data Pointer Meta Data Usage reached 90-99%. | E488
0489 | ACUTE | Replication Data Pointer Meta Data Usage reached 100%. | E489
0492 | MODERATE | Flash monitor or MMCS drive error. | 2492
04BE | MODERATE | Meta Data Paging file system mirror not ready. | 24BE
04CA | MODERATE | An SRDF/A session dropped due to a non-user request. Possible reasons include fatal errors, SRDF link loss, or reaching the maximum SRDF/A host-response delay time. | E4CA
04D1 | REMOTE SERVICE | Remote connection established. Remote control connected. | 14D1
04D2 | REMOTE SERVICE | Remote connection closed. Remote control rejected. | 14D2
04D3 | MODERATE | Flex filter problems. | 24D3
04D4 | REMOTE SERVICE | Remote connection closed. Remote control disconnected. | 14D4
04DA | MODERATE | Problems with task/threads. | 24DA
04DB | SERIOUS | SYMPL script generated error. | 24DB
04DC | MODERATE | PC related problems. | 24DC
04E0 | REMOTE FAILED | Communications problems. | 14E0
04E1 | SERIOUS | Problems in error polling. | 24E1
052F | NONE | A sync SRDF write failure occurred. | E42F
3D10 | SERIOUS | A SnapVX snapshot failed. | E410

a. Dell EMC recommendation: NONE.

Operator messages
Error messages
On z/OS, SIM messages are displayed as IEA480E Service Alert Error messages. They are
formatted as shown below:
Figure 25 z/OS IEA480E acute alert error message format (call home failure)

*IEA480E 1900,SCU,ACUTE ALERT,MT=2107,SER=0509-ANTPC, 266


REFCODE=1477-0000-0000,SENSE=00101000 003C8F00 40C00000 00000014

PC failed to call home due to communication problems.

Figure 26 z/OS IEA480E service alert error message format (Disk Adapter failure)

*IEA480E 1900,SCU,SERIOUS ALERT,MT=2107,SER=0509-ANTPC, 531


REFCODE=2463-0000-0021,SENSE=00101000 003C8F00 11800000

Disk Adapter = Director 21 = 0x2C


One of the Disk Adapters failed into IMPL Monitor state.


Figure 27 z/OS IEA480E service alert error message format (SRDF Group lost/SIM presented against
unrelated resource)

*IEA480E 1900,DASD,MODERATE ALERT,MT=2107,SER=0509-ANTPC, 100


REFCODE=E46D-0000-0001,VOLSER=/UNKN/,ID=00,SENSE=00001F10

SRDF Group 1 SIM presented against unrelated resource


An SRDF Group is lost (no links)

Event messages
The storage array also reports events to the host and to the service processor. These events are:
l The mirror-2 volume has synchronized with the source volume.
l The mirror-1 volume has synchronized with the target volume.
l Device resynchronization process has begun.
On z/OS, these events are displayed as IEA480E Service Alert Error messages. They are
formatted as shown below:
Figure 28 z/OS IEA480E service alert error message format (mirror-2 resynchronization)

*IEA480E 0D03,SCU,SERVICE ALERT,MT=3990-3,SER=,


REFCODE=E461-0000-6200

Channel address of the synchronized device

E461 = Mirror-2 volume resynchronized with Mirror-1 volume

Figure 29 z/OS IEA480E service alert error message format (mirror-1 resynchronization)

*IEA480E 0D03,SCU,SERVICE ALERT,MT=3990-3,SER=,


REFCODE=E462-0000-6200

Channel address of the synchronized device

E462 = Mirror-1 volume resynchronized with Mirror-2 volume

APPENDIX B
Licensing

This appendix is an overview of licensing on arrays running HYPERMAX OS.

l eLicensing............................................................................................................................ 126
l Open systems licenses......................................................................................................... 128


eLicensing
Arrays running HYPERMAX OS use Electronic Licenses (eLicenses).
Note: For more information on eLicensing, refer to Dell EMC Knowledgebase article 335235 on
the Dell EMC Online Support website.
You obtain license files from Dell EMC Online Support, copy them to a Solutions Enabler or a
Unisphere host, and push them out to your arrays. The following figure illustrates the process of
requesting and obtaining your eLicense.
Figure 30 eLicensing process
1. New software is purchased, either as part of a new array or as an additional purchase for an existing system.
2. EMC generates a single license file for the array and posts it on support.emc.com for download.
3. A License Authorization Code (LAC), with instructions on how to obtain the license activation file, is emailed to the entitled users (one per array).
4. The entitled user retrieves the LAC letter on the Get and Manage Licenses page on support.emc.com, and then downloads the license file.
5. The entitled user loads the license file to the array and verifies that the licenses were successfully activated.

Note: To install array licenses, follow the procedure described in the Solutions Enabler
Installation Guide and Unisphere Online Help.
Each license file fully defines all of the entitlements for a specific system, including the license
type and the licensed capacity. To add a feature or increase the licensed capacity, obtain and
install a new license file.
Most array licenses are array-based, meaning that they are stored internally in the system feature
registration database on the array. However, there are a number of licenses that are host-based.
Array-based eLicenses are available in the following forms:
l An individual license enables a single feature.
l A license suite is a single license that enables multiple features. License suites are available only
if all features are enabled.
l A license pack is a collection of license suites that fit a particular purpose.
To view effective licenses and detailed usage reports, use Solutions Enabler, Unisphere, Mainframe
Enablers, Transaction Processing Facility (TPF), or IBM i platform console.
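For example, a hedged sketch of listing the eLicenses that are active on an array from a Solutions Enabler host; the symlmf options shown and the array ID are illustrative and may differ by release, so check the Solutions Enabler CLI documentation.

# List the eLicenses recorded in the array's feature registration database (placeholder SID)
symlmf list -type emclm -sid 0456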


Capacity measurements
Array-based licenses include a capacity licensed value that defines the scope of the license. The
method for measuring this value depends on the license's capacity type (Usable or Registered).
Not all product titles are available in all capacity types, as shown below.

Table 18 VMAX All Flash product title capacity types

Usable                            Registered                    Other
All F software package titles     ProtectPoint                  PowerPath (if purchased separately)
All FX software package titles    Events and Retention Suite
All zF software package titles    RecoverPoint
All zFX software package titles

Usable capacity
Usable Capacity is defined as the amount of storage available for use on an array. The usable
capacity is calculated as the sum of all Storage Resource Pool (SRP) capacities available for use.
This capacity does not include any external storage capacity.

Registered capacity
Registered capacity is the amount of user data managed or protected by each particular product
title. It is independent of the type or size of the disks in the array.
The methods for measuring registered capacity depends on whether the licenses are part of a
bundle or individual.

Registered capacity licenses


Registered capacity is measured according to the following:
l ProtectPoint
n The registered capacity of this license is the sum of all DataDomain encapsulated devices
that are link targets. When there are TimeFinder sessions present on an array with only a
ProtectPoint license and no TimeFinder license, the capacity is calculated as the sum of all
DataDomain encapsulated devices with link targets and the sum of all TimeFinder allocated
source devices and delta RDPs.


Open systems licenses


This section details the licenses available in an open system environment.

License suites
This table lists the license suites available in an open systems environment.
Table 19 VMAX All Flash license suites

All Flash F
Includes: HYPERMAX OS, Priority Controls, OR-DM, Unisphere for VMAX, FAST, SL Provisioning, Workload Planner, Database Storage Analyzer, Unisphere for File, AppSync, TimeFinder/Snap, TimeFinder/SnapVX, SnapSure
Allows you to (with the command):
l Create time windows (symoptmz, symtw)
l Add disk group tiers to FAST policies, enable FAST, and set the following FAST parameters: Swap Non-Visible Devices, Allow Only Swap, User Approval Mode, Maximum Devices to Move, Maximum Simultaneous Devices, Workload Period, Minimum Performance Period (symfast)
l Add virtual pool (VP) tiers to FAST policies; set the following FAST VP-specific parameters: Thin Data Move Mode, Thin Relocation Rate, Pool Reservation Capacity; set the following FAST parameters: Workload Period, Minimum Performance Period (symfast)
l Perform SL-based provisioning (symconfigure, symsg, symcfg)
l Manage protection and replication for critical applications and databases for Microsoft, Oracle and VMware environments (AppSync)
l Create new native clone sessions (symclone)
l Create new TimeFinder/Clone emulations (symmir)
l Create new sessions and duplicate existing sessions (symsnap)
l Create snap pools and SAVE devices (symconfigure)
l Perform SnapVX Establish operations and SnapVX snapshot Link operations (symsnapvx)

All Flash FX
Includes: All Flash F Suite, SRDF, SRDF/Asynchronous, SRDF/Synchronous, SRDF/STAR, Replication for File, D@RE, SRDF/Metro, VIPR Suite (Controller and SRM)
Allows you to (with the command):
l Perform tasks available in the All Flash F suite
l Create new SRDF groups and create dynamic SRDF pairs in Adaptive Copy mode (symrdf)
l Create SRDF devices, convert non-SRDF devices to SRDF, add SRDF mirrors to devices in Adaptive Copy mode, set the dynamic-SRDF capable attribute on devices, and create SAVE devices (symconfigure)
l Create dynamic SRDF pairs in Asynchronous mode and set SRDF pairs into Asynchronous mode (symrdf)
l Add SRDF mirrors to devices in Asynchronous mode, create RDFA_DSE pools, and set any of the following SRDF/A attributes on an SRDF group: Minimum Cycle Time; Transmit Idle; DSE attributes, including associating an RDFA-DSE pool with an SRDF group, DSE Threshold, and DSE Autostart; Write Pacing attributes, including Write Pacing Threshold, Write Pacing Autostart, Device Write Pacing exemption, and TimeFinder Write Pacing Autostart (symconfigure)
l Create dynamic SRDF pairs in Synchronous mode and set SRDF pairs into Synchronous mode (symrdf)
l Add an SRDF mirror to a device in Synchronous mode (symconfigure)
l Encrypt data and protect it against unauthorized access unless valid keys are provided; this prevents data from being accessed and provides a mechanism to quickly shred data (D@RE)
l Place new SRDF device pairs into an SRDF/Metro configuration and synchronize device pairs (SRDF/Metro)
l Automate storage provisioning and reclamation tasks to improve operational efficiency (VIPR Suite, Controller and SRM)

Individual licenses
These items are available for arrays running HYPERMAX OS and are not included in any of the
license suites:
Table 20 Individual licenses for open systems environment

License          Allows you to
ProtectPoint     Store and retrieve backup data within an integrated environment containing arrays
                 running HYPERMAX OS and Data Domain arrays.
RecoverPoint     Protect data integrity at local and remote sites, and recover data from a point in
                 time using journaling technology.

Ecosystem licenses
These licenses do not apply to arrays:
Table 21 Individual licenses for open systems environment

License                       Allows you to
PowerPath                     Automate data path failover and recovery to ensure applications are
                              always available and remain operational.
Events and Retention Suite    l Protect data from unwanted changes, deletions and malicious activity.
                              l Encrypt data where it is created for protection anywhere outside the
                                server.
                              l Maintain data confidentiality for selected data at rest and enforce
                                retention at the file-level to meet compliance requirements.
                              l Integrate with third-party anti-virus checking, quota management, and
                                auditing applications.
