
Technical Report

2,000-Seat VMware View on NetApp
Deployment Guide Using NFS
A Scalable Solution Architecture on a Cisco Nexus
Infrastructure
Jack McLeod, Chris Gebhardt, Abhinav Joshi
May 2009 | TR-3770
TABLE OF CONTENTS

1 INTRODUCTION TO VMWARE VIEW ON NETAPP STORAGE .............................................. 4


1.1 THE PURPOSE ........................................................................................................................................... 4

1.2 THE ENVIRONMENT................................................................................................................................... 6

1.3 HIGH-LEVEL ARCHITECTURE OF THE VMWARE VIEW MODULE ON NETAPP .................................... 6

2 NETWORK SETUP AND CONFIGURATION ............................................................................ 7


2.1 NETWORK SETUP OF CISCO NEXUS NETWORK SERIES ...................................................................... 8

2.2 STORAGE VLAN FOR NFS ........................................................................................................................ 8

2.3 VMWARE VIEW NETWORK ....................................................................................................................... 8

3 NETAPP STORAGE CONTROLLER SETUP FOR VMWARE ESX 3.5 ................................... 9


3.1 NETAPP CONTROLLER 2,000-SEAT PHYSICAL CONFIGURATION ....................................................... 9

3.2 NETWORK SETUP OF NETAPP STORAGE CONTROLLER ................................................................... 10

3.3 CONFIGURE NFS TRUNK ........................................................................................................................ 11

3.4 OVERVIEW OF THE NETAPP STORAGE CONTROLLER DISK CONFIGURATION ............................... 11

3.5 OVERVIEW OF THE LOGICAL STORAGE CONFIGURATION ................................................................ 12

3.6 STORAGE SIZING FORMULAS ................................................................................................................ 14

3.7 CONFIGURE NETAPP STORAGE CONTROLLER’S SSH CONFIGURATION ........................................ 15

3.8 CONFIGURE FLEXSCALE FOR PERFORMANCE ACCELERATION MODULE (PAM)........................... 15

3.9 CONFIGURE VIRTUAL MACHINE DATASTORE AGGREGATE ............................................................. 15

3.10 MODIFY THE AGGREGATE SNAPSHOT RESERVE FOR VMWARE VIEW_PRODUCTION AGGREGATE ................. 19

4 NETAPP STORAGE SETUP FOR USE WITH RCU 2.0 .......................................................... 19


4.1 CREATE A VOLUME TO HOST THE TEMPLATE VIRTUAL MACHINE FOR RCU 2.0 ............................ 19

4.2 CONFIGURE THE NFS EXPORT FOR VOLUME HOSTING TEMPLATE VIRTUAL MACHINE ............... 20

4.3 DISABLE DEFAULT SNAPSHOT SCHEDULE AND SET SNAP RESERVE TO ZERO............................ 24

4.4 CONFIGURE SNAPSHOT AUTODELETE ................................................................................................ 24

4.5 CONFIGURE OPTIMAL PERFORMANCE FOR VMDKS ON NFS ............................................................ 24

4.6 STORAGE CONTROLLER “A” ADDITIONAL SETUP AND CONFIGURATION ...................................... 25

4.7 CONFIGURE VOLUME AUTOSIZING ....................................................................................................... 26

5 STORAGE CONTROLLER “B” SETUP AND CONFIGURATION .......................................... 26


5.1 CREATE THE VOLUMES FOR HOSTING LINKED CLONES AND CIFS USER DATA ........................... 26

5.2 SET UP NFS EXPORTS ............................................................................................................................ 28

5.3 DISABLE DEFAULT SNAPSHOT SCHEDULE AND SET SNAP RESERVE TO ZERO............................ 29

5.4 CONFIGURE SNAPSHOT AUTODELETE FOR VOLUMES...................................................................... 30

5.5 CONFIGURE OPTIMAL PERFORMANCE FOR VMDKS ON NFS ............................................................ 30

5.6 CONFIGURE VOLUME AUTOSIZING ....................................................................................................... 30

5.7 CONFIGURE DEDUPLICATION................................................................................................................ 30



6 VMWARE ESX HOST SETUP .................................................................................................. 32
6.1 PHYSICAL SERVER CONFIGURATION ................................................................................................... 32

6.2 LICENSES NEEDED ................................................................................................................................. 33

6.3 INSTALL ESX 3.5 ...................................................................................................................................... 33

6.4 INSTALL VMWARE VCENTER SERVER.................................................................................................. 33

6.5 CONFIGURE SERVICE CONSOLE FOR REDUNDANCY ........................................................................ 33

6.6 CONFIGURE VMWARE KERNEL NFS PORT .......................................................................................... 37

6.7 CONFIGURE VMOTION ............................................................................................................................ 38

6.8 CONFIGURE VIRTUAL MACHINE NETWORK ......................................................................................... 39

6.9 VMWARE ESX HOST NETWORK CONFIGURATION .............................................................................. 40

6.10 ADD TEMPLATE VIRTUAL MACHINE DATASTORE TO ESX HOST ..................................................... 40

6.11 ADD VIEW_SWAP DATASTORE TO ESX HOST ..................................................................................... 42

6.12 CONFIGURE NFS TUNABLES WITHIN ESX 3.5 ...................................................................................... 43

6.13 CONFIGURE LOCATION OF VIRTUAL SWAPFILE DATASTORE .......................................................... 43

7 SET UP VMWARE VIEW MANAGER 3.0 AND VMWARE VIEW COMPOSER ..................... 44
8 SET UP AND CONFIGURE WINDOWS XP GOLD IMAGE ..................................................... 44
8.1 CREATE VIRTUAL MACHINE IN VMWARE ESX 3.5 .............................................................................. 44

8.2 FORMAT THE VIRTUAL MACHINE WITH THE CORRECT STARTING PARTITION OFFSETS .............. 44

8.3 DOWNLOAD AND PREPARE LSI 53C1030 DRIVER ............................................................................... 47

8.4 WINDOWS XP PREINSTALLATION CHECKLIST .................................................................................... 48

8.5 INSTALL AND CONFIGURE WINDOWS XP ............................................................................................. 49

9 RAPID DEPLOYMENT OF WINDOWS XP VIRTUAL MACHINES IN A VMWARE VIEW ENVIRONMENT USING RCU 2.0 ................................................... 52
9.1 SET UP AND CONFIGURE TEMPLATE VM VOLUME FOR MASS DEPLOYMENT ................................ 52

9.2 DEPLOY VIRTUAL MACHINES USING RCU 2.0 ...................................................................................... 52

9.3 CONFIGURE THE NEWLY CREATED DATASTORES ............................................................................. 65

10 DEPLOY LINKED CLONES


11 ENTITLE USERS/GROUPS TO DESKTOP POOLS ............................................................... 66
12 TESTING AND VALIDATION OF THE VMWARE VIEW AND NETAPP STORAGE ENVIRONMENT .............................................................. 66
13 FEEDBACK ............................................................................................................................... 67
14 REFERENCES .......................................................................................................................... 67



1 INTRODUCTION TO VMWARE VIEW ON NETAPP STORAGE

1.1 THE PURPOSE


The purpose of this document is to provide a step-by-step guide on how to deploy VMware View™ on
NetApp® FAS3000 and FAS6000 series HA clusters on Cisco Nexus 5000 and 7000 switches. This
document details the deployment of a typical Windows® XP virtual desktop infrastructure. For the purposes
of this guide we will focus on deploying a 2,000-seat modular environment utilizing 16 VMware® ESX™
3.5 hosts and one NetApp FAS3170HA using the NFS protocol. However, this configuration can also be
used on any NetApp FAS3000 and FAS6000 series HA clusters. Also, due to the modular design approach,
scalability to larger deployments is simplified. The information contained in this guide will assist in setting up
environments ranging from proof of concept (POC) to production deployments. Note that this guide is
intended for storage and systems administrators who are familiar with VMware and NetApp storage.
This document is intended as an instructional guide and does not attempt to explain why certain steps are
taken. For more detailed information on the steps included in this document, see TR-3428: NetApp and
VMware Virtual Infrastructure 3 Storage Best Practices and TR-3705: NetApp and VMware View (VDI)
Solution Guide.
This document also demonstrates a typical mixed deployment scenario, in which different user types in the same environment require different desktop types while still achieving the desired storage efficiency, performance, operational agility, and data protection. These mixed user requirements can easily be met by using different desktop delivery models in VMware View Manager, leveraging both the NetApp Rapid Cloning Utility (RCU) 2.0 and VMware linked clone technology. Below is a table detailing the deployment scenario demonstrated in this document.
Table 1) RCU and linked clones deployment mix.

Virtual Machine Distribution        Number of Virtual Machines   Percentage of Deployment
VMs deployed with RCU 2.0           1,500                        75%
VMs deployed with linked clones     500                          25%
Total number of VMs                 2,000                        100%

Table 2) Details on number of virtual machines deployed using NetApp RCU.

Virtual Machine Distribution            Number of Virtual Machines
RCU persistent access mode VMs          1,000
RCU nonpersistent access mode VMs       500
Total VMs deployed with RCU 2.0         1,500



Table 3) Details on number of virtual machines deployed using VMware linked clones.

Virtual Machine Distribution                               Number of Virtual Machines
Linked clone persistent access mode virtual machines       250
Linked clone nonpersistent access mode virtual machines    250
Total virtual machines deployed with linked clones         500

This guide will focus on achieving multiple levels of storage efficiency and performance acceleration for each
of the deployment scenarios in this mixed environment. While this document has a 75% to 25% split for
deployment, the principles for storage layout, efficiency, performance acceleration, and operational agility
can be used for every type of deployment mix.
The table below shows a sample customer environment with different user profiles having different
requirements in terms of desktop usage and data hosted on the virtual desktops. The table highlights how
the different deployment solutions (NetApp RCU 2.0 and VMware linked clones) can be leveraged to fulfill
requirements for the different user types.
Table 4) Types of VMware View deployment scenarios.

User profile: Marketing/finance/consultants
- User requirements: Customizable, personalized desktops, using a mix of office and specialized, decision-support applications. Download and use several applications as required. Installed applications and/or data on the OS disk to be retained after patches, OS upgrades, and reboots. Requires protection of the user data.
- Number of virtual desktops: 1,000
- VMware View Manager desktop delivery model: Manual desktop pool
- Access mode: Persistent
- Deployment solution: NetApp RCU 2.0

User profile: Software developers
- User requirements: Mix of office and enterprise applications on the desktop. Require installing new applications on an as-needed basis. Applications installed on the OS disk to be retained after patches, upgrades, and reboots, to be used by any user who logs in next. Requires protection of the user data.
- Number of virtual desktops: 500
- VMware View Manager desktop delivery model: Manual desktop pool
- Access mode: Nonpersistent
- Deployment solution: NetApp RCU 2.0

User profile: Help desk/call center
- User requirements: Work with only one application at a time. Do not require customizable, personalized desktops. Do not require applications and data on the OS disk to be retained after patches, OS upgrades, and reboots. Requires protection of the user data on the separate user data disk.
- Number of virtual desktops: 250
- VMware View Manager desktop delivery model: Automated desktop pool
- Access mode: Persistent
- Deployment solution: VMware linked clones

User profile: Training/students
- User requirements: Temporary desktops for the duration of the training session. Require a clean desktop for every class. Do not require customization or personalization. Do not require protection of the OS or user data.
- Number of virtual desktops: 250
- VMware View Manager desktop delivery model: Automated desktop pool
- Access mode: Nonpersistent
- Deployment solution: VMware linked clones

1.2 THE ENVIRONMENT


Note that proper licensing for the NetApp controllers, VMware products, and Windows XP must be obtained to use the features detailed in this guide. Also, Cisco Nexus 5000 and 7000 switches are used and licensed for virtual port channels (vPC) and virtual device contexts (VDC). Where appropriate, trial licenses can be used for many of the solution components in order to test the configuration. Using this guide will allow you to create the representative environment shown in Figure 1.

1.3 HIGH-LEVEL ARCHITECTURE OF THE VMWARE VIEW MODULE ON NETAPP


Below is an illustration depicting a high-level overview of what the solution will look like once you complete
the steps outlined in this document.



Figure 1) High-level representation of the VMware View environment on a FAS3170HA cluster.

2 NETWORK SETUP AND CONFIGURATION


For the purposes of this deployment guide we used a network design with two Cisco Nexus 7000 switches and two Cisco Nexus 5000 switches. Because of the complexity and variety of each organization's network environment, it is very difficult to provide one general way to set up and configure all networks. For more detailed information on additional network configuration options, refer to TR-3428: NetApp and VMware Virtual Infrastructure 3 Storage Best Practices.
Below is a list of the topics that are covered in depth in the networking section of TR-3428:
- Traditional Ethernet switch designs
- Highly available storage design with traditional Ethernet switches
- ESX networking with multiple VMware kernel ports
- ESX with multiple VMware kernel ports, traditional Ethernet, and NetApp networking with single-mode VIFs
- ESX with multiple VMware kernel ports, traditional Ethernet, and NetApp networking with multilevel VIFs
- Cross-stack EtherChannel switch designs
- Highly available IP storage design with Ethernet switches that support cross-stack EtherChannel
- ESX networking and cross-stack EtherChannel
- ESX and NetApp with cross-stack EtherChannel
- Datastore configuration with cross-stack EtherChannel

Detailed below are the steps used to create the network layout for the NetApp storage controllers and for
each ESX 3.5 host in the environment.



2.1 NETWORK SETUP OF CISCO NEXUS NETWORK SERIES
For the purposes of this deployment guide we used a network design with two Cisco Nexus 7000 switches and two Cisco Nexus 5000 switches. All of Cisco's best practices were followed in the setup of the Nexus environment. For more information on configuring a Cisco Nexus environment, visit http://www.cisco.com.

The goal in using a Cisco Nexus environment for networking is to leverage its capabilities to logically separate public IP traffic from storage IP traffic. In doing this, the chance of issues resulting from changes made to one portion of the network is mitigated.

Since the Cisco Nexus switches used in this configuration support virtual port channeling (vPC) and are
configured with a VDC specifically for storage traffic, logical separation of the storage network from the rest
of the network is achieved while at the same time providing a high level of redundancy, fault tolerance, and
security.

On the Nexus network, perform the following configurations:

- Set up a peer keepalive link as a management interface between the two Nexus switches.
- On the default VDC, be sure to enable a management VLAN for the service console, a public VLAN for the virtual machine network, and a private, nonroutable VLAN for VMotion™.
- In order to isolate and secure the NFS traffic, create a separate VDC for NFS traffic. Assign ports to this VDC and configure these ports for a private, nonroutable VLAN.*

*Note: This is an optional configuration. If you do not use this configuration or do not have this option available, create an additional private, nonroutable VLAN on the default VDC.
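For reference, the following NX-OS fragment sketches these steps. The vPC domain ID, keepalive addresses, VLAN IDs, VDC name, and interface ranges are illustrative assumptions, not values from this validation; substitute your own.

! On each Nexus 7000 (default VDC): vPC domain with a peer keepalive link
vpc domain 1
  peer-keepalive destination 10.0.0.2 source 10.0.0.1
! Management, public, and private VLANs on the default VDC
vlan 10
  name MGMT_SERVICE_CONSOLE
vlan 20
  name VM_NETWORK
vlan 30
  name VMOTION_PRIVATE
! Optional: dedicated storage VDC owning the ports used for the NFS VLAN
vdc nfs-storage
  allocate interface Ethernet2/1-8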

2.2 STORAGE VLAN FOR NFS


Make sure to configure a nonroutable VLAN on a separate VDC for the NFS storage traffic passing between the NetApp storage controllers and the ESX hosts. With this setup the NFS traffic is kept completely contained, and security is more tightly controlled.
Also, it is extremely important to have at least two physical Ethernet switches for proper network redundancy in your VMware View environment. Carefully plan the network layout for your environment, including visual diagrams detailing the connections for each port.

2.3 VMWARE VIEW NETWORK


When creating a VMware View environment that contains several hundred or several thousand virtual machines, be sure to create a DHCP scope large enough to cover the number of IP addresses that the clients will need. This step should be planned well before implementation.



Figure 2) NetApp storage controller VIF configuration.

3 NETAPP STORAGE CONTROLLER SETUP FOR VMWARE ESX 3.5


Perform all of the steps listed below on both controllers of the NetApp system. Failure to do so could result
in inconsistencies and performance problems within the environment.

3.1 NETAPP CONTROLLER 2,000-SEAT PHYSICAL CONFIGURATION


Table 5) NetApp solution configuration.

NetApp System Component                  Number and/or Type                                   Slot Installed In (Each Controller)
Disk shelves required                    6 (totaling 84 FC* disks; 3 shelves per controller)  N/A
Size and speed of hard disks in shelves  300GB @ 15K RPM                                      N/A
Approximate starting storage required    55GB                                                 N/A
per NetApp controller after deployment
Quad-port 1Gb Ethernet NIC               4 (2 per controller)                                 2 and 3
Quad-port 4/2/1 Fibre Channel card       2 (1 per controller)                                 4
Performance Acceleration Module (PAM)    2 (1 per controller)                                 Varies
NFS licenses                             2 (1 per controller)                                 N/A

For the purpose of this configuration, eight IOPS per virtual machine is used as the basis for the design architecture. This number might vary per environment and for different user types. For further details on sizing best practices, see NetApp TR-3705.

* Depending on the requirements, it may be possible to replace the FC disks used in this configuration with SATA disks. However, be sure to perform proper sizing and testing before making this change.

3.2 NETWORK SETUP OF NETAPP STORAGE CONTROLLER


In order to achieve optimal performance, maximize the number of Ethernet links for both controllers in the
NetApp cluster. Below are the guidelines for setting up the network for both storage controllers.
Table 6) Network setup of NetApp controller.

Step Action
1 Connect to the NetApp storage controller's console (via either SSH, telnet, or console connection).
2 As depicted in the diagram above, interfaces e2a, e3a, e2c, and e3c must go to Switch A.
3 As depicted in the diagram above, interfaces e2b, e3b, e2d, and e3d must go to Switch B.
4 The ports that these interfaces are connected to on the switches must meet the following criteria:
a. They must be on the nonroutable VLAN created for NFS network traffic.
b. They must be configured into a trunk, either manually as a multimode VIF or dynamically as an LACP VIF.
c. If LACP is used, then the VIF type must be set to LACP instead of multimode on the NetApp storage controller.
Note: For the purposes of this document we will use the 192.168.0.0/24 network for the private subnet for NFS and the 192.168.1.0/24 network for the private subnet for VMotion.
a. The NetApp storage controller IP address range will be from 192.168.0.200 through 192.168.0.254.
b. The ESX 3.5 NFS VMware kernel IP address range will be 192.168.0.1 through 192.168.0.199.
c. The VMware VMotion-enabled VMware kernel IP address range will be 192.168.1.1 through 192.168.1.254.



3.3 CONFIGURE NFS TRUNK

Table 7) Configure the NFS trunk on the NetApp storage controller.

Step Action
1 Edit the rc file in the /etc directory of each NetApp storage controller's root volume.
Note: If using Windows to edit the rc file, use WordPad instead of Notepad when asked what program to open the rc file with.
2 Edit the rc file on the first storage controller by adding the following lines just before the "savecore" entry:
vif create lacp vif0 -b ip e2a e3a e2c e3c e2b e3b e2d e3d
ifconfig vif0 192.168.0.201 netmask 255.255.255.0 partner vif0 up
3 Edit the rc file on the second storage controller by adding the following lines just before the "savecore" entry:
vif create lacp vif0 -b ip e2a e3a e2c e3c e2b e3b e2d e3d
ifconfig vif0 192.168.0.202 netmask 255.255.255.0 partner vif0 up
4 If these interfaces were not previously in use, you can reread the configuration from the rc file: log on to each storage controller console and enter the following command:
source /etc/rc

Note: Repeat these steps for the two remaining ports. Be sure that one NIC is on switch A and the other is on switch B. These ports will be used for CIFS and management traffic and should be set up using VLAN tagging.
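For reference, a minimal sketch of that CIFS/management trunk, assuming the two remaining ports are e0a and e0b and that VLAN 10 carries the CIFS/management traffic (substitute your own port names, VLAN ID, and addresses):

vif create single vif1 e0a e0b
vlan create vif1 10
ifconfig vif1-10 <cifs-mgmt-ip> netmask 255.255.255.0 partner vif1-10 up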

3.4 OVERVIEW OF THE NETAPP STORAGE CONTROLLER DISK CONFIGURATION


The figure below shows the disk layout for both NetApp storage controllers. To meet the performance and capacity needs of this configuration, each controller has two aggregates (aggr0 for the root volume and "VMware View_Production" for hosting production virtual machines) with the required number of spindles and enough spare disks that can easily be added to the aggregates later to deal with unknowns.



Figure 3) NetApp storage controller disk configuration.

3.5 OVERVIEW OF THE LOGICAL STORAGE CONFIGURATION


The figure below shows the logical storage layout for this scalable, 2,000-seat modular configuration:

Controller A hosts 1,000 virtual machines created using NetApp RCU 2.0, part of a manual desktop pool, in persistent access mode.
Controller B hosts the following virtual machine types:
- 500 virtual machines created using NetApp RCU 2.0, part of a manual desktop pool, in nonpersistent access mode
- 500 virtual machines created using VMware linked clones, part of an automated desktop pool: 250 virtual machines in persistent access mode and 250 virtual machines in nonpersistent access mode
The virtual machine swap file (vswap) datastore on storage controller A hosts the virtual machine swap files for all 2,000 virtual machines. The assumption is that backup of the OS disk is not in the scope of phase 1 of the deployment but might be in phase 2.
Controller B hosts the CIFS share for storing the user data for all 1,500 NetApp RCU 2.0-created virtual machines and also for the 250 virtual machines created using VMware linked clones in nonpersistent access mode. For the 250 virtual machines created using linked clones in persistent access mode, the user data will be hosted on a second datastore.

Figure 4) NetApp storage controller logical storage configuration.



FAS Controller A (1,000 NetApp RCU Persistent Desktops)
Table 8) NetApp FAS Controller A configuration over VMware View.
VDI Infrastructure Component Number
Total volumes on FAS controller A 8 (including root volume)
FlexClone® gold volume 1
FlexClone volumes 4
Volume for virtual machine swap file (vswap) datastore 1
Volume to host template virtual machine (to be used as the source 1
for creating all the NetApp RCU 2.0-based virtual machines)

FAS Controller B (500 NetApp RCU Nonpersistent/500 VMware Linked Clones)


Table 9) NetApp FAS Controller B configuration.
VDI Infrastructure Component Number
Total volumes on FAS controller B 9 (including root volume)
FlexClone gold volume 1
FlexClone volumes 2
Volume for hosting linked clone parent virtual machine 1
Volume for hosting OS disk for linked clone virtual machines in 1
persistent access mode
Volume for hosting user data disk for linked clone virtual machines 1
in persistent access mode
Volume for hosting OS disk for linked clone virtual machines in 1
nonpersistent access mode
Volume for hosting CIFS user data 1

3.6 STORAGE SIZING FORMULAS

Datastore for hosting Linked Clones OS Data Disk:


(Number of virtual machines in a datastore * storage required by new writes per virtual machine) + storage
per replica virtual machine + 3% free buffer space

(250 * 5GB) + 10GB + 3% free space = 1.3TB (approximately)

Datastore for hosting Linked Clones User Data Disk:


Storage required per user * Number of users per datastore * dedupe savings

2GB * 250 * 0.5 = 250GB

For this example, a 50% deduplication savings number is used for this sample user profile. It may be different for your environment. We recommend that you go through the sizing exercise with NetApp to determine storage savings for your environment.

Datastore for hosting NetApp RCU based Clones:


(Number of virtual machines in a datastore * storage required by new writes per virtual machine * dedupe savings for this user profile) + 5% free space, which includes dedupe metadata and some free buffer space in the NetApp FlexClone volumes

(250 * 5GB * 0.4+) + 5% free space = 525GB

+ For this user profile, a dedupe factor of 0.4 (that is, 60% dedupe savings) is used, based on certain assumptions about the commonality of the OS and non-OS data for this sample user profile. It may be different for your environment. We highly recommend performing a detailed sizing exercise with a NetApp engineer to determine storage savings for your environment.

Datastore for hosting the virtual machine swap files (vswap) for all 2,000 virtual machines:
(512MB per virtual machine * 2,000 virtual machines) + 10% free space = 1100GB

Volume for hosting CIFS user data:

Considering 2GB per user for 1,750 users and assuming 50% storage savings with NetApp deduplication: 2GB * 1,750 * 0.5 = 1750GB.

Note that the template virtual machine datastore, the gold datastore for RCU-provisioned virtual machines, and the datastore for hosting linked clone parent virtual machines are all 25GB in size, although the virtual machines are 10GB.

3.7 CONFIGURE NETAPP STORAGE CONTROLLER’S SSH CONFIGURATION


For both storage controllers, perform the following steps:
Table 10) Configuring SSH.
Step Action
1 Connect to the NetApp storage controller's console (via either SSH, telnet, or console connection).
2 Execute the following commands and follow the setup script:
secureadmin setup ssh
options ssh.enable on
options ssh2.enable on

3.8 CONFIGURE FLEXSCALE FOR PERFORMANCE ACCELERATION MODULE (PAM)


The Performance Acceleration Module (PAM) is an intelligent read cache that reduces storage latency and
increases I/O throughput by optimizing performance of random read intensive workloads. As a result, disk
performance is increased and the amount of storage needed is decreased.
For both storage controllers, perform the following steps:
Table 11) FlexScale configuration.
Step Action
1 Connect to the NetApp storage controller's console (via either SSH, telnet, or console connection).

2 To enable and configure FlexScale, do the following:

options flexscale.enable on
options flexscale.normal_data_blocks on
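Optionally, once the system is under load, you can observe how the module's cache is being used; a quick check, assuming the flexscale-access stats preset available in Data ONTAP releases that support PAM:

stats show -p flexscale-access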

3.9 CONFIGURE VIRTUAL MACHINE DATASTORE AGGREGATE


For both storage controllers, perform the following steps:



Table 12) Creating the VMware aggregate.
Step Action
1 Open NetApp FilerView® and click Aggregates->Add.

Figure 5) Aggregate Wizard.


2 Name the new aggregate VMware View_Production.
3 Make sure the Double Parity box is checked (default is checked) as this provides the best RAID
protection available and is the NetApp recommended RAID level for an aggregate.

Figure 6) Aggregate Wizard—Aggregate Parameters.


4 Select the number of disks to assign for the “RAID Group Size.” The default is 16 and is the
NetApp recommended best practice.



Figure 7) Aggregate Wizard—RAID Parameters.
5 Select the disk selection method for the aggregate. "Automatic" is selected by default and is the NetApp recommended best practice.

Figure 8) Aggregate Wizard—Disk Selection Method.


6 Choose the type of disk to be added to the aggregate. By default, "Any Type" will be selected. If multiple disk types are attached to the controller, select the appropriate disk type.



Figure 9) Aggregate Wizard—Disk Type.
7 Choose the disk size to be used in the aggregate. By default, "Any Size" will be selected. NetApp recommends selecting disks of the same size when creating an aggregate.

Figure 10) Aggregate Wizard—Disk Size.


8 Assign 32 disks to provision the aggregate.



Figure 11) Aggregate Wizard—Number of Disks.
9 Click Next and then Commit to finish creating the new aggregate.

10 As recommended earlier, make nonroot aggregates as large as possible to benefit from the I/O capacity of all the spindles in the aggregate.
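For reference, the same aggregate can be created in one step from the console; a minimal sketch, assuming RAID-DP, the default RAID group size of 16, automatic selection of 32 disks as in the wizard above, and an aggregate name without spaces:

aggr create <aggr-name> -t raid_dp -r 16 32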

3.10 MODIFY THE AGGREGATE SNAPSHOT RESERVE FOR VMWARE


VIEW_PRODUCTION AGGREGATE
For both storage controllers, perform the following steps:
Table 13) Modify aggregate Snapshot reserve.
Step Action
1 Connect to the controller's console, using either SSH, telnet, or serial console.
2 Set the aggregate Snapshot™ schedule:
snap sched -A <aggregate-name> 0 0 0
3 Set the aggregate Snapshot reserve:
snap reserve -A <aggregate-name> 0
4 Delete any existing aggregate Snapshot copies: type snap list -A <aggregate-name> to list them, and then type:
snap delete -A <aggregate-name> <snap-name>
5 To log out of the NetApp console, press Ctrl+D.

4 NETAPP STORAGE SETUP FOR USE WITH RCU 2.0


Perform all of the steps listed below on controller A of the NetApp system. Failure to do so could result in
inconsistencies and performance problems with your environment. Note that this step is not required on
controller B because RCU 2.0 will use the template virtual machine in the template datastore as the basis to
create the gold datastore on controller B as well.

4.1 CREATE A VOLUME TO HOST THE TEMPLATE VIRTUAL MACHINE FOR RCU 2.0

Table 14) Create the virtual machine template volume.



Step Action
1 Open FilerView (http://filer/na_admin).
2 Select Volumes.
3 Select Add to open the Volume Wizard.
4 Complete the Wizard, naming the volume view_rcu_template and placing it on the View_Production
aggregate. This volume should be a total of 20GB in size.
5 Note that Data ONTAP® creates new volumes with a security style matching that of the root volume. Verify that the security style of the volume is set to UNIX®:
qtree security /vol/view_rcu_template
If the qtree security style is NTFS or Mixed, change it using the following command:
qtree security /vol/view_rcu_template unix
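Equivalently, the volume can be created from the console; a minimal sketch, assuming the aggregate created in section 3.9:

vol create view_rcu_template <aggr-name> 20g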

4.2 CONFIGURE THE NFS EXPORT FOR VOLUME HOSTING TEMPLATE VIRTUAL
MACHINE
Table 15) Configure the template virtual machine volume NFS export.
Step Action

1 From the FilerView menu, select NFS.

2 Select Manage Exports to open the Manage NFS Exports screen.

3 Click the /vol/view_rcu_template NFS export.

Figure 12) NFS Export Wizard.

4 Grant the export Root Access permissions by clicking Next and placing a green
check inside the box. Then click Next.



Figure 13) NFS Export Wizard—export options settings.

5 Determine that the export path is correct for the NFS export.

Figure 14) NFS Export Wizard—Path.

6 At the NFS Export Wizard - Read-Write Access screen, click the Add button and enter the IP address of the NFS VMware kernel for one of the ESX 3.5 hosts. Repeat this step for the VMware kernel IP addresses of the other hosts in the ESX cluster until all IP addresses have been entered. When this is done, click Next.



Figure 15) NFS Export Wizard—Read-Write Access.

7 At the NFS Export Wizard - Root Access screen, click the Add button and enter the IP address of the NFS VMware kernel for the first ESX 3.5 host server. Repeat this step for the VMware kernel IP addresses of the other hosts in the ESX cluster until all IP addresses have been entered. When this is done, click Next.

Figure 16) NFS Export Wizard—Root Access.



8 At the NFS Export Wizard – Security screen, make sure that Unix Style is selected
and click Next.

Figure 17) NFS Export Wizard—Security.

9 At the NFS Export Wizard – Commit screen, click Commit, and at the NFS Export
Wizard – Success screen, click Close.

Figure 18) NFS Export Wizard—Commit.



Figure 19) NFS Export Wizard—Success.
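The same export can also be granted from the console; a minimal sketch, assuming the first two VMware kernel addresses listed in section 6.6 (extend both lists until every host in the cluster is covered):

exportfs -p rw=192.168.0.1:192.168.0.2,root=192.168.0.1:192.168.0.2 /vol/view_rcu_template
exportfs

The -p flag makes the rule persistent in /etc/exports; running exportfs with no arguments lists the current exports so you can verify the result.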

4.3 DISABLE DEFAULT SNAPSHOT SCHEDULE AND SET SNAP RESERVE TO ZERO
Perform this step for the volume hosting the template virtual machine.
Table 16) Configure Snapshot details.
Step Action

1 Log into the NetApp console.

2 Set the volume Snapshot schedule for the volume that was created above:
snap sched <vol-name> 0 0 0

3 Set the volume Snapshot reserve:


snap reserve <vol-name> 0

4.4 CONFIGURE SNAPSHOT AUTODELETE


Perform this step for the volume hosting the template virtual machine.
Table 17) Configure Snapshot autodelete for volumes.
Step Action
1 Log in to the NetApp console.

2 Set the Snapshot autodelete policy:

snap autodelete <vol-name> commitment try trigger volume target_free_space 5 delete_order oldest_first

3 Set the volume autodelete policy:
vol options <vol-name> try_first volume_grow

4.5 CONFIGURE OPTIMAL PERFORMANCE FOR VMDKS ON NFS


Perform this step for the volume hosting the template virtual machine.
Table 18) Set optimal performance for VMDKs on NFS.
Step Action
1 Log in to the NetApp console.



2 Adjust the options on each volume by entering:
vol options <vol-name> no_atime_update on
3 Set the NFS TCP receive window size by entering:
options nfs.tcp.recvwindowsize 64240

4.6 STORAGE CONTROLLER “A” ADDITIONAL SETUP AND CONFIGURATION


Create the volume to host virtual machine swap files.
Table 19) Create the view_swap volume.
Step Action

1 Open FilerView (http://filer/na_admin).

2 Select Volumes.

3 Select Add to open the Volume Wizard.

4 Complete the Wizard using the following:


Name the volume view_swap.
Place the view_swap volume on the View_Production aggregate.
Make the size of the volume 1100GB.
To achieve high levels of storage efficiency, make sure to enable thin provisioning by setting
the volume space guarantee to “none.”

5 Note that Data ONTAP creates new volumes with a security style matching that of the root volume.
Verify that the security style of the volume is set to UNIX.
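For reference, a console equivalent of this step; a minimal sketch, assuming the same aggregate as above (-s none sets the space guarantee to none, that is, thin provisioning):

vol create view_swap -s none <aggr-name> 1100g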

Configure the NFS export for volume hosting virtual machine swap files.
Table 20) Set up the view_swap NFS export.
Step Action
1 From the FilerView menu, select NFS.
2 Select Manage Exports to open the Manage NFS Exports screen.
3 Click the /vol/view_swap NFS export.
4 Grant the export root access permissions by clicking Next and placing a green check inside the
box. Then click Next.
5 Make sure that the export path is correct for the NFS export.
6 At the NFS Export Wizard - Read-Write Access, click the Add button and enter the IP address of
the NFS VMware kernel for the first ESX 3.5 host server. Repeat this step for the VMware kernel
IP addresses for all the other hosts in the ESX cluster until all IP addresses have been entered.
When this is done, click Next.
7 At the NFS Export Wizard - Root Access, click the Add button and enter the IP address of the NFS
VMware kernel for the first ESX 3.5 host server. Repeat this step for the VMware kernel IP
addresses for the other hosts in the ESX cluster until all IP addresses have been entered. When
this is done, click Next.
8 At the NFS Export Wizard – Security screen, make sure that Unix Style is selected and click Next.

9 At the NFS Export Wizard – Commit screen, click Commit, and at the NFS Export Wizard –
Success screen, click Close.

Disable default Snapshot schedule/set snap reserve to zero.


Table 21) NFS volume configurations.



Step Action

1 Log into the NetApp console.

2 Set the volume Snapshot schedule for the volume that was created above:
snap sched <vol-name> 0 0 0

3 Set the volume Snapshot reserve:


snap reserve <vol-name> 0

Configure Snapshot autodelete.


Table 22) Configure Snapshot autodelete for volumes.
Step Action

1 Log in to the NetApp console.

2 Configure the Snapshot autodelete policy:
snap autodelete <vol-name> commitment try trigger volume target_free_space 5 delete_order oldest_first

3 Set the volume autodelete policy:
vol options <vol-name> try_first volume_grow

Configure optimal performance for VMDKs on NFS.


Table 23) Configure optimal performance for VMDKs on NFS.
Step Action
1 Log in to the NetApp console.
2 From the storage controller console, run:
vol options <vol-name> no_atime_update on
3 From the console, set the NFS TCP receive window size:
options nfs.tcp.recvwindowsize 64240

4.7 CONFIGURE VOLUME AUTOSIZING


For all the volumes configured above, do the following:
Table 24) Configure volume autosizing.
Step Action
1 Log in to the NetApp console.
2 Configure volume autosize policy for all the newly created volumes by using this command:
vol autosize <vol-name> -m 500g -i 10g on

5 STORAGE CONTROLLER “B” SETUP AND CONFIGURATION

5.1 CREATE THE VOLUMES FOR HOSTING LINKED CLONES AND CIFS USER DATA
Create volume to host OS data disks for linked clones in persistent access mode.
Table 25) Create the view_lcp volume.
Step Action
1 Open FilerView (http://filer/na_admin).
2 Select Volumes.
3 Select Add to open the Volume Wizard.



4 Complete the Wizard, naming the volume view_lcp and placing it in the View_Production aggregate.
This volume should be a total of 1300GB in size.
To achieve high levels of storage efficiency, make sure to enable thin provisioning by setting the
space guarantee to “none."
5 Note that Data ONTAP creates new volumes with a security style matching that of the root volume. Verify that the security style of the volume is set to UNIX:
qtree security /vol/view_lcp
If the qtree security style is NTFS or Mixed, change it using the following command:
qtree security /vol/view_lcp unix

Create volume to host user data disks for linked clones in persistent access mode.

Table 26) Create the linked clones volume for host user data.
Step Action
1 Open FilerView (http://filer/na_admin).
2 Select Volumes.
3 Select Add to open the Volume Wizard.
4 Complete the Wizard, naming the volume view_lcp_userdata and placing it in the View_Production
aggregate. This volume should be a total of 250GB in size.
To achieve high levels of storage efficiency, make sure to enable thin provisioning by setting the
space guarantee to “none."
5 Note that Data ONTAP creates new volumes with a security style matching that of the root volume. Verify that the security style of the volume is set to UNIX:
qtree security /vol/view_lcp_userdata
If the qtree security style is NTFS or Mixed, change it using the following command:
qtree security /vol/view_lcp_userdata unix

Create volume to host OS data disks for linked clones in nonpersistent access mode.
Table 27) Create the linked clones host OS data disk volume.
Step Action
1 Open FilerView (http://filer/na_admin).
2 Select Volumes.
3 Select Add to open the Volume Wizard.
4 Complete the Wizard, naming the volume view_lcnp and placing it in the View_Production
aggregate. This volume should be a total of 700GB in size.
To achieve high levels of storage efficiency, make sure to enable thin provisioning by setting the
space guarantee to “none."
5 Note that Data ONTAP creates new volumes with a security style matching that of the root volume. Verify that the security style of the volume is set to UNIX:
qtree security /vol/view_lcnp
If the qtree security style is NTFS or Mixed, change it using the following command:
qtree security /vol/view_lcnp unix



Create the volume to host CIFS user data.
This volume will be used for hosting CIFS user data for virtual machine provisioned using NetApp RCU and
linked clones in nonpersistent access mode.
Table 28) Create the CIFS volume to host user data.
Step Action
1 Open FilerView (http://filer/na_admin).
2 Select Volumes.
3 Select Add to open the Volume Wizard.
4 Complete the Wizard, naming the volume view_cifs and placing it in the View_Production aggregate. This volume should be a total of 1750GB in size.
To achieve high levels of storage efficiency, make sure to enable thin provisioning by setting the space guarantee to "none." Your storage efficiency might vary based on the type of data in your environment.
5 Note that Data ONTAP creates new volumes with a security style matching that of the root volume. Verify that the security style of the volume is set to NTFS:
qtree security /vol/view_cifs
If the qtree security style is UNIX or Mixed, change it using the following command:
qtree security /vol/view_cifs ntfs
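For reference, console equivalents for these four volumes; a minimal sketch, assuming the aggregate name used in the wizard steps above and the sizes listed in Tables 25 through 28 (-s none enables thin provisioning):

vol create view_lcp -s none <aggr-name> 1300g
vol create view_lcp_userdata -s none <aggr-name> 250g
vol create view_lcnp -s none <aggr-name> 700g
vol create view_cifs -s none <aggr-name> 1750g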

5.2 SET UP NFS EXPORTS


Set up NFS export for volume hosting OS data for linked clones in persistent access mode.
Table 29) Set up the view_lcp NFS volume.
Step Action
1 From the FilerView menu, select NFS.
2 Select Manage Exports to open the Manage NFS Exports screen.
3 Click the /vol/view_lcp NFS export.
4 Grant the export root access permissions by clicking Next and placing a green check inside the
box. Then click Next.
5 Make sure that the export path is correct for the NFS export.
6 At the NFS Export Wizard - Read-Write Access, click the Add button and enter the IP address of
the NFS VMware kernel for the first ESX 3.5 host. Repeat this step for the VMware kernel IP
addresses for the other seven ESX hosts in the cluster, until all IP addresses have been entered.
When this is done, click Next.
7 At the NFS Export Wizard - Root Access, click the Add button and enter the IP address of the NFS
VMware kernel for the first ESX 3.5 host. Repeat this step for the VMware kernel IP addresses for
the other seven ESX hosts in the ESX cluster until all IP addresses have been entered. When this
is done, click Next.
8 At the NFS Export Wizard – Security screen, make sure that Unix Style is selected and click Next.
9 At the NFS Export Wizard – Commit screen, click Commit, and at the NFS Export Wizard –
Success screen, click Close.

Set up the NFS export for volume hosting user data disk for linked clones in persistent access
mode.
Table 30) Set up the view_lcp_userdata NFS volume.
Step Action



1 From the FilerView menu, select NFS.
2 Select Manage Exports to open the Manage NFS Exports screen.
3 Click the /vol/view_lcp_userdata NFS export.
4 Grant the export Root Access permissions by clicking Next and placing a green check inside the
box. Then click Next.
5 Make sure that the export path is correct for the NFS export.

6 At the NFS Export Wizard - Read-Write Access, click the Add button and enter the IP address of
the NFS VMware kernel for the first ESX 3.5 host server. Repeat this step for the VMware kernel
IP addresses for all the other seven hosts in the ESX cluster until all IP addresses have been
entered. When this is done, click Next.
7 At the NFS Export Wizard - Root Access, click the Add button and enter the IP address of the NFS
VMware kernel for the first ESX 3.5 host server. Repeat this step for the VMware kernel IP
addresses for all the other seven hosts in the ESX cluster until all IP addresses have been
entered. When this is done, click Next.
8 At the NFS Export Wizard – Security screen, make sure that Unix Style is selected and click Next.

9 At the NFS Export Wizard – Commit screen, click Commit, and at the NFS Export Wizard –
Success screen, click Close.

Set up the NFS export for volume hosting OS data for linked clones in nonpersistent access mode.

Table 31) Set up the view_lcnp NFS volume.


Step Action
1 From the FilerView menu, select NFS.
2 Select Manage Exports to open the Manage NFS Exports screen.
3 Click the /vol/view_lcnp NFS export.
4 Grant the export Root Access permissions by clicking Next and placing a green check inside the
box. Then click Next.
5 Make sure that the export path is correct for the NFS export.
6 At the NFS Export Wizard - Read-Write Access, click the Add button and enter the IP address of
the NFS VMware kernel for the first ESX 3.5 host server. Repeat this step for the VMware kernel
IP addresses for all the other seven hosts in the ESX cluster until all IP addresses have been
entered. When this is done, click Next.
7 At the NFS Export Wizard - Root Access, click the Add button and enter the IP address of the NFS
VMware kernel for the first ESX 3.5 host server. Repeat this step for the VMware kernel IP
addresses for all the other seven hosts in the ESX cluster until all IP addresses have been
entered. When this is done, click Next.
8 At the NFS Export Wizard – Security screen, make sure that Unix Style is selected and click Next.
9 At the NFS Export Wizard – Commit screen, click Commit, and at the NFS Export Wizard –
Success screen, click Close.

5.3 DISABLE DEFAULT SNAPSHOT SCHEDULE AND SET SNAP RESERVE TO ZERO
For all the volumes configured above for controller B, do the following:
Table 32) Disable default Snapshot schedule and set snap reserve to zero.
Step Action

1 Log into the NetApp console.

2 Set the volume Snapshot schedule for volumes created above:



snap sched <vol-name> 0 0 0

3 Set the volume Snapshot reserve for volumes created above:


snap reserve <vol-name> 0

5.4 CONFIGURE SNAPSHOT AUTODELETE FOR VOLUMES


For all the volumes configured above for controller B, do the following:
Table 33) Set Snapshot autodelete for volumes.
Step Action

1 Log in to the NetApp console.

2 Configure the Snapshot autodelete policy for the volumes created above:
snap autodelete <vol-name> commitment try trigger volume target_free_space 5 delete_order oldest_first

3 Configure the volume autodelete policy for the volumes created above:
vol options <vol-name> try_first volume_grow

5.5 CONFIGURE OPTIMAL PERFORMANCE FOR VMDKS ON NFS


For all the volumes with NFS exports configured above for controller B, do the following:
Table 34) Set optimal performance for VMDKs on NFS.
Step Action
1 Log in to the NetApp console.
2 From the storage controller console, run:
vol options <vol-name> no_atime_update on
3 Repeat step 2 for each NFS volume.
4 From the storage controller console, set the NFS TCP receive window size:
options nfs.tcp.recvwindowsize 64240
5 Disconnect and, if needed, remount each NFS datastore on each ESX server.

5.6 CONFIGURE VOLUME AUTOSIZING


For all the volumes configured above, across both the storage controllers, do the following:
Table 35) Configure volume autosizing.
Step Action
1 Log in to the NetApp console.
2 Configure the volume autosize policy for all the newly created volumes by using this command:
vol autosize <vol-name> -m 500g -i 10g on

5.7 CONFIGURE DEDUPLICATION


Set up deduplication for volume hosting user data disk for linked clones in persistent access mode.
Table 36) Configure deduplication for volumes hosting user data for link clones in persistent mode.



Step Action

1 Connect to the controller system's console, using either SSH, telnet, or serial console.

2 Execute the following command to enable deduplication for the volume hosting the user data disks:
sis on /vol/view_lcp_userdata

3 Execute the following command to start processing the existing data in the volume:
sis start -s /vol/view_lcp_userdata

4 Execute the following command to monitor the status of the deduplication operation:
sis status

5 After deduplication has finished, you can use the following command to see the savings:
df -s

Configure deduplication schedule for volume hosting user data disk for linked clones in persistent
access mode.
Table 37) Configure deduplication schedule for persistent-mode linked clone volume.
Step Action

1 Connect to the controller system's console, using either SSH, telnet, or serial console.

2 Execute the following command to set the deduplication schedule for the volume hosting the user data disk:
sis config [-s sched] <volume path>

where "sched" can take one of four forms:

[day_list][@hour_list]
[hour_list][@day_list]
-
auto

The day_list specifies on which days of the week deduplication should run. It is a comma-separated list of the first three letters of the day: sun, mon, tue, wed, thu, fri, sat. The names are not case sensitive. Day ranges such as mon-fri can also be used. The default day_list is sun-sat.

The hour_list specifies at which hours of the day deduplication should run on each scheduled day. The hour_list is a comma-separated list of the integers from 0 to 23. Hour ranges such as 8-17 are allowed. Step values can be used in conjunction with ranges; for example, 0-23/2 means "every 2 hours." The default hour_list is 0; that is, midnight on the morning of each scheduled day.

If "-" is specified, there is no scheduled deduplication operation on the flexible volume.

The auto schedule causes deduplication to run on the flexible volume whenever there are 20% new fingerprints in the change log. This check is done in a background process and occurs every hour.
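For example, to run deduplication on this volume every night at 11 p.m. (a sample schedule for illustration, not a sizing recommendation):

sis config -s sun-sat@23 /vol/view_lcp_userdata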

Set up deduplication for volume hosting CIFS user data.


Table 38) Set up deduplication for volume hosting CIFS user data.
Step Action

1 Connect to the controller system's console, using either SSH, telnet, or serial console.

2 Execute the following command to enable deduplication for the volume hosting the CIFS user data:
sis on /vol/view_cifs

3 Execute the following command to start processing the existing data in the volume:
sis start -s /vol/view_cifs

4 Execute the following command to monitor the status of the deduplication operation:
sis status

5 After deduplication has finished, you can use the following command to see the savings:
df -s

Configure deduplication schedule for volume hosting CIFS user data.

Table 39) Set up deduplication schedule for volume hosting CIFS user data.
Step Action

1 Connect to the controller system's console, using either SSH, telnet, or serial console.

2 Execute the following command to set the deduplication schedule for the volume hosting the CIFS user data:
sis config [-s sched] <volume path>

where "sched" can take one of four forms:

[day_list][@hour_list]
[hour_list][@day_list]
-
auto

The day_list specifies on which days of the week deduplication should run. It is a comma-separated list of the first three letters of the day: sun, mon, tue, wed, thu, fri, sat. The names are not case sensitive. Day ranges such as mon-fri can also be used. The default day_list is sun-sat.

The hour_list specifies at which hours of the day deduplication should run on each scheduled day. The hour_list is a comma-separated list of the integers from 0 to 23. Hour ranges such as 8-17 are allowed. Step values can be used in conjunction with ranges; for example, 0-23/2 means "every 2 hours." The default hour_list is 0; that is, midnight on the morning of each scheduled day.

If "-" is specified, there is no scheduled deduplication operation on the flexible volume.

The auto schedule causes deduplication to run on the flexible volume whenever there are 20% new fingerprints in the change log. This check is done in a background process and occurs every hour.
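For example, to let Data ONTAP trigger deduplication on this volume automatically as new fingerprints accumulate in the change log (the auto schedule described above):

sis config -s auto /vol/view_cifs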

6 VMWARE ESX HOST SETUP

6.1 PHYSICAL SERVER CONFIGURATION


Below are the server specifications that were used for this configuration. You might have different servers with different
configurations.

Table 40) ESX host configuration.


Server Component Number or Type
VMware ESX host 16



Memory per ESX 3.5 host 128GB
CPUs per ESX 3.5 host 4 Intel® Quad core CPUs
Network interface cards (NICs) per ESX 3.5 host 10

6.2 LICENSES NEEDED


Table 41) ESX licenses needed.
VMware View Infrastructure Component Number
ESX Server 3.5 licenses (1 license needed per 2 CPUs) 32
VMware vCenter™ Server Licenses 1
VMware View Enterprise Licenses 1,500
VMware View Premier Licenses 500
Windows XP licenses 2,000

6.3 INSTALL ESX 3.5


For information on the installation and configuration of ESX 3.5, refer to the ESX Server 3 and VMware
vCenter Installation Guide published by VMware.
Below are guidelines used for this environment when deploying the VMware View Infrastructure.
Table 42) VMware View infrastructure components.
VMware View Infrastructure Component            Number
Virtual machines per ESX 3.5 server             125
Virtual machines per CPU core                   8
Memory per Windows XP VMware View desktop       512MB

6.4 INSTALL VMWARE VCENTER SERVER


For information on the installation and configuration of VMware vCenter Server and License Server, refer to
the ESX Server 3 and VMware vCenter Installation Guide published by VMware.
To obtain licenses for VMware, contact your VMware sales representative.

6.5 CONFIGURE SERVICE CONSOLE FOR REDUNDANCY


Table 43) Configure service console for redundancy.
Step Action
1 Make sure that the primary Service Console vSwitch has two NICs assigned to it.
Note: The network ports that the NICs use must exist on the administrative VLAN and be on
separate switches to provide network redundancy.
2 Open VMware vCenter.
3 Select an ESX host.
4 In the right pane, select the Configuration tab.



Figure 20) VMware configuration.

5 In the Hardware box under the Configuration tab, select Networking.

Figure 21) VMware networking.

6 In the Networking section, click Properties for vSwitch0.

Figure 22) VMware networking properties.

7 In the Properties section, click the Network Adapters tab.



Figure 23) VMware vSwitch configuration.

8 Click Add at the bottom (pictured above) and select the vmnic that will act as the
secondary NIC for the service console.

Figure 24) Adding second vmnic to the vSwitch.

9 Click Next (pictured above). On the following screen, verify the settings and click Next, then
click Finish. Finally, click Close.



Figure 25) Adding second vmnic to the vSwitch confirmation.

Figure 26) Adding second vmnic to the vSwitch finish.



Figure 27) Adding second vmnic to the vSwitch close.

6.6 CONFIGURE VMWARE KERNEL NFS PORT


Table 44) Configure VMware kernel NFS port.
Step Action
1 For each ESX host, configure NFS connections to the storage controllers using a VMware kernel.
The network ports for this NFS VMware kernel must be on the nonroutable NFS VLAN that was
created in the separate VDC on the Nexus 7000.*
Note: Currently, VDC is not supported on Cisco Nexus 5000 switches.
2 Use the following assignments for your NFS storage traffic VMware kernel IP addresses:
Note: The private subnet 192.168.0.xxx is used for the storage network.
ESX Host 1: ESX Host 5: ESX Host 9: ESX Host 13:
192.168.0.1 192.168.0.5 192.168.0.9 192.168.0.13
ESX Host 2: ESX Host 6: ESX Host 10: ESX Host 14:
192.168.0.2 192.168.0.6 192.168.0.10 192.168.0.14
ESX Host 3: ESX Host 7: ESX Host 11: ESX Host 15:
192.168.0.3 192.168.0.7 192.168.0.11 192.168.0.15
ESX Host 4: ESX Host 8: ESX Host 12: ESX Host 16:
192.168.0.4 192.168.0.8 192.168.0.12 192.168.0.16

3 For each ESX host, the virtual switch for the NFS VMware kernel should have two network ports,
each going to a different Cisco Nexus 5000 switch. Two vmnics should be assigned to the virtual
switch for the NFS storage VMware kernel.



Figure 28) ESX host vSwitch2 network setup.

4 For the vSwitch for the NFS VMware kernel, set the load balancing policy to “Route based on IP
hash”.

Figure 29) ESX host NFS load balancing configuration.
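
If you prefer to script the vSwitch and VMkernel creation from the ESX service console, the following
sketch shows equivalent commands for ESX Host 1. The vSwitch name, vmnic numbers, and IP address
are assumptions taken from the steps above; adjust them for each host, and note that the load balancing
policy is still set through the VI Client as in step 4:

esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic1 vSwitch2
esxcfg-vswitch -L vmnic2 vSwitch2
esxcfg-vswitch -A "VMkernel 1" vSwitch2
esxcfg-vmknic -a -i 192.168.0.1 -n 255.255.255.0 "VMkernel 1"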

6.7 CONFIGURE VMOTION


Table 45) Configure VMotion.
Step Action

1 For each ESX host, configure VMotion using a VMware kernel with network ports on the
nonroutable VMotion VLAN.



2 For each ESX host, VMotion should have two network ports, each going to a different Cisco Nexus
5000 switch. Two vmnics should be assigned to the virtual switch for the VMotion VMware
kernel.
3 Use the following assignments for your VMotion VMware kernel IP addresses:
Note: The private subnet 192.168.1.xxx is used for the VMotion network.
ESX Host 1: ESX Host 5: ESX Host 9: ESX Host 13:
192.168.1.1 192.168.1.5 192.168.1.9 192.168.1.13
ESX Host 2: ESX Host 6: ESX Host 10: ESX Host 14:
192.168.1.2 192.168.1.6 192.168.1.10 192.168.1.14
ESX Host 3: ESX Host 7: ESX Host 11: ESX Host 15:
192.168.1.3 192.168.1.7 192.168.1.11 192.168.1.15
ESX Host 4: ESX Host 8: ESX Host 12: ESX Host 16:
192.168.1.4 192.168.1.8 192.168.1.12 192.168.1.16

6.8 CONFIGURE VIRTUAL MACHINE NETWORK

Table 46) Configure virtual machine network.


Step Action

1 For each ESX host, configure the network for the virtual machines by creating a public virtual machine
port group in a new vSwitch. This should be on a public VLAN.

2 For each ESX host, the virtual machine network vSwitch should have four network ports, two going
to each of the Cisco Nexus 5000 switches. Four vmnics should be assigned to this virtual switch.

Figure 30) ESX host vSwitch1 network setup.

3 For the virtual machine network vSwitch, set the load balancing policy to “Route based on IP hash”.



6.9 VMWARE ESX HOST NETWORK CONFIGURATION
Depicted below is how a fully configured network environment looks once all of the networking steps above
have been completed.

Figure 31) VMware ESX host configuration example.

6.10 ADD TEMPLATE VIRTUAL MACHINE DATASTORE TO ESX HOST


Table 47) Add template virtual machine datastore to ESX hosts.
Step Action
1 Open VMware vCenter.
2 Select an ESX host.
3 In the right pane, select the Configuration tab.

Figure 32) VMware configuration.



4 In the Hardware box, select the Storage link.

Figure 33) VMware virtual machine swap location.

5 In the upper-right corner, click Add Storage to open the Add Storage Wizard.

Figure 34) VMware Add Storage.

6 Select the Network File System radio button and click Next.

Figure 35) VMware Add Storage Wizard.

7 Enter the storage controller name, the export path, and a datastore name (view_rcu_template),
and then click Next.



Figure 36) VMware Add Storage Wizard NFS configuration.

8 Click Finish.

Figure 37) VMware Add Storage Wizard finish.

6.11 ADD VIEW_SWAP DATASTORE TO ESX HOST


Table 48) Add view_swap datastore to ESX hosts.



Step Action

1 Open vCenter.

2 Select a VMware ESX host.

3 In the right pane, select the Configuration tab.


4 In the Hardware box, select the Storage link.
5 In the upper-right corner, click Add Storage to open the Add Storage Wizard.
6 Select the Network File System radio button and click Next.
7 Enter the storage controller name, the export path, and a datastore name (view_swap), then click Next.
8 Click Finish.
9 Repeat this procedure for all of the ESX hosts.
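
As an alternative to the wizard, NFS datastores can also be mounted from the ESX service console with
esxcfg-nas. This is a sketch only; the controller address 192.168.0.101 and export path are assumptions,
so substitute your own values:

esxcfg-nas -a -o 192.168.0.101 -s /vol/view_swap view_swap
esxcfg-nas -l

The -l option lists the mounted NAS datastores so that the new mount can be verified.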

6.12 CONFIGURE NFS TUNABLES WITHIN ESX 3.5


Table 49) Set NFS tunables.
Step Action
1 Connect to the ESX host's system console using either SSH, telnet, or serial console and log in to
the console. Type each command below and press Enter.
2 esxcfg-advcfg -s 32 /NFS/MaxVolumes
3 esxcfg-advcfg -s 12 /NFS/HeartbeatFrequency
4 esxcfg-advcfg -s 5 /NFS/HeartbeatTimeout
5 esxcfg-advcfg -s 10 /NFS/HeartbeatMaxFailures

6 esxcfg-advcfg -s 30 /Net/TcpIpHeapSize

esxcfg-advcfg -s 120 /Net/TcpIpHeapMax

8 Repeat this procedure for all of the ESX hosts.
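
To keep the settings consistent across all 16 hosts, the commands can be collected into a small script and
run once per host; this sketch simply repeats the values from Table 49:

#!/bin/sh
# Apply the NFS tunables from Table 49 on this ESX 3.5 host
esxcfg-advcfg -s 32 /NFS/MaxVolumes
esxcfg-advcfg -s 12 /NFS/HeartbeatFrequency
esxcfg-advcfg -s 5 /NFS/HeartbeatTimeout
esxcfg-advcfg -s 10 /NFS/HeartbeatMaxFailures
esxcfg-advcfg -s 30 /Net/TcpIpHeapSize
esxcfg-advcfg -s 120 /Net/TcpIpHeapMax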

6.13 CONFIGURE LOCATION OF VIRTUAL SWAPFILE DATASTORE


Table 50) Configure location of datastore virtual swap file.
Step Action
1 Open VMware vCenter.
2 Select an ESX host.
3 In the right pane, select the Configuration tab.

Figure 38) VMware configuration.

4 In the Software box, select Virtual Machine Swapfile Location.



Figure 39) VMware Virtual Machine Swap Location.

5 In the right pane, select Edit.


6 The Virtual Machine Swapfile Location wizard opens.
7 Click the view_swap datastore and select OK.
8 Repeat steps 2 through 7 for each ESX host in the ESX cluster.

7 SET UP VMWARE VIEW MANAGER 3.0 AND VMWARE VIEW


COMPOSER
VMware View Manager is a key component of VMware View and is an enterprise-class desktop
management solution that streamlines the management, provisioning, and deployment of virtual
desktops. This product provides security and configuration for the VMware View environment and allows
an administrator to determine exactly which virtual machines a user can access.
View Composer is a new component of the VMware View solution that uses VMware linked clone
technology to rapidly create desktop images that share virtual disks with a master image, conserving disk
space and streamlining management.
For setup and configuration details for the different components of VMware View Manager and View
Composer, refer to the VMware View Manager Administration Guide.

8 SET UP AND CONFIGURE WINDOWS XP GOLD IMAGE

8.1 CREATE VIRTUAL MACHINE IN VMWARE ESX 3.5


For this portion of the document, follow your own guidelines for virtual machine disk size and RAM for the
Windows XP virtual machine. This implementation uses 512MB of RAM (VMware guidelines range from
256MB at the low end to 512MB at the high end). Follow the Basic System Administration guide published
by VMware, starting on page 145. Be sure to name this Windows XP virtual machine
windows_xp_gold.

8.2 FORMAT THE VIRTUAL MACHINE WITH THE CORRECT STARTING PARTITION
OFFSETS

To set up the starting offset using the fdisk command found in ESX, follow the steps detailed below:

Table 51) Format a virtual machine with the correct starting offsets.



Step Action
1 Log in to the ESX Service Console.
2 Change to the virtual machine directory and view its contents by typing the following commands
(shown below):
cd /vmfs/volumes/vdi_gold/windows_xp_gold
ls -l

Figure 40) Using FDisk for setting offset—navigate to .vmdk directory.


3 Get the number of cylinders from the vdisk descriptor by typing the following command (this number
will be different depending on several factors involved with the creation of your .vmdk file):
cat windows_xp_gold.vmdk

Figure 41) Using FDisk for setting offset—find cylinders of the vDisk.
4 Run fdisk on the windows_xp_gold-flat.vmdk file by typing the following command:
fdisk ./windows_xp_gold-flat.vmdk

Figure 42) Using FDisk for setting offset—starting FDisk.


5 Set the number of cylinders.



6 Type x and press Enter to switch to expert mode.
7 Type c and press Enter.
8 Type the number of cylinders found in step 3.

Figure 43) Using FDisk for setting offset—set the number of cylinders.
9 Type p at the expert command screen to look at the partition table (which should be blank).

Figure 44) Using FDisk for setting offset—set view partition information.
10 Return to regular (nonextended) command mode by typing r at the prompt.

Figure 45) Using FDisk for setting offset—set cylinder information.


11 Create a new partition by typing n and then p when it asks which type of partition.
12 Enter 1 for the partition number, 1 for the first cylinder, and press Enter for the last cylinder question
to make it use the default value.
13 Go into extended mode to set the starting offset by typing x.
14 Set the starting offset by typing b and pressing Enter, selecting 1 for the partition and pressing
Enter, and entering 64 and pressing Enter.
15 Check the partition table by typing p.



Figure 46) Using FDisk for setting offset—view partition table to verify changes.
16 Type r to return to the regular menu.
17 To set the system type to HPFS/NTFS, type t.
18 For the Hex code, type 7.

Figure 47) Using FDisk for setting offset—set system type and hex code.
19 Save and write the partition by typing w. Ignore the warning, which is normal.

Figure 48) Using FDisk for setting offset—save and write the partition.
20 Start the virtual machine and run Windows setup.
21 When the install gets to the partition screen, install on the existing partition. DO NOT DESTROY or
RECREATE! C: should already be highlighted. Press Enter at this stage.
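
Before starting Windows setup in step 20, the alignment can be double-checked from the service console
by listing the partition in sector units; the first partition should report a starting sector of 64. The file name
below matches the example above:

fdisk -lu ./windows_xp_gold-flat.vmdk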

8.3 DOWNLOAD AND PREPARE LSI 53C1030 DRIVER


Table 52) Download and prepare LSI 53C1030 driver.
Step Action
1 Download the LSI 53C1030 driver from http://www.rtfm-ed.co.uk/downloads/lsilogic.zip.
2 Using MagicISO or another third-party solution, create a .flp image containing LSI logic drivers. An
alternative third-party solution is Virtual Floppy Drive 2.1.
3 Using VMware vCenter 2.5, upload the file to the desired datastore by performing the following
steps:
a. At the Summary screen for an ESX host, double-click the datastore icon to go into the
Datastore Browser screen.

Figure 49) Datastore icon.


b. In the Datastore Browser, select the upload button and find the .flp image. Upload it to either
the root level or to a specific folder on the NFS datastore.



Figure 50) Upload button.
c. Depending on your connection speed, it should only take a few moments for the .flp image to
upload. While it is uploading, you should see an Upload screen similar to the one below.

Figure 51) Uploading screen.

8.4 WINDOWS XP PREINSTALLATION CHECKLIST


Table 53) Windows XP preinstallation checklist.

Step Action

1 Be sure to have a Windows XP CD or ISO image that is accessible from the virtual machine.

2 Using the Virtual Infrastructure Client (VIC), connect to VMware vCenter.


3 Locate the virtual machine that was initially created and verify the following by right-clicking the
virtual machine and selecting Edit Settings:
a. A floppy drive is present.
b. The floppy drive is configured to connect at power on.
c. The device type is set to use a floppy image and is pointing to the LSI driver image.
d. A CD/DVD drive is present and configured to connect at power on.
e. A CD/DVD device type is configured to point at the Windows XP CD or ISO image.



Figure 52) Verify virtual machine settings for virtual floppy drive.

Figure 53) Verify virtual machine settings for virtual CD or ISO file.

8.5 INSTALL AND CONFIGURE WINDOWS XP


Install Windows XP.
Table 54) Install Windows XP.
Step Action
1 Using the Virtual Infrastructure client, connect to VMware vCenter Server.
2 Right-click the virtual machine and select Open Console. This will allow you to send input and view
the boot process.
3 Power on the virtual machine created earlier by clicking the green arrow icon at the top of the
console screen (shown below).

Figure 54) Power on button.


4 As the Windows setup process begins, press F6 when prompted to add an additional SCSI driver.
Specify the LSI logic driver on the floppy image (.flp) at this stage.
5 Perform the installation of Windows XP as normal, selecting any specifics for your environment that
need to be configured.
6 Because this is a template, keep the installation as generic as possible.

Configure Windows XP*


Table 55) Configure Windows XP.
Step Action
1 Install and configure the VMware tools.

2 If not applied to the installation CD, install the most recent service pack and the most recent
Microsoft® updates.
3 Install the connection broker agent.
4 Set the Windows screen saver to blank.
5 Configure the default color setting for RDP by making the following change in the registry: under
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-
Tcp, change the color depth to 4.
6 Disable unused hardware.
7 Turn off theme enhancements.
8 Adjust the system for best performance by going to My Computer>Properties>Advanced
Tab>Performance Section>Settings.
9 Set the blank screen saver to password protect on resume.
10 Enable hardware acceleration by going to
Start>Control Panel>Display>Settings Tab>Advanced Button>Troubleshooting Tab.
11 Delete any hidden Windows update uninstalls.
12 Disable indexing services by going to Start>Control Panel>Add Remove Windows
Components>Indexing Service.
Note: Indexing improves searches by cataloging files. For users who search a lot, indexing might be
beneficial and should not be disabled.
13 Disable indexing of the C: drive by opening My Computer, right-clicking C:, and selecting properties.
Uncheck the options shown below:



Figure 55) Uncheck to disable Indexing Service on C: drive.
14 Remove system restore points:
Start>Control Panel>System>System Restore
15 Disable any unwanted services.
16 Run disk cleanup:
My Computer>C: properties
17 Run disk defrag:
My Computer>C: properties>Tools

*From Warren Ponder, Windows XP Deployment Guide (Palo Alto, CA: VMware, Inc., 2008), pp. 3–4.

Optimize the Windows file system for best performance.


Table 56) Optimize the Windows file system for best performance.
Step Action
1 Log in to the gold virtual machine.
2 Open a command prompt by going to Start > Run, typing cmd, and pressing Enter.
3 At the command line, enter the following:
fsutil behavior set disablelastaccess 1
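
To confirm that the setting took effect, it can be queried afterward; this verification step is optional:

fsutil behavior query disablelastaccess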

Change disk timeout values.


Table 57) Change disk timeout values.



Step Action
1 Log in to the gold VM.
2 Open the Registry Editor by going to Start > Run, typing regedit, and pressing Enter.
3 Find the TimeOutValue by following the path
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk].
4 Change the key "TimeOutValue"=dword:00000190.
5 Reboot the virtual machine now or at the end of the installation of applications and general system
settings.
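
The registry change can also be scripted. The following one-liner is a sketch that assumes the
dword:00000190 value in step 4 is hexadecimal, as in standard .reg notation:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeOutValue /t REG_DWORD /d 0x190 /f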

Install Applications
Install all the necessary infrastructure and business applications in the gold VM. A few examples include
VMware View Agent (if planning to use VMware View Manager) to allow specific users or groups RDP
access to the virtual machines, MS Office, antivirus scanning agent, Adobe Reader, and so on.

Install VMware View Agent


Install VMware View Agent (if planning to use VMware View Manager) to allow specific users or groups RDP
access to the virtual desktops.

9 RAPID DEPLOYMENT OF WINDOWS XP VIRTUAL MACHINES IN A


VMWARE VIEW ENVIRONMENT USING RCU 2.0

9.1 SET UP AND CONFIGURE TEMPLATE VM VOLUME FOR MASS DEPLOYMENT

Perform NetApp deduplication of the volume hosting the template virtual machine datastore.
Table 58) Perform deduplication on template virtual machine datastore.
Step Action
1 Connect to the storage controller's system console, using either SSH, telnet, or serial console.
2 Execute the following command to enable NetApp dedupe for the template virtual machine volume:
sis on <template VM volume path>
3 Execute the following command to start processing existing data:
sis start -s <template VM volume path>
4 Execute the following command to monitor the status of the dedupe operation:
sis status
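
As a concrete example, assuming the template datastore volume is named view_rcu_template (an
illustrative name, not a requirement), the sequence would be:

sis on /vol/view_rcu_template
sis start -s /vol/view_rcu_template
sis status /vol/view_rcu_template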

Resize Template Virtual Machine Volume


Set the volume to 10% over the current volume size.
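
As a sketch, if the current volume size is 50GB (check it with vol size view_rcu_template), 10% over that
would be 55GB; the volume name is again illustrative:

vol size view_rcu_template 55g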

9.2 DEPLOY VIRTUAL MACHINES USING RCU 2.0

Install RCU on the VMware vCenter server.


Table 59) Install RCU on VMware vCenter.
Step Action
1 Make sure all prerequisites for the host system are met.



Step Action
2 Download the RCU setup executable RCU_Setup_2_0_win32.exe from http://now.netapp.com.

3 Save the file to your local file system.


4 Navigate to the folder containing the executable.
5 Double-click the executable and follow the onscreen directions.
6 Click Next.

Figure 56) Rapid Cloning Utility Setup Wizard.


7 Click the box to accept the terms of the license agreement and click Next.

Figure 57) RCU setup wizard license agreement.



Step Action
8 Keep the default location for the installation and click Next.

Figure 58) RCU setup wizard installation location.


9 The screen below is displayed while the installation proceeds.

Figure 59) RCU setup wizard installing screen.



Step Action
10 Once installation is complete, click Next.

Figure 60) RCU setup wizard installation complete screen.


11 Make sure the correct plug-in IP address is shown. Then enter the vCenter Server IP address,
username, and password and click Next.

Figure 61) RCU registration complete screen.


12 Once this process has completed, log on to the VMware vCenter server and check the toolbar at the
top. A new NetApp icon should be present. Click it to access RCU 2.0.

Figure 62) RCU toolbar.



Add storage controllers in RCU.
Table 60) Add storage controller in RCU.
Step Action
1 Navigate to the NetApp vCenter plugin.

Figure 63) RCU vCenter plugin.


2 Click the Storage Controllers tab.

Figure 64) RCU home screen.


3 Click Add Storage Controller.

Figure 65) RCU “Add Storage Controller” screen.


4 Enter the credentials for the storage controller.



Figure 66) RCU storage controller credentials screen.
5 Click OK on the message "Storage controller added successfully" and verify the storage controller
on the Storage Controllers tab.

Figure 67) RCU storage controller setup complete screen.

Figure 68) RCU storage controller screen.


6 Add the storage cluster partner node and verify it.

Figure 69) RCU storage controller partner screen.

Create Customization Specification


Create a customization specification for use with the deployment of the virtual machine. The customization
specification creates the information necessary for sysprep to successfully customize a guest OS from the
VMware vCenter Server. It includes information on hostname, network configuration, license information,
domain membership, and other information necessary to customize a guest OS. This procedure can be
found in the VMware Basic System Administration Guide on page 224 at
www.vmware.com/pdf/vi3_35/esx_3/r35/vi3_35_25_admin_guide.pdf. This customization specification can
be used by RCU to personalize each virtual machine.



Deploy Space-Efficient Clones Using RCU 2.0
Using the template virtual machine as the source virtual machine, create 1,000 virtual machines in four
datastores (250 virtual machines per datastore) on storage controller A in ESX Cluster A with eight ESX
hosts. These virtual machines will be imported into VMware View Manager, as part of a manual desktop
pool, in persistent access mode.
RCU will perform the following steps:
1. Create the clones with file-level FlexClone.
2. Clone the datastores with volume-level FlexClone.
3. Mount the NFS datastores to the ESX hosts.
4. Create the virtual machines from the cloned vmdk.
5. Customize the virtual machines using the customization specification.
6. Power on the virtual machines.
7. Import virtual machines into VMware View Manager.

Table 61) Deploy space-efficient clones using RCU 2.0.


Step Action
1 Right-click the template VM to be cloned and select “Create NetApp Rapid Clones.”

Figure 70) RCU “Create NetApp Rapid Clones” screen.



Step Action
2 Choose the storage controller with the drop-down arrow and click Next. Make sure to choose
controller A.

Figure 71) RCU storage controller selection screen.


Additionally, if the VMware Virtual Infrastructure client is not running, select Advanced Options and
enter the password for the VMware vCenter Server.

3 Select the data center, cluster, or server to which to provision the virtual machines and select Next.
Selecting the cluster will distribute virtual machines evenly across the ESX hosts in the ESX cluster.

Figure 72) RCU provisioning screen.



Step Action
4 Select “Create new clones in new datastores” and click Next.

Figure 73) RCU create new clones screen.

5 Enter 250 for the number of clones per datastore, 4 for the total number of datastores, the clone
name prefix, and the starting clone number. For guest customization, select the check box and the
customization specification to apply. Then choose whether the virtual machines will be powered on
after the clones are created.

Figure 74) RCU clone details screen.



Step Action
6 Select the option to automatically create an import file. Add the VMware View Manager Connection
Server name or IP address, the domain name, username, and password. Then select the virtual
machines to be imported, choose a manual desktop pool with persistent access mode, and specify
the desktop pool name. Finally, select Validate Credentials to test the connection to the VMware
View Manager Connection Server.

Figure 75) RCU VMware View server import file screen.

7 After the credentials have been validated, click OK.

Figure 76) RCU VMware View server import file completion screen.



Step Action
8 Select Next.

Figure 77) RCU VMware View server import file screen.

9 Set the size of the datastore to 25GB. Then select Thin Provisioning if you don't want to preallocate
the storage from the aggregate for each volume. Additionally, you can choose to set the datastore
names manually.

Figure 78) RCU details of new datastore screen.



Step Action
10 Review the configuration and click Apply.

Figure 79) RCU rapid clone completion screen.

Import RCU-Deployed Virtual Machines into VMware View Manager


In this step the virtual machines will be imported into VMware View Manager.
Table 62) Import RCU deployed virtual machines into VMware View Manager.
Step Action
1 Copy the LDF file from the Windows machine running RCU. The LDF file will be located in
C:\Program Files\NetApp\Rapid Cloning Utilities\export. Copy this file to the VMware View Manager
Server.



Step Action
2 Log in to the VMware View Manager Server and open a command prompt. At the command prompt,
change to the directory where the LDF file is located. Then import the LDF file by entering the
following command:

ldifde.exe -i -f <name of ldf file> -s 127.0.0.1

Figure 80) RCU import ldf file screen.


3 Open the VMware View Manager Administrator and verify that the pool has been created.

Figure 81) VMware View Manager screen.



Step Action
4 Click the Desktop Sources tab to verify that the virtual machines created by RCU are now imported
into VMware View Manager and are part of the pool.

Figure 82) VMware View Manager Desktop Sources tab.

Resize the FlexClone Volumes to the Estimated Size


Resize the four FlexClone volumes created by RCU on storage controller A to 525GB, as planned for
future growth based on the assumptions about new writes.

Deploy Space-Efficient Clones Using RCU 2.0


Repeat steps 9.2.4 through 9.2.6 to create 500 virtual machines on storage controller B using RCU,
placing them in ESX cluster B. Provision 250 virtual machines per datastore, with two datastores in total.
These virtual machines will be imported into VMware View Manager as part of a new manual desktop pool
in nonpersistent access mode.

Note: The architecture proposed in this deployment guide balances the 2,000 virtual machines across two
ESX clusters with eight ESX hosts per cluster. The reason for this is that VMware does not support more
than eight ESX hosts per cluster when using VMware View Composer/linked clones. For further details, refer
to View Composer Design Guide.

9.3 CONFIGURE THE NEWLY CREATED DATASTORES

9.3.1 Configure Volume Guarantee


Table 63) Configure volume guarantee.
Step Action
1 Log in to the NetApp console.
2 Turn off the volume guarantee for all four of the newly created volumes by using this command:
vol options <vol-name> guarantee none



9.3.2 Configure Volume Autosizing
Table 64) Configure volume autosizing.
Step Action
1 Log in to the NetApp console.
2 Set the volume autosize policy for all four of the newly created volumes by using this command:
vol autosize <vol-name> -m 500g -i 10g on
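
Both of the settings above can be applied to all four volumes from a UNIX admin host over SSH, which was
configured earlier. This sketch assumes the controller hostname controllerA and the illustrative volume
names view_clone_ds1 through view_clone_ds4:

for vol in view_clone_ds1 view_clone_ds2 view_clone_ds3 view_clone_ds4; do
    ssh root@controllerA "vol options $vol guarantee none"     # turn off the volume guarantee
    ssh root@controllerA "vol autosize $vol -m 500g -i 10g on" # set the autosize policy
done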

9.3.3 Configure Deduplication on Each Newly Created Datastore Volume


Table 65) Configure deduplication on FlexClone volumes.
Step Action
1 Connect to the controller's system console, using either SSH, telnet, or serial console.
2 Execute the following command to enable dedupe for the FlexClone volume:
sis on <FlexClone volume path>
3 Execute the following command to start processing existing data:
sis start -s <FlexClone volume path>
4 Execute the following command to monitor the status of the dedupe operation:
sis status
5 Perform the above steps for all four of the newly created volumes.
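
The same loop pattern works for enabling deduplication on all four volumes at once; the controller
hostname and volume names below are the same illustrative assumptions used in section 9.3.2:

for vol in view_clone_ds1 view_clone_ds2 view_clone_ds3 view_clone_ds4; do
    ssh root@controllerA "sis on /vol/$vol"       # enable dedupe on the volume
    ssh root@controllerA "sis start -s /vol/$vol" # start processing existing data
done
ssh root@controllerA "sis status"                 # monitor progress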

10 DEPLOY LINKED CLONES


As shown in Figure 1, this sample deployment has 500 virtual machines that are part of two automated
desktop pools, created using linked clones.
Pool 1: 250 virtual machines provisioned in persistent access mode, with OS data disks and user
data disks hosted on the separate datastores created earlier.
Pool 2: 250 virtual machines provisioned in nonpersistent access mode, with one datastore, created
earlier, hosting the OS data disks.
For provisioning the linked clone-based desktop pools and associated virtual machines, refer to the
procedure in the VMware View Manager Administration Guide.

11 ENTITLE USERS/GROUPS TO DESKTOP POOLS


The next step is to entitle users/groups to the various desktop pools created in VMware View Manager.
Follow the instructions in the VMware View Manager Administration Guide. Finally, install the VMware View
Client on every end-user access device (PCs, thin clients, and so on).

12 TESTING AND VALIDATION OF THE VMWARE VIEW AND NETAPP


STORAGE ENVIRONMENT
Below is a checklist designed to determine whether your environment is set up correctly. Run these tests
as appropriate for your environment and document the results.



Table 66) Testing and validation steps.
Item Item Description
1 Test Ethernet connectivity for VMware ESX servers and NetApp. If using NIC teams or VIFs, pull
network cables or bring down the interfaces and verify network functionality.
2 If running in a cluster, test SAN multipathing by performing a cable pull or by disabling a switch port
(if applicable).
3 Verify datastores are seen as cluster-wide resources by creating a custom map of the hosts and
datastores and verifying connectivity.
4 Test vCenter functionality for appropriate access control, authentication, and VI clients.
5 Perform NetApp cluster failover testing for NAS and verify that datastores remain connected.
6 Test performance and IOPS to determine that the environment is behaving as expected.

13 FEEDBACK
Send an e-mail to [email protected] with questions or comments concerning this document.

14 REFERENCES
TR-3705: NetApp and VMware View Best Practices
TR-3428: NetApp and VMware Virtual Infrastructure Storage Best Practices
TR-3505: NetApp Deduplication for FAS Deployment and Implementation Guide
VMware View Administrator Guide
Introduction to VMware View Manager
VMware Infrastructure Documentation
VMware View Windows XP Deployment Guide
VMware View Composer Design Guide
VMware View Composer Deployment Guide
VMware View Reference Architecture
Windows XP Deployment Guide
Cisco Nexus 7000 Series NX-OS Interfaces Configuration Guide, Release 4.1
Cisco Nexus 5000 Series Switch CLI Software Configuration Guide



ACKNOWLEDGMENTS
The following people contributed to the creation and design of this guide:
Vaughn Stewart, Technical Marketing Engineer, NetApp
Larry Touchette, Technical Marketing Engineer, NetApp
Eric Forgette, Software Engineer, NetApp
Peter Learmonth, Reference Architect, NetApp
Mike Slisinger, Technical Marketing Engineer, NetApp
David Klem, Reference Architect, NetApp
Wen Yu, Sr. Technical Alliance Manager, VMware
Fred Schimscheimer, Sr. Technical Marketing Manager, VMware

© Copyright 2009 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp,
Inc. NetApp, the NetApp logo, Go further, faster, Data ONTAP, FlexClone, FilerView, and Snapshot are trademarks or registered trademarks
of NetApp, Inc. in the United States and/or other countries. Microsoft and Windows are registered trademarks of Microsoft Corporation. Intel
is a registered trademark of Intel Corporation. VMware and VMotion are registered trademarks of VMware, Inc. UNIX is a registered
trademark of The Open Group. All other brands or products are trademarks or registered trademarks of their respective holders and should
be treated as such. TR-3770
