TR 3770
This guide focuses on achieving multiple levels of storage efficiency and performance acceleration for each of the deployment scenarios in this mixed environment. Although this document assumes a 75% to 25% deployment split, the principles for storage layout, efficiency, performance acceleration, and operational agility apply to any deployment mix.
The table below shows a sample customer environment in which different user profiles have different requirements for desktop usage and for the data hosted on the virtual desktops. It highlights how the different deployment solutions (NetApp RCU 2.0 and VMware linked clones) can be leveraged to meet the requirements of each user type.
Table 4) Types of VMware View deployment scenarios.

User profile: Marketing/finance/consultants
User requirements: Customizable, personalized desktops using a mix of office and specialized decision-support applications; download and use several applications as required; installed applications and/or data on the OS disk to be retained after patches, OS upgrades, and reboots; requires protecting the user data
Number of virtual desktops: 1,000
VMware View Manager desktop delivery model: Manual desktop pool
Access mode: Persistent
Deployment solution: NetApp RCU 2.0

User profile: Software developers
User requirements: Mix of office and enterprise applications on the desktop; require installing new applications on an as-needed basis; requires applications installed on the OS disk to be retained after patches, upgrades, and reboots, so they are available to the next user who logs in; requires protecting the user data
Number of virtual desktops: 500
VMware View Manager desktop delivery model: Manual desktop pool
Access mode: Nonpersistent
Deployment solution: NetApp RCU 2.0

User profile: Help desk/call center
User requirements: Work with only one application at a time
Number of virtual desktops: 250
VMware View Manager desktop delivery model: Automated desktop pool
Access mode: Persistent
Deployment solution: VMware linked clones
Detailed below are the steps used to create the network layout for the NetApp storage controllers and for each ESX 3.5 host in the environment.
The goal in using a Cisco Nexus environment is to leverage its capabilities to logically separate public IP traffic from storage IP traffic. Doing so mitigates the risk that changes made to one portion of the network affect the other.
Since the Cisco Nexus switches used in this configuration support virtual port channels (vPCs) and are configured with a virtual device context (VDC) specifically for storage traffic, the storage network is logically separated from the rest of the network while providing a high level of redundancy, fault tolerance, and security.
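For orientation, the following is a minimal NX-OS sketch of the vPC constructs described above; the domain ID, keepalive address, and port-channel numbers are illustrative placeholders, not values from this configuration:
feature vpc
vpc domain 1
  peer-keepalive destination 10.0.0.2    ! management address of the vPC peer switch
interface port-channel10
  vpc peer-link                          ! 10Gb link carrying all VLANs between the Nexus peers
interface port-channel20
  vpc 20                                 ! vPC member channel toward a host or storage controller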
For the purpose of this configuration, eight IOPS per virtual machine is used as the basis for the design architecture; at that rate, the 2,000 virtual desktops described here must sustain roughly 16,000 IOPS in aggregate. This number might vary per environment and for different user types. For further details on sizing best practices, see NetApp TR-3705.
* Depending on the requirements, it may be possible to replace the FC disks used in this configuration with
SATA disks. However, be sure to perform proper sizing and testing before making this change.
Note: Repeat these steps for the two remaining ports. Be sure that one NIC is on switch A and the other is on switch B. These ports will be used for CIFS and management traffic and should be set up using VLAN tagging.
For this example, a 50% deduplication savings figure is used for this sample user profile; the figure may differ in your environment. We recommend going through the sizing exercise with NetApp to determine the storage savings for your environment.
Datastore for hosting the virtual machine swap files (vswp) for all 2,000 virtual machines:
512MB per virtual machine × 2,000 virtual machines + 10% free space ≈ 1,100GB
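The same arithmetic can be double-checked in a shell (the numbers are from this example; substitute your own virtual machine count and vswp size):
# 512MB vswp per VM x 2,000 VMs, plus 10% free space, converted to GB
echo $(( 512 * 2000 * 110 / 100 / 1024 ))GB    # prints 1100GB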
Note that the template virtual machine datastore, the gold datastore for RCU-provisioned virtual machines, and the datastore hosting the linked clone parent virtual machines are all 25GB in size, although the virtual machines themselves are 10GB.
Enable the Performance Acceleration Module (FlexScale) and allow it to cache normal user data blocks by setting the following options:
options flexscale.enable on
options flexscale.normal_data_blocks on
10 As recommended earlier, make nonroot aggregates as large as possible to benefit from the I/O
capacity of all the spindles in the aggregate.
4.1 CREATE A VOLUME TO HOST THE TEMPLATE VIRTUAL MACHINE FOR RCU 2.0
4.2 CONFIGURE THE NFS EXPORT FOR VOLUME HOSTING TEMPLATE VIRTUAL
MACHINE
Table 15) Configure the template virtual machine volume NFS export.
Step Action
4 Grant the export Root Access permissions by clicking Next and placing a green
check inside the box. Then click Next.
5 Make sure that the export path is correct for the NFS export.
6 At the NFS Export Wizard - Read-Write Access screen, click the Add button and enter the IP address of the NFS VMware kernel for one of the ESX 3.5 hosts. Repeat this step for the VMware kernel IP addresses of the other hosts in the ESX cluster until all IP addresses have been entered. When this is done, click Next.
7 At the NFS Export Wizard - Root Access screen, click the Add button and enter the IP address of the NFS VMware kernel for the first ESX 3.5 host server. Repeat this step for the VMware kernel IP addresses of the other hosts in the ESX cluster until all IP addresses have been entered. When this is done, click Next.
8 At the NFS Export Wizard - Security screen, make sure that Unix Style is selected and click Next.
9 At the NFS Export Wizard – Commit screen, click Commit, and at the NFS Export
Wizard – Success screen, click Close.
4.3 DISABLE DEFAULT SNAPSHOT SCHEDULE AND SET SNAP RESERVE TO ZERO
Perform this step for the volume hosting the template virtual machine.
Table 16) Configure Snapshot details.
Step Action
2 Set the volume Snapshot schedule for the volume that was created above:
snap sched <vol-name> 0 0 0
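3 Set the Snapshot reserve to zero for the same volume, per this section's title:
snap reserve <vol-name> 0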
Create the volume to host the virtual machine swap files (view_swap):
2 Select Volumes.
5 Note that Data ONTAP creates new volumes with a security style matching that of the root volume. Verify that the security style of the volume is set to UNIX.
Configure the NFS export for volume hosting virtual machine swap files.
Table 20) Set up the view_swap NFS export.
Step Action
1 From the FilerView menu, select NFS.
2 Select Manage Exports to open the Manage NFS Exports screen.
3 Click the /vol/view_swap NFS export.
4 Grant the export root access permissions by clicking Next and placing a green check inside the
box. Then click Next.
5 Make sure that the export path is correct for the NFS export.
6 At the NFS Export Wizard - Read-Write Access, click the Add button and enter the IP address of
the NFS VMware kernel for the first ESX 3.5 host server. Repeat this step for the VMware kernel
IP addresses for all the other hosts in the ESX cluster until all IP addresses have been entered.
When this is done, click Next.
7 At the NFS Export Wizard - Root Access, click the Add button and enter the IP address of the NFS
VMware kernel for the first ESX 3.5 host server. Repeat this step for the VMware kernel IP
addresses for the other hosts in the ESX cluster until all IP addresses have been entered. When
this is done, click Next.
8 At the NFS Export Wizard – Security screen, make sure that Unix Style is selected and click Next.
9 At the NFS Export Wizard – Commit screen, click Commit, and at the NFS Export Wizard –
Success screen, click Close.
2 Set the volume Snapshot schedule for the volume that was created above:
snap sched <vol-name> 0 0 0
3 Configure the Snapshot autodelete policy:
snap autodelete <vol-name> commitment try trigger volume target_free_space 5 delete_order oldest_first
5.1 CREATE THE VOLUMES FOR HOSTING LINKED CLONES AND CIFS USER DATA
Create a volume to host the OS data disks for linked clones in persistent access mode.
Table 25) Create the view_lcp volume.
Step Action
1 Open FilerView (http://filer/na_admin).
2 Select Volumes.
3 Select Add to open the Volume Wizard.
Create a volume to host the user data disks for linked clones in persistent access mode.
Table 26) Create the linked clone user data volume.
Step Action
1 Open FilerView (http://filer/na_admin).
2 Select Volumes.
3 Select Add to open the Volume Wizard.
4 Complete the Wizard, naming the volume view_lcp_userdata and placing it in the View_Production
aggregate. This volume should be a total of 250GB in size.
To achieve high levels of storage efficiency, make sure to enable thin provisioning by setting the space guarantee to "none".
5 Note that Data ONTAP creates new volumes with a security style matching that of the root volume.
Verify that the security style of the volume is set to UNIX.
qtree security view_lcp_userdata
If the qtree security style is NTFS or Mixed, change it using the following command:
qtree security view_lcp_userdata unix
Create a volume to host the OS data disks for linked clones in nonpersistent access mode.
Table 27) Create the linked clone OS data disk volume (nonpersistent).
Step Action
1 Open FilerView (http://filer/na_admin).
2 Select Volumes.
3 Select Add to open the Volume Wizard.
4 Complete the Wizard, naming the volume view_lcnp and placing it in the View_Production
aggregate. This volume should be a total of 700GB in size.
To achieve high levels of storage efficiency, make sure to enable thin provisioning by setting the space guarantee to "none".
5 Note that Data ONTAP creates new volumes with a security style matching that of the root volume.
Verify that the security style of the volume is set to UNIX.
qtree security view_lcnp
If the qtree security style is NTFS or Mixed, change it using the following command:
qtree security view_lcnp unix
Set up the NFS export for the volume hosting the user data disks for linked clones in persistent access mode.
Table 30) Set up the view_lcp_userdata NFS export.
Step Action
6 At the NFS Export Wizard - Read-Write Access, click the Add button and enter the IP address of the NFS VMware kernel for the first ESX 3.5 host server. Repeat this step for the VMware kernel IP addresses of the other seven hosts in the ESX cluster until all IP addresses have been entered. When this is done, click Next.
7 At the NFS Export Wizard - Root Access, click the Add button and enter the IP address of the NFS VMware kernel for the first ESX 3.5 host server. Repeat this step for the VMware kernel IP addresses of the other seven hosts in the ESX cluster until all IP addresses have been entered. When this is done, click Next.
8 At the NFS Export Wizard – Security screen, make sure that Unix Style is selected and click Next.
9 At the NFS Export Wizard – Commit screen, click Commit, and at the NFS Export Wizard –
Success screen, click Close.
Set up the NFS export for volume hosting OS data for linked clones in nonpersistent access mode.
5.3 DISABLE DEFAULT SNAPSHOT SCHEDULE AND SET SNAP RESERVE TO ZERO
For all the volumes configured above for controller B, do the following:
Table 32) Disable default Snapshot schedule and set snap reserve to zero.
Step Action
2 Configure the Snapshot autodelete policy for both volumes created above:
snap autodelete <vol-name> commitment try trigger volume target_free_space 5 delete_order oldest_first
3 Configure the volume autogrow preference for the volumes created above, so that Data ONTAP tries growing the volume before deleting Snapshot copies when space runs low:
vol options <vol-name> try_first volume_grow
1 Connect to the controller system's console, using either SSH, telnet, or a serial console.
2 Execute the following command to enable deduplication for the volume hosting the user data disks:
sis on /vol/view_lcp_userdata
3 Execute the following command to start processing the existing data in the datastores:
sis start -s /vol/view_lcp_userdata
4 Execute the following command to monitor the status of the deduplication operation:
sis status
5 After the deduplication has finished, use the following command to see the savings:
df -s
Configure the deduplication schedule for the volume hosting the user data disks for linked clones in persistent access mode.
Table 37) Configure deduplication schedule for persistent-mode linked clone volume.
Step Action
1 Connect to the controller system's console, using either SSH, telnet, or a serial console.
2 Execute the following command to set the deduplication schedule for the volume hosting the user data disks:
sis config -s <sched> <volume path>
The schedule (<sched>) takes one of the following forms: day_list[@hour_list], hour_list[@day_list], - (a dash), or auto.
The day_list specifies which days of the week deduplication should run. It is a comma-separated list
of the first three letters of the day: sun, mon, tue, wed, thu, fri, sat. The names are not case
sensitive. Day ranges such as mon-fri can also be used. The default day_list is sun-sat.
The hour_list specifies which hours of the day deduplication should run on each scheduled day. The
hour_list is a comma-separated list of the integers from 0 to 23. Hour ranges such as 8-17 are
allowed.
Step values can be used in conjunction with ranges. For example, 0-23/2 means "every 2 hours."
The default hour_list is 0; that is, midnight on the morning of each scheduled day.
If "-" is specified, there is no scheduled deduplication operation on the flexible volume.
The autoschedule causes deduplication to run on that flexible volume whenever there are 20% new
fingerprints in the change log. This check is done in a background process and occurs every hour.
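For example, to schedule deduplication for 11 p.m. every night from Sunday through Friday on this volume (an illustrative schedule, not a recommendation from this guide):
sis config -s sun-fri@23 /vol/view_lcp_userdata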
1 Connect to the controller system's console, using either SSH, telnet, or a serial console.
2 Execute the following command to enable deduplication for the volume hosting the CIFS user data:
sis on <CIFS user data volume path>
3 Execute the following command to start processing the existing data in the volume:
sis start -s <CIFS user data volume path>
4 Execute the following command to monitor the status of the deduplication operation:
sis status
5 After the deduplication has finished, use the following command to see the savings:
df -s
Table 39) Set up deduplication schedule for volume hosting CIFS user data.
Step Action
1 Connect to the controller system's console, using either SSH, telnet, or a serial console.
2 Execute the following command to set the deduplication schedule for the volume hosting the CIFS user data:
sis config -s <sched> <volume path>
The schedule (<sched>) takes one of the following forms: day_list[@hour_list], hour_list[@day_list], - (a dash), or auto.
The day_list specifies which days of the week deduplication should run. It is a comma-separated list
of the first three letters of the day: sun, mon, tue, wed, thu, fri, sat. The names are not case
sensitive. Day ranges such as mon-fri can also be used. The default day_list is sun-sat.
The hour_list specifies which hours of the day deduplication should run on each scheduled day. The
hour_list is a comma-separated list of the integers from 0 to 23. Hour ranges such as 8-17 are
allowed.
Step values can be used in conjunction with ranges. For example, 0-23/2 means "every 2 hours."
The default hour_list is 0; that is, midnight on the morning of each scheduled day.
If "-" is specified, there is no scheduled deduplication operation on the flexible volume.
The autoschedule causes deduplication to run on that flexible volume whenever there are 20% new
fingerprints in the change log. This check is done in a background process and occurs every hour.
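For instance, to let Data ONTAP trigger deduplication automatically once 20% new fingerprints accumulate in the change log (the volume path is a placeholder):
sis config -s auto <CIFS user data volume path>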
8 Click Add at the bottom of the screen and select the vmnic that will act as the secondary NIC for the service console.
9 Click Next. At the following screen, verify the settings and click Next; at the next screen, click Finish, and at the final screen, click Close.
3 For each ESX host, the virtual switch for the NFS VMware kernel should have two network ports, each going to a different Cisco Nexus 5000 switch. Two vmnics should be assigned to the virtual switch for the NFS storage VMware kernel.
Figure: NFS VMware kernel network layout—each ESX host's vmnics connect to Nexus 5000 A and B, which are joined by a 10Gb vPC peer link and a peer-keepalive link and are uplinked to the Nexus 7000 pair.
4 For the vSwitch for the NFS VMware kernel, set the load balancing policy to “Route based on IP hash”.
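For reference, a minimal sketch of the equivalent ESX service console commands for building such a vSwitch follows; the switch, port group, and vmnic names and the IP address are illustrative placeholders:
esxcfg-vswitch -a vSwitch2                # create the NFS storage vSwitch
esxcfg-vswitch -L vmnic2 vSwitch2         # uplink to Nexus 5000 A
esxcfg-vswitch -L vmnic3 vSwitch2         # uplink to Nexus 5000 B
esxcfg-vswitch -A VMkernel-NFS vSwitch2   # add the VMkernel port group
esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 VMkernel-NFS   # assign the VMkernel IP
The “Route based on IP hash” policy itself is set in the VI Client under the vSwitch's NIC teaming properties.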
1 For each ESX host, configure VMotion using a VMware kernel port on the nonroutable VMotion VLAN.
1 For each ESX host, configure the network for the virtual machines by creating a public virtual machine port group in a new vSwitch. This port group should be on a public VLAN.
2 For each ESX host, the virtual machine network vSwitch should have four network ports, two going to each of the Cisco Nexus 5000 switches. Four vmnics should be assigned to this virtual switch.
Figure: virtual machine network layout—four vmnics per host, two to each Nexus 5000 switch, with the vPC peer and peer-keepalive links between the switches.
3 For the virtual machine network vSwitch, set the load balancing policy to “Route based on IP hash”.
5 In the upper-right corner, click Add Storage to open the Add Storage Wizard.
6 Select the Network File System radio button and click Next.
7 Enter a name for the storage controller, export, and datastore (view_rcu_template),
and then click Next.
8 Click Finish.
1 Open vCenter.
6 Increase the TCP/IP heap size by executing the following command on the ESX host:
esxcfg-advcfg -s 30 /Net/TcpIpHeapSize
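This is one of several ESX advanced settings that NetApp's NFS best practices (TR-3428) recommend tuning; a hedged sketch of companion commands follows (verify the exact values in TR-3428 for your ESX version before applying):
esxcfg-advcfg -s 120 /Net/TcpIpHeapMax           # maximum TCP/IP heap size
esxcfg-advcfg -s 32 /NFS/MaxVolumes              # allow up to 32 NFS datastores
esxcfg-advcfg -s 12 /NFS/HeartbeatFrequency      # NFS heartbeat interval
esxcfg-advcfg -s 10 /NFS/HeartbeatMaxFailures    # tolerated missed heartbeats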
8.2 FORMAT THE VIRTUAL MACHINE WITH THE CORRECT STARTING PARTITION
OFFSETS
To set up the starting offset using the fdisk command found in ESX, follow the steps detailed below:
Table 51) Format a virtual machine with the correct starting offsets.
Figure 41) Using FDisk for setting offset—find cylinders of the vDisk.
4 Run fdisk on the windows_xp_gold-flat.vmdk file by typing the following command:
fdisk ./windows_xp_gold-flat.vmdk
Figure 43) Using FDisk for setting offset—set the number of cylinders.
9 Type p at the expert command screen to look at the partition table (which should be blank).
Figure 44) Using FDisk for setting offset—set view partition information.
10 Return to regular (nonextended) command mode by typing r at the prompt.
Figure 47) Using FDisk for setting offset—set system type and hex code.
19 Save and write the partition by typing w. Ignore the warning, which is normal.
Figure 48) Using FDisk for setting offset—save and write the partition.
20 Start the virtual machine and run Windows setup.
21 When the install gets to the partition screen, install on the existing partition. DO NOT DESTROY or
RECREATE! C: should already be highlighted. Press Enter at this stage.
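For reference, the steps above trace the following overall fdisk flow (a condensed sketch with prompts abbreviated; the sector-64 starting offset shown is the common 32,768-byte alignment value and is assumed here, so confirm it against the full procedure and figures):
fdisk ./windows_xp_gold-flat.vmdk
# x            enter expert mode
# c            set the number of cylinders (from the vDisk geometry found earlier)
# r            return to regular command mode
# n, p, 1      create primary partition 1 spanning the disk
# t, 7         set the system type to hex code 7 (HPFS/NTFS)
# x, b, 1, 64  expert mode: set the beginning of partition 1 to sector 64
# w            save and write the partition table (ignore the warning)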
Step Action
1 Be sure to have a Windows XP CD or ISO image that is accessible from the virtual machine.
Figure 53) Verify virtual machine settings for virtual CD or ISO file.
2 If not applied to the installation CD, install the most recent service pack and the most recent
Microsoft® updates.
3 Install the connection broker agent.
4 Set the Windows screen saver to blank.
5 Configure the default color setting for RDP by making the following change in the registry (a scripted version appears after this table):
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp
Change the color depth value to 4.
6 Disable unused hardware.
7 Turn off theme enhancements.
8 Adjust the system for best performance by going to My Computer>Properties>Advanced
Tab>Performance Section>Settings.
9 Set the blank screen saver to password protect on resume.
10 Enable hardware acceleration by going to
Start>Control Panel>Display>Settings Tab>Advanced Button>Troubleshooting Tab.
11 Delete any hidden Windows update uninstalls.
12 Disable indexing services by going to Start>Control Panel>Add Remove Windows
Components>Indexing Service.
Note: Indexing improves searches by cataloging files. For users who search a lot, indexing might be
beneficial and should not be disabled.
13 Disable indexing of the C: drive by opening My Computer, right-clicking C:, selecting Properties, and clearing the indexing check box ("Allow Indexing Service to index this disk for fast file searching").
*From Warren Ponder, Windows XP Deployment Guide (Palo Alto, CA: VMware, Inc., 2008), pp. 3–4.
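Step 5's registry change can also be scripted rather than edited by hand; a minimal sketch, assuming the standard RDP-Tcp value name ColorDepth (verify the value name on your build before use):
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v ColorDepth /t REG_DWORD /d 4 /f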
Install Applications
Install all the necessary infrastructure and business applications in the gold VM. A few examples include
VMware View Agent (if planning to use VMware View Manager) to allow specific users or groups RDP
access to the virtual machines, MS Office, antivirus scanning agent, Adobe Reader, and so on.
Perform NetApp deduplication of the volume hosting the template virtual machine datastore.
Table 58) Perform deduplication on template virtual machine datastore.
Step Action
1 Connect to the storage controller's system console, using either SSH, telnet, or a serial console.
2 Execute the following command to enable NetApp dedupe for the template virtual machine volume:
sis on <template VM volume path>
3 Execute the following command to start processing existing data:
sis start –s <template VM volume path>
4 Execute the following command to monitor the status of the dedupe operation:
sis status
3 Select the data center, cluster, or server to which to provision the virtual machines and select Next.
Selecting the cluster will distribute virtual machines evenly across the ESX hosts in the ESX cluster.
5 Enter the number of clones per datastore (250), the total number of datastores (4), the clone name prefix, and the starting clone number. For guest customization, select the check box and the customization specification to apply. Then choose whether the virtual machines will be powered on after the clones are created.
Figure 76) RCU VMware View server import file completion screen.
9 Set the size of each datastore to 25GB. Then select Thin Provisioning if you don't want to preallocate the storage from the aggregate for each volume. You can also choose to set the datastore names manually.
Note: The architecture proposed in this deployment guide balances the 2,000 virtual machines across two ESX clusters with eight ESX hosts per cluster, or roughly 125 virtual machines per host. The reason for this is that VMware does not support more than eight ESX hosts per cluster when using VMware View Composer/linked clones. For further details, refer to the View Composer Design Guide.
13 FEEDBACK
Send an e-mail to [email protected] with questions or comments concerning this document.
14 REFERENCES
TR-3705: NetApp and VMware View Best Practices
TR-3428: NetApp and VMware Virtual Infrastructure Storage Best Practices
TR-3505: NetApp Deduplication for FAS Deployment and Implementation Guide
VMware View Administrator Guide
Introduction to VMware View Manager
VMware Infrastructure Documentation
VMware View Windows XP Deployment Guide
VMware View Composer Design Guide
VMware View Composer Deployment Guide
VMware View Reference Architecture
Windows XP Deployment Guide
Cisco Nexus 7000 Series NX-OS Interfaces Configuration Guide, Release 4.1
Cisco Nexus 5000 Series Switch CLI Software Configuration Guide
© Copyright 2009 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. NetApp, the NetApp logo, Go further, faster, Data ONTAP, FlexClone, FilerView, and Snapshot are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. Microsoft and Windows are registered trademarks of Microsoft Corporation. Intel is a registered trademark of Intel Corporation. VMware and VMotion are registered trademarks of VMware, Inc. UNIX is a registered trademark of The Open Group. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-3770