Virtualizing your Datacenter
with Windows Server 2012 R2 & System Center 2012 R2
Module 1: Introduction to Microsoft Virtualization & Host Configuration
Module 2: VM Clustering & Configuration
Module 3: VM Mobility & Replication, Network Virtualization and Service Templates
Module 4: Private Clouds & System Center 2012 R2
Module 5: Datacenter Virtualization with the Hybrid Cloud, VMware Management, Integration & Migration
Microsoft Virtual Academy
[Lab environment: DC01, SCVMM01, FS01, HYPER-V01, HYPER-V02]
Capability | System Center 2012 R2 | VMware vCloud Suite
Automation | Orchestrator | vCenter Orchestrator
Service Mgmt. | Service Manager | vCloud Automation Center
Protection | Data Protection Manager | vSphere Data Protection
Monitoring | Operations Manager | vCenter Ops Mgmt. Suite
Self-Service | App Controller | vCloud Director
VM Management | Virtual Machine Manager | vCenter Server
Hypervisor | Hyper-V | vSphere Hypervisor
System Center 2012 R2 Licensing

License | Standard | Datacenter
# of Physical CPUs per License | 2 | 2
# of Managed OSEs per License | 2 + Host | Unlimited
Includes all SC Mgmt. Components | Yes | Yes
Includes SQL Server for Mgmt. Server Use | Yes | Yes
Open No Level (NL) & Software Assurance (L&SA) 2 year Pricing | $1,323 | $3,607

Windows Server 2012 R2 includes Hyper-V; Hyper-V Server 2012 R2 = Free Download

vCloud Suite Licensing

License | Std. | Adv. | Ent.
# of Physical CPUs per License | 1 | 1 | 1
# of Managed OSEs per License | Unlimited VMs on Hosts | Unlimited VMs on Hosts | Unlimited VMs on Hosts
Includes vSphere 5.1 Enterprise Plus | Yes | Yes | Yes
Includes vCenter 5.5 | No | No | No
Includes all required database licenses | No | No | No
Retail Pricing per CPU (No S&S) | $4,995 | $7,495 | $11,495

vSphere 5.5 Standalone Per CPU Pricing (Excl. S&S): Standard = $995, Enterprise = $2,875, Enterprise Plus = $3,495
Massive scalability for the most demanding workloads
Hosts: Support for up to 320 logical processors & 4TB physical memory per host; support for up to 1,024 virtual machines per host
Clusters: Support for up to 64 physical nodes & 8,000 virtual machines per cluster
Virtual Machines: Support for up to 64 virtual processors and 1TB memory per VM; supports in-guest NUMA
Resource | Windows Server 2012 R2 Hyper-V | vSphere Hypervisor | vSphere 5.5 Enterprise Plus
Host: Logical Processors | 320 | 320 | 320
Host: Physical Memory | 4TB | 4TB | 4TB
Host: Virtual CPUs per Host | 2,048 | 4,096 | 4,096
VM: Virtual CPUs per VM | 64 | 8 | 64 (1)
VM: Memory per VM | 1TB | 1TB | 1TB
VM: Active VMs per Host | 1,024 | 512 | 512
VM: Guest NUMA | Yes | Yes | Yes
Cluster: Maximum Nodes | 64 | N/A (2) | 32
Cluster: Maximum VMs | 8,000 | N/A (2) | 4,000

1. vSphere 5.5 Enterprise Plus is the only vSphere edition that supports 64 vCPUs. Enterprise edition supports 32 vCPUs per VM, with all other editions supporting 8 vCPUs per VM.
2. For clustering/high availability, customers must purchase vSphere.

vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf, http://www.vmware.com/products/vsphere-hypervisor/faq.html, http://www.vmware.com/files/pdf/vsphere/VMware-vSphere-Platform-Whats-New.pdf
Centralized, Scalable Management of Hyper-V
Supports up to 1,000 Hyper-V hosts & 25,000 virtual machines per VMM Server
Supports Hyper-V hosts in trusted & untrusted domains, disjoint namespaces & perimeter networks
Supports Hyper-V from 2008 R2 SP1 through to 2012 R2
Comprehensive fabric management capabilities across Compute, Network & Storage
End-to-end VM management across heterogeneous hosts & clouds
Deep Discovery Prior to Hyper-V Deployment
Through integration with the BMC, VMM can wake a physical server & collect information to determine appropriate deployment
1. OOB Reboot
2. Boot from PXE
3. Authorize PXE boot
4. Download VMM customized WinPE
5. Execute a set of calls in WinPE to collect hardware inventory data (network adapters and disks)
6. Send hardware data back to VMM
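As an illustration of steps 1-6 above, deep discovery can be kicked off from the VMM PowerShell module. A minimal sketch, assuming a Run As account named "BMC Admin" and the BMC address 10.10.0.50 (both hypothetical); verify the Find-SCComputer parameters against your installed VMM 2012 R2 module:

# Run As account holding the BMC (IPMI) credentials - the name is an example only
$bmcAccount = Get-SCRunAsAccount -Name "BMC Admin"
# Wake the physical server out-of-band and run deep discovery to collect its hardware inventory
Find-SCComputer -DeepDiscovery -BMCAddress "10.10.0.50" -BMCRunAsAccount $bmcAccount -BMCProtocol "IPMI"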
Virtualization Deployment with VMM
Centralized, Automated Bare Metal Hyper-V Deployment
Post-deep discovery, VMM will deploy a Hyper-V image to the physical server
1. OOB Reboot
2. Boot from PXE
3. Authorize PXE boot
4. Download VMM customized WinPE
5. Run generic command execution scripts and configure partitions
6. Download VHD & inject drivers
The host is then domain joined, added to VMM management & post-install scripts executed.
Capability | Microsoft | VMware
Deployment from DVD | Yes | Yes
Deployment from USB | Yes | Yes
PXE Deployment - Stateful | Yes (WDS, MDT, SCCM, SCVMM) | Yes (PXE/Auto Deploy)
PXE Deployment - Stateless | No | Yes (Auto Deploy)
Virtualization Host Configuration
Granular, Centralized Configuration of Hosts
Virtual Machine Manager 2012 R2 provides complete, centralized hardware configuration for Hyper-V hosts:
Hardware: Allows the admin to configure local storage, networking, BMC settings etc.
Storage: Allows the admin to control granular storage settings, such as adding an iSCSI or FC array LUN to the host, or an SMB share.
Virtual Switches: A detailed view of the virtual switches associated with physical network adaptors.
Migration Settings: Configuration of Live Migration settings, such as LM network and simultaneous migrations.
iSCSI & Fibre Channel
Integrate with existing storage investments quickly and easily
Multi-Path I/O Support
Inbox for resiliency, increased performance & partner extensibility
Offloaded Data Transfer
Offloads storage-intensive tasks to the SAN
Native 4K Disk Support
Take advantage of enhanced density and reliability
VMM Storage Management
Centralized Management & Provisioning of Storage
VMM can discover & manage local and remote storage, including SANs, Pools, LUNs, disks, volumes, and virtual disks.
VMM supports iSCSI & Fibre Channel block storage and file-based storage
VMM integrates with the Windows Server Storage Management API (SMAPI) for discovery of SMI-S, SMP and Storage Spaces devices
Disk & volume management
iSCSI/FC/SAS HBA initiator management
New in System Center Virtual Machine Manager 2012 R2: 10x faster enumeration of storage
Integrated iSCSI Target
Transform Windows Server 2012 R2 into an iSCSI SAN
Integrated role within Windows Server & manageable via GUI and PowerShell
Ideal for network & diskless boot, server application storage, heterogeneous storage, and development, test & lab deployments
Supports up to 64TB VHDX, thin provisioning, dynamic & differencing disks; also supports secure zeroing of disks for fixed-size disk deployments
Scalable up to 544 sessions & 256 LUNs per iSCSI Target Server & can be clustered for resilience
Complete VMM management via SMI-S
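A minimal sketch of standing up the role and publishing a LUN with the in-box iSCSI Target cmdlets; the path, target name and initiator IQN are illustrative:

# Install the iSCSI Target Server role
Install-WindowsFeature FS-iSCSITarget-Server
# Create a 100 GB VHDX-backed virtual disk to serve as a LUN
New-IscsiVirtualDisk -Path "D:\iSCSIVirtualDisks\LUN01.vhdx" -SizeBytes 100GB
# Create a target restricted to a Hyper-V host's initiator IQN, then map the LUN to it
New-IscsiServerTarget -TargetName "HyperV-Cluster" -InitiatorIds "IQN:iqn.1991-05.com.microsoft:hyper-v01.contoso.com"
Add-IscsiVirtualDiskTargetMapping -TargetName "HyperV-Cluster" -Path "D:\iSCSIVirtualDisks\LUN01.vhdx"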
VMM iSCSI & Fibre Channel Integration
Improved Support for Fibre Channel Fabrics
Once discovered, VMM can centrally manage key iSCSI & Fibre Channel capabilities:
iSCSI - Connects Hyper-V hosts to the iSCSI portal and logs on to iSCSI target ports, including multiple sessions for MPIO.
Fibre Channel - Adds target ports to zones, with Zone Management, Member Management & Zoneset Management.
Once connected, VMM can create and assign LUNs, initialize disks, create partitions and volumes etc. VMM can also remove capacity, unmount volumes, mask LUNs etc.
Storage Spaces
Transform high-volume, low cost disks into flexible, resilient virtualized storage
Storage Tiering*
Pool HDD & SSD and automatically move hot data to SSD for increased performance
Data Deduplication
Reduce file storage consumption, now supported for live VDI virtual hard disks*
Hyper-V over SMB 3.0
Ease of provisioning, increased flexibility & seamless integration with high performance
* New in Windows Server 2012 R2
Inbox solution for Windows to manage storage
Virtualize storage by grouping industry-standard disks into storage pools
Pools are sliced into virtual disks, or Spaces
Spaces can be thin provisioned, and can be striped across all physical disks in a pool; mirroring or parity are also supported
Windows then creates a volume on the Space, and allows data to be placed on the volume
Spaces can use DAS only (local to the chassis, or via SAS)
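A minimal Storage Spaces sketch using the in-box storage cmdlets; the pool, Space and volume names are illustrative:

# Gather the local disks that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true
$subsystem = Get-StorageSubSystem -FriendlyName "*Storage Spaces*"
# Group the disks into a pool
New-StoragePool -FriendlyName "Pool01" -StorageSubSystemUniqueId $subsystem.UniqueId -PhysicalDisks $disks
# Slice a thinly provisioned, mirrored Space out of the pool
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "Space01" -ResiliencySettingName Mirror -Size 500GB -ProvisioningType Thin
# Create a volume on the Space so data can be placed on it
Get-VirtualDisk -FriendlyName "Space01" | Get-Disk | Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem NTFS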
Optimizing storage performance on Spaces
Disk pool consists of both high-performance SSDs and higher-capacity HDDs
Hot data is moved automatically to SSD and cold data to HDD using sub-file-level data movement
With write-back caching, SSDs absorb the random writes that are typical in virtualized deployments
Admins can pin hot files to SSDs manually to drive high performance (e.g. an SSD tier of 400GB eMLC SAS SSDs)
New PowerShell cmdlets are available for the management of storage tiers
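A sketch using those cmdlets, assuming an existing pool named Pool01 containing both SSD and HDD media; the tier sizes, names and VHDX path are illustrative, and the file-pinning call should be checked against the tier objects Get-StorageTier reports on your system:

# Define the two tiers within the pool
$ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "HDDTier" -MediaType HDD
# Create a tiered, mirrored Space with a write-back cache to absorb random writes
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "TieredSpace01" -ResiliencySettingName Mirror -StorageTiers $ssdTier, $hddTier -StorageTierSizes 100GB, 900GB -WriteCacheSize 1GB
# Pin a hot file to the SSD tier, then run the tier optimization job for the volume
Set-FileStorageTier -FilePath "T:\VMs\ParentDisk.vhdx" -DesiredStorageTier $ssdTier
Optimize-Volume -DriveLetter T -TierOptimize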
Store Hyper-V VMs on SMB 3.0 File Shares
Simplified provisioning & management; low OPEX and CAPEX
Adding multiple NICs in file servers unlocks SMB Multichannel, which enables higher throughput and reliability; requires NICs of the same type and speed
Using RDMA-capable NICs unlocks SMB Direct, offloading network I/O processing to the NIC; SMB Direct provides high throughput and low latency and can reach 40Gbps (RoCE) and 56Gbps (InfiniBand) speeds
\\SOFSFileServerName\VMs
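A sketch of placing a new VM's storage on the share shown above and checking that SMB Multichannel and RDMA are in play; the VM name and share path are illustrative:

# Create a VM whose virtual hard disk lives on the SMB 3.0 file share
New-VM -Name "VM01" -MemoryStartupBytes 2GB -NewVHDPath "\\SOFSFileServerName\VMs\VM01.vhdx" -NewVHDSizeBytes 60GB
# Confirm that multiple NICs are being aggregated by SMB Multichannel
Get-SmbMultichannelConnection
# Check whether the client NICs are RDMA (SMB Direct) capable
Get-SmbClientNetworkInterface | Select-Object FriendlyName, LinkSpeed, RdmaCapable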
File Storage Integration
Comprehensive, Integrated File Storage Management
VMM supports network shares via SMB 3.0 on NAS devices from storage vendors such as EMC and NetApp
VMM supports integration and management with standalone and clustered file servers
VMM will quickly discover and inventory selected file storage
VMM allows the selection, and now the classification, of existing file shares to streamline VM placement
VMM allows IT Admin to assign Shares to Hyper-V hosts for VM placement, handling ACLing automatically.
Scale-Out File Server
Low Cost, High Performance, Resilient Shared Storage
Clustered file server for storing Hyper-V virtual machine files on file shares
High reliability, availability, manageability, and performance that you would expect from a SAN
Active-Active file shares - file shares online simultaneously on all nodes
Increased bandwidth as more SOFS nodes are added
CHKDSK with zero downtime & CSV Cache
Created & managed by VMM, both from existing Windows Servers & bare metal
[Diagram: Scale-Out File Server (4 nodes: FS1-FS4) with clustered Storage Pools & clustered Spaces on JBOD storage attached via shared SAS]
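A minimal sketch of enabling the SOFS role on an existing file server cluster and publishing a continuously available share; the role name, path and host computer accounts are illustrative:

# Add the Scale-Out File Server role to the existing file server cluster
Add-ClusterScaleOutFileServerRole -Name "SOFS"
# Publish a continuously available share on a Cluster Shared Volume and grant the Hyper-V hosts access
New-SmbShare -Name "VMs" -Path "C:\ClusterStorage\Volume1\Shares\VMs" -ContinuouslyAvailable:$true -FullAccess "CONTOSO\HYPER-V01$", "CONTOSO\HYPER-V02$", "CONTOSO\HVCLUSTER$"
# Mirror the share permissions onto the NTFS folder
(Get-SmbShare -Name "VMs").PresetPathAcl | Set-Acl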
Scale-Out File Server Deployment
Centralized, Managed Deployment of File Storage
VMM can not only manage standalone file servers, but can deploy Scale-Out File Servers, even to bare metal
For bare metal deployment, a physical profile determines the characteristics of the file server
Existing Windows Servers can be transformed into a SOFS, right within VMM
Once imported, VMM can transform individual disks into highly available, dynamic pools, complete with classification
VMM can then create the resilient Spaces & file shares within the storage pool
Storage & Fabric Classification
Granular Classification of Storage & FC Fabrics
VMM can classify storage at a granular level to abstract storage detail:
Volumes (including local host disks & Direct Attached Storage)
File Shares (standalone & SOFS-based)
Storage Pools & SAN LUNs
Fibre Channel Fabrics - helps to identify fabrics using friendly names
Support for efficient & simplified deployment of VMs to classifications; now integrated with Clouds
In-box Disk Encryption to Protect Sensitive Data
Data Protection, built in
Supports Used Disk Space Only encryption
Integrates with TPM chip, Network Unlock & AD integration
Multiple disk type support:
Direct Attached Storage (DAS)
Traditional SAN LUN
Cluster Shared Volumes
Windows Server 2012 File Server Share
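A sketch of enabling BitLocker on a data volume used for VM storage, encrypting used space only and adding an AD-based protector; the drive letter and cluster account are illustrative:

# Encrypt only the used space on the volume, protected by a recovery password
Enable-BitLocker -MountPoint "E:" -EncryptionMethod Aes256 -UsedSpaceOnly -RecoveryPasswordProtector
# Add an Active Directory protector (e.g. the cluster name object for a CSV scenario)
Add-BitLockerKeyProtector -MountPoint "E:" -AdAccountOrGroupProtector -AdAccountOrGroup "CONTOSO\HVCLUSTER$"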
Capability | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.5 Enterprise Plus
iSCSI/FC Support | Yes | Yes | Yes
3rd Party Multipathing (MPIO) | Yes | No | Yes (VAMP) (1)
SAN Offload Capability | Yes (ODX) | No | Yes (VAAI) (2)
Storage Virtualization | Yes (Spaces) | No | Yes (vSAN) (3)
Storage Tiering | Yes | No | Yes (4)
Network File System Support | Yes (SMB 3.0) | Yes (NFS) | Yes (NFS)
Data Deduplication | Yes | No | No
Storage Encryption | Yes | No | No

1. vSphere API for Multipathing (VAMP) is only available in Enterprise & Enterprise Plus editions of vSphere 5.5
2. vSphere API for Array Integration (VAAI) is only available in Enterprise & Enterprise Plus editions of vSphere 5.5
3. vSphere vSAN is still in beta
4. vSphere Flash Read Cache has a write-through caching mechanism only, so only reads are accelerated. vSAN also has SSD caching capabilities built in, acting as a read cache & write buffer.

vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf, http://www.vmware.com/files/pdf/vsphere/VMware-vSphere-Platform-Whats-New.pdf, http://www.vmware.com/products/vsphere/compare.html
Integrated Solution for Network Card Resiliency
Vendor agnostic and shipped inbox
Provides local or remote management through Windows PowerShell or UI
Enables teams of up to 32 network adapters
Aggregates bandwidth from multiple network adapters whilst providing traffic failover in the event of NIC outage
Includes multiple teaming modes: switch dependent and switch independent
Multiple traffic distribution algorithms: Hyper-V Switch Port, Hashing and Dynamic load balancing
[Diagram: virtual adapters connected through a team network adapter]
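A one-line sketch of creating a switch-independent team with the dynamic load-balancing algorithm; the team and adapter names are illustrative:

# Team two physical adapters; management can also be done remotely or via the UI
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1", "NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic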
Connecting VMs to each other, and the outside world
3 types of Hyper-V network:
Private = VM to VM communication
Internal = VM to VM to host (loopback)
External = VM to outside & host
Each vNIC can have multiple VLANs attached to it; however, if using the GUI, only a single VLAN ID can be specified.
Set-VMNetworkAdapterVlan -VMName VM01 -Trunk -AllowedVlanIdList 14,22,40
Creating an external network transforms the chosen physical NIC into a switch and unbinds the TCP/IP stack and other protocols from it. An optional host vNIC is created to allow the host to communicate out of the physical NIC.
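A sketch of creating the three switch types from PowerShell; the switch names, and binding the external switch to the team created earlier, are illustrative:

# External: binds to a physical NIC or team and (optionally) keeps a host vNIC for management traffic
New-VMSwitch -Name "External-vSwitch" -NetAdapterName "Team1" -AllowManagementOS $true
# Internal: VM to VM and VM to host (loopback) only
New-VMSwitch -Name "Internal-vSwitch" -SwitchType Internal
# Private: VM to VM only
New-VMSwitch -Name "Private-vSwitch" -SwitchType Private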
Layer-2 Network Switch for Virtual Machine Connectivity
Extensible Switch: a virtual Ethernet switch that runs in the management OS of the host
Exists on Windows Server Hyper-V and Windows Client Hyper-V
Managed programmatically
Extensible by partners and customers
Virtual machines connect to the extensible switch with their virtual network adaptor
Can bind to a physical NIC or team
Bypassed by SR-IOV
[Diagram: virtual machines with virtual network adapters connected through the Hyper-V Extensible Switch to the physical network adapter and physical switch]
Layer-2 Network Switch for Virtual Machine Connectivity
Granular in-box capabilities:
Isolated (Private) VLANs (PVLANs)
ARP/ND Poisoning (spoofing) protection
DHCP Guard protection
Virtual Port ACLs
Trunk Mode to VMs
Network Traffic Monitoring
PowerShell & WMI interfaces for extensibility
[Diagram: virtual machines with virtual network adapters connected through the Hyper-V Extensible Switch to the physical network adapter and physical switch]
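A sketch of applying several of these in-box capabilities to a single VM's vNIC; the VM name, VLAN IDs and address range are illustrative:

# Block rogue DHCP offers and router advertisements from this VM
Set-VMNetworkAdapter -VMName "VM01" -DhcpGuard On -RouterGuard On
# Place the vNIC in an isolated PVLAN (primary VLAN 10, secondary VLAN 200)
Set-VMNetworkAdapterVlan -VMName "VM01" -Isolated -PrimaryVlanId 10 -SecondaryVlanId 200
# Virtual port ACL restricting the VM to the 10.0.0.0/8 range
Add-VMNetworkAdapterAcl -VMName "VM01" -RemoteIPAddress "10.0.0.0/8" -Direction Both -Action Allow
# Mirror this VM's traffic to a monitoring VM configured as the mirroring destination
Set-VMNetworkAdapter -VMName "VM01" -PortMirroring Source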
Build Extensions for Capturing, Filtering & Forwarding
2 platforms for extensions: Network Device Interface Specification (NDIS) filter drivers and Windows Filtering Platform (WFP) callout drivers
Packet processing in the switch: ingress filtering, destination lookup and forwarding, egress filtering
[Diagram: Hyper-V Extensible Switch architecture - capture, filtering & forwarding extensions (Extensions A, C, D) stacked between the extension protocol and extension miniport, connecting VM & host NICs in the parent partition to the physical NIC]
Build Extensions for Capturing, Filtering & Forwarding
Many key features:
Extension monitoring & uniqueness
Extensions that learn VM life cycle
Extensions that can veto state changes
Multiple extensions on same switch
Several partner solutions available:
Cisco Nexus 1000V & UCS-VMFEX
NEC ProgrammableFlow PF1000
5nine Security Manager
InMon sFlow
[Diagram: Hyper-V Extensible Switch architecture - capture, filtering & forwarding extensions between the VM/host NICs and the physical NIC]
Advanced Networking Capability | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.5 Enterprise Plus
Integrated NIC Teaming | Yes | Yes | Yes
Extensible Network Switch | Yes | No | Replaceable (1)
Confirmed Partner Solutions | 5 | N/A | 2
Private Virtual LAN (PVLAN) | Yes | No | Yes (1)
ARP Spoofing Protection | Yes | No | vCloud/Partner (2)
DHCP Snooping Protection | Yes | No | vCloud/Partner (2)
Virtual Port ACLs | Yes | No | vCloud/Partner (2)
Trunk Mode to Virtual Machines | Yes | No | Yes (3)
Port Monitoring | Yes | Per Port Group | Yes (3)
Port Mirroring | Yes | Per Port Group | Yes (3)

1. The vSphere Distributed Switch (required for PVLAN capability) is available only in the Enterprise Plus edition of vSphere 5.5 and is replaceable (by partners such as Cisco/IBM) rather than extensible.
2. ARP Spoofing, DHCP Snooping Protection & Virtual Port ACLs require the vCloud Networking & Security package, which is part of the vCloud Suite or a partner solution, all of which are additional purchases.
3. Trunking VLANs to individual vNICs, and port monitoring and mirroring at a granular level, require the vSphere Distributed Switch, which is available in the Enterprise Plus edition of vSphere 5.5.

vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/products/cisco-nexus-1000V/overview.html, http://www-03.ibm.com/systems/networking/switches/virtual/dvs5000v/, http://www.vmware.com/technical-resources/virtualization-topics/virtual-networking/distributed-virtual-switches.html, http://www.vmware.com/files/pdf/techpaper/Whats-New-VMware-vSphere51-Network-Technical-Whitepaper.pdf, http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/data_sheet_c78-492971.html, http://www.vmware.com/products/vcloud-network-security
Comprehensive Network Management
Integrated management of the software defined network
Top of rack switch management and integration for configuration and compliance
Logical network management: named networks that serve particular functions in your environment, e.g. backend
IP address pool management and integration with IP address management
Host and VM network switch management
Load balancer integration and automated deployment
Network virtualization deployment and management
Top of Rack Switch Integration
Synchronize & Integrate ToR Settings with VMM
Physical switch management and integration built into VMM using in-box or partner-supplied providers
Switches running Open Management Infrastructure (OMI)
Switch management PowerShell cmdlets
Common management interface across multiple network vendors
Automate common network management tasks
Communicates with switches using WS-MAN
Manage compliance between VMM, Hyper-V hosts & physical switches
Logical Networks
Abstraction of Infrastructure Networks with VMM
Logical networks are named networks that serve particular functions, e.g. Backend, Frontend, or Backup
Used to organize and simplify network assignments
A logical network is a container for network sites, IP subnet & VLAN information
Supports VLAN & PVLAN isolation
Hosts & host groups can be associated with logical networks
IP addresses can be assigned to host & VM NICs from static IP pools
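A sketch of building a logical network with one site in the VMM PowerShell module; the network name, host group, subnet and VLAN are illustrative:

# Create the logical network and a subnet/VLAN pair for one site
$backend = New-SCLogicalNetwork -Name "Backend"
$subnetVlan = New-SCSubnetVLan -Subnet "192.168.10.0/24" -VLanID 10
# Scope the site (network definition) to a host group
New-SCLogicalNetworkDefinition -Name "Backend - Seattle" -LogicalNetwork $backend -VMHostGroup (Get-SCVMHostGroup -Name "All Hosts") -SubnetVLan $subnetVlan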
Static IP Pool Management in VMM
IP Address Management for Hosts & Virtual Machines
VMM can maintain centralized control of host & VM IP address assignment
IP pools are defined and associated with a logical network & site
VMM supports specifying an IP range, along with VIPs & IP address reservations
Each IP pool can have gateway, DNS & WINS configured
IP address pools support both IPv4 and IPv6 addresses, but not in the same pool
IP addresses are assigned on VM creation, and reclaimed on VM deletion
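Continuing the sketch above, a static IP pool carved out of that site; the ranges, gateway, DNS server and reserved VIP range are illustrative:

# Look up the site created earlier and define its default gateway
$site = Get-SCLogicalNetworkDefinition -Name "Backend - Seattle"
$gateway = New-SCDefaultGateway -IPAddress "192.168.10.1" -Automatic
# Create the pool, including a reserved VIP range for load balancers
New-SCStaticIPAddressPool -Name "Backend IP Pool" -LogicalNetworkDefinition $site -Subnet "192.168.10.0/24" -IPAddressRangeStart "192.168.10.50" -IPAddressRangeEnd "192.168.10.250" -DefaultGateway $gateway -DNSServer "192.168.10.2" -VIPAddressSet "192.168.10.50-192.168.10.59"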
The Logical Switch
Centralized Configuration of Network Adaptors across Hosts
Combines key VMM networking constructs to standardize deployment across multiple hosts within the infrastructure:
Uplink Port Profiles (native port profiles for uplinks)
Virtual Port Profiles for vNICs (native port profiles for vNICs)
Port Classifications for vNICs
Switch Extensions
Logical Switches support compliance & remediation
Logical Switches support host NIC teaming & converged networking
Uplink Port Profiles
Host Physical Network Adaptor Configuration with VMM
Uplink Port Profile: centralized configuration of physical NIC settings that VMM will apply upon assigning a Logical Switch to a Hyper-V host.
Teaming: automatically created when assigned to multiple physical NICs, but the admin can select the load-balancing algorithm & teaming mode.
Sites: assign the relevant network sites & logical networks that will be supported by this uplink port profile.
Virtual Port Profiles
Virtual Network Adaptor Configuration with VMM
Virtual Port Profile: used to pre-configure VM or host vNICs with specific settings.
Offloading: admins can enable offload capabilities for a specific vNIC port profile. Dynamic VMQ, IPsec Task Offload & SR-IOV are available choices.
Security: admins can enable key Hyper-V security settings for the vNIC profile, such as DHCP Guard, or enable guest teaming.
QoS: admins can configure QoS bandwidth settings for the vNIC profile so that when assigned to VMs, their traffic may be limited/guaranteed.
Increased efficiency of network processing on Hyper-V hosts
Without VMQ: the Hyper-V Virtual Switch is responsible for routing & sorting packets for VMs. This leads to increased CPU processing, all focused on CPU0.
With VMQ: the physical NIC creates virtual network queues for each VM to reduce host CPU load.
With Dynamic VMQ: processor cores are dynamically allocated for a better spread of network traffic processing.
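A sketch of checking VMQ support and queue placement on the host, and of weighting VMQ for a particular vNIC; the VM name is illustrative:

# Which physical adapters expose VMQ, and how many queues they offer
Get-NetAdapterVmq
# Current queue-to-processor assignments
Get-NetAdapterVmqQueue
# Give a VM's vNIC a VMQ weight (0 disables VMQ for that vNIC)
Set-VMNetworkAdapter -VMName "VM01" -VmqWeight 100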
Integrated with NIC hardware for increased performance
A standard that allows PCI Express devices to be shared by multiple VMs
More direct hardware path for I/O
Reduces network latency and CPU utilization for processing traffic, and increases throughput
SR-IOV capable physical NICs contain virtual functions (VFs) that are securely mapped to VMs
This bypasses the Hyper-V Extensible Switch
Full support for Live Migration
[Diagram: VM network stack using either a synthetic NIC or a virtual function (VF) exposed by an SR-IOV NIC, bypassing the Hyper-V Extensible Switch]
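A sketch of enabling SR-IOV, which must be chosen at switch-creation time, and assigning a virtual function to a VM; the switch, adapter and VM names are illustrative:

# SR-IOV can only be enabled when the external switch is created
New-VMSwitch -Name "SRIOV-vSwitch" -NetAdapterName "NIC3" -EnableIov $true
# Request a virtual function for the VM's vNIC (0 reverts to the synthetic data path)
Set-VMNetworkAdapter -VMName "VM01" -IovWeight 100
# Confirm whether the vNIC is actually using a VF
Get-VMNetworkAdapter -VMName "VM01" | Select-Object VMName, IovWeight, Status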
Achieve desired levels of networking performance
Bandwidth Management:
Establishes a bandwidth floor
Assigns specified bandwidth for each type of traffic
Helps to ensure fair sharing during congestion
Can exceed quota with no congestion
[Diagram: relative minimum bandwidth assigns weights to tenant traffic on a 1 Gbps Hyper-V Extensible Switch (e.g. Bronze tenant, normal priority, W=1, 100 MB; Silver tenant, high priority, W=2, 200 MB; Gold tenant, critical, W=5, 500 MB); strict minimum bandwidth reserves absolute values (e.g. three Gold tenants at 500 MB each) and allows bandwidth oversubscription across a team of 1 Gbps NICs]
2 mechanisms: enhanced packet scheduler (software) and network adapter with DCB support (hardware)
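A sketch of the software (weight-based) mechanism; the switch, adapter and VM names, and the weights mirroring the Bronze/Silver/Gold example above, are illustrative:

# The minimum-bandwidth mode is fixed when the switch is created
New-VMSwitch -Name "Tenant-vSwitch" -NetAdapterName "NIC4" -MinimumBandwidthMode Weight
Set-VMSwitch -Name "Tenant-vSwitch" -DefaultFlowMinimumBandwidthWeight 1
# Relative minimum bandwidth per tenant vNIC
Set-VMNetworkAdapter -VMName "Bronze01" -MinimumBandwidthWeight 1
Set-VMNetworkAdapter -VMName "Silver01" -MinimumBandwidthWeight 2
Set-VMNetworkAdapter -VMName "Gold01" -MinimumBandwidthWeight 5
# Strict minimums use absolute bits per second and require a switch created with -MinimumBandwidthMode Absolute, e.g.
# Set-VMNetworkAdapter -VMName "Gold02" -MinimumBandwidthAbsolute 500000000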
Port Classifications
Abstract Technical Depth from Virtual Network Adaptors
A Port Classification provides a global name for identifying different types of virtual network adapter port profiles
Cross-switch: a classification can be used across multiple logical switches while the settings for the classification remain specific to each logical switch
Simplification: similar to storage classification, port classification is used to abstract technical detail when deploying VMs with certain vNICs. Useful in self-service scenarios.
Constructing the Logical Switch
Combining Building Blocks to Standardize NIC Configuration
Simple Setup: define the name and whether SR-IOV will be used by VMs. SR-IOV can only be enabled at switch creation time.
Switch Extensions: pre-installed/configured extensions available for use with this Logical Switch are chosen at this stage.
Teaming: decide whether this logical switch will bind to individual NICs, or to NICs that VMM should team automatically.
Virtual Ports: define which port classifications and virtual port profiles can be used with this Logical Switch.
Deploying the Logical Switch
Applying Standardized Configuration Across Hosts
Assignment: VMM can assign logical switches directly to the Hyper-V hosts.
Teaming or no teaming: your logical switch properties will determine whether multiple NICs are required or not.
Converged networking: VMM can create host virtual network adaptors for isolating host traffic types, e.g. Live Migration, CSV, SMB 3.0 storage, management etc. It will also issue IP addresses from its IP pool. This is useful with hosts that have just 2 x 10GbE adaptors but require multiple separate, resilient networks.