The fourth Microsoft Virtualisation User Group (MVUG) meeting took place last night and Microsoft’s Matt McSpirit presented a session on the R2 wave of virtualisation products. I’ve written previously about some of the things to expect in Windows Server 2008 R2 but Matt’s presentation was specifically related to virtualisation and there are some cool things to look forward to.
Hyper-V in Windows Server 2008 R2
At last night’s event, Matt asked the UK User Group what they saw as the main limitations in the original Hyper-V release and the four main ones were:
- USB device support
- Dynamic memory management (ballooning)
- Live Migration
- 1 VM per storage LUN
Hyper-V R2 does not address all of these (regardless of feedback, the product group is still unconvinced about the need for USB device support… and dynamic memory was pulled from the beta – it’s unclear whether it will make it back in before release) but live migration is in and Windows finally gets a clustered file system in the 2008 R2 release.
So, starting out with clustering – a few points to note:
- For the easiest support path, look for cluster solutions on the Windows Server Catalog that have been validated by Microsoft’s Failover Cluster Configuration Program (FCCP).
- FCCP solutions are recommended by Microsoft but are not strictly required for support, as long as all the components (i.e. server and SAN) are certified for Windows Server 2008. A failover clustering validation report will still be required though – FCCP simply provides another level of confidence.
- When looking at cluster storage, fibre channel (FC) and iSCSI are the dominant SAN technologies. With 10Gbps Ethernet coming onstream, iSCSI looked ready to race ahead and has the advantage of using standard Ethernet hardware (which is why Dell bought EqualLogic and HP bought LeftHand Networks) but then Fibre Channel over Ethernet came onstream, which is potentially even faster (as outlined in a recent RunAs Radio podcast).
With a failover cluster, Hyper-V has always been able to offer high availability for unplanned outages – just as VMware do with their HA product (although Windows Server 2008 Enterprise or Datacenter Editions were required – Standard Edition does not include failover clustering).
For planned outages, quick migration offered the ability to pause a virtual machine and move it to another Hyper-V host but there was one significant downside of this. Because Microsoft didn’t have a clustered file system, each storage LUN could only be owned by one cluster node at a time (a “shared nothing” model). If several VMs were on the same LUN, all of them needed to be managed as a group so that they could be paused, the connectivity failed over, and then restarted, which slowed down transfer times and limited flexibility. The recommendation was for 1 LUN per VM and this doesn’t scale well with tens, hundreds, or thousands of virtual machines although it does offer one advantage as there is no contention for disk access. Third party clustered file system solutions are available for Windows (e.g. Sanbolic Melio FS) but, as Rakesh Malhotra explains on his blog, these products have their limitations too.
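The downside of the shared nothing model can be illustrated with a toy Python sketch (the class and host names are hypothetical, purely for illustration): because a LUN has exactly one owning node, migrating any one VM forces a pause-and-move of every VM stored on that LUN.

```python
class SharedNothingLun:
    """Toy model of pre-R2 quick migration: a LUN has exactly one
    owning cluster node, so failing it over affects every VM on it."""

    def __init__(self, owner, vms):
        self.owner = owner      # the single node that owns this LUN
        self.vms = list(vms)    # all VMs whose VHDs live on this LUN

    def quick_migrate(self, vm, target):
        # Moving `vm` means failing over the whole LUN, so every
        # co-located VM must be paused, moved and restarted as a group.
        paused = list(self.vms)
        self.owner = target
        return paused           # every one of these saw downtime, not just `vm`
```

Migrating `vm1` from a LUN shared with `vm2` and `vm3` pauses all three, which is exactly why the guidance was one LUN per VM.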
Windows Server 2008 R2 Hyper-V can now provide live migration for planned failovers – so Microsoft finally has an alternative to VMware VMotion (at no additional cost). This is made possible because the new cluster shared volume (CSV) feature, with IO fault tolerance (dynamic IO), overcomes the limitations of the shared nothing model and allows up to 256TB per LUN, running on NTFS with no need for third-party products. The VM is still stored on a shared storage volume and, at the time of failover, memory is scanned for dirty pages whilst the VM is still running on the source cluster node. Using an iterative process of scanning memory for dirty pages and transferring them to the target node (over a dedicated network link), the memory contents are copied until so few pages remain that the last few may be sent and control passed to the target node in a fraction of a second, with no discernible downtime (including ARP table updates to maintain network connectivity).
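The iterative pre-copy process described above can be sketched in Python. This is a toy model: the page count, dirty rate and hand-over threshold are illustrative assumptions, not Hyper-V's actual values.

```python
def live_migrate(total_pages=1024, dirty_rate=0.2, threshold=8):
    """Sketch of pre-copy live migration: each round sends the pages
    dirtied during the previous round; copying N pages re-dirties
    roughly dirty_rate * N of them, so the set shrinks geometrically."""
    to_send = total_pages    # round 1: the whole memory image is "dirty"
    rounds = 0
    transferred = 0
    while to_send > threshold:
        transferred += to_send               # copy this round's dirty pages
        rounds += 1
        # while those pages were on the wire, the still-running VM
        # dirtied a fraction of them, so they must be sent again
        to_send = int(to_send * dirty_rate)
    # stop-and-copy: pause briefly, send the last few pages, hand over
    transferred += to_send
    return rounds, transferred
```

With these numbers the dirty set shrinks from 1024 pages to 204, 40 and then 8, at which point the brief final pause sends the remainder and the target node takes over.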
Allowing multiple cluster nodes to access a shared LUN is as simple as marking the LUN as a CSV in the Failover Clustering MMC snap-in. Each node has a consistent namespace for LUNs, so as many VMs as required may be stored on a CSV (although all nodes must use the same letter for the system drive – e.g. C:). Each CSV appears as an NTFS mount point, e.g. C:\ClusterStorage\Volume1, and even though the volume is only mounted on one node, distributed file access is co-ordinated through another node so that the VM can perform direct IO. Dynamic IO ensures that, if the SAN (or Ethernet) connection fails, IO is re-routed accordingly and, if the owning node fails, volume ownership is redirected accordingly. CSV is based on two assumptions (that data read/write requests far outnumber metadata access/modification requests; and that concurrent multi-node cached access to files is not needed for files such as VHDs) and is optimised for Hyper-V.
At a technical level, CSVs:
- Are implemented as a file system mini-filter driver, pinning files to prevent block allocation movement and tracking the logical-to-physical mapping information on a per-file basis, using this to perform direct reads/writes.
- Enable all nodes to perform high performance direct reads/writes to all clustered storage and read/write IO performance to a volume is the same from any node.
- Use SMB v2 connections for all namespace and file metadata operations (e.g. to create, open, delete or extend a file).
- Need:
- No special hardware requirements.
- No special application requirements.
- No file type restrictions.
- No directory structure or depth limitations.
- No special agents or additional installations.
- No proprietary file system (using the well established NTFS).
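The split between direct data IO and coordinated metadata access can be illustrated with a toy Python model (the class and node names are hypothetical): data reads and writes go straight to the shared storage from any node, while the rarer metadata operations are forwarded to the coordinating node (over SMB v2 in the real implementation).

```python
class CsvVolume:
    """Toy model of a Cluster Shared Volume: every node performs data
    IO directly against the shared LUN, but metadata operations
    (create/delete/extend) are forwarded to one coordinating node."""

    def __init__(self, coordinator):
        self.coordinator = coordinator
        self.files = {}         # name -> size, managed by the coordinator
        self.forwarded_ops = 0  # metadata ops that had to be redirected

    def read(self, node, name, offset):
        # Data path: any node goes straight to the shared storage.
        return f"block:{name}:{offset}"

    def create(self, node, name, size):
        # Metadata path: non-coordinator nodes forward the request.
        if node != self.coordinator:
            self.forwarded_ops += 1
        self.files[name] = size
```

Because VHD workloads are dominated by data IO, almost everything takes the fast direct path; only occasional namespace changes pay the forwarding cost.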
Live migration and clustered storage are major improvements but other new features for Hyper-V R2 include:
- 32 logical processor (core) support, up from 16 at RTM and 24 with a hotfix (to support 6-core CPUs) so that Hyper-V will now support up to 4 8-core CPUs (and I would expect this to be increased as multi-core CPUs continue to develop).
- Core parking to allow more intelligent use of processor cores – putting them into a low power suspend state if the workload allows (configurable via group policy).
- The ability to hot add/remove storage so that additional VHDs or pass-through disks may be assigned to running VMs, provided that the guest OS supports the Hyper-V SCSI controller (which should cover most recent operating systems but not Windows XP 32-bit or 2000).
- Second Level Address Translation (SLAT) to make use of new virtualisation technologies from Intel (Intel VT extended page tables) and AMD (AMD-V nested paging) – more details on these technologies can be found in Johan De Gelas’s hardware virtualisation article at AnandTech.
- Boot from VHD – allowing virtual hard disks to be deployed to virtual or physical machines.
- Network improvements (jumbo frames to allow larger Ethernet frames and TCP offload for on-NIC TCP/IP processing).
Hyper-V Server
So that’s covered the Hyper-V role in Windows Server 2008 R2 but what about its baby brother – Hyper-V Server 2008 R2? The good news is that Hyper-V Server 2008 R2 will have the same capabilities as Hyper-V in Windows Server 2008 R2 Enterprise Edition (previously it was based on Standard Edition), allowing access to up to 1TB of memory, 32 logical cores, hot addition/removal of storage, and failover clustering (with cluster shared volumes and live migration). It’s also free, and requires no dedicated management product, although it does need to be managed using the RSAT tools for Windows Server 2008 R2 or Windows 7 (Microsoft’s advice is never to manage an uplevel operating system from a downlevel client).
With all that for free, why would you buy Windows Server 2008 R2 as a virtualisation host? The answer is that Hyper-V Server does not include licenses for guest operating systems as Windows Server 2008 Standard, Enterprise and Datacenter Editions do; it is intended for running non-Windows workloads in a heterogeneous datacentre standardised on Microsoft virtualisation technologies.
Management
The final piece of the puzzle is management:
- System Center Configuration Manager (SCCM) 2007 R2 was released last year, including App-V management as well as a number of improvements in areas that are not specifically related to virtualisation.
- System Center Data Protection Manager (SCDPM) 2007 is not yet at R2 – and Microsoft has not announced an update either (although SP1 brought some significant enhancements – including support for Hyper-V).
- System Center Operations Manager (SCOM) 2007 R2 is at release candidate stage, allowing for monitoring of virtual and physical machines from a single console (and the new service level dashboard beta looks pretty cool).
- System Center Virtual Machine Manager (SCVMM) 2008 R2 is currently in beta and is expected to follow Windows Server 2008 R2 release within 60 days. Complementing the other System Center products to orchestrate VMs, SCVMM 2008 R2 will feature support for live migration and multiple VMs per LUN (using CSVs or 3rd party tools), SAN enhancements (for SAN migration in/out of a cluster – e.g. migrating VMs between two environments on the same SAN – such as staging to production), network optimisations and maintenance mode for simple VM evacuation (no more manual selection and migration of VMs in order to patch the host)!
There are a couple of caveats to note: the SCVMM 2008 R2 features mentioned are in the beta – more can be expected at final release; and, based on previous experience when Hyper-V RTMed, there may be some incompatibilities between the beta of SCVMM and the release candidate of Windows Server Hyper-V R2 (expected to ship soon).
SCVMM 2008 R2 is not a free upgrade – but most customers will have purchased it as part of the Server Management Suite Enterprise (SMSE) and so will benefit from the two years of software assurance included within the SMSE pricing model.
Wrap-up
That’s about it for the R2 wave of Microsoft Virtualization – for the datacentre at least – but there are a lot of improvements in the upcoming release. Sure, there are things that are missing (memory ballooning may not be a good idea for server consolidation but it will be needed for any kind of scalability with VDI – and using RDP as a workaround for USB device support doesn’t always cut it) and I’m sure there will be a lot of noise about how VMware can do more with vSphere but, as I’ve said previously, VMware costs more too – and I’d rather have most of the functionality at a much lower price point (unless one or more of those extra features will make a significant difference to the business case). Of course there are other factors too – like maturity in the market – but Hyper-V is not far off its first anniversary and, other than a couple of networking issues on guests (which were fixed), I’ve not heard anyone complaining about it.
I’ll write more about Windows 7 and Windows Server 2008 R2 virtualisation options (i.e. client and server) as soon as I can but, based on a page which briefly appeared on the Microsoft website, the release candidate is expected to ship next month and, after reading Paul Thurrott’s post about a forthcoming Windows 7 announcement, I have a theory (and that’s all it is right now) as to what a couple of the Windows 7 surprises may be…
Mark
Quote “The recommendation was for 1 LUN per VM and this doesn’t scale well with tens, hundreds, or thousands of virtual machines although it does offer one advantage as there is no contention for disk access.”
This is not strictly true, as multiple LUNs will most likely be formed from a RAID group comprising multiple spindles; therefore two separate LUNs can cause contention at the backend disk.
@Dave – you’re absolutely correct – my comment did assume that there was a 1:1 mapping between logical and physical disks, which is unlikely to be the case. Even so, 1 LUN per VM is a horrible restriction – and I’ll be pleased to see it lifted!
Just a very brief comment to the Product Team who are developing Win2k8 R2…. (regardless of feedback, the product group is still unconvinced about the need for USB device support…)
A word from the wise…
Whoever releases a TC product to market first which does support USB devices (drives, cams, printers, scanners, etc) will undoubtedly dominate the Thin Client market. USB device support is one of the biggest ‘drum bangers’ for adopting a true Thin Client architecture.
To me it sounds like they’re NOT listening to their user and developer community. This could prove disastrous to Microsoft if Citrix get there first.
@Matt, I’m not sure that Citrix getting there first would be “disastrous for Microsoft” (they partner with Citrix and a lot of Citrix technologies find their way into Windows). I do agree though that this is a feature that enterprise customers are waiting to see and I can’t see why it’s not yet making its way into the product (especially with the improved VDI focus that 2008 R2 brings via the RDS broker).
I’ll feed your comments back (again!) on the next MVP call I have with the virtualisation team (although I won’t be able to report back in this blog). What I can say is that, a) you’re not alone in wanting this and b) it’s worth providing your own feedback via the local Microsoft subsidiary in your area, your partner manager (if you work for a Microsoft partner), or whatever means you have available to you.