Several months back, I blogged about a Microsoft event with a difference – one which, by and large, dropped the PowerPoint deck and scripted demos in favour of a more hands-on approach. That was the Windows Vista after hours event (which I know has been popular and re-run several times) but then, a couple of weeks back, I attended another one at Microsoft’s new offices in London, this time about creating and managing a virtual environment on the Microsoft platform.
Now, before I go any further I should point out that, as I write this in late 2007, I would not normally recommend Microsoft Virtual Server for an enterprise virtualisation deployment and tend to favour VMware Virtual Infrastructure (although the XenSource products are starting to look good too). My reasons are all about scalability: Virtual Server is limited in a number of ways, most notably that it doesn’t support multiple-processor virtual machines, although it is perfectly suitable for a workgroup or departmental deployment. Having said that, things are changing – next year we will see Windows Server Virtualisation, and the management situation is already improving with System Center Virtual Machine Manager (VMM).
Throughout the day, Microsoft UK’s James O’Neill and Steve Lamb demonstrated a number of technologies for virtualisation on a Microsoft platform. The first scenario involved setting up a Virtual Server cluster: building the second node from a Windows Deployment Services (WDS) image (more on WDS later…) and using the Microsoft iSCSI target for shared storage (currently only available as part of Windows Storage Server, although there is a free alternative called Nimbus MySAN iSCSI Server) together with the Microsoft iSCSI initiator, which is included within Windows Vista and Server 2008 (and available for download for Windows 2000/XP/Server 2003).
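For reference, connecting a cluster node to the iSCSI target from the command line is straightforward. Here is a minimal sketch using the iSCSI initiator’s iscsicli.exe tool, run from a PowerShell (or command) prompt; the portal address and target IQN are hypothetical, so substitute the values for your own storage server:

```powershell
# Connect this host to a shared iSCSI LUN using the Microsoft iSCSI initiator's
# command line tool (iscsicli.exe). The portal address and IQN are hypothetical.
iscsicli QAddTargetPortal 192.168.1.10                          # register the storage server's target portal
iscsicli ListTargets                                            # discover the targets it exposes
iscsicli QLoginTarget iqn.1991-05.com.microsoft:san1-vsclust    # log in to the target presenting the shared disk
```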
When clustering Virtual Server, it’s important to understand that Microsoft’s step-by-step guide for Virtual Server 2005 R2 host clustering includes an appendix containing a script (havm.vbs) to add as a cluster resource so that virtual machines behave well in a host cluster. Taking the script resource offline effectively saves the virtual machine (VM), allowing the cluster group to be moved to another node; bringing the script back online then restores the state of the VM.
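For anyone who wants to try this, the script is added as a cluster resource in the virtual machine’s group (a Generic Script resource, if I recall correctly). The sketch below uses cluster.exe, run from a command (or PowerShell) prompt on a cluster node; the resource, group and path names are hypothetical and the step-by-step guide describes the exact procedure:

```powershell
# Register havm.vbs as a Generic Script resource so the cluster can save/restore the VM.
# Resource, group and path names are placeholders; see the step-by-step guide for detail.
cluster resource VM1-Script /create /group:VM1-Group /type:"Generic Script"   # create the script resource
cluster resource VM1-Script /priv ScriptFilepath=C:\Cluster\havm.vbs          # point it at the havm.vbs script
cluster resource VM1-Script /online                                           # bring the VM under cluster control
```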
After demonstrating building Windows Server 2008 Server Core (using WDS) and full Windows Server 2008 (from an .ISO image), James and Steve demonstrated VMM, the System Center component for server consolidation through virtual migration and virtual machine provisioning and configuration. Whilst the current version of VMM only supports Virtual Server 2005 and Windows Server Virtualisation, a future version will also support the management of XenSource and VMware virtual machines, providing a single point of management for all virtual machines, regardless of the platform.
At this point, it’s probably worth looking at the components of a VMM enterprise deployment:
- The VMM engine is typically deployed on a dedicated server and is managed from the VMM system console.
- Each virtual server host has a VMM agent installed for communication with the VMM engine.
- Library servers can be used to store templates, .ISO images, etc. for building the virtual infrastructure, with optional content replication using distributed file system replication (DFS-R).
- SQL Server is used to store configuration and discovery information.
- VMM uses a job metaphor for management, supporting administration from a graphical console (administration), a web interface (delegated provisioning) or the command line. The command line interface is provided through VMM extensions for Windows PowerShell, for which a cmdlet reference is available for download, and the GUI shows the equivalent PowerShell command for each action.
Furthermore, Windows Remote Management (WinRM/WS-Management) can be used to tunnel virtual machine management through HTTPS, allowing a virtual host to be remotely added to VMM.
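To give a flavour of the command line interface, here is a minimal sketch of connecting to the VMM engine, adding a host and listing VMs, run from the PowerShell console installed with the VMM administrator console. The cmdlet and parameter names are as I understand them (check the downloadable cmdlet reference for the exact syntax) and the server and host names are hypothetical:

```powershell
# Minimal VMM PowerShell sketch: connect to the engine, add a Virtual Server host,
# then list virtual machines. Cmdlet/parameter names are illustrative; server and
# host names are placeholders.
$vmm = Get-VMMServer -ComputerName "vmm01.example.local"        # connect to the VMM engine server
Add-VMHost -VMMServer $vmm -ComputerName "virtualhost02.example.local" -Credential (Get-Credential)   # add a host (WinRM/HTTPS under the covers)
Get-VM -VMMServer $vmm | Select-Object Name, HostName, Status   # list VMs and where they are running (property names may differ slightly)
```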
VMM is currently available as part of an enterprise server management license; however it will soon be available in a workgroup edition, priced per physical machine.
The next scenario was based around workload management, migrating virtual machines between hosts (in a controlled manner). One thing that VMM cannot do is dynamically redistribute the workload between virtual server hosts – in fact Microsoft were keen to point out that they do not consider virtualisation technology to be mature enough to make the necessary technical decisions for automatic resource allocation. This is one area where my opinion differs – the Microsoft technology may not yet be mature enough (and many organisations’ IT operations processes may not be mature enough) but ruling out dynamic workload management altogether runs against the idea of creating a dynamic data centre.
It’s worth noting that there are two main methodologies for virtual machine migration:
- Quick migration requires shared storage (e.g. in a cluster scenario): the VM state is saved, control is transferred to another cluster node, and the VM is restored on the new node. This necessarily involves some downtime, but it is fault tolerant; the main considerations are the amount of RAM in the VM and the speed at which it can be written to and read back from disk (see the sketch after this list).
- Live migration is more complex (and will not be implemented in the forthcoming release of Windows Server Virtualization): the contents of the virtual machine’s RAM are copied between two hosts whilst it is running. Downtime should be sub-second; however, such a migration needs to be scheduled and the memory copy does place a load on the network.
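To put the quick migration case into practice on the Virtual Server cluster built earlier, the sketch below simply moves the virtual machine’s cluster group between nodes with cluster.exe. Because the havm.vbs script is a resource in that group, taking the group offline saves the VM state and bringing it online on the other node restores it; the group and node names are hypothetical:

```powershell
# Quick migration on a Virtual Server host cluster: move the VM's cluster group.
# Group and node names are placeholders.
cluster group VM1-Group /status          # check which node currently owns the VM
cluster group VM1-Group /moveto:NODE2    # save state, move the group, restore the VM on NODE2
```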
Some time ago, I wrote about using the Virtual Server Migration Toolkit (VSMT) to perform a physical to virtual (P2V) conversion. At that time, the deployment technology in use was Automated Deployment Services (ADS) but ADS has now been replaced with Windows Deployment Services (WDS), part of the Windows Automated Installation Kit (AIK). WDS supports imaged deployment using Windows imaging format (.WIM) files for installation and boot images, or legacy images (not really images at all, but RIS-style file shares), and includes support for pending devices (prestaged computer accounts based on the machine’s GUID).

P2V capabilities are now included within VMM, with a wizard for gathering information about the physical host server and then converting it to a virtual format, including analysis of the most suitable host using a star system for host ratings based on CPU, memory, disk and network availability. At the time of writing, VMM supports P2V conversion, virtual to virtual (V2V) conversion from a running VM (strangely, Microsoft still refer to this as P2V) and V2V file format conversion and optimisation (from competing virtualisation products), but not virtual to physical (V2P) conversion (this may be possible using a Windows Vista System Restore but there would be issues around hardware detection – success is more likely by capturing a virtual machine image in WDS and then deploying that to physical hardware). In addition, VMM supports creating template VMs by cloning a VM that is not currently running. It was also highlighted that removing a VM from VMM will actually delete the virtual machine files – not simply remove them from the VMM console.
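As the GUI exposes the equivalent PowerShell command for each job, a P2V conversion can also be scripted. The sketch below is roughly what I would expect such a script to look like; the cmdlet and parameter names are written as I understand them from the cmdlet reference, so treat them as illustrative rather than definitive, and the server, host and path names are hypothetical:

```powershell
# Illustrative P2V conversion via the VMM PowerShell extensions. Cmdlet and parameter
# names are as I understand them; all machine names and paths are placeholders.
$creds  = Get-Credential                                         # account with admin rights on the source server
$vmm    = Get-VMMServer -ComputerName "vmm01.example.local"      # connect to the VMM engine
$config = New-MachineConfig -SourceComputerName "physsrv01" -Credential $creds    # gather details of the physical machine
$vmhost = Get-VMHost -ComputerName "virtualhost01.example.local"                  # target host (check its star rating first)
New-P2V -MachineConfig $config -VMHost $vmhost -Path "D:\VMs" -Name "physsrv01-vm" # perform the conversion
```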
The other components in the virtual machine management puzzle are System Center Operations Manager (a management pack is available for server health monitoring and management, plus performance reporting and analysis, including the ability to monitor both the host server workload and the VMs running on the server), System Center Configuration Manager (for patch management and software upgrades) and System Center Data Protection Manager (DPM), which allows for virtual machine backup and restoration as well as disaster recovery. DPM builds on Windows’ Volume Shadow Copy Service (VSS) technology to take snapshots of running applications, with agents available for Exchange Server, SharePoint, SQL Server and Virtual Server. Just like traditional backup agents, the DPM agents can be used within the VMs for granular backups, or each VM can be treated as a “black box” by running just the Virtual Server agent on the hosts and backing up entire VMs.
The final scenarios were all based around Windows Server Virtualization, including running Virtual Server VMs in a WSV environment. WSV is an extensive topic with a completely new architecture and I’ve wanted to write about it for a while but was prevented from doing so by an NDA. Now that James has taken the wraps off much of what I was keeping quiet about, I’ve written a separate post about WSV.
Finally, a couple of points worth noting:
- When using WDS to capture an image for deployment to a VM, it’s still necessary to sysprep that machine first (see the example after this list).
- Virtualisation is not a “silver bullet” – even though Windows Server Virtualisation on hardware that provides virtualisation assistance will run at near-native speeds, Virtual Server 2005 is limited by CPU speed, network and disk access, and available memory, all of which can compromise performance. In general, if a server is regularly running at ~60-75% CPU utilisation then it’s probably not a good virtualisation candidate, but many servers are running at less than 15% of their potential capacity.
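On the sysprep point above, this is the command to generalise a reference machine before capturing it with WDS (Windows Vista/Server 2008 syntax; older operating systems use the sysprep tools from deploy.cab instead):

```powershell
# Generalise the reference machine and shut it down ready for WDS image capture.
C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown
```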
Microsoft’s virtualisation technology has come a long way and I expect Microsoft to make a serious dent in VMware’s x86 virtualisation market dominance over the next couple of years. Watch this space!
Microsoft iSCSI target is not available to “normal” people and the Nimbus junk does not even install on most machines. Do you happen to know a WORKING alternative?
Gismo,
The iSCSI target is not available unless you purchase Windows Storage Server but I’m pretty sure that is available to “normal” people, just at a price!
I’ve provided feedback to Microsoft suggesting that they make the iSCSI target available as a free download for Windows Server and suggest you do the same if that’s what you would like to see (one person’s opinion won’t count for much, many people’s might – although there may be conflicting commercial requirements).
If you don’t like Nimbus, I think there is a Linux-based iSCSI target that you can use (not sure what it’s called but we used it on my VCP training); however I should point out that the configuration James and Steve used at the event is only really for test and development scenarios – there are many iSCSI devices available that will be far more suitable for the provision of shared storage in a production environment.
Mark