Microsoft releases Hyper-V to manufacturing

When Windows Server 2008 shipped with only a beta version of the new “Hyper-V” virtualisation role in the box, Microsoft undertook to release a final version within 180 days. I’ve commented before that, based on my impressions of the product, I didn’t think it would take that long and, as Microsoft ran at least two virtualisation briefings in the UK this week, I figured that something was just about to happen (on the other hand, I guess they could just have been squeezing the events into the 2007/8 marketing budget before year-end on 30 June).

The big news is that Microsoft has released Hyper-V to manufacturing today.

[Update: New customers and partners can download Hyper-V. Customers who have deployed Windows Server 2008 can receive Hyper-V from Windows Update starting from 8 July 2008.]

Why choose Hyper-V?

I’ve made no secret of the fact that I think Hyper-V is one of the most significant developments in Windows Server 2008 (even though the hypervisor itself is a very small piece of code), and, whilst many customers and colleagues have indicated that VMware has a competitive advantage through product maturity, Microsoft really are breaking down the barriers that, until now, have set VMware ESX apart from anything coming out of Redmond.

When I asked Byron Surace, a Senior Product Manager for Microsoft’s Windows Server Virtualization group, why he believes that customers will adopt Hyper-V in the face of more established products, like ESX, he put it down to two main factors:

  • Customers now see server virtualisation as a commodity feature (so they expect it to be part of the operating system).
  • The issue of management (which I believe is the real issue for organisations adopting a virtualisation strategy) – and this is where Microsoft System Center has a real competitive advantage with the ability to manage both the physical and virtual servers (and the running workload) within the same toolset, rather than treating the virtual machine as a “container”.

When asked to comment on Hyper-V being a version 1 product (which means it will be seen by many as immature), Surace made the distinction between a “typical” v1 product and something “special”. After all, why ship a product a month before your self-imposed deadline is up? Because customer evidence (based on over 1.3 million beta testers, 120 TAP participants and 140 RDP customers) and analyst feedback to date is positive (expect to see many head to head comparisons between ESX and Hyper-V over the coming months). Quoting Surace:

“Virtualisation is here to stay, not a fad. [… it is a] major initiative [and a] pillar in Windows Server 2008.”

I do not doubt Microsoft’s commitment to virtualisation. Research from as recently as October 2007 indicates that only 7% of servers are currently virtualised, but that figure is expected to grow to 17% over the next two years. Whilst there are other products to consider (e.g. Citrix XenServer), VMware products currently account for 70% of the x86 virtualisation market (4.9% overall) and VMware is looking to protect its dominant position. One strategy appears to be pushing out plenty of FUD – for example, highlighting an article that compares Hyper-V to VMware Server (which is ridiculous, as VMware Server is a hosted platform – more analogous to the legacy Microsoft Virtual Server product, albeit more fully-featured with SMP and 64-bit support) and commenting that live migration has been dropped (even though quick migration is still present). The simple fact is that VMware ESX and Microsoft Hyper-V are like chalk and cheese:

  • ESX has a monolithic hypervisor whilst Hyper-V takes the same approach as the rest of the industry (including Citrix/Xen and Sun) with its microkernelised architecture, which Microsoft consider to be more secure (Hyper-V includes no third-party code, whilst VMware integrates device drivers into its hypervisor).
  • VMware use a proprietary virtual disk format whilst Microsoft’s virtual hard disk (.VHD) specification has long since been offered up as an open standard (and is used by competing products like Citrix XenServer).
  • Hyper-V is included within the price of most Windows Server 2008 SKUs, whilst ESX is an expensive layer of middleware.
  • ESX doesn’t yet support 64-bit Windows Server 2008 (although that is expected in the next update).

None of this means that ESX and the rest of VMware’s Virtual Infrastructure (VI) are not good products, but for many organisations Hyper-V offers everything that they need without the hefty ESX/VI price tag. Is the extra 10% really that important? And when you consider management, is VMware Virtual Infrastructure as fully-featured as the Microsoft Hyper-V and System Center combination? Then consider that server virtualisation is just one part of Microsoft’s overall virtualisation strategy, which includes server, desktop, application, presentation and profile virtualisation, within an overarching management framework.

Guest operating system support

At RTM the supported guest operating systems have been expanded to include:

  • Windows Server 2008 32- or 64-bit (1-, 2- or 4-way SMP).
  • Windows Server 2003 32- or 64-bit (1- or 2-way SMP).
  • Windows Vista with SP1 32- or 64-bit (1- or 2-way SMP).
  • Windows XP with SP3 32-bit (1- or 2-way SMP), with SP2 64-bit (1- or 2-way SMP) or with SP2 32-bit (1 vCPU only).
  • Windows 2000 Server with SP4 (1 vCPU only).
  • SUSE Linux Enterprise Server 10 with SP1 or SP2, 32- or 64-bit.

Whilst this is a list of supported systems (i.e. those with integration components to make full use of Hyper-V’s synthetic device driver model), others may work (in emulation mode), although my experience of installing the Linux integration components is that it is not always straightforward. Meanwhile, for many, the main omissions from that list will be Red Hat and Debian-based Linux distributions (e.g. Ubuntu). Microsoft isn’t yet making an official statement on support for other flavours of Linux (and the Microsoft-Novell partnership makes SUSE an obvious choice) but it is pushing the concept of a virtualisation ecosystem where customers don’t need to run one virtualisation technology for Linux/Unix operating systems and another for Windows, and it’s logical to assume that this ecosystem should also include the leading Linux distribution (I’ve seen at least one Microsoft slide listing RHEL as a supported guest operating system for Hyper-V). Red Hat’s recent announcement that it will switch its allegiance from Xen to KVM could raise some questions though (it seems that Red Hat has never been fully on-board with the Xen hypervisor).
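
For the Windows guests in that list, the integration components are installed from inside the guest after attaching the integration services setup disk from Hyper-V Manager. As a rough sketch (D: is an assumed drive letter for the guest’s virtual DVD drive, and I haven’t verified the unattended switches against every build), the installation can be run from a command prompt in the guest:

rem Run inside a 64-bit Windows guest after inserting the integration services setup disk
rem (D: is an assumed drive letter; use support\x86 for 32-bit guests)
D:\support\amd64\setup.exe /quiet /norestart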

Performance and scalability

Microsoft are claiming that Hyper-V disk throughput is 150% that of VMware ESX Server – largely down to the synthetic device driver model (with virtualisation service clients in child partitions communicating with virtualisation service providers in the parent partition over a high-speed VMBus to access disk and network resources using native Windows drivers). The virtualisation overhead appears minimal – in Microsoft and QLogic’s testing of three workloads with two identical servers (one running Hyper-V and the other running direct on hardware) the virtualised system maintained between 88 and 97% of the number of IOPS that the native system could sustain and when switching to iSCSI there was less than a single percentage point difference (although the overall throughput was much lower). Intel’s vConsolidate testing suggests that moving from 2-core to 4-core CPUs can yield a 47% performance improvement with both disk and network IO scaling in a linear fashion.

Hardware requirements are modest too (Hyper-V requires a 64-bit processor with standard enhancements such as NX/XD and the Intel VT/AMD-V hardware virtualisation assistance) and a wide range of commodity servers are listed for Hyper-V in the Windows Server Catalog. According to Microsoft, when comparing Hyper-V with Microsoft Virtual Server (both running Windows Server 2003, with 16 single vCPU VMs on an 8-core server), disk-intensive operations saw a 178% improvement, CPU-intensive operations returned a 21% improvement and network-intensive operations saw a 107% improvement (in addition to the network improvements that the Hyper-V virtual switch presents over Virtual Server’s network hub arrangements).

Ready for action

As for whether Hyper-V is ready for production workloads, Microsoft’s experience would indicate that it is – they have moved key workloads such as Active Directory, File Services, Web Services (IIS), some line of business applications and even Exchange Server onto Hyper-V. By the end of the month (just a few days away) they aim to have 25% of their infrastructure virtualised on Hyper-V – key websites such as MSDN and TechNet have been on the new platform for several weeks now (combined, these two sites account for over 4 million hits each day).

It’s not just Microsoft that thinks Hyper-V is ready for action – around 120 customers have committed to Microsoft’s Rapid Deployment Programme (RDP) and, here in the UK, Paul Smith (the retail fashion and luxury goods designer and manufacturer) will shortly be running Active Directory, File Services, Print Services, Exchange Server, Terminal Services, Certificate Services, Web Services and Management servers on a 6-node Hyper-V cluster stretched between two data centres. A single 6-node cluster may not sound like much to many enterprises, but when 30 of your 53 servers are running on that infrastructure it’s pretty much business-critical.

Looking to the future

So, what does the future hold for Hyper-V? Well, Microsoft have already announced a standalone version of Hyper-V (without the rest of Windows) but are not yet ready to be drawn on when that might ship.

In the meantime, System Center Virtual Machine Manager 2008 will ship later this year, including support for managing Virtual Server, Hyper-V and VMware ESX hosts.

In addition, whilst Microsoft are keeping tight-lipped about what to expect in future Windows versions, Hyper-V is a key role for Windows Server and so the next release (expected in 2010) will almost certainly include additional functionality in support of virtualisation. I’d expect new features to include those that were demonstrated and then removed from Hyper-V earlier in its lifecycle (live migration and the ability to hot-add virtual hardware), and a file system designed for clustered disks would be a major step forward too.

In conclusion…

Hyper-V may be a version 1 product but I really do think it is an outstanding achievement and a major step forward for Microsoft. As I’ve written before, expect Microsoft to make a serious dent in VMware’s x86 [and x64] virtualisation market dominance over the next couple of years.

Hyper-V: RC1 is released – not long to wait now

A few days back, I gave the strongest hint that I could without breaking any NDAs that Microsoft’s Windows virtualization product group were about to release something special. I couldn’t say what at the time but it’s no longer a secret – RC1 of Hyper-V is available for download.

This second release candidate is expected to be the last before the final product ships although, as for when that will be, the only public commitment that Microsoft has made is that it will ship within 180 days of Windows Server 2008 RTM (I think that works out as 2 August 2008). Personally, I don’t think we’ll have to wait that long, although it should be said that I have no information to back this up.

Unfortunately, the current beta of System Center Virtual Machine Manager (SCVMM) 2008 is not compatible with Hyper-V RC1, although I understand that the product team are working on a fix and, to be fair, that’s one of the perils of running pre-release software – as is the fact that I need to collapse all my virtual machine snapshots before upgrading my Hyper-V hosts. It seems that Microsoft’s previous statement that “With RC, Hyper-V is now feature complete and provides a seamless upgrade path to RTM of Hyper-V.” doesn’t include snapshots (at least the VMs themselves no longer need to be recreated as part of the upgrade).

For those who used earlier versions of Hyper-V, there is one more thing to watch out for – in RC0, Windows Server 2008 guests needed to have an update applied to support the integration components but that changes in RC1 – just use the integration services setup disk as for other operating system versions.

Virtual PC and Virtual Server updated to support the latest Windows releases

Those who thought (as I did until a few weeks back) that there would be no more Virtual Server development as Microsoft focuses its efforts on Hyper-V may be interested to hear that they have just announced an update for Virtual Server 2005 R2 SP1, providing host and guest support for Windows Server 2008, Windows Vista Service Pack 1 and Windows XP Service Pack 3. Further details can be found in Microsoft knowledge base article 948515.

In addition, Microsoft has shipped service pack 1 for Virtual PC 2007, providing guest support for Windows Server 2008, Windows Vista Service Pack 1 and Windows XP Service Pack 3 as well as host support for Windows Vista Service Pack 1 and Windows XP Service Pack 3. Further details can be found in the accompanying release notes.

Both of these products take the VM Additions version to 13.820.

This information has been floating around for a few weeks but was under NDA until yesterday. Watch out for more news from the virtualisation labs in Redmond soon…

Comparing internal and USB-attached hard disk performance in a notebook PC

Recently, I was in a meeting with a potential business partner and their software was performing more slowly than they had expected in the virtual environment on my notebook PC. The application was using a SQL Server 2005 Express Edition database and SQL Server is not normally a good candidate for virtualisation but I was prepared to accept the performance hit as I do not want any traces of the product to remain on my PC once the evaluation is over.

Basic inspection using Task Manager showed that neither the virtual nor the physical system was stressed from a memory or CPU perspective but the disk access light was on continuously, suggesting that the application was IO-bound (as might be expected with a database-driven application). As I was also running low on physical disk space, I considered whether moving the VM to an external disk would improve performance.

On the face of it, spreading IO across disk spindles should improve performance but with SATA hard disk interfaces providing a theoretical data transfer rate of 1.5-3.0Gbps and USB 2.0 support at 480Mbps, my external (USB-attached) drive is, on paper at least, likely to result in reduced IO when compared with the internal disk. That’s not the whole story though – once you factor in the consideration that standard notebook hard drives are slow (4200 or 5400RPM), this becomes less of a concern as the theoretical throughput of the disk controller suddenly looks far less attainable (my primary hard drive maxes out at 600Mbps). Then consider that actual hard disk performance under Windows is determined not only by the speed of the drive but also by factors such as the motherboard chipset, UDMA/PIO mode, RAID configuration, CPU speed, RAM size and even the quality of the drivers and it’s far from straightforward.
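
A quick back-of-the-envelope conversion (dividing the quoted megabits per second by eight to get megabytes per second) shows why the interface figures matter less than they first appear – this is purely illustrative arithmetic, not a benchmark:

rem Convert quoted interface speeds from Mbps to MB/s (integer arithmetic is close enough here)
set /a SATA2=3000/8
set /a USB2=480/8
set /a DRIVE=600/8
echo SATA II: %SATA2% MB/s, USB 2.0: %USB2% MB/s, drive interface maximum: %DRIVE% MB/s

In other words, the drive’s own 600Mbps (75MBps) ceiling is far closer to the 60MBps that USB 2.0 allows than to the 375MBps on offer from SATA II, so the interface is unlikely to be the only bottleneck.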

I decided to take a deeper look into this. I should caveat this with a note that performance testing is not my forte but I armed myself with a couple of utilities that are free for non-commercial use – Disk Thruput Tester (DiskTT.exe) and HD Tune.

Both disks were attached to the same PC, a Fujitsu-Siemens S7210 with a 2.2GHz Intel Mobile Core 2 Duo (Merom) CPU, 4GB RAM and two 2.5″ SATA hard disks but the internal disk was a Western Digital Scorpio WD1200BEVS-22USTO whilst the external was a Fujitsu MHY2120BH in a Freecom ToughDrive enclosure.

My (admittedly basic) testing revealed that although the USB device was a little slower on sequential reads, and quite a bit slower on sequential writes, the random access figure was very similar:

Test                 Internal (SATA) disk    External (USB) disk
Sequential writes    25.1MBps                22.1MBps
Sequential reads     607.7MBps               570.8MBps
Random access        729.3MBps               721.6MBps

Testing was performed using a 1024MB file, in 1024 chunks and the cache was flushed after writing. No work was performed on the PC during testing (background processes only). Subsequent re-runs produced similar test results.

Disk throughput test results for internal disk
Disk throughput test results for external (USB-attached) disk

Something doesn’t quite stack up here though. My drive is supposed to max out at 600Mbps (not MBps) so I put the strange results down to running a 32-bit application on 64-bit Windows and ran a different test using HD Tune. This gave some interesting results too:

Test                    Internal (SATA) disk    External (USB) disk
Minimum transfer rate   19.5MBps                18.1MBps
Maximum transfer rate   52.3MBps                30.6MBps
Average transfer rate   40.3MBps                27.6MBps
Access time             17.0ms                  17.7ms
Burst rate              58.9MBps                24.5MBps
CPU utilisation         13.2%                   14.3%

Based on these figures, the USB-attached disk is slower than the internal disk but what I found interesting was the graph that HD Tune produced – the USB-attached disk was producing more-or-less consistent results across the whole drive whereas the internal disk tailed off considerably through the test.

Disk performance test results for internal disk
Disk performance test results for external (USB-attached) disk

There’s a huge difference between benchmark testing and practical use though – I needed to know if the USB disk was still slower than the internal one when it ran with a real workload. I don’t have any sophisticated load testing tools (or experience) so I decided to use the reliability and performance (performance monitor) capabilities in Windows Server 2008 to measure the performance of two identical virtual machines, each running on a different disk.

Brent Ozar has written a good article on using perfmon for SQL performance testing and, whilst my application is running on SQL Server (so the article may help me find bottlenecks if I’m still having issues later), by now I was more interested in the effect of moving the virtual machine between disks. It did suggest some useful counters to use though:

  • Memory – Available MBytes
  • Paging File – % Usage
  • Physical Disk – % Disk Time
  • Physical Disk – Avg. Disk Queue Length
  • Physical Disk – Avg. Disk sec/Read
  • Physical Disk – Avg. Disk sec/Write
  • Physical Disk – Disk Reads/sec
  • Physical Disk – Disk Writes/sec
  • Processor – % Processor Time
  • System – Processor Queue Length

I set this up to monitor both my internal and external disks, and to log to a third external disk so as to minimise the impact of the logging on the test.
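
For anyone who prefers the command line, much the same data collector can be created with logman – this is only a sketch (the counter selection, five-second sample interval and C:\PerfLogs output path are illustrative choices, not a record of exactly what I ran):

rem Create, start and stop a counter collector for the disk tests
logman create counter VMDiskTest -c "\PhysicalDisk(*)\% Disk Time" "\PhysicalDisk(*)\Avg. Disk Queue Length" "\PhysicalDisk(*)\Disk Reads/sec" "\PhysicalDisk(*)\Disk Writes/sec" "\Memory\Available MBytes" -si 00:00:05 -o "C:\PerfLogs\VMDiskTest"
logman start VMDiskTest
rem ...start the virtual machine, wait for the Welcome screen, shut it down...
logman stop VMDiskTest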

Starting from the same snapshot, I ran the VM on the external disk and monitored the performance as I started the VM, waited for the Windows Vista Welcome screen and then shut it down again. I then repeated the test with another copy of the same VM, from the same snapshot, but running on the internal disk.

Sadly, when I opened the performance monitor file that the data collector had created, the disk counters had not been recorded (which was disappointing) but I did notice that the test had run for 4 minutes and 44 seconds on the internal disk and only taken 3 minutes and 58 seconds on the external one, suggesting that the external disk was actually faster in practice.

I’ll admit that this testing is hardly scientific – I did say that performance testing is not my forte. Ideally I’d research this further and I’ve already spent more time on this than I intended to but, on the face of it, using the slower USB-attached hard disk still seems to improve VM performance because the disk is dedicated to that VM and not being shared with the operating system.

I’d be interested to hear other people’s comments and experience in this area.

Moving virtual machines between disks in Hyper-V

The trouble with running Microsoft Hyper-V on a notebook PC is that notebook PCs typically don’t have large hard disks. Add a few snapshots and a virtual machine (VM) can quickly run into tens or even hundreds of gigabytes, which meant that I needed to move my VMs onto an external hard disk.

In theory at least, there should also be a performance increase from moving the VMs off the system disk and onto a separate spindle; however, that’s not straightforward on a notebook PC as second disks will (normally) be external (and therefore using a slower USB 2.0 interface, rather than the internal SATA controller) – anyway, in my case, disk space was more important than any potential performance hit.

Moving VMs around under Hyper-V is not as straightforward as in Virtual Server; however there is an export function in Hyper-V Manager that allowed me to export a VM to my external hard disk, complete with snapshots (Ken Schaefer describes the equivalent manual process for moving a Hyper-V VM on his blog).

The exported VM is still not ready to run though – it needs to be imported again but the import operation is faster as it doesn’t involve copying the .VHD file (and any associated snapshots) to a new location. After checking that the newly imported VM (with disk and snapshot storage on the external drive) would fire up, I deleted the original version. Or, more accurately, I would have done if I hadn’t run out of disk space in the meantime (Windows Server 2008 doesn’t like it when you leave it with only a few MB of free space).

Deleting VMs is normally straightforward, but my machine got stuck halfway through the “destroy” process (the lack of hard disk space was upsetting my system’s stability) and I failed to recover from this, so I manually deleted the files and restarted. At this point, Hyper-V Manager thought that the original VM was still present but any attempt to modify VM settings resulted in an error (not surprising, as I’d deleted the virtual machine’s configuration file and the virtual hard disks). What I hadn’t removed, though, was the shortcut (symbolic link) from the Virtual Machines folder to my external hard disk. Deleting this file from %systemdrive%\ProgramData\Microsoft\Windows\Hyper-V\Virtual Machines and refreshing Hyper-V Manager left me with a clean management console again.
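
If you end up in a similar position, the orphaned link can be found and removed from a command prompt – note that the GUID-style filename below is just a placeholder for whichever configuration link corresponds to the deleted VM:

rem List the per-VM configuration links that Hyper-V Manager reads
dir "%systemdrive%\ProgramData\Microsoft\Windows\Hyper-V\Virtual Machines"
rem Delete the link for the VM that no longer exists (placeholder GUID shown)
del "%systemdrive%\ProgramData\Microsoft\Windows\Hyper-V\Virtual Machines\00000000-0000-0000-0000-000000000000.xml"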

Accessing USB devices from within Microsoft virtual machines

In my Hyper-V presentation on Wednesday, I said that USB support was one of the things that is missing from Hyper-V. That is correct – i.e. there is no ability to add USB devices as virtual hardware – but, in a conversation yesterday, Clive Watson pointed out that if you connect to a virtual machine using RDP, there is the ability to access local resources – including hard drives and smart card readers.

The way to do this is to use the Local Resources tab in the Remote Desktop Connection client options, where local devices and resources may be specified for connection:

Accessing local resources in the RDP client

If you click more, there is the option to select smart cards, serial ports, drives and supported plug and play devices (i.e. those that support redirection). In this case, I selected the USB hard drive that was currently plugged into my computer:

Accessing local resources in the RDP client

And when I connect to the virtual machine using RDP, the drive is listed as driveletter on localmachine:

Accessing local resources via RDP - as seen on the remote machine
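
As an alternative to clicking through the dialogue each time, the same redirection settings can be saved in (or scripted into) a .rdp file and launched with mstsc – a minimal sketch, in which vmhost01 is a placeholder machine name:

rem Build a connection file that redirects all local drives and smart cards, then launch it
(
  echo full address:s:vmhost01
  echo drivestoredirect:s:*
  echo redirectsmartcards:i:1
) > vmhost01.rdp
mstsc vmhost01.rdp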

This is really a Terminal Services (presentation virtualisation) feature – rather than something in Hyper-V – and so it is true to say that there is no USB device support in Hyper-V for other access methods (e.g. from a virtual machine console) and that the RDP connection method is a workaround for occasional access. Microsoft see USB support as a desktop virtualisation feature and the only way that will change is if they see enough customer feedback to tell them that it’s something we need on servers too.

My slides from the Microsoft UK user groups community day

I’m presenting two sessions at the Microsoft UK user groups community day today on behalf of the Windows Server Team.

The first is an introduction to Hyper-V and the second will look at server core installations of Windows Server 2008. I’ve included full speaker notes in the slide decks, as well as some additional material that I won’t have time to present and screen grabs from my demos. Both decks are available on my Windows Live SkyDrive, along with a couple of videos I recorded of the Hyper-V installation process:

Removing phantom network adapters from virtual machines

Last night, I rebuilt my Windows Server 2008 machine at home to use the RTM build (it was running on an escrow build from a few days before it was finally released) and Hyper-V RC0. It was non-trivial because the virtual machines I had running on the server had to be recreated in order to move from the Hyper-V beta to the release candidate (which meant merging snapshots) and so it’s taken me a few weeks to get around to it.

The recreation of the virtual machine configuration (but using the existing virtual hard disk) meant that Windows detected new network adapters when I started up the VM. Where I previously had a NIC called Local Area Connection using Microsoft VMBus Network Adapter, I now had a NIC called Local Area Connection 2 using Microsoft VMBus Network Adapter #2. The original adapter was still configured but no longer visible. Ordinarily, that’s not a problem – the friendly name for the NIC can be edited – but when I went to apply the correct TCP/IP settings, a warning was displayed that:

The IP address ipaddress you have entered for this network adapter is already assigned to another adapter Microsoft VMBus Network Adapter. Microsoft VMBus Network Adapter is hidden from the network and Dial-up Connections folder because it is not physically in the computer or is a legacy adapter that is not working. If the same address is assigned to both adapters and they become active, only one of them will use this address. This may result in incorrect system configuration. Do you want to enter a different IP address for this adapter in the list of IP addresses in the advanced dialog box?

That wasn’t a problem for my domain controller VM, but the ISA Server VM didn’t want to play ball – hardly surprising as I was messing around with the virtual network hardware in a firewall!

In a physical environment, I could have reinserted the original NIC, uninstalled the drivers, removed the NIC and then installed the new one, but that was less straightforward with my virtual hardware as the process had also involved upgrading the Hyper-V guest integration components. I tried getting Device Manager to show the original adapter using:

set devmgr_show_nonpresent_devices=1
start devmgmt.msc

but it was still not visible (even after enabling the option to show hidden devices). Time to break out the command line utilities.

As described in Microsoft knowledge base article 269155, I ran devcon to identify the phantom device and then remove it. Interestingly, running devcon findall =net produced more results than devcon listclass net and the additional entries were the original VMBus Network Adapters. After identifying the identifier for the NIC (e.g. VMBUS\{20AC6313-BD23-41C6-AE17-D1CA99DA4923}\5&37A0B134&0&{20AC6313-BD23-41C6-AE17-D1CA99DA4923}: Microsoft VMBus Network Adapter), I could use devcon to remove the device:

devcon -r remove "@VMBUS\{20AC6313-BD23-41C6-AE17-D1CA99DA4923}\5&37A0B134&0&{20AC6313-BD23-41C6-AE17-D1CA99DA4923}"

Result! devcon reported:

VMBUS\{20AC6313-BD23-41C6-AE17-D1CA99DA4923}\5&37A0B134&0&{20AC6313-BD23-41C6-AE17-D1CA99DA4923}: Removed
1 device(s) removed.

I repeated this for all phantom devices (and uninstalled the extra NICs that had been created but were visible, using Device Manager). I then refreshed Device Manager (scan for hardware changes), plug and play kicked in and I just had the NIC(s) that I wanted, with the original name(s). Finally, I configured TCP/IP as it had been before the Hyper-V upgrade and ISA Server jumped into life.
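
Reapplying the TCP/IP settings can also be done from the command line with netsh, which is handy when you’re tinkering with a firewall’s network hardware over a console session – the adapter name and addresses below are placeholders rather than my real configuration:

rem Reassign the static address to the remaining adapter (placeholder values)
netsh interface ip set address name="Local Area Connection" static 192.168.1.10 255.255.255.0 192.168.1.1 1
netsh interface ip set dns name="Local Area Connection" static 192.168.1.2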

Just one extra point of note: the devcon package that Microsoft supplies in Microsoft knowledge base article 311272 includes versions for i386 and IA64 architectures but not x64. It worked for me on my ISA Server virtual machine, which is running 32-bit Windows Server 2003 R2, but it was unable to remove the phantom device on my domain controller, which uses 64-bit Windows Server 2003 R2. I later found that devcon is one of the Support Tools on the Windows installation media (suptools.msi). After installing these, I was able to use devcon on x64 platforms too.

Upgrading from the Hyper-V beta to RC0

One of the problems when you ship a beta product with a released product is that people will use it. Damn those users!

Yeah, well, I’m one of those users and it’s all very well including a comment in the Hyper-V beta release notes warning us that it will not be possible to upgrade VMs from the Hyper-V beta to subsequent releases (I think there was such a comment, but I can only find the RC0 release notes now) but someone is just going to do it. I figured that as long as I have the virtual hard disk (.VHD) then recreating a child partition (virtual machine) shouldn’t be too big an issue. Right?

The exact words in Microsoft’s instructions for installing the Windows Server 2008 Hyper-V RC are:

“Migration of virtual machine configurations from Hyper-V Beta is not supported. All virtual machine configurations must be recreated using Hyper-V RC. However, customers will be able to migrate VHD files for released operating systems (Pre-release version of Windows Server 2008 will need to be recreated with the RTM version). There are several important factors to consider and steps to be followed for migrating VHDs to Hyper-V RC. […] Please refer to http://support.microsoft.com/kb/949222 for instructions on how to move VHDs created on Hyper-V Beta to RC.”

What Microsoft knowledge base article 949222 fails to point out is that the process of deleting snapshots does not always complete successfully. As John Howard points out in his recent post about the availability of the Hyper-V release candidate (RC) release:

“If you have any virtual machines running on Hyper-V Beta which have snapshots, these are not compatible with Hyper-V RC0. Deleting the snapshots will cause the changes to be merged back to the parent VHD, but this does take some time to complete (and due to a bug in Hyper-V beta, the merge does not always kick in).”

If you suffer from the bug that John mentions, there is a workaround (unsupported), which is under NDA (so I can’t write the method here), but Ben Armstrong gives a pretty big clue when he describes virtual machine snapshotting under Hyper-V and says:

“You can also delete a snapshot. If you delete a snapshot that has no descendants (snapshot with differencing disks that reference the snapshot being deleted) then the files associated with the snapshot will just be deleted. If you delete a snapshot with only one descendant the configuration and saved state files for the snapshot will be deleted and the snapshot differencing disks will be merged with those of it’s descendant. If you delete a snapshot with more than one descendant the snapshot configuration and saved state files will be deleted – but the differencing disks will not be merged until the number of descendant snapshots is reduced to one.”

I added the emphasis in that quote and it may be useful to note that the Edit Virtual Hard Disk Wizard can be used to merge a differencing disk (which is what a snapshot is) into its parent (from the Windows Server 2008 Technical Library).

Thankfully, I didn’t have to go down that route (at least not on my notebook – I’ve not been brave enough to upgrade my server at home yet as I’ll also need to upgrade the parent partition from escrow build 6001.17128.amd64fre.longhorn.080101-1935 to RTM build 6001.18000.amd64fre.longhorn_rtm.080118-1840 – you can check what version a server is running by examining the BuildLabEx string at HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\ in the registry). When I tried to take a backup of all the VM files (including snapshots), I found that some of them were locked – even after a reboot. That was because Hyper-V was (very slowly) merging the contents of the .AVHD files into the .VHDs. I wasn’t convinced until I saw .AVHD files disappearing before my eyes and disk space miraculously appearing on my hard drive, although I have a feeling that the process may have stalled a couple of times and a reboot kicked things off again.
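
Incidentally, rather than opening the registry editor, the build string can be read from a command prompt:

rem Check which Windows Server 2008 build the parent partition is running
reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion" /v BuildLabEx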

There are two clues that the merge is not yet complete:

  1. The presence of some .AVHD files in the snapshots folder for the virtual machine.
  2. The <disk_merge_pending type="bool">True</disk_merge_pending> line in the corresponding XML file.

Once the merge is complete, the .AVHD files should be deleted and <disk_merge_pending type="bool">True</disk_merge_pending> should read <disk_merge_pending type="bool">False</disk_merge_pending> .
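
Both clues can be checked quickly from a command prompt – assuming, purely for illustration, that the virtual machine’s files live under D:\VMs:

rem Any remaining .avhd files indicate an unmerged snapshot (D:\VMs is an example path)
dir /s /b "D:\VMs\*.avhd"
rem Look for configuration files that still have a merge pending
findstr /s /i "disk_merge_pending" "D:\VMs\*.xml"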

After my snapshots were merged and I had removed the beta integration components from my VMs, the upgrade process was quite straightforward – document everything, apply the Hyper-V RC0 upgrade package (no need to remove the beta first), install the RC (including restarting the computer), remove and recreate any virtual machines (even though they may still be visible in Hyper-V Manager, attempting to start one of the virtual machines will result in an access denied error – it’s a simple enough process to delete the virtual machine and recreate it using the original virtual hard disk), set up the virtual networking and install the latest integration components (depending on the operating system in use for each child partition).
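
For reference, the update packages themselves are standard .msu files, so they can be applied silently from an elevated command prompt with wusa – the filename below is a placeholder for whichever Hyper-V package applies to your build:

rem Apply a Hyper-V update package without prompting (placeholder filename)
wusa.exe Windows6.0-KBnnnnnn-x64.msu /quiet /norestart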

Thankfully, I shouldn’t have to endure this pain with subsequent releases (like RC0 to RTM) – Microsoft’s Hyper-V FAQ states that:

“Microsoft is encouraging all customers and partners to test and evaluate the RC of Hyper-V. With RC, Hyper-V is now feature complete and provides a seamless upgrade path to RTM of Hyper-V.”

Phew!

Hyper-V release candidate

For a couple of days now, I’ve been itching to write something about the Microsoft Hyper-V release candidate (RC), which was made available to beta testers earlier this week. Well, the wait is over as the (feature-complete) product was officially announced earlier today.

According to Microsoft:

The RC forms an important milestone in the development of Hyper-V and being feature complete, customers can now start to evaluate the final implementation scenarios with the knowledge that the upgrade path to the RTM of Hyper-V will be largely non-disruptive in terms of VM settings, VHDs, etc. In this release candidate of Hyper-V, there are 3 new areas of improvement including:

  • An expanded list of tested and qualified guest operating systems including: Windows Server 2003 SP2, Novell SUSE Linux Enterprise Server 10 SP1, Windows Vista SP1, and Windows XP SP3.
  • Host server and language support has been expanded to include the 64-bit (x64) versions of Windows Server 2008 Standard, Enterprise, and Datacenter – with English, partial German, and partial Japanese language options now available and the ability to enable the English version of Hyper-V on other locales.
  • Improved performance & stability for scalability and throughput workloads.

I’ll be upgrading my Hyper-V installations over the coming weeks but even running the beta has been a remarkably good experience, although so far I’ve failed to get the Linux integration components working (on SUSE or RHEL, 32 or 64-bit). I’m also pleased that Microsoft has released Hyper-V management tools for Windows Vista SP1, removing the requirement for another Hyper-V server in order to manage Hyper-V on a Windows Server 2008 server core installation.

There’s more information on the Hyper-V RC at the Windows Virtualization team blog and in the official press release.