Hyper-V and networking

For those who have worked with hosted virtualisation (Microsoft Virtual PC and Virtual Server, VMware Workstation and Server, Parallels Desktop, etc.) and haven’t experienced hypervisor-based virtualisation, Microsoft Hyper-V is fundamentally different in a number of ways. Architecturally, it has a lot in common with the Xen hypervisor: Xen’s domain 0 is analogous to the parent partition in Hyper-V (effectively, when the Hyper-V role is added to a Windows Server 2008 computer, the hypervisor is “slid” underneath the existing Windows installation, which becomes the parent partition). Subsequent virtual machines running on Hyper-V are known as child partitions.

When the Hyper-V role is installed, a new virtual switch (vswitch) is created and the physical network adapter (pNIC) is unbound from all clients, services and protocols except the Microsoft Virtual Network Switch Protocol. The virtual network adapters (vNICs) in the parent and child partitions connect to the vswitch. Further vswitches may be created for internal communications, or bound to additional pNICs; however, only one vswitch can be bound to a particular pNIC at any one time. Virtual machines can have multiple vNICs connected to multiple vswitches. Ben Armstrong has a good explanation of Hyper-V networking (with pictures) on his blog.
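
As an aside, the vswitches defined on a host can be enumerated through the Hyper-V WMI provider (the root\virtualization namespace); a minimal sketch, assuming PowerShell is available in the parent partition:

    # List Hyper-V virtual switches via the root\virtualization WMI namespace
    # (ElementName is the friendly name; Name is the switch GUID)
    Get-WmiObject -Namespace 'root\virtualization' -Class 'Msvm_VirtualSwitch' |
        Select-Object ElementName, Name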

One exception is that a vswitch cannot be bound to a wireless network adapter (not a common server scenario, but nevertheless useful when Windows Server 2008 is running on a notebook PC). The workaround is to use Internet connection sharing (ICS) on the wireless pNIC and to connect that to a vswitch configured for internal networking in Hyper-V. Effectively, the ICS connection becomes a DHCP server for the 192.168.0.0/24 network, presented via the internal vswitch, and I’m pleased to find that the same principle can be applied to mobile data cards. Interestingly, Hyper-V seems quite happy to bind directly to a Bluetooth connection.
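
ICS is normally configured through the GUI, but it can also be scripted. A minimal sketch using the HNetCfg.HNetShare COM object (the connection names are examples and will vary; this needs to run elevated):

    # Share the wireless connection (public side) to the internal vswitch
    # connection (private side) using the ICS COM API
    $netShare = New-Object -ComObject HNetCfg.HNetShare
    foreach ($conn in @($netShare.EnumEveryConnection)) {
        $props  = $netShare.NetConnectionProps.Invoke($conn)
        $config = $netShare.INetSharingConfigurationForINetConnection.Invoke($conn)
        if ($props.Name -eq 'Wireless Network Connection') { $config.EnableSharing(0) } # 0 = public (shared) side
        if ($props.Name -eq 'Local Area Connection 5')     { $config.EnableSharing(1) } # 1 = private side
    }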

Hyper-V network connection example

Using this approach, on my system, the various network adapters are as follows:

  • Dial-up adapters, including an HSDPA/HSUPA modem which I have shared to allow VMs to connect to mobile networks in place of wired Ethernet.
  • Local Area Connection – the pNIC in my notebook PC, bound only to the Microsoft Virtual Network Switch Protocol.
  • Wireless Network Connection – the WiFi adapter in my notebook PC (if there was WiFi connectivity where I am today then this could have been shared instead of the data card).
  • Local Area Connection 3 – the Bluetooth adapter in my notebook PC.
  • Local Area Connection 4 – the external vswitch in my Hyper-V installation, connected to the external network via the pNIC.
  • Local Area Connection 5 – another vswitch in my Hyper-V installation, operating as an internal network, but connected using the method above to the shared HSDPA/HSUPA modem.

This gives me plenty of flexibility for connectivity and has a useful side-effect: it allows me to circumvent the port security that I suspect is the cause of my frequent disconnections at work, where the physical switches are configured to block any device presenting multiple MAC addresses on the same port.

Problems with Hyper-V, ISA Server 2006 and TCP offloading

For the last few days, I’ve been trying to get an ISA Server 2006 installation working and it’s been driving me nuts. I was pretty sure that I had my networking sorted, following Jim Harrison’s article on configuring ISA Server interface settings (although a colleague did need to point out to me that I didn’t have a static route defined on my ADSL router back to the ISA Server’s internal network – doh!) but even once this was checked there was still something up with the configuration.

My server has three NICs – a Broadcom NetXtreme Gigabit Ethernet card, connected to my Netgear ProSafe GS108 switch, and two Intel PRO/100+ Management Adapters – one connected to a Netgear DS108 hub and the other disconnected at the moment but reserved for remote management of the server (the first two are both bound to Hyper-V virtual switches).

The theory is that the Gigabit connection will be used for all my internal IT resources and the Fast Ethernet hub is just connected to the ADSL router. The server will run a few virtual machines (VMs) – the ISA Server (running with Windows Server 2003 R2 and connected to both virtual switches), another VM with Active Directory and DNS (also running Windows Server 2003 R2), my mail server and various test/development machines.

According to Microsoft:

“There are two rules to remember when setting up DNS on ISA Server. These rules apply to any Windows-based DNS configuration:

  • No matter how many network adapters you have, only assign DNS servers to a single adapter (it doesn’t matter which one). There is no need to set up DNS on all network adapters.
  • Always point DNS to either internal servers or external servers, never to both.”

[Configuring DNS Servers for ISA Server 2004]

Following this advice, my internal DNS Server is set to forward any requests that it can’t resolve to my ISP’s servers. The problem was that this DNS server couldn’t access the Internet through the ISA Server. ISA Server could ping hosts on all networks (so the network configuration was sound) and monitoring the traffic across the ISA Server showed the outbound DNS traffic on port 53 but nothing seemed to be coming back from the ISP’s DNS servers.

I checked another colleague’s working ISA Server 2006 configuration and found nothing major that was different – only an alternative DNS configuration (his external NIC pointing to the internal DNS server, where my external NIC has no DNS server specified) and the addition of the Local Host network to the source list for the Unrestricted Internet Access firewall access rule that is included in the Edge Firewall network template.

Then, after seeking advice from more colleagues and spending the entire day (and evening) on the problem, I finally cracked it…

Because the ISA Server was configured to use the internal DNS server for lookups (which, in turn, couldn’t get back through the ISA Server), nslookup domainname.tld didn’t work; however nslookup domainname.tld alternativednsserveripaddress did (e.g. nslookup www.google.com 4.2.2.2). HTTP(S) traffic seemed fine though – if I used IP addresses instead of domain names, I could access websites via the web proxy client.

Meanwhile, on the ISA Server, I could use nslookup for local name resolution but not for anything on the Internet. And pinging servers on the external side of the ISA Server gave some very strange results – the first packet would receive a reply but not the subsequent ones.

After hours of Googling, I came across some good advice in a TechNet forum thread – download and run the ISA Server Best Practices Analyzer (BPA) tool. The ISA BPA presented me with a number of minor warnings (for example, that running ISA Server in a virtual environment can’t protect the underlying operating system) but two seemed particularly significant:

“Receive-side scaling (RSS) is enabled by the Windows Server operating system. If a network adapter installed on the local ISA Server computer supports RSS, ISA Server may function incorrectly. […]”

and:

“TCP-Acceleration (TCPA) is enabled by the Windows Server operating system. If a network adapter installed on the local ISA Server computer supports TCPA, ISA Server may function incorrectly. […]”

I made the registry edits to disable RSS and TCPA (further details are available in Microsoft knowledge base articles 927695 and 936594), restarted the computer and crossed my fingers.

Even after this change, I still couldn’t successfully ping resources on the external side of the ISA Server from the private network, but I was sure I was onto something. I stopped looking for problems with ISA Server and DNS, and instead I focused my efforts on TCP Offload issues with Hyper-V. That’s when I found Stefaan Pouseele’s post about ISA Server and Windows Server 2003 service pack 2. Stefaan recommends not only disabling RSS and TCPA but also turning off TCP offload and the TCP chimney.

A bit more googling and I found a TechNet Forum thread about ISA Server 2006 in a virtual environment where (Virtual PC Guy) Ben Armstrong and VistaGuyRay (Raymond Comvalius) had discussed disabling TCP offloading in the VM. As it happens, only yesterday, Ray blogged about how disabling TCP offloading in the virtual machine (not on the host) had resolved his problems with a Broadcom gigabit Ethernet adapter and Hyper-V (further details are available in Microsoft knowledge base article 888750). So, after making this change (but not doing anything with the TCP chimney) and a final reboot of my ISA Server, I noticed that Windows wanted to apply some updates. That meant that name resolution was working, which in turn meant that the internal DNS server was successfully forwarding requests to the ISP servers via the ISA Server and my ADSL router. Result.

The final set of registry changes that I made were as follows:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"EnableTCPA"=dword:00000000
"EnableRSS"=dword:00000000
"DisableTaskOffload"=dword:00000001

I’ve only made the registry changes on the ISA Server at the moment and the VM running AD/DNS seems to be fine, so this might not be an issue for all virtual machines connected to the Hyper-V virtual switch bound to the Broadcom NetXtreme NIC. What does seem reasonably certain though is that Hyper-V, ISA Server 2006 and TCP offloading don’t play nicely together in this scenario.

Ctrl+Alt+arrow keys

My new notebook PC has an Intel X3100 integrated graphics chipset and it seems that Intel graphics drivers include a feature whereby holding down the Ctrl and Alt keys together with a directional arrow key will rotate the display:

Ctrl+Alt+left = rotate display to lie down to the left (270° position)
Ctrl+Alt+right = rotate display to lie down to the right (90° position)
Ctrl+Alt+down = flip display upside down (180° position)
Ctrl+Alt+up = rotate display to normal position (0° position)

I’ve never come across this before but it’s a real pain as the Hyper-V Virtual Machine Connection also uses Ctrl+Alt+left by default to release the mouse when integration components are not installed. Luckily Alt+Tab will also break out of the VM and the hotkey can be changed in the Hyper-V settings.

No more heroes {please}

That’s it.  A single reference to [IT] heroes.  No more – because I didn’t count how many times that word was used at the 2008 Global Launch event today but I certainly didn’t have enough fingers and toes to keep a tally – and now I’m tired of hearing it.

Although those of us at the UK launch had already heard from a variety of Microsoft executives (including Microsoft UK Managing Director, Gordon Frazer, and Microsoft’s General Manager for the Server and Tools Division, Larry Orecklin) and customers, the highlight was the satellite link-up to the US launch event with Microsoft CEO, Steve Ballmer.  Unfortunately, before we got to hear the big man speak, we had to listen to the warm-up act – Tom Brokaw, who it would seem is a well-known television presenter in the States, but totally unknown over here.  He waffled on for a few minutes with the basic premise being that we are in a transformational age in the history of our world and that the definition of our time and generation comes from unsung heroes (damn, that’s the second time I’ve used the word) – not celebrities.

So.  Windows Server 2008, Visual Studio 2008, SQL Server 2008.  Three new products – one released last year, one earlier this month, and another due later in 2008 – in Microsoft’s largest ever launch event, with 275,000 people expected to attend events across the globe and another million online at the virtual launch experience website.  Ballmer described them as "The most significant [products] in Microsoft’s history" and "enablers to facilitate the maximum impact that our industry can have".  But what does that mean for you and me – the people that Microsoft likes to refer to with the H word, who implement their technology in order to execute this change on an unsuspecting world?

I’ve written plenty here before about Windows Server 2008, but the 2008 global launch wave is about more than just Windows.  For years now, Microsoft has been telling us about dynamic IT and over the last few years we have seen many products that can help to deliver that vision.  The 2008 global launch wave is built around four areas:

  1. A secure and trusted foundation.
  2. Virtualisation.
  3. Web and developer productivity.
  4. Business intelligence (and user experience).

So, taking each of these one at a time, what do the 2008 products offer?

A secure and trusted foundation

Security and reliability are always touted as benefits for the latest version of any product, but in the case of Windows Server there are some real benefits.  The Server Core installation option results in a smaller codebase, meaning a reduced attack surface.  The modular design of IIS (and indeed the role-based architecture for Windows Server) means that only those components that are required are installed. Read-only domain controllers allow for secure deployment of directory servers in branch office situations that previously would have been a major security risk.

Availability is increased with enhancements to failover clustering (including new cluster validation tools), SQL data mirroring and the new resource governor functionality in SQL Server 2008 which allows resources to be allocated to specific workloads.

On the compliance and governance front, there is network access protection, federated rights management, and transparent SQL data encryption.

Microsoft is also keen to point out that their database platform has seen significantly fewer critical vulnerabilities in recent history than Oracle.

Finally, although not strictly security-related, Microsoft cites power as accounting for 40% of data centre costs, and claims that Windows Server 2008 consumes 10% less power than previous versions of Windows Server when running the same workload.

Virtualisation

Microsoft’s view on virtualisation is broader than just server virtualisation, encompassing not just the new Hyper-V role that will ship within 180 days of Windows Server 2008 release but also profile virtualisation (document redirection and offline files), client virtualisation (Vista Enterprise Centralised Desktop), application virtualisation (formerly SoftGrid) and presentation virtualisation (Terminal Services RemoteApp), all managed in one integrated, unified manner with System Center.

As for VMware‘s dominance of the server virtualisation space – I asked Larry Orecklin how Microsoft would combat customer perceptions around Microsoft’s lack of maturity in this space. His response was that "the proof is in the pudding" and that many customers are running Hyper-V in beta with positive feedback on performance, scalability and ease of use.  Microsoft UK Server Director, Bruce Lynn added that Hyper-V is actually the tenth virtualisation product that Microsoft has brought to market.

In Steve Ballmer’s keynote, he commented that [customers] have told Microsoft that virtualisation is too hard and too expensive – so Microsoft wants to "democratise virtualisation" – to switch from the current situation where less than 10% of servers are virtualised to a world where 90% are.  Their vision is for a scalable and performant hypervisor-based virtualisation platform, with minimal footprint, interoperability with competitive platforms, and integrated management tools.

Web and developer productivity

At the core of Windows Server 2008 is IIS 7.0 but Visual Studio extends the vision for developer productivity when creating rich web applications including support for AJAX, JavaScript IntelliSense, XAML, LINQ, entity-level data access and multi-targeting.

From a platform perspective, there are improvements around shared configuration, administrative delegation and scalability.

Combined with Silverlight for a rich user experience and Expression Blend (for designers to interact with developers on the same code), Microsoft believes that their platform enables customers to provide better performance, improved usability and a better experience for web-based applications.  It all looks good to me, but I’m yet to be convinced by Silverlight, or for that matter Adobe AIR – this all seems to me like a return to the days when every site had a Shockwave/Flash intro page and I’d like to see a greater emphasis on web standards.  Still, at least IIS now supports running PHP without a performance penalty – and Visual Studio includes improved CSS styling support.

Business intelligence

Ballmer highlighted that business intelligence (BI) is about letting users engage with applications – providing not just presentation but insight – getting at the data to provide business value.  Excel is still the most popular business intelligence tool, but combined with other products (e.g. SharePoint and PerformancePoint), the Microsoft BI story is strengthened.

SQL Server 2008 is at the core of the BI platform, providing highly performant and scalable support for data warehousing with intelligence for both structured and unstructured data.  SQL Server Reporting Services integrates with Office applications, and the ability to store spatial data opens new possibilities for data-driven applications (e.g. the combination of non-relational data and BI data to provide location awareness).

Putting it all together

So, that’s the marketing message – but what does this mean in practice?  Microsoft used a fictitious coffee company to illustrate what could be done with their technology but I was interested to hear what some of their TAP customers had been up to.  Here in the UK there were a number of presentations from well-known organisations that have used 2008 launch wave products to solve specific business issues.

easyJet have carried out a proof of concept that they hope to develop into an improved travel portal for their customers.  For a low-fares airline, you might expect anything more than the most basic website to be an expensive extravagance but far from it – 98% of easyJet’s customers book via the web, and if the conversion rate could be increased by 1% then that would translate into £17m of revenue each year.

The easyJet proof of concept uses a Silverlight and AJAX front end to access Microsoft .NET 3.5 web services and SQL Server 2008.  Taking a starting point of, for example, London Luton, a user can select a date and see the lowest prices to all available destinations on a map.  Clicking through to a destination reveals a Microsoft Virtual Earth map with points of interest within a particular radius.  Streaming video is added to the mix, along with the ability to view hotel details using TripAdvisor and book online.

The proof of concept went from design to completion in just 6 weeks.  Windows Server 2008 provided IIS 7.0 with its modular design and simplified configuration.  SQL Server 2008 allowed the use of geospatial data.  And Visual Studio 2008 enhanced developer productivity, team collaboration and the overall user experience.

Next up was McLaren Electronic Systems, using SQL Server 2008 to store telemetry data transmitted in real time from Formula 1 racing cars.  With microwave signals bouncing off objects and data arriving out of sequence, the filestream feature allows data to be streamed into a relational database for fast access.  Tests have shown that for files above 2MB this technology will out-perform a traditional file system.  Formula 1 may sound a little specialised to relate to everyday business but as McLaren explained, a Formula 1 team will typically generate 3TB of data in a season.  That’s a similar volume to a financial services company, or a warehousing and logistics operation – so the technology is equally applicable to many market sectors.

The John Lewis Partnership is using Windows Server 2008 for its branch office infrastructure.  Having rolled out Windows Server 2003, they would like to reduce the number of servers (and the carbon footprint of their IT operations) at the same time as doubling the number of stores.  Security is another major consideration, with the possibility of data corruption if power is removed from a server and a security breach if a directory server is compromised.

By switching branch servers to Windows Server 2008 read-only domain controllers (RODCs), John Lewis can combine the DCs with other branch office functions (print, DHCP, System Center Configuration Manager and Operations Manager) to remove one server from every store.  The reduction in replication traffic (AD replication is all one-way, from the centre to the RODCs) allows for a reduction in data centre DCs too.  Windows Server 2008 also facilitates improved failover between data centres in a disaster recovery scenario.  Other Windows Server technologies of interest to John Lewis include Server Core, 64-bit scalability and clustering.

The University of Cambridge is making use of the ability to store spatial data in SQL Server 2008 to apply modern computing to the investigation of 200 year-old theories on evolution.  And Visual Studio 2008 allowed the construction of the associated application in just 5 days.  As Professor John Parker and his self-confessed "database geek" sidekick, Dr Mark Whitehorn explained, technologies such as this are "allowing the scientific community to wake up to business intelligence".

Finally, the Rural Payments Agency (the UK government agency responsible for paying agricultural subsidies) is using Microsoft Application Virtualization and Terminal Services to provide an ultra-thin client desktop to resolve application conflicts and allow users to work from any desk.

Roadmap

Microsoft never tells us a great deal about the roadmap (at least not past the next year or so) but the 2008 launch wave includes a few more products yet.  Visual Studio 2008 and Windows Server 2008 have already shipped.  SQL Server 2008 will be available in the third quarter of 2008 (with a community technology preview today) and the Hyper-V role for Windows Server will ship within 180 days of Windows Server (although I have heard rumours it may be a lot closer than that).  In the summer we will see a new release of Windows Small Business Server as well as a new product for SMEs – Windows Essential Business Server – and, at the other end of the computing spectrum, Windows High Performance Computing Server.  Finally, a new version of Silverlight will ship at some point this year.

Summary

I may not be a fan of the HEROES happen {here} theme but that’s just marketing – I’ve made no secret of the fact that I think Windows Server 2008 is a great product.  I don’t have the same depth of experience to comment on Visual Studio or SQL Server, but the customer presentations that I heard today add credence to Microsoft’s vision of a dynamic, agile IT infrastructure – one that reduces the effort required to maintain the infrastructure and frees up resources to drive the innovation that modern business demands.

Mark Wilson {United Kingdom}

Management of Microsoft Hyper-V on Windows Server 2008 (Server Core)

I recently bought a new server in order to consolidate various machines onto one host.  The intention here is to license Microsoft Hyper-V Server when it is released but, as that’s not available to me right now, I thought I’d use the latest Windows Server 2008 (Server Core) build with the Hyper-V role enabled.  Everything was looking good until I built the server, installed Hyper-V (using the ocsetup Microsoft-Hyper-V command) and realised that although I had a functioning Hyper-V server, I had no way to manage it.

According to the release notes for the Hyper-V beta:

"To manage Hyper-V on a server core installation, you can do the following:

  • Use Hyper-V Manager to connect to the server core installation remotely from a full installation of Windows Server 2008 on which the Hyper-V role is installed.
  • Use the WMI interface."

I wanted to run Hyper-V on Server Core because my experience of running Virtual Server on Windows Server 2003 has been that patching the host is a major issue involving downtime on each guest virtual machine.  Similarly (unless I migrate the workload to another server) applying updates to the parent partition on Hyper-V will also result in downtime in each child partition.  By using Server Core, I reduce the size of the attack surface and therefore the likelihood of a critical patch being applicable to my server.  If I need another Windows Server 2008 machine with Hyper-V installed just to manage the box then that’s not helping me much – even a version of Hyper-V Manager to run on a Windows client machine and administer the server would be a huge step forward!
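
In the meantime, the WMI route mentioned in the release notes can at least be driven remotely. A minimal sketch, assuming a management machine with PowerShell, DCOM/firewall access to the Server Core box and a hypothetical host name of corehost:

    # Enumerate VMs on a remote Hyper-V host via the Hyper-V WMI provider;
    # Msvm_ComputerSystem includes the host itself, so filter on Caption
    Get-WmiObject -ComputerName 'corehost' -Namespace 'root\virtualization' `
        -Class 'Msvm_ComputerSystem' |
        Where-Object { $_.Caption -eq 'Virtual Machine' } |
        Select-Object ElementName, EnabledState   # EnabledState: 2 = running, 3 = stopped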

I’ve raised a feedback request highlighting this as a potential issue which restricts the scenarios in which Hyper-V will be deployed; however I’m expecting it to be closed as "by design" and therefore not holding out much hope of this getting fixed before product release.

Microsoft releases a beta for Hyper-V

Windows Server 2008 beta testers are probably aware that the release candidate distributions include a pre-release version of the new virtualisation platform that is now known as Hyper-V (formerly known as Windows Server Virtualisation and codenamed Viridian).

With Hyper-V due to follow the Windows Server 2008 release (within 180 days), it was widely anticipated that no formal beta would be available until Windows Server 2008 was finalised, but Microsoft is announcing the first Hyper-V beta release today, including support for quick migration and high availability, the ability to run Hyper-V as a Server Core role, and integration of Hyper-V into Server Manager. Further details of Hyper-V are available on the Microsoft website.

Hyper-V is the new name for Windows Server Virtualization

Last week I was in Redmond, at a Windows Server 2008 technical conference. Not a word was said about Windows Server 2008 product packaging (except that I think one speaker may have said that the details for the various SKUs were still being worked on). Well, it’s amazing how things can change in a few days, as one of the big announcements at this week’s TechEd IT Forum 2007 in Barcelona is the Windows Server 2008 product pricing, packaging and licensing. I don’t normally cover “news” here – there are others who do a much better job of that than I would – but I am interested in the new Hyper-V announcement.

Hyper-V is the new name for the product codenamed Viridian, also known as Windows Server Virtualization, and expected to ship within 180 days of Windows Server 2008. Interestingly, as well as the SKUs that were expected for web, standard, enterprise, datacenter and Itanium editions of Windows Server 2008, there will be versions of Windows Server 2008 standard, enterprise and datacenter editions without the Hyper-V technology (Hyper-V will only be available for x64 versions of Windows Server 2008) as well as a separate SKU for Hyper-V priced at just $28.

$28 sounds remarkably low – why not just make it free (and greatly simplify the product model)? In any case, this places Hyper-V in a great position to compete on price with Citrix XenServer or VMware ESX Server 3i (it should be noted that I have yet to see pricing announced for ESX Server 3i) – I’ve already written that I think Hyper-V has the potential to compete on technical merit (something that its predecessor, Virtual Server 2005 R2, couldn’t).

At the same time, Microsoft announced a Windows Server Virtualisation validation programme – designed to validate Windows Server with virtualisation software and enable Microsoft to offer co-operative technical support to customers running Windows Server on validated, non-Windows server virtualisation software platforms (such as Xen) as well as virtualisation solution accelerators and general availability of System Center Virtual Machine Manager 2007.

Whilst VNU are reporting that VMware are “unfazed” by the Microsoft Hyper-V announcement, I have absolutely no doubt that Microsoft is serious about making a name for itself in the x86/x64 server virtualisation market.

Creating and managing a virtual environment on the Microsoft platform

Several months back, I blogged about a Microsoft event with a difference – one which, by and large, dropped the PowerPoint deck and scripted demos in favour of a more hands-on approach. That was the Windows Vista after hours event (which I know has been popular and re-run several times) but then, a couple of weeks back, I attended another one at Microsoft’s new offices in London, this time about creating and managing a virtual environment on the Microsoft platform.

Now, before I go any further I should point out that, as I write this in late 2007, I would not normally recommend Microsoft Virtual Server for an enterprise virtualisation deployment and tend to favour VMware Virtual Infrastructure (although the XenSource products are starting to look good too). My reasons for this are all about scalability – Virtual Server is limited in a number of ways, most notably that it doesn’t support multiple-processor virtual machines – it is perfectly suitable for a workgroup/departmental deployment though. Having said that, things are changing – next year we will see Windows Server Virtualisation, and the management situation is improving with System Center Virtual Machine Manager (VMM).

…expect Microsoft to make a serious dent in VMware’s x86 virtualisation market dominance over the next couple of years

Throughout the day, Microsoft UK’s James O’Neill and Steve Lamb demonstrated a number of technologies for virtualisation on a Microsoft platform. The first scenario involved setting up a Virtual Server cluster, building the second node from a Windows Deployment Services (WDS) image (more on WDS later…) and using the Microsoft iSCSI target for shared storage (currently only available as part of Windows Storage Server, although there is a free alternative called Nimbus MySAN iSCSI Server) together with the Microsoft iSCSI initiator – included within Windows Vista and Server 2008 (and available for download for Windows 2000/XP/Server 2003).
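
For anyone wanting to reproduce the iSCSI part, the Microsoft iSCSI initiator includes a command line interface as well as the control panel applet; a sketch (the portal address and target IQN here are examples, not the ones used at the event):

    # Register the target portal, list the targets it offers and log in
    iscsicli QAddTargetPortal 192.168.0.10
    iscsicli ListTargets
    iscsicli QLoginTarget iqn.1991-05.com.microsoft:storage1-vsdisk-target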

When clustering Virtual Server, it’s important to understand that Microsoft’s step by step guide for Virtual Server 2005 R2 host clustering includes an appendix containing a script (havm.vbs) to add as a cluster resource in order to allow servers to behave well in a virtual cluster. Taking the script resource offline effectively saves the virtual machine (VM), allowing the cluster group to be moved to a new node; bringing the script back online restores the state of the VM.

After demonstrating building Windows Server 2008 Server Core (using WDS) and full Windows Server 2008 (from an .ISO image), James and Steve demonstrated VMM, the System Center component for server consolidation through virtual migration and virtual machine provisioning and configuration. Whilst the current version of VMM only supports Virtual Server 2005 and Windows Server Virtualisation, a future version will also support the management of XenSource and VMware virtual machines, providing a single point of management for all virtual machines, regardless of the platform.

At this point, it’s probably worth looking at the components of a VMM enterprise deployment:

  • The VMM engine server is typically deployed on a dedicated server, and managed from the VMM system console.
  • Each virtual server host has a VMM agent installed for communication with the VMM engine.
  • Library servers can be used to store templates, .ISO images, etc. for building the virtual infrastructure, with optional content replication using distributed file system replication (DFS-R).
  • SQL Server is used for storage of configuration and discovery information.
  • VMM uses a job metaphor for management, supporting administration from graphical (administration console), web (delegated provisioning) or command line interfaces – the command line interface is provided by the VMM extensions for Windows PowerShell, for which a cmdlet reference is available for download, and the GUI can display the equivalent PowerShell command for each action (see the sketch after this list).
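
As a flavour of that PowerShell interface, a minimal sketch based on the cmdlet reference (the snap-in and server names below are my assumptions – check the reference for your VMM version):

    # Load the VMM snap-in, connect to a VMM server and list the VMs it manages
    Add-PSSnapin 'Microsoft.SystemCenter.VirtualMachineManager'
    Get-VMMServer -ComputerName 'vmm01.example.com'
    Get-VM | Sort-Object Name | Format-Table Name, Status, HostName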

Furthermore, Windows Remote Management (WinRM/WS-Management) can be used to tunnel virtual machine management through HTTPS, allowing a virtual host to be remotely added to VMM.

VMM is currently available as part of an enterprise server management license; however it will soon be available in workstation edition, priced per physical machine.

The next scenario was based around workload management, migrating virtual machines between hosts (in a controlled manner). One thing that VMM cannot do is dynamically redistribute the workload between virtual server hosts – in fact Microsoft were keen to point out that they do not consider virtualisation technology to be mature enough to make the necessary technical decisions for automatic resource allocation. This is one area where my opinion differs – the Microsoft technology may not yet be mature enough (and many organisations’ IT operations processes may not be mature enough) but ruling out dynamic workload management altogether runs against the idea of creating a dynamic data centre.

It’s worth noting that there are two main methodologies for virtual machine migration:

  1. Quick migration requires shared storage (e.g. in a cluster scenario) with the saving of the VM state, transfer of control to another cluster node, and restoration of the VM on the new node. This necessarily involves some downtime but is fault tolerant with the main considerations being the amount of RAM in the VM and the speed at which this can be written to or read from the disk.
  2. Live migration is more complex (and will not be implemented in the forthcoming release of Windows Server Virtualization), involving copying the contents of the virtual machine’s RAM between two hosts whilst it is running. Downtime should be sub-second; however there is a requirement to schedule such a migration and it does involve copying the contents of the virtual machine’s memory across the network.

Some time ago, I wrote about using the Virtual Server Migration Toolkit (VSMT) to perform a physical to virtual (P2V) conversion. At that time, the deployment technology in use was Automated Deployment Services (ADS) but ADS has now been replaced with Windows Deployment Services (WDS), part of the Windows Automated Installation Kit (AIK). WDS supports image-based deployment using Windows imaging format (.WIM) files for installation and boot images, as well as legacy images (not really images at all, but RIS-style file shares), including support for pending devices (prestaged computer accounts based on the machine’s GUID). P2V capabilities are now included within VMM, with a wizard for gathering information about the physical host server and then converting it to a virtual format, including analysis of the most suitable host using a star system for host ratings based on CPU, memory, disk and network availability. At the time of writing, VMM supports P2V conversion, virtual to virtual (V2V) conversion from a running VM (strangely, Microsoft still refer to this as P2V) and V2V file format conversion and optimisation (from competing virtualisation products), but not virtual to physical (V2P) conversion (this may be possible using a Windows Vista System Restore, but there would be issues around hardware detection – success is more likely by capturing a virtual machine image in WDS and then deploying that to physical hardware). In addition, VMM supports creating template VMs by cloning a VM that is not currently running. It was also highlighted that removing a VM from VMM will actually delete the virtual machine files – not simply remove them from the VMM console.

The other components in the virtual machine management puzzle are System Center Operations Manager (a management pack is available for server health monitoring and management, performance reporting and analysis, including the ability to monitor both the host server workload and the VMs running on the server), System Center Configuration Manager (for patch management and software upgrades) and System Center Data Protection Manager (DPM), which allows for virtual machine backup and restoration as well as disaster recovery. DPM builds on Windows’ Volume Shadow Copy Service (VSS) technology to take snapshots of running applications, with agents available for Exchange Server, SharePoint, SQL Server and Virtual Server. Just like traditional backup agents, the DPM agents can be used within the VMs for granular backups, or each VM can be treated as a “black box” by running just the Virtual Server agent on the hosts and backing up entire VMs.

The final scenarios were all based around Windows Server Virtualization, including running Virtual Server VMs in a WSV environment. WSV is an extensive topic with a completely new architecture and I’ve wanted to write about it for a while but was prevented from doing so by an NDA. Now that James has taken the wraps off much of what I was keeping quiet about, I’ve written a separate post about WSV.

Finally, a couple of points worth noting:

  • When using WDS to capture an image for deployment to a VM, it’s still necessary to sysprep that machine.
  • Virtualisation is not a “silver bullet” – even though Windows Server Virtualisation on hardware that provides virtualisation assistance will run at near native speeds, Virtual Server 2005 is limited by factors of CPU speed, network and disk access and available memory that can compromise performance. In general, if a server is regularly running at ~60-75% CPU utilisation then it’s probably not a good virtualisation candidate – but many servers are running at less than 15% of their potential capacity (see the sketch after this list for one way to check).
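
On that last point, here is a quick way to sample a candidate server’s CPU utilisation – a sketch, noting that Get-Counter requires a later version of PowerShell than was current when this was written:

    # Average CPU utilisation over one minute (12 samples, 5 seconds apart)
    $samples = Get-Counter -Counter '\Processor(_Total)\% Processor Time' `
        -SampleInterval 5 -MaxSamples 12
    $values = $samples | ForEach-Object { $_.CounterSamples[0].CookedValue }
    ($values | Measure-Object -Average).Average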

Microsoft’s virtualisation technology has come a long way and I expect Microsoft to make a serious dent in VMware’s x86 virtualisation market dominance over the next couple of years. Watch this space!

Controlling Virtual Server 2005 R2 using Windows PowerShell

One of my jobs involves looking after a number of demonstration and line of business servers running (mostly) on Virtual Server 2005 R2. Because I’m physically located around 90 miles away from the servers and I have no time allocated to managing the infrastructure, I need to automate as much as possible – which means scripting. The problem is that my scripting abilities are best described as basic. I can write batch files and I can hack around with other people’s scripts – that’s about it – but I did attend a Windows PowerShell Fundamentals course a few weeks back and really enjoyed it, so I decided to write some PowerShell scripts to help out.

Virtual Server 2005 R2 has a Component Object Model (COM) API for programmatic control and monitoring of the environment (this is what the Virtual Server Administration web interface is built upon). For a quick introduction to this API, Microsoft has an on-demand webcast (recorded in December 2004), in which Robert Larson explains how to use the Virtual Server COM API to create scripts that automate tasks like virtual machine (VM) creation, configuration, enumeration and provisioning.

The Virtual Server COM API has 42 interfaces and hundreds of calls; however the two key interfaces are for virtual machines (IVMVirtualMachine) and the Virtual Server service (IVMVirtualServer). Further details can be found in the Programmers Guide (which is supplied with Virtual Server) and there is a script repository for Virtual Server available on the Microsoft TechNet website.

Because the scripting model is based on COM, developers are not tied to a specific scripting language. This means that, theoretically, Windows PowerShell can be used to access the COM API (although in practice, my PowerShell scripts for Virtual Server are very similar to their VBScript equivalents).

Every script using the Virtual Server COM API needs to instantiate the VirtualServer.Application object. In VBScript, this would mean calling:

Set objVS=CreateObject("VirtualServer.Application")

Because I want to use PowerShell, I have to do something similar; however there is a complication – as Ben Armstrong explains in his post on controlling Virtual Server through PowerShell, PowerShell is a Microsoft .NET application and as such does not have sufficient privileges to communicate with the Virtual Server COM interfaces. There is a workaround though:

  1. Compile the C# code that Ben supplies on his blog to produce a dynamic link library (.DLL) that can be used to impersonate the COM security on the required object (I initially had some trouble with this but everything was fine once I located the compiler). I placed the resulting VSWrapperForPSH.dll file in %userprofile%\Documents\WindowsPowerShell\
  2. Load the DLL into PowerShell using [System.Reflection.Assembly]::LoadFrom("$env:userprofile\Documents\WindowsPowerShell\VSWrapperForPSH.dll") > $null (I do this in my %userprofile%\Documents\WindowsPowerShell\profile.ps1 file, as Ben suggests in his follow-up post on PowerShell tweaks for controlling Virtual Server).
  3. After creating each object using the Virtual Server COM API (e.g. $vs=New-Object –com VirtualServer.Application –Strict), set the security on the object with [Microsoft.VirtualServer.Interop.PowerShell]::SetSecurity($vs). Again, following Ben Armstrong’s advice, I do this with a PowerShell script called Set-Security.ps1 which contains the following code:


    Param($object)
    [Microsoft.VirtualServer.Interop.PowerShell]::SetSecurity($object)

    Then, each time I create a new object I call set-security($objectname)

Having got the basics in place, it’s fairly straightforward to manipulate the COM objects in PowerShell and I followed Ben’s examples for listing registered VMs on a given host, querying guest operating system information and examining .VHD files. I then spent quite a lot of time writing a script which will output all the information on a given virtual machine but although it was an interesting exercise, I’m not convinced it has much value. What I did learn was that:

  • Piping objects through Get-Member can be useful for understanding the available methods and properties.
  • Where a collection is returned (e.g. the NetworkAdapters property on a virtual machine object), individual items within the collection can be accessed with .item($item) and a count of the number of items within a collection can be obtained with .count, for example:

    Param([String]$vmname)

    $vs=New-Object -com VirtualServer.Application -strict
    $result=Set-Security($vs)

    $vm=$vs.FindVirtualMachine($vmname)
    $result=Set-Security($vm)

    $dvdromdrives=$vm.DVDROMDrives
    $result=Set-Security($dvdromdrives)
    Write-Host $vm.Name "has" $dvdromdrives.count "CD/DVD-ROM drives"
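
Saved as a script (I’ll call it Get-VMDVDDrives.ps1 – the filename and VM name here are arbitrary), it is invoked with the VM name as a parameter, with output along these lines for a VM with a single drive:

    PS C:\> .\Get-VMDVDDrives.ps1 -vmname "W2K3TEST"
    W2K3TEST has 1 CD/DVD-ROM drives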

Of course, System Center Virtual Machine Manager (SCVMM) includes its own PowerShell extensions and therefore makes all of this work totally unnecessary but at least it’s an option for those who are unwilling or unable to spend extra money on SCVMM.

Windows Server Virtualization unwrapped

Last week, Microsoft released Windows Server 2008 Release Candidate 0 (RC0) to a limited audience and, hidden away in RC0 is an alpha release of Windows Server Virtualization (the two updates to apply from the %systemroot%\wsv folder are numbered 939853 and 929854).

I’ve been limited in what I can write about WSV up to now (although I did write a brief WSV post a few months back); however at yesterday’s event about creating and managing a virtual environment on the Microsoft platform (more on that soon) I heard most of what I’ve been keeping under wraps presented by Microsoft UK’s James O’Neill and Steve Lamb (and a few more snippets on Tuesday from XenSource), meaning that it’s now in the public domain and I can post it here (although I have removed a few of the finer points that are still under NDA):

  • Windows Server Virtualization uses a totally new architecture – it is not just an update to Virtual Server 2005. WSV is Microsoft’s first hypervisor-based virtualisation product. The hypervisor is approximately 1MB in size and is 100% Microsoft code (for reliability and security) – no third party extensions. It does no more than partition resources and provide access to hardware, and not opening the hypervisor to third parties provides protection against theoretical hyperjacking attacks such as the blue pill (where a rootkit is installed in the hypervisor and is practically impossible to detect).
  • WSV requires a 64-bit CPU and hardware assisted virtualisation (Intel VT or AMD-V) enabled in the BIOS (often disabled by default).
  • There will also be two methods of installation for WSV:
    • Full installation as a role on Windows Server 2008 (once enabled, a reboot “slides” the hypervisor under the operating system and it becomes virtualised).
    • Server core role for the smallest and most secure footprint (with the advantage of fewer patches to apply).
  • Initial (pre-release) builds require a full installation but the final release of WSV will run on Server Core.
  • The first installation becomes the parent, with subsequent VMs acting as children. The parent has elevated permissions. The host/guest relationship no longer applies with the hypervisor model; however if the parent fails, the children will also fail. This may be mitigated by clustering parents and using quick migration to fail children over to another node.
  • Emulated drivers are still available with wide support (440BX chipset, Adaptec SCSI, DEC Ethernet, etc.) but they have a costly performance overhead with multiple calls back and forth between parent and child and context switches from user to kernel mode. WSV also includes a synthetic device driver model with virtual service providers (VSPs) for parents and virtual service clients (VSCs) for children. Synthetic drivers require no emulation and interact directly with hardware assisted virtualisation, providing near-native performance. XenSource drivers for Linux will be compatible with WSV.
  • There will be no USB support – Microsoft see most USB demand as being for client virtualisation and, although USB support may be required for some server functions (e.g. smartcard authentication), it will not be provided in the initial WSV release.
  • Microsoft views memory paging as being of limited use and states that over-committing RAM (memory ballooning) is only of practical use in a test and development environment. Furthermore, it can actually reduce performance where applications/operating systems attempt to make full use of all available memory and therefore cause excessive paging between physical and virtual RAM. Virtual servers require the same volumes of memory and disk as their physical counterparts.
  • In terms of operating system support, Windows Vista and Server 2008 already support synthetic device drivers (with support being added to Windows Server 2003). In response to customer demand, Microsoft has worked with XenSource to provide a platform that will allow both Linux and Windows workloads to run with near native performance, through XenSource’s synthetic device drivers for Linux. Emulation is still available for other operating systems.
  • Virtual Server VMs will run in WSV as the VHD format is unchanged; however virtual machine additions will need to be removed and replaced with ICs (integration components) for synthetic drivers using the integration services setup disk (similar to virtual machine additions, but without emulation) to provide enlightenment for access to the VMbus.
  • Hot addition of resources is not included in the initial WSV release.
  • Live migration will not be included within the first WSV release but quick migration will be. The two technologies are similar but quick migration involves pausing a VM, writing RAM to a shared disk (saving state) and then loading the saved state into RAM on another server and restarting the VM – typically in around 10 seconds – whereas live migration copies the RAM contents between two servers using an iterative process until there are just a few dirty pages left, then briefly pausing the VM, copying the final pages, and restarting on the new host with sub-second downtime.
  • WSV will be released within 180 days of Windows Server 2008.