Thick, thin, virtualised, whatever: it’s how you manage the desktop that counts

This content is 15 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

In the second of my post-TechEd blog posts, I’ll take a look at one of the sessions I attended where Microsoft’s Eduardo Kassner spoke about various architectures for desktop delivery in relation to Microsoft’s vision for the Windows optimised desktop (CLI305). Again, I’ll stick with highlights in note form as, if I write up the session in full, it won’t be much fun to read!

  • Kassner started out by looking at who defines the desktop environment, graphing desktop performance against configuration control:
    • At the outset, the IT department (or the end user) installs approved applications and both configuration and performance are optimal.
    • Then the user installs some “cool shareware”, perhaps some other approved applications or personal software (e.g. iTunes) and it feels like performance has bogged down a little.
    • As time goes on, the PC may suffer from a virus attack, the organisation lacks an inventory of the installed applications, and the configuration is generally unknown. Performance suffers as a result of the unmanaged change.
    • Eventually, without control, update or maintenance, the PC becomes “sluggish”.
  • Complaints about desktop environments typically come down to: slow environment; application failures; complicated management; complicated maintenance; difficulty in updating builds, etc.
  • Looking at how well we manage systems: image management; patch management; hardware/software inventory; roles/profiles/personas; operating system or application deployment; and application lifecycle are all about desktop configuration. And the related processes are equally applicable to a “rich client”, “terminal client” or a “virtual client”.
  • Whatever the architecture, the list of required capabilities is the same: audit; compliance; configuration management; inventory management; application lifecycle; role based security and configuration; quality of service.
  • Something else to consider is that hardware and software combinations grow over time: new generations of hardware are launched (each with new management capabilities) and new operating system releases support alternative means of increasing performance, managing updates and configuration – in 2008, Gartner wrote:

    “Extending a notebook PC life cycle beyond three years can result in a 14% TCO increase”

    [source: Gartner, Age Matters When Considering PC TCO]

    and a few months earlier, they wrote that:

    “Optimum PC replacement decisions are based on the operating system (OS) and on functional compatibility, usually four years”

    [source: Gartner, Operational Considerations in Determining PC Replacement Life Cycle]

    Although when looking across a variety of analyst reports, three years seems to be the optimal point (there are some variations depending on the considerations made, but the general window is 2-5 years).

  • Regardless of the PC replacement cycle, the market is looking at two ways to “solve” the problem of running multiple operating system versions on multiple generations of hardware: “thin client” and “VDI” (also known as hosted virtual desktops) – but Kassner does not agree that these technologies alone can resolve the issues:
    • In 1999, thin client shipments were 700,000 against a market size of 133m PCs [source: IDC 1999 Enterprise Thin Client Year in Review] – around 0.5% of the worldwide desktop market.
    • In 2008, thin clients accounted for 3m units out of an overall market of 248m units [source: Gartner, 2008 PC Market Size Worldwide] – around 1.2% of the market: growth, but still a very tiny proportion.
    • So what about the other 98.8% of the market? Kassner used 8 years’ worth of analyst reports to demonstrate that the TCO for a well-managed traditional desktop client and a Windows-based terminal was almost identical – although considerably lower than for an unmanaged desktop. The interesting point was that in recent years the analysts stopped referring to the different architectures and just compared degrees of management! Then he compared VDI scenarios, showing that there was a 10% variance in TCO between a VDI desktop and a wide-open “regular desktop”, but when that desktop was locked down and well-managed the delta was only 2% – and that 2% saving is not enough to cover the setup cost of a VDI infrastructure! Kassner did stress that he wasn’t saying VDI was no good at all – just that it is not for everyone, and that a similar benefit can be achieved by simply virtualising the applications:
    • “Virtualized applications can reduce the cost of testing, packaging and supporting an application by 60%, and they reduced overall TCO by 5% to 7% in our model.”

      [source: Gartner, TCO of Traditional Software Distribution vs. Application Virtualization]

  • Having argued that thick vs. thin vs. VDI makes very little difference to desktop TCO, Kassner continued by commenting that the software plus services platform provides more options than ever, with access to applications from traditional PC, smartphone and web interfaces and a mixture of corporately owned and non-corporate assets (e.g. employees’ home PCs, or offshore contractor PCs). Indeed, application compatibility drives client device options and this depends upon the supported development stack and presentation capabilities of the device – a smartphone (perhaps the first example of IT consumerisation – and also a “thin client” device in its own right) is an example of a device that provides just a subset of the overall feature set and so is not as “forgiving” as a PC – one size does not fit all!
  • Kassner then went on to discuss opportunities for saving money with rich clients; but his summary was that it’s still a configuration management discussion:
    • Using a combination of group policy, a corporate base image, data synchronisation and well-defined security policies, we can create a well-managed desktop.
    • For this well-managed desktop, whether it is running on a rich client, a remote desktop client, with virtualised applications, using VDI or as a blade PC, we still need the same processes for image management, patch management, hardware/software inventory, operating system or application deployment, and application lifecycle management.
    • Once we can apply the well-managed desktop to various user roles (e.g. mobile, office, or task-based workers) on corporate or non-corporate assets, we can say that we have an optimised desktop.
  • Analysts indicate that “The PC of 2012 Will Morph Into the Composite Work Space” [source: Gartner], combining client hypervisors, application virtualisation, persistent personalisation and policy controls: effectively separating the various components for hardware, operating system and applications.  Looking at Microsoft’s view on this (after all, this was a Microsoft presentation!), there are two products to look at – both of which are Software Assurance benefits from the Microsoft Desktop Optimization Pack (MDOP) (although competitive products are available):
    • Application virtualisation (Microsoft App-V or similar) creates a package of an application and streams it to the desktop, eliminating the software installation process and isolating each application. This technology can be used to resolve conflicts between applications as well as to simplify application delivery and testing.
    • Desktop virtualisation (MED-V with Virtual PC or similar) creates a container with a full operating system environment to resolve incompatibility between applications and an alternative operating system, running two environments on the same PC (and, although Eduardo Kassner did not mention this in his presentation, it’s this management of multiple environments that creates a management headache without suitable toolsets – which is why I do not recommend Windows 7 XP Mode for the enterprise).
  • Having looked at the various architectures and their (lack of) effect on TCO, Kassner moved on to discuss Microsoft’s strategy.
    • In short, dependencies create complexity, so by breaking apart the hardware, operating system, applications and user data/settings the resulting separation creates flexibility.
    • Using familiar technologies: we can manage the user data and settings with folder redirection, roaming profiles and group policy (see the sketch below); we can separate applications using App-V, RemoteApps or MED-V; and we can run multiple operating systems (although Microsoft has yet to introduce a client-side hypervisor, or a solution capable of 64-bit guest support) on a variety of hardware platforms (thin, thick, or mobile) – creating what Microsoft refers to as the Windows Optimized Desktop.
    • Microsoft’s guidance is to take the processes that produce a well-managed client to build a sustainable desktop strategy, then to define a number of roles (real roles – not departments, or jobs – e.g. mobile, office, anywhere, task, contract/offshore) and select the appropriate distribution strategy (or strategies). To help with this, there is a Windows Optimized Desktop solution accelerator (soon to become the Windows Optimized Desktop Toolkit).
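As an aside from me (not from Kassner’s session), the “well-managed” state described above is easy to spot-check from PowerShell. A minimal sketch, where the choice of checks is my own illustration:

```powershell
# Illustrative checks only. Report which group policy objects applied at the
# last refresh, and whether the user's Documents folder has been redirected
# away from the local profile (a simple marker of managed user state).

# Summary of applied GPOs for the computer
gpresult /r /scope:computer

# A redirected Documents folder will not live under the local user profile
$docs = [Environment]::GetFolderPath('MyDocuments')
if ($docs -like "$env:USERPROFILE*") {
    Write-Output "Documents is local ($docs) - folder redirection not in effect"
} else {
    Write-Output "Documents is redirected to $docs"
}
```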

There’s quite a bit more detail in the slides but these notes cover the main points. However you look at it, the architecture for desktop delivery is not that relevant – it’s how it’s managed that counts.

Getting ready to deploy Windows 7 on the corporate desktop

This content is 15 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

With Windows 7 (and Server 2008 R2) now released to manufacturing and availability dates published, what does this really mean for companies looking to upgrade their desktop operating system? I’ve previously written about new features in Windows Server 2008 R2 (part 1 and part 2) but now I want to take a look at the Windows client.

Whilst I still maintain that Windows Vista was not as bad as it was made out to be (especially after service pack 1, which contained more driver resolutions and compatibility updates than security fixes), it was a classic case of “mud sticks” and, in the words of one Microsoft representative at a public event this week:

“Windows Vista maybe wasn’t as well received as [Microsoft] had hoped.”

The press was less harsh on Windows Server 2008 (which is closely related to Vista) but, with the new releases (Windows 7 and Windows Server 2008 R2), reaction from the IT press and from industry analysts has been extremely positive. In part, that’s because Windows 7 represents a “minor” update. By this I mean that, whilst Vista had deep changes (which contributed to its unpopularity) with new models for security, drivers, deployment and networking, Windows 7 continues with the same underlying architecture (so most software that runs on Vista will run on 7 – the exceptions are products that are deeply integrated with the operating system, such as security products – and hardware that runs Vista well will run Windows 7 well).

Indeed, under Steven Sinofsky‘s watch, Microsoft has followed a new approach to development and disclosure for Windows 7, including:

  • Increased planning – analysing trends and needs before building features.
  • Providing customers and partners with predictability – a new operating system every 3 years.
  • Working on the ecosystem – with early partner engagement (ISVs and IHVs have plenty of time to get ready – including a program for ISVs to achieve a “green light” for application compatibility – and the other side of the coin, for those of us looking for suitable hardware and software, is the Ready Set 7 site).

Having said that Windows 7 is a minor update, it does include some major improvements. Indeed, some might say (I believe that Mark Russinovich was one of them) that if you go back to a previous product version and miss the features, then it was a major release. In no particular order, here are some of the features that Microsoft is showing off for Windows 7 (there are many more too):

  • Superbar amalgamates the previous functions of the Taskbar and the Quicklaunch bar and includes larger icons to accommodate touch screen activities (Windows 7 includes multitouch support).
  • Live preview of running applications (not just when task switching but from the superbar too).
  • Jumplists – right-click a superbar icon to pin it to the superbar (this works for individual files too).
  • No more Windows sidebar – gadgets can be anywhere on the desktop and are isolated from one another so if they crash they do not impact the rest of system.
  • Aero user interface improvements: Aero Peek to quickly look at the desktop; Aero Snap to quickly arrange windows, such as when comparing and contrasting document contents; Aero Shake to minimise all other open windows.
  • The ability to cut and paste from document previews.
  • The ability to deploy a single, hardware agnostic image for all PCs.
  • Group policy improvements to control USB device usage (no more epoxy resin to glue up USB ports!).
  • BitLocker To Go – encrypt the contents of USB sticks, including the ability to read the contents from downlevel operating systems based on a one-time password.
  • Integrated search shows where results come from too (e.g. Programs, OneNote, Outlook, etc.) and only indexes in quiet time. Search Federation extends this to include SharePoint sites and other corporate resources.
  • DirectAccess – point-to-point authenticated access to corporate resources (e.g. intranet sites) from anywhere, including intelligent routing to identify corporate traffic and separate it from Internet-bound traffic, avoiding sending all traffic across a VPN.
  • BranchCache – locally cache copies of files, and share on a peer-to-peer basis (or, as my colleague Dave Saxon recently described it, “Microsoft’s version of BitTorrent”).
  • AppLocker – create whitelists or blacklists of approved software, including versions (there’s a short example after this list).
  • Problem Steps Wizard – record details of problems and send the results for diagnosis, or use to create walkthrough guides, etc.
  • Action Center – one stop shop for PC health.
  • User Account Control (UAC) warnings reduced.
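On the AppLocker item above: Windows 7 ships with a PowerShell module that can generate rules from an existing installation. A hedged sketch, with illustrative paths rather than a deployable policy:

```powershell
# Sketch of AppLocker whitelisting on Windows 7 (paths are illustrative).
Import-Module AppLocker

# Collect publisher and hash information for the executables we trust
$files = Get-AppLockerFileInformation -Directory 'C:\Program Files' -Recurse -FileType Exe

# Build publisher rules (falling back to hash rules for unsigned files)
# for the Everyone group; -Optimize merges similar rules
$policy = New-AppLockerPolicy -FileInformation $files -RuleType Publisher, Hash -User Everyone -Optimize

# Test the draft policy against an application before applying it
# (Set-AppLockerPolicy would apply it for real)
$policy | Test-AppLockerPolicy -Path 'C:\Windows\System32\calc.exe' -User Everyone
```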

All of this is nice but, faced with the prospect of spending a not-inconsiderable sum of money on an operating system upgrade, features alone are probably not enough! So, why should I deploy a new Windows operating system? Because, for many organisations, the old one (and I mean Windows XP, not Vista) is no longer “good enough”. It’s already on extended support, lacks some features that are required to support modern ways of working, was designed for an era when security was less of a concern and will be retired soon. So, if I’m an IT manager looking at a strategy for the desktop, my choices might include:

  • Do nothing. Possible, but increasingly risky once the operating system stops receiving security updates and manufacturers stop producing drivers for new hardware.
  • Stop using PCs and move to server based computing? This might work in some use cases, but it’s unlikely to be a universal solution for reasons of mobility and application compatibility.
  • Move to a different operating system – maybe Linux or Mac OS X? Both of these have their relative merits but, deep down, Windows, Linux and Mac OS X all provide roughly the same functionality and if moving from XP to Vista was disruptive from an application compatibility standpoint, moving to a Unix-based OS is likely to be more so.
  • Deploy a new version of Windows – either Vista (which is not a bad way to get ready for 7) or 7.
  • Wait a bit longer and deploy Windows 8. That doesn’t leave a whole lot of time to move from XP and the transition is likely to be more complex (jumping forward by three operating system releases).

Assuming I choose to move to Windows 7, there are several versions available but, unlike with Vista, each is a superset of the features in the version below (and Enterprise/Ultimate are identical – just targeted at different markets). For businesses, there are only two versions that are relevant: Professional and Enterprise – and Enterprise is only available as a Software Assurance (SA) benefit. If you don’t have a suitable volume licensing agreement, Professional is the only real choice (saving money by buying Home Premium is unlikely to be cost-effective as it lacks functionality like the ability to join a domain, or licensing support for virtualisation – and purchasing Ultimate Edition at full packaged product price is expensive).

There are some Enterprise/Ultimate features that are not available in the Professional Edition, most notably DirectAccess, BranchCache, Search Federation, BitLocker, BitLocker To Go, and AppLocker. Some of these also require a Windows Server 2008 R2 back end (e.g. DirectAccess and BranchCache).

In Europe, things are a little more complicated – thanks to the EU – and we’re still waiting to hear the full details of what that means (e.g. can an organisation deploy a build based on E Edition outside Europe, or deploy a build within the EU based on a “normal” edition sourced from outside Europe, and remain supported).

The other variant is 32- or 64-bit. With the exception of some low-end PCs, almost every PC that we buy today is 64-bit capable, 64-bit drivers are available for most devices (I’ve had no problems getting 64-bit drivers for the Windows 7 notebook that I use every day) and many 32-bit applications will run on a 64-bit platform. Having said that, if all the PCs you buy have between 2 and 4GB of RAM, then there is not a huge advantage. If you are looking to the future, or running applications that can use additional RAM (on hardware that can support it), then 64-bit Windows is now a viable option. Whilst on the subject of hardware, if you are considering Windows XP Mode as a possible application compatibility workaround, then you will also need hardware virtualisation support and hardware DEP. Steve Gibson’s SecurAble utility is a handy piece of freeware to check that the necessary features are supported on your hardware.
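For the 64-bit part of that question, WMI can give a quick answer (it can’t reliably report the hardware virtualisation state on Windows 7, which is exactly where SecurAble comes in). A rough sketch:

```powershell
# Sketch: check 64-bit capability and DEP availability via WMI. Hardware
# virtualisation (Intel VT/AMD-V) still needs a tool such as SecurAble
# or a look in the BIOS.
$cpu = Get-WmiObject Win32_Processor | Select-Object -First 1
$os  = Get-WmiObject Win32_OperatingSystem

# DataWidth is what the processor supports; AddressWidth is what the
# running operating system uses
if ($cpu.DataWidth -eq 64) {
    Write-Output "CPU is 64-bit capable (running $($cpu.AddressWidth)-bit Windows)"
} else {
    Write-Output "CPU is 32-bit only"
}
Write-Output "Hardware DEP available: $($os.DataExecutionPrevention_Available)"
```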

Whilst on the subject of virtualisation, there are four options (from Microsoft – third party solutions are also available):

  • The much-hyped Windows XP Mode. Great for small businesses but lacks the management tools for enterprise deployment and beware that each virtual machine will also require its own antivirus and management agents – which may be potentially expensive if it’s just to run one or two applications that should really be dragged kicking and screaming into the 21st century.
  • Microsoft Enterprise Desktop Virtualisation (MED-V). This is the former Kidaro product and appears to be a good solution for running legacy applications isolated at the operating system level but it still involves managing a second operating system instance and is part of the Microsoft Desktop Optimisation Pack (MDOP) so is only available to customers with SA.
  • Microsoft Application Virtualization (App-V). A popular solution for application-level isolation but requires applications to be repackaged (with consequential support implications) and also only available as part of MDOP.
  • Virtual desktop infrastructure (VDI). Whilst the concept may initially appear attractive, it’s not an inexpensive option (and without careful management may actually increase costs), Microsoft’s desktop broker (Remote Desktop Services) is new in Windows Server 2008 R2 and, crucially for partners, there is no sensible means of licensing this in a managed service context.

The main reason for highlighting virtualisation options in a Windows 7 post is that Windows XP Mode is being held up as a great way to deal with application compatibility issues. It is good but it’s also worth remembering that it’s a sticking plaster solution and the real answer is to look at why the applications don’t work in the first place. Which brings me onto application compatibility.

Even for those of us who are not developers, there are three ways to approach application compatibility in Windows 7:

  • Windows 7’s Program Compatibility wizard can be used to make simple changes to an application’s configuration and make it work (e.g. skip a version check, run in compatibility mode, etc.) – the same compatibility modes can also be set directly in the registry, as sketched after this list.
  • Application Compatibility Toolkit (ACT) 5.5 contains tools and documentation to evaluate and mitigate application compatibility issues for Windows Vista, Windows 7, Windows Update, or Windows Internet Explorer (e.g. shims to resolve known issues) – there are also third party tools from companies like ChangeBASE.
  • Windows XP Mode. For those applications that simply refuse to run on Windows 7 but certainly not a solution for organisations trying to shoehorn Windows 7 onto existing hardware and upgrade at minimal cost.
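On that first item: the wizard stores its compatibility modes as shim “layers” in the registry, so they can be scripted too. A sketch with a hypothetical application path (WINXPSP3 and RUNASADMIN are documented layer names):

```powershell
# Apply Windows XP SP3 compatibility mode (plus elevation) to a legacy
# application by writing the shim layer directly.
$app = 'C:\LegacyApps\oldapp.exe'   # hypothetical path
$key = 'HKCU:\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers'

if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
New-ItemProperty -Path $key -Name $app -Value 'WINXPSP3 RUNASADMIN' `
    -PropertyType String -Force | Out-Null
```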

After deciding what to move to, deployment is a major consideration. The Microsoft Deployment Toolkit (MDT) and Windows Automated Installation Kit (WAIK) have both been updated for Windows 7 and can be used together to deploy a fresh operating system installation together with applications and migrate the user data. There is no in-place upgrade path for Windows XP users (or for Windows 7 customers in Europe) and I was amazed at the number of Microsoft partners in the SMB space who were complaining about this at a recent event; a clean installation is the preferred choice for many organisations, allowing a known state to be achieved and avoiding problems when each PC is slightly different from the next and has its own little nuances.

I think I’ve covered most of the bases here: some of the new features; product editions; hardware and software requirements; application compatibility; virtualisation; deployment. What should be the next steps?

Well, firstly, although the release candidate will work through to June next year, wait a couple of weeks and get hold of the RTM bits. Then test, test, and test again before deploying internally (to a select group of users) and start to build skills in preparation for mass deployment.

As for the future – Microsoft has publicly committed to a new client release every 3 years (it’s not clear whether server releases will remain on a 2 year major/minor schedule) so you should expect to see Windows 8 around this time in 2012.

Windows 7 XP Mode and Windows Virtual PC: How it works

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

For the last couple of weeks the news sites have been full of speculation and gossip about what is now referred to as Windows 7 XP Mode and Windows Virtual PC. Most of the reporting so far has focused on the high level concepts and, as the beta went live to millions of Windows 7 release candidate testers, this post attempts to give a little more detail about how Windows XP Mode works.

Before diving in to the technology, let’s have a look at why Microsoft felt the need to provide this functionality. Their vision is to drive the overall adoption of Windows 7 by eliminating legacy application compatibility issues in the enterprise, mid-market, and small-medium business sectors.

This is not a developer workstation solution; nor is it for consumers. It’s basically providing functionality to run legacy applications “seamlessly”, meaning that a typical end user will be unaware that their application is actually running virtualised. It draws heavily from MED-V in that IT administrators create a pre-configured virtual machine with a legacy operating system and applications to run isolated from the host operating system; however, unlike Microsoft Enterprise Desktop Virtualization (MED-V), which remains part of the Microsoft Desktop Optimisation Pack, Windows XP Mode will be more broadly available. In an interview with Mary Jo Foley, Microsoft’s Scott Woodgate gave the following description as to the differences between the two products:

Top-level answer:

  • Windows XP Mode is designed to support SMB customers who do not use management infrastructure and need to run Windows XP applications on their Windows 7 desktops.
  • MED-V is designed for larger organizations who use management infrastructure and need to deploy a virtual Windows XP environment on Windows Vista or Windows 7 desktops.

He then continued with the following details:

Windows 7 XP Mode with Windows Virtual PC:

  • Designed to help small business users run their Windows XP applications on their Windows 7 desktop.
  • Available as part of Windows 7 Professional, Enterprise and Ultimate editions.
  • Enables users to launch many older applications seamlessly in a virtual Windows XP environment from the Windows 7 Start Menu.
  • Includes support for USB devices and is based on a new core that includes multi-threading support.
  • Is best experienced on new PCs from OEMs but will also be available for customers as a separate download.

Microsoft Enterprise Desktop Virtualization (MED-V):

  • Designed for IT professionals.
  • Enables Virtual PC deployment in larger organizations.
  • Provides important centralized management, policy-based provisioning and virtual image delivery to reduce the cost of Virtual PC deployment.
  • Is part of the Microsoft Desktop Optimization Pack (MDOP).
  • v1 builds on Microsoft Virtual PC 2007 to help enterprises with their upgrade to Windows Vista when applications are not yet compatible. v2 will add support for enterprises upgrading to Windows 7 (both 32-bit and 64-bit) and will support Windows Virtual PC on Windows 7.
  • v2 beta will be available within 90 days of Windows 7 general availability.

More information may be found in Microsoft’s Windows XP Mode press release.

To enable Windows XP Mode, Microsoft has produced a new version of Virtual PC – Windows Virtual PC (VPC) 7 – a client-side virtualisation product that runs on Windows 7 (32- or 64-bit versions). As Jason Perlow describes, it’s not using a type-1 (native/bare metal) hypervisor like Hyper-V (sadly, as a client-side virtualisation product based on Hyper-V would be great for developer workstations) but instead uses a type-2 (hosted) hypervisor model. Unlike previous versions of Virtual PC though, the new version requires hardware assisted virtualisation capabilities (AMD-V or Intel VT), which are present in many recently-purchased PCs (even if switched off in the BIOS).

Officially, VPC7 only supports Windows XP, Vista and 7 guests but, just like earlier versions of Virtual PC, there is the option of using emulated hardware so it’s still possible to run other operating systems – it’s just not supported. It’s also worth noting that not all Windows Vista and 7 SKUs are supported in a virtual environment. Something else that’s not supported is the ability to run 64-bit or multi-processor guest operating systems, nor is snapshotting. And, because the virtualisation components are incompatible, there’s still no support for moving virtual machines between Hyper-V and VPC7 either. I’ve already been fairly vocal in my feedback to the product team on this; their response is that the priority is on application compatibility (and, on that basis, I can see the reasons for concentrating on single-processor 32-bit Windows XP support) but continuing to maintain incompatibility between client and server virtualisation platforms seems a little strange to me.

VPC7 features include:

  • Desktop mode – enhancing the traditional Virtual PC functionality using Terminal Services technologies (e.g. for drive redirection and smartcard sharing, as well as video improvements that enable large resolutions), whilst still maintaining the functionality for virtual machine windows to support arbitrary resolutions. For those applications that experience issues working through the Terminal Services drive redirection etc., it is possible to disable integration features, after which Virtual PC will operate as previous versions did.
  • Seamless application mode – allowing virtual applications to use Terminal Services application remoting capabilities (RemoteApp) to appear as though they are running locally. Applications retain the chrome of the guest operating system rather than the Windows 7 host but, to all intents and purposes, they are integrated with the native desktop.
  • Tight Windows shell integration and a simplified user interface. In the same way that Windows has special folders for Pictures, Music, etc. a Virtual Machines folder is provided, with Windows Explorer integration for creation of virtual machines and editing virtual machine settings (no more Virtual PC Console). Where a supported operating system is used, applications in the virtual machine may be published to the host’s Start Menu and there’s also integration with the Windows 7 superbar. By default, all new applications installed in the virtual machine (whilst running in full desktop mode) are published to the Windows 7 Start Menu (each virtual machine has its own folder) but this can be disabled if required; however, publishing applications that are built into Windows XP (Internet Explorer, Outlook Express, etc.) requires some registry editing (see the sketch after this list).
  • Full USB support is available for supported operating systems: for any USB device where both host and guest drivers are available there is integrated USB support, whilst devices without Windows 7 drivers are redirected to and controlled from the guest operating system. Microsoft also advises that certain device types (e.g. mass storage, printers and smart cards) should not be directly connected to the virtual machines and are better redirected using the Terminal Services functionality built into Virtual PC.
  • A simplified virtual machine creation process with three steps: name and location (remembering the last used location); memory and networking options; disk settings (dynamically expanding by default, or optionally launching a wizard for other disk types such as fixed sized and differencing disks). Once built, new hard disks can be added in the virtual machine settings and control over undo disks is also moved to the virtual machine settings. Other new virtual machine settings relate to integration features, logon credentials and auto publishing.
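On the registry editing needed to publish the built-in Windows XP applications: my understanding from the beta is that auto-publishing is driven by shortcuts in the guest’s All Users Start Menu, with built-in applications suppressed via an exclusion list in the registry. The key name below is what I’ve seen reported and should be verified before relying on it:

```powershell
# Run inside the Windows XP guest. The VPCVAppExcludeList key name is my
# (unverified) reading of the beta; removing an entry should let a shortcut
# in "C:\Documents and Settings\All Users\Start Menu" publish to the host.
# XP doesn't ship PowerShell by default, so reg.exe does the work here.
reg query "HKLM\SOFTWARE\Microsoft\Virtual Machine\VPCVAppExcludeList"
reg delete "HKLM\SOFTWARE\Microsoft\Virtual Machine\VPCVAppExcludeList" /v "Internet Explorer" /f
```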

From a technical standpoint, there are three main VPC7 processes to be aware of (the snippet after this list shows one way to watch them at work):

  • vpc.exe is the base process for Virtual PC functionality.
  • vmsal.exe is the seamless application launcher, which waits for a new application request and launches it. Once the application is closed it sets a timer before saving the VM state and exiting. This means that, when the application is closed, the virtual machine is kept up for a few minutes in case the user launches an application that requires it but after a short while it will be put into a saved state. In addition, because undo disk settings are managed within the virtual machine settings, logging off/shutting down/hibernation is handled as normal, with no virtual machine prompts about undo disks and saving state.
  • vmwindow.exe is launched when VPC is not running in integrated mode.
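A trivial way to make that lifecycle visible is to poll for the three processes while launching and closing a published application – a quick sketch:

```powershell
# Poll for the three Windows Virtual PC processes every few seconds; vmsal
# should appear when a published application launches, and vpc should stay
# up for a few minutes after the application closes before the VM saves state.
while ($true) {
    Get-Process vpc, vmsal, vmwindow -ErrorAction SilentlyContinue |
        Format-Table Name, Id, StartTime -AutoSize
    Start-Sleep -Seconds 5
}
```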

VPC7 will not run on the Windows 7 beta (build 7000) as it requires the RC (or one of the interim builds – I’ve seen it running on builds 7057 and 7068). I haven’t tried this personally but I’m told that it cannot be installed on Windows Server 2008 R2 either; however something similar is possible with Hyper-V by installing the Terminal Services Remote Applications Integrated Locally (RAIL) components (RemoteApp). Certain Windows 7 editions will include the Windows XP virtual machine, so there is no requirement to build a separate Windows XP image.

Architecturally, VPC looks like a hybrid between Virtual Server and Hyper-V: it uses the Virtual Server engine, including a scriptable COM interface for VM automation (and the security model has been modified so it can be called from PowerShell without needing to make security interoperability tweaks); it also uses the VSP/VSC/VPCBus model from Hyper-V; and it integrates RAIL components from Terminal Services but, because the Terminal Services technologies for integrated applications and enhanced desktop support run over the VPCBus, connectivity is available even if there is no network communication between the guest and the host. Because it’s built on the Virtual Server/Virtual PC codebase, VPC7 is limited to 4GB of RAM and 128GB VHDs.
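To illustrate that scriptability – with the caveat that the ProgID and property names here are my assumptions based on the Virtual Server lineage, a sketch rather than a tested recipe:

```powershell
# Assumed ProgID and properties - based on the Virtual Server COM model,
# not verified against the VPC7 beta documentation.
$vpc = New-Object -ComObject VirtualPC.Application

# Enumerate registered virtual machines and their current state
foreach ($vm in $vpc.VirtualMachines) {
    Write-Output ("{0} : state {1}" -f $vm.Name, $vm.State)
}
```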

Windows 7 XP Mode and Windows Virtual PC form a neat solution for application compatibility in Windows 7, drawing on established MED-V (from the Kidaro acquisition), Terminal Services (through partnership with Citrix) and Virtual PC/Virtual Server (formerly from Connectix) technologies. They are very much a point solution for application compatibility though, and Microsoft still does not have a decent client-side virtualisation solution for high-end users (developer workstations, IT professionals with several desktop variants, etc.). Whether this is enough to allay concerns from Microsoft’s customers who baulked at a move to Vista as a result of the application compatibility issues is yet to be seen but, with the general perception of Windows 7 riding high, this might be just the insurance policy that IT managers want to ensure that legacy applications continue to function. My main concerns with this solution are support (Windows XP is still approaching end of life – and legacy applications may not be supported in a virtual environment), the overall complexity of the solution (however much it’s hidden from the end user, there are still two operating systems running on the hardware) and performance (virtualisation typically increases memory and CPU requirements – together with the need for hardware assisted virtualisation, this is certainly not a solution for legacy PCs). Whatever the situation, I’m sure there will be plenty more written on this topic over the coming months.

Windows 7 “XP Mode”

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Last week was a frustrating one… you see, earlier this month Paul Thurrott gave a hint about an exciting Windows 7 secret. I put 2 and 2 together and it seems that I came up with 4. The trouble was that I was given the details from an official source at around the same time – and that information was under NDA so I couldn’t write about it here!

It’s times like this that I’m glad I’m not running a news site and waiting for a “scoop”, but all three of the leading tech journalists covering Windows (i.e. Paul Thurrott, Ed Bott and Mary Jo Foley) have written articles in the last few days about Windows 7 XP Mode and Windows Virtual PC, and I want to pull things together here.

Basically, Paul Thurrott and Rafael Rivera are reporting that there will be a new version of Virtual PC, available as a download for Windows 7, including a licensed copy of Windows XP SP3 to run those applications that don’t behave well on the Vista/Windows 7 codebase. More details will follow (it won’t actually be “in the box” with Windows 7) but Ed Bott has commented that it looks an awful lot like MED-V.

Of course, the technology is already there – as well as drawing comparisons with MED-V, Ed Bott points out that you can do something similar with VirtualBox in seamless mode and the key detail with Windows XP Mode is the licensing situation. Full licensing details have yet to be announced but the only Microsoft blog post I’ve seen on the subject says:

“We will be soon releasing the beta of Windows XP Mode and Windows Virtual PC for Windows 7 Professional and Windows 7 Ultimate”

That reference to Professional and Ultimate would also indicate that it will run on Enterprise (virtually identical to Ultimate), but not Starter, Home Basic or Home Premium. As Microsoft’s main concern is allowing businesses to run legacy applications as they are weaned off XP, that seems fair enough but, then again, MED-V is only available to volume license customers today and Mary Jo Foley suggests that could be the same for XP Mode – I guess we’ll just have to wait and see.

So, will this work? I hope so. Windows Vista (after SP1) was never as bad as its perception in the marketplace indicated but, if ever you needed an example that perception is reality, then Vista was it! Strangely, Windows Server 2008 (the server edition of Vista SP1) has been well received as the solid, reliable operating system that it is, without the negative press. Windows 7 is a step forward in many ways and, as XP is now into its extended support phase, many organisations will be looking for something to move to. The application compatibility issues caused by Windows Vista and Windows 7’s improved security model will still cause a few headaches – that’s what this functionality is intended to overcome, although there will still be some testing required to see how well those old XP apps perform in a virtualised environment.

More technical details will follow soon, but either Paul Thurrott and Rafael Rivera are operating on a different NDA to me (which they may well be) or they feel pretty confident that Microsoft will still give them access to information as they continue to spill the beans on this particular feature…

Free Microsoft Virtualization eBook from Microsoft Press

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Every now and again, Microsoft Press makes free e-books available. I just missed out on the PDF version of the Windows Vista Resource Kit as part of the Microsoft Press 25th anniversary (the offer was only valid for a few days and it expired yesterday… that’s what happens when I don’t keep on top of my e-mail newsletters) but Mitch Tulloch’s book on Understanding Microsoft Virtualization Solutions is also available for free download (I don’t know how long for though… based on previous experience, that link won’t be valid for long).

This book covers Windows Server 2008 Hyper-V, System Center Virtual Machine Manager 2008, Microsoft Application Virtualization 4.5 (App-V), Microsoft Enterprise Desktop Virtualization (MED-V), and Microsoft Virtual Desktop Infrastructure. If you’re looking to learn about any of these technologies, it would be a good place to start.

Microsoft Virtualization: part 3 (desktop virtualisation)

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Before the weekend, I started a series of posts on the various technologies that are collectively known as Microsoft Virtualization. So far, I’ve looked at host/server virtualisation and in this post, I’ll look at the various forms of desktop virtualisation that Microsoft offers.

Whilst VMware have a virtual desktop infrastructure (VDI) solution built around Virtual Infrastructure (VI), Microsoft’s options for virtualising the desktop are more varied – although it should be noted that they do not yet have a desktop broker and recommend partner products such as Citrix XenDesktop or Quest vWorkspace (formerly Provision Networks Virtual Access Suite). With Hyper-V providing the virtualisation platform, and System Center Virtual Machine Manager, Configuration Manager and Operations Manager providing management of virtualised Vista clients, this is what some people at Microsoft have referred to as Microsoft VDI (although that’s not yet an official marketing concept).

Licensed by access device (PC or thin client) with the ability to run up to four virtual operating system instances per license, the Vista Enterprise Centralized Desktop (VECD) is actually platform agnostic (i.e. VECD can be used with VMware, Xen or other third-party virtualisation solutions). VECD is part of the Microsoft Desktop Optimization Pack (MDOP) and so requires a Software Assurance (SA) subscription.

With a broker to provide granular authentication and support for the Citrix Independent Computing Architecture (ICA) protocol (for better multimedia support than the Remote Desktop Protocol), users can connect to a Windows Vista desktop from any suitable access device.

To access this virtualised infrastructure there are a number of options – from thin-client terminal devices to Windows Fundamentals for Legacy PCs (WinFLP) – an operating system based on Windows XP Embedded and intended for use on older hardware. WinFLP is not a full general purpose operating system, but provides suitable capabilities for security, management, document-viewing and the Microsoft .NET Framework, together with RDP client support and the ability to install other clients (e.g. Citrix ICA). Running on old or low-specification hardware, WinFLP is an ideal endpoint for a VDI but it is a Software Assurance benefit – without SA, the closest alternative is to strip down/lock down Windows XP.

VDI is just one part of the desktop virtualisation solution though – since Microsoft’s purchase of Connectix in 2003, Virtual PC has been available for running virtualised operating system instances on the desktop. With the purchase of Kidaro in March 2008, Microsoft gained an enterprise desktop virtualisation solution, which has now become known as Microsoft Enterprise Desktop Virtualisation (MED-V) and is expected to become part of MDOP in the first half of 2009.

Effectively, MED-V provides a managed workspace, with automatic installation, image delivery and update; centralised management and reporting; usage policies and data transfer controls; and complete end-user transparency (i.e. users do not need to know that part of their desktop is virtualised).

The best way I can describe MED-V is as something like VMware ACE (for a locked-down virtual desktop) combined with the Unity feature from VMware Fusion (or Coherence from Parallels Desktop for Mac), whereby the guest application instances appear to be running natively on the host operating system desktop.

MED-V runs within Virtual PC but integration with the host operating system is seamless (although MED-V applications can optionally be distinguished with a coloured border) – even down to the system tray level and providing simulated task manager entries.

A centralised repository is provided for virtual machine images with a variety of distribution methods possible – even a USB flash drive – and a management console is provided in order to control the user experience. Authentication is via Active Directory permissions, with MED-V icons published to the host desktop.

MED-V can be used to run applications with compatibility issues on a virtual Windows XP desktop running on Windows Vista until application compatibility fixes can be provided (e.g. using Application Compatibility Toolkit shims, or third party solutions such as those from ChangeBASE). Furthermore, whereas using application virtualisation to run two versions of Internet Explorer side-by-side involves breaching the end user licensing agreement (EULA), the MED-V solution (or any operating system-level virtualisation solution) provides a workaround, even allowing the use of lists to spawn an alternative browser for those applications that require it (e.g. Internet Explorer 7 on the desktop, with Internet Explorer 6 launched for certain legacy web applications).

Using technologies such as MED-V for desktop virtualisation allows a corporate desktop to be run on a “dirty” host (although network administrators will almost certainly have kittens). From a security standpoint, MED-V uses a key exchange mechanism to ensure security of client-server communications and the virtual hard disk (.VHD) image itself is encrypted, with the ability to set an expiry date after which the virtual machine is inoperable. Restrictions over access to clipboard controls (copy, paste, print screen, etc.) may be applied to limit interaction between guest and host machines – even to the point that it may be possible to copy data in one direction but not the other.

At this time, MED-V is 32-bit only, although future releases will have support for 64-bit host operating system releases (and I expect to see hypervisor-based virtualisation in a future Windows client release – although I’ve not seen anything from Microsoft to substantiate this, it is a logical progression to replace Virtual PC in the way that Hyper-V has replaced Virtual Server).

Desktop virtualisation has a lot of potential to aid organisations in the move to Windows Vista but, unlike VMware, who see VDI as a replacement for the desktop, Microsoft’s desktop virtualisation solutions are far more holistic, integrating with application and presentation virtualisation to provide a variety of options for application delivery.

In the next post in this series, I’ll take a closer look at application virtualisation.