Installing Ubuntu (10.04) on Windows Virtual PC


I use a Windows 7 notebook at work but, sometimes, it’s just easier to drop back into a Unix or Linux machine – for example when I was checking out command line Twitter clients a few days ago (yes, there is a Windows one, but Twidge is more functional).  As one of my friends at Microsoft reminds me, it is just an operating system after all…

Anyway, I wanted to install Ubuntu 10.04 in a virtual machine and, as I have Windows Virtual PC installed on my notebook, I didn’t want to use another virtual machine manager (most of the advice on the subject seems to suggest using VirtualBox or VMware Workstation, which is a workaround – not a solution).  My first attempts were unsuccessful but then I stumbled upon a forum thread that helped me considerably – thanks to MrDerekBush and pessimism on the Ubuntu forums – this is what I found I needed to do:

  1. Create a virtual machine in Windows Virtual PC as normal – it’s fine to use a dynamic disk – and boot from an Ubuntu ISO image (or from physical media).
  2. At the language selection screen, hit Escape, then F6 and bring up the boot options string.  Delete the part that says quiet splash -- and replace it with vga=788 noreplace-paravirt (other vga boot codes may work too).
  3. Select the option to try Ubuntu without installing then, once the desktop environment is fully loaded, select the option to install Ubuntu and follow the prompts.
  4. At the end of the installation, do not restart the virtual machine – there are some changes required to the boot loader (and Ubuntu 10.04 uses GRUB 2, so some of the advice on the ’net does not apply).
  5. From Places, double-click the icon that represents the virtual hard disk (probably something like 135GB file system if you have a default-sized virtual hard disk). Then open a Terminal session and type mount to get the volume identifier.
  6. Enter the following commands:
    sudo mount -o bind /dev /media/volumeidentifier/dev
    sudo chroot /media/volumeidentifier/ /bin/bash
    mount -t proc none /proc
    nano /etc/default/grub
  7. Replace quiet splash with vga=788 and comment out the GRUB_HIDDEN_TIMEOUT line (using #) in /etc/default/grub, then save the file and exit nano (the sketch after these steps shows the end result).
  8. Enter the following command:
    nano /etc/grub.d/10_linux
  9. In the linux_entry section, change args="$4" to args="$4 noreplace-paravirt", then save the file and exit nano.
  10. Enter the update-grub command and ignore any error messages about not being able to find the list of partitions.
  11. Shut down the virtual machine.  At this point I was left with a message about Casper resyncing snapshots and, even after leaving the VM for a considerable period, it did not progress further.  I hibernated the VM and, when I resumed it, it rebooted and Ubuntu loaded as normal.
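For reference, this is roughly what the edits from steps 7, 9 and 10 look like once you’re inside the chroot – a sketch only, as the exact contents of both files will vary with the release, and anything else already on the GRUB_CMDLINE_LINUX_DEFAULT line should be preserved:

    # /etc/default/grub (step 7) - replace "quiet splash" and comment out the hidden timeout
    #GRUB_HIDDEN_TIMEOUT=0
    GRUB_CMDLINE_LINUX_DEFAULT="vga=788"

    # /etc/grub.d/10_linux (step 9) - inside the linux_entry section
    args="$4 noreplace-paravirt"

    # regenerate the GRUB configuration (step 10)
    update-grub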

There are still a few things I need to sort out: there are no Virtual Machine Additions for Linux on Virtual PC (only for Hyper-V), which means no mouse/keyboard integration; and the Ctrl-Alt-left arrow release key combination clashes with the defaults for Intel graphics card drivers (there are some useful Virtual PC keyboard shortcuts).  Even so, getting the OS up and running is a start!

Desktop virtualisation shake-up at Microsoft


What an afternoon… for a few days now, I’ve been trying to find out what the big announcement from Microsoft and Citrix would be in the desktop virtualisation hour webcast later today. I was keeping quiet until after the webcast but now the embargo has lifted, Microsoft has issued a press release, and the news is all over the web:

  • Coming in Windows Server 2008 R2 service pack 1 (for which there is no date announced yet) will be the dynamic memory functionality that was in early releases of Hyper-V for Windows Server 2008 R2 but was later pulled from the product.  Also in SP1 will be a new graphics acceleration platform, known as RemoteFX, based on the desktop remoting technology that Microsoft obtained in 2008 when it acquired Calista Technologies, allowing rich media content to be accessed over the Remote Desktop Protocol and enabling users of virtual desktops and applications to receive a rich 3-D, multimedia experience while accessing information remotely.
  • Microsoft and Citrix are offering a “Rescue for VMware VDI” promotion, which allows VMware View customers to trade in up to 500 licenses at no additional cost, and the “VDI Kick Start” promotion, which offers new customers a more than 50 percent discount off the estimated retail price.
  • There are virtualisation licensing changes too: from July, Windows Client Software Assurance customers will no longer have to buy a separate license to access their Windows operating system in a VDI environment, as virtual desktop access rights will now be a Software Assurance (SA) benefit – effectively, if you have SA, you get Windows on screen, no matter what processor it is running on!  There will also be new roaming usage rights: Windows Client Software Assurance and new Virtual Desktop Access (the new name for VECD) customers will have the right to access their virtual Windows desktop and their Microsoft Office applications hosted on VDI technology on secondary, non-corporate network devices, such as home PCs and kiosks.
  • Citrix will ensure that XenDesktop HDX technology will be interoperable with and will extend RemoteFX within 6 months.
  • Oh yes, and Windows XP Mode (i.e. Windows Virtual PC) will no longer require hardware virtualisation technology (although, frankly, I find that piece of news a little less exciting as I’d really like to see Virtual PC replaced by a client-side hypervisor).

Thick, thin, virtualised, whatever: it’s how you manage the desktop that counts


In the second of my post-TechEd blog posts, I’ll take a look at one of the sessions I attended where Microsoft’s Eduardo Kassner spoke about various architectures for desktop delivery in relation to Microsoft’s vision for the Windows optimised desktop (CLI305). Again, I’ll stick with highlights in note form as, if I write up the session in full, it won’t be much fun to read!

  • Kassner started out by looking at who defines the desktop environment, graphing desktop performance against configuration control:
    • At the outset, the IT department (or the end user) installs approved applications and both configuration and performance are optimal.
    • Then the user installs some “cool shareware”, perhaps some other approved applications or personal software (e.g. iTunes) and it feels like performance has bogged down a little.
    • As time goes on, the PC may suffer from a virus attack, the organisation needs an inventory of the installed applications, and the configuration is generally unknown. Performance suffers as a result of the unmanaged change.
    • Eventually, without control, update or maintenance, the PC becomes “sluggish”.
  • Complaints about desktop environments typically come down to: slow environment; application failures; complicated management; complicated maintenance; difficulty in updating builds, etc.
  • Looking at how well we manage systems: image management; patch management; hardware/software inventory; roles/profiles/personas; operating system or application deployment; and application lifecycle are all about desktop configuration. And the related processes are equally applicable to a “rich client”, “terminal client” or a “virtual client”.
  • Whatever the architecture, the list of required capabilities is the same: audit; compliance; configuration management; inventory management; application lifecycle; role based security and configuration; quality of service.
  • Something else to consider is that hardware and software combinations grow over time: new generations of hardware are launched (each with new management capabilities) and new operating system releases support alternative means of increasing performance, managing updates and configuration – in 2008, Gartner wrote:

    “Extending a notebook PC life cycle beyond three years can result in a 14% TCO increase”

    [source: Gartner, Age Matters When Considering PC TCO]

    and a few months earlier, they wrote that:

    “Optimum PC replacement decisions are based on the operating system (OS) and on functional compatibility, usually four years”

    [source: Gartner, Operational Considerations in Determining PC Replacement Life Cycle]

    Looking across a variety of analyst reports, though, three years seems to be the optimal point (there are some variations depending on the considerations made, but the general window is 2-5 years).

  • Regardless of the PC replacement cycle, the market is looking at two ways to “solve” the problem of running multiple operating system versions on multiple generations of hardware: “thin client” and “VDI” (also known as hosted virtual desktops) but Kassner does not agree that these technologies alone can resolve the issues:
    • In 1999, thin client shipments were 700,000 against a market size of 133m PCs [source: IDC 1999 Enterprise Thin Client Year in Review] – that’s around 0.6% of the worldwide desktop market.
    • In 2008, thin clients accounted for 3m units out of an overall market of 248m units [source: Gartner, 2008 PC Market Size Worldwide] – that’s up to 1.2% of the market, but still a very tiny proportion.
    • So what about the other 98.8% of the market? Kassner used 8 years’ worth of analyst reports to demonstrate that the TCO for a well-managed traditional desktop client and a Windows-based terminal was almost identical – although considerably lower than for an unmanaged desktop. The interesting point was that in recent years the analysts stopped referring to the different architectures and just compared degrees of management! Then he compared VDI scenarios, showing that there was a 10% variance in TCO between a VDI desktop and a wide-open “regular desktop” but, when that desktop was locked down and well-managed, the delta was only 2%. That 2% saving is not enough to cover the setup cost of a VDI infrastructure! Kassner did stress that he wasn’t saying VDI was no good at all – just that it was not for all and that a similar benefit can be achieved from simply virtualising the applications:
    • “Virtualized applications can reduce the cost of testing, packaging and supporting an application by 60%, and they reduced overall TCO by 5% to 7% in our model.”

      [source: Gartner, TCO of Traditional Software Distribution vs. Application Virtualization]

  • Having argued that thick vs. thin vs. VDI makes very little difference to desktop TCO, Kassner continued by commenting that the software plus services platform provides more options than ever, with access to applications from traditional PC, smartphone and web interfaces and a mixture of corporately owned and non-corporate assets (e.g. employees’ home PCs, or offshore contractor PCs). Indeed, application compatibility drives client device options and this depends upon the supported development stack and presentation capabilities of the device – a smartphone (perhaps the first example of IT consumerisation – and also a “thin client” device in its own right) is an example of a device that provides just a subset of the overall feature set and so is not as “forgiving” as a PC – one size does not fit all!
  • Kassner then went on to discuss opportunities for saving money with rich clients; but his summary was that it’s still a configuration management discussion:
    • Using a combination of group policy, a corporate base image, data synchronisation and well-defined security policies, we can create a well-managed desktop.
    • For this well-managed desktop, whether it is running on a rich client, a remote desktop client, with virtualised applications, using VDI or as a blade PC, we still need the same processes for image management, patch management, hardware/software inventory, operating system or application deployment, and application lifecycle management.
    • Once we can apply the well-managed desktop to various user roles (e.g. mobile, office, or task-based workers) on corporate or non-corporate assets, we can say that we have an optimised desktop.
  • Analysts indicate that “The PC of 2012 Will Morph Into the Composite Work Space” [source: Gartner], combining client hypervisors, application virtualisation, persistent personalisation and policy controls: effectively separating the various components for hardware, operating system and applications.  Looking at Microsoft’s view on this (after all, this was a Microsoft presentation!), there are two products to look at – both of which are Software Assurance benefits from the Microsoft Desktop Optimization Pack (MDOP) (although competitive products are available):
    • Application virtualisation (Microsoft App-V or similar) creates a package of an application and streams it to the desktop, eliminating the software installation process and isolating each application. This technology can be used to resolve conflicts between applications as well as to simplify application delivery and testing.
    • Desktop virtualisation (MED-V with Virtual PC, or similar) creates a container with a full operating system environment to resolve incompatibility between applications and an alternative operating system, running two environments on the same PC (and, although Eduardo Kassner did not mention this in his presentation, it’s this management of multiple environments that creates a management headache without suitable management toolsets – which is why I do not recommend Windows 7 XP Mode for the enterprise).
  • Having looked at the various architectures and their (lack of) effect on TCO, Kassner moved on to discuss Microsoft’s strategy.
    • In short, dependencies create complexity, so by breaking apart the hardware, operating system, applications and user data/settings the resulting separation creates flexibility.
    • Using familiar technologies: we can manage the user data and settings with folder redirection, roaming profiles and group policy; we can separate applications using App-V, RemoteApps or MED-V, and we can run multiple operating systems (although Microsoft has yet to introduce a client-side hypervisor, or a solution capable of 64-bit guest support) on a variety of hardware platforms (thin, thick, or mobile) – creating what Microsoft refers to as the Windows Optimized Desktop.
    • Microsoft’s guidance is to take the processes that produce a well-managed client to build a sustainable desktop strategy, then to define a number of roles (real roles – not departments, or jobs – e.g. mobile, office, anywhere, task, contract/offshore) and select the appropriate distribution strategy (or strategies). To help with this, there is a Windows Optimized Desktop solution accelerator (soon to become the Windows Optimized Desktop Toolkit).

There’s quite a bit more detail in the slides but these notes cover the main points. However you look at it, the architecture for desktop delivery is not that relevant – it’s how it’s managed that counts.

Windows Virtual PC and XP Mode release candidates


Earlier this evening, Microsoft announced that the release candidate for Windows XP Mode (Virtual PC 7) is now available. It’s good timing really. In the next couple of days I should be able to download the RTM bits for Windows 7 and, as upgrading from the XP Mode beta is not supported, it means I should have the new version of Windows XP Mode before I rebuild my workstation onto an RTM build.

There have been a few changes in Windows XP Mode between beta and RC:

  • USB devices can now be attached to Windows XP Mode applications directly from the Windows 7 superbar, making them available to applications running in Windows XP Mode without the need to go into full screen mode.
  • Windows XP Mode applications can now be accessed via a jump list: right-clicking a Windows XP Mode application on the superbar allows the most recently used files to be selected and opened.
  • It is possible to customise where Windows XP Mode differencing disk files are stored.
  • Differencing disks may be compacted.
  • There is a new option to turn off and discard changes when using Undo disks.
  • Drive sharing between Windows XP Mode and Windows 7 can be disabled.
  • Initial setup includes a new user tutorial about how to use Windows XP Mode.
  • Faster setup.
  • The ability to install Windows XP components without access to media.

Interestingly, Microsoft is now saying you need an additional 1GB of RAM for XP Mode (2GB recommended). Of course, you don’t need 1GB of RAM in order to run a copy of Windows XP and a virtual machine manager, but it tells you what you might want for any level of performance. In addition to the requirement for hardware that offers virtualisation assistance, this is just one more reason why XP Mode is not a solution for clients looking to sweat their existing hardware assets a while longer… this is purely a software sticking plaster for legacy applications. On the other hand, it’s working pretty well for me, with Outlook 2007 in a VM to support Google Calendar Sync and Outlook 2010 on my workstation as my client of choice.

A couple more notes worth mentioning…

Windows 7 XP Mode and Windows Virtual PC: How it works


For the last couple of weeks the news sites have been full of speculation and gossip about what is now referred to as Windows 7 XP Mode and Windows Virtual PC. Most of the reporting so far has focused on the high level concepts and, as the beta went live to millions of Windows 7 release candidate testers, this post attempts to give a little more detail about how Windows XP Mode works.

Before diving in to the technology, let’s have a look at why Microsoft felt the need to provide this functionality. Their vision is to drive the overall adoption of Windows 7 by eliminating legacy application compatibility issues in the enterprise, mid-market, and small-medium business sectors.

This is not a developer workstation solution; nor is it for consumers. It’s basically providing functionality to run legacy applications “seamlessly”, meaning that a typical end user will be unaware that their application is actually running virtualised. It draws heavily from MED-V in that IT administrators create a pre-configured virtual machine with a legacy operating system and applications to run isolated from the host operating system; however, unlike Microsoft Enterprise Desktop Virtualization (MED-V), which remains part of the Microsoft Desktop Optimisation Pack, Windows XP Mode will be more broadly available. In an interview with Mary Jo Foley, Microsoft’s Scott Woodgate gave the following description as to the differences between the two products:

Top-level answer:

  • Windows XP Mode is designed to support SMB customers who do not use management infrastructure and need to run Windows XP applications on their windows 7 desktops.
  • MED-V is designed for larger organizations who use management infrastructure and need to deploy a virtual Windows XP environment on Windows Vista or Windows 7 desktops.

He then continued with the following details:

Windows 7 XP Mode with Windows Virtual PC:

  • Designed to help small business users run their Windows XP applications on their Windows 7 desktop.
  • Available as part of Windows 7 Professional, Enterprise and Ultimate Editions.
  • Enables users to launch many older applications seamlessly in a virtual Windows XP environment from the Windows 7 start menu.
  • Includes support for USB devices and is based on a new core that includes multi-threading support.
  • Is best experienced on new PCs from OEMs but will also be available for customers as a separate download.

Microsoft Enterprise Desktop Virtualization (MED-V):

  • Designed for IT Professionals.
  • Enables Virtual PC deployment in larger organizations.
  • Provides important centralized management, policy-based provisioning and virtual image delivery to reduce the cost of Virtual PC deployment.
  • Is part of the Microsoft Desktop Optimization Pack (MDOP).
  • v1 builds on Microsoft Virtual PC 2007 to help enterprises with their upgrade to Windows Vista when applications are not yet compatible. v2 will add support for enterprises upgrading to Windows 7 (both 32-bit and 64-bit) and will support Windows Virtual PC on Windows 7.
  • v2 beta will be available within 90 days of Windows 7 GA.

More information may be found in Microsoft’s Windows XP Mode press release.

To enable Windows XP Mode, Microsoft has produced a new version of Virtual PC – Windows Virtual PC (VPC) 7 – a client-side virtualisation product that runs on Windows 7 (32- or 64-bit versions). As Jason Perlow describes, it’s not using a type-1 (native/bare-metal) hypervisor like Hyper-V (sadly – if there were a client-side virtualisation product based on Hyper-V, it would be great for developer workstations) but instead uses a type-2 (hosted) hypervisor model. Unlike previous versions of Virtual PC though, the new version requires hardware-assisted virtualisation capabilities (AMD-V or Intel VT), which are prevalent in many recently-purchased PCs (even if switched off in the BIOS).
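As an aside, if you’re not sure whether a particular PC offers AMD-V or Intel VT, one quick check – my suggestion, not anything Microsoft prescribes – is the Sysinternals Coreinfo utility, run from an elevated command prompt; the -v switch dumps just the virtualisation-related features, and an asterisk against VMX (Intel) or SVM (AMD) indicates that the processor supports hardware virtualisation (though it may still need to be enabled in the BIOS):

    coreinfo -v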

Officially, VPC7 only supports Windows XP, Vista and 7 guests but, just like earlier versions of Virtual PC, there is the option of using emulated hardware so it’s still possible to run other operating systems – it’s just not supported. It’s also worth noting that not all Windows Vista and 7 SKUs are supported in a virtual environment. Something else that’s not supported is the ability to run 64-bit or multi-processor guest operating systems, nor is snapshotting. And, because the virtualisation components are incompatible, there’s still no support for moving virtual machines between Hyper-V and VPC7 either. I’ve already been fairly vocal in my feedback to the product team on this; their response is that the priority is on application compatibility (and, on that basis, I can see the reasons for concentrating on single-processor 32-bit Windows XP support) but continuing to maintain incompatibility between client and server virtualisation platforms seems a little strange to me.

VPC7 features include:

  • Desktop mode – enhancing the traditional Virtual PC functionality using Terminal Services technologies (e.g. for drive redirection and smartcard sharing, as well as video improvements that enable large resolutions), while still maintaining the functionality for virtual machine windows to support arbitrary resolutions. For those applications that experience issues working through the Terminal Services drive redirection etc., it is possible to disable integration features, after which Virtual PC will operate as previous versions did.
  • Seamless application mode – allowing virtual applications to use Terminal Services application remoting capabilities (RemoteApp) to appear as though they are running locally. Applications retain the chrome of the guest operating system rather than the Windows 7 host but, to all intents and purposes, they are integrated with the native desktop.
  • Tight Windows shell integration and a simplified user interface. In the same way that Windows has special folders for Pictures, Music, etc., a Virtual Machines folder is provided, with Windows Explorer integration for creation of virtual machines and editing virtual machine settings (no more Virtual PC Console). Where a supported operating system is used, applications in the virtual machine may be published to the host’s Start Menu and there’s also integration with the Windows 7 superbar. By default, all new applications installed in the virtual machine (whilst running in full desktop mode) are published to the Windows 7 Start Menu (each virtual machine has its own folder) but this can be disabled if required; however, publishing applications that are built into Windows XP (Internet Explorer, Outlook Express, etc.) requires some registry editing (see the sketch after this list).
  • Full USB support is available for supported operating systems: for any USB device where both host and guest drivers are available there is integrated USB support, while devices with no Windows 7 drivers are redirected to, and controlled from, the guest operating system. Microsoft is also advising that certain device types (e.g. mass storage, printers and smart cards) should not be directly connected to the virtual machines and are better redirected using the Terminal Services functionality built into Virtual PC.
  • A simplified virtual machine creation process with three steps: name and location (remembering the last used location); memory and networking options; disk settings (dynamically expanding by default, or optionally launching a wizard for other disk types such as fixed-size and differencing disks). Once built, new hard disks can be added in the virtual machine settings and control over undo disks is also moved to the virtual machine settings. Other new virtual machine settings relate to integration features, logon credentials and auto publishing.
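On that point about registry editing to publish the built-in Windows XP applications, the sketch below shows the sort of change involved. Treat it as an assumption to verify rather than a documented procedure – I haven’t confirmed the exact value name (VPCVAppExcludeList) – and, as ever, back up the registry before changing anything. It applies inside the Windows XP Mode guest, not on the Windows 7 host:

    REM Assumption: built-in applications are kept out of auto-publishing by an
    REM exclusion list value in the guest registry - inspect it first...
    reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtual Machine" /v VPCVAppExcludeList
    REM ...then edit the value (e.g. in regedit) to remove the entries for the
    REM applications you want to appear in the Windows 7 Start Menu.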

From a technical standpoint, there are three main VPC7 processes to be aware of (a quick way to check which are running follows this list):

  • vpc.exe is the base process for Virtual PC functionality.
  • vmsal.exe is the seamless application launcher, which waits for a new application request and launches it. Once the application is closed it sets a timer before saving the VM state and exiting. This means that, when the application is closed, the virtual machine is kept up for a few minutes in case the user launches an application that requires it but after a short while it will be put into a saved state. In addition, because undo disk settings are managed within the virtual machine settings, logging off/shutting down/hibernation is handled as normal, with no virtual machine prompts about undo disks and saving state.
  • vmwindow.exe is launched when VPC is not running in integrated mode.
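If you’re curious which of these is doing the work at any particular moment, a quick (and entirely unofficial) check is to filter the Windows 7 task list on the process names above:

    tasklist /fi "imagename eq vpc.exe"
    tasklist /fi "imagename eq vmsal.exe"
    tasklist /fi "imagename eq vmwindow.exe"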

VPC7 will not run on the Windows 7 beta (build 7000) as it requires the RC (or one of the interim builds – I’ve seen it running on builds 7057 and 7068). I haven’t tried this personally but I’m told that it cannot be installed on Windows Server 2008 R2 either; however something similar is possible with Hyper-V by installing the Terminal Services Remote Applications Integrated Locally (RAIL) components (RemoteApp). Certain Windows 7 editions will include the Windows XP virtual machine, so there is no requirement to build a separate Windows XP image.

Architecturally, VPC looks like a hybrid between Virtual Server and Hyper-V: it uses the Virtual Server engine, including a scriptable COM interface for VM automation (and the security model has been modified so it can be called from PowerShell without needing to make security interoperability tweaks); it also uses the VSP/VSC/VPCBus model from Hyper-V; and it integrates RAIL components from Terminal Services but, because the Terminal Services technologies for integrated applications and enhanced desktop support run over the VPCBus, connectivity is available even if there is no network communication between the guest and the host. Because it’s built on the Virtual Server/Virtual PC codebase, VPC7 is limited to 4GB of RAM and 128GB VHDs.
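As a taste of that scriptable interface, something along these lines should list the registered virtual machines from PowerShell. I haven’t tested it though – the VirtualPC.Application ProgID and the Name/State property names are my assumptions, carried over from the Virtual Server object model, rather than anything confirmed in the VPC7 documentation:

    # untested sketch - the ProgID and property names are assumptions (see above)
    $vpc = New-Object -ComObject VirtualPC.Application
    $vpc.VirtualMachines | ForEach-Object { "{0} ({1})" -f $_.Name, $_.State }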

Windows 7 XP Mode and Windows Virtual PC form a neat solution for application compatibility in Windows 7, drawing on established MED-V (from the Kidaro acquisition), Terminal Services (through partnership with Citrix) and Virtual PC/Virtual Server (formerly from Connectix) technologies. They are very much a point solution for application compatibility though, and Microsoft still does not have a decent client-side virtualisation solution for high-end users (developer workstations, IT professionals with several desktop variants, etc.). Whether this is enough to allay concerns from Microsoft’s customers who baulked at a move to Vista as a result of the application compatibility issues is yet to be seen but, with the general perception of Windows 7 riding high, this might be just the insurance policy that IT managers want to ensure that legacy applications continue to function. My main concerns with this solution are support (Windows XP is approaching the end of its life – and legacy applications may not be supported in a virtual environment), the overall complexity of the solution (however much it’s hidden from the end user, there are still two operating systems running on the hardware) and performance (virtualisation typically increases memory and CPU requirements – together with the need for hardware-assisted virtualisation, this is certainly not a solution for legacy PCs). Whatever the situation, I’m sure there will be plenty more written on this topic over the coming months.

Windows 7 “XP Mode”


Last week was a frustrating one… you see, earlier this month Paul Thurrott gave a hint about an exciting Windows 7 secret. I put 2 and 2 together and it seems that I came up with 4. The trouble was that I was given the details from an official source at around the same time – and that information was under NDA so I couldn’t write about it here!

It’s times like this that I’m glad I’m not running a news site and waiting for a “scoop”, but all three of the leading tech journalists covering Windows (i.e. Paul Thurrott, Ed Bott and Mary Jo Foley) have written articles in the last few days about Windows 7 XP Mode and Windows Virtual PC, and I want to pull things together here.

Basically, Paul Thurrott and Rafael Rivera are reporting that there will be a new version of Virtual PC, available as a download for Windows 7, including a licensed copy of Windows XP SP3 to run those applications that don’t behave well on the Vista/Windows 7 codebase. More details will follow (it won’t actually be “in the box” with Windows 7) but Ed Bott has commented that it looks an awful lot like MED-V.

Of course, the technology is already there – as well as drawing comparisons with MED-V, Ed Bott points out that you can do something similar with VirtualBox in seamless mode and the key detail with Windows XP Mode is the licensing situation. Full licensing details have yet to be announced but the only Microsoft blog post I’ve seen on the subject says:

“We will be soon releasing the beta of Windows XP Mode and Windows Virtual PC for Windows 7 Professional and Windows 7 Ultimate”

That reference to Professional and Ultimate would also indicate that it will run on Enterprise (virtually identical to Ultimate), but not Starter, Home Basic or Home Premium. As Microsoft’s main concern is allowing businesses to run legacy applications as they are weaned off XP, that seems fair enough but, then again, MED-V is only available to volume license customers today and Mary Jo Foley suggests that could be the same for XP Mode – I guess we’ll just have to wait and see.

So, will this work? I hope so. Windows Vista (after SP1) was never as bad as its perception in the marketplace indicated but if ever you needed an example that perception is reality, then Vista was it! Strangely, Windows Server 2008 (the server edition of Vista SP1) has been well received as the solid, reliable operating system that it is, without the negative press. Windows 7 is a step forward in many ways and, as XP is now into its extended support phase, many organisations will be looking for something to move to but the application compatibility issues caused by Windows Vista and Windows 7’s improved security model will still cause a few headaches – that’s what this functionality is intended to overcome, although there will still be some testing required as to how well those old XP apps perform in a virtualised environment.

More technical details will follow soon, but either Paul Thurrott and Rafael Rivera are operating on a different NDA to me (which they may well be) or they feel pretty confident that Microsoft will still give them access to information as they continue to spill the beans on this particular feature…

Running VMware Server on top of Hyper-V… or not


A few days ago, I came across a couple of blog posts about how VMware Server won’t run on top of Hyper-V. Frankly, I’m amazed that any hosted virtualisation product (like VMware Server) will run on top of any hypervisor – I always understood that hosted virtualisation required so many hacks to work at all that if it saw something that wasn’t the real CPU (i.e. a hypervisor handling access to the processor’s hardware virtualisation support) then it might be expected to fall over in a heap – and it seems that VMware even coded VMware Server 2.0 to check for the existence of Hyper-V and fail gracefully. Quite what happens with VMware Server on top of ESX or XenServer, I don’t know – but I wouldn’t expect it to work any better.

Bizarrely, Virtual PC and Virtual Server will run on Hyper-V (I have even installed Hyper-V on top of Hyper-V whilst recording the installation process for a Microsoft TechNet video!) and, for the record, ESX will run in VMware Workstation too (i.e. a hypervisor on top of hosted virtualisation). As for Hyper-V in a VMware Workstation VM – I’ve not got around to trying it yet but Microsoft’s Matt McSpirit is not hopeful.

Regardless of the above, Steve Graegart did come up with a neat solution for those instances when you really must run a hosted virtualisation solution and Hyper-V on the same box. It involves dual-booting, which is a pain in the proverbial but, according to Steve, it works:

  1. Open a command prompt and create a new boot loader entry by copying the default one:
    bcdedit /copy {default} /d "Boot without Hypervisor"
  2. After successful execution, copy the GUID (the ID of the new boot loader entry), including the curly braces, to the clipboard.
  3. Set the hypervisorlaunchtype property to off for the new entry, using the GUID you have just copied:
    bcdedit /set {guid} hypervisorlaunchtype off

After this, you should have a boot-time choice of whether or not to start Hyper-V (and hence whether or not an alternative virtualisation solution will run as expected).
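Pulling Steve’s steps together, the whole sequence looks something like this from an elevated command prompt ({guid} is a placeholder for the identifier that the copy command returns); bcdedit /enum lists the entries if you want to check the result, and bcdedit /delete {guid} removes the extra entry again later:

    REM 1. clone the default boot entry
    bcdedit /copy {default} /d "Boot without Hypervisor"
    REM 2. disable the hypervisor on the new entry, using the GUID returned above
    bcdedit /set {guid} hypervisorlaunchtype off
    REM 3. review the boot entries
    bcdedit /enum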

Microsoft Virtualization: part 3 (desktop virtualisation)


Before the weekend, I started a series of posts on the various technologies that are collectively known as Microsoft Virtualization. So far, I’ve looked at host/server virtualisation and in this post, I’ll look at the various forms of desktop virtualisation that Microsoft offers.

Whilst VMware have a virtual desktop infrastructure (VDI) solution built around Virtual Infrastructure (VI), Microsoft’s options for virtualising the desktop are more varied – although it should be noted that Microsoft does not yet have a desktop broker and recommends partner products such as Citrix XenDesktop or Quest vWorkspace (formerly Provision Networks Virtual Access Suite). With Hyper-V providing the virtualisation platform, and System Center Virtual Machine Manager, Configuration Manager and Operations Manager providing management of virtualised Vista clients, this is what some people at Microsoft have referred to as Microsoft VDI (although that’s not yet an official marketing concept).

Licensed by access device (PC or thin client) with the ability to run up to four virtual operating system instances per license, the Vista Enterprise Centralized Desktop (VECD) is actually platform agnostic (i.e. VECD can be used with VMware, Xen or other third-party virtualisation solutions). VECD is part of the Microsoft Desktop Optimization Pack (MDOP) and so requires a Software Assurance (SA) subscription.

With a broker to provide granular authentication and support for the Citrix Independent Computing Architecture (ICA) protocol (for better multimedia support than the Remote Desktop Protocol), users can connect to a Windows Vista desktop from any suitable access device.

To access this virtualised infrastructure there are a number of options – from thin-client terminal devices to Windows Fundamentals for Legacy PCs (WinFLP) – an operating system based on Windows XP Embedded and intended for use on older hardware. WinFLP is not a full general-purpose operating system, but provides suitable capabilities for security, management, document viewing and the Microsoft .NET Framework, together with RDP client support and the ability to install other clients (e.g. Citrix ICA). Running on old or low-specification hardware, WinFLP is an ideal endpoint for a VDI but it is a Software Assurance benefit – without SA, the closest alternative is to strip down/lock down Windows XP.

VDI is just one part of the desktop virtualisation solution though – since Microsoft’s purchase of Connectix in 2003, Virtual PC has been available for running virtualised operating system instances on the desktop. With the purchase of Kidaro in March 2008, Microsoft gained an enterprise desktop virtualisation solution, which has now become known as Microsoft Enterprise Desktop Virtualisation (MED-V) and is expected to become part of MDOP in the first half of 2009.

Effectively, MED-V provides a managed workspace, with automatic installation, image delivery and update; centralised management and reporting; usage policies and data transfer controls; and complete end-user transparency (i.e. users do not need to know that part of their desktop is virtualised).

The best way I can describe MED-V is something like VMware ACE (for a locked-down virtual desktop) combined with the Unity feature from VMware Fusion/Coherence from Parallels Desktop for Mac, whereby the guest application instances appear to be running natively on the host operating system desktop.

MED-V runs within Virtual PC but integration with the host operating system is seamless (although MED-V applications can optionally be distinguished with a coloured border) – even down to the system tray level and providing simulated task manager entries.

A centralised repository is provided for virtual machine images with a variety of distribution methods possible – even a USB flash drive – and a management console is provided in order to control the user experience. Authentication is via Active Directory permissions, with MED-V icons published to the host desktop.

MED-V can be used to run applications with compatibility issues on a virtual Windows XP desktop running on Windows Vista until application compatibility fixes can be provided (e.g. using Application Compatibility Toolkit shims, or third party solutions such as those from ChangeBASE). Furthermore, whereas using application virtualisation to run two versions of Internet Explorer side-by-side involves breaching the end user licensing agreement (EULA), the MED-V solution (or any operating system-level virtualisation solution) provides a workaround, even allowing the use of lists to spawn an alternative browser for those applications that require it (e.g. Internet Explorer 7 on the desktop, with Internet Explorer 6 launched for certain legacy web applications).

Using technologies such as MED-V for desktop virtualisation allows a corporate desktop to be run on a “dirty” host (although network administrators will almost certainly have kittens). From a security standpoint, MED-V uses a key exchange mechanism to ensure security of client-server communications and the virtual hard disk (.VHD) image itself is encrypted, with the ability to set an expiry date after which the virtual machine is inoperable. Restrictions over access to clipboard controls (copy, paste, print screen, etc.) may be applied to limit interaction between guest and host machines – even to the point that it may be possible to copy data in one direction but not the other.

At this time, MED-V is 32-bit only, although future releases will have support for 64-bit host operating system releases (and I expect to see hypervisor-based virtualisation in a future Windows client release – although I’ve not seen anything from Microsoft to substantiate this, it is a logical progression to replace Virtual PC in the way that Hyper-V has replaced Virtual Server).

Desktop virtualisation has a lot of potential to aid organisations in the move to Windows Vista but, unlike VMware, who see VDI as a replacement for the desktop, Microsoft’s desktop virtualisation solutions are far more holistic, integrating with application and presentation virtualisation to provide a variety of options for application delivery.

In the next post in this series, I’ll take a closer look at application virtualisation.

Microsoft Virtualization: part 2 (host virtualisation)


Earlier this evening I kicked off a series of posts on the various technologies that are collectively known as Microsoft Virtualization and the first area I’m going to examine is that of server, or host, virtualisation.

Whilst competitors like VMware have been working in the x86 virtualisation space since 1998, Microsoft got into virtualisation through the acquisition of Connectix in 2003. Connectix had a product called Virtual PC and, whilst the Mac version was dropped just as Mac OS X started to grow in popularity (with its place in the market taken by Parallels Desktop for Mac and VMware Fusion), there have been two incarnations of Virtual PC for Windows under Microsoft ownership – Virtual PC 2004 and Virtual PC 2007.

Virtual PC provides a host virtualisation capability (cf. VMware Workstation) but is aimed at desktop virtualisation (the subject for a future post). It does have a bastard stepchild (my words, albeit based on the inference of a Microsoft employee) called Virtual Server, which uses the same virtual machine and virtual hard disk technology but is implemented to run as a service rather than as an application (comparable with VMware Server) with a web management interface (which I find clunky – as Microsoft’s Matt McSpirit once described it, it’s a bit like Marmite – you either love it or hate it).

Virtual Server ran its course and the latest version is Virtual Server 2005 R2 SP1. The main problem with Virtual Server is the hosted architecture, whereby the virtualisation stack runs on top of a full operating system and involves very inefficient context switches between user and kernel mode in order to access the server hardware – that and the fact that it only supports 32-bit guest operating systems.

With the launch of Windows Server 2008 came a beta of Hyper-V – which, in my view, is the first enterprise-ready virtualisation product that Microsoft has released. The final product shipped on 26 June 2008 (as Microsoft’s James O’Neill pointed out, the last product to ship under Bill Gates’ tenure as a full-time Microsoft employee) and provides a solid and performant hypervisor-based virtualisation platform within the Windows Server 2008 operating system. Unlike the monolithic hypervisor in VMware ESX, which includes device drivers for a limited set of supported hardware, Hyper-V uses a microkernelised model, with a high-performance VMbus for communication between guest (child) VMs and the host (parent) partition, which uses the same device drivers as Windows Server 2008 to communicate with the hardware. At the time of writing, there are 419 server models certified for Hyper-V in the Windows Server Catalog.

Architecturally, Hyper-V has almost nothing in common with Virtual PC and Virtual Server, although it does use the same virtual hard disk (.VHD) format and virtual machines can be migrated from the legacy platforms to Hyper-V (although, once the VM additions have been removed and replaced with the Hyper-V integration components, they cannot be taken back into a Virtual PC/Virtual Server environment). Available only in 64-bit editions of Windows Server 2008, Hyper-V makes use of hardware assisted virtualisation as well as security features to protect against buffer overflow attacks.

I’ve written extensively about Hyper-V on this blog but the main posts I would highlight for information on Hyper-V are:

Whilst Hyper-V is a remarkably solid product, to some extent the virtualisation market is moving on from host virtualisation (although it is an enabler for various related technologies) and there are those who are wary of it because it’s from Microsoft and it’s a version 1 product. Then there are those who highlight its supposed weaknesses… mostly FUD from VMware (for example, a few days back a colleague told me that he couldn’t implement Hyper-V in an enterprise environment because it doesn’t support failover – a completely incorrect statement).

When configured to use Windows Server 2008’s failover clustering technologies, Hyper-V can save the state of a virtual machine and restart it on another node, using a technology known as quick migration. Live migration (where the contents of memory are copied on the fly, resulting in seamless failover between cluster nodes in a similar manner to VMware VMotion) is a feature that was removed from the first release of Hyper-V. Whilst this has attracted much comment, many organisations who are using virtualisation in a production environment will only fail virtual machines over in a controlled manner – although there will be some exceptions where live migration is required. Nevertheless, at the recent Microsoft Virtualization launch event, Microsoft demonstrated live migration and said it will be in the next release of Hyper-V.

Memory management is another area that has attracted attention – VMware’s ESX product has the ability to overcommit memory as well as to transparently share pages of memory. Hyper-V does not offer this and Microsoft has openly criticised memory overcommitment, because the guest operating system thinks it is managing memory paging while the virtual memory manager is actually swapping pages to disk, and because transparent page sharing breaks fundamental rules of isolation between virtual machines.

Even so, quoting from Steven Bink’s interview with Bob Muglia, Vice President of Microsoft’s Server and Tools division:

“We talked about VMware ESX and its features like shared memory between VMs, ‘we definitely need to put that in our product’. Later he said it will be in the next release – like hot add memory, disk and NICs will be and live migration of course, which didn’t make it in this release.”

[some minor edits made for the purposes of grammar]

Based on the comments that have been made elsewhere about shared memory management, this should probably be read as “we need something like that” and not “we need to do what VMware has done”.

Then there is scalability. At launch, Microsoft cited 4-core, 4-way servers as the sweet spot for virtualisation, with up to 16 cores supported, running up to 128 virtual machines. Now that Intel has launched its new 6-core Xeon 7400 processors (codenamed Dunnington), an update has been released to allow Hyper-V to support 24 cores (and 192 VMs), as described in Microsoft knowledge base article 956710. Given the speed with which that update was released, I’d expect to see similar improvements in line with processor technology enhancements.

One thing is for sure, Microsoft will make some significant improvements in the next full release of Hyper-V. At the Microsoft Virtualization launch, as he demonstrated live migration, Bob Muglia spoke of the new features in the next release of Windows Server 2008 and Hyper-V (which I interpreted as meaning that Hyper-V v2 will be included in Windows Server 2008 R2, currently scheduled for release in early 2010). Muglia continued by saying that:

“There’s actually quite a few new features there which we’ll talk about both at the upcoming PDC (Professional Developer’s Conference) in late October, as well as at WinHEC which is the first week of November. We’ll go into a lot of detail on Server 2008 R2 at that time.”

In the meantime, there is a new development – the standalone Hyper-V Server. Originally positioned as a $28 product for the OEM and enterprise channels, this will now be a free of charge download and is due to be released within 30 days of the Microsoft Virtualization launch (so, any day now).

As detailed in the video above, Hyper-V Server is a “bare-metal” virtualisation product and is not a Windows product (do the marketing people at Microsoft really think that Microsoft Hyper-V Server will not be confused with the Hyper-V role in Microsoft Windows Server?).

With just a command line interface (as in server core installations of Windows Server 2008), it includes a configuration utility for basic setup tasks like renaming the computer, joining a domain, updating network settings, etc. but is intended to be remotely managed using the Hyper-V Manager MMC on Windows Server 2008 or Windows Vista SP1, or with System Center Virtual Machine Manager (SCVMM) 2008.

Whilst it looks similar to server core and uses some Windows features (e.g. the same driver model and update mechanism), it has a single role – Microsoft Hyper-V – and does not support features in Windows Server 2008 Enterprise Edition like failover clustering (so no quick migration), although the virtual machines can be moved to Windows Server 2008 Hyper-V if required at a later date. Hyper-V Server is also limited to 4 CPU sockets and 32GB of memory (as for Windows Server 2008 Standard Edition). I’m told that Hyper-V Server has a 100MB memory footprint and uses around 1TB of disk (which sounds a lot for a hypervisor – we’ll see when I get my hands on it in a few days’ time).

Unlike Windows Server 2008 Standard, Enterprise and Datacenter Editions, Hyper-V Server will not require client access licenses (although the virtual machine workloads may) and it does not include any virtualisation rights.

That just about covers Microsoft’s host virtualisation products. The next post in this series will look at various options for desktop virtualisation. In the meantime, I’ll be spending the day at VMware’s Virtualisation Forum in London, to see what’s happening on their side of the fence.

Virtual PC and Virtual Server updated to support the latest Windows releases


Those who thought (as I did until a few weeks back) that there would be no more Virtual Server development as Microsoft focuses its efforts on Hyper-V may be interested to hear that Microsoft has just announced an update for Virtual Server 2005 R2 SP1, providing host and guest support for Windows Server 2008, Windows Vista Service Pack 1 and Windows XP Service Pack 3. Further details can be found in Microsoft knowledge base article 948515.

In addition, Microsoft has shipped service pack 1 for Virtual PC 2007, providing guest support for Windows Server 2008, Windows Vista Service Pack 1 and Windows XP Service Pack 3 as well as host support for Windows Vista Service Pack 1 and Windows XP Service Pack 3. Further details can be found in the accompanying release notes.

Both of these products take the VM Additions version to 13.820.

This information has been floating around for a few weeks but was under NDA until yesterday. Watch out for more news from the virtualisation labs in Redmond soon…