When to use App-V?

This content is 8 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A couple of times this year, I’ve had customers question whether Microsoft App-V is still relevant in this day and age. Well, the answer is “yes” – and especially since it was rolled into the Windows 10 Anniversary Update (build 1607) as predicted by Tim Mangan and verified by my colleague Leo D’Arcy (@LeoDArcy1).
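As an aside, on a Windows 10 Enterprise (or Education) 1607 machine the client is already in the box and just needs switching on – a minimal sketch from an elevated PowerShell session, using the built-in AppvClient cmdlets:

    # Windows 10 Enterprise/Education 1607 or later - the App-V client is already
    # present, it just needs to be enabled (requires an elevated session and a restart)
    Get-AppvStatus      # shows whether the client is currently enabled
    Enable-Appv
    Restart-Computer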

So, when would you use App-V? My colleague Steve Harwood (@steeveeh) coached me on this a while back, and this is what he had to say (with some edits for style but the message unchanged):

“App-V, like .MSI or .EXE, is an application packaging format. It wraps all of the application’s resources (e.g. registry keys, DLLs, files, etc.) into a package, and that package then needs to be delivered to the endpoint by a tool, e.g. SCCM, the App-V infrastructure or another electronic software distribution method.

Of all the packaging formats, App-V is an extremely versatile solution. It virtualizes the application, which gives us a couple of advantages: it allows an upgraded version of an app to co-exist with a previous version, and it allows a clean uninstall (as the bubble is removed in its entirety). Additionally, App-V plays really well with VDI because the App-V files can be hosted in a shared location and multiple differential VDIs can launch from that location, which saves on costly high-spec storage space.

In short, SCCM is the delivery tool to push the application in whatever format it may be. App-V is a tool to wrap the application to allow it to be layered onto the OS.

It’s also important to note that App-V requires no infrastructure and can be fully integrated into System Center Configuration Manager (SCCM) (e.g. create App-V packages, then import them into the SCCM application lifecycle); however, if you don’t have SCCM you can install the ‘App-V infrastructure’, which is another method that can be used to deliver App-V packages to the endpoint. Alternatively, you can use PowerShell for delivery…”
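To illustrate that last point about PowerShell delivery, here’s a minimal sketch using the App-V 5.x client cmdlets (the share and package name are hypothetical):

    # Add a package from a file share and publish it machine-wide
    # (\\server\appv\7-Zip\7-Zip.appv is a hypothetical path)
    Add-AppvClientPackage -Path '\\server\appv\7-Zip\7-Zip.appv' |
        Publish-AppvClientPackage -Global

    # Confirm what the client now knows about
    Get-AppvClientPackage -All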

Thick, thin, virtualised, whatever: it’s how you manage the desktop that counts

This content is 15 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

In the second of my post-TechEd blog posts, I’ll take a look at one of the sessions I attended where Microsoft’s Eduardo Kassner spoke about various architectures for desktop delivery in relation to Microsoft’s vision for the Windows optimised desktop (CLI305). Again, I’ll stick with highlights in note form as, if I write up the session in full, it won’t be much fun to read!

  • Kassner started out by looking at who defines the desktop environment, graphing desktop performance against configuration control:
    • At the outset, the IT department (or the end user) installs approved applications and both configuration and performance are optimal.
    • Then the user installs some “cool shareware”, perhaps some other approved applications or personal software (e.g. iTunes) and it feels like performance has bogged down a little.
    • As time goes on, the PC may suffer from a virus attack, and the organisation needs an inventory of the installed applications, and the configuration is generally unknown. Performance suffers as a result of the unmanaged change.
    • Eventually, without control, update or maintenance, the PC becomes “sluggish”.
  • Complaints about desktop environments typically come down to: slow environment; application failures; complicated management; complicated maintenance; difficulty in updating builds, etc.
  • Looking at how well we manage systems: image management; patch management; hardware/software inventory; roles/profiles/personas; operating system or application deployment; and application lifecycle are all about desktop configuration. And the related processes are equally applicable to a “rich client”, “terminal client” or a “virtual client”.
  • Whatever the architecture, the list of required capabilities is the same: audit; compliance; configuration management; inventory management; application lifecycle; role based security and configuration; quality of service.
  • Something else to consider is that hardware and software combinations grow over time: new generations of hardware are launched (each with new management capabilities) and new operating system releases support alternative means of increasing performance, managing updates and configuration – in 2008, Gartner wrote:

    “Extending a notebook PC life cycle beyond three years can result in a 14% TCO increase”

    [source: Gartner, Age Matters When Considering PC TCO]

    and a few months earlier, they wrote that:

    “Optimum PC replacement decisions are based on the operating system (OS) and on functional compatibility, usually four years”

    [source: Gartner, Operational Considerations in Determining PC Replacement Life Cycle]

    When looking across a variety of analyst reports, though, three years seems to be the optimal point (there are some variations depending on the considerations made, but the general window is 2-5 years).

  • Regardless of the PC replacement cycle, the market is looking at two ways to “solve” the problem of running multiple operating system versions on multiple generations of hardware: “thin client” and “VDI” (also known as hosted virtual desktops) – but Kassner does not agree that these technologies alone can resolve the issues:
    • In 1999, thin client shipments were 700,000 against a market size of 133m PCs [source: IDC 1999 Enterprise Thin Client Year in Review] – that’s around 0.6% of the worldwide desktop market.
    • In 2008, thin clients accounted for 3m units out of an overall market of 248m units [source: Gartner, 2008 PC Market Size Worldwide] – that’s up to 1.2% of the market, but still a very tiny proportion.
    • So what about the other 98.8% of the market? Kassner used 8 years’ worth of analyst reports to demonstrate that the TCO between a well-managed traditional desktop client and a Windows-based terminal was almost identical – although considerably lower than an unmanaged desktop. The interesting point was that in recent years the analysts stopped referring to the different architectures and just compared degrees of management! Then he compared VDI scenarios, showing that there was a 10% variance in TCO between a VDI desktop and a wide-open “regular desktop”, but when that desktop was locked down and well-managed the delta was only 2%. That 2% saving is not enough to cover the setup cost of a VDI infrastructure! Kassner did stress that he wasn’t saying VDI was no good at all – just that it is not for everyone and that a similar benefit can be achieved from simply virtualising the applications:
    • “Virtualized applications can reduce the cost of testing, packaging and supporting an application by 60%, and they reduced overall TCO by 5% to 7% in our model.”

      [source: Gartner, TCO of Traditional Software Distribution vs. Application Virtualization]

  • Having argued that thick vs. thin vs. VDI makes very little difference to desktop TCO, Kassner continued by commenting that the software plus services platform provides more options than ever, with access to applications from traditional PC, smartphone and web interfaces and a mixture of corporately owned and non-corporate assets (e.g. employees’ home PCs, or offshore contractor PCs). Indeed, application compatibility drives client device options and this depends upon the supported development stack and presentation capabilities of the device – a smartphone (perhaps the first example of IT consumerisation – and also a “thin client” device in its own right) is an example of a device that provides just a subset of the overall feature set and so is not as “forgiving” as a PC – one size does not fit all!
  • Kassner then went on to discuss opportunities for saving money with rich clients; but his summary was that it’s still a configuration management discussion:
    • Using a combination of group policy, a corporate base image, data synchronisation and well-defined security policies, we can create a well-managed desktop.
    • For this well-managed desktop, whether it is running on a rich client, a remote desktop client, with virtualised applications, using VDI or as a blade PC, we still need the same processes for image management, patch management, hardware/software inventory, operating system or application deployment, and application lifecycle management.
    • Once we can apply the well-managed desktop to various user roles (e.g. mobile, office, or task-based workers) on corporate or non-corporate assets, we can say that we have an optimised desktop.
  • Analysts indicate that “The PC of 2012 Will Morph Into the Composite Work Space” [source: Gartner], combining client hypervisors, application virtualisation, persistent personalisation and policy controls: effectively separating the various components for hardware, operating system and applications.  Looking at Microsoft’s view on this (after all, this was a Microsoft presentation!), there are two products to look at – both of which are Software Assurance benefits from the Microsoft Desktop Optimization Pack (MDOP) (although competitive products are available):
    • Application virtualisation (Microsoft App-V or similar) creates a package of an application and streams it to the desktop, eliminating the software installation process and isolating each application. This technology can be used to resolve conflicts between applications as well as to simplify application delivery and testing.
    • Desktop virtualisation (MED-V with Virtual PC or similar) creates a container with a full operating system environment to resolve incompatibility between applications and an alternative operating system, running two environments on the same PC (and, although Eduardo Kassner did not mention this in his presentation, it’s this management of multiple environments that creates a management headache without suitable management toolsets – which is why I do not recommend Windows 7 XP Mode for the enterprise).
  • Having looked at the various architectures and their (lack of) effect on TCO, Kassner moved on to discuss Microsoft’s strategy.
    • In short, dependencies create complexity, so by breaking apart the hardware, operating system, applications and user data/settings the resulting separation creates flexibility.
    • Using familiar technologies: we can manage the user data and settings with folder redirection, roaming profiles and group policy; we can separate applications using App-V, RemoteApps or MED-V, and we can run multiple operating systems (although Microsoft has yet to introduce a client-side hypervisor, or a solution capable of 64-bit guest support) on a variety of hardware platforms (thin, thick, or mobile) – creating what Microsoft refers to as the Windows Optimized Desktop.
    • Microsoft’s guidance is to take the processes that produce a well-managed client to build a sustainable desktop strategy, then to define a number of roles (real roles – not departments, or jobs – e.g. mobile, office, anywhere, task, contract/offshore) and select the appropriate distribution strategy (or strategies). To help with this, there is a Windows Optimized Desktop solution accelerator (soon to become the Windows Optimized Desktop Toolkit).

There’s quite a bit more detail in the slides but these notes cover the main points. However you look at it, the architecture for desktop delivery is not that relevant – it’s how it’s managed that counts.

Getting ready to deploy Windows 7 on the corporate desktop

This content is 15 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

With Windows 7 (and Server 2008 R2) now released to manufacturing and availability dates published, what does this really mean for companies looking to upgrade their desktop operating system? I’ve previously written about new features in Windows Server 2008 R2 (part 1 and part 2) but now I want to take a look at the Windows client.

Whilst I still maintain that Windows Vista was not as bad as it was made out to be (especially after service pack 1, which contained more driver resolutions and compatibility updates than security fixes), it was a classic case of “mud sticks” and, in the words of one Microsoft representative at a public event this week:

“Windows Vista maybe wasn’t as well received as [Microsoft] had hoped.”

The press was less harsh on Windows Server 2008 (which is closely related to Vista) but, with the new releases (Windows 7 and Windows Server 2008 R2), reaction from the IT press and from industry analysts has been extremely positive. In part, that’s because Windows 7 represents a “minor” update. By this I mean that, whilst Vista had deep changes (which contributed to its unpopularity) with new models for security, drivers, deployment and networking, Windows 7 continues with the same underlying architecture (so most software that runs on Vista will run on 7 – the exceptions are products that are deeply integrated with the operating system, such as security products – and hardware that runs Vista well will run Windows 7 well).

Indeed, under Steven Sinofsky’s watch, with Windows 7 Microsoft has followed a new approach to development and disclosure, including:

  • Increased planning – analysing trends and needs before building features.
  • Providing customers and partners with predictability – a new operating system every 3 years.
  • Working on the ecosystem – with early partner engagement (ISVs and IHVs have plenty of time to get ready – including a program for ISVs to achieve a “green light” for application compatibility – and the other side of the coin, for those of us looking for suitable hardware and software, is the Ready Set 7 site).

Having said that Windows 7 is a minor update, it does include some major improvements. Indeed, some might say (I believe that Mark Russinovich was one of them) that if you go back to a previous product version and miss the features then it was a major release. In no particular order, here are some of the features that Microsoft is showing off for Windows 7 (there are many more too):

  • Superbar amalgamates the previous functions of the Taskbar and the Quicklaunch bar and includes larger icons to accommodate touch screen activities (Windows 7 includes multitouch support).
  • Live preview of running applications (not just when task switching but from the superbar too).
  • Jumplists – right click on a superbar icon to pin it to the superbar – even individual files.
  • No more Windows sidebar – gadgets can be anywhere on the desktop and are isolated from one another so if they crash they do not impact the rest of system.
  • Aero user interface improvements: Aero Peek to quickly look at the desktop; Aero Snap to quickly arrange windows such as when comparing and contrasting document contents; Aero Shake to minimise all other open windows.
  • The ability to cut and paste from document previews.
  • The ability to deploy a single, hardware agnostic image for all PCs.
  • Group policy improvements to control USB device usage (no more epoxy resin to glue up USB ports!).
  • BitLocker To Go – encrypt the contents of USB sticks, including the ability to read the contents from downlevel operating systems based on a one-time password.
  • Integrated search shows where results come from too (e.g. Programs, OneNote, Outlook, etc.) and only indexes in quiet time. Search Federation extends this to include SharePoint sites and other corporate resources.
  • DirectAccess – point-to-point authentication for access to corporate resources (e.g. intranet sites) from anywhere, including intelligent routing to identify corporate traffic and separate it from Internet-bound traffic, avoiding the need to send all traffic across the VPN.
  • BranchCache – locally cache copies of files, and share on a peer-to-peer basis (or, as my colleague Dave Saxon recently described it, “Microsoft’s version of BitTorrent”).
  • AppLocker – create whitelists or blacklists of approved software, including versions (see the PowerShell sketch after this list).
  • Problem Steps Wizard – record details of problems and send the results for diagnosis, or use to create walkthrough guides, etc.
  • Action Center – one stop shop for PC health.
  • User Account Control (UAC) warnings reduced.
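As a taster for the AppLocker item above, Windows 7 Enterprise/Ultimate also ships an AppLocker PowerShell module – a rough sketch for building a whitelist from a reference build (the paths are illustrative and any generated policy should be reviewed before it gets near a production GPO):

    Import-Module AppLocker

    # Generate publisher/hash rules from the executables in a reference build
    # (C:\Program Files is just an example starting point)
    Get-AppLockerFileInformation -Directory 'C:\Program Files' -Recurse -FileType Exe |
        New-AppLockerPolicy -RuleType Publisher, Hash -User Everyone -Optimize -Xml |
        Out-File 'C:\Temp\AppLockerPolicy.xml'

    # Merge the generated rules into the local policy for testing before
    # deploying anything via group policy
    Set-AppLockerPolicy -XmlPolicy 'C:\Temp\AppLockerPolicy.xml' -Merge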

All of this is nice but, faced with the prospect of spending a not-inconsiderable sum of money on an operating system upgrade, features alone are probably not enough! So, why should I deploy a new Windows operating system? Because, for many organisations, the old one (and I mean Windows XP, not Vista) is no longer “good enough”. It’s already on extended support, lacks some features that are required to support modern ways of working, was designed for an era when security was less of a concern and will be retired soon. So, if I’m an IT manager looking at a strategy for the desktop, my choices might include:

  • Do nothing. Possible, but increasingly risky once the operating system stops receiving security updates and manufacturers stop producing drivers for new hardware.
  • Stop using PCs and move to server-based computing? This might work in some use cases, but it’s unlikely to be a universal solution for reasons of mobility and application compatibility.
  • Move to a different operating system – maybe Linux or Mac OS X? Both of these have their relative merits but, deep down, Windows, Linux and Mac OS X all provide roughly the same functionality and if moving from XP to Vista was disruptive from an application compatibility standpoint, moving to a Unix-based OS is likely to be more so.
  • Deploy a new version of Windows – either Vista (which is not a bad way to get ready for 7) or 7.
  • Wait a bit longer and deploy Windows 8. That doesn’t leave a whole lot of time to move from XP and the transition is likely to be more complex (jumping forward by three operating system releases).

Assuming I choose to move to Windows 7, there are several versions available but, unlike with Vista, each is a superset of the features in the version below (and Enterprise/Ultimate are identical – just targeted at different markets). For businesses, there are only two versions that are relevant: Professional and Enterprise – and Enterprise is only available as a Software Assurance (SA) benefit. If you don’t have a suitable volume licensing agreement, Professional is the only real choice (saving money by buying Home Premium is unlikely to be cost-effective as it lacks functionality like the ability to join a domain, or licensing support for virtualisation – and purchasing Ultimate Edition at full packaged product price is expensive).

There are some Enterprise/Ultimate features that are not available in the Professional Edition, most notably DirectAccess, BranchCache, Search Federation, BitLocker, BitLocker To Go, and AppLocker. Some of these also require a Windows Server 2008 R2 back end (e.g. DirectAccess and BranchCache).

In Europe, things are a little more complicated – thanks to the EU – and we’re still waiting to hear the full details of what that means (e.g. can an organisation deploy a build based on E Edition outside Europe, or deploy a build within the EU based on a “normal” edition sourced from outside Europe, and remain supported?).

The other variant is 32- or 64-bit. With the exception of some low-end PCs, almost every PC that we buy today is 64-bit capable, 64-bit drivers are available for most devices (I’ve had no problems getting 64-bit drivers for the Windows 7 notebook that I use every day) and many 32-bit applications will run on a 64-bit platform. Having said that, if all the PCs you buy have between 2 and 4GB of RAM, then there is not a huge advantage. If you are looking to the future, or running applications that can use additional RAM (on hardware that can support it), then 64-bit Windows is now a viable option. Whilst on the subject of hardware, if you are considering Windows XP Mode as a possible application compatibility workaround, then you will also need hardware virtualisation support and hardware DEP. Steve Gibson’s Securable utility is a handy piece of freeware to check that the necessary features are supported on your hardware.
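If you’d rather script that check than run a GUI utility, a WMI query gives a rough indication – though note that the virtualisation-related property is only reliably populated on newer Windows builds, so Securable (or a look in the BIOS) remains the safer bet on older hardware:

    # Quick-and-dirty hardware check (some properties are only populated on newer builds)
    Get-WmiObject Win32_Processor |
        Select-Object Name, AddressWidth, DataWidth, VirtualizationFirmwareEnabled

    Get-WmiObject Win32_OperatingSystem |
        Select-Object Caption, OSArchitecture, DataExecutionPrevention_Available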

Whilst on the subject of virtualisation, there are four options (from Microsoft – third party solutions are also available):

  • The much-hyped Windows XP Mode. Great for small businesses but lacks the management tools for enterprise deployment and beware that each virtual machine will also require its own antivirus and management agents – which may be potentially expensive if it’s just to run one or two applications that should really be dragged kicking and screaming into the 21st century.
  • Microsoft Enterprise Desktop Virtualisation (MED-V). This is the former Kidaro product and appears to be a good solution for running legacy applications isolated at the operating system level but it still involves managing a second operating system instance and is part of the Microsoft Desktop Optimisation Pack (MDOP) so is only available to customers with SA.
  • Microsoft Application Virtualization (App-V). A popular solution for application-level isolation but requires applications to be repackaged (with consequential support implications) and also only available as part of MDOP.
  • Virtual desktop infrastructure (VDI). Whilst the concept may initially appear attractive, it’s not an inexpensive option (and without careful management may actually increase costs), Microsoft’s desktop broker (Remote Desktop Services) is new in Windows Server 2008 R2 and, crucially for partners, there is no sensible means of licensing this in a managed service context.

The main reason for highlighting virtualisation options in a Windows 7 post is that Windows XP Mode is being held up as a great way to deal with application compatibility issues. It is good but it’s also worth remembering that it’s a sticking plaster solution and the real answer is to look at why the applications don’t work in the first place. Which brings me onto application compatibility.

Even for those of us who are not developers, there are three ways to approach application compatibility in Windows 7:

  • Windows 7’s Program Compatibility wizard can be used to make simple changes to an application’s configuration and make it work (e.g. skip a version check, run in compatibility mode, etc.) – see the registry sketch after this list.
  • Application Compatibility Toolkit (ACT) 5.5 contains tools and documentation to evaluate and mitigate application compatibility issues for Windows Vista, Windows 7, Windows Update, or Windows Internet Explorer (e.g. shims to resolve known issues) – there are also third party tools from companies like ChangeBASE.
  • Windows XP Mode. For those applications that simply refuse to run on Windows 7 but certainly not a solution for organisations trying to shoehorn Windows 7 onto existing hardware and upgrade at minimal cost.
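For the first of those options, the setting that the Program Compatibility wizard applies is just a registry value under the AppCompatFlags key, so it can also be scripted – a minimal sketch (the application path is hypothetical):

    # Equivalent to ticking "Run this program in compatibility mode for Windows XP (SP3)"
    # on the application's Compatibility tab, for the current user
    $layers = 'HKCU:\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers'
    New-Item -Path $layers -Force | Out-Null
    Set-ItemProperty -Path $layers -Name 'C:\LegacyApps\OldApp.exe' -Value 'WINXPSP3'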

After deciding what to move to, deployment is a major consideration. The Microsoft Deployment Toolkit (MDT) and Windows Automated Installation Kit (WAIK) have both been updated for Windows 7 and can be used together to deploy a fresh operating system installation together with applications and to migrate the user data. There is no in-place upgrade path for Windows XP users (or for Windows 7 customers in Europe) and I was amazed at the number of Microsoft partners in the SMB space who were complaining about this at a recent event; however, a clean installation is the preferred choice for many organisations, allowing a known state to be achieved and avoiding problems when each PC is slightly different to the next and has its own little nuances.
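On the user data side of that process, MDT drives the User State Migration Tool (USMT) under the covers but it can also be run by hand – roughly as follows, from the appropriate USMT folder in the WAIK (the migration store path is hypothetical):

    # On the outgoing Windows XP build: capture documents and settings to a network store
    .\scanstate.exe "\\server\migstore\$env:COMPUTERNAME" /i:migdocs.xml /i:migapp.xml /o /c

    # On the freshly deployed Windows 7 build: restore them
    .\loadstate.exe "\\server\migstore\$env:COMPUTERNAME" /i:migdocs.xml /i:migapp.xml /c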

I think I’ve covered most of the bases here: some of the new features; product editions; hardware and software requirements; application compatibility; virtualisation; deployment. What should be the next steps?

Well, firstly, although the release candidate will work through to June next year, wait a couple of weeks and get hold of the RTM bits. Then test, test, and test again before deploying internally (to a select group of users) and start to build skills in preparation for mass deployment.

As for the future – Microsoft has publicly committed to a new client release every 3 years (it’s not clear whether server releases will remain on a 2 year major/minor schedule) so you should expect to see Windows 8 around this time in 2012.

Microsoft Virtualization: the R2 wave

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

The fourth Microsoft Virtualisation User Group (MVUG) meeting took place last night and Microsoft’s Matt McSpirit presented a session on the R2 wave of virtualisation products. I’ve written previously about some of the things to expect in Windows Server 2008 R2 but Matt’s presentation was specifically related to virtualisation and there are some cool things to look forward to.

Hyper-V in Windows Server 2008 R2

At last night’s event, Matt asked the UK User Group what they saw as the main limitations in the original Hyper-V release and the four main ones were:

  • USB device support
  • Dynamic memory management (ballooning)
  • Live Migration
  • 1 VM per storage LUN

Hyper-V R2 does not address all of these (regardless of feedback, the product group is still unconvinced about the need for USB device support… and dynamic memory was pulled from the beta – it’s unclear whether it will make it back in before release) but live migration is in and Windows finally gets a clustered file system in the 2008 R2 release.

So, starting out with clustering – a few points to note:

  • For the easiest support path, look for cluster solutions on the Windows Server Catalog that have been validated by Microsoft’s Failover Cluster Configuration Program (FCCP).
  • FCCP solutions are recommended by Microsoft but are not strictly required for support, as long as all the components (i.e. server and SAN) are certified for Windows Server 2008 – a failover clustering validation report will still be required though; FCCP simply provides another level of confidence.
  • When looking at cluster storage, fibre channel (FC) and iSCSI are the dominant SAN technologies. With 10Gbps Ethernet coming onstream, iSCSI looked ready to race ahead and has the advantage of using standard Ethernet hardware (which is why Dell bought EqualLogic and HP bought LeftHand Networks) but then Fibre Channel over Ethernet came onstream, which is potentially even faster (as outlined in a recent RunAs Radio podcast).

With a failover cluster, Hyper-V has always been able to offer high availability for unplanned outages – just as VMware do with their HA product (although Windows Server 2008 Enterprise or Datacenter Editions were required – Standard Edition does not include failover clustering).

For planned outages, quick migration offered the ability to pause a virtual machine and move it to another Hyper-V host but there was one significant downside of this. Because Microsoft didn’t have a clustered file system, each storage LUN could only be owned by one cluster node at a time (a “shared nothing” model). If several VMs were on the same LUN, all of them needed to be managed as a group so that they could be paused, the connectivity failed over, and then restarted, which slowed down transfer times and limited flexibility. The recommendation was for 1 LUN per VM and this doesn’t scale well with tens, hundreds, or thousands of virtual machines although it does offer one advantage as there is no contention for disk access. Third party clustered file system solutions are available for Windows (e.g. Sanbolic Melio FS) but, as Rakesh Malhotra explains on his blog, these products have their limitations too.

Windows Server 2008 R2 Hyper-V can now provide Live Migration for planned failovers – so Microsoft finally has an alternative to VMware VMotion (at no additional cost). This is made possible because the new cluster shared volume (CSV) feature, with IO fault tolerance (dynamic IO), overcomes the limitations of the shared nothing model and allows up to 256TB per LUN, running on NTFS with no need for third party products. The VM is still stored on a shared storage volume and, at the time of failover, memory is scanned for dirty pages whilst the VM is still running on the source cluster node. Using an iterative process of scanning memory for dirty pages and transferring them to the target node (over a dedicated network link), the memory contents are copied across until so few remain that the last few pages can be sent and control passed to the target node in a fraction of a second, with no discernible downtime (including ARP table updates to maintain network connectivity).
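For anyone who wants to script it, the failover clustering PowerShell module in Windows Server 2008 R2 exposes live migration directly – a minimal sketch (the VM and node names are made up):

    Import-Module FailoverClusters

    # List the clustered roles (virtual machine groups appear here)
    Get-ClusterGroup

    # Live migrate a VM to another node - the VM keeps running throughout
    Move-ClusterVirtualMachineRole -Name 'FileServerVM' -Node 'HV-NODE2' -MigrationType Live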

Allowing multiple cluster nodes to access a shared LUN is as simple as marking the LUN as a CSV in the Failover Clustering MMC snap-in. Each node has a consistent namespace for LUNs, so as many VMs as required may be stored on a CSV (although all nodes must use the same letter for the system drive – e.g. C:). Each CSV appears as an NTFS mount point, e.g. C:\ClusterStorage\Volume1, and even though the volume is only mounted on one node, distributed file access is co-ordinated through another node so that the VM can perform direct IO. Dynamic IO ensures that, if the SAN (or Ethernet) connection fails, IO is re-routed accordingly, and if the owning node fails then volume ownership is redirected accordingly. CSV is based on two assumptions (that data read/write requests far outnumber metadata access/modification requests; and that concurrent multi-node cached access to files is not needed for files such as VHDs) and is optimised for Hyper-V.
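In PowerShell terms, enabling CSV and adding a disk looks roughly like this, again using the Windows Server 2008 R2 failover clustering module (the clustered disk name will vary by environment):

    Import-Module FailoverClusters

    # CSV is off by default in 2008 R2 and has to be enabled once per cluster
    (Get-Cluster).EnableSharedVolumes = 'Enabled'

    # Turn an existing clustered disk into a cluster shared volume
    Add-ClusterSharedVolume -Name 'Cluster Disk 2'

    # The volume then appears under C:\ClusterStorage on every node
    Get-ClusterSharedVolume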

At a technical level, CSVs:

  • Are implemented as a file system mini-filter driver, pinning files to prevent block allocation movement and tracking the logical-to-physical mapping information on a per-file basis, using this to perform direct reads/writes.
  • Enable all nodes to perform high performance direct reads/writes to all clustered storage and read/write IO performance to a volume is the same from any node.
  • Use SMB v2 connections for all namespace and file metadata operations (e.g. to create, open, delete or extend a file).
  • Need:
    • No special hardware requirements.
    • No special application requirements.
    • No file type restrictions.
    • No directory structure or depth limitations.
    • No special agents or additional installations.
    • No proprietary file system (using the well established NTFS).

Live migration and clustered storage are major improvements but other new features for Hyper-V R2 include:

  • 32 logical processor (core) support, up from 16 at RTM and 24 with a hotfix (to support 6-core CPUs) so that Hyper-V will now support up to 4 8-core CPUs (and I would expect this to be increased as multi-core CPUs continue to develop).
  • Core parking to allow more intelligent use of processor cores – putting them into a low power suspend state if the workload allows (configurable via group policy).
  • The ability to hot add/remove storage so that additional VHDs or pass-through disks may be assigned to running VMs if the guest OS supports the Hyper-V SCSI controller (which should cover most recent operating systems but not Windows XP 32-bit or 2000).
  • Second Level Address Translation (SLAT) to make use of new virtualisation technologies from Intel (Intel VT extended page tables) and AMD (AMD-V nested paging) – more details on these technologies can be found in Johan De Gelas’s hardware virtualisation article at AnandTech.
  • Boot from VHD – allowing virtual hard disks to be deployed to virtual or physical machines (see the bcdedit sketch after this list).
  • Network improvements (jumbo frames to allow larger Ethernet frames and TCP offload for on-NIC TCP/IP processing).
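As a taster for the boot-from-VHD item above, a VHD entry is added to the boot configuration data (BCD) store with bcdedit – an illustrative sequence (the VHD path is made up and the GUID returned by the first command needs to be substituted into the later ones):

    # Copy the current boot entry and note the GUID that bcdedit returns
    bcdedit /copy '{current}' /d 'Windows 7 (VHD boot)'

    # Point the new entry at the virtual hard disk (substitute the GUID from above)
    bcdedit /set '{guid-from-previous-step}' device 'vhd=[C:]\VHDs\win7.vhd'
    bcdedit /set '{guid-from-previous-step}' osdevice 'vhd=[C:]\VHDs\win7.vhd'
    bcdedit /set '{guid-from-previous-step}' detecthal on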

Hyper-V Server

So that’s covered the Hyper-V role in Windows Server 2008 R2 but what about its baby brother – Hyper-V Server 2008 R2? The good news is that Hyper-V Server 2008 R2 will have the same capabilities as Hyper-V in Windows Server 2008 R2 Enterprise Edition (previously it was based on Standard Edition) to allow access to up to 1TB of memory, 32 logical cores, hot addition/removal of storage, and failover clustering (with cluster shared volumes and live migration). It’s also free, and requires no dedicated management product, although it does need to be managed using the RSAT tools for Windows Server 2008 R2 or Windows 7 (Microsoft’s advice is never to manage an uplevel operating system from a downlevel client).

With all that for free, why would you buy Windows Server 2008 R2 as a virtualisation host? The answer is that Hyper-V Server does not include licenses for guest operating systems as Windows Server 2008 Standard, Enterprise and Datacenter Editions do; it is intended for running non-Windows workloads in a heterogeneous datacentre standardised on Microsoft virtualisation technologies.

Management

The final piece of the puzzle is management.

There are a couple of caveats to note: the SCVMM 2008 R2 features mentioned are in the beta – more can be expected at final release; and, based on previous experience when Hyper-V RTMed, there may be some incompatibilities between the beta of SCVMM and the release candidate of Windows Server Hyper-V R2 (expected to ship soon).

SCVMM 2008 R2 is not a free upgrade – but most customers will have purchased it as part of the Server Management Suite Enterprise (SMSE) and so will benefit from the two years of software assurance included within the SMSE pricing model.

Wrap-up

That’s about it for the R2 wave of Microsoft Virtualization – for the datacentre at least – but there are a lot of improvements in the upcoming release. Sure, there are things that are missing (memory ballooning may not be a good idea for server consolidation but it will be needed for any kind of scalability with VDI – and using RDP as a workaround for USB device support doesn’t always cut it) and I’m sure there will be a lot of noise about how VMware can do more with vSphere but, as I’ve said previously, VMware costs more too – and I’d rather have most of the functionality at a much lower price point (unless one or more of those extra features will make a significant difference to the business case). Of course there are other factors too – like maturity in the market – but Hyper-V is not far off its first anniversary and, other than a couple of networking issues on guests (which were fixed), I’ve not heard anyone complaining about it.

I’ll write more about Windows 7 and Windows Server 2008 R2 virtualisation options (i.e. client and server) as soon as I can but, based on a page which briefly appeared on the Microsoft website, the release candidate is expected to ship next month and, after reading Paul Thurrott’s post about a forthcoming Windows 7 announcement, I have a theory (and that’s all it is right now) as to what a couple of the Windows 7 surprises may be…

Free Microsoft Virtualization eBook from Microsoft Press

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Every now and again, Microsoft Press makes free e-books available. I just missed out on the PDF version of the Windows Vista Resource Kit as part of the Microsoft Press 25th anniversary (the offer was only valid for a few days and it expired yesterday… that’s what happens when I don’t keep on top of my e-mail newsletters) but Mitch Tulloch’s book on Understanding Microsoft Virtualization Solutions is also available for free download (I don’t know how long for though… based on previous experience, that link won’t be valid for long).

This book covers Windows Server 2008 Hyper-V, System Center Virtual Machine Manager 2008, Microsoft Application Virtualization 4.5 (App-V), Microsoft Enterprise Desktop Virtualization (MED-V), and Microsoft Virtual Desktop Infrastructure. If you’re looking to learn about any of these technologies, it would be a good place to start.

Microsoft Virtualization: part 7 (wrap up and additional resources)

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Over the last few weeks (it was originally supposed to be a few days… but work got in the way), I’ve written several posts on Microsoft Virtualization:

  1. Introduction.
  2. Host virtualisation.
  3. Desktop virtualisation.
  4. Application virtualisation.
  5. Presentation virtualisation.
  6. Management.

I thought I’d wrap up the series by mentioning the Microsoft Assessment and Planning Toolkit (MAP) solution accelerator – a free inventory, assessment and reporting tool which can help with planning the implementation of various Microsoft technologies – including Windows Server 2008 Hyper-V (v3.2 is in a public beta at the time of writing). To find out more about MAP, try to catch (in person or virtually) Baldwin Ng’s session at the November meeting of the Microsoft Virtualization User Group.

Also worth noting is the 7 hours of free e-learning courses that Microsoft has made available:

  • Clinic 5935: Introducing Hyper-V in Windows Server 2008
  • Clinic 6334: Exploring Microsoft System Center Virtual Machine Manager 2008
  • Clinic 6335: Exploring Microsoft Application Virtualization
  • Clinic 6336: Exploring Terminal Services in Windows Server 2008

Microsoft’s virtualisation portfolio is not complete (storage and network virtualisation are not included but these are not exactly Microsoft’s core competencies either); however it is strong, growing fast, and not to be dismissed.

Microsoft Virtualization: part 4 (application virtualisation)

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I’m getting behind on my blogging (my day job keeps getting in the way) but this post continues the series I started on Microsoft’s virtualisation technologies. So far, I’ve set the scene, looked at host/server virtualisation and desktop virtualisation and this time it’s Microsoft Application Virtualization – formerly known as SoftGrid and also known as App-V.

Microsoft provides a technical overview of App-V but the basic premise is that applications are isolated from one another whilst running on the same operating system. In fact, with App-V, the applications are not even installed but are sequenced into a virtual environment by monitoring file and registry changes made by the application and wrapping these up as a single file which is streamed to users on demand (or loaded from a local cache) to execute in its own “bubble” (technically known as a SystemGuard environment). Whilst not all applications are suitable for virtualisation (e.g. those that run at system level, or require specialist hardware such as a “dongle”), many are, and one significant advantage is that the virtualised applications can also be run in a terminal services environment (without needing separate packages for desktop and server-based computing). It’s worth considering, though, that virtualising an application doesn’t change the license – so, whilst it may be possible to run two versions of an application side by side, it may not be allowed under the terms of the end user license agreement (e.g. Internet Explorer).

I wrote a post about application virtualisation using Softricity SoftGrid a couple of years ago but, with App-V v4.5, Microsoft has made a number of significant changes. The main investment areas have related to allowing virtualised applications to communicate (through a new feature called dynamic suite composition), extending scalability, globalisation/localisation and security.

Of the many improvements in App-V v4.5, arguably the main feature is the new dynamic suite composition functionality. Using dynamic suite composition, the administrator can group applications so that shared components are re-used, reducing the package size and allowing plugins and middleware to be sequenced separately from the applications that will use them. This is controlled through definition of dependencies (mandatory or optional) so that two SystemGuard environments (App-V “bubbles”) can share the same virtual environment.

On the scalability front, App-V 4.5 also takes a step forward, as it provides three delivery options to strike a balance between enterprise deployment in a distributed environment and retaining the benefits of application isolation and on-demand delivery. The three delivery options are:

  • Full infrastructure – with a desktop publishing service, dynamic delivery and active package upgrades but requiring the use of Active Directory and SQL Server.
  • Lightweight infrastructure – still allowing dynamic delivery and active package upgrades but without the need for SQL Server, allowing application streaming capability to be added to Microsoft System Center Configuration Manager or third party enterprise software delivery frameworks.
  • Standalone mode – with no server infrastructure required and MSI packages as the configuration control, this mode allows standalone execution of virtual applications and is also interoperable with Microsoft System Center Configuration Manager or third party enterprise software delivery applications, but it does not allow dynamic delivery or active package upgrades.

Additional scalability enhancements include background streaming (auto-load at login or at first launch for quick launch and offline availability) and the configuration of application source roots (for a local client to determine the appropriate server to use) as well as client support for Windows Server 2008 Terminal Services (in Microsoft Application Virtualization for Terminal Services). There are also new options for resource targeting for the application, open software description (OSD) and icon files, enhanced data metering (a WMI provider to collect application usage information) and better integration with the Windows platform (Microsoft Update and volume shadow copy service support, a System Center Operations Manager (SCOM) 2007 management pack, group policy template support, a best practice analyser and improved diagnostic support). Finally, the sequencer has been enhanced with a streamlined process (fewer wizards and fewer clicks), MSI creation capability (for standalone use), improvements at the command line and differential SFT file support for updates.

App-V is not the only application virtualisation technology (notable alternatives include VMware ThinApp – formerly Thinstall – and Symantec/Altiris SVS) but it is one of the best-known. It’s also an important component of the Microsoft Virtualization strategy. In the next post in this series, I’ll take a look at presentation virtualisation.

Finally, it’s worth noting that I’m not an application virtualisation expert – but Aaron Parker is – if you’re interested in this topic then it’s worth adding Aaron’s blog to your feed reader.

Christmas has come early: App-V, Hyper-V Server, SCVMM and live migration in Hyper-V all on their way!

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Get Virtual Now

I’d heard that something big was happening in Redmond today (well, maybe not in Redmond, but in Bellevue anyway…). I knew about the getVIRTUALnow events and I watched the opening session on the web but there had to be something else. Well, there is – Microsoft Application Virtualization 4.5 (App-V, formerly SoftGrid), which RTMed last week, will be part of the Microsoft Desktop Optimisation Pack (MDOP) R2, due for general availability within the coming weeks. System Center Virtual Machine Manager 2008 will be released within 30 days, as will Hyper-V Server (which will be a free download, not $28 as previously announced). And, as Scott Lowe reported earlier, live migration will be supported by Hyper-V in Windows Server 2008 R2.

Read more in the associated Microsoft press release.

PowerShell running on server core

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Aaron Parker saw my presentation on Windows Server 2008 server core earlier this week and it got him thinking…

I said that Microsoft don’t see server core as an application platform but there’s no real reason why not as long as the applications you want to run don’t have dependencies on components that don’t exist in server core. I even suggested that, with a reduced surface attack area and less patching required, server core is a great platform for those applications that don’t rely on the shell, Internet Explorer, the .NET Framework or anything else that has been removed.

I also mentioned that PowerShell doesn’t run on server core because it relies on the .NET Framework.

So Aaron used SoftGrid to repackage the Microsoft .NET Framework and Windows PowerShell for server core – and it worked! He says there are a few errors, but as a proof of concept it’s a great idea – and a good demonstration of how flexible application virtualisation can be.

Application virtualisation using the Softricity SoftGrid platform

This content is 18 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A few weeks back I had the opportunity to attend a presentation on application virtualisation using the Softricity SoftGrid platform. Server virtualisation is something I’m becoming increasingly familiar with, but application virtualisation is something else entirely and I have to say I was very impressed with what I saw. Clearly I’m not alone, as Microsoft has acquired Softricity in order to add application virtualisation and streaming to its IT management portfolio.

The basic principle of application virtualisation is that, instead of installing an application, an application runtime environment is created which can roam with a user. The application is packaged via a sequencer (similar to the process of creating an MSI application installer) and broken into feature blocks which can be delivered on demand. In this way the most-used functionality can be provided quickly, with additional functionality in another feature block only loaded as required. When I first heard of this I was initially concerned about the potential network bandwidth required for streaming applications; however, feature blocks are cached locally and it is also possible to pre-load the cache (either from a CD or using tools such as Microsoft Systems Management Server).

From a client perspective, a user’s group membership is checked at login time and appropriate application icons are delivered, either from cache, or by using the real-time streaming protocol (RTSP) to pull feature blocks from the server component. The virtual application server uses an SQL database and Active Directory to control access to applications based on group membership, and this group model can be used to stage the rollout of an application (again, reducing the impact on the network by avoiding situations where many users download a new version of an application at the same time).

Not all applications are suitable for virtualisation. For example, very large applications used throughout an organisation (e.g. Microsoft Office) may be better left in the base workstation build; however the majority of applications are ideal for virtualisation. The main reason not to virtualise is if an application provides shell integration that might negatively impact another application when it is not present – for example, the ability to send an e-mail from within an application may depend on Microsoft Outlook being present and configured.

One advantage of the virtualised application approach is that the operating system is not “dirtied” – because each package includes a virtual registry and a virtual file system (which run on top of the traditional “physical” registry and file system), resolving application issues is often a case of resetting the cache. This approach also makes it possible to run multiple versions of an application side-by-side – for example testing a new version of an application alongside the existing production version. Application virtualisation also has major advantages around reducing rollout time.

The Microsoft acquisition of Softricity has so far brought a number of benefits, including the inclusion of the ZeroTouch web interface for self-provisioning of applications within the core product and a reduction in client pricing. There is no server pricing element, making this a very cost-effective solution – especially for Microsoft volume license customers.

Management of the solution is achieved via a web service, allowing the use of the SoftGrid Management Console or third party systems management products. SoftGrid includes features for policy-based management and application deployment as well as usage tracking and compliance.

I’ve really just skimmed the surface here but I find the concept very interesting and can’t help but feel that the Microsoft acquisition will either propel this technology into the mainstream or (less likely) kill it off forever. In the meantime, there’s a lot of information available on the Softricity website.